Compare commits

...

255 Commits

Author SHA1 Message Date
Phil Peble 9081324e6b
Merge pull request #5306 from yanjunding/patch-1
Typo
2025-08-25 14:42:13 -05:00
Phil Peble cc19ca0746
Merge pull request #5864 from emissary-ingress/release-3.10-fix-CHANGELOG
Update CHANGELOG with correct metadata for 3.10 release
2025-08-14 16:01:01 -05:00
Phil Peble db5d38e826
Update CHANGELOG with correct metadata for 3.10 release
Signed-off-by: Phil Peble <ppeble@activecampaign.com>
2025-08-14 15:56:44 -05:00
Flynn a8e8f4aacd
Merge pull request #5849 from emissary-ingress/release-3-10-quickstart
Point quickstart link in README to emissary-ingress.dev
2025-07-29 13:30:12 -04:00
Phil Peble e6fa8e56e3
Point quickstart link in README to emissary-ingress.dev
Signed-off-by: Phil Peble <ppeble@activecampaign.com>
2025-07-29 12:17:48 -05:00
Flynn 4f12337556
Merge pull request #5839 from emissary-ingress/flynn/update-docs
Update README and QUICKSTART for 3.10.0
2025-05-07 15:50:26 -04:00
Flynn dd98ecd66a Minor tweaks
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-05-07 10:15:53 -04:00
Flynn c815e182b2 Update README and SUPPORT.md
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-05-07 10:15:47 -04:00
Flynn 96a49735a8 TRY-3.10 -> QUICKSTART
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-05-07 10:15:41 -04:00
Flynn d25610acbe
Merge pull request #5831 from emissary-ingress/flynn/update-try-3.10
Update the TRY-3.10 document for 3.10.0-rc.3.
2025-03-26 12:28:37 -04:00
Flynn 0f94681cfb Update the TRY-3.10 document for 3.10.0-rc.3.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-25 22:19:49 -04:00
Flynn 5d1dea8ba8
Merge pull request #5795 from emissary-ingress/ci/5794
[CI Run] ambex: Remove usage of md5
2025-03-21 20:22:50 -04:00
Alice Wasko 7f3c6a8868 fix linting errors
Signed-off-by: Alice Wasko <aliceproxy@pm.me>
2025-03-21 16:36:55 -04:00
Flynn 214320b2e4 Update release notes
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-21 16:36:55 -04:00
Flynn 433ac459a0 Remove usage of md5
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-21 16:36:55 -04:00
Flynn 79170dbc4a
Merge pull request #5827 from emissary-ingress/flynn/python-deps
Update Python dependencies
2025-03-21 16:34:17 -04:00
Flynn 2f95c68bf1 Update dependency licenses. Ugh.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-06 09:17:00 -05:00
Flynn da250b7cc7 Update Python dependencies
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-05 22:08:49 -05:00
Flynn 08d78948ac Use py-version to choose the Python version for our venv
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-05 22:08:45 -05:00
Flynn d14c84c690
Merge pull request #5823 from emissary-ingress/flynn/isker-5821
Pass client certificate and SNI to auth service -- thanks, @isker!
2025-02-14 09:54:43 -05:00
Flynn 2ae71716cc Automatic formatter stuff
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-13 18:36:41 -05:00
Flynn 6c161bd268 Move CHANGELOG tweak into docs/releaseNotes.yml
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-13 18:36:24 -05:00
Ian Kerins 9b6894249f Pass client certificate and SNI to auth service
This enables the auth service to do things like mTLS.

Signed-off-by: Ian Kerins <git@isk.haus>
2025-02-13 18:29:47 -05:00
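For context on the commit above, here is a hedged sketch of where an external auth (ext_authz) gRPC service could read the forwarded client certificate and SNI. The field names come from go-control-plane's auth v3 API; whether Emissary populates both fields exactly this way is an assumption, not taken from this PR.

```go
package authsketch

import (
	"net/url"

	authv3 "github.com/envoyproxy/go-control-plane/envoy/service/auth/v3"
)

// clientCertAndSNI pulls the downstream client certificate (URL-encoded PEM)
// and the TLS SNI out of an ext_authz CheckRequest. An auth service can use
// these to make mTLS-style decisions, as described in the commit above.
func clientCertAndSNI(req *authv3.CheckRequest) (certPEM string, sni string, err error) {
	attrs := req.GetAttributes()
	// Envoy forwards the peer certificate URL-encoded; decode it back to PEM.
	certPEM, err = url.QueryUnescape(attrs.GetSource().GetCertificate())
	if err != nil {
		return "", "", err
	}
	sni = attrs.GetTlsSession().GetSni()
	return certPEM, sni, nil
}
```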
Flynn cffdd53f8e
Merge pull request #5825 from emissary-ingress/flynn/readme-fix
🤦‍♂️ right, TRY-3.10.md is on master at the moment.
2025-02-13 10:22:52 -05:00
Flynn ccdc52db1d 🤦‍♂️ right, TRY-3.10.md is on master at the moment.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 23:18:00 -05:00
Flynn 600dcaf4b8
Merge pull request #5822 from emissary-ingress/flynn/try-3.10
"Try 3.10" instructions for the release/v3.10 branch
2025-02-12 17:05:05 -05:00
Flynn def2e22bc2 Disable the broken chart test for the moment (I've torn the charts apart at the moment).
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 15:31:26 -05:00
Flynn 1c5819bce5 Tweak language around ALabs contributions
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 14:54:22 -05:00
Flynn 0e1a1d1d9d D'oh, include links for Ajay and Luke
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 14:53:49 -05:00
Flynn c8f597d7ce "Try 3.10" instructions for the release/v3.10 branch
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 14:47:34 -05:00
Flynn faf6f7a057
Merge pull request #5818 from emissary-ingress/lukeshu/go-updates
Update Go dependencies (from PR 5817)
2025-02-07 13:35:57 -05:00
Flynn 672c554e16 Use (hopefully) unique names for artifacts
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-06 18:27:36 -05:00
Flynn f0afe10599 Format GitHub actions
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-06 18:27:31 -05:00
Flynn 120314d95b Switch to upload-artifacts@v4 (and make quoting consistent)
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-06 18:27:15 -05:00
Luke T. Shumaker 368ca59863 Upgrade Go dependencies
sed -i \
        -e 's,^replace k8s.io/code-generator.*,replace k8s.io/code-generator v0.32.1 => github.com/emissary-ingress/code-generator 4d5bf4656f7139d290a2fa3684a6903cd04cbf97,' \
        -e 's,k8s.io/code-generator v0\.30.*,k8s.io/code-generator v0.32.1,' \
        go.mod
    GOFLAGS=-tags=pin go get -u ./...
    sed -i 's,sigs.k8s.io/e2e-framework/support/utils,sigs.k8s.io/e2e-framework/pkg/utils,' test/apiext/apiext_test.go
    go mod tidy
    go mod tidy
    go mod vendor
    make generate

Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker 79917553b1 DO NOT MERGE YET: Upgrade go-mkopensource
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker a4933625e8 go.mod: Preemptively pin pre-rename versions of go-metrics and mergo
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker e053f3b716 Upgrade k8s.io/code-generator to match the version it's supposed to be
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker 1f8b0cb718 tools: Prevent `goimports` from ruining the pin.go files
We did this in Telepresence a long time ago (at Thomas' suggestion).

Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker 2e92aa8a21 go.mod: Tidy comments (SO MAYBE PEOPLE READ THEM)
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:14 -07:00
Luke T. Shumaker d784486390 py-list-deps: Adjust to work with newer Python
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:14 -07:00
Flynn c2cd9ddfc6
Merge pull request #5808 from emissary-ingress/flynn/pr5798
Include PR5798
2024-12-09 12:07:42 -05:00
Flynn 20e3f63e7c
Merge pull request #5807 from emissary-ingress/flynn/update
Update Envoy, go-control-plane, Go, and dependencies
2024-12-05 17:15:15 -05:00
Flynn f30725562c Clean up releaseNotes/CHANGELOG and 'make generate'
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:25:55 -05:00
ajaychoudhary-hotstar f8829ee1d8 Fixed test case
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar ab7b539a47 Added condition to take only Ready pods for load balancing
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar b1fb2d6bfb Added endpoints fallback in case endpointslice doesn't exist
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar 46ab826f03 Removed break
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar 88712774f4 updated test Yaml
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar c5e28b8fbe Added support for endpointslices
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
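A minimal sketch of the EndpointSlice handling described in the commits above: keep only Ready endpoints for load balancing, assuming the standard k8s.io/api/discovery/v1 types. The fallback to core v1 Endpoints when no slices exist is omitted, and the function name is illustrative rather than Emissary's actual code.

```go
package endpointsketch

import (
	discoveryv1 "k8s.io/api/discovery/v1"
)

// readyAddresses collects addresses only from endpoints whose Ready condition
// is true, mirroring the "only Ready pods for load balancing" change above.
// A nil Ready condition is treated as ready, per the EndpointSlice API
// convention.
func readyAddresses(slices []discoveryv1.EndpointSlice) []string {
	var out []string
	for _, slice := range slices {
		for _, ep := range slice.Endpoints {
			if ep.Conditions.Ready != nil && !*ep.Conditions.Ready {
				continue
			}
			out = append(out, ep.Addresses...)
		}
	}
	return out
}
```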
Flynn 2b124a957d Whitespace
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:10:29 -05:00
Flynn d9f94770a3 Use cryptography instead of OpenSSL.crypto
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:10:29 -05:00
Flynn d0e902dceb Fix lint errors
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:10:02 -05:00
Flynn 95495a54c2 Switch to the GCR mirror for the base Envoy image
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:38 -05:00
Flynn c9a542be33 gmake generate
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:37 -05:00
Flynn 233307cb95 Fix make generate
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:36 -05:00
Flynn a47f7482e1 gmake compile-envoy-protos
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:35 -05:00
Flynn 33a9ce8c80 Switch to Golang 1.23.3
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:34 -05:00
Flynn c55dad2cf3 Bump google.golang.org/grpc (to get grpc.NewClient for go-control-plane) and go mod tidy.
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:34 -05:00
Flynn dfcf01298f Update ENVOY_COMMIT and ENVOY_GO_CONTROL_PLANE_COMMIT
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:33 -05:00
Flynn 8e7ee3b7cf Switch GitHub workflows to ubuntu-24.04
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:32 -05:00
Flynn 9f6c9b536e Switch KAT to Ubuntu 24.04. Clean up Docker lint stuff
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:31 -05:00
Flynn 02e88319b7
Merge pull request #5789 from emissary-ingress/kai-tillman/update-monthly-meeting
Update Zoom meeting link
2024-10-02 17:11:02 -04:00
Kai Tillman f853d23884 Update Zoom meeting link
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-10-02 14:42:14 -04:00
Kai Tillman ac2dc64c66 Add ArtifactHub badge
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-09-03 10:44:46 -07:00
Kai Tillman 010ac84078 Rename DEVELOPING.md to CONTRIBUTING.md
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-09-03 10:44:46 -07:00
Flynn 30c8adca06
Merge pull request #5757 from emissary-ingress/flynn/dev/no-dependabot
Disable dependabot
2024-08-20 16:41:54 -04:00
Flynn 4734dff37d
Merge pull request #5766 from emissary-ingress/flynn/dev/nominate-mark-s
Add Mark Schlachter (@the-wondersmith) as a maintainer
2024-08-15 13:44:14 -04:00
Flynn 0bca8b0352 Add Mark Schlachter (@the-wondersmith) as a maintainer
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2024-08-13 22:58:37 -04:00
Flynn b7929aed97 Disable dependabot
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2024-08-08 23:08:24 -04:00
Flynn 3ab7632e37
Merge pull request #5731 from emissary-ingress/will-h-helm-chart-update
Add Helm Chart Support for Opt-In Support for Exposing Port 8878 in the AdminService
2024-08-08 23:05:13 -04:00
Flynn 778869f935
Merge branch 'master' into will-h-helm-chart-update 2024-08-08 22:08:13 -04:00
Tenshin Higashi 680e3dc3d0
Re-adding HostIP to helm charts (#5733)
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-08-08 10:28:31 -04:00
Tenshin Higashi eb817604ec
Fix Lint (#5735)
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-08-07 16:10:16 -04:00
w-h37 4103bc151d Changes for the helm chart added, specifically allowing users to opt-in for access to the Go Plugin Filter Metrics
Signed-off-by: w-h37 <47009048+w-h37@users.noreply.github.com>
2024-08-01 08:44:21 -04:00
Flynn 70cfbfb935
Merge pull request #5727 from emissary-ingress/flynn/nominate-phil-peble
Add Phil Peble as a maintainer (and fix table formatting)
2024-07-25 13:34:08 -04:00
Flynn 5d5eaa0677 Add Phil Peble as a maintainer (and fix table formatting)
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2024-07-22 15:36:15 -04:00
Flynn 6521f85a7e
Merge pull request #5716 from emissary-ingress/ci-5715
CI for 5715
2024-07-19 14:00:05 -04:00
Flynn 642c78428c Whitespace changes from my editor cleaning up YAML files...
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2024-07-05 14:52:37 -04:00
Flynn d04280e3a8 Move the changelog comment into `releaseNotes.yml` as required by the build process.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2024-07-05 14:52:09 -04:00
Sekar Saravanan e8ca65046a issue-5714 - incorrect _cache_key generation fixed
Signed-off-by: Sekar Saravanan <sekar.saravanan@hotstar.com>
2024-07-05 14:50:55 -04:00
Alice Wasko 5f7ac30080 update envoy to 1.30.3 (patched)
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2024-06-26 15:45:56 -07:00
Tenshin Higashi 2947049e06 Fixing CI to be 3.10-dev
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-06-17 11:11:37 -07:00
Tenshin Higashi 16c971e3d3 Updating release notes
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-06-17 11:11:37 -07:00
Tenshin Higashi 1d56ae0965 upgrade envoy to 1.30.2
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-06-10 15:51:55 -07:00
Alice Wasko 1c96a9df06 update go
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2024-06-07 12:55:20 -07:00
Alice Wasko 8b343d5989 update envoy/go-control-plane
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2024-05-28 10:31:00 -07:00
Alice Wasko 8d75fd48bc upgrade Envoy proxy to 1.30.1 with patches
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2024-05-28 10:31:00 -07:00
Lance Austin 9292c47470 deps: bump go-control-plane with envoy 1.28 support
Updates go-control-plane to the latest version sync'd and tested against
Envoy 1.28.

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2024-05-28 10:31:00 -07:00
Lance Austin 2d36cf21e0 deps: update to envoy 1.28.0
Bumps to Envoy 1.28.0 and regenerates compiled protos.

steps:
1. update envoy.mk to v1.28 commit with custom commits
2. ran `make update-base`

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2024-05-28 10:31:00 -07:00
Tenshin Higashi 084034461e Updating Golang
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-05-09 08:47:37 -07:00
Tenshin Higashi 99256f1e12 Updating Deps (Werkzeug)
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-05-09 08:47:37 -07:00
Tenshin Higashi 3b18d9a636 Updating google.golang.org/protobuf
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-05-01 08:43:55 -07:00
Tenshin Higashi ab531b7a2b Updating x/net
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-05-01 08:43:55 -07:00
Tenshin Higashi f88f6e0e45 Upgrading python dependencies
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-05-01 08:43:55 -07:00
Alex Gervais 7b7d39e261
Merge pull request #5587 from emissary-ingress/laustin/maintainer-status-update
Update maintainers.md
2024-03-05 10:00:04 -05:00
Lance Austin 75b4a5e445 Update maintainers.md
Lance is no longer able to commit the time necessary to be a
maintainer and thus is stepping down as a maintainer on the
project.

Signed-off-by: Lance Austin <laustin@coreweave.com>
2024-03-05 08:37:05 -06:00
Alex Gervais 31dd587529 Update MAINTAINERS.md
Stepping down as a maintainer.
2024-03-05 08:32:35 -06:00
Flynn bd0dbefdb7
Merge pull request #5555 from emissary-ingress/dd/dependabot-actions
deps: add github-action on dependabot
2024-02-13 10:28:51 -05:00
David Dymko 0fee7f95bd
Merge pull request #5554 from emissary-ingress/dependabot/go_modules/tools/src/golangci-lint/github.com/golangci/golangci-lint-1.56.1
build(deps): bump github.com/golangci/golangci-lint from 1.55.2 to 1.56.1 in /tools/src/golangci-lint
2024-02-12 07:53:48 -05:00
David Dymko f772a0c332 deps: add github-action on dependabot
Signed-off-by: David Dymko <dymkod@gmail.com>
2024-02-09 16:47:04 -05:00
David Dymko 76985900e5 deps: signoff on golang-ci-lint
Signed-off-by: David Dymko <dymkod@gmail.com>
2024-02-09 15:06:00 -05:00
David Dymko d781ecfb9e readme: fix broken emoji
Signed-off-by: David Dymko <dymkod@gmail.com>
2024-02-08 09:19:43 -06:00
Lance Austin a6afd065b6
Merge pull request #5549 from emissary-ingress/laustin/apiext-log
apiext: remove unnecessary log line
2024-02-05 12:44:21 -06:00
Lance Austin 7a2e1b6d67
apiext: remove unnecessary log line
Remove a log line that logs too often and was left over from debugging.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-02-05 12:28:57 -06:00
Lance Austin 2bf8880e95
Merge pull request #5540 from emissary-ingress/laustin/apiext-debug
apiext: restrict which crds are watched and patched.
2024-02-05 10:25:51 -06:00
Lance Austin 2c21e72146
apiext: add configurable crd-label-selectors
This restores the default restrictions on CRD labels for what
gets watched, but makes them configurable for custom installs.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-02-03 13:00:34 -06:00
Lance Austin 799c9a5a39
apiext: ignore conversion strategy None
When a getambassador.io CRD explicitly sets the conversion strategy
to None, the following should occur:

- CRD patching ignores the CRD and doesn't reconcile it
- CRD readiness checks ignore the CRD when checking for readiness

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-02-02 12:24:00 -06:00
Lance Austin 53a1631519 apiext: remove crd label restriction
To limit the scope of CRDs being pulled into the Manager cache,
a restriction was put on the default labels. This works in most cases
when users are using the default CRDs provided.

However, in some cases users have their own conventions for labels,
so if the label was modified or removed then the CRD patching would
not work.

This removes the cache option and instead relies upon the
`getAmbassadorioPredicate` for filtering out events for
non-getambassador.io CRDs that we do not care about, as seen in
`pkg/apiext/internal/controller/crd/predicate.go`.

We also used labels to limit the scope of CRDs checked for
CRD readiness. That restriction has been removed as well; instead we
check that we only look at the `getambassador.io` group.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-31 12:38:12 -06:00
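A minimal sketch of the group-based filtering described above, assuming controller-runtime's predicate helpers; the real `getAmbassadorioPredicate` in `pkg/apiext/internal/controller/crd/predicate.go` may check more than this.

```go
package crdsketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// getAmbassadorIOPredicate drops events for any CRD whose spec.group is not
// getambassador.io before they reach the reconciler, filtering by API group
// instead of by label as described in the commit above.
func getAmbassadorIOPredicate() predicate.Predicate {
	return predicate.NewPredicateFuncs(func(obj client.Object) bool {
		crd, ok := obj.(*apiextensionsv1.CustomResourceDefinition)
		return ok && crd.Spec.Group == "getambassador.io"
	})
}
```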
Lance Austin 086914f235
Merge pull request #5532 from emissary-ingress/laustin/fix-apiext-namespaces
apiext: pass configured ca namespace to cert manager
2024-01-26 10:02:31 -06:00
Lance Austin 42eab8ab69
apiext: pass configured ca namespace to cert manager
The CACertManager ensures the CA Cert is created and managed
and by default it assumes this is done in the emissary-system
namespace.

We missed passing the configured namespace into the CACertManager
so it would fail when users installed the emissary-apiext into
a different namespace than emissary-system.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-26 08:38:42 -06:00
Rick Lane f621c33a4a
Merge pull request #5522 from emissary-ingress/lukeshu/step-down
Formally remove myself as a maintainer
2024-01-18 13:09:45 -05:00
Flynn 56a235c1a1
Merge pull request #5521 from emissary-ingress/dd/update-maintainer-affiliation
Update maintainer affiliation
2024-01-18 12:42:41 -05:00
Luke T. Shumaker 3de3ff793e Formally remove myself as a maintainer
Realistically, I haven't been active in a while, and I don't see that
changing any time soon.

Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2024-01-18 10:36:24 -07:00
David Dymko 3b61f68cdc Update maintainer affiliation
Signed-off-by: David Dymko <dymkod@gmail.com>
2024-01-18 10:33:51 -05:00
Lance Austin ed2aef94f3 apiext: adjust log levels and remove extra log lines
This removes some log lines that were in place for debugging
and switches some to Debug. This ensures we are not logging
CA Cert private keys unless the user wants to see debug logs.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-08 10:12:00 -06:00
Lance Austin d8831a40de
Merge pull request #5504 from emissary-ingress/laustin/revert-envoy 2024-01-08 10:05:46 -06:00
Lance Austin 8239684191
Revert "deps: bump go-control-plane with envoy 1.28 support"
This reverts commit 85bba5d86f.

This goes along with reverting Envoy 1.28 back to Envoy 1.27.2. When
we upgrade to 1.29, we will restore this update.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-08 09:12:55 -06:00
Lance Austin 1026db35ad
Revert "deps: update to envoy 1.28.0"
This reverts commit 7b7be436c5.

HTTP/3 support (udp/quic) is broken in 1.28 and will cause emissary-ingress
to shut down when Envoy tries to validate the config. In testing, 1.27 and
1.29 both are ok. So, this will temporarily revert back to Envoy 1.27.2
until 1.29 is released and we can jump to 1.29 instead.

Note: none of the current commits on the unreleased 1.28.1
branch (release/v1.28) seem to address this; instead, quite a few
larger commits in master may be required, making it infeasible
to backport.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-08 09:12:32 -06:00
Lance Austin f9e27c94a8 python: fix integration test
A recent refactor broke integration tests by generating config envoy would reject.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-04 14:43:56 -06:00
Lance Austin c8edb16264 apiext: add crd ca bundle check to ready probe
Although the apiext server has a CA bundle, it might not
have been injected into the CRD yet. Unfortunately, there is
no good Condition/Readiness check on the CRD to ensure
it has been patched correctly.

This causes a race condition when using something like Helm
because the apiext pod will say it is ready because it has a CA
cert but the CA bundle might not have been picked up by the
k8s api-extension server.

This adds an additional check to the Ready Probe to validate
both that we have a CA Cert and that it matches the CA bundle
in the CRDs. Since we are using the controller-runtime Manager
client, which caches this List, this is a low-latency way to ensure
the CRDs are patched and ready as well.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-04 12:05:14 -06:00
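A hedged sketch of the extra readiness check described above: list the CRDs through the (cached) Manager client and require that every webhook-conversion getambassador.io CRD already carries the served CA bundle. The field paths follow the apiextensions v1 API; the real probe in apiext may be structured differently.

```go
package readysketch

import (
	"bytes"
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// caBundlesMatch reports whether every getambassador.io CRD that uses webhook
// conversion already carries the CA bundle we are serving with.
func caBundlesMatch(ctx context.Context, c client.Client, caPEM []byte) (bool, error) {
	var crds apiextensionsv1.CustomResourceDefinitionList
	// With the controller-runtime Manager client this List is served from the
	// cache, so the probe stays cheap.
	if err := c.List(ctx, &crds); err != nil {
		return false, err
	}
	for _, crd := range crds.Items {
		if crd.Spec.Group != "getambassador.io" {
			continue
		}
		conv := crd.Spec.Conversion
		if conv == nil || conv.Strategy != apiextensionsv1.WebhookConverter {
			continue // e.g. strategy None is ignored, as in the earlier commit
		}
		if conv.Webhook == nil || conv.Webhook.ClientConfig == nil ||
			!bytes.Equal(conv.Webhook.ClientConfig.CABundle, caPEM) {
			return false, nil
		}
	}
	return true, nil
}
```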
Lance Austin 628aba93e5 deps(go): bump deps to address open dependabot prs
Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-04 08:52:23 -06:00
Lance Austin c57fc34b3b python: cleanup noop env-vars
This cleans up some env-vars that are not used and are effectively no-ops.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-04 08:52:11 -06:00
Kai Tillman c543058d66 Fix typo
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-01-04 08:37:57 -06:00
Kai Tillman 90d7afef29 Update docs links and CNCF Slack links
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-01-04 08:37:57 -06:00
Lance Austin 14acd8e8c5 apiext: address review feedback
Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-03 16:27:39 -06:00
Lance Austin 72bd2c7ae7 apiext: add e2e test
This adds basic e2e tests for the apiext to ensure that it can properly
create, watch and renew expired Certs.

An additional test and example were added showing how to run it in an externally
managed mode with CertManager providing the Certificate and patching
CRDs. Note: this has limited support because it has only been tested against
the same settings (RSA, PKCS8) as self-managed mode. It may work with
other settings but those are not guaranteed at this time.

A new standalone container is generated locally and pushed to k3d. This
is only for e2e testing but sets up the framework for having it standalone
in the future.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-03 16:27:39 -06:00
Lance Austin 2a12d65a44 fix-crds: add support for outputting independent files
Extends the fix-crds tool so that it can output standalone files for
RBAC, Deployments and CRDs, which makes for easier and
more predictable e2e testing.

Generally speaking, we should re-think fix-crds and whether it makes
sense, but for now this adjusts it to meet the needs of e2e testing
the new apiext server.

- removed unnecessary comments that don't play nice with e2e-framework
- added port and path to the default Conversion data structure to support
externally managed mode. Note: when the apiext server is patching CRDs it
will override them like it did previously.

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-03 16:27:39 -06:00
Lance Austin cade34b2d9 apiext: rewrite internals and enhance capabilities
This is a complete re-write of the apiext internals for managing
and watching CA Certs, patching CRDs, and providing a ConversionWebHook
Service for converting the getambassador.io resources.

- Decouples the apiext binary from the busyambassador binary to make e2e tests
simpler and lay the framework for extracting it from the core container (future work)
- Adds leader election support to provide predictability when managing the CA
and patching CRDs
- New Controller for patching CRDs with the CA bundle
- New Controller for watching the CA Cert
- New CertificateAuthority abstraction for generating server certs on
conversion webhook requests and cache invalidation on CA Cert changes
- New Controller for managing the CA Cert: creates, updates, and auto-renews it
when it is about to expire
- Adds the ability to run in externally managed mode (i.e. turn off CA and CRD management)
and let an external tool like CertManager manage the certs

Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-03 16:27:39 -06:00
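A minimal sketch of the leader-election piece of the rewrite described above, assuming controller-runtime's Manager options; the election ID and namespace here are illustrative, not the values apiext actually uses.

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Only the elected leader manages the CA and patches CRDs, which is what
	// provides the predictability mentioned in the commit above.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:          true,
		LeaderElectionID:        "emissary-apiext-leader", // illustrative ID
		LeaderElectionNamespace: "emissary-system",        // illustrative namespace
	})
	if err != nil {
		panic(err)
	}
	// Controllers for CA management, CA watching, and CRD patching would be
	// registered with mgr here before calling mgr.Start.
	_ = mgr
}
```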
Lance Austin b43c95bd6d lint: remove dlog suggestion
Signed-off-by: Lance Austin <laustin@datawire.io>
2024-01-03 16:27:39 -06:00
Tenshin Higashi 19a15543c0
Adding typed_json log format (#5270)
* Initial Commit

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Adding test

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Addressing Feedback

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Make Generate and fix tests

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

---------

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2024-01-02 13:22:21 -05:00
Alice Wasko 3ce2ba28c1
Merge pull request #5468 from emissary-ingress/laustin/crd-status
apiext: add rbac for updating crd statuses
2023-12-06 14:43:22 -08:00
Alice Wasko c11896ba67
Merge pull request #5467 from emissary-ingress/laustin/update-go
deps(golang): upgrade to 1.21.5
2023-12-06 14:42:08 -08:00
Lance Austin 2cac062f7f
apiext: add rbac for updating crd statuses
On some locked-down clusters, the status update for CRDs
will fail due to missing RBAC. This updates the fix-crds tool
so that it includes `customresourcedefinitions/status` in the
RBAC permissions when updating CRDs.

Fixes https://github.com/emissary-ingress/emissary/issues/5436

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-12-06 15:57:01 -06:00
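As a hedged illustration of the rule described above, expressed with the Go RBAC types rather than the YAML that fix-crds actually emits (the verb list is an assumption):

```go
package rbacsketch

import rbacv1 "k8s.io/api/rbac/v1"

// crdStatusRule adds customresourcedefinitions/status next to
// customresourcedefinitions so that status updates succeed on locked-down
// clusters. The exact verbs emitted by fix-crds may differ.
var crdStatusRule = rbacv1.PolicyRule{
	APIGroups: []string{"apiextensions.k8s.io"},
	Resources: []string{"customresourcedefinitions", "customresourcedefinitions/status"},
	Verbs:     []string{"get", "list", "watch", "update", "patch"},
}
```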
Lance Austin 5a7c9b90c5
deps(golang): upgrade to 1.21.5
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-12-06 15:32:48 -06:00
Lance Austin f76e5d4642 cleanup: remove ambassador_cli
The CLI is a holdover from the old, pre-Golang Ambassador days.
We only run diagd for translation and the diagnostics UI. This will
reduce our footprint.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 16:23:46 -06:00
Lance Austin 43aab8470d cleanup: remove post installers and no longer used files
This cleans up the optimized docker image to no longer
run installers. These are not used at all and are around from
pre emissary-ingress days.

Simplified optimized image to reduce footprint.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 15:39:19 -06:00
Lance Austin ee7c77ad01 cleanup: remove k8sregistryctl cmd
This command isn't used and is likely a holdover from earlier work.
Removing it to clean up the surface area and usages of pkg/k8s and kubeapply.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 15:39:19 -06:00
Lance Austin a289c618aa cleanup: remove watch_hook.py
This is no longer needed and was used by mockery. This cleans it up now
that mockery is gone.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 15:39:19 -06:00
Lance Austin 819628d48d cleanup: remove grab_snapshots.py
Removes grab_snapshots.py since it is no longer maintained;
snapshots can be obtained via `kubectl cp` from the pod to the local machine.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 15:39:19 -06:00
Lance Austin 982be874e1 deps(golang): bump to latest 1.21.4 z-patch
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 14:09:10 -06:00
Lance Austin d04b904ab6 deps (python): bump black and urllib to latest
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 14:09:10 -06:00
Lance Austin bbfcd578ab deps: bump github.com/hashicorp/consul/api to 1.26.1
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 14:09:10 -06:00
Lance Austin 25e6949fae deps: bump golang.org/x/mod to v0.14.0
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 14:09:10 -06:00
Lance Austin e3944b055c deps: bump gorilla/websocket to v1.5.1
Gorilla is used for KAT Client/Server during e2e testing.
This upgrades it to latest.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 14:09:10 -06:00
Lance Austin 0ebded51d0 deps: bump k8s/* to v0.28.4
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 14:09:10 -06:00
Lance Austin be9403d528 cleanup: remove mockery from cli
Mockery hasn't been used or maintained in a very long time.
This cleans it up to reduce surface area.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 12:07:26 -06:00
Lance Austin b2a901df2c cleanup: remove ert.py from cli
This removes the `ert.py` command, which is no
longer used or maintained. This allows us to reduce
our deps and surface area.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 12:07:12 -06:00
Lance Austin deee4153d1 docs: fix developing.md TOC
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 09:58:36 -06:00
Lance Austin 2c9cfaa933 cleanup: remove agent symlink from build
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 09:58:36 -06:00
Lance Austin 71b981eeb9 cleanup: remove unused reproducer cmd
This removes the reproducer command from emissary-ingress.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-17 09:58:36 -06:00
Lance Austin 85bba5d86f deps: bump go-control-plane with envoy 1.28 support
Updates go-control-plane to the latest version sync'd and tested against
Envoy 1.28.

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-11-17 09:58:20 -06:00
Lance Austin 7b7be436c5 deps: update to envoy 1.28.0
Bumps to Envoy 1.28.0 and regenerates compiled protos.

steps:
1. update envoy.mk to v1.28 commit with custom commits
2. ran `make update-base`

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-11-17 09:58:20 -06:00
Lance Austin cc9502a511 agent: make Ambassador Agent opt-in
The Ambassador Agent is not a necessary component for
running emissary-ingress as an API Gateway. This removes it from the published
YAML manifest and turns it off by default in the published Helm chart.

We recommend you switch to the standalone chart, which can be found
at https://github.com/datawire/ambassador-agent.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-16 15:24:21 -06:00
Tenshin Higashi 644eba62f2
Merge pull request #5424 from emissary-ingress/alicewasko/route-shifting
route shifting test
2023-11-16 13:19:51 -05:00
Tenshin Higashi 65f43d1fe5 Adding collision testing
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2023-11-16 12:28:58 -05:00
Lance Austin f89c3595ed cleanup: remove kubewatch.py and associated python deps
Kubewatch was a holdover from the pre-Golang watt code. By
removing it we can remove quite a few Python deps that
have caused noise for us in the past with CVE scanners.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-16 08:39:06 -06:00
Alice Wasko 576ff7de02 add transforms test
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-11-13 17:09:29 -08:00
Joe Andaverde 512c71f313 Fix routes shifting causing routing issues during a reconfiguration due to ordinal indexed route name
Signed-off-by: Joe Andaverde <joe@temporal.io>
(cherry picked from commit ecb43a1b9e)
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-11-13 15:33:02 -08:00
Lance Austin 63222e20cf
Merge pull request #5423 from emissary-ingress/laustin/prep-v3.10-dev
[dev]: prep branch for v3.10 development
2023-11-13 13:52:29 -06:00
Lance Austin a56bfce8b7
dev: prep branch for 3.10 development
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-12 19:16:05 -06:00
Lance Austin 229abf0750
release: update changelog for v3.9.0 release
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-12 19:11:11 -06:00
Tenshin Higashi ec880db9fe
Merge pull request #5421 from emissary-ingress/tenshinhigashi/revert-agent
Reverting Agent Changes
2023-11-08 12:16:14 -05:00
Tenshin Higashi cf7ad46a89 Revert "Moving agent helm chart to dependency (#5328)"
This reverts commit c5fc4a7ce4.

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2023-11-08 11:07:02 -05:00
Tenshin Higashi a5a4c688b1 Revert "Updating Agent"
This reverts commit 81de0a26fc.

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2023-11-08 11:07:02 -05:00
Tenshin Higashi 66e86429bc
Merge pull request #5418 from emissary-ingress/tenshinhigashi/patch-agent
Updating agent to most recent
2023-11-07 10:15:19 -05:00
Tenshin Higashi 81de0a26fc Updating Agent
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2023-11-07 01:27:23 -05:00
Lance Austin 6b6b628ab9 k8s-e2e: bump k3s to latest z-patches for each minor version
Note: there are some RCs for the latest z-patch, but we will wait
until those are official releases.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 999f292b84 deps: bump k8s libs to 0.28.3
This bumps to the recent v0.28.3 z-patch of
the k8s deps, along with any transitive dep
updates.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 0341aa7adb deps(go): bump google.golang.org/grpc to v1.58.3
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin ebea5f5a55 deps(go): bump golangci-lint to v1.55.2
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin f4c0535e10 deps(go): bump chart-testing to 3.10.1
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 3d01d338a5 deps(go): bump github.com/docker/docker to v24.0.7+incompatible
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 6dc259f448 deps(python): bump urllib3 to 1.26.18
Addresses CVE scanner noise from flagging the deps
as vulnerable due to CVE-2023-45803.

However, we are not vulnerable, because this is a transitive
dep pulled in by the Kubernetes client, which is part of
kubewatch, a debug tool that isn't exposed to external
users.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 3164b8bf61 deps(python): update black formatter to 23.10.1
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 36b3d213ca deps(python): bump cachetools to 5.3.2
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin ccbf179736 deps(python): bump werkzeug to 3.0.1
Although we are not affected by the CVE, this will quiet down
CVE scanners.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 1eccac57d5 deps(python): bump orjson to 3.9.10
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin ac662313a1 deps(python): bump prometheus-client to 0.18.0
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 7fc11933e4 deps(python): bump google-auth to 2.23.4
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin 1cd8650cb9 deps(python): bump charset-normalizer to 3.3.2
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin d94b4c21a2 deps(python): bump blinker to 1.7.0
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 17:33:17 -06:00
Lance Austin cac12065fb
Merge pull request #5411 from emissary-ingress/laustin/go-1.21
golang: bump to 1.21.3
2023-11-05 12:13:35 -06:00
Lance Austin 90c31adb0e
deps: bump datawire/dlib in ocibuild tool
The ocibuild tool was still referencing 1.2.5, which is not compatible
with golang 1.21 due to "dot" imports.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-05 06:14:41 -06:00
Lance Austin 27adba6eab
golang: bump to 1.21.3
Bumps to the latest minor release of Golang.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-11-04 23:55:45 -05:00
Alice Wasko 9e1ea1c9aa remove unused test helper
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-11-04 21:39:36 -05:00
Alice Wasko 8b7723c0a9 remove filter/filterpolicy watch
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-11-04 21:39:36 -05:00
Alice Wasko 47c72fa0f7 update relnotes
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-11-04 11:35:25 -05:00
Alice Wasko 7e23a9b924 add module support for envoy runtime flags
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-11-04 11:35:25 -05:00
Tenshin Higashi c5fc4a7ce4
Moving agent helm chart to dependency (#5328)
* First commit

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Updating Makefile

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Updating chart.yaml to v2

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Fixing make commands

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Adding agent alias

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Updating Agent

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Minor changes for apro

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Addressing feedback

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

* Updating Agent version

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>

---------

Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2023-11-03 20:53:00 -04:00
Rick Lane ac19ea6609
Merge pull request #5395 from emissary-ingress/rlane/crd-update-status
Include CRD status updates
2023-10-30 11:43:23 -04:00
Rick Lane 28ff281739
Include CRD status updates
Signed-off-by: Rick Lane <rlane@datawire.io>
2023-10-27 18:39:38 -07:00
Lance Austin c316a95e2a deps: update to alpine 3.18 and python 3.11
The base alpine image we use that has musl and glibc is now
updated to alpine 3.18 which ships with python 3.11. This
does the following:

- bumps the various Dockerfile base images
- unpins python and pip3
- Pushs new base-envoy image by bump baserel ver

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-10-20 10:16:54 -05:00
Lance Austin 9209418d67 deps: bump to envoy 1.27.2
Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-10-19 09:42:10 -05:00
Lance Austin fe02667366 envoy: bump go-control-plane to envoy 1.27 compatible
This updates the go-control-plane so that it is using a version
that is sync'd and tested against the protos compatible with
envoy 1.27.

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-10-17 21:08:31 -05:00
Lance Austin 0a620e8abb envoy: update compiled protos for envoy 1.27
Ran `make compile-envoy-protos` to ensure that we have the latest
raw protos and compiled go code.

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-10-17 21:08:31 -05:00
Lance Austin 2b55a41868 envoy: update to Envoy 1.27.1 with new build process
This bumps our custom Envoy to be based on v1.27.1. The build
process has been revamped for the following:

1. align build steps with upstream Envoy's CI steps
2. Simplify envoy.mk into a set of simple Phony targets and shell scripts
3. Remove compiling protos from general `make generate`
4. Update DEVELOPING.md to match revamped workflow

A couple of key differences are that we leverage the underlying
tools (bazel, docker) for caching and volume mounting rather than
implicit make targets. This should make it clearer what is happening
when running certain commands and will allow for more flexibility in
the dev workflow.

I tried to maintain support for FIPS_MODE, but it's not tested since we do not
officially support it and it was added for developers. If it is not working correctly,
then follow-up PRs can address it as needed.

Signed-off-by: Lance Austin <laustin@dataiwre.io>
2023-10-17 21:08:31 -05:00
Lance Austin bc148a7620 deps(python): bump google-auth to 2.23.3
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Lance Austin 650418f364 deps(python): bump python black formatter to 23.9
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Lance Austin 2820aafdf7 deps: bump github.com/google/go-cmp to v0.6.0
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Lance Austin 89b318f94a deps: bump google.golang.org/grpc to v1.58.3
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Lance Austin e7f36a6555 deps: bump golang.org/x/net to v0.17.0
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Lance Austin 57344cd774 deps: bump orjson to v3.9.9
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Lance Austin 767c429dc3 deps: update to golang 1.20.10
Bump to latest z-patch to address CVE-2023-39325.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-17 10:39:55 -05:00
Alice Wasko 179ee90179
Merge pull request #5346 from emissary-ingress/alicewasko/patch-envoy
Apply envoy security patches
2023-10-10 16:49:23 -07:00
Alice Wasko 59f9c9185c apply envoy security patches
Signed-off-by: Alice Wasko <alicewasko@datawire.io>
2023-10-10 15:01:33 -07:00
Lance Austin 10171ad4ca tools: bump to chart-doc-gen v0.5.0
This allows us to drop the usage of a custom fork in favor
of just using upstream.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-10 12:22:06 -05:00
Lance Austin 9fe6be4d95 base-python: cleanup old comments
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 11:16:45 -05:00
Lance Austin 12418ed2ff test-stats: bumps docker and python deps
Resolves open dependabot PRs and updates to
latest Flask release.

Normalizes project to use pyproject.toml with pip-compile
for creating requirements.txt.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 11:16:45 -05:00
Lance Austin ccaf980639 test-shadow: bumps docker and python deps
Resolves open dependabot PRs and updates to
latest Flask release.

Normalizes project to use pyproject.toml with pip-compile
for creating requirements.txt.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 11:16:45 -05:00
Lance Austin ba9e399058 test-auth: bumps docker and python deps
Resolves open dependabot PRs and updates to
latest Flask release.

Normalizes project to use pyproject.toml with pip-compile
for creating requirements.txt.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 11:16:45 -05:00
Lance Austin 772ce1e044 deps(python): upgrade multiple python libs via pip-compile
Ran `cd python && pip-compile --upgrade --allow-unsafe -q requirements.in`
which bumped all the deps in `python` folder.

Some notable ones:
- setuptools
- click
- charset-normalizer
- flask
- werkzeug
- typing-extensions
- gunicorn

Removed the direct import of the deprecated distutils because
it was causing conflicts with the latest `setuptools`.

Also, due to a similar issue to the one outlined here:
https://github.com/pypa/pip/issues/5247

I had to skip uninstalling the older version of packaging to allow
pip to install the newer versions in the base-pip container.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 11:16:45 -05:00
Lance Austin 5d07d29382 deps(python): bumps pip3 and pip-tools in base-python
Updating to latest tools before updating other python deps.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 11:16:45 -05:00
Lance Austin 4d936bfcdd
Merge pull request #5337 from emissary-ingress/laustin/bump-consul
deps(go): update github.com/hashicorp/consul/api to v1.25.1
2023-10-09 11:00:19 -05:00
Lance Austin 68e7e3ce88
deps(go): update github.com/hashicorp/consul/api to v1.25.1
Bumps to the latest and adds licenses to unparsable-packages.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 10:26:32 -05:00
Lance Austin 4c8cc88f38 deps(go): bump google.golang.org/grpc to v1.58.2
Bumps gRPC to latest to resolve dependabot updates.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 10:24:55 -05:00
Lance Austin 06a0d7b45c deps(go): bump github.com/prometheus/client_model to 0.5.0
Bumps client_model to latest and resolves open dependabot
PR.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 10:24:55 -05:00
Lance Austin 2513ae1c13 deps(go): bump gopkg.in/yaml.v3 in tools/src/chart-doc-gen
This bumps the indirect dep for chart-doc-gen to v3.0.1 to
address dependabot bumps.

Note: we should look to update upstream chart-doc-gen and
remove our custom version.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 10:24:55 -05:00
Lance Austin f937f08d63 deps(go): bump github.com/golangci/golangci-lint to 1.54.2
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 10:24:55 -05:00
Lance Austin e4d0b43df7 deps(go): update golang.org/x/* to latest
Updates to latest `mod` and `sys` packages along with
multiple other `x/*` packages.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-09 10:24:55 -05:00
Lance Austin 4a8a0132dd lint: fix broken linting in irratelimit.py
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-05 18:10:19 -05:00
Lance Austin 78ee2df1fc deps: bump to golang 1.20.9
This ensures that CI and developers are using the latest
golang to build emissary-ingress to address: CVE-2023-39323.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-10-05 18:10:19 -05:00
Alice Wasko 89228f169a
Merge pull request #5046 from jeromefroe/jerome/add-rate-limited-as-resource-exhausted-field
Add RateLimitedAsResourceExhausted field to RateLimitServiceSpec
2023-09-28 13:36:27 -07:00
Alice Wasko 585922f950
add init container that waits for apiext (#5241)
* add init container that waits for apiext

Signed-off-by: AliceProxy <alicewasko@datawire.io>
---------

Signed-off-by: AliceProxy <alicewasko@datawire.io>
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
Co-authored-by: Tenshin Higashi <thigashi@datawire.io>
2023-09-27 14:51:36 -04:00
Lance Austin 40ad84ce0a deps: bump orjson to 3.9.7
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-20 12:24:44 -05:00
Lance Austin da9b271c36 deps: bump controller-tools to latest release
Two notable changes occurred with the generated
deep copy code:

1. Removed obsolete build tags
    - https://github.com/kubernetes-sigs/controller-tools/pull/828

2. Fix implicit for loop aliasing
    - https://github.com/kubernetes-sigs/controller-tools/pull/810

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-20 12:24:29 -05:00
Lance Austin 3d5635fc99 deps: bump controller-runtime to latest release
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-20 12:24:29 -05:00
Lance Austin dc39f11a17 ci: update k8s-e2e test matrix
Updates the matrix to cover the last 3 versions of K8s to
align with the K8s support policy.

Note: we already test K8s 1.22, which is the earliest supported version.
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-19 12:55:21 -05:00
Lance Austin f3ab7183df deps: update go and python K8s libs to 1.28
This upgrades our dependencies to the latest z-patch release of
the k8s golang packages and python dep.

- updated our custom fork for `k8s.io/code-generator`
- refactored `kates` pkg due to refactoring in the
 `apiextensions-apiserver` pkg.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-19 12:55:21 -05:00
Adrian Ding 7f56afa587
Typo 2023-09-19 07:26:27 +12:00
Lance Austin d6a828895a deps: update to latest go-mkopensource
This updates to the latest version, which contains a new
--ignore-dirty flag to simplify the local dev workflow when
bumping deps.

IsAmbassadorProprietarySoftware was replaced with the
new --proprietary-packages flag, so the test for the
`py-mkopensource` tool needed to be updated to match the
new expected output due to the new behavior.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-18 10:52:31 -05:00
Jerome Froelich 3155b60e8a Revert go.sum changes
Signed-off-by: Jerome Froelich <jeromefroelich@hotmail.com>
2023-09-17 12:43:39 -04:00
Jerome Froelich 9593640f2c Address feedback on pull request
Signed-off-by: Jerome Froelich <jeromefroelich@hotmail.com>
2023-09-17 12:42:30 -04:00
Jerome Froelich 2a88d45bb6 Add RateLimitedAsResourceExhausted field to RateLimitServiceSpec
Signed-off-by: Jerome Froelich <jeromefroelich@hotmail.com>
2023-09-17 12:41:52 -04:00
Alice Wasko f17e1898c4
Merge pull request #5288 from emissary-ingress/tenshinhigashi/go1-20-8
deps(go): updating to go 1.20.8
2023-09-12 08:21:08 -07:00
Tenshin Higashi 5dd181f03e updating to go 1.20.8
Signed-off-by: Tenshin Higashi <thigashi@datawire.io>
2023-09-11 15:09:07 -04:00
Eivind Valderhaug e9014c6493 fix(helm): removed wrong/confusing '-ratelimit' suffix from label app.kubernetes.io/component in Module
Signed-off-by: Eivind Valderhaug <79476307+eevdev@users.noreply.github.com>
2023-09-07 11:07:01 -05:00
Eivind Valderhaug 56ec74de75 docs(helm): added comment in values.yaml on how to create multiple ambassador Modules in the same Kubernetes namespace
Signed-off-by: Eivind Valderhaug <79476307+eevdev@users.noreply.github.com>
2023-09-07 11:07:01 -05:00
Alice Wasko b987a5c2d3
Merge pull request #5283 from emissary-ingress/laustin/revert-envoy
revert: downgrade to envoy 1.26.4
2023-09-07 08:51:49 -07:00
Alice Wasko f6a6ac57df
Merge pull request #5221 from eevdev/ingressclass-annotation
fix(helm): missing annotation for IngressClass
2023-09-07 08:44:49 -07:00
Lance Austin e5848545c4 host: add warning for unsupported previewURL field
The `Host.spec.previewURL` field is a holdover from when the previewURL
feature was part of the old Ambassador and Telepresence 1.
Documentation and support for this have long been removed.

But just in case someone is still using this and is upgrading from older
versions, we want to provide user feedback.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-07 10:16:29 -05:00
Lance Austin 1ac7b74f29
docs: tidy up changelogs for v3.8 release
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-07 09:55:51 -05:00
Lance Austin a28ef97c45
revert: back to existing version of go-control-plane
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-07 09:52:39 -05:00
Lance Austin cb6eea73cb
revert: back to envoy 1.26
Signed-off-by: Lance Austin <laustin@datawire.io>
2023-09-07 09:52:31 -05:00
Alice Wasko 172ad72ab4
Merge pull request #5248 from teejaded/apiext-tlsversion
apiext: set minimum tls version to 1.3
2023-09-06 12:40:25 -07:00
Alice Wasko 330582b9a4
Merge pull request #5261 from emissary-ingress/alicewasko/remove-docker-demo
kill docker demo mode
2023-08-31 13:33:00 -07:00
Eivind Valderhaug 3099436e2d fix(helm): with AMBASSADOR_ID set and `ingressClassResource` enabled, annotation `getambassador.io/ambassador-id` was missing for the generated IngressClass
Signed-off-by: Eivind Valderhaug <79476307+eevdev@users.noreply.github.com>
2023-08-31 21:57:17 +02:00
Eivind Valderhaug 30a6ddd292 feat(helm): configure autoscaling behaviors for HorizontalPodAutoscaler in .Values.autoscaling.behavior
Signed-off-by: Eivind Valderhaug <79476307+eevdev@users.noreply.github.com>
2023-08-31 08:47:25 -05:00
Eivind Valderhaug 03a32471a3 fix(helm): autoscaling on EKS - potentially wrong apiVersion when deployed to AWS EKS due to them not following semantic versioning
Also fixed inaccurate previous changelog entry.

Signed-off-by: Eivind Valderhaug <79476307+eevdev@users.noreply.github.com>
2023-08-31 08:47:25 -05:00
AliceProxy d740937f36 kill docker demo mode
Signed-off-by: AliceProxy <alicewasko@datawire.io>
2023-08-24 12:39:20 -07:00
Rick Lane a7fc5d9343
Merge pull request #5260 from emissary-ingress/rlane/remove-todo-from-crds
Remove legacy TODO/FIXME comments from CRDs
2023-08-23 13:44:49 -07:00
Thibault Deutsch faf29b6012 diagd: atomically write the ADS config
Write the generated ADS config to a temporary file first, then rename
the file. This ensures that the update is atomic and that ambex won't
load a partially written file.

Fix #5093

Signed-off-by: Thibault Deutsch <thibault@arista.com>
2023-08-23 13:14:57 -05:00
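The fix above uses the classic write-to-temp-then-rename pattern. diagd itself is Python, but here is the same pattern sketched in Go for illustration; the file names and helper are assumptions, not the actual diagd code.

```go
package adssketch

import (
	"os"
	"path/filepath"
)

// writeAtomically writes data to a temporary file in the same directory and
// then renames it over the target, so a reader (ambex) never sees a partially
// written config.
func writeAtomically(path string, data []byte) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".ads-*.tmp")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless no-op if the rename succeeded

	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// On POSIX systems a rename within the same filesystem is atomic.
	return os.Rename(tmp.Name(), path)
}
```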
Rick Lane a666525df7
Remove legacy TODO/FIXME comments from CRDs
Signed-off-by: Rick Lane <rlane@datawire.io>
2023-08-23 08:58:29 -07:00
Lance Austin b965751835 [v3.9] Next Release Dev Prep
Prepare master for the next release version.

Signed-off-by: Lance Austin <laustin@datawire.io>
2023-08-22 10:53:52 -05:00
TJ Miller 5c5325ffeb apiext: set minimum tls version to 1.3
Signed-off-by: TJ Miller <millert@us.ibm.com>
2023-08-18 11:21:37 -07:00
2106 changed files with 282433 additions and 38369 deletions


@@ -8,14 +8,15 @@ This document is intended for developers looking to contribute to the Emissary-i
> Looking for end user guides for Emissary-ingress? You can check out the end user guides at <https://www.getambassador.io/docs/emissary/>.
After reading this document if you have questions we encourage you to join us on our [Slack channel](https://d6e.co/slack) in the [#emissary-dev](https://datawire-oss.slack.com/archives/CB46TNG83) channel.
After reading this document if you have questions we encourage you to join us on our [Slack channel](https://communityinviter.com/apps/cloud-native/cncf) in the #emissary-ingress channel.
- [Code of Conduct](../Community/CODE_OF_CONDUCT.md)
- [Governance](../Community/GOVERNANCE.md)
- [Maintainers](../Community/MAINTAINERS.md)
**Table of Contents**
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Development Setup](#development-setup)
- [Step 1: Install Build Dependencies](#step-1-install-build-dependencies)
- [Step 2: Clone Project](#step-2-clone-project)
@@ -47,16 +48,16 @@ After reading this document if you have questions we encourage you to join us on
- [Shutting up the pod labels error](#shutting-up-the-pod-labels-error)
- [Extra credit](#extra-credit)
- [Debugging and Developing Envoy Configuration](#debugging-and-developing-envoy-configuration)
- [Mockery](#mockery)
- [Ambassador Dump](#ambassador-dump)
- [Making changes to Envoy](#making-changes-to-envoy)
- [1. Preparing your machine](#1-preparing-your-machine)
- [2. Setting up your workspace to hack on Envoy](#2-setting-up-your-workspace-to-hack-on-envoy)
- [3. Hacking on Envoy](#3-hacking-on-envoy)
- [4. Building and testing your hacked-up Envoy](#4-building-and-testing-your-hacked-up-envoy)
- [5. Finalizing your changes](#5-finalizing-your-changes)
- [6. Checklist for landing the changes](#6-checklist-for-landing-the-changes)
- [Developing Emissary-ingress (Ambassador Labs -only advice)](#developing-emissary-ingress-ambassador-labs--only-advice)
- [5. Test Devloop](#5-test-devloop)
- [6. Protobuf changes](#6-protobuf-changes)
- [7. Finalizing your changes](#7-finalizing-your-changes)
- [8. Final Checklist](#8-final-checklist)
- [Developing Emissary-ingress (Maintainers-only advice)](#developing-emissary-ingress-maintainers-only-advice)
- [Updating license documentation](#updating-license-documentation)
- [Upgrading Python dependencies](#upgrading-python-dependencies)
- [FAQ](#faq)
@@ -70,9 +71,7 @@ After reading this document if you have questions we encourage you to join us on
- [My editor is changing `go.mod` or `go.sum`, should I commit that?](#my-editor-is-changing-gomod-or-gosum-should-i-commit-that)
- [How do I debug "This should not happen in CI" errors?](#how-do-i-debug-this-should-not-happen-in-ci-errors)
- [How do I run Emissary-ingress tests?](#how-do-i-run-emissary-ingress-tests)
- [How do I update the python test cache?](#how-do-i-update-the-python-test-cache)
- [How do I type check my python code?](#how-do-i-type-check-my-python-code)
- [How do I get the source code for a release?](#how-do-i-get-the-source-code-for-a-release)
## Development Setup
@@ -535,10 +534,7 @@ You should now be able to launch ambassador if you set the
#### Getting envoy
If you do not have envoy in your path already, the entrypoint will use
docker to run it. At the moment this is untested for macs which probably
means it is broken since localhost communication does not work by
default on macs. This can be made to work as soon an intrepid volunteer
with a mac reaches out to me (rhs@datawire.io).
docker to run it.
#### Shutting up the pod labels error
@@ -569,108 +565,6 @@ we need to push both the code and any relevant kubernetes resources
into the cluster. The following sections will provide tips for improving
this development experience.
#### Mockery
Fortunately we have the `mockery` tool which lets us run the compiler
code directly on kubernetes resources without having to push that code
or the relevant kubernetes resources into the cluster. This is the
fastest way to hack on and debug the compiler.
The `mockery` tool runs inside the Docker container used to build
Ambassador, using `make shell`, so it's important to realize that it
won't have access to your entire filesystem. There are two easy ways
to arrange to get data in and out of the container:
1. If you `make sync`, everything in the Ambassador source tree gets rsync'd
into the container's `/buildroot/ambassador`. The first time you start the
shell, this can take a bit, but after that it's pretty fast. You'll
probably need to use `docker cp` to get data out of the container, though.
2. You may be able to use Docker volume mounts by exporting `BUILDER_MOUNTS`
with the appropriate `-v` switches before running `make shell` -- e.g.
```bash
export BUILDER_MOUNTS=$(pwd)/xfer:/xfer
make shell
```
will cause the dev shell to mount `xfer` in your current directory as `/xfer`.
This is known to work well on MacOS (though volume mounts are slow on Mac,
so moving gigabytes of data around this way isn't ideal).
Once you've sorted out how to move data around:
1. Put together a set of Ambassador configuration CRDs in a file that's somewhere
that you'll be able to get them into the builder container. The easy way to do
this is to use the files you'd feed to `kubectl apply`; they should be actual
Kubernetes objects with `metadata` and `spec` sections, etc. (If you want to
use annotations, that's OK too, just put the whole `Service` object in there.)
2. Run `make compile shell` to build everything and start the dev shell.
3. From inside the build shell, run
```bash
mockery $path_to_your_file
```
If you're using a non-default `ambassador_id` you need to provide it in the
environment:
```bash
AMBASSADOR_ID=whatever mockery $path_to_your_file
```
Finally, if you're trying to mimic `KAT`, copy the `/tmp/k8s-AmbassadorTest.yaml`
file from a KAT run to use as input, then
```bash
mockery --kat $kat_test_name $path_to_k8s_AmbassadorTest.yaml
```
where `$kat_test_name` is the class name of a `KAT` test class, like `LuaTest` or
`TLSContextTest`.
4. Once it's done, `/tmp/ambassador/snapshots` will have all the output from the
compiler phase of Ambassador.
The point of `mockery` is that it mimics the configuration cycle of real Ambassador,
without relying at all on a Kubernetes cluster. This means that you can easily and
quickly take a Kubernetes input and look at the generated Envoy configuration without
any other infrastructure.
#### Ambassador Dump
The `ambassador dump` tool is also useful for debugging and hacking on
the compiler. After running `make shell`, you'll also be able to use
the `ambassador` CLI, which can export the most import data structures
that Ambassador works with as JSON. It works from an input which can
be either a single file or a directory full of files in the following
formats:
- raw Ambassador resources like you'll find in the `demo/config` directory; or
- an annotated Kubernetes resources like you'll find in `/tmp/k8s-AmbassadorTest.yaml` after running `make test`; or
- a `watt` snapshot like you'll find in the `$AMBASSADOR_CONFIG_BASE_DIR/snapshots/snapshot.yaml` (which is a JSON file, I know, it's misnamed).
Given an input source, running
```bash
ambassador dump --ir --xds [$input_flags] $input > test.json
```
will dump the Ambassador IR and v2 Envoy configuration into `test.json`. Here
`$input_flags` will be
- nothing for raw Ambassador resources;
- `--k8s` for Kubernetes resources; or
- `--watt` for a `watt` snapshot.
You can get more information with
```bash
ambassador dump --help
```
### Making changes to Envoy
Emissary-ingress is built on top of Envoy and leverages a vendored version of Envoy (*we track upstream very closely*). This section will go into how to make changes to the Envoy that is packaged with Emissary-ingress.
@ -712,30 +606,7 @@ and tests on a RAM disk (see the `/etc/fstab` line above).
make $PWD/_cxx/envoy
git -C _cxx/envoy checkout -b YOUR_BRANCHNAME
```
2. Tell the build system that, yes, you really would like to be
compiling envoy, as you'll be modifying Envoy:
```shell
export YES_I_AM_OK_WITH_COMPILING_ENVOY=true
export ENVOY_COMMIT='-'
```
Building Envoy is slow, and most Emissary-ingress contributors do not
want to rebuild Envoy, so we require the first two environment
variables as a safety.
Setting `ENVOY_COMMIT=-` does 3 things:
1. Tell it to use whatever is currently checked out in
`./_cxx/envoy/` (instead of checking out a specific commit), so
that you are free to modify those sources.
2. Don't try to download a cached build of Envoy from a Docker
cache (since it wouldn't know which `ENVOY_COMMIT` do download
the cached build for).
3. Don't push the build of Envoy to a Docker cache (since you're
still actively working on it).
3. To build Envoy in FIPS mode, set the following variable:
2. To build Envoy in FIPS mode, set the following variable:
```shell
export FIPS_MODE=true
@ -746,54 +617,64 @@ and tests on a RAM disk (see the `/etc/fstab` line above).
Emissary does not claim to be FIPS compliant or certified.
See [here](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/ssl#fips-140-2) for more information on FIPS and Envoy.
> _NOTE:_ FIPS_MODE is NOT supported by the emissary-ingress maintainers, but we provide it for developers as a convenience.
#### 3. Hacking on Envoy
Modify the sources in `./_cxx/envoy/`.
Modify the sources in `./_cxx/envoy/`, or update the branch and/or `ENVOY_COMMIT` as necessary in `./_cxx/envoy.mk`.
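As a rough sketch (the branch name and commit message here are only illustrative), the usual loop is to edit the vendored sources and commit your work on your branch inside `./_cxx/envoy/`:

```shell
# Edit whatever Envoy sources you need under _cxx/envoy/, then commit the
# work-in-progress on your branch inside the vendored checkout.
git -C _cxx/envoy status
git -C _cxx/envoy add -A
git -C _cxx/envoy commit -m "WIP: my Envoy change"
```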
#### 4. Building and testing your hacked-up Envoy
- **Build Envoy** with `make update-base`. Again, this is *not* a
quick process. The build happens in a Docker container; you can
set `DOCKER_HOST` to point to a powerful machine if you like.
> See `./_cxx/envoy.mk` for the full list of targets.
- **Test Envoy** and run with Envoy's test suite (which we don't run
during normal Ambassador development) by running `make check-envoy`.
Be warned that Envoy's full **test suite** requires several hundred
gigabytes of disk space to run.
Multiple phony targets are provided so that developers can run just the steps they are interested in while developing. Here are a few of the key ones (a typical sequence is sketched after this list):
Inner dev-loop steps:
- `make update-base`: will perform all the steps necessary to verify and build Envoy, build the Docker images, push the images to the container repository, and compile the updated protos.
- To run just specific tests, instead of the whole test suite, set
the `ENVOY_TEST_LABEL` environment variable. For example, to run
just the unit tests in
`test/common/network/listener_impl_test.cc`, you should run
- `make build-envoy`: will build the Envoy binaries using the same build container as the upstream Envoy project. The `_cxx/envoy-docker-build` directory is mounted into the container, and Bazel writes its build outputs there.
```shell
ENVOY_TEST_LABEL='//test/common/network:listener_impl_test' make check-envoy
```
- `make build-base-envoy-image`: will use the release outputs from building envoy to generate a new `base-envoy` container which is then used in the main emissary-ingress container build.
- You can run `make envoy-shell` to get a Bash shell in the Docker
container that does the Envoy builds.
- `make push-base-envoy`: will push the built container to the remote container repository.
Interpreting the test results:
- `make check-envoy`: will use the Docker build container to run the Envoy test suite against the Envoy currently checked out in the `_cxx/envoy` folder.
- If you see the following message, don't worry, it's harmless; the
tests still ran:
- `make envoy-shell`: will run the Envoy build container and open a bash shell session. The `_cxx/envoy` folder is volume-mounted into the container, and the user is set to the `envoybuild` user so that you are not running as root, which keeps the builds hermetic.
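As a rough sketch, the Envoy-only inner loop with these targets looks like this (the ordering is the only assumption here; each command is just one of the targets described above):

```shell
# Rebuild the Envoy binaries from the sources in _cxx/envoy/ ...
make build-envoy

# ... and run the Envoy test suite against them (slow, and heavy on disk space).
make check-envoy
```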
```text
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
```
#### 5. Test Devloop
The message means that the test passed, but it passed too
quickly, and Bazel is suggesting that you declare it as smaller.
Something along the lines of "This test only took 2s, but you
declared it as being in the 60s-300s ('moderate') bucket,
consider declaring it as being in the 0s-60s ('short')
bucket".
Running the Envoy test suite will compile all the test targets. This is a slow process and can use lots of disk space.
Don't be confused (as I was) in to thinking that it was saying
that the test was too big and was skipped and that you need to
throw more hardware at it.
The Envoy inner devloop for building and testing:
- You can make a change to the Envoy code and run the whole test suite just by calling `make check-envoy`.
- You can run a specific test instead of the whole test suite by setting the `ENVOY_TEST_LABEL` environment variable.
- For example, to run just the unit tests in `test/common/network/listener_impl_test.cc`, you should run:
```shell
ENVOY_TEST_LABEL='//test/common/network:listener_impl_test' make check-envoy
```
- Alternatively, you can run `make envoy-shell` to get a bash shell in the Docker container that does the Envoy builds, where you are free to interact with `Bazel` directly (see the sketch just below).
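For example, a sketch of driving Bazel yourself from inside the shell; this assumes `bazel` is available on the container's PATH and reuses the same test label as above:

```shell
make envoy-shell

# Then, inside the container, from the directory where _cxx/envoy is mounted:
bazel test //test/common/network:listener_impl_test
```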
Interpreting the test results:
- If you see the following message, don't worry, it's harmless; the tests still ran:
```text
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
```
The message means that the test passed, but it passed too
quickly, and Bazel is suggesting that you declare it as smaller.
Something along the lines of "This test only took 2s, but you
declared it as being in the 60s-300s ('moderate') bucket,
consider declaring it as being in the 0s-60s ('short')
bucket".
Don't be confused (as I was) into thinking
that the test was too big and was skipped and that you need to
throw more hardware at it.
- **Build or test Emissary-ingress** with the usual `make` commands, with
the exception that you MUST run `make update-base` first whenever
@ -802,86 +683,68 @@ Modify the sources in `./_cxx/envoy/`.
`make update-base && make test`, and `make images` to just build
Emissary-ingress would become `make update-base && make images`.
#### 5. Finalizing your changes
Testing your Envoy changes with Emissary-ingress:
Once you're happy with your changes to Envoy:
- Either run `make update-base` to build and push a new base container, and then run `make test` for the Emissary-ingress test suite.
- If you do not want to push the container, you can instead (a minimal sketch follows this list):
- Build Envoy - `make build-envoy`
- Build container - `make build-base-envoy-image`
- Test Emissary - `make test`
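A minimal sketch of that no-push flow, putting those targets in order:

```shell
# Build Envoy from the sources in _cxx/envoy/.
make build-envoy

# Wrap the build outputs into a local base-envoy image without pushing it.
make build-base-envoy-image

# Run the Emissary-ingress test suite against that local image.
make test
```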
1. Ensure they're committed to `_cxx/envoy/` and push/PR them into
<https://github.com/datawire/envoy> branch `rebase/master`.
#### 6. Protobuf changes
If you're outside of Ambassador Labs, you'll need to
a. Create a fork of <https://github.com/datawire/envoy> on the
GitHub web interface
b. Add it as a remote to your `./_cxx/envoy/`:
`git remote add my-fork git@github.com:YOUR_USERNAME/envoy.git`
c. Push the branch to that fork:
`git push my-fork YOUR_BRANCHNAME`
If you made any changes to the Protocol Buffer files, or if you bumped the Envoy version, make sure to re-compile the Protobufs so that the generated code is available and checked in to the emissary.git repository.
2. Update `ENVOY_COMMIT` in `_cxx/envoy.mk`
```sh
make compile-envoy-protos
```
3. Unset `ENVOY_COMMIT=-` and run a final `make update-base` to
push a cached build:
This will copy over the raw proto files, then compile them and copy the generated Go code into the emissary-ingress repository.
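A quick way to confirm that the regenerated code actually made it into your commit is plain git (nothing Emissary-specific here; the commit message is illustrative):

```shell
make compile-envoy-protos

# The regenerated files should show up as changes; review and commit them.
git status
git add -A
git commit -m "Regenerate Envoy protos"
```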
```shell
export YES_I_AM_OK_WITH_COMPILING_ENVOY=true
unset ENVOY_COMMIT
make update-base
```
#### 7. Finalizing your changes
The image will be pushed to `$ENVOY_DOCKER_REPO`, by default
`ENVOY_DOCKER_REPO=docker.io/datawire/ambassador-base`; if you're
outside of Ambassador Labs, you can skip this step if you don't want to
share your Envoy binary anywhere. If you don't skip this step,
you'll need to `export
ENVOY_DOCKER_REPO=${your-envoy-docker-registry}` to tell it to push
somewhere other than Datawire's registry.
> NOTE: we are no longer accepting PRs in `datawire/envoy.git`.
If you're at Ambassador Labs, you'll then want to make sure that the image
is also pushed to the backup container registries:
If you have custom changes, land them in your custom Envoy repository and update the `ENVOY_COMMIT` and `ENVOY_DOCKER_REPO` variables in `_cxx/envoy.mk` so that the image will be pushed to the correct repository.
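For example (every value below is a placeholder, not a real commit or registry), the relevant bits of `_cxx/envoy.mk` plus the rebuild step would look roughly like:

```shell
# In _cxx/envoy.mk (placeholders -- use your own values):
#   ENVOY_COMMIT      = <sha of the commit you landed in your Envoy fork>
#   ENVOY_DOCKER_REPO = docker.io/yourorg/base-envoy
#
# Then rebuild and push the base image from that commit:
make update-base
```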
```shell
# upload image to the mirror in GCR
SHA=GET_THIS_FROM_THE_make_update-base_OUTPUT
TAG="envoy-0.$SHA.opt"
FULL_TAG="envoy-full-0.$SHA.opt"
docker pull "docker.io/emissaryingress/base-envoy:envoy-0.$TAG.opt"
docker tag "docker.io/emissaryingress/base-envoy:$TAG" "gcr.io/datawire/ambassador-base:$TAG"
docker push "gcr.io/datawire/ambassador-base:$TAG"
Then run `make update-base`; it takes care of the rest, and assuming it succeeds you are all set.
## repeat for the "FULL" version which has debug symbols enabled for envoy. It is large (GB's) big.
TAG=envoy-full-0.386367b8c99f843fbc2a42a38fe625fce480de19.opt
docker pull "docker.io/emissaryingress/base-envoy:$FULL_TAG"
docker tag "docker.io/emissaryingress/base-envoy:$FULL_TAG" "gcr.io/datawire/ambassador-base:$FULL_TAG"
docker push "gcr.io/datawire/ambassador-base:$FULL_TAG"
```
**For maintainers:**
If you're outside of Ambassador Labs, you can skip this step if you
don't want to share your Envoy binary anywhere. If you don't
skip this step, you'll need to `export
ENVOY_DOCKER_REPO=${your-envoy-docker-registry}` to tell it to
push somewhere other than Datawire's registry.
You will want to make sure that the image is pushed to the backup container registries:
4. Push and PR the `envoy.mk` `ENVOY_COMMIT` change to
<https://github.com/emissary-ingress/emissary>.
```shell
# upload image to the mirror in GCR
SHA=GET_THIS_FROM_THE_make_update-base_OUTPUT
TAG="envoy-0.$SHA.opt"
docker pull "docker.io/emissaryingress/base-envoy:envoy-0.$TAG.opt"
docker tag "docker.io/emissaryingress/base-envoy:$TAG" "gcr.io/datawire/ambassador-base:$TAG"
docker push "gcr.io/datawire/ambassador-base:$TAG"
```
#### 6. Checklist for landing the changes
#### 8. Final Checklist
I'd put this in the pull request template, but so few PRs change Envoy...
**For Maintainers Only**
Here is a checklist of things to do when bumping the `base-envoy` version:
- [ ] The image has been pushed to...
- [ ] `docker.io/emissaryingress/base-envoy`
- [ ] `gcr.io/datawire/ambassador-base`
- [ ] The envoy.git commit has been tagged as `datawire-$(gitdescribe --tags --match='v*')`
- [ ] The `datawire/envoy.git` commit has been tagged as `datawire-$(git describe --tags --match='v*')`
(the `--match` is to prevent `datawire-*` tags from stacking on each other).
- [ ] It's been tested with...
- [ ] `make check-envoy`
The `check-envoy-version` CI job should check all of those things,
except for `make check-envoy`.
The `check-envoy-version` CI job will double-check all of these things, with the exception of running the Envoy tests. If `check-envoy-version` is failing, double-check the items above, fix whatever is off, and re-run the job.
### Developing Emissary-ingress (Ambassador Labs -only advice)
### Developing Emissary-ingress (Maintainers-only advice)
At the moment, these techniques will only work internally to Ambassador Labs. Mostly
At the moment, these techniques will only work for maintainers. Mostly
this is because they require credentials to access internal resources at the
moment, though in several cases we're working to fix that.
@ -948,7 +811,7 @@ curl localhost:8877/ambassador/v0/diag/?loglevel=debug
```
Note: This affects diagd and Envoy, but NOT the AES `amb-sidecar`.
See the AES `DEVELOPING.md` for how to do that.
See the AES `CONTRIBUTING.md` for how to do that.
### Can I build from a docker container instead of on my local computer?
@ -1064,10 +927,3 @@ Ambassador code should produce *no* warnings and *no* errors.
If you're concerned that the mypy cache is somehow wrong, delete the
`.mypy_cache/` directory to clear the cache.
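For example, from the repository root:

```shell
rm -rf .mypy_cache/
```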
### How do I get the source code for a release?
The current shipping release of Ambassador lives on the `master`
branch. It is tagged with its version (e.g. `v0.78.0`).
Changes on `master` after the last tag have not been released yet, but
will be included in the next release of Ambassador.

View File

@ -1,4 +1,4 @@
name: 'Collect Logs'
name: "Collect Logs"
description: >-
Store any log files as artifacts.
inputs:
@ -49,7 +49,7 @@ runs:
cp /tmp/*.yaml /tmp/test-logs || true
cp /tmp/kat-client-*.log /tmp/test-logs || true
- name: "Upload Logs"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: logs-${{ inputs.jobname }}
path: /tmp/test-logs

View File

@ -71,7 +71,7 @@ updates:
ignore:
- dependency-name: pytest
- dependency-name: urllib3
versions:
versions:
- "<2.0"
- package-ecosystem: docker
directory: "/docker/base-python"
@ -102,3 +102,9 @@ updates:
schedule:
interval: daily
open-pull-requests-limit: 10
- package-ecosystem: github-actions
directory: "/"
schedule:
interval: weekly
open-pull-requests-limit: 10

View File

@ -46,6 +46,6 @@ A few sentences describing what testing you've done, e.g., manual tests, automat
- We should lean on the bulk of code being covered by unit tests, but...
- ... an end-to-end test should cover the integration points
- [ ] **I updated `DEVELOPING.md` with any any special dev tricks I had to use to work on this code efficiently.**
- [ ] **I updated `CONTRIBUTING.md` with any special dev tricks I had to use to work on this code efficiently.**
- [ ] **The changes in this PR have been reviewed for security concerns and adherence to security best practices.**

View File

@ -22,7 +22,7 @@ name: Check branch version
jobs:
check-branch-version:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
with:

View File

@ -10,7 +10,7 @@ name: job-promote-to-passed
jobs:
lint: ########################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -27,10 +27,12 @@ jobs:
run: |
make lint
- uses: ./.github/actions/after-job
with:
jobname: lint
if: always()
generate: ####################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -75,10 +77,46 @@ jobs:
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate' (again!)"
- uses: ./.github/actions/after-job
with:
jobname: generate
if: always()
check-envoy-protos: ####################################################################
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: "Git Login"
run: |
if [[ -n '${{ secrets.GHA_SSH_KEY }}' ]]; then
install -m700 -d ~/.ssh
install -m600 /dev/stdin ~/.ssh/id_rsa <<<'${{ secrets.GHA_SSH_KEY }}'
fi
- name: "Docker Login"
uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.RELEASE_REGISTRY, 'docker.io/')) && secrets.RELEASE_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_RELEASE_USERNAME }}
password: ${{ secrets.GH_DOCKER_RELEASE_TOKEN }}
- name: "'make compile-envoy-protos'"
shell: bash
run: |
make compile-envoy-protos
- name: "Check Git not dirty from 'make compile-envoy-protos'"
uses: ./.github/actions/git-dirty-check
- uses: ./.github/actions/after-job
with:
jobname: check-envoy-protos
if: always()
check-envoy-version: #########################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -103,11 +141,44 @@ jobs:
password: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
- run: make check-envoy-version
- uses: ./.github/actions/after-job
with:
jobname: check-envoy-version
if: always()
# Tests ######################################################################
apiext-e2e:
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
APIEXT_E2E: ""
APIEXT_BUILD_ARCH: linux/amd64
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver-opts: |
network=host
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: Install k3d
shell: bash
run: |
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | TAG=v5.6.0 bash
k3d --version
- name: go mod vendor
shell: bash
run: |
make vendor
- name: run apiext-e2e tests
shell: bash
run: |
go test -p 1 -parallel 1 -v -tags=apiext ./test/apiext/... -timeout 15m
check-gotest:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -128,9 +199,11 @@ jobs:
run: |
make gotest
- uses: ./.github/actions/after-job
with:
jobname: check-gotest
if: always()
check-pytest:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -186,7 +259,7 @@ jobs:
with:
jobname: check-pytest-${{ matrix.test }}
check-pytest-unit:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -212,9 +285,11 @@ jobs:
export PYTEST_ARGS=' --cov-branch --cov=ambassador --cov-report html:/tmp/cov_html '
make pytest-unit-tests
- uses: ./.github/actions/after-job
with:
jobname: check-pytest-unit
if: always()
check-chart:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
DEV_REGISTRY: ${{ secrets.DEV_REGISTRY }}
# See docker/base-python.docker.gen
@ -224,28 +299,33 @@ jobs:
DOCKER_BUILD_USERNAME: ${{ secrets.GH_DOCKER_BUILD_USERNAME }}
DOCKER_BUILD_PASSWORD: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
steps:
- uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.DEV_REGISTRY, 'docker.io/')) && secrets.DEV_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_BUILD_USERNAME }}
password: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: make test-chart
- name: Warn about skip
run: |
make ci/setup-k3d
export DEV_KUBECONFIG=~/.kube/config
echo "SKIPPING CHART TEST; check the charts manually"
# - uses: docker/login-action@v2
# with:
# registry: ${{ (!startsWith(secrets.DEV_REGISTRY, 'docker.io/')) && secrets.DEV_REGISTRY || null }}
# username: ${{ secrets.GH_DOCKER_BUILD_USERNAME }}
# password: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
# - uses: actions/checkout@v3
# with:
# fetch-depth: 0
# ref: ${{ github.event.pull_request.head.sha }}
# - name: Install Deps
# uses: ./.github/actions/setup-deps
# - name: make test-chart
# run: |
# make ci/setup-k3d
# export DEV_KUBECONFIG=~/.kube/config
make test-chart
- uses: ./.github/actions/after-job
if: always()
# make test-chart
# - uses: ./.github/actions/after-job
# with:
# jobname: check-chart
# if: always()
build: #######################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
DEV_REGISTRY: ${{ secrets.DEV_REGISTRY }}
# See docker/base-python.docker.gen
@ -279,12 +359,14 @@ jobs:
run: |
make push-dev
- uses: ./.github/actions/after-job
with:
jobname: build
if: always()
######################################################################
######################### CVE Scanning ###############################
trivy-container-scan:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
needs: [build]
steps:
# upload of results to github uses git so checkout of code is needed
@ -314,16 +396,18 @@ jobs:
pass:
name: "job-promote-to-passed" # This is the job name that the branch protection looks for
needs:
- apiext-e2e
- lint
- build
- generate
- check-envoy-protos
- check-envoy-version
- check-gotest
- check-pytest
- check-pytest-unit
- check-chart
- trivy-container-scan
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- name: No-Op
if: ${{ false }}

View File

@ -3,42 +3,44 @@ on:
schedule:
# run at noon on sundays to prepare for monday
# used https://crontab.guru/ to generate
- cron: '0 12 * * SUN'
- cron: "0 12 * * SUN"
jobs:
generate: ####################################################################
runs-on: ubuntu-latest
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: "Git Login"
run: |
if [[ -n '${{ secrets.GHA_SSH_KEY }}' ]]; then
install -m700 -d ~/.ssh
install -m600 /dev/stdin ~/.ssh/id_rsa <<<'${{ secrets.GHA_SSH_KEY }}'
fi
- name: "Docker Login"
uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.RELEASE_REGISTRY, 'docker.io/')) && secrets.RELEASE_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_RELEASE_USERNAME }}
password: ${{ secrets.GH_DOCKER_RELEASE_TOKEN }}
- name: "'make generate'"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate'"
- name: "'make generate' (again!)"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate' (again!)"
- uses: ./.github/actions/after-job
if: always()
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: "Git Login"
run: |
if [[ -n '${{ secrets.GHA_SSH_KEY }}' ]]; then
install -m700 -d ~/.ssh
install -m600 /dev/stdin ~/.ssh/id_rsa <<<'${{ secrets.GHA_SSH_KEY }}'
fi
- name: "Docker Login"
uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.RELEASE_REGISTRY, 'docker.io/')) && secrets.RELEASE_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_RELEASE_USERNAME }}
password: ${{ secrets.GH_DOCKER_RELEASE_TOKEN }}
- name: "'make generate'"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate'"
- name: "'make generate' (again!)"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate' (again!)"
- uses: ./.github/actions/after-job
with:
jobname: generate-base-python
if: always()

View File

@ -8,7 +8,7 @@ name: k8s-e2e
jobs:
acceptance_tests:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@ -21,8 +21,9 @@ jobs:
matrix:
k8s:
[
{ k3s: 1.25.9+k3s1, kubectl: 1.25.9 },
{ k3s: 1.26.4+k3s1, kubectl: 1.26.4 },
{ k3s: 1.26.9+k3s1, kubectl: 1.26.9 },
{ k3s: 1.27.6+k3s1, kubectl: 1.27.6 },
{ k3s: 1.28.2+k3s1, kubectl: 1.28.2 },
]
test:
- integration-tests
@ -70,4 +71,4 @@ jobs:
- uses: ./.github/actions/after-job
if: always()
with:
jobname: check-pytest-${{ matrix.test }}
jobname: check-pytest-${{matrix.k8s.kubectl}}-${{ matrix.test }}

View File

@ -2,10 +2,10 @@ name: promote-to-ga
"on":
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
- "v[0-9]+.[0-9]+.[0-9]+"
jobs:
promote-to-ga:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
name: promote-to-ga
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
@ -30,6 +30,8 @@ jobs:
run: |
make release/promote-oss/to-ga
- uses: ./.github/actions/after-job
with:
jobname: promote-to-ga-1
if: always()
- id: check-slack-webhook
name: Assign slack webhook variable
@ -41,18 +43,20 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
with:
status: ${{ job.status }}
success_text: 'Emissary GA for ${env.GITHUB_REF} successfully built'
failure_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed'
cancelled_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled'
success_text: "Emissary GA for ${env.GITHUB_REF} successfully built"
failure_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed"
cancelled_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: promote-to-ga-2
if: always()
create-gh-release:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
needs: [promote-to-ga]
name: "Create GitHub release"
env:
@ -80,13 +84,15 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
with:
status: ${{ job.status }}
success_text: 'Emissary GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}'
failure_text: 'Emissary GitHub release failed'
cancelled_text: 'Emissary GitHub release was was cancelled'
success_text: "Emissary GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}"
failure_text: "Emissary GitHub release failed"
cancelled_text: "Emissary GitHub release was was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: create-gh-release
if: always()

View File

@ -2,11 +2,11 @@ name: promote-to-rc
"on":
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-dev'
- "v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+"
- "v[0-9]+.[0-9]+.[0-9]+-dev"
jobs:
promote-to-rc:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
name: promote-to-rc
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
@ -49,12 +49,14 @@ jobs:
export AMBASSADOR_MANIFEST_URL=https://app.getambassador.io/yaml/emissary/${{ steps.step-main.outputs.version }}
export HELM_CHART_VERSION=${{ steps.step-main.outputs.chart_version }}
\`\`\`
failure_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed'
cancelled_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled'
failure_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed"
cancelled_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: promote-to-rc
if: always()

View File

@ -2,10 +2,10 @@ name: chart-publish
"on":
push:
tags:
- 'chart/v*'
- "chart/v*"
jobs:
chart-publish:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
name: chart-publish
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
@ -34,18 +34,20 @@ jobs:
with:
status: ${{ job.status }}
success_text: "Chart successfully published for ${env.GITHUB_REF}"
failure_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed'
cancelled_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled'
failure_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed"
cancelled_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: chart-publish
if: always()
chart-create-gh-release:
if: ${{ ! contains(github.ref, '-') }}
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
needs: [chart-publish]
name: "Create GitHub release"
steps:
@ -71,13 +73,15 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
with:
status: ${{ job.status }}
success_text: 'Chart GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}'
failure_text: 'Chart GitHub release failed'
cancelled_text: 'Chart GitHub release was was cancelled'
success_text: "Chart GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}"
failure_text: "Chart GitHub release failed"
cancelled_text: "Chart GitHub release was was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: chart-create-gh-release
if: always()

View File

@ -15,10 +15,6 @@ linters-settings:
rules:
main:
deny:
- pkg: "log"
desc: "Use `github.com/datawire/dlib/dlog` instead of `log`"
- pkg: "github.com/sirupsen/logrus"
desc: "Use `github.com/datawire/dlib/dlog` instead of `github.com/sirupsen/logrus`"
- pkg: "github.com/datawire/dlib/dutil"
desc: "Use either `github.com/datawire/dlib/derror` or `github.com/datawire/dlib/dhttp` instead of `github.com/datawire/dlib/dutil`"
- pkg: "github.com/gogo/protobuf"

View File

@ -85,7 +85,75 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
## RELEASE NOTES
## [3.8.0] TBD
## [3.10.0] July 29, 2025
[3.10.0]: https://github.com/emissary-ingress/emissary/compare/v3.9.0...v3.10.0
### Emissary-ingress and Ambassador Edge Stack
- Feature: This upgrades Emissary-ingress to be built on Envoy v1.28.0 which provides security,
performance and feature enhancements. You can read more about them here: <a
href="https://www.envoyproxy.io/docs/envoy/v1.28.0/version_history/version_history">Envoy Proxy
1.28.0 Release Notes</a>
- Change: Emissary-ingress will no longer publish YAML manifests with the Ambassador Agent
installed by default. This is an optional component that provides additional features on top of
Emissary-ingress, and we recommend installing it using the instructions found in the <a
href="https://github.com/datawire/ambassador-agent">Ambassador Agent Repo</a>.
- Change: Upgraded Emissary-ingress to the latest release of Golang as part of our general
dependency upgrade process.
- Bugfix: Emissary-ingress was incorrectly caching Mappings with regex headers using the header name
instead of the Mapping name, which could reduce the cache's effectiveness. This has been fixed so
that the correct key is used. ([Incorrect Cache Key for Mapping])
- Feature: Emissary-ingress now supports resolving Endpoints from EndpointSlices in addition to the
existing support for Endpoints, supporting Services with more than 1000 endpoints.
- Feature: Emissary-ingress now passes the client TLS certificate and SNI, if any, to the external
auth service. These are available in the `source.certificate` and `tls_session.sni` fields, as
described in the <a
href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/auth/v3/attribute_context.proto">
Envoy extauth documentation</a>.
- Change: The `ambex` component of Emissary-ingress now uses `xxhash64` instead of `md5`, since
`md5` can cause problems in crypto-restricted environments (e.g. FIPS) ([Remove usage of md5])
[Incorrect Cache Key for Mapping]: https://github.com/emissary-ingress/emissary/issues/5714
[Remove usage of md5]: https://github.com/emissary-ingress/emissary/pull/5794
## [3.9.0] November 13, 2023
[3.9.0]: https://github.com/emissary-ingress/emissary/compare/v3.8.0...v3.9.0
### Emissary-ingress and Ambassador Edge Stack
- Feature: This upgrades Emissary-ingress to be built on Envoy v1.27.2 which provides security,
performance and feature enhancements. You can read more about them here: <a
href="https://www.envoyproxy.io/docs/envoy/v1.27.2/version_history/version_history">Envoy Proxy
1.27.2 Release Notes</a>
- Feature: By default, Emissary-ingress will return an `UNAVAILABLE` code when a request using gRPC
is rate limited. The `RateLimitService` resource now exposes a new
`grpc.use_resource_exhausted_code` field that when set to `true`, Emissary-ingress will return a
`RESOURCE_EXHAUSTED` gRPC code instead. Thanks to <a href="https://github.com/jeromefroe">Jerome
Froelich</a> for contributing this feature!
- Feature: Envoy runtime fields that were provided to mitigate the recent HTTP/2 rapid reset
vulnerability can now be configured via the Module resource so the configuration will persist
between restarts. This configuration is added to the Envoy bootstrap config, so restarting
Emissary is necessary after changing these fields for the configuration to take effect.
- Change: APIExt would previously allow for TLS 1.0 connections. We have updated it to now only use
a minimum TLS version of 1.3 to resolve security concerns.
- Change: Update default image to Emissary-ingress v3.9.0.
- Bugfix: The APIExt server provides CRD conversion between the stored version v2 and the version
watched for by Emissary-ingress v3alpha1. Since this component is required to operate
Emissary-ingress, we have introduced an init container that will ensure it is available before
starting. This will help address some of the intermittent issues seen during install and upgrades.
## [3.8.0] August 29, 2023
[3.8.0]: https://github.com/emissary-ingress/emissary/compare/v3.7.2...v3.8.0
### Emissary-ingress and Ambassador Edge Stack
@ -141,6 +209,12 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
Emissary-ingress with the latest security patches, performances enhancments, and features offered
by the envoy proxy.
- Feature: By default, Envoy will return an `UNAVAILABLE` gRPC code when a request is rate limited.
The `RateLimitService` resource now exposes the <a
href="https://www.envoyproxy.io/docs/envoy/v1.26.0/configuration/http/http_filters/rate_limit_filter">use_resource_exhausted_code</a>
option. Set `grpc.use_resource_exhausted_code: true` so Envoy will return a `RESOURCE_EXHAUSTED`
gRPC code instead.
## [3.6.0] April 17, 2023
[3.6.0]: https://github.com/emissary-ingress/emissary/compare/v3.5.0...v3.6.0
@ -340,7 +414,7 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
releases, or a `Host` with or without a `TLSContext` as in prior 2.y releases.
- Bugfix: Prior releases of Emissary-ingress had the arbitrary limitation that a `TCPMapping` cannot
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
Emissary-ingress now allows `TCPMappings` to be used on the same `Listener` port as HTTP `Hosts`,
as long as that `Listener` terminates TLS.
@ -506,7 +580,7 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
releases, or a `Host` with or without a `TLSContext` as in prior 2.y releases.
- Bugfix: Prior releases of Emissary-ingress had the arbitrary limitation that a `TCPMapping` cannot
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
Emissary-ingress now allows `TCPMappings` to be used on the same `Listener` port as HTTP `Hosts`,
as long as that `Listener` terminates TLS.

View File

@ -7,17 +7,15 @@ maintainer responsibilities.
Maintainers are listed in alphabetical order.
| Maintainer | GitHub ID | Affiliation |
| ---------------- | --------------------------------------------- | --------------------------------------------------- |
| Alex Gervais | [alexgervais](https://github.com/alexgervais) | [Ambassador Labs](https://www.github.com/datawire/) |
| Alice Wasko | [aliceproxy](https://github.com/aliceproxy) | [Ambassador Labs](https://www.github.com/datawire/) |
| David Dymko | [ddymko](https://github.com/ddymko) | [Ambassador Labs](https://www.github.com/datawire/) |
| Flynn | [kflynn](https://github.com/kflynn) | [Buoyant](https://www.buoyant.io) |
| Hamzah Qudsi | [haq204](https://github.com/haq204) | [Ambassador Labs](https://www.github.com/datawire/) |
| Lance Austin | [lanceea](https://github.com/lanceea) | [Ambassador Labs](https://www.github.com/datawire/) |
| Luke Shumaker | [lukeshu](https://github.com/lukeshu) | [Ambassador Labs](https://www.github.com/datawire/) |
| Rafael Schloming | [rhs](https://github.com/rhs) | [Ambassador Labs](https://www.github.com/datawire/) |
| Maintainer | GitHub ID | Affiliation |
| ---------------- | ------------------------------------------------------ | --------------------------------------------------- |
| Alice Wasko | [aliceproxy](https://github.com/aliceproxy) | [Ambassador Labs](https://www.github.com/datawire/) |
| David Dymko | [ddymko](https://github.com/ddymko) | [CoreWeave](https://www.coreweave.com) |
| Flynn | [kflynn](https://github.com/kflynn) | [Buoyant](https://www.buoyant.io) |
| Hamzah Qudsi | [haq204](https://github.com/haq204) | [Ambassador Labs](https://www.github.com/datawire/) |
| Mark Schlachter | [the-wondersmith](https://github.com/the-wondersmith) | [Shuttle](https://www.shuttle.rs) |
| Phil Peble | [ppeble](https://github.com/ppeble) | [ActiveCampaign](https://www.activecampaign.com/) |
| Rafael Schloming | [rhs](https://github.com/rhs) | [Ambassador Labs](https://www.github.com/datawire/) |
In addition to the maintainers, Emissary releases may be created by any
@ -32,6 +30,9 @@ of the following (also listed in alphabetical order):
* Ava Hahn ([aidanhahn](https://github.com/aidanhahn))
* Alix Cook ([acookin](https://github.com/acookin))
* John Esmet ([esmet](https://github.com/esmet))
* Luke T. Shumaker ([lukeshu](https://github.com/lukeshu))
* Alex Gervais ([alexgervais](https://github.com/alexgervais))
* Lance Austin ([LanceEa](https://github.com/LanceEa))
## Releasers Emeriti

View File

@ -1,18 +1,11 @@
# Community Meeting Schedule
## Weekly Troubleshooting
We hold troubleshooting sessions once a week, on Thursdays at 2:30 pm Eastern. These sessions are a way to connect directly with project maintainers and get help with any problems you might be encountering while using Emissary-ingress.
**Zoom Meeting Link**: https://us02web.zoom.us/j/83032365622
## Monthly Contributors Meeting
The Emissary-ingress Contributors Meeting is held on the first Wednesday of every month at 3:30pm Eastern. The focus of this meeting is discussion of technical issues related to development of Emissary-ingress.
New contributors are always welcome! Check out our [contributor's guide](../DevDocumentation/DEVELOPING.md) to learn how you can help make Emissary-ingress better.
New contributors are always welcome! Check out our [contributor's guide](../DevDocumentation/CONTRIBUTING.md) to learn how you can help make Emissary-ingress better.
**Zoom Meeting Link**: [https://ambassadorlabs.zoom.us/j/86139262248?pwd=bzZlcU96WjAxN2E1RFZFZXJXZ1FwQT09](https://ambassadorlabs.zoom.us/j/86139262248?pwd=bzZlcU96WjAxN2E1RFZFZXJXZ1FwQT09)
- Meeting ID: 861 3926 2248
- Passcode: 113675
**Zoom Meeting Link**: [https://ambassadorlabs.zoom.us/j/81589589470?pwd=U8qNvZSqjQx7abIzwRtGryFU35pi3T.1](https://ambassadorlabs.zoom.us/j/81589589470?pwd=U8qNvZSqjQx7abIzwRtGryFU35pi3T.1)
- Meeting ID: 815 8958 9470
- Passcode: 199217

View File

@ -1,16 +1,12 @@
## Support for deploying and using Ambassador
## Support for deploying and using Emissary
Welcome to Ambassador! We use GitHub for tracking bugs and feature requests. If you need support, the following resources are available. Thanks for understanding.
Welcome to Emissary! The Emissary community is the best current resource for
Emissary support, with the best options being:
### Documentation
- Checking out the [documentation] at https://emissary-ingress.dev/
- Joining the `#emissary-ingress` channel in the [CNCF Slack]
- [Opening an issue][GitHub] in [GitHub]
* [User Documentation](https://www.getambassador.io/docs)
* [Troubleshooting Guide](https://www.getambassador.io/reference/debugging)
### Real-time Chat
* [Slack](https://d6e.co/slack): The `#ambassador` channel is a good place to start.
### Commercial Support
* Commercial Support is available as part of [Ambassador Pro](https://www.getambassador.io/pro/).
[CNCF Slack]: https://communityinviter.com/apps/cloud-native/cncf
[documentation]: https://emissary-ingress.dev/
[GitHub]: https://github.com/emissary-ingress/emissary/issues

View File

@ -1,198 +1,219 @@
The Go module "github.com/emissary-ingress/emissary/v3" incorporates the
following Free and Open Source software:
Name Version License(s)
---- ------- ----------
the Go language standard library ("std") v1.20.7 3-clause BSD license
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 MIT license
github.com/MakeNowJust/heredoc v1.0.0 MIT license
github.com/Masterminds/goutils v1.1.1 Apache License 2.0
github.com/Masterminds/semver v1.5.0 MIT license
github.com/Masterminds/sprig v2.22.0+incompatible MIT license
github.com/Microsoft/go-winio v0.5.2 MIT license
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 3-clause BSD license
github.com/acomagu/bufpipe v1.0.4 MIT license
github.com/armon/go-metrics v0.3.10 MIT license
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d MIT license
github.com/beorn7/perks v1.0.1 MIT license
github.com/blang/semver/v4 v4.0.0 MIT license
github.com/census-instrumentation/opencensus-proto v0.4.1 Apache License 2.0
github.com/cespare/xxhash/v2 v2.2.0 MIT license
github.com/chai2010/gettext-go v1.0.2 3-clause BSD license
github.com/cloudflare/circl v1.3.3 3-clause BSD license
github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 Apache License 2.0
github.com/datawire/dlib v1.3.0 Apache License 2.0
github.com/datawire/dtest v0.0.0-20210928162311-722b199c4c2f Apache License 2.0
github.com/datawire/go-mkopensource v0.0.10-0.20230523185412-ce8a269623cd Apache License 2.0
github.com/davecgh/go-spew v1.1.1 ISC license
github.com/docker/distribution v2.8.2+incompatible Apache License 2.0
github.com/emicklei/go-restful/v3 v3.10.2 MIT license
github.com/emirpasic/gods v1.18.1 2-clause BSD license, ISC license
github.com/envoyproxy/protoc-gen-validate v1.0.2 Apache License 2.0
github.com/evanphx/json-patch v5.6.0+incompatible 3-clause BSD license
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f MIT license
github.com/fatih/camelcase v1.0.0 MIT license
github.com/fatih/color v1.15.0 MIT license
github.com/fsnotify/fsnotify v1.6.0 3-clause BSD license
github.com/go-errors/errors v1.4.2 MIT license
github.com/go-git/gcfg v1.5.0 3-clause BSD license
github.com/go-git/go-billy/v5 v5.4.1 Apache License 2.0
github.com/go-git/go-git/v5 v5.6.1 Apache License 2.0
github.com/go-logr/logr v1.2.4 Apache License 2.0
github.com/go-openapi/jsonpointer v0.19.6 Apache License 2.0
github.com/go-openapi/jsonreference v0.20.2 Apache License 2.0
github.com/go-openapi/swag v0.22.3 Apache License 2.0
github.com/gobuffalo/flect v1.0.2 MIT license
github.com/gogo/protobuf v1.3.2 3-clause BSD license
github.com/golang/protobuf v1.5.3 3-clause BSD license
github.com/google/btree v1.0.1 Apache License 2.0
github.com/google/gnostic v0.6.9 Apache License 2.0
github.com/google/go-cmp v0.5.9 3-clause BSD license
github.com/google/gofuzz v1.2.0 Apache License 2.0
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 Apache License 2.0
github.com/google/uuid v1.3.0 3-clause BSD license
github.com/gorilla/websocket v1.5.0 2-clause BSD license
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 MIT license
github.com/hashicorp/consul/api v1.12.0 Mozilla Public License 2.0
github.com/hashicorp/go-cleanhttp v0.5.2 Mozilla Public License 2.0
github.com/hashicorp/go-hclog v1.1.0 MIT license
github.com/hashicorp/go-immutable-radix v1.3.1 Mozilla Public License 2.0
github.com/hashicorp/go-rootcerts v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/golang-lru v0.5.4 Mozilla Public License 2.0
github.com/hashicorp/serf v0.9.7 Mozilla Public License 2.0
github.com/huandu/xstrings v1.3.2 MIT license
github.com/imdario/mergo v0.3.15 3-clause BSD license
github.com/inconshreveable/mousetrap v1.1.0 Apache License 2.0
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 MIT license
github.com/josharian/intern v1.0.1-0.20211109044230-42b52b674af5 MIT license
github.com/json-iterator/go v1.1.12 MIT license
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 MIT license
github.com/kevinburke/ssh_config v1.2.0 MIT license
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de 3-clause BSD license
github.com/mailru/easyjson v0.7.7 MIT license
github.com/mattn/go-colorable v0.1.13 MIT license
github.com/mattn/go-isatty v0.0.17 MIT license
github.com/matttproud/golang_protobuf_extensions v1.0.4 Apache License 2.0
github.com/mitchellh/copystructure v1.2.0 MIT license
github.com/mitchellh/go-homedir v1.1.0 MIT license
github.com/mitchellh/go-wordwrap v1.0.1 MIT license
github.com/mitchellh/mapstructure v1.4.3 MIT license
github.com/mitchellh/reflectwalk v1.0.2 MIT license
github.com/moby/spdystream v0.2.0 Apache License 2.0
github.com/moby/term v0.0.0-20221205130635-1aeaba878587 Apache License 2.0
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd Apache License 2.0
github.com/modern-go/reflect2 v1.0.2 Apache License 2.0
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 MIT license
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 3-clause BSD license
github.com/opencontainers/go-digest v1.0.0 Apache License 2.0
github.com/peterbourgon/diskv v2.0.1+incompatible MIT license
github.com/pjbgf/sha1cd v0.3.0 Apache License 2.0
github.com/pkg/errors v0.9.1 2-clause BSD license
github.com/pmezard/go-difflib v1.0.0 3-clause BSD license
github.com/prometheus/client_golang v1.15.1 Apache License 2.0
github.com/prometheus/client_model v0.4.0 Apache License 2.0
github.com/prometheus/common v0.42.0 Apache License 2.0
github.com/prometheus/procfs v0.9.0 Apache License 2.0
github.com/russross/blackfriday/v2 v2.1.0 2-clause BSD license
github.com/sergi/go-diff v1.1.0 MIT license
github.com/sirupsen/logrus v1.9.3 MIT license
github.com/skeema/knownhosts v1.1.0 Apache License 2.0
github.com/spf13/cobra v1.7.0 Apache License 2.0
github.com/spf13/pflag v1.0.5 3-clause BSD license
github.com/stretchr/testify v1.8.4 MIT license
github.com/xanzy/ssh-agent v0.3.3 Apache License 2.0
github.com/xlab/treeprint v1.1.0 MIT license
go.opentelemetry.io/proto/otlp v0.19.0 Apache License 2.0
go.starlark.net v0.0.0-20220203230714-bb14e151c28f 3-clause BSD license
golang.org/x/crypto v0.11.0 3-clause BSD license
golang.org/x/mod v0.12.0 3-clause BSD license
golang.org/x/net v0.13.0 3-clause BSD license
golang.org/x/oauth2 v0.7.0 3-clause BSD license
golang.org/x/sys v0.10.0 3-clause BSD license
golang.org/x/term v0.10.0 3-clause BSD license
golang.org/x/text v0.11.0 3-clause BSD license
golang.org/x/time v0.3.0 3-clause BSD license
golang.org/x/tools v0.11.1 3-clause BSD license
google.golang.org/appengine v1.6.7 Apache License 2.0
google.golang.org/genproto v0.0.0-20230706204954-ccb25ca9f130 Apache License 2.0
google.golang.org/genproto/googleapis/api v0.0.0-20230629202037-9506855d4529 Apache License 2.0
google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 Apache License 2.0
google.golang.org/grpc v1.56.2 Apache License 2.0
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.3.0 Apache License 2.0
google.golang.org/protobuf v1.31.0 3-clause BSD license
gopkg.in/inf.v0 v0.9.1 3-clause BSD license
gopkg.in/warnings.v0 v0.1.2 2-clause BSD license
gopkg.in/yaml.v2 v2.4.0 Apache License 2.0, MIT license
gopkg.in/yaml.v3 v3.0.1 Apache License 2.0, MIT license
k8s.io/api v0.27.2 Apache License 2.0
k8s.io/apiextensions-apiserver v0.27.2 Apache License 2.0
k8s.io/apimachinery v0.27.3 3-clause BSD license, Apache License 2.0
k8s.io/apiserver v0.27.2 Apache License 2.0
k8s.io/cli-runtime v0.27.2 Apache License 2.0
k8s.io/client-go v0.27.2 3-clause BSD license, Apache License 2.0
github.com/emissary-ingress/code-generator (modified from k8s.io/code-generator) v0.27.2-0.20230503153040-f70eb21dcda6 Apache License 2.0
k8s.io/component-base v0.27.2 Apache License 2.0
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d Apache License 2.0
k8s.io/klog/v2 v2.100.1 Apache License 2.0
k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f 3-clause BSD license, Apache License 2.0
k8s.io/kubectl v0.27.2 Apache License 2.0
k8s.io/kubernetes v1.27.2 Apache License 2.0
k8s.io/metrics v0.27.2 Apache License 2.0
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 3-clause BSD license, Apache License 2.0
sigs.k8s.io/controller-runtime v0.15.0 Apache License 2.0
sigs.k8s.io/controller-tools v0.12.0 Apache License 2.0
sigs.k8s.io/gateway-api v0.2.0 Apache License 2.0
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd 3-clause BSD license, Apache License 2.0
sigs.k8s.io/kustomize/api v0.13.2 Apache License 2.0
sigs.k8s.io/kustomize/kyaml v0.14.1 Apache License 2.0, MIT license
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 Apache License 2.0
sigs.k8s.io/yaml v1.3.0 3-clause BSD license, MIT license
Name Version License(s)
---- ------- ----------
the Go language standard library ("std") v1.23.3 3-clause BSD license
cel.dev/expr v0.19.2 Apache License 2.0
dario.cat/mergo v1.0.1 3-clause BSD license
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c MIT license
github.com/MakeNowJust/heredoc v1.0.0 MIT license
github.com/Masterminds/goutils v1.1.1 Apache License 2.0
github.com/Masterminds/semver v1.5.0 MIT license
github.com/Masterminds/sprig v2.22.0+incompatible MIT license
github.com/Microsoft/go-winio v0.6.2 MIT license
github.com/ProtonMail/go-crypto v1.1.5 3-clause BSD license
github.com/antlr4-go/antlr/v4 v4.13.1 3-clause BSD license
github.com/armon/go-metrics v0.4.1 MIT license
github.com/beorn7/perks v1.0.1 MIT license
github.com/blang/semver/v4 v4.0.0 MIT license
github.com/cenkalti/backoff/v4 v4.3.0 MIT license
github.com/census-instrumentation/opencensus-proto v0.4.1 Apache License 2.0
github.com/cespare/xxhash/v2 v2.3.0 MIT license
github.com/chai2010/gettext-go v1.0.3 3-clause BSD license
github.com/cloudflare/circl v1.6.0 3-clause BSD license
github.com/cncf/xds/go v0.0.0-20250121191232-2f005788dc42 Apache License 2.0
github.com/cyphar/filepath-securejoin v0.4.1 3-clause BSD license
github.com/datawire/dlib v1.3.1 Apache License 2.0
github.com/datawire/dtest v0.0.0-20210928162311-722b199c4c2f Apache License 2.0
github.com/LukeShu/go-mkopensource (modified from github.com/datawire/go-mkopensource) v0.0.0-20250206080114-4ff6b660d8d4 Apache License 2.0
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc ISC license
github.com/distribution/reference v0.6.0 Apache License 2.0
github.com/emicklei/go-restful/v3 v3.12.1 MIT license
github.com/emirpasic/gods v1.18.1 2-clause BSD license, ISC license
github.com/envoyproxy/protoc-gen-validate v1.2.1 Apache License 2.0
github.com/evanphx/json-patch/v5 v5.9.11 3-clause BSD license
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f MIT license
github.com/fatih/camelcase v1.0.0 MIT license
github.com/fatih/color v1.18.0 MIT license
github.com/fsnotify/fsnotify v1.8.0 3-clause BSD license
github.com/fxamacker/cbor/v2 v2.7.0 MIT license
github.com/go-errors/errors v1.5.1 MIT license
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 3-clause BSD license
github.com/go-git/go-billy/v5 v5.6.2 Apache License 2.0
github.com/go-git/go-git/v5 v5.13.2 Apache License 2.0
github.com/go-logr/logr v1.4.2 Apache License 2.0
github.com/go-logr/zapr v1.3.0 Apache License 2.0
github.com/go-openapi/jsonpointer v0.21.0 Apache License 2.0
github.com/go-openapi/jsonreference v0.21.0 Apache License 2.0
github.com/go-openapi/swag v0.23.0 Apache License 2.0
github.com/gobuffalo/flect v1.0.3 MIT license
github.com/gogo/protobuf v1.3.2 3-clause BSD license
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 Apache License 2.0
github.com/golang/protobuf v1.5.4 3-clause BSD license
github.com/google/btree v1.1.3 Apache License 2.0
github.com/google/cel-go v0.23.2 3-clause BSD license, Apache License 2.0
github.com/google/gnostic-models v0.6.9 Apache License 2.0
github.com/google/go-cmp v0.6.0 3-clause BSD license
github.com/google/gofuzz v1.2.0 Apache License 2.0
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 Apache License 2.0
github.com/google/uuid v1.6.0 3-clause BSD license
github.com/gorilla/websocket v1.5.3 2-clause BSD license
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 MIT license
github.com/hashicorp/consul/api v1.31.0 Mozilla Public License 2.0
github.com/hashicorp/errwrap v1.1.0 Mozilla Public License 2.0
github.com/hashicorp/go-cleanhttp v0.5.2 Mozilla Public License 2.0
github.com/hashicorp/go-hclog v1.6.3 MIT license
github.com/hashicorp/go-immutable-radix v1.3.1 Mozilla Public License 2.0
github.com/hashicorp/go-metrics v0.5.4 MIT license
github.com/hashicorp/go-multierror v1.1.1 Mozilla Public License 2.0
github.com/hashicorp/go-rootcerts v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/golang-lru v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/hcl v1.0.0 Mozilla Public License 2.0
github.com/hashicorp/serf v0.10.2 Mozilla Public License 2.0
github.com/huandu/xstrings v1.5.0 MIT license
github.com/imdario/mergo v0.3.16 3-clause BSD license
github.com/inconshreveable/mousetrap v1.1.0 Apache License 2.0
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 MIT license
github.com/josharian/intern v1.0.1-0.20211109044230-42b52b674af5 MIT license
github.com/json-iterator/go v1.1.12 MIT license
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 MIT license
github.com/kevinburke/ssh_config v1.2.0 MIT license
github.com/klauspost/compress v1.17.11 3-clause BSD license, Apache License 2.0, MIT license
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de 3-clause BSD license
github.com/magiconair/properties v1.8.9 2-clause BSD license
github.com/mailru/easyjson v0.9.0 MIT license
github.com/mattn/go-colorable v0.1.14 MIT license
github.com/mattn/go-isatty v0.0.20 MIT license
github.com/mitchellh/copystructure v1.2.0 MIT license
github.com/mitchellh/go-homedir v1.1.0 MIT license
github.com/mitchellh/go-wordwrap v1.0.1 MIT license
github.com/mitchellh/mapstructure v1.5.0 MIT license
github.com/mitchellh/reflectwalk v1.0.2 MIT license
github.com/moby/spdystream v0.5.0 Apache License 2.0
github.com/moby/term v0.5.2 Apache License 2.0
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd Apache License 2.0
github.com/modern-go/reflect2 v1.0.2 Apache License 2.0
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 MIT license
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 3-clause BSD license
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f 3-clause BSD license
github.com/opencontainers/go-digest v1.0.0 Apache License 2.0
github.com/pelletier/go-toml/v2 v2.2.3 MIT license
github.com/peterbourgon/diskv v2.0.1+incompatible MIT license
github.com/pjbgf/sha1cd v0.3.2 Apache License 2.0
github.com/pkg/errors v0.9.1 2-clause BSD license
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 3-clause BSD license
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 3-clause BSD license
github.com/prometheus/client_golang v1.20.5 3-clause BSD license, Apache License 2.0
github.com/prometheus/client_model v0.6.1 Apache License 2.0
github.com/prometheus/common v0.62.0 Apache License 2.0
github.com/prometheus/procfs v0.15.1 Apache License 2.0
github.com/russross/blackfriday/v2 v2.1.0 2-clause BSD license
github.com/sagikazarmark/locafero v0.7.0 MIT license
github.com/sagikazarmark/slog-shim v0.1.0 3-clause BSD license
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 MIT license
github.com/sirupsen/logrus v1.9.3 MIT license
github.com/skeema/knownhosts v1.3.1 Apache License 2.0
github.com/sourcegraph/conc v0.3.0 MIT license
github.com/spf13/afero v1.12.0 Apache License 2.0
github.com/spf13/cast v1.7.1 MIT license
github.com/spf13/cobra v1.8.1 Apache License 2.0
github.com/spf13/pflag v1.0.6 3-clause BSD license
github.com/spf13/viper v1.19.0 MIT license
github.com/stoewer/go-strcase v1.3.0 MIT license
github.com/stretchr/testify v1.10.0 MIT license
github.com/subosito/gotenv v1.6.0 MIT license
github.com/vladimirvivien/gexe v0.4.1 MIT license
github.com/x448/float16 v0.8.4 MIT license
github.com/xanzy/ssh-agent v0.3.3 Apache License 2.0
github.com/xlab/treeprint v1.2.0 MIT license
go.opentelemetry.io/otel v1.34.0 Apache License 2.0
go.opentelemetry.io/otel/trace v1.34.0 Apache License 2.0
go.opentelemetry.io/proto/otlp v1.5.0 Apache License 2.0
go.uber.org/goleak v1.3.0 MIT license
go.uber.org/multierr v1.11.0 MIT license
go.uber.org/zap v1.27.0 MIT license
golang.org/x/crypto v0.32.0 3-clause BSD license
golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c 3-clause BSD license
golang.org/x/mod v0.23.0 3-clause BSD license
golang.org/x/net v0.34.0 3-clause BSD license
golang.org/x/oauth2 v0.26.0 3-clause BSD license
golang.org/x/sync v0.11.0 3-clause BSD license
golang.org/x/sys v0.30.0 3-clause BSD license
golang.org/x/term v0.29.0 3-clause BSD license
golang.org/x/text v0.22.0 3-clause BSD license
golang.org/x/time v0.10.0 3-clause BSD license
golang.org/x/tools v0.29.0 3-clause BSD license
gomodules.xyz/jsonpatch/v2 v2.4.0 Apache License 2.0
google.golang.org/genproto/googleapis/api v0.0.0-20250204164813-702378808489 Apache License 2.0
google.golang.org/genproto/googleapis/rpc v0.0.0-20250204164813-702378808489 Apache License 2.0
google.golang.org/grpc v1.70.0 Apache License 2.0
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 Apache License 2.0
google.golang.org/protobuf v1.36.4 3-clause BSD license
gopkg.in/evanphx/json-patch.v4 v4.12.0 3-clause BSD license
gopkg.in/inf.v0 v0.9.1 3-clause BSD license
gopkg.in/ini.v1 v1.67.0 Apache License 2.0
gopkg.in/warnings.v0 v0.1.2 2-clause BSD license
gopkg.in/yaml.v2 v2.4.0 Apache License 2.0, MIT license
gopkg.in/yaml.v3 v3.0.1 Apache License 2.0, MIT license
k8s.io/api v0.32.1 Apache License 2.0
k8s.io/apiextensions-apiserver v0.32.1 Apache License 2.0
k8s.io/apimachinery v0.32.1 3-clause BSD license, Apache License 2.0
k8s.io/apiserver v0.32.1 Apache License 2.0
k8s.io/cli-runtime v0.32.1 Apache License 2.0
k8s.io/client-go v0.32.1 3-clause BSD license, Apache License 2.0
github.com/emissary-ingress/code-generator (modified from k8s.io/code-generator) v0.32.2-0.20250205235421-4d5bf4656f71 Apache License 2.0
k8s.io/component-base v0.32.1 Apache License 2.0
k8s.io/component-helpers v0.32.1 Apache License 2.0
k8s.io/controller-manager v0.32.1 Apache License 2.0
k8s.io/gengo/v2 v2.0.0-20250130153323-76c5745d3511 Apache License 2.0
k8s.io/klog/v2 v2.130.1 Apache License 2.0
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 3-clause BSD license, Apache License 2.0, MIT license
k8s.io/kubectl v0.32.1 Apache License 2.0
k8s.io/kubernetes v1.32.1 Apache License 2.0
k8s.io/metrics v0.32.1 Apache License 2.0
k8s.io/utils v0.0.0-20241210054802-24370beab758 3-clause BSD license, Apache License 2.0
sigs.k8s.io/controller-runtime v0.20.1 Apache License 2.0
sigs.k8s.io/controller-tools v0.17.1 Apache License 2.0
sigs.k8s.io/e2e-framework v0.6.0 Apache License 2.0
sigs.k8s.io/gateway-api v0.2.0 Apache License 2.0
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 3-clause BSD license, Apache License 2.0
sigs.k8s.io/kustomize/api v0.19.0 Apache License 2.0
sigs.k8s.io/kustomize/kyaml v0.19.0 Apache License 2.0
sigs.k8s.io/structured-merge-diff/v4 v4.5.0 Apache License 2.0
sigs.k8s.io/yaml v1.4.0 3-clause BSD license, Apache License 2.0, MIT license
The Emissary-ingress Python code makes use of the following Free and Open Source
libraries:
Name Version License(s)
---- ------- ----------
Cython 0.29.36 Apache License 2.0
Flask 2.3.2 3-clause BSD license
Jinja2 3.1.2 3-clause BSD license
MarkupSafe 2.1.3 3-clause BSD license
PyYAML 6.0.1 MIT license
Werkzeug 2.3.6 3-clause BSD license
blinker 1.6.2 MIT license
build 0.9.0 MIT license
cachetools 5.3.1 MIT license
certifi 2023.7.22 Mozilla Public License 2.0
charset-normalizer 3.1.0 MIT license
click 8.1.3 3-clause BSD license
dpath 2.1.6 MIT license
durationpy 0.5 MIT license
expiringdict 1.2.2 Apache License 2.0
google-auth 2.22.0 Apache License 2.0
gunicorn 20.1.0 MIT license
idna 3.4 3-clause BSD license
itsdangerous 2.1.2 3-clause BSD license
jsonpatch 1.33 3-clause BSD license
jsonpointer 2.4 3-clause BSD license
kubernetes 27.2.0 Apache License 2.0
oauthlib 3.2.2 3-clause BSD license
orjson 3.9.1 Apache License 2.0, MIT license
packaging 21.3 2-clause BSD license, Apache License 2.0
pep517 0.13.0 MIT license
pip-tools 6.12.1 3-clause BSD license
prometheus-client 0.17.1 Apache License 2.0
pyasn1 0.5.0 2-clause BSD license
pyasn1-modules 0.3.0 2-clause BSD license
pyparsing 3.0.9 MIT license
python-dateutil 2.8.2 3-clause BSD license, Apache License 2.0
python-json-logger 2.0.7 2-clause BSD license
requests 2.31.0 Apache License 2.0
requests-oauthlib 1.3.1 ISC license
retrying 1.3.3 Apache License 2.0
rsa 4.9 Apache License 2.0
semantic-version 2.10.0 2-clause BSD license
six 1.16.0 MIT license
tomli 2.0.1 MIT license
typing_extensions 4.7.1 Python Software Foundation license
urllib3 1.26.16 MIT license
websocket-client 1.6.1 Apache License 2.0
Name Version License(s)
---- ------- ----------
Cython 0.29.37 Apache License 2.0
Flask 3.1.0 3-clause BSD license
Jinja2 3.1.6 3-clause BSD license
MarkupSafe 3.0.2 2-clause BSD license
PyYAML 6.0.1 MIT license
Werkzeug 3.1.3 3-clause BSD license
blinker 1.9.0 MIT license
build 1.2.2.post1 MIT license
certifi 2025.1.31 Mozilla Public License 2.0
charset-normalizer 3.4.1 MIT license
click 8.1.8 3-clause BSD license
durationpy 0.9 MIT license
expiringdict 1.2.2 Apache License 2.0
gunicorn 23.0.0 MIT license
idna 3.10 3-clause BSD license
itsdangerous 2.2.0 3-clause BSD license
jsonpatch 1.33 3-clause BSD license
jsonpointer 3.0.0 3-clause BSD license
orjson 3.10.15 Apache License 2.0, MIT license
packaging 23.1 2-clause BSD license, Apache License 2.0
pip-tools 7.3.0 3-clause BSD license
prometheus_client 0.21.1 Apache License 2.0
pyparsing 3.0.9 MIT license
pyproject_hooks 1.2.0 MIT license
python-json-logger 3.2.1 2-clause BSD license
requests 2.32.3 Apache License 2.0
semantic-version 2.10.0 2-clause BSD license
typing_extensions 4.12.2 Python Software Foundation license
urllib3 2.3.0 MIT license


@@ -31,7 +31,7 @@ Check [this blog post](https://blog.getambassador.io/building-ambassador-an-open
At the core of Emissary-ingress is Envoy Proxy, which has very extensive configuration and extension points. Getting this right can be challenging, so Emissary-ingress provides Kubernetes Administrators and Developers a cloud-native way to configure Envoy using declarative YAML files. Here are the core components of Emissary-ingress:
- CRDs - extend K8s to enable Emissary-ingress's abstractions (*generated yaml*)
- Apiext - A server that implements the Webhook Conversion interface for CRDs (**own container**)
- Diagd - provides diagnostic ui, translates snapshots/ir into envoy configuration (*in-process*)
- Ambex - gRPC server implementation of envoy xDS for dynamic envoy configuration (*in-process*)
- Envoy Proxy - Proxy that handles routing all user traffic (*in-process*)
@@ -51,7 +51,7 @@ The build system (`make`) uses [controller-gen](https://book.kubebuilder.io/refe
### Apiext
Kubernetes provides the ability to have multiple versions of Custom Resources similar to the core K8s resources, but it is only capable of having a single `storage` version that is persisted in `etcd`. Custom Resource Definitions can define a `ConversionWebHook` that Kubernetes will call whenever it receives a version that is not the storage version.
You can check the current storage version by looking at `pkg/getambassador.io/crds.yaml` and searching for the `storage: true` field to see which version is the storage version of the custom resource (*at the time of writing this it is `v2`*).
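For convenience, a rough way to spot those fields from the command line is shown below (the path comes from the paragraph above; the output still needs to be read by eye, since the version names and storage flags are printed on separate lines):

```bash
# Print each CRD version name alongside its storage flag; read the pairs by eye.
grep -nE 'name: v[0-9]|storage: (true|false)' pkg/getambassador.io/crds.yaml
```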
@@ -119,7 +119,6 @@ Here is a list of everything managed by the `entrypoint` binary. Each one is ind
| Description | Goroutine | OS.Exec |
| ------------------------------------------------------------------------- | :----------------: | :----------------: |
| `demomode` (*if enabled*) | :white_check_mark: | |
| `diagd` - admin ui & config processor | | :white_check_mark: |
| `ambex` - the Envoy ADS Server | :white_check_mark: | |
| `envoy` - proxy routing data | | :white_check_mark: |
@@ -164,34 +163,34 @@ Provides two main functions:
2. Processing Cluster changes into Envoy ready configuration
1. This process has all the steps I'm outlining below
- receives "CONFIG" event and pushes on queue
- event queue loop listens for commands and pops them off
- on CONFIG event it calls back to emissary Snapshot Server to grab current snapshot stored in-memory
- It is serialized and stored in `/ambassador/snapshots/snapshot-tmp.yaml`.
- A SecretHandler and Config is initialized
- A ResourceFetcher (aka, parse the snapshot into an in-memory representation)
- Generate IR and envoy configs (load_ir function)
- Take each Resource generated in ResourceFetcher and add it to the Config object as strongly typed objects
- Store Config Object in `/ambassador/snapshots/aconf-tmp.json`
- Check Deltas for Mappings cache and determine if it needs to be reset
- Create IR with a Config, Cache, and invalidated items
- IR is generated which basically just converts our stuff to strongly typed generic "envoy" items (handling filters, clusters, listeners, removing duplicates, etc...)
- IR is updated in-memory for diagd process
- IR is persisted to temp storage in `/ambassador/snapshots/ir-tmp.json`
- generate envoy config from IR and cache
- Split envoy config into bootstrap config, ads_config and clustermap config
- Validate econfig
- Rotate Snapshots for each of the files `aconf`, `econf`, `ir`, `snapshot` that get persisted in the snapshot path `/ambassador/snapshots`.
- Rotating them allows for seeing the history of snapshots up to a limit and then they are dropped
- this also renames the `-tmp` files written above into their final names
- Persist bootstrap, envoy ads config and clustermap config to base directory:
- `/ambassador/bootstrap-ads.json` # this is used by envoy during startup to initially configure itself and let it know about the static ADS Service
- `/ambassador/envoy/envoy.json` # this is used in `ambex` to generate the ADS snapshots along with the fastPath items
- `/ambassador/clustermap.json` # this might not be used either...
- Notify `envoy` and `ambex` that a new snapshot has been persisted using signal SIGHUP
- the Goroutine within `entrypoint` that starts up `envoy` is blocking waiting for this signal to start envoy
- the `ambex` process continuously listens for this signal and it triggers a configuration update for ambex.
- Update the appropriate status fields with metadata by making calls to the `kubestatus` binary found in `cmd/kubestatus`, which handles the communication with the cluster
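If you want to confirm what the snapshot rotation described above is actually writing, a quick and purely illustrative way to peek at the snapshot directory in a running pod is:

```bash
# Namespace and workload name are illustrative; adjust them to match your install.
kubectl exec -n ambassador deploy/emissary-ingress -- ls -l /ambassador/snapshots
```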
## Ambex
@@ -206,6 +205,7 @@ This is the gRPC server implementation of the envoy xDS v2 and v3 api's based on
We maintain our own [fork](https://github.com/datawire/envoy) of Envoy that includes some additional commits for implementing some features in Emissary-Ingress.
Envoy does all the heavy-lifting
- does all routing, filtering, TLS termination, metrics collection, tracing, etc...
- It is bootstrapped from the output of diagd
- It is dynamically updated using the xDS services and specifically the ADS service


@@ -1,4 +1,4 @@
Building Ambassador
===================
The content in this document has been moved to [DEVELOPING.md].
The content in this document has been moved to [CONTRIBUTING.md].


@@ -0,0 +1,929 @@
# Developing Emissary-ingress
Welcome to the Emissary-ingress Community!
Thank you for contributing; we appreciate small and large contributions and look forward to working with you to make Emissary-ingress better.
This document is intended for developers looking to contribute to the Emissary-ingress project. In this document you will learn how to get your development environment set up and how to contribute to the project. You will also find more information about the internal components of Emissary-ingress and answers to other questions about working on the project.
> Looking for end user guides for Emissary-ingress? You can check out the end user guides at <https://www.getambassador.io/docs/emissary/>.
After reading this document if you have questions we encourage you to join us on our [Slack channel](https://communityinviter.com/apps/cloud-native/cncf) in the #emissary-ingress channel.
- [Code of Conduct](../Community/CODE_OF_CONDUCT.md)
- [Governance](../Community/GOVERNANCE.md)
- [Maintainers](../Community/MAINTAINERS.md)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Development Setup](#development-setup)
- [Step 1: Install Build Dependencies](#step-1-install-build-dependencies)
- [Step 2: Clone Project](#step-2-clone-project)
- [Step 3: Configuration](#step-3-configuration)
- [Step 4: Building](#step-4-building)
- [Step 5: Push](#step-5-push)
- [Step 6: Deploy](#step-6-deploy)
- [Step 7: Dev-loop](#step-7-dev-loop)
- [What should I do next?](#what-should-i-do-next)
- [Contributing](#contributing)
- [Submitting a Pull Request (PR)](#submitting-a-pull-request-pr)
- [Pull Request Review Process](#pull-request-review-process)
- [Rebasing a branch under review](#rebasing-a-branch-under-review)
- [Fixup commits during PR review](#fixup-commits-during-pr-review)
- [Development Workflow](#development-workflow)
- [Branching Strategy](#branching-strategy)
- [Backport Strategy](#backport-strategy)
- [What if I need a patch to land in a previous supported version?](#what-if-i-need-a-patch-to-land-in-a-previous-supported-version)
- [What if my patch is only for a previous supported version?](#what-if-my-patch-is-only-for-a-previous-supported-version)
- [What if I'm still not sure?](#what-if-im-still-not-sure)
- [Merge Strategy](#merge-strategy)
- [What about merge commit strategy?](#what-about-merge-commit-strategy)
- [Contributing to the Docs](#contributing-to-the-docs)
- [Advanced Topics](#advanced-topics)
- [Running Emissary-ingress internals locally](#running-emissary-ingress-internals-locally)
- [Setting up diagd](#setting-up-diagd)
- [Changing the ambassador root](#changing-the-ambassador-root)
- [Getting envoy](#getting-envoy)
- [Shutting up the pod labels error](#shutting-up-the-pod-labels-error)
- [Extra credit](#extra-credit)
- [Debugging and Developing Envoy Configuration](#debugging-and-developing-envoy-configuration)
- [Making changes to Envoy](#making-changes-to-envoy)
- [1. Preparing your machine](#1-preparing-your-machine)
- [2. Setting up your workspace to hack on Envoy](#2-setting-up-your-workspace-to-hack-on-envoy)
- [3. Hacking on Envoy](#3-hacking-on-envoy)
- [4. Building and testing your hacked-up Envoy](#4-building-and-testing-your-hacked-up-envoy)
- [5. Test Devloop](#5-test-devloop)
- [6. Protobuf changes](#6-protobuf-changes)
- [7. Finalizing your changes](#7-finalizing-your-changes)
- [8. Final Checklist](#8-final-checklist)
- [Developing Emissary-ingress (Maintainers-only advice)](#developing-emissary-ingress-maintainers-only-advice)
- [Updating license documentation](#updating-license-documentation)
- [Upgrading Python dependencies](#upgrading-python-dependencies)
- [FAQ](#faq)
- [How do I find out what build targets are available?](#how-do-i-find-out-what-build-targets-are-available)
- [How do I develop on a Mac with Apple Silicon?](#how-do-i-develop-on-a-mac-with-apple-silicon)
- [How do I develop on Windows using WSL?](#how-do-i-develop-on-windows-using-wsl)
- [How do I test using a private Docker repository?](#how-do-i-test-using-a-private-docker-repository)
- [How do I change the loglevel at runtime?](#how-do-i-change-the-loglevel-at-runtime)
- [Can I build from a docker container instead of on my local computer?](#can-i-build-from-a-docker-container-instead-of-on-my-local-computer)
- [How do I clear everything out to make sure my build runs like it will in CI?](#how-do-i-clear-everything-out-to-make-sure-my-build-runs-like-it-will-in-ci)
- [My editor is changing `go.mod` or `go.sum`, should I commit that?](#my-editor-is-changing-gomod-or-gosum-should-i-commit-that)
- [How do I debug "This should not happen in CI" errors?](#how-do-i-debug-this-should-not-happen-in-ci-errors)
- [How do I run Emissary-ingress tests?](#how-do-i-run-emissary-ingress-tests)
- [How do I type check my python code?](#how-do-i-type-check-my-python-code)
## Development Setup
This section provides the steps for getting started developing on Emissary-ingress. There are a number of prerequisites that need to be set up. In general, our tooling tries to detect any missing requirements and provide a friendly error message. If you ever find that this is not the case, please file an issue.
> **Note:** To enable developers contributing on Macs with Apple Silicon, we ensure that the artifacts are built for `linux/amd64`
> rather than the host `linux/arm64` architecture. This can be overridden using the `BUILD_ARCH` environment variable. Pull requests are welcome :).
### Step 1: Install Build Dependencies
Here is a list of tools that the build system uses to generate the build artifacts, package them into containers, generate CRDs and Helm charts, and run tests (a quick sanity-check sketch follows the list):
- git
- make
- docker (make sure you can run docker commands as your dev user without sudo)
- bash
- rsync
- golang - `go.mod` for current version
- python (>=3.10.9)
- kubectl
- a kubernetes cluster (you need permissions to create resources, i.e. crds, deployments, services, etc...)
- a Docker registry
- bsdtar (Provided by libarchive-tools on Ubuntu 19.10 and newer)
- gawk
- jq
- helm
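A tiny sanity check along the same lines, with tool names only and no version checks (purely illustrative), is:

```bash
# Report any of the required tools that are not on PATH (no version checks).
for tool in git make docker bash rsync go python3 kubectl bsdtar gawk jq helm; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```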
### Step 2: Clone Project
If you haven't already, this would be a good time to clone the project by running the following commands:
```bash
# clone to your preferred folder
git clone https://github.com/emissary-ingress/emissary.git
# navigate to project
cd emissary
```
### Step 3: Configuration
You can configure the build system using environment variables; two required variables set the container registry and the kubeconfig to use.
> **Important**: the test and build system perform destructive operations against your cluster. Therefore, we recommend that you
> use a development cluster. Setting the DEV_KUBECONFIG variable described below ensures you don't accidentally perform actions on a production cluster.
Open a terminal in the location where you cloned the repository and run the following commands:
```bash
# set container registry using `export DEV_REGISTRY=<your-registry>`
# note: you need to be logged in and have permissions to push
# Example:
export DEV_REGISTRY=docker.io/parsec86
# set kube config file using `export DEV_KUBECONFIG=<dev-kubeconfig>`
# your cluster needs the ability to read from the configured container registry
export DEV_KUBECONFIG="$HOME/.kube/dev-config.yaml"
```
### Step 4: Building
The build system for this project leverages `make` and multi-stage `docker` builds to produce the following containers:
- `emissary.local/emissary` - single deployable container for Emissary-ingress
- `emissary.local/kat-client` - test client container used for testing
- `emissary.local/kat-server` - test server container used for testing
Using the terminal session you opened in step 3, run the following commands:
```bash
# This will pull and build the necessary docker containers and produce multiple containers.
# If this is the first time running this command it will take a little while as the base images are built and cached.
make images
# verify containers were successfully created, you should also see some of the intermediate builder containers as well
docker images | grep emissary.local
```
*What just happened?*
The build system generated a build container that pulled in envoy, the build dependencies, built various binaries from within this project and packaged them into a single deployable container. More information on this can be found in the [Architecture Document](ARCHITECTURE.md).
### Step 5: Push
Now that you have successfully built the containers, it's time to push them to the container registry you set up in step 3.
In the same terminal session you can run the following commands:
```bash
# re-tags the images and pushes them to your configured container registry
# docker must be able to login to your registry and you have to have push permissions
make push
# you can view the newly tagged images by running
docker images | grep <your-registry>
# alternatively, we have two make targets that provide information as well
make env
# or in a bash export friendly format
make export
```
### Step 6: Deploy
Now it's time to deploy the container to the Kubernetes cluster you configured in step 3. Hopefully, it is already becoming apparent that we love to leverage Make to handle the complexity for you :).
```bash
# generate helm charts and K8s configs with your container swapped in and apply them to your cluster
make deploy
# check your cluster to see if emissary is running
# note: kubectl doesn't know about DEV_KUBECONFIG so you may need to ensure KUBECONFIG is pointing to the correct cluster
kubectl get pod -n ambassador
```
🥳 If all has gone well then you should have your development environment set up for building and testing Emissary-ingress.
### Step 7: Dev-loop
Now that you are all set up and able to deploy a development container of Emissary-ingress to a cluster, it is time to start making some changes.
Look up an issue that you want to work on, assign it to yourself, and if you have any questions feel free to ping us on Slack in the #emissary-dev channel.
Make a change to Emissary-ingress and when you want to test it in a live cluster just re-run
`make deploy`
This will:
- recompile the go binary
- rebuild containers
- push them to the docker registry
- rebuild helm charts and manifests
- reapply manifests to the cluster and re-deploy Emissary-ingress to the cluster
> *Do I have to run the other make targets `make images` or `make push` ?*
> No, you don't have to, because `make deploy` will actually run those commands for you. The steps above were meant to introduce you to the various make targets so that you are aware of them and have options when developing.
### What should I do next?
Now that you have your dev system up and running, here is some additional content that we recommend you check out:
- [Emissary-ingress Architecture](ARCHITECTURE.md)
- [Contributing Code](#contributing)
- [Contributing to Docs](#contributing-to-the-docs)
- [Advanced Topics](#advanced-topics)
- [FAQ](#faq)
## Contributing
This section goes over how to contribute code to the project and how to get started contributing. More information on how we manage our branches can be found below in [Development Workflow](#development-workflow).
Before contributing be sure to read our [Code of Conduct](../Community/CODE_OF_CONDUCT.md) and [Governance](../Community/GOVERNANCE.md) to get an understanding of how our project is structured.
### Submitting a Pull Request (PR)
> If you haven't set up your development environment then please see the [Development Setup](#development-setup) section.
When submitting a Pull Request (PR) here are a set of guidelines to follow:
1. Search for an [existing issue](https://github.com/emissary-ingress/emissary/issues) or create a [new issue](https://github.com/emissary-ingress/emissary/issues/new/choose).
2. Be sure to describe your proposed change and any open questions you might have in the issue. This allows us to collect historical context around an issue, provide feedback on the proposed solution and discuss what versions a fix should target.
3. If you haven't done so already, create a fork of the repository and clone it locally
```shell
git clone <your-fork>
```
4. Cut a new patch branch from `master`:
```shell
git checkout master
git checkout -b my-patch-branch master
```
5. Make necessary code changes.
- Make sure you include test coverage for the change, see [How do I run Tests](#how-do-i-run-emissary-ingress-tests)
- Ensure code linting is passing by running `make lint`
- Code changes must have associated documentation updates.
- Make changes in <https://github.com/datawire/ambassador-docs> as necessary, and include a reference to those changes in the pull request for your code changes.
- See [Contributing to Docs](#contributing-to-the-docs) for more details.
> Smaller pull requests are easier to review and can get merged faster, reducing the potential for merge conflicts, so it is recommended to keep them small and focused.
6. Commit your changes using descriptive commit messages.
- we **require** that all commits are signed off so please be sure to commit using the `--signoff` flag, e.g. `git commit --signoff`
- the commit message should summarize the fix and the motivation for it. Include the issue # that the fix addresses.
- we are "ok" with multiple commits but we may ask you to squash some commits during the PR review process
7. Push your branch to your forked repository:
> It is good practice to make sure your change is rebased on the latest master to ensure it will merge cleanly, so if it has been a while since you rebased on upstream you should do it now to avoid merge conflicts.
```shell
git push origin my-patch-branch
```
8. Submit a Pull Request from your fork targeting upstream `emissary/master`.
Thanks for your contribution! One of the [Maintainers](../Community/MAINTAINERS.md) will review your PR and discuss any changes that need to be made.
### Pull Request Review Process
This is an opportunity for the Maintainers to review the code for accuracy and ensure that it solves the problem outlined in the issue. This is an iterative process and meant to ensure the quality of the code base. During this process we may ask you to break up the Pull Request into smaller changes, squash commits, rebase on master, etc...
Once you have been provided feedback:
1. Make the required updates to the code per the review discussion
2. Retest the code and ensure linting is still passing
3. Commit the changes and push to GitHub
- see [Fixup Commits](#fixup-commits-during-pr-review) below
4. Repeat these steps as necessary
Once you have **two approvals** then one of the Maintainers will merge the PR.
:tada: Thank you for contributing and being a part of the Emissary-ingress Community!
### Rebasing a branch under review
Many times the base branch will have new commits added to it which may cause merge conflicts with your open pull request. First, a good rule of thumb is to keep pull requests small so that these conflicts are less likely to occur, but this is not always possible when multiple people are working on similar features. Second, if you are just addressing commit feedback, a `fixup` commit is also a good option so that the reviewers can see what changed since their last review.
If you need to address merge conflicts then it is preferred that you use **Rebase** on the base branch rather than merging the base branch into the feature branch. This ensures that when the PR is merged it will cleanly replay on top of the base branch, maintaining a clean linear history.
To do a rebase you can do the following:
```shell
# add emissary.git as a remote repository, only needs to be done once
git remote add upstream https://github.com/emissary-ingress/emissary.git
# fetch upstream master
git fetch upstream master
# checkout local master and update it from upstream master
git checkout master
git pull --ff-only upstream master
# rebase patch branch on local master
git checkout my-patch-branch
git rebase -i master
```
Once the merge conflicts are addressed and you are ready to push the code up, you will need to force push your changes because during the rebase process the commit SHAs are rewritten and the branch has diverged from what is in your remote fork (GitHub).
To force push a branch you can:
```shell
git push --force-with-lease origin my-patch-branch
```
> Note: `--force-with-lease` is recommended over `--force` because it is safer: it checks whether the remote branch had new commits added during your rebase. You can read more detail here: <https://itnext.io/git-force-vs-force-with-lease-9d0e753e8c41>
### Fixup commits during PR review
One of the major downsides to rebasing a branch is that it requires force pushing over the remote (GitHub), which then marks all the existing review history as outdated. This makes it hard for a reviewer to figure out whether or not the new changes addressed the feedback.
One way you can help the reviewer out is by using **fixup** commits. Fixup commits are special git commits that prefix the subject of a commit with `fixup!`. `Git` provides tools for easily creating these and also squashing them after the PR review process is done.
Since this is a new commit on top of the other commits, you will not lose your previous review and the new commit can be reviewed independently to determine if the new changes addressed the feedback correctly. Then, once the reviewers are happy, we will ask you to squash them so that when the PR is merged we maintain a clean linear history.
Here is a quick read on it: <https://jordanelver.co.uk/blog/2020/06/04/fixing-commits-with-git-commit-fixup-and-git-rebase-autosquash/>
TL;DR;
```shell
# make code change and create new commit
git commit --fixup <sha>
# push to Github for review
git push
# reviewers are happy and ask you to do a final rebase before merging
git rebase -i --autosquash master
# final push before merging
git push --force-with-lease
```
## Development Workflow
This section introduces the development workflow used for this repository. It is recommended that Contributors, Release Engineers, and Maintainers familiarize themselves with this content.
### Branching Strategy
This repository follows a trunk based development workflow. Depending on what article you read there are slight nuances to this so this section will outline how this repository interprets that workflow.
The most important branch is `master`; this is our **Next Release** version and it should always be in a shippable state. This means that CI should be green and at any point we can decide to ship a new release from it. In a traditional trunk-based development workflow, developers are encouraged to land partially finished work daily and to keep that work hidden behind feature flags. This repository does **NOT** follow that; instead, if code lands on master it is something we are comfortable shipping.
We ship release candidate (RC) builds from the `master` branch (current major) and also from `release/v{major.minor}` branches (last major version) during our development cycles. Therefore, it is important that it remains shippable at all times!
When we do a final release then we will cut a new `release/v{major.minor}` branch. These are long lived release branches which capture a snapshot in time for that release. For example here are some of the current release branches (as of writing this):
- release/v3.2
- release/v3.1
- release/v3.0
- release/v2.4
- release/v2.3
- release/v1.14
These branches contain the codebase as it was at the time when the release was done. These branches have branch protection enabled to ensure that they are not removed or accidentally overwritten. If we need to do a security fix or bug patch then we may cut a new `.Z` patch release from an existing release branch. For example, the `release/v2.4` branch is currently on `2.4.1`.
As you can see, we currently support multiple major versions of Emissary-ingress, and you can read more about our [End-of-Life Policy](https://www.getambassador.io/docs/emissary/latest/about/aes-emissary-eol/).
For more information on our current RC and Release process you can find that in our [Release Wiki](https://github.com/emissary-ingress/emissary/wiki).
### Backport Strategy
Since we follow a trunk-based development workflow, the majority of the time your patch branch will be based off of `master` and most Pull Requests will target `master`.
This ensures that we do not miss bug fixes or features for the "Next" shippable release and simplifies the mental model for deciding how to get started contributing code.
#### What if I need a patch to land in a previous supported version?
Let's say I have a bug fix for CRD round trip conversion for AuthService, which is affecting both `v2.y` and `v3.y`.
First within the issue we should discuss what versions we want to target. This can depend on current cycle work and any upcoming releases we may have.
The general rules we follow are:
1. land patch in "next" version which is `master`
2. backport patch to any `release/v{major}.{minor}` branches
So, let's say we discuss it and decide that the "next" major version is a long way away, so we want to do a z patch release on our current minor version (`v3.2`) and we also want to do a z patch release on our last supported major version (`v2.4`).
This means that these patches need to land in three separate branches:
1. `master` - next release
2. `release/v3.2` - patch release
3. `release/v2.4` - patch release
In this scenario, we first ask you to land the patch in the `master` branch and then provide separate PR's with the commits backported onto the `release/v*` branches.
> Recommendation: using `git cherry-pick -x` will add the source commit SHA to the commit message. This helps with tracing work back to the original commit.
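For example, a backport PR against `release/v3.2` could be prepared roughly like this (the branch name and commit SHA are illustrative):

```shell
# Start a backport branch from the upstream release branch.
git fetch upstream
git checkout -b my-fix-backport-v3.2 upstream/release/v3.2

# Replay the commit that already landed on master; -x records the source SHA.
git cherry-pick -x <sha-from-master>

# Push to your fork and open a PR targeting release/v3.2.
git push origin my-fix-backport-v3.2
```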
#### What if my patch is only for a previous supported version?
Although this should be an edge case, it does happen that the code has diverged enough that a fix is only relevant to an existing supported version. In these cases we may need to do a patch release for that older supported version.
A good example: if we were to find a bug in the Envoy v2 protocol configuration, we would only want to target the v2 release.
In this scenario, the base branch that we would create our feature branch off from would be the latest `minor` version for that release. As of writing this, that would be the `release/v2.4` branch. We would **not** need to target master.
But, let's say during our fix we notice other things that need to be addressed that would also need to be fixed in `master`. Then you need to submit a **separate Pull Request** that should first land on master and then follow the normal backporting process for the other patches.
#### What if I'm still not sure?
This is what issue discussions and discussion in Slack are for, so feel free to ping us in the `#emissary-dev` channel on Slack and we can help guide you.
### Merge Strategy
> The audience for this section is the Maintainers but also beneficial for Contributors so that they are familiar with how the project operates.
Having a clean linear commit history for a repository makes it easier to understand what is being changed and reduces the mental load for newcomers to the project.
To maintain a clean linear commit history the following rules should be followed:
First, always rebase the patch branch onto the base branch. This means **NO** merge commits from merging the base branch into the patch branch. This can be accomplished using git rebase.
```shell
# first, make sure you pull latest upstream changes
git fetch upstream
git checkout master
git pull --ff-only upstream master
# checkout patch branch and rebase interactive
# you may have merge conflicts you need to resolve
git checkout my-patch-branch
git rebase -i master
```
> Note: this does rewrite your commit SHAs so be aware when sharing branches with co-workers.
Once the Pull Request is reviewed and has **two approvals**, a Maintainer can merge. Maintainers should prefer the following merge strategies:
1. rebase and merge
2. squash merge
When `rebase and merge` is used your commits are played on top of the base branch so that it creates a clean linear history. This will maintain all the commits from the Pull Request. In most cases this should be the **preferred** merge strategy.
When a Pull Request has lots of fixup commits or PR feedback fixes, you should ask the Contributor to squash them as part of the PR process.
If the contributor is unable to squash them, then using a `squash merge` in some cases makes sense. **IMPORTANT**: when this does happen, it is important that the commit messages are cleaned up and not just blindly accepted the way proposed by GitHub. Since it is easy to miss that cleanup step, this should be used less frequently compared to `rebase and merge`.
#### What about merge commit strategy?
> The audience for this section is the Maintainers but also beneficial for Contributors so that they are familiar with how the project operates.
When maintaining a linear commit history, each commit tells the story of what was changed in the repository. When using `merge commits` it
adds an additional commit to the history that is not necessary because the commit history and PR history already tell the story.
Now, `merge commits` can be useful when you are concerned with not rewriting the commit SHA. Based on the current release process, which includes `rel/v` branches that are tagged and merged into `release/v` branches, we must use a `merge commit` when merging these branches. This ensures that the commit SHA a Git tag is pointing at still exists once merged into the `release/v` branch.
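As a hedged illustration of that last case (the branch and tag names here are made up), the merge is done with an explicit merge commit rather than a rebase:

```shell
# Illustrative only: --no-ff forces a real merge commit, keeping the tagged commit reachable.
git checkout release/v3.2
git merge --no-ff rel/v3.2.1
```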
## Contributing to the Docs
The Emissary-ingress community will all benefit from having documentation that is useful and correct. If you have found an issue with the end user documentation, then please help us out by submitting an issue and/or pull request with a fix!
The end user documentation for Emissary-ingress lives in a different repository and can be found at <https://github.com/datawire/ambassador-docs>.
See that repository for details on how to contribute to either a `pre-release` or already-released version of Emissary-ingress.
## Advanced Topics
This section is for more advanced topics that provide more detailed instructions. Make sure you go through the Development Setup and read the Architecture document before exploring these topics.
### Running Emissary-ingress internals locally
The main entrypoint is written in go. It strives to be as compatible as possible
with the normal go toolchain. You can run it with:
```bash
go run ./cmd/busyambassador entrypoint
```
Of course just because you can run it this way does not mean it will succeed.
The entrypoint needs to launch `diagd` and `envoy` in order to function, and it
also expects to be able to write to the `/ambassador` directory.
#### Setting up diagd
If you want to hack on diagd, it's easiest to set up a virtualenv with an editable
copy and launch your `go run` from within that virtualenv. Note that these
instructions depend on the virtualenvwrapper
(<https://virtualenvwrapper.readthedocs.io/en/latest/>) package:
```bash
# Create a virtualenv named venv with all the python requirements
# installed.
python3 -m venv venv
. venv/bin/activate
# If you're doing this in Datawire's apro.git, then:
cd ambassador
# Update pip and install dependencies
pip install --upgrade pip
pip install orjson # see below
pip install -r builder/requirements.txt
# Create an editable installation of ambassador:
pip install -e python/
# Check that we do indeed have diagd in our path.
which diagd
# If you're doing this in Datawire's apro.git, then:
cd ..
```
(Note: it shouldn't be necessary to install `orjson` by hand. The fact that it is
at the moment is an artifact of the way Ambassador builds currently happen.)
#### Changing the ambassador root
You should now be able to launch ambassador if you set the
`ambassador_root` environment variable to a writable location:
```bash
ambassador_root=/tmp go run ./cmd/busyambassador entrypoint
```
#### Getting envoy
If you do not have envoy in your path already, the entrypoint will use
docker to run it.
#### Shutting up the pod labels error
An astute observer of the logs will notice that ambassador complains
vociferously that pod labels are not mounted in the ambassador
container. To reduce this noise, you can:
```bash
mkdir /tmp/ambassador-pod-info && touch /tmp/ambassador-pod-info/labels
```
#### Extra credit
When you run ambassador locally it will configure itself exactly as it
would in the cluster. That means with two caveats you can actually
interact with it and it will function normally:
1. You need to run `telepresence connect` or equivalent so it can
connect to the backend services in its configuration.
2. You need to supply the host header when you talk to it.
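For example, assuming a `Mapping` whose hostname is `example.local` and a local entrypoint listening on port 8080 (both illustrative), a request might look like:

```bash
# Supply the Host header explicitly so routing matches your Mapping's host.
curl -v -H "Host: example.local" http://localhost:8080/backend/
```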
### Debugging and Developing Envoy Configuration
Envoy configuration is generated by the ambassador compiler. Debugging
the ambassador compiler by running it in kubernetes is very slow since
we need to push both the code and any relevant kubernetes resources
into the cluster. The following sections will provide tips for improving
this development experience.
### Making changes to Envoy
Emissary-ingress is built on top of Envoy and leverages a vendored version of Envoy (*we track upstream very closely*). This section will go into how to make changes to the Envoy that is packaged with Emissary-ingress.
This is a bit more complex than anyone likes, but here goes:
#### 1. Preparing your machine
Building and testing Envoy can be very resource intensive. A laptop
often can build Envoy... if you plug in an external hard drive, point
a fan at it, and leave it running overnight and most of the next day.
At Ambassador Labs, we'll often spin up a temporary build machine in GCE, so
that we can build it very quickly.
As of Envoy 1.15.0, we've measured the resource use to build and test
it as:
> | Command | Disk Size | Disk Used | Duration[1] |
> |--------------------|-----------|-----------|-------------|
> | `make update-base` | 450G | 12GB | ~11m |
> | `make check-envoy` | 450G | 424GB | ~45m |
>
> [1] On a "Machine type: custom (32 vCPUs, 512 GB memory)" VM on GCE,
> with the following entry in its `/etc/fstab`:
>
> ```bash
> tmpfs:docker /var/lib/docker tmpfs size=450G 0 0
> ```
If you have the RAM, we've seen huge speed gains from doing the builds
and tests on a RAM disk (see the `/etc/fstab` line above).
#### 2. Setting up your workspace to hack on Envoy
1. From your `emissary.git` checkout, get Emissary-ingress's current
version of the Envoy sources, and create a branch from that:
```shell
make $PWD/_cxx/envoy
git -C _cxx/envoy checkout -b YOUR_BRANCHNAME
```
2. To build Envoy in FIPS mode, set the following variable:
```shell
export FIPS_MODE=true
```
It is important to note that while building Envoy in FIPS mode is
required for FIPS compliance, additional steps may be necessary.
Emissary does not claim to be FIPS compliant or certified.
See [here](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/ssl#fips-140-2) for more information on FIPS and Envoy.
> _NOTE:_ FIPS_MODE is NOT supported by the emissary-ingress maintainers, but we provide this for developers as a convenience.
#### 3. Hacking on Envoy
Modify the sources in `./_cxx/envoy/`, or update the branch and/or `ENVOY_COMMIT` as necessary in `./_cxx/envoy.mk`.
#### 4. Building and testing your hacked-up Envoy
> See `./_cxx/envoy.mk` for the full list of targets.
Multiple phony targets are provided so that developers can run the steps they are interested in when developing; here are a few of the key ones:
- `make update-base`: will perform all the steps necessary to verify, build envoy, build docker images, push images to the container repository and compile the updated protos.
- `make build-envoy`: will build the envoy binaries using the same build container as the upstream Envoy project. Build outputs are mounted to the `_cxx/envoy-docker-build` directory and Bazel will write the results there.
- `make build-base-envoy-image`: will use the release outputs from building envoy to generate a new `base-envoy` container which is then used in the main emissary-ingress container build.
- `make push-base-envoy`: will push the built container to the remote container repository.
- `make check-envoy`: will use the build docker container to run the Envoy test suite against the currently checked out envoy in the `_cxx/envoy` folder.
- `make envoy-shell`: will run the envoy build container and open a bash shell session. The `_cxx/envoy` folder is volume mounted into the container and the user is set to the `envoybuild` user in the container so that you are not running as root, which helps keep builds hermetic.
#### 5. Test Devloop
Running the Envoy test suite will compile all the test targets. This is a slow process and can use lots of disk space.
The Envoy Inner Devloop for build and testing:
- You can make a change to Envoy code and run the whole test by just calling `make check-envoy`
- You can run a specific test instead of the whole test suite by setting the `ENVOY_TEST_LABEL` environment variable.
- For example, to run just the unit tests in `test/common/network/listener_impl_test.cc`, you should run:
```shell
ENVOY_TEST_LABEL='//test/common/network:listener_impl_test' make check-envoy
```
- Alternatively, you can run `make envoy-shell` to get a bash shell into the Docker container that does the Envoy builds and you are free to interact with `Bazel` directly.
Interpreting the test results:
- If you see the following message, don't worry, it's harmless; the tests still ran:
```text
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
```
The message means that the test passed, but it passed too
quickly, and Bazel is suggesting that you declare it as smaller.
Something along the lines of "This test only took 2s, but you
declared it as being in the 60s-300s ('moderate') bucket,
consider declaring it as being in the 0s-60s ('short')
bucket".
Don't be confused (as I was) into thinking that it was saying
that the test was too big and was skipped and that you need to
throw more hardware at it.
- **Build or test Emissary-ingress** with the usual `make` commands, with
the exception that you MUST run `make update-base` first whenever
Envoy needs to be recompiled; it won't happen automatically. So
`make test` to build-and-test Emissary-ingress would become
`make update-base && make test`, and `make images` to just build
Emissary-ingress would become `make update-base && make images`.
The Envoy changes with Emissary-ingress:
- Either run `make update-base` to build and push a new base container, and then run `make test` for the Emissary-ingress test suite.
- If you do not want to push the container you can instead:
- Build Envoy - `make build-envoy`
- Build container - `make build-base-envoy-image`
- Test Emissary - `make test`
#### 6. Protobuf changes
If you made any changes to the Protocol Buffer files or if you bumped versions of Envoy then you
should make sure that you are re-compiling the Protobufs so that they are available and checked-in
to the emissary.git repository.
```sh
make compile-envoy-protos
```
This will copy over the raw proto files, then compile and copy the generated Go code over to the emissary-ingress repository.
#### 7. Finalizing your changes
> NOTE: we are no longer accepting PR's in `datawire/envoy.git`.
If you have custom changes then land them in your custom envoy repository and update the `ENVOY_COMMIT` and `ENVOY_DOCKER_REPO` variable in `_cxx/envoy.mk` so that the image will be pushed to the correct repository.
Then run `make update-base`, which does all the rest; assuming that was successful, you are all good.
**For maintainers:**
You will want to make sure that the image is pushed to the backup container registries:
```shell
# upload image to the mirror in GCR
SHA=GET_THIS_FROM_THE_make_update-base_OUTPUT
TAG="envoy-0.$SHA.opt"
docker pull "docker.io/emissaryingress/base-envoy:$TAG"
docker tag "docker.io/emissaryingress/base-envoy:$TAG" "gcr.io/datawire/ambassador-base:$TAG"
docker push "gcr.io/datawire/ambassador-base:$TAG"
```
#### 8. Final Checklist
**For Maintainers Only**
Here is a checklist of things to do when bumping the `base-envoy` version:
- [ ] The image has been pushed to...
- [ ] `docker.io/emissaryingress/base-envoy`
- [ ] `gcr.io/datawire/ambassador-base`
- [ ] The `datawire/envoy.git` commit has been tagged as `datawire-$(git describe --tags --match='v*')`
(the `--match` is to prevent `datawire-*` tags from stacking on each other).
- [ ] It's been tested with...
- [ ] `make check-envoy`
The `check-envoy-version` CI job will double check all these things, with the exception of running
the Envoy tests. If the `check-envoy-version` is failing then double check the above, fix them and
re-run the job.
### Developing Emissary-ingress (Maintainers-only advice)
At the moment, these techniques will only work internally to Maintainers. Mostly
this is because they require credentials to access internal resources at the
moment, though in several cases we're working to fix that.
#### Updating license documentation
When new dependencies are added or existing ones are updated, run
`make generate` and commit changes to `DEPENDENCIES.md` and
`DEPENDENCY_LICENSES.md`
#### Upgrading Python dependencies
Delete `python/requirements.txt`, then run `make generate`.
If there are some dependencies you don't want to upgrade, but want to
upgrade everything else, then
1. Remove from `python/requirements.txt` all of the entries except
for those you want to pin.
2. Delete `python/requirements.in` (if it exists).
3. Run `make generate`.
> **Note**: If you are updating orjson you will need to also update `docker/base-python/Dockerfile` before running `make generate` for the new version. orjson uses rust bindings and the default wheels on PyPI rely on glibc. Because our base python image is Alpine-based, it is built from scratch using rustc to build a musl-compatible version.
> :warning: You may run into an error when running `make generate` where it can't detect the licenses for new or upgraded dependencies, which is needed so that we can properly generate DEPENDENCIES.md and DEPENDENCY_LICENSES.md. If that is the case, you may also have to update `build-aux/tools/src/py-mkopensource/main.go:parseLicenses` for any license changes then run `make generate` again.
## FAQ
This section contains a set of Frequently Asked Questions that may answer a question you have. Also, feel free to ping us in Slack.
### How do I find out what build targets are available?
Use `make help` and `make targets` to see what build targets are
available along with documentation for what each target does.
### How do I develop on a Mac with Apple Silicon?
To ensure that developers using a Mac with Apple Silicon can contribute, the build system ensures
the build artifacts are `linux/amd64` rather than the host architecture. This behavior can be overridden
using the `BUILD_ARCH` environment variable (e.g. `BUILD_ARCH=linux/arm64 make images`).
### How do I develop on Windows using WSL?
You will need:
- [WSL 2](https://learn.microsoft.com/en-us/windows/wsl/)
- [Docker Desktop for Windows](https://docs.docker.com/desktop/windows/wsl/)
- [VS Code](https://code.visualstudio.com/)
### How do I test using a private Docker repository?
If you are pushing your development images to a private Docker repo,
then:
```sh
export DEV_USE_IMAGEPULLSECRET=true
export DOCKER_BUILD_USERNAME=...
export DOCKER_BUILD_PASSWORD=...
```
and the test machinery should create an `imagePullSecret` from those Docker credentials such that it can pull the images.
### How do I change the loglevel at runtime?
```console
curl localhost:8877/ambassador/v0/diag/?loglevel=debug
```
Note: This affects diagd and Envoy, but NOT the AES `amb-sidecar`.
See the AES `CONTRIBUTING.md` for how to do that.
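Presumably the same endpoint can drop the level back down when you're done; treat the exact level name below as an assumption rather than documented behavior:
```bash
# Raise diagd/Envoy logging to debug, then (assumed) return it to info.
curl localhost:8877/ambassador/v0/diag/?loglevel=debug
curl localhost:8877/ambassador/v0/diag/?loglevel=info
```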
### Can I build from a docker container instead of on my local computer?
If you want to avoid setting up build dependencies on your local machine, you can run the build inside a Docker container and leverage "Docker in Docker".
1. `docker pull docker:latest`
2. `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -it docker:latest sh`
3. `apk add --update --no-cache bash build-base go curl rsync python3 python2 git libarchive-tools gawk jq`
4. `git clone https://github.com/emissary-ingress/emissary.git && cd emissary`
5. `make images`
Steps 1 and 2 are run on your machine, and steps 3 through 5 are run from within the Docker container. The base image is a "Docker in Docker" image, run with `-v /var/run/docker.sock:/var/run/docker.sock` so that the Docker client inside the container can talk to your local daemon. More info on Docker in Docker [here](https://hub.docker.com/_/docker).
The images will be created and tagged as defined above, and will be available in docker on your local machine.
### How do I clear everything out to make sure my build runs like it will in CI?
Use `make clobber` to completely remove all derived objects, all cached artifacts, everything, and get back to a clean slate. This is recommended if you change branches within a clone, or if you need to `make generate` when you're not *certain* that your last `make generate` was using the same Envoy version.
Use `make clean` to remove derived objects, but *not* clear the caches.
### My editor is changing `go.mod` or `go.sum`, should I commit that?
If you notice this happening, run `make go-mod-tidy`, and commit that.
(If you're in Ambassador Labs, you should do this from `apro/`, not
`apro/ambassador/`, so that apro.git's files are included too.)
### How do I debug "This should not happen in CI" errors?
These checks indicate that some output file changed in the middle of a
run, when it should only change if a source file has changed. Since
CI isn't editing the source files, this shouldn't happen in CI!
This is problematic because it means that running the build multiple
times can give different results, and that the tests are probably not
testing the same image that would be released.
These checks will show you a patch showing how the output file
changed; it is up to you to figure out what is happening in the
build/test system that would cause that change in the middle of a run.
For the most part, this is pretty simple... except when the output
file is a Docker image; you just see that one image hash is different
than another image hash.
Fortunately, the failure showing the changed image hash is usually
immediately preceded by a `docker build`. Earlier in the CI output,
you should find an identical `docker build` command from the first time it
ran. In the second `docker build`'s output, each step should say
`---> Using cache`; the first few steps will say this, but at some
point later steps will stop saying this; find the first step that is
missing the `---> Using cache` line, and try to figure out what could
have changed between the two runs that would cause it to not use the
cache.
If that step is an `ADD` command that is adding a directory, the
problem is probably that you need to add something to `.dockerignore`.
To help figure out what you need to add, try adding a `RUN find
DIRECTORY -exec ls -ld -- {} +` step after the `ADD` step, so that you
can see what it added, and see what is different on that between the
first and second `docker build` commands.
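For example, once you've saved the two CI build logs locally (the file names here are placeholders), a plain `diff` of the listings produced by that `RUN find` step is usually enough to spot the offending file:
```bash
# Lines that differ between the two runs point at what to add to .dockerignore.
diff first-build.log second-build.log
```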
### How do I run Emissary-ingress tests?
- `export DEV_REGISTRY=<your-dev-docker-registry>` (you need to be logged in and have permission to push)
- `export DEV_KUBECONFIG=<your-dev-kubeconfig>`
If you want to run the Go tests for `cmd/entrypoint`, you'll need `diagd`
in your `PATH`. See the instructions below about `Setting up diagd` to do
that.
| Group | Command |
| --------------- | ---------------------------------------------------------------------- |
| All Tests | `make test` |
| All Golang | `make gotest` |
| All Python | `make pytest` |
| Some/One Golang | `make gotest GOTEST_PKGS=./cmd/entrypoint GOTEST_ARGS="-run TestName"` |
| Some/One Python | `make pytest PYTEST_ARGS="-k TestName"` |
Please note that the Python tests use a local cache to speed up test runs. If
you make a code change that alters the generated Envoy configuration, those
tests will fail and you will need to update the Python test cache.
Note that it is invalid to run one of the `main[Plain.*]` Python tests
without running all of the other `main[Plain*]` tests; the test will
fail to run (it won't even show up as a failure or xfail; it simply
won't run). For example, `PYTEST_ARGS="-k WebSocket"` would match
the `main[Plain.WebSocketMapping-GRPC]` test, and that test would fail
to run; one should instead say `PYTEST_ARGS="-k Plain or WebSocket"`
to avoid breaking the sub-tests of "Plain", as shown below.
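Concretely, using the `make pytest` form from the table above:
```bash
# Too narrow: matches main[Plain.WebSocketMapping-GRPC] without its sibling
# main[Plain*] tests, so it will fail to run at all.
make pytest PYTEST_ARGS="-k WebSocket"

# Keep the whole Plain group together instead:
make pytest PYTEST_ARGS="-k Plain or WebSocket"
```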
### How do I type check my python code?
Ambassador uses Python 3 type hinting and the `mypy` static type checker to
help find bugs before runtime. If you haven't worked with hinting before, a
good place to start is
[the `mypy` cheat sheet](https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html).
New code must be hinted, and the build process will verify that the type
check passes when you `make test`. Fair warning: this means that
PRs will not pass CI if the type checker fails.
We strongly recommend using an editor that can do realtime type checking
(at Datawire we tend to use PyCharm and VSCode a lot, but many editors
can do this now) and also running the type checker by hand before submitting
anything:
- `make lint/mypy` will check all the Ambassador code
Ambassador code should produce *no* warnings and *no* errors.
If you're concerned that the mypy cache is somehow wrong, delete the
`.mypy_cache/` directory to clear the cache.


@ -111,7 +111,7 @@ These steps should be completed within the 1-7 days of Disclosure.
[CVSS](https://www.first.org/cvss/specification-document) using the [CVSS
Calculator](https://www.first.org/cvss/calculator/3.0). The Fix Lead makes the final call on the
calculated CVSS; it is better to move quickly than to spend time making the CVSS perfect.
- The Fix Team will work per the usual [Emissary Development Process](DEVELOPING.md), including
- The Fix Team will work per the usual [Emissary Development Process](CONTRIBUTING.md), including
fix branches, PRs, reviews, etc.
- The Fix Team will notify the Fix Lead that work on the fix branch is complete once the fix is
present in the relevant release branch(es) in the private security repo.

QUICKSTART.md (new file, 176 lines)

@ -0,0 +1,176 @@
# Emissary-ingress 3.10 Quickstart
**We recommend using Helm** to install Emissary.
### Installing if you're starting fresh
**If you are already running Emissary and just want to upgrade, DO NOT FOLLOW
THESE DIRECTIONS.** Instead, check out "Upgrading from an earlier Emissary"
below.
If you're starting from scratch and you don't need to worry about older CRD
versions, install using `--set enableLegacyVersions=false` to avoid installing
the old versions of the CRDs and the conversion webhook:
```bash
helm install emissary-crds \
--namespace emissary --create-namespace \
oci://ghcr.io/emissary-ingress/emissary-crds-chart --version=3.10.0 \
--set enableLegacyVersions=false \
--wait
```
This will install only v3alpha1 CRDs and skip the conversion webhook entirely.
It will create the `emissary` namespace for you, but there won't be anything
in it at this point.
Next up, install Emissary itself, with `--set waitForApiext.enabled=false` to
tell Emissary not to wait for the conversion webhook to be ready:
```bash
helm install emissary \
--namespace emissary \
oci://ghcr.io/emissary-ingress/emissary-ingress --version=3.10.0 \
--set waitForApiext.enabled=false \
--wait
```
### Upgrading from an earlier Emissary
First, install the CRDs and the conversion webhook:
```bash
helm install emissary-crds \
--namespace emissary-system --create-namespace \
oci://ghcr.io/emissary-ingress/emissary-crds-chart --version=3.10.0 \
--wait
```
This will install all the versions of the CRDs (v1, v2, and v3alpha1) and the
conversion webhook into the `emissary-system` namespace. Once that's done, you'll install Emissary itself:
```bash
helm install emissary \
--namespace emissary --create-namespace \
oci://ghcr.io/emissary-ingress/emissary-ingress --version=3.10.0 \
--wait
```
### Using Emissary
In either case above, you should have a running Emissary behind the Service
named `emissary-emissary-ingress` in the `emissary` namespace. How exactly you
connect to that Service will vary with your cluster provider, but you can
start with
```bash
kubectl get svc -n emissary emissary-emissary-ingress
```
and that should get you started. Or, of course, you can use something like
```bash
kubectl port-forward -n emissary svc/emissary-emissary-ingress 8080:80
```
(after you configure a Listener!) and then talk to `localhost:8080`, no matter
what kind of cluster you're using.
## Using Faces for a sanity check
[Faces Demo]: https://github.com/buoyantio/faces-demo
If you like, you can continue by using the [Faces Demo] as a quick sanity
check. First, install Faces itself using Helm:
```bash
helm install faces \
--namespace faces --create-namespace \
oci://ghcr.io/buoyantio/faces-chart --version 2.0.0-rc.4 \
--wait
```
Next, you'll need to configure Emissary to route to Faces. First, we'll do the
basic configuration to tell Emissary to listen for HTTP traffic:
```bash
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: ambassador-https-listener
spec:
port: 8443
protocol: HTTPS
securityModel: XFP
hostBinding:
namespace:
from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: ambassador-http-listener
spec:
port: 8080
protocol: HTTP
securityModel: XFP
hostBinding:
namespace:
from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
name: wildcard-host
spec:
hostname: "*"
requestPolicy:
insecure:
action: Route
EOF
```
(This actually supports both HTTPS and HTTP, but since we haven't set up TLS
certificates, we'll just stick with HTTP.)
Next, we need two Mappings:
| Prefix | Routes to Service | in Namespace |
| --------- | ----------------- | ------------ |
| `/faces/` | `faces-gui` | `faces` |
| `/face/` | `face` | `faces` |
```bash
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: gui-mapping
namespace: faces
spec:
hostname: "*"
prefix: /faces/
service: faces-gui.faces
rewrite: /
timeout_ms: 0
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: face-mapping
namespace: faces
spec:
hostname: "*"
prefix: /face/
service: face.faces
timeout_ms: 0
EOF
```
Once that's done, then you'll be able to access the Faces Demo at `/faces/`,
on whatever IP address or hostname your cluster provides for the
`emissary-emissary-ingress` Service. Or you can port-forward as above and
access it at `http://localhost:8080/faces/`.
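If you used the `kubectl port-forward` from earlier, a quick smoke test might look like this (the exact response body is up to the Faces demo, so treat it as illustrative):
```bash
# A successful response here means the Listener, Host, and gui-mapping are all wired up.
curl -i http://localhost:8080/faces/
```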

README.md

@ -6,69 +6,105 @@ Emissary-ingress
[![Docker Repository][badge-docker-img]][badge-docker-link]
[![Join Slack][badge-slack-img]][badge-slack-link]
[![Core Infrastructure Initiative: Best Practices][badge-cii-img]][badge-cii-link]
[![Artifact HUB][badge-artifacthub-img]][badge-artifacthub-link]
[badge-version-img]: https://img.shields.io/docker/v/emissaryingress/emissary?sort=semver
[badge-version-link]: https://github.com/emissary-ingress/emissary/releases
[badge-docker-img]: https://img.shields.io/docker/pulls/emissaryingress/emissary
[badge-docker-link]: https://hub.docker.com/r/emissaryingress/emissary
[badge-slack-img]: https://img.shields.io/badge/slack-join-orange.svg
[badge-slack-link]: https://a8r.io/slack
[badge-slack-link]: https://communityinviter.com/apps/cloud-native/cncf
[badge-cii-img]: https://bestpractices.coreinfrastructure.org/projects/1852/badge
[badge-cii-link]: https://bestpractices.coreinfrastructure.org/projects/1852
[badge-artifacthub-img]: https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/emissary-ingress
[badge-artifacthub-link]: https://artifacthub.io/packages/helm/datawire/emissary-ingress
<!-- Links are (mostly) at the end of this document, for legibility. -->
[Emissary-Ingress](https://www.getambassador.io) is an open-source Kubernetes-native API Gateway +
Layer 7 load balancer + Kubernetes Ingress built on [Envoy Proxy](https://www.envoyproxy.io).
Emissary-ingress is a CNCF incubation project (and was formerly known as Ambassador API Gateway).
---
Emissary-ingress enables its users to:
* Manage ingress traffic with [load balancing], support for multiple protocols ([gRPC and HTTP/2], [TCP], and [web sockets]), and Kubernetes integration
* Manage changes to routing with an easy to use declarative policy engine and [self-service configuration], via Kubernetes [CRDs] or annotations
* Secure microservices with [authentication], [rate limiting], and [TLS]
* Ensure high availability with [sticky sessions], [rate limiting], and [circuit breaking]
* Leverage observability with integrations with [Grafana], [Prometheus], and [Datadog], and comprehensive [metrics] support
* Enable progressive delivery with [canary releases]
* Connect service meshes including [Consul], [Linkerd], and [Istio]
## QUICKSTART
Looking to get started as quickly as possible? Check out [the
QUICKSTART](https://emissary-ingress.dev/docs/3.10/quick-start/)!
### Latest Release
The latest production version of Emissary is **3.10.0**.
**Note well** that there is also an Ambassador Edge Stack 3.10.0, but
**Emissary 3.10 and Edge Stack 3.10 are not equivalent**. Their codebases have
diverged and will continue to do so.
---
Emissary-ingress
================
[Emissary-ingress](https://www.getambassador.io/docs/open-source) is an
open-source, developer-centric, Kubernetes-native API gateway built on [Envoy
Proxy]. Emissary-ingress is a CNCF incubating project (and was formerly known
as Ambassador API Gateway).
### Design Goals
The first problem faced by any organization trying to develop cloud-native
applications is the _ingress problem_: allowing users outside the cluster to
access the application running inside the cluster. Emissary is built around
the idea that the application developers should be able to solve the ingress
problem themselves, without needing to become Kubernetes experts and without
needing dedicated operations staff: a self-service, developer-centric workflow
is necessary to develop at scale.
Emissary is open-source, developer-centric, role-oriented, opinionated, and
Kubernetes-native.
- open-source: Emissary is licensed under the Apache 2 license, permitting use
or modification by anyone.
- developer-centric: Emissary is designed taking the application developer
into account first.
- role-oriented: Emissary's configuration deliberately tries to separate
elements to allow separation of concerns between developers and operations.
- opinionated: Emissary deliberately tries to make easy things easy, even if
that comes at the cost of not allowing some uncommon features.
### Features
Emissary supports all the table-stakes features needed for a modern API
gateway:
* Per-request [load balancing]
* Support for routing [gRPC], [HTTP/2], [TCP], and [web sockets]
* Declarative configuration via Kubernetes [custom resources]
* Fine-grained [authentication] and [authorization]
* Advanced routing features like [canary releases], [A/B testing], [dynamic routing], and [sticky sessions]
* Resilience features like [retries], [rate limiting], and [circuit breaking]
* Observability features including comprehensive [metrics] support using the [Prometheus] stack
* Easy service mesh integration with [Linkerd], [Istio], [Consul], etc.
* [Knative serverless integration]
See the full list of [features](https://www.getambassador.io/features/) here.
See the full list of [features](https://www.getambassador.io/docs/emissary) here.
Branches
========
### Branches
(If you are looking at this list on a branch other than `master`, it
may be out of date.)
- [`master`](https://github.com/emissary-ingress/emissary/tree/master) - branch for Emissary-ingress 3.8.z work (:heavy_check_mark: upcoming release)
- [`release/v3.7`](https://github.com/emissary-ingress/emissary/tree/release/v3.7) - branch for Emissary-ingress 3.7.z work
- [`release/v2.5`](https://github.com/emissary-ingress/emissary/tree/release/v2.5) - branch for Emissary-ingress 2.5.z work (:heavy_check_mark: upcoming release)
- [`release/v1.14`](https://github.com/emissary-ingress/emissary/tree/release/v1.14) - branch for Emissary-ingress 1.14.z work (:heavy_check_mark: maintenance, supported through September 2022)
- [`main`](https://github.com/emissary-ingress/emissary/tree/main): Emissary 4 development work
Architecture
============
**No further development is planned on any branches listed below.**
Emissary is configured via Kubernetes CRDs, or via annotations on Kubernetes `Service`s. Internally,
it uses the [Envoy Proxy] to actually handle routing data; externally, it relies on Kubernetes for
scaling and resiliency. For more on Emissary's architecture and motivation, read [this blog post](https://blog.getambassador.io/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy-ed01ed520844).
- [`master`](https://github.com/emissary-ingress/emissary/tree/master) - **Frozen** at Emissary 3.10.0
- [`release/v3.10`](https://github.com/emissary-ingress/emissary/tree/release/v3.10) - Emissary-ingress 3.10.0 release branch
- [`release/v3.9`](https://github.com/emissary-ingress/emissary/tree/release/v3.9)
- Emissary-ingress 3.9.1 release branch
- [`release/v2.5`](https://github.com/emissary-ingress/emissary/tree/release/v2.5) - Emissary-ingress 2.5.1 release branch
Getting Started
===============
**Note well** that there is also an Ambassador Edge Stack 3.10.0, but
**Emissary 3.10 and Edge Stack 3.10 are not equivalent**. Their codebases have
diverged and will continue to do so.
You can get Emissary up and running in just three steps. Follow the instructions here: https://www.getambassador.io/docs/emissary/latest/tutorials/getting-started/
If you are looking for a Kubernetes ingress controller, Emissary provides a superset of the functionality of a typical ingress controller. (It does the traditional routing, and layers on a raft of configuration options.) This blog post covers [Kubernetes ingress](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d).
For other common questions, view this [FAQ page](https://www.getambassador.io/docs/emissary/latest/about/faq/).
You can also use Helm to install Emissary. For more information, see the instructions in the [Helm installation documentation](https://www.getambassador.io/docs/emissary/latest/topics/install/helm/)
Check out the full [Emissary
documentation](https://www.getambassador.io/docs/emissary/) at
www.getambassador.io.
Community
=========
#### Community
Emissary-ingress is a CNCF Incubating project and welcomes any and all
contributors.
@ -83,21 +119,21 @@ the way the community is run, including:
regular trouble-shooting meetings and contributor meetings
- how to get support: see [`SUPPORT.md`](Community/SUPPORT.md).
The best way to join the community is to join our [Slack
channel](https://a8r.io/slack).
Check out the [`DevDocumentation/`](DevDocumentation/) directory for
information on the technicals of Emissary, most notably the
[`DEVELOPING.md`](DevDocumentation/DEVELOPING.md) contributor's guide.
The best way to join the community is to join the `#emissary-ingress` channel
in the [CNCF Slack]. This is also the best place for technical information
about Emissary's architecture or development.
If you're interested in contributing, here are some ways:
* Write a blog post for [our blog](https://blog.getambassador.io)
* Investigate an [open issue](https://github.com/emissary-ingress/emissary/issues)
* Add [more tests](https://github.com/emissary-ingress/emissary/tree/master/ambassador/tests)
The Ambassador Edge Stack is a superset of Emissary-ingress that provides additional functionality including OAuth/OpenID Connect, advanced rate limiting, Swagger/OpenAPI support, integrated ACME support for automatic TLS certificate management, and a cloud-based UI. For more information, visit https://www.getambassador.io/editions/.
* Add [more tests](https://github.com/emissary-ingress/emissary/tree/main/ambassador/tests)
<!-- Please keep this list sorted. -->
[CNCF Slack]: https://communityinviter.com/apps/cloud-native/cncf
[Envoy Proxy]: https://www.envoyproxy.io
<!-- Legacy: clean up these links! -->
[authentication]: https://www.getambassador.io/docs/emissary/latest/topics/running/services/auth-service/
[canary releases]: https://www.getambassador.io/docs/emissary/latest/topics/using/canary/
[circuit breaking]: https://www.getambassador.io/docs/emissary/latest/topics/using/circuit-breakers/

_cxx/.gitignore (vendored)

@ -4,3 +4,6 @@
/envoy-build-container.txt
/go-control-plane/
# folder is mounted to envoy build container and build outputs are copied here
/envoy-docker-build

_cxx/envoy.mk

@ -1,32 +1,33 @@
#
# Variables that the dev might set in the env or CLI
# Set to non-empty to enable compiling Envoy as-needed.
YES_I_AM_OK_WITH_COMPILING_ENVOY ?=
# Adjust to run just a subset of the tests.
ENVOY_TEST_LABEL ?= //test/...
# Set RSYNC_EXTRAS=Pv or something to increase verbosity.
RSYNC_EXTRAS ?=
ENVOY_TEST_LABEL ?= //contrib/golang/... //test/...
export ENVOY_TEST_LABEL
#
# Variables that are meant to be set by editing this file
# IF YOU MESS WITH ANY OF THESE VALUES, YOU MUST RUN `make update-base`.
ENVOY_REPO ?= $(if $(IS_PRIVATE),git@github.com:datawire/envoy-private.git,https://github.com/datawire/envoy.git)
# rebase/release/v1.27.0
ENVOY_COMMIT ?= b2891a118f31f0f582797273e9948cfae4212c5b
ENVOY_COMPILATION_MODE ?= opt
# Increment BASE_ENVOY_RELVER on changes to `docker/base-envoy/Dockerfile`, or Envoy recipes.
# You may reset BASE_ENVOY_RELVER when adjusting ENVOY_COMMIT.
BASE_ENVOY_RELVER ?= 0
ENVOY_REPO ?= https://github.com/datawire/envoy.git
# Set to non-empty to enable compiling Envoy in FIPS mode.
FIPS_MODE ?=
# https://github.com/datawire/envoy/tree/rebase/release/v1.31.3
ENVOY_COMMIT ?= 628f5afc75a894a08504fa0f416269ec50c07bf9
ENVOY_DOCKER_REPO ?= $(if $(IS_PRIVATE),quay.io/datawire-private/base-envoy,docker.io/emissaryingress/base-envoy)
ENVOY_DOCKER_VERSION ?= $(BASE_ENVOY_RELVER).$(ENVOY_COMMIT).$(ENVOY_COMPILATION_MODE)$(if $(FIPS_MODE),.FIPS)
ENVOY_DOCKER_TAG ?= $(ENVOY_DOCKER_REPO):envoy-$(ENVOY_DOCKER_VERSION)
ENVOY_FULL_DOCKER_TAG ?= $(ENVOY_DOCKER_REPO):envoy-full-$(ENVOY_DOCKER_VERSION)
ENVOY_COMPILATION_MODE ?= opt
# Increment BASE_ENVOY_RELVER on changes to `docker/base-envoy/Dockerfile`, or Envoy recipes.
# You may reset BASE_ENVOY_RELVER when adjusting ENVOY_COMMIT.
BASE_ENVOY_RELVER ?= 0
# Set to non-empty to enable compiling Envoy in FIPS mode.
FIPS_MODE ?=
export FIPS_MODE
# ENVOY_DOCKER_REPO ?= docker.io/emissaryingress/base-envoy
ENVOY_DOCKER_REPO ?= gcr.io/datawire/ambassador-base
ENVOY_DOCKER_VERSION ?= $(BASE_ENVOY_RELVER).$(ENVOY_COMMIT).$(ENVOY_COMPILATION_MODE)$(if $(FIPS_MODE),.FIPS)
ENVOY_DOCKER_TAG ?= $(ENVOY_DOCKER_REPO):envoy-$(ENVOY_DOCKER_VERSION)
# END LIST OF VARIABLES REQUIRING `make update-base`.
# How to set ENVOY_GO_CONTROL_PLANE_COMMIT: In github.com/envoyproxy/go-control-plane.git, the majority
@ -37,96 +38,21 @@ RSYNC_EXTRAS ?=
# which commits are ancestors, I added `make guess-envoy-go-control-plane-commit` to do that in an
# automated way! Still look at the commit yourself to make sure it seems sane; blindly trusting
# machines is bad, mmkay?
ENVOY_GO_CONTROL_PLANE_COMMIT = b501c94cb61e3235b9156629377fba229d9571d8
ENVOY_GO_CONTROL_PLANE_COMMIT = f888b4f71207d0d268dee7cb824de92848da9ede
# Set ENVOY_DOCKER_REPO to the list of mirrors that we should
# sanity-check that things get pushed to.
ifneq ($(IS_PRIVATE),)
# If $(IS_PRIVATE), then just the private repo...
ENVOY_DOCKER_REPOS = $(ENVOY_DOCKER_REPO)
else
# ...otherwise, this list of repos:
ENVOY_DOCKER_REPOS = docker.io/emissaryingress/base-envoy
ENVOY_DOCKER_REPOS += gcr.io/datawire/ambassador-base
endif
# Set ENVOY_DOCKER_REPO to the list of mirrors to check
# ENVOY_DOCKER_REPOS = docker.io/emissaryingress/base-envoy
# ENVOY_DOCKER_REPOS += gcr.io/datawire/ambassador-base
#
# Intro
include $(OSS_HOME)/build-aux/prelude.mk
# for builder.mk...
export ENVOY_DOCKER_TAG
# Extra contrib api proto types to build
envoy-api-contrib =
envoy-api-contrib += contrib/envoy/extensions/filters/http/golang
old_envoy_commits = $(shell { \
{ \
git log --patch --format='' -G'^ *ENVOY_COMMIT' -- _cxx/envoy.mk; \
git log --patch --format='' -G'^ *ENVOY_COMMIT' -- cxx/envoy.mk; \
git log --patch --format='' -G'^ *ENVOY_COMMIT' -- Makefile; \
} | sed -En 's/^.*ENVOY_COMMIT *\?= *//p'; \
git log --patch --format='' -G'^ *ENVOY_BASE_IMAGE' 511ca54c3004019758980ba82f708269c373ba28 -- Makefile | sed -n 's/^. *ENVOY_BASE_IMAGE.*-g//p'; \
git log --patch --format='' -G'FROM.*envoy.*:' 7593e7dca9aea2f146ddfd5a3676bcc30ee25aff -- Dockerfile | sed -n '/FROM.*envoy.*:/s/.*://p' | sed -e 's/ .*//' -e 's/.*-g//' -e 's/.*-//' -e '/^latest$$/d'; \
} | uniq)
lost_history += 251b7d345 # mentioned in a605b62ee (wip - patched and fixed authentication, Gabriel, 2019-04-04)
lost_history += 27770bf3d # mentioned in 026dc4cd4 (updated envoy image, Gabriel, 2019-04-04)
check-envoy-version: ## Check that Envoy version has been pushed to the right places
check-envoy-version: $(OSS_HOME)/_cxx/envoy
# First, we're going to check whether the Envoy commit is tagged, which
# is one of the things that has to happen before landing a PR that bumps
# the ENVOY_COMMIT.
#
# We strictly check for tags matching 'datawire-*' to remove the
# temptation to jump the gun and create an 'ambassador-*' or
# 'emissary-*' tag before we know that's actually the commit that will
# be in the released Ambassador/Emissary.
#
# Also, don't just check the tip of the PR ('HEAD'), also check that all
# intermediate commits in the PR are also (ancestors of?) a tag. We
# don't want history to get lost!
set -e; { \
cd $<; unset GIT_DIR GIT_WORK_TREE; \
for commit in HEAD $(filter-out $(lost_history),$(old_envoy_commits)); do \
echo "=> checking Envoy commit $$commit"; \
desc=$$(git describe --tags --contains --match='datawire-*' "$$commit"); \
[[ "$$desc" == datawire-* ]]; \
echo " got $$desc"; \
done; \
}
# Now, we're going to check that the Envoy Docker images have been
# pushed to all of the mirrors, which is another thing that has to
# happen before landing a PR that bumps the ENVOY_COMMIT.
#
# We "could" use `docker manifest inspect` instead of `docker
# pull` to test that these exist without actually pulling
# them... except that gcr.io doesn't allow `manifest inspect`.
# So just go ahead and do the `pull` :(
$(foreach ENVOY_DOCKER_REPO,$(ENVOY_DOCKER_REPOS), docker pull $(ENVOY_DOCKER_TAG) >/dev/null$(NL))
$(foreach ENVOY_DOCKER_REPO,$(ENVOY_DOCKER_REPOS), docker pull $(ENVOY_FULL_DOCKER_TAG) >/dev/null$(NL))
.PHONY: check-envoy-version
# See the comment on ENVOY_GO_CONTROL_PLANE_COMMIT at the top of the file for more explanation on how this target works.
guess-envoy-go-control-plane-commit: # Have the computer suggest a value for ENVOY_GO_CONTROL_PLANE_COMMIT
guess-envoy-go-control-plane-commit: $(OSS_HOME)/_cxx/envoy $(OSS_HOME)/_cxx/go-control-plane
@echo
@echo '######################################################################'
@echo
@set -e; { \
(cd $(OSS_HOME)/_cxx/go-control-plane && git log --format='%H %s' origin/main) | sed -n 's, Mirrored from envoyproxy/envoy @ , ,p' | \
while read -r go_commit cxx_commit; do \
if (cd $(OSS_HOME)/_cxx/envoy && git merge-base --is-ancestor "$$cxx_commit" $(ENVOY_COMMIT) 2>/dev/null); then \
echo "ENVOY_GO_CONTROL_PLANE_COMMIT = $$go_commit"; \
break; \
fi; \
done; \
}
.PHONY: guess-envoy-go-control-plane-commit
#
# Envoy sources and build container
#################### Envoy cxx and build image targets #####################
$(OSS_HOME)/_cxx/envoy: FORCE
@echo "Getting Envoy sources..."
@ -151,10 +77,16 @@ $(OSS_HOME)/_cxx/envoy: FORCE
git checkout origin/master; \
fi; \
}
$(OSS_HOME)/_cxx/envoy.clean: %.clean:
$(if $(filter-out -,$(ENVOY_COMMIT)),rm -rf $*)
clobber: $(OSS_HOME)/_cxx/envoy.clean
# cleanup existing build outputs
$(OSS_HOME)/_cxx/envoy-docker-build.clean: %.clean:
$(if $(filter-out -,$(ENVOY_COMMIT)),sudo rm -rf $*)
clobber: $(OSS_HOME)/_cxx/envoy-docker-build.clean
$(OSS_HOME)/_cxx/envoy-build-image.txt: $(OSS_HOME)/_cxx/envoy $(tools/write-ifchanged) FORCE
@PS4=; set -ex -o pipefail; { \
pushd $</ci; \
@ -165,132 +97,108 @@ $(OSS_HOME)/_cxx/envoy-build-image.txt: $(OSS_HOME)/_cxx/envoy $(tools/write-ifc
}
clean: $(OSS_HOME)/_cxx/envoy-build-image.txt.rm
$(OSS_HOME)/_cxx/envoy-build-container.txt: $(OSS_HOME)/_cxx/envoy-build-image.txt FORCE
# cleanup build artifacts
clean: $(OSS_HOME)/docker/base-envoy/envoy-static.rm
clean: $(OSS_HOME)/docker/base-envoy/envoy-static-stripped.rm
clean: $(OSS_HOME)/docker/base-envoy/envoy-static.dwp.rm
################################# Compile Custom Envoy Protos ######################################
# copy raw protos and compiled go protos into emissary-ingress
.PHONY: compile-envoy-protos
compile-envoy-protos: $(OSS_HOME)/_cxx/envoy-build-image.txt
$(OSS_HOME)/_cxx/tools/compile-protos.sh
################################# Envoy Build PhonyTargets #########################################
# helper to trigger the clone of the datawire/envoy repository
.PHONY: clone-envoy
clone-envoy: $(OSS_HOME)/_cxx/envoy
# clean up envoy resources
.PHONY: clean-envoy
clean-envoy:
cd $(OSS_HOME)/_cxx/envoy && ./ci/run_envoy_docker.sh "./ci/do_ci.sh 'clean'"
# Check to see if we have already built and push an image for the
.PHONY: verify-base-envoy
verify-base-envoy:
@PS4=; set -ex; { \
if [ $@ -nt $< ] && docker exec $$(cat $@) true; then \
exit 0; \
fi; \
if [ -e $@ ]; then \
docker kill $$(cat $@) || true; \
fi; \
docker run --network=host --detach --rm --privileged --volume=envoy-build:/root:rw $$(cat $<) tail -f /dev/null > $@; \
}
$(OSS_HOME)/_cxx/envoy-build-container.txt.clean: %.clean:
if [ -e $* ]; then docker rm -fv $$(cat $*) || true; fi
rm -f $*
if docker volume inspect envoy-build &>/dev/null; then docker volume rm envoy-build >/dev/null; fi
clean: $(OSS_HOME)/_cxx/envoy-build-container.txt.clean
#
# Things that run in the Envoy build container
#
# We do everything with rsync and a persistent build-container
# (instead of using a volume), because
# 1. Docker for Mac's osxfs is very slow, so volumes are bad for
# macOS users.
# 2. Volumes mounts just straight-up don't work for people who use
# Minikube's dockerd.
ENVOY_SYNC_HOST_TO_DOCKER = rsync -a$(RSYNC_EXTRAS) --partial --delete --blocking-io -e "docker exec -i" $(OSS_HOME)/_cxx/envoy/ $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt):/root/envoy
ENVOY_SYNC_DOCKER_TO_HOST = rsync -a$(RSYNC_EXTRAS) --partial --delete --blocking-io -e "docker exec -i" $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt):/root/envoy/ $(OSS_HOME)/_cxx/envoy/
ENVOY_BASH.cmd = bash -c 'PS4=; set -ex; $(ENVOY_SYNC_HOST_TO_DOCKER); trap '\''$(ENVOY_SYNC_DOCKER_TO_HOST)'\'' EXIT; '$(call quote.shell,$1)
ENVOY_BASH.deps = $(OSS_HOME)/_cxx/envoy-build-container.txt
ENVOY_DOCKER.env += PATH=/opt/llvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENVOY_DOCKER.env += CC=clang
ENVOY_DOCKER.env += CXX=clang++
ENVOY_DOCKER.env += CLANG_FORMAT=/opt/llvm/bin/clang-format
ENVOY_DOCKER_EXEC = docker exec --workdir=/root/envoy $(foreach e,$(ENVOY_DOCKER.env), --env=$e ) $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt)
$(OSS_HOME)/docker/base-envoy/envoy-static: $(ENVOY_BASH.deps) FORCE
mkdir -p $(@D)
@PS4=; set -ex; { \
if [ '$(ENVOY_COMMIT)' != '-' ] && docker run --rm --entrypoint=true $(ENVOY_FULL_DOCKER_TAG); then \
rsync -a$(RSYNC_EXTRAS) --partial --blocking-io -e 'docker run --rm -i' $$(docker image inspect $(ENVOY_FULL_DOCKER_TAG) --format='{{.Id}}' | sed 's/^sha256://'):/usr/local/bin/envoy-static $@; \
else \
if [ -z '$(YES_I_AM_OK_WITH_COMPILING_ENVOY)' ]; then \
if docker pull $(ENVOY_DOCKER_TAG); then \
echo 'Already up-to-date: $(ENVOY_DOCKER_TAG)'; \
ENVOY_VERSION_OUTPUT=$$(docker run --platform="$(BUILD_ARCH)" --rm -it --entrypoint envoy-static-stripped $(ENVOY_DOCKER_TAG) --version | grep "version:"); \
ENVOY_VERSION_EXPECTED="envoy-static-stripped .*version:.* $(ENVOY_COMMIT)/.*"; \
if ! echo "$$ENVOY_VERSION_OUTPUT" | grep "$$ENVOY_VERSION_EXPECTED"; then \
{ set +x; } &>/dev/null; \
echo 'error: Envoy compilation triggered, but $$YES_I_AM_OK_WITH_COMPILING_ENVOY is not set'; \
exit 1; \
echo "error: Envoy base image $(ENVOY_DOCKER_TAG) contains envoy-static-stripped binary that reported an unexpected version string!" \
"See ENVOY_VERSION_OUTPUT and ENVOY_VERSION_EXPECTED in the output above. This error is usually not recoverable." \
"You may need to rebuild the Envoy base image after either updating ENVOY_COMMIT or bumping BASE_ENVOY_RELVER" \
"(or both, depending on what you are doing)."; \
exit 1; \
fi; \
$(call ENVOY_BASH.cmd, \
$(ENVOY_DOCKER_EXEC) git config --global --add safe.directory /root/envoy; \
$(ENVOY_DOCKER_EXEC) bazel build $(if $(FIPS_MODE), --define boringssl=fips) --verbose_failures -c $(ENVOY_COMPILATION_MODE) --config=clang //contrib/exe:envoy-static; \
rsync -a$(RSYNC_EXTRAS) --partial --blocking-io -e 'docker exec -i' $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt):/root/envoy/bazel-bin/contrib/exe/envoy-static $@; \
); \
echo "Nothing to build at this time"; \
exit 0; \
fi; \
}
$(OSS_HOME)/docker/base-envoy/envoy-static-stripped: %-stripped: % FORCE
@PS4=; set -ex; { \
if [ '$(ENVOY_COMMIT)' != '-' ] && docker run --rm --entrypoint=true $(ENVOY_FULL_DOCKER_TAG); then \
rsync -a$(RSYNC_EXTRAS) --partial --blocking-io -e 'docker run --rm -i' $$(docker image inspect $(ENVOY_FULL_DOCKER_TAG) --format='{{.Id}}' | sed 's/^sha256://'):/usr/local/bin/$(@F) $@; \
else \
if [ -z '$(YES_I_AM_OK_WITH_COMPILING_ENVOY)' ]; then \
{ set +x; } &>/dev/null; \
echo 'error: Envoy compilation triggered, but $$YES_I_AM_OK_WITH_COMPILING_ENVOY is not set'; \
exit 1; \
fi; \
rsync -a$(RSYNC_EXTRAS) --partial --blocking-io -e 'docker exec -i' $< $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt):/tmp/$(<F); \
docker exec $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt) strip /tmp/$(<F) -o /tmp/$(@F); \
rsync -a$(RSYNC_EXTRAS) --partial --blocking-io -e 'docker exec -i' $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt):/tmp/$(@F) $@; \
fi; \
}
clobber: $(OSS_HOME)/docker/base-envoy/envoy-static.rm $(OSS_HOME)/docker/base-envoy/envoy-static-stripped.rm
check-envoy: ## Run the Envoy test suite
check-envoy: $(ENVOY_BASH.deps)
@echo 'Testing envoy with Bazel label: "$(ENVOY_TEST_LABEL)"'; \
$(call ENVOY_BASH.cmd, \
$(ENVOY_DOCKER_EXEC) git config --global --add safe.directory /root/envoy; \
$(ENVOY_DOCKER_EXEC) bazel test --config=clang --test_output=errors --verbose_failures -c dbg --test_env=ENVOY_IP_TEST_VERSIONS=v4only $(ENVOY_TEST_LABEL); \
)
# builds envoy using release settings, see https://github.com/envoyproxy/envoy/blob/main/ci/README.md for additional
# details on configuring builds
.PHONY: build-envoy
build-envoy: $(OSS_HOME)/_cxx/envoy-build-image.txt
$(OSS_HOME)/_cxx/tools/build-envoy.sh
# build the base-envoy containers and tags them locally, this requires running `build-envoy` first.
.PHONY: build-base-envoy-image
build-base-envoy-image: $(OSS_HOME)/_cxx/envoy-build-image.txt
docker build --platform="$(BUILD_ARCH)" -f $(OSS_HOME)/docker/base-envoy/Dockerfile.stripped -t $(ENVOY_DOCKER_TAG) $(OSS_HOME)/docker/base-envoy
# Allows pushing the docker image independent of building envoy and docker containers
# Note, bump the BASE_ENVOY_RELVER and re-build before pushing when making non-commit changes to have a unique image tag.
.PHONY: push-base-envoy-image
push-base-envoy-image:
docker push $(ENVOY_DOCKER_TAG)
# `make update-base`: Recompile Envoy and do all of the related things.
.PHONY: update-base
update-base: $(OSS_HOME)/_cxx/envoy-build-image.txt
$(MAKE) verify-base-envoy
$(MAKE) build-envoy
$(MAKE) build-base-envoy-image
$(MAKE) push-base-envoy-image
$(MAKE) compile-envoy-protos
.PHONY: check-envoy
check-envoy: $(OSS_HOME)/_cxx/envoy-build-image.txt
$(OSS_HOME)/_cxx/tools/test-envoy.sh;
envoy-shell: ## Run a shell in the Envoy build container
envoy-shell: $(ENVOY_BASH.deps)
$(call ENVOY_BASH.cmd, \
docker exec -it --workdir=/root/envoy $(foreach e,$(ENVOY_DOCKER.env), --env=$e ) $$(cat $(OSS_HOME)/_cxx/envoy-build-container.txt) /bin/bash || true; \
)
.PHONY: envoy-shell
envoy-shell: $(OSS_HOME)/_cxx/envoy-build-image.txt
cd $(OSS_HOME)/_cxx/envoy && ./ci/run_envoy_docker.sh bash || true;
################################# Go-control-plane Targets ####################################
#
# Recipes used by `make generate`; files that get checked in to Git (i.e. protobufs and Go code)
# Recipes used by `make generate`; files that get checked into Git (i.e. protobufs and Go code)
#
# These targets are depended on by `make generate` in `build-aux/generate.mk`.
# Raw Envoy protobufs
$(OSS_HOME)/api/envoy $(addprefix $(OSS_HOME)/api/,$(envoy-api-contrib)): $(OSS_HOME)/api/%: $(OSS_HOME)/_cxx/envoy
mkdir -p $@
rsync --recursive --delete --delete-excluded --prune-empty-dirs --include='*/' --include='*.proto' --exclude='*' $</api/$*/ $@
# Go generated from the protobufs
$(OSS_HOME)/_cxx/envoy/build_go: $(ENVOY_BASH.deps) FORCE
$(call ENVOY_BASH.cmd, \
$(ENVOY_DOCKER_EXEC) git config --global --add safe.directory /root/envoy; \
$(ENVOY_DOCKER_EXEC) ci/do_ci.sh api.go; \
)
test -d $@ && touch $@
$(OSS_HOME)/pkg/api/envoy $(addprefix $(OSS_HOME)/pkg/api/,$(envoy-api-contrib)): $(OSS_HOME)/pkg/api/%: $(OSS_HOME)/_cxx/envoy/build_go
rm -rf $@
@PS4=; set -ex; { \
unset GIT_DIR GIT_WORK_TREE; \
tmpdir=$$(mktemp -d); \
trap 'rm -rf "$$tmpdir"' EXIT; \
build_go_dir=$<; \
root_proto_go_dir=$$(echo $* | awk -F '/' '{print $$1}'); \
cp -r $$build_go_dir/$$root_proto_go_dir "$$tmpdir"; \
find "$$tmpdir" -type f \
-exec chmod 644 {} + \
-exec sed -E -i.bak \
-e 's,github\.com/envoyproxy/go-control-plane/envoy,github.com/emissary-ingress/emissary/v3/pkg/api/envoy,g' \
-- {} +; \
find "$$tmpdir" -name '*.bak' -delete; \
mkdir -p $(dir $@); mv "$$tmpdir/$*" $@; \
# See the comment on ENVOY_GO_CONTROL_PLANE_COMMIT at the top of the file for more explanation on how this target works.
guess-envoy-go-control-plane-commit: # Have the computer suggest a value for ENVOY_GO_CONTROL_PLANE_COMMIT
guess-envoy-go-control-plane-commit: $(OSS_HOME)/_cxx/envoy $(OSS_HOME)/_cxx/go-control-plane
@echo
@echo '######################################################################'
@echo
@set -e; { \
(cd $(OSS_HOME)/_cxx/go-control-plane && git log --format='%H %s' origin/main) | sed -n 's, Mirrored from envoyproxy/envoy @ , ,p' | \
while read -r go_commit cxx_commit; do \
if (cd $(OSS_HOME)/_cxx/envoy && git merge-base --is-ancestor "$$cxx_commit" $(ENVOY_COMMIT) 2>/dev/null); then \
echo "ENVOY_GO_CONTROL_PLANE_COMMIT = $$go_commit"; \
break; \
fi; \
done; \
}
# Envoy's build system still uses an old `protoc-gen-go` that emits
# code that Go 1.19's `gofmt` isn't happy with. Even generated code
# should be gofmt-clean, so gofmt it as a post-processing step.
gofmt -w -s $@
.PHONY: guess-envoy-go-control-plane-commit
# The unmodified go-control-plane
$(OSS_HOME)/_cxx/go-control-plane: FORCE
@ -331,81 +239,51 @@ $(OSS_HOME)/pkg/envoy-control-plane: $(OSS_HOME)/_cxx/go-control-plane FORCE
}
cd $(OSS_HOME) && gofmt -w -s ./pkg/envoy-control-plane/
#
# `make update-base`: Recompile Envoy and do all of the related things.
######################### Envoy Version and Mirror Check #######################
update-base: $(OSS_HOME)/docker/base-envoy/envoy-static $(OSS_HOME)/docker/base-envoy/envoy-static-stripped $(OSS_HOME)/_cxx/envoy-build-image.txt
@PS4=; set -ex; { \
if [ '$(ENVOY_COMMIT)' != '-' ] && docker pull $(ENVOY_FULL_DOCKER_TAG); then \
echo 'Already up-to-date: $(ENVOY_FULL_DOCKER_TAG)'; \
ENVOY_VERSION_OUTPUT=$$(docker run --rm -it --entrypoint envoy-static $(ENVOY_FULL_DOCKER_TAG) --version | grep "version:"); \
ENVOY_VERSION_EXPECTED="envoy-static .*version:.* $(ENVOY_COMMIT)/.*"; \
if ! echo "$$ENVOY_VERSION_OUTPUT" | grep "$$ENVOY_VERSION_EXPECTED"; then \
{ set +x; } &>/dev/null; \
echo "error: Envoy base image $(ENVOY_FULL_DOCKER_TAG) contains envoy-static binary that reported an unexpected version string!" \
"See ENVOY_VERSION_OUTPUT and ENVOY_VERSION_EXPECTED in the output above. This error is usually not recoverable." \
"You may need to rebuild the Envoy base image after either updating ENVOY_COMMIT or bumping BASE_ENVOY_RELVER" \
"(or both, depending on what you are doing)."; \
exit 1; \
fi; \
else \
if [ -z '$(YES_I_AM_OK_WITH_COMPILING_ENVOY)' ]; then \
{ set +x; } &>/dev/null; \
echo 'error: Envoy compilation triggered, but $$YES_I_AM_OK_WITH_COMPILING_ENVOY is not set'; \
exit 1; \
fi; \
docker build --build-arg=base=$$(cat $(OSS_HOME)/_cxx/envoy-build-image.txt) -f $(OSS_HOME)/docker/base-envoy/Dockerfile -t $(ENVOY_FULL_DOCKER_TAG) $(OSS_HOME)/docker/base-envoy; \
if [ '$(ENVOY_COMMIT)' != '-' ]; then \
ENVOY_VERSION_OUTPUT=$$(docker run --rm -it --entrypoint envoy-static $(ENVOY_FULL_DOCKER_TAG) --version | grep "version:"); \
ENVOY_VERSION_EXPECTED="envoy-static .*version:.* $(ENVOY_COMMIT)/.*"; \
if ! echo "$$ENVOY_VERSION_OUTPUT" | grep "$$ENVOY_VERSION_EXPECTED"; then \
{ set +x; } &>/dev/null; \
echo "error: Envoy base image $(ENVOY_FULL_DOCKER_TAG) contains envoy-static binary that reported an unexpected version string!" \
"See ENVOY_VERSION_OUTPUT and ENVOY_VERSION_EXPECTED in the output above. This error is usually not recoverable." \
"You may need to rebuild the Envoy base image after either updating ENVOY_COMMIT or bumping BASE_ENVOY_RELVER" \
"(or both, depending on what you are doing)."; \
exit 1; \
fi; \
docker push $(ENVOY_FULL_DOCKER_TAG); \
fi; \
fi; \
old_envoy_commits = $(shell { \
{ \
git log --patch --format='' -G'^ *ENVOY_COMMIT' -- _cxx/envoy.mk; \
git log --patch --format='' -G'^ *ENVOY_COMMIT' -- cxx/envoy.mk; \
git log --patch --format='' -G'^ *ENVOY_COMMIT' -- Makefile; \
} | sed -En 's/^.*ENVOY_COMMIT *\?= *//p'; \
git log --patch --format='' -G'^ *ENVOY_BASE_IMAGE' 511ca54c3004019758980ba82f708269c373ba28 -- Makefile | sed -n 's/^. *ENVOY_BASE_IMAGE.*-g//p'; \
git log --patch --format='' -G'FROM.*envoy.*:' 7593e7dca9aea2f146ddfd5a3676bcc30ee25aff -- Dockerfile | sed -n '/FROM.*envoy.*:/s/.*://p' | sed -e 's/ .*//' -e 's/.*-g//' -e 's/.*-//' -e '/^latest$$/d'; \
} | uniq)
lost_history += 251b7d345 # mentioned in a605b62ee (wip - patched and fixed authentication, Gabriel, 2019-04-04)
lost_history += 27770bf3d # mentioned in 026dc4cd4 (updated envoy image, Gabriel, 2019-04-04)
check-envoy-version: ## Check that Envoy version has been pushed to the right places
check-envoy-version: $(OSS_HOME)/_cxx/envoy
# First, we're going to check whether the Envoy commit is tagged, which
# is one of the things that has to happen before landing a PR that bumps
# the ENVOY_COMMIT.
#
# We strictly check for tags matching 'datawire-*' to remove the
# temptation to jump the gun and create an 'ambassador-*' or
# 'emissary-*' tag before we know that's actually the commit that will
# be in the released Ambassador/Emissary.
#
# Also, don't just check the tip of the PR ('HEAD'), also check that all
# intermediate commits in the PR are also (ancestors of?) a tag. We
# don't want history to get lost!
set -e; { \
cd $<; unset GIT_DIR GIT_WORK_TREE; \
for commit in HEAD $(filter-out $(lost_history),$(old_envoy_commits)); do \
echo "=> checking Envoy commit $$commit"; \
desc=$$(git describe --tags --contains --match='datawire-*' "$$commit"); \
[[ "$$desc" == datawire-* ]]; \
echo " got $$desc"; \
done; \
}
@PS4=; set -ex; { \
if [ '$(ENVOY_COMMIT)' != '-' ] && docker pull $(ENVOY_DOCKER_TAG); then \
echo 'Already up-to-date: $(ENVOY_DOCKER_TAG)'; \
ENVOY_VERSION_OUTPUT=$$(docker run --rm -it --entrypoint envoy-static-stripped $(ENVOY_DOCKER_TAG) --version | grep "version:"); \
ENVOY_VERSION_EXPECTED="envoy-static-stripped .*version:.* $(ENVOY_COMMIT)/.*"; \
if ! echo "$$ENVOY_VERSION_OUTPUT" | grep "$$ENVOY_VERSION_EXPECTED"; then \
{ set +x; } &>/dev/null; \
echo "error: Envoy base image $(ENVOY_DOCKER_TAG) contains envoy-static-stripped binary that reported an unexpected version string!" \
"See ENVOY_VERSION_OUTPUT and ENVOY_VERSION_EXPECTED in the output above. This error is usually not recoverable." \
"You may need to rebuild the Envoy base image after either updating ENVOY_COMMIT or bumping BASE_ENVOY_RELVER" \
"(or both, depending on what you are doing)."; \
exit 1; \
fi; \
else \
if [ -z '$(YES_I_AM_OK_WITH_COMPILING_ENVOY)' ]; then \
{ set +x; } &>/dev/null; \
echo 'error: Envoy compilation triggered, but $$YES_I_AM_OK_WITH_COMPILING_ENVOY is not set'; \
exit 1; \
fi; \
docker build -f $(OSS_HOME)/docker/base-envoy/Dockerfile.stripped -t $(ENVOY_DOCKER_TAG) $(OSS_HOME)/docker/base-envoy; \
if [ '$(ENVOY_COMMIT)' != '-' ]; then \
ENVOY_VERSION_OUTPUT=$$(docker run --rm -it --entrypoint envoy-static-stripped $(ENVOY_DOCKER_TAG) --version | grep "version:"); \
ENVOY_VERSION_EXPECTED="envoy-static-stripped .*version:.* $(ENVOY_COMMIT)/.*"; \
if ! echo "$$ENVOY_VERSION_OUTPUT" | grep "$$ENVOY_VERSION_EXPECTED"; then \
{ set +x; } &>/dev/null; \
echo "error: Envoy base image $(ENVOY_DOCKER_TAG) contains envoy-static-stripped binary that reported an unexpected version string!" \
"See ENVOY_VERSION_OUTPUT and ENVOY_VERSION_EXPECTED in the output above. This error is usually not recoverable." \
"You may need to rebuild the Envoy base image after either updating ENVOY_COMMIT or bumping BASE_ENVOY_RELVER" \
"(or both, depending on what you are doing)."; \
exit 1; \
fi; \
docker push $(ENVOY_DOCKER_TAG); \
fi; \
fi; \
}
# `make generate` has to come *after* the above, because builder.sh will
# try to use the images that the above create.
$(MAKE) generate
.PHONY: update-base
# Now, we're going to check that the Envoy Docker images have been
# pushed to all of the mirrors, which is another thing that has to
# happen before landing a PR that bumps the ENVOY_COMMIT.
#
# We "could" use `docker manifest inspect` instead of `docker
# pull` to test that these exist without actually pulling
# them... except that gcr.io doesn't allow `manifest inspect`.
# So just go ahead and do the `pull` :(
$(foreach ENVOY_DOCKER_REPO,$(ENVOY_DOCKER_REPOS), docker pull $(ENVOY_DOCKER_TAG) >/dev/null$(NL))
.PHONY: check-envoy-version

_cxx/tools/build-envoy.sh (new executable file, 74 lines)

@ -0,0 +1,74 @@
#!/bin/bash
# The phony make targets have been exported when calling from Make.
FIPS_MODE=${FIPS_MODE:-}
BUILD_ARCH=${BUILD_ARCH:-linux/amd64}
# base directory vars
OSS_SOURCE="$PWD"
BASE_ENVOY_DIR="$PWD/_cxx/envoy"
ENVOY_DOCKER_BUILD_DIR="$PWD/_cxx/envoy-docker-build"
export ENVOY_DOCKER_BUILD_DIR
# container vars
DOCKER_OPTIONS=(
"--platform=${BUILD_ARCH}"
"--env=ENVOY_DELIVERY_DIR=/build/envoy/x64/contrib/exe/envoy"
"--env=ENVOY_BUILD_TARGET=//contrib/exe:envoy-static"
"--env=ENVOY_BUILD_DEBUG_INFORMATION=//contrib/exe:envoy-static.dwp"
# "--env=BAZEL_BUILD_OPTIONS=\-\-define tcmalloc=gperftools"
)
# unset ssh auth sock because we don't need it in the container and
# the `run_envoy_docker.sh` adds it by default. This causes issues
# if trying to run builds on docker for mac.
SSH_AUTH_SOCK=""
export SSH_AUTH_SOCK
BAZEL_BUILD_EXTRA_OPTIONS=()
if [ -n "$FIPS_MODE" ]; then
BAZEL_BUILD_EXTRA_OPTIONS+=(--define boringssl=fips)
fi;
if [ ! -d "$BASE_ENVOY_DIR" ]; then
echo "Looks like Envoy hasn't been cloned locally yet, run clone-envoy target to ensure it is cloned";
exit 1;
fi;
ENVOY_DOCKER_OPTIONS="${DOCKER_OPTIONS[*]}"
export ENVOY_DOCKER_OPTIONS
echo "Building custom build of Envoy using the following parameters:"
echo " FIPS_MODE: ${FIPS_MODE}"
echo " BUILD_ARCH: ${BUILD_ARCH}"
echo " ENVOY_DOCKER_BUILD_DIR: ${ENVOY_DOCKER_BUILD_DIR}"
echo " ENVOY_DOCKER_OPTIONS: ${ENVOY_DOCKER_OPTIONS}"
echo " SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}"
echo " "
ci_cmd="./ci/do_ci.sh 'release.server_only'"
if [ ${#BAZEL_BUILD_EXTRA_OPTIONS[@]} -gt 0 ]; then
ci_cmd="BAZEL_BUILD_EXTRA_OPTIONS='${BAZEL_BUILD_EXTRA_OPTIONS[*]}' $ci_cmd"
fi;
echo "cleaning up any old build binaries"
rm -rf "$ENVOY_DOCKER_BUILD_DIR/envoy";
# build envoy
cd "${BASE_ENVOY_DIR}" || exit
./ci/run_envoy_docker.sh "${ci_cmd}"
cd "${OSS_SOURCE}" || exit
echo "Untar release distribution which includes static builds"
tar -xvf "${ENVOY_DOCKER_BUILD_DIR}/envoy/x64/bin/release.tar.zst" -C "${ENVOY_DOCKER_BUILD_DIR}/envoy/x64/bin";
echo "Copying envoy-static and envoy-static-stripped to 'docker/envoy-build'";
cp "${ENVOY_DOCKER_BUILD_DIR}/envoy/x64/bin/dbg/envoy-contrib" "${PWD}/docker/base-envoy/envoy-static"
chmod +x "${PWD}/docker/base-envoy/envoy-static"
cp "${ENVOY_DOCKER_BUILD_DIR}/envoy/x64/bin/dbg/envoy-contrib.dwp" "${PWD}/docker/base-envoy/envoy-static.dwp"
chmod +x "${PWD}/docker/base-envoy/envoy-static.dwp"
cp "${ENVOY_DOCKER_BUILD_DIR}/envoy/x64/bin/envoy-contrib" "${PWD}/docker/base-envoy/envoy-static-stripped"
chmod +x "${PWD}/docker/base-envoy/envoy-static-stripped"

_cxx/tools/compile-protos.sh (new executable file, 103 lines)

@ -0,0 +1,103 @@
#!/bin/bash
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m'
OSS_SOURCE="${PWD}"
# envoy directories
BASE_ENVOY_DIR="${OSS_SOURCE}/_cxx/envoy"
ENVOY_PROTO_API_BASE="${BASE_ENVOY_DIR}/api"
ENVOY_COMPILED_GO_BASE="${BASE_ENVOY_DIR}/build_go"
# Emissary directories
EMISSARY_PROTO_API_BASE="${OSS_SOURCE}/api"
EMISSARY_COMPILED_PROTO_GO_BASE="${OSS_SOURCE}/pkg/api"
# envoy build container settings
ENVOY_DOCKER_OPTIONS="--platform=${BUILD_ARCH}"
export ENVOY_DOCKER_OPTIONS
# unset ssh auth sock because we don't need it in the container and
# the `run_envoy_docker.sh` adds it by default.
SSH_AUTH_SOCK=""
export SSH_AUTH_SOCK
############### copy raw protos into emissary repo ######################
echo -e "${BLUE}removing existing Envoy Protobuf API from:${GREEN} $EMISSARY_PROTO_API_BASE/envoy";
rm -rf "${EMISSARY_PROTO_API_BASE}/envoy"
echo -e "${BLUE}copying Envoy Protobuf API from ${GREEN} ${ENVOY_PROTO_API_BASE}/envoy ${NC}into ${GREEN}${EMISSARY_PROTO_API_BASE}/envoy";
rsync --recursive --delete --delete-excluded --prune-empty-dirs --include='*/' \
--include='*.proto' --exclude='*' \
"${ENVOY_PROTO_API_BASE}/envoy" "${EMISSARY_PROTO_API_BASE}"
echo -e "${BLUE}removing existing Envoy Contrib Protobuf API from:${GREEN} ${EMISSARY_PROTO_API_BASE}/contrib";
rm -rf "${EMISSARY_PROTO_API_BASE}/contrib"
mkdir -p "${EMISSARY_PROTO_API_BASE}/contrib/envoy/extensions/filters/http"
echo -e "${BLUE}copying Envoy Contrib Protobuf API from ${GREEN} ${ENVOY_PROTO_API_BASE}/contrib ${NC}into ${GREEN}${EMISSARY_PROTO_API_BASE}/contrib";
rsync --recursive --delete --delete-excluded --prune-empty-dirs \
--include='*/' \
--include='*.proto' \
--exclude='*' \
"${ENVOY_PROTO_API_BASE}/contrib/envoy/extensions/filters/http/golang" "${EMISSARY_PROTO_API_BASE}/contrib/envoy/extensions/filters/http"
############### compile go protos ######################
echo -e "${BLUE}compiling go-protobufs in envoy build container${NC}";
rm -rf "${ENVOY_COMPILED_GO_BASE}"
cd "${BASE_ENVOY_DIR}" || exit;
./ci/run_envoy_docker.sh "./ci/do_ci.sh 'api.go'";
cd "${OSS_SOURCE}" || exit;
############## moving envoy compiled protos to emissary #################
echo -e "${BLUE}removing existing compiled protos from: ${GREEN} $EMISSARY_COMPILED_PROTO_GO_BASE/envoy${NC}";
rm -rf "${EMISSARY_COMPILED_PROTO_GO_BASE}/envoy"
echo -e "${BLUE}copying compiled protos from: ${GREEN} ${ENVOY_COMPILED_GO_BASE}/envoy${NC} into ${GREEN}${EMISSARY_COMPILED_PROTO_GO_BASE}/envoy${NC}";
rsync --recursive --delete --delete-excluded --prune-empty-dirs \
--include='*/' \
--include='*.go' \
--exclude='*' \
"${ENVOY_COMPILED_GO_BASE}/envoy" "${EMISSARY_COMPILED_PROTO_GO_BASE}"
echo -e "${BLUE}Updating import pkg references from: ${GREEN}github.com/envoyproxy/go-control-plane/envoy ${NC}--> ${GREEN}github.com/emissary-ingress/emissary/v3/pkg/api/envoy${NC}"
find "${EMISSARY_COMPILED_PROTO_GO_BASE}/envoy" -type f \
-exec chmod 644 {} + \
-exec sed -E -i.bak \
-e 's,github\.com/envoyproxy/go-control-plane/envoy,github.com/emissary-ingress/emissary/v3/pkg/api/envoy,g' \
-- {} +;
find "${EMISSARY_COMPILED_PROTO_GO_BASE}/envoy" -name '*.bak' -delete;
gofmt -w -s "${EMISSARY_COMPILED_PROTO_GO_BASE}/envoy"
############## moving contrib compiled protos to emissary #################
echo -e "${BLUE}removing existing compiled protos from: ${GREEN} $EMISSARY_COMPILED_PROTO_GO_BASE/contrib${NC}";
rm -rf "${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib"
mkdir -p "${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib/envoy/extensions/filters/http"
echo -e "${BLUE}copying compiled protos from: ${GREEN} ${ENVOY_COMPILED_GO_BASE}/contrib${NC} into ${GREEN}${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib${NC}";
rsync --recursive --delete --delete-excluded --prune-empty-dirs \
--include='*/' \
--include='*.go' \
--exclude='*' \
"${ENVOY_COMPILED_GO_BASE}/contrib/envoy/extensions/filters/http/golang" "${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib/envoy/extensions/filters/http"
echo -e "${BLUE}Updating import pkg references from: ${GREEN}github.com/envoyproxy/go-control-plane/envoy ${NC}--> ${GREEN}github.com/emissary-ingress/emissary/v3/pkg/api/envoy${NC}"
find "${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib" -type f \
-exec chmod 644 {} + \
-exec sed -E -i.bak \
-e 's,github\.com/envoyproxy/go-control-plane/envoy,github.com/emissary-ingress/emissary/v3/pkg/api/envoy,g' \
-- {} +;
find "${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib" -name '*.bak' -delete;
gofmt -w -s "${EMISSARY_COMPILED_PROTO_GO_BASE}/contrib"

_cxx/tools/test-envoy.sh (new executable file, 62 lines)

@ -0,0 +1,62 @@
#!/bin/bash
# Input args are captured from environment variables.
# The phony make targets have been configured to pass these along when using Make.
default_test_targets="//contrib/golang/... //test/..."
FIPS_MODE=${FIPS_MODE:-}
BUILD_ARCH=${BUILD_ARCH:-linux/amd64}
ENVOY_TEST_LABEL=${ENVOY_TEST_LABEL:-$default_test_targets}
# static vars
OSS_SOURCE="$PWD"
BASE_ENVOY_DIR="$PWD/_cxx/envoy"
ENVOY_DOCKER_BUILD_DIR="$PWD/_cxx/envoy-docker-build"
export ENVOY_DOCKER_BUILD_DIR
# Dynamic variables
DOCKER_OPTIONS=(
"--platform=${BUILD_ARCH}"
"--network=host"
)
ENVOY_DOCKER_OPTIONS="${DOCKER_OPTIONS[*]}"
export ENVOY_DOCKER_OPTIONS
# unset ssh auth sock because we don't need it in the container and
# the `run_envoy_docker.sh` adds it by default.
SSH_AUTH_SOCK=""
export SSH_AUTH_SOCK
BAZEL_BUILD_EXTRA_OPTIONS=()
if [ -n "$FIPS_MODE" ]; then
BAZEL_BUILD_EXTRA_OPTIONS+=(--define boringssl=fips)
fi;
if [ ! -d "$BASE_ENVOY_DIR" ]; then
echo "Looks like Envoy hasn't been cloned locally yet, run clone-envoy target to ensure it is cloned";
exit 1;
fi;
echo "Running Envoy Tests with the following parameters set:"
echo " ENVOY_TEST_LABEL: ${ENVOY_TEST_LABEL}"
echo " FIPS_MODE: ${FIPS_MODE}"
echo " BUILD_ARCH: ${BUILD_ARCH}"
echo " ENVOY_DOCKER_BUILD_DIR: ${ENVOY_DOCKER_BUILD_DIR}"
echo " ENVOY_DOCKER_OPTIONS: ${ENVOY_DOCKER_OPTIONS}"
echo " SSH_AUTH_SOCK: ${SSH_AUTH_SOCK}"
echo " BAZEL_BUILD_EXTRA_OPTIONS: ${BAZEL_BUILD_EXTRA_OPTIONS[*]}"
echo " "
echo " "
ci_cmd="bazel test --test_output=errors \
--verbose_failures -c dbg --test_env=ENVOY_IP_TEST_VERSIONS=v4only \
${ENVOY_TEST_LABEL}";
if [ ${#BAZEL_BUILD_EXTRA_OPTIONS[@]} -gt 0 ]; then
ci_cmd="BAZEL_BUILD_EXTRA_OPTIONS='${BAZEL_BUILD_EXTRA_OPTIONS[*]}' $ci_cmd"
fi;
cd "${BASE_ENVOY_DIR}" || exit;
./ci/run_envoy_docker.sh "${ci_cmd}";
cd "${OSS_SOURCE}" || exit;

View File

@ -12,7 +12,7 @@ import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.golang.v3alpha";
option java_outer_classname = "GolangProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/golang/v3alpha";
option go_package = "github.com/envoyproxy/go-control-plane/contrib/envoy/extensions/filters/http/golang/v3alpha";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
option (xds.annotations.v3.file_status).work_in_progress = true;

View File

@ -33,6 +33,7 @@ message ConfigDump {
// * ``bootstrap``: :ref:`BootstrapConfigDump <envoy_v3_api_msg_admin.v3.BootstrapConfigDump>`
// * ``clusters``: :ref:`ClustersConfigDump <envoy_v3_api_msg_admin.v3.ClustersConfigDump>`
// * ``ecds_filter_http``: :ref:`EcdsConfigDump <envoy_v3_api_msg_admin.v3.EcdsConfigDump>`
// * ``ecds_filter_quic_listener``: :ref:`EcdsConfigDump <envoy_v3_api_msg_admin.v3.EcdsConfigDump>`
// * ``ecds_filter_tcp_listener``: :ref:`EcdsConfigDump <envoy_v3_api_msg_admin.v3.EcdsConfigDump>`
// * ``endpoints``: :ref:`EndpointsConfigDump <envoy_v3_api_msg_admin.v3.EndpointsConfigDump>`
// * ``listeners``: :ref:`ListenersConfigDump <envoy_v3_api_msg_admin.v3.ListenersConfigDump>`

View File

@ -59,7 +59,7 @@ message ServerInfo {
config.core.v3.Node node = 7;
}
// [#next-free-field: 39]
// [#next-free-field: 41]
message CommandLineOptions {
option (udpa.annotations.versioning).previous_message_type =
"envoy.admin.v2alpha.CommandLineOptions";
@ -98,6 +98,12 @@ message CommandLineOptions {
// See :option:`--use-dynamic-base-id` for details.
bool use_dynamic_base_id = 31;
// See :option:`--skip-hot-restart-on-no-parent` for details.
bool skip_hot_restart_on_no_parent = 39;
// See :option:`--skip-hot-restart-parent-stats` for details.
bool skip_hot_restart_parent_stats = 40;
// See :option:`--base-id-path` for details.
string base_id_path = 32;

View File

@ -254,6 +254,9 @@ message ResponseFlagFilter {
in: "UPE"
in: "NC"
in: "OM"
in: "DF"
in: "DO"
in: "DR"
}
}
}];

View File

@ -41,7 +41,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// <config_overview_bootstrap>` for more detail.
// Bootstrap :ref:`configuration overview <config_overview_bootstrap>`.
// [#next-free-field: 40]
// [#next-free-field: 42]
message Bootstrap {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.bootstrap.v2.Bootstrap";
@ -136,6 +136,13 @@ message Bootstrap {
bool enable_deferred_creation_stats = 1;
}
message GrpcAsyncClientManagerConfig {
// Optional field to set the expiration time for the cached gRPC client object.
// The minimal value is 5s and the default is 50s.
google.protobuf.Duration max_cached_entry_idle_duration = 1
[(validate.rules).duration = {gte {seconds: 5}}];
}
reserved 10, 11;
reserved "runtime";
@ -401,6 +408,13 @@ message Bootstrap {
// Optional application log configuration.
ApplicationLogConfig application_log_config = 38;
// Optional gRPC async manager config.
GrpcAsyncClientManagerConfig grpc_async_client_manager_config = 40;
// Optional configuration for memory allocation manager.
// Memory releasing is only supported for `tcmalloc allocator <https://github.com/google/tcmalloc>`_.
MemoryAllocatorManager memory_allocator_manager = 41;
}
// Administration interface :ref:`operations documentation
@ -438,6 +452,7 @@ message Admin {
}
// Cluster manager :ref:`architecture overview <arch_overview_cluster_manager>`.
// [#next-free-field: 6]
message ClusterManager {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.bootstrap.v2.ClusterManager";
@ -478,6 +493,11 @@ message ClusterManager {
// <envoy_v3_api_field_config.core.v3.ApiConfigSource.api_type>` :ref:`GRPC
// <envoy_v3_api_enum_value_config.core.v3.ApiConfigSource.ApiType.GRPC>`.
core.v3.ApiConfigSource load_stats_config = 4;
// Whether the ClusterManager will create clusters on the worker threads
// inline during requests. This will save memory and CPU cycles in cases where
// there are lots of inactive clusters and > 1 worker thread.
bool enable_deferred_cluster_creation = 5;
}
// Allows you to specify different watchdog configs for different subsystems.
@ -718,3 +738,14 @@ message CustomInlineHeader {
// The type of the header that is expected to be set as the inline header.
InlineHeaderType inline_header_type = 2 [(validate.rules).enum = {defined_only: true}];
}
message MemoryAllocatorManager {
// Configures tcmalloc to perform background release of free memory, in this amount of bytes per ``memory_release_interval`` interval.
// If set to ``0``, no memory release will occur. Defaults to ``0``.
uint64 bytes_to_release = 1;
// Interval in milliseconds for memory releasing. If specified, during every
// interval Envoy will try to release ``bytes_to_release`` of free memory back to operating system for reuse.
// Defaults to 1000 milliseconds.
google.protobuf.Duration memory_release_interval = 2;
}
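
The new bootstrap knobs above (gRPC async client caching, deferred cluster creation, and tcmalloc memory release) can be sketched from Go with the generated go-control-plane types. This is only an illustration: it assumes a go-control-plane build that includes these fields and the usual protoc-gen-go naming, and the package, function name, and values are made up for the example.

package envoyexample

import (
	"time"

	bootstrapv3 "github.com/envoyproxy/go-control-plane/envoy/config/bootstrap/v3"
	"google.golang.org/protobuf/types/known/durationpb"
)

// exampleBootstrap exercises the three knobs added in this diff.
func exampleBootstrap() *bootstrapv3.Bootstrap {
	return &bootstrapv3.Bootstrap{
		ClusterManager: &bootstrapv3.ClusterManager{
			// Create clusters lazily on worker threads to save memory when
			// many clusters are inactive.
			EnableDeferredClusterCreation: true,
		},
		GrpcAsyncClientManagerConfig: &bootstrapv3.Bootstrap_GrpcAsyncClientManagerConfig{
			// Cached gRPC client objects expire after 60s of idleness
			// (minimum 5s, default 50s per the proto comment).
			MaxCachedEntryIdleDuration: durationpb.New(60 * time.Second),
		},
		MemoryAllocatorManager: &bootstrapv3.MemoryAllocatorManager{
			// Release up to 8 MiB of free heap back to the OS every 2s
			// (tcmalloc only; 0 disables releasing).
			BytesToRelease:        8 * 1024 * 1024,
			MemoryReleaseInterval: durationpb.New(2 * time.Second),
		},
	}
}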

View File

@ -45,7 +45,7 @@ message ClusterCollection {
}
// Configuration for a single upstream cluster.
// [#next-free-field: 57]
// [#next-free-field: 58]
message Cluster {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.Cluster";
@ -168,7 +168,7 @@ message Cluster {
// The name of the match, used in stats generation.
string name = 1 [(validate.rules).string = {min_len: 1}];
// Optional endpoint metadata match criteria.
// Optional metadata match criteria.
// The connection to the endpoint with metadata matching what is set in this field
// will use the transport socket configuration specified here.
// The endpoint's metadata entry in ``envoy.transport_socket_match`` is used to match
@ -754,12 +754,14 @@ message Cluster {
reserved "hosts", "tls_context", "extension_protocol_options";
// Configuration to use different transport sockets for different endpoints.
// The entry of ``envoy.transport_socket_match`` in the
// :ref:`LbEndpoint.Metadata <envoy_v3_api_field_config.endpoint.v3.LbEndpoint.metadata>`
// is used to match against the transport sockets as they appear in the list. The first
// :ref:`match <envoy_v3_api_msg_config.cluster.v3.Cluster.TransportSocketMatch>` is used.
// For example, with the following match
// Configuration to use different transport sockets for different endpoints. The entry of
// ``envoy.transport_socket_match`` in the :ref:`LbEndpoint.Metadata
// <envoy_v3_api_field_config.endpoint.v3.LbEndpoint.metadata>` is used to match against the
// transport sockets as they appear in the list. If a match is not found, the search continues in
// :ref:`LocalityLbEndpoints.Metadata
// <envoy_v3_api_field_config.endpoint.v3.LocalityLbEndpoints.metadata>`. The first :ref:`match
// <envoy_v3_api_msg_config.cluster.v3.Cluster.TransportSocketMatch>` is used. For example, with
// the following match
//
// .. code-block:: yaml
//
@ -783,8 +785,9 @@ message Cluster {
// socket match in case above.
//
// If an endpoint metadata's value under ``envoy.transport_socket_match`` does not match any
// ``TransportSocketMatch``, socket configuration fallbacks to use the ``tls_context`` or
// ``transport_socket`` specified in this cluster.
// ``TransportSocketMatch``, the locality metadata is then checked for a match. Barring any
// matches in the endpoint or locality metadata, the socket configuration falls back to using the
// ``tls_context`` or ``transport_socket`` specified in this cluster.
//
// This field allows gradual and flexible transport socket configuration changes.
//
@ -1148,6 +1151,22 @@ message Cluster {
// from the LRS stream here.]
core.v3.ConfigSource lrs_server = 42;
// [#not-implemented-hide:]
// A list of metric names from ORCA load reports to propagate to LRS.
//
// For map fields in the ORCA proto, the string will be of the form ``<map_field_name>.<map_key>``.
// For example, the string ``named_metrics.foo`` will mean to look for the key ``foo`` in the ORCA
// ``named_metrics`` field.
//
// The special map key ``*`` means to report all entries in the map (e.g., ``named_metrics.*`` means to
// report all entries in the ORCA named_metrics field). Note that this should be used only with trusted
// backends.
//
// The metric names in LRS will follow the same semantics as this field. In other words, if this field
// contains ``named_metrics.foo``, then the LRS load report will include the data with that same string
// as the key.
repeated string lrs_report_endpoint_metrics = 57;
// If track_timeout_budgets is true, the :ref:`timeout budget histograms
// <config_cluster_manager_cluster_stats_timeout_budgets>` will be published for each
// request. These show what percentage of a request's per try and global timeout was used. A value
@ -1236,6 +1255,26 @@ message UpstreamConnectionOptions {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.UpstreamConnectionOptions";
enum FirstAddressFamilyVersion {
// respect the native ranking of destination ip addresses returned from dns
// resolution
DEFAULT = 0;
V4 = 1;
V6 = 2;
}
message HappyEyeballsConfig {
// Specify the IP address family to attempt connection first in happy
// eyeballs algorithm according to RFC8305#section-4.
FirstAddressFamilyVersion first_address_family_version = 1;
// Specify the number of addresses of the first_address_family_version being
// attempted for connection before the other address family.
google.protobuf.UInt32Value first_address_family_count = 2 [(validate.rules).uint32 = {gte: 1}];
}
// If set then set SO_KEEPALIVE on the socket to enable TCP Keepalives.
core.v3.TcpKeepalive tcp_keepalive = 1;
@ -1243,6 +1282,11 @@ message UpstreamConnectionOptions {
// This can be used by extensions during processing of requests. The association mechanism is
// implementation specific. Defaults to false due to performance concerns.
bool set_local_interface_name_on_upstream_connections = 2;
// Configurations for happy eyeballs algorithm.
// Add configs for first_address_family_version and first_address_family_count
// when sorting destination ip addresses.
HappyEyeballsConfig happy_eyeballs_config = 3;
}
message TrackClusterStats {
@ -1257,4 +1301,19 @@ message TrackClusterStats {
// <config_cluster_manager_cluster_stats_request_response_sizes>` tracking header and body sizes
// of requests and responses will be published.
bool request_response_sizes = 2;
// If true, some stats will be emitted per-endpoint, similar to the stats in admin ``/clusters``
// output.
//
// This does not currently output correct stats during a hot-restart.
//
// This is not currently implemented by all stat sinks.
//
// These stats do not honor filtering or tag extraction rules in :ref:`StatsConfig
// <envoy_v3_api_msg_config.metrics.v3.StatsConfig>` (but fixed-value tags are supported). Admin
// endpoint filtering is supported.
//
// This may not be used at the same time as
// :ref:`load_stats_config <envoy_v3_api_field_config.bootstrap.v3.ClusterManager.load_stats_config>`.
bool per_endpoint_stats = 3;
}
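
Taken together, a Cluster using the additions in this diff (ORCA metric propagation to LRS, Happy Eyeballs tuning, and per-endpoint stats) might be assembled like the sketch below. Field and constant names assume standard protoc-gen-go output for a go-control-plane build at this Envoy revision; the cluster name and values are placeholders.

package envoyexample

import (
	clusterv3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func exampleCluster() *clusterv3.Cluster {
	return &clusterv3.Cluster{
		Name: "backend",
		// Forward every ORCA named metric from load reports to LRS
		// (trusted backends only, per the proto comment).
		LrsReportEndpointMetrics: []string{"named_metrics.*"},
		UpstreamConnectionOptions: &clusterv3.UpstreamConnectionOptions{
			HappyEyeballsConfig: &clusterv3.UpstreamConnectionOptions_HappyEyeballsConfig{
				// Try IPv6 destinations first, two at a time, before
				// moving on to the other address family.
				FirstAddressFamilyVersion: clusterv3.UpstreamConnectionOptions_V6,
				FirstAddressFamilyCount:   wrapperspb.UInt32(2),
			},
		},
		TrackClusterStats: &clusterv3.TrackClusterStats{
			// Emit per-endpoint stats, similar to the admin /clusters output.
			PerEndpointStats: true,
		},
	}
}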

View File

@ -2,6 +2,8 @@ syntax = "proto3";
package envoy.config.cluster.v3;
import "envoy/config/core/v3/config_source.proto";
import "google/protobuf/any.proto";
import "udpa/annotations/status.proto";
@ -14,8 +16,8 @@ option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3;clusterv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Upstream filters]
// Upstream filters apply to the connections to the upstream cluster hosts.
// [#protodoc-title: Upstream network filters]
// Upstream network filters apply to the connections to the upstream cluster hosts.
message Filter {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.cluster.Filter";
@ -26,6 +28,13 @@ message Filter {
// Filter specific configuration which depends on the filter being
// instantiated. See the supported filters for further documentation.
// Note that Envoy's :ref:`downstream network
// filters <config_network_filters>` are not valid upstream filters.
// filters <config_network_filters>` are not valid upstream network filters.
// Only one of typed_config or config_discovery can be used.
google.protobuf.Any typed_config = 2;
// Configuration source specifier for an extension configuration discovery
// service. In case of a failure and without the default configuration, the
// listener closes the connections.
// Only one of typed_config or config_discovery can be used.
core.v3.ExtensionConfigSource config_discovery = 3;
}

View File

@ -2,6 +2,8 @@ syntax = "proto3";
package envoy.config.cluster.v3;
import "envoy/config/core/v3/extension.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/wrappers.proto";
@ -19,7 +21,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// See the :ref:`architecture overview <arch_overview_outlier_detection>` for
// more information on outlier detection.
// [#next-free-field: 23]
// [#next-free-field: 26]
message OutlierDetection {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.cluster.OutlierDetection";
@ -40,8 +42,8 @@ message OutlierDetection {
// Defaults to 30000ms or 30s.
google.protobuf.Duration base_ejection_time = 3 [(validate.rules).duration = {gt {}}];
// The maximum % of an upstream cluster that can be ejected due to outlier
// detection. Defaults to 10% but will eject at least one host regardless of the value.
// The maximum % of an upstream cluster that can be ejected due to outlier detection. Defaults to 10%.
// Will eject at least one host regardless of the value if :ref:`always_eject_one_host<envoy_v3_api_field_config.cluster.v3.OutlierDetection.always_eject_one_host>` is enabled.
google.protobuf.UInt32Value max_ejection_percent = 4 [(validate.rules).uint32 = {lte: 100}];
// The % chance that a host will be actually ejected when an outlier status
@ -161,4 +163,18 @@ message OutlierDetection {
// See :ref:`max_ejection_time_jitter<envoy_v3_api_field_config.cluster.v3.OutlierDetection.base_ejection_time>`
// Defaults to 0s.
google.protobuf.Duration max_ejection_time_jitter = 22;
// If active health checking is enabled and a host is ejected by outlier detection, a successful active health check
// unejects the host by default and considers it as healthy. Unejection also clears all the outlier detection counters.
// To change this default behavior, set this config to ``false``, in which case active health checking will not uneject the host.
// Defaults to true.
google.protobuf.BoolValue successful_active_health_check_uneject_host = 23;
// Set of host's passive monitors.
// [#not-implemented-hide:]
repeated core.v3.TypedExtensionConfig monitors = 24;
// If enabled, at least one host is ejected regardless of the value of :ref:`max_ejection_percent<envoy_v3_api_field_config.cluster.v3.OutlierDetection.max_ejection_percent>`.
// Defaults to false.
google.protobuf.BoolValue always_eject_one_host = 25;
}
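
As a sketch of how the new outlier-detection fields compose (not authoritative; the generated Go names are assumed from the proto field names, and the values are illustrative):

package envoyexample

import (
	clusterv3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func exampleOutlierDetection() *clusterv3.OutlierDetection {
	return &clusterv3.OutlierDetection{
		// Keep ejections to at most 10% of the cluster...
		MaxEjectionPercent: wrapperspb.UInt32(10),
		// ...but always allow at least one host to be ejected.
		AlwaysEjectOneHost: wrapperspb.Bool(true),
		// Keep an ejected host ejected even if active health checks pass.
		SuccessfulActiveHealthCheckUnejectHost: wrapperspb.Bool(false),
	}
}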

View File

@ -6,8 +6,6 @@ import "envoy/config/core/v3/extension.proto";
import "envoy/config/route/v3/route_components.proto";
import "envoy/type/matcher/v3/string.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -24,9 +22,10 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// is found the action specified by the most specific on_no_match will be evaluated.
// As an on_no_match might result in another matching tree being evaluated, this process
// might repeat several times until the final OnMatch (or no match) is decided.
//
// .. note::
// Please use the syntactically equivalent :ref:`matching API <envoy_v3_api_msg_.xds.type.matcher.v3.Matcher>`
message Matcher {
option (xds.annotations.v3.message_status).work_in_progress = true;
// What to do if a match is successful.
message OnMatch {
oneof on_match {

View File

@ -2,6 +2,7 @@ syntax = "proto3";
package envoy.config.core.v3;
import "envoy/config/core/v3/extension.proto";
import "envoy/config/core/v3/socket_option.proto";
import "google/protobuf/wrappers.proto";
@ -130,7 +131,7 @@ message ExtraSourceAddress {
SocketOptionsOverride socket_options = 2;
}
// [#next-free-field: 6]
// [#next-free-field: 7]
message BindConfig {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.BindConfig";
@ -150,20 +151,22 @@ message BindConfig {
// precompiled binaries.
repeated SocketOption socket_options = 3;
// Extra source addresses appended to the address specified in the `source_address`
// field. This enables to specify multiple source addresses. Currently, only one extra
// address can be supported, and the extra address should have a different IP version
// with the address in the `source_address` field. The address which has the same IP
// version with the target host's address IP version will be used as bind address. If more
// than one extra address specified, only the first address matched IP version will be
// returned. If there is no same IP version address found, the address in the `source_address`
// will be returned.
// Extra source addresses appended to the address specified in the ``source_address``
// field. This enables specifying multiple source addresses.
// The source address selection is determined by :ref:`local_address_selector
// <envoy_v3_api_field_config.core.v3.BindConfig.local_address_selector>`.
repeated ExtraSourceAddress extra_source_addresses = 5;
// Deprecated by
// :ref:`extra_source_addresses <envoy_v3_api_field_config.core.v3.BindConfig.extra_source_addresses>`
repeated SocketAddress additional_source_addresses = 4
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
// Custom local address selector to override the default (i.e.
// :ref:`DefaultLocalAddressSelector
// <envoy_v3_api_msg_config.upstream.local_address_selector.v3.DefaultLocalAddressSelector>`).
// [#extension-category: envoy.upstream.local_address_selector]
TypedExtensionConfig local_address_selector = 6;
}
// Addresses specify either a logical or physical address and port, which are

View File

@ -245,7 +245,8 @@ message Metadata {
// :ref:`typed_filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.typed_filter_metadata>`
// fields are present in the metadata with same keys,
// only ``typed_filter_metadata`` field will be parsed.
map<string, google.protobuf.Struct> filter_metadata = 1;
map<string, google.protobuf.Struct> filter_metadata = 1
[(validate.rules).map = {keys {string {min_len: 1}}}];
// Key is the reverse DNS filter name, e.g. com.acme.widget. The ``envoy.*``
// namespace is reserved for Envoy's built-in filters.
@ -253,7 +254,8 @@ message Metadata {
// If both :ref:`filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.filter_metadata>`
// and ``typed_filter_metadata`` fields are present in the metadata with same keys,
// only ``typed_filter_metadata`` field will be parsed.
map<string, google.protobuf.Any> typed_filter_metadata = 2;
map<string, google.protobuf.Any> typed_filter_metadata = 2
[(validate.rules).map = {keys {string {min_len: 1}}}];
}
// Runtime derived uint32 with a default when not specified.
@ -301,6 +303,59 @@ message RuntimeFeatureFlag {
string runtime_key = 2 [(validate.rules).string = {min_len: 1}];
}
message KeyValue {
// The key of the key/value pair.
string key = 1 [(validate.rules).string = {min_len: 1 max_bytes: 16384}];
// The value of the key/value pair.
bytes value = 2;
}
// Key/value pair plus option to control append behavior. This is used to specify
// key/value pairs that should be appended to a set of existing key/value pairs.
message KeyValueAppend {
// Describes the supported actions types for key/value pair append action.
enum KeyValueAppendAction {
// If the key already exists, this action will result in the following behavior:
//
// - Comma-concatenated value if multiple values are not allowed.
// - New value added to the list of values if multiple values are allowed.
//
// If the key doesn't exist then this will add pair with specified key and value.
APPEND_IF_EXISTS_OR_ADD = 0;
// This action will add the key/value pair if it doesn't already exist. If the
// key already exists then this will be a no-op.
ADD_IF_ABSENT = 1;
// This action will overwrite the specified value by discarding any existing
// values if the key already exists. If the key doesn't exist then this will add
// the pair with specified key and value.
OVERWRITE_IF_EXISTS_OR_ADD = 2;
// This action will overwrite the specified value by discarding any existing
// values if the key already exists. If the key doesn't exist then this will
// be no-op.
OVERWRITE_IF_EXISTS = 3;
}
// Key/value pair entry that this option will append or overwrite.
KeyValue entry = 1 [(validate.rules).message = {required: true}];
// Describes the action taken to append/overwrite the given value for an existing
// key or to only add this key if it's absent.
KeyValueAppendAction action = 2 [(validate.rules).enum = {defined_only: true}];
}
// Key/value pair to append or remove.
message KeyValueMutation {
// Key/value pair to append or overwrite. Only one of ``append`` or ``remove`` can be set.
KeyValueAppend append = 1;
// Key to remove. Only one of ``append`` or ``remove`` can be set.
string remove = 2 [(validate.rules).string = {max_bytes: 16384}];
}
// Query parameter name/value pair.
message QueryParameter {
// The key of the query parameter. Case sensitive.
@ -346,9 +401,12 @@ message HeaderValueOption {
// Describes the supported actions types for header append action.
enum HeaderAppendAction {
// This action will append the specified value to the existing values if the header
// already exists. If the header doesn't exist then this will add the header with
// specified key and value.
// If the header already exists, this action will result in:
//
// - Comma-concatenated for predefined inline headers.
// - Duplicate header added in the ``HeaderMap`` for other headers.
//
// If the header doesn't exist then this will add new header with specified key and value.
APPEND_IF_EXISTS_OR_ADD = 0;
// This action will add the header if it doesn't already exist. If the header
@ -406,6 +464,7 @@ message WatchedDirectory {
}
// Data source consisting of a file, an inline value, or an environment variable.
// [#next-free-field: 6]
message DataSource {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.DataSource";
@ -424,12 +483,47 @@ message DataSource {
// Environment variable data source.
string environment_variable = 4 [(validate.rules).string = {min_len: 1}];
}
// Watched directory that is watched for file changes. If this is set explicitly, the file
// specified in the ``filename`` field will be reloaded when relevant file move events occur.
//
// .. note::
// This field only makes sense when the ``filename`` field is set.
//
// .. note::
// Envoy only updates when the file is replaced by a file move, and not when the file is
// edited in place.
//
// .. note::
// Not all use cases of ``DataSource`` support watching directories. It depends on the
// specific usage of the ``DataSource``. See the documentation of the parent message for
// details.
WatchedDirectory watched_directory = 5;
}
// The message specifies the retry policy of remote data source when fetching fails.
// [#next-free-field: 7]
message RetryPolicy {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.RetryPolicy";
// See :ref:`RetryPriority <envoy_v3_api_field_config.route.v3.RetryPolicy.retry_priority>`.
message RetryPriority {
string name = 1 [(validate.rules).string = {min_len: 1}];
oneof config_type {
google.protobuf.Any typed_config = 2;
}
}
// See :ref:`RetryHostPredicate <envoy_v3_api_field_config.route.v3.RetryPolicy.retry_host_predicate>`.
message RetryHostPredicate {
string name = 1 [(validate.rules).string = {min_len: 1}];
oneof config_type {
google.protobuf.Any typed_config = 2;
}
}
// Specifies parameters that control :ref:`retry backoff strategy <envoy_v3_api_msg_config.core.v3.BackoffStrategy>`.
// This parameter is optional, in which case the default base interval is 1000 milliseconds. The
// default maximum interval is 10 times the base interval.
@ -439,6 +533,18 @@ message RetryPolicy {
// defaults to 1.
google.protobuf.UInt32Value num_retries = 2
[(udpa.annotations.field_migrate).rename = "max_retries"];
// For details, see :ref:`retry_on <envoy_v3_api_field_config.route.v3.RetryPolicy.retry_on>`.
string retry_on = 3;
// For details, see :ref:`retry_priority <envoy_v3_api_field_config.route.v3.RetryPolicy.retry_priority>`.
RetryPriority retry_priority = 4;
// For details, see :ref:`RetryHostPredicate <envoy_v3_api_field_config.route.v3.RetryPolicy.retry_host_predicate>`.
repeated RetryHostPredicate retry_host_predicate = 5;
// For details, see :ref:`host_selection_retry_max_attempts <envoy_v3_api_field_config.route.v3.RetryPolicy.host_selection_retry_max_attempts>`.
int64 host_selection_retry_max_attempts = 6;
}
// The message specifies how to fetch data from remote and how to verify it.
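
The new key/value mutation messages compose much like header mutations do. A hedged sketch, assuming the generated Go names follow the proto field names; the keys and values here are invented:

package envoyexample

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
)

func exampleMutations() []*corev3.KeyValueMutation {
	return []*corev3.KeyValueMutation{
		{
			// Overwrite (or add) the entry regardless of whether it exists.
			Append: &corev3.KeyValueAppend{
				Entry:  &corev3.KeyValue{Key: "session-owner", Value: []byte("emissary")},
				Action: corev3.KeyValueAppend_OVERWRITE_IF_EXISTS_OR_ADD,
			},
		},
		{
			// Remove an entry; only one of append or remove may be set per mutation.
			Remove: "legacy-flag",
		},
	}
}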

View File

@ -28,12 +28,10 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// xDS API and non-xDS services version. This is used to describe both resource and transport
// protocol versions (in distinct configuration fields).
enum ApiVersion {
// When not specified, we assume v2, to ease migration to Envoy's stable API
// versioning. If a client does not support v2 (e.g. due to deprecation), this
// is an invalid value.
AUTO = 0 [deprecated = true, (envoy.annotations.deprecated_at_minor_version_enum) = "3.0"];
// When not specified, we assume v3; it is the only supported version.
AUTO = 0;
// Use xDS v2 API.
// Use xDS v2 API. This is no longer supported.
V2 = 1 [deprecated = true, (envoy.annotations.deprecated_at_minor_version_enum) = "3.0"];
// Use xDS v3 API.
@ -152,7 +150,8 @@ message RateLimitSettings {
google.protobuf.UInt32Value max_tokens = 1;
// Rate at which tokens will be filled per second. If not set, a default fill rate of 10 tokens
// per second will be used.
// per second will be used. The minimal fill rate is once per year. Lower
// fill rates will be set to once per year.
google.protobuf.DoubleValue fill_rate = 2 [(validate.rules).double = {gt: 0.0}];
}

View File

@ -25,10 +25,11 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// gRPC service configuration. This is used by :ref:`ApiConfigSource
// <envoy_v3_api_msg_config.core.v3.ApiConfigSource>` and filter configurations.
// [#next-free-field: 6]
// [#next-free-field: 7]
message GrpcService {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.GrpcService";
// [#next-free-field: 6]
message EnvoyGrpc {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.core.GrpcService.EnvoyGrpc";
@ -49,6 +50,18 @@ message GrpcService {
// Currently only supported for xDS gRPC streams.
// If not set, xDS gRPC streams default base interval:500ms, maximum interval:30s will be applied.
RetryPolicy retry_policy = 3;
// Maximum gRPC message size that is allowed to be received.
// If a message over this limit is received, the gRPC stream is terminated with the RESOURCE_EXHAUSTED error.
// This limit is applied to individual messages in the streaming response and not the total size of streaming response.
// Defaults to 0, which means unlimited.
google.protobuf.UInt32Value max_receive_message_length = 4;
// This provides gRPC client level control over envoy generated headers.
// If false, the header will be sent but it can be overridden by per stream option.
// If true, the header will be removed and can not be overridden by per stream option.
// Default to false.
bool skip_envoy_headers = 5;
}
// [#next-free-field: 9]
@ -300,4 +313,8 @@ message GrpcService {
// documentation on :ref:`custom request headers
// <config_http_conn_man_headers_custom_request_headers>`.
repeated HeaderValue initial_metadata = 5;
// Optional default retry policy for streams toward the service.
// If an async stream doesn't have retry policy configured in its stream options, this retry policy is used.
RetryPolicy retry_policy = 6;
}
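
A sketch of an Envoy-managed gRPC target using the new EnvoyGrpc limits and the new stream-level default retry policy. The cluster name and limits are placeholders, and the generated Go names are assumed from the proto.

package envoyexample

import (
	"time"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	"google.golang.org/protobuf/types/known/durationpb"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func exampleGrpcService() *corev3.GrpcService {
	return &corev3.GrpcService{
		TargetSpecifier: &corev3.GrpcService_EnvoyGrpc_{
			EnvoyGrpc: &corev3.GrpcService_EnvoyGrpc{
				ClusterName: "ext_authz",
				// Terminate the stream with RESOURCE_EXHAUSTED if a single
				// received message exceeds 4 MiB (0 means unlimited).
				MaxReceiveMessageLength: wrapperspb.UInt32(4 * 1024 * 1024),
				// Strip the Envoy-generated client headers.
				SkipEnvoyHeaders: true,
			},
		},
		Timeout: durationpb.New(200 * time.Millisecond),
		// Default retry policy applied to async streams that don't set their own.
		RetryPolicy: &corev3.RetryPolicy{
			NumRetries: wrapperspb.UInt32(2),
		},
	}
}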

View File

@ -5,6 +5,7 @@ package envoy.config.core.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/event_service_config.proto";
import "envoy/config/core/v3/extension.proto";
import "envoy/config/core/v3/proxy_protocol.proto";
import "envoy/type/matcher/v3/string.proto";
import "envoy/type/v3/http.proto";
import "envoy/type/v3/range.proto";
@ -62,7 +63,7 @@ message HealthStatusSet {
[(validate.rules).repeated = {items {enum {defined_only: true}}}];
}
// [#next-free-field: 26]
// [#next-free-field: 27]
message HealthCheck {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.HealthCheck";
@ -95,12 +96,11 @@ message HealthCheck {
// left empty (default value), the name of the cluster this health check is associated
// with will be used. The host header can be customized for a specific endpoint by setting the
// :ref:`hostname <envoy_v3_api_field_config.endpoint.v3.Endpoint.HealthCheckConfig.hostname>` field.
string host = 1 [(validate.rules).string = {well_known_regex: HTTP_HEADER_VALUE strict: false}];
string host = 1 [(validate.rules).string = {well_known_regex: HTTP_HEADER_VALUE}];
// Specifies the HTTP path that will be requested during health checking. For example
// ``/healthcheck``.
string path = 2
[(validate.rules).string = {min_len: 1 well_known_regex: HTTP_HEADER_VALUE strict: false}];
string path = 2 [(validate.rules).string = {min_len: 1 well_known_regex: HTTP_HEADER_VALUE}];
// [#not-implemented-hide:] HTTP specific payload.
Payload send = 3;
@ -178,6 +178,13 @@ message HealthCheck {
// payload block must be found, and in the order specified, but not
// necessarily contiguous.
repeated Payload receive = 2;
// When this is set, the health check request is sent with a ProxyProtocol header.
// When ``send`` is present, its payload is sent after the ProxyProtocol header.
// Only the ProxyProtocol header is sent when ``send`` is not present.
// Both ProxyProtocol V1 and V2 can be used. V1 carries L3/L4 information; V2 uses the
// LOCAL command and does not include L3/L4 information.
ProxyProtocolConfig proxy_protocol_config = 3;
}
message RedisHealthCheck {
@ -392,6 +399,11 @@ message HealthCheck {
// The default value is false.
bool always_log_health_check_failures = 19;
// If set to true, health check success events will always be logged. If set to false, only host addition event will be logged
// if it is the first successful health check, or if the healthy threshold is reached.
// The default value is false.
bool always_log_health_check_success = 26;
// This allows overriding the cluster TLS settings, just for health check connections.
TlsOptions tls_options = 21;
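
A small sketch of the new success-logging flag in context. The other fields are the usual basic health check settings; the path is a placeholder and the generated Go names (including the ``HealthCheck_HttpHealthCheck_`` oneof wrapper) are assumed from protoc-gen-go conventions.

package envoyexample

import (
	"time"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	"google.golang.org/protobuf/types/known/durationpb"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func exampleHealthCheck() *corev3.HealthCheck {
	return &corev3.HealthCheck{
		Timeout:            durationpb.New(2 * time.Second),
		Interval:           durationpb.New(5 * time.Second),
		UnhealthyThreshold: wrapperspb.UInt32(3),
		HealthyThreshold:   wrapperspb.UInt32(2),
		HealthChecker: &corev3.HealthCheck_HttpHealthCheck_{
			HttpHealthCheck: &corev3.HealthCheck_HttpHealthCheck{Path: "/healthz"},
		},
		// Log every successful health check, not just edge transitions.
		AlwaysLogHealthCheckSuccess: true,
		// Likewise for failures (pre-existing flag, shown for contrast).
		AlwaysLogHealthCheckFailures: true,
	}
}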

View File

@ -0,0 +1,35 @@
syntax = "proto3";
package envoy.config.core.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/http_uri.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.config.core.v3";
option java_outer_classname = "HttpServiceProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/config/core/v3;corev3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: HTTP services]
// HTTP service configuration.
message HttpService {
// The service's HTTP URI. For example:
//
// .. code-block:: yaml
//
// http_uri:
// uri: https://www.myserviceapi.com/v1/data
// cluster: www.myserviceapi.com|443
//
HttpUri http_uri = 1;
// Specifies a list of HTTP headers that should be added to each request
// handled by this virtual host.
repeated HeaderValueOption request_headers_to_add = 2
[(validate.rules).repeated = {max_items: 1000}];
}
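
The new HttpService message, built from Go using the URI shown in its own doc comment. The header is a made-up example, and the generated Go names (including the ``HttpUri_Cluster`` oneof wrapper) are assumed.

package envoyexample

import (
	"time"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	"google.golang.org/protobuf/types/known/durationpb"
)

func exampleHttpService() *corev3.HttpService {
	return &corev3.HttpService{
		HttpUri: &corev3.HttpUri{
			Uri: "https://www.myserviceapi.com/v1/data",
			HttpUpstreamType: &corev3.HttpUri_Cluster{
				Cluster: "www.myserviceapi.com|443",
			},
			// Must be positive and below 2^32 seconds per the validation
			// tightened elsewhere in this changeset.
			Timeout: durationpb.New(5 * time.Second),
		},
		RequestHeadersToAdd: []*corev3.HeaderValueOption{{
			Header:       &corev3.HeaderValue{Key: "x-client", Value: "emissary"},
			AppendAction: corev3.HeaderValueOption_OVERWRITE_IF_EXISTS_OR_ADD,
		}},
	}
}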

View File

@ -52,6 +52,7 @@ message HttpUri {
// Sets the maximum duration in milliseconds that a response can take to arrive upon request.
google.protobuf.Duration timeout = 3 [(validate.rules).duration = {
required: true
lt {seconds: 4294967296}
gte {}
}];
}

View File

@ -56,7 +56,7 @@ message QuicKeepAliveSettings {
}
// QUIC protocol options which apply to both downstream and upstream connections.
// [#next-free-field: 6]
// [#next-free-field: 9]
message QuicProtocolOptions {
// Maximum number of streams that the client can negotiate per connection. 100
// if not specified.
@ -64,7 +64,7 @@ message QuicProtocolOptions {
// `Initial stream-level flow-control receive window
// <https://tools.ietf.org/html/draft-ietf-quic-transport-34#section-4.1>`_ size. Valid values range from
// 1 to 16777216 (2^24, maximum supported by QUICHE) and defaults to 65536 (2^16).
// 1 to 16777216 (2^24, maximum supported by QUICHE) and defaults to 16777216 (16 * 1024 * 1024).
//
// NOTE: 16384 (2^14) is the minimum window size supported in Google QUIC. If configured smaller than it, we will use 16384 instead.
// QUICHE IETF Quic implementation supports 1 bytes window. We only support increasing the default window size now, so it's also the minimum.
@ -76,8 +76,8 @@ message QuicProtocolOptions {
[(validate.rules).uint32 = {lte: 16777216 gte: 1}];
// Similar to ``initial_stream_window_size``, but for connection-level
// flow-control. Valid values range from 1 to 25165824 (24MB, maximum supported by QUICHE) and defaults to 65536 (2^16).
// window. Currently, this has the same minimum/default as ``initial_stream_window_size``.
// flow-control. Valid values range from 1 to 25165824 (24MB, maximum supported by QUICHE) and defaults
// to 25165824 (24 * 1024 * 1024).
//
// NOTE: 16384 (2^14) is the minimum window size supported in Google QUIC. We only support increasing the default
// window size now, so it's also the minimum.
@ -85,7 +85,7 @@ message QuicProtocolOptions {
[(validate.rules).uint32 = {lte: 25165824 gte: 1}];
// The number of timeouts that can occur before port migration is triggered for QUIC clients.
// This defaults to 1. If set to 0, port migration will not occur on path degrading.
// This defaults to 4. If set to 0, port migration will not occur on path degrading.
// Timeout here refers to QUIC internal path degrading timeout mechanism, such as PTO.
// This has no effect on server sessions.
google.protobuf.UInt32Value num_timeouts_to_trigger_port_migration = 4
@ -94,6 +94,23 @@ message QuicProtocolOptions {
// Probes the peer at the configured interval to solicit traffic, i.e. ACK or PATH_RESPONSE, from the peer to push back connection idle timeout.
// If absent, use the default keepalive behavior of which a client connection sends PINGs every 15s, and a server connection doesn't do anything.
QuicKeepAliveSettings connection_keepalive = 5;
// A comma-separated list of strings representing QUIC connection options defined in
// `QUICHE <https://github.com/google/quiche/blob/main/quiche/quic/core/crypto/crypto_protocol.h>`_ and to be sent by upstream connections.
string connection_options = 6;
// A comma-separated list of strings representing QUIC client connection options defined in
// `QUICHE <https://github.com/google/quiche/blob/main/quiche/quic/core/crypto/crypto_protocol.h>`_ and to be sent by upstream connections.
string client_connection_options = 7;
// The duration that a QUIC connection stays idle before it closes itself. If this field is not present, QUICHE
// default 600s will be applied.
// For internal corporate network, a long timeout is often fine.
// But for client facing network, 30s is usually a good choice.
google.protobuf.Duration idle_network_timeout = 8 [(validate.rules).duration = {
lte {seconds: 600}
gte {seconds: 1}
}];
}
message UpstreamHttpProtocolOptions {
@ -232,10 +249,9 @@ message HttpProtocolOptions {
google.protobuf.Duration idle_timeout = 1;
// The maximum duration of a connection. The duration is defined as a period since a connection
// was established. If not set, there is no max duration. When max_connection_duration is reached
// and if there are no active streams, the connection will be closed. If the connection is a
// downstream connection and there are any active streams, the drain sequence will kick-in,
// and the connection will be force-closed after the drain period. See :ref:`drain_timeout
// was established. If not set, there is no max duration. When max_connection_duration is reached,
// the drain sequence will kick-in. The connection will be closed after the drain timeout period
// if there are no active streams. See :ref:`drain_timeout
// <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.drain_timeout>`.
google.protobuf.Duration max_connection_duration = 3;
@ -482,10 +498,10 @@ message Http2ProtocolOptions {
// Allows proxying Websocket and other upgrades over H2 connect.
bool allow_connect = 5;
// [#not-implemented-hide:] Hiding until envoy has full metadata support.
// [#not-implemented-hide:] Hiding until Envoy has full metadata support.
// Still under implementation. DO NOT USE.
//
// Allows metadata. See [metadata
// Allows sending and receiving HTTP/2 METADATA frames. See [metadata
// docs](https://github.com/envoyproxy/envoy/blob/main/source/docs/h2_metadata.md) for more
// information.
bool allow_metadata = 6;
@ -614,7 +630,7 @@ message GrpcProtocolOptions {
}
// A message which allows using HTTP/3.
// [#next-free-field: 6]
// [#next-free-field: 7]
message Http3ProtocolOptions {
QuicProtocolOptions quic_protocol_options = 1;
@ -633,12 +649,27 @@ message Http3ProtocolOptions {
// <https://datatracker.ietf.org/doc/draft-ietf-httpbis-h3-websockets/>`_
// Note that HTTP/3 CONNECT is not yet an RFC.
bool allow_extended_connect = 5 [(xds.annotations.v3.field_status).work_in_progress = true];
// [#not-implemented-hide:] Hiding until Envoy has full metadata support.
// Still under implementation. DO NOT USE.
//
// Allows sending and receiving HTTP/3 METADATA frames. See [metadata
// docs](https://github.com/envoyproxy/envoy/blob/main/source/docs/h2_metadata.md) for more
// information.
bool allow_metadata = 6;
}
// A message to control transformations to the :scheme header
message SchemeHeaderTransformation {
oneof transformation {
// Overwrite any Scheme header with the contents of this string.
// If set, takes precedence over match_upstream.
string scheme_to_overwrite = 1 [(validate.rules).string = {in: "http" in: "https"}];
}
// Set the Scheme header to match the upstream transport protocol. For example, should a
// request be sent to the upstream over TLS, the scheme header will be set to "https". Should the
// request be sent over plaintext, the scheme header will be set to "http".
// If scheme_to_overwrite is set, this field is not used.
bool match_upstream = 2;
}
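
A sketch of the QUIC protocol options touched above, using the new defaults called out in the comments. The values are illustrative and the generated Go names are assumed from the proto fields.

package envoyexample

import (
	"time"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	"google.golang.org/protobuf/types/known/durationpb"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func exampleQuicOptions() *corev3.QuicProtocolOptions {
	return &corev3.QuicProtocolOptions{
		// New defaults documented above: 16 MiB stream window, 24 MiB
		// connection window.
		InitialStreamWindowSize:     wrapperspb.UInt32(16 * 1024 * 1024),
		InitialConnectionWindowSize: wrapperspb.UInt32(24 * 1024 * 1024),
		// Port migration now defaults to 4 timeouts; 0 disables it.
		NumTimeoutsToTriggerPortMigration: wrapperspb.UInt32(4),
		// Close idle client-facing connections after 30s (1s to 600s allowed).
		IdleNetworkTimeout: durationpb.New(30 * time.Second),
	}
}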

View File

@ -19,9 +19,15 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Substitution format string]
// Optional configuration options to be used with json_format.
message JsonFormatOptions {
// The output JSON string properties will be sorted.
bool sort_properties = 1;
}
// Configuration to use multiple :ref:`command operators <config_access_log_command_operators>`
// to generate a new string in either plain text or JSON format.
// [#next-free-field: 7]
// [#next-free-field: 8]
message SubstitutionFormatString {
oneof format {
option (validate.required) = true;
@ -113,4 +119,7 @@ message SubstitutionFormatString {
// See the formatters extensions documentation for details.
// [#extension-category: envoy.formatter]
repeated TypedExtensionConfig formatters = 6;
// If json_format is used, the options will be applied to the output JSON string.
JsonFormatOptions json_format_options = 7;
}

View File

@ -40,7 +40,6 @@ message ClusterLoadAssignment {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.ClusterLoadAssignment.Policy";
// [#not-implemented-hide:]
message DropOverload {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.ClusterLoadAssignment.Policy.DropOverload";
@ -75,7 +74,15 @@ message ClusterLoadAssignment {
// "throttle"_drop = 60%
// "lb"_drop = 20% // 50% of the remaining 'actual' load, which is 40%.
// actual_outgoing_load = 20% // remaining after applying all categories.
// [#not-implemented-hide:]
//
// Envoy supports only one element and will NACK if more than one element is present.
// Other xDS-capable data planes will not necessarily have this limitation.
//
// In Envoy, this ``drop_overloads`` config can be overridden by a runtime key
// "load_balancing_policy.drop_overload_limit" setting. This runtime key can be set to
// any integer number between 0 and 100. 0 means drop 0%. 100 means drop 100%.
// When both ``drop_overloads`` config and "load_balancing_policy.drop_overload_limit"
// setting are in place, the min of these two wins.
repeated DropOverload drop_overloads = 2;
// Priority levels and localities are considered overprovisioned with this
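
The drop_overloads behavior described above, sketched in Go with the category from the comment's own example. The nested type names assume protoc-gen-go conventions; the percentage is illustrative.

package envoyexample

import (
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
	typev3 "github.com/envoyproxy/go-control-plane/envoy/type/v3"
)

func exampleDropPolicy() *endpointv3.ClusterLoadAssignment_Policy {
	return &endpointv3.ClusterLoadAssignment_Policy{
		// Envoy accepts a single element here and NACKs more than one.
		// The effective drop rate is additionally capped by the runtime key
		// "load_balancing_policy.drop_overload_limit"; the lower of the two wins.
		DropOverloads: []*endpointv3.ClusterLoadAssignment_Policy_DropOverload{{
			Category: "throttle",
			DropPercentage: &typev3.FractionalPercent{
				Numerator:   60,
				Denominator: typev3.FractionalPercent_HUNDRED,
			},
		}},
	}
}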

View File

@ -88,8 +88,8 @@ message Endpoint {
// :ref:`auto_host_rewrite <envoy_v3_api_field_config.route.v3.RouteAction.auto_host_rewrite>`.
string hostname = 3;
// An ordered list of addresses that together with `address` comprise the
// list of addresses for an endpoint. The address given in the `address` is
// An ordered list of addresses that together with ``address`` comprise the
// list of addresses for an endpoint. The address given in the ``address`` is
// prepended to this list. It is assumed that the list must already be
// sorted by preference order of the addresses. This will only be supported
// for STATIC and EDS clusters.
@ -147,7 +147,7 @@ message LedsClusterLocalityConfig {
// A group of endpoints belonging to a Locality.
// One can have multiple LocalityLbEndpoints for a locality, but only if
// they have different priorities.
// [#next-free-field: 9]
// [#next-free-field: 10]
message LocalityLbEndpoints {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.endpoint.LocalityLbEndpoints";
@ -161,6 +161,9 @@ message LocalityLbEndpoints {
// Identifies location of where the upstream hosts run.
core.v3.Locality locality = 1;
// Metadata to provide additional information about the locality endpoints in aggregate.
core.v3.Metadata metadata = 9;
// The group of endpoints belonging to the locality specified.
// [#comment:TODO(adisuissa): Once LEDS is implemented this field needs to be
// deprecated and replaced by ``load_balancer_endpoints``.]

View File

@ -8,6 +8,8 @@ import "envoy/config/core/v3/base.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
import "validate/validate.proto";
@ -23,7 +25,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// These are stats Envoy reports to the management server at a frequency defined by
// :ref:`LoadStatsResponse.load_reporting_interval<envoy_v3_api_field_service.load_stats.v3.LoadStatsResponse.load_reporting_interval>`.
// Stats per upstream region/zone and optionally per subzone.
// [#next-free-field: 9]
// [#next-free-field: 15]
message UpstreamLocalityStats {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.endpoint.UpstreamLocalityStats";
@ -48,7 +50,45 @@ message UpstreamLocalityStats {
// upstream endpoints in the locality.
uint64 total_issued_requests = 8;
// Stats for multi-dimensional load balancing.
// The total number of connections in an established state at the time of the
// report. This field is aggregated over all the upstream endpoints in the
// locality.
// In Envoy, this information may be based on the ``upstream_cx_active`` metric.
// [#not-implemented-hide:]
uint64 total_active_connections = 9 [(xds.annotations.v3.field_status).work_in_progress = true];
// The total number of connections opened since the last report.
// This field is aggregated over all the upstream endpoints in the locality.
// In Envoy, this information may be based on ``upstream_cx_total`` metric
// compared to itself between start and end of an interval, i.e.
// ``upstream_cx_total``(now) - ``upstream_cx_total``(now -
// load_report_interval).
// [#not-implemented-hide:]
uint64 total_new_connections = 10 [(xds.annotations.v3.field_status).work_in_progress = true];
// The total number of connection failures since the last report.
// This field is aggregated over all the upstream endpoints in the locality.
// In Envoy, this information may be based on ``upstream_cx_connect_fail``
// metric compared to itself between start and end of an interval, i.e.
// ``upstream_cx_connect_fail``(now) - ``upstream_cx_connect_fail``(now -
// load_report_interval).
// [#not-implemented-hide:]
uint64 total_fail_connections = 11 [(xds.annotations.v3.field_status).work_in_progress = true];
// CPU utilization stats for multi-dimensional load balancing.
// This typically comes from endpoint metrics reported via ORCA.
UnnamedEndpointLoadMetricStats cpu_utilization = 12;
// Memory utilization for multi-dimensional load balancing.
// This typically comes from endpoint metrics reported via ORCA.
UnnamedEndpointLoadMetricStats mem_utilization = 13;
// Blended application-defined utilization for multi-dimensional load balancing.
// This typically comes from endpoint metrics reported via ORCA.
UnnamedEndpointLoadMetricStats application_utilization = 14;
// Named stats for multi-dimensional load balancing.
// These typically come from endpoint metrics reported via ORCA.
repeated EndpointLoadMetricStats load_metric_stats = 5;
// Endpoint granularity stats information for this locality. This information
@ -118,6 +158,16 @@ message EndpointLoadMetricStats {
double total_metric_value = 3;
}
// Same as EndpointLoadMetricStats, except without the metric_name field.
message UnnamedEndpointLoadMetricStats {
// Number of calls that finished and included this metric.
uint64 num_requests_finished_with_metric = 1;
// Sum of metric values across all calls that finished with this metric for
// load_reporting_interval.
double total_metric_value = 2;
}
// Per cluster load stats. Envoy reports these stats to a management server in a
// :ref:`LoadStatsRequest<envoy_v3_api_msg_service.load_stats.v3.LoadStatsRequest>`
// Next ID: 7

View File

@ -21,4 +21,42 @@ option (udpa.annotations.file_status).package_version_status = FROZEN;
message KafkaBroker {
// The prefix to use when emitting :ref:`statistics <config_network_filters_kafka_broker_stats>`.
string stat_prefix = 1 [(validate.rules).string = {min_bytes: 1}];
// Set to true if broker filter should attempt to serialize the received responses from the
// upstream broker instead of passing received bytes as is.
// Disabled by default.
bool force_response_rewrite = 2;
// Optional broker address rewrite specification.
// Allows the broker filter to rewrite Kafka responses so that all connections established by
// the Kafka clients point to Envoy.
// This allows the Kafka cluster to not configure its 'advertised.listeners' property
// (as the necessary re-pointing will be done by this filter).
// This collection of rules should cover all brokers in the cluster that is being proxied,
// otherwise some nodes' addresses might leak to the downstream clients.
oneof broker_address_rewrite_spec {
// Broker address rewrite rules that match by broker ID.
IdBasedBrokerRewriteSpec id_based_broker_address_rewrite_spec = 3;
}
}
// Collection of rules matching by broker ID.
message IdBasedBrokerRewriteSpec {
repeated IdBasedBrokerRewriteRule rules = 1;
}
// Defines a rule to rewrite broker address data.
message IdBasedBrokerRewriteRule {
// Broker ID to match.
uint32 id = 1 [(validate.rules).uint32 = {gte: 0}];
// The host value to use (resembling the host part of Kafka's advertised.listeners).
// The value should point to the Envoy (not Kafka) listener, so that all client traffic goes
// through Envoy.
string host = 2 [(validate.rules).string = {min_len: 1}];
// The port value to use (resembling the port part of Kafka's advertised.listeners).
// The value should point to the Envoy (not Kafka) listener, so that all client traffic goes
// through Envoy.
uint32 port = 3 [(validate.rules).uint32 = {lte: 65535}];
}

View File

@ -53,7 +53,7 @@ message ListenerCollection {
repeated xds.core.v3.CollectionEntry entries = 1;
}
// [#next-free-field: 35]
// [#next-free-field: 36]
message Listener {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.Listener";
@ -199,7 +199,12 @@ message Listener {
// before a connection is created.
// UDP Listener filters can be specified when the protocol in the listener socket address in
// :ref:`protocol <envoy_v3_api_field_config.core.v3.SocketAddress.protocol>` is :ref:`UDP
// <envoy_v3_api_enum_value_config.core.v3.SocketAddress.Protocol.UDP>`.
// <envoy_v3_api_enum_value_config.core.v3.SocketAddress.Protocol.UDP>` and no
// :ref:`quic_options <envoy_v3_api_field_config.listener.v3.UdpListenerConfig.quic_options>` is specified in :ref:`udp_listener_config <envoy_v3_api_field_config.listener.v3.Listener.udp_listener_config>`.
// QUIC listener filters can be specified when :ref:`quic_options
// <envoy_v3_api_field_config.listener.v3.UdpListenerConfig.quic_options>` is
// specified in :ref:`udp_listener_config <envoy_v3_api_field_config.listener.v3.Listener.udp_listener_config>`.
// They are processed sequentially right before connection creation. And like TCP Listener filters, they can be used to manipulate the connection metadata and socket. But the difference is that they can't be used to pause connection creation.
repeated ListenerFilter listener_filters = 9;
// The timeout to wait for all listener filters to complete operation. If the timeout is reached,
@ -244,7 +249,7 @@ message Listener {
// Additional socket options that may not be present in Envoy source code or
// precompiled binaries. The socket options can be updated for a listener when
// :ref:`enable_reuse_port <envoy_v3_api_field_config.listener.v3.Listener.enable_reuse_port>`
// is `true`. Otherwise, if socket options change during a listener update the update will be rejected
// is ``true``. Otherwise, if socket options change during a listener update the update will be rejected
// to make it clear that the options were not updated.
repeated core.v3.SocketOption socket_options = 13;
@ -382,6 +387,9 @@ message Listener {
// Whether the listener should limit connections based upon the value of
// :ref:`global_downstream_max_connections <config_overload_manager_limiting_connections>`.
bool ignore_global_conn_limit = 31;
// Whether the listener bypasses configured overload manager actions.
bool bypass_overload_manager = 35;
}
// A placeholder proto so that users can explicitly configure the standard

View File

@ -45,7 +45,6 @@ message Filter {
// Configuration source specifier for an extension configuration discovery
// service. In case of a failure and without the default configuration, the
// listener closes the connections.
// [#not-implemented-hide:]
core.v3.ExtensionConfigSource config_discovery = 5;
}
}

View File

@ -24,7 +24,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: QUIC listener config]
// Configuration specific to the UDP QUIC listener.
// [#next-free-field: 10]
// [#next-free-field: 12]
message QuicProtocolOptions {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.listener.QuicProtocolOptions";
@ -72,9 +72,18 @@ message QuicProtocolOptions {
core.v3.TypedExtensionConfig connection_id_generator_config = 8;
// Configure the server's preferred address to advertise so that client can migrate to it. See :ref:`example <envoy_v3_api_msg_extensions.quic.server_preferred_address.v3.FixedServerPreferredAddressConfig>` which configures a pair of v4 and v6 preferred addresses.
// The current QUICHE implementation will advertise only one of the preferred IPv4 and IPv6 addresses based on the address family the client initially connects with, and only if the client is also QUICHE-based.
// The current QUICHE implementation will advertise only one of the preferred IPv4 and IPv6 addresses based on the address family the client initially connects with.
// If not specified, Envoy will not advertise any server's preferred address.
// [#extension-category: envoy.quic.server_preferred_address]
core.v3.TypedExtensionConfig server_preferred_address_config = 9
[(xds.annotations.v3.field_status).work_in_progress = true];
// Configure the server to send transport parameter `disable_active_migration <https://www.rfc-editor.org/rfc/rfc9000#section-18.2-4.30.1>`_.
// Defaults to false (do not send this transport parameter).
google.protobuf.BoolValue send_disable_active_migration = 10;
// Configure which implementation of ``quic::QuicConnectionDebugVisitor`` to be used for this listener.
// If not specified, no debug visitor will be attached to connections.
// [#extension-category: envoy.quic.connection_debug_visitor]
core.v3.TypedExtensionConfig connection_debug_visitor_config = 11;
}

View File

@ -43,7 +43,6 @@ enum HistogramEmitMode {
// - name: envoy.stat_sinks.metrics_service
// typed_config:
// "@type": type.googleapis.com/envoy.config.metrics.v3.MetricsServiceConfig
// transport_api_version: V3
//
// [#extension: envoy.stat_sinks.metrics_service]
// [#next-free-field: 6]

View File

@ -121,8 +121,8 @@ message StatsMatcher {
// limited by either an exclusion or an inclusion list of :ref:`StringMatcher
// <envoy_v3_api_msg_type.matcher.v3.StringMatcher>` protos:
//
// * If ``reject_all`` is set to `true`, no stats will be instantiated. If ``reject_all`` is set to
// `false`, all stats will be instantiated.
// * If ``reject_all`` is set to ``true``, no stats will be instantiated. If ``reject_all`` is set to
// ``false``, all stats will be instantiated.
//
// * If an exclusion list is supplied, any stat name matching *any* of the StringMatchers in the
// list will not instantiate.

View File

@ -194,7 +194,7 @@ message Policy {
}
// Permission defines an action (or actions) that a principal can take.
// [#next-free-field: 13]
// [#next-free-field: 14]
message Permission {
option (udpa.annotations.versioning).previous_message_type = "envoy.config.rbac.v2.Permission";
@ -270,6 +270,10 @@ message Permission {
// Extension for configuring custom matchers for RBAC.
// [#extension-category: envoy.rbac.matchers]
core.v3.TypedExtensionConfig matcher = 12;
// URI template path matching.
// [#extension-category: envoy.path.match]
core.v3.TypedExtensionConfig uri_template = 13;
}
}

View File

@ -23,7 +23,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// * Routing :ref:`architecture overview <arch_overview_http_routing>`
// * HTTP :ref:`router filter <config_http_filters_router>`
// [#next-free-field: 17]
// [#next-free-field: 18]
message RouteConfiguration {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.RouteConfiguration";
@ -82,14 +82,11 @@ message RouteConfiguration {
(validate.rules).repeated = {items {string {well_known_regex: HTTP_HEADER_NAME strict: false}}}
];
// By default, headers that should be added/removed are evaluated from most to least specific:
//
// * route level
// * virtual host level
// * connection manager level
//
// To allow setting overrides at the route or virtual host level, this order can be reversed
// by setting this option to true. Defaults to false.
// Headers mutations at all levels are evaluated, if specified. By default, the order is from most
// specific (i.e. route entry level) to least specific (i.e. route configuration level). Later header
// mutations may override earlier mutations.
// This order can be reversed by setting this field to true. In other words, most specific level mutation
// is evaluated last.
//
bool most_specific_header_mutations_wins = 10;
@ -142,19 +139,22 @@ message RouteConfiguration {
// For users who want to only match path on the "<path>" portion, this option should be true.
bool ignore_path_parameters_in_path_matching = 15;
// The typed_per_filter_config field can be used to provide RouteConfiguration level per filter config.
// The key should match the :ref:`filter config name
// This field can be used to provide RouteConfiguration level per filter config. The key should match the
// :ref:`filter config name
// <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpFilter.name>`.
// The canonical filter name (e.g., ``envoy.filters.http.buffer`` for the HTTP buffer filter) can also
// be used for backwards compatibility. If there is no entry referred to by the filter config name, the
// entry referred to by the canonical filter name will be provided to the filters as a fallback.
//
// Use of this field is filter specific;
// see the :ref:`HTTP filter documentation <config_http_filters>` for if and how it is utilized.
// See :ref:`Http filter route specific config <arch_overview_http_filters_per_filter_config>`
// for details.
// [#comment: An entry's value may be wrapped in a
// :ref:`FilterConfig<envoy_v3_api_msg_config.route.v3.FilterConfig>`
// message to specify additional options.]
map<string, google.protobuf.Any> typed_per_filter_config = 16;
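As a rough illustration only, assuming the HTTP buffer filter is installed in the connection manager's filter chain, a RouteConfiguration-level entry could look like:

    route_config:
      name: local_route
      typed_per_filter_config:
        envoy.filters.http.buffer:   # canonical filter name used as the lookup key
          "@type": type.googleapis.com/envoy.extensions.filters.http.buffer.v3.BufferPerRoute
          buffer:
            max_request_bytes: 1024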
// The metadata field can be used to provide additional information
// about the route configuration. It can be used for configuration, stats, and logging.
// The metadata should go under the filter namespace that will need it.
// For instance, if the metadata is intended for the Router filter,
// the filter name should be specified as ``envoy.filters.http.router``.
core.v3.Metadata metadata = 17;
}
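A small sketch of the ``metadata`` field on RouteConfiguration, keyed by the consuming filter's namespace (the ``team`` entry is a made-up value for logging and stats):

    route_config:
      name: local_route
      metadata:
        filter_metadata:
          envoy.filters.http.router:
            team: payments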
message Vhds {

View File

@ -41,7 +41,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// host header. This allows a single listener to service multiple top level domain path trees. Once
// a virtual host is selected based on the domain, the routes are processed in order to see which
// upstream cluster to route to or whether to perform a redirect.
// [#next-free-field: 24]
// [#next-free-field: 25]
message VirtualHost {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.route.VirtualHost";
@ -153,15 +153,11 @@ message VirtualHost {
// to configure the CORS HTTP filter.
CorsPolicy cors = 8 [deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
// The per_filter_config field can be used to provide virtual host-specific configurations for filters.
// The key should match the :ref:`filter config name
// This field can be used to provide virtual host level per filter config. The key should match the
// :ref:`filter config name
// <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpFilter.name>`.
// The canonical filter name (e.g., ``envoy.filters.http.buffer`` for the HTTP buffer filter) can also
// be used for backwards compatibility. If there is no entry referred to by the filter config name, the
// entry referred to by the canonical filter name will be provided to the filters as a fallback.
//
// Use of this field is filter specific;
// see the :ref:`HTTP filter documentation <config_http_filters>` for if and how it is utilized.
// See :ref:`Http filter route specific config <arch_overview_http_filters_per_filter_config>`
// for details.
// [#comment: An entry's value may be wrapped in a
// :ref:`FilterConfig<envoy_v3_api_msg_config.route.v3.FilterConfig>`
// message to specify additional options.]
@ -219,6 +215,13 @@ message VirtualHost {
// It takes precedence over the route config mirror policy entirely.
// That is, policies are not merged; the most specific non-empty list becomes the mirror policies.
repeated RouteAction.RequestMirrorPolicy request_mirror_policies = 22;
// The metadata field can be used to provide additional information
// about the virtual host. It can be used for configuration, stats, and logging.
// The metadata should go under the filter namespace that will need it.
// For instance, if the metadata is intended for the Router filter,
// the filter name should be specified as ``envoy.filters.http.router``.
core.v3.Metadata metadata = 24;
}
// A filter-defined action type.
@ -292,15 +295,11 @@ message Route {
// Decorator for the matched route.
Decorator decorator = 5;
// The per_filter_config field can be used to provide route-specific configurations for filters.
// The key should match the :ref:`filter config name
// This field can be used to provide route specific per filter config. The key should match the
// :ref:`filter config name
// <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpFilter.name>`.
// The canonical filter name (e.g., ``envoy.filters.http.buffer`` for the HTTP buffer filter) can also
// be used for backwards compatibility. If there is no entry referred to by the filter config name, the
// entry referred to by the canonical filter name will be provided to the filters as a fallback.
//
// Use of this field is filter specific;
// see the :ref:`HTTP filter documentation <config_http_filters>` for if and how it is utilized.
// See :ref:`Http filter route specific config <arch_overview_http_filters_per_filter_config>`
// for details.
// [#comment: An entry's value may be wrapped in a
// :ref:`FilterConfig<envoy_v3_api_msg_config.route.v3.FilterConfig>`
// message to specify additional options.]
@ -451,16 +450,11 @@ message WeightedCluster {
items {string {well_known_regex: HTTP_HEADER_NAME strict: false}}
}];
// The per_filter_config field can be used to provide weighted cluster-specific configurations
// for filters.
// The key should match the :ref:`filter config name
// This field can be used to provide weighted cluster specific per filter config. The key should match the
// :ref:`filter config name
// <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpFilter.name>`.
// The canonical filter name (e.g., ``envoy.filters.http.buffer`` for the HTTP buffer filter) can also
// be used for backwards compatibility. If there is no entry referred to by the filter config name, the
// entry referred to by the canonical filter name will be provided to the filters as a fallback.
//
// Use of this field is filter specific;
// see the :ref:`HTTP filter documentation <config_http_filters>` for if and how it is utilized.
// See :ref:`Http filter route specific config <arch_overview_http_filters_per_filter_config>`
// for details.
// [#comment: An entry's value may be wrapped in a
// :ref:`FilterConfig<envoy_v3_api_msg_config.route.v3.FilterConfig>`
// message to specify additional options.]
@ -537,10 +531,20 @@ message RouteMatch {
// If specified, the route will match against whether or not a certificate is validated.
// If not specified, certificate validation status (true or false) will not be considered when route matching.
//
// .. warning::
//
// Client certificate validation is not currently performed upon TLS session resumption. For
// a resumed TLS session the route will match only when ``validated`` is false, regardless of
// whether the client TLS certificate is valid.
//
// The only known workaround for this issue is to disable TLS session resumption entirely, by
// setting both :ref:`disable_stateless_session_resumption <envoy_v3_api_field_extensions.transport_sockets.tls.v3.DownstreamTlsContext.disable_stateless_session_resumption>`
// and :ref:`disable_stateful_session_resumption <envoy_v3_api_field_extensions.transport_sockets.tls.v3.DownstreamTlsContext.disable_stateful_session_resumption>` on the DownstreamTlsContext.
google.protobuf.BoolValue validated = 2;
}
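The workaround described in the warning amounts to disabling both resumption mechanisms on the listener's DownstreamTlsContext; a minimal sketch, with placeholder certificate paths:

    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
        disable_stateless_session_resumption: true   # no TLS session tickets
        disable_stateful_session_resumption: true    # no session-ID based resumption
        common_tls_context:
          tls_certificates:
          - certificate_chain: {filename: /etc/envoy/certs/cert.pem}
            private_key: {filename: /etc/envoy/certs/key.pem}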
// An extensible message for matching CONNECT requests.
// An extensible message for matching CONNECT or CONNECT-UDP requests.
message ConnectMatcher {
}
@ -573,11 +577,10 @@ message RouteMatch {
// stripping. This needs more thought.]
type.matcher.v3.RegexMatcher safe_regex = 10 [(validate.rules).message = {required: true}];
// If this is used as the matcher, the matcher will only match CONNECT requests.
// Note that this will not match HTTP/2 upgrade-style CONNECT requests
// (WebSocket and the like) as they are normalized in Envoy as HTTP/1.1 style
// upgrades.
// This is the only way to match CONNECT requests for HTTP/1.1. For HTTP/2,
// If this is used as the matcher, the matcher will only match CONNECT or CONNECT-UDP requests.
// Note that this will not match other Extended CONNECT requests (WebSocket and the like) as
// they are normalized in Envoy as HTTP/1.1 style upgrades.
// This is the only way to match CONNECT requests for HTTP/1.1. For HTTP/2 and HTTP/3,
// where Extended CONNECT requests may have a path, the path matchers will work if
// there is a path present.
// Note that CONNECT support is currently considered alpha in Envoy.
@ -632,7 +635,8 @@ message RouteMatch {
// match. The router will check the query string from the ``path`` header
// against all the specified query parameters. If the number of specified
// query parameters is nonzero, they all must match the ``path`` header's
// query string for a match to occur.
// query string for a match to occur. In the event query parameters are
// repeated, only the first value for each key will be considered.
//
// .. note::
//
@ -669,7 +673,7 @@ message RouteMatch {
// :ref:`CorsPolicy in filter extension <envoy_v3_api_msg_extensions.filters.http.cors.v3.CorsPolicy>`
// as an alternative.
//
// [#next-free-field: 13]
// [#next-free-field: 14]
message CorsPolicy {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.route.CorsPolicy";
@ -723,6 +727,10 @@ message CorsPolicy {
//
// For more details, refer to https://developer.chrome.com/blog/private-network-access-preflight.
google.protobuf.BoolValue allow_private_network_access = 12;
// Specifies if preflight requests not matching the configured allowed origin should be forwarded
// to the upstream. Default is true.
google.protobuf.BoolValue forward_not_matching_preflights = 13;
}
// [#next-free-field: 42]
@ -755,7 +763,8 @@ message RouteAction {
// collected for the shadow cluster making this feature useful for testing.
//
// During shadowing, the host/authority header is altered such that ``-shadow`` is appended. This is
// useful for logging. For example, ``cluster1`` becomes ``cluster1-shadow``.
// useful for logging. For example, ``cluster1`` becomes ``cluster1-shadow``. This behavior can be
// disabled by setting ``disable_shadow_host_suffix_append`` to ``true``.
//
// .. note::
//
@ -764,7 +773,7 @@ message RouteAction {
// .. note::
//
// Shadowing doesn't support Http CONNECT and upgrades.
// [#next-free-field: 6]
// [#next-free-field: 7]
message RequestMirrorPolicy {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.route.RouteAction.RequestMirrorPolicy";
@ -810,6 +819,9 @@ message RouteAction {
// Determines if the trace span should be sampled. Defaults to true.
google.protobuf.BoolValue trace_sampled = 4;
// Disables appending the ``-shadow`` suffix to the shadowed ``Host`` header. Defaults to ``false``.
bool disable_shadow_host_suffix_append = 6;
}
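For example, a route that mirrors all traffic to a shadow cluster while keeping the original ``Host`` header might look like the following sketch (cluster names are hypothetical):

    route:
      cluster: primary_service
      request_mirror_policies:
      - cluster: shadow_service
        runtime_fraction:
          default_value: {numerator: 100}           # mirror 100% of requests
        disable_shadow_host_suffix_append: true     # keep Host as-is instead of appending "-shadow"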
// Specifies the route's hashing policy if the upstream cluster uses a hashing :ref:`load balancer
@ -895,7 +907,8 @@ message RouteAction {
// The name of the URL query parameter that will be used to obtain the hash
// key. If the parameter is not present, no hash will be produced. Query
// parameter names are case-sensitive.
// parameter names are case-sensitive. If query parameters are repeated, only
// the first value will be considered.
string name = 1 [(validate.rules).string = {min_len: 1}];
}
@ -1150,7 +1163,9 @@ message RouteAction {
// Indicates that during forwarding, the host header will be swapped with
// the hostname of the upstream host chosen by the cluster manager. This
// option is applicable only when the destination cluster for a route is of
// type ``strict_dns`` or ``logical_dns``. Setting this to true with other cluster types
// type ``strict_dns`` or ``logical_dns``,
// or when :ref:`hostname <envoy_v3_api_field_config.endpoint.v3.Endpoint.hostname>`
// field is not empty. Setting this to true with other cluster types
// has no effect. Using this option will append the
// :ref:`config_http_conn_man_headers_x-forwarded-host` header if
// :ref:`append_x_forwarded_host <envoy_v3_api_field_config.route.v3.RouteAction.append_x_forwarded_host>`
@ -1204,7 +1219,6 @@ message RouteAction {
// :ref:`host_rewrite_path_regex <envoy_v3_api_field_config.route.v3.RouteAction.host_rewrite_path_regex>`)
// causes the original value of the host header, if any, to be appended to the
// :ref:`config_http_conn_man_headers_x-forwarded-host` HTTP header if it is different to the last value appended.
// This can be disabled by setting the runtime guard `envoy_reloadable_features_append_xfh_idempotent` to false.
bool append_x_forwarded_host = 38;
// Specifies the upstream timeout for the route. If not specified, the default is 15s. This
@ -2358,6 +2372,7 @@ message QueryParameterMatcher {
}
// HTTP Internal Redirect :ref:`architecture overview <arch_overview_internal_redirects>`.
// [#next-free-field: 6]
message InternalRedirectPolicy {
// An internal redirect is not handled, unless the number of previous internal redirects that a
// downstream request has encountered is lower than this value.
@ -2383,6 +2398,14 @@ message InternalRedirectPolicy {
// Allow internal redirect to follow a target URI with a different scheme than the value of
// x-forwarded-proto. The default is false.
bool allow_cross_scheme_redirect = 4;
// Specifies a list of headers, by name, to copy from the internal redirect into the subsequent
// request. If a header is specified here but not present in the redirect, it will be cleared in
// the subsequent request.
repeated string response_headers_to_copy = 5 [(validate.rules).repeated = {
unique: true
items {string {well_known_regex: HTTP_HEADER_NAME strict: false}}
}];
}
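A minimal sketch of a route that follows internal redirects and copies one hypothetical header from the redirect response into the follow-up request:

    route:
      cluster: some_service
      internal_redirect_policy:
        max_internal_redirects: 3
        redirect_response_codes: [302]
        response_headers_to_copy:
        - x-redirect-token   # cleared on the follow-up request if absent from the redirect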
// A simple wrapper for an HTTP filter config. This is intended to be used as a wrapper for the
@ -2401,6 +2424,8 @@ message FilterConfig {
bool is_optional = 2;
// If true, the filter is disabled in the route or virtual host and the ``config`` field is ignored.
// See :ref:`route based filter chain <arch_overview_http_filters_route_based_filter_chain>`
// for more details.
//
// .. note::
//

View File

@ -4,6 +4,7 @@ package envoy.config.tap.v3;
import "envoy/config/common/matcher/v3/matcher.proto";
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/extension.proto";
import "envoy/config/core/v3/grpc_service.proto";
import "envoy/config/route/v3/route_components.proto";
@ -183,7 +184,7 @@ message OutputConfig {
}
// Tap output sink configuration.
// [#next-free-field: 6]
// [#next-free-field: 7]
message OutputSink {
option (udpa.annotations.versioning).previous_message_type =
"envoy.service.tap.v2alpha.OutputSink";
@ -259,6 +260,9 @@ message OutputSink {
// been configured to receive tap configuration from some other source (e.g., static
// file, XDS, etc.) configuring the buffered admin output type will fail.
BufferedAdminSink buffered_admin = 5;
// Tap output sink that will be defined by an extension type.
core.v3.TypedExtensionConfig custom_sink = 6;
}
}

View File

@ -2,6 +2,8 @@ syntax = "proto3";
package envoy.config.trace.v3;
import "google/protobuf/duration.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
@ -16,6 +18,13 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Datadog tracer]
// Configuration for the Remote Configuration feature.
message DatadogRemoteConfig {
// Frequency at which new configuration updates are queried.
// If no value is provided, the default value is delegated to the Datadog tracing library.
google.protobuf.Duration polling_interval = 1;
}
// Configuration for the Datadog tracer.
// [#extension: envoy.tracers.datadog]
message DatadogConfig {
@ -31,4 +40,11 @@ message DatadogConfig {
// Optional hostname to use when sending spans to the collector_cluster. Useful for collectors
// that require a specific hostname. Defaults to :ref:`collector_cluster <envoy_v3_api_field_config.trace.v3.DatadogConfig.collector_cluster>` above.
string collector_hostname = 3;
// Enables and configures remote configuration.
// Remote Configuration allows the tracer to be configured from Datadog's user interface.
// This feature can drastically increase the number of connections to the Datadog Agent.
// Each tracer regularly polls for configuration updates, and the number of tracers is the product
// of the number of listeners and worker threads.
DatadogRemoteConfig remote_config = 4;
}
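Assuming the tracer is configured as the tracing provider on the HTTP connection manager and a ``datadog_agent`` cluster exists, enabling Remote Configuration could look roughly like:

    tracing:
      provider:
        name: envoy.tracers.datadog
        typed_config:
          "@type": type.googleapis.com/envoy.config.trace.v3.DatadogConfig
          collector_cluster: datadog_agent
          service_name: front-envoy
          remote_config:
            polling_interval: 30s   # omit to defer to the Datadog tracing library's default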

View File

@ -4,6 +4,7 @@ package envoy.config.trace.v3;
import "google/protobuf/struct.proto";
import "envoy/annotations/deprecation.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
@ -29,9 +30,18 @@ message DynamicOtConfig {
// Dynamic library implementing the `OpenTracing API
// <https://github.com/opentracing/opentracing-cpp>`_.
string library = 1 [(validate.rules).string = {min_len: 1}];
string library = 1 [
deprecated = true,
(validate.rules).string = {min_len: 1},
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The configuration to use when creating a tracer from the given dynamic
// library.
google.protobuf.Struct config = 2;
google.protobuf.Struct config = 2 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
}

View File

@ -48,59 +48,109 @@ message OpenCensusConfig {
reserved 7;
// Configures tracing, e.g. the sampler, max number of annotations, etc.
opencensus.proto.trace.v1.TraceConfig trace_config = 1;
opencensus.proto.trace.v1.TraceConfig trace_config = 1 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the stdout exporter if set to true. This is intended for debugging
// purposes.
bool stdout_exporter_enabled = 2;
bool stdout_exporter_enabled = 2 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the Stackdriver exporter if set to true. The project_id must also
// be set.
bool stackdriver_exporter_enabled = 3;
bool stackdriver_exporter_enabled = 3 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The Cloud project_id to use for Stackdriver tracing.
string stackdriver_project_id = 4;
string stackdriver_project_id = 4 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// (optional) By default, the Stackdriver exporter will connect to production
// Stackdriver. If stackdriver_address is non-empty, it will instead connect
// to this address, which is in the gRPC format:
// https://github.com/grpc/grpc/blob/master/doc/naming.md
string stackdriver_address = 10;
string stackdriver_address = 10 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// (optional) The gRPC server that hosts Stackdriver tracing service. Only
// Google gRPC is supported. If :ref:`target_uri <envoy_v3_api_field_config.core.v3.GrpcService.GoogleGrpc.target_uri>`
// is not provided, the default production Stackdriver address will be used.
core.v3.GrpcService stackdriver_grpc_service = 13;
core.v3.GrpcService stackdriver_grpc_service = 13 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the Zipkin exporter if set to true. The url and service name must
// also be set. This is deprecated, prefer to use Envoy's :ref:`native Zipkin
// tracer <envoy_v3_api_msg_config.trace.v3.ZipkinConfig>`.
bool zipkin_exporter_enabled = 5
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
bool zipkin_exporter_enabled = 5 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The URL to Zipkin, e.g. "http://127.0.0.1:9411/api/v2/spans". This is
// deprecated, prefer to use Envoy's :ref:`native Zipkin tracer
// <envoy_v3_api_msg_config.trace.v3.ZipkinConfig>`.
string zipkin_url = 6
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
string zipkin_url = 6 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the OpenCensus Agent exporter if set to true. The ocagent_address or
// ocagent_grpc_service must also be set.
bool ocagent_exporter_enabled = 11;
bool ocagent_exporter_enabled = 11 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The address of the OpenCensus Agent, if its exporter is enabled, in gRPC
// format: https://github.com/grpc/grpc/blob/master/doc/naming.md
// [#comment:TODO: deprecate this field]
string ocagent_address = 12;
string ocagent_address = 12 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// (optional) The gRPC server hosted by the OpenCensus Agent. Only Google gRPC is supported.
// This is only used if the ocagent_address is left empty.
core.v3.GrpcService ocagent_grpc_service = 14;
core.v3.GrpcService ocagent_grpc_service = 14 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// List of incoming trace context headers we will accept. First one found
// wins.
repeated TraceContext incoming_trace_context = 8;
repeated TraceContext incoming_trace_context = 8 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// List of outgoing trace context headers we will produce.
repeated TraceContext outgoing_trace_context = 9;
repeated TraceContext outgoing_trace_context = 9 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
}

View File

@ -2,8 +2,11 @@ syntax = "proto3";
package envoy.config.trace.v3;
import "envoy/config/core/v3/extension.proto";
import "envoy/config/core/v3/grpc_service.proto";
import "envoy/config/core/v3/http_service.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.config.trace.v3";
@ -16,13 +19,42 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// Configuration for the OpenTelemetry tracer.
// [#extension: envoy.tracers.opentelemetry]
// [#next-free-field: 6]
message OpenTelemetryConfig {
// The upstream gRPC cluster that will receive OTLP traces.
// Note that the tracer drops traces if the server does not read data fast enough.
// This field can be left empty to disable reporting traces to the collector.
core.v3.GrpcService grpc_service = 1;
// This field can be left empty to disable reporting traces to the gRPC service.
// Only one of ``grpc_service``, ``http_service`` may be used.
core.v3.GrpcService grpc_service = 1
[(udpa.annotations.field_migrate).oneof_promotion = "otlp_exporter"];
// The upstream HTTP cluster that will receive OTLP traces.
// This field can be left empty to disable reporting traces to the HTTP service.
// Only one of ``grpc_service``, ``http_service`` may be used.
//
// .. note::
//
// The ``request_headers_to_add`` property in the OTLP HTTP exporter service
// does not support the :ref:`format specifier <config_access_log_format>` as used for
// :ref:`HTTP access logging <config_access_log>`.
// The values configured are added as HTTP headers on the OTLP export request
// without any formatting applied.
core.v3.HttpService http_service = 3
[(udpa.annotations.field_migrate).oneof_promotion = "otlp_exporter"];
// The name for the service. This will be populated in the ResourceSpan Resource attributes.
// If it is not provided, it will default to "unknown_service:envoy".
string service_name = 2;
// An ordered list of resource detectors
// [#extension-category: envoy.tracers.opentelemetry.resource_detectors]
repeated core.v3.TypedExtensionConfig resource_detectors = 4;
// Specifies the sampler to be used by the OpenTelemetry tracer.
// The configured sampler implements the Sampler interface defined by the OpenTelemetry specification.
// This field can be left empty. In this case, the default Envoy sampling decision is used.
//
// See: `OpenTelemetry sampler specification <https://opentelemetry.io/docs/specs/otel/trace/sdk/#sampler>`_
// [#extension-category: envoy.tracers.opentelemetry.samplers]
core.v3.TypedExtensionConfig sampler = 5;
}
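A hedged sketch of the new HTTP exporter path, configured as the tracing provider on the HTTP connection manager (the cluster name, endpoint, and header are placeholders):

    tracing:
      provider:
        name: envoy.tracers.opentelemetry
        typed_config:
          "@type": type.googleapis.com/envoy.config.trace.v3.OpenTelemetryConfig
          service_name: front-envoy
          http_service:
            http_uri:
              uri: http://otel-collector:4318/v1/traces
              cluster: otel_collector
              timeout: 0.250s
            request_headers_to_add:
            - header: {key: x-api-key, value: static-key}   # sent verbatim; format specifiers are not expanded

Only one of ``grpc_service`` or ``http_service`` would be set at a time.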

View File

@ -22,9 +22,9 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: SkyWalking tracer]
// Configuration for the SkyWalking tracer. Please note that if SkyWalking tracer is used as the
// provider of http tracer, then
// :ref:`start_child_span <envoy_v3_api_field_extensions.filters.http.router.v3.Router.start_child_span>`
// in the router must be set to true to get the correct topology and tracing data. Moreover, SkyWalking
// provider of tracing, then
// :ref:`spawn_upstream_span <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.Tracing.spawn_upstream_span>`
// in the tracing config must be set to true to get the correct topology and tracing data. Moreover, SkyWalking
// Tracer does not currently support the SkyWalking extension header (``sw8-x``).
// [#extension: envoy.tracers.skywalking]
message SkyWalkingConfig {

View File

@ -75,12 +75,17 @@ message ZipkinConfig {
//
// * The Envoy Proxy is used as gateway or ingress.
// * The Envoy Proxy is used as sidecar but inbound traffic capturing or outbound traffic capturing is disabled.
// * Any case that the `start_child_span of router <envoy_v3_api_field_extensions.filters.http.router.v3.Router.start_child_span>` is set to true.
// * Any case that the :ref:`start_child_span of router <envoy_v3_api_field_extensions.filters.http.router.v3.Router.start_child_span>` is set to true.
//
// .. attention::
//
// If this is set to true, then the
// :ref:`start_child_span of router <envoy_v3_api_field_extensions.filters.http.router.v3.Router.start_child_span>`
// SHOULD be set to true also to ensure the correctness of trace chain.
bool split_spans_for_request = 7;
//
// Both this field and ``start_child_span`` are deprecated by the
// :ref:`spawn_upstream_span <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.Tracing.spawn_upstream_span>`.
// Please use the ``spawn_upstream_span`` field to control span creation.
bool split_spans_for_request = 7
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
}

View File

@ -0,0 +1,31 @@
syntax = "proto3";
package envoy.config.upstream.local_address_selector.v3;
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.config.upstream.local_address_selector.v3";
option java_outer_classname = "DefaultLocalAddressSelectorProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/config/upstream/local_address_selector/v3;local_address_selectorv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Default Local Address Selector]
// [#extension: envoy.upstream.local_address_selector.default_local_address_selector]
// Default implementation of a local address selector. This implementation is
// used if :ref:`local_address_selector
// <envoy_v3_api_field_config.core.v3.BindConfig.local_address_selector>` is not
// specified.
// This implementation supports the specification of only one address in
// :ref:`extra_source_addresses
// <envoy_v3_api_field_config.core.v3.BindConfig.extra_source_addresses>` which
// is appended to the address specified in the
// :ref:`source_address <envoy_v3_api_field_config.core.v3.BindConfig.source_address>`
// field. The extra address should have a different IP version than the address in the
// ``source_address`` field. The address whose IP version matches the IP version of the
// target host's address will be used as the bind address.
// If no address with a matching IP version is found, the address in the ``source_address`` field will
// be returned.
message DefaultLocalAddressSelector {
}

View File

@ -44,6 +44,9 @@ enum AccessLogType {
UpstreamPeriodic = 8;
UpstreamEnd = 9;
DownstreamTunnelSuccessfullyEstablished = 10;
UdpTunnelUpstreamConnected = 11;
UdpPeriodic = 12;
UdpSessionEnd = 13;
}
message TCPAccessLogEntry {
@ -268,7 +271,7 @@ message AccessLogCommon {
}
// Flags indicating occurrences during request/response processing.
// [#next-free-field: 28]
// [#next-free-field: 29]
message ResponseFlags {
option (udpa.annotations.versioning).previous_message_type =
"envoy.data.accesslog.v2.ResponseFlags";
@ -369,6 +372,9 @@ message ResponseFlags {
// Indicates a DNS resolution failed.
bool dns_resolution_failure = 27;
// Indicates a downstream remote codec level reset was received on the stream
bool downstream_remote_reset = 28;
}
// Properties of a negotiated TLS connection.
@ -406,6 +412,9 @@ message TLSProperties {
// The subject field of the certificate.
string subject = 2;
// The issuer field of the certificate.
string issuer = 3;
}
// Version of TLS that was negotiated.

View File

@ -35,7 +35,7 @@ enum HealthCheckerType {
THRIFT = 4;
}
// [#next-free-field: 12]
// [#next-free-field: 13]
message HealthCheckEvent {
option (udpa.annotations.versioning).previous_message_type =
"envoy.data.core.v2alpha.HealthCheckEvent";
@ -55,6 +55,12 @@ message HealthCheckEvent {
// Host addition.
HealthCheckAddHealthy add_healthy_event = 5;
// A health check was successful. Note: a host will be considered healthy either on its first ever
// health check, or once the healthy threshold is reached. This kind of event
// indicates that a health check was successful, but does not indicate that the host is
// considered healthy. A host is considered healthy only when a HealthCheckAddHealthy event is sent.
HealthCheckSuccessful successful_health_check_event = 12;
// Host failure.
HealthCheckFailure health_check_failure_event = 7;
@ -93,6 +99,9 @@ message HealthCheckAddHealthy {
bool first_check = 1;
}
message HealthCheckSuccessful {
}
message HealthCheckFailure {
option (udpa.annotations.versioning).previous_message_type =
"envoy.data.core.v2alpha.HealthCheckFailure";

View File

@ -0,0 +1,24 @@
syntax = "proto3";
package envoy.data.core.v3;
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.data.core.v3";
option java_outer_classname = "TlvMetadataProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/data/core/v3;corev3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Proxy Protocol Filter Typed Metadata]
// PROXY protocol filter typed metadata.
message TlvsMetadata {
// Typed metadata for the :ref:`Proxy protocol filter <envoy_v3_api_msg_extensions.filters.listener.proxy_protocol.v3.ProxyProtocol>` that represents a map of TLVs.
// Each entry in the map consists of a key which corresponds to a configured
// :ref:`rule key <envoy_v3_api_field_extensions.filters.listener.proxy_protocol.v3.ProxyProtocol.KeyValuePair.key>` and a value (the TLV value in bytes).
// When the runtime flag ``envoy.reloadable_features.use_typed_metadata_in_proxy_protocol_listener`` is enabled,
// the :ref:`Proxy protocol filter <envoy_v3_api_msg_extensions.filters.listener.proxy_protocol.v3.ProxyProtocol>`
// will populate typed metadata in addition to regular metadata. By default, the filter populates both typed and untyped metadata.
map<string, bytes> typed_metadata = 1;
}

View File

@ -128,7 +128,15 @@ message DnsTable {
option (udpa.annotations.versioning).previous_message_type =
"envoy.data.dns.v2alpha.DnsTable.DnsVirtualDomain";
// A domain name for which Envoy will respond to query requests
// A domain name for which Envoy will respond to query requests.
// Wildcard records are supported on the first label only, e.g. ``*.example.com`` or ``*.subdomain.example.com``.
// Names such as ``*example.com``, ``subdomain.*.example.com``, ``*subdomain.example.com``, etc.
// are not valid wildcard names and the asterisk will be interpreted as a literal ``*`` character.
// Wildcard records match subdomains at any level, e.g. ``*.example.com`` will match
// ``foo.example.com``, ``bar.foo.example.com``, ``baz.bar.foo.example.com``, etc. In case there are multiple
// wildcard records, the longest wildcard match will be used, e.g. if there are wildcard records for
// ``*.example.com`` and ``*.foo.example.com`` and the query is for ``bar.foo.example.com``, the latter will be used.
// Specific records will always take precedence over wildcard records.
string name = 1 [(validate.rules).string = {min_len: 1 well_known_regex: HTTP_HEADER_NAME}];
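To make the matching rules above concrete, a sketch of a DNS table fragment with wildcard and specific records (addresses are placeholders, and the ``endpoint``/``address_list`` layout follows the rest of this file):

    virtual_domains:
    - name: "*.example.com"          # matches foo.example.com, bar.foo.example.com, ...
      endpoint:
        address_list:
          address: [10.0.0.1]
    - name: "*.foo.example.com"      # longer wildcard: wins for bar.foo.example.com
      endpoint:
        address_list:
          address: [10.0.0.2]
    - name: api.example.com          # specific record: always beats the wildcards
      endpoint:
        address_list:
          address: [10.0.0.3]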
// The configuration containing the method to determine the address of this endpoint

View File

@ -2,6 +2,8 @@ syntax = "proto3";
package envoy.data.tap.v3;
import "envoy/config/core/v3/address.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
@ -36,3 +38,14 @@ message Body {
// <envoy_v3_api_field_config.tap.v3.OutputConfig.max_buffered_tx_bytes>` settings.
bool truncated = 3;
}
// Connection properties.
message Connection {
option (udpa.annotations.versioning).previous_message_type = "envoy.data.tap.v2alpha.Connection";
// Local address.
config.core.v3.Address local_address = 1;
// Remote address.
config.core.v3.Address remote_address = 2;
}

View File

@ -5,6 +5,8 @@ package envoy.data.tap.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/data/tap/v3/common.proto";
import "google/protobuf/timestamp.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
@ -34,6 +36,9 @@ message HttpBufferedTrace {
// Message trailers.
repeated config.core.v3.HeaderValue trailers = 3;
// The timestamp after receiving the message headers.
google.protobuf.Timestamp headers_received_time = 4;
}
// Request message.
@ -41,6 +46,9 @@ message HttpBufferedTrace {
// Response message.
Message response = 2;
// Downstream connection.
Connection downstream_connection = 3;
}
// A streamed HTTP trace segment. Multiple segments make up a full trace.

View File

@ -2,7 +2,6 @@ syntax = "proto3";
package envoy.data.tap.v3;
import "envoy/config/core/v3/address.proto";
import "envoy/data/tap/v3/common.proto";
import "google/protobuf/timestamp.proto";
@ -20,17 +19,6 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// Trace format for the tap transport socket extension. This dumps plain text read/write
// sequences on a socket.
// Connection properties.
message Connection {
option (udpa.annotations.versioning).previous_message_type = "envoy.data.tap.v2alpha.Connection";
// Local address.
config.core.v3.Address local_address = 2;
// Remote address.
config.core.v3.Address remote_address = 3;
}
// Event in a socket trace.
message SocketEvent {
option (udpa.annotations.versioning).previous_message_type = "envoy.data.tap.v2alpha.SocketEvent";

View File

@ -21,7 +21,8 @@ message ExpressionFilter {
// Expressions are based on the set of Envoy :ref:`attributes <arch_overview_attributes>`.
// The provided expression must evaluate to true for logging (expression errors are considered false).
// Examples:
// - ``response.code >= 400``
// - ``(connection.mtls && request.headers['x-log-mtls'] == 'true') || request.url_path.contains('v1beta3')``
//
// * ``response.code >= 400``
// * ``(connection.mtls && request.headers['x-log-mtls'] == 'true') || request.url_path.contains('v1beta3')``
string expression = 1;
}

View File

@ -0,0 +1,94 @@
syntax = "proto3";
package envoy.extensions.access_loggers.fluentd.v3;
import "envoy/config/core/v3/backoff.proto";
import "envoy/config/core/v3/extension.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/wrappers.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.access_loggers.fluentd.v3";
option java_outer_classname = "FluentdProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/access_loggers/fluentd/v3;fluentdv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Fluentd access log]
// Configuration for the *envoy.access_loggers.fluentd* :ref:`AccessLog <envoy_v3_api_msg_config.accesslog.v3.AccessLog>`.
// This access log extension will send the emitted access logs over a TCP connection to an upstream that is accepting
// the Fluentd Forward Protocol as described in: `Fluentd Forward Protocol Specification
// <https://github.com/fluent/fluentd/wiki/Forward-Protocol-Specification-v1>`_.
// [#extension: envoy.access_loggers.fluentd]
// [#next-free-field: 9]
message FluentdAccessLogConfig {
message RetryOptions {
// The number of times the logger will attempt to connect to the upstream during reconnects.
// By default, there is no limit. The logger will attempt to reconnect to the upstream each time
// connecting to the upstream fails or the upstream connection is closed for any reason.
google.protobuf.UInt32Value max_connect_attempts = 1;
// Sets the backoff strategy. If this value is not set, the default base backoff interval is 500
// milliseconds and the default max backoff interval is 5 seconds (10 times the base interval).
config.core.v3.BackoffStrategy backoff_options = 2;
}
// The upstream cluster to connect to for streaming the Fluentd messages.
string cluster = 1 [(validate.rules).string = {min_len: 1}];
// A tag is a '.'-separated string (e.g. ``log.type``) used to categorize events.
// See: https://github.com/fluent/fluentd/wiki/Forward-Protocol-Specification-v1#message-modes
string tag = 2 [(validate.rules).string = {min_len: 1}];
// The prefix to use when emitting :ref:`statistics <config_access_log_stats>`.
string stat_prefix = 3 [(validate.rules).string = {min_len: 1}];
// Interval for flushing access logs to the TCP stream. The logger will flush requests every time
// this interval elapses, or when the batch size limit is hit, whichever comes first. Defaults to
// 1 second.
google.protobuf.Duration buffer_flush_interval = 4 [(validate.rules).duration = {gt {}}];
// Soft size limit in bytes for the access log entries buffer. The logger will buffer requests until
// this limit is hit, or until the flush interval elapses, whichever comes first. When the buffer
// limit is hit, the logger will immediately flush the buffer contents. Setting it to zero effectively
// disables the batching. Defaults to 16384.
google.protobuf.UInt32Value buffer_size_bytes = 5;
// A struct that represents the record that is sent for each log entry.
// https://github.com/fluent/fluentd/wiki/Forward-Protocol-Specification-v1#entry
// Values are rendered as strings, numbers, or boolean values as appropriate.
// Nested JSON objects may be produced by some command operators (e.g. FILTER_STATE or DYNAMIC_METADATA).
// See the :ref:`format string<config_access_log_format_strings>` documentation for details on specific command operators.
//
// .. validated-code-block:: yaml
// :type-name: envoy.extensions.access_loggers.fluentd.v3.FluentdAccessLogConfig
//
// record:
// status: "%RESPONSE_CODE%"
// message: "%LOCAL_REPLY_BODY%"
//
// The following msgpack record would be created:
//
// .. code-block:: json
//
// {
// "status": 500,
// "message": "My error message"
// }
google.protobuf.Struct record = 6 [(validate.rules).message = {required: true}];
// Optional retry options, in case the upstream connection has failed. If this field is not set, the default values will be applied,
// as specified in the :ref:`RetryOptions <envoy_v3_api_msg_extensions.access_loggers.fluentd.v3.FluentdAccessLogConfig.RetryOptions>`
// configuration.
RetryOptions retry_options = 7;
// Specifies a collection of Formatter plugins that can be called from the access log configuration.
// See the formatters extensions documentation for details.
// [#extension-category: envoy.formatter]
repeated config.core.v3.TypedExtensionConfig formatters = 8;
}
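Putting the fields together, a hedged end-to-end sketch of this access logger (the ``fluentd`` cluster name, tag, and record keys are assumptions):

    access_log:
    - name: envoy.access_loggers.fluentd
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.access_loggers.fluentd.v3.FluentdAccessLogConfig
        cluster: fluentd              # upstream speaking the Fluentd Forward protocol
        tag: envoy.http
        stat_prefix: fluentd
        buffer_flush_interval: 1s
        buffer_size_bytes: 16384
        record:
          status: "%RESPONSE_CODE%"
          path: "%REQ(:PATH)%"
        retry_options:
          max_connect_attempts: 5
          backoff_options:
            base_interval: 0.5s
            max_interval: 5s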

View File

@ -2,6 +2,7 @@ syntax = "proto3";
package envoy.extensions.access_loggers.open_telemetry.v3;
import "envoy/config/core/v3/extension.proto";
import "envoy/extensions/access_loggers/grpc/v3/als.proto";
import "opentelemetry/proto/common/v1/common.proto";
@ -22,7 +23,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// populate `opentelemetry.proto.collector.v1.logs.ExportLogsServiceRequest.resource_logs <https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/collector/logs/v1/logs_service.proto>`_.
// In addition, the request start time is set in the dedicated field.
// [#extension: envoy.access_loggers.open_telemetry]
// [#next-free-field: 6]
// [#next-free-field: 8]
message OpenTelemetryAccessLogConfig {
// [#comment:TODO(itamarkam): add 'filter_state_objects_to_log' to logs.]
grpc.v3.CommonGrpcAccessLogConfig common_config = 1 [(validate.rules).message = {required: true}];
@ -46,4 +47,14 @@ message OpenTelemetryAccessLogConfig {
// See 'attributes' in the LogResource proto for more details.
// Example: ``attributes { values { key: "user_agent" value { string_value: "%REQ(USER-AGENT)%" } } }``.
opentelemetry.proto.common.v1.KeyValueList attributes = 3;
// Optional. Additional prefix to use on OpenTelemetry access logger stats. If empty, the stats will be rooted at
// ``access_logs.open_telemetry_access_log.``. If non-empty, stats will be rooted at
// ``access_logs.open_telemetry_access_log.<stat_prefix>.``.
string stat_prefix = 6;
// Specifies a collection of Formatter plugins that can be called from the access log configuration.
// See the formatters extensions documentation for details.
// [#extension-category: envoy.formatter]
repeated config.core.v3.TypedExtensionConfig formatters = 7;
}

View File

@ -4,8 +4,6 @@ package envoy.extensions.bootstrap.internal_listener.v3;
import "google/protobuf/wrappers.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -14,7 +12,6 @@ option java_outer_classname = "InternalListenerProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/bootstrap/internal_listener/v3;internal_listenerv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
option (xds.annotations.v3.file_status).work_in_progress = true;
// [#protodoc-title: Internal Listener]
// Internal Listener :ref:`overview <config_internal_listener>`.

View File

@ -60,7 +60,7 @@ message ClusterConfig {
// resolved address for the new connection matches the peer address of the connection and
// the TLS certificate is also valid for the new hostname. For example, if a connection
// has previously been established to foo.example.com at IP 1.2.3.4 with a certificate
// that is valid for `*.example.com`, then this connection could be used for requests to
// that is valid for ``*.example.com``, then this connection could be used for requests to
// bar.example.com if that also resolved to 1.2.3.4.
//
// .. note::

View File

@ -67,9 +67,9 @@ message DnsCacheConfig {
// The minimum rate that DNS resolution will occur. Per ``dns_refresh_rate``, once a host is
// resolved, the DNS TTL will be used, with a minimum set by ``dns_min_refresh_rate``.
// ``dns_min_refresh_rate`` defaults to 5s and must also be >= 5s.
// ``dns_min_refresh_rate`` defaults to 5s and must also be >= 1s.
google.protobuf.Duration dns_min_refresh_rate = 14
[(validate.rules).duration = {gte {seconds: 5}}];
[(validate.rules).duration = {gte {seconds: 1}}];
// The TTL for hosts that are unused. Hosts that have not been used in the configured time
// interval will be purged. If not specified defaults to 5m.

View File

@ -5,7 +5,6 @@ package envoy.extensions.common.matching.v3;
import "envoy/config/common/matcher/v3/matcher.proto";
import "envoy/config/core/v3/extension.proto";
import "xds/annotations/v3/status.proto";
import "xds/type/matcher/v3/matcher.proto";
import "envoy/annotations/deprecation.proto";
@ -24,8 +23,6 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// decorating an existing extension with a matcher, which can be used to match against
// relevant protocol data.
message ExtensionWithMatcher {
option (xds.annotations.v3.message_status).work_in_progress = true;
// The associated matcher. This is deprecated in favor of xds_matcher.
config.common.matcher.v3.Matcher matcher = 1
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];

View File

@ -130,3 +130,15 @@ message LocalRateLimitDescriptor {
// Token Bucket algorithm for local ratelimiting.
type.v3.TokenBucket token_bucket = 2 [(validate.rules).message = {required: true}];
}
// Configuration used to enable local cluster level rate limiting where the token buckets
// will be shared across all the Envoy instances in the local cluster.
// A share will be calculated based on the membership of the local cluster dynamically
// and the configuration. When the limiter refilling the token bucket, the share will be
// applied. By default, the token bucket will be shared evenly.
//
// See :ref:`local cluster name
// <envoy_v3_api_field_config.bootstrap.v3.ClusterManager.local_cluster_name>` for more context
// about local cluster.
message LocalClusterRateLimit {
}

View File

@ -0,0 +1,65 @@
syntax = "proto3";
package envoy.extensions.filters.common.set_filter_state.v3;
import "envoy/config/core/v3/substitution_format_string.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.common.set_filter_state.v3";
option java_outer_classname = "ValueProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/common/set_filter_state/v3;set_filter_statev3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Set-Filter-State filter state value]
// A filter state key and value pair.
// [#next-free-field: 7]
message FilterStateValue {
enum SharedWithUpstream {
// Object is not shared with the upstream internal connections.
NONE = 0;
// Object is shared with the upstream internal connection.
ONCE = 1;
// Object is shared with the upstream internal connection and any internal connection upstream from it.
TRANSITIVE = 2;
}
oneof key {
option (validate.required) = true;
// Filter state object key. The key is used to lookup the object factory, unless :ref:`factory_key
// <envoy_v3_api_field_extensions.filters.common.set_filter_state.v3.FilterStateValue.factory_key>` is set. See
// :ref:`the well-known filter state keys <well_known_filter_state>` for a list of valid object keys.
string object_key = 1 [(validate.rules).string = {min_len: 1}];
}
// Optional filter object factory lookup key. See :ref:`the well-known filter state keys <well_known_filter_state>`
// for a list of valid factory keys.
string factory_key = 6;
oneof value {
option (validate.required) = true;
// Uses the :ref:`format string <config_access_log_format_strings>` to
// instantiate the filter state object value.
config.core.v3.SubstitutionFormatString format_string = 2;
}
// If marked as read-only, the filter state key value is locked, and cannot
// be overridden by any filter, including this filter.
bool read_only = 3;
// Configures the object to be shared with the upstream internal connections. See :ref:`internal upstream
// transport <config_internal_upstream_transport>` for more details on the filter state sharing with
// the internal connections.
SharedWithUpstream shared_with_upstream = 4;
// Skip the update if the value evaluates to an empty string.
// This option can be used to supply multiple alternatives for the same filter state object key.
bool skip_if_empty = 5;
}
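As an illustration only, assuming the HTTP ``set_filter_state`` filter (which consumes these values via its ``on_request_headers`` list), setting a well-known filter state key from a request header might look like:

    http_filters:
    - name: envoy.filters.http.set_filter_state
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.http.set_filter_state.v3.Config
        on_request_headers:
        - object_key: envoy.network.upstream_server_name   # well-known key controlling upstream SNI
          format_string:
            text_format_source:
              inline_string: "%REQ(:AUTHORITY)%"
          shared_with_upstream: ONCE
          skip_if_empty: true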

View File

@ -4,6 +4,7 @@ package envoy.extensions.filters.http.alternate_protocols_cache.v3;
import "envoy/config/core/v3/protocol.proto";
import "envoy/annotations/deprecation.proto";
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.alternate_protocols_cache.v3";
@ -17,9 +18,8 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// Configuration for the alternate protocols cache HTTP filter.
// [#extension: envoy.filters.http.alternate_protocols_cache]
message FilterConfig {
// If set, causes the use of the alternate protocols cache, which is responsible for
// parsing and caching HTTP Alt-Svc headers. This enables the use of HTTP/3 for upstream
// servers that advertise supporting it.
// TODO(RyanTheOptimist): Make this field required when HTTP/3 is enabled via auto_http.
config.core.v3.AlternateProtocolsCacheOptions alternate_protocols_cache_options = 1;
// This field is ignored: the alternate protocols cache filter will use the
// cache for the cluster the request is routed to.
config.core.v3.AlternateProtocolsCacheOptions alternate_protocols_cache_options = 1
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
}

View File

@ -17,6 +17,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#extension: envoy.filters.http.aws_lambda]
// AWS Lambda filter config
// [#next-free-field: 7]
message Config {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.aws_lambda.v2alpha.Config";
@ -42,6 +43,51 @@ message Config {
// Determines the way to invoke the Lambda function.
InvocationMode invocation_mode = 3 [(validate.rules).enum = {defined_only: true}];
// Indicates that before signing headers, the host header will be swapped with
// this value. If not set or empty, the original host header value
// will be used and no rewrite will happen.
//
// Note: this rewrite affects both signing and host header forwarding. However, this
// option shouldn't be used with
// :ref:`HCM host rewrite <envoy_v3_api_field_config.route.v3.RouteAction.host_rewrite_literal>` given that the
// value set here would be used for signing whereas the value set in the HCM would be used
// for host header forwarding which is not the desired outcome.
// Changing the value of the host header can result in a different route being selected
// if an HTTP filter after the AWS Lambda filter re-evaluates the route (clears the route cache).
string host_rewrite = 4;
// Specifies the credentials profile to be used from the AWS credentials file.
// This parameter is optional. If set, it will override the value set in the AWS_PROFILE env variable and
// the provider chain is limited to the AWS credentials file Provider.
// If credentials configuration is provided, this configuration will be ignored.
// If this field is provided, then the default providers chain specified in the documentation will be ignored.
// (See :ref:`default credentials providers <config_http_filters_aws_lambda_credentials>`).
string credentials_profile = 5;
// Specifies the credentials to be used. This parameter is optional and if it is set,
// it will override other providers and will take precedence over credentials_profile.
// The provider chain is limited to the configuration credentials provider.
// If this field is provided, then the default providers chain specified in the documentation will be ignored.
// (See :ref:`default credentials providers <config_http_filters_aws_lambda_credentials>`).
//
// .. warning::
// Distributing the AWS credentials via this configuration should not be done in production.
Credentials credentials = 6;
}
// AWS Lambda Credentials config.
message Credentials {
// AWS access key id.
string access_key_id = 1 [(validate.rules).string = {min_len: 1}];
// AWS secret access key.
string secret_access_key = 2 [(validate.rules).string = {min_len: 1}];
// AWS session token.
// This parameter is optional. If it is set to an empty string it will not be considered in the request.
// It is required if temporary security credentials retrieved directly from AWS STS operations are used.
string session_token = 3;
}
// Per-route configuration for AWS Lambda. This can be useful when invoking a different Lambda function or a different

View File

@ -4,6 +4,8 @@ package envoy.extensions.filters.http.aws_request_signing.v3;
import "envoy/type/matcher/v3/string.proto";
import "google/protobuf/duration.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
import "validate/validate.proto";
@ -19,11 +21,29 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#extension: envoy.filters.http.aws_request_signing]
// Top level configuration for the AWS request signing filter.
// [#next-free-field: 6]
// [#next-free-field: 8]
message AwsRequestSigning {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.aws_request_signing.v2alpha.AwsRequestSigning";
enum SigningAlgorithm {
// Use SigV4 for signing
AWS_SIGV4 = 0;
// Use SigV4A for signing
AWS_SIGV4A = 1;
}
message QueryString {
// Optional expiration time for the query string parameters. As query string parameter based requests are replayable, in effect representing
// an API call that has already been authenticated, it is recommended to keep this expiration time as short as feasible.
// This value will default to 5 seconds and has a maximum value of 3600 seconds (1 hour).
google.protobuf.Duration expiration_time = 1 [(validate.rules).duration = {
lte {seconds: 3600}
gte {seconds: 1}
}];
}
// The `service namespace
// <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#genref-aws-service-namespaces>`_
// of the HTTP endpoint.
@ -31,11 +51,24 @@ message AwsRequestSigning {
// Example: s3
string service_name = 1 [(validate.rules).string = {min_len: 1}];
// The `region <https://docs.aws.amazon.com/general/latest/gr/rande.html>`_ hosting the HTTP
// endpoint.
// Optional region string. If region is not provided, the region will be retrieved from the environment
// or AWS configuration files. See :ref:`config_http_filters_aws_request_signing_region` for more details.
//
// When signing_algorithm is set to ``AWS_SIGV4`` the region is a standard AWS `region <https://docs.aws.amazon.com/general/latest/gr/rande.html>`_ string for the service
// hosting the HTTP endpoint.
//
// Example: us-west-2
string region = 2 [(validate.rules).string = {min_len: 1}];
//
// When signing_algorithm is set to ``AWS_SIGV4A`` the region is used as a region set.
//
// A region set is a comma-separated list of AWS regions, such as ``us-east-1,us-east-2``, the wildcard ``*``,
// or even region strings containing wildcards such as ``us-east-*``.
//
// Example: '*'
//
// By configuring a region set, a SigV4A signed request can be sent to multiple regions, rather than being
// valid for only a single region destination.
string region = 2;
// Indicates that before signing headers, the host header will be swapped with
// this value. If not set or empty, the original host header value
@ -63,6 +96,17 @@ message AwsRequestSigning {
// - exact: bar
// When applied, all headers that start with "x-envoy" and headers "foo" and "bar" will not be signed.
repeated type.matcher.v3.StringMatcher match_excluded_headers = 5;
// Optional Signing algorithm specifier, either ``AWS_SIGV4`` or ``AWS_SIGV4A``, defaulting to ``AWS_SIGV4``.
SigningAlgorithm signing_algorithm = 6;
// If set, use the query string to store the output of the SigV4 or SigV4A calculation, rather than HTTP headers. The ``Authorization`` header will not be modified if ``query_string``
// is configured.
//
// Example:
// query_string: {}
//
QueryString query_string = 7;
}
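A sketch combining the new SigV4A region set and query-string signing options (the service name and region pattern are placeholders):

    http_filters:
    - name: envoy.filters.http.aws_request_signing
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.http.aws_request_signing.v3.AwsRequestSigning
        service_name: vpc-lattice-svcs
        signing_algorithm: AWS_SIGV4A
        region: "us-east-*"            # region set: comma-separated regions or wildcards
        query_string:
          expiration_time: 60s         # keep short; signed query strings are replayable
        match_excluded_headers:
        - prefix: x-envoy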
message AwsRequestSigningPerRoute {

View File

@ -0,0 +1,52 @@
syntax = "proto3";
package envoy.extensions.filters.http.basic_auth.v3;
import "envoy/config/core/v3/base.proto";
import "udpa/annotations/sensitive.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.basic_auth.v3";
option java_outer_classname = "BasicAuthProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/basic_auth/v3;basic_authv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Basic Auth]
// Basic Auth :ref:`configuration overview <config_http_filters_basic_auth>`.
// [#extension: envoy.filters.http.basic_auth]
// Basic HTTP authentication.
//
// Example:
//
// .. code-block:: yaml
//
// users:
// inline_string: |-
// user1:{SHA}hashed_user1_password
// user2:{SHA}hashed_user2_password
//
message BasicAuth {
// Username-password pairs used to verify user credentials in the "Authorization" header.
// The value needs to be in the htpasswd format.
// See https://httpd.apache.org/docs/2.4/programs/htpasswd.html for reference.
config.core.v3.DataSource users = 1 [(udpa.annotations.sensitive) = true];
// This field specifies the name of the header used to forward the identity of a successfully authenticated user to
// the backend. The header will be added to the request with the username as the value.
//
// If it is not specified, the username will not be forwarded.
string forward_username_header = 2
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME strict: false}];
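// A minimal illustrative sketch (not from the upstream docs) combining both fields; the htpasswd file path
// and the ``x-authenticated-user`` header name are assumptions chosen for the example:
//
// .. code-block:: yaml
//
//   users:
//     filename: /etc/envoy/htpasswd
//   forward_username_header: x-authenticated-user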
}
// Extra settings that may be added to per-route configuration for
// a virtual host or a cluster.
message BasicAuthPerRoute {
// Username-password pairs for this route.
config.core.v3.DataSource users = 1
[(validate.rules).message = {required: true}, (udpa.annotations.sensitive) = true];
}

View File

@ -20,7 +20,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: HTTP Cache Filter]
// [#extension: envoy.filters.http.cache]
// [#next-free-field: 6]
// [#next-free-field: 7]
message CacheConfig {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.cache.v2alpha.CacheConfig";
@ -88,4 +88,9 @@ message CacheConfig {
// Max body size the cache filter will insert into a cache. 0 means unlimited (though the cache
// storage implementation may have its own limit beyond which it will reject insertions).
uint32 max_body_bytes = 4;
// By default, a ``cache-control: no-cache`` or ``pragma: no-cache`` header in the request
// causes the cache to validate with its upstream even if the lookup is a hit. Setting this
// to true will ignore these headers.
bool ignore_request_cache_control_header = 6;
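// A hedged sketch of how this flag might sit alongside the other fields shown above; the ``typed_config``
// storage selector and the ``SimpleHttpCacheConfig`` type URL are assumptions for illustration only:
//
// .. code-block:: yaml
//
//   typed_config:
//     "@type": type.googleapis.com/envoy.extensions.http.cache.simple_http_cache.v3.SimpleHttpCacheConfig
//   max_body_bytes: 1048576
//   ignore_request_cache_control_header: true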
}

View File

@ -2,11 +2,13 @@ syntax = "proto3";
package envoy.extensions.filters.http.composite.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/config_source.proto";
import "envoy/config/core/v3/extension.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.composite.v3";
option java_outer_classname = "CompositeProto";
@ -29,11 +31,40 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// :ref:`ExecuteFilterAction <envoy_v3_api_msg_extensions.filters.http.composite.v3.ExecuteFilterAction>`)
// which filter configuration to create and delegate to.
message Composite {
option (xds.annotations.v3.message_status).work_in_progress = true;
}
// Configuration for an extension configuration discovery service with name.
message DynamicConfig {
// The name of the extension configuration. It also serves as a resource name in ExtensionConfigDS.
string name = 1 [(validate.rules).string = {min_len: 1}];
// Configuration source specifier for an extension configuration discovery
// service. In case of a failure and without the default configuration,
// 500 (Internal Server Error) will be returned.
config.core.v3.ExtensionConfigSource config_discovery = 2;
}
// Composite match action (see :ref:`matching docs <arch_overview_matching_api>` for more info on match actions).
// This specifies the filter configuration of the filter that the composite filter should delegate filter interactions to.
message ExecuteFilterAction {
config.core.v3.TypedExtensionConfig typed_config = 1;
// Filter specific configuration which depends on the filter being
// instantiated. See the supported filters for further documentation.
// Only one of ``typed_config`` or ``dynamic_config`` can be set.
// [#extension-category: envoy.filters.http]
config.core.v3.TypedExtensionConfig typed_config = 1
[(udpa.annotations.field_migrate).oneof_promotion = "config_type"];
// Dynamic configuration of filter obtained via extension configuration discovery service.
// Only one of ``typed_config`` or ``dynamic_config`` can be set.
DynamicConfig dynamic_config = 2
[(udpa.annotations.field_migrate).oneof_promotion = "config_type"];
// Probability of the action execution. If not specified, this is 100%.
// This allows sampling behavior for the configured actions.
// For example, if
// :ref:`default_value <envoy_v3_api_field_config.core.v3.RuntimeFractionalPercent.default_value>`
// under the ``sample_percent`` is configured with 30%, a dice roll with that
// probability is done. The underlying action will only be executed if the
// dice roll comes up positive. Otherwise, the action is skipped.
config.core.v3.RuntimeFractionalPercent sample_percent = 3;
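// A hedged sketch of an ``ExecuteFilterAction`` using ``dynamic_config`` together with ``sample_percent``;
// the filter name ``delegated_fault_filter``, the ADS config source, the runtime key, and the fault filter
// type URL are illustrative assumptions, not upstream defaults:
//
// .. code-block:: yaml
//
//   dynamic_config:
//     name: delegated_fault_filter
//     config_discovery:
//       config_source:
//         ads: {}
//       type_urls:
//       - type.googleapis.com/envoy.extensions.filters.http.fault.v3.HTTPFault
//   sample_percent:
//     runtime_key: composite.sample_percent
//     default_value:
//       numerator: 30
//       denominator: HUNDRED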
}

View File

@ -126,18 +126,21 @@ message Compressor {
// ``<stat_prefix>.compressor.<compressor_library.name>.<compressor_library_stat_prefix>.*``.
ResponseDirectionConfig response_direction_config = 8;
// If true, chooses this compressor first to do compression when the q-values in `Accept-Encoding` are same.
// If true, chooses this compressor first to do compression when the q-values in ``Accept-Encoding`` are the same.
// If multiple compressor filters in the chain have choose_first set to true, the last such compressor will be chosen.
bool choose_first = 9;
}
// Per-route overrides of `ResponseDirectionConfig`. Anything added here should be optional,
// to allow overriding arbitrary subsets of configuration. Omitted fields must have no affect.
// Per-route overrides of ``ResponseDirectionConfig``. Anything added here should be optional,
// to allow overriding arbitrary subsets of configuration. Omitted fields must have no effect.
message ResponseDirectionOverrides {
// If set, overrides the filter-level
// :ref:`remove_accept_encoding_header<envoy_v3_api_field_extensions.filters.http.compressor.v3.Compressor.ResponseDirectionConfig.remove_accept_encoding_header>`.
google.protobuf.BoolValue remove_accept_encoding_header = 1;
}
// Per-route overrides. As per-route overrides are needed, they should be
// added here, mirroring the structure of `Compressor`. All fields should be
// added here, mirroring the structure of ``Compressor``. All fields should be
// optional, to allow overriding arbitrary subsets of configuration.
message CompressorOverrides {
// If present, response compression is enabled.
@ -152,7 +155,7 @@ message CompressorPerRoute {
// Overrides Compressor.runtime_enabled and CommonDirectionConfig.enabled.
bool disabled = 1 [(validate.rules).bool = {const: true}];
// Per-route overrides. Fields set here will override corresponding fields in `Compressor`.
// Per-route overrides. Fields set here will override corresponding fields in ``Compressor``.
CompressorOverrides overrides = 2;
}
}

View File

@ -13,10 +13,10 @@ option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/fil
option (udpa.annotations.file_status).package_version_status = ACTIVE;
option (xds.annotations.v3.file_status).work_in_progress = true;
// [#protodoc-title: Buf Connect to gRPC] Buf Connect to gRPC bridge
// [#protodoc-title: Connect RPC to gRPC] Connect RPC to gRPC bridge
// :ref:`configuration overview <config_http_filters_connect_grpc_bridge>`.
// [#extension: envoy.filters.http.connect_grpc_bridge]
// Buf Connect gRPC bridge filter configuration
// Connect RPC to gRPC bridge filter configuration
message FilterConfig {
}

View File

@ -33,7 +33,7 @@ message Cors {
// Per route configuration for the CORS filter. This configuration should be configured in the ``RouteConfiguration`` as ``typed_per_filter_config`` at some level to
// make the filter work.
// [#next-free-field: 10]
// [#next-free-field: 11]
message CorsPolicy {
// Specifies string patterns that match allowed origins. An origin is allowed if any of the
// string matchers match.
@ -79,4 +79,8 @@ message CorsPolicy {
//
// For more details, refer to https://developer.chrome.com/blog/private-network-access-preflight.
google.protobuf.BoolValue allow_private_network_access = 9;
// Specifies if preflight requests not matching the configured allowed origin should be forwarded
// to the upstream. Default is true.
google.protobuf.BoolValue forward_not_matching_preflights = 10;
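// A hedged sketch of a per-route CORS policy using this flag; ``allow_origin_string_match`` and
// ``allow_methods`` are taken from the wider ``CorsPolicy`` message and are assumptions as far as this
// excerpt is concerned:
//
// .. code-block:: yaml
//
//   allow_origin_string_match:
//   - suffix: example.com
//   allow_methods: "GET, POST, OPTIONS"
//   forward_not_matching_preflights: false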
}

View File

@ -0,0 +1,90 @@
syntax = "proto3";
package envoy.extensions.filters.http.credential_injector.v3;
import "envoy/config/core/v3/extension.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.credential_injector.v3";
option java_outer_classname = "CredentialInjectorProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/credential_injector/v3;credential_injectorv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
option (xds.annotations.v3.file_status).work_in_progress = true;
// [#protodoc-title: Credential Injector]
// Credential Injector :ref:`configuration overview <config_http_filters_credential_injector>`.
// [#extension: envoy.filters.http.credential_injector]
// Credential Injector injects credentials into outgoing HTTP requests. The filter configuration is used to retrieve the credentials, or
// they can be requested through the OAuth2 client credential grant. The credentials obtained are then injected into the Authorization header
// of the proxied HTTP requests, utilizing either the Basic or Bearer scheme.
//
// If the credential is not present or there was a failure injecting the credential, the request will fail with ``401 Unauthorized`` unless
// ``allow_request_without_credential`` is set to ``true``.
//
// Notice: This filter is intended to be used for workload authentication, which means that the identity associated with the inserted credential
// is considered the identity of the workload behind the Envoy proxy (in this case, Envoy is typically deployed as a sidecar alongside that
// workload). Please note that this filter does not handle end-user authentication. Its purpose is solely to authenticate the workload itself.
//
// Here is an example of CredentialInjector configuration with Generic credential, which injects an HTTP Basic Auth credential into the proxied requests.
//
// .. code-block:: yaml
//
// overwrite: true
// credential:
// name: generic_credential
// typed_config:
// "@type": type.googleapis.com/envoy.extensions.http.injected_credentials.generic.v3.Generic
// credential:
// name: credential
// sds_config:
// path_config_source:
// path: credential.yaml
// header: Authorization
//
// credential.yaml for Basic Auth:
//
// .. code-block:: yaml
//
// resources:
// - "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
// name: credential
// generic_secret:
// secret:
// inline_string: "Basic base64EncodedUsernamePassword"
//
// It can also be configured to inject a Bearer token into the proxied requests.
//
// credential.yaml for Bearer Token:
//
// .. code-block:: yaml
//
// resources:
// - "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
// name: credential
// generic_secret:
// secret:
// inline_string: "Bearer myToken"
//
message CredentialInjector {
// Whether to overwrite the value if the injected header already exists.
// Defaults to false.
bool overwrite = 1;
// Whether to send the request to upstream if the credential is not present or if the credential injection
// to the request fails.
//
// By default, a request will fail with ``401 Unauthorized`` if the
// credential is not present or the injection of the credential to the request fails.
// If set to true, the request will be sent to upstream without the credential.
bool allow_request_without_credential = 2;
// The credential to inject into the proxied requests.
// [#extension-category: envoy.http.injected_credentials]
config.core.v3.TypedExtensionConfig credential = 3 [(validate.rules).message = {required: true}];
}

View File

@ -2,6 +2,7 @@ syntax = "proto3";
package envoy.extensions.filters.http.ext_authz.v3;
import "envoy/config/common/mutation_rules/v3/mutation_rules.proto";
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/config_source.proto";
import "envoy/config/core/v3/grpc_service.proto";
@ -10,6 +11,8 @@ import "envoy/type/matcher/v3/metadata.proto";
import "envoy/type/matcher/v3/string.proto";
import "envoy/type/v3/http_status.proto";
import "google/protobuf/wrappers.proto";
import "envoy/annotations/deprecation.proto";
import "udpa/annotations/sensitive.proto";
import "udpa/annotations/status.proto";
@ -26,10 +29,10 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// External Authorization :ref:`configuration overview <config_http_filters_ext_authz>`.
// [#extension: envoy.filters.http.ext_authz]
// [#next-free-field: 19]
// [#next-free-field: 28]
message ExtAuthz {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.ext_authz.v2.ExtAuthz";
"envoy.config.filter.http.ext_authz.v3.ExtAuthz";
reserved 4;
@ -63,6 +66,12 @@ message ExtAuthz {
// <config_http_filters_ext_authz_stats>`.
bool failure_mode_allow = 2;
// When ``failure_mode_allow`` and ``failure_mode_allow_header_add`` are both set to true,
// ``x-envoy-auth-failure-mode-allowed: true`` will be added to request headers if the communication
// with the authorization service has failed, or if the authorization service has returned a
// HTTP 5xx error.
bool failure_mode_allow_header_add = 19;
// Enables filter to buffer the client request body and send it within the authorization request.
// A ``x-envoy-auth-partial-body: false|true`` metadata header will be added to the authorization
// request message indicating if the body data is partial.
@ -84,8 +93,26 @@ message ExtAuthz {
// or cannot be reached. The default status is HTTP 403 Forbidden.
type.v3.HttpStatus status_on_error = 7;
// When this is set to true, the filter will check the :ref:`ext_authz response
// <envoy_v3_api_msg_service.auth.v3.CheckResponse>` for invalid header &
// query parameter mutations. If the side stream response is invalid, it will send a local reply
// to the downstream request with status HTTP 500 Internal Server Error.
//
// Note that headers_to_remove & query_parameters_to_remove are validated, but invalid elements in
// those fields should not affect any headers & thus will not cause the filter to send a local
// reply.
//
// When set to false, any invalid mutations will be visible to the rest of envoy and may cause
// unexpected behavior.
//
// If you are using ext_authz with an untrusted ext_authz server, you should set this to true.
bool validate_mutations = 24;
// Specifies a list of metadata namespaces whose values, if present, will be passed to the
// ext_authz service. :ref:`filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.filter_metadata>` is passed as an opaque ``protobuf::Struct``.
// ext_authz service. The :ref:`filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.filter_metadata>`
// is passed as an opaque ``protobuf::Struct``.
//
// Please note that this field exclusively applies to the gRPC ext_authz service and has no effect on the HTTP service.
//
// For example, if the ``jwt_authn`` filter is used and :ref:`payload_in_metadata
// <envoy_v3_api_field_extensions.filters.http.jwt_authn.v3.JwtProvider.payload_in_metadata>` is set,
@ -99,13 +126,28 @@ message ExtAuthz {
repeated string metadata_context_namespaces = 8;
// Specifies a list of metadata namespaces whose values, if present, will be passed to the
// ext_authz service. :ref:`typed_filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.typed_filter_metadata>` is passed as an ``protobuf::Any``.
// ext_authz service. :ref:`typed_filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.typed_filter_metadata>`
// is passed as a ``protobuf::Any``.
//
// It works in a way similar to ``metadata_context_namespaces`` but allows envoy and external authz server to share the protobuf message definition
// in order to do a safe parsing.
// Please note that this field exclusively applies to the gRPC ext_authz service and has no effect on the HTTP service.
//
// It works in a way similar to ``metadata_context_namespaces`` but allows Envoy and ext_authz server to share
// the protobuf message definition in order to do a safe parsing.
//
repeated string typed_metadata_context_namespaces = 16;
// Specifies a list of route metadata namespaces whose values, if present, will be passed to the
// ext_authz service at :ref:`route_metadata_context <envoy_v3_api_field_service.auth.v3.AttributeContext.route_metadata_context>` in
// :ref:`CheckRequest <envoy_v3_api_field_service.auth.v3.CheckRequest.attributes>`.
// :ref:`filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.filter_metadata>` is passed as an opaque ``protobuf::Struct``.
repeated string route_metadata_context_namespaces = 21;
// Specifies a list of route metadata namespaces whose values, if present, will be passed to the
// ext_authz service at :ref:`route_metadata_context <envoy_v3_api_field_service.auth.v3.AttributeContext.route_metadata_context>` in
// :ref:`CheckRequest <envoy_v3_api_field_service.auth.v3.CheckRequest.attributes>`.
// :ref:`typed_filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.typed_filter_metadata>` is passed as a ``protobuf::Any``.
repeated string route_typed_metadata_context_namespaces = 22;
// Specifies if the filter is enabled.
//
// If :ref:`runtime_key <envoy_v3_api_field_config.core.v3.RuntimeFractionalPercent.runtime_key>` is specified,
@ -178,13 +220,76 @@ message ExtAuthz {
// <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.ExtAuthz.with_request_body>` setting),
// consequently the value of *Content-Length* of the authorization request reflects the size of
// its payload.
//
// .. note::
//
// 3. This can be overridden by the field ``disallowed_headers`` below. That is, if a header
// matches for both ``allowed_headers`` and ``disallowed_headers``, the header will NOT be sent.
type.matcher.v3.ListStringMatcher allowed_headers = 17;
// If set, specifically disallow any header in this list to be forwarded to the external
// authentication server. This overrides the above ``allowed_headers`` if a header matches both.
type.matcher.v3.ListStringMatcher disallowed_headers = 25;
// Specifies if the TLS session level details like SNI are sent to the external service.
//
// When this field is true, Envoy will include the SNI name used for TLSClientHello, if available, in the
// :ref:`tls_session<envoy_v3_api_field_service.auth.v3.AttributeContext.tls_session>`.
bool include_tls_session = 18;
// Whether to increment cluster statistics (e.g. cluster.<cluster_name>.upstream_rq_*) on authorization failure.
// Defaults to true.
google.protobuf.BoolValue charge_cluster_response_stats = 20;
// Whether to encode the raw headers (i.e. unsanitized values & unconcatenated multi-line headers)
// in the authentication request. Works with both HTTP and gRPC clients.
//
// When this is set to true, header values are not sanitized. Headers with the same key will also
// not be combined into a single, comma-separated header.
// Requests to GRPC services will populate the field
// :ref:`header_map<envoy_v3_api_field_service.auth.v3.AttributeContext.HttpRequest.header_map>`.
// Requests to HTTP services will be constructed with the unsanitized header values and preserved
// multi-line headers with the same key.
//
// If this field is set to false, header values will be sanitized, with any non-UTF-8-compliant
// bytes replaced with '!'. Headers with the same key will have their values concatenated into a
// single comma-separated header value.
// Requests to GRPC services will populate the field
// :ref:`headers<envoy_v3_api_field_service.auth.v3.AttributeContext.HttpRequest.headers>`.
// Requests to HTTP services will have their header values sanitized and will not preserve
// multi-line headers with the same key.
//
// It's recommended you set this to true unless you already rely on the old behavior. False is the
// default only for backwards compatibility.
bool encode_raw_headers = 23;
// Rules for what modifications an ext_authz server may make to the request headers before
// continuing decoding / forwarding upstream.
//
// If set to anything, enables header mutation checking against configured rules. Note that
// :ref:`HeaderMutationRules <envoy_v3_api_msg_config.common.mutation_rules.v3.HeaderMutationRules>`
// has defaults that change ext_authz behavior. Also note that if this field is set to anything,
// ext_authz can no longer append to :-prefixed headers.
//
// If empty, header mutation rule checking is completely disabled.
//
// Regardless of what is configured here, ext_authz cannot remove :-prefixed headers.
//
// This field and ``validate_mutations`` have different use cases. ``validate_mutations`` enables
// correctness checks for all header / query parameter mutations (e.g. for invalid characters).
// This field allows the filter to reject mutations to specific headers.
config.common.mutation_rules.v3.HeaderMutationRules decoder_header_mutation_rules = 26;
// Enable / disable ingestion of dynamic metadata from ext_authz service.
//
// If false, the filter will ignore dynamic metadata injected by the ext_authz service. If the
// ext_authz service tries injecting dynamic metadata, the filter will log, increment the
// ``ignored_dynamic_metadata`` stat, then continue handling the response.
//
// If true, the filter will ingest dynamic metadata entries as normal.
//
// If unset, defaults to true.
google.protobuf.BoolValue enable_dynamic_metadata_ingestion = 27;
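// A minimal illustrative sketch tying several of the newer fields together; the ``ext_authz_cluster``
// name is an assumption, and ``disallow_is_error`` is one of the ``HeaderMutationRules`` options rather
// than something defined in this file:
//
// .. code-block:: yaml
//
//   grpc_service:
//     envoy_grpc:
//       cluster_name: ext_authz_cluster
//   failure_mode_allow: true
//   failure_mode_allow_header_add: true
//   validate_mutations: true
//   encode_raw_headers: true
//   decoder_header_mutation_rules:
//     disallow_is_error: true
//   enable_dynamic_metadata_ingestion: false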
}
// Configuration for buffering the request data.
@ -304,8 +409,8 @@ message AuthorizationResponse {
type.matcher.v3.ListStringMatcher allowed_upstream_headers = 1;
// When this :ref:`list <envoy_v3_api_msg_type.matcher.v3.ListStringMatcher>` is set, authorization
// response headers that have a correspondent match will be added to the client's response. Note
// that coexistent headers will be appended.
// response headers that have a corresponding match will be added to the original client request.
// Note that coexistent headers will be appended.
type.matcher.v3.ListStringMatcher allowed_upstream_headers_to_append = 3;
// When this :ref:`list <envoy_v3_api_msg_type.matcher.v3.ListStringMatcher>` is set, authorization
@ -371,6 +476,19 @@ message CheckSettings {
map<string, string> context_extensions = 1 [(udpa.annotations.sensitive) = true];
// When set to true, disable the configured :ref:`with_request_body
// <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.ExtAuthz.with_request_body>` for a route.
// <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.ExtAuthz.with_request_body>` for a specific route.
//
// Please note that only one of *disable_request_body_buffering* or
// :ref:`with_request_body <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.CheckSettings.with_request_body>`
// may be specified.
bool disable_request_body_buffering = 2;
// Enable or override request body buffering, which is configured using the
// :ref:`with_request_body <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.ExtAuthz.with_request_body>`
// option for a specific route.
//
// Please note that only one of *with_request_body* or
// :ref:`disable_request_body_buffering <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.CheckSettings.disable_request_body_buffering>`
// may be specified.
BufferSettings with_request_body = 3;
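// A hedged per-route sketch; the ``BufferSettings`` field names (``max_request_bytes``,
// ``allow_partial_message``) are assumed from the buffering message referenced above, and the
// ``virtual_host`` entry is just an example map key:
//
// .. code-block:: yaml
//
//   context_extensions:
//     virtual_host: local_service
//   with_request_body:
//     max_request_bytes: 1024
//     allow_partial_message: true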
}

View File

@ -3,6 +3,7 @@ syntax = "proto3";
package envoy.extensions.filters.http.ext_proc.v3;
import "envoy/config/common/mutation_rules/v3/mutation_rules.proto";
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/grpc_service.proto";
import "envoy/extensions/filters/http/ext_proc/v3/processing_mode.proto";
import "envoy/type/matcher/v3/string.proto";
@ -10,6 +11,7 @@ import "envoy/type/matcher/v3/string.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -28,8 +30,6 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// **Current Implementation Status:**
// All options and processing modes are implemented except for the following:
//
// * Request and response attributes are not sent and not processed.
// * Dynamic metadata in responses from the external processor is ignored.
// * "async mode" is not implemented.
// The filter communicates with an external gRPC service called an "external processor"
@ -51,16 +51,15 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// * Whether subsequent HTTP requests are transmitted synchronously or whether they are
// sent asynchronously.
// * To modify request or response trailers if they already exist
// * To add request or response trailers where they are not present
//
// The filter supports up to six different processing steps. Each is represented by
// a gRPC stream message that is sent to the external processor. For each message, the
// processor must send a matching response.
//
// * Request headers: Contains the headers from the original HTTP request.
// * Request body: Sent in a single message if the BUFFERED or BUFFERED_PARTIAL
// mode is chosen, in multiple messages if the STREAMED mode is chosen, and not
// at all otherwise.
// * Request body: Delivered if present, and sent in a single message if
// the BUFFERED or BUFFERED_PARTIAL mode is chosen, in multiple messages if the
// STREAMED mode is chosen, and not at all otherwise.
// * Request trailers: Delivered if they are present and if the trailer mode is set
// to SEND.
// * Response headers: Contains the headers from the HTTP response. Keep in mind
@ -99,8 +98,31 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// <arch_overview_advanced_filter_state_sharing>` object in a namespace matching the filter
// name.
//
// [#next-free-field: 15]
// [#next-free-field: 20]
message ExternalProcessor {
// Describes the route cache action to be taken when an external processor response
// is received in response to request headers.
enum RouteCacheAction {
// The default behavior is to clear the route cache only when the
// :ref:`clear_route_cache <envoy_v3_api_field_service.ext_proc.v3.CommonResponse.clear_route_cache>`
// field is set in an external processor response.
DEFAULT = 0;
// Always clear the route cache irrespective of the clear_route_cache bit in
// the external processor response.
CLEAR = 1;
// Do not clear the route cache irrespective of the clear_route_cache bit in
// the external processor response. Setting this to RETAIN is equivalent to setting
// :ref:`disable_clear_route_cache <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.disable_clear_route_cache>`
// to true.
RETAIN = 2;
}
reserved 4;
reserved "async_mode";
// Configuration for the gRPC service that the filter will communicate with.
// The filter supports both the "Envoy" and "Google" gRPC clients.
config.core.v3.GrpcService grpc_service = 1 [(validate.rules).message = {required: true}];
@ -118,15 +140,6 @@ message ExternalProcessor {
// sent. See ProcessingMode for details.
ProcessingMode processing_mode = 3;
// [#not-implemented-hide:]
// If true, send each part of the HTTP request or response specified by ProcessingMode
// asynchronously -- in other words, send the message on the gRPC stream and then continue
// filter processing. If false, which is the default, suspend filter execution after
// each message is sent to the remote service and wait up to "message_timeout"
// for a reply.
bool async_mode = 4;
// [#not-implemented-hide:]
// Envoy provides a number of :ref:`attributes <arch_overview_attributes>`
// for expressive policies. Each attribute name provided in this field will be
// matched against that list and populated in the request_headers message.
@ -134,7 +147,6 @@ message ExternalProcessor {
// for the list of supported attributes and their types.
repeated string request_attributes = 5;
// [#not-implemented-hide:]
// Envoy provides a number of :ref:`attributes <arch_overview_attributes>`
// for expressive policies. Each attribute name provided in this field will be
// matched against that list and populated in the response_headers message.
@ -180,11 +192,6 @@ message ExternalProcessor {
gte {}
}];
// Prevents clearing the route-cache when the
// :ref:`clear_route_cache <envoy_v3_api_field_service.ext_proc.v3.CommonResponse.clear_route_cache>`
// field is set in an external processor response.
bool disable_clear_route_cache = 11;
// Allow headers matching the ``forward_rules`` to be forwarded to the external processing server.
// If not set, all headers are forwarded to the external processing server.
HeaderForwardingRules forward_rules = 12;
@ -200,6 +207,88 @@ message ExternalProcessor {
// :ref:`mode_override <envoy_v3_api_field_service.ext_proc.v3.ProcessingResponse.mode_override>`.
// If not set, ``mode_override`` API in the response message will be ignored.
bool allow_mode_override = 14;
// If set to true, ignore the
// :ref:`immediate_response <envoy_v3_api_field_service.ext_proc.v3.ProcessingResponse.immediate_response>`
// message in an external processor response. In such case, no local reply will be sent.
// Instead, the stream to the external processor will be closed. There will be no
// more external processing for this stream from now on.
bool disable_immediate_response = 15;
// Options related to the sending and receiving of dynamic metadata.
MetadataOptions metadata_options = 16;
// If true, send each part of the HTTP request or response specified by ProcessingMode
// without pausing on filter chain iteration. This is a "Send and Go" mode that can be used
// by the external processor to observe Envoy data and status. In this mode:
//
// 1. Only STREAMED body processing mode is supported and any other body processing modes will be
// ignored. NONE mode (i.e., skipping body processing) will still work as expected.
//
// 2. The external processor should not send back a processing response, as any responses will be ignored.
// This also means that
// :ref:`message_timeout <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.message_timeout>`
// restriction doesn't apply to this mode.
//
// 3. The external processor may still close the stream to indicate that no more messages are needed.
//
// .. warning::
//
// Flow control is a necessary mechanism to prevent a fast sender (either the downstream client or the upstream server)
// from overwhelming the external processor when its processing speed is slower.
// This protective measure is being explored and developed but is not ready yet, so please use your own
// discretion when enabling this feature.
// This work is currently tracked under https://github.com/envoyproxy/envoy/issues/33319.
//
bool observability_mode = 17;
// Prevents clearing the route-cache when the
// :ref:`clear_route_cache <envoy_v3_api_field_service.ext_proc.v3.CommonResponse.clear_route_cache>`
// field is set in an external processor response.
// Only one of ``disable_clear_route_cache`` or ``route_cache_action`` can be set.
// It is recommended to set ``route_cache_action`` which supersedes ``disable_clear_route_cache``.
bool disable_clear_route_cache = 11
[(udpa.annotations.field_migrate).oneof_promotion = "clear_route_cache_type"];
// Specifies the action to be taken when an external processor response is
// received in response to request headers. It is recommended to set this field rather than
// :ref:`disable_clear_route_cache <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.disable_clear_route_cache>`.
// Only one of ``disable_clear_route_cache`` or ``route_cache_action`` can be set.
RouteCacheAction route_cache_action = 18
[(udpa.annotations.field_migrate).oneof_promotion = "clear_route_cache_type"];
// Specifies the deferred closure timeout for the gRPC stream that connects to the external processor. Currently, deferred stream closure
// is only used in :ref:`observability_mode <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.observability_mode>`.
// In observability mode, gRPC streams may be held open to the external processor longer than the lifetime of the regular
// client-to-backend stream. In this case, Envoy will eventually time out the external processor stream according to this time limit.
// The default value is 5000 milliseconds (5 seconds) if not specified.
google.protobuf.Duration deferred_close_timeout = 19;
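// A hedged sketch of an observability-mode configuration; the ``ext_proc_server`` cluster name and the
// forwarded metadata namespace are assumptions, and the ``processing_mode`` field names come from the
// ``ProcessingMode`` message rather than from this excerpt:
//
// .. code-block:: yaml
//
//   grpc_service:
//     envoy_grpc:
//       cluster_name: ext_proc_server
//   processing_mode:
//     request_header_mode: SEND
//     request_body_mode: STREAMED
//   observability_mode: true
//   deferred_close_timeout: 10s
//   metadata_options:
//     forwarding_namespaces:
//       untyped:
//       - envoy.filters.http.jwt_authn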
}
// The MetadataOptions structure defines options for the sending and receiving of
// dynamic metadata. Specifically, which namespaces to send to the server, whether
// metadata returned by the server may be written, and how that metadata may be written.
message MetadataOptions {
message MetadataNamespaces {
// Specifies a list of metadata namespaces whose values, if present,
// will be passed to the ext_proc service as an opaque *protobuf::Struct*.
repeated string untyped = 1;
// Specifies a list of metadata namespaces whose values, if present,
// will be passed to the ext_proc service as a *protobuf::Any*. This allows
// envoy and the external processing server to share the protobuf message
// definition for safe parsing.
repeated string typed = 2;
}
// Describes which typed or untyped dynamic metadata namespaces to forward to
// the external processing server.
MetadataNamespaces forwarding_namespaces = 1;
// Describes which typed or untyped dynamic metadata namespaces to accept from
// the external processing server. Set to empty or leave unset to disallow writing
// any received dynamic metadata. Receiving of typed metadata is not supported.
MetadataNamespaces receiving_namespaces = 2;
}
// The HeaderForwardingRules structure specifies what headers are
@ -242,7 +331,7 @@ message ExtProcPerRoute {
}
// Overrides that may be set on a per-route basis
// [#next-free-field: 6]
// [#next-free-field: 8]
message ExtProcOverrides {
// Set a different processing mode for this route than the default.
ProcessingMode processing_mode = 1;
@ -263,4 +352,17 @@ message ExtProcOverrides {
// Set a different gRPC service for this route than the default.
config.core.v3.GrpcService grpc_service = 5;
// Options related to the sending and receiving of dynamic metadata.
// Lists of forwarding and receiving namespaces will be overridden in their entirety,
// meaning the most-specific config that specifies this override will be the final
// config used. It is the prerogative of the control plane to ensure this
// most-specific config contains the correct final overrides.
MetadataOptions metadata_options = 6;
// Additional metadata to include into streams initiated to the ext_proc gRPC
// service. This can be used for scenarios in which additional ad hoc
// authorization headers (e.g. ``x-foo-bar: baz-key``) are to be injected or
// when a route needs to partially override inherited metadata.
repeated config.core.v3.HeaderValue grpc_initial_metadata = 7;
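// A minimal illustrative per-route override sketch; the cluster name and the forwarded namespace are
// assumptions, and the ``x-foo-bar`` header mirrors the ad hoc example mentioned above:
//
// .. code-block:: yaml
//
//   grpc_service:
//     envoy_grpc:
//       cluster_name: ext_proc_route_override
//   grpc_initial_metadata:
//   - key: x-foo-bar
//     value: baz-key
//   metadata_options:
//     forwarding_namespaces:
//       untyped:
//       - envoy.lb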
}

View File

@ -35,6 +35,22 @@ message ProcessingMode {
}
// Control how the request and response bodies are handled
// When body mutation by the external processor is enabled, the ext_proc filter will always remove
// the content-length header in the three cases below, because the content length cannot be guaranteed
// to be set correctly:
// 1) STREAMED BodySendMode: header processing completes before body mutation comes back.
// 2) BUFFERED_PARTIAL BodySendMode: body is buffered and could be injected in different phases.
// 3) BUFFERED BodySendMode + SKIP HeaderSendMode: header processing (e.g., update content-length) is skipped.
//
// In Envoy's http1 codec implementation, removing content length will enable chunked transfer
// encoding whenever feasible. The recipient (either client or server) must be able
// to parse and decode the chunked transfer coding.
// (see `details in RFC9112 <https://tools.ietf.org/html/rfc9112#section-7.1>`_).
//
// In BUFFERED BodySendMode + SEND HeaderSendMode, the content-length header is allowed, but it is
// the external processor's responsibility to set the content length so that it matches the length
// of the mutated body. If they don't match, the corresponding body mutation will be rejected and
// a local reply will be sent with an error message.
enum BodySendMode {
// Do not send the body at all. This is the default.
NONE = 0;

View File

@ -5,8 +5,10 @@ package envoy.extensions.filters.http.gcp_authn.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/http_uri.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/wrappers.proto";
import "envoy/annotations/deprecation.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -21,12 +23,21 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#extension: envoy.filters.http.gcp_authn]
// Filter configuration.
// [#next-free-field: 7]
message GcpAuthnFilterConfig {
// The HTTP URI to fetch tokens from the GCE Metadata Server (https://cloud.google.com/compute/docs/metadata/overview).
// The URL format is "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=[AUDIENCE]"
config.core.v3.HttpUri http_uri = 1 [(validate.rules).message = {required: true}];
//
// This field is deprecated because it does not match the API surface provided by the google auth libraries.
// Control planes should not attempt to override the metadata server URI.
// The cluster and timeout can be configured using the ``cluster`` and ``timeout`` fields instead.
// For backward compatibility, the cluster and timeout configured in this field will be used
// if the new ``cluster`` and ``timeout`` fields are not set.
config.core.v3.HttpUri http_uri = 1
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
// Retry policy for fetching tokens. This field is optional.
// Retry policy for fetching tokens.
// Not supported by all data planes.
config.core.v3.RetryPolicy retry_policy = 2;
// Token cache configuration. This field is optional.
@ -34,7 +45,20 @@ message GcpAuthnFilterConfig {
// Request header location to extract the token. By default (i.e. if this field is not specified), the token
// is extracted to the Authorization HTTP header, in the format "Authorization: Bearer <token>".
// Not supported by all data planes.
TokenHeader token_header = 4;
// Cluster to send traffic to the GCE metadata server. Not supported
// by all data planes; a data plane may instead have its own mechanism
// for contacting the metadata server.
string cluster = 5;
// Timeout for fetching the tokens from the GCE metadata server.
// Not supported by all data planes.
google.protobuf.Duration timeout = 6 [(validate.rules).duration = {
lt {seconds: 4294967296}
gte {}
}];
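// A hedged sketch using the newer ``cluster`` and ``timeout`` fields in place of the deprecated
// ``http_uri``; the ``gce_metadata`` cluster is assumed to be defined elsewhere in the configuration:
//
// .. code-block:: yaml
//
//   cluster: gce_metadata
//   timeout: 10s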
}
// Audience is the URL of the receiving service that performs token authentication.

View File

@ -21,52 +21,6 @@ option (xds.annotations.v3.file_status).work_in_progress = true;
// [#extension: envoy.filters.http.geoip]
message Geoip {
// The set of geolocation headers to add to request. If any of the configured headers is present
// in the incoming request, it will be overridden by Geoip filter.
// [#next-free-field: 10]
message GeolocationHeadersToAdd {
// If set, the header will be used to populate the country ISO code associated with the IP address.
string country = 1
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the header will be used to populate the city associated with the IP address.
string city = 2
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the header will be used to populate the region ISO code associated with the IP address.
string region = 3
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the header will be used to populate the ASN associated with the IP address.
string asn = 4
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the IP address will be checked if it belongs to any type of anonymization network (e.g. VPN, public proxy etc)
// and header will be populated with the check result. Header value will be set to either "true" or "false" depending on the check result.
string is_anon = 5
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the IP address will be checked if it belongs to a VPN and header will be populated with the check result.
// Header value will be set to either "true" or "false" depending on the check result.
string anon_vpn = 6
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the IP address will be checked if it belongs to a hosting provider and header will be populated with the check result.
// Header value will be set to either "true" or "false" depending on the check result.
string anon_hosting = 7
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the IP address will be checked if it belongs to a TOR exit node and header will be populated with the check result.
// Header value will be set to either "true" or "false" depending on the check result.
string anon_tor = 8
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
// If set, the IP address will be checked if it belongs to a public proxy and header will be populated with the check result.
// Header value will be set to either "true" or "false" depending on the check result.
string anon_proxy = 9
[(validate.rules).string = {well_known_regex: HTTP_HEADER_NAME ignore_empty: true}];
}
message XffConfig {
// The number of additional ingress proxy hops from the right side of the
// :ref:`config_http_conn_man_headers_x-forwarded-for` HTTP header to trust when
@ -77,14 +31,15 @@ message Geoip {
}
// If set, the :ref:`xff_num_trusted_hops <envoy_v3_api_field_extensions.filters.http.geoip.v3.Geoip.XffConfig.xff_num_trusted_hops>` field will be used to determine
// trusted client address from `x-forwarded-for` header.
// trusted client address from ``x-forwarded-for`` header.
// Otherwise, the immediate downstream connection source address will be used.
// [#next-free-field: 2]
XffConfig xff_config = 1;
// Configuration for geolocation headers to add to request.
GeolocationHeadersToAdd geo_headers_to_add = 2 [(validate.rules).message = {required: true}];
// Geolocation provider specific configuration.
// Geoip driver specific configuration which depends on the driver being instantiated.
// See the geoip drivers for examples:
//
// - :ref:`MaxMindConfig <envoy_v3_api_msg_extensions.geoip_providers.maxmind.v3.MaxMindConfig>`
// [#extension-category: envoy.geoip_providers]
config.core.v3.TypedExtensionConfig provider = 3 [(validate.rules).message = {required: true}];
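// A hedged sketch wiring the XFF setting to a provider; the ``envoy.geoip_providers.maxmind`` name is an
// assumption, and the MaxMind provider's own fields (database paths, headers to populate) are omitted here:
//
// .. code-block:: yaml
//
//   xff_config:
//     xff_num_trusted_hops: 1
//   provider:
//     name: envoy.geoip_providers.maxmind
//     typed_config:
//       "@type": type.googleapis.com/envoy.extensions.geoip_providers.maxmind.v3.MaxMindConfig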
}

View File

@ -140,14 +140,14 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
message GrpcFieldExtractionConfig {
// The proto descriptor set binary for the gRPC services.
//
// It could be passed by a local file through `Datasource.filename` or embedded in the
// `Datasource.inline_bytes`.
// It could be passed by a local file through ``Datasource.filename`` or embedded in the
// ``Datasource.inline_bytes``.
config.core.v3.DataSource descriptor_set = 1 [(validate.rules).message = {required: true}];
// Specify the extraction info.
// The key is the fully qualified gRPC method name.
// `${package}.${Service}.${Method}`, like
// `endpoints.examples.bookstore.BookStore.GetShelf`
// ``${package}.${Service}.${Method}``, like
// ``endpoints.examples.bookstore.BookStore.GetShelf``
//
// The value is the field extractions for an individual gRPC method.
map<string, FieldExtractions> extractions_by_method = 2;
@ -158,8 +158,8 @@ message GrpcFieldExtractionConfig {
message FieldExtractions {
// The field extractions for requests.
// The key is the field path within the grpc request.
// For example, we can define `foo.bar.name` if we want to extract
// Request.foo.bar.name.
// For example, we can define ``foo.bar.name`` if we want to extract
// ``Request.foo.bar.name``.
//
// .. code-block:: proto
//

View File

@ -26,4 +26,7 @@ message Config {
// For the requests that went through this upgrade the filter will also strip the frame before forwarding the
// response to the client.
bool upgrade_protobuf_to_grpc = 1;
// If true, query parameters in the request's URL path will be removed.
bool ignore_query_parameters = 2;
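// A minimal illustrative sketch enabling both options shown above:
//
// .. code-block:: yaml
//
//   upgrade_protobuf_to_grpc: true
//   ignore_query_parameters: true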
}

View File

@ -18,7 +18,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// gRPC-JSON transcoder :ref:`configuration overview <config_http_filters_grpc_json_transcoder>`.
// [#extension: envoy.filters.http.grpc_json_transcoder]
// [#next-free-field: 17]
// [#next-free-field: 18]
// GrpcJsonTranscoder filter configuration.
// The filter itself can be used per route / per virtual host or on the general level. The most
// specific one is being used for a given route. If the list of services is empty - filter
@ -88,7 +88,8 @@ message GrpcJsonTranscoder {
// When set to true, the request will be rejected with a ``HTTP 400 Bad Request``.
//
// The fields
// :ref:`ignore_unknown_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.ignore_unknown_query_parameters>`
// :ref:`ignore_unknown_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.ignore_unknown_query_parameters>`,
// :ref:`capture_unknown_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.capture_unknown_query_parameters>`,
// and
// :ref:`ignored_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.ignored_query_parameters>`
// have priority over this strict validation behavior.
@ -288,4 +289,20 @@ message GrpcJsonTranscoder {
//
// If unset, the current stream buffer size is used.
google.protobuf.UInt32Value max_response_body_size = 16 [(validate.rules).uint32 = {gt: 0}];
// If true, query parameters that cannot be mapped to a corresponding
// protobuf field are captured in an HttpBody extension of UnknownQueryParams.
bool capture_unknown_query_parameters = 17;
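// A hedged sketch; ``proto_descriptor``, ``services``, and ``auto_mapping`` are standard transcoder fields
// assumed from the wider message, and the descriptor path and service name are illustrative only:
//
// .. code-block:: yaml
//
//   proto_descriptor: /etc/envoy/bookstore.pb
//   services:
//   - endpoints.examples.bookstore.Bookstore
//   auto_mapping: true
//   capture_unknown_query_parameters: true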
}
// ``UnknownQueryParams`` is added as an extension field in ``HttpBody`` if
// ``GrpcJsonTranscoder::capture_unknown_query_parameters`` is true and unknown query
// parameters were present in the request.
message UnknownQueryParams {
message Values {
repeated string values = 1;
}
// A map from unrecognized query parameter keys to the values associated with those keys.
map<string, Values> key = 1;
}

Some files were not shown because too many files have changed in this diff.