Compare commits


67 Commits

Author SHA1 Message Date
Phil Peble 9081324e6b
Merge pull request #5306 from yanjunding/patch-1
Typo
2025-08-25 14:42:13 -05:00
Phil Peble cc19ca0746
Merge pull request #5864 from emissary-ingress/release-3.10-fix-CHANGELOG
Update CHANGELOG with correct metadata for 3.10 release
2025-08-14 16:01:01 -05:00
Phil Peble db5d38e826
Update CHANGELOG with correct metadata for 3.10 release
Signed-off-by: Phil Peble <ppeble@activecampaign.com>
2025-08-14 15:56:44 -05:00
Flynn a8e8f4aacd
Merge pull request #5849 from emissary-ingress/release-3-10-quickstart
Point quickstart link in README to emissary-ingress.dev
2025-07-29 13:30:12 -04:00
Phil Peble e6fa8e56e3
Point quickstart link in README to emissary-ingress.dev
Signed-off-by: Phil Peble <ppeble@activecampaign.com>
2025-07-29 12:17:48 -05:00
Flynn 4f12337556
Merge pull request #5839 from emissary-ingress/flynn/update-docs
Update README and QUICKSTART for 3.10.0
2025-05-07 15:50:26 -04:00
Flynn dd98ecd66a Minor tweaks
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-05-07 10:15:53 -04:00
Flynn c815e182b2 Update README and SUPPORT.md
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-05-07 10:15:47 -04:00
Flynn 96a49735a8 TRY-3.10 -> QUICKSTART
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-05-07 10:15:41 -04:00
Flynn d25610acbe
Merge pull request #5831 from emissary-ingress/flynn/update-try-3.10
Update the TRY-3.10 document for 3.10.0-rc.3.
2025-03-26 12:28:37 -04:00
Flynn 0f94681cfb Update the TRY-3.10 document for 3.10.0-rc.3.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-25 22:19:49 -04:00
Flynn 5d1dea8ba8
Merge pull request #5795 from emissary-ingress/ci/5794
[CI Run] ambex: Remove usage of md5
2025-03-21 20:22:50 -04:00
Alice Wasko 7f3c6a8868 fix linting errors
Signed-off-by: Alice Wasko <aliceproxy@pm.me>
2025-03-21 16:36:55 -04:00
Flynn 214320b2e4 Update release notes
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-21 16:36:55 -04:00
Flynn 433ac459a0 Remove usage of md5
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-21 16:36:55 -04:00
Flynn 79170dbc4a
Merge pull request #5827 from emissary-ingress/flynn/python-deps
Update Python dependencies
2025-03-21 16:34:17 -04:00
Flynn 2f95c68bf1 Update dependency licenses. Ugh.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-06 09:17:00 -05:00
Flynn da250b7cc7 Update Python dependencies
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-05 22:08:49 -05:00
Flynn 08d78948ac Use py-version to choose the Python version for our venv
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-03-05 22:08:45 -05:00
Flynn d14c84c690
Merge pull request #5823 from emissary-ingress/flynn/isker-5821
Pass client certificate and SNI to auth service -- thanks, @isker!
2025-02-14 09:54:43 -05:00
Flynn 2ae71716cc Automatic formatter stuff
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-13 18:36:41 -05:00
Flynn 6c161bd268 Move CHANGELOG tweak into docs/releaseNotes.yml
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-13 18:36:24 -05:00
Ian Kerins 9b6894249f Pass client certificate and SNI to auth service
This enables the auth service to do things like mTLS.

Signed-off-by: Ian Kerins <git@isk.haus>
2025-02-13 18:29:47 -05:00
Flynn cffdd53f8e
Merge pull request #5825 from emissary-ingress/flynn/readme-fix
🤦‍♂️ right, TRY-3.10.md is on master at the moment.
2025-02-13 10:22:52 -05:00
Flynn ccdc52db1d 🤦‍♂️ right, TRY-3.10.md is on master at the moment.
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 23:18:00 -05:00
Flynn 600dcaf4b8
Merge pull request #5822 from emissary-ingress/flynn/try-3.10
"Try 3.10" instructions for the release/v3.10 branch
2025-02-12 17:05:05 -05:00
Flynn def2e22bc2 Disable the broken chart test for the moment (I've torn the charts apart at the moment).
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 15:31:26 -05:00
Flynn 1c5819bce5 Tweak language around ALabs contributions
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 14:54:22 -05:00
Flynn 0e1a1d1d9d D'oh, include links for Ajay and Luke
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 14:53:49 -05:00
Flynn c8f597d7ce "Try 3.10" instructions for the release/v3.10 branch
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-12 14:47:34 -05:00
Flynn faf6f7a057
Merge pull request #5818 from emissary-ingress/lukeshu/go-updates
Update Go dependencies (from PR 5817)
2025-02-07 13:35:57 -05:00
Flynn 672c554e16 Use (hopefully) unique names for artifacts
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-06 18:27:36 -05:00
Flynn f0afe10599 Format GitHub actions
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-06 18:27:31 -05:00
Flynn 120314d95b Switch to upload-artifacts@v4 (and make quoting consistent)
Signed-off-by: Flynn <emissary@flynn.kodachi.com>
2025-02-06 18:27:15 -05:00
Luke T. Shumaker 368ca59863 Upgrade Go dependencies
sed -i \
        -e 's,^replace k8s.io/code-generator.*,replace k8s.io/code-generator v0.32.1 => github.com/emissary-ingress/code-generator 4d5bf4656f7139d290a2fa3684a6903cd04cbf97,' \
        -e 's,k8s.io/code-generator v0\.30.*,k8s.io/code-generator v0.32.1,' \
        go.mod
    GOFLAGS=-tags=pin go get -u ./...
    sed -i 's,sigs.k8s.io/e2e-framework/support/utils,sigs.k8s.io/e2e-framework/pkg/utils,' test/apiext/apiext_test.go
    go mod tidy
    go mod tidy
    go mod vendor
    make generate

Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker 79917553b1 DO NOT MERGE YET: Upgrade go-mkopensource
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker a4933625e8 go.mod: Preemptively pin pre-rename versions of go-metrics and mergo
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker e053f3b716 Upgrade k8s.io/code-generator to match the version it's supposed to be
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker 1f8b0cb718 tools: Prevent `goimports` from ruining the pin.go files
We did this in Telepresence a long time ago (at Thomas' suggestion).

Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:15 -07:00
Luke T. Shumaker 2e92aa8a21 go.mod: Tidy comments (SO MAYBE PEOPLE READ THEM)
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:14 -07:00
Luke T. Shumaker d784486390 py-list-deps: Adjust to work with newer Python
Signed-off-by: Luke T. Shumaker <lukeshu@lukeshu.com>
2025-02-06 01:12:14 -07:00
Flynn c2cd9ddfc6
Merge pull request #5808 from emissary-ingress/flynn/pr5798
Include PR5798
2024-12-09 12:07:42 -05:00
Flynn 20e3f63e7c
Merge pull request #5807 from emissary-ingress/flynn/update
Update Envoy, go-control-plane, Go, and dependencies
2024-12-05 17:15:15 -05:00
Flynn f30725562c Clean up releaseNotes/CHANGELOG and 'make generate'
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:25:55 -05:00
ajaychoudhary-hotstar f8829ee1d8 Fixed test case
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar ab7b539a47 Added condition to take only Ready pods for load balancing
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar b1fb2d6bfb Added endpoints fallback in case endpointslice doesn't exists
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar 46ab826f03 Removed break
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar 88712774f4 updated test Yaml
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
ajaychoudhary-hotstar c5e28b8fbe Added support for endpointslices
Signed-off-by: ajaychoudhary-hotstar <ajay.choudhary@hotstar.com>
2024-12-04 23:11:14 -05:00
Flynn 2b124a957d Whitespace
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:10:29 -05:00
Flynn d9f94770a3 Use cryptography instead of OpenSSL.crypto
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:10:29 -05:00
Flynn d0e902dceb Fix lint errors
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 23:10:02 -05:00
Flynn 95495a54c2 Switch to the GCR mirror for the base Envoy image
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:38 -05:00
Flynn c9a542be33 gmake generate
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:37 -05:00
Flynn 233307cb95 Fix make generate
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:36 -05:00
Flynn a47f7482e1 gmake compile-envoy-protos
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:35 -05:00
Flynn 33a9ce8c80 Switch to Golang 1.23.3
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:34 -05:00
Flynn c55dad2cf3 Bump google.golang.org/grpc (to get grpc.NewClient for go-control-plane) and go mod tidy.
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:34 -05:00
Flynn dfcf01298f Update ENVOY_COMMIT and ENVOY_GO_CONTROL_PLANE_COMMIT
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:33 -05:00
Flynn 8e7ee3b7cf Switch GitHub workflows to ubuntu-24.04
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:32 -05:00
Flynn 9f6c9b536e Switch KAT to Ubuntu 24.04. Clean up Docker lint stuff
Signed-off-by: Flynn <flynn+github@kodachi.com>
2024-12-04 22:52:31 -05:00
Flynn 02e88319b7
Merge pull request #5789 from emissary-ingress/kai-tillman/update-monthly-meeting
Update Zoom meeting link
2024-10-02 17:11:02 -04:00
Kai Tillman f853d23884 Update Zoom meeting link
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-10-02 14:42:14 -04:00
Kai Tillman ac2dc64c66 Add ArtifactHub badge
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-09-03 10:44:46 -07:00
Kai Tillman 010ac84078 Rename DEVELOPING.md to CONTRIBUTING.md
Signed-off-by: Kai Tillman <ktillman@datawire.io>
2024-09-03 10:44:46 -07:00
Adrian Ding 7f56afa587
Typo
2023-09-19 07:26:27 +12:00
791 changed files with 32463 additions and 18237 deletions

View File

@@ -811,7 +811,7 @@ curl localhost:8877/ambassador/v0/diag/?loglevel=debug
```
Note: This affects diagd and Envoy, but NOT the AES `amb-sidecar`.
See the AES `DEVELOPING.md` for how to do that.
See the AES `CONTRIBUTING.md` for how to do that.
### Can I build from a docker container instead of on my local computer?

View File

@@ -1,4 +1,4 @@
name: 'Collect Logs'
name: "Collect Logs"
description: >-
Store any log files as artifacts.
inputs:
@@ -49,7 +49,7 @@ runs:
cp /tmp/*.yaml /tmp/test-logs || true
cp /tmp/kat-client-*.log /tmp/test-logs || true
- name: "Upload Logs"
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: logs-${{ inputs.jobname }}
path: /tmp/test-logs

View File

@@ -46,6 +46,6 @@ A few sentences describing what testing you've done, e.g., manual tests, automat
- We should lean on the bulk of code being covered by unit tests, but...
- ... an end-to-end test should cover the integration points
- [ ] **I updated `DEVELOPING.md` with any any special dev tricks I had to use to work on this code efficiently.**
- [ ] **I updated `CONTRIBUTING.md` with any special dev tricks I had to use to work on this code efficiently.**
- [ ] **The changes in this PR have been reviewed for security concerns and adherence to security best practices.**

View File

@@ -22,7 +22,7 @@ name: Check branch version
jobs:
check-branch-version:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3
with:

View File

@@ -10,7 +10,7 @@ name: job-promote-to-passed
jobs:
lint: ########################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -27,10 +27,12 @@ jobs:
run: |
make lint
- uses: ./.github/actions/after-job
with:
jobname: lint
if: always()
generate: ####################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -75,10 +77,12 @@ jobs:
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate' (again!)"
- uses: ./.github/actions/after-job
with:
jobname: generate
if: always()
check-envoy-protos: ####################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -107,10 +111,12 @@ jobs:
- name: "Check Git not dirty from 'make compile-envoy-protos'"
uses: ./.github/actions/git-dirty-check
- uses: ./.github/actions/after-job
with:
jobname: check-envoy-protos
if: always()
check-envoy-version: #########################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -135,11 +141,13 @@ jobs:
password: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
- run: make check-envoy-version
- uses: ./.github/actions/after-job
with:
jobname: check-envoy-version
if: always()
# Tests ######################################################################
apiext-e2e:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -170,7 +178,7 @@ jobs:
run: |
go test -p 1 -parallel 1 -v -tags=apiext ./test/apiext/... -timeout 15m
check-gotest:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -191,9 +199,11 @@ jobs:
run: |
make gotest
- uses: ./.github/actions/after-job
with:
jobname: check-gotest
if: always()
check-pytest:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -249,7 +259,7 @@ jobs:
with:
jobname: check-pytest-${{ matrix.test }}
check-pytest-unit:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -275,9 +285,11 @@ jobs:
export PYTEST_ARGS=' --cov-branch --cov=ambassador --cov-report html:/tmp/cov_html '
make pytest-unit-tests
- uses: ./.github/actions/after-job
with:
jobname: check-pytest-unit
if: always()
check-chart:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
DEV_REGISTRY: ${{ secrets.DEV_REGISTRY }}
# See docker/base-python.docker.gen
@@ -287,28 +299,33 @@ jobs:
DOCKER_BUILD_USERNAME: ${{ secrets.GH_DOCKER_BUILD_USERNAME }}
DOCKER_BUILD_PASSWORD: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
steps:
- uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.DEV_REGISTRY, 'docker.io/')) && secrets.DEV_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_BUILD_USERNAME }}
password: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
- uses: actions/checkout@v3
with:
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: make test-chart
- name: Warn about skip
run: |
make ci/setup-k3d
export DEV_KUBECONFIG=~/.kube/config
echo "SKIPPING CHART TEST; check the charts manually"
# - uses: docker/login-action@v2
# with:
# registry: ${{ (!startsWith(secrets.DEV_REGISTRY, 'docker.io/')) && secrets.DEV_REGISTRY || null }}
# username: ${{ secrets.GH_DOCKER_BUILD_USERNAME }}
# password: ${{ secrets.GH_DOCKER_BUILD_TOKEN }}
# - uses: actions/checkout@v3
# with:
# fetch-depth: 0
# ref: ${{ github.event.pull_request.head.sha }}
# - name: Install Deps
# uses: ./.github/actions/setup-deps
# - name: make test-chart
# run: |
# make ci/setup-k3d
# export DEV_KUBECONFIG=~/.kube/config
make test-chart
- uses: ./.github/actions/after-job
if: always()
# make test-chart
# - uses: ./.github/actions/after-job
# with:
# jobname: check-chart
# if: always()
build: #######################################################################
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
DEV_REGISTRY: ${{ secrets.DEV_REGISTRY }}
# See docker/base-python.docker.gen
@@ -342,12 +359,14 @@ jobs:
run: |
make push-dev
- uses: ./.github/actions/after-job
with:
jobname: build
if: always()
######################################################################
######################### CVE Scanning ###############################
trivy-container-scan:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
needs: [build]
steps:
# upload of results to github uses git so checkout of code is needed
@@ -388,7 +407,7 @@ jobs:
- check-pytest-unit
- check-chart
- trivy-container-scan
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
steps:
- name: No-Op
if: ${{ false }}

View File

@@ -3,42 +3,44 @@ on:
schedule:
# run at noon on sundays to prepare for monday
# used https://crontab.guru/ to generate
- cron: '0 12 * * SUN'
- cron: "0 12 * * SUN"
jobs:
generate: ####################################################################
runs-on: ubuntu-latest
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: "Git Login"
run: |
if [[ -n '${{ secrets.GHA_SSH_KEY }}' ]]; then
install -m700 -d ~/.ssh
install -m600 /dev/stdin ~/.ssh/id_rsa <<<'${{ secrets.GHA_SSH_KEY }}'
fi
- name: "Docker Login"
uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.RELEASE_REGISTRY, 'docker.io/')) && secrets.RELEASE_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_RELEASE_USERNAME }}
password: ${{ secrets.GH_DOCKER_RELEASE_TOKEN }}
- name: "'make generate'"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate'"
- name: "'make generate' (again!)"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate' (again!)"
- uses: ./.github/actions/after-job
if: always()
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Deps
uses: ./.github/actions/setup-deps
- name: "Git Login"
run: |
if [[ -n '${{ secrets.GHA_SSH_KEY }}' ]]; then
install -m700 -d ~/.ssh
install -m600 /dev/stdin ~/.ssh/id_rsa <<<'${{ secrets.GHA_SSH_KEY }}'
fi
- name: "Docker Login"
uses: docker/login-action@v2
with:
registry: ${{ (!startsWith(secrets.RELEASE_REGISTRY, 'docker.io/')) && secrets.RELEASE_REGISTRY || null }}
username: ${{ secrets.GH_DOCKER_RELEASE_USERNAME }}
password: ${{ secrets.GH_DOCKER_RELEASE_TOKEN }}
- name: "'make generate'"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate'"
- name: "'make generate' (again!)"
shell: bash
run: |
make generate
- uses: ./.github/actions/git-dirty-check
name: "Check Git not dirty from 'make generate' (again!)"
- uses: ./.github/actions/after-job
with:
jobname: generate-base-python
if: always()

View File

@@ -8,7 +8,7 @@ name: k8s-e2e
jobs:
acceptance_tests:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
env:
# See docker/base-python.docker.gen
BASE_PYTHON_REPO: ${{ secrets.BASE_PYTHON_REPO }}
@@ -71,4 +71,4 @@ jobs:
- uses: ./.github/actions/after-job
if: always()
with:
jobname: check-pytest-${{ matrix.test }}
jobname: check-pytest-${{matrix.k8s.kubectl}}-${{ matrix.test }}

View File

@@ -2,10 +2,10 @@ name: promote-to-ga
"on":
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
- "v[0-9]+.[0-9]+.[0-9]+"
jobs:
promote-to-ga:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
name: promote-to-ga
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
@@ -30,6 +30,8 @@ jobs:
run: |
make release/promote-oss/to-ga
- uses: ./.github/actions/after-job
with:
jobname: promote-to-ga-1
if: always()
- id: check-slack-webhook
name: Assign slack webhook variable
@@ -41,18 +43,20 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
with:
status: ${{ job.status }}
success_text: 'Emissary GA for ${env.GITHUB_REF} successfully built'
failure_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed'
cancelled_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled'
success_text: "Emissary GA for ${env.GITHUB_REF} successfully built"
failure_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed"
cancelled_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: promote-to-ga-2
if: always()
create-gh-release:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
needs: [promote-to-ga]
name: "Create GitHub release"
env:
@@ -80,13 +84,15 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
with:
status: ${{ job.status }}
success_text: 'Emissary GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}'
failure_text: 'Emissary GitHub release failed'
cancelled_text: 'Emissary GitHub release was was cancelled'
success_text: "Emissary GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}"
failure_text: "Emissary GitHub release failed"
cancelled_text: "Emissary GitHub release was was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: create-gh-release
if: always()

View File

@@ -2,11 +2,11 @@ name: promote-to-rc
"on":
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+'
- 'v[0-9]+.[0-9]+.[0-9]+-dev'
- "v[0-9]+.[0-9]+.[0-9]+-rc.[0-9]+"
- "v[0-9]+.[0-9]+.[0-9]+-dev"
jobs:
promote-to-rc:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
name: promote-to-rc
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
@@ -49,12 +49,14 @@ jobs:
export AMBASSADOR_MANIFEST_URL=https://app.getambassador.io/yaml/emissary/${{ steps.step-main.outputs.version }}
export HELM_CHART_VERSION=${{ steps.step-main.outputs.chart_version }}
\`\`\`
failure_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed'
cancelled_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled'
failure_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed"
cancelled_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: promote-to-rc
if: always()

View File

@@ -2,10 +2,10 @@ name: chart-publish
"on":
push:
tags:
- 'chart/v*'
- "chart/v*"
jobs:
chart-publish:
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
name: chart-publish
env:
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
@@ -34,18 +34,20 @@ jobs:
with:
status: ${{ job.status }}
success_text: "Chart successfully published for ${env.GITHUB_REF}"
failure_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed'
cancelled_text: '${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled'
failure_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build failed"
cancelled_text: "${env.GITHUB_WORKFLOW} (${env.GITHUB_RUN_NUMBER}) build was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: chart-publish
if: always()
chart-create-gh-release:
if: ${{ ! contains(github.ref, '-') }}
runs-on: ubuntu-latest
runs-on: ubuntu-24.04
needs: [chart-publish]
name: "Create GitHub release"
steps:
@@ -71,13 +73,15 @@ jobs:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
with:
status: ${{ job.status }}
success_text: 'Chart GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}'
failure_text: 'Chart GitHub release failed'
cancelled_text: 'Chart GitHub release was was cancelled'
success_text: "Chart GitHub release was created: ${{ steps.step-create-gh-release.outputs.url }}"
failure_text: "Chart GitHub release failed"
cancelled_text: "Chart GitHub release was was cancelled"
fields: |
[{ "title": "Repository", "value": "${env.GITHUB_REPOSITORY}", "short": true },
{ "title": "Branch", "value": "${env.GITHUB_REF}", "short": true },
{ "title": "Action URL", "value": "${env.GITHUB_SERVER_URL}/${env.GITHUB_REPOSITORY}/actions/runs/${env.GITHUB_RUN_ID}"}
]
- uses: ./.github/actions/after-job
with:
jobname: chart-create-gh-release
if: always()

View File

@@ -85,8 +85,8 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
## RELEASE NOTES
## [3.10.0-dev] TBD
[3.10.0-dev]: https://github.com/emissary-ingress/emissary/compare/v3.9.0...v3.10.0-dev
## [3.10.0] July 29, 2025
[3.10.0]: https://github.com/emissary-ingress/emissary/compare/v3.9.0...v3.10.0
### Emissary-ingress and Ambassador Edge Stack
@@ -107,7 +107,20 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
instead of the Mapping name, which could reduce the cache's effectiveness. This has been fixed so
that the correct key is used. ([Incorrect Cache Key for Mapping])
- Feature: Emissary-ingress now supports resolving Endpoints from EndpointSlices in addition to the
existing support for Endpoints, supporting Services with more than 1000 endpoints.
- Feature: Emissary-ingress now passes the client TLS certificate and SNI, if any, to the external
auth service. These are available in the `source.certificate` and `tls_session.sni` fields, as
described in the <a
href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/auth/v3/attribute_context.proto">
Envoy extauth documentation</a>.
- Change: The `ambex` component of Emissary-ingress now uses `xxhash64` instead of `md5`, since
`md5` can cause problems in crypto-restricted environments (e.g. FIPS) ([Remove usage of md5])
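The FIPS concern behind the md5 change is easy to demonstrate from Python, where `hashlib` exposes the same restriction: FIPS-enforcing builds refuse to construct an MD5 hash unless it is explicitly flagged as a non-security use (the `usedforsecurity` flag, Python 3.9+), and some environments disallow it entirely. A minimal sketch of the pattern ambex moved away from (a hypothetical `snapshot_version` helper, using MD5 purely as a non-secret cache/version key):

```python
import hashlib

def snapshot_version(config: bytes) -> str:
    """Derive a short, stable version key for a config snapshot.

    Nothing secret is being protected here, but FIPS-restricted builds
    still refuse MD5 unless it is flagged as a non-security use -- and
    some crypto-restricted environments reject it outright, which is why
    ambex switched to xxhash64 (a non-cryptographic hash) instead.
    """
    digest = hashlib.md5(config, usedforsecurity=False)  # Python 3.9+
    return digest.hexdigest()

print(snapshot_version(b"route-config-v1"))
```

Any stable non-cryptographic hash works for this job; the important property is determinism, not collision resistance against an attacker.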
[Incorrect Cache Key for Mapping]: https://github.com/emissary-ingress/emissary/issues/5714
[Remove usage of md5]: https://github.com/emissary-ingress/emissary/pull/5794
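For reference, the EndpointSlice resources that the release notes say Emissary can now resolve look roughly like the following (an illustrative fragment; the Service and slice names are hypothetical). Unlike a single `Endpoints` object, a Service's endpoints may be split across many slices, which is what lifts the ~1000-endpoint ceiling, and the per-endpoint `conditions.ready` field is the signal for taking only Ready pods for load balancing:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: quote-backend-abc12          # hypothetical; slices get generated names
  labels:
    kubernetes.io/service-name: quote-backend  # ties the slice to its Service
addressType: IPv4
endpoints:
  - addresses: ["10.1.2.3"]
    conditions:
      ready: true                    # eligible for load balancing
  - addresses: ["10.1.2.4"]
    conditions:
      ready: false                   # excluded until the pod is Ready
ports:
  - name: http
    port: 8080
    protocol: TCP
```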
## [3.9.0] November 13, 2023
[3.9.0]: https://github.com/emissary-ingress/emissary/compare/v3.8.0...v3.9.0
@@ -401,7 +414,7 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
releases, or a `Host` with or without a `TLSContext` as in prior 2.y releases.
- Bugfix: Prior releases of Emissary-ingress had the arbitrary limitation that a `TCPMapping` cannot
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
Emissary-ingress now allows `TCPMappings` to be used on the same `Listener` port as HTTP `Hosts`,
as long as that `Listener` terminates TLS.
@@ -567,7 +580,7 @@ it will be removed; but as it won't be user-visible this isn't considered a brea
releases, or a `Host` with or without a `TLSContext` as in prior 2.y releases.
- Bugfix: Prior releases of Emissary-ingress had the arbitrary limitation that a `TCPMapping` cannot
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
be used on the same port that HTTP is served on, even if TLS+SNI would make this possible.
Emissary-ingress now allows `TCPMappings` to be used on the same `Listener` port as HTTP `Hosts`,
as long as that `Listener` terminates TLS.

View File

@@ -4,8 +4,8 @@
The Emissary-ingress Contributors Meeting is held on the first Wednesday of every month at 3:30pm Eastern. The focus of this meeting is discussion of technical issues related to development of Emissary-ingress.
New contributors are always welcome! Check out our [contributor's guide](../DevDocumentation/DEVELOPING.md) to learn how you can help make Emissary-ingress better.
New contributors are always welcome! Check out our [contributor's guide](../DevDocumentation/CONTRIBUTING.md) to learn how you can help make Emissary-ingress better.
**Zoom Meeting Link**: [https://ambassadorlabs.zoom.us/j/86139262248?pwd=bzZlcU96WjAxN2E1RFZFZXJXZ1FwQT09](https://ambassadorlabs.zoom.us/j/86139262248?pwd=bzZlcU96WjAxN2E1RFZFZXJXZ1FwQT09)
- Meeting ID: 861 3926 2248
- Passcode: 113675
**Zoom Meeting Link**: [https://ambassadorlabs.zoom.us/j/81589589470?pwd=U8qNvZSqjQx7abIzwRtGryFU35pi3T.1](https://ambassadorlabs.zoom.us/j/81589589470?pwd=U8qNvZSqjQx7abIzwRtGryFU35pi3T.1)
- Meeting ID: 815 8958 9470
- Passcode: 199217

View File

@@ -1,16 +1,12 @@
## Support for deploying and using Ambassador
## Support for deploying and using Emissary
Welcome to Ambassador! We use GitHub for tracking bugs and feature requests. If you need support, the following resources are available. Thanks for understanding.
Welcome to Emissary! The Emissary community is the best current resource for
Emissary support, with the best options being:
### Documentation
- Checking out the [documentation] at https://emissary-ingress.dev/
- Joining the `#emissary-ingress` channel in the [CNCF Slack]
- [Opening an issue][GitHub] in [GitHub]
* [User Documentation](https://www.getambassador.io/docs)
* [Troubleshooting Guide](https://www.getambassador.io/reference/debugging)
### Real-time Chat
* [Slack](https://d6e.co/slack): The `#ambassador` channel is a good place to start.
### Commercial Support
* Commercial Support is available as part of [Ambassador Pro](https://www.getambassador.io/pro/).
[CNCF Slack]: https://communityinviter.com/apps/cloud-native/cncf)
[documentation]: https://emissary-ingress.dev/
[GitHub]: https://github.com/emissary-ingress/emissary/issues

View File

@@ -1,213 +1,219 @@
The Go module "github.com/emissary-ingress/emissary/v3" incorporates the
following Free and Open Source software:
Name Version License(s)
---- ------- ----------
the Go language standard library ("std") v1.22.4 3-clause BSD license
dario.cat/mergo v1.0.0 3-clause BSD license
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 MIT license
github.com/MakeNowJust/heredoc v1.0.0 MIT license
github.com/Masterminds/goutils v1.1.1 Apache License 2.0
github.com/Masterminds/semver v1.5.0 MIT license
github.com/Masterminds/sprig v2.22.0+incompatible MIT license
github.com/Microsoft/go-winio v0.6.1 MIT license
github.com/ProtonMail/go-crypto v0.0.0-20230923063757-afb1ddc0824c 3-clause BSD license
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230305170008-8188dc5388df 3-clause BSD license
github.com/armon/go-metrics v0.4.1 MIT license
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 MIT license
github.com/beorn7/perks v1.0.1 MIT license
github.com/blang/semver/v4 v4.0.0 MIT license
github.com/cenkalti/backoff/v4 v4.2.1 MIT license
github.com/census-instrumentation/opencensus-proto v0.4.1 Apache License 2.0
github.com/cespare/xxhash/v2 v2.2.0 MIT license
github.com/chai2010/gettext-go v1.0.2 3-clause BSD license
github.com/cloudflare/circl v1.3.7 3-clause BSD license
github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 Apache License 2.0
github.com/cyphar/filepath-securejoin v0.2.4 3-clause BSD license
github.com/datawire/dlib v1.3.1 Apache License 2.0
github.com/datawire/dtest v0.0.0-20210928162311-722b199c4c2f Apache License 2.0
github.com/datawire/go-mkopensource v0.0.12-0.20230821212923-d1d8451579a1 Apache License 2.0
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc ISC license
github.com/distribution/reference v0.5.0 Apache License 2.0
github.com/emicklei/go-restful/v3 v3.11.0 MIT license
github.com/emirpasic/gods v1.18.1 2-clause BSD license, ISC license
github.com/envoyproxy/protoc-gen-validate v1.0.2 Apache License 2.0
github.com/evanphx/json-patch v5.7.0+incompatible 3-clause BSD license
github.com/evanphx/json-patch/v5 v5.9.0 3-clause BSD license
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f MIT license
github.com/fatih/camelcase v1.0.0 MIT license
github.com/fatih/color v1.16.0 MIT license
github.com/fsnotify/fsnotify v1.7.0 3-clause BSD license
github.com/go-errors/errors v1.5.1 MIT license
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 3-clause BSD license
github.com/go-git/go-billy/v5 v5.5.0 Apache License 2.0
github.com/go-git/go-git/v5 v5.11.0 Apache License 2.0
github.com/go-logr/logr v1.4.1 Apache License 2.0
github.com/go-logr/zapr v1.3.0 Apache License 2.0
github.com/go-openapi/jsonpointer v0.20.0 Apache License 2.0
github.com/go-openapi/jsonreference v0.20.2 Apache License 2.0
github.com/go-openapi/swag v0.22.4 Apache License 2.0
github.com/gobuffalo/flect v1.0.2 MIT license
github.com/gogo/protobuf v1.3.2 3-clause BSD license
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da Apache License 2.0
github.com/golang/protobuf v1.5.4 3-clause BSD license
github.com/google/btree v1.0.1 Apache License 2.0
github.com/google/cel-go v0.18.1 Apache License 2.0
github.com/google/gnostic-models v0.6.8 Apache License 2.0
github.com/google/go-cmp v0.6.0 3-clause BSD license
github.com/google/gofuzz v1.2.0 Apache License 2.0
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 Apache License 2.0
github.com/google/uuid v1.5.0 3-clause BSD license
github.com/gorilla/websocket v1.5.1 3-clause BSD license
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 MIT license
github.com/hashicorp/consul/api v1.26.1 Mozilla Public License 2.0
github.com/hashicorp/errwrap v1.1.0 Mozilla Public License 2.0
github.com/hashicorp/go-cleanhttp v0.5.2 Mozilla Public License 2.0
github.com/hashicorp/go-hclog v1.5.0 MIT license
github.com/hashicorp/go-immutable-radix v1.3.1 Mozilla Public License 2.0
github.com/hashicorp/go-multierror v1.1.1 Mozilla Public License 2.0
github.com/hashicorp/go-rootcerts v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/golang-lru v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/hcl v1.0.0 Mozilla Public License 2.0
github.com/hashicorp/serf v0.10.1 Mozilla Public License 2.0
github.com/huandu/xstrings v1.3.2 MIT license
github.com/imdario/mergo v0.3.16 3-clause BSD license
github.com/inconshreveable/mousetrap v1.1.0 Apache License 2.0
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 MIT license
github.com/josharian/intern v1.0.1-0.20211109044230-42b52b674af5 MIT license
github.com/json-iterator/go v1.1.12 MIT license
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 MIT license
github.com/kevinburke/ssh_config v1.2.0 MIT license
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de 3-clause BSD license
github.com/magiconair/properties v1.8.1 2-clause BSD license
github.com/mailru/easyjson v0.7.7 MIT license
github.com/mattn/go-colorable v0.1.13 MIT license
github.com/mattn/go-isatty v0.0.20 MIT license
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 Apache License 2.0
github.com/mitchellh/copystructure v1.2.0 MIT license
github.com/mitchellh/go-homedir v1.1.0 MIT license
github.com/mitchellh/go-wordwrap v1.0.1 MIT license
github.com/mitchellh/mapstructure v1.5.0 MIT license
github.com/mitchellh/reflectwalk v1.0.2 MIT license
github.com/moby/spdystream v0.2.0 Apache License 2.0
github.com/moby/term v0.0.0-20221205130635-1aeaba878587 Apache License 2.0
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd Apache License 2.0
github.com/modern-go/reflect2 v1.0.2 Apache License 2.0
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 MIT license
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 3-clause BSD license
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f 3-clause BSD license
github.com/opencontainers/go-digest v1.0.0 Apache License 2.0
github.com/pelletier/go-toml v1.2.0 MIT license
github.com/peterbourgon/diskv v2.0.1+incompatible MIT license
github.com/pjbgf/sha1cd v0.3.0 Apache License 2.0
github.com/pkg/errors v0.9.1 2-clause BSD license
github.com/planetscale/vtprotobuf v0.6.0 3-clause BSD license
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 3-clause BSD license
github.com/prometheus/client_golang v1.17.0 Apache License 2.0
github.com/prometheus/client_model v0.5.0 Apache License 2.0
github.com/prometheus/common v0.45.0 Apache License 2.0
github.com/prometheus/procfs v0.12.0 Apache License 2.0
github.com/russross/blackfriday/v2 v2.1.0 2-clause BSD license
github.com/sergi/go-diff v1.3.1 MIT license
github.com/sirupsen/logrus v1.9.3 MIT license
github.com/skeema/knownhosts v1.2.1 Apache License 2.0
github.com/spf13/afero v1.3.3 Apache License 2.0
github.com/spf13/cast v1.3.0 MIT license
github.com/spf13/cobra v1.8.0 Apache License 2.0
github.com/spf13/jwalterweatherman v1.0.0 MIT license
github.com/spf13/pflag v1.0.5 3-clause BSD license
github.com/spf13/viper v1.7.0 MIT license
github.com/stoewer/go-strcase v1.3.0 MIT license
github.com/stretchr/testify v1.8.4 MIT license
github.com/subosito/gotenv v1.2.0 MIT license
github.com/vladimirvivien/gexe v0.2.0 MIT license
github.com/xanzy/ssh-agent v0.3.3 Apache License 2.0
github.com/xlab/treeprint v1.2.0 MIT license
go.opentelemetry.io/proto/otlp v1.0.0 Apache License 2.0
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca 3-clause BSD license
go.uber.org/goleak v1.3.0 MIT license
go.uber.org/multierr v1.11.0 MIT license
go.uber.org/zap v1.26.0 MIT license
golang.org/x/crypto v0.21.0 3-clause BSD license
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa 3-clause BSD license
golang.org/x/mod v0.15.0 3-clause BSD license
golang.org/x/net v0.23.0 3-clause BSD license
golang.org/x/oauth2 v0.15.0 3-clause BSD license
golang.org/x/sync v0.6.0 3-clause BSD license
golang.org/x/sys v0.18.0 3-clause BSD license
golang.org/x/term v0.18.0 3-clause BSD license
golang.org/x/text v0.14.0 3-clause BSD license
golang.org/x/time v0.5.0 3-clause BSD license
golang.org/x/tools v0.18.0 3-clause BSD license
gomodules.xyz/jsonpatch/v2 v2.4.0 Apache License 2.0
google.golang.org/appengine v1.6.8 Apache License 2.0
google.golang.org/genproto v0.0.0-20231212172506-995d672761c0 Apache License 2.0
google.golang.org/genproto/googleapis/api v0.0.0-20231211222908-989df2bf70f3 Apache License 2.0
google.golang.org/genproto/googleapis/rpc v0.0.0-20240102182953-50ed04b92917 Apache License 2.0
google.golang.org/grpc v1.60.1 Apache License 2.0
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.3.0 Apache License 2.0
google.golang.org/protobuf v1.33.0 3-clause BSD license
gopkg.in/inf.v0 v0.9.1 3-clause BSD license
gopkg.in/ini.v1 v1.51.0 Apache License 2.0
gopkg.in/warnings.v0 v0.1.2 2-clause BSD license
gopkg.in/yaml.v2 v2.4.0 Apache License 2.0, MIT license
gopkg.in/yaml.v3 v3.0.1 Apache License 2.0, MIT license
k8s.io/api v0.30.1 Apache License 2.0
k8s.io/apiextensions-apiserver v0.30.1 Apache License 2.0
k8s.io/apimachinery v0.30.1 3-clause BSD license, Apache License 2.0
k8s.io/apiserver v0.30.1 Apache License 2.0
k8s.io/cli-runtime v0.30.1 Apache License 2.0
k8s.io/client-go v0.30.1 3-clause BSD license, Apache License 2.0
github.com/emissary-ingress/code-generator (modified from k8s.io/code-generator) v0.28.0-alpha.0.0.20231105041308-a20b0cd90dea Apache License 2.0
k8s.io/component-base v0.30.1 Apache License 2.0
k8s.io/gengo v0.0.0-20230829151522-9cce18d56c01 Apache License 2.0
k8s.io/klog/v2 v2.120.1 Apache License 2.0
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 3-clause BSD license, Apache License 2.0
k8s.io/kubectl v0.30.1 Apache License 2.0
k8s.io/kubernetes v1.30.1 Apache License 2.0
k8s.io/metrics v0.30.1 Apache License 2.0
k8s.io/utils v0.0.0-20230726121419-3b25d923346b 3-clause BSD license, Apache License 2.0
sigs.k8s.io/controller-runtime v0.18.2 Apache License 2.0
sigs.k8s.io/controller-tools v0.13.0 Apache License 2.0
sigs.k8s.io/e2e-framework v0.3.0 Apache License 2.0
sigs.k8s.io/gateway-api v0.2.0 Apache License 2.0
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd 3-clause BSD license, Apache License 2.0
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 Apache License 2.0
sigs.k8s.io/kustomize/kyaml v0.14.3 Apache License 2.0, MIT license
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 Apache License 2.0
sigs.k8s.io/yaml v1.4.0 Apache License 2.0, MIT license
Name Version License(s)
---- ------- ----------
the Go language standard library ("std") v1.23.3 3-clause BSD license
cel.dev/expr v0.19.2 Apache License 2.0
dario.cat/mergo v1.0.1 3-clause BSD license
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c MIT license
github.com/MakeNowJust/heredoc v1.0.0 MIT license
github.com/Masterminds/goutils v1.1.1 Apache License 2.0
github.com/Masterminds/semver v1.5.0 MIT license
github.com/Masterminds/sprig v2.22.0+incompatible MIT license
github.com/Microsoft/go-winio v0.6.2 MIT license
github.com/ProtonMail/go-crypto v1.1.5 3-clause BSD license
github.com/antlr4-go/antlr/v4 v4.13.1 3-clause BSD license
github.com/armon/go-metrics v0.4.1 MIT license
github.com/beorn7/perks v1.0.1 MIT license
github.com/blang/semver/v4 v4.0.0 MIT license
github.com/cenkalti/backoff/v4 v4.3.0 MIT license
github.com/census-instrumentation/opencensus-proto v0.4.1 Apache License 2.0
github.com/cespare/xxhash/v2 v2.3.0 MIT license
github.com/chai2010/gettext-go v1.0.3 3-clause BSD license
github.com/cloudflare/circl v1.6.0 3-clause BSD license
github.com/cncf/xds/go v0.0.0-20250121191232-2f005788dc42 Apache License 2.0
github.com/cyphar/filepath-securejoin v0.4.1 3-clause BSD license
github.com/datawire/dlib v1.3.1 Apache License 2.0
github.com/datawire/dtest v0.0.0-20210928162311-722b199c4c2f Apache License 2.0
github.com/LukeShu/go-mkopensource (modified from github.com/datawire/go-mkopensource) v0.0.0-20250206080114-4ff6b660d8d4 Apache License 2.0
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc ISC license
github.com/distribution/reference v0.6.0 Apache License 2.0
github.com/emicklei/go-restful/v3 v3.12.1 MIT license
github.com/emirpasic/gods v1.18.1 2-clause BSD license, ISC license
github.com/envoyproxy/protoc-gen-validate v1.2.1 Apache License 2.0
github.com/evanphx/json-patch/v5 v5.9.11 3-clause BSD license
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f MIT license
github.com/fatih/camelcase v1.0.0 MIT license
github.com/fatih/color v1.18.0 MIT license
github.com/fsnotify/fsnotify v1.8.0 3-clause BSD license
github.com/fxamacker/cbor/v2 v2.7.0 MIT license
github.com/go-errors/errors v1.5.1 MIT license
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 3-clause BSD license
github.com/go-git/go-billy/v5 v5.6.2 Apache License 2.0
github.com/go-git/go-git/v5 v5.13.2 Apache License 2.0
github.com/go-logr/logr v1.4.2 Apache License 2.0
github.com/go-logr/zapr v1.3.0 Apache License 2.0
github.com/go-openapi/jsonpointer v0.21.0 Apache License 2.0
github.com/go-openapi/jsonreference v0.21.0 Apache License 2.0
github.com/go-openapi/swag v0.23.0 Apache License 2.0
github.com/gobuffalo/flect v1.0.3 MIT license
github.com/gogo/protobuf v1.3.2 3-clause BSD license
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 Apache License 2.0
github.com/golang/protobuf v1.5.4 3-clause BSD license
github.com/google/btree v1.1.3 Apache License 2.0
github.com/google/cel-go v0.23.2 3-clause BSD license, Apache License 2.0
github.com/google/gnostic-models v0.6.9 Apache License 2.0
github.com/google/go-cmp v0.6.0 3-clause BSD license
github.com/google/gofuzz v1.2.0 Apache License 2.0
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 Apache License 2.0
github.com/google/uuid v1.6.0 3-clause BSD license
github.com/gorilla/websocket v1.5.3 2-clause BSD license
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 MIT license
github.com/hashicorp/consul/api v1.31.0 Mozilla Public License 2.0
github.com/hashicorp/errwrap v1.1.0 Mozilla Public License 2.0
github.com/hashicorp/go-cleanhttp v0.5.2 Mozilla Public License 2.0
github.com/hashicorp/go-hclog v1.6.3 MIT license
github.com/hashicorp/go-immutable-radix v1.3.1 Mozilla Public License 2.0
github.com/hashicorp/go-metrics v0.5.4 MIT license
github.com/hashicorp/go-multierror v1.1.1 Mozilla Public License 2.0
github.com/hashicorp/go-rootcerts v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/golang-lru v1.0.2 Mozilla Public License 2.0
github.com/hashicorp/hcl v1.0.0 Mozilla Public License 2.0
github.com/hashicorp/serf v0.10.2 Mozilla Public License 2.0
github.com/huandu/xstrings v1.5.0 MIT license
github.com/imdario/mergo v0.3.16 3-clause BSD license
github.com/inconshreveable/mousetrap v1.1.0 Apache License 2.0
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 MIT license
github.com/josharian/intern v1.0.1-0.20211109044230-42b52b674af5 MIT license
github.com/json-iterator/go v1.1.12 MIT license
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 MIT license
github.com/kevinburke/ssh_config v1.2.0 MIT license
github.com/klauspost/compress v1.17.11 3-clause BSD license, Apache License 2.0, MIT license
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de 3-clause BSD license
github.com/magiconair/properties v1.8.9 2-clause BSD license
github.com/mailru/easyjson v0.9.0 MIT license
github.com/mattn/go-colorable v0.1.14 MIT license
github.com/mattn/go-isatty v0.0.20 MIT license
github.com/mitchellh/copystructure v1.2.0 MIT license
github.com/mitchellh/go-homedir v1.1.0 MIT license
github.com/mitchellh/go-wordwrap v1.0.1 MIT license
github.com/mitchellh/mapstructure v1.5.0 MIT license
github.com/mitchellh/reflectwalk v1.0.2 MIT license
github.com/moby/spdystream v0.5.0 Apache License 2.0
github.com/moby/term v0.5.2 Apache License 2.0
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd Apache License 2.0
github.com/modern-go/reflect2 v1.0.2 Apache License 2.0
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 MIT license
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 3-clause BSD license
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f 3-clause BSD license
github.com/opencontainers/go-digest v1.0.0 Apache License 2.0
github.com/pelletier/go-toml/v2 v2.2.3 MIT license
github.com/peterbourgon/diskv v2.0.1+incompatible MIT license
github.com/pjbgf/sha1cd v0.3.2 Apache License 2.0
github.com/pkg/errors v0.9.1 2-clause BSD license
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10 3-clause BSD license
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 3-clause BSD license
github.com/prometheus/client_golang v1.20.5 3-clause BSD license, Apache License 2.0
github.com/prometheus/client_model v0.6.1 Apache License 2.0
github.com/prometheus/common v0.62.0 Apache License 2.0
github.com/prometheus/procfs v0.15.1 Apache License 2.0
github.com/russross/blackfriday/v2 v2.1.0 2-clause BSD license
github.com/sagikazarmark/locafero v0.7.0 MIT license
github.com/sagikazarmark/slog-shim v0.1.0 3-clause BSD license
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 MIT license
github.com/sirupsen/logrus v1.9.3 MIT license
github.com/skeema/knownhosts v1.3.1 Apache License 2.0
github.com/sourcegraph/conc v0.3.0 MIT license
github.com/spf13/afero v1.12.0 Apache License 2.0
github.com/spf13/cast v1.7.1 MIT license
github.com/spf13/cobra v1.8.1 Apache License 2.0
github.com/spf13/pflag v1.0.6 3-clause BSD license
github.com/spf13/viper v1.19.0 MIT license
github.com/stoewer/go-strcase v1.3.0 MIT license
github.com/stretchr/testify v1.10.0 MIT license
github.com/subosito/gotenv v1.6.0 MIT license
github.com/vladimirvivien/gexe v0.4.1 MIT license
github.com/x448/float16 v0.8.4 MIT license
github.com/xanzy/ssh-agent v0.3.3 Apache License 2.0
github.com/xlab/treeprint v1.2.0 MIT license
go.opentelemetry.io/otel v1.34.0 Apache License 2.0
go.opentelemetry.io/otel/trace v1.34.0 Apache License 2.0
go.opentelemetry.io/proto/otlp v1.5.0 Apache License 2.0
go.uber.org/goleak v1.3.0 MIT license
go.uber.org/multierr v1.11.0 MIT license
go.uber.org/zap v1.27.0 MIT license
golang.org/x/crypto v0.32.0 3-clause BSD license
golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c 3-clause BSD license
golang.org/x/mod v0.23.0 3-clause BSD license
golang.org/x/net v0.34.0 3-clause BSD license
golang.org/x/oauth2 v0.26.0 3-clause BSD license
golang.org/x/sync v0.11.0 3-clause BSD license
golang.org/x/sys v0.30.0 3-clause BSD license
golang.org/x/term v0.29.0 3-clause BSD license
golang.org/x/text v0.22.0 3-clause BSD license
golang.org/x/time v0.10.0 3-clause BSD license
golang.org/x/tools v0.29.0 3-clause BSD license
gomodules.xyz/jsonpatch/v2 v2.4.0 Apache License 2.0
google.golang.org/genproto/googleapis/api v0.0.0-20250204164813-702378808489 Apache License 2.0
google.golang.org/genproto/googleapis/rpc v0.0.0-20250204164813-702378808489 Apache License 2.0
google.golang.org/grpc v1.70.0 Apache License 2.0
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1 Apache License 2.0
google.golang.org/protobuf v1.36.4 3-clause BSD license
gopkg.in/evanphx/json-patch.v4 v4.12.0 3-clause BSD license
gopkg.in/inf.v0 v0.9.1 3-clause BSD license
gopkg.in/ini.v1 v1.67.0 Apache License 2.0
gopkg.in/warnings.v0 v0.1.2 2-clause BSD license
gopkg.in/yaml.v2 v2.4.0 Apache License 2.0, MIT license
gopkg.in/yaml.v3 v3.0.1 Apache License 2.0, MIT license
k8s.io/api v0.32.1 Apache License 2.0
k8s.io/apiextensions-apiserver v0.32.1 Apache License 2.0
k8s.io/apimachinery v0.32.1 3-clause BSD license, Apache License 2.0
k8s.io/apiserver v0.32.1 Apache License 2.0
k8s.io/cli-runtime v0.32.1 Apache License 2.0
k8s.io/client-go v0.32.1 3-clause BSD license, Apache License 2.0
github.com/emissary-ingress/code-generator (modified from k8s.io/code-generator) v0.32.2-0.20250205235421-4d5bf4656f71 Apache License 2.0
k8s.io/component-base v0.32.1 Apache License 2.0
k8s.io/component-helpers v0.32.1 Apache License 2.0
k8s.io/controller-manager v0.32.1 Apache License 2.0
k8s.io/gengo/v2 v2.0.0-20250130153323-76c5745d3511 Apache License 2.0
k8s.io/klog/v2 v2.130.1 Apache License 2.0
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 3-clause BSD license, Apache License 2.0, MIT license
k8s.io/kubectl v0.32.1 Apache License 2.0
k8s.io/kubernetes v1.32.1 Apache License 2.0
k8s.io/metrics v0.32.1 Apache License 2.0
k8s.io/utils v0.0.0-20241210054802-24370beab758 3-clause BSD license, Apache License 2.0
sigs.k8s.io/controller-runtime v0.20.1 Apache License 2.0
sigs.k8s.io/controller-tools v0.17.1 Apache License 2.0
sigs.k8s.io/e2e-framework v0.6.0 Apache License 2.0
sigs.k8s.io/gateway-api v0.2.0 Apache License 2.0
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 3-clause BSD license, Apache License 2.0
sigs.k8s.io/kustomize/api v0.19.0 Apache License 2.0
sigs.k8s.io/kustomize/kyaml v0.19.0 Apache License 2.0
sigs.k8s.io/structured-merge-diff/v4 v4.5.0 Apache License 2.0
sigs.k8s.io/yaml v1.4.0 3-clause BSD license, Apache License 2.0, MIT license
The Emissary-ingress Python code makes use of the following Free and Open Source
libraries:
Name Version License(s)
---- ------- ----------
Cython 0.29.37 Apache License 2.0
Flask 3.0.3 3-clause BSD license
Jinja2 3.1.4 3-clause BSD license
MarkupSafe 2.1.5 3-clause BSD license
PyYAML 6.0.1 MIT license
Werkzeug 3.0.3 3-clause BSD license
blinker 1.8.2 MIT license
build 1.2.1 MIT license
certifi 2024.2.2 Mozilla Public License 2.0
charset-normalizer 3.3.2 MIT license
click 8.1.7 3-clause BSD license
durationpy 0.6 MIT license
expiringdict 1.2.2 Apache License 2.0
gunicorn 22.0.0 MIT license
idna 3.7 3-clause BSD license
itsdangerous 2.2.0 3-clause BSD license
jsonpatch 1.33 3-clause BSD license
jsonpointer 2.4 3-clause BSD license
orjson 3.10.3 Apache License 2.0, MIT license
packaging 23.1 2-clause BSD license, Apache License 2.0
pip-tools 7.3.0 3-clause BSD license
prometheus_client 0.20.0 Apache License 2.0
pyparsing 3.0.9 MIT license
pyproject_hooks 1.1.0 MIT license
python-json-logger 2.0.7 2-clause BSD license
requests 2.31.0 Apache License 2.0
semantic-version 2.10.0 2-clause BSD license
typing_extensions 4.11.0 Python Software Foundation license
urllib3 2.2.1 MIT license
Name Version License(s)
---- ------- ----------
Cython 0.29.37 Apache License 2.0
Flask 3.1.0 3-clause BSD license
Jinja2 3.1.6 3-clause BSD license
MarkupSafe 3.0.2 2-clause BSD license
PyYAML 6.0.1 MIT license
Werkzeug 3.1.3 3-clause BSD license
blinker 1.9.0 MIT license
build 1.2.2.post1 MIT license
certifi 2025.1.31 Mozilla Public License 2.0
charset-normalizer 3.4.1 MIT license
click 8.1.8 3-clause BSD license
durationpy 0.9 MIT license
expiringdict 1.2.2 Apache License 2.0
gunicorn 23.0.0 MIT license
idna 3.10 3-clause BSD license
itsdangerous 2.2.0 3-clause BSD license
jsonpatch 1.33 3-clause BSD license
jsonpointer 3.0.0 3-clause BSD license
orjson 3.10.15 Apache License 2.0, MIT license
packaging 23.1 2-clause BSD license, Apache License 2.0
pip-tools 7.3.0 3-clause BSD license
prometheus_client 0.21.1 Apache License 2.0
pyparsing 3.0.9 MIT license
pyproject_hooks 1.2.0 MIT license
python-json-logger 3.2.1 2-clause BSD license
requests 2.32.3 Apache License 2.0
semantic-version 2.10.0 2-clause BSD license
typing_extensions 4.12.2 Python Software Foundation license
urllib3 2.3.0 MIT license


@ -172,7 +172,7 @@ Provides two main functions:
- Generate IR and envoy configs (load_ir function)
- Take each Resource generated in ResourceFetcher and add it to the Config object as strongly typed objects
- Store Config Object in `/ambassador/snapshots/aconf-tmp.json`
- Check Deltas for Mappings cach and determine if we needs to be reset
- Check Deltas for Mappings cache and determine if it needs to be reset
- Create IR with a Config, Cache, and invalidated items
- IR is generated which basically just converts our stuff to strongly typed generic "envoy" items (handling filters, clusters, listeners, removing duplicates, etc...)
- IR is updated in-memory for diagd process


@ -1,4 +1,4 @@
Building Ambassador
===================
The content in this document has been moved to [DEVELOPING.md].
The content in this document has been moved to [CONTRIBUTING.md].


@ -0,0 +1,929 @@
# Developing Emissary-ingress
Welcome to the Emissary-ingress Community!
Thank you for contributing, we appreciate small and large contributions and look forward to working with you to make Emissary-ingress better.
This document is intended for developers looking to contribute to the Emissary-ingress project. In this document you will learn how to get your development environment set up and how to contribute to the project. You will also find more information about the internal components of Emissary-ingress and answers to common questions about working on the project.
> Looking for end user guides for Emissary-ingress? You can check out the end user guides at <https://www.getambassador.io/docs/emissary/>.
After reading this document if you have questions we encourage you to join us on our [Slack channel](https://communityinviter.com/apps/cloud-native/cncf) in the #emissary-ingress channel.
- [Code of Conduct](../Community/CODE_OF_CONDUCT.md)
- [Governance](../Community/GOVERNANCE.md)
- [Maintainers](../Community/MAINTAINERS.md)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Development Setup](#development-setup)
- [Step 1: Install Build Dependencies](#step-1-install-build-dependencies)
- [Step 2: Clone Project](#step-2-clone-project)
- [Step 3: Configuration](#step-3-configuration)
- [Step 4: Building](#step-4-building)
- [Step 5: Push](#step-5-push)
- [Step 6: Deploy](#step-6-deploy)
- [Step 7: Dev-loop](#step-7-dev-loop)
- [What should I do next?](#what-should-i-do-next)
- [Contributing](#contributing)
- [Submitting a Pull Request (PR)](#submitting-a-pull-request-pr)
- [Pull Request Review Process](#pull-request-review-process)
- [Rebasing a branch under review](#rebasing-a-branch-under-review)
- [Fixup commits during PR review](#fixup-commits-during-pr-review)
- [Development Workflow](#development-workflow)
- [Branching Strategy](#branching-strategy)
- [Backport Strategy](#backport-strategy)
- [What if I need a patch to land in a previous supported version?](#what-if-i-need-a-patch-to-land-in-a-previous-supported-version)
- [What if my patch is only for a previous supported version?](#what-if-my-patch-is-only-for-a-previous-supported-version)
- [What if I'm still not sure?](#what-if-im-still-not-sure)
- [Merge Strategy](#merge-strategy)
- [What about merge commit strategy?](#what-about-merge-commit-strategy)
- [Contributing to the Docs](#contributing-to-the-docs)
- [Advanced Topics](#advanced-topics)
- [Running Emissary-ingress internals locally](#running-emissary-ingress-internals-locally)
- [Setting up diagd](#setting-up-diagd)
- [Changing the ambassador root](#changing-the-ambassador-root)
- [Getting envoy](#getting-envoy)
- [Shutting up the pod labels error](#shutting-up-the-pod-labels-error)
- [Extra credit](#extra-credit)
- [Debugging and Developing Envoy Configuration](#debugging-and-developing-envoy-configuration)
- [Making changes to Envoy](#making-changes-to-envoy)
- [1. Preparing your machine](#1-preparing-your-machine)
- [2. Setting up your workspace to hack on Envoy](#2-setting-up-your-workspace-to-hack-on-envoy)
- [3. Hacking on Envoy](#3-hacking-on-envoy)
- [4. Building and testing your hacked-up Envoy](#4-building-and-testing-your-hacked-up-envoy)
- [5. Test Devloop](#5-test-devloop)
- [6. Protobuf changes](#6-protobuf-changes)
- [7. Finalizing your changes](#7-finalizing-your-changes)
- [8. Final Checklist](#8-final-checklist)
- [Developing Emissary-ingress (Maintainers-only advice)](#developing-emissary-ingress-maintainers-only-advice)
- [Updating license documentation](#updating-license-documentation)
- [Upgrading Python dependencies](#upgrading-python-dependencies)
- [FAQ](#faq)
- [How do I find out what build targets are available?](#how-do-i-find-out-what-build-targets-are-available)
- [How do I develop on a Mac with Apple Silicon?](#how-do-i-develop-on-a-mac-with-apple-silicon)
- [How do I develop on Windows using WSL?](#how-do-i-develop-on-windows-using-wsl)
- [How do I test using a private Docker repository?](#how-do-i-test-using-a-private-docker-repository)
- [How do I change the loglevel at runtime?](#how-do-i-change-the-loglevel-at-runtime)
- [Can I build from a docker container instead of on my local computer?](#can-i-build-from-a-docker-container-instead-of-on-my-local-computer)
- [How do I clear everything out to make sure my build runs like it will in CI?](#how-do-i-clear-everything-out-to-make-sure-my-build-runs-like-it-will-in-ci)
- [My editor is changing `go.mod` or `go.sum`, should I commit that?](#my-editor-is-changing-gomod-or-gosum-should-i-commit-that)
- [How do I debug "This should not happen in CI" errors?](#how-do-i-debug-this-should-not-happen-in-ci-errors)
- [How do I run Emissary-ingress tests?](#how-do-i-run-emissary-ingress-tests)
- [How do I type check my python code?](#how-do-i-type-check-my-python-code)
## Development Setup
This section provides the steps for getting started developing on Emissary-ingress. There are a number of prerequisites that need to be set up. In general, our tooling tries to detect any missing requirements and provide a friendly error message. If you ever find that this is not the case, please file an issue.
> **Note:** To enable developers contributing on Macs with Apple Silicon, we ensure that the artifacts are built for `linux/amd64`
> rather than the host `linux/arm64` architecture. This can be overridden using the `BUILD_ARCH` environment variable. Pull requests are welcome :).
### Step 1: Install Build Dependencies
Here is a list of tools that the build system uses to generate the build artifacts, package them into containers, generate CRDs and Helm charts, and run tests.
- git
- make
- docker (make sure you can run docker commands as your dev user without sudo)
- bash
- rsync
- golang (see `go.mod` for the current version)
- python (>=3.10.9)
- kubectl
- a kubernetes cluster (you need permissions to create resources, e.g. CRDs, Deployments, Services, etc.)
- a Docker registry
- bsdtar (Provided by libarchive-tools on Ubuntu 19.10 and newer)
- gawk
- jq
- helm
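Before going further, it can save time to confirm these tools are on your `PATH`. Here is a minimal sketch of such a check (the tool list mirrors the one above; adjust for your distro, e.g. `bsdtar` comes from `libarchive-tools` on newer Ubuntu):

```bash
# Report any missing build dependencies before starting a build.
missing=""
for tool in git make docker bash rsync go python3 kubectl bsdtar gawk jq helm; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all build dependencies found"
else
  echo "missing:$missing"
fi
```

This only checks that the binaries exist, not their versions; the build system itself does more thorough detection.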
### Step 2: Clone Project
If you haven't already, now is a good time to clone the project by running the following commands:
```bash
# clone to your preferred folder
git clone https://github.com/emissary-ingress/emissary.git
# navigate to project
cd emissary
```
### Step 3: Configuration
You can configure the build system using environment variables. Two variables are required: one sets the container registry and the other sets the kubeconfig to use.
> **Important**: the test and build system perform destructive operations against your cluster. Therefore, we recommend that you
> use a development cluster. Setting the DEV_KUBECONFIG variable described below ensures you don't accidentally perform actions on a production cluster.
Open a terminal in the location where you cloned the repository and run the following commands:
```bash
# set container registry using `export DEV_REGISTRY=<your-registry>`
# note: you need to be logged in and have permissions to push
# Example:
export DEV_REGISTRY=docker.io/parsec86
# set kube config file using `export DEV_KUBECONFIG=<dev-kubeconfig>`
# your cluster needs the ability to read from the configured container registry
export DEV_KUBECONFIG="$HOME/.kube/dev-config.yaml"
```
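As a quick sanity check, you can fail fast if either variable is missing before you start a build. A small sketch (the example values mirror the commands above; substitute your own):

```bash
# Example values from the commands above; replace with your own.
export DEV_REGISTRY=docker.io/parsec86
export DEV_KUBECONFIG="$HOME/.kube/dev-config.yaml"

# Abort with a clear message if either variable is empty or unset.
: "${DEV_REGISTRY:?DEV_REGISTRY must point at your container registry}"
: "${DEV_KUBECONFIG:?DEV_KUBECONFIG must point at your dev cluster kubeconfig}"

# Warn (but don't fail) if the kubeconfig file doesn't exist yet.
[ -f "$DEV_KUBECONFIG" ] || echo "warning: no file at $DEV_KUBECONFIG"
```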
### Step 4: Building
The build system for this project leverages `make` and multi-stage `docker` builds to produce the following containers:
- `emissary.local/emissary` - single deployable container for Emissary-ingress
- `emissary.local/kat-client` - test client container used for testing
- `emissary.local/kat-server` - test server container used for testing
Using the terminal session you opened in step 3, run the following commands:
```bash
# This will pull and build the necessary docker containers and produce multiple containers.
# If this is the first time running this command it will take a little bit while the base images are built up and cached.
make images
# verify containers were successfully created, you should also see some of the intermediate builder containers as well
docker images | grep emissary.local
```
*What just happened?*
The build system generated a build container that pulled in Envoy and the build dependencies, built various binaries from this project, and packaged them into a single deployable container. More information on this can be found in the [Architecture Document](ARCHITECTURE.md).
### Step 5: Push
Now that you have successfully built the containers, it's time to push them to the container registry you set up in step 3.
In the same terminal session you can run the following command:
```bash
# re-tags the images and pushes them to your configured container registry
# docker must be able to login to your registry and you have to have push permissions
make push
# you can view the newly tagged images by running
docker images | grep <your-registry>
# alternatively, we have two make targets that provide information as well
make env
# or in a bash export friendly format
make export
```
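Because `make export` prints in an export-friendly format, one common pattern is to `eval` its output into your current shell. A sketch, where the sample output line is an assumption — run `make export` in the repo to see the real variables:

```bash
# Stand-in for the output of `make export` (assumed to be `export KEY=value` lines).
make_export_output='export DEV_REGISTRY=docker.io/parsec86'

# Load those variables into the current shell session.
eval "$make_export_output"
echo "registry: $DEV_REGISTRY"
```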
### Step 6: Deploy
Now it's time to deploy the containers to the Kubernetes cluster you configured in step 3. Hopefully, it is already becoming apparent that we love to leverage Make to handle the complexity for you :).
```bash
# generate helm charts and Kubernetes configs with your container swapped in, and apply them to your cluster
make deploy
# check your cluster to see if emissary is running
# note: kubectl doesn't know about DEV_KUBECONFIG so you may need to ensure KUBECONFIG is pointing to the correct cluster
kubectl get pod -n ambassador
```
🥳 If all has gone well, you should now have a development environment set up for building and testing Emissary-ingress.
### Step 7: Dev-loop
Now that you are all set up and able to deploy a development container of Emissary-ingress to a cluster, it is time to start making some changes.
Look up an issue that you want to work on, assign it to yourself, and if you have any questions feel free to ping us in the #emissary-dev channel on Slack.
Make a change to Emissary-ingress, and when you want to test it in a live cluster, just re-run `make deploy`.
This will:
- recompile the go binary
- rebuild containers
- push them to the docker registry
- rebuild Helm charts and manifests
- reapply the manifests to the cluster and re-deploy Emissary-ingress
> *Do I have to run the other make targets `make images` or `make push` ?*
> No, you don't have to, because `make deploy` runs those targets for you. The steps above were meant to introduce you to the various make targets so that you are aware of them and have options when developing.
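Put together, a typical dev-loop iteration looks something like this (the file path is illustrative):

```shell
# Edit, deploy, verify.
$EDITOR cmd/entrypoint/entrypoint.go   # make your change (any file; path illustrative)
make deploy                            # rebuild, push, and redeploy in one step
kubectl get pods -n ambassador         # watch the new pod roll out
```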
### What should I do next?
Now that you have your dev system up and running, here is some additional content that we recommend you check out:
- [Emissary-ingress Architecture](ARCHITECTURE.md)
- [Contributing Code](#contributing)
- [Contributing to Docs](#contributing-to-the-docs)
- [Advanced Topics](#advanced-topics)
- [FAQ](#faq)
## Contributing
This section goes over how to contribute code to the project and how to get started contributing. More information on how we manage our branches can be found below in [Development Workflow](#development-workflow).
Before contributing, be sure to read our [Code of Conduct](../Community/CODE_OF_CONDUCT.md) and [Governance](../Community/GOVERNANCE.md) to get an understanding of how our project is structured.
### Submitting a Pull Request (PR)
> If you haven't set up your development environment then please see the [Development Setup](#development-setup) section.
When submitting a Pull Request (PR) here are a set of guidelines to follow:
1. Search for an [existing issue](https://github.com/emissary-ingress/emissary/issues) or create a [new issue](https://github.com/emissary-ingress/emissary/issues/new/choose).
2. Be sure to describe your proposed change and any open questions you might have in the issue. This allows us to collect historical context around an issue, provide feedback on the proposed solution, and discuss which versions a fix should target.
3. If you haven't done so already, create a fork of the repository and clone it locally:
```shell
git clone <your-fork>
```
4. Cut a new patch branch from `master`:
```shell
git checkout master
git checkout -b my-patch-branch master
```
5. Make necessary code changes.
- Make sure you include test coverage for the change, see [How do I run Tests](#how-do-i-run-emissary-ingress-tests)
- Ensure code linting is passing by running `make lint`
- Code changes must have associated documentation updates.
- Make changes in <https://github.com/datawire/ambassador-docs> as necessary, and include a reference to those changes in the pull request for your code changes.
- See [Contributing to Docs](#contributing-to-the-docs) for more details.
> Smaller pull requests are easier to review and get merged faster, reducing the potential for merge conflicts, so it is recommended to keep them small and focused.
6. Commit your changes using descriptive commit messages.
- we **require** that all commits are signed off, so please be sure to commit using the `--signoff` flag, e.g. `git commit --signoff`
- the commit message should summarize the fix and the motivation for it; include the issue # that the fix addresses
- we are OK with multiple commits, but we may ask you to squash some commits during the PR review process
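For example, a signed-off commit following these guidelines might look like this (the file name and issue number are illustrative):

```shell
# Stage the change and commit with a sign-off and a descriptive message.
git add pkg/somefile.go
git commit --signoff -m "Fix AuthService CRD round-trip conversion

The conversion dropped a field when converting between versions.
Fixes #1234."
```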
7. Push your branch to your forked repository:
> It is good practice to make sure your change is rebased on the latest master so that it will merge cleanly, so if it has been a while since you rebased on upstream, do it now to avoid merge conflicts.
```shell
git push origin my-patch-branch
```
8. Submit a Pull Request from your fork targeting upstream `emissary/master`.
Thanks for your contribution! One of the [Maintainers](../Community/MAINTAINERS.md) will review your PR and discuss any changes that need to be made.
### Pull Request Review Process
This is an opportunity for the Maintainers to review the code for accuracy and ensure that it solves the problem outlined in the issue. This is an iterative process meant to ensure the quality of the code base. During this process we may ask you to break the Pull Request up into smaller changes, squash commits, rebase on master, etc.
Once you have been provided feedback:
1. Make the required updates to the code per the review discussion
2. Retest the code and ensure linting is still passing
3. Commit the changes and push to GitHub
- see [Fixup Commits](#fixup-commits-during-pr-review) below
4. Repeat these steps as necessary
Once you have **two approvals** then one of the Maintainers will merge the PR.
:tada: Thank you for contributing and being a part of the Emissary-ingress Community!
### Rebasing a branch under review
Often the base branch will have new commits added to it, which may cause merge conflicts with your open pull request. First, a good rule of thumb is to keep pull requests small so that these conflicts are less likely to occur, but this is not always possible when multiple people are working on similar features. Second, if you are just addressing commit feedback, a `fixup` commit is also a good option so that the reviewers can see what changed since their last review.
If you need to address merge conflicts, it is preferred that you **rebase** on the base branch rather than merge the base branch into the feature branch. This ensures that when the PR is merged it will replay cleanly on top of the base branch, maintaining a clean linear history.
To do a rebase you can do the following:
```shell
# add emissary.git as a remote repository, only needs to be done once
git remote add upstream https://github.com/emissary-ingress/emissary.git
# fetch upstream master
git fetch upstream master
# checkout local master and update it from upstream master
git checkout master
git pull --ff-only upstream master
# rebase patch branch on local master
git checkout my-patch-branch
git rebase -i master
```
Once the merge conflicts are addressed and you are ready to push the code up, you will need to force push your changes: during the rebase the commit shas are rewritten, so your branch has diverged from your remote fork (GitHub).
To force push a branch you can:
```shell
git push origin my-patch-branch --force-with-lease
```
> Note: `--force-with-lease` is recommended over `--force` because it is safer: it checks whether the remote branch had new commits added during your rebase. You can read more detail here: <https://itnext.io/git-force-vs-force-with-lease-9d0e753e8c41>
### Fixup commits during PR review
One of the major downsides to rebasing a branch is that it requires force pushing over the remote (Github) which then marks all the existing review history outdated. This makes it hard for a reviewer to figure out whether or not the new changes addressed the feedback.
One way you can help the reviewer out is by using **fixup** commits. Fixup commits are special git commits whose subject is that of an earlier commit prefixed with `fixup!`. Git provides tools for easily creating these, and for squashing them once the PR review process is done.
Since this is a new commit on top of the other commits, you will not lose your previous review, and the new commit can be reviewed independently to determine whether it addressed the feedback correctly. Then, once the reviewers are happy, we will ask you to squash them so that we maintain a clean linear history when it is merged.
Here is a quick read on it: <https://jordanelver.co.uk/blog/2020/06/04/fixing-commits-with-git-commit-fixup-and-git-rebase-autosquash/>
TL;DR;
```shell
# make code change and create new commit
git commit --fixup <sha>
# push to Github for review
git push
# reviewers are happy and ask you to do a final rebase before merging
git rebase -i --autosquash master
# final push before merging
git push --force-with-lease
```
## Development Workflow
This section introduces the development workflow used for this repository. It is recommended that Contributors, Release Engineers, and Maintainers all familiarize themselves with this content.
### Branching Strategy
This repository follows a trunk-based development workflow. Depending on what article you read there are slight nuances to this, so this section outlines how this repository interprets that workflow.
The most important branch is `master`: this is our **Next Release** version, and it should always be in a shippable state. This means that CI should be green and that at any point we can decide to ship a new release from it. In a traditional trunk-based development workflow, developers are encouraged to land partially finished work daily and to keep that work hidden behind feature flags. This repository does **NOT** follow that; instead, if code lands on master it is something we are comfortable shipping.
We ship release candidate (RC) builds from the `master` branch (current major) and also from `release/v{major.minor}` branches (last major version) during our development cycles. Therefore, it is important that it remains shippable at all times!
When we do a final release then we will cut a new `release/v{major.minor}` branch. These are long lived release branches which capture a snapshot in time for that release. For example here are some of the current release branches (as of writing this):
- release/v3.2
- release/v3.1
- release/v3.0
- release/v2.4
- release/v2.3
- release/v1.14
These branches contain the codebase as it was at the time the release was done. They have branch protection enabled to ensure that they are not removed or accidentally overwritten. If we need to do a security fix or bug patch, we may cut a new `.Z` patch release from an existing release branch. For example, the `release/v2.4` branch is currently on `2.4.1`.
As you can see, we currently support multiple major versions of Emissary-ingress; you can read more about our [End-of-Life Policy](https://www.getambassador.io/docs/emissary/latest/about/aes-emissary-eol/).
For more information on our current RC and Release process you can find that in our [Release Wiki](https://github.com/emissary-ingress/emissary/wiki).
### Backport Strategy
Since we follow a trunk-based development workflow, the majority of the time your patch branch will be based off of `master`, and most Pull Requests will target `master`.
This ensures that we do not miss bug fixes or features for the "Next" shippable release and simplifies the mental-model for deciding how to get started contributing code.
#### What if I need a patch to land in a previous supported version?
Let's say I have a bug fix for CRD round-trip conversion for AuthService, which affects both `v2.y` and `v3.y`.
First within the issue we should discuss what versions we want to target. This can depend on current cycle work and any upcoming releases we may have.
The general rules we follow are:
1. land patch in "next" version which is `master`
2. backport patch to any `release/v{major}.{minor}` branches
So, let's say we discuss it and decide that the "next" major version is a long way away, so we want to do a z patch release on our current minor version (`v3.2`) and also a z patch release on our last supported major version (`v2.4`).
This means that these patches need to land in three separate branches:
1. `master` - next release
2. `release/v3.2` - patch release
3. `release/v2.4` - patch release
In this scenario, we first ask you to land the patch in the `master` branch and then provide separate PRs with the commits backported onto the `release/v*` branches.
> Recommendation: using `git cherry-pick -x` will add the source commit sha to the commit message. This helps with tracing work back to the original commit.
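A backport might then look like this (branch names and sha are illustrative):

```shell
# Create a backport branch from the release branch and cherry-pick the fix.
git fetch upstream
git checkout -b my-backport-v3.2 upstream/release/v3.2
# -x records "(cherry picked from commit ...)" in the new commit message
git cherry-pick -x 0123abc
git push origin my-backport-v3.2
```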
#### What if my patch is only for a previous supported version?
Although this should be an edge case, it does happen that the code has diverged enough that a fix may only be relevant to an existing supported version. In these cases we may need to do a patch release for that older supported version.
A good example: if we were to find a bug in the Envoy v2 protocol configuration, we would only want to target the v2 release.
In this scenario, the base branch that we would create our feature branch off from would be the latest `minor` version for that release. As of writing this, that would be the `release/v2.4` branch. We would **not** need to target master.
But let's say that during our fix we notice other things that need to be addressed and would also need to be fixed in `master`. Then you need to submit a **separate Pull Request** that should first land on master and then follow the normal backporting process for the other patches.
#### What if I'm still not sure?
This is what issue discussions and discussion in Slack are for, so feel free to ping us in the `#emissary-dev` channel on Slack to discuss directly with us.
### Merge Strategy
> The audience for this section is the Maintainers but also beneficial for Contributors so that they are familiar with how the project operates.
Having a clean linear commit history for a repository makes it easier to understand what is being changed and reduces the mental load for newcomers to the project.
To maintain a clean linear commit history the following rules should be followed:
First, always rebase the patch branch onto the base branch. This means **NO** merge commits from merging the base branch into the patch branch. This can be accomplished using `git rebase`.
```shell
# first, make sure you pull latest upstream changes
git fetch upstream
git checkout master
git pull --ff-only upstream master
# checkout patch branch and rebase interactive
# you may have merge conflicts you need to resolve
git checkout my-patch-branch
git rebase -i master
```
> Note: this does rewrite your commit shas, so be aware when sharing branches with co-workers.
Once the Pull Request is reviewed and has **two approvals**, a Maintainer can merge. Maintainers should prefer the following merge strategies:
1. rebase and merge
2. squash merge
When `rebase and merge` is used, your commits are replayed on top of the base branch, creating a clean linear history. This maintains all the commits from the Pull Request. In most cases this should be the **preferred** merge strategy.
When a Pull Request has lots of fixup commits or PR-feedback fixes, you should ask the Contributor to squash them as part of the PR process.
If the contributor is unable to squash them, using a `squash merge` can make sense. **IMPORTANT**: when this happens, the commit message must be cleaned up and not just blindly accepted as proposed by GitHub. Since it is easy to miss that cleanup step, this strategy should be used less frequently than `rebase and merge`.
#### What about merge commit strategy?
> The audience for this section is the Maintainers but also beneficial for Contributors so that they are familiar with how the project operates.
When maintaining a linear commit history, each commit tells the story of what was changed in the repository. Using `merge commits` adds an additional commit to the history that is not necessary, because the commit history and PR history already tell the story.
Now, `merge commits` can be useful when you are concerned with not rewriting commit shas. Because the current release process includes `rel/v` branches that are tagged and merged into `release/v` branches, we must use a `merge commit` when merging these branches. This ensures that the commit sha a Git tag points at still exists once merged into the `release/v` branch.
## Contributing to the Docs
The Emissary-ingress community will all benefit from having documentation that is useful and correct. If you have found an issue with the end user documentation, then please help us out by submitting an issue and/or pull request with a fix!
The end user documentation for Emissary-ingress lives in a different repository and can be found at <https://github.com/datawire/ambassador-docs>.
See this repository for details on how to contribute to either a `pre-release` or already-released version of Emissary-ingress.
## Advanced Topics
This section is for more advanced topics that provide more detailed instructions. Make sure you go through the Development Setup and read the Architecture document before exploring these topics.
### Running Emissary-ingress internals locally
The main entrypoint is written in go. It strives to be as compatible as possible
with the normal go toolchain. You can run it with:
```bash
go run ./cmd/busyambassador entrypoint
```
Of course, just because you can run it this way does not mean it will succeed.
The entrypoint needs to launch `diagd` and `envoy` in order to function, and it
also expects to be able to write to the `/ambassador` directory.
#### Setting up diagd
If you want to hack on diagd, it's easiest to set up a virtualenv with an editable
copy and launch your `go run` from within that virtualenv. Note that these
instructions depend on the virtualenvwrapper
(<https://virtualenvwrapper.readthedocs.io/en/latest/>) package:
```bash
# Create a virtualenv named venv with all the python requirements
# installed.
python3 -m venv venv
. venv/bin/activate
# If you're doing this in Datawire's apro.git, then:
cd ambassador
# Update pip and install dependencies
pip install --upgrade pip
pip install orjson # see below
pip install -r builder/requirements.txt
# Create an editable installation of ambassador:
pip install -e python/
# Check that we do indeed have diagd in our path.
which diagd
# If you're doing this in Datawire's apro.git, then:
cd ..
```
(Note: it shouldn't be necessary to install `orjson` by hand. The fact that it is
at the moment is an artifact of the way Ambassador builds currently happen.)
#### Changing the ambassador root
You should now be able to launch ambassador if you set the
`ambassador_root` environment variable to a writable location:

```bash
ambassador_root=/tmp go run ./cmd/busyambassador entrypoint
```
#### Getting envoy
If you do not have envoy in your path already, the entrypoint will use
docker to run it.
#### Shutting up the pod labels error
An astute observer of the logs will notice that ambassador complains
vociferously that pod labels are not mounted in the ambassador
container. To reduce this noise, you can:
```bash
mkdir /tmp/ambassador-pod-info && touch /tmp/ambassador-pod-info/labels
```
#### Extra credit
When you run ambassador locally, it will configure itself exactly as it
would in the cluster. That means that, with two caveats, you can actually
interact with it and it will function normally:
1. You need to run `telepresence connect` or equivalent so it can
connect to the backend services in its configuration.
2. You need to supply the host header when you talk to it.
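For example, once connected you can hit a route directly (the hostname and port here are illustrative; use whatever your configuration expects):

```shell
# Supply the Host header your Mapping expects when talking to the
# locally-running ambassador.
curl -H "Host: example.example.com" http://localhost:8080/backend/
```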
### Debugging and Developing Envoy Configuration
Envoy configuration is generated by the ambassador compiler. Debugging
the ambassador compiler by running it in kubernetes is very slow since
we need to push both the code and any relevant kubernetes resources
into the cluster. The following sections will provide tips for improving
this development experience.
### Making changes to Envoy
Emissary-ingress is built on top of Envoy and leverages a vendored version of Envoy (*we track upstream very closely*). This section will go into how to make changes to the Envoy that is packaged with Emissary-ingress.
This is a bit more complex than anyone likes, but here goes:
#### 1. Preparing your machine
Building and testing Envoy can be very resource intensive. A laptop
often can build Envoy... if you plug in an external hard drive, point
a fan at it, and leave it running overnight and most of the next day.
At Ambassador Labs, we'll often spin up a temporary build machine in GCE, so
that we can build it very quickly.
As of Envoy 1.15.0, we've measured the resource use to build and test
it as:
> | Command | Disk Size | Disk Used | Duration[1] |
> |--------------------|-----------|-----------|-------------|
> | `make update-base` | 450G | 12GB | ~11m |
> | `make check-envoy` | 450G | 424GB | ~45m |
>
> [1] On a "Machine type: custom (32 vCPUs, 512 GB memory)" VM on GCE,
> with the following entry in its `/etc/fstab`:
>
> ```bash
> tmpfs:docker /var/lib/docker tmpfs size=450G 0 0
> ```
If you have the RAM, we've seen huge speed gains from doing the builds
and tests on a RAM disk (see the `/etc/fstab` line above).
#### 2. Setting up your workspace to hack on Envoy
1. From your `emissary.git` checkout, get Emissary-ingress's current
version of the Envoy sources, and create a branch from that:
```shell
make $PWD/_cxx/envoy
git -C _cxx/envoy checkout -b YOUR_BRANCHNAME
```
2. To build Envoy in FIPS mode, set the following variable:
```shell
export FIPS_MODE=true
```
It is important to note that while building Envoy in FIPS mode is
required for FIPS compliance, additional steps may be necessary.
Emissary does not claim to be FIPS compliant or certified.
See [here](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/security/ssl#fips-140-2) for more information on FIPS and Envoy.
> _NOTE:_ FIPS_MODE is NOT supported by the Emissary-ingress maintainers, but we provide it for developers as a convenience
#### 3. Hacking on Envoy
Modify the sources in `./_cxx/envoy/`, or update the branch and/or `ENVOY_COMMIT` as necessary in `./_cxx/envoy.mk`.
#### 4. Building and testing your hacked-up Envoy
> See `./_cxx/envoy.mk` for the full list of targets.
Multiple phony targets are provided so that developers can run just the steps they are interested in; here are a few of the key ones:
- `make update-base`: will perform all the steps necessary to verify and build Envoy, build the docker images, push the images to the container repository, and compile the updated protos.
- `make build-envoy`: will build the envoy binaries using the same build container as the upstream Envoy project. Build outputs are mounted to the `_cxx/envoy-docker-build` directory and Bazel will write the results there.
- `make build-base-envoy-image`: will use the release outputs from building envoy to generate a new `base-envoy` container which is then used in the main emissary-ingress container build.
- `make push-base-envoy`: will push the built container to the remote container repository.
- `make check-envoy`: will use the build docker container to run the Envoy test suite against the currently checked out envoy in the `_cxx/envoy` folder.
- `make envoy-shell`: will run the envoy build container and open a bash shell session. The `_cxx/envoy` folder is volume-mounted into the container, and the session runs as the `envoybuild` user rather than root, to keep builds hermetic.
#### 5. Test Devloop
Running the Envoy test suite will compile all the test targets. This is a slow process and can use lots of disk space.
The Envoy Inner Devloop for build and testing:
- You can make a change to Envoy code and run the whole test suite by just calling `make check-envoy`.
- You can run a specific test instead of the whole test suite by setting the `ENVOY_TEST_LABEL` environment variable.
- For example, to run just the unit tests in `test/common/network/listener_impl_test.cc`, you should run:
```shell
ENVOY_TEST_LABEL='//test/common/network:listener_impl_test' make check-envoy
```
- Alternatively, you can run `make envoy-shell` to get a bash shell into the Docker container that does the Envoy builds and you are free to interact with `Bazel` directly.
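Inside that shell, the direct Bazel equivalent of the example above would be something along these lines (a sketch; exact flags depend on the build setup):

```shell
# Run the same test label directly with Bazel from inside `make envoy-shell`.
bazel test //test/common/network:listener_impl_test
```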
Interpreting the test results:
- If you see the following message, don't worry, it's harmless; the tests still ran:
```text
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
```
The message means that the test passed, but it passed too
quickly, and Bazel is suggesting that you declare it as smaller.
Something along the lines of "This test only took 2s, but you
declared it as being in the 60s-300s ('moderate') bucket,
consider declaring it as being in the 0s-60s ('short')
bucket".
Don't be confused (as I was) into thinking that it was saying
that the test was too big and was skipped and that you need to
throw more hardware at it.
- **Build or test Emissary-ingress** with the usual `make` commands, with
the exception that you MUST run `make update-base` first whenever
Envoy needs to be recompiled; it won't happen automatically. So
`make test` to build-and-test Emissary-ingress would become
`make update-base && make test`, and `make images` to just build
Emissary-ingress would become `make update-base && make images`.
Testing your Envoy changes with Emissary-ingress:
- Either run `make update-base` to build and push a new base container, and then run `make test` for the Emissary-ingress test suite.
- If you do not want to push the container you can instead:
- Build Envoy - `make build-envoy`
- Build container - `make build-base-envoy-image`
- Test Emissary - `make test`
#### 6. Protobuf changes
If you made any changes to the Protocol Buffer files, or if you bumped the Envoy version, then you
should re-compile the Protobufs so that they are available and checked in
to the emissary.git repository.
```sh
make compile-envoy-protos
```
This will copy over the raw proto files, then compile and copy the generated Go code into the emissary-ingress repository.
#### 7. Finalizing your changes
> NOTE: we are no longer accepting PR's in `datawire/envoy.git`.
If you have custom changes, land them in your custom envoy repository and update the `ENVOY_COMMIT` and `ENVOY_DOCKER_REPO` variables in `_cxx/envoy.mk` so that the image will be pushed to the correct repository.
Then run `make update-base`, which does all the rest; assuming it succeeds, you are all good.
**For maintainers:**
You will want to make sure that the image is pushed to the backup container registries:
```shell
# upload image to the mirror in GCR
SHA=GET_THIS_FROM_THE_make_update-base_OUTPUT
TAG="envoy-0.$SHA.opt"
docker pull "docker.io/emissaryingress/base-envoy:$TAG"
docker tag "docker.io/emissaryingress/base-envoy:$TAG" "gcr.io/datawire/ambassador-base:$TAG"
docker push "gcr.io/datawire/ambassador-base:$TAG"
```
#### 8. Final Checklist
**For Maintainers Only**
Here is a checklist of things to do when bumping the `base-envoy` version:
- [ ] The image has been pushed to...
- [ ] `docker.io/emissaryingress/base-envoy`
- [ ] `gcr.io/datawire/ambassador-base`
- [ ] The `datawire/envoy.git` commit has been tagged as `datawire-$(git describe --tags --match='v*')`
(the `--match` is to prevent `datawire-*` tags from stacking on each other).
- [ ] It's been tested with...
- [ ] `make check-envoy`
The `check-envoy-version` CI job will double-check all of these things, with the exception of running
the Envoy tests. If `check-envoy-version` is failing, double-check the above, fix anything that's
wrong, and re-run the job.
### Developing Emissary-ingress (Maintainers-only advice)
At the moment, these techniques will only work for Maintainers, mostly
because they require credentials to access internal resources, though in
several cases we're working to fix that.
#### Updating license documentation
When new dependencies are added or existing ones are updated, run
`make generate` and commit the changes to `DEPENDENCIES.md` and
`DEPENDENCY_LICENSES.md`.
#### Upgrading Python dependencies
Delete `python/requirements.txt`, then run `make generate`.
If there are some dependencies you don't want to upgrade, but you want to
upgrade everything else, then:
1. Remove from `python/requirements.txt` all of the entries except
for those you want to pin.
2. Delete `python/requirements.in` (if it exists).
3. Run `make generate`.
> **Note**: If you are updating orjson, you will also need to update `docker/base-python/Dockerfile` for the new version before running `make generate`. orjson uses Rust bindings, and its default wheels on PyPI rely on glibc; because our base Python image is Alpine-based, orjson is built from scratch with rustc to produce a musl-compatible version.
> :warning: You may run into an error when running `make generate` where it can't detect the licenses for new or upgraded dependencies, which it needs in order to properly generate DEPENDENCIES.md and DEPENDENCY_LICENSES.md. If that is the case, you may also have to update `build-aux/tools/src/py-mkopensource/main.go:parseLicenses` for any license changes, then run `make generate` again.
## FAQ
This section contains a set of Frequently Asked Questions that may answer a question you have. Also, feel free to ping us in Slack.
### How do I find out what build targets are available?
Use `make help` and `make targets` to see what build targets are
available along with documentation for what each target does.
### How do I develop on a Mac with Apple Silicon?
To ensure that developers using a Mac with Apple Silicon can contribute, the build system ensures
that the build artifacts are `linux/amd64` rather than the host architecture. This behavior can be overridden
using the `BUILD_ARCH` environment variable (e.g. `BUILD_ARCH=linux/arm64 make images`).
### How do I develop on Windows using WSL?
- [WSL 2](https://learn.microsoft.com/en-us/windows/wsl/)
- [Docker Desktop for Windows](https://docs.docker.com/desktop/windows/wsl/)
- [VS Code](https://code.visualstudio.com/)
### How do I test using a private Docker repository?
If you are pushing your development images to a private Docker repo,
then:
```sh
export DEV_USE_IMAGEPULLSECRET=true
export DOCKER_BUILD_USERNAME=...
export DOCKER_BUILD_PASSWORD=...
```
and the test machinery should create an `imagePullSecret` from those Docker credentials so that it can pull the images.
### How do I change the loglevel at runtime?
```console
curl localhost:8877/ambassador/v0/diag/?loglevel=debug
```
Note: This affects diagd and Envoy, but NOT the AES `amb-sidecar`.
See the AES `CONTRIBUTING.md` for how to do that.
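To return to normal verbosity afterwards (assuming `info` is your usual level):

```console
curl localhost:8877/ambassador/v0/diag/?loglevel=info
```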
### Can I build from a docker container instead of on my local computer?
If you want to build within a container instead of setting up dependencies on your local machine, you can run the build within a docker container and leverage "Docker in Docker".
1. `docker pull docker:latest`
2. `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -it docker:latest sh`
3. `apk add --update --no-cache bash build-base go curl rsync python3 python2 git libarchive-tools gawk jq`
4. `git clone https://github.com/emissary-ingress/emissary.git && cd emissary`
5. `make images`
Steps 1 and 2 are run on your machine, and steps 3 - 5 are run from within the docker container. The base image is a "Docker in Docker" image, run with `-v /var/run/docker.sock:/var/run/docker.sock` in order to connect to your local daemon from the docker inside the container. More info on Docker in Docker [here](https://hub.docker.com/_/docker).
The images will be created and tagged as defined above, and will be available in docker on your local machine.
### How do I clear everything out to make sure my build runs like it will in CI?
Use `make clobber` to completely remove all derived objects, all cached artifacts, everything, and get back to a clean slate. This is recommended if you change branches within a clone, or if you need to `make generate` when you're not *certain* that your last `make generate` was using the same Envoy version.
Use `make clean` to remove derived objects, but *not* clear the caches.
### My editor is changing `go.mod` or `go.sum`, should I commit that?
If you notice this happening, run `make go-mod-tidy`, and commit that.
(If you're in Ambassador Labs, you should do this from `apro/`, not
`apro/ambassador/`, so that apro.git's files are included too.)
### How do I debug "This should not happen in CI" errors?
These checks indicate that some output file changed in the middle of a
run, when it should only change if a source file has changed. Since
CI isn't editing the source files, this shouldn't happen in CI!
This is problematic because it means that running the build multiple
times can give different results, and that the tests are probably not
testing the same image that would be released.
These checks will show you a patch showing how the output file
changed; it is up to you to figure out what is happening in the
build/test system that would cause that change in the middle of a run.
For the most part, this is pretty simple... except when the output
file is a Docker image: you just see that one image hash is different
from another image hash.
Fortunately, the failure showing the changed image hash is usually
immediately preceded by a `docker build`. Earlier in the CI output,
you should find an identical `docker build` command from the first time it
ran. In the second `docker build`'s output, each step should say
`---> Using cache`; the first few steps will say this, but at some
point later steps will stop saying this; find the first step that is
missing the `---> Using cache` line, and try to figure out what could
have changed between the two runs that would cause it to not use the
cache.
If that step is an `ADD` command that is adding a directory, the
problem is probably that you need to add something to `.dockerignore`.
To help figure out what you need to add, try adding a `RUN find
DIRECTORY -exec ls -ld -- {} +` step after the `ADD` step, so that you
can see what it added, and see what is different on that between the
first and second `docker build` commands.
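The comparison generalizes: capture the `find` listing from each of the two runs and `diff` them to pinpoint exactly what invalidated the cache. Here's a self-contained illustration of the technique (the two temp directories stand in for the two build contexts, and `stray.log` is a made-up offender):

```shell
# Two fake build contexts that differ by one stray file.
ctx1=$(mktemp -d); ctx2=$(mktemp -d)
echo hello > "$ctx1/app.py"
cp "$ctx1/app.py" "$ctx2/app.py"
echo junk > "$ctx2/stray.log"    # the kind of file .dockerignore should cover

# The same kind of listing the `RUN find` debugging step produces,
# reduced to file names and sorted for a stable comparison.
listing1=$(cd "$ctx1" && find . -exec ls -ld -- {} + | awk '{print $NF}' | sort)
listing2=$(cd "$ctx2" && find . -exec ls -ld -- {} + | awk '{print $NF}' | sort)

# diff pinpoints exactly what changed between the two "runs".
diff <(echo "$listing1") <(echo "$listing2") || true
rm -rf "$ctx1" "$ctx2"
```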
### How do I run Emissary-ingress tests?
- `export DEV_REGISTRY=<your-dev-docker-registry>` (you need to be logged in and have permission to push)
- `export DEV_KUBECONFIG=<your-dev-kubeconfig>`
If you want to run the Go tests for `cmd/entrypoint`, you'll need `diagd`
in your `PATH`. See the instructions below about `Setting up diagd` to do
that.
| Group | Command |
| --------------- | ---------------------------------------------------------------------- |
| All Tests | `make test` |
| All Golang | `make gotest` |
| All Python | `make pytest` |
| Some/One Golang | `make gotest GOTEST_PKGS=./cmd/entrypoint GOTEST_ARGS="-run TestName"` |
| Some/One Python | `make pytest PYTEST_ARGS="-k TestName"` |
Please note that the Python tests use a local cache to speed up test
runs. If you make a code update that changes the generated Envoy
configuration, those tests will fail and you will need to update the
Python test cache.
Note that it is invalid to run one of the `main[Plain.*]` Python tests
without running all of the other `main[Plain*]` tests; the test will
fail to run at all (it won't even show up as a failure or xfail).
For example, `PYTEST_ARGS="-k WebSocket"` would match
the `main[Plain.WebSocketMapping-GRPC]` test, and that test would fail
to run; one should instead say `PYTEST_ARGS="-k Plain or WebSocket"`
to avoid breaking the sub-tests of "Plain".
### How do I type check my python code?
Ambassador uses Python 3 type hinting and the `mypy` static type checker to
help find bugs before runtime. If you haven't worked with hinting before, a
good place to start is
[the `mypy` cheat sheet](https://mypy.readthedocs.io/en/latest/cheat_sheet_py3.html).
New code must be hinted, and the build process will verify that the type
check passes when you `make test`. Fair warning: this means that
PRs will not pass CI if the type checker fails.
We strongly recommend using an editor that can do realtime type checking
(at Datawire we tend to use PyCharm and VSCode a lot, but many many editors
can do this now) and also running the type checker by hand before submitting
anything:
- `make lint/mypy` will check all the Ambassador code
Ambassador code should produce *no* warnings and *no* errors.
If you're concerned that the mypy cache is somehow wrong, delete the
`.mypy_cache/` directory to clear the cache.
@ -111,7 +111,7 @@ These steps should be completed within the 1-7 days of Disclosure.
[CVSS](https://www.first.org/cvss/specification-document) using the [CVSS
Calculator](https://www.first.org/cvss/calculator/3.0). The Fix Lead makes the final call on the
calculated CVSS; it is better to move quickly than to spend time making the CVSS perfect.
-- The Fix Team will work per the usual [Emissary Development Process](DEVELOPING.md), including
+- The Fix Team will work per the usual [Emissary Development Process](CONTRIBUTING.md), including
fix branches, PRs, reviews, etc.
- The Fix Team will notify the Fix Lead that work on the fix branch is complete once the fix is
present in the relevant release branch(es) in the private security repo.
QUICKSTART.md (new file)
# Emissary-ingress 3.10 Quickstart
**We recommend using Helm** to install Emissary.
### Installing if you're starting fresh
**If you are already running Emissary and just want to upgrade, DO NOT FOLLOW
THESE DIRECTIONS.** Instead, check out "Upgrading from an earlier Emissary"
below.
If you're starting from scratch and you don't need to worry about older CRD
versions, install using `--set enableLegacyVersions=false` to avoid installing
the old versions of the CRDs and the conversion webhook:
```bash
helm install emissary-crds \
--namespace emissary --create-namespace \
oci://ghcr.io/emissary-ingress/emissary-crds-chart --version=3.10.0 \
--set enableLegacyVersions=false \
--wait
```
This will install only v3alpha1 CRDs and skip the conversion webhook entirely.
It will create the `emissary` namespace for you, but there won't be anything
in it at this point.
Next up, install Emissary itself, with `--set waitForApiext.enabled=false` to
tell Emissary not to wait for the conversion webhook to be ready:
```bash
helm install emissary \
--namespace emissary \
oci://ghcr.io/emissary-ingress/emissary-ingress --version=3.10.0 \
--set waitForApiext.enabled=false \
--wait
```
### Upgrading from an earlier Emissary
First, install the CRDs and the conversion webhook:
```bash
helm install emissary-crds \
--namespace emissary-system --create-namespace \
oci://ghcr.io/emissary-ingress/emissary-crds-chart --version=3.10.0 \
--wait
```
This will install all the versions of the CRDs (v1, v2, and v3alpha1) and the
conversion webhook into the `emissary-system` namespace. Once that's done, you'll install Emissary itself:
```bash
helm install emissary \
--namespace emissary --create-namespace \
oci://ghcr.io/emissary-ingress/emissary-ingress --version=3.10.0 \
--wait
```
### Using Emissary
In either case above, you should have a running Emissary behind the Service
named `emissary-emissary-ingress` in the `emissary` namespace. How exactly you
connect to that Service will vary with your cluster provider, but you can
start with
```bash
kubectl get svc -n emissary emissary-emissary-ingress
```
and that should get you started. Or, of course, you can use something like
```bash
kubectl port-forward -n emissary svc/emissary-emissary-ingress 8080:80
```
(after you configure a Listener!) and then talk to `localhost:8080`; this
works with any kind of cluster.
## Using Faces for a sanity check
[Faces Demo]: https://github.com/buoyantio/faces-demo
If you like, you can continue by using the [Faces Demo] as a quick sanity
check. First, install Faces itself using Helm:
```bash
helm install faces \
--namespace faces --create-namespace \
oci://ghcr.io/buoyantio/faces-chart --version 2.0.0-rc.4 \
--wait
```
Next, you'll need to configure Emissary to route to Faces. First, we'll do the
basic configuration to tell Emissary to listen for HTTP traffic:
```bash
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: ambassador-https-listener
spec:
port: 8443
protocol: HTTPS
securityModel: XFP
hostBinding:
namespace:
from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: ambassador-http-listener
spec:
port: 8080
protocol: HTTP
securityModel: XFP
hostBinding:
namespace:
from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
name: wildcard-host
spec:
hostname: "*"
requestPolicy:
insecure:
action: Route
EOF
```
(This actually supports both HTTPS and HTTP, but since we haven't set up TLS
certificates, we'll just stick with HTTP.)
Next, we need two Mappings:
| Prefix | Routes to Service | in Namespace |
| --------- | ----------------- | ------------ |
| `/faces/` | `faces-gui` | `faces` |
| `/face/` | `face` | `faces` |
```bash
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: gui-mapping
namespace: faces
spec:
hostname: "*"
prefix: /faces/
service: faces-gui.faces
rewrite: /
timeout_ms: 0
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: face-mapping
namespace: faces
spec:
hostname: "*"
prefix: /face/
service: face.faces
timeout_ms: 0
EOF
```
Once that's done, then you'll be able to access the Faces Demo at `/faces/`,
on whatever IP address or hostname your cluster provides for the
`emissary-emissary-ingress` Service. Or you can port-forward as above and
access it at `http://localhost:8080/faces/`.
README.md
@ -6,6 +6,7 @@ Emissary-ingress
[![Docker Repository][badge-docker-img]][badge-docker-link]
[![Join Slack][badge-slack-img]][badge-slack-link]
[![Core Infrastructure Initiative: Best Practices][badge-cii-img]][badge-cii-link]
[![Artifact HUB][badge-artifacthub-img]][badge-artifacthub-link]
[badge-version-img]: https://img.shields.io/docker/v/emissaryingress/emissary?sort=semver
[badge-version-link]: https://github.com/emissary-ingress/emissary/releases
@ -15,59 +16,95 @@ Emissary-ingress
[badge-slack-link]: https://communityinviter.com/apps/cloud-native/cncf
[badge-cii-img]: https://bestpractices.coreinfrastructure.org/projects/1852/badge
[badge-cii-link]: https://bestpractices.coreinfrastructure.org/projects/1852
[badge-artifacthub-img]: https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/emissary-ingress
[badge-artifacthub-link]: https://artifacthub.io/packages/helm/datawire/emissary-ingress
<!-- Links are (mostly) at the end of this document, for legibility. -->
[Emissary-Ingress](https://www.getambassador.io/docs/open-source) is an open-source Kubernetes-native API Gateway +
Layer 7 load balancer + Kubernetes Ingress built on [Envoy Proxy](https://www.envoyproxy.io).
Emissary-ingress is a CNCF incubation project (and was formerly known as Ambassador API Gateway).
---
Emissary-ingress enables its users to:
* Manage ingress traffic with [load balancing], support for multiple protocols ([gRPC and HTTP/2], [TCP], and [web sockets]), and Kubernetes integration
* Manage changes to routing with an easy to use declarative policy engine and [self-service configuration], via Kubernetes [CRDs] or annotations
* Secure microservices with [authentication], [rate limiting], and [TLS]
* Ensure high availability with [sticky sessions], [rate limiting], and [circuit breaking]
* Leverage observability with integrations with [Grafana], [Prometheus], and [Datadog], and comprehensive [metrics] support
* Enable progressive delivery with [canary releases]
* Connect service meshes including [Consul], [Linkerd], and [Istio]
## QUICKSTART
Looking to get started as quickly as possible? Check out [the
QUICKSTART](https://emissary-ingress.dev/docs/3.10/quick-start/)!
### Latest Release
The latest production version of Emissary is **3.10.0**.
**Note well** that there is also an Ambassador Edge Stack 3.10.0, but
**Emissary 3.10 and Edge Stack 3.10 are not equivalent**. Their codebases have
diverged and will continue to do so.
---
Emissary-ingress
================
[Emissary-ingress](https://www.getambassador.io/docs/open-source) is an
open-source, developer-centric, Kubernetes-native API gateway built on [Envoy
Proxy]. Emissary-ingress is a CNCF incubating project (and was formerly known
as Ambassador API Gateway).
### Design Goals
The first problem faced by any organization trying to develop cloud-native
applications is the _ingress problem_: allowing users outside the cluster to
access the application running inside the cluster. Emissary is built around
the idea that the application developers should be able to solve the ingress
problem themselves, without needing to become Kubernetes experts and without
needing dedicated operations staff: a self-service, developer-centric workflow
is necessary to develop at scale.
Emissary is open-source, developer-centric, role-oriented, opinionated, and
Kubernetes-native.
- open-source: Emissary is licensed under the Apache 2 license, permitting use
or modification by anyone.
- developer-centric: Emissary is designed taking the application developer
into account first.
- role-oriented: Emissary's configuration deliberately tries to separate
elements to allow separation of concerns between developers and operations.
- opinionated: Emissary deliberately tries to make easy things easy, even if
  that comes at the cost of not allowing some uncommon features.
### Features
Emissary supports all the table-stakes features needed for a modern API
gateway:
* Per-request [load balancing]
* Support for routing [gRPC], [HTTP/2], [TCP], and [web sockets]
* Declarative configuration via Kubernetes [custom resources]
* Fine-grained [authentication] and [authorization]
* Advanced routing features like [canary releases], [A/B testing], [dynamic routing], and [sticky sessions]
* Resilience features like [retries], [rate limiting], and [circuit breaking]
* Observability features including comprehensive [metrics] support using the [Prometheus] stack
* Easy service mesh integration with [Linkerd], [Istio], [Consul], etc.
* [Knative serverless integration]
See the full list of [features](https://www.getambassador.io/docs/emissary) here.
Branches
========
### Branches
(If you are looking at this list on a branch other than `master`, it
may be out of date.)
- [`master`](https://github.com/emissary-ingress/emissary/tree/master) - branch for Emissary-ingress dev work ( :heavy_check_mark: upcoming release)
- [`release/v3.9`](https://github.com/emissary-ingress/emissary/tree/release/v3.9) - branch for Emissary-ingress 3.9.z work
- [`release/v2.5`](https://github.com/emissary-ingress/emissary/tree/release/v2.5) - branch for Emissary-ingress 2.5.z work ( :heavy_check_mark: maintenance)
- [`main`](https://github.com/emissary-ingress/emissary/tree/main): Emissary 4 development work
Architecture
============
**No further development is planned on any branches listed below.**
Emissary is configured via Kubernetes CRDs, or via annotations on Kubernetes `Service`s. Internally,
it uses the [Envoy Proxy] to actually handle routing data; externally, it relies on Kubernetes for
scaling and resiliency. For more on Emissary's architecture and motivation, read [this blog post](https://blog.getambassador.io/building-ambassador-an-open-source-api-gateway-on-kubernetes-and-envoy-ed01ed520844).
- [`master`](https://github.com/emissary-ingress/emissary/tree/master) - **Frozen** at Emissary 3.10.0
- [`release/v3.10`](https://github.com/emissary-ingress/emissary/tree/release/v3.10) - Emissary-ingress 3.10.0 release branch
- [`release/v3.9`](https://github.com/emissary-ingress/emissary/tree/release/v3.9) - Emissary-ingress 3.9.1 release branch
- [`release/v2.5`](https://github.com/emissary-ingress/emissary/tree/release/v2.5) - Emissary-ingress 2.5.1 release branch
Getting Started
===============
**Note well** that there is also an Ambassador Edge Stack 3.10.0, but
**Emissary 3.10 and Edge Stack 3.10 are not equivalent**. Their codebases have
diverged and will continue to do so.
You can get Emissary up and running in just three steps. Follow the instructions here: https://www.getambassador.io/docs/emissary/latest/tutorials/getting-started/
If you are looking for a Kubernetes ingress controller, Emissary provides a superset of the functionality of a typical ingress controller. (It does the traditional routing, and layers on a raft of configuration options.) This blog post covers [Kubernetes ingress](https://blog.getambassador.io/kubernetes-ingress-nodeport-load-balancers-and-ingress-controllers-6e29f1c44f2d).
For other common questions, view this [FAQ page](https://www.getambassador.io/docs/emissary/latest/about/faq/).
You can also use Helm to install Emissary. For more information, see the instructions in the [Helm installation documentation](https://www.getambassador.io/docs/emissary/latest/topics/install/helm/)
Check out the full [Emissary
documentation](https://www.getambassador.io/docs/emissary/) at
www.getambassador.io/docs/open-source.
Community
=========
#### Community
Emissary-ingress is a CNCF Incubating project and welcomes any and all
contributors.
@ -82,21 +119,21 @@ the way the community is run, including:
regular trouble-shooting meetings and contributor meetings
- how to get support: see [`SUPPORT.md`](Community/SUPPORT.md).
The best way to join the community is to join the [CNCF Slack](https://communityinviter.com/apps/cloud-native/cncf)
#emissary-ingress channel.
Check out the [`DevDocumentation/`](DevDocumentation/) directory for
information on the technicals of Emissary, most notably the
[`DEVELOPING.md`](DevDocumentation/DEVELOPING.md) contributor's guide.
The best way to join the community is to join the `#emissary-ingress` channel
in the [CNCF Slack]. This is also the best place for technical information
about Emissary's architecture or development.
If you're interested in contributing, here are some ways:
* Write a blog post for [our blog](https://blog.getambassador.io)
* Investigate an [open issue](https://github.com/emissary-ingress/emissary/issues)
* Add [more tests](https://github.com/emissary-ingress/emissary/tree/master/ambassador/tests)
The Ambassador Edge Stack is a superset of Emissary-ingress that provides additional functionality including OAuth/OpenID Connect, advanced rate limiting, Swagger/OpenAPI support, integrated ACME support for automatic TLS certificate management, and a cloud-based UI. For more information, visit https://www.getambassador.io/editions/.
* Add [more tests](https://github.com/emissary-ingress/emissary/tree/main/ambassador/tests)
<!-- Please keep this list sorted. -->
[CNCF Slack]: https://communityinviter.com/apps/cloud-native/cncf
[Envoy Proxy]: https://www.envoyproxy.io
<!-- Legacy: clean up these links! -->
[authentication]: https://www.getambassador.io/docs/emissary/latest/topics/running/services/auth-service/
[canary releases]: https://www.getambassador.io/docs/emissary/latest/topics/using/canary/
[circuit breaking]: https://www.getambassador.io/docs/emissary/latest/topics/using/circuit-breakers/
@ -12,8 +12,8 @@ export ENVOY_TEST_LABEL
# IF YOU MESS WITH ANY OF THESE VALUES, YOU MUST RUN `make update-base`.
ENVOY_REPO ?= https://github.com/datawire/envoy.git
-# https://github.com/datawire/envoy/tree/rebase/release/v1.30.3
-ENVOY_COMMIT ?= 99c27c6cf5753adb0390d05992d6e5f248f85ab2
+# https://github.com/datawire/envoy/tree/rebase/release/v1.31.3
+ENVOY_COMMIT ?= 628f5afc75a894a08504fa0f416269ec50c07bf9
ENVOY_COMPILATION_MODE ?= opt
# Increment BASE_ENVOY_RELVER on changes to `docker/base-envoy/Dockerfile`, or Envoy recipes.
@ -24,7 +24,8 @@ BASE_ENVOY_RELVER ?= 0
FIPS_MODE ?=
export FIPS_MODE
-ENVOY_DOCKER_REPO ?= docker.io/emissaryingress/base-envoy
+# ENVOY_DOCKER_REPO ?= docker.io/emissaryingress/base-envoy
+ENVOY_DOCKER_REPO ?= gcr.io/datawire/ambassador-base
ENVOY_DOCKER_VERSION ?= $(BASE_ENVOY_RELVER).$(ENVOY_COMMIT).$(ENVOY_COMPILATION_MODE)$(if $(FIPS_MODE),.FIPS)
ENVOY_DOCKER_TAG ?= $(ENVOY_DOCKER_REPO):envoy-$(ENVOY_DOCKER_VERSION)
# END LIST OF VARIABLES REQUIRING `make update-base`.
@ -37,11 +38,11 @@ ENVOY_DOCKER_TAG ?= $(ENVOY_DOCKER_REPO):envoy-$(ENVOY_DOCKER_VERSION)
# which commits are ancestors, I added `make guess-envoy-go-control-plane-commit` to do that in an
# automated way! Still look at the commit yourself to make sure it seems sane; blindly trusting
# machines is bad, mmkay?
-ENVOY_GO_CONTROL_PLANE_COMMIT = 57c85e1829e6fe6e73fb69b8a9d9f2d3780572a5
+ENVOY_GO_CONTROL_PLANE_COMMIT = f888b4f71207d0d268dee7cb824de92848da9ede
# Set ENVOY_DOCKER_REPO to the list of mirrors to check
-ENVOY_DOCKER_REPOS = docker.io/emissaryingress/base-envoy
-ENVOY_DOCKER_REPOS += gcr.io/datawire/ambassador-base
+# ENVOY_DOCKER_REPOS = docker.io/emissaryingress/base-envoy
+# ENVOY_DOCKER_REPOS += gcr.io/datawire/ambassador-base
# Intro
include $(OSS_HOME)/build-aux/prelude.mk
@ -136,17 +137,17 @@ verify-base-envoy:
exit 1; \
fi; \
echo "Nothing to build at this time"; \
-exit 1; \
+exit 0; \
fi; \
}
# builds envoy using release settings, see https://github.com/envoyproxy/envoy/blob/main/ci/README.md for additional
# details on configuring builds
.PHONY: build-envoy
build-envoy: $(OSS_HOME)/_cxx/envoy-build-image.txt
$(OSS_HOME)/_cxx/tools/build-envoy.sh
# build the base-envoy containers and tags them locally, this requires running `build-envoy` first.
.PHONY: build-base-envoy-image
build-base-envoy-image: $(OSS_HOME)/_cxx/envoy-build-image.txt
docker build --platform="$(BUILD_ARCH)" -f $(OSS_HOME)/docker/base-envoy/Dockerfile.stripped -t $(ENVOY_DOCKER_TAG) $(OSS_HOME)/docker/base-envoy
@ -154,13 +155,13 @@ build-base-envoy-image: $(OSS_HOME)/_cxx/envoy-build-image.txt
# Allows pushing the docker image independent of building envoy and docker containers
# Note, bump the BASE_ENVOY_RELVER and re-build before pushing when making non-commit changes to have a unique image tag.
.PHONY: push-base-envoy-image
push-base-envoy-image:
docker push $(ENVOY_DOCKER_TAG)
# `make update-base`: Recompile Envoy and do all of the related things.
.PHONY: update-base
update-base: $(OSS_HOME)/_cxx/envoy-build-image.txt
$(MAKE) verify-base-envoy
$(MAKE) build-envoy
$(MAKE) build-base-envoy-image
@ -59,7 +59,7 @@ message ServerInfo {
config.core.v3.Node node = 7;
}
-// [#next-free-field: 40]
+// [#next-free-field: 41]
message CommandLineOptions {
option (udpa.annotations.versioning).previous_message_type =
"envoy.admin.v2alpha.CommandLineOptions";
@ -101,6 +101,9 @@ message CommandLineOptions {
// See :option:`--skip-hot-restart-on-no-parent` for details.
bool skip_hot_restart_on_no_parent = 39;
// See :option:`--skip-hot-restart-parent-stats` for details.
bool skip_hot_restart_parent_stats = 40;
// See :option:`--base-id-path` for details.
string base_id_path = 32;
@ -41,7 +41,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// <config_overview_bootstrap>` for more detail.
// Bootstrap :ref:`configuration overview <config_overview_bootstrap>`.
-// [#next-free-field: 41]
+// [#next-free-field: 42]
message Bootstrap {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.bootstrap.v2.Bootstrap";
@ -411,6 +411,10 @@ message Bootstrap {
// Optional gRPC async manager config.
GrpcAsyncClientManagerConfig grpc_async_client_manager_config = 40;
// Optional configuration for memory allocation manager.
// Memory releasing is only supported for `tcmalloc allocator <https://github.com/google/tcmalloc>`_.
MemoryAllocatorManager memory_allocator_manager = 41;
}
// Administration interface :ref:`operations documentation
@ -734,3 +738,14 @@ message CustomInlineHeader {
// The type of the header that is expected to be set as the inline header.
InlineHeaderType inline_header_type = 2 [(validate.rules).enum = {defined_only: true}];
}
message MemoryAllocatorManager {
// Configures tcmalloc to perform background release of free memory in amount of bytes per ``memory_release_interval`` interval.
// If equals to ``0``, no memory release will occur. Defaults to ``0``.
uint64 bytes_to_release = 1;
// Interval in milliseconds for memory releasing. If specified, during every
// interval Envoy will try to release ``bytes_to_release`` of free memory back to operating system for reuse.
// Defaults to 1000 milliseconds.
google.protobuf.Duration memory_release_interval = 2;
}
@ -45,7 +45,7 @@ message ClusterCollection {
}
// Configuration for a single upstream cluster.
-// [#next-free-field: 57]
+// [#next-free-field: 58]
message Cluster {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.Cluster";
@ -168,7 +168,7 @@ message Cluster {
// The name of the match, used in stats generation.
string name = 1 [(validate.rules).string = {min_len: 1}];
-  // Optional endpoint metadata match criteria.
+  // Optional metadata match criteria.
// The connection to the endpoint with metadata matching what is set in this field
// will use the transport socket configuration specified here.
// The endpoint's metadata entry in ``envoy.transport_socket_match`` is used to match
@ -754,12 +754,14 @@ message Cluster {
reserved "hosts", "tls_context", "extension_protocol_options";
-  // Configuration to use different transport sockets for different endpoints.
-  // The entry of ``envoy.transport_socket_match`` in the
-  // :ref:`LbEndpoint.Metadata <envoy_v3_api_field_config.endpoint.v3.LbEndpoint.metadata>`
-  // is used to match against the transport sockets as they appear in the list. The first
-  // :ref:`match <envoy_v3_api_msg_config.cluster.v3.Cluster.TransportSocketMatch>` is used.
-  // For example, with the following match
+  // Configuration to use different transport sockets for different endpoints. The entry of
+  // ``envoy.transport_socket_match`` in the :ref:`LbEndpoint.Metadata
+  // <envoy_v3_api_field_config.endpoint.v3.LbEndpoint.metadata>` is used to match against the
+  // transport sockets as they appear in the list. If a match is not found, the search continues in
+  // :ref:`LocalityLbEndpoints.Metadata
+  // <envoy_v3_api_field_config.endpoint.v3.LocalityLbEndpoints.metadata>`. The first :ref:`match
+  // <envoy_v3_api_msg_config.cluster.v3.Cluster.TransportSocketMatch>` is used. For example, with
+  // the following match
//
// .. code-block:: yaml
//
@ -783,8 +785,9 @@ message Cluster {
// socket match in case above.
//
// If an endpoint metadata's value under ``envoy.transport_socket_match`` does not match any
-  // ``TransportSocketMatch``, socket configuration fallbacks to use the ``tls_context`` or
-  // ``transport_socket`` specified in this cluster.
+  // ``TransportSocketMatch``, the locality metadata is then checked for a match. Barring any
+  // matches in the endpoint or locality metadata, the socket configuration fallbacks to use the
+  // ``tls_context`` or ``transport_socket`` specified in this cluster.
//
// This field allows gradual and flexible transport socket configuration changes.
//
@ -1148,6 +1151,22 @@ message Cluster {
// from the LRS stream here.]
core.v3.ConfigSource lrs_server = 42;
// [#not-implemented-hide:]
// A list of metric names from ORCA load reports to propagate to LRS.
//
// For map fields in the ORCA proto, the string will be of the form ``<map_field_name>.<map_key>``.
// For example, the string ``named_metrics.foo`` will mean to look for the key ``foo`` in the ORCA
// ``named_metrics`` field.
//
// The special map key ``*`` means to report all entries in the map (e.g., ``named_metrics.*`` means to
// report all entries in the ORCA named_metrics field). Note that this should be used only with trusted
// backends.
//
// The metric names in LRS will follow the same semantics as this field. In other words, if this field
// contains ``named_metrics.foo``, then the LRS load report will include the data with that same string
// as the key.
repeated string lrs_report_endpoint_metrics = 57;
// If track_timeout_budgets is true, the :ref:`timeout budget histograms
// <config_cluster_manager_cluster_stats_timeout_budgets>` will be published for each
// request. These show what percentage of a request's per try and global timeout was used. A value
@ -21,7 +21,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// See the :ref:`architecture overview <arch_overview_outlier_detection>` for
// more information on outlier detection.
-// [#next-free-field: 25]
+// [#next-free-field: 26]
message OutlierDetection {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.cluster.OutlierDetection";
@ -42,8 +42,8 @@ message OutlierDetection {
// Defaults to 30000ms or 30s.
google.protobuf.Duration base_ejection_time = 3 [(validate.rules).duration = {gt {}}];
-  // The maximum % of an upstream cluster that can be ejected due to outlier
-  // detection. Defaults to 10% but will eject at least one host regardless of the value.
+  // The maximum % of an upstream cluster that can be ejected due to outlier detection. Defaults to 10%.
+  // Will eject at least one host regardless of the value if :ref:`always_eject_one_host<envoy_v3_api_field_config.cluster.v3.OutlierDetection.always_eject_one_host>` is enabled.
google.protobuf.UInt32Value max_ejection_percent = 4 [(validate.rules).uint32 = {lte: 100}];
// The % chance that a host will be actually ejected when an outlier status
@ -173,4 +173,8 @@ message OutlierDetection {
// Set of host's passive monitors.
// [#not-implemented-hide:]
repeated core.v3.TypedExtensionConfig monitors = 24;
// If enabled, at least one host is ejected regardless of the value of :ref:`max_ejection_percent<envoy_v3_api_field_config.cluster.v3.OutlierDetection.max_ejection_percent>`.
// Defaults to false.
google.protobuf.BoolValue always_eject_one_host = 25;
}
@ -303,6 +303,59 @@ message RuntimeFeatureFlag {
string runtime_key = 2 [(validate.rules).string = {min_len: 1}];
}
message KeyValue {
// The key of the key/value pair.
string key = 1 [(validate.rules).string = {min_len: 1 max_bytes: 16384}];
// The value of the key/value pair.
bytes value = 2;
}
// Key/value pair plus option to control append behavior. This is used to specify
// key/value pairs that should be appended to a set of existing key/value pairs.
message KeyValueAppend {
// Describes the supported actions types for key/value pair append action.
enum KeyValueAppendAction {
// If the key already exists, this action will result in the following behavior:
//
// - Comma-concatenated value if multiple values are not allowed.
// - New value added to the list of values if multiple values are allowed.
//
// If the key doesn't exist then this will add pair with specified key and value.
APPEND_IF_EXISTS_OR_ADD = 0;
// This action will add the key/value pair if it doesn't already exist. If the
// key already exists then this will be a no-op.
ADD_IF_ABSENT = 1;
// This action will overwrite the specified value by discarding any existing
// values if the key already exists. If the key doesn't exist then this will add
// the pair with specified key and value.
OVERWRITE_IF_EXISTS_OR_ADD = 2;
// This action will overwrite the specified value by discarding any existing
// values if the key already exists. If the key doesn't exist then this will
// be no-op.
OVERWRITE_IF_EXISTS = 3;
}
// Key/value pair entry that this option to append or overwrite.
KeyValue entry = 1 [(validate.rules).message = {required: true}];
// Describes the action taken to append/overwrite the given value for an existing
// key or to only add this key if it's absent.
KeyValueAppendAction action = 2 [(validate.rules).enum = {defined_only: true}];
}
// Key/value pair to append or remove.
message KeyValueMutation {
// Key/value pair to append or overwrite. Only one of ``append`` or ``remove`` can be set.
KeyValueAppend append = 1;
// Key to remove. Only one of ``append`` or ``remove`` can be set.
string remove = 2 [(validate.rules).string = {max_bytes: 16384}];
}
// Query parameter name/value pair.
message QueryParameter {
// The key of the query parameter. Case sensitive.
@ -411,6 +464,7 @@ message WatchedDirectory {
}
// Data source consisting of a file, an inline value, or an environment variable.
// [#next-free-field: 6]
message DataSource {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.DataSource";
@ -429,6 +483,22 @@ message DataSource {
// Environment variable data source.
string environment_variable = 4 [(validate.rules).string = {min_len: 1}];
}
// Watched directory that is watched for file changes. If this is set explicitly, the file
// specified in the ``filename`` field will be reloaded when relevant file move events occur.
//
// .. note::
// This field only makes sense when the ``filename`` field is set.
//
// .. note::
// Envoy only updates when the file is replaced by a file move, and not when the file is
// edited in place.
//
// .. note::
// Not all use cases of ``DataSource`` support watching directories. It depends on the
// specific usage of the ``DataSource``. See the documentation of the parent message for
// details.
WatchedDirectory watched_directory = 5;
}
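A minimal sketch of the intended usage, assuming a certificate at a hypothetical path that is updated by atomic moves (as Kubernetes does when rotating mounted Secrets):

```yaml
filename: /etc/certs/tls.crt    # reloaded on relevant file-move events
watched_directory:
  path: /etc/certs
```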
// The message specifies the retry policy of remote data source when fetching fails.

View File

@ -28,12 +28,10 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// xDS API and non-xDS services version. This is used to describe both resource and transport
// protocol versions (in distinct configuration fields).
enum ApiVersion {
// When not specified, we assume v2, to ease migration to Envoy's stable API
// versioning. If a client does not support v2 (e.g. due to deprecation), this
// is an invalid value.
AUTO = 0 [deprecated = true, (envoy.annotations.deprecated_at_minor_version_enum) = "3.0"];
// When not specified, we assume v3; it is the only supported version.
AUTO = 0;
// Use xDS v2 API.
// Use xDS v2 API. This is no longer supported.
V2 = 1 [deprecated = true, (envoy.annotations.deprecated_at_minor_version_enum) = "3.0"];
// Use xDS v3 API.

View File

@ -29,6 +29,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
message GrpcService {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.core.GrpcService";
// [#next-free-field: 6]
message EnvoyGrpc {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.core.GrpcService.EnvoyGrpc";
@ -55,6 +56,12 @@ message GrpcService {
// This limit is applied to individual messages in the streaming response and not the total size of streaming response.
// Defaults to 0, which means unlimited.
google.protobuf.UInt32Value max_receive_message_length = 4;
// Controls Envoy-generated headers at the gRPC client level.
// If false, the headers will be sent but can be overridden by a per-stream option.
// If true, the headers will be removed and cannot be overridden by a per-stream option.
// Defaults to false.
bool skip_envoy_headers = 5;
}
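For illustration, a hypothetical ``EnvoyGrpc`` target that suppresses the Envoy-generated headers (the cluster name is an assumption):

```yaml
envoy_grpc:
  cluster_name: ext_authz_cluster   # assumed cluster name
  skip_envoy_headers: true          # drop Envoy-generated headers on this client
```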
// [#next-free-field: 9]

View File

@ -5,6 +5,7 @@ package envoy.config.core.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/event_service_config.proto";
import "envoy/config/core/v3/extension.proto";
import "envoy/config/core/v3/proxy_protocol.proto";
import "envoy/type/matcher/v3/string.proto";
import "envoy/type/v3/http.proto";
import "envoy/type/v3/range.proto";
@ -177,6 +178,13 @@ message HealthCheck {
// payload block must be found, and in the order specified, but not
// necessarily contiguous.
repeated Payload receive = 2;
// When this is set, the health check request is sent with a ProxyProtocol header.
// If ``send`` is present, its payload is sent after the preceding ProxyProtocol header;
// otherwise only the ProxyProtocol header is sent.
// Both ProxyProtocol V1 and V2 are supported. V1 carries L3/L4 information;
// V2 uses the LOCAL command and does not include L3/L4 information.
ProxyProtocolConfig proxy_protocol_config = 3;
}
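A sketch of a TCP health check that leads with a PROXY protocol V2 header (timeouts and thresholds are illustrative):

```yaml
health_checks:
- timeout: 1s
  interval: 5s
  unhealthy_threshold: 3
  healthy_threshold: 2
  tcp_health_check:
    proxy_protocol_config:
      version: V2   # V2 sends the LOCAL command without L3/L4 addresses
```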
message RedisHealthCheck {

View File

@ -249,10 +249,9 @@ message HttpProtocolOptions {
google.protobuf.Duration idle_timeout = 1;
// The maximum duration of a connection. The duration is defined as a period since a connection
// was established. If not set, there is no max duration. When max_connection_duration is reached
// and if there are no active streams, the connection will be closed. If the connection is a
// downstream connection and there are any active streams, the drain sequence will kick-in,
// and the connection will be force-closed after the drain period. See :ref:`drain_timeout
// was established. If not set, there is no max duration. When max_connection_duration is reached,
// the drain sequence will kick-in. The connection will be closed after the drain timeout period
// if there are no active streams. See :ref:`drain_timeout
// <envoy_v3_api_field_extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.drain_timeout>`.
google.protobuf.Duration max_connection_duration = 3;
@ -664,6 +663,13 @@ message Http3ProtocolOptions {
message SchemeHeaderTransformation {
oneof transformation {
// Overwrite any Scheme header with the contents of this string.
// If set, takes precedence over match_upstream.
string scheme_to_overwrite = 1 [(validate.rules).string = {in: "http" in: "https"}];
}
// Set the Scheme header to match the upstream transport protocol. For example, should a
// request be sent to the upstream over TLS, the scheme header will be set to "https". Should the
// request be sent over plaintext, the scheme header will be set to "http".
// If scheme_to_overwrite is set, this field is not used.
bool match_upstream = 2;
}
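In an HTTP connection manager, the transformation might look like either of these hypothetical fragments (only one of the two fields is consulted at a time, with ``scheme_to_overwrite`` taking precedence):

```yaml
# Force a fixed scheme:
scheme_header_transformation:
  scheme_to_overwrite: https

# Or derive the scheme from the upstream transport:
# scheme_header_transformation:
#   match_upstream: true
```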

View File

@ -77,6 +77,12 @@ message ClusterLoadAssignment {
//
// Envoy supports only one element and will NACK if more than one element is present.
// Other xDS-capable data planes will not necessarily have this limitation.
//
// In Envoy, this ``drop_overloads`` config can be overridden by a runtime key
// "load_balancing_policy.drop_overload_limit" setting. This runtime key can be set to
// any integer number between 0 and 100. 0 means drop 0%. 100 means drop 100%.
// When both ``drop_overloads`` config and "load_balancing_policy.drop_overload_limit"
// setting are in place, the min of these two wins.
repeated DropOverload drop_overloads = 2;
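The runtime cap described above could be set via a static runtime layer, for example (the layer name is an assumption):

```yaml
layered_runtime:
  layers:
  - name: static_layer   # assumed layer name
    static_layer:
      # Never drop more than 50% of traffic, regardless of drop_overloads.
      load_balancing_policy.drop_overload_limit: 50
```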
// Priority levels and localities are considered overprovisioned with this

View File

@ -147,7 +147,7 @@ message LedsClusterLocalityConfig {
// A group of endpoints belonging to a Locality.
// One can have multiple LocalityLbEndpoints for a locality, but only if
// they have different priorities.
// [#next-free-field: 9]
// [#next-free-field: 10]
message LocalityLbEndpoints {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.endpoint.LocalityLbEndpoints";
@ -161,6 +161,9 @@ message LocalityLbEndpoints {
// Identifies location of where the upstream hosts run.
core.v3.Locality locality = 1;
// Metadata to provide additional information about the locality endpoints in aggregate.
core.v3.Metadata metadata = 9;
// The group of endpoints belonging to the locality specified.
// [#comment:TODO(adisuissa): Once LEDS is implemented this field needs to be
// deprecated and replaced by ``load_balancer_endpoints``.]

View File

@ -8,6 +8,8 @@ import "envoy/config/core/v3/base.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
import "validate/validate.proto";
@ -23,7 +25,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// These are stats Envoy reports to the management server at a frequency defined by
// :ref:`LoadStatsResponse.load_reporting_interval<envoy_v3_api_field_service.load_stats.v3.LoadStatsResponse.load_reporting_interval>`.
// Stats per upstream region/zone and optionally per subzone.
// [#next-free-field: 9]
// [#next-free-field: 15]
message UpstreamLocalityStats {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.endpoint.UpstreamLocalityStats";
@ -48,7 +50,45 @@ message UpstreamLocalityStats {
// upstream endpoints in the locality.
uint64 total_issued_requests = 8;
// Stats for multi-dimensional load balancing.
// The total number of connections in an established state at the time of the
// report. This field is aggregated over all the upstream endpoints in the
// locality.
// In Envoy, this information may be based on the ``upstream_cx_active`` metric.
// [#not-implemented-hide:]
uint64 total_active_connections = 9 [(xds.annotations.v3.field_status).work_in_progress = true];
// The total number of connections opened since the last report.
// This field is aggregated over all the upstream endpoints in the locality.
// In Envoy, this information may be based on ``upstream_cx_total`` metric
// compared to itself between start and end of an interval, i.e.
// ``upstream_cx_total``(now) - ``upstream_cx_total``(now -
// load_report_interval).
// [#not-implemented-hide:]
uint64 total_new_connections = 10 [(xds.annotations.v3.field_status).work_in_progress = true];
// The total number of connection failures since the last report.
// This field is aggregated over all the upstream endpoints in the locality.
// In Envoy, this information may be based on ``upstream_cx_connect_fail``
// metric compared to itself between start and end of an interval, i.e.
// ``upstream_cx_connect_fail``(now) - ``upstream_cx_connect_fail``(now -
// load_report_interval).
// [#not-implemented-hide:]
uint64 total_fail_connections = 11 [(xds.annotations.v3.field_status).work_in_progress = true];
// CPU utilization stats for multi-dimensional load balancing.
// This typically comes from endpoint metrics reported via ORCA.
UnnamedEndpointLoadMetricStats cpu_utilization = 12;
// Memory utilization for multi-dimensional load balancing.
// This typically comes from endpoint metrics reported via ORCA.
UnnamedEndpointLoadMetricStats mem_utilization = 13;
// Blended application-defined utilization for multi-dimensional load balancing.
// This typically comes from endpoint metrics reported via ORCA.
UnnamedEndpointLoadMetricStats application_utilization = 14;
// Named stats for multi-dimensional load balancing.
// These typically come from endpoint metrics reported via ORCA.
repeated EndpointLoadMetricStats load_metric_stats = 5;
// Endpoint granularity stats information for this locality. This information
@ -118,6 +158,16 @@ message EndpointLoadMetricStats {
double total_metric_value = 3;
}
// Same as EndpointLoadMetricStats, except without the metric_name field.
message UnnamedEndpointLoadMetricStats {
// Number of calls that finished and included this metric.
uint64 num_requests_finished_with_metric = 1;
// Sum of metric values across all calls that finished with this metric for
// load_reporting_interval.
double total_metric_value = 2;
}
// Per cluster load stats. Envoy reports these stats to a management server in a
// :ref:`LoadStatsRequest<envoy_v3_api_msg_service.load_stats.v3.LoadStatsRequest>`
// Next ID: 7

View File

@ -53,7 +53,7 @@ message ListenerCollection {
repeated xds.core.v3.CollectionEntry entries = 1;
}
// [#next-free-field: 35]
// [#next-free-field: 36]
message Listener {
option (udpa.annotations.versioning).previous_message_type = "envoy.api.v2.Listener";
@ -387,6 +387,9 @@ message Listener {
// Whether the listener should limit connections based upon the value of
// :ref:`global_downstream_max_connections <config_overload_manager_limiting_connections>`.
bool ignore_global_conn_limit = 31;
// Whether the listener bypasses configured overload manager actions.
bool bypass_overload_manager = 35;
}
// A placeholder proto so that users can explicitly configure the standard

View File

@ -24,7 +24,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: QUIC listener config]
// Configuration specific to the UDP QUIC listener.
// [#next-free-field: 11]
// [#next-free-field: 12]
message QuicProtocolOptions {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.listener.QuicProtocolOptions";
@ -81,4 +81,9 @@ message QuicProtocolOptions {
// Configure the server to send transport parameter `disable_active_migration <https://www.rfc-editor.org/rfc/rfc9000#section-18.2-4.30.1>`_.
// Defaults to false (do not send this transport parameter).
google.protobuf.BoolValue send_disable_active_migration = 10;
// Configure which implementation of ``quic::QuicConnectionDebugVisitor`` to be used for this listener.
// If not specified, no debug visitor will be attached to connections.
// [#extension-category: envoy.quic.connection_debug_visitor]
core.v3.TypedExtensionConfig connection_debug_visitor_config = 11;
}

View File

@ -43,7 +43,6 @@ enum HistogramEmitMode {
// - name: envoy.stat_sinks.metrics_service
// typed_config:
// "@type": type.googleapis.com/envoy.config.metrics.v3.MetricsServiceConfig
// transport_api_version: V3
//
// [#extension: envoy.stat_sinks.metrics_service]
// [#next-free-field: 6]

View File

@ -763,7 +763,8 @@ message RouteAction {
// collected for the shadow cluster making this feature useful for testing.
//
// During shadowing, the host/authority header is altered such that ``-shadow`` is appended. This is
// useful for logging. For example, ``cluster1`` becomes ``cluster1-shadow``.
// useful for logging. For example, ``cluster1`` becomes ``cluster1-shadow``. This behavior can be
// disabled by setting ``disable_shadow_host_suffix_append`` to ``true``.
//
// .. note::
//
@ -772,7 +773,7 @@ message RouteAction {
// .. note::
//
// Shadowing doesn't support Http CONNECT and upgrades.
// [#next-free-field: 6]
// [#next-free-field: 7]
message RequestMirrorPolicy {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.route.RouteAction.RequestMirrorPolicy";
@ -818,6 +819,9 @@ message RouteAction {
// Determines if the trace span should be sampled. Defaults to true.
google.protobuf.BoolValue trace_sampled = 4;
// Disables appending the ``-shadow`` suffix to the shadowed ``Host`` header. Defaults to ``false``.
bool disable_shadow_host_suffix_append = 6;
}
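A hypothetical route fragment that mirrors traffic without the ``-shadow`` host suffix (cluster names are assumptions):

```yaml
route:
  cluster: cluster1
  request_mirror_policies:
  - cluster: cluster1-mirror
    disable_shadow_host_suffix_append: true   # keep the original Host header
```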
// Specifies the route's hashing policy if the upstream cluster uses a hashing :ref:`load balancer

View File

@ -2,6 +2,8 @@ syntax = "proto3";
package envoy.config.trace.v3;
import "google/protobuf/duration.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
import "udpa/annotations/versioning.proto";
@ -16,6 +18,13 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Datadog tracer]
// Configuration for the Remote Configuration feature.
message DatadogRemoteConfig {
// Frequency at which new configuration updates are queried.
// If no value is provided, the default value is delegated to the Datadog tracing library.
google.protobuf.Duration polling_interval = 1;
}
// Configuration for the Datadog tracer.
// [#extension: envoy.tracers.datadog]
message DatadogConfig {
@ -31,4 +40,11 @@ message DatadogConfig {
// Optional hostname to use when sending spans to the collector_cluster. Useful for collectors
// that require a specific hostname. Defaults to :ref:`collector_cluster <envoy_v3_api_field_config.trace.v3.DatadogConfig.collector_cluster>` above.
string collector_hostname = 3;
// Enables and configures remote configuration.
// Remote Configuration allows configuring the tracer from Datadog's user interface.
// This feature can drastically increase the number of connections to the Datadog Agent.
// Each tracer regularly polls for configuration updates, and the number of tracers is the product
// of the number of listeners and worker threads.
DatadogRemoteConfig remote_config = 4;
}
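A sketch of a tracing provider enabling remote configuration with a 30s poll (the cluster and service names are assumptions):

```yaml
tracing:
  provider:
    name: envoy.tracers.datadog
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v3.DatadogConfig
      collector_cluster: datadog_agent   # assumed cluster name
      service_name: front-proxy          # assumed service name
      remote_config:
        polling_interval: 30s
```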

View File

@ -48,70 +48,109 @@ message OpenCensusConfig {
reserved 7;
// Configures tracing, e.g. the sampler, max number of annotations, etc.
opencensus.proto.trace.v1.TraceConfig trace_config = 1
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
opencensus.proto.trace.v1.TraceConfig trace_config = 1 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the stdout exporter if set to true. This is intended for debugging
// purposes.
bool stdout_exporter_enabled = 2
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
bool stdout_exporter_enabled = 2 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the Stackdriver exporter if set to true. The project_id must also
// be set.
bool stackdriver_exporter_enabled = 3
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
bool stackdriver_exporter_enabled = 3 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The Cloud project_id to use for Stackdriver tracing.
string stackdriver_project_id = 4
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
string stackdriver_project_id = 4 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// (optional) By default, the Stackdriver exporter will connect to production
// Stackdriver. If stackdriver_address is non-empty, it will instead connect
// to this address, which is in the gRPC format:
// https://github.com/grpc/grpc/blob/master/doc/naming.md
string stackdriver_address = 10
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
string stackdriver_address = 10 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// (optional) The gRPC server that hosts Stackdriver tracing service. Only
// Google gRPC is supported. If :ref:`target_uri <envoy_v3_api_field_config.core.v3.GrpcService.GoogleGrpc.target_uri>`
// is not provided, the default production Stackdriver address will be used.
core.v3.GrpcService stackdriver_grpc_service = 13
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
core.v3.GrpcService stackdriver_grpc_service = 13 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the Zipkin exporter if set to true. The url and service name must
// also be set. This is deprecated, prefer to use Envoy's :ref:`native Zipkin
// tracer <envoy_v3_api_msg_config.trace.v3.ZipkinConfig>`.
bool zipkin_exporter_enabled = 5
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
bool zipkin_exporter_enabled = 5 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The URL to Zipkin, e.g. "http://127.0.0.1:9411/api/v2/spans". This is
// deprecated, prefer to use Envoy's :ref:`native Zipkin tracer
// <envoy_v3_api_msg_config.trace.v3.ZipkinConfig>`.
string zipkin_url = 6
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
string zipkin_url = 6 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// Enables the OpenCensus Agent exporter if set to true. The ocagent_address or
// ocagent_grpc_service must also be set.
bool ocagent_exporter_enabled = 11
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
bool ocagent_exporter_enabled = 11 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// The address of the OpenCensus Agent, if its exporter is enabled, in gRPC
// format: https://github.com/grpc/grpc/blob/master/doc/naming.md
// [#comment:TODO: deprecate this field]
string ocagent_address = 12
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
string ocagent_address = 12 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// (optional) The gRPC server hosted by the OpenCensus Agent. Only Google gRPC is supported.
// This is only used if the ocagent_address is left empty.
core.v3.GrpcService ocagent_grpc_service = 14
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
core.v3.GrpcService ocagent_grpc_service = 14 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// List of incoming trace context headers we will accept. First one found
// wins.
repeated TraceContext incoming_trace_context = 8
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
repeated TraceContext incoming_trace_context = 8 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
// List of outgoing trace context headers we will produce.
repeated TraceContext outgoing_trace_context = 9
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
repeated TraceContext outgoing_trace_context = 9 [
deprecated = true,
(envoy.annotations.deprecated_at_minor_version) = "3.0",
(envoy.annotations.disallowed_by_default) = true
];
}

View File

@ -0,0 +1,24 @@
syntax = "proto3";
package envoy.data.core.v3;
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.data.core.v3";
option java_outer_classname = "TlvMetadataProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/data/core/v3;corev3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Proxy Protocol Filter Typed Metadata]
// PROXY protocol filter typed metadata.
message TlvsMetadata {
// Typed metadata for :ref:`Proxy protocol filter <envoy_v3_api_msg_extensions.filters.listener.proxy_protocol.v3.ProxyProtocol>`, that represents a map of TLVs.
// Each entry in the map consists of a key which corresponds to a configured
// :ref:`rule key <envoy_v3_api_field_extensions.filters.listener.proxy_protocol.v3.ProxyProtocol.KeyValuePair.key>` and a value (TLV value in bytes).
// When runtime flag ``envoy.reloadable_features.use_typed_metadata_in_proxy_protocol_listener`` is enabled,
// :ref:`Proxy protocol filter <envoy_v3_api_msg_extensions.filters.listener.proxy_protocol.v3.ProxyProtocol>`
// will populate typed metadata and regular metadata. By default, the filter populates both typed and untyped metadata.
map<string, bytes> typed_metadata = 1;
}

View File

@ -128,7 +128,15 @@ message DnsTable {
option (udpa.annotations.versioning).previous_message_type =
"envoy.data.dns.v2alpha.DnsTable.DnsVirtualDomain";
// A domain name for which Envoy will respond to query requests
// A domain name for which Envoy will respond to query requests.
// Wildcard records are supported on the first label only, e.g. ``*.example.com`` or ``*.subdomain.example.com``.
// Names such as ``*example.com``, ``subdomain.*.example.com``, ``*subdomain.example.com``, etc.
// are not valid wildcard names, and the asterisk will be interpreted as a literal ``*`` character.
// Wildcard records match subdomains at any level, e.g. ``*.example.com`` will match
// ``foo.example.com``, ``bar.foo.example.com``, ``baz.bar.foo.example.com``, etc. In case there are multiple
// wildcard records, the longest wildcard match will be used, e.g. if there are wildcard records for
// ``*.example.com`` and ``*.foo.example.com`` and the query is for ``bar.foo.example.com``, the latter will be used.
// Specific records will always take precedence over wildcard records.
string name = 1 [(validate.rules).string = {min_len: 1 well_known_regex: HTTP_HEADER_NAME}];
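To illustrate the precedence rules above, a hypothetical pair of virtual domains (addresses are made up):

```yaml
virtual_domains:
- name: "*.example.com"          # matches foo.example.com, bar.foo.example.com, ...
  endpoint:
    address_list:
      address: ["10.0.0.1"]
- name: "api.example.com"        # specific record; wins over the wildcard
  endpoint:
    address_list:
      address: ["10.0.0.2"]
```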
// The configuration containing the method to determine the address of this endpoint

View File

@ -2,6 +2,7 @@ syntax = "proto3";
package envoy.extensions.access_loggers.open_telemetry.v3;
import "envoy/config/core/v3/extension.proto";
import "envoy/extensions/access_loggers/grpc/v3/als.proto";
import "opentelemetry/proto/common/v1/common.proto";
@ -22,7 +23,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// populate `opentelemetry.proto.collector.v1.logs.ExportLogsServiceRequest.resource_logs <https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/collector/logs/v1/logs_service.proto>`_.
// In addition, the request start time is set in the dedicated field.
// [#extension: envoy.access_loggers.open_telemetry]
// [#next-free-field: 6]
// [#next-free-field: 8]
message OpenTelemetryAccessLogConfig {
// [#comment:TODO(itamarkam): add 'filter_state_objects_to_log' to logs.]
grpc.v3.CommonGrpcAccessLogConfig common_config = 1 [(validate.rules).message = {required: true}];
@ -46,4 +47,14 @@ message OpenTelemetryAccessLogConfig {
// See 'attributes' in the LogResource proto for more details.
// Example: ``attributes { values { key: "user_agent" value { string_value: "%REQ(USER-AGENT)%" } } }``.
opentelemetry.proto.common.v1.KeyValueList attributes = 3;
// Optional. Additional prefix to use on OpenTelemetry access logger stats. If empty, the stats will be rooted at
// ``access_logs.open_telemetry_access_log.``. If non-empty, stats will be rooted at
// ``access_logs.open_telemetry_access_log.<stat_prefix>.``.
string stat_prefix = 6;
// Specifies a collection of Formatter plugins that can be called from the access log configuration.
// See the formatters extensions documentation for details.
// [#extension-category: envoy.formatter]
repeated config.core.v3.TypedExtensionConfig formatters = 7;
}
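A sketch of a logger configuration using the new fields (the log name and collector cluster are assumptions):

```yaml
access_log:
- name: envoy.access_loggers.open_telemetry
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.access_loggers.open_telemetry.v3.OpenTelemetryAccessLogConfig
    common_config:
      log_name: otel_log               # assumed log name
      transport_api_version: V3
      grpc_service:
        envoy_grpc:
          cluster_name: otel_collector # assumed cluster name
    stat_prefix: edge                  # stats under access_logs.open_telemetry_access_log.edge.
```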

View File

@ -130,3 +130,15 @@ message LocalRateLimitDescriptor {
// Token Bucket algorithm for local ratelimiting.
type.v3.TokenBucket token_bucket = 2 [(validate.rules).message = {required: true}];
}
// Configuration used to enable local cluster level rate limiting where the token buckets
// will be shared across all the Envoy instances in the local cluster.
// A share is calculated dynamically from the local cluster membership and the
// configuration, and is applied when the limiter refills the token bucket.
// By default, the token bucket is shared evenly.
//
// See :ref:`local cluster name
// <envoy_v3_api_field_config.bootstrap.v3.ClusterManager.local_cluster_name>` for more context
// about local cluster.
message LocalClusterRateLimit {
}

View File

@ -4,6 +4,7 @@ package envoy.extensions.filters.http.alternate_protocols_cache.v3;
import "envoy/config/core/v3/protocol.proto";
import "envoy/annotations/deprecation.proto";
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.alternate_protocols_cache.v3";
@ -17,9 +18,8 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// Configuration for the alternate protocols cache HTTP filter.
// [#extension: envoy.filters.http.alternate_protocols_cache]
message FilterConfig {
// If set, causes the use of the alternate protocols cache, which is responsible for
// parsing and caching HTTP Alt-Svc headers. This enables the use of HTTP/3 for upstream
// servers that advertise supporting it.
// TODO(RyanTheOptimist): Make this field required when HTTP/3 is enabled via auto_http.
config.core.v3.AlternateProtocolsCacheOptions alternate_protocols_cache_options = 1;
// This field is ignored: the alternate protocols cache filter will use the
// cache for the cluster the request is routed to.
config.core.v3.AlternateProtocolsCacheOptions alternate_protocols_cache_options = 1
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
}

View File

@ -17,7 +17,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#extension: envoy.filters.http.aws_lambda]
// AWS Lambda filter config
// [#next-free-field: 6]
// [#next-free-field: 7]
message Config {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.aws_lambda.v2alpha.Config";
@ -60,8 +60,34 @@ message Config {
// Specifies the credentials profile to be used from the AWS credentials file.
// This parameter is optional. If set, it will override the value set in the AWS_PROFILE env variable and
// the provider chain is limited to the AWS credentials file Provider.
// Other providers are ignored.
// If credentials configuration is provided, this configuration will be ignored.
// If this field is provided, then the default providers chain specified in the documentation will be ignored.
// (See :ref:`default credentials providers <config_http_filters_aws_lambda_credentials>`).
string credentials_profile = 5;
// Specifies the credentials to be used. This parameter is optional and if it is set,
// it will override other providers and will take precedence over credentials_profile.
// The provider chain is limited to the configuration credentials provider.
// If this field is provided, then the default providers chain specified in the documentation will be ignored.
// (See :ref:`default credentials providers <config_http_filters_aws_lambda_credentials>`).
//
// .. warning::
// Distributing the AWS credentials via this configuration should not be done in production.
Credentials credentials = 6;
}
// AWS Lambda Credentials config.
message Credentials {
// AWS access key id.
string access_key_id = 1 [(validate.rules).string = {min_len: 1}];
// AWS secret access key.
string secret_access_key = 2 [(validate.rules).string = {min_len: 1}];
// AWS session token.
// This parameter is optional. If it is set to an empty string, it will not be considered in the request.
// It is required if temporary security credentials retrieved directly from AWS STS operations are used.
string session_token = 3;
}
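A hypothetical filter configuration with inline credentials — a sketch only, since (per the warning above) static credentials should not be distributed this way in production; the ARN and key values are placeholders:

```yaml
name: envoy.filters.http.aws_lambda
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.aws_lambda.v3.Config
  arn: arn:aws:lambda:us-east-1:123456789012:function:example-fn   # placeholder ARN
  credentials:
    access_key_id: EXAMPLEKEYID        # placeholder
    secret_access_key: EXAMPLESECRET   # placeholder
```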
// Per-route configuration for AWS Lambda. This can be useful when invoking a different Lambda function or a different

View File

@ -20,7 +20,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: HTTP Cache Filter]
// [#extension: envoy.filters.http.cache]
// [#next-free-field: 6]
// [#next-free-field: 7]
message CacheConfig {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.cache.v2alpha.CacheConfig";
@ -88,4 +88,9 @@ message CacheConfig {
// Max body size the cache filter will insert into a cache. 0 means unlimited (though the cache
// storage implementation may have its own limit beyond which it will reject insertions).
uint32 max_body_bytes = 4;
// By default, a ``cache-control: no-cache`` or ``pragma: no-cache`` header in the request
// causes the cache to validate with its upstream even if the lookup is a hit. Setting this
// to true will ignore these headers.
bool ignore_request_cache_control_header = 6;
}
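A sketch of a cache filter that serves hits even when the client sends ``no-cache`` (the simple in-memory cache is an assumed implementation choice):

```yaml
name: envoy.filters.http.cache
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.cache.v3.CacheConfig
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.http.cache.simple_http_cache.v3.SimpleHttpCacheConfig
  ignore_request_cache_control_header: true
```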

View File

@ -2,6 +2,7 @@ syntax = "proto3";
package envoy.extensions.filters.http.composite.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/config_source.proto";
import "envoy/config/core/v3/extension.proto";
@ -57,4 +58,13 @@ message ExecuteFilterAction {
// Only one of ``typed_config`` or ``dynamic_config`` can be set.
DynamicConfig dynamic_config = 2
[(udpa.annotations.field_migrate).oneof_promotion = "config_type"];
// Probability of the action execution. If not specified, this is 100%.
// This allows sampling behavior for the configured actions.
// For example, if
// :ref:`default_value <envoy_v3_api_field_config.core.v3.RuntimeFractionalPercent.default_value>`
// under the ``sample_percent`` is configured with 30%, a dice roll with that
// probability is done. The underlying action will only be executed if the
// dice roll returns positive. Otherwise, the action is skipped.
config.core.v3.RuntimeFractionalPercent sample_percent = 3;
}
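``sample_percent`` is a ``RuntimeFractionalPercent``, so the 30% example from the comment could be written as (the runtime key name is illustrative):

```yaml
sample_percent:
  runtime_key: composite.sample_percent   # hypothetical runtime key
  default_value:
    numerator: 30
    denominator: HUNDRED
```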

View File

@ -2,6 +2,7 @@ syntax = "proto3";
package envoy.extensions.filters.http.ext_authz.v3;
import "envoy/config/common/mutation_rules/v3/mutation_rules.proto";
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/config_source.proto";
import "envoy/config/core/v3/grpc_service.proto";
@ -28,10 +29,10 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// External Authorization :ref:`configuration overview <config_http_filters_ext_authz>`.
// [#extension: envoy.filters.http.ext_authz]
// [#next-free-field: 24]
// [#next-free-field: 28]
message ExtAuthz {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.ext_authz.v2.ExtAuthz";
"envoy.config.filter.http.ext_authz.v3.ExtAuthz";
reserved 4;
@ -92,6 +93,21 @@ message ExtAuthz {
// or cannot be reached. The default status is HTTP 403 Forbidden.
type.v3.HttpStatus status_on_error = 7;
// When this is set to true, the filter will check the :ref:`ext_authz response
// <envoy_v3_api_msg_service.auth.v3.CheckResponse>` for invalid header &
// query parameter mutations. If the side stream response is invalid, it will send a local reply
// to the downstream request with status HTTP 500 Internal Server Error.
//
// Note that headers_to_remove & query_parameters_to_remove are validated, but invalid elements in
// those fields should not affect any headers & thus will not cause the filter to send a local
// reply.
//
// When set to false, any invalid mutations will be visible to the rest of Envoy and may cause
// unexpected behavior.
//
// If you are using ext_authz with an untrusted ext_authz server, you should set this to true.
bool validate_mutations = 24;
// Specifies a list of metadata namespaces whose values, if present, will be passed to the
// ext_authz service. The :ref:`filter_metadata <envoy_v3_api_field_config.core.v3.Metadata.filter_metadata>`
// is passed as an opaque ``protobuf::Struct``.
@ -204,8 +220,17 @@ message ExtAuthz {
// <envoy_v3_api_field_extensions.filters.http.ext_authz.v3.ExtAuthz.with_request_body>` setting),
// consequently the value of *Content-Length* of the authorization request reflects its
// payload size.
//
// .. note::
//
// 3. This can be overridden by the field ``disallowed_headers`` below. That is, if a header
// matches for both ``allowed_headers`` and ``disallowed_headers``, the header will NOT be sent.
type.matcher.v3.ListStringMatcher allowed_headers = 17;
// If set, specifically disallow any header in this list to be forwarded to the external
// authentication server. This overrides the above ``allowed_headers`` if a header matches both.
type.matcher.v3.ListStringMatcher disallowed_headers = 25;
// Specifies if the TLS session level details like SNI are sent to the external service.
//
// When this field is true, Envoy will include the SNI name used for TLSClientHello, if available, in the
@ -237,6 +262,34 @@ message ExtAuthz {
// It's recommended you set this to true unless you already rely on the old behavior. False is the
// default only for backwards compatibility.
bool encode_raw_headers = 23;
// Rules for what modifications an ext_authz server may make to the request headers before
// continuing decoding / forwarding upstream.
//
// If set to anything, enables header mutation checking against configured rules. Note that
// :ref:`HeaderMutationRules <envoy_v3_api_msg_config.common.mutation_rules.v3.HeaderMutationRules>`
// has defaults that change ext_authz behavior. Also note that if this field is set to anything,
// ext_authz can no longer append to :-prefixed headers.
//
// If empty, header mutation rule checking is completely disabled.
//
// Regardless of what is configured here, ext_authz cannot remove :-prefixed headers.
//
// This field and ``validate_mutations`` have different use cases. ``validate_mutations`` enables
// correctness checks for all header / query parameter mutations (e.g. for invalid characters).
// This field allows the filter to reject mutations to specific headers.
config.common.mutation_rules.v3.HeaderMutationRules decoder_header_mutation_rules = 26;
// Enable / disable ingestion of dynamic metadata from ext_authz service.
//
// If false, the filter will ignore dynamic metadata injected by the ext_authz service. If the
// ext_authz service tries injecting dynamic metadata, the filter will log, increment the
// ``ignored_dynamic_metadata`` stat, then continue handling the response.
//
// If true, the filter will ingest dynamic metadata entries as normal.
//
// If unset, defaults to true.
google.protobuf.BoolValue enable_dynamic_metadata_ingestion = 27;
}
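A sketch combining the new ext_authz fields from this diff (cluster and header names are hypothetical; other required fields omitted):

```yaml
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    grpc_service:
      envoy_grpc:
        cluster_name: ext_authz_cluster       # assumed cluster name
    # Reject invalid header/query mutations from an untrusted authz server.
    validate_mutations: true
    allowed_headers:
      patterns:
      - prefix: x-app-
    # Wins over allowed_headers when both match.
    disallowed_headers:
      patterns:
      - exact: x-app-internal                 # hypothetical header
    # Treat mutations of matching headers as errors.
    decoder_header_mutation_rules:
      disallow_is_error: true
      disallow_expression:
        regex: "^x-internal-.*"
    # Drop dynamic metadata injected by the authz service.
    enable_dynamic_metadata_ingestion: false
```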
// Configuration for buffering the request data.

View File

@ -11,6 +11,7 @@ import "envoy/type/matcher/v3/string.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "udpa/annotations/migrate.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -97,8 +98,27 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// <arch_overview_advanced_filter_state_sharing>` object in a namespace matching the filter
// name.
//
// [#next-free-field: 18]
// [#next-free-field: 20]
message ExternalProcessor {
// Describes the route cache action to be taken when an external processor response
// is received in response to request headers.
enum RouteCacheAction {
// The default behavior is to clear the route cache only when the
// :ref:`clear_route_cache <envoy_v3_api_field_service.ext_proc.v3.CommonResponse.clear_route_cache>`
// field is set in an external processor response.
DEFAULT = 0;
// Always clear the route cache irrespective of the clear_route_cache bit in
// the external processor response.
CLEAR = 1;
// Do not clear the route cache irrespective of the clear_route_cache bit in
// the external processor response. Setting this to RETAIN is equivalent to setting the
// :ref:`disable_clear_route_cache <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.disable_clear_route_cache>`
// field to true.
RETAIN = 2;
}
reserved 4;
reserved "async_mode";
@ -172,11 +192,6 @@ message ExternalProcessor {
gte {}
}];
// Prevents clearing the route-cache when the
// :ref:`clear_route_cache <envoy_v3_api_field_service.ext_proc.v3.CommonResponse.clear_route_cache>`
// field is set in an external processor response.
bool disable_clear_route_cache = 11;
// Allow headers matching the ``forward_rules`` to be forwarded to the external processing server.
// If not set, all headers are forwarded to the external processing server.
HeaderForwardingRules forward_rules = 12;
@ -226,6 +241,28 @@ message ExternalProcessor {
// This work is currently tracked under https://github.com/envoyproxy/envoy/issues/33319.
//
bool observability_mode = 17;
// Prevents clearing the route-cache when the
// :ref:`clear_route_cache <envoy_v3_api_field_service.ext_proc.v3.CommonResponse.clear_route_cache>`
// field is set in an external processor response.
// Only one of ``disable_clear_route_cache`` or ``route_cache_action`` can be set.
// It is recommended to set ``route_cache_action`` which supersedes ``disable_clear_route_cache``.
bool disable_clear_route_cache = 11
[(udpa.annotations.field_migrate).oneof_promotion = "clear_route_cache_type"];
// Specifies the action to be taken when an external processor response is
// received in response to request headers. It is recommended to set this field rather than
// :ref:`disable_clear_route_cache <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.disable_clear_route_cache>`.
// Only one of ``disable_clear_route_cache`` or ``route_cache_action`` can be set.
RouteCacheAction route_cache_action = 18
[(udpa.annotations.field_migrate).oneof_promotion = "clear_route_cache_type"];
// Specifies the deferred closure timeout for the gRPC stream that connects to the external processor. Currently, deferred stream closure
// is only used in :ref:`observability_mode <envoy_v3_api_field_extensions.filters.http.ext_proc.v3.ExternalProcessor.observability_mode>`.
// In observability mode, gRPC streams may be held open to the external processor longer than the regular client-to-backend
// stream lifetime. In this case, Envoy will eventually time out the external processor stream according to this time limit.
// The default value is 5000 milliseconds (5 seconds) if not specified.
google.protobuf.Duration deferred_close_timeout = 19;
}
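The route-cache fields above are mutually exclusive; a sketch preferring the new enum (cluster name assumed, other required fields omitted):

```yaml
- name: envoy.filters.http.ext_proc
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_proc.v3.ExternalProcessor
    grpc_service:
      envoy_grpc:
        cluster_name: ext_proc_cluster   # assumed cluster name
    # Supersedes disable_clear_route_cache; only one of the two may be set.
    route_cache_action: RETAIN
    observability_mode: true
    # Close lingering observability-mode streams after 10s instead of the 5s default.
    deferred_close_timeout: 10s
```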
// The MetadataOptions structure defines options for the sending and receiving of

View File

@ -5,8 +5,10 @@ package envoy.extensions.filters.http.gcp_authn.v3;
import "envoy/config/core/v3/base.proto";
import "envoy/config/core/v3/http_uri.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/wrappers.proto";
import "envoy/annotations/deprecation.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -21,12 +23,21 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#extension: envoy.filters.http.gcp_authn]
// Filter configuration.
// [#next-free-field: 7]
message GcpAuthnFilterConfig {
// The HTTP URI to fetch tokens from the GCE Metadata Server (https://cloud.google.com/compute/docs/metadata/overview).
// The URL format is "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=[AUDIENCE]"
config.core.v3.HttpUri http_uri = 1 [(validate.rules).message = {required: true}];
//
// This field is deprecated because it does not match the API surface provided by the google auth libraries.
// Control planes should not attempt to override the metadata server URI.
// The cluster and timeout can be configured using the ``cluster`` and ``timeout`` fields instead.
// For backward compatibility, the cluster and timeout configured in this field will be used
// if the new ``cluster`` and ``timeout`` fields are not set.
config.core.v3.HttpUri http_uri = 1
[deprecated = true, (envoy.annotations.deprecated_at_minor_version) = "3.0"];
// Retry policy for fetching tokens. This field is optional.
// Retry policy for fetching tokens.
// Not supported by all data planes.
config.core.v3.RetryPolicy retry_policy = 2;
// Token cache configuration. This field is optional.
@ -34,7 +45,20 @@ message GcpAuthnFilterConfig {
// Request header location to extract the token. By default (i.e. if this field is not specified), the token
// is extracted to the Authorization HTTP header, in the format "Authorization: Bearer <token>".
// Not supported by all data planes.
TokenHeader token_header = 4;
// Cluster to send traffic to the GCE metadata server. Not supported
// by all data planes; a data plane may instead have its own mechanism
// for contacting the metadata server.
string cluster = 5;
// Timeout for fetching the tokens from the GCE metadata server.
// Not supported by all data planes.
google.protobuf.Duration timeout = 6 [(validate.rules).duration = {
lt {seconds: 4294967296}
gte {}
}];
}
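With ``http_uri`` deprecated, a config would name only the cluster and timeout (the cluster name is an assumption):

```yaml
- name: envoy.filters.http.gcp_authn
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.gcp_authn.v3.GcpAuthnFilterConfig
    # Replaces the deprecated http_uri; the data plane contacts the metadata server itself.
    cluster: gcp_metadata_cluster   # assumed cluster pointing at the GCE metadata server
    timeout: 5s
```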
// Audience is the URL of the receiving service that performs token authentication.

View File

@ -18,7 +18,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// gRPC-JSON transcoder :ref:`configuration overview <config_http_filters_grpc_json_transcoder>`.
// [#extension: envoy.filters.http.grpc_json_transcoder]
// [#next-free-field: 17]
// [#next-free-field: 18]
// GrpcJsonTranscoder filter configuration.
// The filter itself can be used per route / per virtual host or on the general level. The most
// specific one is being used for a given route. If the list of services is empty - filter
@ -88,7 +88,8 @@ message GrpcJsonTranscoder {
// When set to true, the request will be rejected with a ``HTTP 400 Bad Request``.
//
// The fields
// :ref:`ignore_unknown_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.ignore_unknown_query_parameters>`
// :ref:`ignore_unknown_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.ignore_unknown_query_parameters>`,
// :ref:`capture_unknown_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.capture_unknown_query_parameters>`,
// and
// :ref:`ignored_query_parameters <envoy_v3_api_field_extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder.ignored_query_parameters>`
// have priority over this strict validation behavior.
@ -288,4 +289,20 @@ message GrpcJsonTranscoder {
//
// If unset, the current stream buffer size is used.
google.protobuf.UInt32Value max_response_body_size = 16 [(validate.rules).uint32 = {gt: 0}];
// If true, query parameters that cannot be mapped to a corresponding
// protobuf field are captured in an HttpBody extension of UnknownQueryParams.
bool capture_unknown_query_parameters = 17;
}
// ``UnknownQueryParams`` is added as an extension field in ``HttpBody`` if
// ``GrpcJsonTranscoder::capture_unknown_query_parameters`` is true and unknown query
// parameters were present in the request.
message UnknownQueryParams {
message Values {
repeated string values = 1;
}
// A map from unrecognized query parameter keys to the values associated with those keys.
map<string, Values> key = 1;
}
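A sketch enabling the new capture behavior (descriptor path and service name are placeholders):

```yaml
- name: envoy.filters.http.grpc_json_transcoder
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
    proto_descriptor: /etc/envoy/descriptors.pb   # placeholder path
    services: [example.Bookstore]                 # hypothetical service
    # Unmapped query parameters land in an UnknownQueryParams HttpBody extension.
    capture_unknown_query_parameters: true
```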

View File

@ -191,7 +191,7 @@ message JwtProvider {
// If false, the JWT is removed in the request after a success verification. If true, the JWT is
// not removed in the request. Default value is false.
// caveat: only works for from_header & has no effect for JWTs extracted through from_params & from_cookies.
// caveat: only works for from_header/from_params & has no effect for JWTs extracted through from_cookies.
bool forward = 5;
// Two fields below define where to extract the JWT from an HTTP request.
@ -395,7 +395,7 @@ message RemoteJwks {
// cluster: jwt.www.googleapis.com|443
// timeout: 1s
//
config.core.v3.HttpUri http_uri = 1;
config.core.v3.HttpUri http_uri = 1 [(validate.rules).message = {required: true}];
// Duration after which the cached JWKS should be expired. If not specified, default cache
// duration is 10 minutes.
@ -729,7 +729,7 @@ message FilterStateRule {
// - provider_name: provider1
// - provider_name: provider2
//
// [#next-free-field: 6]
// [#next-free-field: 7]
message JwtAuthentication {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.http.jwt_authn.v2alpha.JwtAuthentication";
@ -802,6 +802,11 @@ message JwtAuthentication {
// :ref:`requirement_name <envoy_v3_api_field_extensions.filters.http.jwt_authn.v3.PerRouteConfig.requirement_name>`
// in ``PerRouteConfig`` uses this map to specify a JwtRequirement.
map<string, JwtRequirement> requirement_map = 5;
// A request failing the verification process will receive a 401 downstream with the failure response details
// in the body along with a WWW-Authenticate header value set to "invalid token". If this value is set to true,
// the response details will be stripped and only a 401 response code will be returned. Default value is false.
bool strip_failure_response = 6;
}
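A sketch using ``strip_failure_response`` together with the now-required ``remote_jwks.http_uri`` (provider name, issuer, and cluster are placeholders):

```yaml
"@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
providers:
  example_provider:                  # hypothetical provider name
    issuer: https://issuer.example.com
    remote_jwks:
      http_uri:                      # required per the validation rule in this diff
        uri: https://issuer.example.com/.well-known/jwks.json
        cluster: jwks_cluster        # assumed cluster
        timeout: 1s
# Return a bare 401 with no body details or WWW-Authenticate value.
strip_failure_response: true
```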
// Specify per-route config.

View File

@ -22,7 +22,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// Local Rate limit :ref:`configuration overview <config_http_filters_local_rate_limit>`.
// [#extension: envoy.filters.http.local_ratelimit]
// [#next-free-field: 16]
// [#next-free-field: 17]
message LocalRateLimit {
// The human readable prefix to use when emitting stats.
string stat_prefix = 1 [(validate.rules).string = {min_len: 1}];
@ -110,6 +110,23 @@ message LocalRateLimit {
// If unspecified, the default value is false.
bool local_rate_limit_per_downstream_connection = 11;
// Enables the local cluster level rate limiting, iff this is set explicitly. For example,
// given an Envoy gateway that contains N Envoy instances and a rate limit rule of X tokens
// per second: if this is set, the total rate limit of the whole gateway will always be X tokens
// per second regardless of how N changes. If this is not set, the total rate limit of the
// whole gateway will be N * X tokens per second.
//
// .. note::
// This should never be set if ``local_rate_limit_per_downstream_connection`` is set to
// true, because when per-connection rate limiting is enabled, the token buckets are
// assumed never to be shared across Envoy instances.
//
// .. note::
// This only works when the :ref:`local cluster name
// <envoy_v3_api_field_config.bootstrap.v3.ClusterManager.local_cluster_name>` is set and
// the related cluster is defined in the bootstrap configuration.
common.ratelimit.v3.LocalClusterRateLimit local_cluster_rate_limit = 16;
// Defines the standard version to use for X-RateLimit headers emitted by the filter.
//
// Disabled by default.
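The cluster-wide limiting described above can be sketched as a fragment (values illustrative; ``local_cluster_name`` must also be set in the bootstrap, and per-connection limiting must stay off):

```yaml
stat_prefix: gateway_rl              # illustrative prefix
token_bucket:
  max_tokens: 100
  tokens_per_fill: 100
  fill_interval: 1s
# Share the bucket across the local cluster: the gateway-wide total stays
# 100/s however many Envoys run. LocalClusterRateLimit carries no fields.
local_cluster_rate_limit: {}
```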

View File

@ -74,7 +74,7 @@ message OAuth2Credentials {
// OAuth config
//
// [#next-free-field: 16]
// [#next-free-field: 18]
message OAuth2Config {
enum AuthType {
// The ``client_id`` and ``client_secret`` will be sent in the URL encoded request body.
@ -111,6 +111,12 @@ message OAuth2Config {
// Forward the OAuth token as a Bearer to upstream web service.
bool forward_bearer_token = 7;
// If set to true, preserve the existing authorization header.
// By default Envoy strips the existing authorization header before forwarding upstream.
// Cannot be set to true if ``forward_bearer_token`` is already set to true.
// Default value is false.
bool preserve_authorization_header = 16;
// Any request that matches any of the provided matchers will be passed through without OAuth validation.
repeated config.route.v3.HeaderMatcher pass_through_matcher = 8;
@ -149,6 +155,12 @@ message OAuth2Config {
// in a week.
// This setting is only considered if ``use_refresh_token`` is set to true, otherwise the authorization server expiration or ``default_expires_in`` is used.
google.protobuf.Duration default_refresh_token_expires_in = 15;
// If set to true, Envoy will not set a cookie for ID Token even if one is received from the Identity Provider. This may be useful in cases where the ID
// Token is too large for HTTP cookies (longer than 4096 characters). Enabling this option will only disable setting the cookie response header, the filter
// will still process incoming ID Tokens as part of the HMAC if they are there. This is to ensure compatibility while switching this setting on. Future
// sessions would not set the IdToken cookie header.
bool disable_id_token_set_cookie = 17;
}
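A fragment showing the two new toggles (all other required OAuth2 config fields omitted):

```yaml
# Keep the client's own Authorization header; incompatible with forward_bearer_token: true.
preserve_authorization_header: true
# Stop emitting the IdToken cookie (e.g. when tokens exceed the 4096-character cookie limit);
# incoming ID Tokens are still included in the HMAC check.
disable_id_token_set_cookie: true
```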
// Filter config.

View File

@ -0,0 +1,194 @@
syntax = "proto3";
package envoy.extensions.filters.http.thrift_to_metadata.v3;
import "envoy/extensions/filters/network/thrift_proxy/v3/thrift_proxy.proto";
import "google/protobuf/struct.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.filters.http.thrift_to_metadata.v3";
option java_outer_classname = "ThriftToMetadataProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/thrift_to_metadata/v3;thrift_to_metadatav3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Thrift-To-Metadata Filter]
//
// The Thrift to Metadata filter serves thrift-over-HTTP traffic, expecting serialized
// Thrift request and response bodies in the HTTP payload. It extracts *thrift metadata* from the
// HTTP body and puts it into the *filter metadata*. This is useful for matching load balancer
// subsets, logging, etc.
//
// Thrift to Metadata :ref:`configuration overview <config_http_filters_thrift_to_metadata>`.
// [#extension: envoy.filters.http.thrift_to_metadata]
enum Field {
// The Thrift method name, string value.
METHOD_NAME = 0;
// The Thrift protocol name, string value. Values are "binary", "binary/non-strict", and "compact", with "(auto)" suffix if
// :ref:`protocol <envoy_v3_api_field_extensions.filters.http.thrift_to_metadata.v3.ThriftToMetadata.protocol>`
// is set to :ref:`AUTO_PROTOCOL<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.ProtocolType.AUTO_PROTOCOL>`
PROTOCOL = 1;
// The Thrift transport name, string value. Values are "framed", "header", and "unframed", with "(auto)" suffix if
// :ref:`transport <envoy_v3_api_field_extensions.filters.http.thrift_to_metadata.v3.ThriftToMetadata.transport>`
// is set to :ref:`AUTO_TRANSPORT<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.TransportType.AUTO_TRANSPORT>`
TRANSPORT = 2;
// The Thrift header flags, signed 16-bit integer value.
HEADER_FLAGS = 3;
// The Thrift sequence ID, signed 32-bit integer value.
SEQUENCE_ID = 4;
// The Thrift message type, string value. Values in request are "call" and "oneway", and in response are "reply" and "exception".
MESSAGE_TYPE = 5;
// The Thrift reply type, string value. This is only valid for response rules. Values are "success" and "error".
REPLY_TYPE = 6;
}
message KeyValuePair {
// The metadata namespace. If this is empty, the filter's namespace will be used.
string metadata_namespace = 1;
// The key to use within the namespace.
string key = 2 [(validate.rules).string = {min_len: 1}];
// When used for on_present case, if value is non-empty it'll be used instead
// of the field value.
//
// When used for on_missing case, a non-empty value must be provided.
google.protobuf.Value value = 3;
}
message FieldSelector {
option (xds.annotations.v3.message_status).work_in_progress = true;
// field name to log
string name = 1 [(validate.rules).string = {min_len: 1}];
// field id to match
int32 id = 2 [(validate.rules).int32 = {lte: 32767 gte: -32768}];
// next node of the field selector
FieldSelector child = 3;
}
// [#next-free-field: 6]
message Rule {
// The field to match on. If set, takes precedence over field_selector.
Field field = 1;
// Specifies that a match will be performed on the value of a field in the thrift body.
// If set, the whole http body will be buffered to extract the field value, which
// may have performance implications.
//
// It's a thrift over http version of
// :ref:`field_selector<envoy_v3_api_field_extensions.filters.network.thrift_proxy.filters.payload_to_metadata.v3.PayloadToMetadata.Rule.field_selector>`.
//
// See also `payload-to-metadata <https://www.envoyproxy.io/docs/envoy/latest/configuration/other_protocols/thrift_filters/payload_to_metadata_filter>`_
// for more reference.
//
// Example:
//
// .. code-block:: yaml
//
// method_name: foo
// field_selector:
// name: info
// id: 2
// child:
// name: version
// id: 1
//
// The above yaml will match on value of ``info.version`` in the below thrift schema as input of
// :ref:`on_present<envoy_v3_api_field_extensions.filters.http.thrift_to_metadata.v3.Rule.on_present>` or
// :ref:`on_missing<envoy_v3_api_field_extensions.filters.http.thrift_to_metadata.v3.Rule.on_missing>`
// while we are processing ``foo`` method. This rule won't be applied to ``bar`` method.
//
// .. code-block:: thrift
//
// struct Info {
// 1: required string version;
// }
// service Server {
// bool foo(1: i32 id, 2: Info info);
// bool bar(1: i32 id, 2: Info info);
// }
//
FieldSelector field_selector = 2 [(xds.annotations.v3.field_status).work_in_progress = true];
// If specified, :ref:`field_selector<envoy_v3_api_field_extensions.filters.http.thrift_to_metadata.v3.Rule.field_selector>`
// will be used to extract the field value *only* on the thrift message with method name.
string method_name = 3 [(xds.annotations.v3.field_status).work_in_progress = true];
// The key-value pair to set in the *filter metadata* if the field is present
// in *thrift metadata*.
//
// If the value in the KeyValuePair is non-empty, it'll be used instead
// of field value.
KeyValuePair on_present = 4;
// The key-value pair to set in the *filter metadata* if the field is missing
// in *thrift metadata*.
//
// The value in the KeyValuePair must be set, since it'll be used in lieu
// of the missing field value.
KeyValuePair on_missing = 5;
}
// The configuration for transforming thrift metadata into filter metadata.
//
// [#next-free-field: 7]
message ThriftToMetadata {
// The list of rules to apply to http request body to extract thrift metadata.
repeated Rule request_rules = 1;
// The list of rules to apply to http response body to extract thrift metadata.
repeated Rule response_rules = 2;
// Supplies the type of transport that the Thrift proxy should use. Defaults to
// :ref:`AUTO_TRANSPORT<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.TransportType.AUTO_TRANSPORT>`.
network.thrift_proxy.v3.TransportType transport = 3
[(validate.rules).enum = {defined_only: true}];
// Supplies the type of protocol that the Thrift proxy should use. Defaults to
// :ref:`AUTO_PROTOCOL<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.ProtocolType.AUTO_PROTOCOL>`.
// Note that :ref:`LAX_BINARY<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.ProtocolType.LAX_BINARY>`
// is not distinguished by :ref:`AUTO_PROTOCOL<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.ProtocolType.AUTO_PROTOCOL>`,
// which is the same as in the :ref:`thrift_proxy network filter <envoy_v3_api_msg_extensions.filters.network.thrift_proxy.v3.ThriftProxy>`.
// Note that :ref:`TWITTER<envoy_v3_api_enum_value_extensions.filters.network.thrift_proxy.v3.ProtocolType.TWITTER>` is
// not supported because it is deprecated in Envoy.
network.thrift_proxy.v3.ProtocolType protocol = 4 [(validate.rules).enum = {defined_only: true}];
// Allowed content-type for thrift payload to filter metadata transformation.
// Defaults to ``{"application/x-thrift"}``.
//
// Set ``allow_empty_content_type`` if empty/missing content-type header
// is allowed.
repeated string allow_content_types = 5
[(validate.rules).repeated = {items {string {min_len: 1}}}];
// Whether an empty/missing content-type header is allowed for thrift payload to filter metadata transformation.
// Defaults to false.
bool allow_empty_content_type = 6;
}
// Thrift to metadata configuration on a per-route basis, which overrides the global configuration for
// request rules and response rules.
message ThriftToMetadataPerRoute {
option (xds.annotations.v3.message_status).work_in_progress = true;
// The list of rules to apply to http request body to extract thrift metadata.
repeated Rule request_rules = 1;
// The list of rules to apply to http response body to extract thrift metadata.
repeated Rule response_rules = 2;
}
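A sketch of the new filter pulling the Thrift method name into load-balancing metadata (namespace and keys are illustrative):

```yaml
- name: envoy.filters.http.thrift_to_metadata
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.thrift_to_metadata.v3.ThriftToMetadata
    request_rules:
    - field: METHOD_NAME
      on_present:
        metadata_namespace: envoy.lb   # illustrative namespace
        key: method
      on_missing:
        metadata_namespace: envoy.lb
        key: method
        value: unknown                 # on_missing must carry a value
```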

View File

@ -18,6 +18,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// PROXY protocol listener filter.
// [#extension: envoy.filters.listener.proxy_protocol]
// [#next-free-field: 6]
message ProxyProtocol {
option (udpa.annotations.versioning).previous_message_type =
"envoy.config.filter.listener.proxy_protocol.v2.ProxyProtocol";
@ -85,4 +86,10 @@ message ProxyProtocol {
// and an incoming request matches the V2 signature, the filter will allow the request through without any modification.
// The filter treats this request as if it did not have any PROXY protocol information.
repeated config.core.v3.ProxyProtocolConfig.Version disallowed_versions = 4;
// The human readable prefix to use when emitting statistics for the filter.
// If not configured, statistics will be emitted without the prefix segment.
// See the :ref:`filter's statistics documentation <config_listener_filters_proxy_protocol>` for
// more information.
string stat_prefix = 5;
}
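A sketch of a listener filter entry using the new field (the prefix string is illustrative):

```yaml
listener_filters:
- name: envoy.filters.listener.proxy_protocol
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol
    # Adds a prefix segment to the filter's emitted statistics.
    stat_prefix: front_listener        # illustrative prefix
```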

View File

@ -547,9 +547,10 @@ message HttpConnectionManager {
// race with the final GOAWAY frame. During this grace period, Envoy will
// continue to accept new streams. After the grace period, a final GOAWAY
// frame is sent and Envoy will start refusing new streams. Draining occurs
// both when a connection hits the idle timeout or during general server
// draining. The default grace period is 5000 milliseconds (5 seconds) if this
// option is not specified.
// either when a connection hits the idle timeout, when :ref:`max_connection_duration
// <envoy_v3_api_field_config.core.v3.HttpProtocolOptions.max_connection_duration>`
// is reached, or during general server draining. The default grace period is
// 5000 milliseconds (5 seconds) if this option is not specified.
google.protobuf.Duration drain_timeout = 12;
// The delayed close timeout is for downstream connections managed by the HTTP connection manager.
@ -663,6 +664,34 @@ message HttpConnectionManager {
// purposes. If unspecified, only RFC1918 IP addresses will be considered internal.
// See the documentation for :ref:`config_http_conn_man_headers_x-envoy-internal` for more
// information about internal/external addresses.
//
// .. warning::
// In the next release, no IP addresses will be considered trusted. If you have tooling such as probes
// on your private network which need to be treated as trusted (e.g. changing arbitrary x-envoy headers)
// you will have to manually include those addresses or CIDR ranges like:
//
// .. validated-code-block:: yaml
// :type-name: envoy.extensions.filters.network.http_connection_manager.v3.InternalAddressConfig
//
//     cidr_ranges:
//     - address_prefix: 10.0.0.0
//       prefix_len: 8
//     - address_prefix: 192.168.0.0
//       prefix_len: 16
//     - address_prefix: 172.16.0.0
//       prefix_len: 12
//     - address_prefix: 127.0.0.1
//       prefix_len: 32
//     - address_prefix: fd00::
//       prefix_len: 8
//     - address_prefix: ::1
//       prefix_len: 128
//
InternalAddressConfig internal_address_config = 25;
// If set, Envoy will not append the remote address to the

View File

@ -22,6 +22,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// * ROUTE
// * UPSTREAM_HOST
// * LISTENER
// * VIRTUAL_HOST
//
// See :ref:`here <config_access_log>` for more information on access log configuration.

View File

@ -54,7 +54,7 @@ message RedirectPolicy {
// The new response status code if specified. This is used to override the
// status code of the response from the new upstream if it is not an error status.
google.protobuf.UInt32Value status_code = 3 [(validate.rules).uint32 = {lte: 999 gte: 100}];
google.protobuf.UInt32Value status_code = 3 [(validate.rules).uint32 = {lte: 999 gte: 200}];
// HTTP headers to add to the response. This allows the
// response policy to append, to add or to override headers of

View File

@ -5,6 +5,8 @@ package envoy.extensions.http.injected_credentials.oauth2.v3;
import "envoy/config/core/v3/http_uri.proto";
import "envoy/extensions/transport_sockets/tls/v3/secret.proto";
import "google/protobuf/duration.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
@ -18,7 +20,6 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
option (xds.annotations.v3.file_status).work_in_progress = true;
// [#protodoc-title: OAuth2 Credential]
// [#not-implemented-hide:]
// [#extension: envoy.http.injected_credentials.oauth2]
// OAuth2 extension can be used to retrieve an OAuth2 access token from an authorization server and inject it into the
@ -67,4 +68,9 @@ message OAuth2 {
// Refer to [RFC 6749: The OAuth 2.0 Authorization Framework](https://www.rfc-editor.org/rfc/rfc6749#section-4.4) for details.
ClientCredentials client_credentials = 3;
}
// The interval between two successive retries to fetch token from Identity Provider. Default is 2 secs.
// The interval must be at least 1 second.
google.protobuf.Duration token_fetch_retry_interval = 4
[(validate.rules).duration = {gte {seconds: 1}}];
}
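A fragment placing the new retry interval alongside the token endpoint (IdP URI, cluster, and the 5s value are placeholders):

```yaml
token_endpoint:
  uri: https://idp.example.com/oauth/token   # placeholder IdP
  cluster: idp_cluster                       # assumed cluster
  timeout: 3s
# Retry fetching the token every 5s instead of the 2s default (minimum 1s).
token_fetch_retry_interval: 5s
```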

View File

@ -5,6 +5,8 @@ package envoy.extensions.network.dns_resolver.cares.v3;
import "envoy/config/core/v3/address.proto";
import "envoy/config/core/v3/resolver.proto";
import "google/protobuf/wrappers.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
@ -18,6 +20,7 @@ option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#extension: envoy.network.dns_resolver.cares]
// Configuration for c-ares DNS resolver.
// [#next-free-field: 6]
message CaresDnsResolverConfig {
// A list of dns resolver addresses.
// :ref:`use_resolvers_as_fallback<envoy_v3_api_field_extensions.network.dns_resolver.cares.v3.CaresDnsResolverConfig.use_resolvers_as_fallback>`
@ -41,4 +44,8 @@ message CaresDnsResolverConfig {
// Configuration of DNS resolver option flags which control the behavior of the DNS resolver.
config.core.v3.DnsResolverOptions dns_resolver_options = 2;
// This option caps the number of UDP-based DNS queries. Note that this
// currently applies only to the c-ares DNS resolver.
google.protobuf.UInt32Value udp_max_queries = 5;
}


@ -0,0 +1,18 @@
syntax = "proto3";
package envoy.extensions.quic.connection_debug_visitor.v3;
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.extensions.quic.connection_debug_visitor.v3";
option java_outer_classname = "ConnectionDebugVisitorBasicProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/quic/connection_debug_visitor/v3;connection_debug_visitorv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: QUIC connection debug visitor basic config]
// [#extension: envoy.quic.connection_debug_visitor.basic]
// Configuration for a basic QUIC connection debug visitor.
message BasicConfig {
}


@ -0,0 +1,53 @@
syntax = "proto3";
package envoy.extensions.quic.server_preferred_address.v3;
import "envoy/config/core/v3/base.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
import "validate/validate.proto";
option java_package = "io.envoyproxy.envoy.extensions.quic.server_preferred_address.v3";
option java_outer_classname = "DatasourceProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/quic/server_preferred_address/v3;server_preferred_addressv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: QUIC DataSource server preferred address config]
// [#extension: envoy.quic.server_preferred_address.datasource]
// Configuration for DataSourceServerPreferredAddressConfig.
message DataSourceServerPreferredAddressConfig {
// [#comment:TODO(danzh2010): discuss with API shepherds before removing WiP status.]
option (xds.annotations.v3.message_status).work_in_progress = true;
// Addresses for server preferred address for a single address family (IPv4 or IPv6).
message AddressFamilyConfig {
// The server preferred address sent to clients. The data must contain an IP address string.
config.core.v3.DataSource address = 1 [(validate.rules).message = {required: true}];
// The server preferred address port sent to clients. The data must contain an integer port value.
//
// If this is not specified, the listener's port is used.
//
// Note: Envoy currently must receive all packets for a QUIC connection on the same port, so unless
// :ref:`dnat_address <envoy_v3_api_field_extensions.quic.server_preferred_address.v3.DataSourceServerPreferredAddressConfig.AddressFamilyConfig.dnat_address>`
// is configured, this must be left unset.
config.core.v3.DataSource port = 2;
// If there is a DNAT between the client and Envoy, the address that Envoy will observe
// server preferred address packets being sent to. If this is not specified, it is assumed
// there is no DNAT and the server preferred address packets will be sent to the address advertised
// to clients for server preferred address.
config.core.v3.DataSource dnat_address = 3;
}
// The IPv4 address to advertise to clients for Server Preferred Address.
AddressFamilyConfig ipv4_config = 1;
// The IPv6 address to advertise to clients for Server Preferred Address.
AddressFamilyConfig ipv6_config = 2;
}


@ -2,6 +2,8 @@ syntax = "proto3";
package envoy.extensions.quic.server_preferred_address.v3;
import "envoy/config/core/v3/address.proto";
import "xds/annotations/v3/status.proto";
import "udpa/annotations/status.proto";
@ -12,7 +14,7 @@ option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/quic/server_preferred_address/v3;server_preferred_addressv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: QUIC server preferred address config]
// [#protodoc-title: QUIC fixed server preferred address config]
// [#extension: envoy.quic.server_preferred_address.fixed]
// Configuration for FixedServerPreferredAddressConfig.
@ -21,15 +23,41 @@ message FixedServerPreferredAddressConfig {
option (xds.annotations.v3.message_status).work_in_progress = true;
oneof ipv4_type {
// String representation of IPv4 address, i.e. "127.0.0.2".
// If not specified, none will be configured.
string ipv4_address = 1;
// Addresses for server preferred address for a single address family (IPv4 or IPv6).
message AddressFamilyConfig {
// The server preferred address sent to clients.
//
// Note: Envoy currently must receive all packets for a QUIC connection on the same port, so unless
// :ref:`dnat_address <envoy_v3_api_field_extensions.quic.server_preferred_address.v3.FixedServerPreferredAddressConfig.AddressFamilyConfig.dnat_address>`
// is configured, the port for this address must be zero, and the listener's
// port will be used instead.
config.core.v3.SocketAddress address = 1;
// If there is a DNAT between the client and Envoy, the address that Envoy will observe
// server preferred address packets being sent to. If this is not specified, it is assumed
// there is no DNAT and the server preferred address packets will be sent to the address advertised
// to clients for server preferred address.
//
// Note: Envoy currently must receive all packets for a QUIC connection on the same port, so the
// port for this address must be zero, and the listener's port will be used instead.
config.core.v3.SocketAddress dnat_address = 2;
}
oneof ipv6_type {
// String representation of IPv6 address, i.e. "::1".
// If not specified, none will be configured.
string ipv6_address = 2;
}
// String representation of IPv4 address, i.e. "127.0.0.2".
// If not specified, none will be configured.
string ipv4_address = 1;
// The IPv4 address to advertise to clients for Server Preferred Address.
// This field takes precedence over
// :ref:`ipv4_address <envoy_v3_api_field_extensions.quic.server_preferred_address.v3.FixedServerPreferredAddressConfig.ipv4_address>`.
AddressFamilyConfig ipv4_config = 3;
// String representation of IPv6 address, i.e. "::1".
// If not specified, none will be configured.
string ipv6_address = 2;
// The IPv6 address to advertise to clients for Server Preferred Address.
// This field takes precedence over
// :ref:`ipv6_address <envoy_v3_api_field_extensions.quic.server_preferred_address.v3.FixedServerPreferredAddressConfig.ipv6_address>`.
AddressFamilyConfig ipv6_config = 4;
}


@ -0,0 +1,23 @@
syntax = "proto3";
package envoy.extensions.tracers.opentelemetry.resource_detectors.v3;
import "udpa/annotations/status.proto";
option java_package = "io.envoyproxy.envoy.extensions.tracers.opentelemetry.resource_detectors.v3";
option java_outer_classname = "StaticConfigResourceDetectorProto";
option java_multiple_files = true;
option go_package = "github.com/envoyproxy/go-control-plane/envoy/extensions/tracers/opentelemetry/resource_detectors/v3;resource_detectorsv3";
option (udpa.annotations.file_status).package_version_status = ACTIVE;
// [#protodoc-title: Static Config Resource Detector config]
// Configuration for the Static Resource detector extension.
// The resource detector uses static config for resource attribute,
// as per the OpenTelemetry specification.
//
// [#extension: envoy.tracers.opentelemetry.resource_detectors.static_config]
message StaticConfigResourceDetectorConfig {
// Custom Resource attributes to be included.
map<string, string> attributes = 1;
}


@ -314,16 +314,32 @@ message SubjectAltNameMatcher {
DNS = 2;
URI = 3;
IP_ADDRESS = 4;
OTHER_NAME = 5;
}
// Specification of type of SAN. Note that the default enum value is an invalid choice.
SanType san_type = 1 [(validate.rules).enum = {defined_only: true not_in: 0}];
// Matcher for SAN value.
//
// The string matching for OTHER_NAME SAN values depends on their ASN.1 type:
//
// * OBJECT: Validated against its dotted numeric notation (e.g., "1.2.3.4")
// * BOOLEAN: Validated against strings "true" or "false"
// * INTEGER/ENUMERATED: Validated against a string containing the integer value
// * NULL: Validated against an empty string
// * Other types: Validated directly against the string value
type.matcher.v3.StringMatcher matcher = 2 [(validate.rules).message = {required: true}];
// OID Value which is required if OTHER_NAME SAN type is used.
// For example, UPN OID is 1.3.6.1.4.1.311.20.2.3
// (Reference: http://oid-info.com/get/1.3.6.1.4.1.311.20.2.3).
//
// If set for SAN types other than OTHER_NAME, it will be ignored.
string oid = 3;
}
// [#next-free-field: 17]
// [#next-free-field: 18]
message CertificateValidationContext {
option (udpa.annotations.versioning).previous_message_type =
"envoy.api.v2.auth.CertificateValidationContext";
@ -339,6 +355,9 @@ message CertificateValidationContext {
ACCEPT_UNTRUSTED = 1;
}
message SystemRootCerts {
}
reserved 4, 5;
reserved "verify_subject_alt_name";
@ -378,20 +397,23 @@ message CertificateValidationContext {
// can be treated as trust anchor as well. It allows verification with building valid partial chain instead
// of a full chain.
//
// Only one of ``trusted_ca`` and ``ca_certificate_provider_instance`` may be specified.
//
// [#next-major-version: This field and watched_directory below should ideally be moved into a
// separate sub-message, since there's no point in specifying the latter field without this one.]
// If ``ca_certificate_provider_instance`` is set, it takes precedence over ``trusted_ca``.
config.core.v3.DataSource trusted_ca = 1
[(udpa.annotations.field_migrate).oneof_promotion = "ca_cert_source"];
// Certificate provider instance for fetching TLS certificates.
//
// Only one of ``trusted_ca`` and ``ca_certificate_provider_instance`` may be specified.
// If set, takes precedence over ``trusted_ca``.
// [#not-implemented-hide:]
CertificateProviderPluginInstance ca_certificate_provider_instance = 13
[(udpa.annotations.field_migrate).oneof_promotion = "ca_cert_source"];
// Use system root certs for validation.
// If present, system root certs are used only if neither of the ``trusted_ca``
// or ``ca_certificate_provider_instance`` fields are set.
// [#not-implemented-hide:]
SystemRootCerts system_root_certs = 17;
// If specified, updates of a file-based ``trusted_ca`` source will be triggered
// by this watch. This allows explicit control over the path watched, by
// default the parent directory of the filesystem path in ``trusted_ca`` is

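The precedence among the three CA sources described above (``ca_certificate_provider_instance`` over ``trusted_ca``, with ``system_root_certs`` used only when neither is set) can be sketched as a small selection function. This is a hypothetical helper with stand-in field names, not Envoy's actual implementation:

```go
package main

import "fmt"

// caSources mirrors the three CA-source fields of CertificateValidationContext.
type caSources struct {
	trustedCA              string // file-based trusted_ca; empty if unset
	caCertProviderInstance string // ca_certificate_provider_instance name; empty if unset
	systemRootCerts        bool   // whether the system_root_certs message is present
}

// pickCASource applies the documented precedence:
// provider instance > trusted_ca > system root certs.
func pickCASource(s caSources) string {
	switch {
	case s.caCertProviderInstance != "":
		return "provider:" + s.caCertProviderInstance
	case s.trustedCA != "":
		return "file:" + s.trustedCA
	case s.systemRootCerts:
		return "system"
	}
	return "none"
}

func main() {
	// Provider instance wins over a configured trusted_ca.
	fmt.Println(pickCASource(caSources{trustedCA: "/etc/ca.pem", caCertProviderInstance: "spiffe"}))
	// System roots are used only when the other two are absent.
	fmt.Println(pickCASource(caSources{systemRootCerts: true}))
}
```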

@ -248,11 +248,8 @@ message CommonTlsContext {
// :ref:`Multiple TLS certificates <arch_overview_ssl_cert_select>` can be associated with the
// same context to allow both RSA and ECDSA certificates and support SNI-based selection.
//
// Only one of ``tls_certificates``, ``tls_certificate_sds_secret_configs``,
// and ``tls_certificate_provider_instance`` may be used.
// [#next-major-version: These mutually exclusive fields should ideally be in a oneof, but it's
// not legal to put a repeated field in a oneof. In the next major version, we should rework
// this to avoid this problem.]
// If ``tls_certificate_provider_instance`` is set, this field is ignored.
// If this field is set, ``tls_certificate_sds_secret_configs`` is ignored.
repeated TlsCertificate tls_certificates = 2;
// Configs for fetching TLS certificates via SDS API. Note SDS API allows certificates to be
@ -261,17 +258,14 @@ message CommonTlsContext {
// The same number and types of certificates as :ref:`tls_certificates <envoy_v3_api_field_extensions.transport_sockets.tls.v3.CommonTlsContext.tls_certificates>`
// are valid in the certificates fetched through this setting.
//
// Only one of ``tls_certificates``, ``tls_certificate_sds_secret_configs``,
// and ``tls_certificate_provider_instance`` may be used.
// [#next-major-version: These mutually exclusive fields should ideally be in a oneof, but it's
// not legal to put a repeated field in a oneof. In the next major version, we should rework
// this to avoid this problem.]
// If ``tls_certificates`` or ``tls_certificate_provider_instance`` are set, this field
// is ignored.
repeated SdsSecretConfig tls_certificate_sds_secret_configs = 6;
// Certificate provider instance for fetching TLS certs.
//
// Only one of ``tls_certificates``, ``tls_certificate_sds_secret_configs``,
// and ``tls_certificate_provider_instance`` may be used.
// If this field is set, ``tls_certificates`` and ``tls_certificate_provider_instance``
// are ignored.
// [#not-implemented-hide:]
CertificateProviderPluginInstance tls_certificate_provider_instance = 14;


@ -96,9 +96,10 @@ message VmConfig {
bool nack_on_code_cache_miss = 6;
// Specifies environment variables to be injected to this VM which will be available through
// WASI's ``environ_get`` and ``environ_get_sizes`` system calls. Note that these functions are mostly implicitly
// called in your language's standard library, so you do not need to call them directly and you can access to env
// vars just like when you do on native platforms.
// WASI's ``environ_get`` and ``environ_get_sizes`` system calls. Note that these functions
// are generally called implicitly by your language's standard library. Therefore, you do not
// need to call them directly. You can access environment variables in the same way you would
// on native platforms.
// Warning: Envoy rejects the configuration if there's conflict of key space.
EnvironmentVariables environment_variables = 7;
}


@ -339,7 +339,7 @@ message ImmediateResponse {
// The message body to return with the response which is sent using the
// text/plain content type, or encoded in the grpc-message header.
string body = 3;
bytes body = 3;
// If set, then include a gRPC status trailer.
GrpcStatus grpc_status = 4;


@ -255,9 +255,9 @@ pytest-kat-envoy3-tests-%: build-aux/pytest-kat.txt $(tools/py-split-tests)
$(MAKE) pytest-run-tests PYTEST_ARGS="$$PYTEST_ARGS -k '$$($(tools/py-split-tests) $(subst -of-, ,$*) <build-aux/pytest-kat.txt)' python/tests/kat"
pytest-kat-envoy3-%: python-integration-test-environment pytest-kat-envoy3-tests-%
$(OSS_HOME)/venv: python/requirements.txt python/requirements-dev.txt
$(OSS_HOME)/venv: $(OSS_HOME)/build-aux/py-version.txt python/requirements.txt python/requirements-dev.txt
rm -rf $@
python3 -m venv $@
python$$(sed -e 's/\~//' <$(OSS_HOME)/build-aux/py-version.txt) -m venv $@
$@/bin/pip3 install -r python/requirements.txt
$@/bin/pip3 install -r python/requirements-dev.txt
$@/bin/pip3 install -e $(OSS_HOME)/python


@ -14,8 +14,10 @@ vendor: FORCE
go mod vendor
clean: vendor.rm-r
# The egrep below is needed because MarkupSafe has a broken, unreadable,
# multiline license value.
$(OSS_HOME)/build-aux/pip-show.txt: docker/base-pip.docker.tag.local
docker run --rm "$$(cat docker/base-pip.docker)" sh -c 'pip freeze --exclude-editable | cut -d= -f1 | xargs pip show' > $@
docker run --rm "$$(cat docker/base-pip.docker)" sh -c "pip freeze --exclude-editable | cut -d= -f1 | xargs pip show | egrep '^([A-Za-z-]+: |---)'" > $@
clean: build-aux/pip-show.txt.rm
$(OSS_HOME)/build-aux/go-version.txt: $(_go-version/deps)


@ -162,14 +162,14 @@ $(OSS_HOME)/_generate.tmp/crds: $(tools/controller-gen) build-aux/copyright-boil
crd \
paths=./pkg/api/getambassador.io/... \
output:crd:dir=./_generate.tmp/crds
$(OSS_HOME)/%/zz_generated.conversion.go: $(tools/conversion-gen) build-aux/copyright-boilerplate.go.txt FORCE
rm -f $@ $(@D)/*.scaffold.go
GOPATH= GOFLAGS=-mod=mod $(tools/conversion-gen) \
--skip-unsafe \
--go-header-file=build-aux/copyright-boilerplate.go.txt \
--input-dirs=./$* \
--output-file-base=zz_generated.conversion
--output-file=zz_generated.conversion.go \
./$*
# Because v1 just aliases v2, conversion-gen will need to be able to see the v2 conversion functions
# when generating code for v1.
$(OSS_HOME)/pkg/api/getambassador.io/v1/zz_generated.conversion.go: $(OSS_HOME)/pkg/api/getambassador.io/v2/zz_generated.conversion.go


@ -65,7 +65,7 @@ tools/protoc-gen-go = $(tools.bindir)/protoc-gen-go
tools/protoc-gen-go-grpc = $(tools.bindir)/protoc-gen-go-grpc
tools/yq = $(tools.bindir)/yq
$(tools.bindir)/%: $(tools.srcdir)/%/pin.go $(tools.srcdir)/%/go.mod
cd $(<D) && GOOS= GOARCH= go build -o $(abspath $@) $$(sed -En 's,^import "(.*)".*,\1,p' pin.go)
cd $(<D) && GOOS= GOARCH= go build -o $(abspath $@) $$(sed -En 's,^import _ "(.*)".*,\1,p' pin.go)
# Let these use the main Emissary go.mod instead of having their own go.mod.
tools.main-gomod += $(tools/controller-gen) # ensure runtime libraries are consistent
tools.main-gomod += $(tools/conversion-gen) # ensure runtime libraries are consistent
@ -74,7 +74,7 @@ tools.main-gomod += $(tools/protoc-gen-go-grpc) # ensure runtime libraries are c
tools.main-gomod += $(tools/go-mkopensource) # ensure it is consistent with py-mkopensource
tools.main-gomod += $(tools/kubestatus) # is actually part of Emissary
$(tools.main-gomod): $(tools.bindir)/%: $(tools.srcdir)/%/pin.go $(OSS_HOME)/go.mod
cd $(<D) && GOOS= GOARCH= go build -o $(abspath $@) $$(sed -En 's,^import "(.*)".*,\1,p' pin.go)
cd $(<D) && GOOS= GOARCH= go build -o $(abspath $@) $$(sed -En 's,^import _ "(.*)".*,\1,p' pin.go)
# Local Go sources
# ================
@ -156,7 +156,7 @@ $(tools/ct): $(tools.bindir)/%: $(tools.srcdir)/%/wrap.sh $(tools/ct).d/bin/ct $
$(tools/ct).d/bin/ct: $(tools.srcdir)/ct/pin.go $(tools.srcdir)/ct/go.mod
@PS4=; set -ex; {\
cd $(<D); \
pkg=$$(sed -En 's,^import "(.*)".*,\1,p' pin.go); \
pkg=$$(sed -En 's,^import _ "(.*)".*,\1,p' pin.go); \
ver=$$(go list -f='{{ .Module.Version }}' "$$pkg"); \
GOOS= GOARCH= go build -o $(abspath $@) -ldflags="-X $$pkg/cmd.Version=$$ver" "$$pkg"; \
}
@ -165,7 +165,7 @@ $(tools/ct).d/bin/kubectl: $(tools/kubectl)
ln -s ../../kubectl $@
$(tools/ct).d/dir.txt: $(tools.srcdir)/ct/pin.go $(tools.srcdir)/ct/go.mod
mkdir -p $(@D)
cd $(<D) && GOFLAGS='-mod=readonly' go list -f='{{ .Module.Dir }}' "$$(sed -En 's,^import "(.*)".*,\1,p' pin.go)" >$(abspath $@)
cd $(<D) && GOFLAGS='-mod=readonly' go list -f='{{ .Module.Dir }}' "$$(sed -En 's,^import _ "(.*)".*,\1,p' pin.go)" >$(abspath $@)
$(tools/ct).d/venv: %/venv: %/dir.txt
rm -rf $@
python3 -m venv $@

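The repeated sed change above (``^import "(.*)"`` becoming ``^import _ "(.*)"``) matters because the pin.go files declare their pinned tool with a blank import, which the old pattern never matched. A quick illustration using Go regexps standing in for the sed -E expressions:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	oldPat = regexp.MustCompile(`^import "(.*)"`)   // pre-change pattern
	newPat = regexp.MustCompile(`^import _ "(.*)"`) // post-change pattern
)

// pinnedPackage extracts the pinned tool's import path from a pin.go line,
// the way the updated sed expression does.
func pinnedPackage(line string) (string, bool) {
	if m := newPat.FindStringSubmatch(line); m != nil {
		return m[1], true
	}
	return "", false
}

func main() {
	line := `import _ "sigs.k8s.io/controller-tools/cmd/controller-gen"`
	// The old pattern expects a quote right after "import ", so a blank
	// import ("import _ ...") never matches and the build command got
	// an empty package argument.
	fmt.Println(oldPat.MatchString(line)) // false
	if pkg, ok := pinnedPackage(line); ok {
		fmt.Println(pkg)
	}
}
```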

@ -20,13 +20,33 @@ func makeEndpoints(ctx context.Context, ksnap *snapshot.KubernetesSnapshot, cons
result := map[string][]*ambex.Endpoint{}
for _, k8sEp := range ksnap.Endpoints {
svc, ok := k8sServices[key(k8sEp)]
if !ok {
continue
svcEndpointSlices := map[string][]*kates.EndpointSlice{}
// Collect all the EndpointSlices for each service if the "kubernetes.io/service-name" label is present
for _, k8sEndpointSlice := range ksnap.EndpointSlices {
if serviceName, labelExists := k8sEndpointSlice.Labels["kubernetes.io/service-name"]; labelExists {
svcKey := fmt.Sprintf("%s:%s", k8sEndpointSlice.Namespace, serviceName)
svcEndpointSlices[svcKey] = append(svcEndpointSlices[svcKey], k8sEndpointSlice)
}
for _, ep := range k8sEndpointsToAmbex(k8sEp, svc) {
result[ep.ClusterName] = append(result[ep.ClusterName], ep)
}
// Map each service to its corresponding endpoints from all its EndpointSlices, or fall back to Endpoints if needed
for svcKey, svc := range k8sServices {
if slices, ok := svcEndpointSlices[svcKey]; ok && len(slices) > 0 {
for _, slice := range slices {
for _, ep := range k8sEndpointSlicesToAmbex(slice, svc) {
result[ep.ClusterName] = append(result[ep.ClusterName], ep)
}
}
} else {
// Fall back to using Endpoints if no valid EndpointSlices are available
for _, k8sEp := range ksnap.Endpoints {
if key(k8sEp) == svcKey {
for _, ep := range k8sEndpointsToAmbex(k8sEp, svc) {
result[ep.ClusterName] = append(result[ep.ClusterName], ep)
}
}
}
}
}
@ -97,6 +117,62 @@ func k8sEndpointsToAmbex(ep *kates.Endpoints, svc *kates.Service) (result []*amb
return
}
func k8sEndpointSlicesToAmbex(endpointSlice *kates.EndpointSlice, svc *kates.Service) (result []*ambex.Endpoint) {
portmap := map[string][]string{}
for _, p := range svc.Spec.Ports {
port := fmt.Sprintf("%d", p.Port)
targetPort := p.TargetPort.String()
if targetPort == "" {
targetPort = fmt.Sprintf("%d", p.Port)
}
portmap[targetPort] = append(portmap[targetPort], port)
if p.Name != "" {
portmap[targetPort] = append(portmap[targetPort], p.Name)
portmap[p.Name] = append(portmap[p.Name], port)
}
if len(svc.Spec.Ports) == 1 {
portmap[targetPort] = append(portmap[targetPort], "")
portmap[""] = append(portmap[""], port)
portmap[""] = append(portmap[""], "")
}
}
for _, endpoint := range endpointSlice.Endpoints {
for _, port := range endpointSlice.Ports {
if *port.Protocol == kates.ProtocolTCP || *port.Protocol == kates.ProtocolUDP {
portNames := map[string]bool{}
candidates := []string{fmt.Sprintf("%d", *port.Port), *port.Name, ""}
for _, c := range candidates {
if pns, ok := portmap[c]; ok {
for _, pn := range pns {
portNames[pn] = true
}
}
}
if *endpoint.Conditions.Ready {
for _, address := range endpoint.Addresses {
for pn := range portNames {
sep := "/"
if pn == "" {
sep = ""
}
result = append(result, &ambex.Endpoint{
ClusterName: fmt.Sprintf("k8s/%s/%s%s%s", svc.Namespace, svc.Name, sep, pn),
Ip: address,
Port: uint32(*port.Port),
Protocol: string(*port.Protocol),
})
}
}
}
}
}
}
return
}
func consulEndpointsToAmbex(ctx context.Context, endpoints consulwatch.Endpoints) (result []*ambex.Endpoint) {
for _, ep := range endpoints.Endpoints {
addrs, err := net.LookupHost(ep.Address)

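The grouping step above can be distilled to a few lines: EndpointSlices are bucketed under a ``namespace:service`` key via the ``kubernetes.io/service-name`` label, and unlabeled slices are skipped so the Endpoints fallback applies. A simplified sketch with a stand-in type (the real code works on ``kates.EndpointSlice`` from the snapshot):

```go
package main

import "fmt"

// slice is a stand-in for the fields of kates.EndpointSlice used here.
type slice struct {
	Namespace string
	Name      string
	Labels    map[string]string
}

// groupByService buckets EndpointSlices under "namespace:service" keys,
// using the kubernetes.io/service-name label; unlabeled slices are skipped.
func groupByService(slices []slice) map[string][]slice {
	out := map[string][]slice{}
	for _, s := range slices {
		if svc, ok := s.Labels["kubernetes.io/service-name"]; ok {
			key := fmt.Sprintf("%s:%s", s.Namespace, svc)
			out[key] = append(out[key], s)
		}
	}
	return out
}

func main() {
	grouped := groupByService([]slice{
		{Namespace: "default", Name: "foo-abc", Labels: map[string]string{"kubernetes.io/service-name": "foo"}},
		{Namespace: "default", Name: "foo-def", Labels: map[string]string{"kubernetes.io/service-name": "foo"}},
		{Namespace: "default", Name: "orphan"}, // no label: ignored, Endpoints fallback applies
	})
	fmt.Println(len(grouped["default:foo"])) // 2
}
```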

@ -20,6 +20,29 @@ import (
)
func TestEndpointRouting(t *testing.T) {
f := entrypoint.RunFake(t, entrypoint.FakeConfig{EnvoyConfig: true}, nil)
// Create Mapping, Service, and Endpoints resources to start.
assert.NoError(t, f.Upsert(makeMapping("default", "foo", "/foo", "foo", "endpoint")))
assert.NoError(t, f.Upsert(makeService("default", "foo")))
endpoint, port, err := makeSliceEndpoint(8080, "1.2.3.4")
require.NoError(t, err)
assert.NoError(t, f.Upsert(makeEndpointSlice("default", "foo", "foo", endpoint, port)))
f.Flush()
snap, err := f.GetSnapshot(HasMapping("default", "foo"))
require.NoError(t, err)
assert.NotNil(t, snap)
// Check that the endpoints resource we created at the start was properly propagated.
endpoints, err := f.GetEndpoints(HasEndpoints("k8s/default/foo"))
require.NoError(t, err)
assert.Equal(t, "1.2.3.4", endpoints.Entries["k8s/default/foo"][0].Ip)
assert.Equal(t, uint32(8080), endpoints.Entries["k8s/default/foo"][0].Port)
assert.Contains(t, endpoints.Entries, "k8s/default/foo/80")
assert.Equal(t, "1.2.3.4", endpoints.Entries["k8s/default/foo/80"][0].Ip)
assert.Equal(t, uint32(8080), endpoints.Entries["k8s/default/foo/80"][0].Port)
}
func TestEndpointRoutingWithNoEndpointSlices(t *testing.T) {
f := entrypoint.RunFake(t, entrypoint.FakeConfig{EnvoyConfig: true}, nil)
// Create Mapping, Service, and Endpoints resources to start.
assert.NoError(t, f.Upsert(makeMapping("default", "foo", "/foo", "foo", "endpoint")))
@ -57,9 +80,9 @@ service: foo
resolver: endpoint`,
}
assert.NoError(t, f.Upsert(svc))
subset, err := makeSubset(8080, "1.2.3.4")
endpoint, port, err := makeSliceEndpoint(8080, "1.2.3.4")
require.NoError(t, err)
assert.NoError(t, f.Upsert(makeEndpoints("default", "foo", subset)))
assert.NoError(t, f.Upsert(makeEndpointSlice("default", "foo", "foo", endpoint, port)))
f.Flush()
snap, err := f.GetSnapshot(HasService("default", "foo"))
require.NoError(t, err)
@ -97,9 +120,9 @@ func TestEndpointRoutingMultiplePorts(t *testing.T) {
},
},
}))
subset, err := makeSubset("cleartext", 8080, "encrypted", 8443, "1.2.3.4")
endpoint, port, err := makeSliceEndpoint("cleartext", 8080, "encrypted", 8443, "1.2.3.4")
require.NoError(t, err)
assert.NoError(t, f.Upsert(makeEndpoints("default", "foo", subset)))
assert.NoError(t, f.Upsert(makeEndpointSlice("default", "foo", "foo", endpoint, port)))
f.Flush()
snap, err := f.GetSnapshot(HasMapping("default", "foo"))
require.NoError(t, err)
@ -155,9 +178,9 @@ func TestEndpointRoutingIP(t *testing.T) {
func TestEndpointRoutingMappingCreation(t *testing.T) {
f := entrypoint.RunFake(t, entrypoint.FakeConfig{}, nil)
assert.NoError(t, f.Upsert(makeService("default", "foo")))
subset, err := makeSubset(8080, "1.2.3.4")
endpoint, port, err := makeSliceEndpoint(8080, "1.2.3.4")
require.NoError(t, err)
assert.NoError(t, f.Upsert(makeEndpoints("default", "foo", subset)))
assert.NoError(t, f.Upsert(makeEndpointSlice("default", "foo", "foo", endpoint, port)))
f.Flush()
f.AssertEndpointsEmpty(timeout)
assert.NoError(t, f.UpsertYAML(`
@ -275,3 +298,58 @@ func makeSubset(args ...interface{}) (kates.EndpointSubset, error) {
return kates.EndpointSubset{Addresses: addrs, Ports: ports}, nil
}
func makeEndpointSlice(namespace, name, serviceName string, endpoint []kates.Endpoint, port []kates.EndpointSlicePort) *kates.EndpointSlice {
return &kates.EndpointSlice{
TypeMeta: kates.TypeMeta{Kind: "EndpointSlice", APIVersion: "discovery.k8s.io/v1"},
ObjectMeta: kates.ObjectMeta{
Namespace: namespace,
Name: name,
Labels: map[string]string{
"kubernetes.io/service-name": serviceName,
},
},
Endpoints: endpoint,
Ports: port,
}
}
func makeSliceEndpoint(args ...interface{}) ([]kates.Endpoint, []kates.EndpointSlicePort, error) {
var endpoints []kates.Endpoint
var ports []kates.EndpointSlicePort
var currentPortName string
for _, arg := range args {
switch v := arg.(type) {
case int:
portName := currentPortName
ports = append(ports, kates.EndpointSlicePort{Name: &portName, Port: int32Ptr(int32(v)), Protocol: protocolPtr(kates.ProtocolTCP)})
case string:
IP := net.ParseIP(v)
if IP != nil {
endpoints = append(endpoints, kates.Endpoint{
Addresses: []string{v},
Conditions: kates.EndpointConditions{
Ready: &[]bool{true}[0],
Serving: &[]bool{true}[0],
Terminating: &[]bool{false}[0],
},
})
} else {
currentPortName = v // Assume it's a port name if not an IP address
}
default:
return nil, nil, fmt.Errorf("unrecognized type: %T", v)
}
}
return endpoints, ports, nil
}
func int32Ptr(i int32) *int32 {
return &i
}
func protocolPtr(p kates.Protocol) *kates.Protocol {
return &p
}

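The variadic convention used by ``makeSliceEndpoint`` above is worth spelling out: an int adds a port (named by the most recent non-IP string), an IP-formatted string adds an endpoint address, and any other string sets the current port name. A condensed restatement with simplified return types (a sketch for illustration, not the test helper itself):

```go
package main

import (
	"fmt"
	"net"
)

// parseSliceArgs mirrors the variadic convention of makeSliceEndpoint:
// ints become ports (keyed by the most recent port-name string), strings
// that parse as IPs become addresses, and other strings set the port name.
func parseSliceArgs(args ...interface{}) (addrs []string, ports map[string]int32, err error) {
	ports = map[string]int32{}
	var currentName string
	for _, arg := range args {
		switch v := arg.(type) {
		case int:
			ports[currentName] = int32(v)
		case string:
			if net.ParseIP(v) != nil {
				addrs = append(addrs, v)
			} else {
				currentName = v // assume a port name if it is not an IP address
			}
		default:
			return nil, nil, fmt.Errorf("unrecognized type: %T", v)
		}
	}
	return addrs, ports, nil
}

func main() {
	addrs, ports, _ := parseSliceArgs("cleartext", 8080, "encrypted", 8443, "1.2.3.4")
	fmt.Println(addrs, ports["cleartext"], ports["encrypted"]) // [1.2.3.4] 8080 8443
}
```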

@ -22,6 +22,7 @@ type endpointRoutingInfo struct {
module moduleResolver
endpointWatches map[string]bool // A set to track the subset of kubernetes endpoints we care about.
previousWatches map[string]bool
endpointSlices []*kates.EndpointSlice
}
type ResolverType int
@ -47,7 +48,7 @@ func (rt ResolverType) String() string {
// newEndpointRoutingInfo creates a shiny new struct to hold information about
// resolvers in use and such.
func newEndpointRoutingInfo() endpointRoutingInfo {
func newEndpointRoutingInfo(endpointSlices []*kates.EndpointSlice) endpointRoutingInfo {
return endpointRoutingInfo{
// resolverTypes keeps track of the type of every resolver in the system.
// It starts out empty.
@ -59,6 +60,7 @@ func newEndpointRoutingInfo() endpointRoutingInfo {
resolverTypes: make(map[string]ResolverType),
// Track which endpoints we actually want to watch.
endpointWatches: make(map[string]bool),
endpointSlices: endpointSlices,
}
}
@ -71,6 +73,7 @@ func (eri *endpointRoutingInfo) reconcileEndpointWatches(ctx context.Context, s
eri.module = moduleResolver{}
eri.previousWatches = eri.endpointWatches
eri.endpointWatches = map[string]bool{}
eri.endpointSlices = s.EndpointSlices
// Phase one processes all the configuration stuff that Mappings depend on. Right now this
// includes Modules and Resolvers. When we are done with Phase one we have processed enough
@ -228,7 +231,7 @@ func (eri *endpointRoutingInfo) checkMapping(ctx context.Context, mapping *amb.M
if eri.resolverTypes[resolver] == KubernetesEndpointResolver {
svc, ns, _ := eri.module.parseService(ctx, mapping, service, mapping.GetNamespace())
eri.endpointWatches[fmt.Sprintf("%s:%s", ns, svc)] = true
eri.mapEndpointWatches(ns, svc)
}
}
@ -247,7 +250,25 @@ func (eri *endpointRoutingInfo) checkTCPMapping(ctx context.Context, tcpmapping
if eri.resolverTypes[resolver] == KubernetesEndpointResolver {
svc, ns, _ := eri.module.parseService(ctx, tcpmapping, service, tcpmapping.GetNamespace())
eri.endpointWatches[fmt.Sprintf("%s:%s", ns, svc)] = true
eri.mapEndpointWatches(ns, svc)
}
}
// mapEndpointWatches figures out which service discovery objects are available for a given service.
func (eri *endpointRoutingInfo) mapEndpointWatches(namespace string, serviceName string) {
foundEndpointSlice := false
for _, endpointSlice := range eri.endpointSlices {
// Check if this EndpointSlice matches the target service and namespace, and has the required label
if endpointSlice.Namespace == namespace {
if service, ok := endpointSlice.Labels["kubernetes.io/service-name"]; ok && service == serviceName {
eri.endpointWatches[fmt.Sprintf("%s:%s", namespace, endpointSlice.Name)] = true
foundEndpointSlice = true
}
}
}
if !foundEndpointSlice {
// Fall back to watching Endpoints if no matching EndpointSlice exists
eri.endpointWatches[fmt.Sprintf("%s:%s", namespace, serviceName)] = true
}
}

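The watch-reconciliation logic in ``mapEndpointWatches`` above boils down to: watch every EndpointSlice labeled for the service, and only fall back to watching the service's Endpoints object when no labeled slice exists. A simplified sketch with a stand-in type:

```go
package main

import "fmt"

// sliceInfo is a stand-in for the EndpointSlice fields consulted here.
type sliceInfo struct {
	Namespace, Name string
	Labels          map[string]string
}

// watchKeys returns the "namespace:name" watch set for a service: one key
// per matching labeled EndpointSlice, or the service's own Endpoints key
// as a fallback when no slice matches.
func watchKeys(ns, svc string, slices []sliceInfo) map[string]bool {
	watches := map[string]bool{}
	found := false
	for _, s := range slices {
		if s.Namespace == ns && s.Labels["kubernetes.io/service-name"] == svc {
			watches[fmt.Sprintf("%s:%s", ns, s.Name)] = true
			found = true
		}
	}
	if !found {
		watches[fmt.Sprintf("%s:%s", ns, svc)] = true // Endpoints fallback
	}
	return watches
}

func main() {
	w := watchKeys("default", "foo", []sliceInfo{
		{Namespace: "default", Name: "foo-abc", Labels: map[string]string{"kubernetes.io/service-name": "foo"}},
	})
	fmt.Println(w["default:foo-abc"], w["default:foo"]) // true false
}
```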

@ -80,10 +80,11 @@ func GetInterestingTypes(ctx context.Context, serverTypeList []kates.APIResource
//
// Note that we pull `secrets.v1.` in to "K8sSecrets". ReconcileSecrets will pull
// over the ones we need into "Secrets" and "Endpoints" respectively.
"Services": {{typename: "services.v1."}}, // New in Kubernetes 0.16.0 (2015-04-28) (v1beta{1..3} before that)
"Endpoints": {{typename: "endpoints.v1.", fieldselector: endpointFs}}, // New in Kubernetes 0.16.0 (2015-04-28) (v1beta{1..3} before that)
"K8sSecrets": {{typename: "secrets.v1."}}, // New in Kubernetes 0.16.0 (2015-04-28) (v1beta{1..3} before that)
"ConfigMaps": {{typename: "configmaps.v1.", fieldselector: configMapFs}},
"Services": {{typename: "services.v1."}}, // New in Kubernetes 0.16.0 (2015-04-28) (v1beta{1..3} before that)
"Endpoints": {{typename: "endpoints.v1.", fieldselector: endpointFs}}, // New in Kubernetes 0.16.0 (2015-04-28) (v1beta{1..3} before that)
"EndpointSlices": {{typename: "endpointslices.v1.discovery.k8s.io", fieldselector: endpointFs}},
"K8sSecrets": {{typename: "secrets.v1."}}, // New in Kubernetes 0.16.0 (2015-04-28) (v1beta{1..3} before that)
"ConfigMaps": {{typename: "configmaps.v1.", fieldselector: configMapFs}},
"Ingresses": {
{typename: "ingresses.v1beta1.extensions"}, // New in Kubernetes 1.2.0 (2016-03-16), gone in Kubernetes 1.22.0 (2021-08-04)
{typename: "ingresses.v1beta1.networking.k8s.io"}, // New in Kubernetes 1.14.0 (2019-03-25), gone in Kubernetes 1.22.0 (2021-08-04)


@ -1,46 +1,48 @@
---
# All the IP addresses, pod names, etc., are basically made up. These
# aren't meant to be functional, just to exercise the machinery of
# aren't meant to be functional, just to exercise the machinery of
# filtering things in the watcher.
apiVersion: v1
kind: Endpoints
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: random-1
subsets:
endpoints:
- addresses:
- ip: 10.42.0.55
nodeName: flynn-2a
targetRef:
kind: Pod
name: random-6db467b4d7-zzzz1
- ip: 10.42.0.56
nodeName: flynn-2b
targetRef:
kind: Pod
name: random-6db467b4d7-zzzz1
ports:
- port: 5000
protocol: TCP
- "10.42.0.55"
nodeName: flynn-2a
targetRef:
kind: Pod
name: random-6db467b4d7-zzzz1
- addresses:
- "10.42.0.56"
nodeName: flynn-2b
targetRef:
kind: Pod
name: random-6db467b4d7-zzzz1
ports:
- port: 5000
protocol: TCP
---
# All the IP addresses, pod names, etc., are basically made up. These
# aren't meant to be functional, just to exercise the machinery of
# aren't meant to be functional, just to exercise the machinery of
# filtering things in the watcher.
apiVersion: v1
kind: Endpoints
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: random-2
subsets:
endpoints:
- addresses:
- ip: 10.42.0.65
nodeName: flynn-2a
targetRef:
kind: Pod
name: rande-6db467b4d7-zzzz2
- ip: 10.42.0.66
nodeName: flynn-2b
targetRef:
kind: Pod
name: rande-6db467b4d7-zzzz2
ports:
- port: 5000
protocol: TCP
- "10.42.0.65"
nodeName: flynn-2a
targetRef:
kind: Pod
name: rande-6db467b4d7-zzzz2
- addresses:
- "10.42.0.66"
nodeName: flynn-2b
targetRef:
kind: Pod
name: rande-6db467b4d7-zzzz2
ports:
- port: 5000
protocol: TCP
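For context, a standalone EndpointSlice additionally requires an `addressType` field and is normally tied to its Service by the `kubernetes.io/service-name` label; the test fixtures above omit these because they only exercise the watcher machinery. A minimal sketch, with hypothetical names:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-abc12
  labels:
    kubernetes.io/service-name: example   # links the slice to Service "example"
addressType: IPv4
endpoints:
- addresses:
  - "10.42.0.55"
  nodeName: flynn-2a
ports:
- name: http
  port: 5000
  protocol: TCP
```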


@ -11,24 +11,25 @@ spec:
     targetPort: http-api
 ---
 # All the IP addresses, pod names, etc., are basically made up. These
 # aren't meant to be functional, just to exercise the machinery of
 # filtering things in the watcher.
-apiVersion: v1
-kind: Endpoints
+apiVersion: discovery.k8s.io/v1
+kind: EndpointSlice
 metadata:
   name: hello
-subsets:
+endpoints:
 - addresses:
-  - ip: 10.42.0.15
-    nodeName: flynn-2a
-    targetRef:
-      kind: Pod
-      name: hello-6db467b4d7-n45n7
-  - ip: 10.42.0.16
-    nodeName: flynn-2b
-    targetRef:
-      kind: Pod
-      name: hello-6db467b4d7-n45n7
-  ports:
-  - port: 5000
-    protocol: TCP
+  - "10.42.0.15"
+  nodeName: flynn-2a
+  targetRef:
+    kind: Pod
+    name: hello-6db467b4d7-n45n7
+- addresses:
+  - "10.42.0.16"
+  nodeName: flynn-2b
+  targetRef:
+    kind: Pod
+    name: hello-6db467b4d7-n45n7
+ports:
+- port: 5000
+  protocol: TCP


@ -11,24 +11,25 @@ spec:
     targetPort: http-api
 ---
 # All the IP addresses, pod names, etc., are basically made up. These
 # aren't meant to be functional, just to exercise the machinery of
 # filtering things in the watcher.
-apiVersion: v1
-kind: Endpoints
+apiVersion: discovery.k8s.io/v1
+kind: EndpointSlice
 metadata:
   name: qotm
-subsets:
+endpoints:
 - addresses:
-  - ip: 10.42.0.15
-    nodeName: flynn-2a
-    targetRef:
-      kind: Pod
-      name: qotm-6db467b4d7-n45n7
-  - ip: 10.42.0.16
-    nodeName: flynn-2b
-    targetRef:
-      kind: Pod
-      name: qotm-6db467b4d7-n45n7
-  ports:
-  - port: 5000
-    protocol: TCP
+  - "10.42.0.15"
+  nodeName: flynn-2a
+  targetRef:
+    kind: Pod
+    name: qotm-6db467b4d7-n45n7
+- addresses:
+  - "10.42.0.16"
+  nodeName: flynn-2b
+  targetRef:
+    kind: Pod
+    name: qotm-6db467b4d7-n45n7
+ports:
+- port: 5000
+  protocol: TCP


@ -211,6 +211,8 @@ func canonGVK(rawString string) (canonKind string, canonGroupVersion string, err
 		return "Service", "v1", nil
 	case "endpoints":
 		return "Endpoints", "v1", nil
+	case "endpointslices":
+		return "EndpointSlices", "v1.discovery.k8s.io", nil
 	case "secret", "secrets":
 		return "Secret", "v1", nil
 	case "configmap", "configmaps":
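The hunk above adds EndpointSlices to the kind-canonicalization switch. As a self-contained, illustrative sketch (not emissary's actual code), the mapping from a lowercase resource name to its canonical Kind plus group/version looks like this:

```go
package main

import (
	"fmt"
	"strings"
)

// canonKind maps a lowercase resource name to its canonical Kind and
// group/version. Note that EndpointSlices live in the discovery.k8s.io
// group, unlike the legacy core-group Endpoints.
func canonKind(rawString string) (string, string, error) {
	switch strings.ToLower(rawString) {
	case "service", "services":
		return "Service", "v1", nil
	case "endpoints":
		return "Endpoints", "v1", nil
	case "endpointslice", "endpointslices":
		return "EndpointSlices", "v1.discovery.k8s.io", nil
	case "secret", "secrets":
		return "Secret", "v1", nil
	case "configmap", "configmaps":
		return "ConfigMap", "v1", nil
	}
	return "", "", fmt.Errorf("unrecognized resource %q", rawString)
}

func main() {
	kind, gv, _ := canonKind("EndpointSlices")
	fmt.Println(kind, gv) // EndpointSlices v1.discovery.k8s.io
}
```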


@ -361,12 +361,13 @@ func NewSnapshotHolder(ambassadorMeta *snapshot.AmbassadorMetaInfo) (*SnapshotHo
 	if err != nil {
 		return nil, err
 	}
+	k8sSnapshot := NewKubernetesSnapshot()
 	return &SnapshotHolder{
 		validator: validator,
 		ambassadorMeta: ambassadorMeta,
-		k8sSnapshot: NewKubernetesSnapshot(),
+		k8sSnapshot: k8sSnapshot,
 		consulSnapshot: &snapshot.ConsulSnapshot{},
-		endpointRoutingInfo: newEndpointRoutingInfo(),
+		endpointRoutingInfo: newEndpointRoutingInfo(k8sSnapshot.EndpointSlices),
 		dispatcher: disp,
 		firstReconfig: true,
 	}, nil
@ -491,7 +492,7 @@ func (sh *SnapshotHolder) K8sUpdate(
 	for _, delta := range deltas {
 		sh.unsentDeltas = append(sh.unsentDeltas, delta)
-		if delta.Kind == "Endpoints" {
+		if delta.Kind == "EndpointSlice" || delta.Kind == "Endpoints" {
 			key := fmt.Sprintf("%s:%s", delta.Namespace, delta.Name)
 			if sh.endpointRoutingInfo.endpointWatches[key] || sh.dispatcher.IsWatched(delta.Namespace, delta.Name) {
 				endpointsChanged = true
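The change above widens the delta check so that either kind flags an endpoint change. A minimal runnable sketch of that condition, using a hypothetical stand-in for the snapshot delta type:

```go
package main

import "fmt"

// Delta is a hypothetical stand-in for the snapshot delta type in the
// hunk above; only the fields the check needs are included.
type Delta struct {
	Kind, Namespace, Name string
}

// anyEndpointChange reports whether any delta for the legacy Endpoints
// kind or the newer EndpointSlice kind touches a watched namespace:name.
func anyEndpointChange(deltas []Delta, watches map[string]bool) bool {
	for _, delta := range deltas {
		if delta.Kind == "EndpointSlice" || delta.Kind == "Endpoints" {
			key := fmt.Sprintf("%s:%s", delta.Namespace, delta.Name)
			if watches[key] {
				return true
			}
		}
	}
	return false
}

func main() {
	watches := map[string]bool{"default:hello": true}
	fmt.Println(anyEndpointChange([]Delta{{"EndpointSlice", "default", "hello"}}, watches)) // true
	fmt.Println(anyEndpointChange([]Delta{{"Secret", "default", "hello"}}, watches))        // false
}
```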


@ -76,7 +76,7 @@ RUN apk --no-cache add \
 # 'python3' versions above.
 RUN pip3 install "Cython<3.0" pip-tools==7.3
-RUN curl --fail -L https://dl.google.com/go/go1.22.4.linux-amd64.tar.gz | tar -C /usr/local -xzf -
+RUN curl --fail -L https://dl.google.com/go/go1.23.3.linux-amd64.tar.gz | tar -C /usr/local -xzf -
 # The YAML parser is... special. To get the C version, we need to install Cython and libyaml, then
 # build it locally -- just using pip won't work.


@ -15,13 +15,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License
-FROM ubuntu:23.10
+FROM ubuntu:24.04
-LABEL PROJECT_REPO_URL = "git@github.com:datawire/ambassador.git" \
-      PROJECT_REPO_BROWSER_URL = "https://github.com/datawire/ambassador" \
-      DESCRIPTION = "Ambassador REST Service" \
-      VENDOR = "Datawire" \
-      VENDOR_URL = "https://datawire.io/"
+LABEL PROJECT_REPO_URL="git@github.com:datawire/ambassador.git" \
+      PROJECT_REPO_BROWSER_URL="https://github.com/datawire/ambassador" \
+      DESCRIPTION="Ambassador REST Service" \
+      VENDOR="Datawire" \
+      VENDOR_URL="https://datawire.io/"
 # This Dockerfile is set up to install all the application-specific stuff into
 # /application.
@ -56,4 +56,5 @@ COPY auth.py authsvc.crt authsvc.key entrypoint.sh ./
 # perform any final configuration steps.
 ARG TLS=""
 ENV TLS=${TLS}
+SHELL [ "/bin/bash", "-c" ]
 ENTRYPOINT ./entrypoint.sh ${TLS}


@ -15,13 +15,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License
-FROM ubuntu:23.10
+FROM ubuntu:24.04
-LABEL PROJECT_REPO_URL = "git@github.com:datawire/ambassador.git" \
-      PROJECT_REPO_BROWSER_URL = "https://github.com/datawire/ambassador" \
-      DESCRIPTION = "Ambassador REST Service" \
-      VENDOR = "Datawire" \
-      VENDOR_URL = "https://datawire.io/"
+LABEL PROJECT_REPO_URL="git@github.com:datawire/ambassador.git" \
+      PROJECT_REPO_BROWSER_URL="https://github.com/datawire/ambassador" \
+      DESCRIPTION="Ambassador REST Service" \
+      VENDOR="Datawire" \
+      VENDOR_URL="https://datawire.io/"
 # This Dockerfile is set up to install all the application-specific stuff into
 # /application.
@ -58,4 +58,5 @@ ARG VERSION="0.0.2"
 ENV VERSION=${VERSION}
 ARG TLS=""
 ENV TLS=${TLS}
+SHELL [ "/bin/bash", "-c" ]
 ENTRYPOINT ./entrypoint.sh ${VERSION} ${TLS}


@ -32,9 +32,9 @@
 changelog: https://github.com/emissary-ingress/emissary/blob/$branch$/CHANGELOG.md
 items:
-  - version: 3.10.0-dev
+  - version: 3.10.0
     prevVersion: 3.9.0
-    date: 'TBD'
+    date: "2025-07-29"
     notes:
       - title: Upgrade to Envoy 1.30.2
         type: feature
@ -68,9 +68,36 @@ items:
         - title: "Incorrect Cache Key for Mapping"
           link: https://github.com/emissary-ingress/emissary/issues/5714
+      - title: Add support for EndpointSlices to the Endpoints resolver
+        type: feature
+        body: >-
+          $productName$ now supports resolving Endpoints from EndpointSlices
+          in addition to the existing support for Endpoints, supporting Services
+          with more than 1000 endpoints.
+      - title: Pass client TLS information to external auth
+        type: feature
+        body: >-
+          $productName$ now passes the client TLS certificate and SNI, if any,
+          to the external auth service. These are available in the
+          `source.certificate` and `tls_session.sni` fields, as described in
+          the <a
+          href="https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/auth/v3/attribute_context.proto">
+          Envoy extauth documentation</a>.
+      - title: Update `ambex` to use `xxhash64` instead of `md5`
+        type: change
+        body: >-
+          The `ambex` component of $productName$ now uses `xxhash64` instead
+          of `md5`, since `md5` can cause problems in crypto-restricted
+          environments (e.g. FIPS)
+        github:
+          - title: "Remove usage of md5"
+            link: https://github.com/emissary-ingress/emissary/pull/5794
   - version: 3.9.0
     prevVersion: 3.8.0
-    date: '2023-11-13'
+    date: "2023-11-13"
notes:
- title: Upgrade to Envoy 1.27.2
type: feature
@ -120,7 +147,7 @@ items:
- version: 3.8.0
prevVersion: 3.7.2
date: '2023-08-29'
date: "2023-08-29"
notes:
- title: Account for matchLabels when associating mappings with the same prefix to different Hosts
type: bugfix
@ -154,7 +181,7 @@ items:
- version: 3.7.2
prevVersion: 3.7.1
date: '2023-07-25'
date: "2023-07-25"
notes:
- title: Upgrade to Envoy 1.26.4
type: security
@ -164,7 +191,7 @@ items:
- version: 3.7.1
prevVersion: 3.7.0
date: '2023-07-13'
date: "2023-07-13"
notes:
- title: Upgrade to Envoy 1.26.3
type: security
@ -173,7 +200,7 @@ items:
- version: 3.7.0
prevVersion: 3.6.0
date: '2023-06-20'
date: "2023-06-20"
notes:
- title: Upgrade to Golang 1.20.4
type: security
@ -197,7 +224,7 @@ items:
- version: 3.6.0
prevVersion: 3.5.0
date: '2023-04-17'
date: "2023-04-17"
notes:
- title: Upgrade to Envoy 1.25.4
type: feature
@ -207,7 +234,7 @@ items:
- version: 3.5.0
prevVersion: 3.4.0
date: '2023-02-15'
date: "2023-02-15"
notes:
- title: Update to golang 1.20.1
type: security
@ -243,8 +270,8 @@ items:
generated with an sni match including the port. This has been fixed and the correct envoy configuration is
being generated.
github:
- title: "fix: hostname port issue"
link: https://github.com/emissary-ingress/emissary/pull/4816
- title: "fix: hostname port issue"
link: https://github.com/emissary-ingress/emissary/pull/4816
- title: Add support for resolving port names in Ingress resource
type: change
@ -255,8 +282,8 @@ items:
to the original behavior.
(Thanks to <a href="https://github.com/antonu17">Anton Ustyuzhanin</a>!).
github:
- title: "#4809"
link: https://github.com/emissary-ingress/emissary/pull/4809
- title: "#4809"
link: https://github.com/emissary-ingress/emissary/pull/4809
- title: Add startupProbe to emissary-apiext server
type: change
@ -268,10 +295,9 @@ items:
configure the webhooks before running liveness and readiness probes. This is to ensure
slow startup doesn't cause K8s to needlessly restart the pod.
- version: 3.4.0
prevVersion: 3.3.0
date: '2023-01-03'
date: "2023-01-03"
notes:
- title: Re-add support for getambassador.io/v1
type: feature
@ -335,7 +361,7 @@ items:
- version: 3.3.0
prevVersion: 3.2.0
date: '2022-11-02'
date: "2022-11-02"
notes:
- title: Update Golang to 1.19.2
type: security
@ -358,8 +384,8 @@ items:
restores the previous behavior by disabling the ext_authz call on the
https redirect routes.
github:
- title: "#4620"
link: https://github.com/emissary-ingress/emissary/issues/4620
- title: "#4620"
link: https://github.com/emissary-ingress/emissary/issues/4620
- title: Fix regression in host_redirects with AuthService
type: bugfix
@ -376,8 +402,8 @@ items:
restores the previous behavior by disabling the ext_authz call on the
host_redirect routes.
github:
- title: "#4640"
link: https://github.com/emissary-ingress/emissary/issues/4640
- title: "#4640"
link: https://github.com/emissary-ingress/emissary/issues/4640
- title: Fixed finding ingress resource tls secrets
type: bugfix
@ -389,7 +415,7 @@ items:
- version: 3.2.0
prevVersion: 3.1.0
date: '2022-09-26'
date: "2022-09-26"
notes:
- title: Envoy upgraded to 1.23
type: change
@ -428,8 +454,8 @@ items:
Distinct services with names that are the same in the first forty characters
will no longer be incorrectly mapped to the same cluster.
github:
- title: "#4354"
link: https://github.com/emissary-ingress/emissary/issues/4354
- title: "#4354"
link: https://github.com/emissary-ingress/emissary/issues/4354
- title: Add failure_mode_deny option to the RateLimitService
type: feature
body: >-
@ -468,8 +494,8 @@ items:
literal values, environment variables, or request headers.
(Thanks to <a href="https://github.com/psalaberria002">Paul</a>!)
github:
- title: "#4181"
link: https://github.com/emissary-ingress/emissary/pull/4181
- title: "#4181"
link: https://github.com/emissary-ingress/emissary/pull/4181
- title: TCPMappings use correct SNI configuration
type: bugfix
body: >-
@ -489,7 +515,7 @@ items:
body: >-
Prior releases of $productName$ had the arbitrary limitation that a
<code>TCPMapping</code> cannot be used on the same port that HTTP is served on, even if
TLS+SNI would make this possible. $productName$ now allows <code>TCPMappings</code> to be
TLS+SNI would make this possible. $productName$ now allows <code>TCPMappings</code> to be
used on the same <code>Listener</code> port as HTTP <code>Hosts</code>, as long as that
<code>Listener</code> terminates TLS.
- title: Update Golang to 1.19.1
@ -498,7 +524,7 @@ items:
Updated Golang to 1.19.1 to address the CVEs: CVE-2022-27664, CVE-2022-32190.
- version: 3.1.0
date: '2022-08-01'
date: "2022-08-01"
notes:
- title: Add support for OpenAPI 2 contracts
type: feature
@ -551,7 +577,7 @@ items:
- version: 3.0.0
prevVersion: 2.3.1
date: '2022-06-27'
date: "2022-06-27"
notes:
- title: Envoy upgraded to 1.22
type: change
@ -645,7 +671,7 @@ items:
between downstream clients and $productName$.
- version: 2.5.0
date: 'TBD'
date: "TBD"
prevVersion: 2.4.0
notes:
- title: Fixed <code>mappingSelector</code> associating <code>Hosts</code> with <code>Mappings</code>
@ -662,7 +688,7 @@ items:
(Thanks to <a href="https://github.com/f-herceg">Filip Herceg</a> and <a href="https://github.com/dynajoe">Joe Andaverde</a>!).
- version: 2.4.0
date: '2022-09-19'
date: "2022-09-19"
prevVersion: 2.3.2
notes:
- title: Add support for Host resources using secrets from different namespaces
@ -714,12 +740,12 @@ items:
body: >-
Prior releases of $productName$ had the arbitrary limitation that a
<code>TCPMapping</code> cannot be used on the same port that HTTP is served on, even if
TLS+SNI would make this possible. $productName$ now allows <code>TCPMappings</code> to be
TLS+SNI would make this possible. $productName$ now allows <code>TCPMappings</code> to be
used on the same <code>Listener</code> port as HTTP <code>Hosts</code>, as long as that
<code>Listener</code> terminates TLS.
- version: 1.14.5
date: 'TBD'
date: "TBD"
notes:
- title: When using gzip, upstreams will no longer receive encoded data
type: bugfix
@ -728,12 +754,12 @@ items:
data. This bug was introduced in 1.14.0. The fix restores the default behavior of
not sending compressed data to upstream services.
github:
- title: 3818
link: https://github.com/emissary-ingress/emissary/issues/3818
- title: 3818
link: https://github.com/emissary-ingress/emissary/issues/3818
docs: https://github.com/emissary-ingress/emissary/issues/3818
- version: 2.3.2
date: '2022-08-01'
date: "2022-08-01"
prevVersion: 2.3.1
notes:
- title: Fix regression in the agent for the metrics transfer.
@ -762,7 +788,7 @@ items:
Updated ncurses to 1.1.1q-r0 to address CVE-2022-29458
- version: 1.14.4
date: '2022-06-13'
date: "2022-06-13"
notes:
- title: Envoy security updates
type: security
@ -775,7 +801,7 @@ items:
docs: https://groups.google.com/g/envoy-announce/c/8nP3Kn4jV7k
- version: 2.3.1
date: '2022-06-09'
date: "2022-06-09"
notes:
- title: fix regression in tracing service config
type: bugfix
@ -784,8 +810,8 @@ items:
for the other drivers (lightstep, etc...). This caused $productName$ to crash on startup. This issue has been resolved
to ensure that the defaults are only applied when driver is <code>zipkin</code>
github:
- title: "#4267"
link: https://github.com/emissary-ingress/emissary/issues/4267
- title: "#4267"
link: https://github.com/emissary-ingress/emissary/issues/4267
- title: Envoy security updates
type: security
body: >-
@ -796,7 +822,7 @@ items:
redirects</a>, and does not use Envoy's built-in OAuth2 filter.
docs: https://groups.google.com/g/envoy-announce/c/8nP3Kn4jV7k
- version: 2.3.0
date: '2022-06-06'
date: "2022-06-06"
notes:
- title: Remove unused packages
type: security
@ -809,16 +835,16 @@ items:
<code>TracingService</code> config when using lightstep as the driver.
(Thanks to <a href="https://github.com/psalaberria002">Paul</a>!)
github:
- title: "#4179"
link: https://github.com/emissary-ingress/emissary/pull/4179
- title: "#4179"
link: https://github.com/emissary-ingress/emissary/pull/4179
- title: Added support for TLS certificate revocation list
type: feature
body: >-
It is now possible to set `crl_secret` in `Host` and `TLSContext` resources
to check peer certificates against a certificate revocation list.
github:
- title: "#1743"
link: https://github.com/emissary-ingress/emissary/issues/1743
- title: "#1743"
link: https://github.com/emissary-ingress/emissary/issues/1743
- title: Added support for the LogService v3 transport protocol
type: feature
body: >-
@ -856,7 +882,7 @@ items:
to configure Envoy.
- version: 2.2.2
date: '2022-02-25'
date: "2022-02-25"
prevVersion: 2.2.1
notes:
- title: TLS Secret validation is now opt-in
@ -871,8 +897,8 @@ items:
body: >-
Kubernetes Secrets that should contain an EC (Elliptic Curve) TLS Private Key are now properly validated.
github:
- title: 4134
link: https://github.com/emissary-ingress/emissary/issues/4134
- title: 4134
link: https://github.com/emissary-ingress/emissary/issues/4134
docs: https://github.com/emissary-ingress/emissary/issues/4134
- title: Decrease metric sync frequency
@ -880,11 +906,11 @@ items:
body: >-
The new delay between two metrics syncs is now 30s.
github:
- title: "#4122"
link: https://github.com/emissary-ingress/emissary/pull/4122
- title: "#4122"
link: https://github.com/emissary-ingress/emissary/pull/4122
- version: 1.14.3
date: '2022-02-25'
date: "2022-02-25"
notes:
- title: Envoy security updates
type: security
@ -894,7 +920,7 @@ items:
docs: https://groups.google.com/g/envoy-announce/c/bIUgEDKHl4g
- version: 2.2.1
date: '2022-02-22'
date: "2022-02-22"
notes:
- title: Envoy V2 API deprecation
type: change
@ -910,7 +936,7 @@ items:
docs: ../../../argo/latest/howtos/manage-rollouts-using-cloud
- version: 2.2.0
date: '2022-02-10'
date: "2022-02-10"
notes:
- title: Envoy V2 API deprecation
type: change
@ -943,8 +969,8 @@ items:
instance was not actually left doing debugging logging, for example.
(Thanks to <a href="https://github.com/jfrabaute">Fabrice</a>!)
github:
- title: "#3906"
link: https://github.com/emissary-ingress/emissary/issues/3906
- title: "#3906"
link: https://github.com/emissary-ingress/emissary/issues/3906
docs: topics/running/statistics/8877-metrics/
- title: Envoy configuration % escaping
@ -955,10 +981,10 @@ items:
custom user content can now contain '%' symbols escaped as '%%'.
docs: topics/running/custom-error-responses
github:
- title: "DW Envoy: 74"
link: https://github.com/datawire/envoy/pull/74
- title: "Upstream Envoy: 19383"
link: https://github.com/envoyproxy/envoy/pull/19383
- title: "DW Envoy: 74"
link: https://github.com/datawire/envoy/pull/74
- title: "Upstream Envoy: 19383"
link: https://github.com/envoyproxy/envoy/pull/19383
image: ./v2.2.0-percent-escape.png
- title: Stream metrics from Envoy to Ambassador Cloud
@ -966,8 +992,8 @@ items:
body: >-
Support for streaming Envoy metrics about the clusters to Ambassador Cloud.
github:
- title: "#4053"
link: https://github.com/emissary-ingress/emissary/pull/4053
- title: "#4053"
link: https://github.com/emissary-ingress/emissary/pull/4053
docs: https://github.com/emissary-ingress/emissary/pull/4053
- title: Support received commands to pause, continue and abort a Rollout via Agent directives
@ -978,8 +1004,8 @@ items:
is sent to Ambassador Cloud including the command ID, whether it ran successfully, and
an error message in case there was any.
github:
- title: "#4040"
link: https://github.com/emissary-ingress/emissary/pull/4040
- title: "#4040"
link: https://github.com/emissary-ingress/emissary/pull/4040
docs: https://github.com/emissary-ingress/emissary/pull/4040
- title: Validate certificates in TLS Secrets
@ -989,8 +1015,8 @@ items:
accepted for configuration. A Secret that contains an invalid TLS certificate will be logged
as an invalid resource.
github:
- title: "#3821"
link: https://github.com/emissary-ingress/emissary/issues/3821
- title: "#3821"
link: https://github.com/emissary-ingress/emissary/issues/3821
docs: ../topics/running/tls
image: ./v2.2.0-tls-cert-validation.png
@ -1004,7 +1030,7 @@ items:
- version: 2.1.2
prevVersion: 2.1.0
date: '2022-01-25'
date: "2022-01-25"
notes:
- title: Envoy V2 API deprecation
type: change
@ -1061,8 +1087,8 @@ items:
Any <code>Mapping</code> that uses the <code>host_redirect</code> field is now properly discovered and used. Thanks
to <a href="https://github.com/gferon">Gabriel Féron</a> for contributing this bugfix!
github:
- title: "#3709"
link: https://github.com/emissary-ingress/emissary/issues/3709
- title: "#3709"
link: https://github.com/emissary-ingress/emissary/issues/3709
docs: https://github.com/emissary-ingress/emissary/issues/3709
- title: Correctly handle DNS wildcards when associating Hosts and Mappings
@ -1112,7 +1138,7 @@ items:
some situations a validation error would not be reported.
- version: 2.1.1
date: 'N/A'
date: "N/A"
notes:
- title: Never issued
type: change
@ -1122,7 +1148,7 @@ items:
Emissary-ingress 2.1.0.</i>
- version: 2.1.0
date: '2021-12-16'
date: "2021-12-16"
notes:
- title: Not recommended; upgrade to 2.1.2 instead
type: change
@ -1154,8 +1180,8 @@ items:
<code>Mapping</code>s together). This has been corrected, so that all such
updates correctly take effect.
github:
- title: "#3945"
link: https://github.com/emissary-ingress/emissary/issues/3945
- title: "#3945"
link: https://github.com/emissary-ingress/emissary/issues/3945
docs: https://github.com/emissary-ingress/emissary/issues/3945
image: ./v2.1.0-canary.png
@ -1174,8 +1200,8 @@ items:
data. This bug was introduced in 1.14.0. The fix restores the default behavior of
not sending compressed data to upstream services.
github:
- title: "#3818"
link: https://github.com/emissary-ingress/emissary/issues/3818
- title: "#3818"
link: https://github.com/emissary-ingress/emissary/issues/3818
docs: https://github.com/emissary-ingress/emissary/issues/3818
image: ./v2.1.0-gzip-enabled.png
@ -1199,7 +1225,7 @@ items:
have now been removed, resolving CVE-2020-29651.
- version: 2.0.5
date: '2021-11-08'
date: "2021-11-08"
notes:
- title: AuthService circuit breakers
type: feature
@ -1227,13 +1253,13 @@ items:
<code>mappingSelector</code>; a future version of $productName$ will remove the
<code>selector</code> element.
github:
- title: "#3902"
link: https://github.com/emissary-ingress/emissary/issues/3902
- title: "#3902"
link: https://github.com/emissary-ingress/emissary/issues/3902
docs: https://github.com/emissary-ingress/emissary/issues/3902
image: ./v2.0.5-mappingselector.png
- version: 2.0.4
date: '2021-10-19'
date: "2021-10-19"
notes:
- title: General availability!
type: feature
@ -1307,8 +1333,8 @@ items:
The release now shows its actual released version number, rather than
the internal development version number.
github:
- title: "#3854"
link: https://github.com/emissary-ingress/emissary/issues/3854
- title: "#3854"
link: https://github.com/emissary-ingress/emissary/issues/3854
docs: https://github.com/emissary-ingress/emissary/issues/3854
image: ./v2.0.4-version.png
@ -1318,8 +1344,8 @@ items:
Large configurations no longer cause $productName$ to be unable
to communicate with Ambassador Cloud.
github:
- title: "#3593"
link: https://github.com/emissary-ingress/emissary/issues/3593
- title: "#3593"
link: https://github.com/emissary-ingress/emissary/issues/3593
docs: https://github.com/emissary-ingress/emissary/issues/3593
- title: Listeners correctly support l7Depth
@ -1331,7 +1357,7 @@ items:
image: ./v2.0.4-l7depth.png
- version: 2.0.3-ea
date: '2021-09-16'
date: "2021-09-16"
notes:
- title: Developer Preview!
body: We're pleased to introduce $productName$ 2.0.3 as a <b>developer preview</b>. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on <a href="https://a8r.io/slack">Slack</a> and let us know what you think.
@ -1344,10 +1370,10 @@ items:
type: feature
docs: topics/running/running/
github:
- title: "#3686"
link: https://github.com/emissary-ingress/emissary/issues/3686
- title: "#3666"
link: https://github.com/emissary-ingress/emissary/issues/3666
- title: "#3686"
link: https://github.com/emissary-ingress/emissary/issues/3686
- title: "#3666"
link: https://github.com/emissary-ingress/emissary/issues/3666
- title: AmbassadorMapping supports setting the DNS type
body: You can now set <code>dns_type</code> in the <code>AmbassadorMapping</code> to configure how Envoy will use the DNS for the service.
@ -1359,11 +1385,11 @@ items:
type: bugfix
docs: https://github.com/emissary-ingress/emissary/issues/3707
github:
- title: "#3707"
link: https://github.com/emissary-ingress/emissary/issues/3707
- title: "#3707"
link: https://github.com/emissary-ingress/emissary/issues/3707
- version: 2.0.2-ea
date: '2021-08-24'
date: "2021-08-24"
notes:
- title: Developer Preview!
body: We're pleased to introduce $productName$ 2.0.2 as a <b>developer preview</b>. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on <a href="https://a8r.io/slack">Slack</a> and let us know what you think.
@ -1387,7 +1413,7 @@ items:
docs: topics/running/running/
- version: 2.0.1-ea
date: '2021-08-12'
date: "2021-08-12"
notes:
- title: Developer Preview!
body: We're pleased to introduce $productName$ 2.0.1 as a <b>developer preview</b>. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on <a href="https://a8r.io/slack">Slack</a> and let us know what you think.
@ -1426,7 +1452,7 @@ items:
docs: topics/concepts/rate-limiting-at-the-edge/
- version: 2.0.0-ea
date: '2021-06-24'
date: "2021-06-24"
notes:
- title: Developer Preview!
body: We're pleased to introduce $productName$ 2.0.0 as a <b>developer preview</b>. The 2.X family introduces a number of changes to allow $productName$ to more gracefully handle larger installations, reduce global configuration to better handle multitenant or multiorganizational installations, reduce memory footprint, and improve performance. We welcome feedback!! Join us on <a href="https://a8r.io/slack">Slack</a> and let us know what you think.
@ -1469,8 +1495,8 @@ items:
body: Each <code>AmbassadorHost</code> can specify its <code>requestPolicy.insecure.action</code> independently of any other <code>AmbassadorHost</code>, allowing for HTTP routing as flexible as HTTPS routing.
docs: topics/running/host-crd/#secure-and-insecure-requests
github:
- title: "#2888"
link: https://github.com/datawire/ambassador/issues/2888
- title: "#2888"
link: https://github.com/datawire/ambassador/issues/2888
image: ./edge-stack-2.0.0-insecure_action_hosts.png
type: bugfix
@ -1534,7 +1560,7 @@ items:
type: change
- version: 1.14.2
date: '2021-09-29'
date: "2021-09-29"
notes:
- title: Mappings support controlling DNS refresh with DNS TTL
type: feature
@ -1559,7 +1585,7 @@ items:
docs: topics/running/ambassador/#modify-default-buffer-size
- version: 1.14.1
date: '2021-08-24'
date: "2021-08-24"
notes:
- title: Envoy security updates
type: change
@ -1569,7 +1595,7 @@ items:
docs: https://groups.google.com/g/envoy-announce/c/5xBpsEZZDfE
- version: 1.14.0
date: '2021-08-19'
date: "2021-08-19"
notes:
- title: Envoy upgraded to 1.17.3!
type: change
@ -1596,7 +1622,7 @@ items:
docs: https://github.com/emissary-ingress/emissary/pull/3650
- version: 1.13.10
date: '2021-07-28'
date: "2021-07-28"
notes:
- title: Fix for CORS origins configuration on the Mapping resource
type: bugfix
@ -1647,7 +1673,7 @@ items:
image: ../images/edge-stack-1.13.10-consul-cert-log.png
- version: 1.13.9
date: '2021-06-30'
date: "2021-06-30"
notes:
- title: Fix for TCPMappings
body: >-
@ -1657,7 +1683,7 @@ items:
docs: topics/using/tcpmappings/
- version: 1.13.8
date: '2021-06-08'
date: "2021-06-08"
notes:
- title: Fix Ambassador Cloud Service Details
body: >-
@ -1676,7 +1702,7 @@ items:
docs: https://www.getambassador.io/docs/argo
- version: 1.13.7
date: '2021-06-03'
date: "2021-06-03"
notes:
- title: JSON logging support
body: >-
@ -1703,7 +1729,7 @@ items:
type: change
- version: 1.13.6
date: '2021-05-24'
date: "2021-05-24"
notes:
- title: Quieter logs in legacy mode
type: bugfix
@ -1712,7 +1738,7 @@ items:
when using <code>AMBASSADOR_LEGACY_MODE=true</code>.
- version: 1.13.5
date: '2021-05-13'
date: "2021-05-13"
notes:
- title: Correctly support proper_case and preserve_external_request_id
type: bugfix
@ -1731,7 +1757,7 @@ items:
docs: topics/running/ingress-controller
- version: 1.13.4
date: '2021-05-11'
date: "2021-05-11"
notes:
- title: Envoy 1.15.5
body: >-
@ -1740,5 +1766,4 @@ items:
image: ../images/edge-stack-1.13.4.png
docs: topics/running/ambassador/#rejecting-client-requests-with-escaped-slashes
type: security
# Don't go any further back than 1.13.4.
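The 3.10.0 notes above mention that `ambex` replaced `md5` with `xxhash64` for its config hashing. `xxhash64` is a third-party package, so this sketch uses the stdlib's FNV-1a instead to illustrate the same pattern: a fast, non-cryptographic hash used purely as a change-detection key, avoiding `md5` in crypto-restricted (e.g. FIPS) environments.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// snapshotKey derives a short hex key from a config snapshot. Any fast
// non-cryptographic 64-bit hash works here; collisions only cost a
// spurious reconfigure, not a security failure.
func snapshotKey(config []byte) string {
	h := fnv.New64a()
	_, _ = h.Write(config) // hash.Hash Write never returns an error
	return fmt.Sprintf("%016x", h.Sum64())
}

func main() {
	fmt.Println(snapshotKey([]byte("listener: 8080")))
	fmt.Println(snapshotKey([]byte("listener: 8443")))
}
```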

go.mod

@ -1,42 +1,6 @@
module github.com/emissary-ingress/emissary/v3
go 1.22.4
// If you're editing this file, there's a few things you should know:
//
// 1. Avoid the `replace` command as much as possible. Go only pays
// attention to the `replace` command when it appears in the main
// module, which means that if the `replace` command is required
// for the compile to work, then anything using ambassador.git as
// a library needs to duplicate that `replace` in their go.mod.
// We don't want to burden our users with that if we can avoid it
// (since we encourage them to use the gRPC Go libraries when
// implementing plugin services), and we don't want to deal with
// that ourselves in apro.git.
//
// The biggest reason we wouldn't be able to avoid it is if we
// need to pull in a library that has a `replace` in its
// go.mod--just as us adding a `replace` to our go.mod would
// require our dependents to do the same, our dependencies adding
// a `replace` requires us to do the same. And even then, if
// we're careful we might be able to avoid it.
//
// 2. If you do add a `replace` command to this file, always include
// a version number to the left of the "=>" (even if you're
// copying the command from a dependency and the dependency's
// go.mod doesn't have a version on the left side). This way we
// don't accidentally keep pinning to an older version after our
// dependency's `replace` goes away. Sometimes it can be tricky
// to figure out what version to put on the left side; often the
// easiest way to figure it out is to get it working without a
// version, run `go mod vendor`, then grep for "=>" in
// `./vendor/modules.txt`. If you don't see a "=>" line for that
// replacement in modules.txt, then that's an indicator that we
// don't really need that `replace`, maybe instead using a
// `require`.
//
// 3. If you do add a `replace` command to this file, you must also
// add it to the go.mod in apro.git (see above for explanation).
go 1.23.3
// Because we (unfortunately) need to require k8s.io/kubernetes, which
// is (unfortunately) managed in a way that makes it hostile to being
@ -72,187 +36,254 @@ exclude (
k8s.io/sample-apiserver v0.0.0
)
// Invalid pseudo-version.
exclude github.com/go-check/check v1.0.0-20180628173108-788fd7840127
// TEMPORARY HACK, DO NOT MERGE THIS
replace github.com/datawire/go-mkopensource => github.com/LukeShu/go-mkopensource v0.0.0-20250206080114-4ff6b660d8d4
// "Normal" `replace` directives (MUST have a version number to the
// left of `=>`!!!)
//
// 1. Avoid the `replace` command as much as possible. Go only pays
// attention to the `replace` command when it appears in the main
// module, which means that if the `replace` command is required
// for the compile to work, then anything using ambassador.git as
// a library needs to duplicate that `replace` in their go.mod.
// We don't want to burden our users with that if we can avoid it
// (since we encourage them to use the gRPC Go libraries when
// implementing plugin services), and we don't want to deal with
// that ourselves in apro.git.
//
// The biggest reason we wouldn't be able to avoid it is if we
// need to pull in a library that has a `replace` in its
// go.mod--just as us adding a `replace` to our go.mod would
// require our dependents to do the same, our dependencies adding
// a `replace` requires us to do the same. And even then, if
// we're careful we might be able to avoid it.
//
// 2. If you do add a `replace` command to this file, always include
// a version number to the left of the "=>" (even if you're
// copying the command from a dependency and the dependency's
// go.mod doesn't have a version on the left side). This way we
// don't accidentally keep pinning to an older version after our
// dependency's `replace` goes away. Sometimes it can be tricky
// to figure out what version to put on the left side; often the
// easiest way to figure it out is to get it working without a
// version, run `go mod vendor`, then grep for "=>" in
// `./vendor/modules.txt`. If you don't see a "=>" line for that
// replacement in modules.txt, then that's an indicator that we
// don't really need that `replace`, maybe instead using a
// `require`.
//
// 3. If you do add a `replace` command to this file, you must also
// add it to the go.mod in apro.git (see above for explanation).
//
// We've got some bug-fixes that we need for conversion-gen and
// controller-gen.
replace k8s.io/code-generator => github.com/emissary-ingress/code-generator v0.28.0-alpha.0.0.20231105041308-a20b0cd90dea
replace k8s.io/code-generator v0.32.1 => github.com/emissary-ingress/code-generator v0.32.2-0.20250205235421-4d5bf4656f71
// "Anti-rename" `replace` directives (should not have a version
// number to the left of `=>`)
//
// This is the one exception to the "must have a version number on the
// left" rule above. It applies when a package has been renamed and we
// have the old name as an indirect dependency; use the last version
// before the rename. This prevents `go get -u ./...` from causing
// us errors.
//
// - go-metrics v0.4.1 was the last version before it renamed to github.com/hashicorp/go-metrics
// - mergo v0.3.16 was the last version before it renamed to dario.cat/mergo
replace (
github.com/armon/go-metrics => github.com/armon/go-metrics v0.4.1
github.com/imdario/mergo => github.com/imdario/mergo v0.3.16
)
////////////////////////////////////////////////////////////////////////////////
// Everything from here on out should be managed by `go get` and     //
// `go mod tidy` and friends; a human should generally not be        //
// editing these themselves.                                         //
////////////////////////////////////////////////////////////////////////////////
require (
github.com/Masterminds/sprig v2.22.0+incompatible
github.com/cenkalti/backoff/v4 v4.2.1
github.com/cenkalti/backoff/v4 v4.3.0
github.com/census-instrumentation/opencensus-proto v0.4.1
github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4
github.com/cespare/xxhash/v2 v2.3.0
github.com/cncf/xds/go v0.0.0-20250121191232-2f005788dc42
github.com/datawire/dlib v1.3.1
github.com/datawire/dtest v0.0.0-20210928162311-722b199c4c2f
github.com/datawire/go-mkopensource v0.0.12-0.20230821212923-d1d8451579a1
github.com/envoyproxy/protoc-gen-validate v1.0.2
github.com/fsnotify/fsnotify v1.7.0
github.com/datawire/go-mkopensource v0.0.13
github.com/envoyproxy/protoc-gen-validate v1.2.1
github.com/fsnotify/fsnotify v1.8.0
github.com/go-logr/zapr v1.3.0
github.com/google/go-cmp v0.6.0
github.com/google/uuid v1.5.0
github.com/gorilla/websocket v1.5.1
github.com/hashicorp/consul/api v1.26.1
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/hashicorp/consul/api v1.31.0
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51
github.com/pkg/errors v0.9.1
github.com/planetscale/vtprotobuf v0.6.0
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2
github.com/prometheus/client_model v0.5.0
github.com/prometheus/client_model v0.6.1
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
github.com/spf13/viper v1.7.0
github.com/stretchr/testify v1.8.4
go.opentelemetry.io/proto/otlp v1.0.0
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.6
github.com/spf13/viper v1.19.0
github.com/stretchr/testify v1.10.0
go.opentelemetry.io/proto/otlp v1.5.0
go.uber.org/goleak v1.3.0
go.uber.org/zap v1.26.0
golang.org/x/mod v0.15.0
golang.org/x/sync v0.6.0
golang.org/x/sys v0.18.0
google.golang.org/genproto/googleapis/api v0.0.0-20231211222908-989df2bf70f3
google.golang.org/genproto/googleapis/rpc v0.0.0-20240102182953-50ed04b92917
google.golang.org/grpc v1.60.1
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.3.0
google.golang.org/protobuf v1.33.0
go.uber.org/zap v1.27.0
golang.org/x/mod v0.23.0
golang.org/x/sync v0.11.0
golang.org/x/sys v0.30.0
google.golang.org/genproto/googleapis/api v0.0.0-20250204164813-702378808489
google.golang.org/genproto/googleapis/rpc v0.0.0-20250204164813-702378808489
google.golang.org/grpc v1.70.0
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.5.1
google.golang.org/protobuf v1.36.4
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.30.1
k8s.io/apiextensions-apiserver v0.30.1
k8s.io/apimachinery v0.30.1
k8s.io/cli-runtime v0.30.1
k8s.io/client-go v0.30.1
k8s.io/code-generator v0.30.1
k8s.io/klog/v2 v2.120.1
k8s.io/kubectl v0.30.1
k8s.io/kubernetes v1.30.1
k8s.io/metrics v0.30.1
sigs.k8s.io/controller-runtime v0.18.2
sigs.k8s.io/controller-tools v0.13.0
sigs.k8s.io/e2e-framework v0.3.0
k8s.io/api v0.32.1
k8s.io/apiextensions-apiserver v0.32.1
k8s.io/apimachinery v0.32.1
k8s.io/cli-runtime v0.32.1
k8s.io/client-go v0.32.1
k8s.io/code-generator v0.32.1
k8s.io/klog/v2 v2.130.1
k8s.io/kubectl v0.32.1
k8s.io/kubernetes v1.32.1
k8s.io/metrics v0.32.1
sigs.k8s.io/controller-runtime v0.20.1
sigs.k8s.io/controller-tools v0.17.1
sigs.k8s.io/e2e-framework v0.6.0
sigs.k8s.io/gateway-api v0.2.0
sigs.k8s.io/yaml v1.4.0
)
require (
dario.cat/mergo v1.0.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
cel.dev/expr v0.19.2 // indirect
dario.cat/mergo v1.0.1 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Masterminds/goutils v1.1.1 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20230923063757-afb1ddc0824c // indirect
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230305170008-8188dc5388df // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/ProtonMail/go-crypto v1.1.5 // indirect
github.com/antlr4-go/antlr/v4 v4.13.1 // indirect
github.com/armon/go-metrics v0.5.4 // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/cloudflare/circl v1.3.7 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/chai2010/gettext-go v1.0.3 // indirect
github.com/cloudflare/circl v1.6.0 // indirect
github.com/cyphar/filepath-securejoin v0.4.1 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/emicklei/go-restful/v3 v3.11.0 // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/emicklei/go-restful/v3 v3.12.1 // indirect
github.com/emirpasic/gods v1.18.1 // indirect
github.com/evanphx/json-patch v5.7.0+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.0 // indirect
github.com/evanphx/json-patch v5.9.11+incompatible // indirect
github.com/evanphx/json-patch/v5 v5.9.11 // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/camelcase v1.0.0 // indirect
github.com/fatih/color v1.16.0 // indirect
github.com/fatih/color v1.18.0 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
github.com/go-errors/errors v1.5.1 // indirect
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
github.com/go-git/go-billy/v5 v5.5.0 // indirect
github.com/go-git/go-git/v5 v5.11.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-openapi/jsonpointer v0.20.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.4 // indirect
github.com/gobuffalo/flect v1.0.2 // indirect
github.com/go-git/go-billy/v5 v5.6.2 // indirect
github.com/go-git/go-git/v5 v5.13.2 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gobuffalo/flect v1.0.3 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/cel-go v0.18.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/btree v1.1.3 // indirect
github.com/google/cel-go v0.23.2 // indirect
github.com/google/gnostic-models v0.6.9 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v1.5.0 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
github.com/hashicorp/go-metrics v0.5.4 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-rootcerts v1.0.2 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/hashicorp/hcl v1.0.0 // indirect
github.com/hashicorp/serf v0.10.1 // indirect
github.com/huandu/xstrings v1.3.2 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/hashicorp/serf v0.10.2 // indirect
github.com/huandu/xstrings v1.5.0 // indirect
github.com/imdario/mergo v1.0.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/josharian/intern v1.0.1-0.20211109044230-42b52b674af5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/klauspost/compress v1.17.11 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/magiconair/properties v1.8.1 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/magiconair/properties v1.8.9 // indirect
github.com/mailru/easyjson v0.9.0 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20221205130635-1aeaba878587 // indirect
github.com/moby/spdystream v0.5.0 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/pelletier/go-toml v1.2.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.3 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pjbgf/sha1cd v0.3.0 // indirect
github.com/prometheus/client_golang v1.17.0 // indirect
github.com/prometheus/common v0.45.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/pjbgf/sha1cd v0.3.2 // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sergi/go-diff v1.3.1 // indirect
github.com/skeema/knownhosts v1.2.1 // indirect
github.com/spf13/afero v1.3.3 // indirect
github.com/spf13/cast v1.3.0 // indirect
github.com/spf13/jwalterweatherman v1.0.0 // indirect
github.com/sagikazarmark/locafero v0.7.0 // indirect
github.com/sagikazarmark/slog-shim v0.1.0 // indirect
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 // indirect
github.com/skeema/knownhosts v1.3.1 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.12.0 // indirect
github.com/spf13/cast v1.7.1 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
github.com/subosito/gotenv v1.2.0 // indirect
github.com/vladimirvivien/gexe v0.2.0 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/vladimirvivien/gexe v0.4.1 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.opentelemetry.io/otel v1.34.0 // indirect
go.opentelemetry.io/otel/trace v1.34.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/exp v0.0.0-20231110203233-9a3e6036ecaa // indirect
golang.org/x/net v0.23.0 // indirect
golang.org/x/oauth2 v0.15.0 // indirect
golang.org/x/term v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.18.0 // indirect
golang.org/x/crypto v0.32.0 // indirect
golang.org/x/exp v0.0.0-20250128182459-e0ece0dbea4c // indirect
golang.org/x/net v0.34.0 // indirect
golang.org/x/oauth2 v0.26.0 // indirect
golang.org/x/term v0.29.0 // indirect
golang.org/x/text v0.22.0 // indirect
golang.org/x/time v0.10.0 // indirect
golang.org/x/tools v0.29.0 // indirect
gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto v0.0.0-20231212172506-995d672761c0 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.51.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiserver v0.30.1 // indirect
k8s.io/component-base v0.30.1 // indirect
k8s.io/gengo v0.0.0-20230829151522-9cce18d56c01 // indirect
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
k8s.io/utils v0.0.0-20230726121419-3b25d923346b // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kyaml v0.14.3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
k8s.io/apiserver v0.32.1 // indirect
k8s.io/component-base v0.32.1 // indirect
k8s.io/component-helpers v0.32.1 // indirect
k8s.io/controller-manager v0.32.1 // indirect
k8s.io/gengo/v2 v2.0.0-20250130153323-76c5745d3511 // indirect
k8s.io/kube-openapi v0.0.0-20241212222426-2c72e554b1e7 // indirect
k8s.io/utils v0.0.0-20241210054802-24370beab758 // indirect
sigs.k8s.io/json v0.0.0-20241014173422-cfa47c3a1cc8 // indirect
sigs.k8s.io/kustomize/api v0.19.0 // indirect
sigs.k8s.io/kustomize/kyaml v0.19.0 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.5.0 // indirect
)

go.sum

File diff suppressed because it is too large

View File

@ -3,12 +3,12 @@ package ambex
import (
// standard library
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"strconv"
// third-party libraries
"github.com/cespare/xxhash/v2"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/anypb"
@ -146,9 +146,12 @@ func V3ListenerToRdsListener(lnr *v3listener.Listener) (*v3listener.Listener, []
// associated with a given listener.
filterChainMatch, _ := json.Marshal(fc.GetFilterChainMatch())
// Use MD5 because it's decently fast and cryptographic security isn't needed.
matchHash := md5.Sum(filterChainMatch)
matchKey := hex.EncodeToString(matchHash[:])
// Use xxhash64 because it's decently fast and cryptographic security isn't needed.
h := xxhash.New()
if _, err := h.Write(filterChainMatch); err != nil {
return nil, nil, fmt.Errorf("xxhash write error: %w", err)
}
matchKey := strconv.FormatUint(h.Sum64(), 16)
rc.Name = fmt.Sprintf("%s-routeconfig-%s-%d", l.Name, matchKey, matchKeyIndex[matchKey])
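The hunk above swaps MD5 for xxhash64 when deriving the route-config name from the marshaled filter chain match. A minimal sketch of that naming pattern follows: hash the marshaled match with a fast non-cryptographic hash, hex-encode the 64-bit sum, and embed it in the route-config name. To keep the sketch dependency-free it substitutes the standard library's FNV-1a for `github.com/cespare/xxhash/v2`, and the `routeKey` helper name is ours, not Emissary's.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"strconv"
)

// routeKey derives a short, stable name fragment from a marshaled
// FilterChainMatch. Emissary uses xxhash64 here; FNV-1a stands in so
// this sketch needs no third-party module. Cryptographic strength is
// not required -- the key only disambiguates route configs.
func routeKey(marshaledMatch []byte) string {
	h := fnv.New64a()
	h.Write(marshaledMatch) // hash.Hash's Write never returns an error
	return strconv.FormatUint(h.Sum64(), 16)
}

func main() {
	match := []byte(`{"destination_port":8080}`)
	// Same shape as the generated name in the diff above.
	fmt.Printf("emissary-ingress-listener-8080-routeconfig-%s-%d\n", routeKey(match), 0)
}
```

The hex key is deterministic for a given match, so repeated reconciliations produce the same route-config names; only the hash function (and thus the literal key values, as in the updated test expectation below) changed.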

View File

@ -78,7 +78,7 @@ func TestV3ListenerToRdsListener(t *testing.T) {
for i, rc := range routes {
// Confirm that the route name was transformed to the hashed version
assert.Equal(t, fmt.Sprintf("emissary-ingress-listener-8080-routeconfig-8c82e45fa3f94ab4e879543e0a1a30ac-%d", i), rc.GetName())
assert.Equal(t, fmt.Sprintf("emissary-ingress-listener-8080-routeconfig-29865f40cbcf32dc-%d", i), rc.GetName())
// Make sure the virtual hosts are unmodified
virtualHosts := rc.GetVirtualHosts()

View File

@ -5,7 +5,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.33.0
// protoc-gen-go v1.36.4
// protoc v3.21.5
// source: agent/director.proto
@ -19,6 +19,7 @@ import (
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
@ -127,10 +128,7 @@ func (SecretSyncCommand_Action) EnumDescriptor() ([]byte, []int) {
// This is the identity of the ambassador the agent is reporting on behalf of
// no user account specific information should be contained in here
type Identity struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// The account ID assigned by the DCP
//
// Deprecated: Marked as deprecated in agent/director.proto.
@ -148,16 +146,16 @@ type Identity struct {
// Label or description for the user
//
// Deprecated: Marked as deprecated in agent/director.proto.
Label string `protobuf:"bytes,6,opt,name=label,proto3" json:"label,omitempty"`
Label string `protobuf:"bytes,6,opt,name=label,proto3" json:"label,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Identity) Reset() {
*x = Identity{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Identity) String() string {
@ -168,7 +166,7 @@ func (*Identity) ProtoMessage() {}
func (x *Identity) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -231,12 +229,9 @@ func (x *Identity) GetLabel() string {
// Information that Ambassador's Agent can send to the Director
// component of the DCP
type Snapshot struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Identity *Identity `protobuf:"bytes,1,opt,name=identity,proto3" json:"identity,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
Identity *Identity `protobuf:"bytes,1,opt,name=identity,proto3" json:"identity,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
// no longer used.
//
// Deprecated: Marked as deprecated in agent/director.proto.
@ -245,17 +240,17 @@ type Snapshot struct {
// describes how the raw_snapshot is encoded
ContentType string `protobuf:"bytes,5,opt,name=content_type,json=contentType,proto3" json:"content_type,omitempty"`
// api version of RawSnapshot
ApiVersion string `protobuf:"bytes,6,opt,name=api_version,json=apiVersion,proto3" json:"api_version,omitempty"`
SnapshotTs *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=snapshot_ts,json=snapshotTs,proto3" json:"snapshot_ts,omitempty"`
ApiVersion string `protobuf:"bytes,6,opt,name=api_version,json=apiVersion,proto3" json:"api_version,omitempty"`
SnapshotTs *timestamppb.Timestamp `protobuf:"bytes,7,opt,name=snapshot_ts,json=snapshotTs,proto3" json:"snapshot_ts,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Snapshot) Reset() {
*x = Snapshot{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Snapshot) String() string {
@ -266,7 +261,7 @@ func (*Snapshot) ProtoMessage() {}
func (x *Snapshot) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -334,20 +329,17 @@ func (x *Snapshot) GetSnapshotTs() *timestamppb.Timestamp {
// RawSnapshotChunk is a fragment of a JSON serialization of a
// Snapshot protobuf object.
type RawSnapshotChunk struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
unknownFields protoimpl.UnknownFields
Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *RawSnapshotChunk) Reset() {
*x = RawSnapshotChunk{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RawSnapshotChunk) String() string {
@ -358,7 +350,7 @@ func (*RawSnapshotChunk) ProtoMessage() {}
func (x *RawSnapshotChunk) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -382,27 +374,24 @@ func (x *RawSnapshotChunk) GetChunk() []byte {
// Diagnostic information from ambassador admin
type Diagnostics struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Identity *Identity `protobuf:"bytes,1,opt,name=identity,proto3" json:"identity,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
RawDiagnostics []byte `protobuf:"bytes,3,opt,name=raw_diagnostics,json=rawDiagnostics,proto3" json:"raw_diagnostics,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
Identity *Identity `protobuf:"bytes,1,opt,name=identity,proto3" json:"identity,omitempty"`
Message string `protobuf:"bytes,2,opt,name=message,proto3" json:"message,omitempty"`
RawDiagnostics []byte `protobuf:"bytes,3,opt,name=raw_diagnostics,json=rawDiagnostics,proto3" json:"raw_diagnostics,omitempty"`
// describes how the raw_diagnostic is encoded
ContentType string `protobuf:"bytes,4,opt,name=content_type,json=contentType,proto3" json:"content_type,omitempty"`
// api version of Diagnostics
ApiVersion string `protobuf:"bytes,5,opt,name=api_version,json=apiVersion,proto3" json:"api_version,omitempty"`
SnapshotTs *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=snapshot_ts,json=snapshotTs,proto3" json:"snapshot_ts,omitempty"`
ApiVersion string `protobuf:"bytes,5,opt,name=api_version,json=apiVersion,proto3" json:"api_version,omitempty"`
SnapshotTs *timestamppb.Timestamp `protobuf:"bytes,6,opt,name=snapshot_ts,json=snapshotTs,proto3" json:"snapshot_ts,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Diagnostics) Reset() {
*x = Diagnostics{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Diagnostics) String() string {
@ -413,7 +402,7 @@ func (*Diagnostics) ProtoMessage() {}
func (x *Diagnostics) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -473,20 +462,17 @@ func (x *Diagnostics) GetSnapshotTs() *timestamppb.Timestamp {
// RawDiagnosticChunk is a fragment of a JSON serialization of a
// Diagnostic protobuf object.
type RawDiagnosticsChunk struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
unknownFields protoimpl.UnknownFields
Chunk []byte `protobuf:"bytes,1,opt,name=chunk,proto3" json:"chunk,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *RawDiagnosticsChunk) Reset() {
*x = RawDiagnosticsChunk{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RawDiagnosticsChunk) String() string {
@ -497,7 +483,7 @@ func (*RawDiagnosticsChunk) ProtoMessage() {}
func (x *RawDiagnosticsChunk) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -520,23 +506,20 @@ func (x *RawDiagnosticsChunk) GetChunk() []byte {
}
type Service struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
Labels map[string]string `protobuf:"bytes,3,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Annotations map[string]string `protobuf:"bytes,4,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
Labels map[string]string `protobuf:"bytes,3,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
Annotations map[string]string `protobuf:"bytes,4,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
sizeCache protoimpl.SizeCache
}
func (x *Service) Reset() {
*x = Service{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Service) String() string {
@ -547,7 +530,7 @@ func (*Service) ProtoMessage() {}
func (x *Service) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -592,18 +575,16 @@ func (x *Service) GetAnnotations() map[string]string {
// The Director's response to a Snapshot from the Agent
type SnapshotResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *SnapshotResponse) Reset() {
*x = SnapshotResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SnapshotResponse) String() string {
@ -614,7 +595,7 @@ func (*SnapshotResponse) ProtoMessage() {}
func (x *SnapshotResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -631,18 +612,16 @@ func (*SnapshotResponse) Descriptor() ([]byte, []int) {
// The Director's response to a Diagnostics message from the Agent
type DiagnosticsResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DiagnosticsResponse) Reset() {
*x = DiagnosticsResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DiagnosticsResponse) String() string {
@ -653,7 +632,7 @@ func (*DiagnosticsResponse) ProtoMessage() {}
func (x *DiagnosticsResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@ -670,11 +649,8 @@ func (*DiagnosticsResponse) Descriptor() ([]byte, []int) {
// Instructions that the DCP can send to Ambassador
type Directive struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ID string `protobuf:"bytes,1,opt,name=ID,proto3" json:"ID,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
ID string `protobuf:"bytes,1,opt,name=ID,proto3" json:"ID,omitempty"`
// Stop sending snapshots. The default value (false) indicates that
// snapshot should be sent.
StopReporting bool `protobuf:"varint,2,opt,name=stop_reporting,json=stopReporting,proto3" json:"stop_reporting,omitempty"`
@ -683,16 +659,16 @@ type Directive struct {
// the existing report period.
MinReportPeriod *durationpb.Duration `protobuf:"bytes,3,opt,name=min_report_period,json=minReportPeriod,proto3" json:"min_report_period,omitempty"`
// Commands to execute
Commands []*Command `protobuf:"bytes,4,rep,name=commands,proto3" json:"commands,omitempty"`
Commands []*Command `protobuf:"bytes,4,rep,name=commands,proto3" json:"commands,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Directive) Reset() {
*x = Directive{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Directive) String() string {
@@ -703,7 +679,7 @@ func (*Directive) ProtoMessage() {}
func (x *Directive) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -748,23 +724,20 @@ func (x *Directive) GetCommands() []*Command {
// An individual instruction from the DCP
type Command struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
state protoimpl.MessageState `protogen:"open.v1"`
// Log this message if present
Message string `protobuf:"bytes,1,opt,name=message,proto3" json:"message,omitempty"`
RolloutCommand *RolloutCommand `protobuf:"bytes,2,opt,name=rolloutCommand,proto3" json:"rolloutCommand,omitempty"`
SecretSyncCommand *SecretSyncCommand `protobuf:"bytes,3,opt,name=secretSyncCommand,proto3" json:"secretSyncCommand,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *Command) Reset() {
*x = Command{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *Command) String() string {
@@ -775,7 +748,7 @@ func (*Command) ProtoMessage() {}
func (x *Command) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -812,23 +785,20 @@ func (x *Command) GetSecretSyncCommand() *SecretSyncCommand {
}
type RolloutCommand struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
Action RolloutCommand_Action `protobuf:"varint,3,opt,name=action,proto3,enum=agent.RolloutCommand_Action" json:"action,omitempty"`
CommandId string `protobuf:"bytes,4,opt,name=command_id,json=commandId,proto3" json:"command_id,omitempty"`
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
Action RolloutCommand_Action `protobuf:"varint,3,opt,name=action,proto3,enum=agent.RolloutCommand_Action" json:"action,omitempty"`
CommandId string `protobuf:"bytes,4,opt,name=command_id,json=commandId,proto3" json:"command_id,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *RolloutCommand) Reset() {
*x = RolloutCommand{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[10]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *RolloutCommand) String() string {
@@ -839,7 +809,7 @@ func (*RolloutCommand) ProtoMessage() {}
func (x *RolloutCommand) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[10]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -883,24 +853,21 @@ func (x *RolloutCommand) GetCommandId() string {
}
type SecretSyncCommand struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
CommandId string `protobuf:"bytes,3,opt,name=command_id,json=commandId,proto3" json:"command_id,omitempty"`
Action SecretSyncCommand_Action `protobuf:"varint,4,opt,name=action,proto3,enum=agent.SecretSyncCommand_Action" json:"action,omitempty"`
Secret map[string][]byte `protobuf:"bytes,5,rep,name=secret,proto3" json:"secret,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
unknownFields protoimpl.UnknownFields
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
CommandId string `protobuf:"bytes,3,opt,name=command_id,json=commandId,proto3" json:"command_id,omitempty"`
Action SecretSyncCommand_Action `protobuf:"varint,4,opt,name=action,proto3,enum=agent.SecretSyncCommand_Action" json:"action,omitempty"`
Secret map[string][]byte `protobuf:"bytes,5,rep,name=secret,proto3" json:"secret,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
sizeCache protoimpl.SizeCache
}
func (x *SecretSyncCommand) Reset() {
*x = SecretSyncCommand{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[11]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *SecretSyncCommand) String() string {
@@ -911,7 +878,7 @@ func (*SecretSyncCommand) ProtoMessage() {}
func (x *SecretSyncCommand) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[11]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -962,22 +929,19 @@ func (x *SecretSyncCommand) GetSecret() map[string][]byte {
}
type CommandResult struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
CommandId string `protobuf:"bytes,1,opt,name=command_id,json=commandId,proto3" json:"command_id,omitempty"`
Success bool `protobuf:"varint,2,opt,name=success,proto3" json:"success,omitempty"`
Message string `protobuf:"bytes,3,opt,name=message,proto3" json:"message,omitempty"`
unknownFields protoimpl.UnknownFields
CommandId string `protobuf:"bytes,1,opt,name=command_id,json=commandId,proto3" json:"command_id,omitempty"`
Success bool `protobuf:"varint,2,opt,name=success,proto3" json:"success,omitempty"`
Message string `protobuf:"bytes,3,opt,name=message,proto3" json:"message,omitempty"`
sizeCache protoimpl.SizeCache
}
func (x *CommandResult) Reset() {
*x = CommandResult{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[12]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CommandResult) String() string {
@@ -988,7 +952,7 @@ func (*CommandResult) ProtoMessage() {}
func (x *CommandResult) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[12]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -1025,18 +989,16 @@ func (x *CommandResult) GetMessage() string {
}
type CommandResultResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *CommandResultResponse) Reset() {
*x = CommandResultResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[13]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[13]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *CommandResultResponse) String() string {
@@ -1047,7 +1009,7 @@ func (*CommandResultResponse) ProtoMessage() {}
func (x *CommandResultResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[13]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -1063,22 +1025,19 @@ func (*CommandResultResponse) Descriptor() ([]byte, []int) {
}
type StreamMetricsMessage struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Identity *Identity `protobuf:"bytes,1,opt,name=identity,proto3" json:"identity,omitempty"`
state protoimpl.MessageState `protogen:"open.v1"`
Identity *Identity `protobuf:"bytes,1,opt,name=identity,proto3" json:"identity,omitempty"`
// A list of metric entries
EnvoyMetrics []*_go.MetricFamily `protobuf:"bytes,2,rep,name=envoy_metrics,json=envoyMetrics,proto3" json:"envoy_metrics,omitempty"`
EnvoyMetrics []*_go.MetricFamily `protobuf:"bytes,2,rep,name=envoy_metrics,json=envoyMetrics,proto3" json:"envoy_metrics,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *StreamMetricsMessage) Reset() {
*x = StreamMetricsMessage{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[14]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[14]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *StreamMetricsMessage) String() string {
@@ -1089,7 +1048,7 @@ func (*StreamMetricsMessage) ProtoMessage() {}
func (x *StreamMetricsMessage) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[14]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -1119,18 +1078,16 @@ func (x *StreamMetricsMessage) GetEnvoyMetrics() []*_go.MetricFamily {
}
type StreamMetricsResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *StreamMetricsResponse) Reset() {
*x = StreamMetricsResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_agent_director_proto_msgTypes[15]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
mi := &file_agent_director_proto_msgTypes[15]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *StreamMetricsResponse) String() string {
@@ -1141,7 +1098,7 @@ func (*StreamMetricsResponse) ProtoMessage() {}
func (x *StreamMetricsResponse) ProtoReflect() protoreflect.Message {
mi := &file_agent_director_proto_msgTypes[15]
if protoimpl.UnsafeEnabled && x != nil {
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -1158,7 +1115,7 @@ func (*StreamMetricsResponse) Descriptor() ([]byte, []int) {
var File_agent_director_proto protoreflect.FileDescriptor
var file_agent_director_proto_rawDesc = []byte{
var file_agent_director_proto_rawDesc = string([]byte{
0x0a, 0x14, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x2f, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x6f, 0x72,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x05, 0x61, 0x67, 0x65, 0x6e, 0x74, 0x1a, 0x1e, 0x67,
0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64,
@@ -1341,23 +1298,23 @@ var file_agent_director_proto_rawDesc = []byte{
0x6d, 0x6d, 0x61, 0x6e, 0x64, 0x52, 0x65, 0x73, 0x75, 0x6c, 0x74, 0x52, 0x65, 0x73, 0x70, 0x6f,
0x6e, 0x73, 0x65, 0x22, 0x00, 0x42, 0x09, 0x5a, 0x07, 0x2e, 0x2f, 0x61, 0x67, 0x65, 0x6e, 0x74,
0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
})
var (
file_agent_director_proto_rawDescOnce sync.Once
file_agent_director_proto_rawDescData = file_agent_director_proto_rawDesc
file_agent_director_proto_rawDescData []byte
)
func file_agent_director_proto_rawDescGZIP() []byte {
file_agent_director_proto_rawDescOnce.Do(func() {
file_agent_director_proto_rawDescData = protoimpl.X.CompressGZIP(file_agent_director_proto_rawDescData)
file_agent_director_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_agent_director_proto_rawDesc), len(file_agent_director_proto_rawDesc)))
})
return file_agent_director_proto_rawDescData
}
var file_agent_director_proto_enumTypes = make([]protoimpl.EnumInfo, 2)
var file_agent_director_proto_msgTypes = make([]protoimpl.MessageInfo, 19)
var file_agent_director_proto_goTypes = []interface{}{
var file_agent_director_proto_goTypes = []any{
(RolloutCommand_Action)(0), // 0: agent.RolloutCommand.Action
(SecretSyncCommand_Action)(0), // 1: agent.SecretSyncCommand.Action
(*Identity)(nil), // 2: agent.Identity
@@ -1424,205 +1381,11 @@ func file_agent_director_proto_init() {
if File_agent_director_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_agent_director_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Identity); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Snapshot); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawSnapshotChunk); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Diagnostics); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RawDiagnosticsChunk); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Service); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SnapshotResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DiagnosticsResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Directive); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Command); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*RolloutCommand); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SecretSyncCommand); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CommandResult); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CommandResultResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*StreamMetricsMessage); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_agent_director_proto_msgTypes[15].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*StreamMetricsResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_agent_director_proto_rawDesc,
RawDescriptor: unsafe.Slice(unsafe.StringData(file_agent_director_proto_rawDesc), len(file_agent_director_proto_rawDesc)),
NumEnums: 2,
NumMessages: 19,
NumExtensions: 0,
@@ -1634,7 +1397,6 @@ func file_agent_director_proto_init() {
MessageInfos: file_agent_director_proto_msgTypes,
}.Build()
File_agent_director_proto = out.File
file_agent_director_proto_rawDesc = nil
file_agent_director_proto_goTypes = nil
file_agent_director_proto_depIdxs = nil
}
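A notable pattern in the regenerated file above: `file_agent_director_proto_rawDesc` is now declared as a string, and both the gzip path and the `TypeBuilder` convert it back to a `[]byte` with `unsafe.Slice(unsafe.StringData(...), len(...))`, which reinterprets the string's backing memory without copying. A minimal standalone sketch of that zero-copy conversion (illustrative only, not part of the generated file; requires Go 1.20+ for `unsafe.StringData`):

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesOf returns a []byte view of s without copying. The caller must treat
// the result as read-only: string data is immutable, and writing through the
// returned slice is undefined behavior.
func bytesOf(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	raw := "\x0a\x14agent/director.proto" // first bytes of a raw descriptor
	b := bytesOf(raw)
	fmt.Println(len(b), b[0] == 0x0a)
}
```

This lets the generated code keep the descriptor in the read-only string data segment while still passing a `[]byte` to APIs such as `protoimpl.X.CompressGZIP`, avoiding one full copy of the descriptor at init time.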
@@ -5,7 +5,7 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.3.0
// - protoc-gen-go-grpc v1.5.1
// - protoc v3.21.5
// source: agent/director.proto
@@ -20,8 +20,8 @@ import (
// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
// Requires gRPC-Go v1.32.0 or later.
const _ = grpc.SupportPackageIsVersion7
// Requires gRPC-Go v1.64.0 or later.
const _ = grpc.SupportPackageIsVersion9
const (
Director_Report_FullMethodName = "/agent.Director/Report"
@@ -41,13 +41,13 @@ type DirectorClient interface {
// method is deprecated, you should call ReportStream instead.
Report(ctx context.Context, in *Snapshot, opts ...grpc.CallOption) (*SnapshotResponse, error)
// Report a consistent Snapshot of information to the DCP.
ReportStream(ctx context.Context, opts ...grpc.CallOption) (Director_ReportStreamClient, error)
ReportStream(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[RawSnapshotChunk, SnapshotResponse], error)
// Report a consistent Diagnostics snapshot of information to the DCP.
StreamDiagnostics(ctx context.Context, opts ...grpc.CallOption) (Director_StreamDiagnosticsClient, error)
StreamDiagnostics(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[RawDiagnosticsChunk, DiagnosticsResponse], error)
// Stream metrics to the DCP.
StreamMetrics(ctx context.Context, opts ...grpc.CallOption) (Director_StreamMetricsClient, error)
StreamMetrics(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[StreamMetricsMessage, StreamMetricsResponse], error)
// Retrieve Directives from the DCP
Retrieve(ctx context.Context, in *Identity, opts ...grpc.CallOption) (Director_RetrieveClient, error)
Retrieve(ctx context.Context, in *Identity, opts ...grpc.CallOption) (grpc.ServerStreamingClient[Directive], error)
// Reports the result of a command execution to the cloud
ReportCommandResult(ctx context.Context, in *CommandResult, opts ...grpc.CallOption) (*CommandResultResponse, error)
}
@@ -62,122 +62,61 @@ func NewDirectorClient(cc grpc.ClientConnInterface) DirectorClient {
// Deprecated: Do not use.
func (c *directorClient) Report(ctx context.Context, in *Snapshot, opts ...grpc.CallOption) (*SnapshotResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(SnapshotResponse)
err := c.cc.Invoke(ctx, Director_Report_FullMethodName, in, out, opts...)
err := c.cc.Invoke(ctx, Director_Report_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *directorClient) ReportStream(ctx context.Context, opts ...grpc.CallOption) (Director_ReportStreamClient, error) {
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[0], Director_ReportStream_FullMethodName, opts...)
func (c *directorClient) ReportStream(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[RawSnapshotChunk, SnapshotResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[0], Director_ReportStream_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &directorReportStreamClient{stream}
x := &grpc.GenericClientStream[RawSnapshotChunk, SnapshotResponse]{ClientStream: stream}
return x, nil
}
type Director_ReportStreamClient interface {
Send(*RawSnapshotChunk) error
CloseAndRecv() (*SnapshotResponse, error)
grpc.ClientStream
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Director_ReportStreamClient = grpc.ClientStreamingClient[RawSnapshotChunk, SnapshotResponse]
type directorReportStreamClient struct {
grpc.ClientStream
}
func (x *directorReportStreamClient) Send(m *RawSnapshotChunk) error {
return x.ClientStream.SendMsg(m)
}
func (x *directorReportStreamClient) CloseAndRecv() (*SnapshotResponse, error) {
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
m := new(SnapshotResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *directorClient) StreamDiagnostics(ctx context.Context, opts ...grpc.CallOption) (Director_StreamDiagnosticsClient, error) {
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[1], Director_StreamDiagnostics_FullMethodName, opts...)
func (c *directorClient) StreamDiagnostics(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[RawDiagnosticsChunk, DiagnosticsResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[1], Director_StreamDiagnostics_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &directorStreamDiagnosticsClient{stream}
x := &grpc.GenericClientStream[RawDiagnosticsChunk, DiagnosticsResponse]{ClientStream: stream}
return x, nil
}
type Director_StreamDiagnosticsClient interface {
Send(*RawDiagnosticsChunk) error
CloseAndRecv() (*DiagnosticsResponse, error)
grpc.ClientStream
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Director_StreamDiagnosticsClient = grpc.ClientStreamingClient[RawDiagnosticsChunk, DiagnosticsResponse]
type directorStreamDiagnosticsClient struct {
grpc.ClientStream
}
func (x *directorStreamDiagnosticsClient) Send(m *RawDiagnosticsChunk) error {
return x.ClientStream.SendMsg(m)
}
func (x *directorStreamDiagnosticsClient) CloseAndRecv() (*DiagnosticsResponse, error) {
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
m := new(DiagnosticsResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *directorClient) StreamMetrics(ctx context.Context, opts ...grpc.CallOption) (Director_StreamMetricsClient, error) {
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[2], Director_StreamMetrics_FullMethodName, opts...)
func (c *directorClient) StreamMetrics(ctx context.Context, opts ...grpc.CallOption) (grpc.ClientStreamingClient[StreamMetricsMessage, StreamMetricsResponse], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[2], Director_StreamMetrics_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &directorStreamMetricsClient{stream}
x := &grpc.GenericClientStream[StreamMetricsMessage, StreamMetricsResponse]{ClientStream: stream}
return x, nil
}
type Director_StreamMetricsClient interface {
Send(*StreamMetricsMessage) error
CloseAndRecv() (*StreamMetricsResponse, error)
grpc.ClientStream
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Director_StreamMetricsClient = grpc.ClientStreamingClient[StreamMetricsMessage, StreamMetricsResponse]
type directorStreamMetricsClient struct {
grpc.ClientStream
}
func (x *directorStreamMetricsClient) Send(m *StreamMetricsMessage) error {
return x.ClientStream.SendMsg(m)
}
func (x *directorStreamMetricsClient) CloseAndRecv() (*StreamMetricsResponse, error) {
if err := x.ClientStream.CloseSend(); err != nil {
return nil, err
}
m := new(StreamMetricsResponse)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
func (c *directorClient) Retrieve(ctx context.Context, in *Identity, opts ...grpc.CallOption) (Director_RetrieveClient, error) {
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[3], Director_Retrieve_FullMethodName, opts...)
func (c *directorClient) Retrieve(ctx context.Context, in *Identity, opts ...grpc.CallOption) (grpc.ServerStreamingClient[Directive], error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
stream, err := c.cc.NewStream(ctx, &Director_ServiceDesc.Streams[3], Director_Retrieve_FullMethodName, cOpts...)
if err != nil {
return nil, err
}
x := &directorRetrieveClient{stream}
x := &grpc.GenericClientStream[Identity, Directive]{ClientStream: stream}
if err := x.ClientStream.SendMsg(in); err != nil {
return nil, err
}
@@ -187,26 +126,13 @@ func (c *directorClient) Retrieve(ctx context.Context, in *Identity, opts ...grp
return x, nil
}
type Director_RetrieveClient interface {
Recv() (*Directive, error)
grpc.ClientStream
}
type directorRetrieveClient struct {
grpc.ClientStream
}
func (x *directorRetrieveClient) Recv() (*Directive, error) {
m := new(Directive)
if err := x.ClientStream.RecvMsg(m); err != nil {
return nil, err
}
return m, nil
}
// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
type Director_RetrieveClient = grpc.ServerStreamingClient[Directive]
func (c *directorClient) ReportCommandResult(ctx context.Context, in *CommandResult, opts ...grpc.CallOption) (*CommandResultResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CommandResultResponse)
err := c.cc.Invoke(ctx, Director_ReportCommandResult_FullMethodName, in, out, opts...)
err := c.cc.Invoke(ctx, Director_ReportCommandResult_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
@@ -215,48 +141,52 @@ func (c *directorClient) ReportCommandResult(ctx context.Context, in *CommandRes
// DirectorServer is the server API for Director service.
// All implementations must embed UnimplementedDirectorServer
// for forward compatibility
// for forward compatibility.
type DirectorServer interface {
// Deprecated: Do not use.
// Report a consistent Snapshot of information to the DCP. This
// method is deprecated, you should call ReportStream instead.
Report(context.Context, *Snapshot) (*SnapshotResponse, error)
// Report a consistent Snapshot of information to the DCP.
ReportStream(Director_ReportStreamServer) error
ReportStream(grpc.ClientStreamingServer[RawSnapshotChunk, SnapshotResponse]) error
// Report a consistent Diagnostics snapshot of information to the DCP.
StreamDiagnostics(Director_StreamDiagnosticsServer) error
StreamDiagnostics(grpc.ClientStreamingServer[RawDiagnosticsChunk, DiagnosticsResponse]) error
// Stream metrics to the DCP.
StreamMetrics(Director_StreamMetricsServer) error
StreamMetrics(grpc.ClientStreamingServer[StreamMetricsMessage, StreamMetricsResponse]) error
// Retrieve Directives from the DCP
Retrieve(*Identity, Director_RetrieveServer) error
Retrieve(*Identity, grpc.ServerStreamingServer[Directive]) error
// Reports the result of a command execution to the cloud
ReportCommandResult(context.Context, *CommandResult) (*CommandResultResponse, error)
mustEmbedUnimplementedDirectorServer()
}
// UnimplementedDirectorServer must be embedded to have forward compatible implementations.
type UnimplementedDirectorServer struct {
}
// UnimplementedDirectorServer must be embedded to have
// forward compatible implementations.
//
// NOTE: this should be embedded by value instead of pointer to avoid a nil
// pointer dereference when methods are called.
type UnimplementedDirectorServer struct{}
func (UnimplementedDirectorServer) Report(context.Context, *Snapshot) (*SnapshotResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Report not implemented")
}
func (UnimplementedDirectorServer) ReportStream(Director_ReportStreamServer) error {
func (UnimplementedDirectorServer) ReportStream(grpc.ClientStreamingServer[RawSnapshotChunk, SnapshotResponse]) error {
return status.Errorf(codes.Unimplemented, "method ReportStream not implemented")
}
func (UnimplementedDirectorServer) StreamDiagnostics(Director_StreamDiagnosticsServer) error {
func (UnimplementedDirectorServer) StreamDiagnostics(grpc.ClientStreamingServer[RawDiagnosticsChunk, DiagnosticsResponse]) error {
return status.Errorf(codes.Unimplemented, "method StreamDiagnostics not implemented")
}
func (UnimplementedDirectorServer) StreamMetrics(Director_StreamMetricsServer) error {
func (UnimplementedDirectorServer) StreamMetrics(grpc.ClientStreamingServer[StreamMetricsMessage, StreamMetricsResponse]) error {
return status.Errorf(codes.Unimplemented, "method StreamMetrics not implemented")
}
func (UnimplementedDirectorServer) Retrieve(*Identity, Director_RetrieveServer) error {
func (UnimplementedDirectorServer) Retrieve(*Identity, grpc.ServerStreamingServer[Directive]) error {
return status.Errorf(codes.Unimplemented, "method Retrieve not implemented")
}
func (UnimplementedDirectorServer) ReportCommandResult(context.Context, *CommandResult) (*CommandResultResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ReportCommandResult not implemented")
}
func (UnimplementedDirectorServer) mustEmbedUnimplementedDirectorServer() {}
func (UnimplementedDirectorServer) testEmbeddedByValue() {}
// UnsafeDirectorServer may be embedded to opt out of forward compatibility for this service.
// Use of this interface is not recommended, as added methods to DirectorServer will
@@ -266,6 +196,13 @@ type UnsafeDirectorServer interface {
}
func RegisterDirectorServer(s grpc.ServiceRegistrar, srv DirectorServer) {
// If the following call panics, it indicates UnimplementedDirectorServer was
// embedded by pointer and is nil. This will cause panics if an
// unimplemented method is ever invoked, so we test this at initialization
// time to prevent it from happening at runtime later due to I/O.
if t, ok := srv.(interface{ testEmbeddedByValue() }); ok {
t.testEmbeddedByValue()
}
s.RegisterService(&Director_ServiceDesc, srv)
}
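The registration guard above relies on the `testEmbeddedByValue` marker method that `UnimplementedDirectorServer` gains in the v1.5.1 codegen. A sketch of how that check distinguishes value embedding from a nil pointer embed (the server types here are simplified stand-ins for the generated ones, and the `recover` is added only so the demo can report both outcomes; the real guard deliberately lets the panic surface at registration time):

```go
package main

import "fmt"

// Simplified stand-in for the generated UnimplementedDirectorServer.
type UnimplementedDirectorServer struct{}

func (UnimplementedDirectorServer) testEmbeddedByValue() {}

// goodServer embeds by value: the marker method works on a zero value.
type goodServer struct{ UnimplementedDirectorServer }

// badServer embeds by pointer: with a nil embed, calling any promoted
// method dereferences nil and panics.
type badServer struct{ *UnimplementedDirectorServer }

// checkEmbedding mimics the guard in the generated RegisterDirectorServer.
func checkEmbedding(srv interface{}) (ok bool) {
	defer func() {
		if recover() != nil {
			ok = false // nil pointer embed detected
		}
	}()
	if t, has := srv.(interface{ testEmbeddedByValue() }); has {
		t.testEmbeddedByValue()
	}
	return true
}

func main() {
	fmt.Println(checkEmbedding(goodServer{})) // value embed passes
	fmt.Println(checkEmbedding(badServer{}))  // nil pointer embed panics, caught here
}
```

Running the check at `RegisterDirectorServer` time converts a latent per-RPC crash into an immediate, easy-to-diagnose failure at startup.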
@@ -288,103 +225,36 @@ func _Director_Report_Handler(srv interface{}, ctx context.Context, dec func(int
 }

 func _Director_ReportStream_Handler(srv interface{}, stream grpc.ServerStream) error {
-	return srv.(DirectorServer).ReportStream(&directorReportStreamServer{stream})
+	return srv.(DirectorServer).ReportStream(&grpc.GenericServerStream[RawSnapshotChunk, SnapshotResponse]{ServerStream: stream})
 }

-type Director_ReportStreamServer interface {
-	SendAndClose(*SnapshotResponse) error
-	Recv() (*RawSnapshotChunk, error)
-	grpc.ServerStream
-}
-
-type directorReportStreamServer struct {
-	grpc.ServerStream
-}
-
-func (x *directorReportStreamServer) SendAndClose(m *SnapshotResponse) error {
-	return x.ServerStream.SendMsg(m)
-}
-
-func (x *directorReportStreamServer) Recv() (*RawSnapshotChunk, error) {
-	m := new(RawSnapshotChunk)
-	if err := x.ServerStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Director_ReportStreamServer = grpc.ClientStreamingServer[RawSnapshotChunk, SnapshotResponse]
 func _Director_StreamDiagnostics_Handler(srv interface{}, stream grpc.ServerStream) error {
-	return srv.(DirectorServer).StreamDiagnostics(&directorStreamDiagnosticsServer{stream})
+	return srv.(DirectorServer).StreamDiagnostics(&grpc.GenericServerStream[RawDiagnosticsChunk, DiagnosticsResponse]{ServerStream: stream})
 }

-type Director_StreamDiagnosticsServer interface {
-	SendAndClose(*DiagnosticsResponse) error
-	Recv() (*RawDiagnosticsChunk, error)
-	grpc.ServerStream
-}
-
-type directorStreamDiagnosticsServer struct {
-	grpc.ServerStream
-}
-
-func (x *directorStreamDiagnosticsServer) SendAndClose(m *DiagnosticsResponse) error {
-	return x.ServerStream.SendMsg(m)
-}
-
-func (x *directorStreamDiagnosticsServer) Recv() (*RawDiagnosticsChunk, error) {
-	m := new(RawDiagnosticsChunk)
-	if err := x.ServerStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Director_StreamDiagnosticsServer = grpc.ClientStreamingServer[RawDiagnosticsChunk, DiagnosticsResponse]
 func _Director_StreamMetrics_Handler(srv interface{}, stream grpc.ServerStream) error {
-	return srv.(DirectorServer).StreamMetrics(&directorStreamMetricsServer{stream})
+	return srv.(DirectorServer).StreamMetrics(&grpc.GenericServerStream[StreamMetricsMessage, StreamMetricsResponse]{ServerStream: stream})
 }

-type Director_StreamMetricsServer interface {
-	SendAndClose(*StreamMetricsResponse) error
-	Recv() (*StreamMetricsMessage, error)
-	grpc.ServerStream
-}
-
-type directorStreamMetricsServer struct {
-	grpc.ServerStream
-}
-
-func (x *directorStreamMetricsServer) SendAndClose(m *StreamMetricsResponse) error {
-	return x.ServerStream.SendMsg(m)
-}
-
-func (x *directorStreamMetricsServer) Recv() (*StreamMetricsMessage, error) {
-	m := new(StreamMetricsMessage)
-	if err := x.ServerStream.RecvMsg(m); err != nil {
-		return nil, err
-	}
-	return m, nil
-}
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Director_StreamMetricsServer = grpc.ClientStreamingServer[StreamMetricsMessage, StreamMetricsResponse]
 func _Director_Retrieve_Handler(srv interface{}, stream grpc.ServerStream) error {
 	m := new(Identity)
 	if err := stream.RecvMsg(m); err != nil {
 		return err
 	}
-	return srv.(DirectorServer).Retrieve(m, &directorRetrieveServer{stream})
+	return srv.(DirectorServer).Retrieve(m, &grpc.GenericServerStream[Identity, Directive]{ServerStream: stream})
 }

-type Director_RetrieveServer interface {
-	Send(*Directive) error
-	grpc.ServerStream
-}
-
-type directorRetrieveServer struct {
-	grpc.ServerStream
-}
-
-func (x *directorRetrieveServer) Send(m *Directive) error {
-	return x.ServerStream.SendMsg(m)
-}
+// This type alias is provided for backwards compatibility with existing code that references the prior non-generic stream type by name.
+type Director_RetrieveServer = grpc.ServerStreamingServer[Directive]
func _Director_ReportCommandResult_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CommandResult)
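For downstream code, the aliases above mean the old exported stream-interface names keep compiling against the new generic stream types. A rough sketch of why, using local stand-ins rather than the real grpc package (all names below are hypothetical):

```go
package main

import "fmt"

// clientStreamingServer is a stand-in for a generic stream interface that
// replaces several hand-written per-RPC stream interfaces.
type clientStreamingServer[Req any, Res any] interface {
	Recv() (*Req, error)
	SendAndClose(*Res) error
}

type rawSnapshotChunk struct{ data string }
type snapshotResponse struct{ ack bool }

// The alias keeps the old exported name compiling against the generic form,
// mirroring the Director_ReportStreamServer alias in the diff above.
type reportStreamServer = clientStreamingServer[rawSnapshotChunk, snapshotResponse]

// fakeStream implements the generic interface for the example.
type fakeStream struct{ chunks []rawSnapshotChunk }

func (f *fakeStream) Recv() (*rawSnapshotChunk, error) {
	if len(f.chunks) == 0 {
		return nil, fmt.Errorf("EOF")
	}
	c := f.chunks[0]
	f.chunks = f.chunks[1:]
	return &c, nil
}

func (f *fakeStream) SendAndClose(*snapshotResponse) error { return nil }

// oldStyleHandler is written against the old name and still compiles,
// because a type alias and its target are the same type.
func oldStyleHandler(s reportStreamServer) int {
	n := 0
	for {
		if _, err := s.Recv(); err != nil {
			// end of stream in this sketch; close with a response
			_ = s.SendAndClose(&snapshotResponse{ack: true})
			return n
		}
		n++
	}
}

func main() {
	st := &fakeStream{chunks: []rawSnapshotChunk{{"a"}, {"b"}}}
	fmt.Println(oldStyleHandler(st)) // 2
}
```

Because `=` declares an alias rather than a new named type, no conversion or wrapper is needed at call sites that predate the change.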

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: contrib/envoy/extensions/filters/http/golang/v3alpha/golang.proto

 package v3alpha

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: envoy/admin/v2alpha/certs.proto

 package v2alpha

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: envoy/admin/v2alpha/clusters.proto

 package v2alpha

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: envoy/admin/v2alpha/config_dump.proto

 package v2alpha

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: envoy/admin/v2alpha/listeners.proto

 package v2alpha

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: envoy/admin/v2alpha/memory.proto

 package v2alpha

@@ -1,7 +1,7 @@
 // Code generated by protoc-gen-go. DO NOT EDIT.
 // versions:
 // 	protoc-gen-go v1.30.0
-// 	protoc        v4.24.4
+// 	protoc        v5.26.1
 // source: envoy/admin/v2alpha/metrics.proto

 package v2alpha

Some files were not shown because too many files have changed in this diff.