Compare commits

No commits in common. "main" and "release/v2.290.0" have entirely different histories.

main ... release/v2.290.0
@@ -3,7 +3,7 @@
 "build": {
 "dockerfile": "Dockerfile",
 "args": {
-"DEV_VERSION": "v47",
+"DEV_VERSION": "v45",
 "http_proxy": "${localEnv:http_proxy}",
 "https_proxy": "${localEnv:https_proxy}"
 }
@@ -23,15 +23,7 @@
 "zxh404.vscode-proto3"
 ],
 "settings": {
-"files.insertFinalNewline": true,
-"[git-commit]": {
-"editor.rulers": [
-72,
-80
-],
-"editor.wordWrap": "wordWrapColumn",
-"editor.wordWrapColumn": 80
-}
+"files.insertFinalNewline": true
 }
 }
 },
@@ -1,156 +0,0 @@
-# Linkerd2 Proxy Copilot Instructions
-
-## Code Generation
-
-- Code MUST pass `cargo fmt`.
-- Code MUST pass `cargo clippy --all-targets --all-features -- -D warnings`.
-- Markdown MUST pass `markdownlint-cli2`.
-- Prefer `?` for error propagation.
-- Avoid `unwrap()` and `expect()` outside tests.
-- Use `tracing` crate macros (`tracing::info!`, etc.) for structured logging.
-
-### Comments
-
-Comments should explain **why**, not **what**. Focus on high-level rationale and
-design intent at the function or block level, rather than line-by-line
-descriptions.
-
-- Use comments to capture:
-  - System-facing or interface-level concerns
-  - Key invariants, preconditions, and postconditions
-  - Design decisions and trade-offs
-  - Cross-references to architecture or design documentation
-- Avoid:
-  - Line-by-line commentary explaining obvious code
-  - Restating what the code already clearly expresses
-- For public APIs:
-  - Use `///` doc comments to describe the contract, behavior, parameters, and
-    usage examples
-- For internal rationale:
-  - Use `//` comments sparingly to note non-obvious reasoning or edge-case
-    handling
-- Be neutral and factual.
-
-### Rust File Organization
-
-For Rust source files, enforce this layout:
-
-1. **Non‑public imports**
-   - Declare all `use` statements for private/internal crates first.
-   - Group imports to avoid duplicates and do **not** add blank lines between
-     `use` statements.
-
-2. **Module declarations**
-   - List all `mod` declarations.
-
-3. **Re‑exports**
-   - Follow with `pub use` statements.
-
-4. **Type definitions**
-   - Define `struct`, `enum`, `type`, and `trait` declarations.
-   - Sort by visibility: `pub` first, then `pub(crate)`, then private.
-   - Public types should be documented with `///` comments.
-
-5. **Impl blocks**
-   - Implement methods in the same order as types above.
-   - Precede each type’s `impl` block with a header comment: `// === <TypeName> ===`
-
-6. **Tests**
-   - End with a `tests` module guarded by `#[cfg(test)]`.
-   - If the in‑file test module exceeds 100 lines, move it to
-     `tests/<filename>.rs` as a child integration‑test module.
-
-## Test Generation
-
-- Async tests MUST use `tokio::test`.
-- Synchronous tests use `#[test]`.
-- Include at least one failing‑edge‑case test per public function.
-- Use `tracing::info!` for logging in tests, usually in place of comments.
-
-## Code Review
-
-### Rust
-
-- Point out any `unsafe` blocks and justify their safety.
-- Flag functions >50 LOC for refactor suggestions.
-- Highlight missing docs on public items.
-
-### Markdown
-
-- Use `markdownlint-cli2` to check for linting errors.
-- Lines SHOULD be wrapped at 80 characters.
-- Fenced code blocks MUST include a language identifier.
-
-### Copilot Instructions
-
-- Start each instruction with an imperative, present‑tense verb.
-- Keep each instruction under 120 characters.
-- Provide one directive per instruction; avoid combining multiple ideas.
-- Use "MUST" and "SHOULD" sparingly to emphasize critical rules.
-- Avoid semicolons and complex punctuation within bullets.
-- Do not reference external links, documents, or specific coding standards.
-
-## Commit Messages
-
-Commits follow the Conventional Commits specification:
-
-### Subject
-
-Subjects are in the form: `<type>[optional scope]: <description>`
-
-- **Type**: feat, fix, docs, refactor, test, chore, ci, build, perf, revert
-  (others by agreement)
-- **Scope**: optional, lowercase; may include `/` to denote sub‑modules (e.g.
-  `http/detect`)
-- **Description**: imperative mood, present tense, no trailing period
-  - MUST be less than 72 characters
-  - Omit needless words!
-
-### Body
-
-Non-trivial commits SHOULD include a body summarizing the change.
-
-- Explain *why* the change was needed.
-- Describe *what* was done at a high level.
-- Use present-tense narration.
-- Use complete sentences, paragraphs, and punctuation.
-- Preceded by a blank line.
-- Wrapped at 80 characters.
-- Omit needless words!
-
-### Breaking changes
-
-If the change introduces a backwards-incompatible change, it MUST be marked as
-such.
-
-- Indicated by `!` after the type/scope (e.g. `feat(inbound)!: …`)
-- Optionally including a `BREAKING CHANGE:` section in the footer explaining the
-  change in behavior.
-
-### Examples
-
-```text
-feat(auth): add JWT refresh endpoint
-
-There is currently no way to refresh a JWT token.
-
-This exposes a new `/refresh` route that returns a refreshed token.
-```
-
-```text
-feat(api)!: remove deprecated v1 routes
-
-The `/v1/*` endpoints have been deprecated for a long time and are no
-longer called by clients.
-
-This change removes the `/v1/*` endpoints and all associated code,
-including integration tests and documentation.
-
-BREAKING CHANGE: The previously-deprecated `/v1/*` endpoints were removed.
-```
-
-## Pull Requests
-
-- The subject line MUST be in the conventional commit format.
-- Auto‑generate a PR body summarizing the problem, solution, and verification steps.
-- List breaking changes under a separate **Breaking Changes** heading.
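The guide deleted above mandates a specific Rust file layout. As a quick illustration, here is a minimal sketch of a file following it; every name in it (`Name`, the `parse` module) is hypothetical, and the `tracing`/`tokio::test` rules are omitted so the sketch stays dependency-free:

```rust
use std::fmt;

mod parse {
    //! Declared after imports; inlined here so the sketch compiles standalone.
    pub fn non_empty(s: &str) -> Option<&str> {
        if s.is_empty() {
            None
        } else {
            Some(s)
        }
    }
}

pub use self::parse::non_empty;

/// Public types come first within each group and carry `///` docs.
#[derive(Debug, PartialEq)]
pub struct Name(String);

/// Error returned when a name is empty.
#[derive(Debug, PartialEq)]
pub struct EmptyName;

// === Name ===

impl Name {
    /// Prefer `?`-style propagation over `unwrap()`/`expect()`.
    pub fn new(s: &str) -> Result<Self, EmptyName> {
        let s = non_empty(s).ok_or(EmptyName)?;
        Ok(Self(s.to_string()))
    }
}

impl fmt::Display for Name {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        self.0.fmt(f)
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // One failing-edge-case test per public function, per the guide.
    #[test]
    fn empty_name_is_rejected() {
        assert_eq!(Name::new(""), Err(EmptyName));
    }
}
```

The `// === Name ===` header and the trailing `#[cfg(test)]` module mirror items 5 and 6 of the layout.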
@@ -21,7 +21,6 @@ updates:
 groups:
 boring:
 patterns:
-- "tokio-boring"
 - "boring*"
 futures:
 patterns:
@@ -44,9 +43,6 @@ updates:
 - "tokio-rustls"
 - "rustls*"
 - "ring"
-symbolic:
-patterns:
-- "symbolic-*"
 tracing:
 patterns:
 - "tracing*"
@@ -22,13 +22,13 @@ permissions:
 
 jobs:
 build:
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: ghcr.io/linkerd/dev:v47-rust
+container: ghcr.io/linkerd/dev:v45-rust
 timeout-minutes: 20
 continue-on-error: true
 steps:
 - run: rustup toolchain install --profile=minimal beta
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
 - run: just toolchain=beta fetch
 - run: just toolchain=beta build
@@ -21,11 +21,11 @@ env:
 jobs:
 meta:
 timeout-minutes: 5
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - id: changed
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
+uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
 with:
 files: |
 .codecov.yml
@@ -40,19 +40,19 @@ jobs:
 codecov:
 needs: meta
 if: (github.event_name == 'push' && github.ref == 'refs/heads/main') || needs.meta.outputs.any_changed == 'true'
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 30
 container:
-image: docker://ghcr.io/linkerd/dev:v47-rust
+image: docker://ghcr.io/linkerd/dev:v45-rust
 options: --security-opt seccomp=unconfined # 🤷
 env:
 CXX: "/usr/bin/clang++-19"
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
+- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6
 - run: cargo tarpaulin --locked --workspace --exclude=linkerd2-proxy --exclude=linkerd-transport-header --exclude=opencensus-proto --exclude=spire-proto --no-run
 - run: cargo tarpaulin --locked --workspace --exclude=linkerd2-proxy --exclude=linkerd-transport-header --exclude=opencensus-proto --exclude=spire-proto --skip-clean --ignore-tests --no-fail-fast --out=Xml
 # Some tests are especially flakey in coverage tests. That's fine. We
 # only really care to measure how much of our codebase is covered.
 continue-on-error: true
-- uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24
+- uses: codecov/codecov-action@0565863a31f2c772f9f0395002a31e3f06189574
@@ -26,13 +26,13 @@ permissions:
 jobs:
 list-changed:
 timeout-minutes: 3
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: docker://rust:1.88.0
+container: docker://rust:1.83.0
 steps:
 - run: apt update && apt install -y jo
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
-- uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
+- uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
 id: changed-files
 - name: list changed crates
 id: list-changed
@@ -47,15 +47,15 @@ jobs:
 build:
 needs: [list-changed]
 timeout-minutes: 40
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: docker://rust:1.88.0
+container: docker://rust:1.83.0
 strategy:
 matrix:
 dir: ${{ fromJson(needs.list-changed.outputs.dirs) }}
 steps:
 - run: rustup toolchain add nightly
 - run: cargo install cargo-fuzz
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
 - working-directory: ${{matrix.dir}}
 run: cargo +nightly fetch
@@ -12,9 +12,9 @@ on:
 jobs:
 markdownlint:
 timeout-minutes: 5
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-- uses: DavidAnson/markdownlint-cli2-action@992badcdf24e3b8eb7e87ff9287fe931bcb00c6e
+- uses: DavidAnson/markdownlint-cli2-action@05f32210e84442804257b2a6f20b273450ec8265
 with:
 globs: "**/*.md"
@@ -22,13 +22,13 @@ permissions:
 
 jobs:
 build:
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: ghcr.io/linkerd/dev:v47-rust
+container: ghcr.io/linkerd/dev:v45-rust
 timeout-minutes: 20
 continue-on-error: true
 steps:
 - run: rustup toolchain install --profile=minimal nightly
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
 - run: just toolchain=nightly fetch
 - run: just toolchain=nightly profile=release build
@@ -14,24 +14,24 @@ concurrency:
 jobs:
 meta:
 timeout-minutes: 5
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - id: build
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
+uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
 with:
 files: |
 .github/workflows/pr.yml
 justfile
 Dockerfile
 - id: actions
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
+uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
 with:
 files: |
 .github/workflows/**
 .devcontainer/*
 - id: cargo
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
+uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
 with:
 files_ignore: "Cargo.toml"
 files: |
@@ -40,7 +40,7 @@ jobs:
 if: steps.cargo.outputs.any_changed == 'true'
 run: ./.github/list-crates.sh ${{ steps.cargo.outputs.all_changed_files }}
 - id: rust
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
+uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
 with:
 files: |
 **/*.rs
@@ -57,7 +57,7 @@ jobs:
 info:
 timeout-minutes: 3
 needs: meta
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
 - name: Info
 run: |
@@ -74,25 +74,25 @@ jobs:
 actions:
 needs: meta
 if: needs.meta.outputs.actions_changed == 'true'
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: linkerd/dev/actions/setup-tools@v47
+- uses: linkerd/dev/actions/setup-tools@v45
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: just action-lint
 - run: just action-dev-check
 
 rust:
 needs: meta
 if: needs.meta.outputs.cargo_changed == 'true' || needs.meta.outputs.rust_changed == 'true'
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: ghcr.io/linkerd/dev:v47-rust
+container: ghcr.io/linkerd/dev:v45-rust
 permissions:
 contents: read
 timeout-minutes: 20
 steps:
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
+- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6
 - run: just fetch
 - run: cargo deny --all-features check bans licenses sources
 - run: just check-fmt
@@ -107,15 +107,15 @@ jobs:
 needs: meta
 if: needs.meta.outputs.cargo_changed == 'true'
 timeout-minutes: 20
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: ghcr.io/linkerd/dev:v47-rust
+container: ghcr.io/linkerd/dev:v45-rust
 strategy:
 matrix:
 crate: ${{ fromJson(needs.meta.outputs.cargo_crates) }}
 steps:
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
-- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
+- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6
 - run: just fetch
 - run: just check-crate ${{ matrix.crate }}
 
@@ -123,11 +123,11 @@ jobs:
 needs: meta
 if: needs.meta.outputs.cargo_changed == 'true' || needs.meta.outputs.rust_changed == 'true'
 timeout-minutes: 20
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 env:
 WAIT_TIMEOUT: 2m
 steps:
-- uses: linkerd/dev/actions/setup-tools@v47
+- uses: linkerd/dev/actions/setup-tools@v45
 - name: scurl https://run.linkerd.io/install-edge | sh
 run: |
 scurl https://run.linkerd.io/install-edge | sh
@@ -136,7 +136,7 @@ jobs:
 tag=$(linkerd version --client --short)
 echo "linkerd $tag"
 echo "LINKERD_TAG=$tag" >> "$GITHUB_ENV"
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: just docker
 - run: just k3d-create
 - run: just k3d-load-linkerd
@@ -149,7 +149,7 @@ jobs:
 timeout-minutes: 3
 needs: [meta, actions, rust, rust-crates, linkerd-install]
 if: always()
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 
 permissions:
 contents: write
@@ -168,7 +168,7 @@ jobs:
 if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
 run: exit 1
 
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 if: needs.meta.outputs.is_dependabot == 'true' && needs.meta.outputs.any_changed == 'true'
 - name: "Merge dependabot changes"
 if: needs.meta.outputs.is_dependabot == 'true' && needs.meta.outputs.any_changed == 'true'
@@ -13,7 +13,7 @@ concurrency:
 jobs:
 last-release:
 if: github.repository == 'linkerd/linkerd2-proxy' # Don't run this in forks.
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 5
 env:
 GH_REPO: ${{ github.repository }}
@@ -41,10 +41,10 @@ jobs:
 
 last-commit:
 needs: last-release
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 5
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - name: Check if the most recent commit is after the last release
 id: recency
 env:
@@ -62,7 +62,7 @@ jobs:
 trigger-release:
 needs: [last-release, last-commit]
 if: needs.last-release.outputs.recent == 'false' && needs.last-commit.outputs.after-release == 'true'
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 5
 permissions:
 actions: write
@@ -46,7 +46,6 @@ on:
 default: true
 
 env:
-CARGO: "cargo auditable"
 CARGO_INCREMENTAL: 0
 CARGO_NET_RETRY: 10
 RUSTFLAGS: "-D warnings -A deprecated --cfg tokio_unstable"
@@ -59,25 +58,9 @@ concurrency:
 jobs:
 meta:
 timeout-minutes: 5
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
-if: github.event_name == 'pull_request'
-- id: workflow
-if: github.event_name == 'pull_request'
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
-with:
-files: |
-.github/workflows/release.yml
-- id: build
-if: github.event_name == 'pull_request'
-uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
-with:
-files: |
-justfile
-Cargo.toml
-
-- id: version
+- id: meta
 env:
 VERSION: ${{ inputs.version }}
 shell: bash
@@ -85,45 +68,47 @@ jobs:
 set -euo pipefail
 shopt -s extglob
 if [[ "$GITHUB_EVENT_NAME" == pull_request ]]; then
-echo version="0.0.0-test.${GITHUB_SHA:0:7}" >> "$GITHUB_OUTPUT"
+echo version="0.0.0-test.${GITHUB_SHA:0:7}"
+echo archs='["amd64"]'
+echo oses='["linux"]'
 exit 0
-fi
+fi >> "$GITHUB_OUTPUT"
 if ! [[ "$VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z-]+)?(\+[0-9A-Za-z-]+)?$ ]]; then
 echo "Invalid version: $VERSION" >&2
 exit 1
 fi
-echo version="${VERSION#v}" >> "$GITHUB_OUTPUT"
+( echo version="${VERSION#v}"
+echo archs='["amd64", "arm64", "arm"]'
-- id: platform
-shell: bash
-env:
-WORKFLOW_CHANGED: ${{ steps.workflow.outputs.any_changed }}
-run: |
-if [[ "$GITHUB_EVENT_NAME" == pull_request && "$WORKFLOW_CHANGED" != 'true' ]]; then
-( echo archs='["amd64"]'
-echo oses='["linux"]' ) >> "$GITHUB_OUTPUT"
-exit 0
-fi
-( echo archs='["amd64", "arm64"]'
 echo oses='["linux", "windows"]'
 ) >> "$GITHUB_OUTPUT"
 
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
+if: github.event_name == 'pull_request'
+- id: changed
+if: github.event_name == 'pull_request'
+uses: tj-actions/changed-files@b74df86ccb65173a8e33ba5492ac1a2ca6b216fd
+with:
+files: |
+.github/workflows/release.yml
+justfile
+Cargo.toml
 
 outputs:
-archs: ${{ steps.platform.outputs.archs }}
+archs: ${{ steps.meta.outputs.archs }}
-oses: ${{ steps.platform.outputs.oses }}
+oses: ${{ steps.meta.outputs.oses }}
-version: ${{ steps.version.outputs.version }}
+version: ${{ steps.meta.outputs.version }}
-package: ${{ github.event_name == 'workflow_dispatch' || steps.build.outputs.any_changed == 'true' || steps.workflow.outputs.any_changed == 'true' }}
+package: ${{ github.event_name == 'workflow_dispatch' || steps.changed.outputs.any_changed == 'true' }}
 profile: ${{ inputs.profile || 'release' }}
 publish: ${{ inputs.publish }}
 ref: ${{ inputs.ref || github.sha }}
-tag: "${{ inputs.tag-prefix || 'release/' }}v${{ steps.version.outputs.version }}"
+tag: "${{ inputs.tag-prefix || 'release/' }}v${{ steps.meta.outputs.version }}"
 prerelease: ${{ inputs.prerelease }}
 draft: ${{ inputs.draft }}
 latest: ${{ inputs.latest }}
 
 info:
 needs: meta
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 3
 steps:
 - name: Inputs
@@ -149,13 +134,15 @@ jobs:
 exclude:
 - os: windows
 arch: arm64
+- os: windows
+arch: arm
 
 # If we're not actually building on a release tag, don't short-circuit on
 # errors. This helps us know whether a failure is platform-specific.
 continue-on-error: ${{ needs.meta.outputs.publish != 'true' }}
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 40
-container: docker://ghcr.io/linkerd/dev:v47-rust-musl
+container: docker://ghcr.io/linkerd/dev:v45-rust-musl
 env:
 LINKERD2_PROXY_VENDOR: ${{ github.repository_owner }}
 LINKERD2_PROXY_VERSION: ${{ needs.meta.outputs.version }}
@@ -163,19 +150,15 @@ jobs:
 # TODO: add to dev image
 - name: Install MiniGW
 if: matrix.os == 'windows'
-run: apt-get update && apt-get install -y mingw-w64
+run: apt-get update && apt-get install mingw-w64 -y
-- name: Install cross compilation toolchain
-if: matrix.arch == 'arm64'
-run: apt-get update && apt-get install -y binutils-aarch64-linux-gnu
 
 - name: Configure git
 run: git config --global --add safe.directory "$PWD" # actions/runner#2033
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 with:
 ref: ${{ needs.meta.outputs.ref }}
-- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
+- uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6
 with:
-key: ${{ matrix.os }}-${{ matrix.arch }}
+key: ${{ matrix.arch }}
 - run: just fetch
 - run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} rustup
 - run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} profile=${{ needs.meta.outputs.profile }} build
@@ -187,7 +170,7 @@ jobs:
 
 publish:
 needs: [meta, package]
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 timeout-minutes: 5
 permissions:
 actions: write
@@ -204,13 +187,13 @@ jobs:
 git config --global user.name "$GITHUB_USERNAME"
 git config --global user.email "$GITHUB_USERNAME"@users.noreply.github.com
 # Tag the release.
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 with:
 token: ${{ secrets.LINKERD2_PROXY_GITHUB_TOKEN || github.token }}
 ref: ${{ needs.meta.outputs.ref }}
 - run: git tag -a -m "$VERSION" "$TAG"
 # Fetch the artifacts.
-- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
+- uses: actions/download-artifact@95815c38cf2ff2164869cbab79da8d1f422bc89e
 with:
 path: artifacts
 - run: du -h artifacts/**/*
@@ -218,7 +201,7 @@ jobs:
 - if: needs.meta.outputs.publish == 'true'
 run: git push origin "$TAG"
 - if: needs.meta.outputs.publish == 'true'
-uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8
+uses: softprops/action-gh-release@c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda
 with:
 name: ${{ env.VERSION }}
 tag_name: ${{ env.TAG }}
@@ -242,7 +225,7 @@ jobs:
 needs: publish
 if: always()
 timeout-minutes: 3
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
 - name: Results
 run: |
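For reference, the version check that both branches share in the `meta` job accepts `v`-prefixed semver-like strings and strips the prefix before exporting it. A hedged Rust equivalent using the `regex` crate (the function name and crate choice are illustrative, not part of the workflow):

```rust
use regex::Regex;

/// Mirrors the workflow's bash check: `v` + MAJOR.MINOR.PATCH with optional
/// pre-release and build-metadata suffixes; returns the version sans `v`.
fn validate_version(version: &str) -> Result<&str, String> {
    let re = Regex::new(r"^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z-]+)?(\+[0-9A-Za-z-]+)?$")
        .expect("static pattern must compile");
    if !re.is_match(version) {
        return Err(format!("Invalid version: {version}"));
    }
    // Equivalent of bash's ${VERSION#v}: drop the leading `v`.
    Ok(version.strip_prefix('v').unwrap_or(version))
}

fn main() {
    assert_eq!(validate_version("v2.290.0"), Ok("2.290.0"));
    assert!(validate_version("v2.290.0-beta1").is_ok());
    assert!(validate_version("2.290.0").is_err()); // missing the `v` prefix
}
```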
@@ -13,8 +13,8 @@ on:
 jobs:
 sh-lint:
 timeout-minutes: 5
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: linkerd/dev/actions/setup-tools@v47
+- uses: linkerd/dev/actions/setup-tools@v45
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: just sh-lint
@@ -13,10 +13,10 @@ permissions:
 
 jobs:
 devcontainer:
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
-container: ghcr.io/linkerd/dev:v47-rust
+container: ghcr.io/linkerd/dev:v45-rust
 steps:
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - run: git config --global --add safe.directory "$PWD" # actions/runner#2033
 - run: |
 VERSION_REGEX='channel = "([0-9]+\.[0-9]+\.[0-9]+)"'
@@ -35,10 +35,10 @@ jobs:
 
 
 workflows:
-runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
+runs-on: ubuntu-24.04
 steps:
-- uses: linkerd/dev/actions/setup-tools@v47
+- uses: linkerd/dev/actions/setup-tools@v45
-- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
+- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
 - shell: bash
 run: |
 VERSION_REGEX='channel = "([0-9]+\.[0-9]+\.[0-9]+)"'
Cargo.lock (1230 changed lines)
File diff suppressed because it is too large.
Cargo.toml (11 changed lines)
@@ -42,6 +42,8 @@ members = [
 "linkerd/idle-cache",
 "linkerd/io",
 "linkerd/meshtls",
+"linkerd/meshtls/boring",
+"linkerd/meshtls/rustls",
 "linkerd/meshtls/verifier",
 "linkerd/metrics",
 "linkerd/mock/http-body",
@@ -69,7 +71,6 @@ members = [
 "linkerd/reconnect",
 "linkerd/retry",
 "linkerd/router",
-"linkerd/rustls",
 "linkerd/service-profiles",
 "linkerd/signal",
 "linkerd/stack",
@@ -114,14 +115,14 @@ prost = { version = "0.13" }
 prost-build = { version = "0.13", default-features = false }
 prost-types = { version = "0.13" }
 tokio-rustls = { version = "0.26", default-features = false, features = [
+"ring",
 "logging",
 ] }
-tonic = { version = "0.13", default-features = false }
+tonic = { version = "0.12", default-features = false }
-tonic-build = { version = "0.13", default-features = false }
+tonic-build = { version = "0.12", default-features = false }
 tower = { version = "0.5", default-features = false }
 tower-service = { version = "0.3" }
 tower-test = { version = "0.4" }
-tracing = { version = "0.1" }
 
 [workspace.dependencies.http-body-util]
 version = "0.1.3"
@@ -134,4 +135,4 @@ default-features = false
 features = ["tokio", "tracing"]
 
 [workspace.dependencies.linkerd2-proxy-api]
-version = "0.17.0"
+version = "0.16.0"
@@ -3,7 +3,7 @@
 # This is intended **DEVELOPMENT ONLY**, i.e. so that proxy developers can
 # easily test the proxy in the context of the larger `linkerd2` project.
 
-ARG RUST_IMAGE=ghcr.io/linkerd/dev:v47-rust
+ARG RUST_IMAGE=ghcr.io/linkerd/dev:v45-rust
 
 # Use an arbitrary ~recent edge release image to get the proxy
 # identity-initializing and linkerd-await wrappers.
@@ -14,16 +14,11 @@ FROM $LINKERD2_IMAGE as linkerd2
 FROM --platform=$BUILDPLATFORM $RUST_IMAGE as fetch
 
 ARG PROXY_FEATURES=""
-ARG TARGETARCH="amd64"
 RUN apt-get update && \
 apt-get install -y time && \
 if [[ "$PROXY_FEATURES" =~ .*meshtls-boring.* ]] ; then \
 apt-get install -y golang ; \
 fi && \
-case "$TARGETARCH" in \
-amd64) true ;; \
-arm64) apt-get install --no-install-recommends -y binutils-aarch64-linux-gnu ;; \
-esac && \
 rm -rf /var/lib/apt/lists/*
 
 ENV CARGO_NET_RETRY=10
@@ -38,6 +33,7 @@ RUN --mount=type=cache,id=cargo,target=/usr/local/cargo/registry \
 FROM fetch as build
 ENV CARGO_INCREMENTAL=0
 ENV RUSTFLAGS="-D warnings -A deprecated --cfg tokio_unstable"
+ARG TARGETARCH="amd64"
 ARG PROFILE="release"
 ARG LINKERD2_PROXY_VERSION=""
 ARG LINKERD2_PROXY_VENDOR=""
@@ -86,9 +86,8 @@ minutes to review our [code of conduct][coc].
 We test our code by way of fuzzing and this is described in [FUZZING.md](/docs/FUZZING.md).
 
 A third party security audit focused on fuzzing Linkerd2-proxy was performed by
-Ada Logics in 2021. The
-[full report](/docs/reports/linkerd2-proxy-fuzzing-report.pdf) can be found in
-the `docs/reports/` directory.
+Ada Logics in 2021. The full report is available
+[here](/docs/reports/linkerd2-proxy-fuzzing-report.pdf).
 
 ## License
 
deny.toml (22 changed lines)
@@ -2,6 +2,7 @@
 targets = [
 { triple = "x86_64-unknown-linux-gnu" },
 { triple = "aarch64-unknown-linux-gnu" },
+{ triple = "armv7-unknown-linux-gnu" },
 ]
 
 [advisories]
@@ -25,12 +26,17 @@ confidence-threshold = 0.8
 exceptions = [
 { allow = [
 "ISC",
+"MIT",
 "OpenSSL",
-], name = "aws-lc-sys", version = "*" },
-{ allow = [
-"ISC",
-"OpenSSL",
-], name = "aws-lc-fips-sys", version = "*" },
+], name = "ring", version = "*" },
+]
+
+[[licenses.clarify]]
+name = "ring"
+version = "*"
+expression = "MIT AND ISC AND OpenSSL"
+license-files = [
+{ path = "LICENSE", hash = 0xbd0eed23 },
 ]
 
 [bans]
@@ -42,8 +48,6 @@ deny = [
 { name = "rustls", wrappers = ["tokio-rustls"] },
 # rustls-webpki should be used instead.
 { name = "webpki" },
-# aws-lc-rs should be used instead.
-{ name = "ring" }
 ]
 skip = [
 # `linkerd-trace-context`, `rustls-pemfile` and `tonic` depend on `base64`
@@ -62,10 +66,6 @@ skip-tree = [
 { name = "rustix", version = "0.38" },
 # `pprof` uses a number of old dependencies. for now, we skip its subtree.
 { name = "pprof" },
-# aws-lc-rs uses a slightly outdated version of bindgen
-{ name = "bindgen", version = "0.69.5" },
-# socket v0.6 is still propagating through the ecosystem
-{ name = "socket2", version = "0.5" },
 ]
 
 [sources]
@@ -12,12 +12,9 @@ engine.
 We place the fuzz tests into folders within the individual crates that the fuzz
 tests target. For example, we have a fuzz test that that target the crate
 `/linkerd/addr` and the code in `/linkerd/addr/src` and thus the fuzz test that
-targets this crate is put in `/linkerd/addr/fuzz`.
-
-The folder structure for each of the fuzz tests is automatically generated by
-`cargo fuzz init`. See cargo fuzz's
-[`README.md`](https://github.com/rust-fuzz/cargo-fuzz#cargo-fuzz-init) for more
-information.
+targets this crate is put in `/linkerd/addr/fuzz`. The folder set up we use for
+each of the fuzz tests is automatically generated by `cargo fuzz init`
+(described [here](https://github.com/rust-fuzz/cargo-fuzz#cargo-fuzz-init)).
 
 ### Fuzz targets
 
@@ -99,5 +96,6 @@ unit-test-like fuzzers, but are essentially just more substantial in nature. The
 idea behind these fuzzers is to test end-to-end concepts more so than individual
 components of the proxy.
 
-The [inbound fuzzer](/linkerd/app/inbound/fuzz/fuzz_targets/fuzz_target_1.rs)
-is an example of this.
+The inbound fuzzer
+[here](/linkerd/app/inbound/fuzz/fuzz_targets/fuzz_target_1.rs) is an example of
+this.
justfile (10 changed lines)
@@ -18,10 +18,6 @@ features := ""
 export LINKERD2_PROXY_VERSION := env_var_or_default("LINKERD2_PROXY_VERSION", "0.0.0-dev" + `git rev-parse --short HEAD`)
 export LINKERD2_PROXY_VENDOR := env_var_or_default("LINKERD2_PROXY_VENDOR", `whoami` + "@" + `hostname`)
 
-# TODO: these variables will be included in dev v48
-export AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_gnu := env_var_or_default("AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_gnu", "-fuse-ld=/usr/aarch64-linux-gnu/bin/ld")
-export AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_musl := env_var_or_default("AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_musl", "-fuse-ld=/usr/aarch64-linux-gnu/bin/ld")
-
 # The version name to use for packages.
 package_version := "v" + LINKERD2_PROXY_VERSION
 
@@ -30,7 +26,7 @@ docker-repo := "localhost/linkerd/proxy"
 docker-tag := `git rev-parse --abbrev-ref HEAD | sed 's|/|.|g'` + "." + `git rev-parse --short HEAD`
 docker-image := docker-repo + ":" + docker-tag
 
-# The architecture name to use for packages. Either 'amd64' or 'arm64'.
+# The architecture name to use for packages. Either 'amd64', 'arm64', or 'arm'.
 arch := "amd64"
 # The OS name to use for packages. Either 'linux' or 'windows'.
 os := "linux"
@@ -43,6 +39,8 @@ _target := if os + '-' + arch == "linux-amd64" {
 "x86_64-unknown-linux-" + libc
 } else if os + '-' + arch == "linux-arm64" {
 "aarch64-unknown-linux-" + libc
+} else if os + '-' + arch == "linux-arm" {
+"armv7-unknown-linux-" + libc + "eabihf"
 } else if os + '-' + arch == "windows-amd64" {
 "x86_64-pc-windows-" + libc
 } else {
@@ -141,7 +139,7 @@ _strip:
 
 _package_bin := _package_dir / "bin" / "linkerd2-proxy"
 
-# XXX aarch64-musl builds do not enable PIE, so we use target-specific
+# XXX {aarch64,arm}-musl builds do not enable PIE, so we use target-specific
 # files to document those differences.
 _expected_checksec := '.checksec' / arch + '-' + libc + '.json'
 
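The `_target` expression above is the crux of this justfile change: the release branch adds a `linux-arm` case. A hedged Rust sketch of the same os/arch/libc to target-triple mapping (the function is illustrative; the real logic is the justfile expression):

```rust
/// Maps (os, arch, libc) to a Rust target triple, mirroring the justfile's
/// `_target` expression on the release/v2.290.0 side (which includes `arm`).
fn target_triple(os: &str, arch: &str, libc: &str) -> Option<String> {
    match (os, arch) {
        ("linux", "amd64") => Some(format!("x86_64-unknown-linux-{libc}")),
        ("linux", "arm64") => Some(format!("aarch64-unknown-linux-{libc}")),
        // 32-bit ARM is hard-float here, so the triple carries `eabihf`.
        ("linux", "arm") => Some(format!("armv7-unknown-linux-{libc}eabihf")),
        ("windows", "amd64") => Some(format!("x86_64-pc-windows-{libc}")),
        _ => None, // the justfile errors out for other combinations
    }
}

fn main() {
    assert_eq!(
        target_triple("linux", "arm", "musl").as_deref(),
        Some("armv7-unknown-linux-musleabihf")
    );
}
```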
@@ -13,7 +13,7 @@ cargo-fuzz = true
 libfuzzer-sys = "0.4"
 linkerd-addr = { path = ".." }
 linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
-tracing = { workspace = true }
+tracing = "0.1"
 
 # Prevent this from interfering with workspaces
 [workspace]
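For context, a `cargo fuzz` target wired to these dependencies looks roughly like the following. This is a hypothetical sketch, not the crate's actual fuzz target; it assumes `Addr` implements `FromStr`, as the `linkerd-addr` test code in the next diff suggests:

```rust
#![no_main]

use libfuzzer_sys::fuzz_target;
use linkerd_addr::Addr;
use std::str::FromStr;

// libfuzzer hands us arbitrary bytes; we only care about valid UTF-8 inputs.
fuzz_target!(|data: &[u8]| {
    if let Ok(s) = std::str::from_utf8(data) {
        // Parsing must never panic, whatever the input.
        if let Ok(addr) = Addr::from_str(s) {
            let _ = addr.to_http_authority();
        }
    }
});
```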
@@ -100,11 +100,15 @@ impl Addr {
 // them ourselves.
 format!("[{}]", a.ip())
 };
-http::uri::Authority::from_str(&ip)
-.unwrap_or_else(|err| panic!("SocketAddr ({a}) must be valid authority: {err}"))
+http::uri::Authority::from_str(&ip).unwrap_or_else(|err| {
+panic!("SocketAddr ({}) must be valid authority: {}", a, err)
+})
 }
-Addr::Socket(a) => http::uri::Authority::from_str(&a.to_string())
-.unwrap_or_else(|err| panic!("SocketAddr ({a}) must be valid authority: {err}")),
+Addr::Socket(a) => {
+http::uri::Authority::from_str(&a.to_string()).unwrap_or_else(|err| {
+panic!("SocketAddr ({}) must be valid authority: {}", a, err)
+})
+}
 }
 }
 
@@ -261,14 +265,14 @@ mod tests {
 ];
 for (host, expected_result) in cases {
 let a = Addr::from_str(host).unwrap();
-assert_eq!(a.is_loopback(), *expected_result, "{host:?}")
+assert_eq!(a.is_loopback(), *expected_result, "{:?}", host)
 }
 }
 
 fn test_to_http_authority(cases: &[&str]) {
 let width = cases.iter().map(|s| s.len()).max().unwrap_or(0);
 for host in cases {
-print!("trying {host:width$} ... ");
+print!("trying {:1$} ... ", host, width);
 Addr::from_str(host).unwrap().to_http_authority();
 println!("ok");
 }
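The only substantive difference in these test hunks is formatting style: main uses Rust's inline format arguments (stabilized in Rust 1.58), while the release branch passes arguments positionally. A small sketch of the equivalence:

```rust
fn main() {
    let host = "example.com";
    let width = 16;
    // Positional arguments, as on the release branch; {:1$} reads the
    // field width from the second argument.
    println!("trying {:1$} ... ", host, width);
    // Inline (captured) format arguments, as on main.
    println!("trying {host:width$} ... ");
}
```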
@@ -36,4 +36,4 @@ tokio = { version = "1", features = ["rt"] }
 tokio-stream = { version = "0.1", features = ["time", "sync"] }
 tonic = { workspace = true, default-features = false, features = ["prost"] }
 tower = { workspace = true }
-tracing = { workspace = true }
+tracing = "0.1"
@ -22,12 +22,12 @@ http-body = { workspace = true }
|
||||||
http-body-util = { workspace = true }
|
http-body-util = { workspace = true }
|
||||||
hyper = { workspace = true, features = ["http1", "http2"] }
|
hyper = { workspace = true, features = ["http1", "http2"] }
|
||||||
futures = { version = "0.3", default-features = false }
|
futures = { version = "0.3", default-features = false }
|
||||||
pprof = { version = "0.15", optional = true, features = ["prost-codec"] }
|
pprof = { version = "0.14", optional = true, features = ["prost-codec"] }
|
||||||
serde = "1"
|
serde = "1"
|
||||||
serde_json = "1"
|
serde_json = "1"
|
||||||
thiserror = "2"
|
thiserror = "2"
|
||||||
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
|
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
|
||||||
tracing = { workspace = true }
|
tracing = "0.1"
|
||||||
|
|
||||||
linkerd-app-core = { path = "../core" }
|
linkerd-app-core = { path = "../core" }
|
||||||
linkerd-app-inbound = { path = "../inbound" }
|
linkerd-app-inbound = { path = "../inbound" }
|
||||||
|
|
|
@@ -13,7 +13,7 @@
 use futures::future::{self, TryFutureExt};
 use http::StatusCode;
 use linkerd_app_core::{
-metrics::{self as metrics, legacy::FmtMetrics},
+metrics::{self as metrics, FmtMetrics},
 proxy::http::{Body, BoxBody, ClientHandle, Request, Response},
 trace, Error, Result,
 };
@@ -32,7 +32,7 @@ pub use self::readiness::{Latch, Readiness};
 
 #[derive(Clone)]
 pub struct Admin<M> {
-metrics: metrics::legacy::Serve<M>,
+metrics: metrics::Serve<M>,
 tracing: trace::Handle,
 ready: Readiness,
 shutdown_tx: mpsc::UnboundedSender<()>,
@@ -52,7 +52,7 @@ impl<M> Admin<M> {
 tracing: trace::Handle,
 ) -> Self {
 Self {
-metrics: metrics::legacy::Serve::new(metrics),
+metrics: metrics::Serve::new(metrics),
 ready,
 shutdown_tx,
 enable_shutdown,
@@ -27,7 +27,7 @@ where
 .into_body()
 .collect()
 .await
-.map_err(io::Error::other)?
+.map_err(|e| io::Error::new(io::ErrorKind::Other, e))?
 .aggregate();
 match level.set_from(body.chunk()) {
 Ok(_) => mk_rsp(StatusCode::NO_CONTENT, BoxBody::empty()),
@@ -2,7 +2,7 @@ use linkerd_app_core::{
     classify,
     config::ServerConfig,
     drain, errors, identity,
-    metrics::{self, legacy::FmtMetrics},
+    metrics::{self, FmtMetrics},
     proxy::http,
     serve,
     svc::{self, ExtractParam, InsertParam, Param},
@@ -214,7 +214,7 @@ impl Config {
 impl Param<transport::labels::Key> for Tcp {
     fn param(&self) -> transport::labels::Key {
         transport::labels::Key::inbound_server(
-            self.tls.as_ref().map(|t| t.labels()),
+            self.tls.clone(),
             self.addr.into(),
             self.policy.server_label(),
         )
@@ -272,7 +272,7 @@ impl Param<metrics::ServerLabel> for Http {
 impl Param<metrics::EndpointLabels> for Permitted {
     fn param(&self) -> metrics::EndpointLabels {
         metrics::InboundEndpointLabels {
-            tls: self.http.tcp.tls.as_ref().map(|t| t.labels()),
+            tls: self.http.tcp.tls.clone(),
             authority: None,
             target_addr: self.http.tcp.addr.into(),
             policy: self.permit.labels.clone(),

@@ -13,23 +13,31 @@ independently of the inbound and outbound proxy logic.
 """

 [dependencies]
+bytes = { workspace = true }
 drain = { workspace = true, features = ["retain"] }
 http = { workspace = true }
 http-body = { workspace = true }
+http-body-util = { workspace = true }
 hyper = { workspace = true, features = ["http1", "http2"] }
+hyper-util = { workspace = true }
 futures = { version = "0.3", default-features = false }
 ipnet = "2.11"
 prometheus-client = { workspace = true }
+regex = "1"
+serde_json = "1"
 thiserror = "2"
 tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
 tokio-stream = { version = "0.1", features = ["time"] }
 tonic = { workspace = true, default-features = false, features = ["prost"] }
-tracing = { workspace = true }
+tracing = "0.1"
+parking_lot = "0.12"
 pin-project = "1"

 linkerd-addr = { path = "../../addr" }
 linkerd-conditional = { path = "../../conditional" }
 linkerd-dns = { path = "../../dns" }
+linkerd-duplex = { path = "../../duplex" }
+linkerd-errno = { path = "../../errno" }
 linkerd-error = { path = "../../error" }
 linkerd-error-respond = { path = "../../error-respond" }
 linkerd-exp-backoff = { path = "../../exp-backoff" }
@@ -56,7 +64,6 @@ linkerd-proxy-tcp = { path = "../../proxy/tcp" }
 linkerd-proxy-transport = { path = "../../proxy/transport" }
 linkerd-reconnect = { path = "../../reconnect" }
 linkerd-router = { path = "../../router" }
-linkerd-rustls = { path = "../../rustls" }
 linkerd-service-profiles = { path = "../../service-profiles" }
 linkerd-stack = { path = "../../stack" }
 linkerd-stack-metrics = { path = "../../stack/metrics" }
@@ -76,6 +83,5 @@ features = ["make", "spawn-ready", "timeout", "util", "limit"]
 semver = "1"

 [dev-dependencies]
-bytes = { workspace = true }
-http-body-util = { workspace = true }
 linkerd-mock-http-body = { path = "../../mock/http-body" }
+quickcheck = { version = "1", default-features = false }

@@ -4,11 +4,11 @@ fn set_env(name: &str, cmd: &mut Command) {
     let value = match cmd.output() {
         Ok(output) => String::from_utf8(output.stdout).unwrap(),
         Err(err) => {
-            println!("cargo:warning={err}");
+            println!("cargo:warning={}", err);
             "".to_string()
         }
     };
-    println!("cargo:rustc-env={name}={value}");
+    println!("cargo:rustc-env={}={}", name, value);
 }

 fn version() -> String {

@@ -1,4 +1,5 @@
 use crate::profiles;
+pub use classify::gate;
 use linkerd_error::Error;
 use linkerd_proxy_client_policy as client_policy;
 use linkerd_proxy_http::{classify, HasH2Reason, ResponseTimeoutError};
@@ -213,7 +214,7 @@ fn h2_error(err: &Error) -> String {
     if let Some(reason) = err.h2_reason() {
         // This should output the error code in the same format as the spec,
         // for example: PROTOCOL_ERROR
-        format!("h2({reason:?})")
+        format!("h2({:?})", reason)
     } else {
         trace!("classifying found non-h2 error: {:?}", err);
         String::from("unclassified")

@@ -101,7 +101,7 @@ impl Config {
         identity: identity::NewClient,
     ) -> svc::ArcNewService<
         (),
-        svc::BoxCloneSyncService<http::Request<tonic::body::Body>, http::Response<RspBody>>,
+        svc::BoxCloneSyncService<http::Request<tonic::body::BoxBody>, http::Response<RspBody>>,
     > {
         let addr = self.addr;
         tracing::trace!(%addr, "Building");

@@ -25,7 +25,6 @@ pub mod metrics;
 pub mod proxy;
 pub mod serve;
 pub mod svc;
-pub mod tls_info;
 pub mod transport;

 pub use self::build_info::{BuildInfo, BUILD_INFO};

@@ -54,7 +54,7 @@ pub struct Proxy {
 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub struct ControlLabels {
     addr: Addr,
-    server_id: tls::ConditionalClientTlsLabels,
+    server_id: tls::ConditionalClientTls,
 }

 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
@@ -65,7 +65,7 @@ pub enum EndpointLabels {

 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub struct InboundEndpointLabels {
-    pub tls: tls::ConditionalServerTlsLabels,
+    pub tls: tls::ConditionalServerTls,
     pub authority: Option<http::uri::Authority>,
     pub target_addr: SocketAddr,
     pub policy: RouteAuthzLabels,
@@ -98,7 +98,7 @@ pub struct RouteAuthzLabels {

 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub struct OutboundEndpointLabels {
-    pub server_id: tls::ConditionalClientTlsLabels,
+    pub server_id: tls::ConditionalClientTls,
     pub authority: Option<http::uri::Authority>,
     pub labels: Option<String>,
     pub zone_locality: OutboundZoneLocality,
@@ -155,10 +155,10 @@ where
     I: Iterator<Item = (&'i String, &'i String)>,
 {
     let (k0, v0) = labels_iter.next()?;
-    let mut out = format!("{prefix}_{k0}=\"{v0}\"");
+    let mut out = format!("{}_{}=\"{}\"", prefix, k0, v0);

     for (k, v) in labels_iter {
-        write!(out, ",{prefix}_{k}=\"{v}\"").expect("label concat must succeed");
+        write!(out, ",{}_{}=\"{}\"", prefix, k, v).expect("label concat must succeed");
     }
     Some(out)
 }
@@ -166,7 +166,7 @@ where
 // === impl Metrics ===

 impl Metrics {
-    pub fn new(retain_idle: Duration) -> (Self, impl legacy::FmtMetrics + Clone + Send + 'static) {
+    pub fn new(retain_idle: Duration) -> (Self, impl FmtMetrics + Clone + Send + 'static) {
         let (control, control_report) = {
             let m = http_metrics::Requests::<ControlLabels, Class>::default();
             let r = m.clone().into_report(retain_idle).with_prefix("control");
@@ -223,7 +223,6 @@ impl Metrics {
             opentelemetry,
         };

-        use legacy::FmtMetrics as _;
         let report = endpoint_report
             .and_report(profile_route_report)
             .and_report(retry_report)
@@ -244,17 +243,15 @@ impl svc::Param<ControlLabels> for control::ControlAddr {
     fn param(&self) -> ControlLabels {
         ControlLabels {
             addr: self.addr.clone(),
-            server_id: self.identity.as_ref().map(tls::ClientTls::labels),
+            server_id: self.identity.clone(),
         }
     }
 }

-impl legacy::FmtLabels for ControlLabels {
+impl FmtLabels for ControlLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self { addr, server_id } = self;
-
-        write!(f, "addr=\"{addr}\",")?;
-        TlsConnect::from(server_id).fmt_labels(f)?;
+        write!(f, "addr=\"{}\",", self.addr)?;
+        TlsConnect::from(&self.server_id).fmt_labels(f)?;

         Ok(())
     }
@@ -282,19 +279,13 @@ impl ProfileRouteLabels {
     }
 }

-impl legacy::FmtLabels for ProfileRouteLabels {
+impl FmtLabels for ProfileRouteLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self {
-            direction,
-            addr,
-            labels,
-        } = self;
-
-        direction.fmt_labels(f)?;
-        write!(f, ",dst=\"{addr}\"")?;
+        self.direction.fmt_labels(f)?;
+        write!(f, ",dst=\"{}\"", self.addr)?;

-        if let Some(labels) = labels.as_ref() {
-            write!(f, ",{labels}")?;
+        if let Some(labels) = self.labels.as_ref() {
+            write!(f, ",{}", labels)?;
         }

         Ok(())
@@ -315,7 +306,7 @@ impl From<OutboundEndpointLabels> for EndpointLabels {
     }
 }

-impl legacy::FmtLabels for EndpointLabels {
+impl FmtLabels for EndpointLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         match self {
             Self::Inbound(i) => (Direction::In, i).fmt_labels(f),
@@ -324,36 +315,32 @@ impl legacy::FmtLabels for EndpointLabels {
     }
 }

-impl legacy::FmtLabels for InboundEndpointLabels {
+impl FmtLabels for InboundEndpointLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self {
-            tls,
-            authority,
-            target_addr,
-            policy,
-        } = self;
-
-        if let Some(a) = authority.as_ref() {
+        if let Some(a) = self.authority.as_ref() {
             Authority(a).fmt_labels(f)?;
             write!(f, ",")?;
         }

-        ((TargetAddr(*target_addr), TlsAccept::from(tls)), policy).fmt_labels(f)?;
+        (
+            (TargetAddr(self.target_addr), TlsAccept::from(&self.tls)),
+            &self.policy,
+        )
+        .fmt_labels(f)?;

         Ok(())
     }
 }

-impl legacy::FmtLabels for ServerLabel {
+impl FmtLabels for ServerLabel {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self(meta, port) = self;
         write!(
             f,
             "srv_group=\"{}\",srv_kind=\"{}\",srv_name=\"{}\",srv_port=\"{}\"",
-            meta.group(),
-            meta.kind(),
-            meta.name(),
-            port
+            self.0.group(),
+            self.0.kind(),
+            self.0.name(),
+            self.1
         )
     }
 }
@@ -375,47 +362,41 @@ impl prom::EncodeLabelSetMut for ServerLabel {
     }
 }

-impl legacy::FmtLabels for ServerAuthzLabels {
+impl FmtLabels for ServerAuthzLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self { server, authz } = self;
+        self.server.fmt_labels(f)?;

-        server.fmt_labels(f)?;
         write!(
             f,
             ",authz_group=\"{}\",authz_kind=\"{}\",authz_name=\"{}\"",
-            authz.group(),
-            authz.kind(),
-            authz.name()
+            self.authz.group(),
+            self.authz.kind(),
+            self.authz.name()
         )
     }
 }

-impl legacy::FmtLabels for RouteLabels {
+impl FmtLabels for RouteLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self { server, route } = self;
+        self.server.fmt_labels(f)?;

-        server.fmt_labels(f)?;
         write!(
             f,
             ",route_group=\"{}\",route_kind=\"{}\",route_name=\"{}\"",
-            route.group(),
-            route.kind(),
-            route.name(),
+            self.route.group(),
+            self.route.kind(),
+            self.route.name(),
         )
     }
 }

-impl legacy::FmtLabels for RouteAuthzLabels {
+impl FmtLabels for RouteAuthzLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self { route, authz } = self;
+        self.route.fmt_labels(f)?;

-        route.fmt_labels(f)?;
         write!(
             f,
             ",authz_group=\"{}\",authz_kind=\"{}\",authz_name=\"{}\"",
-            authz.group(),
-            authz.kind(),
-            authz.name(),
+            self.authz.group(),
+            self.authz.kind(),
+            self.authz.name(),
         )
     }
 }
@@ -426,28 +407,19 @@ impl svc::Param<OutboundZoneLocality> for OutboundEndpointLabels {
     }
 }

-impl legacy::FmtLabels for OutboundEndpointLabels {
+impl FmtLabels for OutboundEndpointLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self {
-            server_id,
-            authority,
-            labels,
-            // TODO(kate): this label is not currently emitted.
-            zone_locality: _,
-            target_addr,
-        } = self;
-
-        if let Some(a) = authority.as_ref() {
+        if let Some(a) = self.authority.as_ref() {
             Authority(a).fmt_labels(f)?;
             write!(f, ",")?;
         }

-        let ta = TargetAddr(*target_addr);
-        let tls = TlsConnect::from(server_id);
+        let ta = TargetAddr(self.target_addr);
+        let tls = TlsConnect::from(&self.server_id);
         (ta, tls).fmt_labels(f)?;

-        if let Some(labels) = labels.as_ref() {
-            write!(f, ",{labels}")?;
+        if let Some(labels) = self.labels.as_ref() {
+            write!(f, ",{}", labels)?;
         }

         Ok(())
@@ -463,20 +435,19 @@ impl fmt::Display for Direction {
     }
 }

-impl legacy::FmtLabels for Direction {
+impl FmtLabels for Direction {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        write!(f, "direction=\"{self}\"")
+        write!(f, "direction=\"{}\"", self)
     }
 }

-impl legacy::FmtLabels for Authority<'_> {
+impl FmtLabels for Authority<'_> {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self(authority) = self;
-        write!(f, "authority=\"{authority}\"")
+        write!(f, "authority=\"{}\"", self.0)
     }
 }

-impl legacy::FmtLabels for Class {
+impl FmtLabels for Class {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         let class = |ok: bool| if ok { "success" } else { "failure" };

@@ -498,7 +469,8 @@ impl legacy::FmtLabels for Class {

             Class::Error(msg) => write!(
                 f,
-                "classification=\"failure\",grpc_status=\"\",error=\"{msg}\""
+                "classification=\"failure\",grpc_status=\"\",error=\"{}\"",
+                msg
             ),
         }
     }
@@ -524,15 +496,9 @@ impl StackLabels {
     }
 }

-impl legacy::FmtLabels for StackLabels {
+impl FmtLabels for StackLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self {
-            direction,
-            protocol,
-            name,
-        } = self;
-
-        direction.fmt_labels(f)?;
-        write!(f, ",protocol=\"{protocol}\",name=\"{name}\"")
+        self.direction.fmt_labels(f)?;
+        write!(f, ",protocol=\"{}\",name=\"{}\"", self.protocol, self.name)
     }
 }

@@ -1,70 +0,0 @@
-use linkerd_metrics::prom;
-use prometheus_client::encoding::{EncodeLabelSet, EncodeLabelValue, LabelValueEncoder};
-use std::{
-    fmt::{Error, Write},
-    sync::{Arc, OnceLock},
-};
-
-static TLS_INFO: OnceLock<Arc<TlsInfo>> = OnceLock::new();
-
-#[derive(Clone, Debug, Default, Hash, PartialEq, Eq, EncodeLabelSet)]
-pub struct TlsInfo {
-    tls_suites: MetricValueList,
-    tls_kx_groups: MetricValueList,
-    tls_rand: String,
-    tls_key_provider: String,
-    tls_fips: bool,
-}
-
-#[derive(Clone, Debug, Default, Hash, PartialEq, Eq)]
-struct MetricValueList {
-    values: Vec<&'static str>,
-}
-
-impl FromIterator<&'static str> for MetricValueList {
-    fn from_iter<T: IntoIterator<Item = &'static str>>(iter: T) -> Self {
-        MetricValueList {
-            values: iter.into_iter().collect(),
-        }
-    }
-}
-
-impl EncodeLabelValue for MetricValueList {
-    fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), Error> {
-        for value in &self.values {
-            value.encode(encoder)?;
-            encoder.write_char(',')?;
-        }
-        Ok(())
-    }
-}
-
-pub fn metric() -> prom::Family<TlsInfo, prom::ConstGauge> {
-    let fam = prom::Family::<TlsInfo, prom::ConstGauge>::new_with_constructor(|| {
-        prom::ConstGauge::new(1)
-    });
-
-    let tls_info = TLS_INFO.get_or_init(|| {
-        let provider = linkerd_rustls::get_default_provider();
-
-        let tls_suites = provider
-            .cipher_suites
-            .iter()
-            .flat_map(|cipher_suite| cipher_suite.suite().as_str())
-            .collect::<MetricValueList>();
-        let tls_kx_groups = provider
-            .kx_groups
-            .iter()
-            .flat_map(|suite| suite.name().as_str())
-            .collect::<MetricValueList>();
-        Arc::new(TlsInfo {
-            tls_suites,
-            tls_kx_groups,
-            tls_rand: format!("{:?}", provider.secure_random),
-            tls_key_provider: format!("{:?}", provider.key_provider),
-            tls_fips: provider.fips(),
-        })
-    });
-    let _ = fam.get_or_create(tls_info);
-    fam
-}

@@ -1,7 +1,7 @@
 use crate::metrics::ServerLabel as PolicyServerLabel;
 pub use crate::metrics::{Direction, OutboundEndpointLabels};
 use linkerd_conditional::Conditional;
-use linkerd_metrics::legacy::FmtLabels;
+use linkerd_metrics::FmtLabels;
 use linkerd_tls as tls;
 use std::{fmt, net::SocketAddr};

@@ -20,16 +20,16 @@ pub enum Key {
 #[derive(Clone, Debug, Eq, PartialEq, Hash)]
 pub struct ServerLabels {
     direction: Direction,
-    tls: tls::ConditionalServerTlsLabels,
+    tls: tls::ConditionalServerTls,
     target_addr: SocketAddr,
     policy: Option<PolicyServerLabel>,
 }

 #[derive(Clone, Debug, Eq, PartialEq, Hash)]
-pub struct TlsAccept<'t>(pub &'t tls::ConditionalServerTlsLabels);
+pub struct TlsAccept<'t>(pub &'t tls::ConditionalServerTls);

 #[derive(Clone, Debug, Eq, PartialEq, Hash)]
-pub(crate) struct TlsConnect<'t>(pub &'t tls::ConditionalClientTlsLabels);
+pub(crate) struct TlsConnect<'t>(&'t tls::ConditionalClientTls);

 #[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
 pub struct TargetAddr(pub SocketAddr);
@@ -38,7 +38,7 @@ pub struct TargetAddr(pub SocketAddr);

 impl Key {
     pub fn inbound_server(
-        tls: tls::ConditionalServerTlsLabels,
+        tls: tls::ConditionalServerTls,
         target_addr: SocketAddr,
         server: PolicyServerLabel,
     ) -> Self {
@@ -62,7 +62,7 @@ impl FmtLabels for Key {
             }

             Self::InboundClient => {
-                const NO_TLS: tls::client::ConditionalClientTlsLabels =
+                const NO_TLS: tls::client::ConditionalClientTls =
                     Conditional::None(tls::NoClientTls::Loopback);

                 Direction::In.fmt_labels(f)?;
@@ -75,7 +75,7 @@ impl FmtLabels for Key {

 impl ServerLabels {
     fn inbound(
-        tls: tls::ConditionalServerTlsLabels,
+        tls: tls::ConditionalServerTls,
         target_addr: SocketAddr,
         policy: PolicyServerLabel,
     ) -> Self {
@@ -90,7 +90,7 @@ impl ServerLabels {
     fn outbound(target_addr: SocketAddr) -> Self {
         ServerLabels {
             direction: Direction::Out,
-            tls: tls::ConditionalServerTlsLabels::None(tls::NoServerTls::Loopback),
+            tls: tls::ConditionalServerTls::None(tls::NoServerTls::Loopback),
             target_addr,
             policy: None,
         }
@@ -99,17 +99,14 @@ impl ServerLabels {

 impl FmtLabels for ServerLabels {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self {
-            direction,
-            tls,
-            target_addr,
-            policy,
-        } = self;
-
-        direction.fmt_labels(f)?;
+        self.direction.fmt_labels(f)?;
         f.write_str(",peer=\"src\",")?;

-        ((TargetAddr(*target_addr), TlsAccept(tls)), policy.as_ref()).fmt_labels(f)?;
+        (
+            (TargetAddr(self.target_addr), TlsAccept(&self.tls)),
+            self.policy.as_ref(),
+        )
+        .fmt_labels(f)?;

         Ok(())
     }
@@ -117,28 +114,27 @@ impl FmtLabels for ServerLabels {

 // === impl TlsAccept ===

-impl<'t> From<&'t tls::ConditionalServerTlsLabels> for TlsAccept<'t> {
-    fn from(c: &'t tls::ConditionalServerTlsLabels) -> Self {
+impl<'t> From<&'t tls::ConditionalServerTls> for TlsAccept<'t> {
+    fn from(c: &'t tls::ConditionalServerTls) -> Self {
         TlsAccept(c)
     }
 }

 impl FmtLabels for TlsAccept<'_> {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self(tls) = self;
-        match tls {
+        match self.0 {
             Conditional::None(tls::NoServerTls::Disabled) => {
                 write!(f, "tls=\"disabled\"")
             }
             Conditional::None(why) => {
-                write!(f, "tls=\"no_identity\",no_tls_reason=\"{why}\"")
+                write!(f, "tls=\"no_identity\",no_tls_reason=\"{}\"", why)
             }
-            Conditional::Some(tls::ServerTlsLabels::Established { client_id }) => match client_id {
-                Some(id) => write!(f, "tls=\"true\",client_id=\"{id}\""),
+            Conditional::Some(tls::ServerTls::Established { client_id, .. }) => match client_id {
+                Some(id) => write!(f, "tls=\"true\",client_id=\"{}\"", id),
                 None => write!(f, "tls=\"true\",client_id=\"\""),
             },
-            Conditional::Some(tls::ServerTlsLabels::Passthru { sni }) => {
-                write!(f, "tls=\"opaque\",sni=\"{sni}\"")
+            Conditional::Some(tls::ServerTls::Passthru { sni }) => {
+                write!(f, "tls=\"opaque\",sni=\"{}\"", sni)
             }
         }
     }
@@ -146,25 +142,23 @@ impl FmtLabels for TlsAccept<'_> {

 // === impl TlsConnect ===

-impl<'t> From<&'t tls::ConditionalClientTlsLabels> for TlsConnect<'t> {
-    fn from(s: &'t tls::ConditionalClientTlsLabels) -> Self {
+impl<'t> From<&'t tls::ConditionalClientTls> for TlsConnect<'t> {
+    fn from(s: &'t tls::ConditionalClientTls) -> Self {
         TlsConnect(s)
     }
 }

 impl FmtLabels for TlsConnect<'_> {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self(tls) = self;
-
-        match tls {
+        match self.0 {
             Conditional::None(tls::NoClientTls::Disabled) => {
                 write!(f, "tls=\"disabled\"")
             }
             Conditional::None(why) => {
-                write!(f, "tls=\"no_identity\",no_tls_reason=\"{why}\"")
+                write!(f, "tls=\"no_identity\",no_tls_reason=\"{}\"", why)
             }
-            Conditional::Some(tls::ClientTlsLabels { server_id }) => {
-                write!(f, "tls=\"true\",server_id=\"{server_id}\"")
+            Conditional::Some(tls::ClientTls { server_id, .. }) => {
+                write!(f, "tls=\"true\",server_id=\"{}\"", server_id)
             }
         }
     }
@@ -174,13 +168,12 @@ impl FmtLabels for TlsConnect<'_> {

 impl FmtLabels for TargetAddr {
     fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-        let Self(target_addr) = self;
         write!(
             f,
             "target_addr=\"{}\",target_ip=\"{}\",target_port=\"{}\"",
-            target_addr,
-            target_addr.ip(),
-            target_addr.port()
+            self.0,
+            self.0.ip(),
+            self.0.port()
         )
     }
 }
@@ -201,8 +194,9 @@ mod tests {
     use std::sync::Arc;

     let labels = ServerLabels::inbound(
-        tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
+        tls::ConditionalServerTls::Some(tls::ServerTls::Established {
             client_id: Some("foo.id.example.com".parse().unwrap()),
+            negotiated_protocol: None,
         }),
         ([192, 0, 2, 4], 40000).into(),
         PolicyServerLabel(

@@ -18,7 +18,7 @@ thiserror = "2"
 tokio = { version = "1", features = ["sync"] }
 tonic = { workspace = true, default-features = false }
 tower = { workspace = true, default-features = false }
-tracing = { workspace = true }
+tracing = "0.1"

 [dev-dependencies]
 linkerd-app-inbound = { path = "../inbound", features = ["test-util"] }

@@ -90,7 +90,7 @@ impl Gateway {
                 detect_timeout,
                 queue,
                 addr,
-                meta.into(),
+                meta,
             ),
             None => {
                 tracing::debug!(
@@ -153,7 +153,7 @@ fn mk_routes(profile: &profiles::Profile) -> Option<outbound::http::Routes> {
     if let Some((addr, metadata)) = profile.endpoint.clone() {
         return Some(outbound::http::Routes::Endpoint(
             Remote(ServerAddr(addr)),
-            metadata.into(),
+            metadata,
         ));
     }

@@ -13,7 +13,8 @@ Configures and runs the inbound proxy
 test-util = [
     "linkerd-app-test",
     "linkerd-idle-cache/test-util",
-    "linkerd-meshtls/test-util",
+    "linkerd-meshtls/rustls",
+    "linkerd-meshtls-rustls/test-util",
 ]

 [dependencies]
@@ -24,7 +25,8 @@ linkerd-app-core = { path = "../core" }
 linkerd-app-test = { path = "../test", optional = true }
 linkerd-http-access-log = { path = "../../http/access-log" }
 linkerd-idle-cache = { path = "../../idle-cache" }
-linkerd-meshtls = { path = "../../meshtls", optional = true, default-features = false }
+linkerd-meshtls = { path = "../../meshtls", optional = true }
+linkerd-meshtls-rustls = { path = "../../meshtls/rustls", optional = true }
 linkerd-proxy-client-policy = { path = "../../proxy/client-policy" }
 linkerd-tonic-stream = { path = "../../tonic-stream" }
 linkerd-tonic-watch = { path = "../../tonic-watch" }
@@ -36,7 +38,7 @@ thiserror = "2"
 tokio = { version = "1", features = ["sync"] }
 tonic = { workspace = true, default-features = false }
 tower = { workspace = true, features = ["util"] }
-tracing = { workspace = true }
+tracing = "0.1"

 [dependencies.linkerd-proxy-server-policy]
 path = "../../proxy/server-policy"
@@ -47,7 +49,7 @@ hyper = { workspace = true, features = ["http1", "http2"] }
 linkerd-app-test = { path = "../test" }
 arbitrary = { version = "1", features = ["derive"] }
 libfuzzer-sys = { version = "0.4", features = ["arbitrary-derive"] }
-linkerd-meshtls = { path = "../../meshtls", features = [
+linkerd-meshtls-rustls = { path = "../../meshtls/rustls", features = [
     "test-util",
 ] }

@@ -60,7 +62,8 @@ linkerd-http-metrics = { path = "../../http/metrics", features = ["test-util"] }
 linkerd-http-box = { path = "../../http/box" }
 linkerd-idle-cache = { path = "../../idle-cache", features = ["test-util"] }
 linkerd-io = { path = "../../io", features = ["tokio-test"] }
-linkerd-meshtls = { path = "../../meshtls", features = [
+linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
+linkerd-meshtls-rustls = { path = "../../meshtls/rustls", features = [
     "test-util",
 ] }
 linkerd-proxy-server-policy = { path = "../../proxy/server-policy", features = [

@@ -18,12 +18,13 @@ linkerd-app-core = { path = "../../core" }
 linkerd-app-inbound = { path = ".." }
 linkerd-app-test = { path = "../../test" }
 linkerd-idle-cache = { path = "../../../idle-cache", features = ["test-util"] }
-linkerd-meshtls = { path = "../../../meshtls", features = [
+linkerd-meshtls = { path = "../../../meshtls", features = ["rustls"] }
+linkerd-meshtls-rustls = { path = "../../../meshtls/rustls", features = [
     "test-util",
 ] }
 linkerd-tracing = { path = "../../../tracing", features = ["ansi"] }
 tokio = { version = "1", features = ["full"] }
-tracing = { workspace = true }
+tracing = "0.1"

 # Prevent this from interfering with workspaces
 [workspace]

@@ -325,7 +325,7 @@ impl svc::Param<Remote<ServerAddr>> for Forward {
 impl svc::Param<transport::labels::Key> for Forward {
     fn param(&self) -> transport::labels::Key {
         transport::labels::Key::inbound_server(
-            self.tls.as_ref().map(|t| t.labels()),
+            self.tls.clone(),
             self.orig_dst_addr.into(),
             self.permit.labels.server.clone(),
         )
@@ -429,7 +429,7 @@ impl svc::Param<ServerLabel> for Http {
 impl svc::Param<transport::labels::Key> for Http {
     fn param(&self) -> transport::labels::Key {
         transport::labels::Key::inbound_server(
-            self.tls.status.as_ref().map(|t| t.labels()),
+            self.tls.status.clone(),
             self.tls.orig_dst_addr.into(),
             self.tls.policy.server_label(),
         )

@@ -117,7 +117,7 @@ impl<N> Inbound<N> {
         let identity = rt
             .identity
             .server()
-            .spawn_with_alpn(vec![transport_header::PROTOCOL.into()])
+            .with_alpn(vec![transport_header::PROTOCOL.into()])
             .expect("TLS credential store must be held");

         inner
@@ -311,8 +311,9 @@ impl Param<Remote<ServerAddr>> for AuthorizedLocalTcp {
 impl Param<transport::labels::Key> for AuthorizedLocalTcp {
     fn param(&self) -> transport::labels::Key {
         transport::labels::Key::inbound_server(
-            tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
+            tls::ConditionalServerTls::Some(tls::ServerTls::Established {
                 client_id: Some(self.client_id.clone()),
+                negotiated_protocol: None,
             }),
             self.addr.into(),
             self.permit.labels.server.clone(),
@@ -343,8 +344,9 @@ impl Param<Remote<ClientAddr>> for LocalHttp {
 impl Param<transport::labels::Key> for LocalHttp {
     fn param(&self) -> transport::labels::Key {
         transport::labels::Key::inbound_server(
-            tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
+            tls::ConditionalServerTls::Some(tls::ServerTls::Established {
                 client_id: Some(self.client.client_id.clone()),
+                negotiated_protocol: None,
             }),
             self.addr.into(),
             self.policy.server_label(),
@@ -433,14 +435,6 @@ impl Param<tls::ConditionalServerTls> for GatewayTransportHeader {
     }
 }

-impl Param<tls::ConditionalServerTlsLabels> for GatewayTransportHeader {
-    fn param(&self) -> tls::ConditionalServerTlsLabels {
-        tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
-            client_id: Some(self.client.client_id.clone()),
-        })
-    }
-}
-
 impl Param<tls::ClientId> for GatewayTransportHeader {
     fn param(&self) -> tls::ClientId {
         self.client.client_id.clone()

@@ -395,7 +395,7 @@ fn endpoint_labels(
 ) -> impl svc::ExtractParam<metrics::EndpointLabels, Logical> + Clone {
     move |t: &Logical| -> metrics::EndpointLabels {
         metrics::InboundEndpointLabels {
-            tls: t.tls.as_ref().map(|t| t.labels()),
+            tls: t.tls.clone(),
             authority: unsafe_authority_labels
                 .then(|| t.logical.as_ref().map(|d| d.as_http_authority()))
                 .flatten(),

@@ -664,7 +664,7 @@ async fn grpc_response_class() {
     let response_total = metrics
         .get_response_total(
             &metrics::EndpointLabels::Inbound(metrics::InboundEndpointLabels {
-                tls: Target::meshed_h2().1.map(|t| t.labels()),
+                tls: Target::meshed_h2().1,
                 authority: None,
                 target_addr: "127.0.0.1:80".parse().unwrap(),
                 policy: metrics::RouteAuthzLabels {
@@ -762,7 +762,7 @@ async fn test_unsafe_authority_labels(
     let response_total = metrics
         .get_response_total(
             &metrics::EndpointLabels::Inbound(metrics::InboundEndpointLabels {
-                tls: Target::meshed_http1().1.as_ref().map(|t| t.labels()),
+                tls: Target::meshed_http1().1,
                 authority: expected_authority,
                 target_addr: "127.0.0.1:80".parse().unwrap(),
                 policy: metrics::RouteAuthzLabels {
@@ -861,7 +861,12 @@ fn grpc_status_server(

 #[tracing::instrument]
 fn connect_error() -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
-    move |_| Err(io::Error::other("server is not listening"))
+    move |_| {
+        Err(io::Error::new(
+            io::ErrorKind::Other,
+            "server is not listening",
+        ))
+    }
 }

 #[tracing::instrument]

@@ -113,6 +113,10 @@ impl<S> Inbound<S> {
         &self.runtime.identity
     }

+    pub fn proxy_metrics(&self) -> &metrics::Proxy {
+        &self.runtime.metrics.proxy
+    }
+
     /// A helper for gateways to instrument policy checks.
     pub fn authorize_http<N>(
         &self,

@@ -13,7 +13,7 @@ pub(crate) mod error;

 pub use linkerd_app_core::metrics::*;

-/// Holds LEGACY inbound proxy metrics.
+/// Holds outbound proxy metrics.
 #[derive(Clone, Debug)]
 pub struct InboundMetrics {
     pub http_authz: authz::HttpAuthzMetrics,
@@ -50,7 +50,7 @@ impl InboundMetrics {
     }
 }

-impl legacy::FmtMetrics for InboundMetrics {
+impl FmtMetrics for InboundMetrics {
     fn fmt_metrics(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         self.http_authz.fmt_metrics(f)?;
         self.http_errors.fmt_metrics(f)?;

@@ -1,9 +1,8 @@
 use crate::policy::{AllowPolicy, HttpRoutePermit, Meta, ServerPermit};
 use linkerd_app_core::{
     metrics::{
-        legacy::{Counter, FmtLabels, FmtMetrics},
-        metrics, RouteAuthzLabels, RouteLabels, ServerAuthzLabels, ServerLabel, TargetAddr,
-        TlsAccept,
+        metrics, Counter, FmtLabels, FmtMetrics, RouteAuthzLabels, RouteLabels, ServerAuthzLabels,
+        ServerLabel, TargetAddr, TlsAccept,
     },
     tls,
     transport::OrigDstAddr,
@@ -68,7 +67,7 @@ pub struct HTTPLocalRateLimitLabels {

 #[derive(Debug, Hash, PartialEq, Eq)]
 struct Key<L> {
     target: TargetAddr,
-    tls: tls::ConditionalServerTlsLabels,
+    tls: tls::ConditionalServerTls,
     labels: L,
 }
@@ -81,7 +80,7 @@ type HttpLocalRateLimitKey = Key<HTTPLocalRateLimitLabels>;
 // === impl HttpAuthzMetrics ===

 impl HttpAuthzMetrics {
-    pub fn allow(&self, permit: &HttpRoutePermit, tls: tls::ConditionalServerTlsLabels) {
+    pub fn allow(&self, permit: &HttpRoutePermit, tls: tls::ConditionalServerTls) {
         self.0
             .allow
             .lock()
@@ -94,7 +93,7 @@ impl HttpAuthzMetrics {
         &self,
         labels: ServerLabel,
         dst: OrigDstAddr,
-        tls: tls::ConditionalServerTlsLabels,
+        tls: tls::ConditionalServerTls,
     ) {
         self.0
             .route_not_found
@@ -104,12 +103,7 @@ impl HttpAuthzMetrics {
             .incr();
     }

-    pub fn deny(
-        &self,
-        labels: RouteLabels,
-        dst: OrigDstAddr,
-        tls: tls::ConditionalServerTlsLabels,
-    ) {
+    pub fn deny(&self, labels: RouteLabels, dst: OrigDstAddr, tls: tls::ConditionalServerTls) {
         self.0
             .deny
             .lock()
@@ -122,7 +116,7 @@ impl HttpAuthzMetrics {
         &self,
         labels: HTTPLocalRateLimitLabels,
         dst: OrigDstAddr,
-        tls: tls::ConditionalServerTlsLabels,
+        tls: tls::ConditionalServerTls,
     ) {
         self.0
             .http_local_rate_limit
@@ -193,7 +187,7 @@ impl FmtMetrics for HttpAuthzMetrics {
 // === impl TcpAuthzMetrics ===

 impl TcpAuthzMetrics {
-    pub fn allow(&self, permit: &ServerPermit, tls: tls::ConditionalServerTlsLabels) {
+    pub fn allow(&self, permit: &ServerPermit, tls: tls::ConditionalServerTls) {
         self.0
             .allow
             .lock()
@@ -202,7 +196,7 @@ impl TcpAuthzMetrics {
             .incr();
     }

-    pub fn deny(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) {
+    pub fn deny(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTls) {
         self.0
             .deny
             .lock()
@@ -211,7 +205,7 @@ impl TcpAuthzMetrics {
             .incr();
     }

-    pub fn terminate(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) {
+    pub fn terminate(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTls) {
         self.0
             .terminate
             .lock()
@@ -252,24 +246,18 @@ impl FmtMetrics for TcpAuthzMetrics {

 impl FmtLabels for HTTPLocalRateLimitLabels {
     fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        let Self {
-            server,
-            rate_limit,
-            scope,
-        } = self;
-
-        server.fmt_labels(f)?;
-        if let Some(rl) = rate_limit {
+        self.server.fmt_labels(f)?;
+        if let Some(rl) = &self.rate_limit {
             write!(
                 f,
                 ",ratelimit_group=\"{}\",ratelimit_kind=\"{}\",ratelimit_name=\"{}\",ratelimit_scope=\"{}\"",
                 rl.group(),
                 rl.kind(),
                 rl.name(),
-                scope,
+                self.scope,
             )
         } else {
-            write!(f, ",ratelimit_scope=\"{scope}\"")
+            write!(f, ",ratelimit_scope=\"{}\"", self.scope)
         }
     }
 }
@@ -277,7 +265,7 @@ impl FmtLabels for HTTPLocalRateLimitLabels {
 // === impl Key ===

 impl<L> Key<L> {
-    fn new(labels: L, dst: OrigDstAddr, tls: tls::ConditionalServerTlsLabels) -> Self {
+    fn new(labels: L, dst: OrigDstAddr, tls: tls::ConditionalServerTls) -> Self {
         Self {
             tls,
             target: TargetAddr(dst.into()),
@@ -288,30 +276,24 @@ impl<L> Key<L> {

 impl<L: FmtLabels> FmtLabels for Key<L> {
     fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        let Self {
-            target,
-            tls,
-            labels,
-        } = self;
-
-        (target, (labels, TlsAccept(tls))).fmt_labels(f)
+        (self.target, (&self.labels, TlsAccept(&self.tls))).fmt_labels(f)
     }
 }

 impl ServerKey {
-    fn from_policy(policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) -> Self {
+    fn from_policy(policy: &AllowPolicy, tls: tls::ConditionalServerTls) -> Self {
         Self::new(policy.server_label(), policy.dst_addr(), tls)
     }
 }

 impl RouteAuthzKey {
-    fn from_permit(permit: &HttpRoutePermit, tls: tls::ConditionalServerTlsLabels) -> Self {
+    fn from_permit(permit: &HttpRoutePermit, tls: tls::ConditionalServerTls) -> Self {
         Self::new(permit.labels.clone(), permit.dst, tls)
     }
 }

 impl ServerAuthzKey {
-    fn from_permit(permit: &ServerPermit, tls: tls::ConditionalServerTlsLabels) -> Self {
+    fn from_permit(permit: &ServerPermit, tls: tls::ConditionalServerTls) -> Self {
         Self::new(permit.labels.clone(), permit.dst, tls)
     }
 }

@@ -8,7 +8,7 @@ use crate::{
 };
 use linkerd_app_core::{
     errors::{FailFastError, LoadShedError},
-    metrics::legacy::FmtLabels,
+    metrics::FmtLabels,
     tls,
 };
 use std::fmt;

@@ -1,9 +1,6 @@
 use super::ErrorKind;
 use linkerd_app_core::{
-    metrics::{
-        legacy::{Counter, FmtMetrics},
-        metrics, ServerLabel,
-    },
+    metrics::{metrics, Counter, FmtMetrics, ServerLabel},
     svc::{self, stack::NewMonitor},
     transport::{labels::TargetAddr, OrigDstAddr},
     Error,

@@ -1,9 +1,6 @@
 use super::ErrorKind;
 use linkerd_app_core::{
-    metrics::{
-        legacy::{Counter, FmtMetrics},
-        metrics,
-    },
+    metrics::{metrics, Counter, FmtMetrics},
     svc::{self, stack::NewMonitor},
     transport::{labels::TargetAddr, OrigDstAddr},
     Error,

@@ -33,7 +33,7 @@ static INVALID_POLICY: once_cell::sync::OnceCell<ServerPolicy> = once_cell::sync

 impl<S> Api<S>
 where
-    S: tonic::client::GrpcService<tonic::body::Body, Error = Error> + Clone,
+    S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error> + Clone,
     S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
 {
     pub(super) fn new(

@@ -57,7 +57,7 @@ where

 impl<S> Service<u16> for Api<S>
 where
-    S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+    S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
     S: Clone + Send + Sync + 'static,
     S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
     S::Future: Send + 'static,
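Aside: the repeated bound change above (and in several hunks below) is a single API rename between tonic versions. A minimal sketch of the pattern, where `PolicyApi` is a hypothetical stand-in rather than a type from this diff:

```rust
// Sketch only: `PolicyApi` is hypothetical. Under tonic 0.12 (the release
// branch) the gRPC request-body type is `tonic::body::BoxBody`; newer tonic
// renames it to `tonic::body::Body`. Only the bound changes; callers do not.
use tonic::body::BoxBody;

pub struct PolicyApi<S> {
    client: S,
}

impl<S> PolicyApi<S>
where
    S: tonic::client::GrpcService<BoxBody> + Clone,
{
    pub fn new(client: S) -> Self {
        Self { client }
    }

    pub fn client(&self) -> &S {
        &self.client
    }
}
```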
@@ -40,7 +40,7 @@ impl Config {
         limits: ReceiveLimits,
     ) -> impl GetPolicy + Clone + Send + Sync + 'static
     where
-        C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+        C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
         C: Clone + Unpin + Send + Sync + 'static,
         C::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error>,
         C::ResponseBody: Send + 'static,

@@ -248,11 +248,8 @@ impl<T, N> HttpPolicyService<T, N> {
                     );
                 }
             }
-            self.metrics.deny(
-                labels,
-                self.connection.dst,
-                self.connection.tls.as_ref().map(|t| t.labels()),
-            );
+            self.metrics
+                .deny(labels, self.connection.dst, self.connection.tls.clone());
             return Err(HttpRouteUnauthorized(()).into());
         }
     };

@@ -282,19 +279,14 @@ impl<T, N> HttpPolicyService<T, N> {
             }
         };

-        self.metrics
-            .allow(&permit, self.connection.tls.as_ref().map(|t| t.labels()));
+        self.metrics.allow(&permit, self.connection.tls.clone());

         Ok((permit, r#match, route))
     }

     fn mk_route_not_found(&self) -> Error {
         let labels = self.policy.server_label();
-        self.metrics.route_not_found(
-            labels,
-            self.connection.dst,
-            self.connection.tls.as_ref().map(|t| t.labels()),
-        );
+        self.metrics
+            .route_not_found(labels, self.connection.dst, self.connection.tls.clone());
         HttpRouteNotFound(()).into()
     }

@@ -314,7 +306,7 @@ impl<T, N> HttpPolicyService<T, N> {
             self.metrics.ratelimit(
                 self.policy.ratelimit_label(&err),
                 self.connection.dst,
-                self.connection.tls.as_ref().map(|t| t.labels()),
+                self.connection.tls.clone(),
             );
             err.into()
         })
@@ -74,7 +74,7 @@ impl<S> Store<S> {
         opaque_ports: RangeInclusiveSet<u16>,
     ) -> Self
     where
-        S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+        S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
         S: Clone + Send + Sync + 'static,
         S::Future: Send,
         S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,

@@ -138,7 +138,7 @@ impl<S> Store<S> {

 impl<S> GetPolicy for Store<S>
 where
-    S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+    S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
     S: Clone + Send + Sync + 'static,
     S::Future: Send,
     S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,

@@ -77,8 +77,7 @@ where
                 // This new services requires a ClientAddr, so it must necessarily be built for each
                 // connection. So we can just increment the counter here since the service can only
                 // be used at most once.
-                self.metrics
-                    .allow(&permit, tls.as_ref().map(|t| t.labels()));
+                self.metrics.allow(&permit, tls.clone());

                 let inner = self.inner.new_service((permit, target));
                 TcpPolicy::Authorized(Authorized {

@@ -98,7 +97,7 @@ where
                     ?tls, %client,
                     "Connection denied"
                 );
-                self.metrics.deny(&policy, tls.as_ref().map(|t| t.labels()));
+                self.metrics.deny(&policy, tls);
                 TcpPolicy::Unauthorized(deny)
             }
         }

@@ -168,7 +167,7 @@ where
                         %client,
                         "Connection terminated due to policy change",
                     );
-                    metrics.terminate(&policy, tls.as_ref().map(|t| t.labels()));
+                    metrics.terminate(&policy, tls);
                     return Err(denied.into());
                 }
             }

@@ -263,7 +263,7 @@ fn orig_dst_addr() -> OrigDstAddr {
     OrigDstAddr(([192, 0, 2, 2], 1000).into())
 }

-impl tonic::client::GrpcService<tonic::body::Body> for MockSvc {
+impl tonic::client::GrpcService<tonic::body::BoxBody> for MockSvc {
     type ResponseBody = linkerd_app_core::control::RspBody;
     type Error = Error;
     type Future = futures::future::Pending<Result<http::Response<Self::ResponseBody>, Self::Error>>;

@@ -275,7 +275,7 @@ impl tonic::client::GrpcService<tonic::body::Body> for MockSvc {
         unreachable!()
     }

-    fn call(&mut self, _req: http::Request<tonic::body::Body>) -> Self::Future {
+    fn call(&mut self, _req: http::Request<tonic::body::BoxBody>) -> Self::Future {
         unreachable!()
     }
 }

@@ -27,7 +27,7 @@ impl Inbound<()> {
         limits: ReceiveLimits,
     ) -> impl policy::GetPolicy + Clone + Send + Sync + 'static
     where
-        C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+        C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
         C: Clone + Unpin + Send + Sync + 'static,
         C::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error>,
         C::ResponseBody: Send + 'static,
@@ -3,7 +3,9 @@ pub use futures::prelude::*;
 use linkerd_app_core::{
     config,
     dns::Suffix,
-    drain, exp_backoff, identity, metrics,
+    drain, exp_backoff,
+    identity::rustls,
+    metrics,
     proxy::{
         http::{h1, h2},
         tap,

@@ -96,7 +98,7 @@ pub fn runtime() -> (ProxyRuntime, drain::Signal) {
     let (tap, _) = tap::new();
     let (metrics, _) = metrics::Metrics::new(std::time::Duration::from_secs(10));
     let runtime = ProxyRuntime {
-        identity: identity::creds::default_for_test().1,
+        identity: rustls::creds::default_for_test().1.into(),
         metrics: metrics.proxy,
         tap,
         span_sink: None,
@@ -23,50 +23,38 @@ h2 = { workspace = true }
 http = { workspace = true }
 http-body = { workspace = true }
 http-body-util = { workspace = true }
+hyper = { workspace = true, features = [
+    "http1",
+    "http2",
+    "client",
+    "server",
+] }
 hyper-util = { workspace = true, features = ["service"] }
 ipnet = "2"
 linkerd-app = { path = "..", features = ["allow-loopback"] }
 linkerd-app-core = { path = "../core" }
-linkerd-app-test = { path = "../test" }
-linkerd-meshtls = { path = "../../meshtls", features = ["test-util"] }
 linkerd-metrics = { path = "../../metrics", features = ["test_util"] }
-linkerd-rustls = { path = "../../rustls" }
+linkerd2-proxy-api = { workspace = true, features = [
+    "destination",
+    "arbitrary",
+] }
+linkerd-app-test = { path = "../test" }
 linkerd-tracing = { path = "../../tracing" }
 maplit = "1"
 parking_lot = "0.12"
 regex = "1"
-rustls-pemfile = "2.2"
-socket2 = "0.6"
+socket2 = "0.5"
 tokio = { version = "1", features = ["io-util", "net", "rt", "macros"] }
-tokio-rustls = { workspace = true }
 tokio-stream = { version = "0.1", features = ["sync"] }
-tonic = { workspace = true, features = ["transport", "router"], default-features = false }
+tokio-rustls = { workspace = true }
+rustls-pemfile = "2.2"
 tower = { workspace = true, default-features = false }
-tracing = { workspace = true }
-
-[dependencies.hyper]
-workspace = true
-features = [
-    "client",
-    "http1",
-    "http2",
-    "server",
-]
-
-[dependencies.linkerd2-proxy-api]
-workspace = true
-features = [
-    "arbitrary",
-    "destination",
-]
-
-[dependencies.tracing-subscriber]
-version = "0.3"
-default-features = false
-features = [
+tonic = { workspace = true, features = ["transport"], default-features = false }
+tracing = "0.1"
+tracing-subscriber = { version = "0.3", default-features = false, features = [
     "fmt",
     "std",
-]
+] }

 [dev-dependencies]
 flate2 = { version = "1", default-features = false, features = [

@@ -74,5 +62,8 @@ flate2 = { version = "1", default-features = false, features = [
 ] }
 # Log streaming isn't enabled by default globally, but we want to test it.
 linkerd-app-admin = { path = "../admin", features = ["log-streaming"] }
+# No code from this crate is actually used; only necessary to enable the Rustls
+# implementation.
+linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
 linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
 serde_json = "1"
@@ -262,7 +262,10 @@ impl pb::destination_server::Destination for Controller {
         }

         tracing::warn!(?dst, ?updates, "request does not match");
-        let msg = format!("expected get call for {dst:?} but got get call for {req:?}");
+        let msg = format!(
+            "expected get call for {:?} but got get call for {:?}",
+            dst, req
+        );
         calls.push_front(Dst::Call(dst, updates));
         return Err(grpc::Status::new(grpc::Code::Unavailable, msg));
     }
@@ -8,8 +8,7 @@ use std::{
 };

 use linkerd2_proxy_api::identity as pb;
-use linkerd_rustls::get_default_provider;
-use tokio_rustls::rustls::{self, server::WebPkiClientVerifier};
+use tokio_rustls::rustls::{self, pki_types::CertificateDer, server::WebPkiClientVerifier};
 use tonic as grpc;

 pub struct Identity {

@@ -35,6 +34,10 @@ type Certify = Box<
     > + Send,
 >;

+static TLS_VERSIONS: &[&rustls::SupportedProtocolVersion] = &[&rustls::version::TLS13];
+static TLS_SUPPORTED_CIPHERSUITES: &[rustls::SupportedCipherSuite] =
+    &[rustls::crypto::ring::cipher_suite::TLS13_CHACHA20_POLY1305_SHA256];
+
 struct Certificates {
     pub leaf: Vec<u8>,
     pub intermediates: Vec<Vec<u8>>,

@@ -51,13 +54,13 @@ impl Certificates {
         let leaf = certs
             .next()
             .expect("no leaf cert in pemfile")
-            .map_err(|_| io::Error::other("rustls error reading certs"))?
+            .map_err(|_| io::Error::new(io::ErrorKind::Other, "rustls error reading certs"))?
             .as_ref()
             .to_vec();
         let intermediates = certs
             .map(|cert| cert.map(|cert| cert.as_ref().to_vec()))
             .collect::<Result<Vec<_>, _>>()
-            .map_err(|_| io::Error::other("rustls error reading certs"))?;
+            .map_err(|_| io::Error::new(io::ErrorKind::Other, "rustls error reading certs"))?;

         Ok(Certificates {
             leaf,

@@ -101,16 +104,19 @@ impl Identity {
         use std::io::Cursor;
         let mut roots = rustls::RootCertStore::empty();
         let trust_anchors = rustls_pemfile::certs(&mut Cursor::new(trust_anchors))
+            .map(|bytes| bytes.map(CertificateDer::from))
             .collect::<Result<Vec<_>, _>>()
             .expect("error parsing pemfile");
         let (added, skipped) = roots.add_parsable_certificates(trust_anchors);
         assert_ne!(added, 0, "trust anchors must include at least one cert");
         assert_eq!(skipped, 0, "no certs in pemfile should be invalid");

-        let provider = get_default_provider();
+        let mut provider = rustls::crypto::ring::default_provider();
+        provider.cipher_suites = TLS_SUPPORTED_CIPHERSUITES.to_vec();
+        let provider = Arc::new(provider);

         let client_config = rustls::ClientConfig::builder_with_provider(provider.clone())
-            .with_safe_default_protocol_versions()
+            .with_protocol_versions(TLS_VERSIONS)
             .expect("client config must be valid")
             .with_root_certificates(roots.clone())
             .with_no_client_auth();

@@ -122,7 +128,7 @@ impl Identity {
             .expect("server verifier must be valid");

         let server_config = rustls::ServerConfig::builder_with_provider(provider)
-            .with_safe_default_protocol_versions()
+            .with_protocol_versions(TLS_VERSIONS)
             .expect("server config must be valid")
             .with_client_cert_verifier(client_cert_verifier)
             .with_single_cert(certs.chain(), key)
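For readers following the TLS changes: the release branch's `+` lines above amount to pinning TLS 1.3 with a single cipher suite through a custom ring `CryptoProvider`. A condensed, self-contained sketch of that setup under rustls 0.23, assuming a pre-populated `RootCertStore`:

```rust
// Sketch condensed from the `+` side of the hunks above (rustls 0.23 + ring).
// `roots` is assumed to already contain the trust anchors.
use std::sync::Arc;
use tokio_rustls::rustls;

static TLS_VERSIONS: &[&rustls::SupportedProtocolVersion] = &[&rustls::version::TLS13];
static TLS_SUPPORTED_CIPHERSUITES: &[rustls::SupportedCipherSuite] =
    &[rustls::crypto::ring::cipher_suite::TLS13_CHACHA20_POLY1305_SHA256];

fn client_config(roots: rustls::RootCertStore) -> rustls::ClientConfig {
    // Start from ring's defaults, then restrict the cipher suites.
    let mut provider = rustls::crypto::ring::default_provider();
    provider.cipher_suites = TLS_SUPPORTED_CIPHERSUITES.to_vec();
    rustls::ClientConfig::builder_with_provider(Arc::new(provider))
        .with_protocol_versions(TLS_VERSIONS)
        .expect("client config must be valid")
        .with_root_certificates(roots)
        .with_no_client_auth()
}
```

Main instead takes the shared provider from `linkerd_rustls::get_default_provider()` and the safe default protocol versions, which is why both the statics and the builder calls disappear on the `-` side.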
@@ -213,7 +219,7 @@ impl Controller {
             let f = f.take().expect("called twice?");
             let fut = f(req)
                 .map_ok(grpc::Response::new)
-                .map_err(|e| grpc::Status::new(grpc::Code::Internal, format!("{e}")));
+                .map_err(|e| grpc::Status::new(grpc::Code::Internal, format!("{}", e)));
             Box::pin(fut)
         });
         self.expect_calls.lock().push_back(func);

@@ -3,7 +3,6 @@
 #![warn(rust_2018_idioms, clippy::disallowed_methods, clippy::disallowed_types)]
 #![forbid(unsafe_code)]
 #![recursion_limit = "256"]
-#![allow(clippy::result_large_err)]

 mod test_env;

@@ -248,7 +247,7 @@ impl fmt::Display for HumanDuration {
         let secs = self.0.as_secs();
         let subsec_ms = self.0.subsec_nanos() as f64 / 1_000_000f64;
         if secs == 0 {
-            write!(fmt, "{subsec_ms}ms")
+            write!(fmt, "{}ms", subsec_ms)
         } else {
             write!(fmt, "{}s", secs as f64 + subsec_ms)
         }
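Many hunks in this range differ only in format-string style: main uses Rust 2021 inline captures, while the release branch passes arguments positionally. The two forms are equivalent, as this small check confirms:

```rust
fn main() {
    let subsec_ms = 12.5;
    // Inline capture (main) and positional argument (release) render identically.
    assert_eq!(format!("{subsec_ms}ms"), format!("{}ms", subsec_ms));
}
```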
@@ -302,7 +302,7 @@ impl Controller {
     }

     pub async fn run(self) -> controller::Listening {
-        let routes = grpc::service::Routes::default()
+        let svc = grpc::transport::Server::builder()
             .add_service(
                 inbound_server_policies_server::InboundServerPoliciesServer::new(Server(Arc::new(
                     self.inbound,

@@ -310,9 +310,9 @@ impl Controller {
             )
             .add_service(outbound_policies_server::OutboundPoliciesServer::new(
                 Server(Arc::new(self.outbound)),
-            ));
-        controller::run(RoutesSvc(routes), "support policy controller", None).await
+            ))
+            .into_service();
+        controller::run(RoutesSvc(svc), "support policy controller", None).await
     }
 }

@@ -525,9 +525,7 @@ impl Service<Request<hyper::body::Incoming>> for RoutesSvc {

     fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
         let Self(routes) = self;
-        <grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::poll_ready(
-            routes, cx,
-        )
+        routes.poll_ready(cx)
     }

     fn call(&mut self, req: Request<hyper::body::Incoming>) -> Self::Future {

@@ -108,7 +108,7 @@ impl fmt::Debug for MockOrigDst {
         match self {
             Self::Addr(addr) => f
                 .debug_tuple("MockOrigDst::Addr")
-                .field(&format_args!("{addr}"))
+                .field(&format_args!("{}", addr))
                 .finish(),
             Self::Direct => f.debug_tuple("MockOrigDst::Direct").finish(),
             Self::None => f.debug_tuple("MockOrigDst::None").finish(),

@@ -416,9 +416,9 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
     use std::fmt::Write;
     let mut ports = inbound_default_ports.iter();
     if let Some(port) = ports.next() {
-        let mut var = format!("{port}");
+        let mut var = format!("{}", port);
         for port in ports {
-            write!(&mut var, ",{port}").expect("writing to String should never fail");
+            write!(&mut var, ",{}", port).expect("writing to String should never fail");
         }
         info!("{}={:?}", app::env::ENV_INBOUND_PORTS, var);
         env.put(app::env::ENV_INBOUND_PORTS, var);

@@ -137,28 +137,28 @@ impl TapEventExt for pb::TapEvent {
     fn request_init_authority(&self) -> &str {
         match self.event() {
             pb::tap_event::http::Event::RequestInit(ev) => &ev.authority,
-            e => panic!("not RequestInit event: {e:?}"),
+            e => panic!("not RequestInit event: {:?}", e),
         }
     }

     fn request_init_path(&self) -> &str {
         match self.event() {
             pb::tap_event::http::Event::RequestInit(ev) => &ev.path,
-            e => panic!("not RequestInit event: {e:?}"),
+            e => panic!("not RequestInit event: {:?}", e),
         }
     }

     fn response_init_status(&self) -> u16 {
         match self.event() {
             pb::tap_event::http::Event::ResponseInit(ev) => ev.http_status as u16,
-            e => panic!("not ResponseInit event: {e:?}"),
+            e => panic!("not ResponseInit event: {:?}", e),
         }
     }

     fn response_end_bytes(&self) -> u64 {
         match self.event() {
             pb::tap_event::http::Event::ResponseEnd(ev) => ev.response_bytes,
-            e => panic!("not ResponseEnd event: {e:?}"),
+            e => panic!("not ResponseEnd event: {:?}", e),
         }
     }

@@ -170,7 +170,7 @@ impl TapEventExt for pb::TapEvent {
                 }) => code,
                 _ => panic!("not Eos GrpcStatusCode: {:?}", ev.eos),
             },
-            ev => panic!("not ResponseEnd event: {ev:?}"),
+            ev => panic!("not ResponseEnd event: {:?}", ev),
         }
     }
 }

@@ -381,7 +381,7 @@ mod cross_version {
     }

     fn default_dst_name(port: u16) -> String {
-        format!("{HOST}:{port}")
+        format!("{}:{}", HOST, port)
     }

     fn send_default_dst(

@@ -63,7 +63,7 @@ async fn wait_for_profile_stage(client: &client::Client, metrics: &client::Clien
     for _ in 0i32..10 {
         assert_eq!(client.get("/load-profile").await, "");
         let m = metrics.get("/metrics").await;
-        let stage_metric = format!("rt_load_profile=\"{stage}\"");
+        let stage_metric = format!("rt_load_profile=\"{}\"", stage);
         if m.contains(stage_metric.as_str()) {
             break;
         }
@@ -88,12 +88,12 @@ impl TestBuilder {
         let port = srv.addr.port();
         let ctrl = controller::new();

-        let dst_tx = ctrl.destination_tx(format!("{host}:{port}"));
+        let dst_tx = ctrl.destination_tx(format!("{}:{}", host, port));
         dst_tx.send_addr(srv.addr);

         let ctrl = controller::new();

-        let dst_tx = ctrl.destination_tx(format!("{host}:{port}"));
+        let dst_tx = ctrl.destination_tx(format!("{}:{}", host, port));
         dst_tx.send_addr(srv.addr);

         let profile_tx = ctrl.profile_tx(srv.addr.to_string());

@@ -1318,9 +1318,9 @@ async fn metrics_compression() {
                 body.copy_to_bytes(body.remaining()),
             ));
             let mut scrape = String::new();
-            decoder
-                .read_to_string(&mut scrape)
-                .unwrap_or_else(|_| panic!("decode gzip (requested Accept-Encoding: {encoding})"));
+            decoder.read_to_string(&mut scrape).unwrap_or_else(|_| {
+                panic!("decode gzip (requested Accept-Encoding: {})", encoding)
+            });
             scrape
         }
     };
@@ -26,7 +26,7 @@ async fn is_valid_json() {
     assert!(!json.is_empty());

     for obj in json {
-        println!("{obj}\n");
+        println!("{}\n", obj);
     }
 }

@@ -53,7 +53,7 @@ async fn query_is_valid_json() {
     assert!(!json.is_empty());

     for obj in json {
-        println!("{obj}\n");
+        println!("{}\n", obj);
     }
 }

@@ -74,9 +74,12 @@ async fn valid_get_does_not_error() {

     let json = logs.await.unwrap();
     for obj in json {
-        println!("{obj}\n");
+        println!("{}\n", obj);
         if obj.get("error").is_some() {
-            panic!("expected the log stream to contain no error responses!\njson = {obj}");
+            panic!(
+                "expected the log stream to contain no error responses!\njson = {}",
+                obj
+            );
         }
     }
 }

@@ -98,9 +101,12 @@ async fn valid_query_does_not_error() {

     let json = logs.await.unwrap();
     for obj in json {
-        println!("{obj}\n");
+        println!("{}\n", obj);
         if obj.get("error").is_some() {
-            panic!("expected the log stream to contain no error responses!\njson = {obj}");
+            panic!(
+                "expected the log stream to contain no error responses!\njson = {}",
+                obj
+            );
         }
     }
 }
@@ -136,7 +142,9 @@ async fn multi_filter() {
                 level.and_then(|value| value.as_str()),
                 Some("DEBUG") | Some("INFO") | Some("WARN") | Some("ERROR")
             ),
-            "level must be DEBUG, INFO, WARN, or ERROR\n level: {level:?}\n json: {obj:#?}"
+            "level must be DEBUG, INFO, WARN, or ERROR\n level: {:?}\n json: {:#?}",
+            level,
+            obj
         );
     }

@@ -167,7 +175,7 @@ async fn get_log_stream(
     let req = client
         .request_body(
             client
-                .request_builder(&format!("{PATH}?{filter}"))
+                .request_builder(&format!("{}?{}", PATH, filter))
                 .method(http::Method::GET)
                 .body(http_body_util::Full::new(Bytes::from(filter)))
                 .unwrap(),

@@ -223,7 +231,7 @@ where
                 }
             }
             Err(e) => {
-                println!("body failed: {e}");
+                println!("body failed: {}", e);
                 break;
             }
         };
@@ -80,7 +80,10 @@ impl Test {
             .await
         };

-        env.put(app::env::ENV_INBOUND_DETECT_TIMEOUT, format!("{TIMEOUT:?}"));
+        env.put(
+            app::env::ENV_INBOUND_DETECT_TIMEOUT,
+            format!("{:?}", TIMEOUT),
+        );

         (self.set_env)(&mut env);

@@ -124,6 +127,26 @@ async fn inbound_timeout() {
         .await;
 }

+/// Tests that the detect metric is labeled and incremented on I/O error.
+#[tokio::test]
+async fn inbound_io_err() {
+    let _trace = trace_init();
+
+    let (proxy, metrics) = Test::default().run().await;
+    let client = crate::tcp::client(proxy.inbound);
+
+    let tcp_client = client.connect().await;
+
+    tcp_client.write(TcpFixture::HELLO_MSG).await;
+    drop(tcp_client);
+
+    metric(&proxy)
+        .label("error", "i/o")
+        .value(1u64)
+        .assert_in(&metrics)
+        .await;
+}
+
 /// Tests that the detect metric is not incremented when TLS is successfully
 /// detected.
 #[tokio::test]
|
@ -169,6 +192,44 @@ async fn inbound_success() {
|
||||||
metric.assert_in(&metrics).await;
|
metric.assert_in(&metrics).await;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Tests both of the above cases together.
|
||||||
|
#[tokio::test]
|
||||||
|
async fn inbound_multi() {
|
||||||
|
let _trace = trace_init();
|
||||||
|
|
||||||
|
let (proxy, metrics) = Test::default().run().await;
|
||||||
|
let client = crate::tcp::client(proxy.inbound);
|
||||||
|
|
||||||
|
let metric = metric(&proxy);
|
||||||
|
let timeout_metric = metric.clone().label("error", "tls detection timeout");
|
||||||
|
let io_metric = metric.label("error", "i/o");
|
||||||
|
|
||||||
|
let tcp_client = client.connect().await;
|
||||||
|
|
||||||
|
tokio::time::sleep(TIMEOUT + Duration::from_millis(15)) // just in case
|
||||||
|
.await;
|
||||||
|
|
||||||
|
timeout_metric.clone().value(1u64).assert_in(&metrics).await;
|
||||||
|
drop(tcp_client);
|
||||||
|
|
||||||
|
let tcp_client = client.connect().await;
|
||||||
|
|
||||||
|
tcp_client.write(TcpFixture::HELLO_MSG).await;
|
||||||
|
drop(tcp_client);
|
||||||
|
|
||||||
|
io_metric.clone().value(1u64).assert_in(&metrics).await;
|
||||||
|
timeout_metric.clone().value(1u64).assert_in(&metrics).await;
|
||||||
|
|
||||||
|
let tcp_client = client.connect().await;
|
||||||
|
|
||||||
|
tokio::time::sleep(TIMEOUT + Duration::from_millis(15)) // just in case
|
||||||
|
.await;
|
||||||
|
|
||||||
|
io_metric.clone().value(1u64).assert_in(&metrics).await;
|
||||||
|
timeout_metric.clone().value(2u64).assert_in(&metrics).await;
|
||||||
|
drop(tcp_client);
|
||||||
|
}
|
||||||
|
|
||||||
/// Tests that TLS detect failure metrics are collected for the direct stack.
|
/// Tests that TLS detect failure metrics are collected for the direct stack.
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
async fn inbound_direct_multi() {
|
async fn inbound_direct_multi() {
|
||||||
|
|
|
@@ -13,7 +13,7 @@ Configures and runs the outbound proxy
 default = []
 allow-loopback = []
 test-subscriber = []
-test-util = ["linkerd-app-test", "linkerd-meshtls/test-util", "dep:http-body"]
+test-util = ["linkerd-app-test", "linkerd-meshtls-rustls/test-util", "dep:http-body"]

 prometheus-client-rust-242 = [] # TODO

@@ -32,7 +32,7 @@ thiserror = "2"
 tokio = { version = "1", features = ["sync"] }
 tonic = { workspace = true, default-features = false }
 tower = { workspace = true, features = ["util"] }
-tracing = { workspace = true }
+tracing = "0.1"

 linkerd-app-core = { path = "../core" }
 linkerd-app-test = { path = "../test", optional = true }

@@ -42,7 +42,7 @@ linkerd-http-prom = { path = "../../http/prom" }
 linkerd-http-retry = { path = "../../http/retry" }
 linkerd-http-route = { path = "../../http/route" }
 linkerd-identity = { path = "../../identity" }
-linkerd-meshtls = { path = "../../meshtls", optional = true, default-features = false }
+linkerd-meshtls-rustls = { path = "../../meshtls/rustls", optional = true }
 linkerd-opaq-route = { path = "../../opaq-route" }
 linkerd-proxy-client-policy = { path = "../../proxy/client-policy", features = [
     "proto",

@@ -67,10 +67,10 @@ linkerd-app-test = { path = "../test", features = ["client-policy"] }
 linkerd-http-box = { path = "../../http/box" }
 linkerd-http-prom = { path = "../../http/prom", features = ["test-util"] }
 linkerd-io = { path = "../../io", features = ["tokio-test"] }
-linkerd-meshtls = { path = "../../meshtls", features = [
+linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
+linkerd-meshtls-rustls = { path = "../../meshtls/rustls", features = [
     "test-util",
 ] }
-linkerd-mock-http-body = { path = "../../mock/http-body" }
 linkerd-stack = { path = "../../stack", features = ["test-util"] }
 linkerd-tracing = { path = "../../tracing", features = ["ansi"] }

@@ -134,13 +134,7 @@ impl<N> Outbound<N> {
                     .unwrap_or_else(|| (orig_dst, Default::default()));
                 // TODO(ver) We should be able to figure out resource coordinates for
                 // the endpoint?
-                synthesize_forward_policy(
-                    &META,
-                    detect_timeout,
-                    queue,
-                    addr,
-                    meta.into(),
-                )
+                synthesize_forward_policy(&META, detect_timeout, queue, addr, meta)
             },
         );
         return Ok((Some(profile), policy));
@@ -195,7 +189,7 @@ pub fn synthesize_forward_policy(
     timeout: Duration,
     queue: policy::Queue,
     addr: SocketAddr,
-    metadata: Arc<policy::EndpointMetadata>,
+    metadata: policy::EndpointMetadata,
 ) -> ClientPolicy {
     policy_for_backend(
         meta,

@@ -32,7 +32,7 @@ pub use self::balance::BalancerMetrics;
 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub enum Dispatch {
     Balance(NameAddr, EwmaConfig),
-    Forward(Remote<ServerAddr>, Arc<Metadata>),
+    Forward(Remote<ServerAddr>, Metadata),
     /// A backend dispatcher that explicitly fails all requests.
     Fail {
         message: Arc<str>,

@@ -49,7 +49,7 @@ pub struct DispatcherFailed(Arc<str>);
 pub struct Endpoint<T> {
     addr: Remote<ServerAddr>,
     is_local: bool,
-    metadata: Arc<Metadata>,
+    metadata: Metadata,
     parent: T,
     queue: QueueConfig,
     close_server_connection_on_remote_proxy_error: bool,
@@ -279,13 +279,6 @@ impl<T> svc::Param<tls::ConditionalClientTls> for Endpoint<T> {
     }
 }

-impl<T> svc::Param<tls::ConditionalClientTlsLabels> for Endpoint<T> {
-    fn param(&self) -> tls::ConditionalClientTlsLabels {
-        let tls: tls::ConditionalClientTls = self.param();
-        tls.as_ref().map(tls::ClientTls::labels)
-    }
-}
-
 impl<T> svc::Param<http::Variant> for Endpoint<T>
 where
     T: svc::Param<http::Variant>,

@@ -121,7 +121,7 @@ where
         let http2 = http2.override_from(metadata.http2_client_params());
         Endpoint {
             addr: Remote(ServerAddr(addr)),
-            metadata: metadata.into(),
+            metadata,
             is_local,
             parent: target.parent,
             queue: http_queue,

@@ -289,12 +289,6 @@ impl svc::Param<tls::ConditionalClientTls> for Endpoint {
     }
 }

-impl svc::Param<tls::ConditionalClientTlsLabels> for Endpoint {
-    fn param(&self) -> tls::ConditionalClientTlsLabels {
-        tls::ConditionalClientTlsLabels::None(tls::NoClientTls::Disabled)
-    }
-}
-
 impl svc::Param<Option<tcp::tagged_transport::PortOverride>> for Endpoint {
     fn param(&self) -> Option<tcp::tagged_transport::PortOverride> {
         None
@@ -8,7 +8,7 @@ use linkerd_app_core::{
     transport::addrs::*,
     Addr, Error, Infallible, NameAddr, CANONICAL_DST_HEADER,
 };
-use std::{fmt::Debug, hash::Hash, sync::Arc};
+use std::{fmt::Debug, hash::Hash};
 use tokio::sync::watch;

 pub mod policy;

@@ -32,7 +32,7 @@ pub enum Routes {

     /// Fallback endpoint forwarding.
     // TODO(ver) Remove this variant when policy routes are fully wired up.
-    Endpoint(Remote<ServerAddr>, Arc<Metadata>),
+    Endpoint(Remote<ServerAddr>, Metadata),
 }

 #[derive(Clone, Debug, PartialEq, Eq, Hash)]

@@ -64,7 +64,7 @@ enum RouterParams<T: Clone + Debug + Eq + Hash> {
     Profile(profile::Params<T>),

     // TODO(ver) Remove this variant when policy routes are fully wired up.
-    Endpoint(Remote<ServerAddr>, Arc<Metadata>, T),
+    Endpoint(Remote<ServerAddr>, Metadata, T),
 }

 // Only applies to requests with profiles.
@@ -4,9 +4,6 @@ use super::{
     test_util::*,
     LabelGrpcRouteRsp, LabelHttpRouteRsp, RequestMetrics,
 };
-use bytes::{Buf, Bytes};
-use http_body::Body;
-use http_body_util::BodyExt;
 use linkerd_app_core::{
     dns,
     svc::{

@@ -17,10 +14,6 @@ use linkerd_app_core::{
 };
 use linkerd_http_prom::body_data::request::RequestBodyFamilies;
 use linkerd_proxy_client_policy as policy;
-use std::task::Poll;
-
-static GRPC_STATUS: http::HeaderName = http::HeaderName::from_static("grpc-status");
-static GRPC_STATUS_OK: http::HeaderValue = http::HeaderValue::from_static("0");

 #[tokio::test(flavor = "current_thread", start_paused = true)]
 async fn http_request_statuses() {
@@ -527,160 +520,6 @@ async fn http_route_request_body_frames() {
     tracing::info!("passed");
 }

-#[tokio::test(flavor = "current_thread", start_paused = true)]
-async fn http_response_body_drop_on_eos() {
-    use linkerd_app_core::svc::{Service, ServiceExt};
-
-    const EXPORT_HOSTNAME_LABELS: bool = false;
-    let _trace = linkerd_tracing::test::trace_init();
-
-    let super::HttpRouteMetrics {
-        requests,
-        body_data,
-        ..
-    } = super::HttpRouteMetrics::default();
-    let parent_ref = crate::ParentRef(policy::Meta::new_default("parent"));
-    let route_ref = crate::RouteRef(policy::Meta::new_default("route"));
-    let (mut svc, mut handle) = mock_http_route_metrics(
-        &requests,
-        &body_data,
-        &parent_ref,
-        &route_ref,
-        EXPORT_HOSTNAME_LABELS,
-    );
-
-    // Define a request and a response.
-    let req = http::Request::default();
-    let rsp = http::Response::builder()
-        .status(200)
-        .body(BoxBody::from_static("contents"))
-        .unwrap();
-
-    // Two counters for 200 responses that do/don't have an error.
-    let ok = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::HttpRsp {
-            status: Some(http::StatusCode::OK),
-            error: None,
-        },
-    ));
-    let err = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::HttpRsp {
-            status: Some(http::StatusCode::OK),
-            error: Some(labels::Error::Unknown),
-        },
-    ));
-    debug_assert_eq!(ok.get(), 0);
-    debug_assert_eq!(err.get(), 0);
-
-    // Send the request, and obtain the response.
-    let mut body = {
-        handle.allow(1);
-        svc.ready().await.expect("ready");
-        let mut call = svc.call(req);
-        let (_req, tx) = tokio::select! {
-            _ = (&mut call) => unreachable!(),
-            res = handle.next_request() => res.unwrap(),
-        };
-        assert_eq!(ok.get(), 0);
-        tx.send_response(rsp);
-        call.await.unwrap().into_body()
-    };
-
-    // The counters are not incremented yet.
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 0);
-
-    // Poll a frame out of the body.
-    let data = body
-        .frame()
-        .await
-        .expect("yields a result")
-        .expect("yields a frame")
-        .into_data()
-        .ok()
-        .expect("yields data");
-    assert_eq!(data.chunk(), "contents".as_bytes());
-    assert_eq!(data.remaining(), "contents".len());
-
-    // Show that the body reports itself as being complete.
-    debug_assert!(body.is_end_stream());
-    assert_eq!(ok.get(), 1);
-    assert_eq!(err.get(), 0);
-}
-
-#[tokio::test(flavor = "current_thread", start_paused = true)]
-async fn http_response_body_drop_early() {
-    use linkerd_app_core::svc::{Service, ServiceExt};
-
-    const EXPORT_HOSTNAME_LABELS: bool = false;
-    let _trace = linkerd_tracing::test::trace_init();
-
-    let super::HttpRouteMetrics {
-        requests,
-        body_data,
-        ..
-    } = super::HttpRouteMetrics::default();
-    let parent_ref = crate::ParentRef(policy::Meta::new_default("parent"));
-    let route_ref = crate::RouteRef(policy::Meta::new_default("route"));
-    let (mut svc, mut handle) = mock_http_route_metrics(
-        &requests,
-        &body_data,
-        &parent_ref,
-        &route_ref,
-        EXPORT_HOSTNAME_LABELS,
-    );
-
-    // Define a request and a response.
-    let req = http::Request::default();
-    let rsp = http::Response::builder()
-        .status(200)
-        .body(BoxBody::from_static("contents"))
-        .unwrap();
-
-    // Two counters for 200 responses that do/don't have an error.
-    let ok = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::HttpRsp {
-            status: Some(http::StatusCode::OK),
-            error: None,
-        },
-    ));
-    let err = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::HttpRsp {
-            status: Some(http::StatusCode::OK),
-            error: Some(labels::Error::Unknown),
-        },
-    ));
-    debug_assert_eq!(ok.get(), 0);
-    debug_assert_eq!(err.get(), 0);
-
-    // Send the request, and obtain the response.
-    let body = {
-        handle.allow(1);
-        svc.ready().await.expect("ready");
-        let mut call = svc.call(req);
-        let (_req, tx) = tokio::select! {
-            _ = (&mut call) => unreachable!(),
-            res = handle.next_request() => res.unwrap(),
-        };
-        assert_eq!(ok.get(), 0);
-        tx.send_response(rsp);
-        call.await.unwrap().into_body()
-    };
-
-    // The counters are not incremented yet.
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 0);
-
-    // The body reports an error if it was not completed.
-    drop(body);
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 1);
-}
-
 #[tokio::test(flavor = "current_thread", start_paused = true)]
 async fn grpc_request_statuses_ok() {
     const EXPORT_HOSTNAME_LABELS: bool = true;
@@ -884,210 +723,6 @@ async fn grpc_request_statuses_error_body() {
         .await;
 }

-#[tokio::test(flavor = "current_thread", start_paused = true)]
-async fn grpc_response_body_drop_on_eos() {
-    use linkerd_app_core::svc::{Service, ServiceExt};
-
-    const EXPORT_HOSTNAME_LABELS: bool = false;
-    let _trace = linkerd_tracing::test::trace_init();
-
-    let super::GrpcRouteMetrics {
-        requests,
-        body_data,
-        ..
-    } = super::GrpcRouteMetrics::default();
-    let parent_ref = crate::ParentRef(policy::Meta::new_default("parent"));
-    let route_ref = crate::RouteRef(policy::Meta::new_default("route"));
-    let (mut svc, mut handle) = mock_grpc_route_metrics(
-        &requests,
-        &body_data,
-        &parent_ref,
-        &route_ref,
-        EXPORT_HOSTNAME_LABELS,
-    );
-
-    // Define a request and a response.
-    let req = http::Request::default();
-    let rsp = http::Response::builder()
-        .status(200)
-        .body({
-            let data = Poll::Ready(Some(Ok(Bytes::from_static(b"contents"))));
-            let trailers = {
-                let mut trailers = http::HeaderMap::with_capacity(1);
-                trailers.insert(GRPC_STATUS.clone(), GRPC_STATUS_OK.clone());
-                Poll::Ready(Some(Ok(trailers)))
-            };
-            let body = linkerd_mock_http_body::MockBody::default()
-                .then_yield_data(data)
-                .then_yield_trailer(trailers);
-            BoxBody::new(body)
-        })
-        .unwrap();
-
-    // Two counters for 200 responses that do/don't have an error.
-    let ok = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::GrpcRsp {
-            status: Some(tonic::Code::Ok),
-            error: None,
-        },
-    ));
-    let err = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::GrpcRsp {
-            status: Some(tonic::Code::Ok),
-            error: Some(labels::Error::Unknown),
-        },
-    ));
-    debug_assert_eq!(ok.get(), 0);
-    debug_assert_eq!(err.get(), 0);
-
-    // Send the request, and obtain the response.
-    let mut body = {
-        handle.allow(1);
-        svc.ready().await.expect("ready");
-        let mut call = svc.call(req);
-        let (_req, tx) = tokio::select! {
-            _ = (&mut call) => unreachable!(),
-            res = handle.next_request() => res.unwrap(),
-        };
-        assert_eq!(ok.get(), 0);
-        tx.send_response(rsp);
-        call.await.unwrap().into_body()
-    };
-
-    // The counters are not incremented yet.
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 0);
-
-    // Poll a frame out of the body.
-    let data = body
-        .frame()
-        .await
-        .expect("yields a result")
-        .expect("yields a frame")
-        .into_data()
-        .ok()
-        .expect("yields data");
-    assert_eq!(data.chunk(), "contents".as_bytes());
-    assert_eq!(data.remaining(), "contents".len());
-
-    // Poll the trailers out of the body.
-    let trls = body
-        .frame()
-        .await
-        .expect("yields a result")
-        .expect("yields a frame")
-        .into_trailers()
-        .ok()
-        .expect("yields trailers");
-    assert_eq!(trls.get(&GRPC_STATUS).unwrap(), GRPC_STATUS_OK);
-
-    // Show that the body reports itself as being complete.
-    debug_assert!(body.is_end_stream());
-    assert_eq!(ok.get(), 1);
-    assert_eq!(err.get(), 0);
-}
-
-#[tokio::test(flavor = "current_thread", start_paused = true)]
-async fn grpc_response_body_drop_early() {
-    use linkerd_app_core::svc::{Service, ServiceExt};
-
-    const EXPORT_HOSTNAME_LABELS: bool = false;
-    let _trace = linkerd_tracing::test::trace_init();
-
-    let super::GrpcRouteMetrics {
-        requests,
-        body_data,
-        ..
-    } = super::GrpcRouteMetrics::default();
-    let parent_ref = crate::ParentRef(policy::Meta::new_default("parent"));
-    let route_ref = crate::RouteRef(policy::Meta::new_default("route"));
-    let (mut svc, mut handle) = mock_grpc_route_metrics(
-        &requests,
-        &body_data,
-        &parent_ref,
-        &route_ref,
-        EXPORT_HOSTNAME_LABELS,
-    );
-
-    // Define a request and a response.
-    let req = http::Request::default();
-    let rsp = http::Response::builder()
-        .status(200)
-        .body({
-            let data = Poll::Ready(Some(Ok(Bytes::from_static(b"contents"))));
-            let trailers = {
-                let mut trailers = http::HeaderMap::with_capacity(1);
-                trailers.insert(GRPC_STATUS.clone(), GRPC_STATUS_OK.clone());
-                Poll::Ready(Some(Ok(trailers)))
-            };
-            let body = linkerd_mock_http_body::MockBody::default()
-                .then_yield_data(data)
-                .then_yield_trailer(trailers);
-            BoxBody::new(body)
-        })
-        .unwrap();
-
-    // Two counters for 200 responses that do/don't have an error.
-    let ok = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::GrpcRsp {
-            status: Some(tonic::Code::Ok),
-            error: None,
-        },
-    ));
-    let err = requests.get_statuses(&labels::Rsp(
-        labels::Route::new(parent_ref.clone(), route_ref.clone(), None),
-        labels::GrpcRsp {
-            status: None,
-            error: Some(labels::Error::Unknown),
-        },
-    ));
-    debug_assert_eq!(ok.get(), 0);
-    debug_assert_eq!(err.get(), 0);
-
-    // Send the request, and obtain the response.
-    let mut body = {
-        handle.allow(1);
-        svc.ready().await.expect("ready");
-        let mut call = svc.call(req);
-        let (_req, tx) = tokio::select! {
-            _ = (&mut call) => unreachable!(),
-            res = handle.next_request() => res.unwrap(),
-        };
-        assert_eq!(ok.get(), 0);
-        tx.send_response(rsp);
-        call.await.unwrap().into_body()
-    };
-
-    // The counters are not incremented yet.
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 0);
-
-    // Poll a frame out of the body.
-    let data = body
-        .frame()
-        .await
-        .expect("yields a result")
-        .expect("yields a frame")
-        .into_data()
-        .ok()
-        .expect("yields data");
-    assert_eq!(data.chunk(), "contents".as_bytes());
-    assert_eq!(data.remaining(), "contents".len());
-
-    // The counters are not incremented yet.
-    debug_assert!(!body.is_end_stream());
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 0);
-
-    // Then, drop the body without polling the trailers.
-    drop(body);
-    assert_eq!(ok.get(), 0);
-    assert_eq!(err.get(), 1);
-}
-
 // === Utils ===

 const MOCK_GRPC_REQ_URI: &str = "http://host/svc/method";
@ -54,7 +54,7 @@ impl retry::Policy for RetryPolicy {
         if let Some(codes) = self.retryable_grpc_statuses.as_ref() {
             let grpc_status = Self::grpc_status(rsp);
-            let retryable = grpc_status.is_some_and(|c| codes.contains(c));
+            let retryable = grpc_status.map_or(false, |c| codes.contains(c));
             tracing::debug!(retryable, grpc.status = ?grpc_status);
             if retryable {
                 return true;
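
The hunk above is purely stylistic: main uses `Option::is_some_and` where the release branch uses `map_or(false, ..)`. A minimal, self-contained sketch of the idiom (the status codes here are illustrative, not linkerd's):

    fn main() {
        let grpc_status: Option<i32> = Some(14);
        let retryable_codes = [4, 8, 14];
        // Release branch: grpc_status.map_or(false, |c| retryable_codes.contains(&c))
        let retryable = grpc_status.is_some_and(|c| retryable_codes.contains(&c));
        assert!(retryable);
    }

`is_some_and` (stable since Rust 1.70) reads as a single predicate over the option and avoids the sentinel `false` argument.
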
@ -214,7 +214,7 @@ impl Outbound<()> {
             detect_timeout,
             queue,
             addr,
-            meta.into(),
+            meta,
         );
     }
@ -146,7 +146,7 @@ impl Outbound<()> {
         export_hostname_labels: bool,
     ) -> impl policy::GetPolicy
     where
-        C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+        C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
         C: Clone + Unpin + Send + Sync + 'static,
         C::ResponseBody: proxy::http::Body<Data = tonic::codegen::Bytes, Error = Error>,
         C::ResponseBody: Send + 'static,
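
This hunk (and several like it below) changes only a trait bound: main constrains on `tonic::body::Body` where the release branch constrains on `tonic::body::BoxBody`. Reading this as the boxed-body type being renamed between tonic releases is an assumption based on the diff alone. A minimal sketch of how such a bound is written, assuming a tonic version that exposes `tonic::body::Body`:

    fn assert_policy_client<C>()
    where
        C: tonic::client::GrpcService<tonic::body::Body>,
    {
    }

    fn main() {}

Only the bound changes; callers that satisfied the old bound under the old type name satisfy the new one unchanged.
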
@ -130,7 +130,7 @@ impl OutboundMetrics {
     }
 }
 
-impl legacy::FmtMetrics for OutboundMetrics {
+impl FmtMetrics for OutboundMetrics {
     fn fmt_metrics(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         self.http_errors.fmt_metrics(f)?;
         self.tcp_errors.fmt_metrics(f)?;
@ -243,7 +243,7 @@ impl EncodeLabelSet for RouteRef {
 
 // === impl ConcreteLabels ===
 
-impl legacy::FmtLabels for ConcreteLabels {
+impl FmtLabels for ConcreteLabels {
     fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         let ConcreteLabels(parent, backend) = self;
@ -5,7 +5,7 @@ pub(crate) use self::{http::Http, tcp::Tcp};
 use crate::http::IdentityRequired;
 use linkerd_app_core::{
     errors::{FailFastError, LoadShedError},
-    metrics::legacy::FmtLabels,
+    metrics::FmtLabels,
     proxy::http::ResponseTimeoutError,
 };
 use std::fmt;
@ -1,9 +1,6 @@
 use super::ErrorKind;
 use linkerd_app_core::{
-    metrics::{
-        legacy::{Counter, FmtMetrics},
-        metrics,
-    },
+    metrics::{metrics, Counter, FmtMetrics},
     svc, Error,
 };
 use parking_lot::RwLock;
@ -1,9 +1,6 @@
 use super::ErrorKind;
 use linkerd_app_core::{
-    metrics::{
-        legacy::{Counter, FmtMetrics},
-        metrics,
-    },
+    metrics::{metrics, Counter, FmtMetrics},
     svc,
     transport::{labels::TargetAddr, OrigDstAddr},
     Error,
@ -32,7 +32,7 @@ use tracing::info_span;
 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub enum Dispatch {
     Balance(NameAddr, balance::EwmaConfig),
-    Forward(Remote<ServerAddr>, Arc<Metadata>),
+    Forward(Remote<ServerAddr>, Metadata),
     /// A backend dispatcher that explicitly fails all requests.
     Fail {
         message: Arc<str>,
@ -57,7 +57,7 @@ pub struct ConcreteError {
 pub struct Endpoint<T> {
     addr: Remote<ServerAddr>,
     is_local: bool,
-    metadata: Arc<Metadata>,
+    metadata: Metadata,
     parent: T,
 }
@ -196,7 +196,7 @@ impl<C> Outbound<C> {
         let is_local = inbound_ips.contains(&addr.ip());
         Endpoint {
             addr: Remote(ServerAddr(addr)),
-            metadata: metadata.into(),
+            metadata,
             is_local,
             parent: target.parent,
         }
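
The three hunks above change `Dispatch::Forward` and `Endpoint::metadata` on main from owned `Metadata` to `Arc<Metadata>`, with call sites converting via `.into()`. A minimal sketch of why that conversion is all a call site needs (the `Metadata` struct below is a stand-in, not linkerd's):

    use std::sync::Arc;

    #[derive(Debug, Default)]
    struct Metadata {
        weight: u32,
    }

    struct Endpoint {
        metadata: Arc<Metadata>,
    }

    fn main() {
        let metadata = Metadata { weight: 10 };
        // `From<T> for Arc<T>` makes `.into()` equivalent to `Arc::new(..)`,
        // so cloning an Endpoint now bumps a refcount instead of deep-copying.
        let ep = Endpoint { metadata: metadata.into() };
        let shared = Arc::clone(&ep.metadata);
        assert_eq!(shared.weight, 10);
    }
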
@ -419,10 +419,3 @@ impl<T> svc::Param<tls::ConditionalClientTls> for Endpoint<T> {
         ))
     }
 }
-
-impl<T> svc::Param<tls::ConditionalClientTlsLabels> for Endpoint<T> {
-    fn param(&self) -> tls::ConditionalClientTlsLabels {
-        let tls: tls::ConditionalClientTls = self.param();
-        tls.as_ref().map(tls::ClientTls::labels)
-    }
-}
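
The impl that exists only on main derives one typed facet of the endpoint from another through linkerd's `Param` pattern. A minimal sketch of the pattern; `Param`, `Endpoint`, `Tls`, and `TlsLabels` below are local stand-ins, not the real linkerd types:

    trait Param<T> {
        fn param(&self) -> T;
    }

    struct Endpoint {
        tls_enabled: bool,
    }

    struct Tls(bool);
    struct TlsLabels(&'static str);

    impl Param<Tls> for Endpoint {
        fn param(&self) -> Tls {
            Tls(self.tls_enabled)
        }
    }

    impl Param<TlsLabels> for Endpoint {
        // Derive the label facet from the TLS facet, as main's impl does.
        fn param(&self) -> TlsLabels {
            let tls: Tls = self.param();
            TlsLabels(if tls.0 { "tls=true" } else { "tls=no_identity" })
        }
    }

    fn main() {
        let ep = Endpoint { tls_enabled: true };
        let TlsLabels(labels) = ep.param();
        println!("{labels}");
    }
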
@ -160,7 +160,9 @@ async fn balances() {
     }
     assert!(
         seen0 && seen1,
-        "Both endpoints must be used; ep0={seen0} ep1={seen1}"
+        "Both endpoints must be used; ep0={} ep1={}",
+        seen0,
+        seen1
     );
 
     // When we remove the ep0, all traffic goes to ep1:
|
||||||
task.abort();
|
task.abort();
|
||||||
assert!(
|
assert!(
|
||||||
errors::is_caused_by::<FailFastError>(&*err),
|
errors::is_caused_by::<FailFastError>(&*err),
|
||||||
"unexpected error: {err}"
|
"unexpected error: {}",
|
||||||
|
err
|
||||||
);
|
);
|
||||||
assert!(resolved.only_configured(), "Resolution must be reused");
|
assert!(resolved.only_configured(), "Resolution must be reused");
|
||||||
}
|
}
|
||||||
|
|
|
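
The two hunks above differ only in format-string style: main captures identifiers inside the braces, while the release branch passes them as trailing arguments. A minimal sketch of the equivalence (captured identifiers are stable since Rust 1.58):

    fn main() {
        let (seen0, seen1) = (true, false);
        let old = format!("Both endpoints must be used; ep0={} ep1={}", seen0, seen1);
        let new = format!("Both endpoints must be used; ep0={seen0} ep1={seen1}");
        assert_eq!(old, new);
    }

The same mechanical rewrite appears in the env-parsing and duration-test hunks further down.
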
@ -33,7 +33,7 @@ static INVALID_POLICY: once_cell::sync::OnceCell<ClientPolicy> = once_cell::sync
 
 impl<S> Api<S>
 where
-    S: tonic::client::GrpcService<tonic::body::Body, Error = Error> + Clone,
+    S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error> + Clone,
     S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
 {
     pub(crate) fn new(
@ -59,7 +59,7 @@ where
 
 impl<S> Service<Addr> for Api<S>
 where
-    S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
+    S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
     S: Clone + Send + Sync + 'static,
     S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
     S::Future: Send + 'static,
@ -8,8 +8,8 @@ use std::task::{Context, Poll};
 
 #[derive(Clone, Debug)]
 pub struct Connect {
-    addr: Remote<ServerAddr>,
-    tls: tls::ConditionalClientTls,
+    pub addr: Remote<ServerAddr>,
+    pub tls: tls::ConditionalClientTls,
 }
 
 /// Prevents outbound connections on the loopback interface, unless the
|
||||||
|
|
||||||
// === impl Connect ===
|
// === impl Connect ===
|
||||||
|
|
||||||
impl Connect {
|
|
||||||
pub fn new(addr: Remote<ServerAddr>, tls: tls::ConditionalClientTls) -> Self {
|
|
||||||
Self { addr, tls }
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
impl svc::Param<Remote<ServerAddr>> for Connect {
|
impl svc::Param<Remote<ServerAddr>> for Connect {
|
||||||
fn param(&self) -> Remote<ServerAddr> {
|
fn param(&self) -> Remote<ServerAddr> {
|
||||||
self.addr
|
self.addr
|
||||||
|
@ -94,14 +88,3 @@ impl svc::Param<tls::ConditionalClientTls> for Connect {
         self.tls.clone()
     }
 }
-
-#[cfg(test)]
-impl Connect {
-    pub fn addr(&self) -> &Remote<ServerAddr> {
-        &self.addr
-    }
-
-    pub fn tls(&self) -> &tls::ConditionalClientTls {
-        &self.tls
-    }
-}
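
Taken together, the three `Connect` hunks above show main encapsulating the struct: fields become private, construction goes through `Connect::new`, and tests read through `#[cfg(test)]` accessors. A minimal sketch of that shape, using `SocketAddr` and `Option<String>` as stand-ins for the real address and TLS types:

    use std::net::SocketAddr;

    #[derive(Clone, Debug)]
    pub struct Connect {
        addr: SocketAddr,
        tls: Option<String>,
    }

    impl Connect {
        pub fn new(addr: SocketAddr, tls: Option<String>) -> Self {
            Self { addr, tls }
        }
    }

    // Test-only read access keeps the non-test API surface minimal.
    #[cfg(test)]
    impl Connect {
        pub fn addr(&self) -> &SocketAddr {
            &self.addr
        }
    }

    fn main() {
        let connect = Connect::new(([127, 0, 0, 1], 4143).into(), None);
        println!("{connect:?}");
    }
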
@ -67,7 +67,10 @@ where
         let tls: tls::ConditionalClientTls = ep.param();
         if let tls::ConditionalClientTls::None(reason) = tls {
             trace!(%reason, "Not attempting opaque transport");
-            let target = Connect::new(ep.param(), tls);
+            let target = Connect {
+                addr: ep.param(),
+                tls,
+            };
             return Box::pin(self.inner.connect(target).err_into::<Error>());
         }
@ -106,10 +109,10 @@ where
 
         let protocol: Option<SessionProtocol> = ep.param();
 
-        let connect = self.inner.connect(Connect::new(
-            Remote(ServerAddr((addr.ip(), connect_port).into())),
+        let connect = self.inner.connect(Connect {
+            addr: Remote(ServerAddr((addr.ip(), connect_port).into())),
             tls,
-        ));
+        });
         Box::pin(async move {
             let (mut io, meta) = connect.await.map_err(Into::into)?;
@ -200,9 +203,9 @@ mod test {
     ) -> impl Fn(Connect) -> futures::future::Ready<Result<(tokio_test::io::Mock, ConnectMeta), io::Error>>
     {
         move |ep| {
-            let Remote(ServerAddr(sa)) = ep.addr();
+            let Remote(ServerAddr(sa)) = ep.addr;
             assert_eq!(sa.port(), 4143);
-            assert!(ep.tls().is_some());
+            assert!(ep.tls.is_some());
             let buf = header.encode_prefaced_buf().expect("Must encode");
             let io = tokio_test::io::Builder::new()
                 .write(&buf[..])
@ -222,9 +225,9 @@ mod test {
 
         let svc = TaggedTransport {
             inner: service_fn(|ep: Connect| {
-                let Remote(ServerAddr(sa)) = ep.addr();
+                let Remote(ServerAddr(sa)) = ep.addr;
                 assert_eq!(sa.port(), 4321);
-                assert!(ep.tls().is_none());
+                assert!(ep.tls.is_none());
                 let io = tokio_test::io::Builder::new().write(b"hello").build();
                 let meta = tls::ConnectMeta {
                     socket: Local(ClientAddr(([0, 0, 0, 0], 0).into())),
@ -60,7 +60,7 @@ pub(crate) fn runtime() -> (ProxyRuntime, drain::Signal) {
     let (tap, _) = tap::new();
     let (metrics, _) = metrics::Metrics::new(std::time::Duration::from_secs(10));
     let runtime = ProxyRuntime {
-        identity: linkerd_meshtls::creds::default_for_test().1,
+        identity: linkerd_meshtls_rustls::creds::default_for_test().1.into(),
         metrics: metrics.proxy,
         tap,
         span_sink: None,
@ -31,7 +31,7 @@ use tracing::info_span;
 #[derive(Clone, Debug, PartialEq, Eq, Hash)]
 pub enum Dispatch {
     Balance(NameAddr, balance::EwmaConfig),
-    Forward(Remote<ServerAddr>, Arc<Metadata>),
+    Forward(Remote<ServerAddr>, Metadata),
     /// A backend dispatcher that explicitly fails all requests.
     Fail {
         message: Arc<str>,
@ -56,7 +56,7 @@ pub struct ConcreteError {
 pub struct Endpoint<T> {
     addr: Remote<ServerAddr>,
     is_local: bool,
-    metadata: Arc<Metadata>,
+    metadata: Metadata,
     parent: T,
 }
@ -174,7 +174,7 @@ impl<C> Outbound<C> {
         let is_local = inbound_ips.contains(&addr.ip());
         Endpoint {
             addr: Remote(ServerAddr(addr)),
-            metadata: metadata.into(),
+            metadata,
             is_local,
             parent: target.parent,
         }
@ -385,10 +385,3 @@ impl<T> svc::Param<tls::ConditionalClientTls> for Endpoint<T> {
         ))
     }
 }
-
-impl<T> svc::Param<tls::ConditionalClientTlsLabels> for Endpoint<T> {
-    fn param(&self) -> tls::ConditionalClientTlsLabels {
-        let tls: tls::ConditionalClientTls = self.param();
-        tls.as_ref().map(tls::ClientTls::labels)
-    }
-}
@ -11,18 +11,13 @@ use linkerd_proxy_client_policy::{self as client_policy, tls::sni};
 use parking_lot::Mutex;
 use std::{
     collections::HashMap,
-    marker::PhantomData,
     net::SocketAddr,
     sync::Arc,
     task::{Context, Poll},
     time::Duration,
 };
 use tokio::sync::watch;
-use tokio_rustls::rustls::{
-    internal::msgs::codec::{Codec, Reader},
-    pki_types::DnsName,
-    InvalidMessage,
-};
+use tokio_rustls::rustls::pki_types::DnsName;
 
 mod basic;
@ -147,7 +142,7 @@ fn default_backend(addr: SocketAddr) -> client_policy::Backend {
             capacity: 100,
             failfast_timeout: Duration::from_secs(10),
         },
-        dispatcher: BackendDispatcher::Forward(addr, EndpointMetadata::default().into()),
+        dispatcher: BackendDispatcher::Forward(addr, EndpointMetadata::default()),
     }
 }
@ -175,57 +170,44 @@ fn sni_route(backend: client_policy::Backend, sni: sni::MatchSni) -> client_poli
 // generates a sample ClientHello TLS message for testing
 fn generate_client_hello(sni: &str) -> Vec<u8> {
     use tokio_rustls::rustls::{
-        internal::msgs::{base::Payload, codec::Codec, message::PlainMessage},
-        ContentType, ProtocolVersion,
+        internal::msgs::{
+            base::Payload,
+            codec::{Codec, Reader},
+            enums::Compression,
+            handshake::{
+                ClientExtension, ClientHelloPayload, HandshakeMessagePayload, HandshakePayload,
+                Random, ServerName, SessionId,
+            },
+            message::{MessagePayload, PlainMessage},
+        },
+        CipherSuite, ContentType, HandshakeType, ProtocolVersion,
     };
 
     let sni = DnsName::try_from(sni.to_string()).unwrap();
     let sni = trim_hostname_trailing_dot_for_sni(&sni);
 
-    // rustls has internal-only types that can encode a ClientHello, but they are mostly
-    // inaccessible and an unstable part of the public API anyway. Manually encode one here for
-    // testing only instead.
+    let mut server_name_bytes = vec![];
+    0u8.encode(&mut server_name_bytes); // encode the type first
+    (sni.as_ref().len() as u16).encode(&mut server_name_bytes); // then the length as u16
+    server_name_bytes.extend_from_slice(sni.as_ref().as_bytes()); // then the server name itself
 
-    let mut hs_payload_bytes = vec![];
-    1u8.encode(&mut hs_payload_bytes); // client hello ID
+    let server_name =
+        ServerName::read(&mut Reader::init(&server_name_bytes)).expect("Server name is valid");
 
-    let client_hello_body = {
-        let mut payload = LengthPayload::<U24>::empty();
-
-        payload.buf.extend_from_slice(&[0x03, 0x03]); // client version, TLSv1.2
-
-        payload.buf.extend_from_slice(&[0u8; 32]); // random
-
-        0u8.encode(&mut payload.buf); // session ID
-
-        LengthPayload::<u16>::from_slice(&[0x00, 0x00] /* TLS_NULL_WITH_NULL_NULL */)
-            .encode(&mut payload.buf);
-
-        LengthPayload::<u8>::from_slice(&[0x00] /* no compression */).encode(&mut payload.buf);
-
-        let extensions = {
-            let mut payload = LengthPayload::<u16>::empty();
-            0u16.encode(&mut payload.buf); // server name extension ID
-
-            let server_name_extension = {
-                let mut payload = LengthPayload::<u16>::empty();
-                let server_name = {
-                    let mut payload = LengthPayload::<u16>::empty();
-                    0u8.encode(&mut payload.buf); // DNS hostname ID
-                    LengthPayload::<u16>::from_slice(sni.as_ref().as_bytes())
-                        .encode(&mut payload.buf);
-                    payload
-                };
-                server_name.encode(&mut payload.buf);
-                payload
-            };
-            server_name_extension.encode(&mut payload.buf);
-            payload
-        };
-        extensions.encode(&mut payload.buf);
-        payload
-    };
-    client_hello_body.encode(&mut hs_payload_bytes);
+    let hs_payload = HandshakeMessagePayload {
+        typ: HandshakeType::ClientHello,
+        payload: HandshakePayload::ClientHello(ClientHelloPayload {
+            client_version: ProtocolVersion::TLSv1_2,
+            random: Random::from([0; 32]),
+            session_id: SessionId::read(&mut Reader::init(&[0])).unwrap(),
+            cipher_suites: vec![CipherSuite::TLS_NULL_WITH_NULL_NULL],
+            compression_methods: vec![Compression::Null],
+            extensions: vec![ClientExtension::ServerName(vec![server_name])],
+        }),
+    };
+
+    let mut hs_payload_bytes = Vec::default();
+    MessagePayload::handshake(hs_payload).encode(&mut hs_payload_bytes);
 
     let message = PlainMessage {
         typ: ContentType::Handshake,
@ -236,65 +218,6 @@ fn generate_client_hello(sni: &str) -> Vec<u8> {
     message.into_unencrypted_opaque().encode()
 }
-
-#[derive(Debug)]
-struct LengthPayload<T> {
-    buf: Vec<u8>,
-    _boo: PhantomData<fn() -> T>,
-}
-
-impl<T> LengthPayload<T> {
-    fn empty() -> Self {
-        Self {
-            buf: vec![],
-            _boo: PhantomData,
-        }
-    }
-
-    fn from_slice(s: &[u8]) -> Self {
-        Self {
-            buf: s.to_vec(),
-            _boo: PhantomData,
-        }
-    }
-}
-
-impl Codec<'_> for LengthPayload<u8> {
-    fn encode(&self, bytes: &mut Vec<u8>) {
-        (self.buf.len() as u8).encode(bytes);
-        bytes.extend_from_slice(&self.buf);
-    }
-
-    fn read(_: &mut Reader<'_>) -> std::result::Result<Self, InvalidMessage> {
-        unimplemented!()
-    }
-}
-
-impl Codec<'_> for LengthPayload<u16> {
-    fn encode(&self, bytes: &mut Vec<u8>) {
-        (self.buf.len() as u16).encode(bytes);
-        bytes.extend_from_slice(&self.buf);
-    }
-
-    fn read(_: &mut Reader<'_>) -> std::result::Result<Self, InvalidMessage> {
-        unimplemented!()
-    }
-}
-
-#[derive(Debug)]
-struct U24;
-
-impl Codec<'_> for LengthPayload<U24> {
-    fn encode(&self, bytes: &mut Vec<u8>) {
-        let len = self.buf.len() as u32;
-        bytes.extend_from_slice(&len.to_be_bytes()[1..]);
-        bytes.extend_from_slice(&self.buf);
-    }
-
-    fn read(_: &mut Reader<'_>) -> std::result::Result<Self, InvalidMessage> {
-        unimplemented!()
-    }
-}
-
 fn trim_hostname_trailing_dot_for_sni(dns_name: &DnsName<'_>) -> DnsName<'static> {
     let dns_name_str = dns_name.as_ref();
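
Main's `LengthPayload` helpers implement TLS's length-prefixed vectors: a 1-, 2-, or 3-byte big-endian length followed by the payload bytes. A minimal, std-only sketch of the 2-byte case:

    fn encode_u16_vec(payload: &[u8], out: &mut Vec<u8>) {
        // TLS `opaque<0..2^16-1>`: big-endian u16 length, then the bytes.
        out.extend_from_slice(&(payload.len() as u16).to_be_bytes());
        out.extend_from_slice(payload);
    }

    fn main() {
        let mut out = Vec::new();
        encode_u16_vec(b"example.com", &mut out);
        assert_eq!(&out[..2], &[0x00, 0x0b]); // 11 bytes follow
        assert_eq!(&out[2..], b"example.com");
    }

The `U24` impl is the same idea with the top byte of a `u32` length dropped, matching the 24-bit length field of a handshake message body.
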
@ -43,7 +43,7 @@ impl Config {
     ) -> Result<
         Dst<
             impl svc::Service<
-                http::Request<tonic::body::Body>,
+                http::Request<tonic::body::BoxBody>,
                 Response = http::Response<control::RspBody>,
                 Error = Error,
                 Future = impl Send,
@ -83,8 +83,8 @@ pub enum ParseError {
     ),
     #[error("not a valid port range")]
     NotAPortRange,
-    #[error("{0}")]
-    AddrError(#[source] addr::Error),
+    #[error(transparent)]
+    AddrError(addr::Error),
     #[error("only two addresses are supported")]
     TooManyAddrs,
     #[error("not a valid identity name")]
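
The `AddrError` hunk trades the release branch's `#[error(transparent)]` for main's `#[error("{0}")]` plus `#[source]`. Both display the inner error's message, but the non-transparent form keeps `AddrError` as a distinct layer in the `source()` chain. A minimal sketch; `Inner` is a stand-in for `addr::Error`:

    use thiserror::Error;

    #[derive(Debug, Error)]
    #[error("not a valid address")]
    struct Inner;

    #[derive(Debug, Error)]
    enum ParseError {
        #[error("{0}")]
        AddrError(#[source] Inner),
    }

    fn main() {
        let err = ParseError::AddrError(Inner);
        assert_eq!(err.to_string(), "not a valid address");
        // With `transparent`, source() forwards to the inner error's own source;
        // with an explicit #[source], the inner error is the reported cause.
        assert!(std::error::Error::source(&err).is_some());
    }
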
@ -233,12 +233,8 @@ pub const ENV_IDENTITY_IDENTITY_SERVER_ID: &str = "LINKERD2_PROXY_IDENTITY_SERVE
 pub const ENV_IDENTITY_IDENTITY_SERVER_NAME: &str = "LINKERD2_PROXY_IDENTITY_SERVER_NAME";
 
 // If this config is set, then the proxy will be configured to use Spire as identity
-// provider. On Unix systems this needs to be a path to a UDS while on Windows - a
-// named pipe path.
-pub const ENV_IDENTITY_SPIRE_WORKLOAD_API_ADDRESS: &str =
-    "LINKERD2_PROXY_IDENTITY_SPIRE_WORKLOAD_API_ADDRESS";
+// provider
 pub const ENV_IDENTITY_SPIRE_SOCKET: &str = "LINKERD2_PROXY_IDENTITY_SPIRE_SOCKET";
 
 pub const IDENTITY_SPIRE_BASE: &str = "LINKERD2_PROXY_IDENTITY_SPIRE";
 const DEFAULT_SPIRE_BACKOFF: ExponentialBackoff =
     ExponentialBackoff::new_unchecked(Duration::from_millis(100), Duration::from_secs(1), 0.1);
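
Per main's comment in this hunk, the renamed variable accepts a UDS path on Unix and a named-pipe path on Windows. A minimal sketch of the normalization main applies before dialing (the address values below are illustrative, not from any real deployment):

    fn normalize(addr: &str) -> &str {
        // The optional `unix:` scheme is stripped before connecting on Unix.
        addr.strip_prefix("unix:").unwrap_or(addr)
    }

    fn main() {
        assert_eq!(normalize("unix:/run/spire/agent.sock"), "/run/spire/agent.sock");
        // Windows named-pipe paths pass through untouched.
        assert_eq!(normalize(r"\\.\pipe\spire-agent"), r"\\.\pipe\spire-agent");
    }
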
@ -294,7 +290,7 @@ const DEFAULT_INBOUND_HTTP_FAILFAST_TIMEOUT: Duration = Duration::from_secs(1);
 const DEFAULT_INBOUND_DETECT_TIMEOUT: Duration = Duration::from_secs(10);
 const DEFAULT_INBOUND_CONNECT_TIMEOUT: Duration = Duration::from_millis(300);
 const DEFAULT_INBOUND_CONNECT_BACKOFF: ExponentialBackoff =
-    ExponentialBackoff::new_unchecked(Duration::from_millis(100), Duration::from_secs(10), 0.1);
+    ExponentialBackoff::new_unchecked(Duration::from_millis(100), Duration::from_millis(500), 0.1);
 
 const DEFAULT_OUTBOUND_TCP_QUEUE_CAPACITY: usize = 10_000;
 const DEFAULT_OUTBOUND_TCP_FAILFAST_TIMEOUT: Duration = Duration::from_secs(3);
@ -303,7 +299,7 @@ const DEFAULT_OUTBOUND_HTTP_FAILFAST_TIMEOUT: Duration = Duration::from_secs(3);
 const DEFAULT_OUTBOUND_DETECT_TIMEOUT: Duration = Duration::from_secs(10);
 const DEFAULT_OUTBOUND_CONNECT_TIMEOUT: Duration = Duration::from_secs(1);
 const DEFAULT_OUTBOUND_CONNECT_BACKOFF: ExponentialBackoff =
-    ExponentialBackoff::new_unchecked(Duration::from_millis(100), Duration::from_secs(60), 0.1);
+    ExponentialBackoff::new_unchecked(Duration::from_millis(100), Duration::from_millis(500), 0.1);
 
 const DEFAULT_CONTROL_QUEUE_CAPACITY: usize = 100;
 const DEFAULT_CONTROL_FAILFAST_TIMEOUT: Duration = Duration::from_secs(10);
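
In these two hunks main caps the connect backoff at 10s (inbound) and 60s (outbound), where the release branch caps both at 500ms. A minimal sketch of what the min/max pair means for an exponential backoff, ignoring jitter (the arithmetic is illustrative; the real `ExponentialBackoff` lives in linkerd's stack crates):

    use std::time::Duration;

    fn delay(min: Duration, max: Duration, attempt: u32) -> Duration {
        min.saturating_mul(2u32.saturating_pow(attempt)).min(max)
    }

    fn main() {
        let (min, max) = (Duration::from_millis(100), Duration::from_secs(10));
        assert_eq!(delay(min, max, 0), Duration::from_millis(100));
        assert_eq!(delay(min, max, 3), Duration::from_millis(800));
        assert_eq!(delay(min, max, 10), Duration::from_secs(10)); // capped
    }

A 500ms ceiling keeps retrying aggressively forever; the larger caps let repeated failures back off much further.
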
@ -343,12 +339,12 @@ const DEFAULT_INBOUND_HTTP1_CONNECTION_POOL_IDLE_TIMEOUT: Duration = Duration::f
 // TODO(ver) This should be configurable at the load balancer level.
 const DEFAULT_OUTBOUND_HTTP1_CONNECTION_POOL_IDLE_TIMEOUT: Duration = Duration::from_secs(3);
 
-// By default, we limit the number of connections that may be opened per-host.
-// We pick a high number (10k) that shouldn't interfere with most workloads, but
-// will prevent issues with our outbound HTTP client from exhausting the file
-// descriptors available to the process.
-const DEFAULT_INBOUND_MAX_IDLE_CONNS_PER_ENDPOINT: usize = 10_000;
-const DEFAULT_OUTBOUND_MAX_IDLE_CONNS_PER_ENDPOINT: usize = 10_000;
+// By default, we don't limit the number of connections a connection pool may
+// use, as doing so can severely impact CPU utilization for applications with
+// many concurrent requests. It's generally preferable to use the MAX_IDLE_AGE
+// limitations to quickly drop idle connections.
+const DEFAULT_INBOUND_MAX_IDLE_CONNS_PER_ENDPOINT: usize = usize::MAX;
+const DEFAULT_OUTBOUND_MAX_IDLE_CONNS_PER_ENDPOINT: usize = usize::MAX;
 
 // These settings limit the number of requests that have not received responses,
 // including those buffered in the proxy and dispatched to the destination
@ -913,13 +909,8 @@ pub fn parse_config<S: Strings>(strings: &S) -> Result<super::Config, EnvError>
     let identity = {
         let tls = tls?;
 
-        match parse_deprecated(
-            strings,
-            ENV_IDENTITY_SPIRE_WORKLOAD_API_ADDRESS,
-            ENV_IDENTITY_SPIRE_SOCKET,
-            |s| Ok(s.to_string()),
-        )? {
-            Some(workload_api_addr) => match &tls.id {
+        match strings.get(ENV_IDENTITY_SPIRE_SOCKET)? {
+            Some(socket) => match &tls.id {
                 // TODO: perform stricter SPIFFE ID validation following:
                 // https://github.com/spiffe/spiffe/blob/27b59b81ba8c56885ac5d4be73b35b9b3305fd7a/standards/SPIFFE-ID.md
                 identity::Id::Uri(uri)
@ -928,7 +919,7 @@ pub fn parse_config<S: Strings>(strings: &S) -> Result<super::Config, EnvError>
                 identity::Config::Spire {
                     tls,
                     client: spire::Config {
-                        workload_api_addr: std::sync::Arc::new(workload_api_addr),
+                        socket_addr: std::sync::Arc::new(socket),
                         backoff: parse_backoff(
                             strings,
                             IDENTITY_SPIRE_BASE,
|
||||||
base: &str,
|
base: &str,
|
||||||
default: ExponentialBackoff,
|
default: ExponentialBackoff,
|
||||||
) -> Result<ExponentialBackoff, EnvError> {
|
) -> Result<ExponentialBackoff, EnvError> {
|
||||||
let min_env = format!("LINKERD2_PROXY_{base}_EXP_BACKOFF_MIN");
|
let min_env = format!("LINKERD2_PROXY_{}_EXP_BACKOFF_MIN", base);
|
||||||
let min = parse(strings, &min_env, parse_duration);
|
let min = parse(strings, &min_env, parse_duration);
|
||||||
let max_env = format!("LINKERD2_PROXY_{base}_EXP_BACKOFF_MAX");
|
let max_env = format!("LINKERD2_PROXY_{}_EXP_BACKOFF_MAX", base);
|
||||||
let max = parse(strings, &max_env, parse_duration);
|
let max = parse(strings, &max_env, parse_duration);
|
||||||
let jitter_env = format!("LINKERD2_PROXY_{base}_EXP_BACKOFF_JITTER");
|
let jitter_env = format!("LINKERD2_PROXY_{}_EXP_BACKOFF_JITTER", base);
|
||||||
let jitter = parse(strings, &jitter_env, parse_number::<f64>);
|
let jitter = parse(strings, &jitter_env, parse_number::<f64>);
|
||||||
|
|
||||||
match (min?, max?, jitter?) {
|
match (min?, max?, jitter?) {
|
||||||
|
@ -1256,7 +1247,7 @@ pub fn parse_linkerd_identity_config<S: Strings>(
             Ok((control, certify))
         }
         (addr, end_entity_dir, token, _minr, _maxr) => {
-            let s = format!("{ENV_IDENTITY_SVC_BASE}_ADDR and {ENV_IDENTITY_SVC_BASE}_NAME");
+            let s = format!("{0}_ADDR and {0}_NAME", ENV_IDENTITY_SVC_BASE);
             let svc_env: &str = s.as_str();
             for (unset, name) in &[
                 (addr.is_none(), svc_env),
@ -172,11 +172,11 @@ mod tests {
     fn test_unit<F: Fn(u64) -> Duration>(unit: &str, to_duration: F) {
         for v in &[0, 1, 23, 456_789] {
             let d = to_duration(*v);
-            let text = format!("{v}{unit}");
-            assert_eq!(parse_duration(&text), Ok(d), "text=\"{text}\"");
+            let text = format!("{}{}", v, unit);
+            assert_eq!(parse_duration(&text), Ok(d), "text=\"{}\"", text);
 
-            let text = format!(" {v}{unit}\t");
-            assert_eq!(parse_duration(&text), Ok(d), "text=\"{text}\"");
+            let text = format!(" {}{}\t", v, unit);
+            assert_eq!(parse_duration(&text), Ok(d), "text=\"{}\"", text);
         }
     }
@ -245,7 +245,7 @@ mod tests {
     fn p(s: &str) -> Result<Vec<String>, ParseError> {
         let mut sfxs = parse_dns_suffixes(s)?
             .into_iter()
-            .map(|s| format!("{s}"))
+            .map(|s| format!("{}", s))
             .collect::<Vec<_>>();
         sfxs.sort();
         Ok(sfxs)
@ -4,8 +4,7 @@ pub use linkerd_app_core::identity::{client, Id};
 use linkerd_app_core::{
     control, dns,
     identity::{
-        client::linkerd::Certify, creds, watch as watch_identity, CertMetrics, Credentials,
-        DerX509, WithCertMetrics,
+        client::linkerd::Certify, creds, CertMetrics, Credentials, DerX509, Mode, WithCertMetrics,
     },
     metrics::{prom, ControlHttp as ClientMetrics},
     Result,
@ -110,7 +109,7 @@ impl Config {
                 }
             }
             Self::Spire { client, tls } => {
-                let addr = client.workload_api_addr.clone();
+                let addr = client.socket_addr.clone();
                 let spire = spire::client::Spire::new(tls.id.clone());
 
                 let (store, receiver, ready) = watch(tls, metrics.cert)?;
@ -138,7 +137,8 @@ fn watch(
     watch::Receiver<bool>,
 )> {
     let (tx, ready) = watch::channel(false);
-    let (store, receiver) = watch_identity(tls.id, tls.server_name, &tls.trust_anchors_pem)?;
+    let (store, receiver) =
+        Mode::default().watch(tls.id, tls.server_name, &tls.trust_anchors_pem)?;
     let cred = WithCertMetrics::new(metrics, NotifyReady { store, tx });
     Ok((cred, receiver, ready))
 }
@ -19,10 +19,9 @@ use linkerd_app_core::{
     config::ServerConfig,
     control::{ControlAddr, Metrics as ControlMetrics},
     dns, drain,
-    metrics::{legacy::FmtMetrics, prom},
+    metrics::{prom, FmtMetrics},
     serve,
     svc::Param,
-    tls_info,
     transport::{addrs::*, listen::Bind},
     Error, ProxyRuntime,
 };
@ -84,13 +83,13 @@ pub struct App {
     tap: tap::Tap,
 }
 
-// === impl Config ===
-
 impl Config {
     pub fn try_from_env() -> Result<Self, env::EnvError> {
         env::Env.try_config()
     }
+}
+
+impl Config {
     /// Build an application.
     ///
     /// It is currently required that this be run on a Tokio runtime, since some
@ -252,6 +251,9 @@ impl Config {
             export_hostname_labels,
         );
 
+        let dst_addr = dst.addr.clone();
+        // registry.sub_registry_with_prefix("gateway"),
+
         let gateway = gateway::Gateway::new(gateway, inbound.clone(), outbound.clone()).stack(
             dst.resolve.clone(),
             dst.profiles.clone(),
@ -302,7 +304,6 @@ impl Config {
             error!(%error, "Failed to register process metrics");
         }
         registry.register("proxy_build_info", "Proxy build info", BUILD_INFO.metric());
-        registry.register("rustls_info", "Proxy TLS info", tls_info::metric());
 
         let admin = {
             let identity = identity.receiver().server();
@ -329,7 +330,7 @@ impl Config {
 
         Ok(App {
             admin,
-            dst: dst.addr,
+            dst: dst_addr,
             drain: drain_tx,
             identity,
             inbound_addr,
@ -357,8 +358,6 @@ impl Config {
         }
     }
 
-// === impl App ===
-
 impl App {
     pub fn admin_addr(&self) -> Local<ServerAddr> {
         self.admin.listen_addr
@ -397,7 +396,7 @@ impl App {
 
     pub fn tracing_addr(&self) -> Option<&ControlAddr> {
         match self.trace_collector {
-            trace_collector::TraceCollector::Disabled => None,
+            trace_collector::TraceCollector::Disabled { .. } => None,
             crate::trace_collector::TraceCollector::Enabled(ref oc) => Some(&oc.addr),
         }
     }
@ -46,7 +46,7 @@ impl Config {
     ) -> Result<
         Policy<
             impl svc::Service<
-                http::Request<tonic::body::Body>,
+                http::Request<tonic::body::BoxBody>,
                 Response = http::Response<control::RspBody>,
                 Error = Error,
                 Future = impl Send,
@ -4,26 +4,40 @@ use tokio::sync::watch;
 
 pub use linkerd_app_core::identity::client::spire as client;
 
+#[cfg(target_os = "linux")]
+const UNIX_PREFIX: &str = "unix:";
+#[cfg(target_os = "linux")]
 const TONIC_DEFAULT_URI: &str = "http://[::]:50051";
 
 #[derive(Clone, Debug)]
 pub struct Config {
-    pub workload_api_addr: Arc<String>,
+    pub socket_addr: Arc<String>,
     pub backoff: ExponentialBackoff,
 }
 
 // Connects to SPIRE workload API via Unix Domain Socket
 pub struct Client {
+    #[cfg_attr(not(target_os = "linux"), allow(dead_code))]
     config: Config,
 }
 
 // === impl Client ===
 
+#[cfg(target_os = "linux")]
 impl From<Config> for Client {
     fn from(config: Config) -> Self {
         Self { config }
     }
 }
 
+#[cfg(not(target_os = "linux"))]
+impl From<Config> for Client {
+    fn from(_: Config) -> Self {
+        panic!("Spire is supported on Linux only")
+    }
+}
+
+#[cfg(target_os = "linux")]
 impl tower::Service<()> for Client {
     type Response = tonic::Response<watch::Receiver<client::SvidUpdate>>;
     type Error = Error;
@ -37,43 +51,25 @@ impl tower::Service<()> for Client {
     }
 
     fn call(&mut self, _req: ()) -> Self::Future {
-        let addr = self.config.workload_api_addr.clone();
+        let socket = self.config.socket_addr.clone();
         let backoff = self.config.backoff;
         Box::pin(async move {
+            use tokio::net::UnixStream;
            use tonic::transport::{Endpoint, Uri};
 
+            // Strip the 'unix:' prefix for tonic compatibility.
+            let stripped_path = socket
+                .strip_prefix(UNIX_PREFIX)
+                .unwrap_or(socket.as_str())
+                .to_string();
+
             // We will ignore this uri because uds do not use it
             // if your connector does use the uri it will be provided
             // as the request to the `MakeConnection`.
             let chan = Endpoint::try_from(TONIC_DEFAULT_URI)?
                 .connect_with_connector(tower::util::service_fn(move |_: Uri| {
-                    #[cfg(unix)]
-                    {
-                        use futures::TryFutureExt;
-
-                        // The 'unix:' scheme must be stripped from socket paths.
-                        let path = addr.strip_prefix("unix:").unwrap_or(addr.as_str());
-
-                        tokio::net::UnixStream::connect(path.to_string())
-                            .map_ok(hyper_util::rt::TokioIo::new)
-                    }
-
-                    #[cfg(windows)]
-                    {
-                        use tokio::net::windows::named_pipe;
-                        let named_pipe_path = addr.clone();
-                        let client = named_pipe::ClientOptions::new()
-                            .open(named_pipe_path.as_str())
-                            .map(hyper_util::rt::TokioIo::new);
-
-                        futures::future::ready(client)
-                    }
-
-                    #[cfg(not(any(unix, windows)))]
-                    {
-                        compile_error!("Spire is supported only on Windows and Unix systems.");
-                        futures::future::pending()
-                    }
+                    use futures::TryFutureExt;
+                    UnixStream::connect(stripped_path.clone()).map_ok(hyper_util::rt::TokioIo::new)
                 }))
                 .await?;
@ -84,3 +80,21 @@ impl tower::Service<()> for Client {
         })
     }
 }
+
+#[cfg(not(target_os = "linux"))]
+impl tower::Service<()> for Client {
+    type Response = tonic::Response<watch::Receiver<client::SvidUpdate>>;
+    type Error = Error;
+    type Future = futures::future::BoxFuture<'static, Result<Self::Response, Self::Error>>;
+
+    fn poll_ready(
+        &mut self,
+        _cx: &mut std::task::Context<'_>,
+    ) -> std::task::Poll<Result<(), Self::Error>> {
+        unimplemented!("Spire is supported on Linux only")
+    }
+
+    fn call(&mut self, _req: ()) -> Self::Future {
+        unimplemented!("Spire is supported on Linux only")
+    }
+}
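
The spire client hunks above differ in platform support: main carries per-platform transports (a Unix domain socket on Unix, a named pipe on Windows) and turns any other target into a build failure, while the release branch gates the client to Linux and panics or `unimplemented!()`s elsewhere. A minimal sketch of main's gating pattern (the strings are placeholders):

    fn transport() -> &'static str {
        #[cfg(unix)]
        return "unix domain socket";
        #[cfg(windows)]
        return "named pipe";
        #[cfg(not(any(unix, windows)))]
        compile_error!("Spire is supported only on Windows and Unix systems.");
    }

    fn main() {
        println!("spire transport: {}", transport());
    }

Failing at compile time trades a latent runtime panic for an immediate, actionable build error.
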