Compare commits


No commits in common. "main" and "release/v2.223.0" have entirely different histories.

513 changed files with 12074 additions and 29664 deletions


@@ -4,8 +4,6 @@ FROM ghcr.io/linkerd/dev:${DEV_VERSION}
RUN scurl https://run.linkerd.io/install-edge | sh && \
mkdir -p "$HOME/bin" && ln -s "$HOME/.linkerd2/bin/linkerd" "$HOME/bin/linkerd"
ENV RUSTFLAGS="--cfg tokio_unstable"
# XXX(ver) This doesn't currently work, because it puts
# /usr/local/cargo/registry into a weird state with regard to permissions.
#RUN rustup toolchain install --profile=minimal nightly


@@ -3,7 +3,7 @@
"build": {
"dockerfile": "Dockerfile",
"args": {
"DEV_VERSION": "v47",
"DEV_VERSION": "v43",
"http_proxy": "${localEnv:http_proxy}",
"https_proxy": "${localEnv:https_proxy}"
}
@@ -23,15 +23,7 @@
"zxh404.vscode-proto3"
],
"settings": {
"files.insertFinalNewline": true,
"[git-commit]": {
"editor.rulers": [
72,
80
],
"editor.wordWrap": "wordWrapColumn",
"editor.wordWrapColumn": 80
}
"files.insertFinalNewline": true
}
}
},
@@ -50,8 +42,7 @@
"overrideCommand": false,
"remoteUser": "code",
"containerEnv": {
"CXX": "clang++-19",
"RUSTFLAGS": "--cfg tokio_unstable"
"CXX": "clang++-14",
},
"mounts": [
{


@@ -1,156 +0,0 @@
# Linkerd2 Proxy Copilot Instructions
## Code Generation
- Code MUST pass `cargo fmt`.
- Code MUST pass `cargo clippy --all-targets --all-features -- -D warnings`.
- Markdown MUST pass `markdownlint-cli2`.
- Prefer `?` for error propagation.
- Avoid `unwrap()` and `expect()` outside tests.
- Use `tracing` crate macros (`tracing::info!`, etc.) for structured logging.
### Comments
Comments should explain **why**, not **what**. Focus on high-level rationale and
design intent at the function or block level, rather than line-by-line
descriptions.
- Use comments to capture:
- System-facing or interface-level concerns
- Key invariants, preconditions, and postconditions
- Design decisions and trade-offs
- Cross-references to architecture or design documentation
- Avoid:
- Line-by-line commentary explaining obvious code
- Restating what the code already clearly expresses
- For public APIs:
- Use `///` doc comments to describe the contract, behavior, parameters, and
usage examples
- For internal rationale:
- Use `//` comments sparingly to note non-obvious reasoning or edge-case
handling
- Be neutral and factual.
### Rust File Organization
For Rust source files, enforce this layout:
1. **Non-public imports**
- Declare all `use` statements for private/internal crates first.
- Group imports to avoid duplicates and do **not** add blank lines between
`use` statements.
2. **Module declarations**
- List all `mod` declarations.
3. **Re-exports**
- Follow with `pub use` statements.
4. **Type definitions**
- Define `struct`, `enum`, `type`, and `trait` declarations.
- Sort by visibility: `pub` first, then `pub(crate)`, then private.
- Public types should be documented with `///` comments.
5. **Impl blocks**
- Implement methods in the same order as types above.
- Precede each type's `impl` block with a header comment: `// === <TypeName> ===`
6. **Tests**
- End with a `tests` module guarded by `#[cfg(test)]`.
- If the in-file test module exceeds 100 lines, move it to
`tests/<filename>.rs` as a child integration-test module.
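A minimal sketch of the layout described above, assuming a hypothetical `Registry` type (all names here are illustrative, not from the proxy codebase):

```rust
use std::collections::HashMap;

mod storage {
    // Internal helper module; contents elided in this sketch.
}

/// A registry mapping names to counts.
///
/// Public types come first and carry `///` docs describing their contract.
pub struct Registry {
    entries: HashMap<String, u64>,
}

// === Registry ===

impl Registry {
    /// Creates an empty registry.
    pub fn new() -> Self {
        Self {
            entries: HashMap::new(),
        }
    }

    /// Increments the count for `name`, returning the new value.
    pub fn increment(&mut self, name: &str) -> u64 {
        let count = self.entries.entry(name.to_string()).or_insert(0);
        *count += 1;
        *count
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn increment_starts_at_one() {
        let mut r = Registry::new();
        assert_eq!(r.increment("a"), 1);
        assert_eq!(r.increment("a"), 2);
    }
}
```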
## Test Generation
- Async tests MUST use `tokio::test`.
- Synchronous tests use `#[test]`.
- Include at least one failing edge-case test per public function.
- Use `tracing::info!` for logging in tests, usually in place of comments.
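These rules sketched for a hypothetical `parse_port` function (the function and its error type are illustrative; async tests would use `#[tokio::test]` from the `tokio` crate instead of `#[test]`):

```rust
/// Hypothetical parser used to illustrate the testing rules.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>()
        .map_err(|e| format!("invalid port {s:?}: {e}"))
}

#[cfg(test)]
mod tests {
    use super::*;

    // Synchronous tests use plain `#[test]`.
    #[test]
    fn parses_valid_port() {
        assert_eq!(parse_port("8080"), Ok(8080));
    }

    // At least one failing edge case per public function.
    #[test]
    fn rejects_out_of_range_port() {
        assert!(parse_port("70000").is_err());
    }
}
```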
## Code Review
### Rust
- Point out any `unsafe` blocks and justify their safety.
- Flag functions >50 LOC for refactor suggestions.
- Highlight missing docs on public items.
### Markdown
- Use `markdownlint-cli2` to check for linting errors.
- Lines SHOULD be wrapped at 80 characters.
- Fenced code blocks MUST include a language identifier.
### Copilot Instructions
- Start each instruction with an imperative, present-tense verb.
- Keep each instruction under 120 characters.
- Provide one directive per instruction; avoid combining multiple ideas.
- Use "MUST" and "SHOULD" sparingly to emphasize critical rules.
- Avoid semicolons and complex punctuation within bullets.
- Do not reference external links, documents, or specific coding standards.
## Commit Messages
Commits follow the Conventional Commits specification:
### Subject
Subjects are in the form: `<type>[optional scope]: <description>`
- **Type**: feat, fix, docs, refactor, test, chore, ci, build, perf, revert
(others by agreement)
- **Scope**: optional, lowercase; may include `/` to denote submodules (e.g.
`http/detect`)
- **Description**: imperative mood, present tense, no trailing period
- MUST be less than 72 characters
- Omit needless words!
### Body
Non-trivial commits SHOULD include a body summarizing the change.
- Explain *why* the change was needed.
- Describe *what* was done at a high level.
- Use present-tense narration.
- Use complete sentences, paragraphs, and punctuation.
- Preceded by a blank line.
- Wrapped at 80 characters.
- Omit needless words!
### Breaking changes
If the change introduces a backwards-incompatible change, it MUST be marked as
such.
- Indicated by `!` after the type/scope (e.g. `feat(inbound)!: …`)
- Optionally, include a `BREAKING CHANGE:` section in the footer explaining the
change in behavior.
### Examples
```text
feat(auth): add JWT refresh endpoint

There is currently no way to refresh a JWT token.
This exposes a new `/refresh` route that returns a refreshed token.
```
```text
feat(api)!: remove deprecated v1 routes

The `/v1/*` endpoints have been deprecated for a long time and are no
longer called by clients.
This change removes the `/v1/*` endpoints and all associated code,
including integration tests and documentation.

BREAKING CHANGE: The previously-deprecated `/v1/*` endpoints were removed.
```
## Pull Requests
- The subject line MUST be in the conventional commit format.
- Autogenerate a PR body summarizing the problem, solution, and verification steps.
- List breaking changes under a separate **Breaking Changes** heading.


@@ -11,6 +11,12 @@ updates:
allow:
- dependency-type: "all"
ignore:
# These dependencies will be updated via higher-level aggregator dependencies like `clap`,
# `futures`, `prost`, `tracing`, and `trust-dns-resolver`:
- dependency-name: "futures-*"
- dependency-name: "prost-derive"
- dependency-name: "tracing-*"
- dependency-name: "trust-dns-proto"
# These dependencies are for platforms that we don't support:
- dependency-name: "hermit-abi"
- dependency-name: "redox_*"
@@ -18,38 +24,6 @@ updates:
- dependency-name: "wasm-bindgen"
- dependency-name: "web-sys"
- dependency-name: "windows*"
groups:
boring:
patterns:
- "tokio-boring"
- "boring*"
futures:
patterns:
- "futures*"
grpc:
patterns:
- "prost*"
- "tonic*"
hickory:
patterns:
- "hickory*"
icu4x:
patterns:
- "icu_*"
opentelemetry:
patterns:
- "opentelemetry*"
rustls:
patterns:
- "tokio-rustls"
- "rustls*"
- "ring"
symbolic:
patterns:
- "symbolic-*"
tracing:
patterns:
- "tracing*"
- package-ecosystem: cargo
directory: /linkerd/addr/fuzz


@@ -15,20 +15,20 @@ env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUSTUP_MAX_RETRIES: 10
RUSTFLAGS: "-D warnings --cfg tokio_unstable"
RUSTFLAGS: "-D warnings"
permissions:
contents: read
jobs:
build:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v43-rust
timeout-minutes: 20
continue-on-error: true
steps:
- run: rustup toolchain install --profile=minimal beta
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- run: just toolchain=beta fetch
- run: just toolchain=beta build


@@ -15,17 +15,17 @@ concurrency:
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUSTFLAGS: "-D warnings -A deprecated --cfg tokio_unstable -C debuginfo=2"
RUSTFLAGS: "-D warnings -A deprecated -C debuginfo=2"
RUSTUP_MAX_RETRIES: 10
jobs:
meta:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- id: changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
with:
files: |
.codecov.yml
@@ -40,19 +40,19 @@ jobs:
codecov:
needs: meta
if: (github.event_name == 'push' && github.ref == 'refs/heads/main') || needs.meta.outputs.any_changed == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 30
container:
image: docker://ghcr.io/linkerd/dev:v47-rust
image: docker://ghcr.io/linkerd/dev:v43-rust
options: --security-opt seccomp=unconfined # 🤷
env:
CXX: "/usr/bin/clang++-19"
CXX: "/usr/bin/clang++-14"
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84
- run: cargo tarpaulin --locked --workspace --exclude=linkerd2-proxy --exclude=linkerd-transport-header --exclude=opencensus-proto --exclude=spire-proto --no-run
- run: cargo tarpaulin --locked --workspace --exclude=linkerd2-proxy --exclude=linkerd-transport-header --exclude=opencensus-proto --exclude=spire-proto --skip-clean --ignore-tests --no-fail-fast --out=Xml
# Some tests are especially flakey in coverage tests. That's fine. We
# only really care to measure how much of our codebase is covered.
continue-on-error: true
- uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24
- uses: codecov/codecov-action@54bcd8715eee62d40e33596ef5e8f0f48dbbccab


@@ -17,7 +17,7 @@ env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUST_BACKTRACE: short
RUSTFLAGS: "-D warnings -A deprecated --cfg tokio_unstable -C debuginfo=0"
RUSTFLAGS: "-D warnings -A deprecated -C debuginfo=0"
RUSTUP_MAX_RETRIES: 10
permissions:
@@ -26,13 +26,13 @@ permissions:
jobs:
list-changed:
timeout-minutes: 3
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: docker://rust:1.88.0
runs-on: ubuntu-latest
container: docker://rust:1.76.0
steps:
- run: apt update && apt install -y jo
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
- uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
id: changed-files
- name: list changed crates
id: list-changed
@@ -47,15 +47,15 @@ jobs:
build:
needs: [list-changed]
timeout-minutes: 40
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: docker://rust:1.88.0
runs-on: ubuntu-latest
container: docker://rust:1.76.0
strategy:
matrix:
dir: ${{ fromJson(needs.list-changed.outputs.dirs) }}
steps:
- run: rustup toolchain add nightly
- run: cargo install cargo-fuzz
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- working-directory: ${{matrix.dir}}
run: cargo +nightly fetch


@@ -12,9 +12,9 @@ on:
jobs:
markdownlint:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: DavidAnson/markdownlint-cli2-action@992badcdf24e3b8eb7e87ff9287fe931bcb00c6e
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: DavidAnson/markdownlint-cli2-action@510b996878fc0d1a46c8a04ec86b06dbfba09de7
with:
globs: "**/*.md"


@@ -14,7 +14,7 @@ on:
env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUSTFLAGS: "-D warnings -A opaque_hidden_inferred_bound --cfg tokio_unstable -C debuginfo=0"
RUSTFLAGS: "-D warnings -A opaque_hidden_inferred_bound -C debuginfo=0"
RUSTUP_MAX_RETRIES: 10
permissions:
@@ -22,13 +22,13 @@ permissions:
jobs:
build:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v43-rust
timeout-minutes: 20
continue-on-error: true
steps:
- run: rustup toolchain install --profile=minimal nightly
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- run: just toolchain=nightly fetch
- run: just toolchain=nightly profile=release build


@@ -5,7 +5,7 @@ env:
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUSTUP_MAX_RETRIES: 10
RUSTFLAGS: "-D warnings -D deprecated --cfg tokio_unstable -C debuginfo=0"
RUSTFLAGS: "-D warnings -D deprecated -C debuginfo=0"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
@@ -14,24 +14,24 @@ concurrency:
jobs:
meta:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- id: build
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
with:
files: |
.github/workflows/pr.yml
justfile
Dockerfile
- id: actions
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
with:
files: |
.github/workflows/**
.devcontainer/*
- id: cargo
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
with:
files_ignore: "Cargo.toml"
files: |
@@ -40,7 +40,7 @@ jobs:
if: steps.cargo.outputs.any_changed == 'true'
run: ./.github/list-crates.sh ${{ steps.cargo.outputs.all_changed_files }}
- id: rust
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
with:
files: |
**/*.rs
@@ -57,7 +57,7 @@ jobs:
info:
timeout-minutes: 3
needs: meta
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- name: Info
run: |
@@ -74,48 +74,49 @@ jobs:
actions:
needs: meta
if: needs.meta.outputs.actions_changed == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: linkerd/dev/actions/setup-tools@v43
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: just action-lint
- run: just action-dev-check
rust:
needs: meta
if: needs.meta.outputs.cargo_changed == 'true' || needs.meta.outputs.rust_changed == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v43-rust
permissions:
contents: read
timeout-minutes: 20
steps:
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84
- run: just fetch
- run: cargo deny --all-features check bans licenses sources
- name: Run cargo deny check bans licenses sources
uses: EmbarkStudios/cargo-deny-action@64015a69ee7ee08f6c56455089cdaf6ad974fd15
with:
command: check bans licenses sources
- run: just check-fmt
- run: just clippy
- run: just doc
- run: just test --exclude=linkerd2-proxy --no-run
- run: just test --exclude=linkerd2-proxy
env:
NEXTEST_RETRIES: 3
rust-crates:
needs: meta
if: needs.meta.outputs.cargo_changed == 'true'
timeout-minutes: 20
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v43-rust
strategy:
matrix:
crate: ${{ fromJson(needs.meta.outputs.cargo_crates) }}
steps:
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84
- run: just fetch
- run: just check-crate ${{ matrix.crate }}
@@ -123,11 +124,9 @@ jobs:
needs: meta
if: needs.meta.outputs.cargo_changed == 'true' || needs.meta.outputs.rust_changed == 'true'
timeout-minutes: 20
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
env:
WAIT_TIMEOUT: 2m
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: linkerd/dev/actions/setup-tools@v43
- name: scurl https://run.linkerd.io/install-edge | sh
run: |
scurl https://run.linkerd.io/install-edge | sh
@@ -136,12 +135,12 @@ jobs:
tag=$(linkerd version --client --short)
echo "linkerd $tag"
echo "LINKERD_TAG=$tag" >> "$GITHUB_ENV"
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: just docker
- run: just k3d-create
- run: just-k3d create
- run: just k3d-load-linkerd
- run: just linkerd-install
- run: just linkerd-check-control-plane-proxy
- run: just linkerd-check-contol-plane-proxy
env:
TMPDIR: ${{ runner.temp }}
@@ -149,7 +148,7 @@ jobs:
timeout-minutes: 3
needs: [meta, actions, rust, rust-crates, linkerd-install]
if: always()
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
permissions:
contents: write
@@ -168,7 +167,7 @@ jobs:
if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
run: exit 1
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
if: needs.meta.outputs.is_dependabot == 'true' && needs.meta.outputs.any_changed == 'true'
- name: "Merge dependabot changes"
if: needs.meta.outputs.is_dependabot == 'true' && needs.meta.outputs.any_changed == 'true'


@@ -1,78 +0,0 @@
name: Weekly proxy release
on:
schedule:
# Wednesday at ~8:40PM Pacific
- cron: "40 3 * * 3"
workflow_dispatch: {}
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
last-release:
if: github.repository == 'linkerd/linkerd2-proxy' # Don't run this in forks.
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
timeout-minutes: 5
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
steps:
- name: Latest release
id: latest
run: gh release view --json name,publishedAt | jq -r 'to_entries[] | (.key + "=" + .value)' >> "$GITHUB_OUTPUT"
- name: Check if release was in last 72 hours
id: recency
env:
PUBLISHED_AT: ${{ steps.latest.outputs.publishedAt }}
run: |
if [ "$(date -d "$PUBLISHED_AT" +%s)" -gt "$(date -d '72 hours ago' +%s)" ]; then
echo "Last release $PUBLISHED_AT is recent" >&2
echo "recent=true" >> "$GITHUB_OUTPUT"
else
echo "Last release $PUBLISHED_AT is not recent" >&2
echo "recent=false" >> "$GITHUB_OUTPUT"
fi
outputs:
version: ${{ steps.latest.outputs.name }}
published-at: ${{ steps.latest.outputs.publishedAt }}
recent: ${{ steps.recency.outputs.recent == 'true' }}
last-commit:
needs: last-release
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
timeout-minutes: 5
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- name: Check if the most recent commit is after the last release
id: recency
env:
PUBLISHED_AT: ${{ needs.last-release.outputs.published-at }}
run: |
if [ "$(git log -1 --format=%ct)" -gt "$(date -d "$PUBLISHED_AT" +%s)" ]; then
echo "HEAD after last release $PUBLISHED_AT" >&2
echo "after-release=true" >> "$GITHUB_OUTPUT"
else
echo "after-release=false" >> "$GITHUB_OUTPUT"
fi
outputs:
after-release: ${{ steps.recency.outputs.after-release == 'true' }}
trigger-release:
needs: [last-release, last-commit]
if: needs.last-release.outputs.recent == 'false' && needs.last-commit.outputs.after-release == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
timeout-minutes: 5
permissions:
actions: write
env:
GH_REPO: ${{ github.repository }}
GH_TOKEN: ${{ github.token }}
LAST_VERSION: ${{ needs.last-release.outputs.version }}
steps:
- name: Get the latest minor version
run: |
m="$(echo "$LAST_VERSION" | cut -d. -f2)"
echo MINOR_VERSION="$((m+1))" >> "$GITHUB_ENV"
- run: gh workflow run release.yml -f publish=true -f version=v2."$MINOR_VERSION".0


@@ -39,45 +39,24 @@ on:
required: false
type: boolean
default: false
latest:
description: "Make this the latest release?"
required: false
type: boolean
default: true
env:
CARGO: "cargo auditable"
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUSTFLAGS: "-D warnings -A deprecated --cfg tokio_unstable"
CHECKSEC_VERSION: 2.5.0
RUSTFLAGS: "-D warnings -A deprecated"
RUSTUP_MAX_RETRIES: 10
concurrency:
group: ${{ github.workflow }}-${{ inputs.ref || github.head_ref }}
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
meta:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
if: github.event_name == 'pull_request'
- id: workflow
if: github.event_name == 'pull_request'
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
with:
files: |
.github/workflows/release.yml
- id: build
if: github.event_name == 'pull_request'
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
with:
files: |
justfile
Cargo.toml
- id: version
- id: meta
env:
VERSION: ${{ inputs.version }}
shell: bash
@@ -85,57 +64,61 @@ jobs:
set -euo pipefail
shopt -s extglob
if [[ "$GITHUB_EVENT_NAME" == pull_request ]]; then
echo version="0.0.0-test.${GITHUB_SHA:0:7}" >> "$GITHUB_OUTPUT"
echo version="0.0.0-test.${GITHUB_SHA:0:7}"
echo archs='["amd64"]'
exit 0
fi
fi >> "$GITHUB_OUTPUT"
if ! [[ "$VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z-]+)?(\+[0-9A-Za-z-]+)?$ ]]; then
echo "Invalid version: $VERSION" >&2
exit 1
fi
echo version="${VERSION#v}" >> "$GITHUB_OUTPUT"
- id: platform
shell: bash
env:
WORKFLOW_CHANGED: ${{ steps.workflow.outputs.any_changed }}
run: |
if [[ "$GITHUB_EVENT_NAME" == pull_request && "$WORKFLOW_CHANGED" != 'true' ]]; then
( echo archs='["amd64"]'
echo oses='["linux"]' ) >> "$GITHUB_OUTPUT"
exit 0
fi
( echo archs='["amd64", "arm64"]'
echo oses='["linux", "windows"]'
( echo version="${VERSION#v}"
echo archs='["amd64", "arm64", "arm"]'
) >> "$GITHUB_OUTPUT"
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
if: github.event_name == 'pull_request'
- id: changed
if: github.event_name == 'pull_request'
uses: tj-actions/changed-files@77af4bed286740ef1a6387dc4e4e4dec39f96054
with:
files: |
.github/workflows/release.yml
justfile
Cargo.toml
outputs:
archs: ${{ steps.platform.outputs.archs }}
oses: ${{ steps.platform.outputs.oses }}
version: ${{ steps.version.outputs.version }}
package: ${{ github.event_name == 'workflow_dispatch' || steps.build.outputs.any_changed == 'true' || steps.workflow.outputs.any_changed == 'true' }}
archs: ${{ steps.meta.outputs.archs }}
version: ${{ steps.meta.outputs.version }}
package: ${{ github.event_name == 'workflow_dispatch' || steps.changed.outputs.any_changed == 'true' }}
profile: ${{ inputs.profile || 'release' }}
publish: ${{ inputs.publish }}
ref: ${{ inputs.ref || github.sha }}
tag: "${{ inputs.tag-prefix || 'release/' }}v${{ steps.version.outputs.version }}"
ref: ${{ inputs.ref || github.ref }}
tag: "${{ inputs.tag-prefix || 'release/' }}v${{ steps.meta.outputs.version }}"
prerelease: ${{ inputs.prerelease }}
draft: ${{ inputs.draft }}
latest: ${{ inputs.latest }}
info:
needs: meta
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- name: Inputs
- name: Info
run: |
jq . <<EOF
${{ toJson(inputs) }}
EOF
- name: Meta
run: |
jq . <<EOF
${{ toJson(needs.meta.outputs) }}
EOF
echo 'github.repository_owner: ${{ github.repository_owner }}'
echo 'inputs.version: ${{ inputs.version }}'
echo 'inputs.tag-prefix: ${{ inputs.tag-prefix }}'
echo 'inputs.profile: ${{ inputs.profile }}'
echo 'inputs.publish: ${{ inputs.publish }}'
echo 'inputs.ref: ${{ inputs.ref }}'
echo 'needs.meta.outputs.archs: ${{ needs.meta.outputs.archs }}'
echo 'needs.meta.outputs.version: ${{ needs.meta.outputs.version }}'
echo 'needs.meta.outputs.package: ${{ needs.meta.outputs.package }}'
echo 'needs.meta.outputs.profile: ${{ needs.meta.outputs.profile }}'
echo 'needs.meta.outputs.publish: ${{ needs.meta.outputs.publish }}'
echo 'needs.meta.outputs.ref: ${{ needs.meta.outputs.ref }}'
echo 'needs.meta.outputs.prerelease: ${{ needs.meta.outputs.prerelease }}'
echo 'needs.meta.outputs.draft: ${{ needs.meta.outputs.draft }}'
package:
needs: meta
@@ -144,105 +127,75 @@ jobs:
strategy:
matrix:
arch: ${{ fromJson(needs.meta.outputs.archs) }}
os: ${{ fromJson(needs.meta.outputs.oses) }}
libc: [gnu] # musl
exclude:
- os: windows
arch: arm64
# If we're not actually building on a release tag, don't short-circuit on
# errors. This helps us know whether a failure is platform-specific.
continue-on-error: ${{ needs.meta.outputs.publish != 'true' }}
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 40
container: docker://ghcr.io/linkerd/dev:v47-rust-musl
container: docker://ghcr.io/linkerd/dev:v43-rust-musl
env:
LINKERD2_PROXY_VENDOR: ${{ github.repository_owner }}
LINKERD2_PROXY_VERSION: ${{ needs.meta.outputs.version }}
steps:
# TODO: add to dev image
- name: Install MiniGW
if: matrix.os == 'windows'
run: apt-get update && apt-get install -y mingw-w64
- name: Install cross compilation toolchain
if: matrix.arch == 'arm64'
run: apt-get update && apt-get install -y binutils-aarch64-linux-gnu
- name: Configure git
run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
ref: ${{ needs.meta.outputs.ref }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84
with:
key: ${{ matrix.os }}-${{ matrix.arch }}
key: ${{ matrix.libc }}
- run: just fetch
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} rustup
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} profile=${{ needs.meta.outputs.profile }} build
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} profile=${{ needs.meta.outputs.profile }} package
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} rustup
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} profile=${{ needs.meta.outputs.profile }} build
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} profile=${{ needs.meta.outputs.profile }} package
- uses: actions/upload-artifact@5d5d22a31266ced268874388b861e4b58bb5c2f3
with:
name: ${{ matrix.arch }}-${{ matrix.os }}-artifacts
name: ${{ matrix.arch }}-artifacts
path: target/package/*
publish:
needs: [meta, package]
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
actions: write
contents: write
env:
VERSION: v${{ needs.meta.outputs.version }}
TAG: ${{ needs.meta.outputs.tag }}
steps:
- name: Configure git
env:
GITHUB_USERNAME: ${{ vars.LINKERD2_PROXY_GITHUB_USERNAME || 'github-actions[bot]' }}
run: |
git config --global --add safe.directory "$PWD" # actions/runner#2033
git config --global user.name "$GITHUB_USERNAME"
git config --global user.email "$GITHUB_USERNAME"@users.noreply.github.com
git config --global user.name 'github-actions[bot]'
git config --global user.email 'github-actions[bot]@users.noreply.github.com'
# Tag the release.
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
with:
token: ${{ secrets.LINKERD2_PROXY_GITHUB_TOKEN || github.token }}
ref: ${{ needs.meta.outputs.ref }}
- run: git tag -a -m "$VERSION" "$TAG"
- run: git tag -a -m 'v${{ needs.meta.outputs.version }}' '${{ needs.meta.outputs.tag }}'
# Fetch the artifacts.
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
- uses: actions/download-artifact@c850b930e6ba138125429b7e5c93fc707a7f8427
with:
path: artifacts
- run: du -h artifacts/**/*
# Publish the release.
- if: needs.meta.outputs.publish == 'true'
run: git push origin "$TAG"
run: git push origin '${{ needs.meta.outputs.tag }}'
- if: needs.meta.outputs.publish == 'true'
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8
uses: softprops/action-gh-release@9d7c94cfd0a1f3ed45544c887983e9fa900f0564
with:
name: ${{ env.VERSION }}
tag_name: ${{ env.TAG }}
name: v${{ needs.meta.outputs.version }}
tag_name: ${{ needs.meta.outputs.tag }}
files: artifacts/**/*
generate_release_notes: true
prerelease: ${{ needs.meta.outputs.prerelease }}
draft: ${{ needs.meta.outputs.draft }}
make_latest: ${{ needs.meta.outputs.latest }}
- if: >-
needs.meta.outputs.publish == 'true' &&
needs.meta.outputs.prerelease == 'false' &&
needs.meta.outputs.draft == 'false' &&
needs.meta.outputs.latest == 'true'
name: Trigger sync-proxy in linkerd2
run: gh workflow run sync-proxy.yml -f version="$TAG"
env:
GH_REPO: ${{ vars.LINKERD2_REPO || 'linkerd/linkerd2' }}
GH_TOKEN: ${{ secrets.LINKERD2_GITHUB_TOKEN }}
release-ok:
needs: publish
if: always()
timeout-minutes: 3
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- name: Results
run: |


@@ -13,8 +13,8 @@ on:
jobs:
sh-lint:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: linkerd/dev/actions/setup-tools@v43
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: just sh-lint


@@ -13,10 +13,10 @@ permissions:
jobs:
devcontainer:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v43-rust
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- run: |
VERSION_REGEX='channel = "([0-9]+\.[0-9]+\.[0-9]+)"'
@@ -35,10 +35,10 @@ jobs:
workflows:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: linkerd/dev/actions/setup-tools@v43
- uses: actions/checkout@9bb56186c3b09b4f86b1c65136769dd318469633
- shell: bash
run: |
VERSION_REGEX='channel = "([0-9]+\.[0-9]+\.[0-9]+)"'

.gitignore

@@ -1,5 +1,3 @@
.cargo
**/.cargo
target
**/target
**/corpus

Cargo.lock

File diff suppressed because it is too large.


@@ -16,6 +16,7 @@ members = [
"linkerd/app",
"linkerd/conditional",
"linkerd/distribute",
"linkerd/detect",
"linkerd/dns/name",
"linkerd/dns",
"linkerd/duplex",
@@ -23,31 +24,21 @@ members = [
"linkerd/errno",
"linkerd/error-respond",
"linkerd/exp-backoff",
"linkerd/http/access-log",
"linkerd/http/box",
"linkerd/http/classify",
"linkerd/http/detect",
"linkerd/http/h2",
"linkerd/http/insert",
"linkerd/http/metrics",
"linkerd/http/override-authority",
"linkerd/http/prom",
"linkerd/http/retain",
"linkerd/http/retry",
"linkerd/http/route",
"linkerd/http/stream-timeouts",
"linkerd/http/upgrade",
"linkerd/http/variant",
"linkerd/http-access-log",
"linkerd/http-box",
"linkerd/http-classify",
"linkerd/http-metrics",
"linkerd/http-retry",
"linkerd/http-route",
"linkerd/identity",
"linkerd/idle-cache",
"linkerd/io",
"linkerd/meshtls",
"linkerd/meshtls/boring",
"linkerd/meshtls/rustls",
"linkerd/meshtls/verifier",
"linkerd/metrics",
"linkerd/mock/http-body",
"linkerd/opaq-route",
"linkerd/opencensus",
"linkerd/opentelemetry",
"linkerd/pool",
"linkerd/pool/mock",
"linkerd/pool/p2c",
@ -69,24 +60,21 @@ members = [
"linkerd/reconnect",
"linkerd/retry",
"linkerd/router",
"linkerd/rustls",
"linkerd/service-profiles",
"linkerd/signal",
"linkerd/stack",
"linkerd/stack/metrics",
"linkerd/stack/tracing",
"linkerd/system",
"linkerd/tonic-stream",
"linkerd/tonic-watch",
"linkerd/tls",
"linkerd/tls/route",
"linkerd/tls/test-util",
"linkerd/tracing",
"linkerd/transport-header",
"linkerd/transport-metrics",
"linkerd/workers",
"linkerd2-proxy",
"opencensus-proto",
"opentelemetry-proto",
"spiffe-proto",
"tools",
]
@ -94,44 +82,3 @@ members = [
[profile.release]
debug = 1
lto = true
[workspace.package]
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[workspace.dependencies]
bytes = { version = "1" }
drain = { version = "0.2", default-features = false }
h2 = { version = "0.4" }
http = { version = "1" }
http-body = { version = "1" }
hyper = { version = "1", default-features = false }
prometheus-client = { version = "0.23" }
prost = { version = "0.13" }
prost-build = { version = "0.13", default-features = false }
prost-types = { version = "0.13" }
tokio-rustls = { version = "0.26", default-features = false, features = [
"logging",
] }
tonic = { version = "0.13", default-features = false }
tonic-build = { version = "0.13", default-features = false }
tower = { version = "0.5", default-features = false }
tower-service = { version = "0.3" }
tower-test = { version = "0.4" }
tracing = { version = "0.1" }
[workspace.dependencies.http-body-util]
version = "0.1.3"
default-features = false
features = ["channel"]
[workspace.dependencies.hyper-util]
version = "0.1"
default-features = false
features = ["tokio", "tracing"]
[workspace.dependencies.linkerd2-proxy-api]
version = "0.17.0"
@ -3,7 +3,7 @@
# This is intended **DEVELOPMENT ONLY**, i.e. so that proxy developers can
# easily test the proxy in the context of the larger `linkerd2` project.
ARG RUST_IMAGE=ghcr.io/linkerd/dev:v47-rust
ARG RUST_IMAGE=ghcr.io/linkerd/dev:v43-rust
# Use an arbitrary ~recent edge release image to get the proxy
# identity-initializing and linkerd-await wrappers.
@ -14,16 +14,11 @@ FROM $LINKERD2_IMAGE as linkerd2
FROM --platform=$BUILDPLATFORM $RUST_IMAGE as fetch
ARG PROXY_FEATURES=""
ARG TARGETARCH="amd64"
RUN apt-get update && \
apt-get install -y time && \
if [[ "$PROXY_FEATURES" =~ .*meshtls-boring.* ]] ; then \
apt-get install -y golang ; \
fi && \
case "$TARGETARCH" in \
amd64) true ;; \
arm64) apt-get install --no-install-recommends -y binutils-aarch64-linux-gnu ;; \
esac && \
rm -rf /var/lib/apt/lists/*
ENV CARGO_NET_RETRY=10
@ -37,7 +32,8 @@ RUN --mount=type=cache,id=cargo,target=/usr/local/cargo/registry \
# Build the proxy.
FROM fetch as build
ENV CARGO_INCREMENTAL=0
ENV RUSTFLAGS="-D warnings -A deprecated --cfg tokio_unstable"
ENV RUSTFLAGS="-D warnings -A deprecated"
ARG TARGETARCH="amd64"
ARG PROFILE="release"
ARG LINKERD2_PROXY_VERSION=""
ARG LINKERD2_PROXY_VENDOR=""
@ -86,9 +86,8 @@ minutes to review our [code of conduct][coc].
We test our code by way of fuzzing and this is described in [FUZZING.md](/docs/FUZZING.md).
A third party security audit focused on fuzzing Linkerd2-proxy was performed by
Ada Logics in 2021. The
[full report](/docs/reports/linkerd2-proxy-fuzzing-report.pdf) can be found in
the `docs/reports/` directory.
Ada Logics in 2021. The full report is available
[here](/docs/reports/linkerd2-proxy-fuzzing-report.pdf).
## License
@ -1,36 +1,58 @@
[graph]
targets = [
{ triple = "x86_64-unknown-linux-gnu" },
{ triple = "aarch64-unknown-linux-gnu" },
{ triple = "armv7-unknown-linux-gnu" },
]
[advisories]
db-path = "~/.cargo/advisory-db"
db-urls = ["https://github.com/rustsec/advisory-db"]
vulnerability = "deny"
unmaintained = "warn"
yanked = "deny"
notice = "warn"
ignore = []
[licenses]
unlicensed = "deny"
allow = [
"Apache-2.0",
"BSD-2-Clause",
"BSD-3-Clause",
"ISC",
"MIT",
"Unicode-3.0",
"Zlib",
]
deny = []
copyleft = "deny"
allow-osi-fsf-free = "neither"
default = "deny"
# Ignore local workspace license values for unpublished crates.
private = { ignore = true }
confidence-threshold = 0.8
exceptions = [
{ allow = [
"ISC",
"OpenSSL",
], name = "aws-lc-sys", version = "*" },
"Zlib",
], name = "adler32", version = "*" },
{ allow = [
"ISC",
"MIT",
"OpenSSL",
], name = "aws-lc-fips-sys", version = "*" },
], name = "ring", version = "*" },
# The Unicode-DFS-2016 license is necessary for unicode-ident because they
# use data from the unicode tables to generate the tables which are
# included in the application. We do not distribute those data files so
# this is not a problem for us. See https://github.com/dtolnay/unicode-ident/pull/9/files
{ allow = [
"Unicode-DFS-2016",
], name = "unicode-ident", version = "*" },
]
[[licenses.clarify]]
name = "ring"
version = "*"
expression = "MIT AND ISC AND OpenSSL"
license-files = [
{ path = "LICENSE", hash = 0xbd0eed23 },
]
[bans]
@ -42,35 +64,41 @@ deny = [
{ name = "rustls", wrappers = ["tokio-rustls"] },
# rustls-webpki should be used instead.
{ name = "webpki" },
# aws-lc-rs should be used instead.
{ name = "ring" }
]
skip = [
# The proc-macro ecosystem is in the middle of a migration from `syn` v1 to
# `syn` v2. Allow both versions to coexist peacefully for now.
#
# Since `syn` is used by proc-macros (executed at compile time), duplicate
# versions won't have an impact on the final binary size.
{ name = "syn" },
# `tonic` v0.6 depends on `bitflags` v1.x, while `boring-sys` depends on
# `bitflags` v2.x. Allow both versions to coexist peacefully for now.
{ name = "bitflags", version = "1" },
# `linkerd-trace-context`, `rustls-pemfile` and `tonic` depend on `base64`
# v0.13.1 while `rcgen` depends on v0.21.5
{ name = "base64" },
# tonic/axum depend on a newer `tower`, which we are still catching up to.
# see #3744.
{ name = "tower", version = "0.5" },
# https://github.com/hawkw/matchers/pull/4
{ name = "regex-automata", version = "0.1" },
{ name = "regex-syntax", version = "0.6" },
# `trust-dns-proto`, depends on `idna` v0.2.3 while `url` depends on v0.5.0
{ name = "idna" },
# Some dependencies still use indexmap v1.
{ name = "indexmap", version = "1" },
{ name = "hashbrown", version = "0.12" },
]
skip-tree = [
# thiserror v2 is still propagating through the ecosystem
{ name = "thiserror", version = "1" },
# rand v0.9 is still propagating through the ecosystem
{ name = "rand", version = "0.8" },
# rust v1.0 is still propagating through the ecosystem
{ name = "rustix", version = "0.38" },
# `pprof` uses a number of old dependencies. for now, we skip its subtree.
{ name = "pprof" },
# aws-lc-rs uses a slightly outdated version of bindgen
{ name = "bindgen", version = "0.69.5" },
# socket v0.6 is still propagating through the ecosystem
{ name = "socket2", version = "0.5" },
# right now we have a mix of versions of this crate in the ecosystem
# procfs uses 0.36.14, tempfile uses 0.37.4
{ name = "rustix" },
# Hyper v0.14 depends on an older socket2 version.
{ name = "socket2" },
]
[sources]
unknown-registry = "deny"
unknown-git = "deny"
allow-registry = [
"https://github.com/rust-lang/crates.io-index",
]
allow-registry = ["https://github.com/rust-lang/crates.io-index"]
[sources.allow-org]
github = ["linkerd"]
@ -12,12 +12,9 @@ engine.
We place the fuzz tests into folders within the individual crates that the fuzz
tests target. For example, we have a fuzz test that targets the crate
`/linkerd/addr` and the code in `/linkerd/addr/src` and thus the fuzz test that
targets this crate is put in `/linkerd/addr/fuzz`.
The folder structure for each of the fuzz tests is automatically generated by
`cargo fuzz init`. See cargo fuzz's
[`README.md`](https://github.com/rust-fuzz/cargo-fuzz#cargo-fuzz-init) for more
information.
targets this crate is put in `/linkerd/addr/fuzz`. The folder set up we use for
each of the fuzz tests is automatically generated by `cargo fuzz init`
(described [here](https://github.com/rust-fuzz/cargo-fuzz#cargo-fuzz-init)).
### Fuzz targets
@ -99,5 +96,6 @@ unit-test-like fuzzers, but are essentially just more substantial in nature. The
idea behind these fuzzers is to test end-to-end concepts more so than individual
components of the proxy.
The [inbound fuzzer](/linkerd/app/inbound/fuzz/fuzz_targets/fuzz_target_1.rs)
is an example of this.
The inbound fuzzer
[here](/linkerd/app/inbound/fuzz/fuzz_targets/fuzz_target_1.rs) is an example of
this.
@ -1,18 +1,17 @@
[package]
name = "hyper-balance"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[dependencies]
futures = { version = "0.3", default-features = false }
http = { workspace = true }
http-body = { workspace = true }
hyper = { workspace = true }
http = "0.2"
hyper = "0.14"
pin-project = "1"
tower = { workspace = true, default-features = false, features = ["load"] }
tower = { version = "0.4", default-features = false, features = ["load"] }
tokio = { version = "1", features = ["macros"] }
[dev-dependencies]
@ -1,7 +1,7 @@
#![deny(rust_2018_idioms, clippy::disallowed_methods, clippy::disallowed_types)]
#![forbid(unsafe_code)]
use http_body::Body;
use hyper::body::HttpBody;
use pin_project::pin_project;
use std::pin::Pin;
use std::task::{Context, Poll};
@ -38,7 +38,7 @@ pub struct PendingUntilEosBody<T, B> {
impl<T, B> TrackCompletion<T, http::Response<B>> for PendingUntilFirstData
where
B: Body,
B: HttpBody,
{
type Output = http::Response<PendingUntilFirstDataBody<T, B>>;
@ -59,7 +59,7 @@ where
impl<T, B> TrackCompletion<T, http::Response<B>> for PendingUntilEos
where
B: Body,
B: HttpBody,
{
type Output = http::Response<PendingUntilEosBody<T, B>>;
@ -80,7 +80,7 @@ where
impl<T, B> Default for PendingUntilFirstDataBody<T, B>
where
B: Body + Default,
B: HttpBody + Default,
{
fn default() -> Self {
Self {
@ -90,9 +90,9 @@ where
}
}
impl<T, B> Body for PendingUntilFirstDataBody<T, B>
impl<T, B> HttpBody for PendingUntilFirstDataBody<T, B>
where
B: Body,
B: HttpBody,
T: Send + 'static,
{
type Data = B::Data;
@ -102,20 +102,32 @@ where
self.body.is_end_stream()
}
fn poll_frame(
fn poll_data(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
let this = self.project();
let ret = futures::ready!(this.body.poll_frame(cx));
let ret = futures::ready!(this.body.poll_data(cx));
// Once a frame is received, the handle is dropped. On subsequent calls, this
// Once a data frame is received, the handle is dropped. On subsequent calls, this
// is a noop.
drop(this.handle.take());
Poll::Ready(ret)
}
fn poll_trailers(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
let this = self.project();
// If this is being called, the handle definitely should have been dropped
// already.
drop(this.handle.take());
this.body.poll_trailers(cx)
}
#[inline]
fn size_hint(&self) -> hyper::body::SizeHint {
self.body.size_hint()
@ -126,7 +138,7 @@ where
impl<T, B> Default for PendingUntilEosBody<T, B>
where
B: Body + Default,
B: HttpBody + Default,
{
fn default() -> Self {
Self {
@ -136,7 +148,7 @@ where
}
}
impl<T: Send + 'static, B: Body> Body for PendingUntilEosBody<T, B> {
impl<T: Send + 'static, B: HttpBody> HttpBody for PendingUntilEosBody<T, B> {
type Data = B::Data;
type Error = B::Error;
@ -145,21 +157,35 @@ impl<T: Send + 'static, B: Body> Body for PendingUntilEosBody<T, B> {
self.body.is_end_stream()
}
fn poll_frame(
fn poll_data(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
let mut this = self.project();
let body = &mut this.body;
tokio::pin!(body);
let frame = futures::ready!(body.poll_frame(cx));
let ret = futures::ready!(body.poll_data(cx));
// If this was the last frame, then drop the handle immediately.
if this.body.is_end_stream() {
drop(this.handle.take());
}
Poll::Ready(frame)
Poll::Ready(ret)
}
fn poll_trailers(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
let this = self.project();
let ret = futures::ready!(this.body.poll_trailers(cx));
// Once trailers are received, the handle is dropped immediately (in case the body
// is retained longer for some reason).
drop(this.handle.take());
Poll::Ready(ret)
}
#[inline]
@ -172,7 +198,7 @@ impl<T: Send + 'static, B: Body> Body for PendingUntilEosBody<T, B> {
mod tests {
use super::{PendingUntilEos, PendingUntilFirstData};
use futures::future::poll_fn;
use http_body::{Body, Frame};
use hyper::body::HttpBody;
use std::collections::VecDeque;
use std::io::Cursor;
use std::pin::Pin;
@ -199,13 +225,11 @@ mod tests {
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok")
.into_data()
.expect("frame is data");
.expect("data some")
.expect("data ok");
assert!(wk.upgrade().is_none());
}
@ -258,10 +282,10 @@ mod tests {
let res = assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll());
assert!(res.expect("frame is some").is_err());
assert!(res.expect("data is some").is_err());
assert!(wk.upgrade().is_none());
}
@ -284,21 +308,21 @@ mod tests {
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data some")
.expect("data ok");
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data some")
.expect("data ok");
assert!(wk.upgrade().is_none());
}
@ -331,42 +355,40 @@ mod tests {
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data")
.expect("data ok");
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data")
.expect("data ok");
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok")
.into_trailers()
.expect("is trailers");
assert!(wk.upgrade().is_none());
let poll = assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll());
assert!(poll.is_none());
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_trailers(cx)
}))
.poll())
.expect("trailers ok")
.expect("trailers");
assert!(wk.upgrade().is_none());
}
@ -389,7 +411,7 @@ mod tests {
let poll = assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll());
assert!(poll.expect("some").is_err());
@ -407,7 +429,7 @@ mod tests {
#[derive(Default)]
struct TestBody(VecDeque<&'static str>, Option<http::HeaderMap>);
impl Body for TestBody {
impl HttpBody for TestBody {
type Data = Cursor<&'static str>;
type Error = &'static str;
@ -415,27 +437,26 @@ mod tests {
self.0.is_empty() & self.1.is_none()
}
fn poll_frame(
fn poll_data(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
Poll::Ready(self.as_mut().0.pop_front().map(Cursor::new).map(Ok))
}
fn poll_trailers(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
let mut this = self.as_mut();
// Return the next data frame from the sequence of chunks.
if let Some(chunk) = this.0.pop_front() {
let frame = Some(Ok(Frame::data(Cursor::new(chunk))));
return Poll::Ready(frame);
}
// Yield the trailers once all data frames have been yielded.
let trailers = this.1.take().map(Frame::<Self::Data>::trailers).map(Ok);
Poll::Ready(trailers)
assert!(this.0.is_empty());
Poll::Ready(Ok(this.1.take()))
}
}
#[derive(Default)]
struct ErrBody(Option<&'static str>);
impl Body for ErrBody {
impl HttpBody for ErrBody {
type Data = Cursor<&'static str>;
type Error = &'static str;
@ -443,13 +464,18 @@ mod tests {
self.0.is_none()
}
fn poll_frame(
fn poll_data(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
let err = self.as_mut().0.take().expect("err");
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
Poll::Ready(Some(Err(self.as_mut().0.take().expect("err"))))
}
Poll::Ready(Some(Err(err)))
fn poll_trailers(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
Poll::Ready(Err(self.as_mut().0.take().expect("err")))
}
}
}
@ -15,13 +15,9 @@ toolchain := ""
features := ""
export LINKERD2_PROXY_VERSION := env_var_or_default("LINKERD2_PROXY_VERSION", "0.0.0-dev" + `git rev-parse --short HEAD`)
export LINKERD2_PROXY_VERSION := env_var_or_default("LINKERD2_PROXY_VERSION", "0.0.0-dev." + `git rev-parse --short HEAD`)
export LINKERD2_PROXY_VENDOR := env_var_or_default("LINKERD2_PROXY_VENDOR", `whoami` + "@" + `hostname`)
# TODO: these variables will be included in dev v48
export AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_gnu := env_var_or_default("AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_gnu", "-fuse-ld=/usr/aarch64-linux-gnu/bin/ld")
export AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_musl := env_var_or_default("AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_musl", "-fuse-ld=/usr/aarch64-linux-gnu/bin/ld")
# The version name to use for packages.
package_version := "v" + LINKERD2_PROXY_VERSION
@ -30,30 +26,28 @@ docker-repo := "localhost/linkerd/proxy"
docker-tag := `git rev-parse --abbrev-ref HEAD | sed 's|/|.|g'` + "." + `git rev-parse --short HEAD`
docker-image := docker-repo + ":" + docker-tag
# The architecture name to use for packages. Either 'amd64' or 'arm64'.
# The architecture name to use for packages. Either 'amd64', 'arm64', or 'arm'.
arch := "amd64"
# The OS name to use for packages. Either 'linux' or 'windows'.
os := "linux"
libc := 'gnu'
# If a `arch` is specified, then we change the default cargo `--target`
# to support cross-compilation. Otherwise, we use `rustup` to find the default.
_target := if os + '-' + arch == "linux-amd64" {
_target := if arch == 'amd64' {
"x86_64-unknown-linux-" + libc
} else if os + '-' + arch == "linux-arm64" {
} else if arch == "arm64" {
"aarch64-unknown-linux-" + libc
} else if os + '-' + arch == "windows-amd64" {
"x86_64-pc-windows-" + libc
} else if arch == "arm" {
"armv7-unknown-linux-" + libc + "eabihf"
} else {
error("unsupported: os=" + os + " arch=" + arch + " libc=" + libc)
error("unsupported arch=" + arch)
}
_cargo := 'just-cargo profile=' + profile + ' target=' + _target + ' toolchain=' + toolchain
_target_dir := "target" / _target / profile
_target_bin := _target_dir / "linkerd2-proxy" + if os == 'windows' { '.exe' } else { '' }
_package_name := "linkerd2-proxy-" + package_version + "-" + os + "-" + arch + if libc == 'musl' { '-static' } else { '' }
_target_bin := _target_dir / "linkerd2-proxy"
_package_name := "linkerd2-proxy-" + package_version + "-" + arch + if libc == 'musl' { '-static' } else { '' }
_package_dir := "target/package" / _package_name
shasum := "shasum -a 256"
@ -63,9 +57,7 @@ _features := if features == "all" {
"--no-default-features --features=" + features
} else { "" }
wait-timeout := env_var_or_default("WAIT_TIMEOUT", "1m")
export CXX := 'clang++-19'
export CXX := 'clang++-14'
#
# Recipes
@ -141,7 +133,7 @@ _strip:
_package_bin := _package_dir / "bin" / "linkerd2-proxy"
# XXX aarch64-musl builds do not enable PIE, so we use target-specific
# XXX {aarch64,arm}-musl builds do not enable PIE, so we use target-specific
# files to document those differences.
_expected_checksec := '.checksec' / arch + '-' + libc + '.json'
@ -245,7 +237,7 @@ linkerd-tag := env_var_or_default('LINKERD_TAG', '')
_controller-image := 'ghcr.io/linkerd/controller'
_policy-image := 'ghcr.io/linkerd/policy-controller'
_init-image := 'ghcr.io/linkerd/proxy-init'
_init-tag := 'v2.4.0'
_init-tag := 'v2.2.0'
_kubectl := 'just-k3d kubectl'
_linkerd := 'linkerd --context=k3d-$(just-k3d --evaluate K3D_CLUSTER_NAME)'
@ -260,12 +252,6 @@ _tag-set:
_k3d-ready:
@just-k3d ready
export K3D_CLUSTER_NAME := "l5d-proxy"
export K3D_CREATE_FLAGS := "--no-lb"
export K3S_DISABLE := "local-storage,traefik,servicelb,metrics-server@server:*"
k3d-create: && _k3d-ready
@just-k3d create
k3d-load-linkerd: _tag-set _k3d-ready
for i in \
'{{ _controller-image }}:{{ linkerd-tag }}' \
@ -282,12 +268,11 @@ k3d-load-linkerd: _tag-set _k3d-ready
# Install crds on the test cluster.
_linkerd-crds-install: _k3d-ready
{{ _kubectl }} apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
{{ _linkerd }} install --crds \
| {{ _kubectl }} apply -f -
{{ _kubectl }} wait crd --for condition=established \
--selector='linkerd.io/control-plane-ns' \
--timeout={{ wait-timeout }}
--timeout=1m
# Install linkerd on the test cluster using test images.
linkerd-install *args='': _tag-set k3d-load-linkerd _linkerd-crds-install && _linkerd-ready
@ -309,7 +294,7 @@ linkerd-uninstall:
{{ _linkerd }} uninstall \
| {{ _kubectl }} delete -f -
linkerd-check-control-plane-proxy:
linkerd-check-contol-plane-proxy:
#!/usr/bin/env bash
set -euo pipefail
check=$(mktemp --tmpdir check-XXXX.json)
@ -328,14 +313,4 @@ linkerd-check-control-plane-proxy:
_linkerd-ready:
{{ _kubectl }} wait pod --for=condition=ready \
--namespace=linkerd --selector='linkerd.io/control-plane-component' \
--timeout={{ wait-timeout }}
#
# Dev Container
#
devcontainer-up:
devcontainer.js up --workspace-folder=.
devcontainer-exec container-id *args:
devcontainer.js exec --container-id={{ container-id }} {{ args }}
--timeout=1m
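The justfile's `_target` expression above selects a Rust target triple from the `os`/`arch`/`libc` variables. The same selection logic as a plain-bash sketch (`target_for` is a hypothetical helper written for illustration; the real logic is a `just` expression, not a script):

```shell
#!/usr/bin/env bash
# Map os/arch/libc to the Rust target triple, mirroring the justfile's _target.
set -euo pipefail

target_for() {
  local os="$1" arch="$2" libc="$3"
  case "$os-$arch" in
    linux-amd64)   echo "x86_64-unknown-linux-${libc}" ;;
    linux-arm64)   echo "aarch64-unknown-linux-${libc}" ;;
    windows-amd64) echo "x86_64-pc-windows-${libc}" ;;
    *) echo "unsupported: os=$os arch=$arch libc=$libc" >&2; return 1 ;;
  esac
}

target_for linux amd64 gnu
target_for linux arm64 musl
```

Note that the new justfile drops the `armv7-unknown-linux-*eabihf` arm of the old mapping and errors on any os/arch/libc combination outside the three supported triples.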
@ -1,16 +1,13 @@
[package]
name = "linkerd-addr"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[dependencies]
http = { workspace = true }
ipnet = "2.11"
http = "0.2"
ipnet = "2.7"
linkerd-dns-name = { path = "../dns/name" }
thiserror = "2"
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(fuzzing)'] }
thiserror = "1"
@ -1,10 +1,9 @@
[package]
name = "linkerd-addr-fuzz"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.0.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
@ -13,7 +12,7 @@ cargo-fuzz = true
libfuzzer-sys = "0.4"
linkerd-addr = { path = ".." }
linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
tracing = { workspace = true }
tracing = "0.1"
# Prevent this from interfering with workspaces
[workspace]
@ -100,11 +100,15 @@ impl Addr {
// them ourselves.
format!("[{}]", a.ip())
};
http::uri::Authority::from_str(&ip)
.unwrap_or_else(|err| panic!("SocketAddr ({a}) must be valid authority: {err}"))
http::uri::Authority::from_str(&ip).unwrap_or_else(|err| {
panic!("SocketAddr ({}) must be valid authority: {}", a, err)
})
}
Addr::Socket(a) => {
http::uri::Authority::from_str(&a.to_string()).unwrap_or_else(|err| {
panic!("SocketAddr ({}) must be valid authority: {}", a, err)
})
}
Addr::Socket(a) => http::uri::Authority::from_str(&a.to_string())
.unwrap_or_else(|err| panic!("SocketAddr ({a}) must be valid authority: {err}")),
}
}
@ -261,14 +265,14 @@ mod tests {
];
for (host, expected_result) in cases {
let a = Addr::from_str(host).unwrap();
assert_eq!(a.is_loopback(), *expected_result, "{host:?}")
assert_eq!(a.is_loopback(), *expected_result, "{:?}", host)
}
}
fn test_to_http_authority(cases: &[&str]) {
let width = cases.iter().map(|s| s.len()).max().unwrap_or(0);
for host in cases {
print!("trying {host:width$} ... ");
print!("trying {:1$} ... ", host, width);
Addr::from_str(host).unwrap().to_http_authority();
println!("ok");
}
@ -1,10 +1,10 @@
[package]
name = "linkerd-app"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Configures and executes the proxy
@ -18,7 +18,6 @@ pprof = ["linkerd-app-admin/pprof"]
[dependencies]
futures = { version = "0.3", default-features = false }
hyper-util = { workspace = true }
linkerd-app-admin = { path = "./admin" }
linkerd-app-core = { path = "./core" }
linkerd-app-gateway = { path = "./gateway" }
@ -26,14 +25,12 @@ linkerd-app-inbound = { path = "./inbound" }
linkerd-app-outbound = { path = "./outbound" }
linkerd-error = { path = "../error" }
linkerd-opencensus = { path = "../opencensus" }
linkerd-opentelemetry = { path = "../opentelemetry" }
linkerd-tonic-stream = { path = "../tonic-stream" }
linkerd-workers = { path = "../workers" }
rangemap = "1"
regex = "1"
thiserror = "2"
thiserror = "1"
tokio = { version = "1", features = ["rt"] }
tokio-stream = { version = "0.1", features = ["time", "sync"] }
tonic = { workspace = true, default-features = false, features = ["prost"] }
tower = { workspace = true }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false, features = ["prost"] }
tower = "0.4"
tracing = "0.1"
@ -1,10 +1,10 @@
[package]
name = "linkerd-app-admin"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
The linkerd proxy's admin server.
"""
@ -15,26 +15,22 @@ pprof = ["deflate", "dep:pprof"]
log-streaming = ["linkerd-tracing/stream"]
[dependencies]
bytes = { workspace = true }
deflate = { version = "1", optional = true, features = ["gzip"] }
http = { workspace = true }
http-body = { workspace = true }
http-body-util = { workspace = true }
hyper = { workspace = true, features = ["http1", "http2"] }
http = "0.2"
hyper = { version = "0.14", features = ["http1", "http2"] }
futures = { version = "0.3", default-features = false }
pprof = { version = "0.15", optional = true, features = ["prost-codec"] }
serde = "1"
serde_json = "1"
thiserror = "2"
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
tracing = { workspace = true }
linkerd-app-core = { path = "../core" }
linkerd-app-inbound = { path = "../inbound" }
linkerd-tracing = { path = "../../tracing" }
pprof = { version = "0.13", optional = true, features = ["prost-codec"] }
serde = "1"
serde_json = "1"
thiserror = "1"
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
tracing = "0.1"
[dependencies.tower]
workspace = true
version = "0.4"
default-features = false
features = [
"buffer",
@ -12,9 +12,13 @@
use futures::future::{self, TryFutureExt};
use http::StatusCode;
use hyper::{
body::{Body, HttpBody},
Request, Response,
};
use linkerd_app_core::{
metrics::{self as metrics, legacy::FmtMetrics},
proxy::http::{Body, BoxBody, ClientHandle, Request, Response},
metrics::{self as metrics, FmtMetrics},
proxy::http::ClientHandle,
trace, Error, Result,
};
use std::{
@ -32,30 +36,27 @@ pub use self::readiness::{Latch, Readiness};
#[derive(Clone)]
pub struct Admin<M> {
metrics: metrics::legacy::Serve<M>,
metrics: metrics::Serve<M>,
tracing: trace::Handle,
ready: Readiness,
shutdown_tx: mpsc::UnboundedSender<()>,
enable_shutdown: bool,
#[cfg(feature = "pprof")]
pprof: Option<crate::pprof::Pprof>,
}
pub type ResponseFuture = Pin<Box<dyn Future<Output = Result<Response<BoxBody>>> + Send + 'static>>;
pub type ResponseFuture = Pin<Box<dyn Future<Output = Result<Response<Body>>> + Send + 'static>>;
impl<M> Admin<M> {
pub fn new(
metrics: M,
ready: Readiness,
shutdown_tx: mpsc::UnboundedSender<()>,
enable_shutdown: bool,
tracing: trace::Handle,
) -> Self {
Self {
metrics: metrics::legacy::Serve::new(metrics),
metrics: metrics::Serve::new(metrics),
ready,
shutdown_tx,
enable_shutdown,
tracing,
#[cfg(feature = "pprof")]
@ -69,30 +70,30 @@ impl<M> Admin<M> {
self
}
fn ready_rsp(&self) -> Response<BoxBody> {
fn ready_rsp(&self) -> Response<Body> {
if self.ready.is_ready() {
Response::builder()
.status(StatusCode::OK)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("ready\n"))
.body("ready\n".into())
.expect("builder with known status code must not fail")
} else {
Response::builder()
.status(StatusCode::SERVICE_UNAVAILABLE)
.body(BoxBody::from_static("not ready\n"))
.body("not ready\n".into())
.expect("builder with known status code must not fail")
}
}
fn live_rsp() -> Response<BoxBody> {
fn live_rsp() -> Response<Body> {
Response::builder()
.status(StatusCode::OK)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("live\n"))
.body("live\n".into())
.expect("builder with known status code must not fail")
}
fn env_rsp<B>(req: Request<B>) -> Response<BoxBody> {
fn env_rsp<B>(req: Request<B>) -> Response<Body> {
use std::{collections::HashMap, env, ffi::OsString};
if req.method() != http::Method::GET {
@ -138,74 +139,56 @@ impl<M> Admin<M> {
json::json_rsp(&env)
}
fn shutdown(&self) -> Response<BoxBody> {
if !self.enable_shutdown {
return Response::builder()
.status(StatusCode::NOT_FOUND)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("shutdown endpoint is not enabled\n"))
.expect("builder with known status code must not fail");
}
fn shutdown(&self) -> Response<Body> {
if self.shutdown_tx.send(()).is_ok() {
Response::builder()
.status(StatusCode::OK)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("shutdown\n"))
.body("shutdown\n".into())
.expect("builder with known status code must not fail")
} else {
Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("shutdown listener dropped\n"))
.body("shutdown listener dropped\n".into())
.expect("builder with known status code must not fail")
}
}
fn internal_error_rsp(error: impl ToString) -> http::Response<BoxBody> {
fn internal_error_rsp(error: impl ToString) -> http::Response<Body> {
http::Response::builder()
.status(http::StatusCode::INTERNAL_SERVER_ERROR)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::new(error.to_string()))
.body(error.to_string().into())
.expect("builder with known status code should not fail")
}
fn not_found() -> Response<BoxBody> {
fn not_found() -> Response<Body> {
Response::builder()
.status(http::StatusCode::NOT_FOUND)
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail")
}
fn method_not_allowed() -> Response<BoxBody> {
fn method_not_allowed() -> Response<Body> {
Response::builder()
.status(http::StatusCode::METHOD_NOT_ALLOWED)
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail")
}
fn forbidden_not_localhost() -> Response<BoxBody> {
fn forbidden_not_localhost() -> Response<Body> {
Response::builder()
.status(http::StatusCode::FORBIDDEN)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::new::<String>(
"Requests are only permitted from localhost.".into(),
))
.body("Requests are only permitted from localhost.".into())
.expect("builder with known status code must not fail")
}
fn client_is_localhost<B>(req: &Request<B>) -> bool {
req.extensions()
.get::<ClientHandle>()
.map(|a| match a.addr.ip() {
std::net::IpAddr::V4(v4) => v4.is_loopback(),
std::net::IpAddr::V6(v6) => {
if let Some(v4) = v6.to_ipv4_mapped() {
v4.is_loopback()
} else {
v6.is_loopback()
}
}
})
.map(|a| a.addr.ip().is_loopback())
.unwrap_or(false)
}
}
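The mapped-address handling above is worth calling out: `Ipv6Addr::is_loopback` alone returns `false` for an IPv4-mapped loopback like `::ffff:127.0.0.1`, which is why the code tries `to_ipv4_mapped` first. A standalone sketch of the same check (hypothetical helper name `is_loopback_client`, not part of this codebase):

```rust
use std::net::IpAddr;

/// Returns true for loopback clients, treating IPv4-mapped IPv6
/// addresses like `::ffff:127.0.0.1` as loopback as well.
fn is_loopback_client(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_loopback(),
        IpAddr::V6(v6) => match v6.to_ipv4_mapped() {
            // An IPv4-mapped address is loopback iff its IPv4 form is.
            Some(v4) => v4.is_loopback(),
            None => v6.is_loopback(),
        },
    }
}
```

Without the `to_ipv4_mapped` branch, a local client connecting over a dual-stack socket could be rejected as non-local.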
@ -213,11 +196,11 @@ impl<M> Admin<M> {
impl<M, B> tower::Service<http::Request<B>> for Admin<M>
where
M: FmtMetrics,
B: Body + Send + 'static,
B: HttpBody + Send + Sync + 'static,
B::Error: Into<Error>,
B::Data: Send,
{
type Response = http::Response<BoxBody>;
type Response = http::Response<Body>;
type Error = Error;
type Future = ResponseFuture;
@ -323,13 +306,13 @@ mod tests {
let (_, t) = trace::Settings::default().build();
let (s, _) = mpsc::unbounded_channel();
let admin = Admin::new((), r, s, true, t);
let admin = Admin::new((), r, s, t);
macro_rules! call {
() => {{
let r = Request::builder()
.method(Method::GET)
.uri("http://0.0.0.0/ready")
.body(BoxBody::empty())
.body(Body::empty())
.unwrap();
let f = admin.clone().oneshot(r);
timeout(TIMEOUT, f).await.expect("timeout").expect("call")


@ -1,17 +1,14 @@
static JSON_MIME: &str = "application/json";
pub(in crate::server) static JSON_HEADER_VAL: HeaderValue = HeaderValue::from_static(JSON_MIME);
use bytes::Bytes;
use hyper::{
header::{self, HeaderValue},
StatusCode,
Body, StatusCode,
};
use linkerd_app_core::proxy::http::BoxBody;
pub(crate) fn json_error_rsp(
error: impl ToString,
status: http::StatusCode,
) -> http::Response<BoxBody> {
) -> http::Response<Body> {
mk_rsp(
status,
&serde_json::json!({
@ -21,12 +18,11 @@ pub(crate) fn json_error_rsp(
)
}
pub(crate) fn json_rsp(val: &impl serde::Serialize) -> http::Response<BoxBody> {
pub(crate) fn json_rsp(val: &impl serde::Serialize) -> http::Response<Body> {
mk_rsp(StatusCode::OK, val)
}
#[allow(clippy::result_large_err)]
pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Response<BoxBody>> {
pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Response<Body>> {
if let Some(accept) = req.headers().get(header::ACCEPT) {
let accept = match std::str::from_utf8(accept.as_bytes()) {
Ok(accept) => accept,
@ -45,7 +41,7 @@ pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Respon
tracing::warn!(?accept, "Accept header will not accept 'application/json'");
return Err(http::Response::builder()
.status(StatusCode::NOT_ACCEPTABLE)
.body(BoxBody::from_static(JSON_MIME))
.body(JSON_MIME.into())
.expect("builder with known status code must not fail"));
}
}
@ -53,26 +49,18 @@ pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Respon
Ok(())
}
fn mk_rsp(status: StatusCode, val: &impl serde::Serialize) -> http::Response<BoxBody> {
// Serialize the value into JSON, and then place the bytes in a boxed response body.
let json = serde_json::to_vec(val)
.map(Bytes::from)
.map(http_body_util::Full::new)
.map(BoxBody::new);
match json {
Ok(body) => http::Response::builder()
fn mk_rsp(status: StatusCode, val: &impl serde::Serialize) -> http::Response<Body> {
match serde_json::to_vec(val) {
Ok(json) => http::Response::builder()
.status(status)
.header(header::CONTENT_TYPE, JSON_HEADER_VAL.clone())
.body(body)
.body(json.into())
.expect("builder with known status code must not fail"),
Err(error) => {
tracing::warn!(?error, "failed to serialize JSON value");
http::Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.body(BoxBody::new(format!(
"failed to serialize JSON value: {error}"
)))
.body(format!("failed to serialize JSON value: {error}").into())
.expect("builder with known status code must not fail")
}
}


@ -1,18 +1,17 @@
use bytes::Buf;
use http::{header, StatusCode};
use linkerd_app_core::{
proxy::http::{Body, BoxBody},
trace::level,
Error,
use hyper::{
body::{Buf, HttpBody},
Body,
};
use linkerd_app_core::{trace::level, Error};
use std::io;
pub async fn serve<B>(
level: level::Handle,
req: http::Request<B>,
) -> Result<http::Response<BoxBody>, Error>
) -> Result<http::Response<Body>, Error>
where
B: Body,
B: HttpBody,
B::Error: Into<Error>,
{
Ok(match *req.method() {
@ -22,15 +21,11 @@ where
}
http::Method::PUT => {
use http_body_util::BodyExt;
let body = req
.into_body()
.collect()
let body = hyper::body::aggregate(req.into_body())
.await
.map_err(io::Error::other)?
.aggregate();
.map_err(|e| io::Error::new(io::ErrorKind::Other, e))?;
match level.set_from(body.chunk()) {
Ok(_) => mk_rsp(StatusCode::NO_CONTENT, BoxBody::empty()),
Ok(_) => mk_rsp(StatusCode::NO_CONTENT, Body::empty()),
Err(error) => {
tracing::warn!(%error, "Setting log level failed");
mk_rsp(StatusCode::BAD_REQUEST, error)
@ -42,19 +37,14 @@ where
.status(StatusCode::METHOD_NOT_ALLOWED)
.header(header::ALLOW, "GET")
.header(header::ALLOW, "PUT")
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail"),
})
}
fn mk_rsp<B>(status: StatusCode, body: B) -> http::Response<BoxBody>
where
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Into<Error>,
{
fn mk_rsp(status: StatusCode, body: impl Into<Body>) -> http::Response<Body> {
http::Response::builder()
.status(status)
.body(BoxBody::new(body))
.body(body.into())
.expect("builder with known status code must not fail")
}


@ -1,9 +1,10 @@
use crate::server::json;
use bytes::{Buf, Bytes};
use futures::FutureExt;
use hyper::{header, StatusCode};
use hyper::{
body::{Buf, Bytes},
header, Body, StatusCode,
};
use linkerd_app_core::{
proxy::http::{Body, BoxBody},
trace::{self},
Error,
};
@ -26,9 +27,9 @@ macro_rules! recover {
pub async fn serve<B>(
handle: trace::Handle,
req: http::Request<B>,
) -> Result<http::Response<BoxBody>, Error>
) -> Result<http::Response<Body>, Error>
where
B: Body,
B: hyper::body::HttpBody,
B::Error: Into<Error>,
{
let handle = handle.into_stream();
@ -51,13 +52,10 @@ where
// If the request is a QUERY, use the request body
method if method.as_str() == "QUERY" => {
// TODO(eliza): validate that the request has a content-length...
use http_body_util::BodyExt;
let body = recover!(
req.into_body()
.collect()
hyper::body::aggregate(req.into_body())
.await
.map_err(Into::into)
.map(http_body_util::Collected::aggregate),
.map_err(Into::into),
"Reading log stream request body",
StatusCode::BAD_REQUEST
);
@ -76,7 +74,7 @@ where
.status(StatusCode::METHOD_NOT_ALLOWED)
.header(header::ALLOW, "GET")
.header(header::ALLOW, "QUERY")
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail"));
}
};
@ -101,7 +99,7 @@ where
// https://github.com/hawkw/thingbuf/issues/62 would allow us to avoid the
// copy by passing the channel's pooled buffer directly to hyper, and
// returning it to the channel to be reused when hyper is done with it.
let (mut tx, body) = http_body_util::channel::Channel::<Bytes, Error>::new(1024);
let (mut tx, body) = Body::channel();
tokio::spawn(
async move {
// TODO(eliza): we could definitely implement some batching here.
@ -126,7 +124,7 @@ where
}),
);
Ok(mk_rsp(StatusCode::OK, BoxBody::new(body)))
Ok(mk_rsp(StatusCode::OK, body))
}
fn parse_filter(filter_str: &str) -> Result<EnvFilter, impl std::error::Error> {
@ -135,10 +133,10 @@ fn parse_filter(filter_str: &str) -> Result<EnvFilter, impl std::error::Error> {
filter
}
fn mk_rsp<B>(status: StatusCode, body: B) -> http::Response<B> {
fn mk_rsp(status: StatusCode, body: impl Into<Body>) -> http::Response<Body> {
http::Response::builder()
.status(status)
.header(header::CONTENT_TYPE, json::JSON_HEADER_VAL.clone())
.body(body)
.body(body.into())
.expect("builder with known status code must not fail")
}


@ -1,8 +1,8 @@
use linkerd_app_core::{
classify,
config::ServerConfig,
drain, errors, identity,
metrics::{self, legacy::FmtMetrics},
detect, drain, errors, identity,
metrics::{self, FmtMetrics},
proxy::http,
serve,
svc::{self, ExtractParam, InsertParam, Param},
@ -24,7 +24,6 @@ pub struct Config {
pub metrics_retain_idle: Duration,
#[cfg(feature = "pprof")]
pub enable_profiling: bool,
pub enable_shutdown: bool,
}
pub struct Task {
@ -52,7 +51,7 @@ struct Tcp {
#[derive(Clone, Debug)]
struct Http {
tcp: Tcp,
version: http::Variant,
version: http::Version,
}
#[derive(Clone, Debug)]
@ -88,7 +87,7 @@ impl Config {
) -> Result<Task>
where
R: FmtMetrics + Clone + Send + Sync + Unpin + 'static,
B: Bind<ServerConfig, BoundAddrs = Local<ServerAddr>>,
B: Bind<ServerConfig>,
B::Addrs: svc::Param<Remote<ClientAddr>>,
B::Addrs: svc::Param<Local<ServerAddr>>,
B::Addrs: svc::Param<AddrPair>,
@ -101,7 +100,7 @@ impl Config {
let (ready, latch) = crate::server::Readiness::new();
#[cfg_attr(not(feature = "pprof"), allow(unused_mut))]
let admin = crate::server::Admin::new(report, ready, shutdown, self.enable_shutdown, trace);
let admin = crate::server::Admin::new(report, ready, shutdown, trace);
#[cfg(feature = "pprof")]
let admin = admin.with_profiling(self.enable_profiling);
@ -122,7 +121,6 @@ impl Config {
.push_on_service(http::BoxResponse::layer())
.arc_new_clone_http();
let inbound::DetectMetrics(detect_metrics) = metrics.detect.clone();
let tcp = http
.unlift_new()
.push(http::NewServeHttp::layer({
@ -130,18 +128,18 @@ impl Config {
move |t: &Http| {
http::ServerParams {
version: t.version,
http2: Default::default(),
h2: Default::default(),
drain: drain.clone(),
}
}
}))
.push_filter(
|(http, tcp): (
http::Detection,
Result<Option<http::Version>, detect::DetectTimeoutError<_>>,
Tcp,
)| {
match http {
http::Detection::Http(version) => Ok(Http { version, tcp }),
Ok(Some(version)) => Ok(Http { version, tcp }),
// If detection timed out, we can make an educated guess at the proper
// behavior:
// - If the connection was meshed, it was most likely transported over
@ -149,12 +147,12 @@ impl Config {
// - If the connection was unmeshed, it was most likely HTTP/1.
// - If we received some unexpected SNI, the client is most likely
// confused/stale.
http::Detection::ReadTimeout(_timeout) => {
Err(_timeout) => {
let version = match tcp.tls {
tls::ConditionalServerTls::None(_) => http::Variant::Http1,
tls::ConditionalServerTls::None(_) => http::Version::Http1,
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
..
}) => http::Variant::H2,
}) => http::Version::H2,
tls::ConditionalServerTls::Some(tls::ServerTls::Passthru {
sni,
}) => {
@ -167,7 +165,7 @@ impl Config {
}
// If the connection failed HTTP detection, check if we detected TLS for
// another target. This might indicate that the client is confused/stale.
http::Detection::NotHttp => match tcp.tls {
Ok(None) => match tcp.tls {
tls::ConditionalServerTls::Some(tls::ServerTls::Passthru { sni }) => {
Err(UnexpectedSni(sni, tcp.client).into())
}
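The educated-guess logic in the comments above reduces to a pure function over the connection's TLS state; a simplified sketch with stand-in types (not the proxy's real `ConditionalServerTls`):

```rust
#[derive(Debug, PartialEq)]
enum Tls {
    None,
    Established,
    Passthru { sni: String },
}

#[derive(Debug, PartialEq)]
enum Guess {
    Http1,
    H2,
    UnexpectedSni(String),
}

// When protocol detection times out, guess from how the connection was
// transported: meshed (established TLS) connections most likely carry
// HTTP/2, unmeshed ones HTTP/1, and an unexpected SNI suggests a
// confused or stale client.
fn guess_on_timeout(tls: Tls) -> Guess {
    match tls {
        Tls::None => Guess::Http1,
        Tls::Established => Guess::H2,
        Tls::Passthru { sni } => Guess::UnexpectedSni(sni),
    }
}
```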
@ -178,12 +176,9 @@ impl Config {
)
.arc_new_tcp()
.lift_new_with_target()
.push(http::NewDetect::layer(move |tcp: &Tcp| {
http::DetectParams {
read_timeout: DETECT_TIMEOUT,
metrics: detect_metrics.metrics(tcp.policy.server_label())
}
}))
.push(detect::NewDetectService::layer(svc::stack::CloneParam::from(
detect::Config::<http::DetectHttp>::from_timeout(DETECT_TIMEOUT),
)))
.push(transport::metrics::NewServer::layer(metrics.proxy.transport))
.push_map_target(move |(tls, addrs): (tls::ConditionalServerTls, B::Addrs)| {
Tcp {
@ -214,7 +209,7 @@ impl Config {
impl Param<transport::labels::Key> for Tcp {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
self.tls.as_ref().map(|t| t.labels()),
self.tls.clone(),
self.addr.into(),
self.policy.server_label(),
)
@ -223,8 +218,8 @@ impl Param<transport::labels::Key> for Tcp {
// === impl Http ===
impl Param<http::Variant> for Http {
fn param(&self) -> http::Variant {
impl Param<http::Version> for Http {
fn param(&self) -> http::Version {
self.version
}
}
@ -272,7 +267,7 @@ impl Param<metrics::ServerLabel> for Http {
impl Param<metrics::EndpointLabels> for Permitted {
fn param(&self) -> metrics::EndpointLabels {
metrics::InboundEndpointLabels {
tls: self.http.tcp.tls.as_ref().map(|t| t.labels()),
tls: self.http.tcp.tls.clone(),
authority: None,
target_addr: self.http.tcp.addr.into(),
policy: self.permit.labels.clone(),


@ -1,10 +1,10 @@
[package]
name = "linkerd-app-core"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Core infrastructure for the proxy application
@ -13,34 +13,29 @@ independently of the inbound and outbound proxy logic.
"""
[dependencies]
drain = { workspace = true, features = ["retain"] }
http = { workspace = true }
http-body = { workspace = true }
hyper = { workspace = true, features = ["http1", "http2"] }
bytes = "1"
drain = { version = "0.1", features = ["retain"] }
http = "0.2"
http-body = "0.4"
hyper = { version = "0.14", features = ["http1", "http2"] }
futures = { version = "0.3", default-features = false }
ipnet = "2.11"
prometheus-client = { workspace = true }
thiserror = "2"
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
tokio-stream = { version = "0.1", features = ["time"] }
tonic = { workspace = true, default-features = false, features = ["prost"] }
tracing = { workspace = true }
pin-project = "1"
ipnet = "2.7"
linkerd-addr = { path = "../../addr" }
linkerd-conditional = { path = "../../conditional" }
linkerd-dns = { path = "../../dns" }
linkerd-detect = { path = "../../detect" }
linkerd-duplex = { path = "../../duplex" }
linkerd-errno = { path = "../../errno" }
linkerd-error = { path = "../../error" }
linkerd-error-respond = { path = "../../error-respond" }
linkerd-exp-backoff = { path = "../../exp-backoff" }
linkerd-http-metrics = { path = "../../http/metrics" }
linkerd-http-metrics = { path = "../../http-metrics" }
linkerd-identity = { path = "../../identity" }
linkerd-idle-cache = { path = "../../idle-cache" }
linkerd-io = { path = "../../io" }
linkerd-meshtls = { path = "../../meshtls", default-features = false }
linkerd-metrics = { path = "../../metrics", features = ["process", "stack"] }
linkerd-opencensus = { path = "../../opencensus" }
linkerd-opentelemetry = { path = "../../opentelemetry" }
linkerd-proxy-api-resolve = { path = "../../proxy/api-resolve" }
linkerd-proxy-balance = { path = "../../proxy/balance" }
linkerd-proxy-core = { path = "../../proxy/core" }
@ -56,7 +51,6 @@ linkerd-proxy-tcp = { path = "../../proxy/tcp" }
linkerd-proxy-transport = { path = "../../proxy/transport" }
linkerd-reconnect = { path = "../../reconnect" }
linkerd-router = { path = "../../router" }
linkerd-rustls = { path = "../../rustls" }
linkerd-service-profiles = { path = "../../service-profiles" }
linkerd-stack = { path = "../../stack" }
linkerd-stack-metrics = { path = "../../stack/metrics" }
@ -66,16 +60,27 @@ linkerd-transport-header = { path = "../../transport-header" }
linkerd-transport-metrics = { path = "../../transport-metrics" }
linkerd-tls = { path = "../../tls" }
linkerd-trace-context = { path = "../../trace-context" }
prometheus-client = "0.22"
regex = "1"
serde_json = "1"
thiserror = "1"
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
tokio-stream = { version = "0.1", features = ["time"] }
tonic = { version = "0.10", default-features = false, features = ["prost"] }
tracing = "0.1"
parking_lot = "0.12"
pin-project = "1"
[dependencies.tower]
workspace = true
version = "0.4"
default-features = false
features = ["make", "spawn-ready", "timeout", "util", "limit"]
[target.'cfg(target_os = "linux")'.dependencies]
linkerd-system = { path = "../../system" }
[build-dependencies]
semver = "1"
[dev-dependencies]
bytes = { workspace = true }
http-body-util = { workspace = true }
linkerd-mock-http-body = { path = "../../mock/http-body" }
quickcheck = { version = "1", default-features = false }


@ -4,18 +4,18 @@ fn set_env(name: &str, cmd: &mut Command) {
let value = match cmd.output() {
Ok(output) => String::from_utf8(output.stdout).unwrap(),
Err(err) => {
println!("cargo:warning={err}");
println!("cargo:warning={}", err);
"".to_string()
}
};
println!("cargo:rustc-env={name}={value}");
println!("cargo:rustc-env={}={}", name, value);
}
fn version() -> String {
if let Ok(v) = std::env::var("LINKERD2_PROXY_VERSION") {
if !v.is_empty() {
if let Err(err) = semver::Version::parse(&v) {
panic!("LINKERD2_PROXY_VERSION must be semver: version='{v}' error='{err}'");
if semver::Version::parse(&v).is_err() {
panic!("LINKERD2_PROXY_VERSION must be semver");
}
return v;
}


@ -1,4 +1,5 @@
use crate::profiles;
pub use classify::gate;
use linkerd_error::Error;
use linkerd_proxy_client_policy as client_policy;
use linkerd_proxy_http::{classify, HasH2Reason, ResponseTimeoutError};
@ -213,7 +214,7 @@ fn h2_error(err: &Error) -> String {
if let Some(reason) = err.h2_reason() {
// This should output the error code in the same format as the spec,
// for example: PROTOCOL_ERROR
format!("h2({reason:?})")
format!("h2({:?})", reason)
} else {
trace!("classifying found non-h2 error: {:?}", err);
String::from("unclassified")


@ -1,17 +1,16 @@
pub use crate::exp_backoff::ExponentialBackoff;
use crate::{
proxy::http::{h1, h2},
svc::{queue, ExtractParam, Param},
transport::{DualListenAddr, Keepalive, ListenAddr, UserTimeout},
proxy::http::{self, h1, h2},
svc::{queue, CloneParam, ExtractParam, Param},
transport::{Keepalive, ListenAddr},
};
use std::time::Duration;
#[derive(Clone, Debug)]
pub struct ServerConfig {
pub addr: DualListenAddr,
pub addr: ListenAddr,
pub keepalive: Keepalive,
pub user_timeout: UserTimeout,
pub http2: h2::ServerParams,
pub h2_settings: h2::Settings,
}
#[derive(Clone, Debug)]
@ -19,9 +18,8 @@ pub struct ConnectConfig {
pub backoff: ExponentialBackoff,
pub timeout: Duration,
pub keepalive: Keepalive,
pub user_timeout: UserTimeout,
pub http1: h1::PoolSettings,
pub http2: h2::ClientParams,
pub h1_settings: h1::PoolSettings,
pub h2_settings: h2::Settings,
}
#[derive(Clone, Debug)]
@ -59,17 +57,19 @@ impl<T> ExtractParam<queue::Timeout, T> for QueueConfig {
}
}
// === impl ServerConfig ===
// === impl ProxyConfig ===
impl Param<DualListenAddr> for ServerConfig {
fn param(&self) -> DualListenAddr {
self.addr
impl ProxyConfig {
pub fn detect_http(&self) -> CloneParam<linkerd_detect::Config<http::DetectHttp>> {
linkerd_detect::Config::from_timeout(self.detect_protocol_timeout).into()
}
}
// === impl ServerConfig ===
impl Param<ListenAddr> for ServerConfig {
fn param(&self) -> ListenAddr {
ListenAddr(self.addr.0)
self.addr
}
}
@ -78,9 +78,3 @@ impl Param<Keepalive> for ServerConfig {
self.keepalive
}
}
impl Param<UserTimeout> for ServerConfig {
fn param(&self) -> UserTimeout {
self.user_timeout
}
}


@ -69,39 +69,24 @@ impl fmt::Display for ControlAddr {
}
}
pub type RspBody = linkerd_http_metrics::requests::ResponseBody<
http::balance::Body<hyper::body::Incoming>,
classify::Eos,
>;
#[derive(Clone, Debug, Default)]
pub struct Metrics {
balance: balance::Metrics,
}
pub type RspBody =
linkerd_http_metrics::requests::ResponseBody<http::balance::Body<hyper::Body>, classify::Eos>;
const EWMA_CONFIG: http::balance::EwmaConfig = http::balance::EwmaConfig {
default_rtt: time::Duration::from_millis(30),
decay: time::Duration::from_secs(10),
};
impl Metrics {
pub fn register(registry: &mut prom::Registry) -> Self {
Metrics {
balance: balance::Metrics::register(registry.sub_registry_with_prefix("balancer")),
}
}
}
impl Config {
pub fn build(
self,
dns: dns::Resolver,
legacy_metrics: metrics::ControlHttp,
metrics: Metrics,
metrics: metrics::ControlHttp,
registry: &mut prom::Registry,
identity: identity::NewClient,
) -> svc::ArcNewService<
(),
svc::BoxCloneSyncService<http::Request<tonic::body::Body>, http::Response<RspBody>>,
svc::BoxCloneSyncService<http::Request<tonic::body::BoxBody>, http::Response<RspBody>>,
> {
let addr = self.addr;
tracing::trace!(%addr, "Building");
@ -114,7 +99,7 @@ impl Config {
warn!(error, "Failed to resolve control-plane component");
if let Some(e) = crate::errors::cause_ref::<dns::ResolveError>(&*error) {
if let Some(ttl) = e.negative_ttl() {
return Ok::<_, Error>(Either::Left(
return Ok(Either::Left(
IntervalStream::new(time::interval(ttl)).map(|_| ()),
));
}
@ -126,16 +111,13 @@ impl Config {
}
};
let client = svc::stack(ConnectTcp::new(
self.connect.keepalive,
self.connect.user_timeout,
))
.push(tls::Client::layer(identity))
.push_connect_timeout(self.connect.timeout) // Client<NewClient, ConnectTcp>
.push_map_target(|(_version, target)| target)
.push(self::client::layer::<_, _>(self.connect.http2))
.push_on_service(svc::MapErr::layer_boxed())
.into_new_service();
let client = svc::stack(ConnectTcp::new(self.connect.keepalive))
.push(tls::Client::layer(identity))
.push_connect_timeout(self.connect.timeout)
.push_map_target(|(_version, target)| target)
.push(self::client::layer())
.push_on_service(svc::MapErr::layer_boxed())
.into_new_service();
let endpoint = client
// Ensure that connection is driven independently of the load
@ -151,8 +133,12 @@ impl Config {
let balance = endpoint
.lift_new()
.push(self::balance::layer(metrics.balance, dns, resolve_backoff))
.push(legacy_metrics.to_layer::<classify::Response, _, _>())
.push(self::balance::layer(
registry.sub_registry_with_prefix("balancer"),
dns,
resolve_backoff,
))
.push(metrics.to_layer::<classify::Response, _, _>())
.push(classify::NewClassify::layer_default());
balance
@ -247,17 +233,15 @@ mod balance {
use super::{client::Target, ControlAddr};
use crate::{
dns,
metrics::prom::encoding::EncodeLabelSet,
metrics::prom::{self, encoding::EncodeLabelSet},
proxy::{dns_resolve::DnsResolve, http, resolve::recover},
svc, tls,
};
use linkerd_stack::ExtractParam;
use std::net::SocketAddr;
pub(super) type Metrics = http::balance::MetricFamilies<Labels>;
pub fn layer<B, R: Clone, N>(
metrics: Metrics,
registry: &mut prom::Registry,
dns: dns::Resolver,
recover: R,
) -> impl svc::Layer<
@ -265,12 +249,9 @@ mod balance {
Service = http::NewBalance<B, Params, recover::Resolve<R, DnsResolve>, NewIntoTarget<N>>,
> {
let resolve = recover::Resolve::new(recover, DnsResolve::new(dns));
let metrics = Params(http::balance::MetricFamilies::register(registry));
svc::layer::mk(move |inner| {
http::NewBalance::new(
NewIntoTarget { inner },
resolve.clone(),
Params(metrics.clone()),
)
http::NewBalance::new(NewIntoTarget { inner }, resolve.clone(), metrics.clone())
})
}
@ -289,7 +270,7 @@ mod balance {
}
#[derive(Clone, Debug, Hash, PartialEq, Eq, EncodeLabelSet)]
pub(super) struct Labels {
struct Labels {
addr: String,
}
@ -335,7 +316,11 @@ mod client {
svc, tls,
transport::{Remote, ServerAddr},
};
use std::net::SocketAddr;
use linkerd_proxy_http::h2::Settings as H2Settings;
use std::{
net::SocketAddr,
task::{Context, Poll},
};
#[derive(Clone, Hash, Debug, Eq, PartialEq)]
pub struct Target {
@ -349,6 +334,11 @@ mod client {
}
}
#[derive(Debug)]
pub struct Client<C, B> {
inner: http::h2::Connect<C, B>,
}
// === impl Target ===
impl svc::Param<Remote<ServerAddr>> for Target {
@ -371,12 +361,47 @@ mod client {
// === impl Layer ===
pub fn layer<C, B>(
h2: http::h2::ClientParams,
) -> impl svc::Layer<C, Service = http::h2::Connect<C, B>> + Clone
pub fn layer<C, B>() -> impl svc::Layer<C, Service = Client<C, B>> + Copy
where
http::h2::Connect<C, B>: tower::Service<Target>,
{
svc::layer::mk(move |mk_conn| http::h2::Connect::new(mk_conn, h2.clone()))
svc::layer::mk(|mk_conn| {
let inner = http::h2::Connect::new(mk_conn, H2Settings::default());
Client { inner }
})
}
// === impl Client ===
impl<C, B> tower::Service<Target> for Client<C, B>
where
http::h2::Connect<C, B>: tower::Service<Target>,
{
type Response = <http::h2::Connect<C, B> as tower::Service<Target>>::Response;
type Error = <http::h2::Connect<C, B> as tower::Service<Target>>::Error;
type Future = <http::h2::Connect<C, B> as tower::Service<Target>>::Future;
#[inline]
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx)
}
#[inline]
fn call(&mut self, target: Target) -> Self::Future {
self.inner.call(target)
}
}
// A manual impl is needed since derive adds `B: Clone`, but that's just
// a PhantomData.
impl<C, B> Clone for Client<C, B>
where
http::h2::Connect<C, B>: Clone,
{
fn clone(&self) -> Self {
Client {
inner: self.inner.clone(),
}
}
}
}
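The comment above points at a common Rust pitfall: `#[derive(Clone)]` adds a `B: Clone` bound even when `B` only occupies a phantom position. A minimal standalone illustration (hypothetical `Wrapper`/`NotClone` types, not part of this codebase):

```rust
use std::marker::PhantomData;

// A non-Clone type used only as a marker parameter.
struct NotClone;

struct Wrapper<B> {
    count: usize,
    _marker: PhantomData<B>,
}

// `#[derive(Clone)]` would require `B: Clone`; a manual impl only
// needs the actual fields to be cloneable, so `Wrapper<NotClone>`
// can still be cloned.
impl<B> Clone for Wrapper<B> {
    fn clone(&self) -> Self {
        Wrapper {
            count: self.count,
            _marker: PhantomData,
        }
    }
}
```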


@ -1,55 +1,30 @@
use self::metrics::Labels;
use linkerd_metrics::prom::{Counter, Family, Registry};
use std::time::Duration;
pub use linkerd_dns::*;
mod metrics;
use std::path::PathBuf;
use std::time::Duration;
#[derive(Clone, Debug)]
pub struct Config {
pub min_ttl: Option<Duration>,
pub max_ttl: Option<Duration>,
pub resolv_conf_path: PathBuf,
}
pub struct Dns {
resolver: Resolver,
resolutions: Family<Labels, Counter>,
}
// === impl Dns ===
impl Dns {
/// Returns a new [`Resolver`].
pub fn resolver(&self, client: &'static str) -> Resolver {
let metrics = self.metrics(client);
self.resolver.clone().with_metrics(metrics)
}
pub resolver: Resolver,
}
// === impl Config ===
impl Config {
pub fn build(self, registry: &mut Registry) -> Dns {
let resolutions = Family::default();
registry.register(
"resolutions",
"Counts the number of DNS records that have been resolved.",
resolutions.clone(),
);
pub fn build(self) -> Dns {
let resolver =
Resolver::from_system_config_with(&self).expect("system DNS config must be valid");
Dns {
resolver,
resolutions,
}
Dns { resolver }
}
}
impl ConfigureResolver for Config {
/// Modify a `hickory-resolver::config::ResolverOpts` to reflect
/// Modify a `trust-dns-resolver::config::ResolverOpts` to reflect
/// the configured minimum and maximum DNS TTL values.
fn configure_resolver(&self, opts: &mut ResolverOpts) {
opts.positive_min_ttl = self.min_ttl;


@ -1,115 +0,0 @@
use super::{Dns, Metrics};
use linkerd_metrics::prom::encoding::{
EncodeLabel, EncodeLabelSet, EncodeLabelValue, LabelSetEncoder, LabelValueEncoder,
};
use std::fmt::{Display, Write};
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
pub(super) struct Labels {
client: &'static str,
record_type: RecordType,
result: Outcome,
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
enum RecordType {
A,
Srv,
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
enum Outcome {
Ok,
NotFound,
}
// === impl Dns ===
impl Dns {
pub(super) fn metrics(&self, client: &'static str) -> Metrics {
let family = &self.resolutions;
let a_records_resolved = (*family.get_or_create(&Labels {
client,
record_type: RecordType::A,
result: Outcome::Ok,
}))
.clone();
let a_records_not_found = (*family.get_or_create(&Labels {
client,
record_type: RecordType::A,
result: Outcome::NotFound,
}))
.clone();
let srv_records_resolved = (*family.get_or_create(&Labels {
client,
record_type: RecordType::Srv,
result: Outcome::Ok,
}))
.clone();
let srv_records_not_found = (*family.get_or_create(&Labels {
client,
record_type: RecordType::Srv,
result: Outcome::NotFound,
}))
.clone();
Metrics {
a_records_resolved,
a_records_not_found,
srv_records_resolved,
srv_records_not_found,
}
}
}
// === impl Labels ===
impl EncodeLabelSet for Labels {
fn encode(&self, mut encoder: LabelSetEncoder<'_>) -> Result<(), std::fmt::Error> {
let Self {
client,
record_type,
result,
} = self;
("client", *client).encode(encoder.encode_label())?;
("record_type", record_type).encode(encoder.encode_label())?;
("result", result).encode(encoder.encode_label())?;
Ok(())
}
}
// === impl Outcome ===
impl EncodeLabelValue for &Outcome {
fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), std::fmt::Error> {
encoder.write_str(self.to_string().as_str())
}
}
impl Display for Outcome {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Self::Ok => "ok",
Self::NotFound => "not_found",
})
}
}
// === impl RecordType ===
impl EncodeLabelValue for &RecordType {
fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), std::fmt::Error> {
encoder.write_str(self.to_string().as_str())
}
}
impl Display for RecordType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Self::A => "A/AAAA",
Self::Srv => "SRV",
})
}
}


@ -1,4 +1,3 @@
pub mod body;
pub mod respond;
pub use self::respond::{HttpRescue, NewRespond, NewRespondService, SyntheticHttpResponse};
@ -7,16 +6,6 @@ pub use linkerd_proxy_http::h2::H2Error;
pub use linkerd_stack::{FailFastError, LoadShedError};
pub use tonic::Code as Grpc;
/// Header names and values related to error responses.
pub mod header {
use http::header::{HeaderName, HeaderValue};
pub const L5D_PROXY_CONNECTION: HeaderName = HeaderName::from_static("l5d-proxy-connection");
pub const L5D_PROXY_ERROR: HeaderName = HeaderName::from_static("l5d-proxy-error");
pub(super) const GRPC_CONTENT_TYPE: HeaderValue = HeaderValue::from_static("application/grpc");
pub(super) const GRPC_MESSAGE: HeaderName = HeaderName::from_static("grpc-message");
pub(super) const GRPC_STATUS: HeaderName = HeaderName::from_static("grpc-status");
}
#[derive(Debug, thiserror::Error)]
#[error("connect timed out after {0:?}")]
pub struct ConnectTimeout(pub(crate) std::time::Duration);
@ -29,27 +18,3 @@ pub fn has_grpc_status(error: &crate::Error, code: tonic::Code) -> bool {
.map(|s| s.code() == code)
.unwrap_or(false)
}
// Copied from tonic, where it's private.
fn code_header(code: tonic::Code) -> http::HeaderValue {
use {http::HeaderValue, tonic::Code};
match code {
Code::Ok => HeaderValue::from_static("0"),
Code::Cancelled => HeaderValue::from_static("1"),
Code::Unknown => HeaderValue::from_static("2"),
Code::InvalidArgument => HeaderValue::from_static("3"),
Code::DeadlineExceeded => HeaderValue::from_static("4"),
Code::NotFound => HeaderValue::from_static("5"),
Code::AlreadyExists => HeaderValue::from_static("6"),
Code::PermissionDenied => HeaderValue::from_static("7"),
Code::ResourceExhausted => HeaderValue::from_static("8"),
Code::FailedPrecondition => HeaderValue::from_static("9"),
Code::Aborted => HeaderValue::from_static("10"),
Code::OutOfRange => HeaderValue::from_static("11"),
Code::Unimplemented => HeaderValue::from_static("12"),
Code::Internal => HeaderValue::from_static("13"),
Code::Unavailable => HeaderValue::from_static("14"),
Code::DataLoss => HeaderValue::from_static("15"),
Code::Unauthenticated => HeaderValue::from_static("16"),
}
}


@ -1,314 +0,0 @@
use super::{
header::{GRPC_MESSAGE, GRPC_STATUS},
respond::{HttpRescue, SyntheticHttpResponse},
};
use http::header::HeaderValue;
use http_body::Frame;
use linkerd_error::{Error, Result};
use pin_project::pin_project;
use std::{
pin::Pin,
task::{Context, Poll},
};
use tracing::{debug, warn};
/// Returns a "gRPC rescue" body.
///
/// This returns a body that, should the inner `B`-typed body return an error when polling for
/// DATA frames, will "rescue" the stream and return a TRAILERS frame that describes the error.
#[pin_project(project = ResponseBodyProj)]
pub struct ResponseBody<R, B>(#[pin] Inner<R, B>);
#[pin_project(project = InnerProj)]
enum Inner<R, B> {
/// An inert body that delegates directly down to the underlying body `B`.
Passthru(#[pin] B),
/// A body that will be rescued if it yields an error.
GrpcRescue {
#[pin]
inner: B,
/// An error response [strategy][HttpRescue].
rescue: R,
emit_headers: bool,
},
/// The underlying body `B` yielded an error and was "rescued".
Rescued,
}
// === impl ResponseBody ===
impl<R, B> ResponseBody<R, B> {
/// Returns a body in "passthru" mode.
pub fn passthru(inner: B) -> Self {
Self(Inner::Passthru(inner))
}
/// Returns a "gRPC rescue" body.
pub fn grpc_rescue(inner: B, rescue: R, emit_headers: bool) -> Self {
Self(Inner::GrpcRescue {
inner,
rescue,
emit_headers,
})
}
}
impl<R, B: Default + linkerd_proxy_http::Body> Default for ResponseBody<R, B> {
fn default() -> Self {
Self(Inner::Passthru(B::default()))
}
}
impl<R, B> linkerd_proxy_http::Body for ResponseBody<R, B>
where
B: linkerd_proxy_http::Body<Error = Error>,
R: HttpRescue<B::Error>,
{
type Data = B::Data;
type Error = B::Error;
fn poll_frame(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<std::result::Result<http_body::Frame<Self::Data>, Self::Error>>> {
let ResponseBodyProj(inner) = self.as_mut().project();
match inner.project() {
InnerProj::Passthru(inner) => inner.poll_frame(cx),
InnerProj::GrpcRescue {
inner,
rescue,
emit_headers,
} => match inner.poll_frame(cx) {
Poll::Ready(Some(Err(error))) => {
// The inner body has yielded an error, which we will try to rescue. If so,
// yield synthetic trailers reporting the error.
let trailers = Self::rescue(error, rescue, *emit_headers)?;
self.set(Self(Inner::Rescued));
Poll::Ready(Some(Ok(Frame::trailers(trailers))))
}
poll => poll,
},
InnerProj::Rescued => Poll::Ready(None),
}
}
#[inline]
fn is_end_stream(&self) -> bool {
let Self(inner) = self;
match inner {
Inner::Passthru(inner) => inner.is_end_stream(),
Inner::GrpcRescue { inner, .. } => inner.is_end_stream(),
Inner::Rescued => true,
}
}
#[inline]
fn size_hint(&self) -> http_body::SizeHint {
let Self(inner) = self;
match inner {
Inner::Passthru(inner) => inner.size_hint(),
Inner::GrpcRescue { inner, .. } => inner.size_hint(),
Inner::Rescued => http_body::SizeHint::with_exact(0),
}
}
}
impl<R, B> ResponseBody<R, B>
where
B: http_body::Body,
R: HttpRescue<B::Error>,
{
/// Maps an error yielded by the inner body to a collection of gRPC trailers.
///
/// This function returns `Ok(trailers)` if the given [`HttpRescue<E>`] strategy could identify
/// a cause for an error yielded by the inner `B`-typed body.
fn rescue(
error: B::Error,
rescue: &R,
emit_headers: bool,
) -> Result<http::HeaderMap, B::Error> {
let SyntheticHttpResponse {
grpc_status,
message,
..
} = rescue.rescue(error)?;
debug!(grpc.status = ?grpc_status, "Synthesizing gRPC trailers");
let mut t = http::HeaderMap::new();
t.insert(GRPC_STATUS, super::code_header(grpc_status));
if emit_headers {
// A gRPC message trailer is only included if instructed to emit additional headers.
t.insert(
GRPC_MESSAGE,
HeaderValue::from_str(&message).unwrap_or_else(|error| {
warn!(%error, "Failed to encode error header");
HeaderValue::from_static("Unexpected error")
}),
);
}
Ok(t)
}
}
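The rescue path above reduces to: map the error to a (status, message) pair, then emit a trailer map carrying `grpc-status` and, when configured, `grpc-message`. A hedged sketch with plain `std` types, using a `HashMap` in place of `http::HeaderMap` and a bare numeric status in place of `tonic::Code`:

```rust
use std::collections::HashMap;

/// Builds synthetic gRPC trailers for an error, mirroring the shape of
/// `ResponseBody::rescue`. A plain HashMap stands in for `http::HeaderMap`.
fn synthesize_trailers(
    grpc_status: u32,
    message: &str,
    emit_headers: bool,
) -> HashMap<&'static str, String> {
    let mut t = HashMap::new();
    t.insert("grpc-status", grpc_status.to_string());
    if emit_headers {
        // The message trailer is only included when the proxy is configured
        // to emit additional debugging headers.
        t.insert("grpc-message", message.to_string());
    }
    t
}
```

Keeping the message behind `emit_headers` lets deployments avoid leaking internal error details to clients by default.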
#[cfg(test)]
mod tests {
use super::*;
use crate::errors::header::{GRPC_MESSAGE, GRPC_STATUS};
use http::HeaderMap;
use linkerd_mock_http_body::MockBody;
struct MockRescue;
impl<E> HttpRescue<E> for MockRescue {
/// Attempts to synthesize a response from the given error.
fn rescue(&self, _: E) -> Result<SyntheticHttpResponse, E> {
let synthetic = SyntheticHttpResponse::internal_error("MockRescue::rescue");
Ok(synthetic)
}
}
#[tokio::test]
async fn rescue_body_recovers_from_error_without_grpc_message() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let trailers = {
let mut trls = HeaderMap::with_capacity(1);
let value = HeaderValue::from_static("caboose");
trls.insert("trailer", value);
trls
};
let rescue = {
let inner = MockBody::default()
.then_yield_data(Poll::Ready(Some(Ok("inter".into()))))
.then_yield_data(Poll::Ready(Some(Err("an error midstream".into()))))
.then_yield_data(Poll::Ready(Some(Ok("rupted".into()))))
.then_yield_trailer(Poll::Ready(Some(Ok(trailers))));
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, Some(trailers)) = body_to_string(rescue).await else {
panic!("trailers should exist");
};
assert_eq!(data, "inter");
assert_eq!(
trailers[GRPC_STATUS],
i32::from(tonic::Code::Internal).to_string()
);
assert_eq!(trailers.get(GRPC_MESSAGE), None);
}
#[tokio::test]
async fn rescue_body_recovers_from_error_emitting_message() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let trailers = {
let mut trls = HeaderMap::with_capacity(1);
let value = HeaderValue::from_static("caboose");
trls.insert("trailer", value);
trls
};
let rescue = {
let inner = MockBody::default()
.then_yield_data(Poll::Ready(Some(Ok("inter".into()))))
.then_yield_data(Poll::Ready(Some(Err("an error midstream".into()))))
.then_yield_data(Poll::Ready(Some(Ok("rupted".into()))))
.then_yield_trailer(Poll::Ready(Some(Ok(trailers))));
let rescue = MockRescue;
let emit_headers = true;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, Some(trailers)) = body_to_string(rescue).await else {
panic!("trailers should exist");
};
assert_eq!(data, "inter");
assert_eq!(
trailers[GRPC_STATUS],
i32::from(tonic::Code::Internal).to_string()
);
assert_eq!(trailers[GRPC_MESSAGE], "MockRescue::rescue");
}
#[tokio::test]
async fn rescue_body_works_for_empty() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let rescue = {
let inner = MockBody::default();
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, trailers) = body_to_string(rescue).await;
assert_eq!(data, "");
assert_eq!(trailers, None);
}
#[tokio::test]
async fn rescue_body_works_for_body_with_data() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let rescue = {
let inner = MockBody::default().then_yield_data(Poll::Ready(Some(Ok("unary".into()))));
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, trailers) = body_to_string(rescue).await;
assert_eq!(data, "unary");
assert_eq!(trailers, None);
}
#[tokio::test]
async fn rescue_body_works_for_body_with_trailers() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let trailers = {
let mut trls = HeaderMap::with_capacity(1);
let value = HeaderValue::from_static("caboose");
trls.insert("trailer", value);
trls
};
let rescue = {
let inner = MockBody::default().then_yield_trailer(Poll::Ready(Some(Ok(trailers))));
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, trailers) = body_to_string(rescue).await;
assert_eq!(data, "");
assert_eq!(trailers.expect("has trailers")["trailer"], "caboose");
}
async fn body_to_string<B>(mut body: B) -> (String, Option<HeaderMap>)
where
B: http_body::Body + Unpin,
B::Error: std::fmt::Debug,
{
use http_body_util::BodyExt;
let mut data = String::new();
let mut trailers = None;
// Continue reading frames from the body until it is finished.
while let Some(frame) = body
.frame()
.await
.transpose()
.expect("reading a frame succeeds")
{
match frame.into_data().map(|mut buf| {
use bytes::Buf;
let bytes = buf.copy_to_bytes(buf.remaining());
String::from_utf8(bytes.to_vec()).unwrap()
}) {
Ok(ref s) => data.push_str(s),
Err(frame) => {
let trls = frame
.into_trailers()
.map_err(drop)
.expect("test frame is either data or trailers");
trailers = Some(trls);
}
}
}
tracing::info!(?data, ?trailers, "finished reading body");
(data, trailers)
}
}



@ -1,16 +1,21 @@
use super::{
body::ResponseBody,
header::{GRPC_CONTENT_TYPE, GRPC_MESSAGE, GRPC_STATUS, L5D_PROXY_CONNECTION, L5D_PROXY_ERROR},
};
use crate::svc;
use http::header::{HeaderValue, LOCATION};
use linkerd_error::{Error, Result};
use linkerd_error_respond as respond;
use linkerd_proxy_http::{orig_proto, ClientHandle};
use linkerd_proxy_http::orig_proto;
pub use linkerd_proxy_http::{ClientHandle, HasH2Reason};
use linkerd_stack::ExtractParam;
use std::borrow::Cow;
use pin_project::pin_project;
use std::{
borrow::Cow,
pin::Pin,
task::{Context, Poll},
};
use tracing::{debug, info_span, warn};
pub const L5D_PROXY_CONNECTION: &str = "l5d-proxy-connection";
pub const L5D_PROXY_ERROR: &str = "l5d-proxy-error";
pub fn layer<R, P: Clone, N>(
params: P,
) -> impl svc::layer::Layer<N, Service = NewRespondService<R, P, N>> + Clone {
@ -28,10 +33,10 @@ pub trait HttpRescue<E> {
#[derive(Clone, Debug)]
pub struct SyntheticHttpResponse {
pub grpc_status: tonic::Code,
grpc_status: tonic::Code,
http_status: http::StatusCode,
close_connection: bool,
pub message: Cow<'static, str>,
message: Cow<'static, str>,
location: Option<HeaderValue>,
}
@ -57,6 +62,22 @@ pub struct Respond<R> {
emit_headers: bool,
}
#[pin_project(project = ResponseBodyProj)]
pub enum ResponseBody<R, B> {
Passthru(#[pin] B),
GrpcRescue {
#[pin]
inner: B,
trailers: Option<http::HeaderMap>,
rescue: R,
emit_headers: bool,
},
}
const GRPC_CONTENT_TYPE: &str = "application/grpc";
const GRPC_STATUS: &str = "grpc-status";
const GRPC_MESSAGE: &str = "grpc-message";
// === impl HttpRescue ===
impl<E, F> HttpRescue<E> for F
@ -99,17 +120,7 @@ impl SyntheticHttpResponse {
Self {
close_connection: true,
http_status: http::StatusCode::GATEWAY_TIMEOUT,
grpc_status: tonic::Code::DeadlineExceeded,
message: Cow::Owned(msg.to_string()),
location: None,
}
}
pub fn gateway_timeout_nonfatal(msg: impl ToString) -> Self {
Self {
close_connection: false,
http_status: http::StatusCode::GATEWAY_TIMEOUT,
grpc_status: tonic::Code::DeadlineExceeded,
grpc_status: tonic::Code::Unavailable,
message: Cow::Owned(msg.to_string()),
location: None,
}
@ -145,16 +156,6 @@ impl SyntheticHttpResponse {
}
}
pub fn rate_limited(msg: impl ToString) -> Self {
Self {
http_status: http::StatusCode::TOO_MANY_REQUESTS,
grpc_status: tonic::Code::ResourceExhausted,
close_connection: false,
message: Cow::Owned(msg.to_string()),
location: None,
}
}
pub fn loop_detected(msg: impl ToString) -> Self {
Self {
http_status: http::StatusCode::LOOP_DETECTED,
@ -226,7 +227,7 @@ impl SyntheticHttpResponse {
.version(http::Version::HTTP_2)
.header(http::header::CONTENT_LENGTH, "0")
.header(http::header::CONTENT_TYPE, GRPC_CONTENT_TYPE)
.header(GRPC_STATUS, super::code_header(self.grpc_status));
.header(GRPC_STATUS, code_header(self.grpc_status));
if emit_headers {
rsp = rsp
@ -325,15 +326,7 @@ where
let is_grpc = req
.headers()
.get(http::header::CONTENT_TYPE)
.and_then(|v| {
v.to_str().ok().map(|s| {
s.starts_with(
GRPC_CONTENT_TYPE
.to_str()
.expect("GRPC_CONTENT_TYPE only contains visible ASCII"),
)
})
})
.and_then(|v| v.to_str().ok().map(|s| s.starts_with(GRPC_CONTENT_TYPE)))
.unwrap_or(false);
Respond {
client,
@ -375,7 +368,7 @@ impl<R> Respond<R> {
impl<B, R> respond::Respond<http::Response<B>, Error> for Respond<R>
where
B: Default + linkerd_proxy_http::Body,
B: Default + hyper::body::HttpBody,
R: HttpRescue<Error> + Clone,
{
type Response = http::Response<ResponseBody<R, B>>;
@ -383,14 +376,19 @@ where
fn respond(&self, res: Result<http::Response<B>>) -> Result<Self::Response> {
let error = match res {
Ok(rsp) => {
return Ok(rsp.map(|inner| match self {
return Ok(rsp.map(|b| match self {
Respond {
is_grpc: true,
rescue,
emit_headers,
..
} => ResponseBody::grpc_rescue(inner, rescue.clone(), *emit_headers),
_ => ResponseBody::passthru(inner),
} => ResponseBody::GrpcRescue {
inner: b,
trailers: None,
rescue: rescue.clone(),
emit_headers: *emit_headers,
},
_ => ResponseBody::Passthru(b),
}));
}
Err(error) => error,
@ -423,3 +421,127 @@ where
Ok(rsp)
}
}
// === impl ResponseBody ===
impl<R, B: Default + hyper::body::HttpBody> Default for ResponseBody<R, B> {
fn default() -> Self {
ResponseBody::Passthru(B::default())
}
}
impl<R, B> hyper::body::HttpBody for ResponseBody<R, B>
where
B: hyper::body::HttpBody<Error = Error>,
R: HttpRescue<B::Error>,
{
type Data = B::Data;
type Error = B::Error;
fn poll_data(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
match self.project() {
ResponseBodyProj::Passthru(inner) => inner.poll_data(cx),
ResponseBodyProj::GrpcRescue {
inner,
trailers,
rescue,
emit_headers,
} => {
// `poll_data` must not be called after error-derived trailers have been set.
assert!(trailers.is_none());
match inner.poll_data(cx) {
Poll::Ready(Some(Err(error))) => {
let SyntheticHttpResponse {
grpc_status,
message,
..
} = rescue.rescue(error)?;
let t = Self::grpc_trailers(grpc_status, &message, *emit_headers);
*trailers = Some(t);
Poll::Ready(None)
}
data => data,
}
}
}
}
#[inline]
fn poll_trailers(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
match self.project() {
ResponseBodyProj::Passthru(inner) => inner.poll_trailers(cx),
ResponseBodyProj::GrpcRescue {
inner, trailers, ..
} => match trailers.take() {
Some(t) => Poll::Ready(Ok(Some(t))),
None => inner.poll_trailers(cx),
},
}
}
#[inline]
fn is_end_stream(&self) -> bool {
match self {
Self::Passthru(inner) => inner.is_end_stream(),
Self::GrpcRescue {
inner, trailers, ..
} => trailers.is_none() && inner.is_end_stream(),
}
}
#[inline]
fn size_hint(&self) -> http_body::SizeHint {
match self {
Self::Passthru(inner) => inner.size_hint(),
Self::GrpcRescue { inner, .. } => inner.size_hint(),
}
}
}
impl<R, B> ResponseBody<R, B> {
fn grpc_trailers(code: tonic::Code, message: &str, emit_headers: bool) -> http::HeaderMap {
debug!(grpc.status = ?code, "Synthesizing gRPC trailers");
let mut t = http::HeaderMap::new();
t.insert(GRPC_STATUS, code_header(code));
if emit_headers {
t.insert(
GRPC_MESSAGE,
HeaderValue::from_str(message).unwrap_or_else(|error| {
warn!(%error, "Failed to encode error header");
HeaderValue::from_static("Unexpected error")
}),
);
}
t
}
}
// Copied from tonic, where it's private.
fn code_header(code: tonic::Code) -> HeaderValue {
use tonic::Code;
match code {
Code::Ok => HeaderValue::from_static("0"),
Code::Cancelled => HeaderValue::from_static("1"),
Code::Unknown => HeaderValue::from_static("2"),
Code::InvalidArgument => HeaderValue::from_static("3"),
Code::DeadlineExceeded => HeaderValue::from_static("4"),
Code::NotFound => HeaderValue::from_static("5"),
Code::AlreadyExists => HeaderValue::from_static("6"),
Code::PermissionDenied => HeaderValue::from_static("7"),
Code::ResourceExhausted => HeaderValue::from_static("8"),
Code::FailedPrecondition => HeaderValue::from_static("9"),
Code::Aborted => HeaderValue::from_static("10"),
Code::OutOfRange => HeaderValue::from_static("11"),
Code::Unimplemented => HeaderValue::from_static("12"),
Code::Internal => HeaderValue::from_static("13"),
Code::Unavailable => HeaderValue::from_static("14"),
Code::DataLoss => HeaderValue::from_static("15"),
Code::Unauthenticated => HeaderValue::from_static("16"),
}
}


@ -1,76 +1,139 @@
use linkerd_error::Error;
use linkerd_opencensus::proto::trace::v1 as oc;
use linkerd_stack::layer;
use linkerd_trace_context::{
self as trace_context,
export::{ExportSpan, SpanKind, SpanLabels},
Span, TraceContext,
};
use std::{str::FromStr, sync::Arc};
use linkerd_trace_context::{self as trace_context, TraceContext};
use std::{collections::HashMap, sync::Arc};
use thiserror::Error;
use tokio::sync::mpsc;
#[derive(Debug, Copy, Clone, Default)]
pub enum CollectorProtocol {
#[default]
OpenCensus,
OpenTelemetry,
pub type OpenCensusSink = Option<mpsc::Sender<oc::Span>>;
pub type Labels = Arc<HashMap<String, String>>;
/// SpanConverter converts trace_context::Span objects into OpenCensus agent
/// protobuf span objects. SpanConverter receives trace_context::Span objects by
/// implementing the SpanSink trait. For each span that it receives, it converts
/// it to an OpenCensus span and then sends it on the provided mpsc::Sender.
#[derive(Clone)]
pub struct SpanConverter {
kind: Kind,
sink: mpsc::Sender<oc::Span>,
labels: Labels,
}
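The converter's role, receiving spans, converting them inline, and forwarding them on a bounded channel without ever blocking the data path, can be sketched with `std::sync::mpsc` standing in for the tokio sender (the real code uses `tokio::sync::mpsc` and OpenCensus protobuf span types):

```rust
use std::sync::mpsc;

/// A toy span; the real type is `trace_context::Span`.
struct Span {
    name: String,
}

/// Mirrors `SpanConverter::try_send`: conversion happens inline, and a full
/// channel surfaces as an error instead of blocking the proxy's data path.
struct Converter {
    sink: mpsc::SyncSender<String>,
}

impl Converter {
    fn try_send(&self, span: Span) -> Result<(), String> {
        // Stand-in for `mk_span`, which builds the protobuf span.
        let converted = format!("span:{}", span.name);
        self.sink.try_send(converted).map_err(|e| e.to_string())
    }
}
```

Using `try_send` rather than an awaited send means trace export backpressure drops spans instead of stalling request processing.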
impl FromStr for CollectorProtocol {
type Err = ();
fn from_str(s: &str) -> Result<Self, Self::Err> {
if s.eq_ignore_ascii_case("opencensus") {
Ok(Self::OpenCensus)
} else if s.eq_ignore_ascii_case("opentelemetry") {
Ok(Self::OpenTelemetry)
} else {
Err(())
}
}
#[derive(Debug, Error)]
#[error("ID '{:?}' should have {} bytes, but it has {}", self.id, self.expected_size, self.actual_size)]
pub struct IdLengthError {
id: Vec<u8>,
expected_size: usize,
actual_size: usize,
}
pub type SpanSink = mpsc::Sender<ExportSpan>;
pub fn server<S>(
sink: Option<SpanSink>,
labels: impl Into<SpanLabels>,
sink: OpenCensusSink,
labels: impl Into<Labels>,
) -> impl layer::Layer<S, Service = TraceContext<Option<SpanConverter>, S>> + Clone {
TraceContext::layer(sink.map(move |sink| SpanConverter {
kind: SpanKind::Server,
sink,
labels: labels.into(),
}))
SpanConverter::layer(Kind::Server, sink, labels)
}
pub fn client<S>(
sink: Option<SpanSink>,
labels: impl Into<SpanLabels>,
sink: OpenCensusSink,
labels: impl Into<Labels>,
) -> impl layer::Layer<S, Service = TraceContext<Option<SpanConverter>, S>> + Clone {
TraceContext::layer(sink.map(move |sink| SpanConverter {
kind: SpanKind::Client,
sink,
labels: labels.into(),
}))
SpanConverter::layer(Kind::Client, sink, labels)
}
#[derive(Clone)]
pub struct SpanConverter {
kind: SpanKind,
sink: SpanSink,
labels: SpanLabels,
#[derive(Copy, Clone, Debug, PartialEq)]
enum Kind {
Server = 1,
Client = 2,
}
impl SpanConverter {
fn layer<S>(
kind: Kind,
sink: OpenCensusSink,
labels: impl Into<Labels>,
) -> impl layer::Layer<S, Service = TraceContext<Option<Self>, S>> + Clone {
TraceContext::layer(sink.map(move |sink| Self {
kind,
sink,
labels: labels.into(),
}))
}
fn mk_span(&self, mut span: trace_context::Span) -> Result<oc::Span, IdLengthError> {
let mut attributes = HashMap::<String, oc::AttributeValue>::new();
for (k, v) in self.labels.iter() {
attributes.insert(
k.clone(),
oc::AttributeValue {
value: Some(oc::attribute_value::Value::StringValue(truncatable(
v.clone(),
))),
},
);
}
for (k, v) in span.labels.drain() {
attributes.insert(
k.to_string(),
oc::AttributeValue {
value: Some(oc::attribute_value::Value::StringValue(truncatable(v))),
},
);
}
Ok(oc::Span {
trace_id: into_bytes(span.trace_id, 16)?,
span_id: into_bytes(span.span_id, 8)?,
tracestate: None,
parent_span_id: into_bytes(span.parent_id, 8)?,
name: Some(truncatable(span.span_name)),
kind: self.kind as i32,
start_time: Some(span.start.into()),
end_time: Some(span.end.into()),
attributes: Some(oc::span::Attributes {
attribute_map: attributes,
dropped_attributes_count: 0,
}),
stack_trace: None,
time_events: None,
links: None,
status: None, // TODO: this is gRPC status; we must read response trailers to populate this
resource: None,
same_process_as_parent_span: Some(self.kind == Kind::Client),
child_span_count: None,
})
}
}
impl trace_context::SpanSink for SpanConverter {
#[inline]
fn is_enabled(&self) -> bool {
true
}
fn try_send(&mut self, span: Span) -> Result<(), Error> {
self.sink.try_send(ExportSpan {
span,
kind: self.kind,
labels: Arc::clone(&self.labels),
})?;
Ok(())
fn try_send(&mut self, span: trace_context::Span) -> Result<(), Error> {
let span = self.mk_span(span)?;
self.sink.try_send(span).map_err(Into::into)
}
}
fn into_bytes(id: trace_context::Id, size: usize) -> Result<Vec<u8>, IdLengthError> {
let bytes: Vec<u8> = id.into();
if bytes.len() == size {
Ok(bytes)
} else {
let actual_size = bytes.len();
Err(IdLengthError {
id: bytes,
expected_size: size,
actual_size,
})
}
}
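`into_bytes` enforces the W3C trace-context field sizes, 16 bytes for a trace ID and 8 for span and parent-span IDs, returning the offending bytes alongside both lengths on mismatch. A self-contained sketch of the same check, with a tuple error in place of `IdLengthError`:

```rust
/// Validates that an ID has the expected byte length, mirroring `into_bytes`.
/// Trace IDs must be 16 bytes; span and parent-span IDs must be 8.
/// On mismatch, returns `(expected, actual)` lengths as a stand-in for
/// the richer `IdLengthError` type.
fn check_id_len(bytes: Vec<u8>, expected: usize) -> Result<Vec<u8>, (usize, usize)> {
    if bytes.len() == expected {
        Ok(bytes)
    } else {
        let actual = bytes.len();
        Err((expected, actual))
    }
}
```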
fn truncatable(value: String) -> oc::TruncatableString {
oc::TruncatableString {
value,
truncated_byte_count: 0,
}
}


@ -25,7 +25,6 @@ pub mod metrics;
pub mod proxy;
pub mod serve;
pub mod svc;
pub mod tls_info;
pub mod transport;
pub use self::build_info::{BuildInfo, BUILD_INFO};
@ -33,6 +32,7 @@ pub use drain;
pub use ipnet::{IpNet, Ipv4Net, Ipv6Net};
pub use linkerd_addr::{self as addr, Addr, AddrMatch, IpMatch, NameAddr, NameMatch};
pub use linkerd_conditional::Conditional;
pub use linkerd_detect as detect;
pub use linkerd_dns;
pub use linkerd_error::{cause_ref, is_caused_by, Error, Infallible, Recover, Result};
pub use linkerd_exp_backoff as exp_backoff;
@ -40,7 +40,6 @@ pub use linkerd_http_metrics as http_metrics;
pub use linkerd_idle_cache as idle_cache;
pub use linkerd_io as io;
pub use linkerd_opencensus as opencensus;
pub use linkerd_opentelemetry as opentelemetry;
pub use linkerd_service_profiles as profiles;
pub use linkerd_stack_metrics as stack_metrics;
pub use linkerd_stack_tracing as stack_tracing;
@ -66,7 +65,7 @@ pub struct ProxyRuntime {
pub identity: identity::creds::Receiver,
pub metrics: metrics::Proxy,
pub tap: proxy::tap::Registry,
pub span_sink: Option<http_tracing::SpanSink>,
pub span_sink: http_tracing::OpenCensusSink,
pub drain: drain::Watch,
}
@ -78,9 +77,9 @@ pub fn http_request_authority_addr<B>(req: &http::Request<B>) -> Result<Addr, ad
}
pub fn http_request_host_addr<B>(req: &http::Request<B>) -> Result<Addr, addr::Error> {
use crate::proxy::http;
use crate::proxy::http::h1;
http::authority_from_header(req, http::header::HOST)
h1::authority_from_host(req)
.ok_or(addr::Error::InvalidHost)
.and_then(|a| Addr::from_authority_and_default_port(&a, DEFAULT_PORT))
}

View File

@ -9,13 +9,14 @@
pub use crate::transport::labels::{TargetAddr, TlsAccept};
use crate::{
classify::Class,
control, http_metrics, opencensus, opentelemetry, profiles, proxy, stack_metrics, svc, tls,
control, http_metrics, opencensus, profiles, stack_metrics,
svc::Param,
tls,
transport::{self, labels::TlsConnect},
};
use linkerd_addr::Addr;
pub use linkerd_metrics::*;
use linkerd_proxy_server_policy as policy;
use prometheus_client::encoding::{EncodeLabelSet, EncodeLabelValue};
use std::{
fmt::{self, Write},
net::SocketAddr,
@ -38,7 +39,6 @@ pub struct Metrics {
pub proxy: Proxy,
pub control: ControlHttp,
pub opencensus: opencensus::metrics::Registry,
pub opentelemetry: opentelemetry::metrics::Registry,
}
#[derive(Clone, Debug)]
@ -54,7 +54,7 @@ pub struct Proxy {
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ControlLabels {
addr: Addr,
server_id: tls::ConditionalClientTlsLabels,
server_id: tls::ConditionalClientTls,
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
@ -65,7 +65,7 @@ pub enum EndpointLabels {
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct InboundEndpointLabels {
pub tls: tls::ConditionalServerTlsLabels,
pub tls: tls::ConditionalServerTls,
pub authority: Option<http::uri::Authority>,
pub target_addr: SocketAddr,
pub policy: RouteAuthzLabels,
@ -73,7 +73,7 @@ pub struct InboundEndpointLabels {
/// A label referencing an inbound `Server` (i.e. for policy).
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ServerLabel(pub Arc<policy::Meta>, pub u16);
pub struct ServerLabel(pub Arc<policy::Meta>);
/// Labels referencing an inbound server and authorization.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
@ -98,35 +98,12 @@ pub struct RouteAuthzLabels {
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct OutboundEndpointLabels {
pub server_id: tls::ConditionalClientTlsLabels,
pub server_id: tls::ConditionalClientTls,
pub authority: Option<http::uri::Authority>,
pub labels: Option<String>,
pub zone_locality: OutboundZoneLocality,
pub target_addr: SocketAddr,
}
#[derive(Debug, Copy, Clone, Default, Hash, Eq, PartialEq, EncodeLabelValue)]
pub enum OutboundZoneLocality {
#[default]
Unknown,
Local,
Remote,
}
impl OutboundZoneLocality {
pub fn new(metadata: &proxy::api_resolve::Metadata) -> Self {
if let Some(is_zone_local) = metadata.is_zone_local() {
if is_zone_local {
OutboundZoneLocality::Local
} else {
OutboundZoneLocality::Remote
}
} else {
OutboundZoneLocality::Unknown
}
}
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct StackLabels {
pub direction: Direction,
@ -155,10 +132,10 @@ where
I: Iterator<Item = (&'i String, &'i String)>,
{
let (k0, v0) = labels_iter.next()?;
let mut out = format!("{prefix}_{k0}=\"{v0}\"");
let mut out = format!("{}_{}=\"{}\"", prefix, k0, v0);
for (k, v) in labels_iter {
write!(out, ",{prefix}_{k}=\"{v}\"").expect("label concat must succeed");
write!(out, ",{}_{}=\"{}\"", prefix, k, v).expect("label concat must succeed");
}
Some(out)
}
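The helper above concatenates `(key, value)` pairs into a `prefix_key="value"` list, returning `None` for an empty iterator so callers can omit the label set entirely. A minimal stdlib-only sketch of the same pattern:

```rust
use std::fmt::Write;

/// Joins labels as `prefix_key="value"` pairs, mirroring the helper above.
/// Returns None for an empty iterator so callers can omit the label set.
fn prefix_labels<'i>(
    prefix: &str,
    mut labels: impl Iterator<Item = (&'i str, &'i str)>,
) -> Option<String> {
    let (k0, v0) = labels.next()?;
    let mut out = format!("{prefix}_{k0}=\"{v0}\"");
    for (k, v) in labels {
        // Writing to a String cannot fail.
        write!(out, ",{prefix}_{k}=\"{v}\"").expect("label concat must succeed");
    }
    Some(out)
}
```

Pulling the first pair out of the iterator up front avoids emitting a leading comma.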
@ -166,7 +143,7 @@ where
// === impl Metrics ===
impl Metrics {
pub fn new(retain_idle: Duration) -> (Self, impl legacy::FmtMetrics + Clone + Send + 'static) {
pub fn new(retain_idle: Duration) -> (Self, impl FmtMetrics + Clone + Send + 'static) {
let (control, control_report) = {
let m = http_metrics::Requests::<ControlLabels, Class>::default();
let r = m.clone().into_report(retain_idle).with_prefix("control");
@ -214,16 +191,13 @@ impl Metrics {
};
let (opencensus, opencensus_report) = opencensus::metrics::new();
let (opentelemetry, opentelemetry_report) = opentelemetry::metrics::new();
let metrics = Metrics {
proxy,
control,
opencensus,
opentelemetry,
};
use legacy::FmtMetrics as _;
let report = endpoint_report
.and_report(profile_route_report)
.and_report(retry_report)
@ -231,7 +205,6 @@ impl Metrics {
.and_report(control_report)
.and_report(transport_report)
.and_report(opencensus_report)
.and_report(opentelemetry_report)
.and_report(stack);
(metrics, report)
@ -240,21 +213,19 @@ impl Metrics {
// === impl CtlLabels ===
impl svc::Param<ControlLabels> for control::ControlAddr {
impl Param<ControlLabels> for control::ControlAddr {
fn param(&self) -> ControlLabels {
ControlLabels {
addr: self.addr.clone(),
server_id: self.identity.as_ref().map(tls::ClientTls::labels),
server_id: self.identity.clone(),
}
}
}
impl legacy::FmtLabels for ControlLabels {
impl FmtLabels for ControlLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { addr, server_id } = self;
write!(f, "addr=\"{addr}\",")?;
TlsConnect::from(server_id).fmt_labels(f)?;
write!(f, "addr=\"{}\",", self.addr)?;
TlsConnect::from(&self.server_id).fmt_labels(f)?;
Ok(())
}
@ -282,19 +253,13 @@ impl ProfileRouteLabels {
}
}
impl legacy::FmtLabels for ProfileRouteLabels {
impl FmtLabels for ProfileRouteLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
direction,
addr,
labels,
} = self;
self.direction.fmt_labels(f)?;
write!(f, ",dst=\"{}\"", self.addr)?;
direction.fmt_labels(f)?;
write!(f, ",dst=\"{addr}\"")?;
if let Some(labels) = labels.as_ref() {
write!(f, ",{labels}")?;
if let Some(labels) = self.labels.as_ref() {
write!(f, ",{}", labels)?;
}
Ok(())
@ -315,7 +280,7 @@ impl From<OutboundEndpointLabels> for EndpointLabels {
}
}
impl legacy::FmtLabels for EndpointLabels {
impl FmtLabels for EndpointLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Inbound(i) => (Direction::In, i).fmt_labels(f),
@ -324,130 +289,87 @@ impl legacy::FmtLabels for EndpointLabels {
}
}
impl legacy::FmtLabels for InboundEndpointLabels {
impl FmtLabels for InboundEndpointLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
tls,
authority,
target_addr,
policy,
} = self;
if let Some(a) = authority.as_ref() {
if let Some(a) = self.authority.as_ref() {
Authority(a).fmt_labels(f)?;
write!(f, ",")?;
}
((TargetAddr(*target_addr), TlsAccept::from(tls)), policy).fmt_labels(f)?;
(
(TargetAddr(self.target_addr), TlsAccept::from(&self.tls)),
&self.policy,
)
.fmt_labels(f)?;
Ok(())
}
}
impl legacy::FmtLabels for ServerLabel {
impl FmtLabels for ServerLabel {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(meta, port) = self;
write!(
f,
"srv_group=\"{}\",srv_kind=\"{}\",srv_name=\"{}\",srv_port=\"{}\"",
meta.group(),
meta.kind(),
meta.name(),
port
"srv_group=\"{}\",srv_kind=\"{}\",srv_name=\"{}\"",
self.0.group(),
self.0.kind(),
self.0.name()
)
}
}
impl EncodeLabelSet for ServerLabel {
fn encode(&self, mut enc: prometheus_client::encoding::LabelSetEncoder<'_>) -> fmt::Result {
prom::EncodeLabelSetMut::encode_label_set(self, &mut enc)
}
}
impl prom::EncodeLabelSetMut for ServerLabel {
fn encode_label_set(&self, enc: &mut prom::encoding::LabelSetEncoder<'_>) -> fmt::Result {
use prometheus_client::encoding::EncodeLabel;
("srv_group", self.0.group()).encode(enc.encode_label())?;
("srv_kind", self.0.kind()).encode(enc.encode_label())?;
("srv_name", self.0.name()).encode(enc.encode_label())?;
("srv_port", self.1).encode(enc.encode_label())?;
Ok(())
}
}
impl legacy::FmtLabels for ServerAuthzLabels {
impl FmtLabels for ServerAuthzLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { server, authz } = self;
server.fmt_labels(f)?;
self.server.fmt_labels(f)?;
write!(
f,
",authz_group=\"{}\",authz_kind=\"{}\",authz_name=\"{}\"",
authz.group(),
authz.kind(),
authz.name()
self.authz.group(),
self.authz.kind(),
self.authz.name()
)
}
}
impl legacy::FmtLabels for RouteLabels {
impl FmtLabels for RouteLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { server, route } = self;
server.fmt_labels(f)?;
self.server.fmt_labels(f)?;
write!(
f,
",route_group=\"{}\",route_kind=\"{}\",route_name=\"{}\"",
route.group(),
route.kind(),
route.name(),
self.route.group(),
self.route.kind(),
self.route.name(),
)
}
}
impl legacy::FmtLabels for RouteAuthzLabels {
impl FmtLabels for RouteAuthzLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { route, authz } = self;
route.fmt_labels(f)?;
self.route.fmt_labels(f)?;
write!(
f,
",authz_group=\"{}\",authz_kind=\"{}\",authz_name=\"{}\"",
authz.group(),
authz.kind(),
authz.name(),
self.authz.group(),
self.authz.kind(),
self.authz.name(),
)
}
}
impl svc::Param<OutboundZoneLocality> for OutboundEndpointLabels {
fn param(&self) -> OutboundZoneLocality {
self.zone_locality
}
}
impl legacy::FmtLabels for OutboundEndpointLabels {
impl FmtLabels for OutboundEndpointLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
server_id,
authority,
labels,
// TODO(kate): this label is not currently emitted.
zone_locality: _,
target_addr,
} = self;
if let Some(a) = authority.as_ref() {
if let Some(a) = self.authority.as_ref() {
Authority(a).fmt_labels(f)?;
write!(f, ",")?;
}
let ta = TargetAddr(*target_addr);
let tls = TlsConnect::from(server_id);
let ta = TargetAddr(self.target_addr);
let tls = TlsConnect::from(&self.server_id);
(ta, tls).fmt_labels(f)?;
if let Some(labels) = labels.as_ref() {
write!(f, ",{labels}")?;
if let Some(labels) = self.labels.as_ref() {
write!(f, ",{}", labels)?;
}
Ok(())
@ -463,20 +385,19 @@ impl fmt::Display for Direction {
}
}
impl legacy::FmtLabels for Direction {
impl FmtLabels for Direction {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "direction=\"{self}\"")
write!(f, "direction=\"{}\"", self)
}
}
impl legacy::FmtLabels for Authority<'_> {
impl<'a> FmtLabels for Authority<'a> {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(authority) = self;
write!(f, "authority=\"{authority}\"")
write!(f, "authority=\"{}\"", self.0)
}
}
impl legacy::FmtLabels for Class {
impl FmtLabels for Class {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let class = |ok: bool| if ok { "success" } else { "failure" };
@ -498,7 +419,8 @@ impl legacy::FmtLabels for Class {
Class::Error(msg) => write!(
f,
"classification=\"failure\",grpc_status=\"\",error=\"{msg}\""
"classification=\"failure\",grpc_status=\"\",error=\"{}\"",
msg
),
}
}
@ -524,15 +446,9 @@ impl StackLabels {
}
}
impl legacy::FmtLabels for StackLabels {
impl FmtLabels for StackLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
direction,
protocol,
name,
} = self;
direction.fmt_labels(f)?;
write!(f, ",protocol=\"{protocol}\",name=\"{name}\"")
self.direction.fmt_labels(f)?;
write!(f, ",protocol=\"{}\",name=\"{}\"", self.protocol, self.name)
}
}


@ -1,70 +0,0 @@
use linkerd_metrics::prom;
use prometheus_client::encoding::{EncodeLabelSet, EncodeLabelValue, LabelValueEncoder};
use std::{
fmt::{Error, Write},
sync::{Arc, OnceLock},
};
static TLS_INFO: OnceLock<Arc<TlsInfo>> = OnceLock::new();
#[derive(Clone, Debug, Default, Hash, PartialEq, Eq, EncodeLabelSet)]
pub struct TlsInfo {
tls_suites: MetricValueList,
tls_kx_groups: MetricValueList,
tls_rand: String,
tls_key_provider: String,
tls_fips: bool,
}
#[derive(Clone, Debug, Default, Hash, PartialEq, Eq)]
struct MetricValueList {
values: Vec<&'static str>,
}
impl FromIterator<&'static str> for MetricValueList {
fn from_iter<T: IntoIterator<Item = &'static str>>(iter: T) -> Self {
MetricValueList {
values: iter.into_iter().collect(),
}
}
}
impl EncodeLabelValue for MetricValueList {
fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), Error> {
for value in &self.values {
value.encode(encoder)?;
encoder.write_char(',')?;
}
Ok(())
}
}
pub fn metric() -> prom::Family<TlsInfo, prom::ConstGauge> {
let fam = prom::Family::<TlsInfo, prom::ConstGauge>::new_with_constructor(|| {
prom::ConstGauge::new(1)
});
let tls_info = TLS_INFO.get_or_init(|| {
let provider = linkerd_rustls::get_default_provider();
let tls_suites = provider
.cipher_suites
.iter()
.flat_map(|cipher_suite| cipher_suite.suite().as_str())
.collect::<MetricValueList>();
let tls_kx_groups = provider
.kx_groups
.iter()
.flat_map(|suite| suite.name().as_str())
.collect::<MetricValueList>();
Arc::new(TlsInfo {
tls_suites,
tls_kx_groups,
tls_rand: format!("{:?}", provider.secure_random),
tls_key_provider: format!("{:?}", provider.key_provider),
tls_fips: provider.fips(),
})
});
let _ = fam.get_or_create(tls_info);
fam
}


@ -1,7 +1,7 @@
use crate::metrics::ServerLabel as PolicyServerLabel;
pub use crate::metrics::{Direction, OutboundEndpointLabels};
use linkerd_conditional::Conditional;
use linkerd_metrics::legacy::FmtLabels;
use linkerd_metrics::FmtLabels;
use linkerd_tls as tls;
use std::{fmt, net::SocketAddr};
@ -20,16 +20,16 @@ pub enum Key {
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ServerLabels {
direction: Direction,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
target_addr: SocketAddr,
policy: Option<PolicyServerLabel>,
}
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct TlsAccept<'t>(pub &'t tls::ConditionalServerTlsLabels);
pub struct TlsAccept<'t>(pub &'t tls::ConditionalServerTls);
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub(crate) struct TlsConnect<'t>(pub &'t tls::ConditionalClientTlsLabels);
pub(crate) struct TlsConnect<'t>(&'t tls::ConditionalClientTls);
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct TargetAddr(pub SocketAddr);
@ -38,7 +38,7 @@ pub struct TargetAddr(pub SocketAddr);
impl Key {
pub fn inbound_server(
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
target_addr: SocketAddr,
server: PolicyServerLabel,
) -> Self {
@ -62,7 +62,7 @@ impl FmtLabels for Key {
}
Self::InboundClient => {
const NO_TLS: tls::client::ConditionalClientTlsLabels =
const NO_TLS: tls::client::ConditionalClientTls =
Conditional::None(tls::NoClientTls::Loopback);
Direction::In.fmt_labels(f)?;
@ -75,7 +75,7 @@ impl FmtLabels for Key {
impl ServerLabels {
fn inbound(
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
target_addr: SocketAddr,
policy: PolicyServerLabel,
) -> Self {
@ -90,7 +90,7 @@ impl ServerLabels {
fn outbound(target_addr: SocketAddr) -> Self {
ServerLabels {
direction: Direction::Out,
tls: tls::ConditionalServerTlsLabels::None(tls::NoServerTls::Loopback),
tls: tls::ConditionalServerTls::None(tls::NoServerTls::Loopback),
target_addr,
policy: None,
}
@ -99,17 +99,14 @@ impl ServerLabels {
impl FmtLabels for ServerLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
direction,
tls,
target_addr,
policy,
} = self;
direction.fmt_labels(f)?;
self.direction.fmt_labels(f)?;
f.write_str(",peer=\"src\",")?;
((TargetAddr(*target_addr), TlsAccept(tls)), policy.as_ref()).fmt_labels(f)?;
(
(TargetAddr(self.target_addr), TlsAccept(&self.tls)),
self.policy.as_ref(),
)
.fmt_labels(f)?;
Ok(())
}
@ -117,28 +114,27 @@ impl FmtLabels for ServerLabels {
// === impl TlsAccept ===
impl<'t> From<&'t tls::ConditionalServerTlsLabels> for TlsAccept<'t> {
fn from(c: &'t tls::ConditionalServerTlsLabels) -> Self {
impl<'t> From<&'t tls::ConditionalServerTls> for TlsAccept<'t> {
fn from(c: &'t tls::ConditionalServerTls) -> Self {
TlsAccept(c)
}
}
impl FmtLabels for TlsAccept<'_> {
impl<'t> FmtLabels for TlsAccept<'t> {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(tls) = self;
match tls {
match self.0 {
Conditional::None(tls::NoServerTls::Disabled) => {
write!(f, "tls=\"disabled\"")
}
Conditional::None(why) => {
write!(f, "tls=\"no_identity\",no_tls_reason=\"{why}\"")
write!(f, "tls=\"no_identity\",no_tls_reason=\"{}\"", why)
}
Conditional::Some(tls::ServerTlsLabels::Established { client_id }) => match client_id {
Some(id) => write!(f, "tls=\"true\",client_id=\"{id}\""),
Conditional::Some(tls::ServerTls::Established { client_id, .. }) => match client_id {
Some(id) => write!(f, "tls=\"true\",client_id=\"{}\"", id),
None => write!(f, "tls=\"true\",client_id=\"\""),
},
Conditional::Some(tls::ServerTlsLabels::Passthru { sni }) => {
write!(f, "tls=\"opaque\",sni=\"{sni}\"")
Conditional::Some(tls::ServerTls::Passthru { sni }) => {
write!(f, "tls=\"opaque\",sni=\"{}\"", sni)
}
}
}
@ -146,25 +142,23 @@ impl FmtLabels for TlsAccept<'_> {
// === impl TlsConnect ===
impl<'t> From<&'t tls::ConditionalClientTlsLabels> for TlsConnect<'t> {
fn from(s: &'t tls::ConditionalClientTlsLabels) -> Self {
impl<'t> From<&'t tls::ConditionalClientTls> for TlsConnect<'t> {
fn from(s: &'t tls::ConditionalClientTls) -> Self {
TlsConnect(s)
}
}
impl FmtLabels for TlsConnect<'_> {
impl<'t> FmtLabels for TlsConnect<'t> {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(tls) = self;
match tls {
match self.0 {
Conditional::None(tls::NoClientTls::Disabled) => {
write!(f, "tls=\"disabled\"")
}
Conditional::None(why) => {
write!(f, "tls=\"no_identity\",no_tls_reason=\"{why}\"")
write!(f, "tls=\"no_identity\",no_tls_reason=\"{}\"", why)
}
Conditional::Some(tls::ClientTlsLabels { server_id }) => {
write!(f, "tls=\"true\",server_id=\"{server_id}\"")
Conditional::Some(tls::ClientTls { server_id, .. }) => {
write!(f, "tls=\"true\",server_id=\"{}\"", server_id)
}
}
}
@ -174,13 +168,12 @@ impl FmtLabels for TlsConnect<'_> {
impl FmtLabels for TargetAddr {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(target_addr) = self;
write!(
f,
"target_addr=\"{}\",target_ip=\"{}\",target_port=\"{}\"",
target_addr,
target_addr.ip(),
target_addr.port()
self.0,
self.0.ip(),
self.0.port()
)
}
}
@ -201,25 +194,23 @@ mod tests {
use std::sync::Arc;
let labels = ServerLabels::inbound(
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some("foo.id.example.com".parse().unwrap()),
negotiated_protocol: None,
}),
([192, 0, 2, 4], 40000).into(),
PolicyServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testserver".into(),
}),
40000,
),
PolicyServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testserver".into(),
})),
);
assert_eq!(
labels.to_string(),
"direction=\"inbound\",peer=\"src\",\
target_addr=\"192.0.2.4:40000\",target_ip=\"192.0.2.4\",target_port=\"40000\",\
tls=\"true\",client_id=\"foo.id.example.com\",\
srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testserver\",srv_port=\"40000\""
srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testserver\""
);
}
}


@ -1,24 +1,24 @@
[package]
name = "linkerd-app-gateway"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[dependencies]
http = { workspace = true }
http = "0.2"
futures = { version = "0.3", default-features = false }
linkerd-app-core = { path = "../core" }
linkerd-app-inbound = { path = "../inbound" }
linkerd-app-outbound = { path = "../outbound" }
linkerd-proxy-client-policy = { path = "../../proxy/client-policy" }
once_cell = "1"
thiserror = "2"
thiserror = "1"
tokio = { version = "1", features = ["sync"] }
tonic = { workspace = true, default-features = false }
tower = { workspace = true, default-features = false }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false }
tower = { version = "0.4", default-features = false }
tracing = "0.1"
[dev-dependencies]
linkerd-app-inbound = { path = "../inbound", features = ["test-util"] }
@ -26,6 +26,6 @@ linkerd-app-outbound = { path = "../outbound", features = ["test-util"] }
linkerd-proxy-server-policy = { path = "../../proxy/server-policy" }
tokio = { version = "1", features = ["rt", "macros"] }
tokio-test = "0.4"
tower = { workspace = true, default-features = false, features = ["util"] }
tower-test = { workspace = true }
tower = { version = "0.4", default-features = false, features = ["util"] }
tower-test = "0.4"
linkerd-app-test = { path = "../test" }


@ -90,7 +90,7 @@ impl Gateway {
detect_timeout,
queue,
addr,
meta.into(),
meta,
),
None => {
tracing::debug!(


@ -1,7 +1,7 @@
use super::Gateway;
use inbound::{GatewayAddr, GatewayDomainInvalid};
use linkerd_app_core::{
metrics::ServerLabel,
metrics::{prom, ServerLabel},
profiles,
proxy::{
api_resolve::{ConcreteAddr, Metadata},
@ -28,7 +28,7 @@ pub(crate) use self::gateway::NewHttpGateway;
pub struct Target<T = ()> {
addr: GatewayAddr,
routes: watch::Receiver<outbound::http::Routes>,
version: http::Variant,
version: http::Version,
parent: T,
}
@ -47,6 +47,7 @@ impl Gateway {
/// outbound router.
pub fn http<T, R>(
&self,
registry: &mut prom::Registry,
inner: svc::ArcNewHttp<
outbound::http::concrete::Endpoint<
outbound::http::logical::Concrete<outbound::http::Http<Target>>,
@ -74,7 +75,7 @@ impl Gateway {
T: svc::Param<tls::ClientId>,
T: svc::Param<inbound::policy::AllowPolicy>,
T: svc::Param<Option<watch::Receiver<profiles::Profile>>>,
T: svc::Param<http::Variant>,
T: svc::Param<http::Version>,
T: svc::Param<http::normalize_uri::DefaultAuthority>,
T: Clone + Send + Sync + Unpin + 'static,
// Endpoint resolution.
@ -85,7 +86,7 @@ impl Gateway {
.outbound
.clone()
.with_stack(inner)
.push_http_cached(resolve)
.push_http_cached(outbound::http::HttpMetrics::register(registry), resolve)
.into_stack()
// Discard `T` and its associated client-specific metadata.
.push_map_target(Target::discard_parent)
@ -153,7 +154,7 @@ fn mk_routes(profile: &profiles::Profile) -> Option<outbound::http::Routes> {
if let Some((addr, metadata)) = profile.endpoint.clone() {
return Some(outbound::http::Routes::Endpoint(
Remote(ServerAddr(addr)),
metadata.into(),
metadata,
));
}
@ -164,7 +165,7 @@ fn mk_routes(profile: &profiles::Profile) -> Option<outbound::http::Routes> {
impl<B, T: Clone> svc::router::SelectRoute<http::Request<B>> for ByRequestVersion<T> {
type Key = Target<T>;
type Error = http::UnsupportedVariant;
type Error = http::version::Unsupported;
fn select(&self, req: &http::Request<B>) -> Result<Self::Key, Self::Error> {
let mut t = self.0.clone();
@ -192,8 +193,8 @@ impl<T> svc::Param<GatewayAddr> for Target<T> {
}
}
impl<T> svc::Param<http::Variant> for Target<T> {
fn param(&self) -> http::Variant {
impl<T> svc::Param<http::Version> for Target<T> {
fn param(&self) -> http::Version {
self.version
}
}


@ -66,7 +66,7 @@ where
impl<B, S> tower::Service<http::Request<B>> for HttpGateway<S>
where
B: http::Body + 'static,
B: http::HttpBody + 'static,
S: tower::Service<http::Request<B>, Response = http::Response<http::BoxBody>>,
S::Error: Into<Error> + 'static,
S::Future: Send + 'static,


@ -62,7 +62,7 @@ async fn upgraded_request_remains_relative_form() {
impl svc::Param<ServerLabel> for Target {
fn param(&self) -> ServerLabel {
ServerLabel(policy::Meta::new_default("test"), 4143)
ServerLabel(policy::Meta::new_default("test"))
}
}
@ -98,9 +98,9 @@ async fn upgraded_request_remains_relative_form() {
}
}
impl svc::Param<http::Variant> for Target {
fn param(&self) -> http::Variant {
http::Variant::H2
impl svc::Param<http::Version> for Target {
fn param(&self) -> http::Version {
http::Version::H2
}
}
@ -129,7 +129,6 @@ async fn upgraded_request_remains_relative_form() {
}),
}]))]),
},
local_rate_limit: Arc::new(Default::default()),
};
let (policy, tx) = inbound::policy::AllowPolicy::for_test(self.param(), policy);
tokio::spawn(async move {
@ -161,6 +160,7 @@ async fn upgraded_request_remains_relative_form() {
);
gateway
.http(
&mut Default::default(),
svc::ArcNewHttp::new(move |_: _| svc::BoxHttp::new(inner.clone())),
resolve,
)


@ -3,7 +3,9 @@
#![forbid(unsafe_code)]
use linkerd_app_core::{
io, profiles,
io,
metrics::prom,
profiles,
proxy::{
api_resolve::{ConcreteAddr, Metadata},
core::Resolve,
@ -48,6 +50,7 @@ impl Gateway {
/// stack.
pub fn stack<T, I, R>(
self,
registry: &mut prom::Registry,
resolve: R,
profiles: impl profiles::GetProfile<Error = Error>,
policies: impl outbound::policy::GetPolicy,
@ -70,8 +73,12 @@ impl Gateway {
R::Resolution: Unpin,
{
let opaq = {
let registry = registry.sub_registry_with_prefix("tcp");
let resolve = resolve.clone();
let opaq = self.outbound.to_tcp_connect().push_opaq_cached(resolve);
let opaq = self
.outbound
.to_tcp_connect()
.push_opaq_cached(registry, resolve);
self.opaq(opaq.into_inner()).into_inner()
};
@ -81,7 +88,7 @@ impl Gateway {
.to_tcp_connect()
.push_tcp_endpoint()
.push_http_tcp_client();
let http = self.http(http.into_inner(), resolve);
let http = self.http(registry, http.into_inner(), resolve);
self.inbound
.clone()
.with_stack(http.into_inner())


@ -1,15 +1,10 @@
use super::{server::Opaq, Gateway};
use inbound::{GatewayAddr, GatewayDomainInvalid};
use linkerd_app_core::{io, svc, tls, transport::addrs::*, Error};
use linkerd_app_core::{io, profiles, svc, tls, transport::addrs::*, Error};
use linkerd_app_inbound as inbound;
use linkerd_app_outbound as outbound;
use tokio::sync::watch;
#[derive(Clone, Debug)]
pub struct Target {
addr: GatewayAddr,
routes: watch::Receiver<outbound::opaq::Routes>,
}
pub type Target = outbound::opaq::Logical;
impl Gateway {
/// Wrap the provided outbound opaque stack with inbound authorization and
@ -38,7 +33,18 @@ impl Gateway {
.push_filter(
|(_, opaq): (_, Opaq<T>)| -> Result<_, GatewayDomainInvalid> {
// Fail connections were not resolved.
Target::try_from(opaq)
let profile = svc::Param::<Option<profiles::Receiver>>::param(&*opaq)
.ok_or(GatewayDomainInvalid)?;
if let Some(profiles::LogicalAddr(addr)) = profile.logical_addr() {
Ok(outbound::opaq::Logical::Route(addr, profile))
} else if let Some((addr, metadata)) = profile.endpoint() {
Ok(outbound::opaq::Logical::Forward(
Remote(ServerAddr(addr)),
metadata,
))
} else {
Err(GatewayDomainInvalid)
}
},
)
// Authorize connections to the gateway.
@ -46,47 +52,3 @@ impl Gateway {
.arc_new_tcp()
}
}
impl<T> TryFrom<Opaq<T>> for Target
where
T: svc::Param<GatewayAddr>,
{
type Error = GatewayDomainInvalid;
fn try_from(opaq: Opaq<T>) -> Result<Self, Self::Error> {
use svc::Param;
let addr: GatewayAddr = (**opaq).param();
let Some(profile) = (*opaq).param() else {
// The gateway address must be resolvable via the profile API.
return Err(GatewayDomainInvalid);
};
let routes = outbound::opaq::routes_from_discovery(
addr.0.clone().into(),
Some(profile),
(*opaq).param(),
);
Ok(Target { addr, routes })
}
}
impl svc::Param<watch::Receiver<outbound::opaq::Routes>> for Target {
fn param(&self) -> watch::Receiver<outbound::opaq::Routes> {
self.routes.clone()
}
}
impl PartialEq for Target {
fn eq(&self, other: &Self) -> bool {
self.addr == other.addr
}
}
impl Eq for Target {}
impl std::hash::Hash for Target {
fn hash<H: std::hash::Hasher>(&self, state: &mut H) {
self.addr.hash(state);
}
}


@ -11,7 +11,7 @@ use tokio::sync::watch;
/// Target for HTTP stacks.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Http<T> {
version: http::Variant,
version: http::Version,
parent: outbound::Discovery<T>,
}
@ -61,13 +61,13 @@ impl Gateway {
|parent: outbound::Discovery<T>| -> Result<_, GatewayDomainInvalid> {
if let Some(proto) = (*parent).param() {
let version = match proto {
SessionProtocol::Http1 => http::Variant::Http1,
SessionProtocol::Http2 => http::Variant::H2,
SessionProtocol::Http1 => http::Version::Http1,
SessionProtocol::Http2 => http::Version::H2,
};
return Ok(svc::Either::Left(Http { parent, version }));
return Ok(svc::Either::A(Http { parent, version }));
}
Ok(svc::Either::Right(Opaq(parent)))
Ok(svc::Either::B(Opaq(parent)))
},
opaq,
)
@ -154,8 +154,8 @@ impl<T> std::ops::Deref for Http<T> {
}
}
impl<T> svc::Param<http::Variant> for Http<T> {
fn param(&self) -> http::Variant {
impl<T> svc::Param<http::Version> for Http<T> {
fn param(&self) -> http::Version {
self.version
}
}


@ -1,10 +1,10 @@
[package]
name = "linkerd-app-inbound"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Configures and runs the inbound proxy
"""
@ -13,62 +13,53 @@ Configures and runs the inbound proxy
test-util = [
"linkerd-app-test",
"linkerd-idle-cache/test-util",
"linkerd-meshtls/test-util",
"linkerd-meshtls/rustls",
"linkerd-meshtls-rustls/test-util",
]
[dependencies]
bytes = { workspace = true }
http = { workspace = true }
bytes = "1"
http = "0.2"
futures = { version = "0.3", default-features = false }
linkerd-app-core = { path = "../core" }
linkerd-app-test = { path = "../test", optional = true }
linkerd-http-access-log = { path = "../../http/access-log" }
linkerd-http-access-log = { path = "../../http-access-log" }
linkerd-idle-cache = { path = "../../idle-cache" }
linkerd-meshtls = { path = "../../meshtls", optional = true, default-features = false }
linkerd-meshtls = { path = "../../meshtls", optional = true }
linkerd-meshtls-rustls = { path = "../../meshtls/rustls", optional = true }
linkerd-proxy-client-policy = { path = "../../proxy/client-policy" }
linkerd-tonic-stream = { path = "../../tonic-stream" }
linkerd-tonic-watch = { path = "../../tonic-watch" }
linkerd2-proxy-api = { workspace = true, features = ["inbound"] }
linkerd2-proxy-api = { version = "0.13", features = ["inbound"] }
once_cell = "1"
parking_lot = "0.12"
rangemap = "1"
thiserror = "2"
thiserror = "1"
tokio = { version = "1", features = ["sync"] }
tonic = { workspace = true, default-features = false }
tower = { workspace = true, features = ["util"] }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false }
tower = { version = "0.4", features = ["util"] }
tracing = "0.1"
[dependencies.linkerd-proxy-server-policy]
path = "../../proxy/server-policy"
features = ["proto"]
[target.'cfg(fuzzing)'.dependencies]
hyper = { workspace = true, features = ["http1", "http2"] }
hyper = { version = "0.14", features = ["http1", "http2"] }
linkerd-app-test = { path = "../test" }
arbitrary = { version = "1", features = ["derive"] }
libfuzzer-sys = { version = "0.4", features = ["arbitrary-derive"] }
linkerd-meshtls = { path = "../../meshtls", features = [
"test-util",
] }
[dev-dependencies]
http-body-util = { workspace = true }
hyper = { workspace = true, features = ["http1", "http2"] }
hyper-util = { workspace = true }
hyper = { version = "0.14", features = ["http1", "http2"] }
linkerd-app-test = { path = "../test" }
linkerd-http-metrics = { path = "../../http/metrics", features = ["test-util"] }
linkerd-http-box = { path = "../../http/box" }
linkerd-http-metrics = { path = "../../http-metrics", features = ["test-util"] }
linkerd-idle-cache = { path = "../../idle-cache", features = ["test-util"] }
linkerd-io = { path = "../../io", features = ["tokio-test"] }
linkerd-meshtls = { path = "../../meshtls", features = [
"test-util",
] }
linkerd-proxy-server-policy = { path = "../../proxy/server-policy", features = [
linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
linkerd-meshtls-rustls = { path = "../../meshtls/rustls", features = [
"test-util",
] }
linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
tokio = { version = "1", features = ["full", "macros"] }
tokio-test = "0.4"
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(fuzzing)'] }


@ -1,29 +1,30 @@
[package]
name = "linkerd-app-inbound-fuzz"
version = { workspace = true }
version = "0.0.0"
authors = ["Automatically generated"]
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
[target.'cfg(fuzzing)'.dependencies]
arbitrary = { version = "1", features = ["derive"] }
hyper = { version = "0.14", features = ["deprecated", "http1", "http2"] }
http = { workspace = true }
hyper = { version = "0.14", features = ["http1", "http2"] }
http = "0.2"
libfuzzer-sys = { version = "0.4", features = ["arbitrary-derive"] }
linkerd-app-core = { path = "../../core" }
linkerd-app-inbound = { path = ".." }
linkerd-app-test = { path = "../../test" }
linkerd-idle-cache = { path = "../../../idle-cache", features = ["test-util"] }
linkerd-meshtls = { path = "../../../meshtls", features = [
linkerd-meshtls = { path = "../../../meshtls", features = ["rustls"] }
linkerd-meshtls-rustls = { path = "../../../meshtls/rustls", features = [
"test-util",
] }
linkerd-tracing = { path = "../../../tracing", features = ["ansi"] }
tokio = { version = "1", features = ["full"] }
tracing = { workspace = true }
tracing = "0.1"
# Prevent this from interfering with workspaces
[workspace]


@ -53,12 +53,12 @@ impl<N> Inbound<N> {
move |t: T| -> Result<_, Error> {
let addr: OrigDstAddr = t.param();
if addr.port() == proxy_port {
return Ok(svc::Either::Right(t));
return Ok(svc::Either::B(t));
}
let policy = policies.get_policy(addr);
tracing::debug!(policy = ?&*policy.borrow(), "Accepted");
Ok(svc::Either::Left(Accept {
Ok(svc::Either::A(Accept {
client_addr: t.param(),
orig_dst_addr: addr,
policy,
@ -138,7 +138,6 @@ mod tests {
kind: "server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Default::default(),
},
None,
);
@ -182,11 +181,7 @@ mod tests {
}
fn inbound() -> Inbound<()> {
Inbound::new(
test_util::default_config(),
test_util::runtime().0,
&mut Default::default(),
)
Inbound::new(test_util::default_config(), test_util::runtime().0)
}
fn new_panic<T>(msg: &'static str) -> svc::ArcNewTcp<T, io::DuplexStream> {


@ -3,8 +3,8 @@ use crate::{
Inbound,
};
use linkerd_app_core::{
identity, io,
metrics::{prom, ServerLabel},
detect, identity, io,
metrics::ServerLabel,
proxy::http,
svc, tls,
transport::{
@ -20,10 +20,6 @@ use tracing::info;
#[cfg(test)]
mod tests;
#[derive(Clone, Debug)]
pub struct MetricsFamilies(pub HttpDetectMetrics);
pub type HttpDetectMetrics = http::DetectMetricsFamilies<ServerLabel>;
#[derive(Clone, Debug, PartialEq, Eq)]
pub(crate) struct Forward {
client_addr: Remote<ClientAddr>,
@ -35,7 +31,7 @@ pub(crate) struct Forward {
#[derive(Clone, Debug)]
pub(crate) struct Http {
tls: Tls,
http: http::Variant,
http: http::Version,
}
#[derive(Clone, Debug)]
@ -52,6 +48,9 @@ struct Detect {
tls: Tls,
}
#[derive(Copy, Clone, Debug)]
struct ConfigureHttpDetect;
#[derive(Clone)]
struct TlsParams {
timeout: tls::server::Timeout,
@ -65,11 +64,7 @@ type TlsIo<I> = tls::server::Io<identity::ServerIo<tls::server::DetectIo<I>>, I>
impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
/// Builds a stack that terminates mesh TLS and detects whether the traffic is HTTP (as hinted
/// by policy).
pub(crate) fn push_detect<T, I, F, FSvc>(
self,
MetricsFamilies(metrics): MetricsFamilies,
forward: F,
) -> Inbound<svc::ArcNewTcp<T, I>>
pub(crate) fn push_detect<T, I, F, FSvc>(self, forward: F) -> Inbound<svc::ArcNewTcp<T, I>>
where
T: svc::Param<OrigDstAddr> + svc::Param<Remote<ClientAddr>> + svc::Param<AllowPolicy>,
T: Clone + Send + 'static,
@ -80,18 +75,14 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
FSvc::Error: Into<Error>,
FSvc::Future: Send,
{
self.push_detect_http(metrics, forward.clone())
self.push_detect_http(forward.clone())
.push_detect_tls(forward)
}
/// Builds a stack that handles HTTP detection once TLS detection has been performed. If the
/// connection is determined to be HTTP, the inner stack is used; otherwise the connection is
/// passed to the provided 'forward' stack.
fn push_detect_http<I, F, FSvc>(
self,
metrics: HttpDetectMetrics,
forward: F,
) -> Inbound<svc::ArcNewTcp<Tls, I>>
fn push_detect_http<I, F, FSvc>(self, forward: F) -> Inbound<svc::ArcNewTcp<Tls, I>>
where
I: io::AsyncRead + io::AsyncWrite + io::PeerAddr,
I: Debug + Send + Sync + Unpin + 'static,
@ -120,59 +111,42 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
.push_switch(
|(detected, Detect { tls, .. })| -> Result<_, Infallible> {
match detected {
http::Detection::Http(http) => {
Ok(svc::Either::Left(Http { http, tls }))
}
http::Detection::NotHttp => Ok(svc::Either::Right(tls)),
Ok(Some(http)) => Ok(svc::Either::A(Http { http, tls })),
Ok(None) => Ok(svc::Either::B(tls)),
// When HTTP detection fails, forward the connection to the application as
// an opaque TCP stream.
http::Detection::ReadTimeout(timeout) => {
match tls.policy.protocol() {
Protocol::Http1 { .. } => {
// If the protocol was hinted to be HTTP/1.1 but detection
// failed, we'll usually be handling HTTP/1, but we may actually
// be handling HTTP/2 via protocol upgrade. Our options are:
// handle the connection as HTTP/1, assuming it will be rare for
// a proxy to initiate TLS, etc and not send the 16B of
// connection header; or we can handle it as opaque--but there's
// no chance the server will be able to handle the H2 protocol
// upgrade. So, it seems best to assume it's HTTP/1 and let the
// proxy handle the protocol error if we're in an edge case.
info!(
?timeout,
"Handling connection as HTTP/1 due to policy"
);
Ok(svc::Either::Left(Http {
http: http::Variant::Http1,
tls,
}))
}
// Otherwise, the protocol hint must have
// been `Detect` or the protocol was updated
// after detection was initiated, otherwise
// we would have avoided detection below.
// Continue handling the connection as if it
// were opaque.
_ => {
info!(
?timeout,
"Handling connection as opaque due to policy"
);
Ok(svc::Either::Right(tls))
}
Err(timeout) => match tls.policy.protocol() {
Protocol::Http1 { .. } => {
// If the protocol was hinted to be HTTP/1.1 but detection
// failed, we'll usually be handling HTTP/1, but we may actually
// be handling HTTP/2 via protocol upgrade. Our options are:
// handle the connection as HTTP/1, assuming it will be rare for
// a proxy to initiate TLS, etc and not send the 16B of
// connection header; or we can handle it as opaque--but there's
// no chance the server will be able to handle the H2 protocol
// upgrade. So, it seems best to assume it's HTTP/1 and let the
// proxy handle the protocol error if we're in an edge case.
info!(%timeout, "Handling connection as HTTP/1 due to policy");
Ok(svc::Either::A(Http {
http: http::Version::Http1,
tls,
}))
}
}
// Otherwise, the protocol hint must have been `Detect` or the
// protocol was updated after detection was initiated, otherwise we
// would have avoided detection below. Continue handling the
// connection as if it were opaque.
_ => {
info!(%timeout, "Handling connection as opaque");
Ok(svc::Either::B(tls))
}
},
}
},
forward.into_inner(),
)
.lift_new_with_target()
.push(http::NewDetect::layer(
move |Detect { timeout, tls }: &Detect| http::DetectParams {
read_timeout: *timeout,
metrics: metrics.metrics(tls.policy.server_label()),
},
))
.push(detect::NewDetectService::layer(ConfigureHttpDetect))
.arc_new_tcp();
http.push_on_service(svc::MapTargetLayer::new(io::BoxedIo::new))
@ -185,7 +159,7 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
move |tls: Tls| -> Result<_, Infallible> {
let http = match tls.policy.protocol() {
Protocol::Detect { timeout, .. } => {
return Ok(svc::Either::Right(Detect { timeout, tls }));
return Ok(svc::Either::B(Detect { timeout, tls }));
}
// Meshed HTTP/1 services may actually be transported over HTTP/2 connections
// between proxies, so we have to do detection.
@ -193,18 +167,18 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
// TODO(ver) outbound clients should hint this with ALPN so we don't
// have to detect this situation.
Protocol::Http1 { .. } if tls.status.is_some() => {
return Ok(svc::Either::Right(Detect {
return Ok(svc::Either::B(Detect {
timeout: detect_timeout,
tls,
}));
}
// Unmeshed services don't use protocol upgrading, so we can use the
// hint without further detection.
Protocol::Http1 { .. } => http::Variant::Http1,
Protocol::Http2 { .. } | Protocol::Grpc { .. } => http::Variant::H2,
Protocol::Http1 { .. } => http::Version::Http1,
Protocol::Http2 { .. } | Protocol::Grpc { .. } => http::Version::H2,
_ => unreachable!("opaque protocols must not hit the HTTP stack"),
};
Ok(svc::Either::Left(Http { http, tls }))
Ok(svc::Either::A(Http { http, tls }))
},
detect.into_inner(),
)
@ -258,10 +232,10 @@ impl<I> Inbound<svc::ArcNewTcp<Tls, TlsIo<I>>> {
// whether app TLS was employed, but we use this as a signal that we should
// not perform additional protocol detection.
if matches!(protocol, Protocol::Tls { .. }) {
return Ok(svc::Either::Right(tls));
return Ok(svc::Either::B(tls));
}
Ok(svc::Either::Left(tls))
Ok(svc::Either::A(tls))
},
forward
.clone()
@ -285,14 +259,14 @@ impl<I> Inbound<svc::ArcNewTcp<Tls, TlsIo<I>>> {
if matches!(policy.protocol(), Protocol::Opaque { .. }) {
const TLS_PORT_SKIPPED: tls::ConditionalServerTls =
tls::ConditionalServerTls::None(tls::NoServerTls::PortSkipped);
return Ok(svc::Either::Right(Tls {
return Ok(svc::Either::B(Tls {
client_addr: t.param(),
orig_dst_addr: t.param(),
status: TLS_PORT_SKIPPED,
policy,
}));
}
Ok(svc::Either::Left(t))
Ok(svc::Either::A(t))
},
forward
.push_on_service(svc::MapTargetLayer::new(io::BoxedIo::new))
@ -325,7 +299,7 @@ impl svc::Param<Remote<ServerAddr>> for Forward {
impl svc::Param<transport::labels::Key> for Forward {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
self.tls.as_ref().map(|t| t.labels()),
self.tls.clone(),
self.orig_dst_addr.into(),
self.permit.labels.server.clone(),
)
@ -358,10 +332,18 @@ impl svc::Param<tls::ConditionalServerTls> for Tls {
}
}
// === impl ConfigureHttpDetect ===
impl svc::ExtractParam<detect::Config<http::DetectHttp>, Detect> for ConfigureHttpDetect {
fn extract_param(&self, detect: &Detect) -> detect::Config<http::DetectHttp> {
detect::Config::from_timeout(detect.timeout)
}
}
// === impl Http ===
impl svc::Param<http::Variant> for Http {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for Http {
fn param(&self) -> http::Version {
self.http
}
}
@ -429,7 +411,7 @@ impl svc::Param<ServerLabel> for Http {
impl svc::Param<transport::labels::Key> for Http {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
self.tls.status.as_ref().map(|t| t.labels()),
self.tls.status.clone(),
self.tls.orig_dst_addr.into(),
self.tls.policy.server_label(),
)
@ -460,13 +442,3 @@ impl<T> svc::InsertParam<tls::ConditionalServerTls, T> for TlsParams {
(tls, target)
}
}
// === impl MetricsFamilies ===
impl MetricsFamilies {
pub fn register(reg: &mut prom::Registry) -> Self {
Self(http::DetectMetricsFamilies::register(
reg.sub_registry_with_prefix("http"),
))
}
}

View File

@ -13,12 +13,6 @@ const HTTP1: &[u8] = b"GET / HTTP/1.1\r\nhost: example.com\r\n\r\n";
const HTTP2: &[u8] = b"PRI * HTTP/2.0\r\n";
const NOT_HTTP: &[u8] = b"foo\r\nbar\r\nblah\r\n";
const RESULTS_NOT_HTTP: &str = "results_total{result=\"not_http\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_HTTP1: &str = "results_total{result=\"http/1\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_HTTP2: &str = "results_total{result=\"http/2\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_READ_TIMEOUT: &str = "results_total{result=\"read_timeout\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_ERROR: &str = "results_total{result=\"error\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
fn authzs() -> Arc<[Authorization]> {
Arc::new([Authorization {
authentication: Authentication::Unauthenticated,
@ -41,41 +35,11 @@ fn allow(protocol: Protocol) -> AllowPolicy {
kind: "server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Arc::new(Default::default()),
},
);
allow
}
macro_rules! assert_contains_metric {
($registry:expr, $metric:expr, $value:expr) => {{
let mut buf = String::new();
prom::encoding::text::encode_registry(&mut buf, $registry).expect("encode registry failed");
let lines = buf.split_terminator('\n').collect::<Vec<_>>();
assert_eq!(
lines.iter().find(|l| l.starts_with($metric)),
Some(&&*format!("{} {}", $metric, $value)),
"metric '{}' not found in:\n{:?}",
$metric,
buf
);
}};
}
macro_rules! assert_not_contains_metric {
($registry:expr, $pattern:expr) => {{
let mut buf = String::new();
prom::encoding::text::encode_registry(&mut buf, $registry).expect("encode registry failed");
let lines = buf.split_terminator('\n').collect::<Vec<_>>();
assert!(
!lines.iter().any(|l| l.starts_with($pattern)),
"metric '{}' found in:\n{:?}",
$pattern,
buf
);
}};
}
#[tokio::test(flavor = "current_thread")]
async fn detect_tls_opaque() {
let _trace = trace::test::trace_init();
@ -112,21 +76,14 @@ async fn detect_http_non_http() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(NOT_HTTP).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_panic("http stack must not be used"))
.push_detect_http(super::HttpDetectMetrics::register(&mut registry), new_ok())
.push_detect_http(new_ok())
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 1);
assert_contains_metric!(&registry, RESULTS_HTTP1, 0);
assert_contains_metric!(&registry, RESULTS_HTTP2, 0);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -150,24 +107,14 @@ async fn detect_http() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(HTTP1).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 0);
assert_contains_metric!(&registry, RESULTS_HTTP1, 1);
assert_contains_metric!(&registry, RESULTS_HTTP2, 0);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -186,24 +133,14 @@ async fn hinted_http1() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(HTTP1).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 0);
assert_contains_metric!(&registry, RESULTS_HTTP1, 1);
assert_contains_metric!(&registry, RESULTS_HTTP2, 0);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -222,24 +159,14 @@ async fn hinted_http1_supports_http2() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(HTTP2).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 0);
assert_contains_metric!(&registry, RESULTS_HTTP1, 0);
assert_contains_metric!(&registry, RESULTS_HTTP2, 1);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -257,25 +184,14 @@ async fn hinted_http2() {
let (ior, _) = io::duplex(100);
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
// No detection is performed when HTTP/2 is hinted, so no metrics are recorded.
assert_not_contains_metric!(&registry, RESULTS_NOT_HTTP);
assert_not_contains_metric!(&registry, RESULTS_HTTP1);
assert_not_contains_metric!(&registry, RESULTS_HTTP2);
assert_not_contains_metric!(&registry, RESULTS_READ_TIMEOUT);
assert_not_contains_metric!(&registry, RESULTS_ERROR);
}
fn client_id() -> tls::ClientId {
@ -293,11 +209,7 @@ fn orig_dst_addr() -> OrigDstAddr {
}
fn inbound() -> Inbound<()> {
Inbound::new(
test_util::default_config(),
test_util::runtime().0,
&mut Default::default(),
)
Inbound::new(test_util::default_config(), test_util::runtime().0)
}
fn new_panic<T, I: 'static>(msg: &'static str) -> svc::ArcNewTcp<T, I> {

View File

@ -15,10 +15,6 @@ use std::fmt::Debug;
use thiserror::Error;
use tracing::{debug_span, info_span};
mod metrics;
pub use self::metrics::MetricsFamilies;
/// Creates I/O errors when a connection cannot be forwarded because no transport
/// header is present.
#[derive(Debug, Default)]
@ -29,8 +25,8 @@ struct RefusedNoHeader;
pub struct RefusedNoIdentity(());
#[derive(Debug, Error)]
#[error("direct connections require transport header negotiation")]
struct TransportHeaderRequired(());
#[error("a named target must be provided on gateway connections")]
struct RefusedNoTarget;
#[derive(Debug, Clone)]
pub(crate) struct LocalTcp {
@ -97,7 +93,7 @@ impl<N> Inbound<N> {
self,
policies: impl policy::GetPolicy + Clone + Send + Sync + 'static,
gateway: svc::ArcNewTcp<GatewayTransportHeader, GatewayIo<I>>,
http: svc::ArcNewTcp<LocalHttp, SensorIo<io::PrefixedIo<TlsIo<I>>>>,
http: svc::ArcNewTcp<LocalHttp, io::PrefixedIo<TlsIo<I>>>,
) -> Inbound<svc::ArcNewTcp<T, I>>
where
T: Param<Remote<ClientAddr>> + Param<OrigDstAddr>,
@ -112,12 +108,11 @@ impl<N> Inbound<N> {
{
self.map_stack(|config, rt, inner| {
let detect_timeout = config.proxy.detect_protocol_timeout;
let metrics = rt.metrics.direct.clone();
let identity = rt
.identity
.server()
.spawn_with_alpn(vec![transport_header::PROTOCOL.into()])
.with_alpn(vec![transport_header::PROTOCOL.into()])
.expect("TLS credential store must be held");
inner
@ -140,14 +135,7 @@ impl<N> Inbound<N> {
// forwarding, or we may be processing an HTTP gateway connection. HTTP gateway
// connections that have a transport header must provide a target name as a part of
// the header.
.push_switch(
Ok::<Local, Infallible>,
svc::stack(http)
.push(transport::metrics::NewServer::layer(
rt.metrics.proxy.transport.clone(),
))
.into_inner(),
)
.push_switch(Ok::<Local, Infallible>, http)
.push_switch(
{
let policies = policies.clone();
@ -157,14 +145,14 @@ impl<N> Inbound<N> {
port,
name: None,
protocol,
} => Ok(svc::Either::Left({
} => Ok(svc::Either::A({
// When the transport header targets an alternate port (but does
// not identify an alternate target name), we check the new
// target's policy (rather than the inbound proxy's address).
let addr = (client.local_addr.ip(), port).into();
let policy = policies.get_policy(OrigDstAddr(addr));
match protocol {
None => svc::Either::Left(LocalTcp {
None => svc::Either::A(LocalTcp {
server_addr: Remote(ServerAddr(addr)),
client_addr: client.client_addr,
client_id: client.client_id,
@ -174,7 +162,7 @@ impl<N> Inbound<N> {
// When TransportHeader includes the protocol, but does not
// include an alternate name we go through the Inbound HTTP
// stack.
svc::Either::Right(LocalHttp {
svc::Either::B(LocalHttp {
addr: Remote(ServerAddr(addr)),
policy,
protocol,
@ -188,7 +176,7 @@ impl<N> Inbound<N> {
port,
name: Some(name),
protocol,
} => Ok(svc::Either::Right({
} => Ok(svc::Either::B({
// When the transport header provides an alternate target, the
// connection is a gateway connection. We check the _gateway
// address's_ policy (rather than the target address).
@ -216,7 +204,6 @@ impl<N> Inbound<N> {
)
.check_new_service::<(TransportHeader, ClientInfo), _>()
// Use ALPN to determine whether a transport header should be read.
.push(metrics::NewRecord::layer(metrics))
.push(svc::ArcNewService::layer())
.push(NewTransportHeaderServer::layer(detect_timeout))
.check_new_service::<ClientInfo, _>()
@ -228,7 +215,7 @@ impl<N> Inbound<N> {
if client.header_negotiated() {
Ok(client)
} else {
Err(TransportHeaderRequired(()).into())
Err(RefusedNoTarget.into())
}
})
.push(svc::ArcNewService::layer())
@ -311,8 +298,9 @@ impl Param<Remote<ServerAddr>> for AuthorizedLocalTcp {
impl Param<transport::labels::Key> for AuthorizedLocalTcp {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(self.client_id.clone()),
negotiated_protocol: None,
}),
self.addr.into(),
self.permit.labels.server.clone(),
@ -343,8 +331,9 @@ impl Param<Remote<ClientAddr>> for LocalHttp {
impl Param<transport::labels::Key> for LocalHttp {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(self.client.client_id.clone()),
negotiated_protocol: None,
}),
self.addr.into(),
self.policy.server_label(),
@ -358,11 +347,11 @@ impl svc::Param<policy::AllowPolicy> for LocalHttp {
}
}
impl svc::Param<http::Variant> for LocalHttp {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for LocalHttp {
fn param(&self) -> http::Version {
match self.protocol {
SessionProtocol::Http1 => http::Variant::Http1,
SessionProtocol::Http2 => http::Variant::H2,
SessionProtocol::Http1 => http::Version::Http1,
SessionProtocol::Http2 => http::Version::H2,
}
}
}
@ -433,14 +422,6 @@ impl Param<tls::ConditionalServerTls> for GatewayTransportHeader {
}
}
impl Param<tls::ConditionalServerTlsLabels> for GatewayTransportHeader {
fn param(&self) -> tls::ConditionalServerTlsLabels {
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
client_id: Some(self.client.client_id.clone()),
})
}
}
impl Param<tls::ClientId> for GatewayTransportHeader {
fn param(&self) -> tls::ClientId {
self.client.client_id.clone()

View File

@ -1,91 +0,0 @@
use super::ClientInfo;
use linkerd_app_core::{
metrics::prom::{self, EncodeLabelSetMut},
svc, tls,
transport_header::{SessionProtocol, TransportHeader},
};
#[cfg(test)]
mod tests;
#[derive(Clone, Debug)]
pub struct NewRecord<N> {
inner: N,
metrics: MetricsFamilies,
}
#[derive(Clone, Debug, Default)]
pub struct MetricsFamilies {
connections: prom::Family<Labels, prom::Counter>,
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Labels {
header: TransportHeader,
client_id: tls::ClientId,
}
impl MetricsFamilies {
pub fn register(reg: &mut prom::Registry) -> Self {
let connections = prom::Family::default();
reg.register(
"connections",
"TCP connections with transport headers",
connections.clone(),
);
Self { connections }
}
}
impl<N> NewRecord<N> {
pub fn layer(metrics: MetricsFamilies) -> impl svc::layer::Layer<N, Service = Self> + Clone {
svc::layer::mk(move |inner| Self {
inner,
metrics: metrics.clone(),
})
}
}
impl<N> svc::NewService<(TransportHeader, ClientInfo)> for NewRecord<N>
where
N: svc::NewService<(TransportHeader, ClientInfo)>,
{
type Service = N::Service;
fn new_service(&self, (header, client): (TransportHeader, ClientInfo)) -> Self::Service {
self.metrics
.connections
.get_or_create(&Labels {
header: header.clone(),
client_id: client.client_id.clone(),
})
.inc();
self.inner.new_service((header, client))
}
}
impl prom::EncodeLabelSetMut for Labels {
fn encode_label_set(&self, enc: &mut prom::encoding::LabelSetEncoder<'_>) -> std::fmt::Result {
use prom::encoding::EncodeLabel;
(
"session_protocol",
self.header.protocol.as_ref().map(|p| match p {
SessionProtocol::Http1 => "http/1",
SessionProtocol::Http2 => "http/2",
}),
)
.encode(enc.encode_label())?;
("target_port", self.header.port).encode(enc.encode_label())?;
("target_name", self.header.name.as_deref()).encode(enc.encode_label())?;
("client_id", self.client_id.to_str()).encode(enc.encode_label())?;
Ok(())
}
}
impl prom::encoding::EncodeLabelSet for Labels {
fn encode(&self, mut enc: prom::encoding::LabelSetEncoder<'_>) -> Result<(), std::fmt::Error> {
self.encode_label_set(&mut enc)
}
}
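
The `NewRecord` type deleted above follows a common pattern in this proxy's stack: a `NewService` decorator that bumps a labeled counter when a service is built, then delegates to the inner factory unchanged. A minimal std-only sketch of that pattern — using a `HashMap` of counts in place of the `prometheus-client` `Family`, and a simplified `NewService` trait; all names here are illustrative, not the proxy's actual API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Simplified stand-in for linkerd's `svc::NewService` trait.
trait NewService<T> {
    type Service;
    fn new_service(&self, target: T) -> Self::Service;
}

// Stand-in for the Prometheus counter family: (port, name) labels -> count.
#[derive(Clone, Default)]
struct Families {
    connections: Arc<Mutex<HashMap<(u16, String), u64>>>,
}

// Decorator that records a connection before building the inner service.
struct NewRecord<N> {
    inner: N,
    metrics: Families,
}

impl<N: NewService<(u16, String)>> NewService<(u16, String)> for NewRecord<N> {
    type Service = N::Service;

    fn new_service(&self, target: (u16, String)) -> Self::Service {
        // Count the connection under its labels, then hand off unchanged.
        *self
            .metrics
            .connections
            .lock()
            .unwrap()
            .entry(target.clone())
            .or_insert(0) += 1;
        self.inner.new_service(target)
    }
}

// Inner factory that does nothing, standing in for the rest of the stack.
struct NewNoop;

impl NewService<(u16, String)> for NewNoop {
    type Service = ();
    fn new_service(&self, _target: (u16, String)) {}
}

fn main() {
    let metrics = Families::default();
    let stack = NewRecord {
        inner: NewNoop,
        metrics: metrics.clone(),
    };
    stack.new_service((8080, "mysvc".to_string()));
    stack.new_service((8080, "mysvc".to_string()));
    let n = *metrics
        .connections
        .lock()
        .unwrap()
        .get(&(8080, "mysvc".to_string()))
        .unwrap();
    assert_eq!(n, 2);
    println!("connections = {n}");
}
```

Because the decorator only touches `new_service`, the built service itself is the inner one — the same reason the real `NewRecord` can declare `type Service = N::Service` and add no per-request overhead.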

View File

@ -1,115 +0,0 @@
use super::*;
use crate::direct::ClientInfo;
use futures::future;
use linkerd_app_core::{
io,
metrics::prom,
svc, tls,
transport::addrs::{ClientAddr, OrigDstAddr, Remote},
transport_header::{SessionProtocol, TransportHeader},
Error,
};
use std::str::FromStr;
fn new_ok<T>() -> svc::ArcNewTcp<T, io::BoxedIo> {
svc::ArcNewService::new(|_| svc::BoxService::new(svc::mk(|_| future::ok::<(), Error>(()))))
}
macro_rules! assert_counted {
($registry:expr, $proto:expr, $port:expr, $name:expr, $value:expr) => {{
let mut buf = String::new();
prom::encoding::text::encode_registry(&mut buf, $registry).expect("encode registry failed");
let metric = format!("connections_total{{session_protocol=\"{}\",target_port=\"{}\",target_name=\"{}\",client_id=\"test.client\"}}", $proto, $port, $name);
assert_eq!(
buf.split_terminator('\n')
.find(|l| l.starts_with(&*metric)),
Some(&*format!("{metric} {}", $value)),
"metric '{metric}' not found in:\n{buf}"
);
}};
}
// Helper to set up and run a metric-recording test.
fn run_metric_test(header: TransportHeader) -> prom::Registry {
let mut registry = prom::Registry::default();
let families = MetricsFamilies::register(&mut registry);
let new_record = svc::layer::Layer::layer(&NewRecord::layer(families.clone()), new_ok());
// common client info
let client_id = tls::ClientId::from_str("test.client").unwrap();
let client_addr = Remote(ClientAddr(([127, 0, 0, 1], 40000).into()));
let local_addr = OrigDstAddr(([127, 0, 0, 1], 4143).into());
let client_info = ClientInfo {
client_id: client_id.clone(),
alpn: Some(tls::NegotiatedProtocol("transport.l5d.io/v1".into())),
client_addr,
local_addr,
};
let _svc = svc::NewService::new_service(&new_record, (header.clone(), client_info.clone()));
registry
}
#[test]
fn records_metrics_http1_local() {
let header = TransportHeader {
port: 8080,
name: None,
protocol: Some(SessionProtocol::Http1),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/1", 8080, "", 1);
}
#[test]
fn records_metrics_http2_local() {
let header = TransportHeader {
port: 8081,
name: None,
protocol: Some(SessionProtocol::Http2),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/2", 8081, "", 1);
}
#[test]
fn records_metrics_opaq_local() {
let header = TransportHeader {
port: 8082,
name: None,
protocol: None,
};
let registry = run_metric_test(header);
assert_counted!(&registry, "", 8082, "", 1);
}
#[test]
fn records_metrics_http1_gateway() {
let header = TransportHeader {
port: 8080,
name: Some("mysvc.myns.svc.cluster.local".parse().unwrap()),
protocol: Some(SessionProtocol::Http1),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/1", 8080, "mysvc.myns.svc.cluster.local", 1);
}
#[test]
fn records_metrics_http2_gateway() {
let header = TransportHeader {
port: 8081,
name: Some("mysvc.myns.svc.cluster.local".parse().unwrap()),
protocol: Some(SessionProtocol::Http2),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/2", 8081, "mysvc.myns.svc.cluster.local", 1);
}
#[test]
fn records_metrics_opaq_gateway() {
let header = TransportHeader {
port: 8082,
name: Some("mysvc.myns.svc.cluster.local".parse().unwrap()),
protocol: None,
};
let registry = run_metric_test(header);
assert_counted!(&registry, "", 8082, "mysvc.myns.svc.cluster.local", 1);
}
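
The deleted label encoder renders `Option` values as empty strings, which is why `assert_counted!` above expects `session_protocol=""` and `target_name=""` for opaque and local connections. A std-only sketch of that rendering (the function name and signature are illustrative, not the `prometheus-client` encoder API):

```rust
// Renders a Prometheus-style label set for a transport-header connection,
// writing absent optional values as empty strings, as the deleted
// `Labels::encode_label_set` does.
fn encode_labels(
    session_protocol: Option<&str>,
    target_port: u16,
    target_name: Option<&str>,
    client_id: &str,
) -> String {
    format!(
        "session_protocol=\"{}\",target_port=\"{}\",target_name=\"{}\",client_id=\"{}\"",
        session_protocol.unwrap_or(""),
        target_port,
        target_name.unwrap_or(""),
        client_id,
    )
}

fn main() {
    // Opaque local forward: no session protocol, no target name.
    let opaq = encode_labels(None, 8082, None, "test.client");
    assert_eq!(
        opaq,
        "session_protocol=\"\",target_port=\"8082\",target_name=\"\",client_id=\"test.client\""
    );

    // HTTP/1 gateway connection: both optional labels populated.
    let gateway = encode_labels(
        Some("http/1"),
        8080,
        Some("mysvc.myns.svc.cluster.local"),
        "test.client",
    );
    println!("{gateway}");
}
```

Keeping the label *names* stable regardless of which values are present avoids creating distinct label schemas per connection type, which Prometheus registries do not allow within one metric family.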

View File

@ -18,7 +18,7 @@ pub mod fuzz {
test_util::{support::connect::Connect, *},
Config, Inbound,
};
use hyper::{Body, Request, Response};
use hyper::{client::conn::Builder as ClientBuilder, Body, Request, Response};
use libfuzzer_sys::arbitrary::Arbitrary;
use linkerd_app_core::{
identity, io,
@ -41,8 +41,9 @@ pub mod fuzz {
}
pub async fn fuzz_entry_raw(requests: Vec<HttpRequestSpec>) {
let server = hyper::server::conn::http1::Builder::new();
let mut client = hyper::client::conn::http1::Builder::new();
let mut server = hyper::server::conn::Http::new();
server.http1_only(true);
let mut client = ClientBuilder::new();
let connect =
support::connect().endpoint_fn_boxed(Target::addr(), hello_fuzz_server(server));
let profiles = profile::resolver();
@ -54,7 +55,7 @@ pub mod fuzz {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_fuzz_server(cfg, rt, profiles, connect).new_service(Target::HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Now send all of the requests
for inp in requests.iter() {
@ -73,7 +74,7 @@ pub mod fuzz {
.header(header_name, header_value)
.body(Body::default())
{
let rsp = client.send_request(req).await;
let rsp = http_util::http_request(&mut client, req).await;
tracing::info!(?rsp);
if let Ok(rsp) = rsp {
let body = http_util::body_to_string(rsp.into_body()).await;
@ -85,18 +86,18 @@ pub mod fuzz {
}
}
drop(client);
// It's okay if the background task returns an error, as this would
// indicate that the proxy closed the connection --- which it will do on
// invalid inputs. We want to ensure that the proxy doesn't crash in the
// face of these inputs, and the background task will panic in this
// case.
drop(client);
let res = bg.join_all().await;
let res = bg.await;
tracing::info!(?res, "background tasks completed")
}
fn hello_fuzz_server(
http: hyper::server::conn::http1::Builder,
http: hyper::server::conn::Http,
) -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |_endpoint| {
let (client_io, server_io) = support::io::duplex(4096);
@ -162,12 +163,12 @@ pub mod fuzz {
}
#[derive(Clone, Debug)]
struct Target(http::Variant);
struct Target(http::Version);
// === impl Target ===
impl Target {
const HTTP1: Self = Self(http::Variant::Http1);
const HTTP1: Self = Self(http::Version::Http1);
fn addr() -> SocketAddr {
([127, 0, 0, 1], 80).into()
@ -192,8 +193,8 @@ pub mod fuzz {
}
}
impl svc::Param<http::Variant> for Target {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for Target {
fn param(&self) -> http::Version {
self.0
}
}
@ -227,9 +228,6 @@ pub mod fuzz {
kind: "server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Arc::new(
linkerd_proxy_server_policy::LocalRateLimit::default(),
),
},
);
policy
@ -238,14 +236,11 @@ pub mod fuzz {
impl svc::Param<policy::ServerLabel> for Target {
fn param(&self) -> policy::ServerLabel {
policy::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
1000,
)
policy::ServerLabel(Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}))
}
}

View File

@ -14,7 +14,7 @@ use tracing::{debug, debug_span};
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Http {
addr: Remote<ServerAddr>,
params: http::client::Params,
settings: http::client::Settings,
permit: policy::HttpRoutePermit,
}
@ -33,7 +33,7 @@ struct Logical {
/// The request's logical destination. Used for profile discovery.
logical: Option<NameAddr>,
addr: Remote<ServerAddr>,
http: http::Variant,
http: http::Version,
tls: tls::ConditionalServerTls,
permit: policy::HttpRoutePermit,
labels: tap::Labels,
@ -69,7 +69,7 @@ struct LogicalError {
impl<C> Inbound<C> {
pub(crate) fn push_http_router<T, P>(self, profiles: P) -> Inbound<svc::ArcNewCloneHttp<T>>
where
T: Param<http::Variant>
T: Param<http::Version>
+ Param<Remote<ServerAddr>>
+ Param<Remote<ClientAddr>>
+ Param<tls::ConditionalServerTls>
@ -83,9 +83,6 @@ impl<C> Inbound<C> {
{
self.map_stack(|config, rt, connect| {
let allow_profile = config.allow_discovery.clone();
let unsafe_authority_labels = config.unsafe_authority_labels;
let h1_params = config.proxy.connect.http1;
let h2_params = config.proxy.connect.http2.clone();
// Creates HTTP clients for each inbound port & HTTP settings.
let http = connect
@ -96,21 +93,15 @@ impl<C> Inbound<C> {
.push(transport::metrics::Client::layer(rt.metrics.proxy.transport.clone()))
.check_service::<Http>()
.push_map_target(|(_version, target)| target)
.push(http::client::layer())
.push(http::client::layer(
config.proxy.connect.h1_settings,
config.proxy.connect.h2_settings,
))
.check_service::<Http>()
.push_on_service(svc::MapErr::layer_boxed())
.into_new_service()
.push_new_reconnect(config.proxy.connect.backoff)
.push_map_target(move |t: Logical| {
Http {
addr: t.addr,
permit: t.permit,
params: match t.http {
http::Variant::Http1 => http::client::Params::Http1(h1_params),
http::Variant::H2 => http::client::Params::H2(h2_params.clone())
},
}
})
.push_map_target(Http::from)
// Handle connection-level errors eagerly so that we can report 5XX failures in tap
// and metrics. HTTP error metrics are not incremented here so that errors are not
// double-counted--i.e., endpoint metrics track these responses and error metrics
@ -123,9 +114,7 @@ impl<C> Inbound<C> {
rt.metrics
.proxy
.http_endpoint
.to_layer_via::<classify::Response, _, _, _>(
endpoint_labels(unsafe_authority_labels),
),
.to_layer::<classify::Response, _, _>(),
)
.push_on_service(http_tracing::client(rt.span_sink.clone(), super::trace_labels()))
.push_on_service(http::BoxResponse::layer())
@ -166,14 +155,14 @@ impl<C> Inbound<C> {
|(rx, logical): (Option<profiles::Receiver>, Logical)| -> Result<_, Infallible> {
if let Some(rx) = rx {
if let Some(addr) = rx.logical_addr() {
return Ok(svc::Either::Left(Profile {
return Ok(svc::Either::A(Profile {
addr,
logical,
profiles: rx,
}));
}
}
Ok(svc::Either::Right(logical))
Ok(svc::Either::B(logical))
},
http.clone().into_inner(),
)
@ -192,7 +181,7 @@ impl<C> Inbound<C> {
// discovery (so that we skip the profile stack above).
let addr = match logical.logical.clone() {
Some(addr) => addr,
None => return Ok(svc::Either::Right((None, logical))),
None => return Ok(svc::Either::B((None, logical))),
};
if !allow_profile.matches(addr.name()) {
tracing::debug!(
@ -200,9 +189,9 @@ impl<C> Inbound<C> {
suffixes = %allow_profile,
"Skipping discovery, address not in configured DNS suffixes",
);
return Ok(svc::Either::Right((None, logical)));
return Ok(svc::Either::B((None, logical)));
}
Ok(svc::Either::Left(logical))
Ok(svc::Either::A(logical))
},
router
.check_new_service::<(Option<profiles::Receiver>, Logical), http::Request<_>>()
@ -390,17 +379,13 @@ impl Param<transport::labels::Key> for Logical {
}
}
fn endpoint_labels(
unsafe_authority_labels: bool,
) -> impl svc::ExtractParam<metrics::EndpointLabels, Logical> + Clone {
move |t: &Logical| -> metrics::EndpointLabels {
impl Param<metrics::EndpointLabels> for Logical {
fn param(&self) -> metrics::EndpointLabels {
metrics::InboundEndpointLabels {
tls: t.tls.as_ref().map(|t| t.labels()),
authority: unsafe_authority_labels
.then(|| t.logical.as_ref().map(|d| d.as_http_authority()))
.flatten(),
target_addr: t.addr.into(),
policy: t.permit.labels.clone(),
tls: self.tls.clone(),
authority: self.logical.as_ref().map(|d| d.as_http_authority()),
target_addr: self.addr.into(),
policy: self.permit.labels.clone(),
}
.into()
}
@ -449,9 +434,19 @@ impl Param<Remote<ServerAddr>> for Http {
}
}
impl Param<http::client::Params> for Http {
fn param(&self) -> http::client::Params {
self.params.clone()
impl Param<http::client::Settings> for Http {
fn param(&self) -> http::client::Settings {
self.settings
}
}
impl From<Logical> for Http {
fn from(l: Logical) -> Self {
Self {
addr: l.addr,
settings: l.http.into(),
permit: l.permit,
}
}
}

View File

@ -1,6 +1,6 @@
use super::set_identity_header::NewSetIdentityHeader;
use crate::{policy, Inbound};
pub use linkerd_app_core::proxy::http::{normalize_uri, Variant};
pub use linkerd_app_core::proxy::http::{normalize_uri, Version};
use linkerd_app_core::{
config::ProxyConfig,
errors, http_tracing, io,
@ -31,7 +31,7 @@ impl<H> Inbound<H> {
pub fn push_http_server<T, HSvc>(self) -> Inbound<svc::ArcNewCloneHttp<T>>
where
// Connection target.
T: Param<Variant>
T: Param<Version>
+ Param<normalize_uri::DefaultAuthority>
+ Param<tls::ConditionalServerTls>
+ Param<ServerLabel>
@ -95,7 +95,7 @@ impl<H> Inbound<H> {
pub fn push_http_tcp_server<T, I, HSvc>(self) -> Inbound<svc::ArcNewTcp<T, I>>
where
// Connection target.
T: Param<Variant>,
T: Param<Version>,
T: Clone + Send + Unpin + 'static,
// Server-side socket.
I: io::AsyncRead + io::AsyncWrite + io::PeerAddr + Send + Unpin + 'static,
@ -112,15 +112,16 @@ impl<H> Inbound<H> {
HSvc::Future: Send,
{
self.map_stack(|config, rt, http| {
let h2 = config.proxy.server.http2.clone();
let h2 = config.proxy.server.h2_settings;
let drain = rt.drain.clone();
http.check_new_service::<T, http::Request<http::BoxBody>>()
http.push_on_service(http::BoxRequest::layer())
.check_new_service::<T, http::Request<_>>()
.unlift_new()
.check_new_new_service::<T, http::ClientHandle, http::Request<_>>()
.push(http::NewServeHttp::layer(move |t: &T| http::ServerParams {
version: t.param(),
http2: h2.clone(),
h2,
drain: drain.clone(),
}))
.check_new_service::<T, I>()
@ -203,10 +204,6 @@ impl errors::HttpRescue<Error> for ServerRescue {
));
}
if errors::is_caused_by::<linkerd_proxy_server_policy::RateLimitError>(&*error) {
return Ok(errors::SyntheticHttpResponse::rate_limited(error));
}
if errors::is_caused_by::<crate::GatewayDomainInvalid>(&*error) {
return Ok(errors::SyntheticHttpResponse::not_found(error));
}

View File

@ -6,22 +6,21 @@ use crate::{
},
Config, Inbound,
};
use hyper::{Request, Response};
use hyper::{body::HttpBody, client::conn::Builder as ClientBuilder, Body, Request, Response};
use linkerd_app_core::{
classify,
errors::header::L5D_PROXY_ERROR,
errors::respond::L5D_PROXY_ERROR,
identity, io, metrics,
proxy::http::{self, BoxBody},
svc::{self, http::TokioExecutor, NewService, Param},
proxy::http,
svc::{self, NewService, Param},
tls,
transport::{ClientAddr, OrigDstAddr, Remote, ServerAddr},
Error, NameAddr, ProxyRuntime,
NameAddr, ProxyRuntime,
};
use linkerd_app_test::connect::ConnectFuture;
use linkerd_tracing::test::trace_init;
use std::{net::SocketAddr, sync::Arc};
use tokio::time;
use tower::ServiceExt;
use tracing::Instrument;
fn build_server<I>(
@ -33,7 +32,7 @@ fn build_server<I>(
where
I: io::AsyncRead + io::AsyncWrite + io::PeerAddr + Send + Unpin + 'static,
{
Inbound::new(cfg, rt, &mut Default::default())
Inbound::new(cfg, rt)
.with_stack(connect)
.map_stack(|cfg, _, s| {
s.push_map_target(|t| Param::<Remote<ServerAddr>>::param(&t))
@ -47,10 +46,9 @@ where
#[tokio::test(flavor = "current_thread")]
async fn unmeshed_http1_hello_world() {
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
let mut client = hyper::client::conn::http1::Builder::new();
let mut server = hyper::server::conn::Http::new();
server.http1_only(true);
let mut client = ClientBuilder::new();
let _trace = trace_init();
// Build a mock "connector" that returns the upstream "server" IO.
@ -65,38 +63,29 @@ async fn unmeshed_http1_hello_world() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
let rsp = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(rsp.status(), http::StatusCode::OK);
let body = http_util::body_to_string(rsp.into_body()).await.unwrap();
assert_eq!(body, "Hello world!");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
#[tokio::test(flavor = "current_thread")]
async fn downgrade_origin_form() {
// Reproduces https://github.com/linkerd/linkerd2/issues/5298
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut server = hyper::server::conn::Http::new();
server.http1_only(true);
let mut client = ClientBuilder::new();
client.http2_only(true);
let _trace = trace_init();
// Build a mock "connector" that returns the upstream "server" IO.
@ -111,67 +100,30 @@ async fn downgrade_origin_form() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = {
tracing::info!(settings = ?client, "connecting client with");
let (client_io, server_io) = io::duplex(4096);
let (client, conn) = client
.handshake(hyper_util::rt::TokioIo::new(client_io))
.await
.expect("Client must connect");
let mut bg = tokio::task::JoinSet::new();
bg.spawn(
async move {
server.oneshot(server_io).await?;
tracing::info!("proxy serve task complete");
Ok(())
}
.instrument(tracing::info_span!("proxy")),
);
bg.spawn(
async move {
conn.await?;
tracing::info!("client background complete");
Ok(())
}
.instrument(tracing::info_span!("client_bg")),
);
(client, bg)
};
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
let req = Request::builder()
.method(http::Method::GET)
.uri("/")
.header(http::header::HOST, "foo.svc.cluster.local")
.header("l5d-orig-proto", "HTTP/1.1")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
let rsp = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(rsp.status(), http::StatusCode::OK);
let body = http_util::body_to_string(rsp.into_body()).await.unwrap();
assert_eq!(body, "Hello world!");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
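The background-task teardown above relies on the standard-library idiom of collecting an iterator of `Result`s into a single `Result<Vec<_>, _>`, which short-circuits on the first error; a minimal std-only sketch (the plain `Vec` of results here stands in for the awaited output of `JoinSet::join_all`):

```rust
fn main() {
    // Simulated background-task results, standing in for `bg.join_all().await`.
    let ok_results: Vec<Result<(), String>> = vec![Ok(()), Ok(())];
    let mixed: Vec<Result<(), String>> = vec![Ok(()), Err("task failed".into())];

    // Collecting into `Result<Vec<_>, _>` succeeds only if every element is `Ok`.
    let all_ok: Result<Vec<()>, String> = ok_results.into_iter().collect();
    assert!(all_ok.is_ok());

    // The first `Err` encountered becomes the collected error.
    let first_err: Result<Vec<()>, String> = mixed.into_iter().collect();
    assert_eq!(first_err.unwrap_err(), "task failed");
    println!("collect short-circuited as expected");
}
```

This is why a single `.expect("background task failed")` after the collect suffices to panic on any failed task.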
#[tokio::test(flavor = "current_thread")]
async fn downgrade_absolute_form() {
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
let mut server = hyper::server::conn::Http::new();
server.http1_only(true);
let mut client = ClientBuilder::new();
client.http2_only(true);
let _trace = trace_init();
// Build a mock "connector" that returns the upstream "server" IO.
@@ -186,60 +138,22 @@ async fn downgrade_absolute_form() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = {
tracing::info!(settings = ?client, "connecting client with");
let (client_io, server_io) = io::duplex(4096);
let (client, conn) = client
.handshake(hyper_util::rt::TokioIo::new(client_io))
.await
.expect("Client must connect");
let mut bg = tokio::task::JoinSet::new();
bg.spawn(
async move {
server.oneshot(server_io).await?;
tracing::info!("proxy serve task complete");
Ok(())
}
.instrument(tracing::info_span!("proxy")),
);
bg.spawn(
async move {
conn.await?;
tracing::info!("client background complete");
Ok(())
}
.instrument(tracing::info_span!("client_bg")),
);
(client, bg)
};
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550/")
.header(http::header::HOST, "foo.svc.cluster.local")
.header("l5d-orig-proto", "HTTP/1.1; absolute-form")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
let rsp = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(rsp.status(), http::StatusCode::OK);
let body = http_util::body_to_string(rsp.into_body()).await.unwrap();
assert_eq!(body, "Hello world!");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
#[tokio::test(flavor = "current_thread")]
@@ -251,7 +165,7 @@ async fn http1_bad_gateway_meshed_response_error_header() {
// Build a client using the connect that always errors so that responses
// are BAD_GATEWAY.
let mut client = hyper::client::conn::http1::Builder::new();
let mut client = ClientBuilder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -259,34 +173,25 @@ async fn http1_bad_gateway_meshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_http1());
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a BAD_GATEWAY with the expected
// header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::BAD_GATEWAY);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::BAD_GATEWAY);
// NOTE: this does not include a stack error context for that endpoint
// because we don't build a real HTTP endpoint stack, which adds error
// context to this error, and the client rescue layer is below where the
// logical error context is added.
check_error_header(rsp.headers(), "client error (Connect)");
check_error_header(response.headers(), "server is not listening");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
#[tokio::test(flavor = "current_thread")]
@@ -298,7 +203,7 @@ async fn http1_bad_gateway_unmeshed_response() {
// Build a client using the connect that always errors so that responses
// are BAD_GATEWAY.
let mut client = hyper::client::conn::http1::Builder::new();
let mut client = ClientBuilder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -306,33 +211,24 @@ async fn http1_bad_gateway_unmeshed_response() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a BAD_GATEWAY with the expected
// header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::BAD_GATEWAY);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::BAD_GATEWAY);
assert!(
rsp.headers().get(L5D_PROXY_ERROR).is_none(),
response.headers().get(L5D_PROXY_ERROR).is_none(),
"response must not contain L5D_PROXY_ERROR header"
);
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
#[tokio::test(flavor = "current_thread")]
@@ -342,11 +238,12 @@ async fn http1_connect_timeout_meshed_response_error_header() {
// Build a mock connect that sleeps longer than the default inbound
// connect timeout.
let connect = support::connect().endpoint(Target::addr(), connect_timeout());
let server = hyper::server::conn::Http::new();
let connect = support::connect().endpoint(Target::addr(), connect_timeout(server));
// Build a client using the connect that always sleeps so that responses
// are GATEWAY_TIMEOUT.
let mut client = hyper::client::conn::http1::Builder::new();
let mut client = ClientBuilder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -354,35 +251,26 @@ async fn http1_connect_timeout_meshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_http1());
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a GATEWAY_TIMEOUT with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::GATEWAY_TIMEOUT);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::GATEWAY_TIMEOUT);
// NOTE: this does not include a stack error context for that endpoint
// because we don't build a real HTTP endpoint stack, which adds error
// context to this error, and the client rescue layer is below where the
// logical error context is added.
check_error_header(rsp.headers(), "client error (Connect)");
check_error_header(response.headers(), "connect timed out after 1s");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
#[tokio::test(flavor = "current_thread")]
@@ -392,11 +280,12 @@ async fn http1_connect_timeout_unmeshed_response_error_header() {
// Build a mock connect that sleeps longer than the default inbound
// connect timeout.
let connect = support::connect().endpoint(Target::addr(), connect_timeout());
let server = hyper::server::conn::Http::new();
let connect = support::connect().endpoint(Target::addr(), connect_timeout(server));
// Build a client using the connect that always sleeps so that responses
// are GATEWAY_TIMEOUT.
let mut client = hyper::client::conn::http1::Builder::new();
let mut client = ClientBuilder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -404,33 +293,24 @@ async fn http1_connect_timeout_unmeshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a GATEWAY_TIMEOUT with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::GATEWAY_TIMEOUT);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::GATEWAY_TIMEOUT);
assert!(
rsp.headers().get(L5D_PROXY_ERROR).is_none(),
response.headers().get(L5D_PROXY_ERROR).is_none(),
"response must not contain L5D_PROXY_ERROR header"
);
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
.collect::<Result<Vec<()>, Error>>()
.expect("background task failed");
bg.await.expect("background task failed");
}
#[tokio::test(flavor = "current_thread")]
@@ -441,8 +321,8 @@ async fn h2_response_meshed_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut client = ClientBuilder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -450,28 +330,25 @@ async fn h2_response_meshed_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_h2());
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is SERVICE_UNAVAILABLE with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::GATEWAY_TIMEOUT);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::GATEWAY_TIMEOUT);
check_error_header(rsp.headers(), "service in fail-fast");
check_error_header(response.headers(), "service in fail-fast");
// Drop the client and discard the result of awaiting the proxy background
// task. The result is discarded because it hits an error that is related
// to the mock implementation and has no significance to the test.
let _ = bg.join_all().await;
drop(client);
let _ = bg.await;
}
#[tokio::test(flavor = "current_thread")]
@@ -482,8 +359,8 @@ async fn h2_response_unmeshed_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut client = ClientBuilder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -491,30 +368,27 @@ async fn h2_response_unmeshed_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is SERVICE_UNAVAILABLE with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::GATEWAY_TIMEOUT);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::GATEWAY_TIMEOUT);
assert!(
rsp.headers().get(L5D_PROXY_ERROR).is_none(),
response.headers().get(L5D_PROXY_ERROR).is_none(),
"response must not contain L5D_PROXY_ERROR header"
);
// Drop the client and discard the result of awaiting the proxy background
// task. The result is discarded because it hits an error that is related
// to the mock implementation and has no significance to the test.
let _ = bg.join_all().await;
drop(client);
let _ = bg.await;
}
#[tokio::test(flavor = "current_thread")]
@@ -525,8 +399,8 @@ async fn grpc_meshed_response_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut client = ClientBuilder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -534,7 +408,7 @@ async fn grpc_meshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_h2());
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
@@ -542,21 +416,18 @@ async fn grpc_meshed_response_error_header() {
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "application/grpc")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::OK);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::OK);
check_error_header(rsp.headers(), "service in fail-fast");
check_error_header(response.headers(), "service in fail-fast");
// Drop the client and discard the result of awaiting the proxy background
// task. The result is discarded because it hits an error that is related
// to the mock implementation and has no significance to the test.
let _ = bg.join_all().await;
drop(client);
let _ = bg.await;
}
#[tokio::test(flavor = "current_thread")]
@@ -567,8 +438,8 @@ async fn grpc_unmeshed_response_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut client = ClientBuilder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -576,7 +447,7 @@ async fn grpc_unmeshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
@@ -584,23 +455,20 @@ async fn grpc_unmeshed_response_error_header() {
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "application/grpc")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::OK);
let response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::OK);
assert!(
rsp.headers().get(L5D_PROXY_ERROR).is_none(),
response.headers().get(L5D_PROXY_ERROR).is_none(),
"response must not contain L5D_PROXY_ERROR header"
);
// Drop the client and discard the result of awaiting the proxy background
// task. The result is discarded because it hits an error that is related
// to the mock implementation and has no significance to the test.
let _ = bg.join_all().await;
drop(client);
let _ = bg.await;
}
#[tokio::test(flavor = "current_thread")]
@@ -609,8 +477,8 @@ async fn grpc_response_class() {
// Build a mock connector that serves a gRPC server returning errors.
let connect = {
let mut server = hyper::server::conn::http2::Builder::new(TokioExecutor::new());
server.timer(hyper_util::rt::TokioTimer::new());
let mut server = hyper::server::conn::Http::new();
server.http2_only(true);
support::connect().endpoint_fn_boxed(
Target::addr(),
grpc_status_server(server, tonic::Code::Unknown),
@@ -618,8 +486,8 @@ async fn grpc_response_class() {
};
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut client = ClientBuilder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@@ -632,7 +500,7 @@ async fn grpc_response_class() {
.http_endpoint
.into_report(time::Duration::from_secs(3600));
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_h2());
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
@@ -640,43 +508,29 @@ async fn grpc_response_class() {
.method(http::Method::POST)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "application/grpc")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::OK);
let mut response = http_util::http_request(&mut client, req).await.unwrap();
assert_eq!(response.status(), http::StatusCode::OK);
use http_body_util::BodyExt;
let mut body = rsp.into_body();
let trls = body
.frame()
.await
.unwrap()
.unwrap()
.into_trailers()
.expect("trailers frame");
response.body_mut().data().await;
let trls = response.body_mut().trailers().await.unwrap().unwrap();
assert_eq!(trls.get("grpc-status").unwrap().to_str().unwrap(), "2");
let response_total = metrics
.get_response_total(
&metrics::EndpointLabels::Inbound(metrics::InboundEndpointLabels {
tls: Target::meshed_h2().1.map(|t| t.labels()),
authority: None,
tls: Target::meshed_h2().1,
authority: Some("foo.svc.cluster.local:5550".parse().unwrap()),
target_addr: "127.0.0.1:80".parse().unwrap(),
policy: metrics::RouteAuthzLabels {
route: metrics::RouteLabels {
server: metrics::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
80,
),
server: metrics::ServerLabel(Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
})),
route: policy::Meta::new_default("default"),
},
authz: Arc::new(policy::Meta::Resource {
@@ -692,124 +546,24 @@ async fn grpc_response_class() {
.expect("response_total not found");
assert_eq!(response_total, 1.0);
drop(bg);
}
#[tokio::test(flavor = "current_thread")]
async fn unsafe_authority_labels_true() {
let _trace = trace_init();
let mut cfg = default_config();
cfg.unsafe_authority_labels = true;
test_unsafe_authority_labels(cfg, Some("foo.svc.cluster.local:5550".parse().unwrap())).await;
}
#[tokio::test(flavor = "current_thread")]
async fn unsafe_authority_labels_false() {
let _trace = trace_init();
let cfg = default_config();
test_unsafe_authority_labels(cfg, None).await;
}
async fn test_unsafe_authority_labels(
cfg: Config,
expected_authority: Option<http::uri::Authority>,
) {
let connect = {
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
support::connect().endpoint_fn_boxed(Target::addr(), hello_server(server))
};
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http1::Builder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
profile_tx.send(profile::Profile::default()).unwrap();
let (rt, _shutdown) = runtime();
let metrics = rt
.metrics
.clone()
.http_endpoint
.into_report(time::Duration::from_secs(3600));
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_http1());
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
let req = Request::builder()
.method(http::Method::POST)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::OK);
use http_body_util::BodyExt;
let mut body = rsp.into_body();
while let Some(Ok(_)) = body.frame().await {}
tracing::info!("{metrics:#?}");
let response_total = metrics
.get_response_total(
&metrics::EndpointLabels::Inbound(metrics::InboundEndpointLabels {
tls: Target::meshed_http1().1.as_ref().map(|t| t.labels()),
authority: expected_authority,
target_addr: "127.0.0.1:80".parse().unwrap(),
policy: metrics::RouteAuthzLabels {
route: metrics::RouteLabels {
server: metrics::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
80,
),
route: policy::Meta::new_default("default"),
},
authz: Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "serverauthorization".into(),
name: "testsaz".into(),
}),
},
}),
Some(http::StatusCode::OK),
&classify::Class::Http(Ok(http::StatusCode::OK)),
)
.expect("response_total not found");
assert_eq!(response_total, 1.0);
drop(bg);
drop((client, bg));
}
#[tracing::instrument]
fn hello_server(
server: hyper::server::conn::http1::Builder,
http: hyper::server::conn::Http,
) -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |endpoint| {
let span = tracing::info_span!("hello_server", ?endpoint);
let _e = span.enter();
tracing::info!("mock connecting");
let (client_io, server_io) = support::io::duplex(4096);
let hello_svc =
hyper::service::service_fn(|request: Request<hyper::body::Incoming>| async move {
tracing::info!(?request);
Ok::<_, io::Error>(Response::new(BoxBody::from_static("Hello world!")))
});
let hello_svc = hyper::service::service_fn(|request: Request<Body>| async move {
tracing::info!(?request);
Ok::<_, io::Error>(Response::new(Body::from("Hello world!")))
});
tokio::spawn(
server
.serve_connection(hyper_util::rt::TokioIo::new(server_io), hello_svc)
http.serve_connection(server_io, hello_svc)
.in_current_span(),
);
Ok(io::BoxedIo::new(client_io))
@@ -818,7 +572,7 @@ fn hello_server(
#[tracing::instrument]
fn grpc_status_server(
server: hyper::server::conn::http2::Builder<TokioExecutor>,
http: hyper::server::conn::Http,
status: tonic::Code,
) -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |endpoint| {
@@ -827,33 +581,26 @@ fn grpc_status_server(
tracing::info!("mock connecting");
let (client_io, server_io) = support::io::duplex(4096);
tokio::spawn(
server
.serve_connection(
hyper_util::rt::TokioIo::new(server_io),
hyper::service::service_fn(
move |request: Request<hyper::body::Incoming>| async move {
tracing::info!(?request);
let (mut tx, rx) =
http_body_util::channel::Channel::<bytes::Bytes, Error>::new(1024);
tokio::spawn(async move {
let mut trls = ::http::HeaderMap::new();
trls.insert(
"grpc-status",
(status as u32).to_string().parse().unwrap(),
);
tx.send_trailers(trls).await
});
Ok::<_, io::Error>(
http::Response::builder()
.version(::http::Version::HTTP_2)
.header("content-type", "application/grpc")
.body(rx)
.unwrap(),
)
},
),
)
.in_current_span(),
http.serve_connection(
server_io,
hyper::service::service_fn(move |request: Request<Body>| async move {
tracing::info!(?request);
let (mut tx, rx) = Body::channel();
tokio::spawn(async move {
let mut trls = ::http::HeaderMap::new();
trls.insert("grpc-status", (status as u32).to_string().parse().unwrap());
tx.send_trailers(trls).await
});
Ok::<_, io::Error>(
http::Response::builder()
.version(::http::Version::HTTP_2)
.header("content-type", "application/grpc")
.body(rx)
.unwrap(),
)
}),
)
.in_current_span(),
);
Ok(io::BoxedIo::new(client_io))
}
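The `grpc-status` trailer value above is produced by casting a `tonic::Code` to its numeric wire value; a std-only sketch with a hypothetical `Code` stand-in enum (the discriminants follow the gRPC status-code spec, where `Unknown = 2`):

```rust
// Hypothetical stand-in for `tonic::Code`; discriminants follow the gRPC spec.
#[derive(Clone, Copy)]
enum Code {
    Ok = 0,
    Unknown = 2,
}

// Matches the `(status as u32).to_string()` expression in the server above.
fn grpc_status_value(code: Code) -> String {
    (code as u32).to_string()
}

fn main() {
    assert_eq!(grpc_status_value(Code::Unknown), "2");
    assert_eq!(grpc_status_value(Code::Ok), "0");
    println!("grpc-status trailers carry the numeric code as a string");
}
```

This is why the test later asserts that the `grpc-status` trailer equals `"2"` for `tonic::Code::Unknown`.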
@@ -861,11 +608,18 @@ fn grpc_status_server(
#[tracing::instrument]
fn connect_error() -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |_| Err(io::Error::other("server is not listening"))
move |_| {
Err(io::Error::new(
io::ErrorKind::Other,
"server is not listening",
))
}
}
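The `connect_error` change above folds `io::Error::new(io::ErrorKind::Other, …)` into the `io::Error::other` constructor (stabilized in Rust 1.74); the two forms are equivalent, as this std-only sketch shows:

```rust
use std::io;

fn main() {
    let verbose = io::Error::new(io::ErrorKind::Other, "server is not listening");
    let concise = io::Error::other("server is not listening");

    // Both constructors yield the same error kind and display the same message.
    assert_eq!(verbose.kind(), io::ErrorKind::Other);
    assert_eq!(concise.kind(), io::ErrorKind::Other);
    assert_eq!(verbose.to_string(), concise.to_string());
    println!("io::Error::other is shorthand for ErrorKind::Other + new");
}
```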
#[tracing::instrument]
fn connect_timeout() -> Box<dyn FnMut(Remote<ServerAddr>) -> ConnectFuture + Send> {
fn connect_timeout(
http: hyper::server::conn::Http,
) -> Box<dyn FnMut(Remote<ServerAddr>) -> ConnectFuture + Send> {
Box::new(move |endpoint| {
let span = tracing::info_span!("connect_timeout", ?endpoint);
Box::pin(
@@ -882,7 +636,7 @@ fn connect_timeout() -> Box<dyn FnMut(Remote<ServerAddr>) -> ConnectFuture + Sen
}
#[derive(Clone, Debug)]
struct Target(http::Variant, tls::ConditionalServerTls);
struct Target(http::Version, tls::ConditionalServerTls);
#[track_caller]
fn check_error_header(hdrs: &::http::HeaderMap, expected: &str) {
@@ -901,17 +655,17 @@ fn check_error_header(hdrs: &::http::HeaderMap, expected: &str) {
impl Target {
const UNMESHED_HTTP1: Self = Self(
http::Variant::Http1,
http::Version::Http1,
tls::ConditionalServerTls::None(tls::NoServerTls::NoClientHello),
);
const UNMESHED_H2: Self = Self(
http::Variant::H2,
http::Version::H2,
tls::ConditionalServerTls::None(tls::NoServerTls::NoClientHello),
);
fn meshed_http1() -> Self {
Self(
http::Variant::Http1,
http::Version::Http1,
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(tls::ClientId(
"foosa.barns.serviceaccount.identity.linkerd.cluster.local"
@@ -925,7 +679,7 @@ impl Target {
fn meshed_h2() -> Self {
Self(
http::Variant::H2,
http::Version::H2,
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(tls::ClientId(
"foosa.barns.serviceaccount.identity.linkerd.cluster.local"
@@ -960,8 +714,8 @@ impl svc::Param<Remote<ClientAddr>> for Target {
}
}
impl svc::Param<http::Variant> for Target {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for Target {
fn param(&self) -> http::Version {
self.0
}
}
@@ -994,7 +748,6 @@ impl svc::Param<policy::AllowPolicy> for Target {
kind: "server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Default::default(),
},
);
policy
@@ -1003,14 +756,11 @@ impl svc::Param<policy::AllowPolicy> for Target {
impl svc::Param<policy::ServerLabel> for Target {
fn param(&self) -> policy::ServerLabel {
policy::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
80,
)
policy::ServerLabel(Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}))
}
}


@@ -18,17 +18,12 @@ mod server;
#[cfg(any(test, feature = "test-util", fuzzing))]
pub mod test_util;
#[cfg(fuzzing)]
pub use self::http::fuzz as http_fuzz;
pub use self::{
detect::MetricsFamilies as DetectMetrics, metrics::InboundMetrics, policy::DefaultPolicy,
};
pub use self::{metrics::InboundMetrics, policy::DefaultPolicy};
use linkerd_app_core::{
config::{ConnectConfig, ProxyConfig, QueueConfig},
drain,
http_tracing::SpanSink,
http_tracing::OpenCensusSink,
identity, io,
metrics::prom,
proxy::{tap, tcp},
svc,
transport::{self, Remote, ServerAddr},
@@ -38,6 +33,9 @@ use std::{fmt::Debug, time::Duration};
use thiserror::Error;
use tracing::debug_span;
#[cfg(fuzzing)]
pub use self::http::fuzz as http_fuzz;
#[derive(Clone, Debug)]
pub struct Config {
pub allow_discovery: NameMatch,
@@ -55,9 +53,6 @@ pub struct Config {
/// Configures how HTTP requests are buffered *for each inbound port*.
pub http_request_queue: QueueConfig,
/// Enables unsafe authority labels.
pub unsafe_authority_labels: bool,
}
#[derive(Clone)]
@@ -72,7 +67,7 @@ struct Runtime {
metrics: InboundMetrics,
identity: identity::creds::Receiver,
tap: tap::Registry,
span_sink: Option<SpanSink>,
span_sink: OpenCensusSink,
drain: drain::Watch,
}
@@ -113,6 +108,10 @@ impl<S> Inbound<S> {
&self.runtime.identity
}
pub fn proxy_metrics(&self) -> &metrics::Proxy {
&self.runtime.metrics.proxy
}
/// A helper for gateways to instrument policy checks.
pub fn authorize_http<N>(
&self,
@@ -150,9 +149,9 @@ impl<S> Inbound<S> {
}
impl Inbound<()> {
pub fn new(config: Config, runtime: ProxyRuntime, prom: &mut prom::Registry) -> Self {
pub fn new(config: Config, runtime: ProxyRuntime) -> Self {
let runtime = Runtime {
metrics: InboundMetrics::new(runtime.metrics, prom),
metrics: InboundMetrics::new(runtime.metrics),
identity: runtime.identity,
tap: runtime.tap,
span_sink: runtime.span_sink,
@@ -168,11 +167,7 @@ impl Inbound<()> {
#[cfg(any(test, feature = "test-util"))]
pub fn for_test() -> (Self, drain::Signal) {
let (rt, drain) = test_util::runtime();
let this = Self::new(
test_util::default_config(),
rt,
&mut prom::Registry::default(),
);
let this = Self::new(test_util::default_config(), rt);
(this, drain)
}
@@ -206,7 +201,6 @@ impl Inbound<()> {
// forwarding and HTTP proxying).
let ConnectConfig {
ref keepalive,
ref user_timeout,
ref timeout,
..
} = config.proxy.connect;
@@ -215,7 +209,7 @@
#[error("inbound connection must not target port {0}")]
struct Loop(u16);
svc::stack(transport::ConnectTcp::new(*keepalive, *user_timeout))
svc::stack(transport::ConnectTcp::new(*keepalive))
// Limits the time we wait for a connection to be established.
.push_connect_timeout(*timeout)
// Prevent connections that would target the inbound proxy port from looping.

View File

@@ -13,7 +13,7 @@ pub(crate) mod error;
pub use linkerd_app_core::metrics::*;
/// Holds LEGACY inbound proxy metrics.
/// Holds outbound proxy metrics.
#[derive(Clone, Debug)]
pub struct InboundMetrics {
pub http_authz: authz::HttpAuthzMetrics,
@@ -25,32 +25,21 @@ pub struct InboundMetrics {
/// Holds metrics that are common to both inbound and outbound proxies. These metrics are
/// reported separately
pub proxy: Proxy,
pub detect: crate::detect::MetricsFamilies,
pub direct: crate::direct::MetricsFamilies,
}
impl InboundMetrics {
pub(crate) fn new(proxy: Proxy, reg: &mut prom::Registry) -> Self {
let detect =
crate::detect::MetricsFamilies::register(reg.sub_registry_with_prefix("tcp_detect"));
let direct = crate::direct::MetricsFamilies::register(
reg.sub_registry_with_prefix("tcp_transport_header"),
);
pub(crate) fn new(proxy: Proxy) -> Self {
Self {
http_authz: authz::HttpAuthzMetrics::default(),
http_errors: error::HttpErrorMetrics::default(),
tcp_authz: authz::TcpAuthzMetrics::default(),
tcp_errors: error::TcpErrorMetrics::default(),
proxy,
detect,
direct,
}
}
}
impl legacy::FmtMetrics for InboundMetrics {
impl FmtMetrics for InboundMetrics {
fn fmt_metrics(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.http_authz.fmt_metrics(f)?;
self.http_errors.fmt_metrics(f)?;

View File

@@ -1,9 +1,8 @@
use crate::policy::{AllowPolicy, HttpRoutePermit, Meta, ServerPermit};
use crate::policy::{AllowPolicy, HttpRoutePermit, ServerPermit};
use linkerd_app_core::{
metrics::{
legacy::{Counter, FmtLabels, FmtMetrics},
metrics, RouteAuthzLabels, RouteLabels, ServerAuthzLabels, ServerLabel, TargetAddr,
TlsAccept,
metrics, Counter, FmtLabels, FmtMetrics, RouteAuthzLabels, RouteLabels, ServerAuthzLabels,
ServerLabel, TargetAddr, TlsAccept,
},
tls,
transport::OrigDstAddr,
@@ -22,10 +21,6 @@ metrics! {
"The total number of inbound HTTP requests that could not be associated with a route"
},
inbound_http_local_ratelimit_total: Counter {
"The total number of inbound HTTP requests that were rate-limited"
},
inbound_tcp_authz_allow_total: Counter {
"The total number of inbound TCP connections that were authorized"
},
@@ -48,7 +43,6 @@ struct HttpInner {
allow: Mutex<HashMap<RouteAuthzKey, Counter>>,
deny: Mutex<HashMap<RouteKey, Counter>>,
route_not_found: Mutex<HashMap<ServerKey, Counter>>,
http_local_rate_limit: Mutex<HashMap<HttpLocalRateLimitKey, Counter>>,
}
#[derive(Debug, Default)]
@@ -58,17 +52,10 @@ struct TcpInner {
terminate: Mutex<HashMap<ServerKey, Counter>>,
}
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct HTTPLocalRateLimitLabels {
pub server: ServerLabel,
pub rate_limit: Option<Arc<Meta>>,
pub scope: &'static str,
}
#[derive(Debug, Hash, PartialEq, Eq)]
struct Key<L> {
target: TargetAddr,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
labels: L,
}
@@ -76,12 +63,11 @@ type ServerKey = Key<ServerLabel>;
type ServerAuthzKey = Key<ServerAuthzLabels>;
type RouteKey = Key<RouteLabels>;
type RouteAuthzKey = Key<RouteAuthzLabels>;
type HttpLocalRateLimitKey = Key<HTTPLocalRateLimitLabels>;
// === impl HttpAuthzMetrics ===
impl HttpAuthzMetrics {
pub fn allow(&self, permit: &HttpRoutePermit, tls: tls::ConditionalServerTlsLabels) {
pub fn allow(&self, permit: &HttpRoutePermit, tls: tls::ConditionalServerTls) {
self.0
.allow
.lock()
@@ -94,7 +80,7 @@ impl HttpAuthzMetrics {
&self,
labels: ServerLabel,
dst: OrigDstAddr,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
) {
self.0
.route_not_found
@@ -104,12 +90,7 @@ impl HttpAuthzMetrics {
.incr();
}
pub fn deny(
&self,
labels: RouteLabels,
dst: OrigDstAddr,
tls: tls::ConditionalServerTlsLabels,
) {
pub fn deny(&self, labels: RouteLabels, dst: OrigDstAddr, tls: tls::ConditionalServerTls) {
self.0
.deny
.lock()
@@ -117,20 +98,6 @@ impl HttpAuthzMetrics {
.or_default()
.incr();
}
pub fn ratelimit(
&self,
labels: HTTPLocalRateLimitLabels,
dst: OrigDstAddr,
tls: tls::ConditionalServerTlsLabels,
) {
self.0
.http_local_rate_limit
.lock()
.entry(HttpLocalRateLimitKey::new(labels, dst, tls))
.or_default()
.incr();
}
}
impl FmtMetrics for HttpAuthzMetrics {
@@ -173,19 +140,6 @@ impl FmtMetrics for HttpAuthzMetrics {
}
drop(route_not_found);
let local_ratelimit = self.0.http_local_rate_limit.lock();
if !local_ratelimit.is_empty() {
inbound_http_local_ratelimit_total.fmt_help(f)?;
inbound_http_local_ratelimit_total.fmt_scopes(
f,
local_ratelimit
.iter()
.map(|(k, c)| ((k.target, (&k.labels, TlsAccept(&k.tls))), c)),
|c| c,
)?;
}
drop(local_ratelimit);
Ok(())
}
}
@@ -193,7 +147,7 @@ impl FmtMetrics for HttpAuthzMetrics {
// === impl TcpAuthzMetrics ===
impl TcpAuthzMetrics {
pub fn allow(&self, permit: &ServerPermit, tls: tls::ConditionalServerTlsLabels) {
pub fn allow(&self, permit: &ServerPermit, tls: tls::ConditionalServerTls) {
self.0
.allow
.lock()
@@ -202,7 +156,7 @@ impl TcpAuthzMetrics {
.incr();
}
pub fn deny(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) {
pub fn deny(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTls) {
self.0
.deny
.lock()
@@ -211,7 +165,7 @@ impl TcpAuthzMetrics {
.incr();
}
pub fn terminate(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) {
pub fn terminate(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTls) {
self.0
.terminate
.lock()
@@ -248,36 +202,10 @@ impl FmtMetrics for TcpAuthzMetrics {
}
}
// === impl HTTPLocalRateLimitLabels ===
impl FmtLabels for HTTPLocalRateLimitLabels {
fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let Self {
server,
rate_limit,
scope,
} = self;
server.fmt_labels(f)?;
if let Some(rl) = rate_limit {
write!(
f,
",ratelimit_group=\"{}\",ratelimit_kind=\"{}\",ratelimit_name=\"{}\",ratelimit_scope=\"{}\"",
rl.group(),
rl.kind(),
rl.name(),
scope,
)
} else {
write!(f, ",ratelimit_scope=\"{scope}\"")
}
}
}
// === impl Key ===
impl<L> Key<L> {
fn new(labels: L, dst: OrigDstAddr, tls: tls::ConditionalServerTlsLabels) -> Self {
fn new(labels: L, dst: OrigDstAddr, tls: tls::ConditionalServerTls) -> Self {
Self {
tls,
target: TargetAddr(dst.into()),
@@ -288,30 +216,24 @@ impl<L> Key<L> {
impl<L: FmtLabels> FmtLabels for Key<L> {
fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let Self {
target,
tls,
labels,
} = self;
(target, (labels, TlsAccept(tls))).fmt_labels(f)
(self.target, (&self.labels, TlsAccept(&self.tls))).fmt_labels(f)
}
}
impl ServerKey {
fn from_policy(policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) -> Self {
fn from_policy(policy: &AllowPolicy, tls: tls::ConditionalServerTls) -> Self {
Self::new(policy.server_label(), policy.dst_addr(), tls)
}
}
impl RouteAuthzKey {
fn from_permit(permit: &HttpRoutePermit, tls: tls::ConditionalServerTlsLabels) -> Self {
fn from_permit(permit: &HttpRoutePermit, tls: tls::ConditionalServerTls) -> Self {
Self::new(permit.labels.clone(), permit.dst, tls)
}
}
impl ServerAuthzKey {
fn from_permit(permit: &ServerPermit, tls: tls::ConditionalServerTlsLabels) -> Self {
fn from_permit(permit: &ServerPermit, tls: tls::ConditionalServerTls) -> Self {
Self::new(permit.labels.clone(), permit.dst, tls)
}
}

View File

@@ -8,7 +8,7 @@ use crate::{
};
use linkerd_app_core::{
errors::{FailFastError, LoadShedError},
metrics::legacy::FmtLabels,
metrics::FmtLabels,
tls,
};
use std::fmt;

View File

@@ -1,9 +1,6 @@
use super::ErrorKind;
use linkerd_app_core::{
metrics::{
legacy::{Counter, FmtMetrics},
metrics, ServerLabel,
},
metrics::{metrics, Counter, FmtMetrics, ServerLabel},
svc::{self, stack::NewMonitor},
transport::{labels::TargetAddr, OrigDstAddr},
Error,

View File

@@ -1,9 +1,6 @@
use super::ErrorKind;
use linkerd_app_core::{
metrics::{
legacy::{Counter, FmtMetrics},
metrics,
},
metrics::{metrics, Counter, FmtMetrics},
svc::{self, stack::NewMonitor},
transport::{labels::TargetAddr, OrigDstAddr},
Error,

View File

@@ -5,8 +5,6 @@ mod http;
mod store;
mod tcp;
use crate::metrics::authz::HTTPLocalRateLimitLabels;
pub(crate) use self::store::Store;
pub use self::{
config::Config,
@@ -29,8 +27,7 @@ pub use linkerd_proxy_server_policy::{
authz::Suffix,
grpc::Route as GrpcRoute,
http::{filter::Redirection, Route as HttpRoute},
route, Authentication, Authorization, Meta, Protocol, RateLimitError, RoutePolicy,
ServerPolicy,
route, Authentication, Authorization, Meta, Protocol, RoutePolicy, ServerPolicy,
};
use std::sync::Arc;
use thiserror::Error;
@@ -47,7 +44,7 @@ pub trait GetPolicy {
fn get_policy(&self, dst: OrigDstAddr) -> AllowPolicy;
}
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum DefaultPolicy {
Allow(ServerPolicy),
Deny,
@@ -93,7 +90,6 @@ impl From<DefaultPolicy> for ServerPolicy {
DefaultPolicy::Allow(p) => p,
DefaultPolicy::Deny => ServerPolicy {
protocol: Protocol::Opaque(Arc::new([])),
local_rate_limit: Default::default(),
meta: Meta::new_default("deny"),
},
}
@@ -133,21 +129,7 @@ impl AllowPolicy {
#[inline]
pub fn server_label(&self) -> ServerLabel {
ServerLabel(self.server.borrow().meta.clone(), self.dst.port())
}
pub fn ratelimit_label(&self, error: &RateLimitError) -> HTTPLocalRateLimitLabels {
use RateLimitError::*;
let scope = match error {
Total(_) => "total",
PerIdentity(_) | Override(_) => "identity",
};
HTTPLocalRateLimitLabels {
server: self.server_label(),
rate_limit: self.server.borrow().local_rate_limit.meta(),
scope,
}
ServerLabel(self.server.borrow().meta.clone())
}
async fn changed(&mut self) {
@@ -220,7 +202,7 @@ impl ServerPermit {
protocol: server.protocol.clone(),
labels: ServerAuthzLabels {
authz: authz.meta.clone(),
server: ServerLabel(server.meta.clone(), dst.port()),
server: ServerLabel(server.meta.clone()),
},
}
}

View File

@@ -33,8 +33,9 @@ static INVALID_POLICY: once_cell::sync::OnceCell<ServerPolicy> = once_cell::sync
impl<S> Api<S>
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error> + Clone,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error> + Clone,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
{
pub(super) fn new(
workload: Arc<str>,
@@ -57,9 +58,10 @@ where
impl<S> Service<u16> for Api<S>
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
S: Clone + Send + Sync + 'static,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
S::Future: Send + 'static,
{
type Response =

View File

@@ -40,10 +40,10 @@ impl Config {
limits: ReceiveLimits,
) -> impl GetPolicy + Clone + Send + Sync + 'static
where
C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
C: Clone + Unpin + Send + Sync + 'static,
C::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Send + 'static,
C::ResponseBody: http::HttpBody<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Default + Send + 'static,
C::Future: Send,
{
match self {

View File

@@ -36,15 +36,6 @@ pub fn cluster_unauthenticated(
)
}
pub fn audit(timeout: Duration) -> ServerPolicy {
mk(
"audit",
all_nets(),
Authentication::Unauthenticated,
timeout,
)
}
pub fn all_mtls_unauthenticated(timeout: Duration) -> ServerPolicy {
mk(
"all-tls-unauthenticated",
@@ -88,6 +79,5 @@ fn mk(
ServerPolicy {
meta: Meta::new_default(name),
protocol,
local_rate_limit: Default::default(),
}
}

View File

@@ -9,7 +9,7 @@ use linkerd_app_core::{
svc::{self, ServiceExt},
tls,
transport::{ClientAddr, OrigDstAddr, Remote},
Conditional, Error, Result,
Error, Result,
};
use linkerd_proxy_server_policy::{grpc, http, route::RouteMatch};
use std::{sync::Arc, task};
@@ -171,8 +169,6 @@ }
}
};
try_fut!(self.check_rate_limit());
future::Either::Left(
self.inner
.new_service((permit, self.target.clone()))
@@ -204,25 +202,7 @@ impl<T, N> HttpPolicyService<T, N> {
.iter()
.find(|a| super::is_authorized(a, self.connection.client, &self.connection.tls))
{
Some(authz) => {
if authz.meta.is_audit() {
tracing::info!(
server.group = %labels.server.0.group(),
server.kind = %labels.server.0.kind(),
server.name = %labels.server.0.name(),
route.group = %labels.route.group(),
route.kind = %labels.route.kind(),
route.name = %labels.route.name(),
client.tls = ?self.connection.tls,
client.ip = %self.connection.client.ip(),
authz.group = %authz.meta.group(),
authz.kind = %authz.meta.kind(),
authz.name = %authz.meta.name(),
"Request allowed",
);
}
authz
}
Some(authz) => authz,
None => {
tracing::info!(
server.group = %labels.server.0.group(),
@@ -248,11 +228,8 @@
);
}
}
self.metrics.deny(
labels,
self.connection.dst,
self.connection.tls.as_ref().map(|t| t.labels()),
);
self.metrics
.deny(labels, self.connection.dst, self.connection.tls.clone());
return Err(HttpRouteUnauthorized(()).into());
}
};
@@ -282,43 +259,16 @@
}
};
self.metrics
.allow(&permit, self.connection.tls.as_ref().map(|t| t.labels()));
self.metrics.allow(&permit, self.connection.tls.clone());
Ok((permit, r#match, route))
}
fn mk_route_not_found(&self) -> Error {
let labels = self.policy.server_label();
self.metrics.route_not_found(
labels,
self.connection.dst,
self.connection.tls.as_ref().map(|t| t.labels()),
);
self.metrics
.route_not_found(labels, self.connection.dst, self.connection.tls.clone());
HttpRouteNotFound(()).into()
}
fn check_rate_limit(&self) -> Result<()> {
let id = match self.connection.tls {
Conditional::Some(tls::ServerTls::Established {
client_id: Some(tls::ClientId(ref id)),
..
}) => Some(id),
_ => None,
};
self.policy
.borrow()
.local_rate_limit
.check(id)
.map_err(|err| {
self.metrics.ratelimit(
self.policy.ratelimit_label(&err),
self.connection.dst,
self.connection.tls.as_ref().map(|t| t.labels()),
);
err.into()
})
}
}
fn apply_http_filters<B>(

View File

@@ -1,8 +1,6 @@
use super::*;
use crate::policy::{Authentication, Authorization, Meta, Protocol, ServerPolicy};
use linkerd_app_core::{svc::Service, Infallible};
use linkerd_http_box::BoxBody;
use linkerd_proxy_server_policy::{LocalRateLimit, RateLimitError};
macro_rules! conn {
($client:expr, $dst:expr) => {{
@@ -21,7 +19,7 @@ }
}
macro_rules! new_svc {
($proto:expr, $conn:expr, $rsp:expr, $rl: expr) => {{
($proto:expr, $conn:expr, $rsp:expr) => {{
let (policy, tx) = AllowPolicy::for_test(
$conn.dst,
ServerPolicy {
@@ -31,7 +29,6 @@
kind: "Server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Arc::new($rl),
},
);
let svc = HttpPolicyService {
@@ -41,7 +38,7 @@
metrics: HttpAuthzMetrics::default(),
inner: |(permit, _): (HttpRoutePermit, ())| {
let f = $rsp;
svc::mk(move |req: ::http::Request<BoxBody>| {
svc::mk(move |req: ::http::Request<hyper::Body>| {
futures::future::ready((f)(permit.clone(), req))
})
},
@@ -49,28 +46,19 @@
(svc, tx)
}};
($proto:expr, $conn:expr, $rsp:expr) => {{
new_svc!($proto, $conn, $rsp, Default::default())
}};
($proto:expr, $rl:expr) => {{
($proto:expr) => {{
new_svc!(
$proto,
conn!(),
|permit: HttpRoutePermit, _req: ::http::Request<BoxBody>| {
|permit: HttpRoutePermit, _req: ::http::Request<hyper::Body>| {
let mut rsp = ::http::Response::builder()
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap();
rsp.extensions_mut().insert(permit.clone());
Ok::<_, Infallible>(rsp)
},
$rl
}
)
}};
($proto:expr) => {{
new_svc!($proto, Default::default())
}};
}
#[tokio::test(flavor = "current_thread")]
@@ -120,7 +108,11 @@
// Test that authorization policies allow requests:
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
let permit = rsp
@@ -134,7 +126,7 @@
.call(
::http::Request::builder()
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -146,7 +138,7 @@
.call(
::http::Request::builder()
.method(::http::Method::DELETE)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -205,12 +197,15 @@
},
],
}])),
local_rate_limit: Arc::new(Default::default()),
})
.expect("must send");
assert!(svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap(),)
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect_err("fails")
.is::<HttpRouteUnauthorized>());
@@ -219,7 +214,7 @@
.call(
::http::Request::builder()
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -230,7 +225,7 @@
.call(
::http::Request::builder()
.method(::http::Method::DELETE)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -278,14 +273,14 @@
},
}],
}]));
let inner = |permit: HttpRoutePermit, req: ::http::Request<BoxBody>| -> Result<_> {
let inner = |permit: HttpRoutePermit, req: ::http::Request<hyper::Body>| -> Result<_> {
assert_eq!(req.headers().len(), 1);
assert_eq!(
req.headers().get("testkey"),
Some(&"testval".parse().unwrap())
);
let mut rsp = ::http::Response::builder()
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap();
rsp.extensions_mut().insert(permit);
Ok(rsp)
@@ -293,7 +288,11 @@
let (mut svc, _tx) = new_svc!(proto, conn!(), inner);
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
let permit = rsp
@@ -343,12 +342,16 @@
}],
}]));
let inner = |_: HttpRoutePermit,
_: ::http::Request<BoxBody>|
-> Result<::http::Response<BoxBody>> { unreachable!() };
_: ::http::Request<hyper::Body>|
-> Result<::http::Response<hyper::Body>> { unreachable!() };
let (mut svc, _tx) = new_svc!(proto, conn!(), inner);
let err = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect_err("fails");
assert_eq!(
@@ -360,82 +363,6 @@
);
}
#[tokio::test(flavor = "current_thread")]
async fn rate_limit_allow() {
use linkerd_app_core::{Ipv4Net, Ipv6Net};
let rmeta = Meta::new_default("default");
// Rate-limit with plenty of room for two consecutive requests
let rl = LocalRateLimit::new_no_overrides_for_test(Some(10), Some(5));
let authorizations = Arc::new([Authorization {
meta: rmeta.clone(),
networks: vec![Ipv4Net::default().into(), Ipv6Net::default().into()],
authentication: Authentication::Unauthenticated,
}]);
let (mut svc, _tx) = new_svc!(
Protocol::Http1(Arc::new([http::default(authorizations.clone())])),
rl
);
// First request should be allowed
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.await
.expect("serves");
assert_eq!(rsp.status(), ::http::StatusCode::OK);
// Second request should be allowed as well
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.await
.expect("serves");
assert_eq!(rsp.status(), ::http::StatusCode::OK);
}
#[tokio::test(flavor = "current_thread")]
async fn rate_limit_deny() {
use linkerd_app_core::{Ipv4Net, Ipv6Net};
let rmeta = Meta::new_default("default");
// Rate-limit with room for only one request per second
let rl = LocalRateLimit::new_no_overrides_for_test(Some(10), Some(1));
let authorizations = Arc::new([Authorization {
meta: rmeta.clone(),
networks: vec![Ipv4Net::default().into(), Ipv6Net::default().into()],
authentication: Authentication::Unauthenticated,
}]);
let (mut svc, _tx) = new_svc!(
Protocol::Http1(Arc::new([http::default(authorizations.clone())])),
rl
);
// First request should be allowed
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.await
.expect("serves");
assert_eq!(rsp.status(), ::http::StatusCode::OK);
// Second request should be denied
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.await
.expect_err("should deny");
let err = rsp
.downcast_ref::<RateLimitError>()
.expect("rate limit error");
match err {
RateLimitError::PerIdentity(rps) => assert_eq!(rps, &std::num::NonZeroU32::new(1).unwrap()),
_ => panic!("unexpected error"),
};
}
#[tokio::test(flavor = "current_thread")]
async fn grpc_route() {
use linkerd_proxy_server_policy::grpc::{
@@ -495,7 +422,7 @@
::http::Request::builder()
.uri("/foo.bar.bah/baz")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -511,7 +438,7 @@
::http::Request::builder()
.uri("/foo.bar.bah/qux")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -523,7 +450,7 @@
::http::Request::builder()
.uri("/boo.bar.bah/bah")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -575,14 +502,14 @@
},
}],
}]));
let inner = |permit: HttpRoutePermit, req: ::http::Request<BoxBody>| -> Result<_> {
let inner = |permit: HttpRoutePermit, req: ::http::Request<hyper::Body>| -> Result<_> {
assert_eq!(req.headers().len(), 1);
assert_eq!(
req.headers().get("testkey"),
Some(&"testval".parse().unwrap())
);
let mut rsp = ::http::Response::builder()
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap();
rsp.extensions_mut().insert(permit);
Ok(rsp)
@@ -594,7 +521,7 @@
::http::Request::builder()
.uri("/foo.bar.bah/baz")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@@ -652,8 +579,8 @@
}],
}]));
let inner = |_: HttpRoutePermit,
_: ::http::Request<BoxBody>|
-> Result<::http::Response<BoxBody>> { unreachable!() };
_: ::http::Request<hyper::Body>|
-> Result<::http::Response<hyper::Body>> { unreachable!() };
let (mut svc, _tx) = new_svc!(proto, conn!(), inner);
let err = svc
@@ -661,7 +588,7 @@
::http::Request::builder()
.uri("/foo.bar.bah/baz")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await

View File

@@ -74,10 +74,11 @@ impl<S> Store<S> {
opaque_ports: RangeInclusiveSet<u16>,
) -> Self
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
S: Clone + Send + Sync + 'static,
S::Future: Send,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
{
let opaque_default = Self::make_opaque(default.clone());
// The initial set of policies never expire from the cache.
@@ -138,10 +139,11 @@ impl<S> Store<S> {
impl<S> GetPolicy for Store<S>
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
S: Clone + Send + Sync + 'static,
S::Future: Send,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
{
fn get_policy(&self, dst: OrigDstAddr) -> AllowPolicy {
// Lookup the policy for the target port in the cache. If it doesn't

View File

@@ -77,8 +77,7 @@ where
// This new services requires a ClientAddr, so it must necessarily be built for each
// connection. So we can just increment the counter here since the service can only
// be used at most once.
self.metrics
.allow(&permit, tls.as_ref().map(|t| t.labels()));
self.metrics.allow(&permit, tls.clone());
let inner = self.inner.new_service((permit, target));
TcpPolicy::Authorized(Authorized {
@@ -98,7 +97,7 @@ where
?tls, %client,
"Connection denied"
);
self.metrics.deny(&policy, tls.as_ref().map(|t| t.labels()));
self.metrics.deny(&policy, tls);
TcpPolicy::Unauthorized(deny)
}
}
@@ -168,7 +167,7 @@ where
%client,
"Connection terminated due to policy change",
);
metrics.terminate(&policy, tls.as_ref().map(|t| t.labels()));
metrics.terminate(&policy, tls);
return Err(denied.into());
}
}
@@ -195,19 +194,6 @@ fn check_authorized(
{
for authz in &**authzs {
if super::is_authorized(authz, client_addr, tls) {
if authz.meta.is_audit() {
tracing::info!(
server.group = %server.meta.group(),
server.kind = %server.meta.kind(),
server.name = %server.meta.name(),
client.tls = ?tls,
client.ip = %client_addr.ip(),
authz.group = %authz.meta.group(),
authz.kind = %authz.meta.kind(),
authz.name = %authz.meta.name(),
"Request allowed",
);
}
return Ok(ServerPermit::new(dst, server, authz));
}
}

View File

@@ -26,7 +26,6 @@ async fn unauthenticated_allowed() {
kind: "server".into(),
name: "test".into(),
}),
local_rate_limit: Arc::new(Default::default()),
};
let tls = tls::ConditionalServerTls::None(tls::NoServerTls::NoClientHello);
@@ -43,14 +42,11 @@ async fn unauthenticated_allowed() {
kind: "serverauthorization".into(),
name: "unauth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
)
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}))
},
}
);
@@ -79,7 +75,6 @@ async fn authenticated_identity() {
kind: "server".into(),
name: "test".into(),
}),
local_rate_limit: Arc::new(Default::default()),
};
let tls = tls::ConditionalServerTls::Some(tls::ServerTls::Established {
@@ -99,14 +94,11 @@ async fn authenticated_identity() {
kind: "serverauthorization".into(),
name: "tls-auth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
)
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}))
}
}
);
@@ -146,7 +138,6 @@ async fn authenticated_suffix() {
kind: "server".into(),
name: "test".into(),
}),
local_rate_limit: Arc::new(Default::default()),
};
let tls = tls::ConditionalServerTls::Some(tls::ServerTls::Established {
@@ -165,14 +156,11 @@ async fn authenticated_suffix() {
kind: "serverauthorization".into(),
name: "tls-auth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
),
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
})),
}
}
);
@@ -209,7 +197,6 @@ async fn tls_unauthenticated() {
kind: "server".into(),
name: "test".into(),
}),
local_rate_limit: Arc::new(Default::default()),
};
let tls = tls::ConditionalServerTls::Some(tls::ServerTls::Established {
@@ -228,14 +215,11 @@ async fn tls_unauthenticated() {
kind: "serverauthorization".into(),
name: "tls-unauth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
),
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
})),
}
}
);
@@ -263,7 +247,7 @@ fn orig_dst_addr() -> OrigDstAddr {
OrigDstAddr(([192, 0, 2, 2], 1000).into())
}
impl tonic::client::GrpcService<tonic::body::Body> for MockSvc {
impl tonic::client::GrpcService<tonic::body::BoxBody> for MockSvc {
type ResponseBody = linkerd_app_core::control::RspBody;
type Error = Error;
type Future = futures::future::Pending<Result<http::Response<Self::ResponseBody>, Self::Error>>;
@@ -275,7 +259,7 @@ impl tonic::client::GrpcService<tonic::body::Body> for MockSvc {
unreachable!()
}
fn call(&mut self, _req: http::Request<tonic::body::Body>) -> Self::Future {
fn call(&mut self, _req: http::Request<tonic::body::BoxBody>) -> Self::Future {
unreachable!()
}
}

View File

@@ -27,10 +27,10 @@ impl Inbound<()> {
limits: ReceiveLimits,
) -> impl policy::GetPolicy + Clone + Send + Sync + 'static
where
C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
C: Clone + Unpin + Send + Sync + 'static,
C::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Send + 'static,
C::ResponseBody: http::HttpBody<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Default + Send + 'static,
C::Future: Send,
{
self.config
@@ -55,8 +55,6 @@ impl Inbound<()> {
I: Debug + Unpin + Send + Sync + 'static,
P: profiles::GetProfile<Error = Error>,
{
let detect_metrics = self.runtime.metrics.detect.clone();
// Handles connections to ports that can't be determined to be HTTP.
let forward = self
.clone()
@@ -99,7 +97,7 @@ impl Inbound<()> {
// Determines how to handle an inbound connection, dispatching it to the appropriate
// stack.
http.push_http_tcp_server()
.push_detect(detect_metrics, forward)
.push_detect(forward)
.push_accept(addr.port(), policies, direct)
.into_inner()
}

View File

@@ -3,12 +3,14 @@ pub use futures::prelude::*;
use linkerd_app_core::{
config,
dns::Suffix,
drain, exp_backoff, identity, metrics,
drain, exp_backoff,
identity::rustls,
metrics,
proxy::{
http::{h1, h2},
tap,
},
transport::{DualListenAddr, Keepalive, UserTimeout},
transport::{Keepalive, ListenAddr},
ProxyRuntime,
};
pub use linkerd_app_test as support;
@@ -44,7 +46,6 @@ pub fn default_config() -> Config {
kind: "server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Arc::new(Default::default()),
}
.into(),
ports: Default::default(),
@@ -56,14 +57,12 @@ pub fn default_config() -> Config {
allow_discovery: Some(cluster_local).into_iter().collect(),
proxy: config::ProxyConfig {
server: config::ServerConfig {
addr: DualListenAddr(([0, 0, 0, 0], 0).into(), None),
addr: ListenAddr(([0, 0, 0, 0], 0).into()),
keepalive: Keepalive(None),
user_timeout: UserTimeout(None),
http2: h2::ServerParams::default(),
h2_settings: h2::Settings::default(),
},
connect: config::ConnectConfig {
keepalive: Keepalive(None),
user_timeout: UserTimeout(None),
timeout: Duration::from_secs(1),
backoff: exp_backoff::ExponentialBackoff::try_new(
Duration::from_millis(100),
@@ -71,11 +70,11 @@
0.1,
)
.unwrap(),
http1: h1::PoolSettings {
h1_settings: h1::PoolSettings {
max_idle: 1,
idle_timeout: Duration::from_secs(1),
},
http2: h2::ClientParams::default(),
h2_settings: h2::Settings::default(),
},
max_in_flight_requests: 10_000,
detect_protocol_timeout: Duration::from_secs(10),
@@ -87,7 +86,6 @@
},
discovery_idle_timeout: Duration::from_secs(20),
profile_skip_timeout: Duration::from_secs(1),
unsafe_authority_labels: false,
}
}
@@ -96,7 +94,7 @@ pub fn runtime() -> (ProxyRuntime, drain::Signal) {
let (tap, _) = tap::new();
let (metrics, _) = metrics::Metrics::new(std::time::Duration::from_secs(10));
let runtime = ProxyRuntime {
identity: identity::creds::default_for_test().1,
identity: rustls::creds::default_for_test().1.into(),
metrics: metrics.proxy,
tap,
span_sink: None,

View File

@ -1,10 +1,10 @@
[package]
name = "linkerd-app-integration"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Proxy integration tests
@ -17,56 +17,43 @@ default = []
flakey = []
[dependencies]
bytes = { workspace = true }
bytes = "1"
futures = { version = "0.3", default-features = false, features = ["executor"] }
h2 = { workspace = true }
http = { workspace = true }
http-body = { workspace = true }
http-body-util = { workspace = true }
hyper-util = { workspace = true, features = ["service"] }
h2 = "0.3"
http = "0.2"
http-body = "0.4"
hyper = { version = "0.14", features = [
"http1",
"http2",
"stream",
"client",
"server",
] }
ipnet = "2"
linkerd-app = { path = "..", features = ["allow-loopback"] }
linkerd-app-core = { path = "../core" }
linkerd-app-test = { path = "../test" }
linkerd-meshtls = { path = "../../meshtls", features = ["test-util"] }
linkerd-metrics = { path = "../../metrics", features = ["test_util"] }
linkerd-rustls = { path = "../../rustls" }
linkerd2-proxy-api = { version = "0.13", features = [
"destination",
"arbitrary",
] }
linkerd-app-test = { path = "../test" }
linkerd-tracing = { path = "../../tracing" }
maplit = "1"
parking_lot = "0.12"
regex = "1"
rustls-pemfile = "2.2"
socket2 = "0.6"
socket2 = "0.5"
tokio = { version = "1", features = ["io-util", "net", "rt", "macros"] }
tokio-rustls = { workspace = true }
tokio-stream = { version = "0.1", features = ["sync"] }
tonic = { workspace = true, features = ["transport", "router"], default-features = false }
tower = { workspace = true, default-features = false }
tracing = { workspace = true }
[dependencies.hyper]
workspace = true
features = [
"client",
"http1",
"http2",
"server",
]
[dependencies.linkerd2-proxy-api]
workspace = true
features = [
"arbitrary",
"destination",
]
[dependencies.tracing-subscriber]
version = "0.3"
default-features = false
features = [
tokio-rustls = "0.24"
rustls-pemfile = "1.0"
tower = { version = "0.4", default-features = false }
tonic = { version = "0.10", features = ["transport"], default-features = false }
tracing = "0.1"
tracing-subscriber = { version = "0.3", default-features = false, features = [
"fmt",
"std",
]
] }
[dev-dependencies]
flate2 = { version = "1", default-features = false, features = [
@ -74,5 +61,8 @@ flate2 = { version = "1", default-features = false, features = [
] }
# Log streaming isn't enabled by default globally, but we want to test it.
linkerd-app-admin = { path = "../admin", features = ["log-streaming"] }
# No code from this crate is actually used; only necessary to enable the Rustls
# implementation.
linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
serde_json = "1"

View File

@ -1,28 +1,26 @@
use super::*;
use http::{Request, Response};
use linkerd_app_core::{proxy::http::TokioExecutor, svc::http::BoxBody};
use linkerd_app_core::proxy::http::trace;
use parking_lot::Mutex;
use std::io;
use tokio::{net::TcpStream, task::JoinHandle};
use tokio::net::TcpStream;
use tokio::task::JoinHandle;
use tokio_rustls::rustls::{self, ClientConfig};
use tracing::info_span;
type ClientError = hyper_util::client::legacy::Error;
type Sender = mpsc::UnboundedSender<(
Request<BoxBody>,
oneshot::Sender<Result<Response<hyper::body::Incoming>, ClientError>>,
)>;
type ClientError = hyper::Error;
type Request = http::Request<hyper::Body>;
type Response = http::Response<hyper::Body>;
type Sender = mpsc::UnboundedSender<(Request, oneshot::Sender<Result<Response, ClientError>>)>;
#[derive(Clone)]
pub struct TlsConfig {
client_config: Arc<ClientConfig>,
name: rustls::pki_types::ServerName<'static>,
name: rustls::ServerName,
}
impl TlsConfig {
pub fn new(client_config: Arc<ClientConfig>, name: &'static str) -> Self {
let name =
rustls::pki_types::ServerName::try_from(name).expect("name must be a valid DNS name");
pub fn new(client_config: Arc<ClientConfig>, name: &str) -> Self {
let name = rustls::ServerName::try_from(name).expect("name must be a valid DNS name");
TlsConfig {
client_config,
name,
@ -76,6 +74,9 @@ pub fn http2_tls<T: Into<String>>(addr: SocketAddr, auth: T, tls: TlsConfig) ->
Client::new(addr, auth.into(), Run::Http2, Some(tls))
}
pub fn tcp(addr: SocketAddr) -> tcp::TcpClient {
tcp::client(addr)
}
pub struct Client {
addr: SocketAddr,
run: Run,
@ -131,19 +132,11 @@ impl Client {
pub fn request(
&self,
builder: http::request::Builder,
) -> impl Future<Output = Result<Response<hyper::body::Incoming>, ClientError>> + Send + 'static
{
let req = builder.body(BoxBody::empty()).unwrap();
self.send_req(req)
) -> impl Future<Output = Result<Response, ClientError>> + Send + Sync + 'static {
self.send_req(builder.body(Bytes::new().into()).unwrap())
}
pub async fn request_body<B>(&self, req: Request<B>) -> Response<hyper::body::Incoming>
where
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Into<Error>,
{
let req = req.map(BoxBody::new);
pub async fn request_body(&self, req: Request) -> Response {
self.send_req(req).await.expect("response")
}
@ -159,16 +152,11 @@ impl Client {
}
}
#[tracing::instrument(skip(self, req))]
pub(crate) fn send_req<B>(
#[tracing::instrument(skip(self))]
pub(crate) fn send_req(
&self,
mut req: Request<B>,
) -> impl Future<Output = Result<Response<hyper::body::Incoming>, ClientError>> + Send + 'static
where
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Into<Error>,
{
mut req: Request,
) -> impl Future<Output = Result<Response, ClientError>> + Send + Sync + 'static {
if req.uri().scheme().is_none() {
if self.tls.is_some() {
*req.uri_mut() = format!("https://{}{}", self.authority, req.uri().path())
@ -182,8 +170,7 @@ impl Client {
}
tracing::debug!(headers = ?req.headers(), "request");
let (tx, rx) = oneshot::channel();
let req = req.map(BoxBody::new);
let _ = self.tx.send((req, tx));
let _ = self.tx.send((req.map(Into::into), tx));
async { rx.await.expect("request cancelled") }.in_current_span()
}
@ -233,17 +220,13 @@ enum Run {
Http2,
}
pub type Running = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;
fn run(
addr: SocketAddr,
version: Run,
tls: Option<TlsConfig>,
) -> (Sender, JoinHandle<()>, Running) {
let (tx, rx) = mpsc::unbounded_channel::<(
Request<BoxBody>,
oneshot::Sender<Result<Response<hyper::body::Incoming>, ClientError>>,
)>();
let (tx, rx) =
mpsc::unbounded_channel::<(Request, oneshot::Sender<Result<Response, ClientError>>)>();
let test_name = thread_name();
let absolute_uris = if let Run::Http1 { absolute_uris } = version {
@ -252,12 +235,7 @@ fn run(
false
};
let (running_tx, running) = {
let (tx, rx) = oneshot::channel();
let rx = Box::pin(rx.map(|_| ()));
(tx, rx)
};
let (running_tx, running) = running();
let conn = Conn {
addr,
absolute_uris,
@ -272,9 +250,10 @@ fn run(
let span = info_span!("test client", peer_addr = %addr, ?version, test = %test_name);
let work = async move {
let client = hyper_util::client::legacy::Client::builder(TokioExecutor::new())
let client = hyper::Client::builder()
.http2_only(http2_only)
.build::<Conn, BoxBody>(conn);
.executor(trace::Executor::new())
.build::<Conn, hyper::Body>(conn);
tracing::trace!("client task started");
let mut rx = rx;
let (drain_tx, drain) = drain::channel();
@ -284,6 +263,7 @@ fn run(
// instance would remain un-dropped.
async move {
while let Some((req, cb)) = rx.recv().await {
let req = req.map(hyper::Body::from);
tracing::trace!(?req);
let req = client.request(req);
tokio::spawn(
@ -315,11 +295,9 @@ struct Conn {
}
impl tower::Service<hyper::Uri> for Conn {
type Response = hyper_util::rt::TokioIo<RunningIo>;
type Response = RunningIo;
type Error = io::Error;
type Future = Pin<
Box<dyn Future<Output = io::Result<hyper_util::rt::TokioIo<RunningIo>>> + Send + 'static>,
>;
type Future = Pin<Box<dyn Future<Output = io::Result<RunningIo>> + Send + 'static>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
@ -349,19 +327,19 @@ impl tower::Service<hyper::Uri> for Conn {
} else {
Box::pin(io) as Pin<Box<dyn Io + Send + 'static>>
};
Ok(hyper_util::rt::TokioIo::new(RunningIo {
Ok(RunningIo {
io,
abs_form,
_running: Some(running),
}))
})
})
}
}
impl hyper_util::client::legacy::connect::Connection for RunningIo {
fn connected(&self) -> hyper_util::client::legacy::connect::Connected {
impl hyper::client::connect::Connection for RunningIo {
fn connected(&self) -> hyper::client::connect::Connected {
// Setting `proxy` to true will configure Hyper to use absolute-form
// URIs on this connection.
hyper_util::client::legacy::connect::Connected::new().proxy(self.abs_form)
hyper::client::connect::Connected::new().proxy(self.abs_form)
}
}
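The test client above hands each request, paired with a one-shot reply channel, to a background task over an unbounded channel. A std-only sketch of that request/reply pattern (the echo worker is a hypothetical stand-in for the hyper client task; the real code uses tokio channels):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a worker that answers each request over the per-request reply
// channel it arrives with, mirroring the (Request, oneshot::Sender)
// pairs the test client sends to its background task.
fn spawn_echo_worker() -> mpsc::Sender<(String, mpsc::Sender<String>)> {
    let (tx, rx) = mpsc::channel::<(String, mpsc::Sender<String>)>();
    thread::spawn(move || {
        // The loop ends when every request sender is dropped, just as the
        // client task above exits when its `rx` closes.
        for (req, reply) in rx {
            let _ = reply.send(format!("echo: {req}"));
        }
    });
    tx
}
```

Carrying the reply sender inside the message is what lets one worker serve many callers without any shared response state.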

View File

@ -2,7 +2,7 @@ use super::*;
pub use linkerd2_proxy_api::destination as pb;
use linkerd2_proxy_api::net;
use linkerd_app_core::proxy::http::TokioExecutor;
use linkerd_app_core::proxy::http::trace;
use parking_lot::Mutex;
use std::collections::VecDeque;
use std::net::IpAddr;
@ -262,7 +262,10 @@ impl pb::destination_server::Destination for Controller {
}
tracing::warn!(?dst, ?updates, "request does not match");
let msg = format!("expected get call for {dst:?} but got get call for {req:?}");
let msg = format!(
"expected get call for {:?} but got get call for {:?}",
dst, req
);
calls.push_front(Dst::Call(dst, updates));
return Err(grpc::Status::new(grpc::Code::Unavailable, msg));
}
@ -340,7 +343,7 @@ pub(crate) async fn run<T, B>(
delay: Option<Pin<Box<dyn Future<Output = ()> + Send>>>,
) -> Listening
where
T: tower::Service<http::Request<hyper::body::Incoming>, Response = http::Response<B>>,
T: tower::Service<http::Request<hyper::body::Body>, Response = http::Response<B>>,
T: Clone + Send + 'static,
T::Error: Into<Box<dyn std::error::Error + Send + Sync>>,
T::Future: Send,
@ -369,16 +372,12 @@ where
let _ = listening_tx.send(());
}
let mut http = hyper::server::conn::http2::Builder::new(TokioExecutor::new());
let mut http = hyper::server::conn::Http::new().with_executor(trace::Executor::new());
http.http2_only(true);
loop {
let (sock, addr) = listener.accept().await?;
let span = tracing::debug_span!("conn", %addr).or_current();
let serve = http
.timer(hyper_util::rt::TokioTimer::new())
.serve_connection(
hyper_util::rt::TokioIo::new(sock),
hyper_util::service::TowerToHyperService::new(svc.clone()),
);
let serve = http.serve_connection(sock, svc.clone());
let f = async move {
serve.await.map_err(|error| {
tracing::error!(
@ -530,8 +529,6 @@ impl From<DestinationBuilder> for pb::Update {
protocol_hint,
tls_identity,
authority_override: None,
http2: None,
resource_ref: None,
}],
metric_labels: set_labels,
})),
@ -612,11 +609,7 @@ pub fn retry_budget(
}
pub fn dst_override(authority: String, weight: u32) -> pb::WeightedDst {
pb::WeightedDst {
authority,
weight,
backend_ref: None,
}
pb::WeightedDst { authority, weight }
}
pub fn route() -> RouteBuilder {

View File

@ -8,8 +8,7 @@ use std::{
};
use linkerd2_proxy_api::identity as pb;
use linkerd_rustls::get_default_provider;
use tokio_rustls::rustls::{self, server::WebPkiClientVerifier};
use tokio_rustls::rustls;
use tonic as grpc;
pub struct Identity {
@ -35,6 +34,10 @@ type Certify = Box<
> + Send,
>;
static TLS_VERSIONS: &[&rustls::SupportedProtocolVersion] = &[&rustls::version::TLS13];
static TLS_SUPPORTED_CIPHERSUITES: &[rustls::SupportedCipherSuite] =
&[rustls::cipher_suite::TLS13_CHACHA20_POLY1305_SHA256];
struct Certificates {
pub leaf: Vec<u8>,
pub intermediates: Vec<Vec<u8>>,
@ -47,17 +50,11 @@ impl Certificates {
{
let f = fs::File::open(p)?;
let mut r = io::BufReader::new(f);
let mut certs = rustls_pemfile::certs(&mut r);
let leaf = certs
.next()
.expect("no leaf cert in pemfile")
.map_err(|_| io::Error::other("rustls error reading certs"))?
.as_ref()
.to_vec();
let intermediates = certs
.map(|cert| cert.map(|cert| cert.as_ref().to_vec()))
.collect::<Result<Vec<_>, _>>()
.map_err(|_| io::Error::other("rustls error reading certs"))?;
let mut certs = rustls_pemfile::certs(&mut r)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "rustls error reading certs"))?;
let mut certs = certs.drain(..);
let leaf = certs.next().expect("no leaf cert in pemfile");
let intermediates = certs.collect();
Ok(Certificates {
leaf,
@ -65,14 +62,11 @@ impl Certificates {
})
}
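`Certificates::load` above treats the first PEM block as the leaf certificate and the remaining blocks as intermediates. A std-only sketch of that split over the raw PEM text (a hypothetical helper, not the `rustls-pemfile` API, which also base64-decodes each block):

```rust
// Collect each BEGIN/END CERTIFICATE block from a PEM bundle, in order:
// index 0 is the leaf, the rest are intermediates.
fn split_pem_certs(pem: &str) -> Vec<String> {
    const BEGIN: &str = "-----BEGIN CERTIFICATE-----";
    const END: &str = "-----END CERTIFICATE-----";
    let mut blocks = Vec::new();
    let mut rest = pem;
    while let Some(start) = rest.find(BEGIN) {
        let from_begin = &rest[start..];
        // A BEGIN without a matching END means a truncated bundle; stop.
        let Some(end) = from_begin.find(END) else { break };
        let end = end + END.len();
        blocks.push(from_begin[..end].to_string());
        rest = &from_begin[end..];
    }
    blocks
}
```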
pub fn chain(&self) -> Vec<rustls::pki_types::CertificateDer<'static>> {
pub fn chain(&self) -> Vec<rustls::Certificate> {
let mut chain = Vec::with_capacity(self.intermediates.len() + 1);
chain.push(self.leaf.clone());
chain.extend(self.intermediates.clone());
chain
.into_iter()
.map(rustls::pki_types::CertificateDer::from)
.collect()
chain.into_iter().map(rustls::Certificate).collect()
}
pub fn response(&self) -> pb::CertifyResponse {
@ -85,46 +79,43 @@ impl Certificates {
}
impl Identity {
fn load_key<P>(p: P) -> rustls::pki_types::PrivateKeyDer<'static>
fn load_key<P>(p: P) -> rustls::PrivateKey
where
P: AsRef<Path>,
{
let p8 = fs::read(&p).expect("read key");
rustls::pki_types::PrivateKeyDer::try_from(p8).expect("decode key")
rustls::PrivateKey(p8)
}
fn configs(
trust_anchors: &str,
certs: &Certificates,
key: rustls::pki_types::PrivateKeyDer<'static>,
key: rustls::PrivateKey,
) -> (Arc<rustls::ClientConfig>, Arc<rustls::ServerConfig>) {
use std::io::Cursor;
let mut roots = rustls::RootCertStore::empty();
let trust_anchors = rustls_pemfile::certs(&mut Cursor::new(trust_anchors))
.collect::<Result<Vec<_>, _>>()
.expect("error parsing pemfile");
let (added, skipped) = roots.add_parsable_certificates(trust_anchors);
let trust_anchors =
rustls_pemfile::certs(&mut Cursor::new(trust_anchors)).expect("error parsing pemfile");
let (added, skipped) = roots.add_parsable_certificates(&trust_anchors[..]);
assert_ne!(added, 0, "trust anchors must include at least one cert");
assert_eq!(skipped, 0, "no certs in pemfile should be invalid");
let provider = get_default_provider();
let client_config = rustls::ClientConfig::builder_with_provider(provider.clone())
.with_safe_default_protocol_versions()
let client_config = rustls::ClientConfig::builder()
.with_cipher_suites(TLS_SUPPORTED_CIPHERSUITES)
.with_safe_default_kx_groups()
.with_protocol_versions(TLS_VERSIONS)
.expect("client config must be valid")
.with_root_certificates(roots.clone())
.with_no_client_auth();
let client_cert_verifier =
WebPkiClientVerifier::builder_with_provider(Arc::new(roots), provider.clone())
.allow_unauthenticated()
.build()
.expect("server verifier must be valid");
let server_config = rustls::ServerConfig::builder_with_provider(provider)
.with_safe_default_protocol_versions()
let server_config = rustls::ServerConfig::builder()
.with_cipher_suites(TLS_SUPPORTED_CIPHERSUITES)
.with_safe_default_kx_groups()
.with_protocol_versions(TLS_VERSIONS)
.expect("server config must be valid")
.with_client_cert_verifier(client_cert_verifier)
.with_client_cert_verifier(Arc::new(
rustls::server::AllowAnyAnonymousOrAuthenticatedClient::new(roots),
))
.with_single_cert(certs.chain(), key)
.unwrap();
@ -213,7 +204,7 @@ impl Controller {
let f = f.take().expect("called twice?");
let fut = f(req)
.map_ok(grpc::Response::new)
.map_err(|e| grpc::Status::new(grpc::Code::Internal, format!("{e}")));
.map_err(|e| grpc::Status::new(grpc::Code::Internal, format!("{}", e)));
Box::pin(fut)
});
self.expect_calls.lock().push_back(func);

View File

@ -3,7 +3,6 @@
#![warn(rust_2018_idioms, clippy::disallowed_methods, clippy::disallowed_types)]
#![forbid(unsafe_code)]
#![recursion_limit = "256"]
#![allow(clippy::result_large_err)]
mod test_env;
@ -27,9 +26,9 @@ pub use bytes::{Buf, BufMut, Bytes};
pub use futures::stream::{Stream, StreamExt};
pub use futures::{future, FutureExt, TryFuture, TryFutureExt};
pub use http::{HeaderMap, Request, Response, StatusCode};
pub use http_body::Body;
pub use http_body::Body as HttpBody;
pub use linkerd_app as app;
pub use linkerd_app_core::{drain, Addr, Error};
pub use linkerd_app_core::{drain, Addr};
pub use linkerd_app_test::*;
pub use linkerd_tracing::test::*;
use socket2::Socket;
@ -51,6 +50,8 @@ pub use tower::Service;
pub const ENV_TEST_PATIENCE_MS: &str = "RUST_TEST_PATIENCE_MS";
pub const DEFAULT_TEST_PATIENCE: Duration = Duration::from_millis(15);
pub type Error = Box<dyn std::error::Error + Send + Sync + 'static>;
/// Retry an assertion up to a specified number of times, waiting
/// `RUST_TEST_PATIENCE_MS` between retries.
///
@ -72,7 +73,7 @@ pub const DEFAULT_TEST_PATIENCE: Duration = Duration::from_millis(15);
macro_rules! assert_eventually {
($cond:expr, retries: $retries:expr, $($arg:tt)+) => {
{
use std::{env};
use std::{env, u64};
use std::str::FromStr;
use tokio::time::{Instant, Duration};
use tracing::Instrument as _;
@ -218,6 +219,15 @@ impl Shutdown {
pub type ShutdownRx = Pin<Box<dyn Future<Output = ()> + Send>>;
/// A channel used to signal when a Client's related connection is running or closed.
pub fn running() -> (oneshot::Sender<()>, Running) {
let (tx, rx) = oneshot::channel();
let rx = Box::pin(rx.map(|_| ()));
(tx, rx)
}
pub type Running = Pin<Box<dyn Future<Output = ()> + Send + Sync + 'static>>;
pub fn s(bytes: &[u8]) -> &str {
::std::str::from_utf8(bytes).unwrap()
}
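The `assert_eventually!` macro documented above retries an assertion, pausing `RUST_TEST_PATIENCE_MS` between attempts. A std-only sketch of the same retry-with-patience loop, written as a plain function rather than a macro (the 15ms default mirrors `DEFAULT_TEST_PATIENCE`; the blocking sleep is an assumption — the real macro awaits a tokio timer):

```rust
use std::{env, thread, time::Duration};

// Retry `cond` up to `retries` additional times, sleeping the patience
// interval between attempts; RUST_TEST_PATIENCE_MS overrides the default.
fn assert_eventually(mut cond: impl FnMut() -> bool, retries: usize) {
    let patience = env::var("RUST_TEST_PATIENCE_MS")
        .ok()
        .and_then(|ms| ms.parse().ok())
        .map(Duration::from_millis)
        .unwrap_or(Duration::from_millis(15));
    for attempt in 0..=retries {
        if cond() {
            return;
        }
        if attempt < retries {
            thread::sleep(patience);
        }
    }
    panic!("condition not met after {retries} retries");
}
```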
@ -248,7 +258,7 @@ impl fmt::Display for HumanDuration {
let secs = self.0.as_secs();
let subsec_ms = self.0.subsec_nanos() as f64 / 1_000_000f64;
if secs == 0 {
write!(fmt, "{subsec_ms}ms")
write!(fmt, "{}ms", subsec_ms)
} else {
write!(fmt, "{}s", secs as f64 + subsec_ms)
}
@ -257,7 +267,7 @@ impl fmt::Display for HumanDuration {
pub async fn cancelable<E: Send + 'static>(
drain: drain::Watch,
f: impl Future<Output = Result<(), E>>,
f: impl Future<Output = Result<(), E>> + Send + 'static,
) -> Result<(), E> {
tokio::select! {
res = f => res,

View File

@ -192,8 +192,8 @@ impl MetricMatch {
}
pub async fn assert_in(&self, client: &crate::client::Client) {
use std::env;
use std::str::FromStr;
use std::{env, u64};
use tokio::time::{Duration, Instant};
use tracing::Instrument as _;
const MAX_RETRIES: usize = 5;

View File

@ -2,7 +2,6 @@ use super::*;
pub use api::{inbound, outbound};
use api::{inbound::inbound_server_policies_server, outbound::outbound_policies_server};
use futures::stream;
use http_body_util::combinators::UnsyncBoxBody;
use linkerd2_proxy_api as api;
use parking_lot::Mutex;
use std::collections::VecDeque;
@ -35,9 +34,6 @@ pub struct InboundSender(Tx<inbound::Server>);
#[derive(Debug, Clone)]
pub struct OutboundSender(Tx<outbound::OutboundPolicy>);
#[derive(Clone)]
struct RoutesSvc(grpc::service::Routes);
type Tx<T> = mpsc::UnboundedSender<Result<T, grpc::Status>>;
type Rx<T> = UnboundedReceiverStream<Result<T, grpc::Status>>;
type WatchStream<T> = Pin<Box<dyn Stream<Item = Result<T, grpc::Status>> + Send + Sync + 'static>>;
@ -49,7 +45,6 @@ pub fn all_unauthenticated() -> inbound::Server {
inbound::proxy_protocol::Detect {
timeout: Some(Duration::from_secs(10).try_into().unwrap()),
http_routes: vec![],
http_local_rate_limit: None,
},
)),
}),
@ -123,11 +118,11 @@ pub fn outbound_default(dst: impl ToString) -> outbound::OutboundPolicy {
timeout: Some(Duration::from_secs(10).try_into().unwrap()),
http1: Some(proxy_protocol::Http1 {
routes: vec![route.clone()],
..Default::default()
failure_accrual: None,
}),
http2: Some(proxy_protocol::Http2 {
routes: vec![route],
..Default::default()
failure_accrual: None,
}),
opaque: Some(proxy_protocol::Opaque {
routes: vec![outbound_default_opaque_route(dst)],
@ -155,7 +150,7 @@ pub fn outbound_default_http_route(dst: impl ToString) -> outbound::HttpRoute {
}],
filters: Vec::new(),
backends: Some(http_first_available(std::iter::once(backend(dst)))),
..Default::default()
request_timeout: None,
}],
}
}
@ -172,12 +167,10 @@ pub fn outbound_default_opaque_route(dst: impl ToString) -> outbound::OpaqueRout
distribution::FirstAvailable {
backends: vec![opaque_route::RouteBackend {
backend: Some(backend(dst)),
filters: Vec::new(),
}],
},
)),
}),
filters: Vec::new(),
}],
}
}
@ -221,7 +214,7 @@ pub fn http_first_available(
.map(|backend| http_route::RouteBackend {
backend: Some(backend),
filters: Vec::new(),
..Default::default()
request_timeout: None,
})
.collect(),
},
@ -302,7 +295,7 @@ impl Controller {
}
pub async fn run(self) -> controller::Listening {
let routes = grpc::service::Routes::default()
let svc = grpc::transport::Server::builder()
.add_service(
inbound_server_policies_server::InboundServerPoliciesServer::new(Server(Arc::new(
self.inbound,
@ -310,9 +303,9 @@ impl Controller {
)
.add_service(outbound_policies_server::OutboundPoliciesServer::new(
Server(Arc::new(self.outbound)),
));
controller::run(RoutesSvc(routes), "support policy controller", None).await
))
.into_service();
controller::run(svc, "support policy controller", None).await
}
}
@ -513,35 +506,6 @@ impl<Req, Rsp> Inner<Req, Rsp> {
}
}
// === impl RoutesSvc ===
impl Service<Request<hyper::body::Incoming>> for RoutesSvc {
type Response =
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::Response;
type Error =
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::Error;
type Future =
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::Future;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
let Self(routes) = self;
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::poll_ready(
routes, cx,
)
}
fn call(&mut self, req: Request<hyper::body::Incoming>) -> Self::Future {
use http_body_util::{combinators::UnsyncBoxBody, BodyExt};
let Self(routes) = self;
let req = req.map(|body| {
UnsyncBoxBody::new(body.map_err(|err| grpc::Status::from_error(Box::new(err))))
});
routes.call(req)
}
}
fn grpc_no_results() -> grpc::Status {
grpc::Status::new(
grpc::Code::NotFound,

View File

@ -1,9 +1,8 @@
use super::*;
use linkerd_app_core::{
svc::Param,
transport::{
listen, orig_dst, Keepalive, ListenAddr, Local, OrigDstAddr, ServerAddr, UserTimeout,
},
transport::OrigDstAddr,
transport::{listen, orig_dst, Keepalive, ListenAddr},
Result,
};
use std::{collections::HashSet, thread};
@ -22,7 +21,7 @@ pub struct Proxy {
/// Inbound/outbound addresses helpful for mocking connections that do not
/// implement `server::Listener`.
inbound: MockOrigDst,
outbound: MockDualOrigDst,
outbound: MockOrigDst,
/// Inbound/outbound addresses for mocking connections that implement
/// `server::Listener`.
@ -61,24 +60,18 @@ enum MockOrigDst {
None,
}
#[derive(Copy, Clone, Debug, Default)]
struct MockDualOrigDst {
inner: MockOrigDst,
}
// === impl MockOrigDst ===
impl<T> listen::Bind<T> for MockOrigDst
where
T: Param<Keepalive> + Param<UserTimeout> + Param<ListenAddr>,
T: Param<Keepalive> + Param<ListenAddr>,
{
type Addrs = orig_dst::Addrs;
type BoundAddrs = Local<ServerAddr>;
type Io = tokio::net::TcpStream;
type Incoming =
Pin<Box<dyn Stream<Item = Result<(orig_dst::Addrs, TcpStream)>> + Send + Sync + 'static>>;
fn bind(self, params: &T) -> Result<(Self::BoundAddrs, Self::Incoming)> {
fn bind(self, params: &T) -> Result<listen::Bound<Self::Incoming>> {
let (bound, incoming) = listen::BindTcp::default().bind(params)?;
let incoming = Box::pin(incoming.map(move |res| {
let (inner, tcp) = res?;
@ -108,7 +101,7 @@ impl fmt::Debug for MockOrigDst {
match self {
Self::Addr(addr) => f
.debug_tuple("MockOrigDst::Addr")
.field(&format_args!("{addr}"))
.field(&format_args!("{}", addr))
.finish(),
Self::Direct => f.debug_tuple("MockOrigDst::Direct").finish(),
Self::None => f.debug_tuple("MockOrigDst::None").finish(),
@ -116,24 +109,6 @@ impl fmt::Debug for MockOrigDst {
}
}
// === impl MockDualOrigDst ===
impl<T> listen::Bind<T> for MockDualOrigDst
where
T: Param<Keepalive> + Param<UserTimeout> + Param<ListenAddr>,
{
type Addrs = orig_dst::Addrs;
type BoundAddrs = (Local<ServerAddr>, Option<Local<ServerAddr>>);
type Io = tokio::net::TcpStream;
type Incoming =
Pin<Box<dyn Stream<Item = Result<(orig_dst::Addrs, TcpStream)>> + Send + Sync + 'static>>;
fn bind(self, params: &T) -> Result<(Self::BoundAddrs, Self::Incoming)> {
let (bound, incoming) = self.inner.bind(params)?;
Ok(((bound, None), incoming))
}
}
// === impl Proxy ===
impl Proxy {
@ -188,17 +163,13 @@ impl Proxy {
}
pub fn outbound(mut self, s: server::Listening) -> Self {
self.outbound = MockDualOrigDst {
inner: MockOrigDst::Addr(s.addr),
};
self.outbound = MockOrigDst::Addr(s.addr);
self.outbound_server = Some(s);
self
}
pub fn outbound_ip(mut self, s: SocketAddr) -> Self {
self.outbound = MockDualOrigDst {
inner: MockOrigDst::Addr(s),
};
self.outbound = MockOrigDst::Addr(s);
self
}
@ -416,9 +387,9 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
use std::fmt::Write;
let mut ports = inbound_default_ports.iter();
if let Some(port) = ports.next() {
let mut var = format!("{port}");
let mut var = format!("{}", port);
for port in ports {
write!(&mut var, ",{port}").expect("writing to String should never fail");
write!(&mut var, ",{}", port).expect("writing to String should never fail");
}
info!("{}={:?}", app::env::ENV_INBOUND_PORTS, var);
env.put(app::env::ENV_INBOUND_PORTS, var);
@ -486,14 +457,7 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
let bind_adm = listen::BindTcp::default();
let (shutdown_tx, mut shutdown_rx) = tokio::sync::mpsc::unbounded_channel();
let main = config
.build(
bind_in,
bind_out,
bind_adm,
shutdown_tx,
trace_handle,
Default::default(),
)
.build(bind_in, bind_out, bind_adm, shutdown_tx, trace_handle)
.await
.expect("config");
@ -505,7 +469,6 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
identity_addr,
main.inbound_addr(),
main.outbound_addr(),
main.outbound_addr_additional(),
main.admin_addr(),
);
let mut running = Some((running_tx, addrs));
@ -561,14 +524,8 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
})
.expect("spawn");
let (
tap_addr,
identity_addr,
inbound_addr,
outbound_addr,
outbound_addr_additional,
admin_addr,
) = running_rx.await.unwrap();
let (tap_addr, identity_addr, inbound_addr, outbound_addr, admin_addr) =
running_rx.await.unwrap();
tracing::info!(
tap.addr = ?tap_addr,
@ -576,7 +533,6 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
inbound.addr = ?inbound_addr,
inbound.orig_dst = ?inbound,
outbound.addr = ?outbound_addr,
outbound.addr.additional = ?outbound_addr_additional,
outbound.orig_dst = ?outbound,
metrics.addr = ?admin_addr,
);

View File

@ -1,7 +1,5 @@
use super::app_core::svc::http::TokioExecutor;
use super::*;
use http::{Request, Response};
use linkerd_app_core::svc::http::BoxBody;
use linkerd_app_core::proxy::http::trace;
use std::{
io,
sync::atomic::{AtomicUsize, Ordering},
@ -14,35 +12,23 @@ pub fn new() -> Server {
}
pub fn http1() -> Server {
Server {
routes: Default::default(),
version: Run::Http1,
tls: None,
}
Server::http1()
}
pub fn http1_tls(tls: Arc<ServerConfig>) -> Server {
Server {
routes: Default::default(),
version: Run::Http1,
tls: Some(tls),
}
Server::http1_tls(tls)
}
pub fn http2() -> Server {
Server {
routes: Default::default(),
version: Run::Http2,
tls: None,
}
Server::http2()
}
pub fn http2_tls(tls: Arc<ServerConfig>) -> Server {
Server {
routes: Default::default(),
version: Run::Http2,
tls: Some(tls),
}
Server::http2_tls(tls)
}
pub fn tcp() -> tcp::TcpServer {
tcp::server()
}
pub struct Server {
@ -59,8 +45,9 @@ pub struct Listening {
pub(super) http_version: Option<Run>,
}
type RspFuture<B = BoxBody> =
Pin<Box<dyn Future<Output = Result<Response<B>, Error>> + Send + 'static>>;
type Request = http::Request<hyper::Body>;
type Response = http::Response<hyper::Body>;
type RspFuture = Pin<Box<dyn Future<Output = Result<Response, BoxError>> + Send + Sync + 'static>>;
impl Listening {
pub fn connections(&self) -> usize {
@ -105,6 +92,29 @@ impl Listening {
}
impl Server {
fn new(run: Run, tls: Option<Arc<ServerConfig>>) -> Self {
Server {
routes: HashMap::new(),
version: run,
tls,
}
}
fn http1() -> Self {
Server::new(Run::Http1, None)
}
fn http1_tls(tls: Arc<ServerConfig>) -> Self {
Server::new(Run::Http1, Some(tls))
}
fn http2() -> Self {
Server::new(Run::Http2, None)
}
fn http2_tls(tls: Arc<ServerConfig>) -> Self {
Server::new(Run::Http2, Some(tls))
}
/// Registers a route that responds with 200 OK and the given string as
/// the response body.
pub fn route(mut self, path: &str, resp: &str) -> Self {
@ -116,11 +126,11 @@ impl Server {
/// to send back.
pub fn route_fn<F>(self, path: &str, cb: F) -> Self
where
F: Fn(Request<BoxBody>) -> Response<BoxBody> + Send + Sync + 'static,
F: Fn(Request) -> Response + Send + Sync + 'static,
{
self.route_async(path, move |req| {
let res = cb(req);
async move { Ok::<_, Error>(res) }
async move { Ok::<_, BoxError>(res) }
})
}
@ -128,9 +138,9 @@ impl Server {
/// a response to send back.
pub fn route_async<F, U>(mut self, path: &str, cb: F) -> Self
where
F: Fn(Request<BoxBody>) -> U + Send + Sync + 'static,
U: TryFuture<Ok = Response<BoxBody>> + Send + 'static,
U::Error: Into<Error> + Send + 'static,
F: Fn(Request) -> U + Send + Sync + 'static,
U: TryFuture<Ok = Response> + Send + Sync + 'static,
U::Error: Into<BoxError> + Send + 'static,
{
let func = move |req| Box::pin(cb(req).map_err(Into::into)) as RspFuture;
self.routes.insert(path.into(), Route(Box::new(func)));
@ -138,17 +148,16 @@ impl Server {
}
pub fn route_with_latency(self, path: &str, resp: &str, latency: Duration) -> Self {
let body = resp.to_owned();
let resp = Bytes::from(resp.to_string());
self.route_async(path, move |_| {
let body = body.clone();
let resp = resp.clone();
async move {
tokio::time::sleep(latency).await;
Ok::<_, Error>(
Ok::<_, BoxError>(
http::Response::builder()
.status(StatusCode::OK)
.body(http_body_util::Full::new(Bytes::from(body.clone())))
.unwrap()
.map(BoxBody::new),
.status(200)
.body(hyper::Body::from(resp.clone()))
.unwrap(),
)
}
})
@ -184,7 +193,13 @@ impl Server {
drain.clone(),
async move {
tracing::info!("support server running");
let svc = Svc(Arc::new(self.routes));
let mut new_svc = NewSvc(Arc::new(self.routes));
let mut http =
hyper::server::conn::Http::new().with_executor(trace::Executor::new());
match self.version {
Run::Http1 => http.http1_only(true),
Run::Http2 => http.http2_only(true),
};
if let Some(delay) = delay {
let _ = listening_tx.take().unwrap().send(());
delay.await;
@ -203,41 +218,27 @@ impl Server {
let sock = accept_connection(sock, tls_config.clone())
.instrument(span.clone())
.await?;
let http = http.clone();
let srv_conn_count = srv_conn_count.clone();
let svc = svc.clone();
let svc = new_svc.call(());
let f = async move {
tracing::trace!("serving...");
let svc = svc.await;
tracing::trace!("service acquired");
srv_conn_count.fetch_add(1, Ordering::Release);
use hyper_util::{rt::TokioIo, service::TowerToHyperService};
let (sock, svc) = (TokioIo::new(sock), TowerToHyperService::new(svc));
let result = match self.version {
Run::Http1 => hyper::server::conn::http1::Builder::new()
.timer(hyper_util::rt::TokioTimer::new())
.serve_connection(sock, svc)
.await
.map_err(|e| tracing::error!("support/server error: {}", e)),
Run::Http2 => {
hyper::server::conn::http2::Builder::new(TokioExecutor::new())
.timer(hyper_util::rt::TokioTimer::new())
.serve_connection(sock, svc)
.await
.map_err(|e| tracing::error!("support/server error: {}", e))
}
};
let svc = svc.map_err(|e| {
tracing::error!("support/server new_service error: {}", e)
})?;
let result = http
.serve_connection(sock, svc)
.await
.map_err(|e| tracing::error!("support/server error: {}", e));
tracing::trace!(?result, "serve done");
result
};
// let fut = Box::pin(cancelable(drain.clone(), f).instrument(span.clone().or_current()))
let drain = drain.clone();
tokio::spawn(async move {
tokio::select! {
res = f => res,
_ = drain.signaled() => {
tracing::debug!("canceled!");
Ok(())
}
}
});
tokio::spawn(
cancelable(drain.clone(), f).instrument(span.clone().or_current()),
);
}
}
.instrument(
@@ -266,19 +267,17 @@ pub(super) enum Run {
Http2,
}
struct Route(Box<dyn Fn(Request<BoxBody>) -> RspFuture + Send + Sync>);
struct Route(Box<dyn Fn(Request) -> RspFuture + Send + Sync>);
impl Route {
fn string(body: &str) -> Route {
let body = http_body_util::Full::new(Bytes::from(body.to_string()));
let body = Bytes::from(body.to_string());
Route(Box::new(move |_| {
let body = body.clone();
Box::pin(future::ok(
http::Response::builder()
.status(StatusCode::OK)
.body(body)
.unwrap()
.map(BoxBody::new),
.status(200)
.body(hyper::Body::from(body.clone()))
.unwrap(),
))
}))
}
@@ -290,53 +289,58 @@ impl std::fmt::Debug for Route {
}
}
#[derive(Clone, Debug)]
type BoxError = Box<dyn std::error::Error + Send + Sync>;
#[derive(Debug)]
struct Svc(Arc<HashMap<String, Route>>);
impl Svc {
fn route<B>(
&mut self,
req: Request<B>,
) -> impl Future<Output = Result<Response<BoxBody>, crate::app_core::Error>> + Send
where
B: Body + Send + Sync + 'static,
B::Data: Send + 'static,
B::Error: std::error::Error + Send + Sync + 'static,
{
fn route(&mut self, req: Request) -> RspFuture {
match self.0.get(req.uri().path()) {
Some(Route(ref func)) => {
tracing::trace!(path = %req.uri().path(), "found route for path");
func(req.map(BoxBody::new))
func(req)
}
None => {
tracing::warn!("server 404: {:?}", req.uri().path());
Box::pin(futures::future::ok(
http::Response::builder()
.status(StatusCode::NOT_FOUND)
.body(BoxBody::empty())
.unwrap(),
))
let res = http::Response::builder()
.status(404)
.body(Default::default())
.unwrap();
Box::pin(async move { Ok(res) })
}
}
}
}
impl<B> tower::Service<Request<B>> for Svc
where
B: Body + Send + Sync + 'static,
B::Data: Send,
B::Error: std::error::Error + Send + Sync,
{
type Response = Response<BoxBody>;
type Error = Error;
impl tower::Service<Request> for Svc {
type Response = Response;
type Error = BoxError;
type Future = RspFuture;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: Request<B>) -> Self::Future {
Box::pin(self.route(req))
fn call(&mut self, req: Request) -> Self::Future {
self.route(req)
}
}
#[derive(Debug)]
struct NewSvc(Arc<HashMap<String, Route>>);
impl Service<()> for NewSvc {
type Response = Svc;
type Error = ::std::io::Error;
type Future = future::Ready<Result<Svc, Self::Error>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, _: ()) -> Self::Future {
future::ok(Svc(Arc::clone(&self.0)))
}
}
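The `Svc`/`NewSvc` pair in this hunk boils down to a factory that hands out a per-connection service sharing one route table behind an `Arc`, with a 404 fallback for unmatched paths. A std-only sketch (the `Mini*` names are hypothetical stand-ins, not the real tower types):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Hypothetical stand-ins for Svc/NewSvc above: the factory hands out a
// per-connection service that shares one route table via Arc.
struct MiniSvc(Arc<HashMap<&'static str, &'static str>>);

struct MiniNewSvc(Arc<HashMap<&'static str, &'static str>>);

impl MiniNewSvc {
    // Analogue of `NewSvc::call`: cheap, only bumps the Arc refcount.
    fn call(&self) -> MiniSvc {
        MiniSvc(Arc::clone(&self.0))
    }
}

impl MiniSvc {
    // Analogue of `Svc::route`: look up the path, 404 when unmatched.
    fn route(&self, path: &str) -> (u16, &'static str) {
        match self.0.get(path) {
            Some(body) => (200, *body),
            None => (404, ""),
        }
    }
}

fn main() {
    let mut routes = HashMap::new();
    routes.insert("/hello", "world");
    let factory = MiniNewSvc(Arc::new(routes));
    let a = factory.call();
    let b = factory.call();
    // Both services answer from the same shared table.
    assert_eq!(a.route("/hello"), (200, "world"));
    assert_eq!(b.route("/nope"), (404, ""));
}
```

Sharing via `Arc::clone` keeps per-connection service construction O(1) instead of deep-copying the route map.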
@@ -354,6 +358,7 @@ async fn accept_connection(
_running: None,
})
}
None => Ok(RunningIo {
io: Box::pin(io),
abs_form: false,

View File

@@ -2,7 +2,6 @@ use super::*;
use futures::stream;
use http_body::Body;
use linkerd2_proxy_api::tap as pb;
use linkerd_app_core::svc::http::BoxBody;
pub fn client(addr: SocketAddr) -> Client {
let api = pb::tap_client::TapClient::new(SyncSvc(client::http2(addr, "localhost")));
@@ -107,6 +106,7 @@ pub trait TapEventExt {
//fn id(&self) -> (u32, u64);
fn event(&self) -> &pb::tap_event::http::Event;
fn request_init_method(&self) -> String;
fn request_init_authority(&self) -> &str;
fn request_init_path(&self) -> &str;
@@ -134,31 +134,41 @@ impl TapEventExt for pb::TapEvent {
}
}
fn request_init_method(&self) -> String {
match self.event() {
pb::tap_event::http::Event::RequestInit(_ev) => {
//TODO: ugh
unimplemented!("method");
}
e => panic!("not RequestInit event: {:?}", e),
}
}
fn request_init_authority(&self) -> &str {
match self.event() {
pb::tap_event::http::Event::RequestInit(ev) => &ev.authority,
e => panic!("not RequestInit event: {e:?}"),
e => panic!("not RequestInit event: {:?}", e),
}
}
fn request_init_path(&self) -> &str {
match self.event() {
pb::tap_event::http::Event::RequestInit(ev) => &ev.path,
e => panic!("not RequestInit event: {e:?}"),
e => panic!("not RequestInit event: {:?}", e),
}
}
fn response_init_status(&self) -> u16 {
match self.event() {
pb::tap_event::http::Event::ResponseInit(ev) => ev.http_status as u16,
e => panic!("not ResponseInit event: {e:?}"),
e => panic!("not ResponseInit event: {:?}", e),
}
}
fn response_end_bytes(&self) -> u64 {
match self.event() {
pb::tap_event::http::Event::ResponseEnd(ev) => ev.response_bytes,
e => panic!("not ResponseEnd event: {e:?}"),
e => panic!("not ResponseEnd event: {:?}", e),
}
}
@@ -170,7 +180,7 @@ impl TapEventExt for pb::TapEvent {
}) => code,
_ => panic!("not Eos GrpcStatusCode: {:?}", ev.eos),
},
ev => panic!("not ResponseEnd event: {ev:?}"),
ev => panic!("not ResponseEnd event: {:?}", ev),
}
}
}
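The `TapEventExt` accessors above all follow one pattern: match the expected event variant and panic loudly on anything else, which is fine for test assertions. A std-only miniature of that pattern (the `Event` enum here is a hypothetical stand-in for the protobuf types):

```rust
// Hypothetical miniature of the TapEventExt accessors: each accessor
// panics when called on the wrong event variant, as the tap tests do.
#[derive(Debug)]
enum Event {
    RequestInit { path: String },
    ResponseInit { http_status: u16 },
}

impl Event {
    fn request_init_path(&self) -> &str {
        match self {
            Event::RequestInit { path } => path,
            e => panic!("not RequestInit event: {e:?}"),
        }
    }

    fn response_init_status(&self) -> u16 {
        match self {
            Event::ResponseInit { http_status } => *http_status,
            e => panic!("not ResponseInit event: {e:?}"),
        }
    }
}

fn main() {
    let req = Event::RequestInit { path: "/hello".to_string() };
    assert_eq!(req.request_init_path(), "/hello");
    let rsp = Event::ResponseInit { http_status: 200 };
    assert_eq!(rsp.response_init_status(), 200);
}
```

Panicking accessors keep test call sites terse: a mismatched variant fails the test immediately with the offending event in the message.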
@@ -178,14 +188,15 @@ impl TapEventExt for pb::TapEvent {
struct SyncSvc(client::Client);
type ResponseFuture =
Pin<Box<dyn Future<Output = Result<http::Response<hyper::body::Incoming>, String>> + Send>>;
Pin<Box<dyn Future<Output = Result<http::Response<hyper::Body>, String>> + Send>>;
impl<B> tower::Service<http::Request<B>> for SyncSvc
where
B: Body,
B::Error: std::fmt::Debug,
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Send + 'static,
{
type Response = http::Response<hyper::body::Incoming>;
type Response = http::Response<hyper::Body>;
type Error = String;
type Future = ResponseFuture;
@@ -194,31 +205,20 @@ where
}
fn call(&mut self, req: http::Request<B>) -> Self::Future {
use http_body_util::Full;
let Self(client) = self;
let req = req.map(Self::collect_body).map(Full::new).map(BoxBody::new);
let fut = client.send_req(req).map_err(|err| err.to_string());
Box::pin(fut)
}
}
impl SyncSvc {
/// Collects the given [`Body`], returning a [`Bytes`].
///
/// NB: This blocks the current thread until the provided body has been collected. This is
    /// an acceptable practice in test code for the sake of simplicity, because we will always
/// provide [`SyncSvc`] with bodies that are complete.
fn collect_body<B>(body: B) -> Bytes
where
B: Body,
B::Error: std::fmt::Debug,
{
futures::executor::block_on(async move {
use http_body_util::BodyExt;
body.collect()
.await
.expect("body should not fail")
.to_bytes()
})
// this is okay to do because the body should always be complete, we
// just can't prove it.
let req = futures::executor::block_on(async move {
let (parts, body) = req.into_parts();
let body = match hyper::body::to_bytes(body).await {
Ok(body) => body,
Err(_) => unreachable!("body should not fail"),
};
http::Request::from_parts(parts, body)
});
Box::pin(
self.0
.send_req(req.map(Into::into))
.map_err(|err| err.to_string()),
)
}
}
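Both sides of this hunk collect a streaming body into one contiguous buffer before sending, blocking on the async collection because test bodies are always complete. The chunk-accumulation part can be sketched with std alone; `collect_body` here is a hypothetical stand-in that models a body as an iterator of byte chunks rather than an async stream:

```rust
// Hypothetical stand-in for SyncSvc's body handling: a "body" modeled as an
// iterator of byte chunks, collected eagerly into one buffer. Test-only
// simplification: real HTTP bodies arrive asynchronously.
fn collect_body<I>(chunks: I) -> Vec<u8>
where
    I: IntoIterator<Item = Vec<u8>>,
{
    let mut buf = Vec::new();
    for chunk in chunks {
        buf.extend_from_slice(&chunk);
    }
    buf
}

fn main() {
    let body = vec![b"he".to_vec(), b"llo".to_vec()];
    // Two chunks flatten into one contiguous payload.
    assert_eq!(collect_body(body), b"hello".to_vec());
}
```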

View File

@@ -1,11 +1,10 @@
use super::*;
use std::{
collections::VecDeque,
io,
net::TcpListener as StdTcpListener,
sync::atomic::{AtomicUsize, Ordering},
};
use tokio::{net::TcpStream, task::JoinHandle};
use std::collections::VecDeque;
use std::io;
use std::net::TcpListener as StdTcpListener;
use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::net::TcpStream;
use tokio::task::JoinHandle;
type TcpConnSender = mpsc::UnboundedSender<(
Option<Vec<u8>>,
@@ -149,6 +148,10 @@ impl TcpServer {
}
impl TcpConn {
pub fn target_addr(&self) -> SocketAddr {
self.addr
}
pub async fn read(&self) -> Vec<u8> {
self.try_read()
.await

View File

@@ -1,7 +1,6 @@
use linkerd_app::env::{EnvError, Strings};
use std::collections::HashMap;
/// An implementation of [`Strings`] that wraps a map of values for use in tests.
#[derive(Clone, Default)]
pub struct TestEnv {
values: HashMap<&'static str, String>,
@@ -10,22 +9,18 @@ pub struct TestEnv {
// === impl TestEnv ===
impl TestEnv {
/// Puts a new key-value pair in the test environment.
pub fn put(&mut self, key: &'static str, value: String) {
self.values.insert(key, value);
}
/// Returns true if this environment contains the given key.
pub fn contains_key(&self, key: &'static str) -> bool {
self.values.contains_key(key)
}
    /// Removes a key-value pair from the test environment.
pub fn remove(&mut self, key: &'static str) {
self.values.remove(key);
}
/// Extends this test environment using the other given [`TestEnv`].
pub fn extend(&mut self, other: TestEnv) {
self.values.extend(other.values);
}

View File

@@ -63,11 +63,11 @@ async fn empty_http1_route() {
hosts: Vec::new(),
rules: Vec::new(),
}],
..Default::default()
failure_accrual: None,
}),
http2: Some(proxy_protocol::Http2 {
routes: vec![policy::outbound_default_http_route(&dst)],
..Default::default()
failure_accrual: None,
}),
opaque: Some(proxy_protocol::Opaque {
routes: vec![policy::outbound_default_opaque_route(&dst)],
@@ -148,7 +148,7 @@ async fn empty_http2_route() {
timeout: Some(Duration::from_secs(10).try_into().unwrap()),
http1: Some(proxy_protocol::Http1 {
routes: vec![policy::outbound_default_http_route(&dst)],
..Default::default()
failure_accrual: None,
}),
http2: Some(proxy_protocol::Http2 {
routes: vec![outbound::HttpRoute {
@@ -156,7 +156,7 @@ async fn empty_http2_route() {
hosts: Vec::new(),
rules: Vec::new(),
}],
..Default::default()
failure_accrual: None,
}),
opaque: Some(proxy_protocol::Opaque {
routes: vec![policy::outbound_default_opaque_route(&dst)],
@@ -223,7 +223,7 @@ async fn header_based_routing() {
backends: Some(policy::http_first_available(std::iter::once(
policy::backend(dst),
))),
..Default::default()
request_timeout: None,
};
let route = outbound::HttpRoute {
@@ -237,7 +237,7 @@ async fn header_based_routing() {
backends: Some(policy::http_first_available(std::iter::once(
policy::backend(&dst_world),
))),
..Default::default()
request_timeout: None,
},
// x-hello-city: sf | x-hello-city: san francisco
mk_header_rule(
@@ -266,11 +266,11 @@ async fn header_based_routing() {
timeout: Some(Duration::from_secs(10).try_into().unwrap()),
http1: Some(proxy_protocol::Http1 {
routes: vec![route.clone()],
..Default::default()
failure_accrual: None,
}),
http2: Some(proxy_protocol::Http2 {
routes: vec![route],
..Default::default()
failure_accrual: None,
}),
opaque: Some(proxy_protocol::Opaque {
routes: vec![policy::outbound_default_opaque_route(&dst_world)],
@@ -400,7 +400,8 @@ async fn path_based_routing() {
backends: Some(policy::http_first_available(std::iter::once(
policy::backend(dst),
))),
..Default::default()
request_timeout: None,
};
let route = outbound::HttpRoute {
@@ -414,7 +415,7 @@ async fn path_based_routing() {
backends: Some(policy::http_first_available(std::iter::once(
policy::backend(&dst_world),
))),
..Default::default()
request_timeout: None,
},
// /goodbye/*
mk_path_rule(
@@ -448,11 +449,11 @@ async fn path_based_routing() {
timeout: Some(Duration::from_secs(10).try_into().unwrap()),
http1: Some(proxy_protocol::Http1 {
routes: vec![route.clone()],
..Default::default()
failure_accrual: None,
}),
http2: Some(proxy_protocol::Http2 {
routes: vec![route],
..Default::default()
failure_accrual: None,
}),
opaque: Some(proxy_protocol::Opaque {
routes: vec![policy::outbound_default_opaque_route(&dst_world)],

View File

@@ -381,7 +381,7 @@ mod cross_version {
}
fn default_dst_name(port: u16) -> String {
format!("{HOST}:{port}")
format!("{}:{}", HOST, port)
}
fn send_default_dst(
@@ -481,17 +481,16 @@ mod http2 {
let res = fut.await.expect("beta response");
assert_eq!(res.status(), http::StatusCode::OK);
let body = {
let body = res.into_body();
let body = http_body_util::BodyExt::collect(body)
.await
.unwrap()
.to_bytes()
.to_vec();
String::from_utf8(body).unwrap()
};
assert_eq!(body, "beta");
assert_eq!(
String::from_utf8(
hyper::body::to_bytes(res.into_body())
.await
.unwrap()
.to_vec(),
)
.unwrap(),
"beta"
);
}
}
