Compare commits

..

No commits in common. "main" and "release/v2.271.0" have entirely different histories.

400 changed files with 9016 additions and 11445 deletions

View File

@@ -3,7 +3,7 @@
"build": {
"dockerfile": "Dockerfile",
"args": {
"DEV_VERSION": "v47",
"DEV_VERSION": "v44",
"http_proxy": "${localEnv:http_proxy}",
"https_proxy": "${localEnv:https_proxy}"
}
@@ -23,15 +23,7 @@
"zxh404.vscode-proto3"
],
"settings": {
"files.insertFinalNewline": true,
"[git-commit]": {
"editor.rulers": [
72,
80
],
"editor.wordWrap": "wordWrapColumn",
"editor.wordWrapColumn": 80
}
"files.insertFinalNewline": true
}
}
},
@@ -50,7 +42,7 @@
"overrideCommand": false,
"remoteUser": "code",
"containerEnv": {
"CXX": "clang++-19",
"CXX": "clang++-14",
"RUSTFLAGS": "--cfg tokio_unstable"
},
"mounts": [

View File

@@ -1,156 +0,0 @@
# Linkerd2 Proxy Copilot Instructions
## Code Generation
- Code MUST pass `cargo fmt`.
- Code MUST pass `cargo clippy --all-targets --all-features -- -D warnings`.
- Markdown MUST pass `markdownlint-cli2`.
- Prefer `?` for error propagation.
- Avoid `unwrap()` and `expect()` outside tests.
- Use `tracing` crate macros (`tracing::info!`, etc.) for structured logging.
### Comments
Comments should explain **why**, not **what**. Focus on high-level rationale and
design intent at the function or block level, rather than line-by-line
descriptions.
- Use comments to capture:
- System-facing or interface-level concerns
- Key invariants, preconditions, and postconditions
- Design decisions and trade-offs
- Cross-references to architecture or design documentation
- Avoid:
- Line-by-line commentary explaining obvious code
- Restating what the code already clearly expresses
- For public APIs:
- Use `///` doc comments to describe the contract, behavior, parameters, and
usage examples
- For internal rationale:
- Use `//` comments sparingly to note non-obvious reasoning or edge-case
handling
- Be neutral and factual.
### Rust File Organization
For Rust source files, enforce this layout:
1. **Non-public imports**
- Declare all `use` statements for private/internal crates first.
- Group imports to avoid duplicates and do **not** add blank lines between
`use` statements.
2. **Module declarations**
- List all `mod` declarations.
3. **Reexports**
- Follow with `pub use` statements.
4. **Type definitions**
- Define `struct`, `enum`, `type`, and `trait` declarations.
- Sort by visibility: `pub` first, then `pub(crate)`, then private.
- Public types should be documented with `///` comments.
5. **Impl blocks**
- Implement methods in the same order as types above.
- Precede each type's `impl` block with a header comment: `// === <TypeName> ===`
6. **Tests**
- End with a `tests` module guarded by `#[cfg(test)]`.
- If the in-file test module exceeds 100 lines, move it to
`tests/<filename>.rs` as a child integration-test module.
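The layout above can be sketched as a single, self-contained file. Every name here (`Labels`, `count`, `distinct`) is invented for illustration, and the inline module stands in for a real `mod` file:

```rust
// 1. Non-public imports: grouped, with no blank lines between `use` statements.
use std::collections::HashMap;
use std::fmt;

// 2. Module declarations (inlined here so the sketch is self-contained).
mod count {
    pub fn distinct(labels: &super::Labels) -> usize {
        labels.len()
    }
}

// 3. Re-exports.
pub use self::count::distinct;

// 4. Type definitions, sorted by visibility and documented when public.
/// A set of key-value labels (a stand-in public type).
pub struct Labels(HashMap<String, String>);

#[allow(dead_code)]
struct Sealed; // A private type, listed after the public ones.

// 5. Impl blocks, in the same order as the types, each under a header comment.
// === Labels ===
impl Labels {
    pub fn new() -> Self {
        Self(HashMap::new())
    }
    pub fn insert(&mut self, key: &str, value: &str) {
        self.0.insert(key.to_string(), value.to_string());
    }
    pub fn len(&self) -> usize {
        self.0.len()
    }
}

impl fmt::Debug for Labels {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_map().entries(self.0.iter()).finish()
    }
}

// 6. Tests come last, guarded by `#[cfg(test)]`.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn distinct_counts_inserted_keys() {
        let mut labels = Labels::new();
        labels.insert("app", "web");
        assert_eq!(distinct(&labels), 1);
    }
}
```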
## Test Generation
- Async tests MUST use `tokio::test`.
- Synchronous tests use `#[test]`.
- Include at least one failing edge-case test per public function.
- Use `tracing::info!` for logging in tests, usually in place of comments.
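A sketch of these rules, assuming a hypothetical `parse_port` helper. The tests are synchronous, so they use `#[test]`; an async variant would use `#[tokio::test]`. Plain comments are used here to keep the sketch dependency-free, though the guidance above prefers `tracing::info!` in tests:

```rust
// Hypothetical function under test; `parse_port` is invented for illustration.
fn parse_port(s: &str) -> Result<u16, String> {
    s.trim()
        .parse::<u16>()
        .map_err(|e| format!("invalid port {s:?}: {e}"))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_valid_port() {
        assert_eq!(parse_port("8080"), Ok(8080));
    }

    // At least one failing edge case per public function.
    #[test]
    fn rejects_out_of_range_port() {
        assert!(parse_port("65536").is_err());
    }
}
```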
## Code Review
### Rust
- Point out any `unsafe` blocks and justify their safety.
- Flag functions >50 LOC for refactor suggestions.
- Highlight missing docs on public items.
### Markdown
- Use `markdownlint-cli2` to check for linting errors.
- Lines SHOULD be wrapped at 80 characters.
- Fenced code blocks MUST include a language identifier.
### Copilot Instructions
- Start each instruction with an imperative, present-tense verb.
- Keep each instruction under 120 characters.
- Provide one directive per instruction; avoid combining multiple ideas.
- Use "MUST" and "SHOULD" sparingly to emphasize critical rules.
- Avoid semicolons and complex punctuation within bullets.
- Do not reference external links, documents, or specific coding standards.
## Commit Messages
Commits follow the Conventional Commits specification:
### Subject
Subjects are in the form: `<type>[optional scope]: <description>`
- **Type**: feat, fix, docs, refactor, test, chore, ci, build, perf, revert
(others by agreement)
- **Scope**: optional, lowercase; may include `/` to denote submodules (e.g.
`http/detect`)
- **Description**: imperative mood, present tense, no trailing period
- MUST be less than 72 characters
- Omit needless words!
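The subject rules can be sketched as a std-only validator. The helper and its exact checks are illustrative only, not part of the repository:

```rust
// The ten standard types listed above; others are accepted by agreement only.
const TYPES: [&str; 10] = [
    "feat", "fix", "docs", "refactor", "test", "chore", "ci", "build", "perf", "revert",
];

// Returns true when `subject` matches `<type>[optional scope][!]: <description>`,
// is under 72 characters, and has no trailing period.
fn subject_ok(subject: &str) -> bool {
    if subject.len() >= 72 || subject.ends_with('.') {
        return false;
    }
    let Some((head, desc)) = subject.split_once(": ") else {
        return false;
    };
    if desc.is_empty() {
        return false;
    }
    // Strip an optional `!` breaking-change marker, then an optional `(scope)`.
    let head = head.strip_suffix('!').unwrap_or(head);
    let ty = match head.split_once('(') {
        Some((ty, scope)) => {
            // Scope must be lowercase and closed with `)`; `/` denotes submodules.
            let Some(scope) = scope.strip_suffix(')') else {
                return false;
            };
            if scope.is_empty()
                || !scope
                    .chars()
                    .all(|c| c.is_ascii_lowercase() || c == '/' || c == '-')
            {
                return false;
            }
            ty
        }
        None => head,
    };
    TYPES.contains(&ty)
}
```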
### Body
Non-trivial commits SHOULD include a body summarizing the change.
- Explain *why* the change was needed.
- Describe *what* was done at a high level.
- Use present-tense narration.
- Use complete sentences, paragraphs, and punctuation.
- Preceded by a blank line.
- Wrapped at 80 characters.
- Omit needless words!
### Breaking changes
If the change introduces a backwards-incompatible change, it MUST be marked as
such.
- Indicated by `!` after the type/scope (e.g. `feat(inbound)!: …`)
- Optionally including a `BREAKING CHANGE:` section in the footer explaining the
change in behavior.
### Examples
```text
feat(auth): add JWT refresh endpoint

There is currently no way to refresh a JWT token.
This exposes a new `/refresh` route that returns a refreshed token.
```
```text
feat(api)!: remove deprecated v1 routes

The `/v1/*` endpoints have been deprecated for a long time and are no
longer called by clients.
This change removes the `/v1/*` endpoints and all associated code,
including integration tests and documentation.

BREAKING CHANGE: The previously-deprecated `/v1/*` endpoints were removed.
```
## Pull Requests
- The subject line MUST be in the conventional commit format.
- Autogenerate a PR body summarizing the problem, solution, and verification steps.
- List breaking changes under a separate **Breaking Changes** heading.

View File

@@ -11,6 +11,12 @@ updates:
allow:
- dependency-type: "all"
ignore:
# These dependencies will be updated via higher-level aggregator dependencies like `clap`,
# `futures`, `prost`, `tracing`, and `trust-dns-resolver`:
- dependency-name: "futures-*"
- dependency-name: "prost-derive"
- dependency-name: "tracing-*"
- dependency-name: "trust-dns-proto"
# These dependencies are for platforms that we don't support:
- dependency-name: "hermit-abi"
- dependency-name: "redox_*"
@@ -19,37 +25,9 @@ updates:
- dependency-name: "web-sys"
- dependency-name: "windows*"
groups:
boring:
patterns:
- "tokio-boring"
- "boring*"
futures:
patterns:
- "futures*"
grpc:
patterns:
- "prost*"
- "tonic*"
hickory:
patterns:
- "hickory*"
icu4x:
patterns:
- "icu_*"
opentelemetry:
patterns:
- "opentelemetry*"
rustls:
patterns:
- "tokio-rustls"
- "rustls*"
- "ring"
symbolic:
patterns:
- "symbolic-*"
tracing:
patterns:
- "tracing*"
- package-ecosystem: cargo
directory: /linkerd/addr/fuzz

View File

@@ -22,13 +22,13 @@ permissions:
jobs:
build:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v44-rust
timeout-minutes: 20
continue-on-error: true
steps:
- run: rustup toolchain install --profile=minimal beta
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- run: just toolchain=beta fetch
- run: just toolchain=beta build

View File

@ -21,11 +21,11 @@ env:
jobs:
meta:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- id: changed
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
with:
files: |
.codecov.yml
@@ -40,19 +40,19 @@ jobs:
codecov:
needs: meta
if: (github.event_name == 'push' && github.ref == 'refs/heads/main') || needs.meta.outputs.any_changed == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 30
container:
image: docker://ghcr.io/linkerd/dev:v47-rust
image: docker://ghcr.io/linkerd/dev:v44-rust
options: --security-opt seccomp=unconfined # 🤷
env:
CXX: "/usr/bin/clang++-19"
CXX: "/usr/bin/clang++-14"
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: Swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab
- run: cargo tarpaulin --locked --workspace --exclude=linkerd2-proxy --exclude=linkerd-transport-header --exclude=opencensus-proto --exclude=spire-proto --no-run
- run: cargo tarpaulin --locked --workspace --exclude=linkerd2-proxy --exclude=linkerd-transport-header --exclude=opencensus-proto --exclude=spire-proto --skip-clean --ignore-tests --no-fail-fast --out=Xml
# Some tests are especially flakey in coverage tests. That's fine. We
# only really care to measure how much of our codebase is covered.
continue-on-error: true
- uses: codecov/codecov-action@18283e04ce6e62d37312384ff67231eb8fd56d24
- uses: codecov/codecov-action@7f8b4b4bde536c465e797be725718b88c5d95e0e

View File

@@ -26,13 +26,13 @@ permissions:
jobs:
list-changed:
timeout-minutes: 3
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: docker://rust:1.88.0
runs-on: ubuntu-latest
container: docker://rust:1.83.0
steps:
- run: apt update && apt install -y jo
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
- uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
id: changed-files
- name: list changed crates
id: list-changed
@@ -47,15 +47,15 @@ jobs:
build:
needs: [list-changed]
timeout-minutes: 40
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: docker://rust:1.88.0
runs-on: ubuntu-latest
container: docker://rust:1.83.0
strategy:
matrix:
dir: ${{ fromJson(needs.list-changed.outputs.dirs) }}
steps:
- run: rustup toolchain add nightly
- run: cargo install cargo-fuzz
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- working-directory: ${{matrix.dir}}
run: cargo +nightly fetch

View File

@@ -12,9 +12,9 @@ on:
jobs:
markdownlint:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: DavidAnson/markdownlint-cli2-action@992badcdf24e3b8eb7e87ff9287fe931bcb00c6e
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: DavidAnson/markdownlint-cli2-action@eb5ca3ab411449c66620fe7f1b3c9e10547144b0
with:
globs: "**/*.md"

View File

@@ -22,13 +22,13 @@ permissions:
jobs:
build:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v44-rust
timeout-minutes: 20
continue-on-error: true
steps:
- run: rustup toolchain install --profile=minimal nightly
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- run: just toolchain=nightly fetch
- run: just toolchain=nightly profile=release build

View File

@@ -14,24 +14,24 @@ concurrency:
jobs:
meta:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- id: build
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
with:
files: |
.github/workflows/pr.yml
justfile
Dockerfile
- id: actions
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
with:
files: |
.github/workflows/**
.devcontainer/*
- id: cargo
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
with:
files_ignore: "Cargo.toml"
files: |
@@ -40,7 +40,7 @@ jobs:
if: steps.cargo.outputs.any_changed == 'true'
run: ./.github/list-crates.sh ${{ steps.cargo.outputs.all_changed_files }}
- id: rust
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
with:
files: |
**/*.rs
@@ -57,7 +57,7 @@ jobs:
info:
timeout-minutes: 3
needs: meta
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- name: Info
run: |
@@ -74,27 +74,30 @@ jobs:
actions:
needs: meta
if: needs.meta.outputs.actions_changed == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: linkerd/dev/actions/setup-tools@v44
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: just action-lint
- run: just action-dev-check
rust:
needs: meta
if: needs.meta.outputs.cargo_changed == 'true' || needs.meta.outputs.rust_changed == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v44-rust
permissions:
contents: read
timeout-minutes: 20
steps:
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: Swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab
- run: just fetch
- run: cargo deny --all-features check bans licenses sources
- name: Run cargo deny check bans licenses sources
uses: EmbarkStudios/cargo-deny-action@3f4a782664881cf5725d0ffd23969fcce89fd868
with:
command: check bans licenses sources
- run: just check-fmt
- run: just clippy
- run: just doc
@@ -107,15 +110,15 @@ jobs:
needs: meta
if: needs.meta.outputs.cargo_changed == 'true'
timeout-minutes: 20
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v44-rust
strategy:
matrix:
crate: ${{ fromJson(needs.meta.outputs.cargo_crates) }}
steps:
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: Swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab
- run: just fetch
- run: just check-crate ${{ matrix.crate }}
@@ -123,11 +126,11 @@ jobs:
needs: meta
if: needs.meta.outputs.cargo_changed == 'true' || needs.meta.outputs.rust_changed == 'true'
timeout-minutes: 20
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
env:
WAIT_TIMEOUT: 2m
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: linkerd/dev/actions/setup-tools@v44
- name: scurl https://run.linkerd.io/install-edge | sh
run: |
scurl https://run.linkerd.io/install-edge | sh
@@ -136,9 +139,9 @@ jobs:
tag=$(linkerd version --client --short)
echo "linkerd $tag"
echo "LINKERD_TAG=$tag" >> "$GITHUB_ENV"
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: just docker
- run: just k3d-create
- run: just-k3d create
- run: just k3d-load-linkerd
- run: just linkerd-install
- run: just linkerd-check-control-plane-proxy
@@ -149,7 +152,7 @@
timeout-minutes: 3
needs: [meta, actions, rust, rust-crates, linkerd-install]
if: always()
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
permissions:
contents: write
@@ -168,7 +171,7 @@ jobs:
if: contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled')
run: exit 1
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
if: needs.meta.outputs.is_dependabot == 'true' && needs.meta.outputs.any_changed == 'true'
- name: "Merge dependabot changes"
if: needs.meta.outputs.is_dependabot == 'true' && needs.meta.outputs.any_changed == 'true'

View File

@@ -13,7 +13,7 @@ concurrency:
jobs:
last-release:
if: github.repository == 'linkerd/linkerd2-proxy' # Don't run this in forks.
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-22.04
timeout-minutes: 5
env:
GH_REPO: ${{ github.repository }}
@@ -41,10 +41,10 @@ jobs:
last-commit:
needs: last-release
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Check if the most recent commit is after the last release
id: recency
env:
@@ -62,7 +62,7 @@ jobs:
trigger-release:
needs: [last-release, last-commit]
if: needs.last-release.outputs.recent == 'false' && needs.last-commit.outputs.after-release == 'true'
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-22.04
timeout-minutes: 5
permissions:
actions: write

View File

@@ -46,7 +46,6 @@ on:
default: true
env:
CARGO: "cargo auditable"
CARGO_INCREMENTAL: 0
CARGO_NET_RETRY: 10
RUSTFLAGS: "-D warnings -A deprecated --cfg tokio_unstable"
@@ -59,25 +58,9 @@ concurrency:
jobs:
meta:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
if: github.event_name == 'pull_request'
- id: workflow
if: github.event_name == 'pull_request'
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
with:
files: |
.github/workflows/release.yml
- id: build
if: github.event_name == 'pull_request'
uses: tj-actions/changed-files@ed68ef82c095e0d48ec87eccea555d944a631a4c
with:
files: |
justfile
Cargo.toml
- id: version
- id: meta
env:
VERSION: ${{ inputs.version }}
shell: bash
@@ -85,45 +68,44 @@ jobs:
set -euo pipefail
shopt -s extglob
if [[ "$GITHUB_EVENT_NAME" == pull_request ]]; then
echo version="0.0.0-test.${GITHUB_SHA:0:7}" >> "$GITHUB_OUTPUT"
echo version="0.0.0-test.${GITHUB_SHA:0:7}"
echo archs='["amd64"]'
exit 0
fi
fi >> "$GITHUB_OUTPUT"
if ! [[ "$VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z-]+)?(\+[0-9A-Za-z-]+)?$ ]]; then
echo "Invalid version: $VERSION" >&2
exit 1
fi
echo version="${VERSION#v}" >> "$GITHUB_OUTPUT"
- id: platform
shell: bash
env:
WORKFLOW_CHANGED: ${{ steps.workflow.outputs.any_changed }}
run: |
if [[ "$GITHUB_EVENT_NAME" == pull_request && "$WORKFLOW_CHANGED" != 'true' ]]; then
( echo archs='["amd64"]'
echo oses='["linux"]' ) >> "$GITHUB_OUTPUT"
exit 0
fi
( echo archs='["amd64", "arm64"]'
echo oses='["linux", "windows"]'
( echo version="${VERSION#v}"
echo archs='["amd64", "arm64", "arm"]'
) >> "$GITHUB_OUTPUT"
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
if: github.event_name == 'pull_request'
- id: changed
if: github.event_name == 'pull_request'
uses: tj-actions/changed-files@bab30c2299617f6615ec02a68b9a40d10bd21366
with:
files: |
.github/workflows/release.yml
justfile
Cargo.toml
outputs:
archs: ${{ steps.platform.outputs.archs }}
oses: ${{ steps.platform.outputs.oses }}
version: ${{ steps.version.outputs.version }}
package: ${{ github.event_name == 'workflow_dispatch' || steps.build.outputs.any_changed == 'true' || steps.workflow.outputs.any_changed == 'true' }}
archs: ${{ steps.meta.outputs.archs }}
version: ${{ steps.meta.outputs.version }}
package: ${{ github.event_name == 'workflow_dispatch' || steps.changed.outputs.any_changed == 'true' }}
profile: ${{ inputs.profile || 'release' }}
publish: ${{ inputs.publish }}
ref: ${{ inputs.ref || github.sha }}
tag: "${{ inputs.tag-prefix || 'release/' }}v${{ steps.version.outputs.version }}"
tag: "${{ inputs.tag-prefix || 'release/' }}v${{ steps.meta.outputs.version }}"
prerelease: ${{ inputs.prerelease }}
draft: ${{ inputs.draft }}
latest: ${{ inputs.latest }}
info:
needs: meta
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 3
steps:
- name: Inputs
@@ -144,50 +126,38 @@ jobs:
strategy:
matrix:
arch: ${{ fromJson(needs.meta.outputs.archs) }}
os: ${{ fromJson(needs.meta.outputs.oses) }}
libc: [gnu] # musl
exclude:
- os: windows
arch: arm64
# If we're not actually building on a release tag, don't short-circuit on
# errors. This helps us know whether a failure is platform-specific.
continue-on-error: ${{ needs.meta.outputs.publish != 'true' }}
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 40
container: docker://ghcr.io/linkerd/dev:v47-rust-musl
container: docker://ghcr.io/linkerd/dev:v44-rust-musl
env:
LINKERD2_PROXY_VENDOR: ${{ github.repository_owner }}
LINKERD2_PROXY_VERSION: ${{ needs.meta.outputs.version }}
steps:
# TODO: add to dev image
- name: Install MiniGW
if: matrix.os == 'windows'
run: apt-get update && apt-get install -y mingw-w64
- name: Install cross compilation toolchain
if: matrix.arch == 'arm64'
run: apt-get update && apt-get install -y binutils-aarch64-linux-gnu
- name: Configure git
run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
ref: ${{ needs.meta.outputs.ref }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0
- uses: Swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab
with:
key: ${{ matrix.os }}-${{ matrix.arch }}
key: ${{ matrix.arch }}
- run: just fetch
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} rustup
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} profile=${{ needs.meta.outputs.profile }} build
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} os=${{ matrix.os }} profile=${{ needs.meta.outputs.profile }} package
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} rustup
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} profile=${{ needs.meta.outputs.profile }} build
- run: just arch=${{ matrix.arch }} libc=${{ matrix.libc }} profile=${{ needs.meta.outputs.profile }} package
- uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882
with:
name: ${{ matrix.arch }}-${{ matrix.os }}-artifacts
name: ${{ matrix.arch }}-artifacts
path: target/package/*
publish:
needs: [meta, package]
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
timeout-minutes: 5
permissions:
actions: write
@@ -204,13 +174,13 @@ jobs:
git config --global user.name "$GITHUB_USERNAME"
git config --global user.email "$GITHUB_USERNAME"@users.noreply.github.com
# Tag the release.
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
token: ${{ secrets.LINKERD2_PROXY_GITHUB_TOKEN || github.token }}
ref: ${{ needs.meta.outputs.ref }}
- run: git tag -a -m "$VERSION" "$TAG"
# Fetch the artifacts.
- uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093
- uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16
with:
path: artifacts
- run: du -h artifacts/**/*
@@ -218,7 +188,7 @@ jobs:
- if: needs.meta.outputs.publish == 'true'
run: git push origin "$TAG"
- if: needs.meta.outputs.publish == 'true'
uses: softprops/action-gh-release@72f2c25fcb47643c292f7107632f7a47c1df5cd8
uses: softprops/action-gh-release@01570a1f39cb168c169c802c3bceb9e93fb10974
with:
name: ${{ env.VERSION }}
tag_name: ${{ env.TAG }}
@@ -242,7 +212,7 @@ jobs:
needs: publish
if: always()
timeout-minutes: 3
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- name: Results
run: |

View File

@@ -13,8 +13,8 @@ on:
jobs:
sh-lint:
timeout-minutes: 5
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: linkerd/dev/actions/setup-tools@v44
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: just sh-lint

View File

@@ -13,10 +13,10 @@ permissions:
jobs:
devcontainer:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
container: ghcr.io/linkerd/dev:v47-rust
runs-on: ubuntu-latest
container: ghcr.io/linkerd/dev:v44-rust
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- run: git config --global --add safe.directory "$PWD" # actions/runner#2033
- run: |
VERSION_REGEX='channel = "([0-9]+\.[0-9]+\.[0-9]+)"'
@@ -35,10 +35,10 @@ jobs:
workflows:
runs-on: ${{ vars.LINKERD2_PROXY_RUNNER || 'ubuntu-24.04' }}
runs-on: ubuntu-latest
steps:
- uses: linkerd/dev/actions/setup-tools@v47
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8
- uses: linkerd/dev/actions/setup-tools@v44
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- shell: bash
run: |
VERSION_REGEX='channel = "([0-9]+\.[0-9]+\.[0-9]+)"'

2
.gitignore vendored
View File

@@ -1,5 +1,3 @@
.cargo
**/.cargo
target
**/target
**/corpus

2058
Cargo.lock

File diff suppressed because it is too large.

View File

@@ -16,6 +16,7 @@ members = [
"linkerd/app",
"linkerd/conditional",
"linkerd/distribute",
"linkerd/detect",
"linkerd/dns/name",
"linkerd/dns",
"linkerd/duplex",
@@ -26,7 +27,7 @@ members = [
"linkerd/http/access-log",
"linkerd/http/box",
"linkerd/http/classify",
"linkerd/http/detect",
"linkerd/http/executor",
"linkerd/http/h2",
"linkerd/http/insert",
"linkerd/http/metrics",
@@ -37,14 +38,15 @@ members = [
"linkerd/http/route",
"linkerd/http/stream-timeouts",
"linkerd/http/upgrade",
"linkerd/http/variant",
"linkerd/http/version",
"linkerd/identity",
"linkerd/idle-cache",
"linkerd/io",
"linkerd/meshtls",
"linkerd/meshtls/boring",
"linkerd/meshtls/rustls",
"linkerd/meshtls/verifier",
"linkerd/metrics",
"linkerd/mock/http-body",
"linkerd/opaq-route",
"linkerd/opencensus",
"linkerd/opentelemetry",
@@ -69,12 +71,12 @@ members = [
"linkerd/reconnect",
"linkerd/retry",
"linkerd/router",
"linkerd/rustls",
"linkerd/service-profiles",
"linkerd/signal",
"linkerd/stack",
"linkerd/stack/metrics",
"linkerd/stack/tracing",
"linkerd/system",
"linkerd/tonic-stream",
"linkerd/tonic-watch",
"linkerd/tls",
@@ -83,7 +85,6 @@ members = [
"linkerd/tracing",
"linkerd/transport-header",
"linkerd/transport-metrics",
"linkerd/workers",
"linkerd2-proxy",
"opencensus-proto",
"opentelemetry-proto",
@@ -95,43 +96,6 @@ members = [
debug = 1
lto = true
[workspace.package]
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[workspace.dependencies]
bytes = { version = "1" }
drain = { version = "0.2", default-features = false }
h2 = { version = "0.4" }
http = { version = "1" }
http-body = { version = "1" }
hyper = { version = "1", default-features = false }
prometheus-client = { version = "0.23" }
prost = { version = "0.13" }
prost-build = { version = "0.13", default-features = false }
prost-types = { version = "0.13" }
tokio-rustls = { version = "0.26", default-features = false, features = [
"logging",
] }
tonic = { version = "0.13", default-features = false }
tonic-build = { version = "0.13", default-features = false }
tower = { version = "0.5", default-features = false }
tower-service = { version = "0.3" }
tower-test = { version = "0.4" }
tracing = { version = "0.1" }
[workspace.dependencies.http-body-util]
version = "0.1.3"
default-features = false
features = ["channel"]
[workspace.dependencies.hyper-util]
version = "0.1"
default-features = false
features = ["tokio", "tracing"]
[workspace.dependencies.linkerd2-proxy-api]
version = "0.17.0"
linkerd2-proxy-api = "0.15.0"
# linkerd2-proxy-api = { git = "https://github.com/linkerd/linkerd2-proxy-api.git", branch = "main" }

View File

@@ -3,7 +3,7 @@
# This is intended for **DEVELOPMENT ONLY** use, i.e. so that proxy developers
# can easily test the proxy in the context of the larger `linkerd2` project.
# easily test the proxy in the context of the larger `linkerd2` project.
ARG RUST_IMAGE=ghcr.io/linkerd/dev:v47-rust
ARG RUST_IMAGE=ghcr.io/linkerd/dev:v44-rust
# Use an arbitrary ~recent edge release image to get the proxy
# identity-initializing and linkerd-await wrappers.
@@ -14,16 +14,11 @@ FROM $LINKERD2_IMAGE as linkerd2
FROM --platform=$BUILDPLATFORM $RUST_IMAGE as fetch
ARG PROXY_FEATURES=""
ARG TARGETARCH="amd64"
RUN apt-get update && \
apt-get install -y time && \
if [[ "$PROXY_FEATURES" =~ .*meshtls-boring.* ]] ; then \
apt-get install -y golang ; \
fi && \
case "$TARGETARCH" in \
amd64) true ;; \
arm64) apt-get install --no-install-recommends -y binutils-aarch64-linux-gnu ;; \
esac && \
rm -rf /var/lib/apt/lists/*
ENV CARGO_NET_RETRY=10
@@ -38,6 +33,7 @@ RUN --mount=type=cache,id=cargo,target=/usr/local/cargo/registry \
FROM fetch as build
ENV CARGO_INCREMENTAL=0
ENV RUSTFLAGS="-D warnings -A deprecated --cfg tokio_unstable"
ARG TARGETARCH="amd64"
ARG PROFILE="release"
ARG LINKERD2_PROXY_VERSION=""
ARG LINKERD2_PROXY_VENDOR=""

View File

@@ -86,9 +86,8 @@ minutes to review our [code of conduct][coc].
We test our code by way of fuzzing and this is described in [FUZZING.md](/docs/FUZZING.md).
A third party security audit focused on fuzzing Linkerd2-proxy was performed by
Ada Logics in 2021. The
[full report](/docs/reports/linkerd2-proxy-fuzzing-report.pdf) can be found in
the `docs/reports/` directory.
Ada Logics in 2021. The full report is available
[here](/docs/reports/linkerd2-proxy-fuzzing-report.pdf).
## License

View File

@@ -2,6 +2,7 @@
targets = [
{ triple = "x86_64-unknown-linux-gnu" },
{ triple = "aarch64-unknown-linux-gnu" },
{ triple = "armv7-unknown-linux-gnu" },
]
[advisories]
@@ -17,20 +18,27 @@ allow = [
"ISC",
"MIT",
"Unicode-3.0",
"Zlib",
]
# Ignore local workspace license values for unpublished crates.
private = { ignore = true }
confidence-threshold = 0.8
exceptions = [
{ allow = [
"ISC",
"OpenSSL",
], name = "aws-lc-sys", version = "*" },
"Zlib",
], name = "adler32", version = "*" },
{ allow = [
"ISC",
"MIT",
"OpenSSL",
], name = "aws-lc-fips-sys", version = "*" },
], name = "ring", version = "*" },
]
[[licenses.clarify]]
name = "ring"
version = "*"
expression = "MIT AND ISC AND OpenSSL"
license-files = [
{ path = "LICENSE", hash = 0xbd0eed23 },
]
[bans]
@@ -42,35 +50,27 @@ deny = [
{ name = "rustls", wrappers = ["tokio-rustls"] },
# rustls-webpki should be used instead.
{ name = "webpki" },
# aws-lc-rs should be used instead.
{ name = "ring" }
]
skip = [
# `linkerd-trace-context`, `rustls-pemfile` and `tonic` depend on `base64`
# v0.13.1 while `rcgen` depends on v0.21.5
{ name = "base64" },
# tonic/axum depend on a newer `tower`, which we are still catching up to.
# see #3744.
{ name = "tower", version = "0.5" },
{ name = "bitflags", version = "1" },
# https://github.com/hawkw/matchers/pull/4
{ name = "regex-automata", version = "0.1" },
{ name = "regex-syntax", version = "0.6" },
# `trust-dns-proto`, depends on `idna` v0.4.0 while `url` depends on v0.5.0
{ name = "idna" },
# Some dependencies still use indexmap v1.
{ name = "indexmap", version = "1" },
{ name = "hashbrown", version = "0.12" },
]
skip-tree = [
# thiserror v2 is still propagating through the ecosystem
{ name = "thiserror", version = "1" },
# rand v0.9 is still propagating through the ecosystem
{ name = "rand", version = "0.8" },
# rustix v1.0 is still propagating through the ecosystem
{ name = "rustix", version = "0.38" },
# `pprof` uses a number of old dependencies. for now, we skip its subtree.
{ name = "pprof" },
# aws-lc-rs uses a slightly outdated version of bindgen
{ name = "bindgen", version = "0.69.5" },
# socket2 v0.6 is still propagating through the ecosystem
{ name = "socket2", version = "0.5" },
]
[sources]
unknown-registry = "deny"
unknown-git = "deny"
allow-registry = [
"https://github.com/rust-lang/crates.io-index",
]
allow-registry = ["https://github.com/rust-lang/crates.io-index"]

View File

@@ -12,12 +12,9 @@ engine.
We place the fuzz tests into folders within the individual crates that the fuzz
tests target. For example, we have a fuzz test that targets the crate
`/linkerd/addr` and the code in `/linkerd/addr/src` and thus the fuzz test that
targets this crate is put in `/linkerd/addr/fuzz`.
The folder structure for each of the fuzz tests is automatically generated by
`cargo fuzz init`. See cargo fuzz's
[`README.md`](https://github.com/rust-fuzz/cargo-fuzz#cargo-fuzz-init) for more
information.
targets this crate is put in `/linkerd/addr/fuzz`. The folder set up we use for
each of the fuzz tests is automatically generated by `cargo fuzz init`
(described [here](https://github.com/rust-fuzz/cargo-fuzz#cargo-fuzz-init)).
### Fuzz targets
@@ -99,5 +96,6 @@ unit-test-like fuzzers, but are essentially just more substantial in nature. The
idea behind these fuzzers is to test end-to-end concepts more so than individual
components of the proxy.
The [inbound fuzzer](/linkerd/app/inbound/fuzz/fuzz_targets/fuzz_target_1.rs)
is an example of this.
The inbound fuzzer
[here](/linkerd/app/inbound/fuzz/fuzz_targets/fuzz_target_1.rs) is an example of
this.

View File

@@ -1,18 +1,17 @@
[package]
name = "hyper-balance"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[dependencies]
futures = { version = "0.3", default-features = false }
http = { workspace = true }
http-body = { workspace = true }
hyper = { workspace = true }
http = "0.2"
hyper = { version = "0.14", features = ["deprecated"] }
pin-project = "1"
tower = { workspace = true, default-features = false, features = ["load"] }
tower = { version = "0.4", default-features = false, features = ["load"] }
tokio = { version = "1", features = ["macros"] }
[dev-dependencies]

View File

@@ -1,7 +1,7 @@
#![deny(rust_2018_idioms, clippy::disallowed_methods, clippy::disallowed_types)]
#![forbid(unsafe_code)]
use http_body::Body;
use hyper::body::HttpBody;
use pin_project::pin_project;
use std::pin::Pin;
use std::task::{Context, Poll};
@@ -38,7 +38,7 @@ pub struct PendingUntilEosBody<T, B> {
impl<T, B> TrackCompletion<T, http::Response<B>> for PendingUntilFirstData
where
B: Body,
B: HttpBody,
{
type Output = http::Response<PendingUntilFirstDataBody<T, B>>;
@@ -59,7 +59,7 @@ where
impl<T, B> TrackCompletion<T, http::Response<B>> for PendingUntilEos
where
B: Body,
B: HttpBody,
{
type Output = http::Response<PendingUntilEosBody<T, B>>;
@@ -80,7 +80,7 @@ where
impl<T, B> Default for PendingUntilFirstDataBody<T, B>
where
B: Body + Default,
B: HttpBody + Default,
{
fn default() -> Self {
Self {
@@ -90,9 +90,9 @@ where
}
}
impl<T, B> Body for PendingUntilFirstDataBody<T, B>
impl<T, B> HttpBody for PendingUntilFirstDataBody<T, B>
where
B: Body,
B: HttpBody,
T: Send + 'static,
{
type Data = B::Data;
@@ -102,20 +102,32 @@ where
self.body.is_end_stream()
}
fn poll_frame(
fn poll_data(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
let this = self.project();
let ret = futures::ready!(this.body.poll_frame(cx));
let ret = futures::ready!(this.body.poll_data(cx));
// Once a frame is received, the handle is dropped. On subsequent calls, this
// Once a data frame is received, the handle is dropped. On subsequent calls, this
// is a noop.
drop(this.handle.take());
Poll::Ready(ret)
}
fn poll_trailers(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
let this = self.project();
// If this is being called, the handle definitely should have been dropped
// already.
drop(this.handle.take());
this.body.poll_trailers(cx)
}
#[inline]
fn size_hint(&self) -> hyper::body::SizeHint {
self.body.size_hint()
@@ -126,7 +138,7 @@ where
impl<T, B> Default for PendingUntilEosBody<T, B>
where
B: Body + Default,
B: HttpBody + Default,
{
fn default() -> Self {
Self {
@@ -136,7 +148,7 @@ where
}
}
impl<T: Send + 'static, B: Body> Body for PendingUntilEosBody<T, B> {
impl<T: Send + 'static, B: HttpBody> HttpBody for PendingUntilEosBody<T, B> {
type Data = B::Data;
type Error = B::Error;
@@ -145,21 +157,35 @@ impl<T: Send + 'static, B: Body> Body for PendingUntilEosBody<T, B> {
self.body.is_end_stream()
}
fn poll_frame(
fn poll_data(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
let mut this = self.project();
let body = &mut this.body;
tokio::pin!(body);
let frame = futures::ready!(body.poll_frame(cx));
let ret = futures::ready!(body.poll_data(cx));
// If this was the last frame, then drop the handle immediately.
if this.body.is_end_stream() {
drop(this.handle.take());
}
Poll::Ready(frame)
Poll::Ready(ret)
}
fn poll_trailers(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
let this = self.project();
let ret = futures::ready!(this.body.poll_trailers(cx));
// Once trailers are received, the handle is dropped immediately (in case the body
// is retained longer for some reason).
drop(this.handle.take());
Poll::Ready(ret)
}
#[inline]
@@ -172,7 +198,7 @@ impl<T: Send + 'static, B: Body> Body for PendingUntilEosBody<T, B> {
mod tests {
use super::{PendingUntilEos, PendingUntilFirstData};
use futures::future::poll_fn;
use http_body::{Body, Frame};
use hyper::body::HttpBody;
use std::collections::VecDeque;
use std::io::Cursor;
use std::pin::Pin;
@@ -199,13 +225,11 @@ mod tests {
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok")
.into_data()
.expect("frame is data");
.expect("data some")
.expect("data ok");
assert!(wk.upgrade().is_none());
}
@@ -258,10 +282,10 @@ mod tests {
let res = assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll());
assert!(res.expect("frame is some").is_err());
assert!(res.expect("data is some").is_err());
assert!(wk.upgrade().is_none());
}
@@ -284,21 +308,21 @@ mod tests {
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data some")
.expect("data ok");
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data some")
.expect("data ok");
assert!(wk.upgrade().is_none());
}
@@ -331,42 +355,40 @@ mod tests {
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data")
.expect("data ok");
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok");
.expect("data")
.expect("data ok");
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
}))
.poll())
.expect("frame is some")
.expect("frame is ok")
.into_trailers()
.expect("is trailers");
assert!(wk.upgrade().is_none());
let poll = assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll());
assert!(poll.is_none());
assert!(wk.upgrade().is_some());
assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_trailers(cx)
}))
.poll())
.expect("trailers ok")
.expect("trailers");
assert!(wk.upgrade().is_none());
}
@@ -389,7 +411,7 @@ mod tests {
let poll = assert_ready!(task::spawn(poll_fn(|cx| {
let body = &mut body;
tokio::pin!(body);
body.poll_frame(cx)
body.poll_data(cx)
}))
.poll());
assert!(poll.expect("some").is_err());
@@ -407,7 +429,7 @@ mod tests {
#[derive(Default)]
struct TestBody(VecDeque<&'static str>, Option<http::HeaderMap>);
impl Body for TestBody {
impl HttpBody for TestBody {
type Data = Cursor<&'static str>;
type Error = &'static str;
@@ -415,27 +437,26 @@ mod tests {
self.0.is_empty() & self.1.is_none()
}
fn poll_frame(
fn poll_data(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
Poll::Ready(self.as_mut().0.pop_front().map(Cursor::new).map(Ok))
}
fn poll_trailers(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
let mut this = self.as_mut();
// Return the next data frame from the sequence of chunks.
if let Some(chunk) = this.0.pop_front() {
let frame = Some(Ok(Frame::data(Cursor::new(chunk))));
return Poll::Ready(frame);
}
// Yield the trailers once all data frames have been yielded.
let trailers = this.1.take().map(Frame::<Self::Data>::trailers).map(Ok);
Poll::Ready(trailers)
assert!(this.0.is_empty());
Poll::Ready(Ok(this.1.take()))
}
}
#[derive(Default)]
struct ErrBody(Option<&'static str>);
impl Body for ErrBody {
impl HttpBody for ErrBody {
type Data = Cursor<&'static str>;
type Error = &'static str;
@@ -443,13 +464,18 @@ mod tests {
self.0.is_none()
}
fn poll_frame(
fn poll_data(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Option<Result<http_body::Frame<Self::Data>, Self::Error>>> {
let err = self.as_mut().0.take().expect("err");
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
Poll::Ready(Some(Err(self.as_mut().0.take().expect("err"))))
}
Poll::Ready(Some(Err(err)))
fn poll_trailers(
mut self: Pin<&mut Self>,
_: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
Poll::Ready(Err(self.as_mut().0.take().expect("err")))
}
}
}

View File

@@ -15,13 +15,9 @@ toolchain := ""
features := ""
export LINKERD2_PROXY_VERSION := env_var_or_default("LINKERD2_PROXY_VERSION", "0.0.0-dev" + `git rev-parse --short HEAD`)
export LINKERD2_PROXY_VERSION := env_var_or_default("LINKERD2_PROXY_VERSION", "0.0.0-dev." + `git rev-parse --short HEAD`)
export LINKERD2_PROXY_VENDOR := env_var_or_default("LINKERD2_PROXY_VENDOR", `whoami` + "@" + `hostname`)
# TODO: these variables will be included in dev v48
export AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_gnu := env_var_or_default("AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_gnu", "-fuse-ld=/usr/aarch64-linux-gnu/bin/ld")
export AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_musl := env_var_or_default("AWS_LC_SYS_CFLAGS_aarch64_unknown_linux_musl", "-fuse-ld=/usr/aarch64-linux-gnu/bin/ld")
# The version name to use for packages.
package_version := "v" + LINKERD2_PROXY_VERSION
@@ -30,30 +26,28 @@ docker-repo := "localhost/linkerd/proxy"
docker-tag := `git rev-parse --abbrev-ref HEAD | sed 's|/|.|g'` + "." + `git rev-parse --short HEAD`
docker-image := docker-repo + ":" + docker-tag
# The architecture name to use for packages. Either 'amd64' or 'arm64'.
# The architecture name to use for packages. Either 'amd64', 'arm64', or 'arm'.
arch := "amd64"
# The OS name to use for packages. Either 'linux' or 'windows'.
os := "linux"
libc := 'gnu'
# If a `arch` is specified, then we change the default cargo `--target`
# to support cross-compilation. Otherwise, we use `rustup` to find the default.
_target := if os + '-' + arch == "linux-amd64" {
_target := if arch == 'amd64' {
"x86_64-unknown-linux-" + libc
} else if os + '-' + arch == "linux-arm64" {
} else if arch == "arm64" {
"aarch64-unknown-linux-" + libc
} else if os + '-' + arch == "windows-amd64" {
"x86_64-pc-windows-" + libc
} else if arch == "arm" {
"armv7-unknown-linux-" + libc + "eabihf"
} else {
error("unsupported: os=" + os + " arch=" + arch + " libc=" + libc)
error("unsupported arch=" + arch)
}
_cargo := 'just-cargo profile=' + profile + ' target=' + _target + ' toolchain=' + toolchain
_target_dir := "target" / _target / profile
_target_bin := _target_dir / "linkerd2-proxy" + if os == 'windows' { '.exe' } else { '' }
_package_name := "linkerd2-proxy-" + package_version + "-" + os + "-" + arch + if libc == 'musl' { '-static' } else { '' }
_target_bin := _target_dir / "linkerd2-proxy"
_package_name := "linkerd2-proxy-" + package_version + "-" + arch + if libc == 'musl' { '-static' } else { '' }
_package_dir := "target/package" / _package_name
shasum := "shasum -a 256"
@@ -65,7 +59,7 @@ _features := if features == "all" {
wait-timeout := env_var_or_default("WAIT_TIMEOUT", "1m")
export CXX := 'clang++-19'
export CXX := 'clang++-14'
#
# Recipes
@@ -141,7 +135,7 @@ _strip:
_package_bin := _package_dir / "bin" / "linkerd2-proxy"
# XXX aarch64-musl builds do not enable PIE, so we use target-specific
# XXX {aarch64,arm}-musl builds do not enable PIE, so we use target-specific
# files to document those differences.
_expected_checksec := '.checksec' / arch + '-' + libc + '.json'
@@ -260,12 +254,6 @@ _tag-set:
_k3d-ready:
@just-k3d ready
export K3D_CLUSTER_NAME := "l5d-proxy"
export K3D_CREATE_FLAGS := "--no-lb"
export K3S_DISABLE := "local-storage,traefik,servicelb,metrics-server@server:*"
k3d-create: && _k3d-ready
@just-k3d create
k3d-load-linkerd: _tag-set _k3d-ready
for i in \
'{{ _controller-image }}:{{ linkerd-tag }}' \
@@ -282,7 +270,6 @@ k3d-load-linkerd: _tag-set _k3d-ready
# Install crds on the test cluster.
_linkerd-crds-install: _k3d-ready
{{ _kubectl }} apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
{{ _linkerd }} install --crds \
| {{ _kubectl }} apply -f -
{{ _kubectl }} wait crd --for condition=established \

View File

@@ -1,14 +1,14 @@
[package]
name = "linkerd-addr"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[dependencies]
http = { workspace = true }
ipnet = "2.11"
http = "0.2"
ipnet = "2.10"
linkerd-dns-name = { path = "../dns/name" }
thiserror = "2"

View File

@@ -1,10 +1,9 @@
[package]
name = "linkerd-addr-fuzz"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.0.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
@@ -13,7 +12,7 @@ cargo-fuzz = true
libfuzzer-sys = "0.4"
linkerd-addr = { path = ".." }
linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
tracing = { workspace = true }
tracing = "0.1"
# Prevent this from interfering with workspaces
[workspace]

View File

@@ -100,11 +100,15 @@ impl Addr {
// them ourselves.
format!("[{}]", a.ip())
};
http::uri::Authority::from_str(&ip)
.unwrap_or_else(|err| panic!("SocketAddr ({a}) must be valid authority: {err}"))
http::uri::Authority::from_str(&ip).unwrap_or_else(|err| {
panic!("SocketAddr ({}) must be valid authority: {}", a, err)
})
}
Addr::Socket(a) => {
http::uri::Authority::from_str(&a.to_string()).unwrap_or_else(|err| {
panic!("SocketAddr ({}) must be valid authority: {}", a, err)
})
}
Addr::Socket(a) => http::uri::Authority::from_str(&a.to_string())
.unwrap_or_else(|err| panic!("SocketAddr ({a}) must be valid authority: {err}")),
}
}
@@ -261,14 +265,14 @@ mod tests {
];
for (host, expected_result) in cases {
let a = Addr::from_str(host).unwrap();
assert_eq!(a.is_loopback(), *expected_result, "{host:?}")
assert_eq!(a.is_loopback(), *expected_result, "{:?}", host)
}
}
fn test_to_http_authority(cases: &[&str]) {
let width = cases.iter().map(|s| s.len()).max().unwrap_or(0);
for host in cases {
print!("trying {host:width$} ... ");
print!("trying {:1$} ... ", host, width);
Addr::from_str(host).unwrap().to_http_authority();
println!("ok");
}

View File

@@ -1,10 +1,10 @@
[package]
name = "linkerd-app"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Configures and executes the proxy
@@ -18,7 +18,6 @@ pprof = ["linkerd-app-admin/pprof"]
[dependencies]
futures = { version = "0.3", default-features = false }
hyper-util = { workspace = true }
linkerd-app-admin = { path = "./admin" }
linkerd-app-core = { path = "./core" }
linkerd-app-gateway = { path = "./gateway" }
@@ -28,12 +27,11 @@ linkerd-error = { path = "../error" }
linkerd-opencensus = { path = "../opencensus" }
linkerd-opentelemetry = { path = "../opentelemetry" }
linkerd-tonic-stream = { path = "../tonic-stream" }
linkerd-workers = { path = "../workers" }
rangemap = "1"
regex = "1"
thiserror = "2"
tokio = { version = "1", features = ["rt"] }
tokio-stream = { version = "0.1", features = ["time", "sync"] }
tonic = { workspace = true, default-features = false, features = ["prost"] }
tower = { workspace = true }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false, features = ["prost"] }
tower = "0.4"
tracing = "0.1"

View File

@@ -1,10 +1,10 @@
[package]
name = "linkerd-app-admin"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
The linkerd proxy's admin server.
"""
@@ -15,26 +15,24 @@ pprof = ["deflate", "dep:pprof"]
log-streaming = ["linkerd-tracing/stream"]
[dependencies]
bytes = { workspace = true }
deflate = { version = "1", optional = true, features = ["gzip"] }
http = { workspace = true }
http-body = { workspace = true }
http-body-util = { workspace = true }
hyper = { workspace = true, features = ["http1", "http2"] }
http = "0.2"
http-body = "0.4"
hyper = { version = "0.14", features = ["deprecated", "http1", "http2"] }
futures = { version = "0.3", default-features = false }
pprof = { version = "0.15", optional = true, features = ["prost-codec"] }
pprof = { version = "0.14", optional = true, features = ["prost-codec"] }
serde = "1"
serde_json = "1"
thiserror = "2"
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
tracing = { workspace = true }
tracing = "0.1"
linkerd-app-core = { path = "../core" }
linkerd-app-inbound = { path = "../inbound" }
linkerd-tracing = { path = "../../tracing" }
[dependencies.tower]
workspace = true
version = "0.4"
default-features = false
features = [
"buffer",

View File

@@ -12,9 +12,13 @@
use futures::future::{self, TryFutureExt};
use http::StatusCode;
use hyper::{
body::{Body, HttpBody},
Request, Response,
};
use linkerd_app_core::{
metrics::{self as metrics, legacy::FmtMetrics},
proxy::http::{Body, BoxBody, ClientHandle, Request, Response},
metrics::{self as metrics, FmtMetrics},
proxy::http::ClientHandle,
trace, Error, Result,
};
use std::{
@@ -32,7 +36,7 @@ pub use self::readiness::{Latch, Readiness};
#[derive(Clone)]
pub struct Admin<M> {
metrics: metrics::legacy::Serve<M>,
metrics: metrics::Serve<M>,
tracing: trace::Handle,
ready: Readiness,
shutdown_tx: mpsc::UnboundedSender<()>,
@@ -41,7 +45,7 @@ pub struct Admin<M> {
pprof: Option<crate::pprof::Pprof>,
}
pub type ResponseFuture = Pin<Box<dyn Future<Output = Result<Response<BoxBody>>> + Send + 'static>>;
pub type ResponseFuture = Pin<Box<dyn Future<Output = Result<Response<Body>>> + Send + 'static>>;
impl<M> Admin<M> {
pub fn new(
@@ -52,7 +56,7 @@ impl<M> Admin<M> {
tracing: trace::Handle,
) -> Self {
Self {
metrics: metrics::legacy::Serve::new(metrics),
metrics: metrics::Serve::new(metrics),
ready,
shutdown_tx,
enable_shutdown,
@@ -69,30 +73,30 @@ impl<M> Admin<M> {
self
}
fn ready_rsp(&self) -> Response<BoxBody> {
fn ready_rsp(&self) -> Response<Body> {
if self.ready.is_ready() {
Response::builder()
.status(StatusCode::OK)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("ready\n"))
.body("ready\n".into())
.expect("builder with known status code must not fail")
} else {
Response::builder()
.status(StatusCode::SERVICE_UNAVAILABLE)
.body(BoxBody::from_static("not ready\n"))
.body("not ready\n".into())
.expect("builder with known status code must not fail")
}
}
fn live_rsp() -> Response<BoxBody> {
fn live_rsp() -> Response<Body> {
Response::builder()
.status(StatusCode::OK)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("live\n"))
.body("live\n".into())
.expect("builder with known status code must not fail")
}
fn env_rsp<B>(req: Request<B>) -> Response<BoxBody> {
fn env_rsp<B>(req: Request<B>) -> Response<Body> {
use std::{collections::HashMap, env, ffi::OsString};
if req.method() != http::Method::GET {
@@ -138,58 +142,56 @@ impl<M> Admin<M> {
json::json_rsp(&env)
}
fn shutdown(&self) -> Response<BoxBody> {
fn shutdown(&self) -> Response<Body> {
if !self.enable_shutdown {
return Response::builder()
.status(StatusCode::NOT_FOUND)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("shutdown endpoint is not enabled\n"))
.body("shutdown endpoint is not enabled\n".into())
.expect("builder with known status code must not fail");
}
if self.shutdown_tx.send(()).is_ok() {
Response::builder()
.status(StatusCode::OK)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("shutdown\n"))
.body("shutdown\n".into())
.expect("builder with known status code must not fail")
} else {
Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::from_static("shutdown listener dropped\n"))
.body("shutdown listener dropped\n".into())
.expect("builder with known status code must not fail")
}
}
fn internal_error_rsp(error: impl ToString) -> http::Response<BoxBody> {
fn internal_error_rsp(error: impl ToString) -> http::Response<Body> {
http::Response::builder()
.status(http::StatusCode::INTERNAL_SERVER_ERROR)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::new(error.to_string()))
.body(error.to_string().into())
.expect("builder with known status code should not fail")
}
fn not_found() -> Response<BoxBody> {
fn not_found() -> Response<Body> {
Response::builder()
.status(http::StatusCode::NOT_FOUND)
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail")
}
fn method_not_allowed() -> Response<BoxBody> {
fn method_not_allowed() -> Response<Body> {
Response::builder()
.status(http::StatusCode::METHOD_NOT_ALLOWED)
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail")
}
fn forbidden_not_localhost() -> Response<BoxBody> {
fn forbidden_not_localhost() -> Response<Body> {
Response::builder()
.status(http::StatusCode::FORBIDDEN)
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::new::<String>(
"Requests are only permitted from localhost.".into(),
))
.body("Requests are only permitted from localhost.".into())
.expect("builder with known status code must not fail")
}
@@ -213,11 +215,11 @@ impl<M> Admin<M> {
impl<M, B> tower::Service<http::Request<B>> for Admin<M>
where
M: FmtMetrics,
B: Body + Send + 'static,
B: HttpBody + Send + 'static,
B::Error: Into<Error>,
B::Data: Send,
{
type Response = http::Response<BoxBody>;
type Response = http::Response<Body>;
type Error = Error;
type Future = ResponseFuture;
@@ -329,7 +331,7 @@ mod tests {
let r = Request::builder()
.method(Method::GET)
.uri("http://0.0.0.0/ready")
.body(BoxBody::empty())
.body(Body::empty())
.unwrap();
let f = admin.clone().oneshot(r);
timeout(TIMEOUT, f).await.expect("timeout").expect("call")

View File

@@ -1,17 +1,14 @@
static JSON_MIME: &str = "application/json";
pub(in crate::server) static JSON_HEADER_VAL: HeaderValue = HeaderValue::from_static(JSON_MIME);
use bytes::Bytes;
use hyper::{
header::{self, HeaderValue},
StatusCode,
Body, StatusCode,
};
use linkerd_app_core::proxy::http::BoxBody;
pub(crate) fn json_error_rsp(
error: impl ToString,
status: http::StatusCode,
) -> http::Response<BoxBody> {
) -> http::Response<Body> {
mk_rsp(
status,
&serde_json::json!({
@@ -21,12 +18,11 @@ pub(crate) fn json_error_rsp(
)
}
pub(crate) fn json_rsp(val: &impl serde::Serialize) -> http::Response<BoxBody> {
pub(crate) fn json_rsp(val: &impl serde::Serialize) -> http::Response<Body> {
mk_rsp(StatusCode::OK, val)
}
#[allow(clippy::result_large_err)]
pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Response<BoxBody>> {
pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Response<Body>> {
if let Some(accept) = req.headers().get(header::ACCEPT) {
let accept = match std::str::from_utf8(accept.as_bytes()) {
Ok(accept) => accept,
@@ -45,7 +41,7 @@ pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Respon
tracing::warn!(?accept, "Accept header will not accept 'application/json'");
return Err(http::Response::builder()
.status(StatusCode::NOT_ACCEPTABLE)
.body(BoxBody::from_static(JSON_MIME))
.body(JSON_MIME.into())
.expect("builder with known status code must not fail"));
}
}
@@ -53,26 +49,18 @@ pub(crate) fn accepts_json<B>(req: &http::Request<B>) -> Result<(), http::Respon
Ok(())
}
fn mk_rsp(status: StatusCode, val: &impl serde::Serialize) -> http::Response<BoxBody> {
// Serialize the value into JSON, and then place the bytes in a boxed response body.
let json = serde_json::to_vec(val)
.map(Bytes::from)
.map(http_body_util::Full::new)
.map(BoxBody::new);
match json {
Ok(body) => http::Response::builder()
fn mk_rsp(status: StatusCode, val: &impl serde::Serialize) -> http::Response<Body> {
match serde_json::to_vec(val) {
Ok(json) => http::Response::builder()
.status(status)
.header(header::CONTENT_TYPE, JSON_HEADER_VAL.clone())
.body(body)
.body(json.into())
.expect("builder with known status code must not fail"),
Err(error) => {
tracing::warn!(?error, "failed to serialize JSON value");
http::Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.body(BoxBody::new(format!(
"failed to serialize JSON value: {error}"
)))
.body(format!("failed to serialize JSON value: {error}").into())
.expect("builder with known status code must not fail")
}
}

View File

@@ -1,18 +1,17 @@
use bytes::Buf;
use http::{header, StatusCode};
use linkerd_app_core::{
proxy::http::{Body, BoxBody},
trace::level,
Error,
use hyper::{
body::{Buf, HttpBody},
Body,
};
use linkerd_app_core::{trace::level, Error};
use std::io;
pub async fn serve<B>(
level: level::Handle,
req: http::Request<B>,
) -> Result<http::Response<BoxBody>, Error>
) -> Result<http::Response<Body>, Error>
where
B: Body,
B: HttpBody,
B::Error: Into<Error>,
{
Ok(match *req.method() {
@@ -22,15 +21,14 @@ where
}
http::Method::PUT => {
use http_body_util::BodyExt;
let body = req
.into_body()
.collect()
.await
.map_err(io::Error::other)?
.map_err(|e| io::Error::new(io::ErrorKind::Other, e))?
.aggregate();
match level.set_from(body.chunk()) {
Ok(_) => mk_rsp(StatusCode::NO_CONTENT, BoxBody::empty()),
Ok(_) => mk_rsp(StatusCode::NO_CONTENT, Body::empty()),
Err(error) => {
tracing::warn!(%error, "Setting log level failed");
mk_rsp(StatusCode::BAD_REQUEST, error)
@@ -42,19 +40,14 @@ where
.status(StatusCode::METHOD_NOT_ALLOWED)
.header(header::ALLOW, "GET")
.header(header::ALLOW, "PUT")
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail"),
})
}
fn mk_rsp<B>(status: StatusCode, body: B) -> http::Response<BoxBody>
where
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Into<Error>,
{
fn mk_rsp(status: StatusCode, body: impl Into<Body>) -> http::Response<Body> {
http::Response::builder()
.status(status)
.body(BoxBody::new(body))
.body(body.into())
.expect("builder with known status code must not fail")
}

View File

@@ -1,9 +1,10 @@
use crate::server::json;
use bytes::{Buf, Bytes};
use futures::FutureExt;
use hyper::{header, StatusCode};
use hyper::{
body::{Buf, Bytes},
header, Body, StatusCode,
};
use linkerd_app_core::{
proxy::http::{Body, BoxBody},
trace::{self},
Error,
};
@@ -26,9 +27,9 @@ macro_rules! recover {
pub async fn serve<B>(
handle: trace::Handle,
req: http::Request<B>,
) -> Result<http::Response<BoxBody>, Error>
) -> Result<http::Response<Body>, Error>
where
B: Body,
B: hyper::body::HttpBody,
B::Error: Into<Error>,
{
let handle = handle.into_stream();
@@ -51,13 +52,11 @@ where
// If the request is a QUERY, use the request body
method if method.as_str() == "QUERY" => {
// TODO(eliza): validate that the request has a content-length...
use http_body_util::BodyExt;
let body = recover!(
req.into_body()
.collect()
http_body::Body::collect(req.into_body())
.await
.map_err(Into::into)
.map(http_body_util::Collected::aggregate),
.map(http_body::Collected::aggregate),
"Reading log stream request body",
StatusCode::BAD_REQUEST
);
@@ -76,7 +75,7 @@ where
.status(StatusCode::METHOD_NOT_ALLOWED)
.header(header::ALLOW, "GET")
.header(header::ALLOW, "QUERY")
.body(BoxBody::empty())
.body(Body::empty())
.expect("builder with known status code must not fail"));
}
};
@@ -101,7 +100,7 @@ where
// https://github.com/hawkw/thingbuf/issues/62 would allow us to avoid the
// copy by passing the channel's pooled buffer directly to hyper, and
// returning it to the channel to be reused when hyper is done with it.
let (mut tx, body) = http_body_util::channel::Channel::<Bytes, Error>::new(1024);
let (mut tx, body) = Body::channel();
tokio::spawn(
async move {
// TODO(eliza): we could definitely implement some batching here.
@@ -126,7 +125,7 @@ where
}),
);
Ok(mk_rsp(StatusCode::OK, BoxBody::new(body)))
Ok(mk_rsp(StatusCode::OK, body))
}
fn parse_filter(filter_str: &str) -> Result<EnvFilter, impl std::error::Error> {
@@ -135,10 +134,10 @@ fn parse_filter(filter_str: &str) -> Result<EnvFilter, impl std::error::Error> {
filter
}
fn mk_rsp<B>(status: StatusCode, body: B) -> http::Response<B> {
fn mk_rsp(status: StatusCode, body: impl Into<Body>) -> http::Response<Body> {
http::Response::builder()
.status(status)
.header(header::CONTENT_TYPE, json::JSON_HEADER_VAL.clone())
.body(body)
.body(body.into())
.expect("builder with known status code must not fail")
}

View File

@@ -1,8 +1,8 @@
use linkerd_app_core::{
classify,
config::ServerConfig,
drain, errors, identity,
metrics::{self, legacy::FmtMetrics},
detect, drain, errors, identity,
metrics::{self, FmtMetrics},
proxy::http,
serve,
svc::{self, ExtractParam, InsertParam, Param},
@@ -52,7 +52,7 @@ struct Tcp {
#[derive(Clone, Debug)]
struct Http {
tcp: Tcp,
version: http::Variant,
version: http::Version,
}
#[derive(Clone, Debug)]
@@ -122,7 +122,6 @@ impl Config {
.push_on_service(http::BoxResponse::layer())
.arc_new_clone_http();
let inbound::DetectMetrics(detect_metrics) = metrics.detect.clone();
let tcp = http
.unlift_new()
.push(http::NewServeHttp::layer({
@@ -137,11 +136,11 @@ impl Config {
}))
.push_filter(
|(http, tcp): (
http::Detection,
Result<Option<http::Version>, detect::DetectTimeoutError<_>>,
Tcp,
)| {
match http {
http::Detection::Http(version) => Ok(Http { version, tcp }),
Ok(Some(version)) => Ok(Http { version, tcp }),
// If detection timed out, we can make an educated guess at the proper
// behavior:
// - If the connection was meshed, it was most likely transported over
@@ -149,12 +148,12 @@ impl Config {
// - If the connection was unmeshed, it was most likely HTTP/1.
// - If we received some unexpected SNI, the client is most likely
// confused/stale.
http::Detection::ReadTimeout(_timeout) => {
Err(_timeout) => {
let version = match tcp.tls {
tls::ConditionalServerTls::None(_) => http::Variant::Http1,
tls::ConditionalServerTls::None(_) => http::Version::Http1,
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
..
}) => http::Variant::H2,
}) => http::Version::H2,
tls::ConditionalServerTls::Some(tls::ServerTls::Passthru {
sni,
}) => {
@@ -167,7 +166,7 @@ impl Config {
}
// If the connection failed HTTP detection, check if we detected TLS for
// another target. This might indicate that the client is confused/stale.
http::Detection::NotHttp => match tcp.tls {
Ok(None) => match tcp.tls {
tls::ConditionalServerTls::Some(tls::ServerTls::Passthru { sni }) => {
Err(UnexpectedSni(sni, tcp.client).into())
}
@@ -178,12 +177,9 @@ impl Config {
)
.arc_new_tcp()
.lift_new_with_target()
.push(http::NewDetect::layer(move |tcp: &Tcp| {
http::DetectParams {
read_timeout: DETECT_TIMEOUT,
metrics: detect_metrics.metrics(tcp.policy.server_label())
}
}))
.push(detect::NewDetectService::layer(svc::stack::CloneParam::from(
detect::Config::<http::DetectHttp>::from_timeout(DETECT_TIMEOUT),
)))
.push(transport::metrics::NewServer::layer(metrics.proxy.transport))
.push_map_target(move |(tls, addrs): (tls::ConditionalServerTls, B::Addrs)| {
Tcp {
@@ -214,7 +210,7 @@ impl Config {
impl Param<transport::labels::Key> for Tcp {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
self.tls.as_ref().map(|t| t.labels()),
self.tls.clone(),
self.addr.into(),
self.policy.server_label(),
)
@@ -223,8 +219,8 @@ impl Param<transport::labels::Key> for Tcp {
// === impl Http ===
impl Param<http::Variant> for Http {
fn param(&self) -> http::Variant {
impl Param<http::Version> for Http {
fn param(&self) -> http::Version {
self.version
}
}
@@ -272,7 +268,7 @@ impl Param<metrics::ServerLabel> for Http {
impl Param<metrics::EndpointLabels> for Permitted {
fn param(&self) -> metrics::EndpointLabels {
metrics::InboundEndpointLabels {
tls: self.http.tcp.tls.as_ref().map(|t| t.labels()),
tls: self.http.tcp.tls.clone(),
authority: None,
target_addr: self.http.tcp.addr.into(),
policy: self.permit.labels.clone(),


@@ -1,10 +1,10 @@
[package]
name = "linkerd-app-core"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Core infrastructure for the proxy application
@@ -13,23 +13,30 @@ independently of the inbound and outbound proxy logic.
"""
[dependencies]
drain = { workspace = true, features = ["retain"] }
http = { workspace = true }
http-body = { workspace = true }
hyper = { workspace = true, features = ["http1", "http2"] }
bytes = "1"
drain = { version = "0.1", features = ["retain"] }
http = "0.2"
http-body = "0.4"
hyper = { version = "0.14", features = ["deprecated", "http1", "http2"] }
futures = { version = "0.3", default-features = false }
ipnet = "2.11"
prometheus-client = { workspace = true }
ipnet = "2.10"
prometheus-client = "0.22"
regex = "1"
serde_json = "1"
thiserror = "2"
tokio = { version = "1", features = ["macros", "sync", "parking_lot"] }
tokio-stream = { version = "0.1", features = ["time"] }
tonic = { workspace = true, default-features = false, features = ["prost"] }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false, features = ["prost"] }
tracing = "0.1"
parking_lot = "0.12"
pin-project = "1"
linkerd-addr = { path = "../../addr" }
linkerd-conditional = { path = "../../conditional" }
linkerd-dns = { path = "../../dns" }
linkerd-detect = { path = "../../detect" }
linkerd-duplex = { path = "../../duplex" }
linkerd-errno = { path = "../../errno" }
linkerd-error = { path = "../../error" }
linkerd-error-respond = { path = "../../error-respond" }
linkerd-exp-backoff = { path = "../../exp-backoff" }
@@ -56,7 +63,6 @@ linkerd-proxy-tcp = { path = "../../proxy/tcp" }
linkerd-proxy-transport = { path = "../../proxy/transport" }
linkerd-reconnect = { path = "../../reconnect" }
linkerd-router = { path = "../../router" }
linkerd-rustls = { path = "../../rustls" }
linkerd-service-profiles = { path = "../../service-profiles" }
linkerd-stack = { path = "../../stack" }
linkerd-stack-metrics = { path = "../../stack/metrics" }
@@ -68,14 +74,15 @@ linkerd-tls = { path = "../../tls" }
linkerd-trace-context = { path = "../../trace-context" }
[dependencies.tower]
workspace = true
version = "0.4"
default-features = false
features = ["make", "spawn-ready", "timeout", "util", "limit"]
[target.'cfg(target_os = "linux")'.dependencies]
linkerd-system = { path = "../../system" }
[build-dependencies]
semver = "1"
[dev-dependencies]
bytes = { workspace = true }
http-body-util = { workspace = true }
linkerd-mock-http-body = { path = "../../mock/http-body" }
quickcheck = { version = "1", default-features = false }


@@ -4,18 +4,18 @@ fn set_env(name: &str, cmd: &mut Command) {
let value = match cmd.output() {
Ok(output) => String::from_utf8(output.stdout).unwrap(),
Err(err) => {
println!("cargo:warning={err}");
println!("cargo:warning={}", err);
"".to_string()
}
};
println!("cargo:rustc-env={name}={value}");
println!("cargo:rustc-env={}={}", name, value);
}
fn version() -> String {
if let Ok(v) = std::env::var("LINKERD2_PROXY_VERSION") {
if !v.is_empty() {
if let Err(err) = semver::Version::parse(&v) {
panic!("LINKERD2_PROXY_VERSION must be semver: version='{v}' error='{err}'");
if semver::Version::parse(&v).is_err() {
panic!("LINKERD2_PROXY_VERSION must be semver");
}
return v;
}


@@ -1,4 +1,5 @@
use crate::profiles;
pub use classify::gate;
use linkerd_error::Error;
use linkerd_proxy_client_policy as client_policy;
use linkerd_proxy_http::{classify, HasH2Reason, ResponseTimeoutError};
@@ -213,7 +214,7 @@ fn h2_error(err: &Error) -> String {
if let Some(reason) = err.h2_reason() {
// This should output the error code in the same format as the spec,
// for example: PROTOCOL_ERROR
format!("h2({reason:?})")
format!("h2({:?})", reason)
} else {
trace!("classifying found non-h2 error: {:?}", err);
String::from("unclassified")


@@ -1,7 +1,7 @@
pub use crate::exp_backoff::ExponentialBackoff;
use crate::{
proxy::http::{h1, h2},
svc::{queue, ExtractParam, Param},
proxy::http::{self, h1, h2},
svc::{queue, CloneParam, ExtractParam, Param},
transport::{DualListenAddr, Keepalive, ListenAddr, UserTimeout},
};
use std::time::Duration;
@@ -59,6 +59,14 @@ impl<T> ExtractParam<queue::Timeout, T> for QueueConfig {
}
}
// === impl ProxyConfig ===
impl ProxyConfig {
pub fn detect_http(&self) -> CloneParam<linkerd_detect::Config<http::DetectHttp>> {
linkerd_detect::Config::from_timeout(self.detect_protocol_timeout).into()
}
}
// === impl ServerConfig ===
impl Param<DualListenAddr> for ServerConfig {


@@ -69,10 +69,8 @@ impl fmt::Display for ControlAddr {
}
}
pub type RspBody = linkerd_http_metrics::requests::ResponseBody<
http::balance::Body<hyper::body::Incoming>,
classify::Eos,
>;
pub type RspBody =
linkerd_http_metrics::requests::ResponseBody<http::balance::Body<hyper::Body>, classify::Eos>;
#[derive(Clone, Debug, Default)]
pub struct Metrics {
@@ -101,7 +99,7 @@ impl Config {
identity: identity::NewClient,
) -> svc::ArcNewService<
(),
svc::BoxCloneSyncService<http::Request<tonic::body::Body>, http::Response<RspBody>>,
svc::BoxCloneSyncService<http::Request<tonic::body::BoxBody>, http::Response<RspBody>>,
> {
let addr = self.addr;
tracing::trace!(%addr, "Building");
@@ -114,7 +112,7 @@ impl Config {
warn!(error, "Failed to resolve control-plane component");
if let Some(e) = crate::errors::cause_ref::<dns::ResolveError>(&*error) {
if let Some(ttl) = e.negative_ttl() {
return Ok::<_, Error>(Either::Left(
return Ok(Either::Left(
IntervalStream::new(time::interval(ttl)).map(|_| ()),
));
}
@@ -131,9 +129,9 @@ impl Config {
self.connect.user_timeout,
))
.push(tls::Client::layer(identity))
.push_connect_timeout(self.connect.timeout) // Client<NewClient, ConnectTcp>
.push_connect_timeout(self.connect.timeout)
.push_map_target(|(_version, target)| target)
.push(self::client::layer::<_, _>(self.connect.http2))
.push(self::client::layer(self.connect.http2))
.push_on_service(svc::MapErr::layer_boxed())
.into_new_service();


@@ -1,50 +1,25 @@
use self::metrics::Labels;
use linkerd_metrics::prom::{Counter, Family, Registry};
use std::time::Duration;
pub use linkerd_dns::*;
mod metrics;
use std::path::PathBuf;
use std::time::Duration;
#[derive(Clone, Debug)]
pub struct Config {
pub min_ttl: Option<Duration>,
pub max_ttl: Option<Duration>,
pub resolv_conf_path: PathBuf,
}
pub struct Dns {
resolver: Resolver,
resolutions: Family<Labels, Counter>,
}
// === impl Dns ===
impl Dns {
/// Returns a new [`Resolver`].
pub fn resolver(&self, client: &'static str) -> Resolver {
let metrics = self.metrics(client);
self.resolver.clone().with_metrics(metrics)
}
pub resolver: Resolver,
}
// === impl Config ===
impl Config {
pub fn build(self, registry: &mut Registry) -> Dns {
let resolutions = Family::default();
registry.register(
"resolutions",
"Counts the number of DNS records that have been resolved.",
resolutions.clone(),
);
pub fn build(self) -> Dns {
let resolver =
Resolver::from_system_config_with(&self).expect("system DNS config must be valid");
Dns {
resolver,
resolutions,
}
Dns { resolver }
}
}


@@ -1,115 +0,0 @@
use super::{Dns, Metrics};
use linkerd_metrics::prom::encoding::{
EncodeLabel, EncodeLabelSet, EncodeLabelValue, LabelSetEncoder, LabelValueEncoder,
};
use std::fmt::{Display, Write};
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
pub(super) struct Labels {
client: &'static str,
record_type: RecordType,
result: Outcome,
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
enum RecordType {
A,
Srv,
}
#[derive(Clone, Debug, Eq, Hash, PartialEq)]
enum Outcome {
Ok,
NotFound,
}
// === impl Dns ===
impl Dns {
pub(super) fn metrics(&self, client: &'static str) -> Metrics {
let family = &self.resolutions;
let a_records_resolved = (*family.get_or_create(&Labels {
client,
record_type: RecordType::A,
result: Outcome::Ok,
}))
.clone();
let a_records_not_found = (*family.get_or_create(&Labels {
client,
record_type: RecordType::A,
result: Outcome::NotFound,
}))
.clone();
let srv_records_resolved = (*family.get_or_create(&Labels {
client,
record_type: RecordType::Srv,
result: Outcome::Ok,
}))
.clone();
let srv_records_not_found = (*family.get_or_create(&Labels {
client,
record_type: RecordType::Srv,
result: Outcome::NotFound,
}))
.clone();
Metrics {
a_records_resolved,
a_records_not_found,
srv_records_resolved,
srv_records_not_found,
}
}
}
// === impl Labels ===
impl EncodeLabelSet for Labels {
fn encode(&self, mut encoder: LabelSetEncoder<'_>) -> Result<(), std::fmt::Error> {
let Self {
client,
record_type,
result,
} = self;
("client", *client).encode(encoder.encode_label())?;
("record_type", record_type).encode(encoder.encode_label())?;
("result", result).encode(encoder.encode_label())?;
Ok(())
}
}
// === impl Outcome ===
impl EncodeLabelValue for &Outcome {
fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), std::fmt::Error> {
encoder.write_str(self.to_string().as_str())
}
}
impl Display for Outcome {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Self::Ok => "ok",
Self::NotFound => "not_found",
})
}
}
// === impl RecordType ===
impl EncodeLabelValue for &RecordType {
fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), std::fmt::Error> {
encoder.write_str(self.to_string().as_str())
}
}
impl Display for RecordType {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(match self {
Self::A => "A/AAAA",
Self::Srv => "SRV",
})
}
}


@@ -1,4 +1,3 @@
pub mod body;
pub mod respond;
pub use self::respond::{HttpRescue, NewRespond, NewRespondService, SyntheticHttpResponse};
@@ -7,16 +6,6 @@ pub use linkerd_proxy_http::h2::H2Error;
pub use linkerd_stack::{FailFastError, LoadShedError};
pub use tonic::Code as Grpc;
/// Header names and values related to error responses.
pub mod header {
use http::header::{HeaderName, HeaderValue};
pub const L5D_PROXY_CONNECTION: HeaderName = HeaderName::from_static("l5d-proxy-connection");
pub const L5D_PROXY_ERROR: HeaderName = HeaderName::from_static("l5d-proxy-error");
pub(super) const GRPC_CONTENT_TYPE: HeaderValue = HeaderValue::from_static("application/grpc");
pub(super) const GRPC_MESSAGE: HeaderName = HeaderName::from_static("grpc-message");
pub(super) const GRPC_STATUS: HeaderName = HeaderName::from_static("grpc-status");
}
#[derive(Debug, thiserror::Error)]
#[error("connect timed out after {0:?}")]
pub struct ConnectTimeout(pub(crate) std::time::Duration);
@@ -29,27 +18,3 @@ pub fn has_grpc_status(error: &crate::Error, code: tonic::Code) -> bool {
.map(|s| s.code() == code)
.unwrap_or(false)
}
// Copied from tonic, where it's private.
fn code_header(code: tonic::Code) -> http::HeaderValue {
use {http::HeaderValue, tonic::Code};
match code {
Code::Ok => HeaderValue::from_static("0"),
Code::Cancelled => HeaderValue::from_static("1"),
Code::Unknown => HeaderValue::from_static("2"),
Code::InvalidArgument => HeaderValue::from_static("3"),
Code::DeadlineExceeded => HeaderValue::from_static("4"),
Code::NotFound => HeaderValue::from_static("5"),
Code::AlreadyExists => HeaderValue::from_static("6"),
Code::PermissionDenied => HeaderValue::from_static("7"),
Code::ResourceExhausted => HeaderValue::from_static("8"),
Code::FailedPrecondition => HeaderValue::from_static("9"),
Code::Aborted => HeaderValue::from_static("10"),
Code::OutOfRange => HeaderValue::from_static("11"),
Code::Unimplemented => HeaderValue::from_static("12"),
Code::Internal => HeaderValue::from_static("13"),
Code::Unavailable => HeaderValue::from_static("14"),
Code::DataLoss => HeaderValue::from_static("15"),
Code::Unauthenticated => HeaderValue::from_static("16"),
}
}


@@ -1,314 +0,0 @@
use super::{
header::{GRPC_MESSAGE, GRPC_STATUS},
respond::{HttpRescue, SyntheticHttpResponse},
};
use http::header::HeaderValue;
use http_body::Frame;
use linkerd_error::{Error, Result};
use pin_project::pin_project;
use std::{
pin::Pin,
task::{Context, Poll},
};
use tracing::{debug, warn};
/// Returns a "gRPC rescue" body.
///
/// This returns a body that, should the inner `B`-typed body return an error when polling for
/// DATA frames, will "rescue" the stream and return a TRAILERS frame that describes the error.
#[pin_project(project = ResponseBodyProj)]
pub struct ResponseBody<R, B>(#[pin] Inner<R, B>);
#[pin_project(project = InnerProj)]
enum Inner<R, B> {
/// An inert body that delegates directly down to the underlying body `B`.
Passthru(#[pin] B),
/// A body that will be rescued if it yields an error.
GrpcRescue {
#[pin]
inner: B,
/// An error response [strategy][HttpRescue].
rescue: R,
emit_headers: bool,
},
/// The underlying body `B` yielded an error and was "rescued".
Rescued,
}
// === impl ResponseBody ===
impl<R, B> ResponseBody<R, B> {
/// Returns a body in "passthru" mode.
pub fn passthru(inner: B) -> Self {
Self(Inner::Passthru(inner))
}
/// Returns a "gRPC rescue" body.
pub fn grpc_rescue(inner: B, rescue: R, emit_headers: bool) -> Self {
Self(Inner::GrpcRescue {
inner,
rescue,
emit_headers,
})
}
}
impl<R, B: Default + linkerd_proxy_http::Body> Default for ResponseBody<R, B> {
fn default() -> Self {
Self(Inner::Passthru(B::default()))
}
}
impl<R, B> linkerd_proxy_http::Body for ResponseBody<R, B>
where
B: linkerd_proxy_http::Body<Error = Error>,
R: HttpRescue<B::Error>,
{
type Data = B::Data;
type Error = B::Error;
fn poll_frame(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<std::result::Result<http_body::Frame<Self::Data>, Self::Error>>> {
let ResponseBodyProj(inner) = self.as_mut().project();
match inner.project() {
InnerProj::Passthru(inner) => inner.poll_frame(cx),
InnerProj::GrpcRescue {
inner,
rescue,
emit_headers,
} => match inner.poll_frame(cx) {
Poll::Ready(Some(Err(error))) => {
// The inner body has yielded an error, which we will try to rescue. If so,
// yield synthetic trailers reporting the error.
let trailers = Self::rescue(error, rescue, *emit_headers)?;
self.set(Self(Inner::Rescued));
Poll::Ready(Some(Ok(Frame::trailers(trailers))))
}
poll => poll,
},
InnerProj::Rescued => Poll::Ready(None),
}
}
#[inline]
fn is_end_stream(&self) -> bool {
let Self(inner) = self;
match inner {
Inner::Passthru(inner) => inner.is_end_stream(),
Inner::GrpcRescue { inner, .. } => inner.is_end_stream(),
Inner::Rescued => true,
}
}
#[inline]
fn size_hint(&self) -> http_body::SizeHint {
let Self(inner) = self;
match inner {
Inner::Passthru(inner) => inner.size_hint(),
Inner::GrpcRescue { inner, .. } => inner.size_hint(),
Inner::Rescued => http_body::SizeHint::with_exact(0),
}
}
}
impl<R, B> ResponseBody<R, B>
where
B: http_body::Body,
R: HttpRescue<B::Error>,
{
/// Maps an error yielded by the inner body to a collection of gRPC trailers.
///
/// This function returns `Ok(trailers)` if the given [`HttpRescue<E>`] strategy could identify
/// a cause for an error yielded by the inner `B`-typed body.
fn rescue(
error: B::Error,
rescue: &R,
emit_headers: bool,
) -> Result<http::HeaderMap, B::Error> {
let SyntheticHttpResponse {
grpc_status,
message,
..
} = rescue.rescue(error)?;
debug!(grpc.status = ?grpc_status, "Synthesizing gRPC trailers");
let mut t = http::HeaderMap::new();
t.insert(GRPC_STATUS, super::code_header(grpc_status));
if emit_headers {
// A gRPC message trailer is only included if instructed to emit additional headers.
t.insert(
GRPC_MESSAGE,
HeaderValue::from_str(&message).unwrap_or_else(|error| {
warn!(%error, "Failed to encode error header");
HeaderValue::from_static("Unexpected error")
}),
);
}
Ok(t)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::errors::header::{GRPC_MESSAGE, GRPC_STATUS};
use http::HeaderMap;
use linkerd_mock_http_body::MockBody;
struct MockRescue;
impl<E> HttpRescue<E> for MockRescue {
/// Attempts to synthesize a response from the given error.
fn rescue(&self, _: E) -> Result<SyntheticHttpResponse, E> {
let synthetic = SyntheticHttpResponse::internal_error("MockRescue::rescue");
Ok(synthetic)
}
}
#[tokio::test]
async fn rescue_body_recovers_from_error_without_grpc_message() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let trailers = {
let mut trls = HeaderMap::with_capacity(1);
let value = HeaderValue::from_static("caboose");
trls.insert("trailer", value);
trls
};
let rescue = {
let inner = MockBody::default()
.then_yield_data(Poll::Ready(Some(Ok("inter".into()))))
.then_yield_data(Poll::Ready(Some(Err("an error midstream".into()))))
.then_yield_data(Poll::Ready(Some(Ok("rupted".into()))))
.then_yield_trailer(Poll::Ready(Some(Ok(trailers))));
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, Some(trailers)) = body_to_string(rescue).await else {
panic!("trailers should exist");
};
assert_eq!(data, "inter");
assert_eq!(
trailers[GRPC_STATUS],
i32::from(tonic::Code::Internal).to_string()
);
assert_eq!(trailers.get(GRPC_MESSAGE), None);
}
#[tokio::test]
async fn rescue_body_recovers_from_error_emitting_message() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let trailers = {
let mut trls = HeaderMap::with_capacity(1);
let value = HeaderValue::from_static("caboose");
trls.insert("trailer", value);
trls
};
let rescue = {
let inner = MockBody::default()
.then_yield_data(Poll::Ready(Some(Ok("inter".into()))))
.then_yield_data(Poll::Ready(Some(Err("an error midstream".into()))))
.then_yield_data(Poll::Ready(Some(Ok("rupted".into()))))
.then_yield_trailer(Poll::Ready(Some(Ok(trailers))));
let rescue = MockRescue;
let emit_headers = true;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, Some(trailers)) = body_to_string(rescue).await else {
panic!("trailers should exist");
};
assert_eq!(data, "inter");
assert_eq!(
trailers[GRPC_STATUS],
i32::from(tonic::Code::Internal).to_string()
);
assert_eq!(trailers[GRPC_MESSAGE], "MockRescue::rescue");
}
#[tokio::test]
async fn rescue_body_works_for_empty() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let rescue = {
let inner = MockBody::default();
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, trailers) = body_to_string(rescue).await;
assert_eq!(data, "");
assert_eq!(trailers, None);
}
#[tokio::test]
async fn rescue_body_works_for_body_with_data() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let rescue = {
let inner = MockBody::default().then_yield_data(Poll::Ready(Some(Ok("unary".into()))));
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, trailers) = body_to_string(rescue).await;
assert_eq!(data, "unary");
assert_eq!(trailers, None);
}
#[tokio::test]
async fn rescue_body_works_for_body_with_trailers() {
let (_guard, _handle) = linkerd_tracing::test::trace_init();
let trailers = {
let mut trls = HeaderMap::with_capacity(1);
let value = HeaderValue::from_static("caboose");
trls.insert("trailer", value);
trls
};
let rescue = {
let inner = MockBody::default().then_yield_trailer(Poll::Ready(Some(Ok(trailers))));
let rescue = MockRescue;
let emit_headers = false;
ResponseBody::grpc_rescue(inner, rescue, emit_headers)
};
let (data, trailers) = body_to_string(rescue).await;
assert_eq!(data, "");
assert_eq!(trailers.expect("has trailers")["trailer"], "caboose");
}
async fn body_to_string<B>(mut body: B) -> (String, Option<HeaderMap>)
where
B: http_body::Body + Unpin,
B::Error: std::fmt::Debug,
{
use http_body_util::BodyExt;
let mut data = String::new();
let mut trailers = None;
// Continue reading frames from the body until it is finished.
while let Some(frame) = body
.frame()
.await
.transpose()
.expect("reading a frame succeeds")
{
match frame.into_data().map(|mut buf| {
use bytes::Buf;
let bytes = buf.copy_to_bytes(buf.remaining());
String::from_utf8(bytes.to_vec()).unwrap()
}) {
Ok(ref s) => data.push_str(s),
Err(frame) => {
let trls = frame
.into_trailers()
.map_err(drop)
.expect("test frame is either data or trailers");
trailers = Some(trls);
}
}
}
tracing::info!(?data, ?trailers, "finished reading body");
(data, trailers)
}
}

View File

@@ -1,16 +1,21 @@
use super::{
body::ResponseBody,
header::{GRPC_CONTENT_TYPE, GRPC_MESSAGE, GRPC_STATUS, L5D_PROXY_CONNECTION, L5D_PROXY_ERROR},
};
use crate::svc;
use http::header::{HeaderValue, LOCATION};
use linkerd_error::{Error, Result};
use linkerd_error_respond as respond;
use linkerd_proxy_http::{orig_proto, ClientHandle};
use linkerd_proxy_http::orig_proto;
pub use linkerd_proxy_http::{ClientHandle, HasH2Reason};
use linkerd_stack::ExtractParam;
use std::borrow::Cow;
use pin_project::pin_project;
use std::{
borrow::Cow,
pin::Pin,
task::{Context, Poll},
};
use tracing::{debug, info_span, warn};
pub const L5D_PROXY_CONNECTION: &str = "l5d-proxy-connection";
pub const L5D_PROXY_ERROR: &str = "l5d-proxy-error";
pub fn layer<R, P: Clone, N>(
params: P,
) -> impl svc::layer::Layer<N, Service = NewRespondService<R, P, N>> + Clone {
@@ -28,10 +33,10 @@ pub trait HttpRescue<E> {
#[derive(Clone, Debug)]
pub struct SyntheticHttpResponse {
pub grpc_status: tonic::Code,
grpc_status: tonic::Code,
http_status: http::StatusCode,
close_connection: bool,
pub message: Cow<'static, str>,
message: Cow<'static, str>,
location: Option<HeaderValue>,
}
@@ -57,6 +62,22 @@ pub struct Respond<R> {
emit_headers: bool,
}
#[pin_project(project = ResponseBodyProj)]
pub enum ResponseBody<R, B> {
Passthru(#[pin] B),
GrpcRescue {
#[pin]
inner: B,
trailers: Option<http::HeaderMap>,
rescue: R,
emit_headers: bool,
},
}
const GRPC_CONTENT_TYPE: &str = "application/grpc";
const GRPC_STATUS: &str = "grpc-status";
const GRPC_MESSAGE: &str = "grpc-message";
// === impl HttpRescue ===
impl<E, F> HttpRescue<E> for F
@@ -226,7 +247,7 @@ impl SyntheticHttpResponse {
.version(http::Version::HTTP_2)
.header(http::header::CONTENT_LENGTH, "0")
.header(http::header::CONTENT_TYPE, GRPC_CONTENT_TYPE)
.header(GRPC_STATUS, super::code_header(self.grpc_status));
.header(GRPC_STATUS, code_header(self.grpc_status));
if emit_headers {
rsp = rsp
@@ -325,15 +346,7 @@ where
let is_grpc = req
.headers()
.get(http::header::CONTENT_TYPE)
.and_then(|v| {
v.to_str().ok().map(|s| {
s.starts_with(
GRPC_CONTENT_TYPE
.to_str()
.expect("GRPC_CONTENT_TYPE only contains visible ASCII"),
)
})
})
.and_then(|v| v.to_str().ok().map(|s| s.starts_with(GRPC_CONTENT_TYPE)))
.unwrap_or(false);
Respond {
client,
@@ -375,7 +388,7 @@ impl<R> Respond<R> {
impl<B, R> respond::Respond<http::Response<B>, Error> for Respond<R>
where
B: Default + linkerd_proxy_http::Body,
B: Default + hyper::body::HttpBody,
R: HttpRescue<Error> + Clone,
{
type Response = http::Response<ResponseBody<R, B>>;
@@ -383,14 +396,19 @@ where
fn respond(&self, res: Result<http::Response<B>>) -> Result<Self::Response> {
let error = match res {
Ok(rsp) => {
return Ok(rsp.map(|inner| match self {
return Ok(rsp.map(|b| match self {
Respond {
is_grpc: true,
rescue,
emit_headers,
..
} => ResponseBody::grpc_rescue(inner, rescue.clone(), *emit_headers),
_ => ResponseBody::passthru(inner),
} => ResponseBody::GrpcRescue {
inner: b,
trailers: None,
rescue: rescue.clone(),
emit_headers: *emit_headers,
},
_ => ResponseBody::Passthru(b),
}));
}
Err(error) => error,
@@ -423,3 +441,127 @@ where
Ok(rsp)
}
}
// === impl ResponseBody ===
impl<R, B: Default + hyper::body::HttpBody> Default for ResponseBody<R, B> {
fn default() -> Self {
ResponseBody::Passthru(B::default())
}
}
impl<R, B> hyper::body::HttpBody for ResponseBody<R, B>
where
B: hyper::body::HttpBody<Error = Error>,
R: HttpRescue<B::Error>,
{
type Data = B::Data;
type Error = B::Error;
fn poll_data(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Self::Data, Self::Error>>> {
match self.project() {
ResponseBodyProj::Passthru(inner) => inner.poll_data(cx),
ResponseBodyProj::GrpcRescue {
inner,
trailers,
rescue,
emit_headers,
} => {
// should not be calling poll_data if we have set trailers derived from an error
assert!(trailers.is_none());
match inner.poll_data(cx) {
Poll::Ready(Some(Err(error))) => {
let SyntheticHttpResponse {
grpc_status,
message,
..
} = rescue.rescue(error)?;
let t = Self::grpc_trailers(grpc_status, &message, *emit_headers);
*trailers = Some(t);
Poll::Ready(None)
}
data => data,
}
}
}
}
#[inline]
fn poll_trailers(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Option<http::HeaderMap>, Self::Error>> {
match self.project() {
ResponseBodyProj::Passthru(inner) => inner.poll_trailers(cx),
ResponseBodyProj::GrpcRescue {
inner, trailers, ..
} => match trailers.take() {
Some(t) => Poll::Ready(Ok(Some(t))),
None => inner.poll_trailers(cx),
},
}
}
#[inline]
fn is_end_stream(&self) -> bool {
match self {
Self::Passthru(inner) => inner.is_end_stream(),
Self::GrpcRescue {
inner, trailers, ..
} => trailers.is_none() && inner.is_end_stream(),
}
}
#[inline]
fn size_hint(&self) -> http_body::SizeHint {
match self {
Self::Passthru(inner) => inner.size_hint(),
Self::GrpcRescue { inner, .. } => inner.size_hint(),
}
}
}
impl<R, B> ResponseBody<R, B> {
fn grpc_trailers(code: tonic::Code, message: &str, emit_headers: bool) -> http::HeaderMap {
debug!(grpc.status = ?code, "Synthesizing gRPC trailers");
let mut t = http::HeaderMap::new();
t.insert(GRPC_STATUS, code_header(code));
if emit_headers {
t.insert(
GRPC_MESSAGE,
HeaderValue::from_str(message).unwrap_or_else(|error| {
warn!(%error, "Failed to encode error header");
HeaderValue::from_static("Unexpected error")
}),
);
}
t
}
}
// Copied from tonic, where it's private.
fn code_header(code: tonic::Code) -> HeaderValue {
use tonic::Code;
match code {
Code::Ok => HeaderValue::from_static("0"),
Code::Cancelled => HeaderValue::from_static("1"),
Code::Unknown => HeaderValue::from_static("2"),
Code::InvalidArgument => HeaderValue::from_static("3"),
Code::DeadlineExceeded => HeaderValue::from_static("4"),
Code::NotFound => HeaderValue::from_static("5"),
Code::AlreadyExists => HeaderValue::from_static("6"),
Code::PermissionDenied => HeaderValue::from_static("7"),
Code::ResourceExhausted => HeaderValue::from_static("8"),
Code::FailedPrecondition => HeaderValue::from_static("9"),
Code::Aborted => HeaderValue::from_static("10"),
Code::OutOfRange => HeaderValue::from_static("11"),
Code::Unimplemented => HeaderValue::from_static("12"),
Code::Internal => HeaderValue::from_static("13"),
Code::Unavailable => HeaderValue::from_static("14"),
Code::DataLoss => HeaderValue::from_static("15"),
Code::Unauthenticated => HeaderValue::from_static("16"),
}
}


@@ -25,7 +25,6 @@ pub mod metrics;
pub mod proxy;
pub mod serve;
pub mod svc;
pub mod tls_info;
pub mod transport;
pub use self::build_info::{BuildInfo, BUILD_INFO};
@@ -33,6 +32,7 @@ pub use drain;
pub use ipnet::{IpNet, Ipv4Net, Ipv6Net};
pub use linkerd_addr::{self as addr, Addr, AddrMatch, IpMatch, NameAddr, NameMatch};
pub use linkerd_conditional::Conditional;
pub use linkerd_detect as detect;
pub use linkerd_dns;
pub use linkerd_error::{cause_ref, is_caused_by, Error, Infallible, Recover, Result};
pub use linkerd_exp_backoff as exp_backoff;


@@ -15,7 +15,7 @@ use crate::{
use linkerd_addr::Addr;
pub use linkerd_metrics::*;
use linkerd_proxy_server_policy as policy;
use prometheus_client::encoding::{EncodeLabelSet, EncodeLabelValue};
use prometheus_client::encoding::EncodeLabelValue;
use std::{
fmt::{self, Write},
net::SocketAddr,
@@ -54,7 +54,7 @@ pub struct Proxy {
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct ControlLabels {
addr: Addr,
server_id: tls::ConditionalClientTlsLabels,
server_id: tls::ConditionalClientTls,
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
@@ -65,7 +65,7 @@ pub enum EndpointLabels {
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct InboundEndpointLabels {
pub tls: tls::ConditionalServerTlsLabels,
pub tls: tls::ConditionalServerTls,
pub authority: Option<http::uri::Authority>,
pub target_addr: SocketAddr,
pub policy: RouteAuthzLabels,
@@ -73,7 +73,7 @@ pub struct InboundEndpointLabels {
/// A label referencing an inbound `Server` (i.e. for policy).
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ServerLabel(pub Arc<policy::Meta>, pub u16);
pub struct ServerLabel(pub Arc<policy::Meta>);
/// Labels referencing an inbound server and authorization.
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
@@ -98,7 +98,7 @@ pub struct RouteAuthzLabels {
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct OutboundEndpointLabels {
pub server_id: tls::ConditionalClientTlsLabels,
pub server_id: tls::ConditionalClientTls,
pub authority: Option<http::uri::Authority>,
pub labels: Option<String>,
pub zone_locality: OutboundZoneLocality,
@@ -155,10 +155,10 @@ where
I: Iterator<Item = (&'i String, &'i String)>,
{
let (k0, v0) = labels_iter.next()?;
let mut out = format!("{prefix}_{k0}=\"{v0}\"");
let mut out = format!("{}_{}=\"{}\"", prefix, k0, v0);
for (k, v) in labels_iter {
write!(out, ",{prefix}_{k}=\"{v}\"").expect("label concat must succeed");
write!(out, ",{}_{}=\"{}\"", prefix, k, v).expect("label concat must succeed");
}
Some(out)
}
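The hunk above migrates the label-prefixing helper to inline format args. As a runnable sketch of that helper (the function body mirrors the `main` side of the diff; the surrounding metrics module is omitted, and the sample label keys are illustrative):

```rust
use std::fmt::Write;

/// Joins `(key, value)` pairs as comma-separated `prefix_key="value"` labels,
/// returning None for an empty iterator. Mirrors the helper in the hunk above.
fn prefix_labels<'i, I>(prefix: &str, mut labels_iter: I) -> Option<String>
where
    I: Iterator<Item = (&'i String, &'i String)>,
{
    let (k0, v0) = labels_iter.next()?;
    let mut out = format!("{prefix}_{k0}=\"{v0}\"");
    for (k, v) in labels_iter {
        write!(out, ",{prefix}_{k}=\"{v}\"").expect("label concat must succeed");
    }
    Some(out)
}

fn main() {
    let labels = [
        ("zone".to_string(), "east".to_string()),
        ("tier".to_string(), "web".to_string()),
    ];
    let out = prefix_labels("dst", labels.iter().map(|(k, v)| (k, v)));
    assert_eq!(out.as_deref(), Some(r#"dst_zone="east",dst_tier="web""#));
    assert_eq!(prefix_labels("dst", std::iter::empty::<(&String, &String)>()), None);
    println!("{}", out.unwrap());
}
```

The `{prefix}_{k0}` captures require edition 2021's inline format arguments, which is why this hunk only touches formatting, not behavior.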
@@ -166,7 +166,7 @@ where
// === impl Metrics ===
impl Metrics {
pub fn new(retain_idle: Duration) -> (Self, impl legacy::FmtMetrics + Clone + Send + 'static) {
pub fn new(retain_idle: Duration) -> (Self, impl FmtMetrics + Clone + Send + 'static) {
let (control, control_report) = {
let m = http_metrics::Requests::<ControlLabels, Class>::default();
let r = m.clone().into_report(retain_idle).with_prefix("control");
@@ -223,7 +223,6 @@ impl Metrics {
opentelemetry,
};
use legacy::FmtMetrics as _;
let report = endpoint_report
.and_report(profile_route_report)
.and_report(retry_report)
@@ -244,17 +243,15 @@ impl svc::Param<ControlLabels> for control::ControlAddr {
fn param(&self) -> ControlLabels {
ControlLabels {
addr: self.addr.clone(),
server_id: self.identity.as_ref().map(tls::ClientTls::labels),
server_id: self.identity.clone(),
}
}
}
impl legacy::FmtLabels for ControlLabels {
impl FmtLabels for ControlLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { addr, server_id } = self;
write!(f, "addr=\"{addr}\",")?;
TlsConnect::from(server_id).fmt_labels(f)?;
write!(f, "addr=\"{}\",", self.addr)?;
TlsConnect::from(&self.server_id).fmt_labels(f)?;
Ok(())
}
@@ -282,19 +279,13 @@ impl ProfileRouteLabels {
}
}
impl legacy::FmtLabels for ProfileRouteLabels {
impl FmtLabels for ProfileRouteLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
direction,
addr,
labels,
} = self;
self.direction.fmt_labels(f)?;
write!(f, ",dst=\"{}\"", self.addr)?;
direction.fmt_labels(f)?;
write!(f, ",dst=\"{addr}\"")?;
if let Some(labels) = labels.as_ref() {
write!(f, ",{labels}")?;
if let Some(labels) = self.labels.as_ref() {
write!(f, ",{}", labels)?;
}
Ok(())
@@ -315,7 +306,7 @@ impl From<OutboundEndpointLabels> for EndpointLabels {
}
}
impl legacy::FmtLabels for EndpointLabels {
impl FmtLabels for EndpointLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Inbound(i) => (Direction::In, i).fmt_labels(f),
@@ -324,98 +315,70 @@ impl legacy::FmtLabels for EndpointLabels {
}
}
impl legacy::FmtLabels for InboundEndpointLabels {
impl FmtLabels for InboundEndpointLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
tls,
authority,
target_addr,
policy,
} = self;
if let Some(a) = authority.as_ref() {
if let Some(a) = self.authority.as_ref() {
Authority(a).fmt_labels(f)?;
write!(f, ",")?;
}
((TargetAddr(*target_addr), TlsAccept::from(tls)), policy).fmt_labels(f)?;
(
(TargetAddr(self.target_addr), TlsAccept::from(&self.tls)),
&self.policy,
)
.fmt_labels(f)?;
Ok(())
}
}
impl legacy::FmtLabels for ServerLabel {
impl FmtLabels for ServerLabel {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(meta, port) = self;
write!(
f,
"srv_group=\"{}\",srv_kind=\"{}\",srv_name=\"{}\",srv_port=\"{}\"",
meta.group(),
meta.kind(),
meta.name(),
port
"srv_group=\"{}\",srv_kind=\"{}\",srv_name=\"{}\"",
self.0.group(),
self.0.kind(),
self.0.name()
)
}
}
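On the `main` side above, `ServerLabel` gains a port field and renders a `srv_port` label; the release branch drops it. A minimal sketch of the `main`-side rendering, with `Meta` reduced to a plain struct and `fmt_labels` returning a `String` instead of writing to a `Formatter`:

```rust
use std::sync::Arc;

// Illustrative stand-in for linkerd_proxy_server_policy::Meta.
struct Meta {
    group: String,
    kind: String,
    name: String,
}

// Mirrors the `main`-side ServerLabel(Arc<Meta>, u16) shown in the diff.
struct ServerLabel(Arc<Meta>, u16);

impl ServerLabel {
    fn fmt_labels(&self) -> String {
        format!(
            "srv_group=\"{}\",srv_kind=\"{}\",srv_name=\"{}\",srv_port=\"{}\"",
            self.0.group, self.0.kind, self.0.name, self.1
        )
    }
}

fn main() {
    let label = ServerLabel(
        Arc::new(Meta {
            group: "policy.linkerd.io".into(),
            kind: "server".into(),
            name: "testserver".into(),
        }),
        40000,
    );
    // Matches the expectation in the `server_labels` test later in this diff.
    assert_eq!(
        label.fmt_labels(),
        "srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testserver\",srv_port=\"40000\""
    );
    println!("{}", label.fmt_labels());
}
```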
impl EncodeLabelSet for ServerLabel {
fn encode(&self, mut enc: prometheus_client::encoding::LabelSetEncoder<'_>) -> fmt::Result {
prom::EncodeLabelSetMut::encode_label_set(self, &mut enc)
}
}
impl prom::EncodeLabelSetMut for ServerLabel {
fn encode_label_set(&self, enc: &mut prom::encoding::LabelSetEncoder<'_>) -> fmt::Result {
use prometheus_client::encoding::EncodeLabel;
("srv_group", self.0.group()).encode(enc.encode_label())?;
("srv_kind", self.0.kind()).encode(enc.encode_label())?;
("srv_name", self.0.name()).encode(enc.encode_label())?;
("srv_port", self.1).encode(enc.encode_label())?;
Ok(())
}
}
impl legacy::FmtLabels for ServerAuthzLabels {
impl FmtLabels for ServerAuthzLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { server, authz } = self;
server.fmt_labels(f)?;
self.server.fmt_labels(f)?;
write!(
f,
",authz_group=\"{}\",authz_kind=\"{}\",authz_name=\"{}\"",
authz.group(),
authz.kind(),
authz.name()
self.authz.group(),
self.authz.kind(),
self.authz.name()
)
}
}
impl legacy::FmtLabels for RouteLabels {
impl FmtLabels for RouteLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { server, route } = self;
server.fmt_labels(f)?;
self.server.fmt_labels(f)?;
write!(
f,
",route_group=\"{}\",route_kind=\"{}\",route_name=\"{}\"",
route.group(),
route.kind(),
route.name(),
self.route.group(),
self.route.kind(),
self.route.name(),
)
}
}
impl legacy::FmtLabels for RouteAuthzLabels {
impl FmtLabels for RouteAuthzLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self { route, authz } = self;
route.fmt_labels(f)?;
self.route.fmt_labels(f)?;
write!(
f,
",authz_group=\"{}\",authz_kind=\"{}\",authz_name=\"{}\"",
authz.group(),
authz.kind(),
authz.name(),
self.authz.group(),
self.authz.kind(),
self.authz.name(),
)
}
}
@@ -426,28 +389,19 @@ impl svc::Param<OutboundZoneLocality> for OutboundEndpointLabels {
}
}
impl legacy::FmtLabels for OutboundEndpointLabels {
impl FmtLabels for OutboundEndpointLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
server_id,
authority,
labels,
// TODO(kate): this label is not currently emitted.
zone_locality: _,
target_addr,
} = self;
if let Some(a) = authority.as_ref() {
if let Some(a) = self.authority.as_ref() {
Authority(a).fmt_labels(f)?;
write!(f, ",")?;
}
let ta = TargetAddr(*target_addr);
let tls = TlsConnect::from(server_id);
let ta = TargetAddr(self.target_addr);
let tls = TlsConnect::from(&self.server_id);
(ta, tls).fmt_labels(f)?;
if let Some(labels) = labels.as_ref() {
write!(f, ",{labels}")?;
if let Some(labels) = self.labels.as_ref() {
write!(f, ",{}", labels)?;
}
Ok(())
@@ -463,20 +417,19 @@ impl fmt::Display for Direction {
}
}
impl legacy::FmtLabels for Direction {
impl FmtLabels for Direction {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "direction=\"{self}\"")
write!(f, "direction=\"{}\"", self)
}
}
impl legacy::FmtLabels for Authority<'_> {
impl FmtLabels for Authority<'_> {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(authority) = self;
write!(f, "authority=\"{authority}\"")
write!(f, "authority=\"{}\"", self.0)
}
}
impl legacy::FmtLabels for Class {
impl FmtLabels for Class {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let class = |ok: bool| if ok { "success" } else { "failure" };
@@ -498,7 +451,8 @@ impl legacy::FmtLabels for Class {
Class::Error(msg) => write!(
f,
"classification=\"failure\",grpc_status=\"\",error=\"{msg}\""
"classification=\"failure\",grpc_status=\"\",error=\"{}\"",
msg
),
}
}
@@ -524,15 +478,9 @@ impl StackLabels {
}
}
impl legacy::FmtLabels for StackLabels {
impl FmtLabels for StackLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
direction,
protocol,
name,
} = self;
direction.fmt_labels(f)?;
write!(f, ",protocol=\"{protocol}\",name=\"{name}\"")
self.direction.fmt_labels(f)?;
write!(f, ",protocol=\"{}\",name=\"{}\"", self.protocol, self.name)
}
}

View File

@@ -1,70 +0,0 @@
use linkerd_metrics::prom;
use prometheus_client::encoding::{EncodeLabelSet, EncodeLabelValue, LabelValueEncoder};
use std::{
fmt::{Error, Write},
sync::{Arc, OnceLock},
};
static TLS_INFO: OnceLock<Arc<TlsInfo>> = OnceLock::new();
#[derive(Clone, Debug, Default, Hash, PartialEq, Eq, EncodeLabelSet)]
pub struct TlsInfo {
tls_suites: MetricValueList,
tls_kx_groups: MetricValueList,
tls_rand: String,
tls_key_provider: String,
tls_fips: bool,
}
#[derive(Clone, Debug, Default, Hash, PartialEq, Eq)]
struct MetricValueList {
values: Vec<&'static str>,
}
impl FromIterator<&'static str> for MetricValueList {
fn from_iter<T: IntoIterator<Item = &'static str>>(iter: T) -> Self {
MetricValueList {
values: iter.into_iter().collect(),
}
}
}
impl EncodeLabelValue for MetricValueList {
fn encode(&self, encoder: &mut LabelValueEncoder<'_>) -> Result<(), Error> {
for value in &self.values {
value.encode(encoder)?;
encoder.write_char(',')?;
}
Ok(())
}
}
pub fn metric() -> prom::Family<TlsInfo, prom::ConstGauge> {
let fam = prom::Family::<TlsInfo, prom::ConstGauge>::new_with_constructor(|| {
prom::ConstGauge::new(1)
});
let tls_info = TLS_INFO.get_or_init(|| {
let provider = linkerd_rustls::get_default_provider();
let tls_suites = provider
.cipher_suites
.iter()
.flat_map(|cipher_suite| cipher_suite.suite().as_str())
.collect::<MetricValueList>();
let tls_kx_groups = provider
.kx_groups
.iter()
.flat_map(|suite| suite.name().as_str())
.collect::<MetricValueList>();
Arc::new(TlsInfo {
tls_suites,
tls_kx_groups,
tls_rand: format!("{:?}", provider.secure_random),
tls_key_provider: format!("{:?}", provider.key_provider),
tls_fips: provider.fips(),
})
});
let _ = fam.get_or_create(tls_info);
fam
}
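Note the encoding quirk in the file above: `MetricValueList` writes each value followed by a comma, so the encoded label value carries a trailing comma (e.g. `a,b,`). A self-contained sketch of that flattening, with a `String` standing in for prometheus_client's `LabelValueEncoder`:

```rust
// Stand-in for the MetricValueList in the deleted tls_info module above.
struct MetricValueList {
    values: Vec<&'static str>,
}

impl FromIterator<&'static str> for MetricValueList {
    fn from_iter<T: IntoIterator<Item = &'static str>>(iter: T) -> Self {
        MetricValueList {
            values: iter.into_iter().collect(),
        }
    }
}

impl MetricValueList {
    // Mirrors the EncodeLabelValue impl: every value is followed by a comma.
    fn encode(&self) -> String {
        let mut out = String::new();
        for value in &self.values {
            out.push_str(value);
            out.push(',');
        }
        out
    }
}

fn main() {
    let suites: MetricValueList = ["TLS13_AES_128_GCM_SHA256", "TLS13_AES_256_GCM_SHA384"]
        .into_iter()
        .collect();
    // Trailing comma is intentional in the original encoder.
    assert_eq!(
        suites.encode(),
        "TLS13_AES_128_GCM_SHA256,TLS13_AES_256_GCM_SHA384,"
    );
    println!("{}", suites.encode());
}
```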

View File

@@ -1,7 +1,7 @@
use crate::metrics::ServerLabel as PolicyServerLabel;
pub use crate::metrics::{Direction, OutboundEndpointLabels};
use linkerd_conditional::Conditional;
use linkerd_metrics::legacy::FmtLabels;
use linkerd_metrics::FmtLabels;
use linkerd_tls as tls;
use std::{fmt, net::SocketAddr};
@@ -20,16 +20,16 @@ pub enum Key {
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct ServerLabels {
direction: Direction,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
target_addr: SocketAddr,
policy: Option<PolicyServerLabel>,
}
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub struct TlsAccept<'t>(pub &'t tls::ConditionalServerTlsLabels);
pub struct TlsAccept<'t>(pub &'t tls::ConditionalServerTls);
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub(crate) struct TlsConnect<'t>(pub &'t tls::ConditionalClientTlsLabels);
pub(crate) struct TlsConnect<'t>(&'t tls::ConditionalClientTls);
#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash)]
pub struct TargetAddr(pub SocketAddr);
@@ -38,7 +38,7 @@ pub struct TargetAddr(pub SocketAddr);
impl Key {
pub fn inbound_server(
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
target_addr: SocketAddr,
server: PolicyServerLabel,
) -> Self {
@@ -62,7 +62,7 @@ impl FmtLabels for Key {
}
Self::InboundClient => {
const NO_TLS: tls::client::ConditionalClientTlsLabels =
const NO_TLS: tls::client::ConditionalClientTls =
Conditional::None(tls::NoClientTls::Loopback);
Direction::In.fmt_labels(f)?;
@@ -75,7 +75,7 @@ impl FmtLabels for Key {
impl ServerLabels {
fn inbound(
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
target_addr: SocketAddr,
policy: PolicyServerLabel,
) -> Self {
@@ -90,7 +90,7 @@ impl ServerLabels {
fn outbound(target_addr: SocketAddr) -> Self {
ServerLabels {
direction: Direction::Out,
tls: tls::ConditionalServerTlsLabels::None(tls::NoServerTls::Loopback),
tls: tls::ConditionalServerTls::None(tls::NoServerTls::Loopback),
target_addr,
policy: None,
}
@@ -99,17 +99,14 @@ impl ServerLabels {
impl FmtLabels for ServerLabels {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self {
direction,
tls,
target_addr,
policy,
} = self;
direction.fmt_labels(f)?;
self.direction.fmt_labels(f)?;
f.write_str(",peer=\"src\",")?;
((TargetAddr(*target_addr), TlsAccept(tls)), policy.as_ref()).fmt_labels(f)?;
(
(TargetAddr(self.target_addr), TlsAccept(&self.tls)),
self.policy.as_ref(),
)
.fmt_labels(f)?;
Ok(())
}
@@ -117,28 +114,27 @@ impl FmtLabels for ServerLabels {
// === impl TlsAccept ===
impl<'t> From<&'t tls::ConditionalServerTlsLabels> for TlsAccept<'t> {
fn from(c: &'t tls::ConditionalServerTlsLabels) -> Self {
impl<'t> From<&'t tls::ConditionalServerTls> for TlsAccept<'t> {
fn from(c: &'t tls::ConditionalServerTls) -> Self {
TlsAccept(c)
}
}
impl FmtLabels for TlsAccept<'_> {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(tls) = self;
match tls {
match self.0 {
Conditional::None(tls::NoServerTls::Disabled) => {
write!(f, "tls=\"disabled\"")
}
Conditional::None(why) => {
write!(f, "tls=\"no_identity\",no_tls_reason=\"{why}\"")
write!(f, "tls=\"no_identity\",no_tls_reason=\"{}\"", why)
}
Conditional::Some(tls::ServerTlsLabels::Established { client_id }) => match client_id {
Some(id) => write!(f, "tls=\"true\",client_id=\"{id}\""),
Conditional::Some(tls::ServerTls::Established { client_id, .. }) => match client_id {
Some(id) => write!(f, "tls=\"true\",client_id=\"{}\"", id),
None => write!(f, "tls=\"true\",client_id=\"\""),
},
Conditional::Some(tls::ServerTlsLabels::Passthru { sni }) => {
write!(f, "tls=\"opaque\",sni=\"{sni}\"")
Conditional::Some(tls::ServerTls::Passthru { sni }) => {
write!(f, "tls=\"opaque\",sni=\"{}\"", sni)
}
}
}
@@ -146,25 +142,23 @@ impl FmtLabels for TlsAccept<'_> {
// === impl TlsConnect ===
impl<'t> From<&'t tls::ConditionalClientTlsLabels> for TlsConnect<'t> {
fn from(s: &'t tls::ConditionalClientTlsLabels) -> Self {
impl<'t> From<&'t tls::ConditionalClientTls> for TlsConnect<'t> {
fn from(s: &'t tls::ConditionalClientTls) -> Self {
TlsConnect(s)
}
}
impl FmtLabels for TlsConnect<'_> {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(tls) = self;
match tls {
match self.0 {
Conditional::None(tls::NoClientTls::Disabled) => {
write!(f, "tls=\"disabled\"")
}
Conditional::None(why) => {
write!(f, "tls=\"no_identity\",no_tls_reason=\"{why}\"")
write!(f, "tls=\"no_identity\",no_tls_reason=\"{}\"", why)
}
Conditional::Some(tls::ClientTlsLabels { server_id }) => {
write!(f, "tls=\"true\",server_id=\"{server_id}\"")
Conditional::Some(tls::ClientTls { server_id, .. }) => {
write!(f, "tls=\"true\",server_id=\"{}\"", server_id)
}
}
}
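Both sides of the `TlsConnect` hunk above format the same three label shapes; only the matched type (`ConditionalClientTlsLabels` vs `ConditionalClientTls`) and format-arg style differ. An illustrative reduction of that logic, with a hypothetical enum standing in for linkerd-conditional's `Conditional`:

```rust
// Stand-in for the conditional client-TLS state; not the proxy's actual type.
enum ClientTlsState {
    Disabled,
    NoIdentity(&'static str),
    Established { server_id: &'static str },
}

// Mirrors the three label shapes emitted by TlsConnect::fmt_labels above.
fn fmt_tls_connect(state: &ClientTlsState) -> String {
    match state {
        ClientTlsState::Disabled => "tls=\"disabled\"".to_string(),
        ClientTlsState::NoIdentity(why) => {
            format!("tls=\"no_identity\",no_tls_reason=\"{why}\"")
        }
        ClientTlsState::Established { server_id } => {
            format!("tls=\"true\",server_id=\"{server_id}\"")
        }
    }
}

fn main() {
    assert_eq!(fmt_tls_connect(&ClientTlsState::Disabled), "tls=\"disabled\"");
    assert_eq!(
        fmt_tls_connect(&ClientTlsState::NoIdentity("loopback")),
        "tls=\"no_identity\",no_tls_reason=\"loopback\""
    );
    assert_eq!(
        fmt_tls_connect(&ClientTlsState::Established { server_id: "web.ns.example" }),
        "tls=\"true\",server_id=\"web.ns.example\""
    );
    println!("ok");
}
```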
@@ -174,13 +168,12 @@ impl FmtLabels for TlsConnect<'_> {
impl FmtLabels for TargetAddr {
fn fmt_labels(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
let Self(target_addr) = self;
write!(
f,
"target_addr=\"{}\",target_ip=\"{}\",target_port=\"{}\"",
target_addr,
target_addr.ip(),
target_addr.port()
self.0,
self.0.ip(),
self.0.port()
)
}
}
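The `TargetAddr` hunk above expands one socket address into three labels. The same body as a runnable free function (destructuring style from the `main` side, without the `FmtLabels` trait):

```rust
use std::net::SocketAddr;

/// Formats a socket address as the three target_* labels emitted above.
fn fmt_target_addr(target_addr: SocketAddr) -> String {
    format!(
        "target_addr=\"{}\",target_ip=\"{}\",target_port=\"{}\"",
        target_addr,
        target_addr.ip(),
        target_addr.port()
    )
}

fn main() {
    // Address taken from the `server_labels` test later in this diff.
    let addr: SocketAddr = ([192, 0, 2, 4], 40000).into();
    let labels = fmt_target_addr(addr);
    assert_eq!(
        labels,
        "target_addr=\"192.0.2.4:40000\",target_ip=\"192.0.2.4\",target_port=\"40000\""
    );
    println!("{labels}");
}
```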
@@ -201,25 +194,23 @@ mod tests {
use std::sync::Arc;
let labels = ServerLabels::inbound(
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some("foo.id.example.com".parse().unwrap()),
negotiated_protocol: None,
}),
([192, 0, 2, 4], 40000).into(),
PolicyServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testserver".into(),
}),
40000,
),
PolicyServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testserver".into(),
})),
);
assert_eq!(
labels.to_string(),
"direction=\"inbound\",peer=\"src\",\
target_addr=\"192.0.2.4:40000\",target_ip=\"192.0.2.4\",target_port=\"40000\",\
tls=\"true\",client_id=\"foo.id.example.com\",\
srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testserver\",srv_port=\"40000\""
srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testserver\""
);
}
}

View File

@@ -1,13 +1,13 @@
[package]
name = "linkerd-app-gateway"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
[dependencies]
http = { workspace = true }
http = "0.2"
futures = { version = "0.3", default-features = false }
linkerd-app-core = { path = "../core" }
linkerd-app-inbound = { path = "../inbound" }
@@ -16,9 +16,9 @@ linkerd-proxy-client-policy = { path = "../../proxy/client-policy" }
once_cell = "1"
thiserror = "2"
tokio = { version = "1", features = ["sync"] }
tonic = { workspace = true, default-features = false }
tower = { workspace = true, default-features = false }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false }
tower = { version = "0.4", default-features = false }
tracing = "0.1"
[dev-dependencies]
linkerd-app-inbound = { path = "../inbound", features = ["test-util"] }
@@ -26,6 +26,6 @@ linkerd-app-outbound = { path = "../outbound", features = ["test-util"] }
linkerd-proxy-server-policy = { path = "../../proxy/server-policy" }
tokio = { version = "1", features = ["rt", "macros"] }
tokio-test = "0.4"
tower = { workspace = true, default-features = false, features = ["util"] }
tower-test = { workspace = true }
tower = { version = "0.4", default-features = false, features = ["util"] }
tower-test = "0.4"
linkerd-app-test = { path = "../test" }

View File

@@ -90,7 +90,7 @@ impl Gateway {
detect_timeout,
queue,
addr,
meta.into(),
meta,
),
None => {
tracing::debug!(

View File

@@ -28,7 +28,7 @@ pub(crate) use self::gateway::NewHttpGateway;
pub struct Target<T = ()> {
addr: GatewayAddr,
routes: watch::Receiver<outbound::http::Routes>,
version: http::Variant,
version: http::Version,
parent: T,
}
@@ -74,7 +74,7 @@ impl Gateway {
T: svc::Param<tls::ClientId>,
T: svc::Param<inbound::policy::AllowPolicy>,
T: svc::Param<Option<watch::Receiver<profiles::Profile>>>,
T: svc::Param<http::Variant>,
T: svc::Param<http::Version>,
T: svc::Param<http::normalize_uri::DefaultAuthority>,
T: Clone + Send + Sync + Unpin + 'static,
// Endpoint resolution.
@@ -153,7 +153,7 @@ fn mk_routes(profile: &profiles::Profile) -> Option<outbound::http::Routes> {
if let Some((addr, metadata)) = profile.endpoint.clone() {
return Some(outbound::http::Routes::Endpoint(
Remote(ServerAddr(addr)),
metadata.into(),
metadata,
));
}
@@ -164,7 +164,7 @@ fn mk_routes(profile: &profiles::Profile) -> Option<outbound::http::Routes> {
impl<B, T: Clone> svc::router::SelectRoute<http::Request<B>> for ByRequestVersion<T> {
type Key = Target<T>;
type Error = http::UnsupportedVariant;
type Error = http::version::Unsupported;
fn select(&self, req: &http::Request<B>) -> Result<Self::Key, Self::Error> {
let mut t = self.0.clone();
@@ -192,8 +192,8 @@ impl<T> svc::Param<GatewayAddr> for Target<T> {
}
}
impl<T> svc::Param<http::Variant> for Target<T> {
fn param(&self) -> http::Variant {
impl<T> svc::Param<http::Version> for Target<T> {
fn param(&self) -> http::Version {
self.version
}
}

View File

@@ -66,7 +66,7 @@ where
impl<B, S> tower::Service<http::Request<B>> for HttpGateway<S>
where
B: http::Body + 'static,
B: http::HttpBody + 'static,
S: tower::Service<http::Request<B>, Response = http::Response<http::BoxBody>>,
S::Error: Into<Error> + 'static,
S::Future: Send + 'static,

View File

@@ -62,7 +62,7 @@ async fn upgraded_request_remains_relative_form() {
impl svc::Param<ServerLabel> for Target {
fn param(&self) -> ServerLabel {
ServerLabel(policy::Meta::new_default("test"), 4143)
ServerLabel(policy::Meta::new_default("test"))
}
}
@@ -98,9 +98,9 @@ async fn upgraded_request_remains_relative_form() {
}
}
impl svc::Param<http::Variant> for Target {
fn param(&self) -> http::Variant {
http::Variant::H2
impl svc::Param<http::Version> for Target {
fn param(&self) -> http::Version {
http::Version::H2
}
}

View File

@@ -11,7 +11,7 @@ use tokio::sync::watch;
/// Target for HTTP stacks.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Http<T> {
version: http::Variant,
version: http::Version,
parent: outbound::Discovery<T>,
}
@@ -61,13 +61,13 @@ impl Gateway {
|parent: outbound::Discovery<T>| -> Result<_, GatewayDomainInvalid> {
if let Some(proto) = (*parent).param() {
let version = match proto {
SessionProtocol::Http1 => http::Variant::Http1,
SessionProtocol::Http2 => http::Variant::H2,
SessionProtocol::Http1 => http::Version::Http1,
SessionProtocol::Http2 => http::Version::H2,
};
return Ok(svc::Either::Left(Http { parent, version }));
return Ok(svc::Either::A(Http { parent, version }));
}
Ok(svc::Either::Right(Opaq(parent)))
Ok(svc::Either::B(Opaq(parent)))
},
opaq,
)
@@ -154,8 +154,8 @@ impl<T> std::ops::Deref for Http<T> {
}
}
impl<T> svc::Param<http::Variant> for Http<T> {
fn param(&self) -> http::Variant {
impl<T> svc::Param<http::Version> for Http<T> {
fn param(&self) -> http::Version {
self.version
}
}

View File

@@ -1,10 +1,10 @@
[package]
name = "linkerd-app-inbound"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Configures and runs the inbound proxy
"""
@@ -13,18 +13,20 @@ Configures and runs the inbound proxy
test-util = [
"linkerd-app-test",
"linkerd-idle-cache/test-util",
"linkerd-meshtls/test-util",
"linkerd-meshtls/rustls",
"linkerd-meshtls-rustls/test-util",
]
[dependencies]
bytes = { workspace = true }
http = { workspace = true }
bytes = "1"
http = "0.2"
futures = { version = "0.3", default-features = false }
linkerd-app-core = { path = "../core" }
linkerd-app-test = { path = "../test", optional = true }
linkerd-http-access-log = { path = "../../http/access-log" }
linkerd-idle-cache = { path = "../../idle-cache" }
linkerd-meshtls = { path = "../../meshtls", optional = true, default-features = false }
linkerd-meshtls = { path = "../../meshtls", optional = true }
linkerd-meshtls-rustls = { path = "../../meshtls/rustls", optional = true }
linkerd-proxy-client-policy = { path = "../../proxy/client-policy" }
linkerd-tonic-stream = { path = "../../tonic-stream" }
linkerd-tonic-watch = { path = "../../tonic-watch" }
@@ -34,33 +36,28 @@ parking_lot = "0.12"
rangemap = "1"
thiserror = "2"
tokio = { version = "1", features = ["sync"] }
tonic = { workspace = true, default-features = false }
tower = { workspace = true, features = ["util"] }
tracing = { workspace = true }
tonic = { version = "0.10", default-features = false }
tower = { version = "0.4", features = ["util"] }
tracing = "0.1"
[dependencies.linkerd-proxy-server-policy]
path = "../../proxy/server-policy"
features = ["proto"]
[target.'cfg(fuzzing)'.dependencies]
hyper = { workspace = true, features = ["http1", "http2"] }
hyper = { version = "0.14", features = ["deprecated", "http1", "http2"] }
linkerd-app-test = { path = "../test" }
arbitrary = { version = "1", features = ["derive"] }
libfuzzer-sys = { version = "0.4", features = ["arbitrary-derive"] }
linkerd-meshtls = { path = "../../meshtls", features = [
"test-util",
] }
[dev-dependencies]
http-body-util = { workspace = true }
hyper = { workspace = true, features = ["http1", "http2"] }
hyper-util = { workspace = true }
hyper = { version = "0.14", features = ["deprecated", "http1", "http2"] }
linkerd-app-test = { path = "../test" }
linkerd-http-metrics = { path = "../../http/metrics", features = ["test-util"] }
linkerd-http-box = { path = "../../http/box" }
linkerd-idle-cache = { path = "../../idle-cache", features = ["test-util"] }
linkerd-io = { path = "../../io", features = ["tokio-test"] }
linkerd-meshtls = { path = "../../meshtls", features = [
linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
linkerd-meshtls-rustls = { path = "../../meshtls/rustls", features = [
"test-util",
] }
linkerd-proxy-server-policy = { path = "../../proxy/server-policy", features = [

View File

@@ -1,10 +1,10 @@
[package]
name = "linkerd-app-inbound-fuzz"
version = { workspace = true }
version = "0.0.0"
authors = ["Automatically generated"]
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
publish = false
edition = "2021"
[package.metadata]
cargo-fuzz = true
@@ -12,18 +12,19 @@ cargo-fuzz = true
[target.'cfg(fuzzing)'.dependencies]
arbitrary = { version = "1", features = ["derive"] }
hyper = { version = "0.14", features = ["deprecated", "http1", "http2"] }
http = { workspace = true }
http = "0.2"
libfuzzer-sys = { version = "0.4", features = ["arbitrary-derive"] }
linkerd-app-core = { path = "../../core" }
linkerd-app-inbound = { path = ".." }
linkerd-app-test = { path = "../../test" }
linkerd-idle-cache = { path = "../../../idle-cache", features = ["test-util"] }
linkerd-meshtls = { path = "../../../meshtls", features = [
linkerd-meshtls = { path = "../../../meshtls", features = ["rustls"] }
linkerd-meshtls-rustls = { path = "../../../meshtls/rustls", features = [
"test-util",
] }
linkerd-tracing = { path = "../../../tracing", features = ["ansi"] }
tokio = { version = "1", features = ["full"] }
tracing = { workspace = true }
tracing = "0.1"
# Prevent this from interfering with workspaces
[workspace]

View File

@@ -53,12 +53,12 @@ impl<N> Inbound<N> {
move |t: T| -> Result<_, Error> {
let addr: OrigDstAddr = t.param();
if addr.port() == proxy_port {
return Ok(svc::Either::Right(t));
return Ok(svc::Either::B(t));
}
let policy = policies.get_policy(addr);
tracing::debug!(policy = ?&*policy.borrow(), "Accepted");
Ok(svc::Either::Left(Accept {
Ok(svc::Either::A(Accept {
client_addr: t.param(),
orig_dst_addr: addr,
policy,
@@ -182,11 +182,7 @@ mod tests {
}
fn inbound() -> Inbound<()> {
Inbound::new(
test_util::default_config(),
test_util::runtime().0,
&mut Default::default(),
)
Inbound::new(test_util::default_config(), test_util::runtime().0)
}
fn new_panic<T>(msg: &'static str) -> svc::ArcNewTcp<T, io::DuplexStream> {

View File

@@ -3,8 +3,8 @@ use crate::{
Inbound,
};
use linkerd_app_core::{
identity, io,
metrics::{prom, ServerLabel},
detect, identity, io,
metrics::ServerLabel,
proxy::http,
svc, tls,
transport::{
@@ -20,10 +20,6 @@ use tracing::info;
#[cfg(test)]
mod tests;
#[derive(Clone, Debug)]
pub struct MetricsFamilies(pub HttpDetectMetrics);
pub type HttpDetectMetrics = http::DetectMetricsFamilies<ServerLabel>;
#[derive(Clone, Debug, PartialEq, Eq)]
pub(crate) struct Forward {
client_addr: Remote<ClientAddr>,
@@ -35,7 +31,7 @@ pub(crate) struct Forward {
#[derive(Clone, Debug)]
pub(crate) struct Http {
tls: Tls,
http: http::Variant,
http: http::Version,
}
#[derive(Clone, Debug)]
@@ -52,6 +48,9 @@ struct Detect {
tls: Tls,
}
#[derive(Copy, Clone, Debug)]
struct ConfigureHttpDetect;
#[derive(Clone)]
struct TlsParams {
timeout: tls::server::Timeout,
@@ -65,11 +64,7 @@ type TlsIo<I> = tls::server::Io<identity::ServerIo<tls::server::DetectIo<I>>, I>
impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
/// Builds a stack that terminates mesh TLS and detects whether the traffic is HTTP (as hinted
/// by policy).
pub(crate) fn push_detect<T, I, F, FSvc>(
self,
MetricsFamilies(metrics): MetricsFamilies,
forward: F,
) -> Inbound<svc::ArcNewTcp<T, I>>
pub(crate) fn push_detect<T, I, F, FSvc>(self, forward: F) -> Inbound<svc::ArcNewTcp<T, I>>
where
T: svc::Param<OrigDstAddr> + svc::Param<Remote<ClientAddr>> + svc::Param<AllowPolicy>,
T: Clone + Send + 'static,
@@ -80,18 +75,14 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
FSvc::Error: Into<Error>,
FSvc::Future: Send,
{
self.push_detect_http(metrics, forward.clone())
self.push_detect_http(forward.clone())
.push_detect_tls(forward)
}
/// Builds a stack that handles HTTP detection once TLS detection has been performed. If the
/// connection is determined to be HTTP, the inner stack is used; otherwise the connection is
/// passed to the provided 'forward' stack.
fn push_detect_http<I, F, FSvc>(
self,
metrics: HttpDetectMetrics,
forward: F,
) -> Inbound<svc::ArcNewTcp<Tls, I>>
fn push_detect_http<I, F, FSvc>(self, forward: F) -> Inbound<svc::ArcNewTcp<Tls, I>>
where
I: io::AsyncRead + io::AsyncWrite + io::PeerAddr,
I: Debug + Send + Sync + Unpin + 'static,
@@ -120,59 +111,42 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
.push_switch(
|(detected, Detect { tls, .. })| -> Result<_, Infallible> {
match detected {
http::Detection::Http(http) => {
Ok(svc::Either::Left(Http { http, tls }))
}
http::Detection::NotHttp => Ok(svc::Either::Right(tls)),
Ok(Some(http)) => Ok(svc::Either::A(Http { http, tls })),
Ok(None) => Ok(svc::Either::B(tls)),
// When HTTP detection fails, forward the connection to the application as
// an opaque TCP stream.
http::Detection::ReadTimeout(timeout) => {
match tls.policy.protocol() {
Protocol::Http1 { .. } => {
// If the protocol was hinted to be HTTP/1.1 but detection
// failed, we'll usually be handling HTTP/1, but we may actually
// be handling HTTP/2 via protocol upgrade. Our options are:
// handle the connection as HTTP/1, assuming it will be rare for
// a proxy to initiate TLS, etc and not send the 16B of
// connection header; or we can handle it as opaque--but there's
// no chance the server will be able to handle the H2 protocol
// upgrade. So, it seems best to assume it's HTTP/1 and let the
// proxy handle the protocol error if we're in an edge case.
info!(
?timeout,
"Handling connection as HTTP/1 due to policy"
);
Ok(svc::Either::Left(Http {
http: http::Variant::Http1,
tls,
}))
}
// Otherwise, the protocol hint must have
// been `Detect` or the protocol was updated
// after detection was initiated, otherwise
// we would have avoided detection below.
// Continue handling the connection as if it
// were opaque.
_ => {
info!(
?timeout,
"Handling connection as opaque due to policy"
);
Ok(svc::Either::Right(tls))
}
Err(timeout) => match tls.policy.protocol() {
Protocol::Http1 { .. } => {
// If the protocol was hinted to be HTTP/1.1 but detection
// failed, we'll usually be handling HTTP/1, but we may actually
// be handling HTTP/2 via protocol upgrade. Our options are:
// handle the connection as HTTP/1, assuming it will be rare for
// a proxy to initiate TLS, etc and not send the 16B of
// connection header; or we can handle it as opaque--but there's
// no chance the server will be able to handle the H2 protocol
// upgrade. So, it seems best to assume it's HTTP/1 and let the
// proxy handle the protocol error if we're in an edge case.
info!(%timeout, "Handling connection as HTTP/1 due to policy");
Ok(svc::Either::A(Http {
http: http::Version::Http1,
tls,
}))
}
}
// Otherwise, the protocol hint must have been `Detect` or the
// protocol was updated after detection was initiated, otherwise we
// would have avoided detection below. Continue handling the
// connection as if it were opaque.
_ => {
info!(%timeout, "Handling connection as opaque");
Ok(svc::Either::B(tls))
}
},
}
},
forward.into_inner(),
)
.lift_new_with_target()
.push(http::NewDetect::layer(
move |Detect { timeout, tls }: &Detect| http::DetectParams {
read_timeout: *timeout,
metrics: metrics.metrics(tls.policy.server_label()),
},
))
.push(detect::NewDetectService::layer(ConfigureHttpDetect))
.arc_new_tcp();
http.push_on_service(svc::MapTargetLayer::new(io::BoxedIo::new))
@@ -185,7 +159,7 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
move |tls: Tls| -> Result<_, Infallible> {
let http = match tls.policy.protocol() {
Protocol::Detect { timeout, .. } => {
return Ok(svc::Either::Right(Detect { timeout, tls }));
return Ok(svc::Either::B(Detect { timeout, tls }));
}
// Meshed HTTP/1 services may actually be transported over HTTP/2 connections
// between proxies, so we have to do detection.
@@ -193,18 +167,18 @@ impl Inbound<svc::ArcNewTcp<Http, io::BoxedIo>> {
// TODO(ver) outbound clients should hint this with ALPN so we don't
// have to detect this situation.
Protocol::Http1 { .. } if tls.status.is_some() => {
return Ok(svc::Either::Right(Detect {
return Ok(svc::Either::B(Detect {
timeout: detect_timeout,
tls,
}));
}
// Unmeshed services don't use protocol upgrading, so we can use the
// hint without further detection.
Protocol::Http1 { .. } => http::Variant::Http1,
Protocol::Http2 { .. } | Protocol::Grpc { .. } => http::Variant::H2,
Protocol::Http1 { .. } => http::Version::Http1,
Protocol::Http2 { .. } | Protocol::Grpc { .. } => http::Version::H2,
_ => unreachable!("opaque protocols must not hit the HTTP stack"),
};
Ok(svc::Either::Left(Http { http, tls }))
Ok(svc::Either::A(Http { http, tls }))
},
detect.into_inner(),
)
@@ -258,10 +232,10 @@ impl<I> Inbound<svc::ArcNewTcp<Tls, TlsIo<I>>> {
// whether app TLS was employed, but we use this as a signal that we should
// not perform additional protocol detection.
if matches!(protocol, Protocol::Tls { .. }) {
return Ok(svc::Either::Right(tls));
return Ok(svc::Either::B(tls));
}
Ok(svc::Either::Left(tls))
Ok(svc::Either::A(tls))
},
forward
.clone()
@@ -285,14 +259,14 @@ impl<I> Inbound<svc::ArcNewTcp<Tls, TlsIo<I>>> {
if matches!(policy.protocol(), Protocol::Opaque { .. }) {
const TLS_PORT_SKIPPED: tls::ConditionalServerTls =
tls::ConditionalServerTls::None(tls::NoServerTls::PortSkipped);
return Ok(svc::Either::Right(Tls {
return Ok(svc::Either::B(Tls {
client_addr: t.param(),
orig_dst_addr: t.param(),
status: TLS_PORT_SKIPPED,
policy,
}));
}
Ok(svc::Either::Left(t))
Ok(svc::Either::A(t))
},
forward
.push_on_service(svc::MapTargetLayer::new(io::BoxedIo::new))
@ -325,7 +299,7 @@ impl svc::Param<Remote<ServerAddr>> for Forward {
impl svc::Param<transport::labels::Key> for Forward {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
self.tls.as_ref().map(|t| t.labels()),
self.tls.clone(),
self.orig_dst_addr.into(),
self.permit.labels.server.clone(),
)
@ -358,10 +332,18 @@ impl svc::Param<tls::ConditionalServerTls> for Tls {
}
}
// === impl ConfigureHttpDetect ===
impl svc::ExtractParam<detect::Config<http::DetectHttp>, Detect> for ConfigureHttpDetect {
fn extract_param(&self, detect: &Detect) -> detect::Config<http::DetectHttp> {
detect::Config::from_timeout(detect.timeout)
}
}
// === impl Http ===
impl svc::Param<http::Variant> for Http {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for Http {
fn param(&self) -> http::Version {
self.http
}
}
@ -429,7 +411,7 @@ impl svc::Param<ServerLabel> for Http {
impl svc::Param<transport::labels::Key> for Http {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
self.tls.status.as_ref().map(|t| t.labels()),
self.tls.status.clone(),
self.tls.orig_dst_addr.into(),
self.tls.policy.server_label(),
)
@ -460,13 +442,3 @@ impl<T> svc::InsertParam<tls::ConditionalServerTls, T> for TlsParams {
(tls, target)
}
}
// === impl MetricsFamilies ===
impl MetricsFamilies {
pub fn register(reg: &mut prom::Registry) -> Self {
Self(http::DetectMetricsFamilies::register(
reg.sub_registry_with_prefix("http"),
))
}
}

View File

@ -13,12 +13,6 @@ const HTTP1: &[u8] = b"GET / HTTP/1.1\r\nhost: example.com\r\n\r\n";
const HTTP2: &[u8] = b"PRI * HTTP/2.0\r\n";
const NOT_HTTP: &[u8] = b"foo\r\nbar\r\nblah\r\n";
const RESULTS_NOT_HTTP: &str = "results_total{result=\"not_http\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_HTTP1: &str = "results_total{result=\"http/1\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_HTTP2: &str = "results_total{result=\"http/2\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_READ_TIMEOUT: &str = "results_total{result=\"read_timeout\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
const RESULTS_ERROR: &str = "results_total{result=\"error\",srv_group=\"policy.linkerd.io\",srv_kind=\"server\",srv_name=\"testsrv\",srv_port=\"1000\"}";
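These fixtures differ only in their opening bytes, which is what protocol detection keys on; a minimal std-only sniffer in that spirit (illustrative only — linkerd's real detector reads from the socket under a timeout and is more careful about partial reads) might look like:

```rust
// Distinguish connections by their first bytes, as the HTTP1/HTTP2/NOT_HTTP
// fixtures above do. The HTTP/2 client connection preface is fixed, so it can
// be matched exactly; HTTP/1 is approximated by a known method token.
const H2_PREFACE: &[u8] = b"PRI * HTTP/2.0\r\n";

#[derive(Debug, PartialEq)]
enum Sniffed {
    Http1,
    H2,
    NotHttp,
}

fn sniff(bytes: &[u8]) -> Sniffed {
    if bytes.starts_with(H2_PREFACE) {
        return Sniffed::H2;
    }
    // A loose HTTP/1 check: a recognized method token followed by a space.
    for method in ["GET ", "POST ", "PUT ", "DELETE ", "HEAD "] {
        if bytes.starts_with(method.as_bytes()) {
            return Sniffed::Http1;
        }
    }
    Sniffed::NotHttp
}

fn main() {
    assert_eq!(sniff(b"GET / HTTP/1.1\r\nhost: example.com\r\n\r\n"), Sniffed::Http1);
    assert_eq!(sniff(b"PRI * HTTP/2.0\r\n"), Sniffed::H2);
    assert_eq!(sniff(b"foo\r\nbar\r\nblah\r\n"), Sniffed::NotHttp);
    println!("ok");
}
```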
fn authzs() -> Arc<[Authorization]> {
Arc::new([Authorization {
authentication: Authentication::Unauthenticated,
@ -47,35 +41,6 @@ fn allow(protocol: Protocol) -> AllowPolicy {
allow
}
macro_rules! assert_contains_metric {
($registry:expr, $metric:expr, $value:expr) => {{
let mut buf = String::new();
prom::encoding::text::encode_registry(&mut buf, $registry).expect("encode registry failed");
let lines = buf.split_terminator('\n').collect::<Vec<_>>();
assert_eq!(
lines.iter().find(|l| l.starts_with($metric)),
Some(&&*format!("{} {}", $metric, $value)),
"metric '{}' not found in:\n{:?}",
$metric,
buf
);
}};
}
macro_rules! assert_not_contains_metric {
($registry:expr, $pattern:expr) => {{
let mut buf = String::new();
prom::encoding::text::encode_registry(&mut buf, $registry).expect("encode registry failed");
let lines = buf.split_terminator('\n').collect::<Vec<_>>();
assert!(
!lines.iter().any(|l| l.starts_with($pattern)),
"metric '{}' found in:\n{:?}",
$pattern,
buf
);
}};
}
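Both macros work the same way: text-encode the registry, then scan for a line whose prefix matches the expected metric. The core lookup can be sketched as a plain function (the name `find_metric_value` is illustrative, not from the source):

```rust
// Scan a Prometheus text exposition for a line starting with `metric` and
// return its sample value, mirroring what assert_contains_metric! checks.
fn find_metric_value<'a>(encoded: &'a str, metric: &str) -> Option<&'a str> {
    encoded
        .split_terminator('\n')
        .find(|line| line.starts_with(metric))
        .and_then(|line| line.rsplit(' ').next())
}

fn main() {
    let encoded = "# TYPE results_total counter\n\
                   results_total{result=\"http/1\"} 1\n\
                   results_total{result=\"not_http\"} 0";
    assert_eq!(
        find_metric_value(encoded, "results_total{result=\"http/1\"}"),
        Some("1")
    );
    assert_eq!(
        find_metric_value(encoded, "results_total{result=\"error\"}"),
        None
    );
    println!("ok");
}
```

Matching on the line prefix rather than parsing labels keeps the assertion robust to the value while still pinning the exact label set.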
#[tokio::test(flavor = "current_thread")]
async fn detect_tls_opaque() {
let _trace = trace::test::trace_init();
@ -112,21 +77,14 @@ async fn detect_http_non_http() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(NOT_HTTP).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_panic("http stack must not be used"))
.push_detect_http(super::HttpDetectMetrics::register(&mut registry), new_ok())
.push_detect_http(new_ok())
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 1);
assert_contains_metric!(&registry, RESULTS_HTTP1, 0);
assert_contains_metric!(&registry, RESULTS_HTTP2, 0);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -150,24 +108,14 @@ async fn detect_http() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(HTTP1).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 0);
assert_contains_metric!(&registry, RESULTS_HTTP1, 1);
assert_contains_metric!(&registry, RESULTS_HTTP2, 0);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -186,24 +134,14 @@ async fn hinted_http1() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(HTTP1).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 0);
assert_contains_metric!(&registry, RESULTS_HTTP1, 1);
assert_contains_metric!(&registry, RESULTS_HTTP2, 0);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -222,24 +160,14 @@ async fn hinted_http1_supports_http2() {
let (ior, mut iow) = io::duplex(100);
iow.write_all(HTTP2).await.unwrap();
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
assert_contains_metric!(&registry, RESULTS_NOT_HTTP, 0);
assert_contains_metric!(&registry, RESULTS_HTTP1, 0);
assert_contains_metric!(&registry, RESULTS_HTTP2, 1);
assert_contains_metric!(&registry, RESULTS_READ_TIMEOUT, 0);
assert_contains_metric!(&registry, RESULTS_ERROR, 0);
}
#[tokio::test(flavor = "current_thread")]
@ -257,25 +185,14 @@ async fn hinted_http2() {
let (ior, _) = io::duplex(100);
let mut registry = prom::Registry::default();
inbound()
.with_stack(new_ok())
.push_detect_http(
super::HttpDetectMetrics::register(&mut registry),
new_panic("tcp stack must not be used"),
)
.push_detect_http(new_panic("tcp stack must not be used"))
.into_inner()
.new_service(target)
.oneshot(ior)
.await
.expect("should succeed");
// No detection is performed when HTTP/2 is hinted, so no metrics are recorded.
assert_not_contains_metric!(&registry, RESULTS_NOT_HTTP);
assert_not_contains_metric!(&registry, RESULTS_HTTP1);
assert_not_contains_metric!(&registry, RESULTS_HTTP2);
assert_not_contains_metric!(&registry, RESULTS_READ_TIMEOUT);
assert_not_contains_metric!(&registry, RESULTS_ERROR);
}
fn client_id() -> tls::ClientId {
@ -293,11 +210,7 @@ fn orig_dst_addr() -> OrigDstAddr {
}
fn inbound() -> Inbound<()> {
Inbound::new(
test_util::default_config(),
test_util::runtime().0,
&mut Default::default(),
)
Inbound::new(test_util::default_config(), test_util::runtime().0)
}
fn new_panic<T, I: 'static>(msg: &'static str) -> svc::ArcNewTcp<T, I> {

View File

@ -15,10 +15,6 @@ use std::fmt::Debug;
use thiserror::Error;
use tracing::{debug_span, info_span};
mod metrics;
pub use self::metrics::MetricsFamilies;
/// Creates I/O errors when a connection cannot be forwarded because no transport
/// header was present.
#[derive(Debug, Default)]
@ -29,8 +25,8 @@ struct RefusedNoHeader;
pub struct RefusedNoIdentity(());
#[derive(Debug, Error)]
#[error("direct connections require transport header negotiation")]
struct TransportHeaderRequired(());
#[error("a named target must be provided on gateway connections")]
struct RefusedNoTarget;
#[derive(Debug, Clone)]
pub(crate) struct LocalTcp {
@ -97,7 +93,7 @@ impl<N> Inbound<N> {
self,
policies: impl policy::GetPolicy + Clone + Send + Sync + 'static,
gateway: svc::ArcNewTcp<GatewayTransportHeader, GatewayIo<I>>,
http: svc::ArcNewTcp<LocalHttp, SensorIo<io::PrefixedIo<TlsIo<I>>>>,
http: svc::ArcNewTcp<LocalHttp, io::PrefixedIo<TlsIo<I>>>,
) -> Inbound<svc::ArcNewTcp<T, I>>
where
T: Param<Remote<ClientAddr>> + Param<OrigDstAddr>,
@ -112,12 +108,11 @@ impl<N> Inbound<N> {
{
self.map_stack(|config, rt, inner| {
let detect_timeout = config.proxy.detect_protocol_timeout;
let metrics = rt.metrics.direct.clone();
let identity = rt
.identity
.server()
.spawn_with_alpn(vec![transport_header::PROTOCOL.into()])
.with_alpn(vec![transport_header::PROTOCOL.into()])
.expect("TLS credential store must be held");
inner
@ -140,14 +135,7 @@ impl<N> Inbound<N> {
// forwarding, or we may be processing an HTTP gateway connection. HTTP gateway
// connections that have a transport header must provide a target name as a part of
// the header.
.push_switch(
Ok::<Local, Infallible>,
svc::stack(http)
.push(transport::metrics::NewServer::layer(
rt.metrics.proxy.transport.clone(),
))
.into_inner(),
)
.push_switch(Ok::<Local, Infallible>, http)
.push_switch(
{
let policies = policies.clone();
@ -157,14 +145,14 @@ impl<N> Inbound<N> {
port,
name: None,
protocol,
} => Ok(svc::Either::Left({
} => Ok(svc::Either::A({
// When the transport header targets an alternate port (but does
// not identify an alternate target name), we check the new
// target's policy (rather than the inbound proxy's address).
let addr = (client.local_addr.ip(), port).into();
let policy = policies.get_policy(OrigDstAddr(addr));
match protocol {
None => svc::Either::Left(LocalTcp {
None => svc::Either::A(LocalTcp {
server_addr: Remote(ServerAddr(addr)),
client_addr: client.client_addr,
client_id: client.client_id,
@ -174,7 +162,7 @@ impl<N> Inbound<N> {
// When TransportHeader includes the protocol, but does not
// include an alternate name we go through the Inbound HTTP
// stack.
svc::Either::Right(LocalHttp {
svc::Either::B(LocalHttp {
addr: Remote(ServerAddr(addr)),
policy,
protocol,
@ -188,7 +176,7 @@ impl<N> Inbound<N> {
port,
name: Some(name),
protocol,
} => Ok(svc::Either::Right({
} => Ok(svc::Either::B({
// When the transport header provides an alternate target, the
// connection is a gateway connection. We check the _gateway
// address's_ policy (rather than the target address).
@ -216,7 +204,6 @@ impl<N> Inbound<N> {
)
.check_new_service::<(TransportHeader, ClientInfo), _>()
// Use ALPN to determine whether a transport header should be read.
.push(metrics::NewRecord::layer(metrics))
.push(svc::ArcNewService::layer())
.push(NewTransportHeaderServer::layer(detect_timeout))
.check_new_service::<ClientInfo, _>()
@ -228,7 +215,7 @@ impl<N> Inbound<N> {
if client.header_negotiated() {
Ok(client)
} else {
Err(TransportHeaderRequired(()).into())
Err(RefusedNoTarget.into())
}
})
.push(svc::ArcNewService::layer())
@ -311,8 +298,9 @@ impl Param<Remote<ServerAddr>> for AuthorizedLocalTcp {
impl Param<transport::labels::Key> for AuthorizedLocalTcp {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(self.client_id.clone()),
negotiated_protocol: None,
}),
self.addr.into(),
self.permit.labels.server.clone(),
@ -343,8 +331,9 @@ impl Param<Remote<ClientAddr>> for LocalHttp {
impl Param<transport::labels::Key> for LocalHttp {
fn param(&self) -> transport::labels::Key {
transport::labels::Key::inbound_server(
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(self.client.client_id.clone()),
negotiated_protocol: None,
}),
self.addr.into(),
self.policy.server_label(),
@ -358,11 +347,11 @@ impl svc::Param<policy::AllowPolicy> for LocalHttp {
}
}
impl svc::Param<http::Variant> for LocalHttp {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for LocalHttp {
fn param(&self) -> http::Version {
match self.protocol {
SessionProtocol::Http1 => http::Variant::Http1,
SessionProtocol::Http2 => http::Variant::H2,
SessionProtocol::Http1 => http::Version::Http1,
SessionProtocol::Http2 => http::Version::H2,
}
}
}
@ -433,14 +422,6 @@ impl Param<tls::ConditionalServerTls> for GatewayTransportHeader {
}
}
impl Param<tls::ConditionalServerTlsLabels> for GatewayTransportHeader {
fn param(&self) -> tls::ConditionalServerTlsLabels {
tls::ConditionalServerTlsLabels::Some(tls::ServerTlsLabels::Established {
client_id: Some(self.client.client_id.clone()),
})
}
}
impl Param<tls::ClientId> for GatewayTransportHeader {
fn param(&self) -> tls::ClientId {
self.client.client_id.clone()

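The transport-header switching above routes each connection to one of three stacks. Assuming only the header's name and protocol fields drive the choice, the branch selection can be sketched with stand-in types (the real `TransportHeader` and `SessionProtocol` live in linkerd's `transport_header` crate; `Branch` here is hypothetical):

```rust
// Illustrative stand-ins for the real transport_header types.
#[derive(Debug, PartialEq)]
enum SessionProtocol {
    Http1,
    Http2,
}

struct TransportHeader {
    port: u16,
    name: Option<String>,
    protocol: Option<SessionProtocol>,
}

#[derive(Debug, PartialEq)]
enum Branch {
    LocalTcp { port: u16 },
    LocalHttp { port: u16 },
    Gateway { name: String },
}

// Mirrors the push_switch logic: a named target selects the gateway stack;
// otherwise the protocol hint decides between opaque TCP forwarding and the
// inbound HTTP stack.
fn route(header: TransportHeader) -> Branch {
    match header {
        TransportHeader { name: Some(name), .. } => Branch::Gateway { name },
        TransportHeader { port, name: None, protocol: None } => Branch::LocalTcp { port },
        TransportHeader { port, name: None, protocol: Some(_) } => Branch::LocalHttp { port },
    }
}

fn main() {
    assert_eq!(
        route(TransportHeader { port: 4143, name: None, protocol: None }),
        Branch::LocalTcp { port: 4143 }
    );
    assert_eq!(
        route(TransportHeader { port: 8080, name: None, protocol: Some(SessionProtocol::Http1) }),
        Branch::LocalHttp { port: 8080 }
    );
    assert_eq!(
        route(TransportHeader { port: 80, name: Some("svc".into()), protocol: None }),
        Branch::Gateway { name: "svc".into() }
    );
    println!("ok");
}
```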
View File

@ -1,91 +0,0 @@
use super::ClientInfo;
use linkerd_app_core::{
metrics::prom::{self, EncodeLabelSetMut},
svc, tls,
transport_header::{SessionProtocol, TransportHeader},
};
#[cfg(test)]
mod tests;
#[derive(Clone, Debug)]
pub struct NewRecord<N> {
inner: N,
metrics: MetricsFamilies,
}
#[derive(Clone, Debug, Default)]
pub struct MetricsFamilies {
connections: prom::Family<Labels, prom::Counter>,
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Labels {
header: TransportHeader,
client_id: tls::ClientId,
}
impl MetricsFamilies {
pub fn register(reg: &mut prom::Registry) -> Self {
let connections = prom::Family::default();
reg.register(
"connections",
"TCP connections with transport headers",
connections.clone(),
);
Self { connections }
}
}
impl<N> NewRecord<N> {
pub fn layer(metrics: MetricsFamilies) -> impl svc::layer::Layer<N, Service = Self> + Clone {
svc::layer::mk(move |inner| Self {
inner,
metrics: metrics.clone(),
})
}
}
impl<N> svc::NewService<(TransportHeader, ClientInfo)> for NewRecord<N>
where
N: svc::NewService<(TransportHeader, ClientInfo)>,
{
type Service = N::Service;
fn new_service(&self, (header, client): (TransportHeader, ClientInfo)) -> Self::Service {
self.metrics
.connections
.get_or_create(&Labels {
header: header.clone(),
client_id: client.client_id.clone(),
})
.inc();
self.inner.new_service((header, client))
}
}
impl prom::EncodeLabelSetMut for Labels {
fn encode_label_set(&self, enc: &mut prom::encoding::LabelSetEncoder<'_>) -> std::fmt::Result {
use prom::encoding::EncodeLabel;
(
"session_protocol",
self.header.protocol.as_ref().map(|p| match p {
SessionProtocol::Http1 => "http/1",
SessionProtocol::Http2 => "http/2",
}),
)
.encode(enc.encode_label())?;
("target_port", self.header.port).encode(enc.encode_label())?;
("target_name", self.header.name.as_deref()).encode(enc.encode_label())?;
("client_id", self.client_id.to_str()).encode(enc.encode_label())?;
Ok(())
}
}
impl prom::encoding::EncodeLabelSet for Labels {
fn encode(&self, mut enc: prom::encoding::LabelSetEncoder<'_>) -> Result<(), std::fmt::Error> {
self.encode_label_set(&mut enc)
}
}
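The `NewRecord` layer above is a thin factory wrapper: it bumps a labeled counter, then delegates service construction to the inner stack unchanged. A std-only sketch of that shape, where `Family` is a hypothetical stand-in for `prom::Family` and the label string stands in for the encoded label set:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Minimal stand-in for a labeled counter family: label string -> count.
#[derive(Clone, Default)]
struct Family(Arc<Mutex<HashMap<String, u64>>>);

impl Family {
    fn inc(&self, labels: &str) {
        *self.0.lock().unwrap().entry(labels.to_string()).or_insert(0) += 1;
    }
    fn get(&self, labels: &str) -> u64 {
        self.0.lock().unwrap().get(labels).copied().unwrap_or(0)
    }
}

// The stack trait, reduced to its essentials.
trait NewService<T> {
    type Service;
    fn new_service(&self, target: T) -> Self::Service;
}

// Mirrors NewRecord: record the connection, then delegate to the inner factory.
struct NewRecord<N> {
    inner: N,
    connections: Family,
}

impl<N: NewService<(String, u16)>> NewService<(String, u16)> for NewRecord<N> {
    type Service = N::Service;
    fn new_service(&self, (name, port): (String, u16)) -> Self::Service {
        let labels = format!("target_name=\"{name}\",target_port=\"{port}\"");
        self.connections.inc(&labels);
        self.inner.new_service((name, port))
    }
}

// An inner factory that builds a no-op service, like new_ok() in the tests.
struct NewOk;
impl NewService<(String, u16)> for NewOk {
    type Service = ();
    fn new_service(&self, _target: (String, u16)) {}
}

fn demo() -> u64 {
    let connections = Family::default();
    let stack = NewRecord { inner: NewOk, connections: connections.clone() };
    stack.new_service(("mysvc".to_string(), 8080));
    stack.new_service(("mysvc".to_string(), 8080));
    connections.get("target_name=\"mysvc\",target_port=\"8080\"")
}

fn main() {
    assert_eq!(demo(), 2);
    println!("ok");
}
```

Because the counter is incremented in `new_service` rather than per request, it counts connections with a transport header, which matches what the registered metric help text describes.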

View File

@ -1,115 +0,0 @@
use super::*;
use crate::direct::ClientInfo;
use futures::future;
use linkerd_app_core::{
io,
metrics::prom,
svc, tls,
transport::addrs::{ClientAddr, OrigDstAddr, Remote},
transport_header::{SessionProtocol, TransportHeader},
Error,
};
use std::str::FromStr;
fn new_ok<T>() -> svc::ArcNewTcp<T, io::BoxedIo> {
svc::ArcNewService::new(|_| svc::BoxService::new(svc::mk(|_| future::ok::<(), Error>(()))))
}
macro_rules! assert_counted {
($registry:expr, $proto:expr, $port:expr, $name:expr, $value:expr) => {{
let mut buf = String::new();
prom::encoding::text::encode_registry(&mut buf, $registry).expect("encode registry failed");
let metric = format!("connections_total{{session_protocol=\"{}\",target_port=\"{}\",target_name=\"{}\",client_id=\"test.client\"}}", $proto, $port, $name);
assert_eq!(
buf.split_terminator('\n')
.find(|l| l.starts_with(&*metric)),
Some(&*format!("{metric} {}", $value)),
"metric '{metric}' not found in:\n{buf}"
);
}};
}
// Helper that sets up the record layer and runs a single connection through it.

fn run_metric_test(header: TransportHeader) -> prom::Registry {
let mut registry = prom::Registry::default();
let families = MetricsFamilies::register(&mut registry);
let new_record = svc::layer::Layer::layer(&NewRecord::layer(families.clone()), new_ok());
// common client info
let client_id = tls::ClientId::from_str("test.client").unwrap();
let client_addr = Remote(ClientAddr(([127, 0, 0, 1], 40000).into()));
let local_addr = OrigDstAddr(([127, 0, 0, 1], 4143).into());
let client_info = ClientInfo {
client_id: client_id.clone(),
alpn: Some(tls::NegotiatedProtocol("transport.l5d.io/v1".into())),
client_addr,
local_addr,
};
let _svc = svc::NewService::new_service(&new_record, (header.clone(), client_info.clone()));
registry
}
#[test]
fn records_metrics_http1_local() {
let header = TransportHeader {
port: 8080,
name: None,
protocol: Some(SessionProtocol::Http1),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/1", 8080, "", 1);
}
#[test]
fn records_metrics_http2_local() {
let header = TransportHeader {
port: 8081,
name: None,
protocol: Some(SessionProtocol::Http2),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/2", 8081, "", 1);
}
#[test]
fn records_metrics_opaq_local() {
let header = TransportHeader {
port: 8082,
name: None,
protocol: None,
};
let registry = run_metric_test(header);
assert_counted!(&registry, "", 8082, "", 1);
}
#[test]
fn records_metrics_http1_gateway() {
let header = TransportHeader {
port: 8080,
name: Some("mysvc.myns.svc.cluster.local".parse().unwrap()),
protocol: Some(SessionProtocol::Http1),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/1", 8080, "mysvc.myns.svc.cluster.local", 1);
}
#[test]
fn records_metrics_http2_gateway() {
let header = TransportHeader {
port: 8081,
name: Some("mysvc.myns.svc.cluster.local".parse().unwrap()),
protocol: Some(SessionProtocol::Http2),
};
let registry = run_metric_test(header);
assert_counted!(&registry, "http/2", 8081, "mysvc.myns.svc.cluster.local", 1);
}
#[test]
fn records_metrics_opaq_gateway() {
let header = TransportHeader {
port: 8082,
name: Some("mysvc.myns.svc.cluster.local".parse().unwrap()),
protocol: None,
};
let registry = run_metric_test(header);
assert_counted!(&registry, "", 8082, "mysvc.myns.svc.cluster.local", 1);
}

View File

@ -18,7 +18,7 @@ pub mod fuzz {
test_util::{support::connect::Connect, *},
Config, Inbound,
};
use hyper::{Body, Request, Response};
use hyper::{client::conn::Builder as ClientBuilder, Body, Request, Response};
use libfuzzer_sys::arbitrary::Arbitrary;
use linkerd_app_core::{
identity, io,
@ -41,8 +41,9 @@ pub mod fuzz {
}
pub async fn fuzz_entry_raw(requests: Vec<HttpRequestSpec>) {
let server = hyper::server::conn::http1::Builder::new();
let mut client = hyper::client::conn::http1::Builder::new();
let mut server = hyper::server::conn::Http::new();
server.http1_only(true);
let mut client = ClientBuilder::new();
let connect =
support::connect().endpoint_fn_boxed(Target::addr(), hello_fuzz_server(server));
let profiles = profile::resolver();
@ -54,7 +55,7 @@ pub mod fuzz {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_fuzz_server(cfg, rt, profiles, connect).new_service(Target::HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (mut client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Now send all of the requests
for inp in requests.iter() {
@ -73,7 +74,14 @@ pub mod fuzz {
.header(header_name, header_value)
.body(Body::default())
{
let rsp = client.send_request(req).await;
let rsp = client
.ready()
.await
.expect("HTTP client poll_ready failed")
.call(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
if let Ok(rsp) = rsp {
let body = http_util::body_to_string(rsp.into_body()).await;
@ -85,18 +93,18 @@ pub mod fuzz {
}
}
drop(client);
// It's okay if the background task returns an error, as this would
// indicate that the proxy closed the connection --- which it will do on
// invalid inputs. We want to ensure that the proxy doesn't crash in the
// face of these inputs, and the background task will panic in this
// case.
drop(client);
let res = bg.join_all().await;
let res = bg.await;
tracing::info!(?res, "background tasks completed")
}
fn hello_fuzz_server(
http: hyper::server::conn::http1::Builder,
http: hyper::server::conn::Http,
) -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |_endpoint| {
let (client_io, server_io) = support::io::duplex(4096);
@ -162,12 +170,12 @@ pub mod fuzz {
}
#[derive(Clone, Debug)]
struct Target(http::Variant);
struct Target(http::Version);
// === impl Target ===
impl Target {
const HTTP1: Self = Self(http::Variant::Http1);
const HTTP1: Self = Self(http::Version::Http1);
fn addr() -> SocketAddr {
([127, 0, 0, 1], 80).into()
@ -192,8 +200,8 @@ pub mod fuzz {
}
}
impl svc::Param<http::Variant> for Target {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for Target {
fn param(&self) -> http::Version {
self.0
}
}
@ -227,9 +235,6 @@ pub mod fuzz {
kind: "server".into(),
name: "testsrv".into(),
}),
local_rate_limit: Arc::new(
linkerd_proxy_server_policy::LocalRateLimit::default(),
),
},
);
policy
@ -238,14 +243,11 @@ pub mod fuzz {
impl svc::Param<policy::ServerLabel> for Target {
fn param(&self) -> policy::ServerLabel {
policy::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
1000,
)
policy::ServerLabel(Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}))
}
}

View File

@ -33,7 +33,7 @@ struct Logical {
/// The request's logical destination. Used for profile discovery.
logical: Option<NameAddr>,
addr: Remote<ServerAddr>,
http: http::Variant,
http: http::Version,
tls: tls::ConditionalServerTls,
permit: policy::HttpRoutePermit,
labels: tap::Labels,
@ -69,7 +69,7 @@ struct LogicalError {
impl<C> Inbound<C> {
pub(crate) fn push_http_router<T, P>(self, profiles: P) -> Inbound<svc::ArcNewCloneHttp<T>>
where
T: Param<http::Variant>
T: Param<http::Version>
+ Param<Remote<ServerAddr>>
+ Param<Remote<ClientAddr>>
+ Param<tls::ConditionalServerTls>
@ -83,7 +83,6 @@ impl<C> Inbound<C> {
{
self.map_stack(|config, rt, connect| {
let allow_profile = config.allow_discovery.clone();
let unsafe_authority_labels = config.unsafe_authority_labels;
let h1_params = config.proxy.connect.http1;
let h2_params = config.proxy.connect.http2.clone();
@ -106,8 +105,8 @@ impl<C> Inbound<C> {
addr: t.addr,
permit: t.permit,
params: match t.http {
http::Variant::Http1 => http::client::Params::Http1(h1_params),
http::Variant::H2 => http::client::Params::H2(h2_params.clone())
http::Version::Http1 => http::client::Params::Http1(h1_params),
http::Version::H2 => http::client::Params::H2(h2_params.clone())
},
}
})
@ -123,9 +122,7 @@ impl<C> Inbound<C> {
rt.metrics
.proxy
.http_endpoint
.to_layer_via::<classify::Response, _, _, _>(
endpoint_labels(unsafe_authority_labels),
),
.to_layer::<classify::Response, _, _>(),
)
.push_on_service(http_tracing::client(rt.span_sink.clone(), super::trace_labels()))
.push_on_service(http::BoxResponse::layer())
@ -166,14 +163,14 @@ impl<C> Inbound<C> {
|(rx, logical): (Option<profiles::Receiver>, Logical)| -> Result<_, Infallible> {
if let Some(rx) = rx {
if let Some(addr) = rx.logical_addr() {
return Ok(svc::Either::Left(Profile {
return Ok(svc::Either::A(Profile {
addr,
logical,
profiles: rx,
}));
}
}
Ok(svc::Either::Right(logical))
Ok(svc::Either::B(logical))
},
http.clone().into_inner(),
)
@ -192,7 +189,7 @@ impl<C> Inbound<C> {
// discovery (so that we skip the profile stack above).
let addr = match logical.logical.clone() {
Some(addr) => addr,
None => return Ok(svc::Either::Right((None, logical))),
None => return Ok(svc::Either::B((None, logical))),
};
if !allow_profile.matches(addr.name()) {
tracing::debug!(
@ -200,9 +197,9 @@ impl<C> Inbound<C> {
suffixes = %allow_profile,
"Skipping discovery, address not in configured DNS suffixes",
);
return Ok(svc::Either::Right((None, logical)));
return Ok(svc::Either::B((None, logical)));
}
Ok(svc::Either::Left(logical))
Ok(svc::Either::A(logical))
},
router
.check_new_service::<(Option<profiles::Receiver>, Logical), http::Request<_>>()
@ -390,17 +387,13 @@ impl Param<transport::labels::Key> for Logical {
}
}
fn endpoint_labels(
unsafe_authority_labels: bool,
) -> impl svc::ExtractParam<metrics::EndpointLabels, Logical> + Clone {
move |t: &Logical| -> metrics::EndpointLabels {
impl Param<metrics::EndpointLabels> for Logical {
fn param(&self) -> metrics::EndpointLabels {
metrics::InboundEndpointLabels {
tls: t.tls.as_ref().map(|t| t.labels()),
authority: unsafe_authority_labels
.then(|| t.logical.as_ref().map(|d| d.as_http_authority()))
.flatten(),
target_addr: t.addr.into(),
policy: t.permit.labels.clone(),
tls: self.tls.clone(),
authority: self.logical.as_ref().map(|d| d.as_http_authority()),
target_addr: self.addr.into(),
policy: self.permit.labels.clone(),
}
.into()
}
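The replacement `endpoint_labels` extractor gates the authority label behind a flag, presumably because request authorities are an unbounded-cardinality label value. The gating itself reduces to `bool::then` plus `Option::flatten`; a minimal sketch with illustrative names (the flag and helper are assumptions, not the source's API):

```rust
// Emit the authority label only when the opt-in flag is set, mirroring the
// `.then(..).flatten()` gating in the diff above.
fn authority_label(unsafe_authority_labels: bool, logical: Option<&str>) -> Option<String> {
    unsafe_authority_labels
        .then(|| logical.map(|s| s.to_string()))
        .flatten()
}

fn main() {
    // Flag off: no authority label regardless of the logical address.
    assert_eq!(authority_label(false, Some("foo.svc.cluster.local:5550")), None);
    // Flag on: the authority is passed through when present.
    assert_eq!(
        authority_label(true, Some("foo.svc.cluster.local:5550")),
        Some("foo.svc.cluster.local:5550".to_string())
    );
    assert_eq!(authority_label(true, None), None);
    println!("ok");
}
```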

View File

@ -1,6 +1,6 @@
use super::set_identity_header::NewSetIdentityHeader;
use crate::{policy, Inbound};
pub use linkerd_app_core::proxy::http::{normalize_uri, Variant};
pub use linkerd_app_core::proxy::http::{normalize_uri, Version};
use linkerd_app_core::{
config::ProxyConfig,
errors, http_tracing, io,
@ -31,7 +31,7 @@ impl<H> Inbound<H> {
pub fn push_http_server<T, HSvc>(self) -> Inbound<svc::ArcNewCloneHttp<T>>
where
// Connection target.
T: Param<Variant>
T: Param<Version>
+ Param<normalize_uri::DefaultAuthority>
+ Param<tls::ConditionalServerTls>
+ Param<ServerLabel>
@ -95,7 +95,7 @@ impl<H> Inbound<H> {
pub fn push_http_tcp_server<T, I, HSvc>(self) -> Inbound<svc::ArcNewTcp<T, I>>
where
// Connection target.
T: Param<Variant>,
T: Param<Version>,
T: Clone + Send + Unpin + 'static,
// Server-side socket.
I: io::AsyncRead + io::AsyncWrite + io::PeerAddr + Send + Unpin + 'static,

View File

@ -6,13 +6,13 @@ use crate::{
},
Config, Inbound,
};
use hyper::{Request, Response};
use hyper::{body::HttpBody, Body, Request, Response};
use linkerd_app_core::{
classify,
errors::header::L5D_PROXY_ERROR,
errors::respond::L5D_PROXY_ERROR,
identity, io, metrics,
proxy::http::{self, BoxBody},
svc::{self, http::TokioExecutor, NewService, Param},
proxy::http,
svc::{self, http::TracingExecutor, NewService, Param},
tls,
transport::{ClientAddr, OrigDstAddr, Remote, ServerAddr},
Error, NameAddr, ProxyRuntime,
@ -33,7 +33,7 @@ fn build_server<I>(
where
I: io::AsyncRead + io::AsyncWrite + io::PeerAddr + Send + Unpin + 'static,
{
Inbound::new(cfg, rt, &mut Default::default())
Inbound::new(cfg, rt)
.with_stack(connect)
.map_stack(|cfg, _, s| {
s.push_map_target(|t| Param::<Remote<ServerAddr>>::param(&t))
@ -47,10 +47,9 @@ where
#[tokio::test(flavor = "current_thread")]
async fn unmeshed_http1_hello_world() {
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
let mut client = hyper::client::conn::http1::Builder::new();
let server = hyper::server::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
let _trace = trace_init();
// Build a mock "connector" that returns the upstream "server" IO.
@ -65,15 +64,15 @@ async fn unmeshed_http1_hello_world() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -82,7 +81,6 @@ async fn unmeshed_http1_hello_world() {
assert_eq!(body, "Hello world!");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -93,10 +91,10 @@ async fn unmeshed_http1_hello_world() {
#[tokio::test(flavor = "current_thread")]
async fn downgrade_origin_form() {
// Reproduces https://github.com/linkerd/linkerd2/issues/5298
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let server = hyper::server::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let _trace = trace_init();
// Build a mock "connector" that returns the upstream "server" IO.
@ -111,45 +109,17 @@ async fn downgrade_origin_form() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = {
tracing::info!(settings = ?client, "connecting client with");
let (client_io, server_io) = io::duplex(4096);
let (client, conn) = client
.handshake(hyper_util::rt::TokioIo::new(client_io))
.await
.expect("Client must connect");
let mut bg = tokio::task::JoinSet::new();
bg.spawn(
async move {
server.oneshot(server_io).await?;
tracing::info!("proxy serve task complete");
Ok(())
}
.instrument(tracing::info_span!("proxy")),
);
bg.spawn(
async move {
conn.await?;
tracing::info!("client background complete");
Ok(())
}
.instrument(tracing::info_span!("client_bg")),
);
(client, bg)
};
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
let req = Request::builder()
.method(http::Method::GET)
.uri("/")
.header(http::header::HOST, "foo.svc.cluster.local")
.header("l5d-orig-proto", "HTTP/1.1")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -158,7 +128,6 @@ async fn downgrade_origin_form() {
assert_eq!(body, "Hello world!");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -168,10 +137,10 @@ async fn downgrade_origin_form() {
#[tokio::test(flavor = "current_thread")]
async fn downgrade_absolute_form() {
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
let server = hyper::server::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let _trace = trace_init();
// Build a mock "connector" that returns the upstream "server" IO.
@ -186,46 +155,17 @@ async fn downgrade_absolute_form() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = {
tracing::info!(settings = ?client, "connecting client with");
let (client_io, server_io) = io::duplex(4096);
let (client, conn) = client
.handshake(hyper_util::rt::TokioIo::new(client_io))
.await
.expect("Client must connect");
let mut bg = tokio::task::JoinSet::new();
bg.spawn(
async move {
server.oneshot(server_io).await?;
tracing::info!("proxy serve task complete");
Ok(())
}
.instrument(tracing::info_span!("proxy")),
);
bg.spawn(
async move {
conn.await?;
tracing::info!("client background complete");
Ok(())
}
.instrument(tracing::info_span!("client_bg")),
);
(client, bg)
};
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550/")
.header(http::header::HOST, "foo.svc.cluster.local")
.header("l5d-orig-proto", "HTTP/1.1; absolute-form")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -234,7 +174,6 @@ async fn downgrade_absolute_form() {
assert_eq!(body, "Hello world!");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -251,7 +190,8 @@ async fn http1_bad_gateway_meshed_response_error_header() {
// Build a client using the connect that always errors so that responses
// are BAD_GATEWAY.
let mut client = hyper::client::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -259,17 +199,17 @@ async fn http1_bad_gateway_meshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_http1());
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a BAD_GATEWAY with the expected
// header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -278,10 +218,9 @@ async fn http1_bad_gateway_meshed_response_error_header() {
// because we don't build a real HTTP endpoint stack, which adds error
// context to this error, and the client rescue layer is below where the
// logical error context is added.
check_error_header(rsp.headers(), "client error (Connect)");
check_error_header(rsp.headers(), "server is not listening");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -298,7 +237,8 @@ async fn http1_bad_gateway_unmeshed_response() {
// Build a client using the connect that always errors so that responses
// are BAD_GATEWAY.
let mut client = hyper::client::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -306,17 +246,17 @@ async fn http1_bad_gateway_unmeshed_response() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a BAD_GATEWAY with the expected
// header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -327,7 +267,6 @@ async fn http1_bad_gateway_unmeshed_response() {
);
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -346,7 +285,8 @@ async fn http1_connect_timeout_meshed_response_error_header() {
// Build a client using the connect that always sleeps so that responses
// are GATEWAY_TIMEOUT.
let mut client = hyper::client::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -354,17 +294,17 @@ async fn http1_connect_timeout_meshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_http1());
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a GATEWAY_TIMEOUT with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -374,10 +314,9 @@ async fn http1_connect_timeout_meshed_response_error_header() {
// because we don't build a real HTTP endpoint stack, which adds error
// context to this error, and the client rescue layer is below where the
// logical error context is added.
check_error_header(rsp.headers(), "client error (Connect)");
check_error_header(rsp.headers(), "connect timed out after 1s");
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -396,7 +335,8 @@ async fn http1_connect_timeout_unmeshed_response_error_header() {
// Build a client using the connect that always sleeps so that responses
// are GATEWAY_TIMEOUT.
let mut client = hyper::client::conn::http1::Builder::new();
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -404,17 +344,17 @@ async fn http1_connect_timeout_unmeshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_HTTP1);
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is a GATEWAY_TIMEOUT with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -425,7 +365,6 @@ async fn http1_connect_timeout_unmeshed_response_error_header() {
);
// Wait for all of the background tasks to complete, panicking if any returned an error.
drop(client);
bg.join_all()
.await
.into_iter()
@ -441,8 +380,9 @@ async fn h2_response_meshed_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -450,17 +390,17 @@ async fn h2_response_meshed_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_h2());
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is SERVICE_UNAVAILABLE with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::empty())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -482,8 +422,9 @@ async fn h2_response_unmeshed_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -491,17 +432,17 @@ async fn h2_response_unmeshed_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is SERVICE_UNAVAILABLE with the
// expected header message.
let req = Request::builder()
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -525,8 +466,9 @@ async fn grpc_meshed_response_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -534,7 +476,7 @@ async fn grpc_meshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_h2());
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
@ -542,10 +484,10 @@ async fn grpc_meshed_response_error_header() {
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "application/grpc")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -567,8 +509,9 @@ async fn grpc_unmeshed_response_error_header() {
let connect = support::connect().endpoint_fn_boxed(Target::addr(), connect_error());
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -576,7 +519,7 @@ async fn grpc_unmeshed_response_error_header() {
let cfg = default_config();
let (rt, _shutdown) = runtime();
let server = build_server(cfg, rt, profiles, connect).new_service(Target::UNMESHED_H2);
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
@ -584,10 +527,10 @@ async fn grpc_unmeshed_response_error_header() {
.method(http::Method::GET)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "application/grpc")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
@ -609,8 +552,7 @@ async fn grpc_response_class() {
// Build a mock connector serving a gRPC server that returns errors.
let connect = {
let mut server = hyper::server::conn::http2::Builder::new(TokioExecutor::new());
server.timer(hyper_util::rt::TokioTimer::new());
let server = hyper::server::conn::http2::Builder::new(TracingExecutor);
support::connect().endpoint_fn_boxed(
Target::addr(),
grpc_status_server(server, tonic::Code::Unknown),
@ -618,8 +560,9 @@ async fn grpc_response_class() {
};
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http2::Builder::new(TokioExecutor::new());
client.timer(hyper_util::rt::TokioTimer::new());
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut client = hyper::client::conn::Builder::new();
client.http2_only(true);
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
@ -632,7 +575,7 @@ async fn grpc_response_class() {
.http_endpoint
.into_report(time::Duration::from_secs(3600));
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_h2());
let (mut client, bg) = http_util::connect_and_accept_http2(&mut client, server).await;
let (client, bg) = http_util::connect_and_accept(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
@ -640,43 +583,33 @@ async fn grpc_response_class() {
.method(http::Method::POST)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "application/grpc")
.body(BoxBody::default())
.body(Body::default())
.unwrap();
let rsp = client
.send_request(req)
let mut rsp = client
.oneshot(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::OK);
use http_body_util::BodyExt;
let mut body = rsp.into_body();
let trls = body
.frame()
.await
.unwrap()
.unwrap()
.into_trailers()
.expect("trailers frame");
rsp.body_mut().data().await;
let trls = rsp.body_mut().trailers().await.unwrap().unwrap();
assert_eq!(trls.get("grpc-status").unwrap().to_str().unwrap(), "2");
let response_total = metrics
.get_response_total(
&metrics::EndpointLabels::Inbound(metrics::InboundEndpointLabels {
tls: Target::meshed_h2().1.map(|t| t.labels()),
authority: None,
tls: Target::meshed_h2().1,
authority: Some("foo.svc.cluster.local:5550".parse().unwrap()),
target_addr: "127.0.0.1:80".parse().unwrap(),
policy: metrics::RouteAuthzLabels {
route: metrics::RouteLabels {
server: metrics::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
80,
),
server: metrics::ServerLabel(Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
})),
route: policy::Meta::new_default("default"),
},
authz: Arc::new(policy::Meta::Resource {
@ -695,104 +628,6 @@ async fn grpc_response_class() {
drop(bg);
}
#[tokio::test(flavor = "current_thread")]
async fn unsafe_authority_labels_true() {
let _trace = trace_init();
let mut cfg = default_config();
cfg.unsafe_authority_labels = true;
test_unsafe_authority_labels(cfg, Some("foo.svc.cluster.local:5550".parse().unwrap())).await;
}
#[tokio::test(flavor = "current_thread")]
async fn unsafe_authority_labels_false() {
let _trace = trace_init();
let cfg = default_config();
test_unsafe_authority_labels(cfg, None).await;
}
async fn test_unsafe_authority_labels(
cfg: Config,
expected_authority: Option<http::uri::Authority>,
) {
let connect = {
let mut server = hyper::server::conn::http1::Builder::new();
server.timer(hyper_util::rt::TokioTimer::new());
support::connect().endpoint_fn_boxed(Target::addr(), hello_server(server))
};
// Build a client using the connect that always errors.
let mut client = hyper::client::conn::http1::Builder::new();
let profiles = profile::resolver();
let profile_tx =
profiles.profile_tx(NameAddr::from_str_and_port("foo.svc.cluster.local", 5550).unwrap());
profile_tx.send(profile::Profile::default()).unwrap();
let (rt, _shutdown) = runtime();
let metrics = rt
.metrics
.clone()
.http_endpoint
.into_report(time::Duration::from_secs(3600));
let server = build_server(cfg, rt, profiles, connect).new_service(Target::meshed_http1());
let (mut client, bg) = http_util::connect_and_accept_http1(&mut client, server).await;
// Send a request and assert that it is OK with the expected header
// message.
let req = Request::builder()
.method(http::Method::POST)
.uri("http://foo.svc.cluster.local:5550")
.header(http::header::CONTENT_TYPE, "text/plain")
.body(BoxBody::default())
.unwrap();
let rsp = client
.send_request(req)
.await
.expect("HTTP client request failed");
tracing::info!(?rsp);
assert_eq!(rsp.status(), http::StatusCode::OK);
use http_body_util::BodyExt;
let mut body = rsp.into_body();
while let Some(Ok(_)) = body.frame().await {}
tracing::info!("{metrics:#?}");
let response_total = metrics
.get_response_total(
&metrics::EndpointLabels::Inbound(metrics::InboundEndpointLabels {
tls: Target::meshed_http1().1.as_ref().map(|t| t.labels()),
authority: expected_authority,
target_addr: "127.0.0.1:80".parse().unwrap(),
policy: metrics::RouteAuthzLabels {
route: metrics::RouteLabels {
server: metrics::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
80,
),
route: policy::Meta::new_default("default"),
},
authz: Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "serverauthorization".into(),
name: "testsaz".into(),
}),
},
}),
Some(http::StatusCode::OK),
&classify::Class::Http(Ok(http::StatusCode::OK)),
)
.expect("response_total not found");
assert_eq!(response_total, 1.0);
drop(bg);
}
#[tracing::instrument]
fn hello_server(
server: hyper::server::conn::http1::Builder,
@ -802,14 +637,13 @@ fn hello_server(
let _e = span.enter();
tracing::info!("mock connecting");
let (client_io, server_io) = support::io::duplex(4096);
let hello_svc =
hyper::service::service_fn(|request: Request<hyper::body::Incoming>| async move {
tracing::info!(?request);
Ok::<_, io::Error>(Response::new(BoxBody::from_static("Hello world!")))
});
let hello_svc = hyper::service::service_fn(|request: Request<Body>| async move {
tracing::info!(?request);
Ok::<_, io::Error>(Response::new(Body::from("Hello world!")))
});
tokio::spawn(
server
.serve_connection(hyper_util::rt::TokioIo::new(server_io), hello_svc)
.serve_connection(server_io, hello_svc)
.in_current_span(),
);
Ok(io::BoxedIo::new(client_io))
@ -817,8 +651,9 @@ fn hello_server(
}
#[tracing::instrument]
#[allow(deprecated)] // linkerd/linkerd2#8733
fn grpc_status_server(
server: hyper::server::conn::http2::Builder<TokioExecutor>,
server: hyper::server::conn::http2::Builder<TracingExecutor>,
status: tonic::Code,
) -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |endpoint| {
@ -829,29 +664,26 @@ fn grpc_status_server(
tokio::spawn(
server
.serve_connection(
hyper_util::rt::TokioIo::new(server_io),
hyper::service::service_fn(
move |request: Request<hyper::body::Incoming>| async move {
tracing::info!(?request);
let (mut tx, rx) =
http_body_util::channel::Channel::<bytes::Bytes, Error>::new(1024);
tokio::spawn(async move {
let mut trls = ::http::HeaderMap::new();
trls.insert(
"grpc-status",
(status as u32).to_string().parse().unwrap(),
);
tx.send_trailers(trls).await
});
Ok::<_, io::Error>(
http::Response::builder()
.version(::http::Version::HTTP_2)
.header("content-type", "application/grpc")
.body(rx)
.unwrap(),
)
},
),
server_io,
hyper::service::service_fn(move |request: Request<Body>| async move {
tracing::info!(?request);
let (mut tx, rx) = Body::channel();
tokio::spawn(async move {
let mut trls = ::http::HeaderMap::new();
trls.insert(
"grpc-status",
(status as u32).to_string().parse().unwrap(),
);
tx.send_trailers(trls).await
});
Ok::<_, io::Error>(
http::Response::builder()
.version(::http::Version::HTTP_2)
.header("content-type", "application/grpc")
.body(rx)
.unwrap(),
)
}),
)
.in_current_span(),
);
@ -861,7 +693,12 @@ fn grpc_status_server(
#[tracing::instrument]
fn connect_error() -> impl Fn(Remote<ServerAddr>) -> io::Result<io::BoxedIo> {
move |_| Err(io::Error::other("server is not listening"))
move |_| {
Err(io::Error::new(
io::ErrorKind::Other,
"server is not listening",
))
}
}
#[tracing::instrument]
@ -882,7 +719,7 @@ fn connect_timeout() -> Box<dyn FnMut(Remote<ServerAddr>) -> ConnectFuture + Sen
}
#[derive(Clone, Debug)]
struct Target(http::Variant, tls::ConditionalServerTls);
struct Target(http::Version, tls::ConditionalServerTls);
#[track_caller]
fn check_error_header(hdrs: &::http::HeaderMap, expected: &str) {
@ -901,17 +738,17 @@ fn check_error_header(hdrs: &::http::HeaderMap, expected: &str) {
impl Target {
const UNMESHED_HTTP1: Self = Self(
http::Variant::Http1,
http::Version::Http1,
tls::ConditionalServerTls::None(tls::NoServerTls::NoClientHello),
);
const UNMESHED_H2: Self = Self(
http::Variant::H2,
http::Version::H2,
tls::ConditionalServerTls::None(tls::NoServerTls::NoClientHello),
);
fn meshed_http1() -> Self {
Self(
http::Variant::Http1,
http::Version::Http1,
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(tls::ClientId(
"foosa.barns.serviceaccount.identity.linkerd.cluster.local"
@ -925,7 +762,7 @@ impl Target {
fn meshed_h2() -> Self {
Self(
http::Variant::H2,
http::Version::H2,
tls::ConditionalServerTls::Some(tls::ServerTls::Established {
client_id: Some(tls::ClientId(
"foosa.barns.serviceaccount.identity.linkerd.cluster.local"
@ -960,8 +797,8 @@ impl svc::Param<Remote<ClientAddr>> for Target {
}
}
impl svc::Param<http::Variant> for Target {
fn param(&self) -> http::Variant {
impl svc::Param<http::Version> for Target {
fn param(&self) -> http::Version {
self.0
}
}
@ -1003,14 +840,11 @@ impl svc::Param<policy::AllowPolicy> for Target {
impl svc::Param<policy::ServerLabel> for Target {
fn param(&self) -> policy::ServerLabel {
policy::ServerLabel(
Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}),
80,
)
policy::ServerLabel(Arc::new(policy::Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "testsrv".into(),
}))
}
}


@ -20,15 +20,12 @@ pub mod test_util;
#[cfg(fuzzing)]
pub use self::http::fuzz as http_fuzz;
pub use self::{
detect::MetricsFamilies as DetectMetrics, metrics::InboundMetrics, policy::DefaultPolicy,
};
pub use self::{metrics::InboundMetrics, policy::DefaultPolicy};
use linkerd_app_core::{
config::{ConnectConfig, ProxyConfig, QueueConfig},
drain,
http_tracing::SpanSink,
identity, io,
metrics::prom,
proxy::{tap, tcp},
svc,
transport::{self, Remote, ServerAddr},
@ -55,9 +52,6 @@ pub struct Config {
/// Configures how HTTP requests are buffered *for each inbound port*.
pub http_request_queue: QueueConfig,
/// Enables unsafe authority labels.
pub unsafe_authority_labels: bool,
}
#[derive(Clone)]
@ -113,6 +107,10 @@ impl<S> Inbound<S> {
&self.runtime.identity
}
pub fn proxy_metrics(&self) -> &metrics::Proxy {
&self.runtime.metrics.proxy
}
/// A helper for gateways to instrument policy checks.
pub fn authorize_http<N>(
&self,
@ -150,9 +148,9 @@ impl<S> Inbound<S> {
}
impl Inbound<()> {
pub fn new(config: Config, runtime: ProxyRuntime, prom: &mut prom::Registry) -> Self {
pub fn new(config: Config, runtime: ProxyRuntime) -> Self {
let runtime = Runtime {
metrics: InboundMetrics::new(runtime.metrics, prom),
metrics: InboundMetrics::new(runtime.metrics),
identity: runtime.identity,
tap: runtime.tap,
span_sink: runtime.span_sink,
@ -168,11 +166,7 @@ impl Inbound<()> {
#[cfg(any(test, feature = "test-util"))]
pub fn for_test() -> (Self, drain::Signal) {
let (rt, drain) = test_util::runtime();
let this = Self::new(
test_util::default_config(),
rt,
&mut prom::Registry::default(),
);
let this = Self::new(test_util::default_config(), rt);
(this, drain)
}


@ -13,7 +13,7 @@ pub(crate) mod error;
pub use linkerd_app_core::metrics::*;
/// Holds LEGACY inbound proxy metrics.
/// Holds outbound proxy metrics.
#[derive(Clone, Debug)]
pub struct InboundMetrics {
pub http_authz: authz::HttpAuthzMetrics,
@ -25,32 +25,21 @@ pub struct InboundMetrics {
/// Holds metrics that are common to both inbound and outbound proxies. These metrics are
/// reported separately
pub proxy: Proxy,
pub detect: crate::detect::MetricsFamilies,
pub direct: crate::direct::MetricsFamilies,
}
impl InboundMetrics {
pub(crate) fn new(proxy: Proxy, reg: &mut prom::Registry) -> Self {
let detect =
crate::detect::MetricsFamilies::register(reg.sub_registry_with_prefix("tcp_detect"));
let direct = crate::direct::MetricsFamilies::register(
reg.sub_registry_with_prefix("tcp_transport_header"),
);
pub(crate) fn new(proxy: Proxy) -> Self {
Self {
http_authz: authz::HttpAuthzMetrics::default(),
http_errors: error::HttpErrorMetrics::default(),
tcp_authz: authz::TcpAuthzMetrics::default(),
tcp_errors: error::TcpErrorMetrics::default(),
proxy,
detect,
direct,
}
}
}
impl legacy::FmtMetrics for InboundMetrics {
impl FmtMetrics for InboundMetrics {
fn fmt_metrics(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.http_authz.fmt_metrics(f)?;
self.http_errors.fmt_metrics(f)?;


@ -1,9 +1,8 @@
use crate::policy::{AllowPolicy, HttpRoutePermit, Meta, ServerPermit};
use linkerd_app_core::{
metrics::{
legacy::{Counter, FmtLabels, FmtMetrics},
metrics, RouteAuthzLabels, RouteLabels, ServerAuthzLabels, ServerLabel, TargetAddr,
TlsAccept,
metrics, Counter, FmtLabels, FmtMetrics, RouteAuthzLabels, RouteLabels, ServerAuthzLabels,
ServerLabel, TargetAddr, TlsAccept,
},
tls,
transport::OrigDstAddr,
@ -68,7 +67,7 @@ pub struct HTTPLocalRateLimitLabels {
#[derive(Debug, Hash, PartialEq, Eq)]
struct Key<L> {
target: TargetAddr,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
labels: L,
}
@ -81,7 +80,7 @@ type HttpLocalRateLimitKey = Key<HTTPLocalRateLimitLabels>;
// === impl HttpAuthzMetrics ===
impl HttpAuthzMetrics {
pub fn allow(&self, permit: &HttpRoutePermit, tls: tls::ConditionalServerTlsLabels) {
pub fn allow(&self, permit: &HttpRoutePermit, tls: tls::ConditionalServerTls) {
self.0
.allow
.lock()
@ -94,7 +93,7 @@ impl HttpAuthzMetrics {
&self,
labels: ServerLabel,
dst: OrigDstAddr,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
) {
self.0
.route_not_found
@ -104,12 +103,7 @@ impl HttpAuthzMetrics {
.incr();
}
pub fn deny(
&self,
labels: RouteLabels,
dst: OrigDstAddr,
tls: tls::ConditionalServerTlsLabels,
) {
pub fn deny(&self, labels: RouteLabels, dst: OrigDstAddr, tls: tls::ConditionalServerTls) {
self.0
.deny
.lock()
@ -122,7 +116,7 @@ impl HttpAuthzMetrics {
&self,
labels: HTTPLocalRateLimitLabels,
dst: OrigDstAddr,
tls: tls::ConditionalServerTlsLabels,
tls: tls::ConditionalServerTls,
) {
self.0
.http_local_rate_limit
@ -193,7 +187,7 @@ impl FmtMetrics for HttpAuthzMetrics {
// === impl TcpAuthzMetrics ===
impl TcpAuthzMetrics {
pub fn allow(&self, permit: &ServerPermit, tls: tls::ConditionalServerTlsLabels) {
pub fn allow(&self, permit: &ServerPermit, tls: tls::ConditionalServerTls) {
self.0
.allow
.lock()
@ -202,7 +196,7 @@ impl TcpAuthzMetrics {
.incr();
}
pub fn deny(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) {
pub fn deny(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTls) {
self.0
.deny
.lock()
@ -211,7 +205,7 @@ impl TcpAuthzMetrics {
.incr();
}
pub fn terminate(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) {
pub fn terminate(&self, policy: &AllowPolicy, tls: tls::ConditionalServerTls) {
self.0
.terminate
.lock()
@ -252,24 +246,18 @@ impl FmtMetrics for TcpAuthzMetrics {
impl FmtLabels for HTTPLocalRateLimitLabels {
fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let Self {
server,
rate_limit,
scope,
} = self;
server.fmt_labels(f)?;
if let Some(rl) = rate_limit {
self.server.fmt_labels(f)?;
if let Some(rl) = &self.rate_limit {
write!(
f,
",ratelimit_group=\"{}\",ratelimit_kind=\"{}\",ratelimit_name=\"{}\",ratelimit_scope=\"{}\"",
rl.group(),
rl.kind(),
rl.name(),
scope,
self.scope,
)
} else {
write!(f, ",ratelimit_scope=\"{scope}\"")
write!(f, ",ratelimit_scope=\"{}\"", self.scope)
}
}
}
@ -277,7 +265,7 @@ impl FmtLabels for HTTPLocalRateLimitLabels {
// === impl Key ===
impl<L> Key<L> {
fn new(labels: L, dst: OrigDstAddr, tls: tls::ConditionalServerTlsLabels) -> Self {
fn new(labels: L, dst: OrigDstAddr, tls: tls::ConditionalServerTls) -> Self {
Self {
tls,
target: TargetAddr(dst.into()),
@ -288,30 +276,24 @@ impl<L> Key<L> {
impl<L: FmtLabels> FmtLabels for Key<L> {
fn fmt_labels(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let Self {
target,
tls,
labels,
} = self;
(target, (labels, TlsAccept(tls))).fmt_labels(f)
(self.target, (&self.labels, TlsAccept(&self.tls))).fmt_labels(f)
}
}
impl ServerKey {
fn from_policy(policy: &AllowPolicy, tls: tls::ConditionalServerTlsLabels) -> Self {
fn from_policy(policy: &AllowPolicy, tls: tls::ConditionalServerTls) -> Self {
Self::new(policy.server_label(), policy.dst_addr(), tls)
}
}
impl RouteAuthzKey {
fn from_permit(permit: &HttpRoutePermit, tls: tls::ConditionalServerTlsLabels) -> Self {
fn from_permit(permit: &HttpRoutePermit, tls: tls::ConditionalServerTls) -> Self {
Self::new(permit.labels.clone(), permit.dst, tls)
}
}
impl ServerAuthzKey {
fn from_permit(permit: &ServerPermit, tls: tls::ConditionalServerTlsLabels) -> Self {
fn from_permit(permit: &ServerPermit, tls: tls::ConditionalServerTls) -> Self {
Self::new(permit.labels.clone(), permit.dst, tls)
}
}


@ -8,7 +8,7 @@ use crate::{
};
use linkerd_app_core::{
errors::{FailFastError, LoadShedError},
metrics::legacy::FmtLabels,
metrics::FmtLabels,
tls,
};
use std::fmt;


@ -1,9 +1,6 @@
use super::ErrorKind;
use linkerd_app_core::{
metrics::{
legacy::{Counter, FmtMetrics},
metrics, ServerLabel,
},
metrics::{metrics, Counter, FmtMetrics, ServerLabel},
svc::{self, stack::NewMonitor},
transport::{labels::TargetAddr, OrigDstAddr},
Error,


@ -1,9 +1,6 @@
use super::ErrorKind;
use linkerd_app_core::{
metrics::{
legacy::{Counter, FmtMetrics},
metrics,
},
metrics::{metrics, Counter, FmtMetrics},
svc::{self, stack::NewMonitor},
transport::{labels::TargetAddr, OrigDstAddr},
Error,


@ -133,7 +133,7 @@ impl AllowPolicy {
#[inline]
pub fn server_label(&self) -> ServerLabel {
ServerLabel(self.server.borrow().meta.clone(), self.dst.port())
ServerLabel(self.server.borrow().meta.clone())
}
pub fn ratelimit_label(&self, error: &RateLimitError) -> HTTPLocalRateLimitLabels {
@ -220,7 +220,7 @@ impl ServerPermit {
protocol: server.protocol.clone(),
labels: ServerAuthzLabels {
authz: authz.meta.clone(),
server: ServerLabel(server.meta.clone(), dst.port()),
server: ServerLabel(server.meta.clone()),
},
}
}


@ -33,8 +33,9 @@ static INVALID_POLICY: once_cell::sync::OnceCell<ServerPolicy> = once_cell::sync
impl<S> Api<S>
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error> + Clone,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error> + Clone,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
{
pub(super) fn new(
workload: Arc<str>,
@ -57,9 +58,10 @@ where
impl<S> Service<u16> for Api<S>
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
S: Clone + Send + Sync + 'static,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
S::Future: Send + 'static,
{
type Response =


@ -40,10 +40,10 @@ impl Config {
limits: ReceiveLimits,
) -> impl GetPolicy + Clone + Send + Sync + 'static
where
C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
C: Clone + Unpin + Send + Sync + 'static,
C::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Send + 'static,
C::ResponseBody: http::HttpBody<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Default + Send + 'static,
C::Future: Send,
{
match self {


@ -248,11 +248,8 @@ impl<T, N> HttpPolicyService<T, N> {
);
}
}
self.metrics.deny(
labels,
self.connection.dst,
self.connection.tls.as_ref().map(|t| t.labels()),
);
self.metrics
.deny(labels, self.connection.dst, self.connection.tls.clone());
return Err(HttpRouteUnauthorized(()).into());
}
};
@ -282,19 +279,14 @@ impl<T, N> HttpPolicyService<T, N> {
}
};
self.metrics
.allow(&permit, self.connection.tls.as_ref().map(|t| t.labels()));
self.metrics.allow(&permit, self.connection.tls.clone());
Ok((permit, r#match, route))
}
fn mk_route_not_found(&self) -> Error {
let labels = self.policy.server_label();
self.metrics.route_not_found(
labels,
self.connection.dst,
self.connection.tls.as_ref().map(|t| t.labels()),
);
self.metrics
.route_not_found(labels, self.connection.dst, self.connection.tls.clone());
HttpRouteNotFound(()).into()
}
@ -314,7 +306,7 @@ impl<T, N> HttpPolicyService<T, N> {
self.metrics.ratelimit(
self.policy.ratelimit_label(&err),
self.connection.dst,
self.connection.tls.as_ref().map(|t| t.labels()),
self.connection.tls.clone(),
);
err.into()
})


@ -1,7 +1,6 @@
use super::*;
use crate::policy::{Authentication, Authorization, Meta, Protocol, ServerPolicy};
use linkerd_app_core::{svc::Service, Infallible};
use linkerd_http_box::BoxBody;
use linkerd_proxy_server_policy::{LocalRateLimit, RateLimitError};
macro_rules! conn {
@ -41,7 +40,7 @@ macro_rules! new_svc {
metrics: HttpAuthzMetrics::default(),
inner: |(permit, _): (HttpRoutePermit, ())| {
let f = $rsp;
svc::mk(move |req: ::http::Request<BoxBody>| {
svc::mk(move |req: ::http::Request<hyper::Body>| {
futures::future::ready((f)(permit.clone(), req))
})
},
@ -57,9 +56,9 @@ macro_rules! new_svc {
new_svc!(
$proto,
conn!(),
|permit: HttpRoutePermit, _req: ::http::Request<BoxBody>| {
|permit: HttpRoutePermit, _req: ::http::Request<hyper::Body>| {
let mut rsp = ::http::Response::builder()
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap();
rsp.extensions_mut().insert(permit.clone());
Ok::<_, Infallible>(rsp)
@ -120,7 +119,11 @@ async fn http_route() {
// Test that authorization policies allow requests:
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
let permit = rsp
@ -134,7 +137,7 @@ async fn http_route() {
.call(
::http::Request::builder()
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -146,7 +149,7 @@ async fn http_route() {
.call(
::http::Request::builder()
.method(::http::Method::DELETE)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -210,7 +213,11 @@ async fn http_route() {
.expect("must send");
assert!(svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap(),)
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect_err("fails")
.is::<HttpRouteUnauthorized>());
@ -219,7 +226,7 @@ async fn http_route() {
.call(
::http::Request::builder()
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -230,7 +237,7 @@ async fn http_route() {
.call(
::http::Request::builder()
.method(::http::Method::DELETE)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -278,14 +285,14 @@ async fn http_filter_header() {
},
}],
}]));
let inner = |permit: HttpRoutePermit, req: ::http::Request<BoxBody>| -> Result<_> {
let inner = |permit: HttpRoutePermit, req: ::http::Request<hyper::Body>| -> Result<_> {
assert_eq!(req.headers().len(), 1);
assert_eq!(
req.headers().get("testkey"),
Some(&"testval".parse().unwrap())
);
let mut rsp = ::http::Response::builder()
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap();
rsp.extensions_mut().insert(permit);
Ok(rsp)
@ -293,7 +300,11 @@ async fn http_filter_header() {
let (mut svc, _tx) = new_svc!(proto, conn!(), inner);
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
let permit = rsp
@ -343,12 +354,16 @@ async fn http_filter_inject_failure() {
}],
}]));
let inner = |_: HttpRoutePermit,
_: ::http::Request<BoxBody>|
-> Result<::http::Response<BoxBody>> { unreachable!() };
_: ::http::Request<hyper::Body>|
-> Result<::http::Response<hyper::Body>> { unreachable!() };
let (mut svc, _tx) = new_svc!(proto, conn!(), inner);
let err = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect_err("fails");
assert_eq!(
@ -382,14 +397,22 @@ async fn rate_limit_allow() {
// First request should be allowed
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
assert_eq!(rsp.status(), ::http::StatusCode::OK);
// Second request should be allowed as well
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
assert_eq!(rsp.status(), ::http::StatusCode::OK);
@ -417,14 +440,22 @@ async fn rate_limit_deny() {
// First request should be allowed
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect("serves");
assert_eq!(rsp.status(), ::http::StatusCode::OK);
// Second request should be denied
let rsp = svc
.call(::http::Request::builder().body(BoxBody::default()).unwrap())
.call(
::http::Request::builder()
.body(hyper::Body::default())
.unwrap(),
)
.await
.expect_err("should deny");
let err = rsp
@ -495,7 +526,7 @@ async fn grpc_route() {
::http::Request::builder()
.uri("/foo.bar.bah/baz")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -511,7 +542,7 @@ async fn grpc_route() {
::http::Request::builder()
.uri("/foo.bar.bah/qux")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -523,7 +554,7 @@ async fn grpc_route() {
::http::Request::builder()
.uri("/boo.bar.bah/bah")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -575,14 +606,14 @@ async fn grpc_filter_header() {
},
}],
}]));
let inner = |permit: HttpRoutePermit, req: ::http::Request<BoxBody>| -> Result<_> {
let inner = |permit: HttpRoutePermit, req: ::http::Request<hyper::Body>| -> Result<_> {
assert_eq!(req.headers().len(), 1);
assert_eq!(
req.headers().get("testkey"),
Some(&"testval".parse().unwrap())
);
let mut rsp = ::http::Response::builder()
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap();
rsp.extensions_mut().insert(permit);
Ok(rsp)
@ -594,7 +625,7 @@ async fn grpc_filter_header() {
::http::Request::builder()
.uri("/foo.bar.bah/baz")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await
@ -652,8 +683,8 @@ async fn grpc_filter_inject_failure() {
}],
}]));
let inner = |_: HttpRoutePermit,
_: ::http::Request<BoxBody>|
-> Result<::http::Response<BoxBody>> { unreachable!() };
_: ::http::Request<hyper::Body>|
-> Result<::http::Response<hyper::Body>> { unreachable!() };
let (mut svc, _tx) = new_svc!(proto, conn!(), inner);
let err = svc
@ -661,7 +692,7 @@ async fn grpc_filter_inject_failure() {
::http::Request::builder()
.uri("/foo.bar.bah/baz")
.method(::http::Method::POST)
.body(BoxBody::default())
.body(hyper::Body::default())
.unwrap(),
)
.await


@ -74,10 +74,11 @@ impl<S> Store<S> {
opaque_ports: RangeInclusiveSet<u16>,
) -> Self
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
S: Clone + Send + Sync + 'static,
S::Future: Send,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
{
let opaque_default = Self::make_opaque(default.clone());
// The initial set of policies never expire from the cache.
@ -138,10 +139,11 @@ impl<S> Store<S> {
impl<S> GetPolicy for Store<S>
where
S: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
S: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
S: Clone + Send + Sync + 'static,
S::Future: Send,
S::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error> + Send + 'static,
S::ResponseBody:
http::HttpBody<Data = tonic::codegen::Bytes, Error = Error> + Default + Send + 'static,
{
fn get_policy(&self, dst: OrigDstAddr) -> AllowPolicy {
// Lookup the policy for the target port in the cache. If it doesn't


@ -77,8 +77,7 @@ where
// This new services requires a ClientAddr, so it must necessarily be built for each
// connection. So we can just increment the counter here since the service can only
// be used at most once.
self.metrics
.allow(&permit, tls.as_ref().map(|t| t.labels()));
self.metrics.allow(&permit, tls.clone());
let inner = self.inner.new_service((permit, target));
TcpPolicy::Authorized(Authorized {
@ -98,7 +97,7 @@ where
?tls, %client,
"Connection denied"
);
self.metrics.deny(&policy, tls.as_ref().map(|t| t.labels()));
self.metrics.deny(&policy, tls);
TcpPolicy::Unauthorized(deny)
}
}
@ -168,7 +167,7 @@ where
%client,
"Connection terminated due to policy change",
);
metrics.terminate(&policy, tls.as_ref().map(|t| t.labels()));
metrics.terminate(&policy, tls);
return Err(denied.into());
}
}


@ -43,14 +43,11 @@ async fn unauthenticated_allowed() {
kind: "serverauthorization".into(),
name: "unauth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
)
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}))
},
}
);
@ -99,14 +96,11 @@ async fn authenticated_identity() {
kind: "serverauthorization".into(),
name: "tls-auth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
)
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}))
}
}
);
@ -165,14 +159,11 @@ async fn authenticated_suffix() {
kind: "serverauthorization".into(),
name: "tls-auth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
),
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
})),
}
}
);
@ -228,14 +219,11 @@ async fn tls_unauthenticated() {
kind: "serverauthorization".into(),
name: "tls-unauth".into()
}),
server: ServerLabel(
Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
}),
1000
),
server: ServerLabel(Arc::new(Meta::Resource {
group: "policy.linkerd.io".into(),
kind: "server".into(),
name: "test".into()
})),
}
}
);
@ -263,7 +251,7 @@ fn orig_dst_addr() -> OrigDstAddr {
OrigDstAddr(([192, 0, 2, 2], 1000).into())
}
impl tonic::client::GrpcService<tonic::body::Body> for MockSvc {
impl tonic::client::GrpcService<tonic::body::BoxBody> for MockSvc {
type ResponseBody = linkerd_app_core::control::RspBody;
type Error = Error;
type Future = futures::future::Pending<Result<http::Response<Self::ResponseBody>, Self::Error>>;
@ -275,7 +263,7 @@ impl tonic::client::GrpcService<tonic::body::Body> for MockSvc {
unreachable!()
}
fn call(&mut self, _req: http::Request<tonic::body::Body>) -> Self::Future {
fn call(&mut self, _req: http::Request<tonic::body::BoxBody>) -> Self::Future {
unreachable!()
}
}


@ -27,10 +27,10 @@ impl Inbound<()> {
limits: ReceiveLimits,
) -> impl policy::GetPolicy + Clone + Send + Sync + 'static
where
C: tonic::client::GrpcService<tonic::body::Body, Error = Error>,
C: tonic::client::GrpcService<tonic::body::BoxBody, Error = Error>,
C: Clone + Unpin + Send + Sync + 'static,
C::ResponseBody: http::Body<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Send + 'static,
C::ResponseBody: http::HttpBody<Data = tonic::codegen::Bytes, Error = Error>,
C::ResponseBody: Default + Send + 'static,
C::Future: Send,
{
self.config
@ -55,8 +55,6 @@ impl Inbound<()> {
I: Debug + Unpin + Send + Sync + 'static,
P: profiles::GetProfile<Error = Error>,
{
let detect_metrics = self.runtime.metrics.detect.clone();
// Handles connections to ports that can't be determined to be HTTP.
let forward = self
.clone()
@ -99,7 +97,7 @@ impl Inbound<()> {
// Determines how to handle an inbound connection, dispatching it to the appropriate
// stack.
http.push_http_tcp_server()
.push_detect(detect_metrics, forward)
.push_detect(forward)
.push_accept(addr.port(), policies, direct)
.into_inner()
}


@ -3,7 +3,9 @@ pub use futures::prelude::*;
use linkerd_app_core::{
config,
dns::Suffix,
drain, exp_backoff, identity, metrics,
drain, exp_backoff,
identity::rustls,
metrics,
proxy::{
http::{h1, h2},
tap,
@ -87,7 +89,6 @@ pub fn default_config() -> Config {
},
discovery_idle_timeout: Duration::from_secs(20),
profile_skip_timeout: Duration::from_secs(1),
unsafe_authority_labels: false,
}
}
@ -96,7 +97,7 @@ pub fn runtime() -> (ProxyRuntime, drain::Signal) {
let (tap, _) = tap::new();
let (metrics, _) = metrics::Metrics::new(std::time::Duration::from_secs(10));
let runtime = ProxyRuntime {
identity: identity::creds::default_for_test().1,
identity: rustls::creds::default_for_test().1.into(),
metrics: metrics.proxy,
tap,
span_sink: None,


@ -1,10 +1,10 @@
[package]
name = "linkerd-app-integration"
version = { workspace = true }
authors = { workspace = true }
license = { workspace = true }
edition = { workspace = true }
publish = { workspace = true }
version = "0.1.0"
authors = ["Linkerd Developers <cncf-linkerd-dev@lists.cncf.io>"]
license = "Apache-2.0"
edition = "2021"
publish = false
description = """
Proxy integration tests
@ -17,56 +17,44 @@ default = []
flakey = []
[dependencies]
bytes = { workspace = true }
bytes = "1"
futures = { version = "0.3", default-features = false, features = ["executor"] }
h2 = { workspace = true }
http = { workspace = true }
http-body = { workspace = true }
http-body-util = { workspace = true }
hyper-util = { workspace = true, features = ["service"] }
h2 = "0.3"
http = "0.2"
http-body = "0.4"
hyper = { version = "0.14", features = [
"deprecated",
"http1",
"http2",
"stream",
"client",
"server",
] }
ipnet = "2"
linkerd-app = { path = "..", features = ["allow-loopback"] }
linkerd-app-core = { path = "../core" }
linkerd-app-test = { path = "../test" }
linkerd-meshtls = { path = "../../meshtls", features = ["test-util"] }
linkerd-metrics = { path = "../../metrics", features = ["test_util"] }
linkerd-rustls = { path = "../../rustls" }
linkerd2-proxy-api = { workspace = true, features = [
"destination",
"arbitrary",
] }
linkerd-app-test = { path = "../test" }
linkerd-tracing = { path = "../../tracing" }
maplit = "1"
parking_lot = "0.12"
regex = "1"
rustls-pemfile = "2.2"
socket2 = "0.6"
socket2 = "0.5"
tokio = { version = "1", features = ["io-util", "net", "rt", "macros"] }
tokio-rustls = { workspace = true }
tokio-stream = { version = "0.1", features = ["sync"] }
tonic = { workspace = true, features = ["transport", "router"], default-features = false }
tower = { workspace = true, default-features = false }
tracing = { workspace = true }
[dependencies.hyper]
workspace = true
features = [
"client",
"http1",
"http2",
"server",
]
[dependencies.linkerd2-proxy-api]
workspace = true
features = [
"arbitrary",
"destination",
]
[dependencies.tracing-subscriber]
version = "0.3"
default-features = false
features = [
tokio-rustls = "0.24"
rustls-pemfile = "1.0"
tower = { version = "0.4", default-features = false }
tonic = { version = "0.10", features = ["transport"], default-features = false }
tracing = "0.1"
tracing-subscriber = { version = "0.3", default-features = false, features = [
"fmt",
"std",
]
] }
[dev-dependencies]
flate2 = { version = "1", default-features = false, features = [
@ -74,5 +62,8 @@ flate2 = { version = "1", default-features = false, features = [
] }
# Log streaming isn't enabled by default globally, but we want to test it.
linkerd-app-admin = { path = "../admin", features = ["log-streaming"] }
# No code from this crate is actually used; only necessary to enable the Rustls
# implementation.
linkerd-meshtls = { path = "../../meshtls", features = ["rustls"] }
linkerd-tracing = { path = "../../tracing", features = ["ansi"] }
serde_json = "1"


@ -1,28 +1,26 @@
use super::*;
use http::{Request, Response};
use linkerd_app_core::{proxy::http::TokioExecutor, svc::http::BoxBody};
use linkerd_app_core::proxy::http::TracingExecutor;
use parking_lot::Mutex;
use std::io;
use tokio::{net::TcpStream, task::JoinHandle};
use tokio::net::TcpStream;
use tokio::task::JoinHandle;
use tokio_rustls::rustls::{self, ClientConfig};
use tracing::info_span;
type ClientError = hyper_util::client::legacy::Error;
type Sender = mpsc::UnboundedSender<(
Request<BoxBody>,
oneshot::Sender<Result<Response<hyper::body::Incoming>, ClientError>>,
)>;
type ClientError = hyper::Error;
type Request = http::Request<hyper::Body>;
type Response = http::Response<hyper::Body>;
type Sender = mpsc::UnboundedSender<(Request, oneshot::Sender<Result<Response, ClientError>>)>;
#[derive(Clone)]
pub struct TlsConfig {
client_config: Arc<ClientConfig>,
name: rustls::pki_types::ServerName<'static>,
name: rustls::ServerName,
}
impl TlsConfig {
pub fn new(client_config: Arc<ClientConfig>, name: &'static str) -> Self {
let name =
rustls::pki_types::ServerName::try_from(name).expect("name must be a valid DNS name");
pub fn new(client_config: Arc<ClientConfig>, name: &str) -> Self {
let name = rustls::ServerName::try_from(name).expect("name must be a valid DNS name");
TlsConfig {
client_config,
name,
@ -76,6 +74,9 @@ pub fn http2_tls<T: Into<String>>(addr: SocketAddr, auth: T, tls: TlsConfig) ->
Client::new(addr, auth.into(), Run::Http2, Some(tls))
}
pub fn tcp(addr: SocketAddr) -> tcp::TcpClient {
tcp::client(addr)
}
pub struct Client {
addr: SocketAddr,
run: Run,
@ -131,19 +132,11 @@ impl Client {
pub fn request(
&self,
builder: http::request::Builder,
) -> impl Future<Output = Result<Response<hyper::body::Incoming>, ClientError>> + Send + 'static
{
let req = builder.body(BoxBody::empty()).unwrap();
self.send_req(req)
) -> impl Future<Output = Result<Response, ClientError>> + Send + Sync + 'static {
self.send_req(builder.body(Bytes::new().into()).unwrap())
}
pub async fn request_body<B>(&self, req: Request<B>) -> Response<hyper::body::Incoming>
where
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Into<Error>,
{
let req = req.map(BoxBody::new);
pub async fn request_body(&self, req: Request) -> Response {
self.send_req(req).await.expect("response")
}
@ -159,16 +152,11 @@ impl Client {
}
}
#[tracing::instrument(skip(self, req))]
pub(crate) fn send_req<B>(
#[tracing::instrument(skip(self))]
pub(crate) fn send_req(
&self,
mut req: Request<B>,
) -> impl Future<Output = Result<Response<hyper::body::Incoming>, ClientError>> + Send + 'static
where
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Into<Error>,
{
mut req: Request,
) -> impl Future<Output = Result<Response, ClientError>> + Send + Sync + 'static {
if req.uri().scheme().is_none() {
if self.tls.is_some() {
*req.uri_mut() = format!("https://{}{}", self.authority, req.uri().path())
@ -182,8 +170,7 @@ impl Client {
}
tracing::debug!(headers = ?req.headers(), "request");
let (tx, rx) = oneshot::channel();
let req = req.map(BoxBody::new);
let _ = self.tx.send((req, tx));
let _ = self.tx.send((req.map(Into::into), tx));
async { rx.await.expect("request cancelled") }.in_current_span()
}
@ -233,17 +220,13 @@ enum Run {
Http2,
}
pub type Running = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;
fn run(
addr: SocketAddr,
version: Run,
tls: Option<TlsConfig>,
) -> (Sender, JoinHandle<()>, Running) {
let (tx, rx) = mpsc::unbounded_channel::<(
Request<BoxBody>,
oneshot::Sender<Result<Response<hyper::body::Incoming>, ClientError>>,
)>();
let (tx, rx) =
mpsc::unbounded_channel::<(Request, oneshot::Sender<Result<Response, ClientError>>)>();
let test_name = thread_name();
let absolute_uris = if let Run::Http1 { absolute_uris } = version {
@ -252,12 +235,7 @@ fn run(
false
};
let (running_tx, running) = {
let (tx, rx) = oneshot::channel();
let rx = Box::pin(rx.map(|_| ()));
(tx, rx)
};
let (running_tx, running) = running();
let conn = Conn {
addr,
absolute_uris,
@ -272,9 +250,10 @@ fn run(
let span = info_span!("test client", peer_addr = %addr, ?version, test = %test_name);
let work = async move {
let client = hyper_util::client::legacy::Client::builder(TokioExecutor::new())
let client = hyper::Client::builder()
.http2_only(http2_only)
.build::<Conn, BoxBody>(conn);
.executor(TracingExecutor)
.build::<Conn, hyper::Body>(conn);
tracing::trace!("client task started");
let mut rx = rx;
let (drain_tx, drain) = drain::channel();
@ -284,6 +263,7 @@ fn run(
// instance would remain un-dropped.
async move {
while let Some((req, cb)) = rx.recv().await {
let req = req.map(hyper::Body::from);
tracing::trace!(?req);
let req = client.request(req);
tokio::spawn(
@ -315,11 +295,9 @@ struct Conn {
}
impl tower::Service<hyper::Uri> for Conn {
type Response = hyper_util::rt::TokioIo<RunningIo>;
type Response = RunningIo;
type Error = io::Error;
type Future = Pin<
Box<dyn Future<Output = io::Result<hyper_util::rt::TokioIo<RunningIo>>> + Send + 'static>,
>;
type Future = Pin<Box<dyn Future<Output = io::Result<RunningIo>> + Send + 'static>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
@ -349,19 +327,19 @@ impl tower::Service<hyper::Uri> for Conn {
} else {
Box::pin(io) as Pin<Box<dyn Io + Send + 'static>>
};
Ok(hyper_util::rt::TokioIo::new(RunningIo {
Ok(RunningIo {
io,
abs_form,
_running: Some(running),
}))
})
})
}
}
impl hyper_util::client::legacy::connect::Connection for RunningIo {
fn connected(&self) -> hyper_util::client::legacy::connect::Connected {
impl hyper::client::connect::Connection for RunningIo {
fn connected(&self) -> hyper::client::connect::Connected {
// Setting `proxy` to true will configure Hyper to use absolute-form
// URIs on this connection.
hyper_util::client::legacy::connect::Connected::new().proxy(self.abs_form)
hyper::client::connect::Connected::new().proxy(self.abs_form)
}
}


@ -2,7 +2,7 @@ use super::*;
pub use linkerd2_proxy_api::destination as pb;
use linkerd2_proxy_api::net;
use linkerd_app_core::proxy::http::TokioExecutor;
use linkerd_app_core::proxy::http::TracingExecutor;
use parking_lot::Mutex;
use std::collections::VecDeque;
use std::net::IpAddr;
@ -262,7 +262,10 @@ impl pb::destination_server::Destination for Controller {
}
tracing::warn!(?dst, ?updates, "request does not match");
let msg = format!("expected get call for {dst:?} but got get call for {req:?}");
let msg = format!(
"expected get call for {:?} but got get call for {:?}",
dst, req
);
calls.push_front(Dst::Call(dst, updates));
return Err(grpc::Status::new(grpc::Code::Unavailable, msg));
}
@ -340,7 +343,7 @@ pub(crate) async fn run<T, B>(
delay: Option<Pin<Box<dyn Future<Output = ()> + Send>>>,
) -> Listening
where
T: tower::Service<http::Request<hyper::body::Incoming>, Response = http::Response<B>>,
T: tower::Service<http::Request<hyper::body::Body>, Response = http::Response<B>>,
T: Clone + Send + 'static,
T::Error: Into<Box<dyn std::error::Error + Send + Sync>>,
T::Future: Send,
@ -369,16 +372,13 @@ where
let _ = listening_tx.send(());
}
let mut http = hyper::server::conn::http2::Builder::new(TokioExecutor::new());
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut http = hyper::server::conn::Http::new().with_executor(TracingExecutor);
http.http2_only(true);
loop {
let (sock, addr) = listener.accept().await?;
let span = tracing::debug_span!("conn", %addr).or_current();
let serve = http
.timer(hyper_util::rt::TokioTimer::new())
.serve_connection(
hyper_util::rt::TokioIo::new(sock),
hyper_util::service::TowerToHyperService::new(svc.clone()),
);
let serve = http.serve_connection(sock, svc.clone());
let f = async move {
serve.await.map_err(|error| {
tracing::error!(


@ -8,8 +8,7 @@ use std::{
};
use linkerd2_proxy_api::identity as pb;
use linkerd_rustls::get_default_provider;
use tokio_rustls::rustls::{self, server::WebPkiClientVerifier};
use tokio_rustls::rustls;
use tonic as grpc;
pub struct Identity {
@ -35,6 +34,10 @@ type Certify = Box<
> + Send,
>;
static TLS_VERSIONS: &[&rustls::SupportedProtocolVersion] = &[&rustls::version::TLS13];
static TLS_SUPPORTED_CIPHERSUITES: &[rustls::SupportedCipherSuite] =
&[rustls::cipher_suite::TLS13_CHACHA20_POLY1305_SHA256];
struct Certificates {
pub leaf: Vec<u8>,
pub intermediates: Vec<Vec<u8>>,
@ -47,17 +50,11 @@ impl Certificates {
{
let f = fs::File::open(p)?;
let mut r = io::BufReader::new(f);
let mut certs = rustls_pemfile::certs(&mut r);
let leaf = certs
.next()
.expect("no leaf cert in pemfile")
.map_err(|_| io::Error::other("rustls error reading certs"))?
.as_ref()
.to_vec();
let intermediates = certs
.map(|cert| cert.map(|cert| cert.as_ref().to_vec()))
.collect::<Result<Vec<_>, _>>()
.map_err(|_| io::Error::other("rustls error reading certs"))?;
let mut certs = rustls_pemfile::certs(&mut r)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "rustls error reading certs"))?;
let mut certs = certs.drain(..);
let leaf = certs.next().expect("no leaf cert in pemfile");
let intermediates = certs.collect();
Ok(Certificates {
leaf,
@ -65,14 +62,11 @@ impl Certificates {
})
}
pub fn chain(&self) -> Vec<rustls::pki_types::CertificateDer<'static>> {
pub fn chain(&self) -> Vec<rustls::Certificate> {
let mut chain = Vec::with_capacity(self.intermediates.len() + 1);
chain.push(self.leaf.clone());
chain.extend(self.intermediates.clone());
chain
.into_iter()
.map(rustls::pki_types::CertificateDer::from)
.collect()
chain.into_iter().map(rustls::Certificate).collect()
}
pub fn response(&self) -> pb::CertifyResponse {
@ -85,46 +79,43 @@ impl Certificates {
}
impl Identity {
fn load_key<P>(p: P) -> rustls::pki_types::PrivateKeyDer<'static>
fn load_key<P>(p: P) -> rustls::PrivateKey
where
P: AsRef<Path>,
{
let p8 = fs::read(&p).expect("read key");
rustls::pki_types::PrivateKeyDer::try_from(p8).expect("decode key")
rustls::PrivateKey(p8)
}
fn configs(
trust_anchors: &str,
certs: &Certificates,
key: rustls::pki_types::PrivateKeyDer<'static>,
key: rustls::PrivateKey,
) -> (Arc<rustls::ClientConfig>, Arc<rustls::ServerConfig>) {
use std::io::Cursor;
let mut roots = rustls::RootCertStore::empty();
let trust_anchors = rustls_pemfile::certs(&mut Cursor::new(trust_anchors))
.collect::<Result<Vec<_>, _>>()
.expect("error parsing pemfile");
let (added, skipped) = roots.add_parsable_certificates(trust_anchors);
let trust_anchors =
rustls_pemfile::certs(&mut Cursor::new(trust_anchors)).expect("error parsing pemfile");
let (added, skipped) = roots.add_parsable_certificates(&trust_anchors[..]);
assert_ne!(added, 0, "trust anchors must include at least one cert");
assert_eq!(skipped, 0, "no certs in pemfile should be invalid");
let provider = get_default_provider();
let client_config = rustls::ClientConfig::builder_with_provider(provider.clone())
.with_safe_default_protocol_versions()
let client_config = rustls::ClientConfig::builder()
.with_cipher_suites(TLS_SUPPORTED_CIPHERSUITES)
.with_safe_default_kx_groups()
.with_protocol_versions(TLS_VERSIONS)
.expect("client config must be valid")
.with_root_certificates(roots.clone())
.with_no_client_auth();
let client_cert_verifier =
WebPkiClientVerifier::builder_with_provider(Arc::new(roots), provider.clone())
.allow_unauthenticated()
.build()
.expect("server verifier must be valid");
let server_config = rustls::ServerConfig::builder_with_provider(provider)
.with_safe_default_protocol_versions()
let server_config = rustls::ServerConfig::builder()
.with_cipher_suites(TLS_SUPPORTED_CIPHERSUITES)
.with_safe_default_kx_groups()
.with_protocol_versions(TLS_VERSIONS)
.expect("server config must be valid")
.with_client_cert_verifier(client_cert_verifier)
.with_client_cert_verifier(Arc::new(
rustls::server::AllowAnyAnonymousOrAuthenticatedClient::new(roots),
))
.with_single_cert(certs.chain(), key)
.unwrap();
@ -213,7 +204,7 @@ impl Controller {
let f = f.take().expect("called twice?");
let fut = f(req)
.map_ok(grpc::Response::new)
.map_err(|e| grpc::Status::new(grpc::Code::Internal, format!("{e}")));
.map_err(|e| grpc::Status::new(grpc::Code::Internal, format!("{}", e)));
Box::pin(fut)
});
self.expect_calls.lock().push_back(func);


@ -3,7 +3,6 @@
#![warn(rust_2018_idioms, clippy::disallowed_methods, clippy::disallowed_types)]
#![forbid(unsafe_code)]
#![recursion_limit = "256"]
#![allow(clippy::result_large_err)]
mod test_env;
@ -27,9 +26,9 @@ pub use bytes::{Buf, BufMut, Bytes};
pub use futures::stream::{Stream, StreamExt};
pub use futures::{future, FutureExt, TryFuture, TryFutureExt};
pub use http::{HeaderMap, Request, Response, StatusCode};
pub use http_body::Body;
pub use http_body::Body as HttpBody;
pub use linkerd_app as app;
pub use linkerd_app_core::{drain, Addr, Error};
pub use linkerd_app_core::{drain, Addr};
pub use linkerd_app_test::*;
pub use linkerd_tracing::test::*;
use socket2::Socket;
@ -51,6 +50,8 @@ pub use tower::Service;
pub const ENV_TEST_PATIENCE_MS: &str = "RUST_TEST_PATIENCE_MS";
pub const DEFAULT_TEST_PATIENCE: Duration = Duration::from_millis(15);
pub type Error = Box<dyn std::error::Error + Send + Sync + 'static>;
/// Retry an assertion up to a specified number of times, waiting
/// `RUST_TEST_PATIENCE_MS` between retries.
///
@ -218,6 +219,15 @@ impl Shutdown {
pub type ShutdownRx = Pin<Box<dyn Future<Output = ()> + Send>>;
/// A channel used to signal when a Client's related connection is running or closed.
pub fn running() -> (oneshot::Sender<()>, Running) {
let (tx, rx) = oneshot::channel();
let rx = Box::pin(rx.map(|_| ()));
(tx, rx)
}
pub type Running = Pin<Box<dyn Future<Output = ()> + Send + Sync + 'static>>;
pub fn s(bytes: &[u8]) -> &str {
::std::str::from_utf8(bytes).unwrap()
}
@ -248,7 +258,7 @@ impl fmt::Display for HumanDuration {
let secs = self.0.as_secs();
let subsec_ms = self.0.subsec_nanos() as f64 / 1_000_000f64;
if secs == 0 {
write!(fmt, "{subsec_ms}ms")
write!(fmt, "{}ms", subsec_ms)
} else {
write!(fmt, "{}s", secs as f64 + subsec_ms)
}
@ -257,7 +267,7 @@ impl fmt::Display for HumanDuration {
pub async fn cancelable<E: Send + 'static>(
drain: drain::Watch,
f: impl Future<Output = Result<(), E>>,
f: impl Future<Output = Result<(), E>> + Send + 'static,
) -> Result<(), E> {
tokio::select! {
res = f => res,
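The `cancelable` helper above races a future against a drain signal and reports shutdown as success. A thread-based, std-only sketch of the same idea (the real code uses `tokio::select!` over futures; everything named here is illustrative):

```rust
use std::{sync::mpsc, thread, time::Duration};

// Run until either the work completes or a shutdown signal arrives,
// treating shutdown as a clean Ok(()), as the proxy tests do.
fn cancelable(
    shutdown: mpsc::Receiver<()>,
    work: mpsc::Receiver<Result<(), String>>,
) -> Result<(), String> {
    loop {
        if let Ok(res) = work.try_recv() {
            return res; // the work completed first
        }
        if shutdown.try_recv().is_ok() {
            return Ok(()); // canceled
        }
        thread::sleep(Duration::from_millis(1));
    }
}

fn main() {
    let (shutdown_tx, shutdown_rx) = mpsc::channel();
    let (_work_tx, work_rx) = mpsc::channel::<Result<(), String>>();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(5));
        let _ = shutdown_tx.send(());
    });
    // The work never completes, so only the shutdown signal can end the loop.
    assert_eq!(cancelable(shutdown_rx, work_rx), Ok(()));
    println!("canceled cleanly");
}
```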


@ -2,7 +2,6 @@ use super::*;
pub use api::{inbound, outbound};
use api::{inbound::inbound_server_policies_server, outbound::outbound_policies_server};
use futures::stream;
use http_body_util::combinators::UnsyncBoxBody;
use linkerd2_proxy_api as api;
use parking_lot::Mutex;
use std::collections::VecDeque;
@ -35,9 +34,6 @@ pub struct InboundSender(Tx<inbound::Server>);
#[derive(Debug, Clone)]
pub struct OutboundSender(Tx<outbound::OutboundPolicy>);
#[derive(Clone)]
struct RoutesSvc(grpc::service::Routes);
type Tx<T> = mpsc::UnboundedSender<Result<T, grpc::Status>>;
type Rx<T> = UnboundedReceiverStream<Result<T, grpc::Status>>;
type WatchStream<T> = Pin<Box<dyn Stream<Item = Result<T, grpc::Status>> + Send + Sync + 'static>>;
@ -302,7 +298,7 @@ impl Controller {
}
pub async fn run(self) -> controller::Listening {
let routes = grpc::service::Routes::default()
let svc = grpc::transport::Server::builder()
.add_service(
inbound_server_policies_server::InboundServerPoliciesServer::new(Server(Arc::new(
self.inbound,
@ -310,9 +306,9 @@ impl Controller {
)
.add_service(outbound_policies_server::OutboundPoliciesServer::new(
Server(Arc::new(self.outbound)),
));
controller::run(RoutesSvc(routes), "support policy controller", None).await
))
.into_service();
controller::run(svc, "support policy controller", None).await
}
}
@ -513,35 +509,6 @@ impl<Req, Rsp> Inner<Req, Rsp> {
}
}
// === impl RoutesSvc ===
impl Service<Request<hyper::body::Incoming>> for RoutesSvc {
type Response =
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::Response;
type Error =
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::Error;
type Future =
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::Future;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
let Self(routes) = self;
<grpc::service::Routes as Service<Request<UnsyncBoxBody<Bytes, grpc::Status>>>>::poll_ready(
routes, cx,
)
}
fn call(&mut self, req: Request<hyper::body::Incoming>) -> Self::Future {
use http_body_util::{combinators::UnsyncBoxBody, BodyExt};
let Self(routes) = self;
let req = req.map(|body| {
UnsyncBoxBody::new(body.map_err(|err| grpc::Status::from_error(Box::new(err))))
});
routes.call(req)
}
}
fn grpc_no_results() -> grpc::Status {
grpc::Status::new(
grpc::Code::NotFound,


@ -108,7 +108,7 @@ impl fmt::Debug for MockOrigDst {
match self {
Self::Addr(addr) => f
.debug_tuple("MockOrigDst::Addr")
.field(&format_args!("{addr}"))
.field(&format_args!("{}", addr))
.finish(),
Self::Direct => f.debug_tuple("MockOrigDst::Direct").finish(),
Self::None => f.debug_tuple("MockOrigDst::None").finish(),
@ -416,9 +416,9 @@ async fn run(proxy: Proxy, mut env: TestEnv, random_ports: bool) -> Listening {
use std::fmt::Write;
let mut ports = inbound_default_ports.iter();
if let Some(port) = ports.next() {
let mut var = format!("{port}");
let mut var = format!("{}", port);
for port in ports {
write!(&mut var, ",{port}").expect("writing to String should never fail");
write!(&mut var, ",{}", port).expect("writing to String should never fail");
}
info!("{}={:?}", app::env::ENV_INBOUND_PORTS, var);
env.put(app::env::ENV_INBOUND_PORTS, var);


@ -1,7 +1,5 @@
use super::app_core::svc::http::TokioExecutor;
use super::app_core::svc::http::TracingExecutor;
use super::*;
use http::{Request, Response};
use linkerd_app_core::svc::http::BoxBody;
use std::{
io,
sync::atomic::{AtomicUsize, Ordering},
@ -14,35 +12,23 @@ pub fn new() -> Server {
}
pub fn http1() -> Server {
Server {
routes: Default::default(),
version: Run::Http1,
tls: None,
}
Server::http1()
}
pub fn http1_tls(tls: Arc<ServerConfig>) -> Server {
Server {
routes: Default::default(),
version: Run::Http1,
tls: Some(tls),
}
Server::http1_tls(tls)
}
pub fn http2() -> Server {
Server {
routes: Default::default(),
version: Run::Http2,
tls: None,
}
Server::http2()
}
pub fn http2_tls(tls: Arc<ServerConfig>) -> Server {
Server {
routes: Default::default(),
version: Run::Http2,
tls: Some(tls),
}
Server::http2_tls(tls)
}
pub fn tcp() -> tcp::TcpServer {
tcp::server()
}
pub struct Server {
@ -59,8 +45,9 @@ pub struct Listening {
pub(super) http_version: Option<Run>,
}
type RspFuture<B = BoxBody> =
Pin<Box<dyn Future<Output = Result<Response<B>, Error>> + Send + 'static>>;
type Request = http::Request<hyper::Body>;
type Response = http::Response<hyper::Body>;
type RspFuture = Pin<Box<dyn Future<Output = Result<Response, BoxError>> + Send + Sync + 'static>>;
impl Listening {
pub fn connections(&self) -> usize {
@ -105,6 +92,29 @@ impl Listening {
}
impl Server {
fn new(run: Run, tls: Option<Arc<ServerConfig>>) -> Self {
Server {
routes: HashMap::new(),
version: run,
tls,
}
}
fn http1() -> Self {
Server::new(Run::Http1, None)
}
fn http1_tls(tls: Arc<ServerConfig>) -> Self {
Server::new(Run::Http1, Some(tls))
}
fn http2() -> Self {
Server::new(Run::Http2, None)
}
fn http2_tls(tls: Arc<ServerConfig>) -> Self {
Server::new(Run::Http2, Some(tls))
}
/// Return a string body as a 200 OK response, with the string as
/// the response body.
pub fn route(mut self, path: &str, resp: &str) -> Self {
@ -116,11 +126,11 @@ impl Server {
/// to send back.
pub fn route_fn<F>(self, path: &str, cb: F) -> Self
where
F: Fn(Request<BoxBody>) -> Response<BoxBody> + Send + Sync + 'static,
F: Fn(Request) -> Response + Send + Sync + 'static,
{
self.route_async(path, move |req| {
let res = cb(req);
async move { Ok::<_, Error>(res) }
async move { Ok::<_, BoxError>(res) }
})
}
@ -128,9 +138,9 @@ impl Server {
/// a response to send back.
pub fn route_async<F, U>(mut self, path: &str, cb: F) -> Self
where
F: Fn(Request<BoxBody>) -> U + Send + Sync + 'static,
U: TryFuture<Ok = Response<BoxBody>> + Send + 'static,
U::Error: Into<Error> + Send + 'static,
F: Fn(Request) -> U + Send + Sync + 'static,
U: TryFuture<Ok = Response> + Send + Sync + 'static,
U::Error: Into<BoxError> + Send + 'static,
{
let func = move |req| Box::pin(cb(req).map_err(Into::into)) as RspFuture;
self.routes.insert(path.into(), Route(Box::new(func)));
@ -138,17 +148,16 @@ impl Server {
}
pub fn route_with_latency(self, path: &str, resp: &str, latency: Duration) -> Self {
let body = resp.to_owned();
let resp = Bytes::from(resp.to_string());
self.route_async(path, move |_| {
let body = body.clone();
let resp = resp.clone();
async move {
tokio::time::sleep(latency).await;
Ok::<_, Error>(
Ok::<_, BoxError>(
http::Response::builder()
.status(StatusCode::OK)
.body(http_body_util::Full::new(Bytes::from(body.clone())))
.unwrap()
.map(BoxBody::new),
.status(200)
.body(hyper::Body::from(resp.clone()))
.unwrap(),
)
}
})
@ -184,7 +193,13 @@ impl Server {
drain.clone(),
async move {
tracing::info!("support server running");
let svc = Svc(Arc::new(self.routes));
let mut new_svc = NewSvc(Arc::new(self.routes));
#[allow(deprecated)] // linkerd/linkerd2#8733
let mut http = hyper::server::conn::Http::new().with_executor(TracingExecutor);
match self.version {
Run::Http1 => http.http1_only(true),
Run::Http2 => http.http2_only(true),
};
if let Some(delay) = delay {
let _ = listening_tx.take().unwrap().send(());
delay.await;
@ -203,41 +218,27 @@ impl Server {
let sock = accept_connection(sock, tls_config.clone())
.instrument(span.clone())
.await?;
let http = http.clone();
let srv_conn_count = srv_conn_count.clone();
let svc = svc.clone();
let svc = new_svc.call(());
let f = async move {
tracing::trace!("serving...");
let svc = svc.await;
tracing::trace!("service acquired");
srv_conn_count.fetch_add(1, Ordering::Release);
use hyper_util::{rt::TokioIo, service::TowerToHyperService};
let (sock, svc) = (TokioIo::new(sock), TowerToHyperService::new(svc));
let result = match self.version {
Run::Http1 => hyper::server::conn::http1::Builder::new()
.timer(hyper_util::rt::TokioTimer::new())
.serve_connection(sock, svc)
.await
.map_err(|e| tracing::error!("support/server error: {}", e)),
Run::Http2 => {
hyper::server::conn::http2::Builder::new(TokioExecutor::new())
.timer(hyper_util::rt::TokioTimer::new())
.serve_connection(sock, svc)
.await
.map_err(|e| tracing::error!("support/server error: {}", e))
}
};
let svc = svc.map_err(|e| {
tracing::error!("support/server new_service error: {}", e)
})?;
let result = http
.serve_connection(sock, svc)
.await
.map_err(|e| tracing::error!("support/server error: {}", e));
tracing::trace!(?result, "serve done");
result
};
// let fut = Box::pin(cancelable(drain.clone(), f).instrument(span.clone().or_current()))
let drain = drain.clone();
tokio::spawn(async move {
tokio::select! {
res = f => res,
_ = drain.signaled() => {
tracing::debug!("canceled!");
Ok(())
}
}
});
tokio::spawn(
cancelable(drain.clone(), f).instrument(span.clone().or_current()),
);
}
}
.instrument(
@ -266,19 +267,17 @@ pub(super) enum Run {
Http2,
}
struct Route(Box<dyn Fn(Request<BoxBody>) -> RspFuture + Send + Sync>);
struct Route(Box<dyn Fn(Request) -> RspFuture + Send + Sync>);
impl Route {
fn string(body: &str) -> Route {
let body = http_body_util::Full::new(Bytes::from(body.to_string()));
let body = Bytes::from(body.to_string());
Route(Box::new(move |_| {
let body = body.clone();
Box::pin(future::ok(
http::Response::builder()
.status(StatusCode::OK)
.body(body)
.unwrap()
.map(BoxBody::new),
.status(200)
.body(hyper::Body::from(body.clone()))
.unwrap(),
))
}))
}
@ -290,53 +289,58 @@ impl std::fmt::Debug for Route {
}
}
#[derive(Clone, Debug)]
type BoxError = Box<dyn std::error::Error + Send + Sync>;
#[derive(Debug)]
struct Svc(Arc<HashMap<String, Route>>);
impl Svc {
fn route<B>(
&mut self,
req: Request<B>,
) -> impl Future<Output = Result<Response<BoxBody>, crate::app_core::Error>> + Send
where
B: Body + Send + Sync + 'static,
B::Data: Send + 'static,
B::Error: std::error::Error + Send + Sync + 'static,
{
fn route(&mut self, req: Request) -> RspFuture {
match self.0.get(req.uri().path()) {
Some(Route(ref func)) => {
tracing::trace!(path = %req.uri().path(), "found route for path");
func(req.map(BoxBody::new))
func(req)
}
None => {
tracing::warn!("server 404: {:?}", req.uri().path());
Box::pin(futures::future::ok(
http::Response::builder()
.status(StatusCode::NOT_FOUND)
.body(BoxBody::empty())
.unwrap(),
))
let res = http::Response::builder()
.status(404)
.body(Default::default())
.unwrap();
Box::pin(async move { Ok(res) })
}
}
}
}
impl<B> tower::Service<Request<B>> for Svc
where
B: Body + Send + Sync + 'static,
B::Data: Send,
B::Error: std::error::Error + Send + Sync,
{
type Response = Response<BoxBody>;
type Error = Error;
impl tower::Service<Request> for Svc {
type Response = Response;
type Error = BoxError;
type Future = RspFuture;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, req: Request<B>) -> Self::Future {
Box::pin(self.route(req))
fn call(&mut self, req: Request) -> Self::Future {
self.route(req)
}
}
#[derive(Debug)]
struct NewSvc(Arc<HashMap<String, Route>>);
impl Service<()> for NewSvc {
type Response = Svc;
type Error = ::std::io::Error;
type Future = future::Ready<Result<Svc, Self::Error>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, _: ()) -> Self::Future {
future::ok(Svc(Arc::clone(&self.0)))
}
}
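The `Svc`/`NewSvc` pair above dispatches requests through a shared map of path-to-handler routes, answering 404 for unknown paths. A simplified, std-only sketch of that route-table shape (the types here are stand-ins for the real tower/http types):

```rust
use std::collections::HashMap;

// A boxed handler takes a request body and returns (status, response body).
type Handler = Box<dyn Fn(&str) -> (u16, String)>;

struct Routes(HashMap<String, Handler>);

impl Routes {
    fn route(&self, path: &str, body: &str) -> (u16, String) {
        match self.0.get(path) {
            Some(handler) => handler(body),
            // Unknown paths get an empty 404, mirroring the support server.
            None => (404, String::new()),
        }
    }
}

fn main() {
    let mut map: HashMap<String, Handler> = HashMap::new();
    map.insert("/hello".to_string(), Box::new(|_| (200, "hello".to_string())));
    let routes = Routes(map);
    assert_eq!(routes.route("/hello", ""), (200, "hello".to_string()));
    assert_eq!(routes.route("/missing", "").0, 404);
    println!("routing ok");
}
```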
@ -354,6 +358,7 @@ async fn accept_connection(
_running: None,
})
}
None => Ok(RunningIo {
io: Box::pin(io),
abs_form: false,


@ -2,7 +2,6 @@ use super::*;
use futures::stream;
use http_body::Body;
use linkerd2_proxy_api::tap as pb;
use linkerd_app_core::svc::http::BoxBody;
pub fn client(addr: SocketAddr) -> Client {
let api = pb::tap_client::TapClient::new(SyncSvc(client::http2(addr, "localhost")));
@ -107,6 +106,7 @@ pub trait TapEventExt {
//fn id(&self) -> (u32, u64);
fn event(&self) -> &pb::tap_event::http::Event;
fn request_init_method(&self) -> String;
fn request_init_authority(&self) -> &str;
fn request_init_path(&self) -> &str;
@ -134,31 +134,41 @@ impl TapEventExt for pb::TapEvent {
}
}
fn request_init_method(&self) -> String {
match self.event() {
pb::tap_event::http::Event::RequestInit(_ev) => {
//TODO: ugh
unimplemented!("method");
}
e => panic!("not RequestInit event: {:?}", e),
}
}
fn request_init_authority(&self) -> &str {
match self.event() {
pb::tap_event::http::Event::RequestInit(ev) => &ev.authority,
e => panic!("not RequestInit event: {e:?}"),
e => panic!("not RequestInit event: {:?}", e),
}
}
fn request_init_path(&self) -> &str {
match self.event() {
pb::tap_event::http::Event::RequestInit(ev) => &ev.path,
e => panic!("not RequestInit event: {e:?}"),
e => panic!("not RequestInit event: {:?}", e),
}
}
fn response_init_status(&self) -> u16 {
match self.event() {
pb::tap_event::http::Event::ResponseInit(ev) => ev.http_status as u16,
e => panic!("not ResponseInit event: {e:?}"),
e => panic!("not ResponseInit event: {:?}", e),
}
}
fn response_end_bytes(&self) -> u64 {
match self.event() {
pb::tap_event::http::Event::ResponseEnd(ev) => ev.response_bytes,
e => panic!("not ResponseEnd event: {e:?}"),
e => panic!("not ResponseEnd event: {:?}", e),
}
}
@ -170,7 +180,7 @@ impl TapEventExt for pb::TapEvent {
}) => code,
_ => panic!("not Eos GrpcStatusCode: {:?}", ev.eos),
},
ev => panic!("not ResponseEnd event: {ev:?}"),
ev => panic!("not ResponseEnd event: {:?}", ev),
}
}
}
@ -178,14 +188,15 @@ impl TapEventExt for pb::TapEvent {
struct SyncSvc(client::Client);
type ResponseFuture =
Pin<Box<dyn Future<Output = Result<http::Response<hyper::body::Incoming>, String>> + Send>>;
Pin<Box<dyn Future<Output = Result<http::Response<hyper::Body>, String>> + Send>>;
impl<B> tower::Service<http::Request<B>> for SyncSvc
where
B: Body,
B::Error: std::fmt::Debug,
B: Body + Send + 'static,
B::Data: Send + 'static,
B::Error: Send + 'static,
{
type Response = http::Response<hyper::body::Incoming>;
type Response = http::Response<hyper::Body>;
type Error = String;
type Future = ResponseFuture;
@ -194,31 +205,20 @@ where
}
fn call(&mut self, req: http::Request<B>) -> Self::Future {
use http_body_util::Full;
let Self(client) = self;
let req = req.map(Self::collect_body).map(Full::new).map(BoxBody::new);
let fut = client.send_req(req).map_err(|err| err.to_string());
Box::pin(fut)
}
}
impl SyncSvc {
/// Collects the given [`Body`], returning a [`Bytes`].
///
/// NB: This blocks the current thread until the provided body has been collected. This is
/// an acceptable practice in test code for the sake of simplicity, because we will always
/// provide [`SyncSvc`] with bodies that are complete.
fn collect_body<B>(body: B) -> Bytes
where
B: Body,
B::Error: std::fmt::Debug,
{
futures::executor::block_on(async move {
use http_body_util::BodyExt;
body.collect()
.await
.expect("body should not fail")
.to_bytes()
})
// This is okay to do because the body should always be complete; we
// just can't prove it.
let req = futures::executor::block_on(async move {
let (parts, body) = req.into_parts();
let body = match body.collect().await.map(http_body::Collected::to_bytes) {
Ok(body) => body,
Err(_) => unreachable!("body should not fail"),
};
http::Request::from_parts(parts, body)
});
Box::pin(
self.0
.send_req(req.map(Into::into))
.map_err(|err| err.to_string()),
)
}
}


@ -1,11 +1,10 @@
use super::*;
use std::{
collections::VecDeque,
io,
net::TcpListener as StdTcpListener,
sync::atomic::{AtomicUsize, Ordering},
};
use tokio::{net::TcpStream, task::JoinHandle};
use std::collections::VecDeque;
use std::io;
use std::net::TcpListener as StdTcpListener;
use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::net::TcpStream;
use tokio::task::JoinHandle;
type TcpConnSender = mpsc::UnboundedSender<(
Option<Vec<u8>>,
@ -149,6 +148,10 @@ impl TcpServer {
}
impl TcpConn {
pub fn target_addr(&self) -> SocketAddr {
self.addr
}
pub async fn read(&self) -> Vec<u8> {
self.try_read()
.await


@ -381,7 +381,7 @@ mod cross_version {
}
fn default_dst_name(port: u16) -> String {
format!("{HOST}:{port}")
format!("{}:{}", HOST, port)
}
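Many hunks in this comparison are a mechanical move between positional format arguments (on the release branch) and inline captured identifiers (on main). The two forms produce identical output:

```rust
fn main() {
    let host = "example.com";
    let port = 8080u16;
    // Positional arguments, as on the release branch...
    let a = format!("{}:{}", host, port);
    // ...and inline captured identifiers, as on main.
    let b = format!("{host}:{port}");
    assert_eq!(a, b);
    println!("{b}"); // → example.com:8080
}
```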
fn send_default_dst(
@ -484,7 +484,7 @@ mod http2 {
let body = {
let body = res.into_body();
let body = http_body_util::BodyExt::collect(body)
let body = http_body::Body::collect(body)
.await
.unwrap()
.to_bytes()


@ -24,7 +24,7 @@ async fn nonblocking_identity_detection() {
let msg1 = "custom tcp hello\n";
let msg2 = "custom tcp bye";
let srv = crate::tcp::server()
let srv = server::tcp()
.accept(move |read| {
assert_eq!(read, msg1.as_bytes());
msg2
@ -33,7 +33,7 @@ async fn nonblocking_identity_detection() {
.await;
let proxy = proxy.inbound(srv).run_with_test_env(env).await;
let client = crate::tcp::client(proxy.inbound);
let client = client::tcp(proxy.inbound);
// Create an idle connection and then an active connection. Ensure that
// protocol detection on the idle connection does not block communication on


@ -1,6 +1,5 @@
use crate::*;
use linkerd2_proxy_api::destination as pb;
use linkerd_app_core::svc::http::BoxBody;
use std::sync::atomic::{AtomicUsize, Ordering};
struct Service {
@ -15,17 +14,11 @@ impl Service {
let counter = response_counter.clone();
let svc = server::http1()
.route_fn("/load-profile", |_| {
Response::builder()
.status(201)
.body(BoxBody::empty())
.unwrap()
Response::builder().status(201).body("".into()).unwrap()
})
.route_fn("/", move |_req| {
counter.fetch_add(1, Ordering::SeqCst);
Response::builder()
.status(200)
.body(BoxBody::from_static(name))
.unwrap()
Response::builder().status(200).body(name.into()).unwrap()
})
.run()
.await;
@ -63,7 +56,7 @@ async fn wait_for_profile_stage(client: &client::Client, metrics: &client::Clien
for _ in 0i32..10 {
assert_eq!(client.get("/load-profile").await, "");
let m = metrics.get("/metrics").await;
let stage_metric = format!("rt_load_profile=\"{stage}\"");
let stage_metric = format!("rt_load_profile=\"{}\"", stage);
if m.contains(stage_metric.as_str()) {
break;
}


@ -1,5 +1,3 @@
use linkerd_app_core::svc::http::BoxBody;
use crate::*;
use std::sync::atomic::{AtomicUsize, Ordering};
@ -73,10 +71,7 @@ impl TestBuilder {
// This route is just called by the test setup, to trigger the proxy
// to start fetching the ServiceProfile.
.route_fn("/load-profile", |_| {
Response::builder()
.status(201)
.body(BoxBody::empty())
.unwrap()
Response::builder().status(201).body("".into()).unwrap()
});
if self.default_routes {
@ -88,12 +83,12 @@ impl TestBuilder {
let port = srv.addr.port();
let ctrl = controller::new();
let dst_tx = ctrl.destination_tx(format!("{host}:{port}"));
let dst_tx = ctrl.destination_tx(format!("{}:{}", host, port));
dst_tx.send_addr(srv.addr);
let ctrl = controller::new();
let dst_tx = ctrl.destination_tx(format!("{host}:{port}"));
let dst_tx = ctrl.destination_tx(format!("{}:{}", host, port));
dst_tx.send_addr(srv.addr);
let profile_tx = ctrl.profile_tx(srv.addr.to_string());
@ -126,7 +121,7 @@ impl TestBuilder {
::std::thread::sleep(Duration::from_secs(1));
Response::builder()
.status(200)
.body(BoxBody::from_static("slept"))
.body("slept".into())
.unwrap()
})
.route_async("/0.5", move |req| {
@ -134,20 +129,17 @@ impl TestBuilder {
async move {
// Read the entire body before responding, so that the
// client doesn't fail when writing it out.
let body = http_body_util::BodyExt::collect(req.into_body())
let body = http_body::Body::collect(req.into_body())
.await
.map(http_body_util::Collected::to_bytes);
.map(http_body::Collected::to_bytes);
let bytes = body.as_ref().map(Bytes::len);
tracing::debug!(?bytes, "received body");
Ok::<_, Error>(if fail {
Response::builder()
.status(533)
.body(BoxBody::from_static("nope"))
.unwrap()
Response::builder().status(533).body("nope".into()).unwrap()
} else {
Response::builder()
.status(200)
.body(BoxBody::from_static("retried"))
.body("retried".into())
.unwrap()
})
}
@ -155,14 +147,11 @@ impl TestBuilder {
.route_fn("/0.5/sleep", move |_req| {
::std::thread::sleep(Duration::from_secs(1));
if counter2.fetch_add(1, Ordering::Relaxed) % 2 == 0 {
Response::builder()
.status(533)
.body(BoxBody::from_static("nope"))
.unwrap()
Response::builder().status(533).body("nope".into()).unwrap()
} else {
Response::builder()
.status(200)
.body(BoxBody::from_static("retried"))
.body("retried".into())
.unwrap()
}
})
@ -170,15 +159,12 @@ impl TestBuilder {
if counter3.fetch_add(1, Ordering::Relaxed) % 2 == 0 {
Response::builder()
.status(533)
.body(BoxBody::new(http_body_util::Full::new(Bytes::from(vec![
b'x';
1024 * 100
]))))
.body(vec![b'x'; 1024 * 100].into())
.unwrap()
} else {
Response::builder()
.status(200)
.body(BoxBody::from_static("retried"))
.body("retried".into())
.unwrap()
}
})
@ -199,8 +185,6 @@ impl TestBuilder {
}
mod cross_version {
use std::convert::Infallible;
use super::*;
pub(super) async fn retry_if_profile_allows(version: server::Server) {
@ -264,7 +248,7 @@ mod cross_version {
let req = client
.request_builder("/0.5")
.method(http::Method::POST)
.body(BoxBody::from_static("req has a body"))
.body("req has a body".into())
.unwrap();
let res = client.request_body(req).await;
assert_eq!(res.status(), 200);
@ -285,7 +269,7 @@ mod cross_version {
let req = client
.request_builder("/0.5")
.method(http::Method::PUT)
.body(BoxBody::from_static("req has a body"))
.body("req has a body".into())
.unwrap();
let res = client.request_body(req).await;
assert_eq!(res.status(), 200);
@ -303,14 +287,13 @@ mod cross_version {
.await;
let client = test.client;
let (mut tx, body) = http_body_util::channel::Channel::<Bytes, Infallible>::new(1024);
let (mut tx, body) = hyper::body::Body::channel();
let req = client
.request_builder("/0.5")
.method("POST")
.body(body)
.unwrap();
let fut = client.send_req(req);
let res = tokio::spawn(fut);
let res = tokio::spawn(async move { client.request_body(req).await });
tx.send_data(Bytes::from_static(b"hello"))
.await
.expect("the whole body should be read");
@ -318,7 +301,7 @@ mod cross_version {
.await
.expect("the whole body should be read");
drop(tx);
let res = res.await.unwrap().unwrap();
let res = res.await.unwrap();
assert_eq!(res.status(), 200);
}
@ -381,9 +364,7 @@ mod cross_version {
let req = client
.request_builder("/0.5")
.method("POST")
.body(BoxBody::new(http_body_util::Full::new(Bytes::from(
&[1u8; 64 * 1024 + 1][..],
))))
.body(hyper::Body::from(&[1u8; 64 * 1024 + 1][..]))
.unwrap();
let res = client.request_body(req).await;
assert_eq!(res.status(), 533);
@ -405,14 +386,13 @@ mod cross_version {
.await;
let client = test.client;
let (mut tx, body) = http_body_util::channel::Channel::<Bytes, Infallible>::new(1024);
let (mut tx, body) = hyper::body::Body::channel();
let req = client
.request_builder("/0.5")
.method("POST")
.body(body)
.unwrap();
let fut = client.send_req(req);
let res = tokio::spawn(fut);
let res = tokio::spawn(async move { client.request_body(req).await });
// send a 32k chunk
tx.send_data(Bytes::from(&[1u8; 32 * 1024][..]))
.await
@ -426,7 +406,7 @@ mod cross_version {
.await
.expect("the whole body should be read");
drop(tx);
let res = res.await.unwrap().unwrap();
let res = res.await.unwrap();
assert_eq!(res.status(), 533);
}
@ -610,8 +590,6 @@ mod http2 {
}
mod grpc_retry {
use std::convert::Infallible;
use super::*;
use http::header::{HeaderName, HeaderValue};
static GRPC_STATUS: HeaderName = HeaderName::from_static("grpc-status");
@ -635,7 +613,7 @@ mod grpc_retry {
let rsp = Response::builder()
.header(GRPC_STATUS.clone(), header)
.status(200)
.body(BoxBody::empty())
.body(hyper::Body::empty())
.unwrap();
tracing::debug!(headers = ?rsp.headers());
rsp
@ -683,16 +661,9 @@ mod grpc_retry {
let mut trailers = HeaderMap::with_capacity(1);
trailers.insert(GRPC_STATUS.clone(), status);
tracing::debug!(?trailers);
let (mut tx, body) =
http_body_util::channel::Channel::<Bytes, Error>::new(1024);
let (mut tx, body) = hyper::body::Body::channel();
tx.send_trailers(trailers).await.unwrap();
Ok::<_, Error>(
Response::builder()
.status(200)
.body(body)
.unwrap()
.map(BoxBody::new),
)
Ok::<_, Error>(Response::builder().status(200).body(body).unwrap())
}
}
});
@ -733,17 +704,10 @@ mod grpc_retry {
let mut trailers = HeaderMap::with_capacity(1);
trailers.insert(GRPC_STATUS.clone(), GRPC_STATUS_OK.clone());
tracing::debug!(?trailers);
let (mut tx, body) =
http_body_util::channel::Channel::<Bytes, Error>::new(1024);
let (mut tx, body) = hyper::body::Body::channel();
tx.send_data("hello world".into()).await.unwrap();
tx.send_trailers(trailers).await.unwrap();
Ok::<_, Error>(
Response::builder()
.status(200)
.body(body)
.unwrap()
.map(BoxBody::new),
)
Ok::<_, Error>(Response::builder().status(200).body(body).unwrap())
}
}
});
@ -788,20 +752,13 @@ mod grpc_retry {
let mut trailers = HeaderMap::with_capacity(1);
trailers.insert(GRPC_STATUS.clone(), GRPC_STATUS_OK.clone());
tracing::debug!(?trailers);
let (mut tx, body) =
http_body_util::channel::Channel::<Bytes, Infallible>::new(1024);
let (mut tx, body) = hyper::body::Body::channel();
tokio::spawn(async move {
tx.send_data("hello".into()).await.unwrap();
tx.send_data("world".into()).await.unwrap();
tx.send_trailers(trailers).await.unwrap();
});
Ok::<_, Error>(
Response::builder()
.status(200)
.body(body)
.unwrap()
.map(BoxBody::new),
)
Ok::<_, Error>(Response::builder().status(200).body(body).unwrap())
}
}
});
@ -833,38 +790,21 @@ mod grpc_retry {
assert_eq!(retries.load(Ordering::Relaxed), 1);
}
async fn data<B>(body: &mut B) -> B::Data
where
B: http_body::Body + Unpin,
B::Data: std::fmt::Debug,
B::Error: std::fmt::Debug,
{
use http_body_util::BodyExt;
async fn data(body: &mut hyper::Body) -> Bytes {
let data = body
.frame()
.data()
.await
.expect("a result")
.expect("a frame")
.into_data()
.expect("a chunk of data");
.expect("body data frame must not be eaten")
.unwrap();
tracing::info!(?data);
data
}
async fn trailers<B>(body: &mut B) -> http::HeaderMap
where
B: http_body::Body + Unpin,
B::Error: std::fmt::Debug,
{
use http_body_util::BodyExt;
async fn trailers(body: &mut hyper::Body) -> http::HeaderMap {
let trailers = body
.frame()
.trailers()
.await
.expect("a result")
.expect("a frame")
.into_trailers()
.ok()
.expect("a trailers frame");
.expect("trailers future should not fail")
.expect("response should have trailers");
tracing::info!(?trailers);
trailers
}
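The helpers above pull data and trailer frames off a response body because gRPC reports its status code in HTTP/2 trailers (or in the headers of a trailers-only response). A simplified sketch of that classification, with plain maps standing in for the real `http::HeaderMap`:

```rust
use std::collections::HashMap;

// Prefer a grpc-status found in trailers; fall back to headers for
// trailers-only responses. Returns the parsed status code, if any.
fn grpc_status(
    headers: &HashMap<&str, &str>,
    trailers: Option<&HashMap<&str, &str>>,
) -> Option<u32> {
    trailers
        .and_then(|t| t.get("grpc-status"))
        .or_else(|| headers.get("grpc-status"))
        .and_then(|v| v.parse().ok())
}

fn main() {
    let headers = HashMap::new();
    let mut trailers = HashMap::new();
    trailers.insert("grpc-status", "14"); // UNAVAILABLE: the retryable case above
    assert_eq!(grpc_status(&headers, Some(&trailers)), Some(14));

    let mut headers_only = HashMap::new();
    headers_only.insert("grpc-status", "0"); // OK, as a trailers-only response
    assert_eq!(grpc_status(&headers_only, None), Some(0));
    println!("status classification ok");
}
```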


@ -1,5 +1,3 @@
use linkerd_app_core::svc::http::BoxBody;
use crate::*;
#[tokio::test]
@ -28,13 +26,10 @@ async fn h2_exercise_goaways_connections() {
let (shdn, rx) = shutdown_signal();
let body = http_body_util::Full::new(Bytes::from(vec![b'1'; RESPONSE_SIZE]));
let body = Bytes::from(vec![b'1'; RESPONSE_SIZE]);
let srv = server::http2()
.route_fn("/", move |_req| {
Response::builder()
.body(body.clone())
.unwrap()
.map(BoxBody::new)
Response::builder().body(body.clone().into()).unwrap()
})
.run()
.await;
@ -55,8 +50,8 @@ async fn h2_exercise_goaways_connections() {
.into_iter()
.map(Response::into_body)
.map(|body| {
http_body_util::BodyExt::collect(body)
.map_ok(http_body_util::Collected::aggregate)
http_body::Body::collect(body)
.map_ok(http_body::Collected::aggregate)
// Make sure the bodies weren't cut off
.map_ok(|buf| assert_eq!(buf.remaining(), RESPONSE_SIZE))
})
@ -77,7 +72,7 @@ async fn http1_closes_idle_connections() {
let (shdn, rx) = shutdown_signal();
const RESPONSE_SIZE: usize = 1024 * 16;
let body = http_body_util::Full::new(Bytes::from(vec![b'1'; RESPONSE_SIZE]));
let body = Bytes::from(vec![b'1'; RESPONSE_SIZE]);
let shdn = Arc::new(Mutex::new(Some(shdn)));
let srv = server::http1()
@ -85,10 +80,7 @@ async fn http1_closes_idle_connections() {
// Trigger a shutdown signal while the request is made
// but a response isn't returned yet.
shdn.lock().take().expect("only 1 request").signal();
Response::builder()
.body(body.clone())
.unwrap()
.map(BoxBody::new)
Response::builder().body(body.clone().into()).unwrap()
})
.run()
.await;
@ -109,7 +101,7 @@ async fn tcp_waits_for_proxies_to_close() {
let msg1 = "custom tcp hello\n";
let msg2 = "custom tcp bye";
let srv = crate::tcp::server()
let srv = server::tcp()
// Trigger a shutdown while TCP stream is busy
.accept_fut(move |mut sock| {
async move {
@ -125,7 +117,7 @@ async fn tcp_waits_for_proxies_to_close() {
.await;
let proxy = proxy::new().inbound(srv).shutdown_signal(rx).run().await;
let client = crate::tcp::client(proxy.inbound);
let client = client::tcp(proxy.inbound);
let tcp_client = client.connect().await;


@ -254,7 +254,7 @@ async fn grpc_headers_end() {
assert_eq!(res.status(), 200);
assert_eq!(res.headers()["grpc-status"], "1");
let body = res.into_body();
let bytes = http_body_util::BodyExt::collect(body)
let bytes = http_body::Body::collect(body)
.await
.unwrap()
.to_bytes()


@ -57,7 +57,9 @@ impl Fixture {
let client = client::new(proxy.inbound, "tele.test.svc.cluster.local");
let tcp_dst_labels = metrics::labels().label("direction", "inbound");
let tcp_src_labels = tcp_dst_labels.clone().label("target_addr", orig_dst);
let labels = tcp_dst_labels.clone().label("target_port", orig_dst.port());
let labels = tcp_dst_labels
.clone()
.label("authority", "tele.test.svc.cluster.local");
let tcp_src_labels = tcp_src_labels.label("peer", "src");
let tcp_dst_labels = tcp_dst_labels.label("peer", "dst");
Fixture {
@ -119,7 +121,7 @@ impl TcpFixture {
const BYE_MSG: &'static str = "custom tcp bye";
async fn server() -> server::Listening {
crate::tcp::server()
server::tcp()
.accept(move |read| {
assert_eq!(read, Self::HELLO_MSG.as_bytes());
TcpFixture::BYE_MSG
@ -145,7 +147,7 @@ impl TcpFixture {
.run()
.await;
let client = crate::tcp::client(proxy.inbound);
let client = client::tcp(proxy.inbound);
let metrics = client::http1(proxy.admin, "localhost");
let src_labels = metrics::labels()
@ -184,7 +186,7 @@ impl TcpFixture {
.run()
.await;
let client = crate::tcp::client(proxy.outbound);
let client = client::tcp(proxy.outbound);
let metrics = client::http1(proxy.admin, "localhost");
let src_labels = metrics::labels()
@ -292,7 +294,7 @@ async fn metrics_endpoint_outbound_response_count() {
test_http_count("response_total", Fixture::outbound()).await
}
async fn test_http_count(metric_name: &str, fixture: impl Future<Output = Fixture>) {
async fn test_http_count(metric: &str, fixture: impl Future<Output = Fixture>) {
let _trace = trace_init();
let Fixture {
client,
@ -305,13 +307,9 @@ async fn test_http_count(metric_name: &str, fixture: impl Future<Output = Fixtur
..
} = fixture.await;
let metric = labels.metric(metric_name);
let metric = labels.metric(metric);
let scrape = metrics.get("/metrics").await;
assert!(
metric.is_not_in(scrape),
"{metric:?} should not be in /metrics"
);
assert!(metric.is_not_in(metrics.get("/metrics").await));
info!("client.get(/)");
assert_eq!(client.get("/").await, "hello");
@ -323,7 +321,6 @@ async fn test_http_count(metric_name: &str, fixture: impl Future<Output = Fixtur
mod response_classification {
use super::Fixture;
use crate::*;
use linkerd_app_core::svc::http::BoxBody;
use tracing::info;
const REQ_STATUS_HEADER: &str = "x-test-status-requested";
@ -358,7 +355,7 @@ mod response_classification {
// TODO: tests for grpc statuses
unreachable!("not called in test")
} else {
Response::new(BoxBody::empty())
Response::new("".into())
};
*rsp.status_mut() = status;
rsp
@@ -1309,7 +1306,7 @@ async fn metrics_compression() {
let mut body = {
let body = resp.into_body();
http_body_util::BodyExt::collect(body)
http_body::Body::collect(body)
.await
.expect("response body concat")
.aggregate()
@@ -1318,9 +1315,9 @@ async fn metrics_compression() {
body.copy_to_bytes(body.remaining()),
));
let mut scrape = String::new();
decoder
.read_to_string(&mut scrape)
.unwrap_or_else(|_| panic!("decode gzip (requested Accept-Encoding: {encoding})"));
decoder.read_to_string(&mut scrape).unwrap_or_else(|_| {
panic!("decode gzip (requested Accept-Encoding: {})", encoding)
});
scrape
}
};
@@ -26,7 +26,7 @@ async fn is_valid_json() {
assert!(!json.is_empty());
for obj in json {
println!("{obj}\n");
println!("{}\n", obj);
}
}
@@ -53,7 +53,7 @@ async fn query_is_valid_json() {
assert!(!json.is_empty());
for obj in json {
println!("{obj}\n");
println!("{}\n", obj);
}
}
@@ -74,9 +74,12 @@ async fn valid_get_does_not_error() {
let json = logs.await.unwrap();
for obj in json {
println!("{obj}\n");
println!("{}\n", obj);
if obj.get("error").is_some() {
panic!("expected the log stream to contain no error responses!\njson = {obj}");
panic!(
"expected the log stream to contain no error responses!\njson = {}",
obj
);
}
}
}
@@ -98,9 +101,12 @@ async fn valid_query_does_not_error() {
let json = logs.await.unwrap();
for obj in json {
println!("{obj}\n");
println!("{}\n", obj);
if obj.get("error").is_some() {
panic!("expected the log stream to contain no error responses!\njson = {obj}");
panic!(
"expected the log stream to contain no error responses!\njson = {}",
obj
);
}
}
}
@@ -136,7 +142,9 @@ async fn multi_filter() {
level.and_then(|value| value.as_str()),
Some("DEBUG") | Some("INFO") | Some("WARN") | Some("ERROR")
),
"level must be DEBUG, INFO, WARN, or ERROR\n level: {level:?}\n json: {obj:#?}"
"level must be DEBUG, INFO, WARN, or ERROR\n level: {:?}\n json: {:#?}",
level,
obj
);
}
@@ -167,9 +175,9 @@ async fn get_log_stream(
let req = client
.request_body(
client
.request_builder(&format!("{PATH}?{filter}"))
.request_builder(&format!("{}?{}", PATH, filter))
.method(http::Method::GET)
.body(http_body_util::Full::new(Bytes::from(filter)))
.body(hyper::Body::from(filter))
.unwrap(),
)
.await;
@@ -191,7 +199,7 @@ async fn query_log_stream(
client
.request_builder(PATH)
.method("QUERY")
.body(http_body_util::Full::new(Bytes::from(filter)))
.body(hyper::Body::from(filter))
.unwrap(),
)
.await;
@@ -202,28 +210,19 @@ async fn query_log_stream(
/// Spawns a task to collect all the logs in a streaming body and parse them as
/// JSON.
fn collect_logs<B>(mut body: B) -> (JoinHandle<Vec<serde_json::Value>>, oneshot::Sender<()>)
where
B: Body<Data = Bytes> + Send + Unpin + 'static,
B::Error: std::error::Error,
{
use http_body_util::BodyExt;
fn collect_logs(
mut body: hyper::Body,
) -> (JoinHandle<Vec<serde_json::Value>>, oneshot::Sender<()>) {
let (done_tx, done_rx) = oneshot::channel();
let result = tokio::spawn(async move {
let mut result = Vec::new();
let logs = &mut result;
let fut = async move {
while let Some(res) = body.frame().await {
while let Some(res) = body.data().await {
let chunk = match res {
Ok(frame) => {
if let Ok(data) = frame.into_data() {
data
} else {
break;
}
}
Ok(chunk) => chunk,
Err(e) => {
println!("body failed: {e}");
println!("body failed: {}", e);
break;
}
};
@@ -80,7 +80,10 @@ impl Test {
.await
};
env.put(app::env::ENV_INBOUND_DETECT_TIMEOUT, format!("{TIMEOUT:?}"));
env.put(
app::env::ENV_INBOUND_DETECT_TIMEOUT,
format!("{:?}", TIMEOUT),
);
(self.set_env)(&mut env);
@@ -110,7 +113,7 @@ async fn inbound_timeout() {
let _trace = trace_init();
let (proxy, metrics) = Test::default().run().await;
let client = crate::tcp::client(proxy.inbound);
let client = client::tcp(proxy.inbound);
let _tcp_client = client.connect().await;
@@ -124,6 +127,26 @@ async fn inbound_timeout() {
.await;
}
/// Tests that the detect metric is labeled and incremented on I/O error.
#[tokio::test]
async fn inbound_io_err() {
let _trace = trace_init();
let (proxy, metrics) = Test::default().run().await;
let client = client::tcp(proxy.inbound);
let tcp_client = client.connect().await;
tcp_client.write(TcpFixture::HELLO_MSG).await;
drop(tcp_client);
metric(&proxy)
.label("error", "i/o")
.value(1u64)
.assert_in(&metrics)
.await;
}
/// Tests that the detect metric is not incremented when TLS is successfully
/// detected.
#[tokio::test]
@@ -144,7 +167,7 @@ async fn inbound_success() {
"foo.ns1.svc.cluster.local",
client_config.clone(),
);
let no_tls_client = crate::tcp::client(proxy.inbound);
let no_tls_client = client::tcp(proxy.inbound);
let metric = metric(&proxy)
.label("error", "tls detection timeout")
@@ -169,6 +192,44 @@ async fn inbound_success() {
metric.assert_in(&metrics).await;
}
/// Tests both of the above cases together.
#[tokio::test]
async fn inbound_multi() {
let _trace = trace_init();
let (proxy, metrics) = Test::default().run().await;
let client = client::tcp(proxy.inbound);
let metric = metric(&proxy);
let timeout_metric = metric.clone().label("error", "tls detection timeout");
let io_metric = metric.label("error", "i/o");
let tcp_client = client.connect().await;
tokio::time::sleep(TIMEOUT + Duration::from_millis(15)) // just in case
.await;
timeout_metric.clone().value(1u64).assert_in(&metrics).await;
drop(tcp_client);
let tcp_client = client.connect().await;
tcp_client.write(TcpFixture::HELLO_MSG).await;
drop(tcp_client);
io_metric.clone().value(1u64).assert_in(&metrics).await;
timeout_metric.clone().value(1u64).assert_in(&metrics).await;
let tcp_client = client.connect().await;
tokio::time::sleep(TIMEOUT + Duration::from_millis(15)) // just in case
.await;
io_metric.clone().value(1u64).assert_in(&metrics).await;
timeout_metric.clone().value(2u64).assert_in(&metrics).await;
drop(tcp_client);
}
/// Tests that TLS detect failure metrics are collected for the direct stack.
#[tokio::test]
async fn inbound_direct_multi() {
@@ -183,7 +244,7 @@ async fn inbound_direct_multi() {
let proxy = proxy::new().inbound(srv).inbound_direct();
let (proxy, metrics) = Test::new(proxy).run().await;
let client = crate::tcp::client(proxy.inbound);
let client = client::tcp(proxy.inbound);
let metric = metrics::metric(METRIC).label("target_addr", proxy.inbound);
let timeout_metric = metric.clone().label("error", "tls detection timeout");
@@ -230,7 +291,7 @@ async fn inbound_invalid_ip() {
.run()
.await;
let client = crate::tcp::client(proxy.inbound);
let client = client::tcp(proxy.inbound);
let metric = metric(&proxy)
.label("error", "unexpected")
.label("target_addr", fake_ip);
@@ -293,7 +354,7 @@ async fn inbound_direct_success() {
.await;
let tls_client = client::http1(proxy2.outbound, auth);
let no_tls_client = crate::tcp::client(proxy1.inbound);
let no_tls_client = client::tcp(proxy1.inbound);
let metric = metrics::metric(METRIC)
.label("target_addr", proxy1.inbound)