this commit adds a `[workspace.package]` table at the root of the cargo
workspace. constituent manifests are updated to inherit this
workspace-level metadata.
this is generally a superficial chore, but has a pleasant future upside:
when new rust editions are released (e.g. 2024), we will only need to
update the edition specified at the root of the workspace.
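for illustration, the change has roughly this shape (the fields shown
here are illustrative):
```
# root Cargo.toml
[workspace.package]
edition = "2021"
license = "Apache-2.0"

# a member crate's Cargo.toml
[package]
name = "linkerd-stack"
edition.workspace = true
license.workspace = true
```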
Signed-off-by: katelyn martin <kate@buoyant.io>
* build(deps): bump tempfile from 3.17.1 to 3.19.0
Bumps [tempfile](https://github.com/Stebalien/tempfile) from 3.17.1 to 3.19.0.
- [Changelog](https://github.com/Stebalien/tempfile/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Stebalien/tempfile/compare/v3.17.1...v3.19.0)
---
updated-dependencies:
- dependency-name: tempfile
dependency-type: indirect
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
* chore(deny.toml): skip rustix v0.38
this commit adds rustix, whose 1.0 release is still propagating through
the ecosystem, to the skip list in deny.toml.
nb: this also removes the bitflags directive, since that crate no longer
has a duplicate version in the dependency tree.
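the relevant stanza looks roughly like this (version spec illustrative):
```
[[bans.skip]]
name = "rustix"
version = "0.38"
```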
Signed-off-by: katelyn martin <kate@buoyant.io>
---------
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: katelyn martin <kate@buoyant.io>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: katelyn martin <kate@buoyant.io>
this commit performs a small refactor to one of the unit tests in
`linkerd-stack`'s load-shedding middleware.
this adds a span to the worker tasks spawned in this test, so that
tracing logs can be associated with particular oneshot services.
see #3744 for more information on upgrading our tower dependency. this
is cherry-picked from investigations on that branch into breaking
changes to the `Buffer` middleware in tower 0.5.
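a minimal sketch of the technique (names illustrative, not the test's
actual code): each spawned task is instrumented with its own span, so
its log lines carry a `worker{id=...}` prefix.
```
use tracing::Instrument;

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::TRACE)
        .init();
    for id in ["oneshot1", "oneshot2"] {
        // each task runs inside its own span; its logs are prefixed
        // like `worker{id=oneshot1}`.
        tokio::spawn(
            async move {
                tracing::trace!("sending request");
            }
            .instrument(tracing::info_span!("worker", id)),
        )
        .await
        .unwrap();
    }
}
```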
after this change, logs now look like this:
```
; RUST_LOG="trace" cargo test -p linkerd-stack buffer_load_shed -- --nocapture
running 1 test
[ 0.002770s] TRACE worker{id=oneshot1}: tower::buffer::service: sending request to buffer worker
[ 0.002809s] TRACE worker{id=oneshot2}: tower::buffer::service: sending request to buffer worker
[ 0.002823s] TRACE worker{id=oneshot3}: tower::buffer::service: sending request to buffer worker
[ 0.002843s] DEBUG worker{id=oneshot4}: linkerd_stack::loadshed: Service has become unavailable
[ 0.002851s] DEBUG worker{id=oneshot4}: linkerd_stack::loadshed: Service shedding load
[ 0.002878s] TRACE tower::buffer::worker: worker polling for next message
[ 0.002885s] TRACE tower::buffer::worker: processing new request
[ 0.002892s] TRACE worker{id=oneshot1}: tower::buffer::worker: resumed=false worker received request; waiting for service readiness
[ 0.002901s] DEBUG worker{id=oneshot1}: tower::buffer::worker: service.ready=true processing request
[ 0.002914s] TRACE worker{id=oneshot1}: tower::buffer::worker: returning response future
[ 0.002926s] TRACE tower::buffer::worker: worker polling for next message
[ 0.002931s] TRACE tower::buffer::worker: processing new request
[ 0.002935s] TRACE worker{id=oneshot2}: tower::buffer::worker: resumed=false worker received request; waiting for service readiness
[ 0.002946s] TRACE worker{id=oneshot2}: tower::buffer::worker: service.ready=false delay
[ 0.002983s] TRACE worker{id=oneshot5}: tower::buffer::service: sending request to buffer worker
[ 0.003001s] DEBUG worker{id=oneshot6}: linkerd_stack::loadshed: Service has become unavailable
[ 0.003007s] DEBUG worker{id=oneshot6}: linkerd_stack::loadshed: Service shedding load
[ 0.003017s] DEBUG worker{id=oneshot7}: linkerd_stack::loadshed: Service has become unavailable
[ 0.003024s] DEBUG worker{id=oneshot7}: linkerd_stack::loadshed: Service shedding load
[ 0.003035s] TRACE tower::buffer::worker: worker polling for next message
[ 0.003041s] TRACE tower::buffer::worker: resuming buffered request
[ 0.003045s] TRACE worker{id=oneshot2}: tower::buffer::worker: resumed=true worker received request; waiting for service readiness
[ 0.003052s] DEBUG worker{id=oneshot2}: tower::buffer::worker: service.ready=true processing request
[ 0.003060s] TRACE worker{id=oneshot2}: tower::buffer::worker: returning response future
[ 0.003068s] TRACE tower::buffer::worker: worker polling for next message
[ 0.003073s] TRACE tower::buffer::worker: processing new request
[ 0.003077s] TRACE worker{id=oneshot3}: tower::buffer::worker: resumed=false worker received request; waiting for service readiness
[ 0.003084s] DEBUG worker{id=oneshot3}: tower::buffer::worker: service.ready=true processing request
[ 0.003091s] TRACE worker{id=oneshot3}: tower::buffer::worker: returning response future
[ 0.003099s] TRACE tower::buffer::worker: worker polling for next message
[ 0.003103s] TRACE tower::buffer::worker: processing new request
[ 0.003107s] TRACE worker{id=oneshot5}: tower::buffer::worker: resumed=false worker received request; waiting for service readiness
[ 0.003114s] DEBUG worker{id=oneshot5}: tower::buffer::worker: service.ready=true processing request
[ 0.003121s] TRACE worker{id=oneshot5}: tower::buffer::worker: returning response future
[ 0.003129s] TRACE tower::buffer::worker: worker polling for next message
test loadshed::tests::buffer_load_shed ... ok
```
Signed-off-by: katelyn martin <kate@buoyant.io>
this commit replaces `humantime`, which is no longer maintained, with
`jiff`.
building `main` today yields this error:
```
error[unmaintained]: humantime is unmaintained
┌─ /linkerd/linkerd2-proxy/Cargo.lock:78:1
│
78 │ humantime 2.1.0 registry+https://github.com/rust-lang/crates.io-index
│ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ unmaintained advisory detected
│
├ ID: RUSTSEC-2025-0014
├ Advisory: https://rustsec.org/advisories/RUSTSEC-2025-0014
├ Latest `humantime` crates.io release is four years old and GitHub repository has
not seen commits in four years. Question about maintenance status has not gotten
any reaction from maintainer: https://github.com/tailhook/humantime/issues/31
## Possible alternatives
* [jiff](https://crates.io/crates/jiff) provides same kind of functionality
├ Announcement: https://github.com/tailhook/humantime/issues/31
├ Solution: No safe upgrade is available!
├ humantime v2.1.0
└── linkerd-http-access-log v0.1.0
└── linkerd-app-inbound v0.1.0
├── linkerd-app v0.1.0
│ ├── linkerd-app-integration v0.1.0
│ └── linkerd2-proxy v0.1.0
├── linkerd-app-admin v0.1.0
│ ├── linkerd-app v0.1.0 (*)
│ └── (dev) linkerd-app-integration v0.1.0 (*)
└── linkerd-app-gateway v0.1.0
└── linkerd-app v0.1.0 (*)
advisories FAILED, bans ok, licenses ok, sources ok
```
see:
* https://github.com/rustsec/advisory-db/pull/2249
* https://github.com/tailhook/humantime/issues/31
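for reference, a minimal sketch of the kind of swap involved (the
proxy's actual call sites live in the access-log code; this example is
illustrative): `jiff::Timestamp` renders RFC 3339 timestamps via its
`Display` impl, covering what `humantime::format_rfc3339` did before.
```
use std::time::SystemTime;

fn main() -> Result<(), jiff::Error> {
    let now = SystemTime::now();
    // before: humantime::format_rfc3339(now)
    let ts = jiff::Timestamp::try_from(now)?;
    println!("{ts}"); // e.g. 2025-03-08T16:32:34.123456789Z
    Ok(())
}
```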
Signed-off-by: katelyn martin <kate@buoyant.io>
kubert-prometheus-process is a new crate that includes all of Linkerd's system
metrics and more. This also helps avoid annoying compilation issues on
non-Linux systems.
this updates the prometheus client dependency.
additionally, this commit updates the `kubert-prometheus-tokio`
dependency, so that all of these crates agree on the prometheus client
version in use.
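a rough sketch of the wiring, assuming a `register` entrypoint of
roughly this shape (the signature and return type are assumptions, not
taken from the crate's docs):
```
use prometheus_client::registry::Registry;

fn main() {
    let mut registry = Registry::default();
    // assumed entrypoint: registers process metrics (CPU, memory, open
    // fds, ...) under a `process_` prefix.
    let _ = kubert_prometheus_process::register(
        registry.sub_registry_with_prefix("process"),
    );
}
```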
Signed-off-by: katelyn martin <kate@buoyant.io>
When the proxy boots up, it needs to select a number of I/O worker threads to
allocate to the runtime. This change adds a new environment variable that allows
this value to scale based on the number of CPUs available on the host.
A CORES_MAX_RATIO value of 1.0 will allocate one worker thread per CPU core; a
smaller value will allocate proportionally fewer worker threads. Values are
rounded to the nearest whole number.
The CORES_MIN value sets a lower bound on the number of worker threads to use.
The CORES_MAX value sets an upper bound.
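A sketch of the scaling rule described above (the helper is
hypothetical; only the variable names come from this change):
```
/// Hypothetical helper: scale worker threads by CORES_MAX_RATIO,
/// rounding to the nearest whole number, then clamp the result to
/// [CORES_MIN, CORES_MAX].
fn worker_threads(num_cpus: usize, max_ratio: f64, min: usize, max: usize) -> usize {
    let scaled = (num_cpus as f64 * max_ratio).round() as usize;
    scaled.clamp(min, max)
}

fn main() {
    assert_eq!(worker_threads(16, 0.5, 2, 8), 8); // 8.0 rounds to 8, within bounds
    assert_eq!(worker_threads(4, 0.1, 2, 8), 2); // 0.4 rounds to 0, raised to CORES_MIN
}
```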
The proxy predates the multi-threaded tokio runtime. When switching to it, we
added a 'multicore' feature to adopt it incrementally. This has been the only
supported configuration for many years now.
This change removes the needless feature flag to simplify the runtime
configuration.
The outbound proxy makes protocol decisions based on the discovery response,
keyed on a "parent" reference.
This change adds a `protocol::metrics` middleware that records connection counts
by parent reference.
Inbound proxies may receive meshed traffic directly on the proxy's inbound port
with a transport header, informing inbound routing behavior.
This change updates the inbound proxy to record metrics about the usage of
transport headers, including the total number of requests with a transport
header by session protocol and target port.
This change updates the DetectHttp middleware to record metrics about HTTP
protocol detection. Specifically, it records the counts of detection results and
a very coarse histogram of the time taken to detect the protocol.
The inbound, outbound, and admin (via inbound) stacks are updated to record
metrics against the main registry.
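The shape of these metrics, sketched with the `prometheus_client` crate
(names, labels, and buckets here are illustrative, not the proxy's
actual types): a counter family keyed by the detection result, plus a
coarse duration histogram.
```
use prometheus_client::{
    encoding::EncodeLabelSet,
    metrics::{counter::Counter, family::Family, histogram::Histogram},
    registry::Registry,
};

#[derive(Clone, Debug, Hash, PartialEq, Eq, EncodeLabelSet)]
struct DetectLabels {
    // e.g. "http/1", "h2", "not-http", "timeout"
    result: &'static str,
}

fn register(reg: &mut Registry) -> (Family<DetectLabels, Counter>, Histogram) {
    let results = Family::<DetectLabels, Counter>::default();
    reg.register(
        "http_detect_results",
        "Results of HTTP protocol detection",
        results.clone(),
    );

    // a deliberately coarse set of buckets, per the description above
    let duration = Histogram::new([0.01, 0.1, 1.0].into_iter());
    reg.register(
        "http_detect_duration_seconds",
        "Time taken to detect the protocol",
        duration.clone(),
    );

    (results, duration)
}
```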
* refactor(http): consolidate HTTP protocol detection
Linkerd's HTTP protocol detection logic is spread across a few crates: the
linkerd-detect crate is generic over the actual protocol detection logic, and
the linkerd-proxy-http crate provides an implementation. There are no other
implementations of the Detect interface. This leads to gnarly type signatures of
the form `Result<Option<http::Variant>, DetectTimeoutError>`: simultaneously
verbose and not particularly informative (what does the `None` case mean, exactly?).
This commit introduces a new crate, `linkerd-http-detect`, that consolidates this
logic and removes the prior implementations. The admin, inbound, and outbound
stacks are updated to use these new types. This work is done in anticipation of
introducing metrics that report HTTP detection behavior.
There are no functional changes.
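A hypothetical sketch of why a dedicated type reads better than the
nested `Result<Option<..>>` (these names are illustrative, not the new
crate's actual API):
```
enum Variant {
    Http1,
    H2,
}

/// Each outcome is named, rather than being encoded as `Ok(Some(..))`,
/// `Ok(None)`, or `Err(..)`.
enum Detection {
    Http(Variant),
    NotHttp,
    ReadTimeout,
}
```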
* feat(http/detect)!: error when the socket is closed
When a proxy does protocol detection, the initial read may indicate that the
connection was closed by the client with no data being written to the socket. In
such a case, the proxy continues to process the connection as if it might be proxied,
but we expect this to fail immediately. This can lead to unexpected proxy
behavior: for example, inbound proxies may report policy denials.
To address this, this change surfaces an error (as if the read call failed).
This could, theoretically, impact some bizarre clients that initiate half-open
connections. These corner cases can use explicit opaque policies to bypass
detection.
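A minimal sketch of the behavioral change, under assumed names (the
actual implementation lives in `linkerd-http-detect`):
```
use std::io;
use tokio::io::{AsyncRead, AsyncReadExt};

/// Hypothetical: perform the initial read for protocol detection.
async fn initial_read<I>(io: &mut I, buf: &mut [u8]) -> io::Result<usize>
where
    I: AsyncRead + Unpin,
{
    let n = io.read(buf).await?;
    if n == 0 {
        // the client closed the socket before writing any data;
        // previously the connection proceeded as if it might still be
        // proxied.
        return Err(io::Error::new(
            io::ErrorKind::UnexpectedEof,
            "socket closed before protocol detection completed",
        ));
    }
    Ok(n)
}
```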
We include a group/version/kind for inbound server resources, but we do not
indicate which specific port the server is applied to. This is important context
to understand the inbound proxy's behavior, especially when using the default
servers.
This change adds a `srv_port` label to inbound server metrics to definitively
and consistently indicate the server port used for inbound policy.
The RefusedNoTarget error type is a remnant of an older version of the direct
stack. This commit updates the error message to reflect the current state of the
code: we require ALPN-negotiated transport headers on all direct connections.
Our build can occasionally fail when the sha is not a valid semver label:
```
--- stdout
cargo:rustc-env=GIT_SHA=025979070
cargo:rustc-env=LINKERD2_PROXY_BUILD_DATE=2025-03-08T16:32:34Z
--- stderr
thread 'main' panicked at linkerd/app/core/build.rs:18:17:
LINKERD2_PROXY_VERSION must be semver: version='0.0.0-dev.025979070'
error='invalid leading zero in pre-release identifier'
```
Semver forbids leading zeros in numeric pre-release identifiers, and the dotted
`025979070` parses as its own numeric identifier. To fix this, the dot is removed
so the version string is 0.0.0-dev025979070, whose `dev025979070` identifier is
alphanumeric and therefore valid.
pr #3715 missed a small handful of cargo dependencies. this commit marks
these so that they also use the workspace-level tower version.
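the change to each missed manifest looks roughly like this (version
spec illustrative):
```
# root Cargo.toml
[workspace.dependencies]
tower = { version = "0.4", default-features = false }

# a member crate's Cargo.toml
[dependencies]
tower = { workspace = true, features = ["util"] }
```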
Signed-off-by: katelyn martin <kate@buoyant.io>
* chore(deps): `tower` is a workspace dependency
see https://github.com/linkerd/linkerd2/issues/8733 for more
information.
see https://github.com/linkerd/linkerd2-proxy/pull/3504 as well.
see #3456 (c740b6d8), #3466 (ca50d6bb), #3473 (b87455a9), and #3701
(cf4ef39) for other previous prs that moved dependencies to be managed
at the workspace level.
see also https://github.com/linkerd/drain-rs/pull/36 for another pull
request related to our tower dependency.
Signed-off-by: katelyn martin <kate@buoyant.io>
* chore(deps): `tower-service` is a workspace dependency
Signed-off-by: katelyn martin <kate@buoyant.io>
* chore(deps): `tower-test` is a workspace dependency
Signed-off-by: katelyn martin <kate@buoyant.io>
---------
Signed-off-by: katelyn martin <kate@buoyant.io>
noticed while addressing `cargo-deny` errors in #3504. these crates
include a few unused dependencies, which we can remove. while we
are in the neighborhood, we make some subjective tweaks to tidy up
these imports.
---
* chore(opentelemetry): remove unused `http` dependency
Signed-off-by: katelyn martin <kate@buoyant.io>
* nit(opentelemetry): tidy imports
this groups imports at the crate level, and imports some items directly
from their respective crates rather than through an alias of said crate.
a `self` prefix is added to clarify imports from submodules of this
crate.
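illustratively (module and item names here are hypothetical), the style
moves toward:
```
// hypothetical submodule standing in for e.g. a metrics module
mod metrics {
    pub struct Report;
}

// grouped at the crate level: `self` marks this crate's own submodules,
// and external items are imported from their crate directly rather than
// through an alias.
use self::metrics::Report;
use std::fmt::Debug;

fn describe<T: Debug>(_report: Report, value: T) {
    println!("{value:?}");
}

fn main() {
    describe(Report, 42);
}
```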
Signed-off-by: katelyn martin <kate@buoyant.io>
* chore(opentelemetry): remove unused `tokio-stream` dependency
Signed-off-by: katelyn martin <kate@buoyant.io>
* chore(opencensus): remove unused `http` dependency
Signed-off-by: katelyn martin <kate@buoyant.io>
* nit(opencensus): use self prefix in import
Signed-off-by: katelyn martin <kate@buoyant.io>
---------
Signed-off-by: katelyn martin <kate@buoyant.io>