Hopefully resolves
https://github.com/open-telemetry/opentelemetry-java-instrumentation/issues/7124
Our Kotlin coroutine instrumentation relies on a shaded copy of
`opentelemetry-extension-kotlin`. This doesn't work well when the
application also uses `opentelemetry-extension-kotlin`, because the
shaded and unshaded copies store the OpenTelemetry context under
different keys. This PR attempts to fix that by instrumenting the
`opentelemetry-extension-kotlin` provided by the application so that it
delegates to the copy shaded inside the agent.
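To illustrate the key collision (a minimal, self-contained sketch, not the actual instrumentation code -- the stand-in key classes below represent the application's key class and the agent's relocated copy):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the key-collision problem. Coroutine context elements are
// looked up by key; shading relocates the key class, so the application's copy
// and the agent's shaded copy use two distinct keys for the "same"
// OpenTelemetry context. The key classes here are illustrative stand-ins.
public class ShadingKeyCollision {
  static final class AppContextKey {} // stands in for the application's key class
  static final class ShadedContextKey {} // stands in for the agent's relocated key class

  public static void main(String[] args) {
    Map<Class<?>, Object> coroutineContext = new HashMap<>();
    // The application's opentelemetry-extension-kotlin stores the context...
    coroutineContext.put(AppContextKey.class, "otel context");
    // ...but the agent's shaded copy looks it up under the relocated key:
    System.out.println(coroutineContext.get(ShadedContextKey.class)); // prints: null
  }
}
```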
Co-authored-by: Mateusz Rzeszutek <mrzeszutek@splunk.com>
Will close #7216 after this is merged and that PR is rebased.
I tested, and it does bring a few more Scala versions into IntelliJ
without this, but Scala is an odd case.
Fixes #7265
I took a look at the new Observation API, and I think it still makes
sense to continue using the interceptors to implement this
instrumentation: they implement the OTel spec (which includes far more
attributes than the default observation convention implemented in
Spring), and they cooperate with the Kafka client instrumentation,
linking the receive and process spans together. It's also quite a simple
change in one of our interceptors, instead of rewriting everything.
(Draft because Spring Boot 3 hasn't been released yet, and it is required
to run the tests. If we're not in a hurry, this PR can wait a bit for that.)
closes: #7028
For reasons explained in the linked issue, it might be handy to register
drivers directly against `OpenTelemetryDriver` rather than against
`DriverManager`. I decided to also go with a static registry, since
drivers are often instantiated by connection pools (like Agroal) and it
could be difficult to use instance variables. This PR adds an additional
`Driver` collection where drivers can be registered. If a driver is
registered with both `OpenTelemetryDriver` and `DriverManager`, the
drivers registered with `OpenTelemetryDriver` are preferred.
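A rough usage sketch (hedged: the `addDriver` method name and the import below are my assumptions, and the registration API in this PR may differ):

```java
import io.opentelemetry.instrumentation.jdbc.OpenTelemetryDriver; // package name is an assumption

import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;

public class DriverRegistrationSketch {
  public static void main(String[] args) throws Exception {
    // e.g. a driver instantiated by a connection pool rather than loaded
    // through DriverManager's service-loader mechanism
    Driver realDriver = new org.h2.Driver();

    // Register directly against OpenTelemetryDriver; if the same driver is
    // also registered with DriverManager, the OpenTelemetryDriver
    // registration takes precedence. addDriver(...) is an assumed name.
    OpenTelemetryDriver.addDriver(realDriver);

    // Connections are then made through the otel-prefixed JDBC URL.
    try (Connection connection = DriverManager.getConnection("jdbc:otel:h2:mem:test")) {
      // use the instrumented connection
    }
  }
}
```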
I thought I was going to need this for #7293, but it still seems like a
good change: it removes a bit of duplication across `getHost` and
`getPort`, and could be useful in the future if we want logic to grab
both host and port in a "single pass".
When a webflux filter that throws an exception is added, the
instrumentation does not currently capture the `http.status_code`.
The fix is to move `WebClientTracingFilter` from the first to the last
filter in the chain, which I think(?) is the general strategy we've
taken for other client instrumentations, e.g. so that if a filter makes
another HTTP call it won't be suppressed.
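To make the ordering concrete, a minimal sketch using Spring's `WebClient` filter API (illustrative only; the lambda below is a stand-in for `WebClientTracingFilter`, and the real instrumentation registers it differently):

```java
import org.springframework.web.reactive.function.client.ExchangeFilterFunction;
import org.springframework.web.reactive.function.client.WebClient;

public class FilterOrderingSketch {
  public static void main(String[] args) {
    // Stand-in for WebClientTracingFilter.
    ExchangeFilterFunction tracingFilter = (request, next) -> next.exchange(request);

    WebClient.builder()
        .filters(
            filters -> {
              // Before: filters.add(0, tracingFilter) made tracing the outermost
              // filter, so an exception thrown by another filter reached the span
              // before any response (and its status code) did.
              // After: appending it last makes it the innermost filter, closest
              // to the actual HTTP exchange, and other filters' own HTTP calls
              // run outside the client span, so they aren't suppressed.
              filters.add(tracingFilter);
            })
        .build();
  }
}
```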
I don't love the test coverage I added, so let me know if you have any
better suggestions?
EDIT: btw, I did some archaeology to confirm that this behavior (adding
to the beginning of the chain) has been in place since the webflux
instrumentation was originally added:
6f472a62a0 (diff-493ad89b5bde807c90387aa2bb67eb10d3bcef6b6a388bd31e11796a6d01ac38R36)
Run callbacks added to the `CompletableFuture` returned from `sendAsync`
with the context that was used when `sendAsync` was called.
Add test for capturing message headers.
Replaces #6362.
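A minimal sketch of the callback-context behavior described above, using the OTel context API directly (the actual advice code in the agent differs):

```java
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class SendAsyncContextSketch {
  public static void main(String[] args) {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();

    // Capture the context that is current when sendAsync() is called...
    Context callerContext = Context.current();

    CompletableFuture<HttpResponse<String>> future =
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString());

    // ...and make it current again when callbacks run, so they see the
    // caller's context rather than whatever context the client's internal
    // threads happen to carry.
    future.whenComplete(
        (response, error) -> {
          try (Scope ignored = callerContext.makeCurrent()) {
            // callback logic runs with the captured context
          }
        });
  }
}
```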
I've reduced the attributes to only record the gc name and the action
that was taken (i.e. I've removed the gc cause). If needed we can add
the cause later, but for now this should be sufficient to determine
total time spent in GC, and categorize time spent as stop the world or
parallel.
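For reference, both remaining attributes come straight from the JMX GC notification; a sketch of where they are read from (the actual metric recording is omitted):

```java
import com.sun.management.GarbageCollectionNotificationInfo;

import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcNotificationSketch {
  public static void main(String[] args) {
    for (GarbageCollectorMXBean gcBean : ManagementFactory.getGarbageCollectorMXBeans()) {
      if (!(gcBean instanceof NotificationEmitter)) {
        continue;
      }
      ((NotificationEmitter) gcBean)
          .addNotificationListener(
              (Notification notification, Object handback) -> {
                if (!GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION.equals(
                    notification.getType())) {
                  return;
                }
                GarbageCollectionNotificationInfo info =
                    GarbageCollectionNotificationInfo.from(
                        (CompositeData) notification.getUserData());
                String gcName = info.getGcName(); // e.g. "G1 Young Generation"
                String action = info.getGcAction(); // e.g. "end of minor GC"
                // info.getGcCause() is intentionally no longer recorded
                System.out.println(gcName + " / " + action);
              },
              null,
              null);
    }
  }
}
```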
This resolves #6694.
We've been tracking the update to cgroup version support and want to get
ahead of widespread usage. The surface of the existing
`ContainerResource` has not changed, but its internals have been
factored out into two "extractor" utilities -- one that understands
cgroup v1 and another for v2. v1 is attempted and, if successful, the
result is used. If v1 fails, then the `ContainerResource` falls back to
v2.
As mentioned in #6694, the approach taken in this PR is borrowed from
[this SO
post](https://stackoverflow.com/questions/68816329/how-to-get-docker-container-id-from-within-the-container-with-cgroup-v2)
combined with local experimentation with Docker Desktop on a Mac, which
already uses cgroup v2.
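A condensed sketch of the fallback shape (the file locations match the description above, but the regex-based parsing is a big simplification of what the real extractors do):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class ContainerIdSketch {
  // 64 hex chars, the usual shape of a container id.
  private static final Pattern CONTAINER_ID = Pattern.compile("[0-9a-f]{64}");

  public static Optional<String> extract() {
    // v1 is attempted first; if it yields nothing, fall back to v2.
    Optional<String> v1 = firstMatch(Paths.get("/proc/self/cgroup"));
    return v1.isPresent() ? v1 : firstMatch(Paths.get("/proc/self/mountinfo"));
  }

  // cgroup v1: the id appears in /proc/self/cgroup; cgroup v2: it can be
  // found in mount paths in /proc/self/mountinfo (the SO post's approach).
  private static Optional<String> firstMatch(Path file) {
    try (Stream<String> lines = Files.lines(file)) {
      return lines.map(CONTAINER_ID::matcher).filter(Matcher::find).map(Matcher::group).findFirst();
    } catch (IOException e) {
      return Optional.empty();
    }
  }

  public static void main(String[] args) {
    System.out.println(extract().orElse("not in a container (or id not found)"));
  }
}
```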
Should help with a sporadic Maven Central failure:
```
Could not determine the dependencies of task ':instrumentation:hibernate:hibernate-3.3:javaagent:test'.
> Could not resolve all task dependencies for configuration ':instrumentation:hibernate:hibernate-3.3:javaagent:testRuntimeClasspath'.
> Could not resolve javassist:javassist:+.
```
This PR adds auto-instrumentation support for the OpenSearch 1.x and 2.x
Java clients.
This is made possible by OpenTelemetry specification v1.14.0 and
OpenTelemetry Java SDK v1.19.0.
Testing is being done using
org.opensearch:opensearch-testcontainers:2.0.0
(https://github.com/opensearch-project/opensearch-testcontainers)
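For context, a hedged sketch of what the testcontainers setup looks like (the container class and methods are from the opensearch-testcontainers project as I understand it; treat the exact API as an assumption):

```java
// Class name per the opensearch-project/opensearch-testcontainers project;
// constructor/method details are assumptions.
import org.opensearch.testcontainers.OpensearchContainer;

public class OpenSearchTestSetup {
  public static void main(String[] args) {
    try (OpensearchContainer container =
        new OpensearchContainer("opensearchproject/opensearch:2.0.0")) {
      container.start();
      // Point the OpenSearch Java client under test at this address.
      String httpHostAddress = container.getHttpHostAddress();
      System.out.println("OpenSearch running at " + httpHostAddress);
    }
  }
}
```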
Resolves #7007
Signed-off-by: Cédric Pelvet <cedric.pelvet@gmail.com>
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
~I'll create a tracking issue to remove these and support new versions.~
Tracking issue added to support the latest Project Reactor 3.5.0 and
revert these limits: #7107
Improves the output when logging/debugging
`Span.current().getSpanContext()`, which currently looks like:
> ImmutableSpanContext{traceId=115a2de6dffb17eaafd13a66d7aec660,
spanId=56af5c30e85bfb08,
traceFlags=**io.opentelemetry.javaagent.instrumentation.opentelemetryapi.trace.BridgedTraceFlags@20ea6fa6**,
traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}
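The likely shape of the fix, sketched (assuming the bridged flags can delegate their string form to the hex representation; this is an assumption, not the actual patch):

```java
import io.opentelemetry.api.trace.TraceFlags;

// Illustrative only: override toString() to show the flags' hex value
// instead of Object's identity-based default ("ClassName@hashcode").
final class BridgedTraceFlagsSketch {
  private final TraceFlags delegate;

  BridgedTraceFlagsSketch(TraceFlags delegate) {
    this.delegate = delegate;
  }

  @Override
  public String toString() {
    return delegate.asHex(); // e.g. "01" when sampled
  }
}
```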
In the 10/27 Java SIG we discussed that it would be valuable to
enumerate the attributes reported for memory pool and gc metrics when
different GCs are used.
I've gone ahead and added a README for the runtime metrics which
includes detailed information on the attributes reported. Note that I
also have the same data for the gc metrics added in #6964 and #6963, but
will wait to add it until those PRs are merged.