This is another follow-up from #7616. This makes the test options class
immutable and uses `@AutoValue` and `@AutoValue.Builder`. As a result, a
bunch of the configuration/setup code for these options now flings
around a builder instance. This isn't great, but I think it's an
incremental improvement that can be seen in the `@BeforeAll
AbstractHttpClientTest.setupOptions()` method, where the immutable
options are (finally) instantiated.
I decided to split the changes across different PRs because there are a
lot of lines of test code here, and splitting should simplify the review
process. Another option is that I can add an additional commit to this PR
with the conversion of the other Groovy files.
As part of the discussion on #7616, the idea of trying a more piecemeal
approach came up. A reasonable ask.
This is the first step in refactoring the http client tests. It factors
out the `HttpClientResult` inner class of the `AbstractHttpClientTest`
so that it can be reused by the new test framework later. It also
factors the relevant abstract methods of the abstract class out into a
new type adapter, which will also be reused.
Co-authored-by: Mateusz Rzeszutek <mrzeszutek@splunk.com>
This PR includes updates to the SqlSanitizer, DbClientSpanNameExtractor
and SqlStatementInfo to name spans according to the procedure name for
CALL statements. The updates to the naming logic are in the SqlSanitizer,
and `table` has been renamed to `identifier`, since using the `table`
variable for the procedure name would not be idiomatic. SqlStatementInfo
has been updated
so that the db.sql.table attribute is not included for procedures.
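A minimal, self-contained sketch of the idea (not the project's actual
sanitizer or `DbClientSpanNameExtractor` logic; the regex and method names
are only illustrative): for a CALL statement, the span is named after the
procedure, and no `db.sql.table` attribute is set.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class CallSpanNameSketch {
  // very rough stand-in for the real sanitizer: pull the procedure name out of a CALL statement
  private static final Pattern CALL =
      Pattern.compile("^\\s*CALL\\s+([\\w.]+)", Pattern.CASE_INSENSITIVE);

  static String spanName(String statement, String fallback) {
    Matcher m = CALL.matcher(statement);
    if (m.find()) {
      return "CALL " + m.group(1); // e.g. "CALL BOOKS_BY_AUTHOR(?)" -> "CALL BOOKS_BY_AUTHOR"
    }
    return fallback; // non-CALL statements keep their existing naming
  }
}
```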
Solves #6991.
This PR implements the request portion of the new gRPC metadata
instrumentation spec:
https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/rpc.md#grpc-request-and-response-metadata
The changes include:
- new CommonConfig entry for the desired gRPC metadata values:
`otel.instrumentation.grpc.capture-metadata.request`
(similar to HTTP headers)
- setting the desired metadata values in GrpcTelemetry
- new property in GrpcAttributesExtractor that holds a reference to the
GrpcRpcAttributesGetter
- new property in GrpcAttributesExtractor that stores the desired values
so it can iterate them and extract each one from the request
- inject the GrpcRpcAttributesGetter to GrpcAttributesExtractor (in
GrpcTelemetryBuilder)
- logic in GrpcRpcAttributesGetter to safely extract the gRPC metadata
value
- A new test in GrpcTest that makes sure that when a certain metadata
key name is inserted, it also ends up in the span attributes
** Doesn't take care of the response, because gRPC response metadata
capture is not implemented in java-instrumentation yet. (This is
absolutely necessary but out of scope for this PR)
** "metadataValue" is only implemented inside GrpcRpcAttributesGetter
and not in RpcAttributesGetter to avoid providing implementations for
every RpcAttributesGetter in the repo as this PR only focuses on gRPC.
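To illustrate the intended javaagent usage (the jar path, the header name,
and the resulting attribute key here are assumptions; per the linked spec
the captured value should land in an attribute along the lines of
`rpc.grpc.request.metadata.x_custom_header`):

```
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.instrumentation.grpc.capture-metadata.request=x-custom-header \
     -jar app.jar
```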
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
Let's keep close to the SDK repo config.
I reverted some of the changes and only kept those that I think make
sense anyway (e.g. comparing enums with `==`).
For the okhttp-3.0 instrumentation, the README uses `OkHttpTracing`:
```java
import io.opentelemetry.instrumentation.okhttp.v3_0.OkHttpTracing;
...
return OkHttpTracing.builder(openTelemetry).build().newCallFactory(createClient());
```
#5624 changed `OkHttpTracing` to `OkHttpTelemetry`, but the docs still
show the previous name, which no longer works.
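Presumably the README snippet should now read (same call shape, just the
new name from #5624; not verified against the current docs):
```java
import io.opentelemetry.instrumentation.okhttp.v3_0.OkHttpTelemetry;
...
return OkHttpTelemetry.builder(openTelemetry).build().newCallFactory(createClient());
```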
Can be reproduced (at least on Windows) using
```
java -javaagent:opentelemetry-javaagent.jar anything C:\one
```
producing
```
[otel.javaagent 2023-01-13 11:38:47:978 -0800] [main] INFO io.opentelemetry.javaagent.tooling.VersionLogger - opentelemetry-javaagent - version: 1.22.0
OpenTelemetry Javaagent failed to start
java.nio.file.InvalidPathException: Illegal char <:> at index 10: anything C:\one
at java.base/sun.nio.fs.WindowsPathParser.normalize(WindowsPathParser.java:182)
at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:153)
at java.base/sun.nio.fs.WindowsPathParser.parse(WindowsPathParser.java:77)
at java.base/sun.nio.fs.WindowsPath.parse(WindowsPath.java:92)
at java.base/sun.nio.fs.WindowsFileSystem.getPath(WindowsFileSystem.java:229)
at java.base/java.nio.file.Path.of(Path.java:147)
at java.base/java.nio.file.Paths.get(Paths.java:69)
at io.opentelemetry.instrumentation.resources.JarServiceNameDetector.getJarPathFromSunCommandLine(JarServiceNameDetector.java:104)
at io.opentelemetry.instrumentation.resources.JarServiceNameDetector.createResource(JarServiceNameDetector.java:59)
at io.opentelemetry.sdk.autoconfigure.ResourceConfiguration.configureResource(ResourceConfiguration.java:59)
at io.opentelemetry.sdk.autoconfigure.AutoConfiguredOpenTelemetrySdkBuilder.build(AutoConfiguredOpenTelemetrySdkBuilder.java:332)
at io.opentelemetry.javaagent.tooling.OpenTelemetryInstaller.installOpenTelemetrySdk(OpenTelemetryInstaller.java:29)
at io.opentelemetry.javaagent.tooling.AgentInstaller.installBytebuddyAgent(AgentInstaller.java:114)
at io.opentelemetry.javaagent.tooling.AgentInstaller.installBytebuddyAgent(AgentInstaller.java:94)
at io.opentelemetry.javaagent.tooling.AgentStarterImpl.start(AgentStarterImpl.java:78)
at io.opentelemetry.javaagent.bootstrap.AgentInitializer.initialize(AgentInitializer.java:35)
at io.opentelemetry.javaagent.OpenTelemetryAgent.startAgent(OpenTelemetryAgent.java:57)
at io.opentelemetry.javaagent.OpenTelemetryAgent.premain(OpenTelemetryAgent.java:45)
```
Related to #7107 and #7202
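A minimal sketch of how the detector could skip invalid input instead of
crashing (an illustration only, not the actual `JarServiceNameDetector`
fix):

```java
import java.nio.file.InvalidPathException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;

class SafeJarPathParse {
  static Optional<Path> parse(String candidate) {
    try {
      return Optional.of(Paths.get(candidate));
    } catch (InvalidPathException e) {
      // "anything C:\one" is not a valid Windows path; give up on jar-based
      // service name detection instead of failing agent startup
      return Optional.empty();
    }
  }
}
```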
Support WebFlux 6.
Supporting reactor 3.5 seems pretty straightforward, the
`subscriberContext()` was deprecated in 3.4 in favor of
`contextWrite()`. In 3.5, `subscriberContext()` was removed.
This PR doesn't bump `latestDepTestLibrary` to 3.5 yet because there are
a couple of tests that succeed in 3.4 using `contextWrite()` but fail in
3.5 using `contextWrite()`.
My proposal is to review/merge this PR, and then I can ping our resident
reactor experts to see if they have thoughts on the failing tests in
3.5.
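The Reactor API change itself, in isolation (a sketch; the key and value
are placeholders):

```java
import reactor.core.publisher.Mono;

class ContextWriteMigration {
  static final String KEY = "otel-context";

  static Mono<String> tagged(Mono<String> source, Object value) {
    // Reactor 3.3/3.4: source.subscriberContext(ctx -> ctx.put(KEY, value)); (removed in 3.5)
    return source.contextWrite(ctx -> ctx.put(KEY, value)); // available since 3.4
  }
}
```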
There were so many changes in the tests that extracting a base class
wouldn't really improve the readability; so I just reimplemented them in
Java.
The instrumentation itself is pretty much a copy-paste of the `jms-1.1`
instrumentation, with `s/javax/jakarta/` applied.
Bumps [spotless-plugin-gradle](https://github.com/diffplug/spotless)
from 6.12.0 to 6.12.1.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="718a504c12"><code>718a504</code></a>
Published gradle/6.12.1</li>
<li><a
href="c13acee213"><code>c13acee</code></a>
Published lib/2.31.1</li>
<li><a
href="552aabf876"><code>552aabf</code></a>
fix(deps): update dependency com.facebook:ktfmt to v0.42 (<a
href="https://github-redirect.dependabot.com/diffplug/spotless/issues/1421">#1421</a>)</li>
<li><a
href="4063e9f6d1"><code>4063e9f</code></a>
Add support for KtLint 0.48.0 (<a
href="https://github-redirect.dependabot.com/diffplug/spotless/issues/1432">#1432</a>
fixes <a
href="https://github-redirect.dependabot.com/diffplug/spotless/issues/1430">#1430</a>)</li>
<li><a
href="062e835846"><code>062e835</code></a>
Bump changelogs.</li>
<li><a
href="8f7e00594d"><code>8f7e005</code></a>
spotlessApply</li>
<li><a
href="9a8ccae9ec"><code>9a8ccae</code></a>
Bump default ktfmt 0.41 -> 0.42</li>
<li><a
href="fb4277d2b1"><code>fb4277d</code></a>
Merge branch 'main-ktlint-0.48.0' into renovate/ver_ktfmt</li>
<li><a
href="b44d70d00a"><code>b44d70d</code></a>
Move changelog entries to the correct release.</li>
<li><a
href="b3d8e89002"><code>b3d8e89</code></a>
spotlessApply for 2023</li>
<li>Additional commits viewable in <a
href="https://github.com/diffplug/spotless/compare/gradle/6.12.0...gradle/6.12.1">compare
view</a></li>
</ul>
</details>
<br />
You can trigger a rebase of this PR by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mateusz Rzeszutek <mrzeszutek@splunk.com>
In
https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/7447
the injected resource is opened with the class loader's
getResourceAsStream. This works only in class loaders where
getResourceAsStream delegates to getResource, which is not the case for
all class loaders; for example, the Tomcat class loader does not do this.
Because of this we also need to instrument the class loader's
getResourceAsStream.
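Roughly why the delegation assumption holds for the default JDK class
loader but not for every class loader (a simplified rendering of the JDK
behavior, not the agent's instrumentation code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// The default JDK ClassLoader implements getResourceAsStream() by delegating to
// getResource(), so instrumenting getResource() is enough for loaders like this one.
class DelegatingClassLoader extends ClassLoader {
  @Override
  public InputStream getResourceAsStream(String name) {
    URL url = getResource(name); // injected resources are visible via the instrumented getResource()
    try {
      return url != null ? url.openStream() : null;
    } catch (IOException e) {
      return null;
    }
  }
}
// Class loaders such as Tomcat's override getResourceAsStream() without going
// through getResource(), which is why that method needs its own instrumentation.
```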
Changed them from standalone top-level attributes to keys under
`log4j.map_message.*` (similar to `log4j.context_data.*` and others).
Also adds docs for the log4j javaagent properties.
Also cleans up related tests.
Hopefully resolves
https://github.com/open-telemetry/opentelemetry-java-instrumentation/issues/7124
Our Kotlin coroutine instrumentation relies on a shaded copy of
`opentelemetry-extension-kotlin`. This doesn't work well when the
application also uses `opentelemetry-extension-kotlin`, because the
shaded and unshaded copies store the OpenTelemetry context under
different keys. This PR attempts to fix this by instrumenting the
`opentelemetry-extension-kotlin` provided by the application so that it
delegates to the one shaded inside the agent.
Co-authored-by: Mateusz Rzeszutek <mrzeszutek@splunk.com>
Will close #7216 after this is merged and that PR is rebased.
I tested, and without this it does bring a few more Scala versions into
IntelliJ, but Scala is an odd case.
Fixes #7265
I took a look at the new Observation API, and I think that it still
makes sense to continue using the interceptors to implement this
instrumentation: they implement the OTel spec (which includes way more
attributes than the default observation convention implemented in
Spring), and cooperate with the Kafka client instrumentation and link
the receive and process spans together. And it's quite a simple change
in one of our interceptors, instead of rewriting everything.
(Draft because Spring Boot 3 hasn't released yet, and it is required to
run the tests. If we're not in a hurry this PR can wait a bit for that)
closes: #7028
For the reasons explained in the linked issue, it might be handy to
register drivers directly against `OpenTelemetryDriver` rather than
against `DriverManager`. I decided to also go with a static registry, as
drivers are often instantiated by connection pools (like Agroal) and it
could be difficult to use instance variables. This PR adds an additional
`Driver` collection where drivers can be registered. If a driver is
registered with both `OpenTelemetryDriver` and `DriverManager`, the one
registered with `OpenTelemetryDriver` is preferred.
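A hypothetical usage sketch of the static registry described above (the
method name `addDriverCandidate`, the import path, and the choice of the
PostgreSQL driver are all assumptions):

```java
import io.opentelemetry.instrumentation.jdbc.OpenTelemetryDriver;

class DriverRegistration {
  static void register() {
    // hypothetical registration method on the new static Driver collection
    OpenTelemetryDriver.addDriverCandidate(new org.postgresql.Driver());
    // OpenTelemetryDriver can then resolve "jdbc:otel:postgresql://..." URLs via this
    // driver even if it was never registered with java.sql.DriverManager.
  }
}
```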
I thought I was going to need this for #7293, but it still seems like a
good change: it removes a bit of duplication across getHost and getPort,
and could be useful in the future if we want logic that grabs both host
and port in a "single pass".
When a webflux filter is added which throws an exception, the
instrumentation does not currently capture the `http.status_code`.
The fix is to move `WebClientTracingFilter` from the first to the last
filter in the chain, which I think(?) is the general strategy we've
taken for other client instrumentation, e.g. so that if a filter makes
another http call it won't be suppressed.
I don't love the test coverage I added, so let me know if you have any
better suggestions?
EDIT: btw, I did archaeology to confirm that behavior (adding to the
beginning of the chain) has been in place since the webflux
instrumentation was added originally
6f472a62a0 (diff-493ad89b5bde807c90387aa2bb67eb10d3bcef6b6a388bd31e11796a6d01ac38R36)
Run callbacks added to the `CompletableFuture` returned from `sendAsync`
with the context that was used when `sendAsync` was called.
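An illustration of the behavior described, assuming this refers to the JDK
`java.net.http.HttpClient` (the span/tracer names and the URL are
placeholders for the example):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class SendAsyncContextExample {
  static void run() {
    Tracer tracer = GlobalOpenTelemetry.getTracer("example");
    Span parent = tracer.spanBuilder("parent").startSpan();
    try (Scope ignored = parent.makeCurrent()) {
      HttpClient.newHttpClient()
          .sendAsync(
              HttpRequest.newBuilder(URI.create("http://localhost:8080/")).build(),
              HttpResponse.BodyHandlers.ofString())
          .thenAccept(
              response -> {
                // with this change, Span.current() in the callback belongs to the same
                // trace as "parent", i.e. the context active when sendAsync() was called
                Span.current().addEvent("received " + response.statusCode());
              });
    } finally {
      parent.end();
    }
  }
}
```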
Add test for capturing message headers.
Replaces #6362.
I've reduced the attributes to only record the gc name and the action
that was taken (i.e. I've removed the gc cause). If needed we can add
the cause later, but for now this should be sufficient to determine
total time spent in GC, and categorize time spent as stop the world or
parallel.
This resolves #6694.
We've been tracking the update to cgroup version support and want to get
ahead of the widespread usage. The surface of the existing
`ContainerResource` has not changed, but its internals have been
factored out to two "extractor" utilities -- one that understands cgroup
v1 and another for v2. v1 is attempted and, if successful, the result is
used. If v1 fails, then the `ContainerResource` will fall back to v2.
As mentioned in #6694, the approach taken in this PR is borrowed from
[this SO
post](https://stackoverflow.com/questions/68816329/how-to-get-docker-container-id-from-within-the-container-with-cgroup-v2)
combined with local experimentation with Docker Desktop on a Mac, which
already uses cgroup v2.
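A minimal sketch of the v1-then-v2 fallback described above (the extractor
interfaces are placeholders, not the actual internal classes):

```java
import java.util.Optional;

class ContainerIdResolution {
  static Optional<String> resolveContainerId(CgroupV1Extractor v1, CgroupV2Extractor v2) {
    Optional<String> id = v1.extractContainerId(); // try cgroup v1 first
    return id.isPresent() ? id : v2.extractContainerId(); // otherwise fall back to cgroup v2
  }

  interface CgroupV1Extractor {
    Optional<String> extractContainerId();
  }

  interface CgroupV2Extractor {
    Optional<String> extractContainerId();
  }
}
```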
Should help with a sporadic Maven Central failure:
```
Could not determine the dependencies of task ':instrumentation:hibernate:hibernate-3.3:javaagent:test'.
> Could not resolve all task dependencies for configuration ':instrumentation:hibernate:hibernate-3.3:javaagent:testRuntimeClasspath'.
> Could not resolve javassist:javassist:+.
```
This PR adds auto-instrumentation support for the OpenSearch 1.x and 2.x
Java clients.
This is made possible by OpenTelemetry specification v1.14.0 and
OpenTelemetry Java SDK v1.19.0.
Testing is being done using
org.opensearch:opensearch-testcontainers:2.0.0
(https://github.com/opensearch-project/opensearch-testcontainers)
Resolves #7007
Signed-off-by: Cédric Pelvet <cedric.pelvet@gmail.com>
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
~I'll create a tracking issue to remove these and support new versions.~
Tracking issue added to support latest project reactor 3.5.0 and revert
these limits: #7107
To improve the output when logging/debugging
`Span.current().getSpanContext()`, which currently looks like:
> ImmutableSpanContext{traceId=115a2de6dffb17eaafd13a66d7aec660,
spanId=56af5c30e85bfb08,
traceFlags=**io.opentelemetry.javaagent.instrumentation.opentelemetryapi.trace.BridgedTraceFlags@20ea6fa6**,
traceState=ArrayBasedTraceState{entries=[]}, remote=false, valid=true}
In the 10/27 Java SIG we discussed that it would be valuable to
enumerate the attributes reported for memory pool and GC metrics when
different GCs are used.
I've gone ahead and added a README for the runtime metrics which
includes detailed information on the attributes reported. Note that I
also have the same data for the GC metrics added in #6964 and #6963, but
will wait to add it until those PRs are merged.
I suspect that this was added in the original RocketMQ instrumentation
because it existed in the Kafka instrumentation, and not because there
was a need for it(?)
See #6957 for documentation on why it is needed in Kafka
Runtime metrics don't include the meter version. This adds it via the
utility method in instrumentation-api,
`EmbeddedInstrumentationProperties.findVersion`. I know I could read the
properties file for this module directly, but it's repetitive to
implement that in many places.
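A sketch of the approach described, assuming the helper lives in the
internal instrumentation-api package and using a placeholder
instrumentation name:

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.metrics.Meter;
import io.opentelemetry.api.metrics.MeterBuilder;
import io.opentelemetry.instrumentation.api.internal.EmbeddedInstrumentationProperties;

class RuntimeMetricsMeter {
  static Meter create(OpenTelemetry openTelemetry) {
    String name = "io.opentelemetry.runtime-metrics"; // placeholder instrumentation name
    String version = EmbeddedInstrumentationProperties.findVersion(name); // null if not found
    MeterBuilder builder = openTelemetry.meterBuilder(name);
    if (version != null) {
      builder.setInstrumentationVersion(version);
    }
    return builder.build();
  }
}
```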
This may be a regression in 1.19.0, because you can no longer reconstruct
the original URL for netty server spans (previously `http.host` was
captured, which could be used).
While I was looking at issues
open-telemetry/opentelemetry-java-instrumentation#6694 and
open-telemetry/opentelemetry-java#2337, I saw that the code in
`io.opentelemetry.instrumentation.resources.ContainerResource` used
`null` several times as a return value, which isn't safe. Nowadays,
`Optional` is better suited to signaling the absence of a result, so I
refactored `ContainerResource` to use `Optional`s instead of null.
Along the way, I also refactored this class's unit tests into
parameterised tests to reduce test code duplication. These improvements
should help implement a solution to
open-telemetry/opentelemetry-java-instrumentation#6694.
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
Calling `Mono#timeout()` with a timeout value smaller than the HTTP
client timeout caused the on-request/response-end callbacks to be simply
discarded, and the HTTP span was never finished.
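A repro sketch of the scenario described, assuming the reactor-netty
`HttpClient` is the client in question (the URL and timeout values are
placeholders):

```java
import java.time.Duration;
import java.util.concurrent.TimeoutException;
import reactor.core.publisher.Mono;
import reactor.netty.http.client.HttpClient;

class MonoTimeoutRepro {
  static void run() {
    HttpClient client = HttpClient.create().responseTimeout(Duration.ofSeconds(10));
    client
        .get()
        .uri("http://localhost:8080/slow")
        .responseSingle((response, body) -> body.asString())
        .timeout(Duration.ofMillis(100)) // shorter than the client's own timeout
        .onErrorResume(TimeoutException.class, e -> Mono.empty())
        .block();
    // before this fix, the callbacks that end the HTTP client span were discarded
    // when the Mono timed out, leaving the span unfinished
  }
}
```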
Working PR to capture all the changes required to update to OTel Java
1.19.0. The new log API allows replacing
`:instrumentation-appender-api-internal` and
`:instrumentation-appender-sdk-internal`, but necessitates a decent
amount of refactoring as a result.
The PR points at the `1.19.0-SNAPSHOT`, which I'll update upon
publication.
Co-authored-by: Mateusz Rzeszutek <mrzeszutek@splunk.com>
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
Co-authored-by: Lauri Tulmin <ltulmin@splunk.com>
Related to #6734.
This first stage splits out the shared utilities in
`:instrumentation:netty:netty-common`. I'll follow it up by splitting
out `:instrumentation:netty:netty-4-common`,
`:instrumentation:netty:netty-4.1` in separate PRs. If there is
appetite, I can also split out library instrumentation for
`:instrumentation:netty:netty-4.0` and
`:instrumentation:netty:netty-3.8`, though I have no need for these.
Resolves #6779
In JMS you can have either the consumer receive span or the consumer
process span (unlike Kafka, where the process span is always there and
the receive span is just an addition) - in scenarios where polling
(receive) is used, I think it makes sense to add links to the producer
span to preserve the producer-consumer connection. Current messaging
semantic conventions don't really describe a situation like this one,
but the https://github.com/open-telemetry/oteps/pull/220 OTEP mentions
that links might be used in a scenario like this one, which makes me
think that adding links here might not be such a bad idea.
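A rough sketch of what "adding links" could look like for a polling
receive span; the map-based carrier and the getter below are only
placeholders for however the producer context is actually propagated in
the JMS message properties:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanContext;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapGetter;
import java.util.Map;

class ReceiveSpanLinkSketch {
  // placeholder carrier: a real implementation would read these from the JMS Message
  private static final TextMapGetter<Map<String, String>> GETTER =
      new TextMapGetter<Map<String, String>>() {
        @Override
        public Iterable<String> keys(Map<String, String> carrier) {
          return carrier.keySet();
        }

        @Override
        public String get(Map<String, String> carrier, String key) {
          return carrier == null ? null : carrier.get(key);
        }
      };

  static Span startReceiveSpan(Map<String, String> messageProperties) {
    Context extracted =
        GlobalOpenTelemetry.getPropagators()
            .getTextMapPropagator()
            .extract(Context.root(), messageProperties, GETTER);
    SpanContext producerSpanContext = Span.fromContext(extracted).getSpanContext();
    return GlobalOpenTelemetry.getTracer("example")
        .spanBuilder("queue receive")
        .setSpanKind(SpanKind.CONSUMER)
        .addLink(producerSpanContext) // preserve the producer-consumer connection
        .startSpan();
  }
}
```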