In the 10/27 Java SIG we discussed that it would be valuable to
enumerate the attributes reported for memory pool and GC metrics when
different GCs are used.
I've gone ahead and added a README for the runtime metrics which
includes detailed information on the attributes reported. Note that I
also have the same data for GC metrics added in #6964 and #6963, but
will wait to add it until those PRs are merged.
I suspect that this was added in the original RocketMQ instrumentation
because it existed in the Kafka instrumentation, and not because there
was a need for it (?).
See #6957 for documentation on why it is needed in Kafka.
Runtime metrics doesn't include the meter version. This adds it via the
instrumentation-api utility method
`EmbeddedInstrumentationProperties.findVersion`. I know I could read the
properties file for this module directly, but it's repetitive to
implement that in many places.
This may be a regression in 1.19.0: you can no longer reconstruct the
original URL for Netty server spans (previously the `http.host`
attribute was captured, which could be used for this).
While I was looking at issues
open-telemetry/opentelemetry-java-instrumentation#6694 and
open-telemetry/opentelemetry-java#2337, I saw that the code in
`io.opentelemetry.instrumentation.resources.ContainerResource` used
`null` several times as a return value, which isn't safe. Nowadays,
`Optional` is better suited to signal the absence of a result, so I
refactored `ContainerResource` to use `Optional`s instead of `null`.
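For illustration, a minimal sketch of the null-to-`Optional` pattern; the class and method names here are hypothetical and the extraction logic is simplified, not the actual `ContainerResource` code:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class ContainerIdSketch {

  private static final Pattern ID_PATTERN = Pattern.compile("[0-9a-f]{64}");

  // Before: a @Nullable String return forced every caller to remember the null check.
  // After: Optional makes "no container id found" explicit in the method signature.
  static Optional<String> getContainerId(String cgroupLine) {
    Matcher matcher = ID_PATTERN.matcher(cgroupLine);
    return matcher.find() ? Optional.of(matcher.group()) : Optional.empty();
  }

  private ContainerIdSketch() {}
}
```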
On the way, I also refactored this class's unit tests into parameterised
tests to reduce test code duplication. These improvements should help
with implementing a solution to
open-telemetry/opentelemetry-java-instrumentation#6694.
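And a sketch of how the parameterised tests could look with JUnit 5, reusing the hypothetical helper from the sketch above:

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.Optional;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class ContainerIdSketchTest {

  // One parameterized test replaces a family of near-identical test methods.
  @ParameterizedTest
  @CsvSource({
    "13:name=systemd:/docker/0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef, 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
    "13:name=systemd:/podruntime/docker,", // no id -> empty Optional
  })
  void extractsContainerId(String line, String expectedId) {
    assertThat(ContainerIdSketch.getContainerId(line))
        .isEqualTo(Optional.ofNullable(expectedId));
  }
}
```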
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
Calling `Mono#timeout()` with a timeout value smaller than the HTTP
client timeout caused the request/response end callbacks to simply be
discarded, and the HTTP span was never finished.
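For context, a minimal sketch of the failing scenario, assuming a reactor-netty `HttpClient`; the endpoint and timeout values are made up:

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.netty.http.client.HttpClient;

class MonoTimeoutSketch {

  static Mono<String> slowCall() {
    HttpClient client = HttpClient.create()
        // client-level timeout
        .responseTimeout(Duration.ofSeconds(10));

    return client.get()
        .uri("http://localhost:8080/slow")
        .responseContent()
        .aggregate()
        .asString()
        // Mono-level timeout shorter than the client timeout: the subscription is
        // cancelled before the request/response end callbacks run, so the HTTP span
        // was left unfinished.
        .timeout(Duration.ofMillis(100));
  }
}
```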
Working PR to capture all the changes required to update to OTel Java
1.19.0. The new log API allows removing
`:instrumentation-appender-api-internal` and
`:instrumentation-appender-sdk-internal`, but necessitates a decent
amount of refactoring as a result.
The PR points at the `1.19.0-SNAPSHOT`, which I'll update upon
publication.
Co-authored-by: Mateusz Rzeszutek <mrzeszutek@splunk.com>
Co-authored-by: Trask Stalnaker <trask.stalnaker@gmail.com>
Co-authored-by: Lauri Tulmin <ltulmin@splunk.com>
Related to #6734.
This first stage splits out the shared utilities in
`:instrumentation:netty:netty-common`. I'll follow it up by splitting
out `:instrumentation:netty:netty-4-common` and
`:instrumentation:netty:netty-4.1` in separate PRs. If there is
appetite, I can also split out library instrumentation for
`:instrumentation:netty:netty-4.0` and
`:instrumentation:netty:netty-3.8`, though I have no need for these.
Resolves #6779
In JMS you can have either the consumer receive span or the consumer
process span (unlike Kafka, where the process span is always there and
the receive span is just an addition). In scenarios where polling
(receive) is used, I think it makes sense to add links to the producer
span to preserve the producer-consumer connection. Current messaging
semantic conventions don't really describe a situation like this one,
but OTEP https://github.com/open-telemetry/oteps/pull/220 mentions that
links might be used in such a scenario - which makes me think that
adding links here is not a bad idea.
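For illustration, a rough sketch of adding such a link with the plain OpenTelemetry API; the carrier and getter below are simplified stand-ins, not the actual JMS instrumentation code:

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanBuilder;
import io.opentelemetry.api.trace.SpanContext;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapGetter;
import java.util.Collections;
import java.util.Map;

class JmsReceiveLinkSketch {

  // Stand-in for reading the propagation headers from the JMS message properties.
  private static final TextMapGetter<Map<String, String>> GETTER =
      new TextMapGetter<Map<String, String>>() {
        @Override
        public Iterable<String> keys(Map<String, String> carrier) {
          return carrier.keySet();
        }

        @Override
        public String get(Map<String, String> carrier, String key) {
          return carrier == null ? null : carrier.get(key);
        }
      };

  static Span startReceiveSpan(OpenTelemetry openTelemetry, Tracer tracer) {
    // In the real instrumentation the carrier would be the JMS message properties.
    Map<String, String> messageProperties = Collections.emptyMap();

    Context producerContext =
        openTelemetry
            .getPropagators()
            .getTextMapPropagator()
            .extract(Context.root(), messageProperties, GETTER);

    SpanContext producerSpanContext = Span.fromContext(producerContext).getSpanContext();

    SpanBuilder builder = tracer.spanBuilder("queue receive").setSpanKind(SpanKind.CONSUMER);
    if (producerSpanContext.isValid()) {
      // Link the receive span back to the producer span instead of making it a child.
      builder.addLink(producerSpanContext);
    }
    return builder.startSpan();
  }
}
```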