@trask @mateuszrzeszutek hello, what do you think about
SingleThreadedSpringWebfluxTest? The test depends on new reactor netty
classes in the testLatestDeps case. I tried using reflection to rewrite
the test in Java, but it was not trivial and I did not get it working.
---------
Co-authored-by: Lauri Tulmin <ltulmin@splunk.com>
Currently we run spotless and other checks for each of the parallel test
steps, which seems wasteful. Here is an attempt to run only the tests in
the given partition, without any extra checks, in the `test` step and to
run all the checks in the `build` step.
Resolves #7437.
A few caveats about this. The TL;DR on #7437 is that a non-containerized
process was reporting a `container.id` attribute. The submitter narrowed
it down and I was able to confirm with the test case in this PR.
I hunted for other means for code to determine whether it's containerized,
with the idea of skipping the parsing entirely when not in a container,
but I couldn't find anything useful. In fact, most approaches to detecting
containerization involve parsing cgroups anyway. Wacky.
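For reference, that parsing amounts to something like the sketch below. This is only an illustration of the general technique; the file path, line handling, and class name are assumptions, not the agent's actual implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative only. Lines in /proc/self/cgroup look roughly like
//   12:memory:/docker/<64-char hex id>
//   0::/system.slice/docker-<64-char hex id>.scope
// so "is this process containerized?" ends up meaning "does any cgroup path
// segment look like a container id?" -- there is no cheaper flag to check first.
public final class CgroupLines {

  /** Returns the last path segment of every line in /proc/self/cgroup. */
  public static List<String> lastPathSegments() throws IOException {
    try (Stream<String> lines = Files.lines(Paths.get("/proc/self/cgroup"))) {
      return lines
          .map(line -> line.substring(line.lastIndexOf('/') + 1))
          .collect(Collectors.toList());
    }
  }

  private CgroupLines() {}
}
```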
So I attempted to verify that container IDs should always be 64
characters (a small validation sketch follows the list below). I found:
* podman - docs
[here](https://docs.podman.io/en/latest/markdown/podman-container-inspect.1.html)
"Container ID (full 64-char hash)"
* docker - UID generator source
[here](634a848b8e/pkg/stringid/stringid.go (L36))
shows 32 random bytes, i.e. 64 hex characters (and even guards against fully numeric IDs!)
* lxc [man page](https://linuxcontainers.org/lxc/manpages/man1/lxc-info.1.html)
says "container identifier format is an alphanumeric string". If this maps
into cgroups (no idea!), it would have already been broken in some cases
because we enforce hex.
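To make the "full 64-char hash" idea concrete, here is a minimal validation sketch; the class/method names and the systemd-style prefix/suffix stripping are illustrative assumptions, not the code in this PR.

```java
import java.util.Optional;
import java.util.regex.Pattern;

// Hypothetical helper illustrating the "container ids are always 64 hex chars"
// assumption discussed above; not the agent's actual API.
public final class ContainerIdValidator {

  private static final Pattern FULL_CONTAINER_ID = Pattern.compile("[0-9a-f]{64}");

  /** Returns the id only if the cgroup path segment contains a full 64-char lowercase hex id. */
  public static Optional<String> requireFullId(String cgroupPathSegment) {
    // systemd-style segments such as "docker-<id>.scope" carry a prefix and suffix
    String candidate =
        cgroupPathSegment.replaceAll("\\.scope$", "").replaceAll("^[a-z]+-", "");
    return FULL_CONTAINER_ID.matcher(candidate).matches()
        ? Optional.of(candidate)
        : Optional.empty();
  }

  private ContainerIdValidator() {}
}
```

Whether the prefix/suffix stripping is needed depends on the runtime; the essential part here is rejecting anything that isn't a full 64-character hex string.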
I'm a little concerned about this approach because the [otel
spec](94c9c75c4f/specification/resource/semantic_conventions/container.md)
notes that "The UUID might be abbreviated", but it's unclear/non-specific
about the circumstances that might cause this.
Open to hearing about why the approach presented here is a bad idea. 🙃
Related discussion #7257
Resolves #3413
Resolves #5059
Resolves #6258
Resolves #7179
Adds a logging implementation that collects agent logs in memory until
slf4j is detected in the instrumented application; once that happens, it
dumps all the collected logs into the application's slf4j and logs directly
to the application logger from that point on.
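Roughly, the buffer-then-replay idea is the sketch below; the class and method names are invented for illustration and do not match the classes added in this PR.

```java
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch of "buffer in memory until slf4j is available, then replay";
// names and structure are invented and do not match this PR's implementation.
final class BufferingAgentLogger {

  private final String loggerName;
  private final List<String> buffered = new ArrayList<>();
  private Logger delegate; // stays null until slf4j is detected in the application

  BufferingAgentLogger(String loggerName) {
    this.loggerName = loggerName;
  }

  synchronized void log(String message) {
    if (delegate != null) {
      delegate.info(message);
    } else {
      buffered.add(message); // collect until the application logger exists
    }
  }

  /** Called once the instrumented application has initialized slf4j. */
  synchronized void onApplicationSlf4jReady() {
    delegate = LoggerFactory.getLogger(loggerName);
    for (String message : buffered) {
      delegate.info(message); // dump everything collected so far
    }
    buffered.clear();
  }
}
```

A real implementation would also have to buffer the log level, logger name, and throwable per event; the single string here is just to keep the sketch short.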
It's still in a POC state, unfortunately: while it works fine with an
app that uses and initializes slf4j directly, Spring Boot applications
actually reconfigure the logging implementation (e.g. logback) a while
after slf4j is loaded, which causes all the startup agent logs (debug
included) to be dumped with the default logback pattern.
Future work:
* ~~Make sure all logs produced by the agent are sent to loggers named
`io.opentelemetry...`
(https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/7446)~~
DONE
* Make this work on Spring Boot
* Documentation
* Smoke test?