Also move JDBC classes to the bootstrap class loader to reduce the size and complexity of those reference checkers.
These changes reduce the total file size of these instrumentation classes by 635k, which should also result in decent memory savings.
Pulls out utility classes for reuse by other projects.
This also meant the dependency had to be bundled with dd-trace-ot since it isn't published as a separate dependency.
The MuzzlePlugin Groovy check verifies that no threads are spawned, because a spawned thread holds the ClassLoader live.
This was breaking with the caching change because the cache no longer uses the Cleaner service.
This caused a problem because the Thread behind the Cleaner is created lazily when the first task is scheduled, but without the cache using it, that creation was delayed.
To solve this, I addressed the original cause of the leak. The newly created Thread automatically inherits the contextClassLoader of its parent, but that's unnecessary for a cleaner thread.
So I changed the ThreadFactory for the Cleaner to explicitly null out the contextClassLoader.
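A minimal sketch of that factory, assuming an illustrative class and thread name rather than the actual dd-trace-java code:

```java
import java.util.concurrent.ThreadFactory;

// Sketch only: class and thread names are illustrative.
public final class CleanerThreadFactory implements ThreadFactory {
  @Override
  public Thread newThread(Runnable runnable) {
    Thread thread = new Thread(runnable, "dd-cleaner");
    thread.setDaemon(true);
    // New threads inherit the parent's contextClassLoader by default;
    // nulling it out keeps the long-lived cleaner thread from pinning it.
    thread.setContextClassLoader(null);
    return thread;
  }
}
```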
We should probably null out contextClassLoader in other thread factories and also reduce our use of contextClassLoaders in general, but that is left to another PR.
First pass at replacing ID generation with WeakReference reuse
In this first version, the Cache<ClassLoader, ID> was replaced with Cache<ClassLoader, WeakReference<ClassLoader>>.
The core cache is still a Cache<TypeCacheKey, TypePool.Resolution>, and TypeCacheKey logically remains a composite key of (ClassLoader, class name).
The removal of ID assignment means ID exhaustion is no longer an issue, so there's never a need to rebuild the cache. For that reason, CacheInstance has been removed and the core caching logic has been moved into DDCachingPoolStrategy.
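As a rough sketch of the recycled-reference lookup described above, assuming a Guava cache with weak (identity) keys; the surrounding class name is illustrative:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.lang.ref.WeakReference;
import java.util.concurrent.ExecutionException;

// Sketch: one canonical WeakReference per ClassLoader, reused by every
// cache key built for that loader.
final class LoaderRefCache {
  private final Cache<ClassLoader, WeakReference<ClassLoader>> loaderRefCache =
      CacheBuilder.newBuilder().weakKeys().build();

  WeakReference<ClassLoader> canonicalRef(final ClassLoader loader) throws ExecutionException {
    // Guava's weakKeys() uses identity equality, which is what we want
    // for ClassLoaders, and lets the loader itself be collected.
    return loaderRefCache.get(loader, () -> new WeakReference<>(loader));
  }
}
```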
While TypeCacheKey remains conceptually the same, the internals have changed somewhat. The TypeCacheKey now has 3 core fields...
- loaderHash
- loaderRef
- class name
Since loader refs are recycled, the fast path for key equivalence can use reference equivalence of the reference objects.
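A hedged sketch of such a key -- the three fields come from the list above, but the method bodies are illustrative rather than the exact internals:

```java
import java.lang.ref.WeakReference;

final class TypeCacheKey {
  private final int loaderHash;
  private final WeakReference<ClassLoader> loaderRef;
  private final String className;

  TypeCacheKey(int loaderHash, WeakReference<ClassLoader> loaderRef, String className) {
    this.loaderHash = loaderHash;
    this.loaderRef = loaderRef;
    this.className = className;
  }

  @Override
  public int hashCode() {
    return 31 * loaderHash + className.hashCode();
  }

  @Override
  public boolean equals(Object obj) {
    if (!(obj instanceof TypeCacheKey)) return false;
    TypeCacheKey that = (TypeCacheKey) obj;
    if (loaderHash != that.loaderHash || !className.equals(that.className)) return false;
    // Fast path: recycled refs mean keys for the same loader normally
    // share the same WeakReference object.
    if (loaderRef == that.loaderRef) return true;
    // Slow path: fall back to comparing referents, which may be collected.
    ClassLoader thisLoader = loaderRef.get();
    return thisLoader != null && thisLoader == that.loaderRef.get();
  }
}
```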
This change ripples through the CacheProviders, which also have to store loaderHash and loaderRef.
It may be worth going a step further and switching to a Cache<Loader, TypePool> as well. That would still avoid the creation of many WeakReferences, since the underlying CacheProvider will hold a canonical WeakReference per ClassLoader.
This change overhauls the core type cache
The new approach aims to achieve several things...
1 - cache is strictly bounded -- no variance with the number of classes or ClassLoaders
2 - cache is significantly smaller
3 - cache doesn't compromise start-up time
4 - primary eviction policy isn't driven by time
5 - primary eviction policy isn't driven by GC
There are some slight compromises here.
In practice, start-up does increase slightly in a memory-rich environment; however, start-up improves considerably in a memory-poor environment.
The basic approach is to have a single unified Guava cache for all ClassLoaders -- nominally keyed by a composite of ClassLoader & class name.
The ByteBuddy CacheProviders are simply thin wrappers around the Guava cache associated with a particular ClassLoader.
However, rather than having a large number of WeakReferences floating around, the cache assigns an ID to each ClassLoader.
To further avoid consuming memory, the cache only preserves a small map of loader/ID assignments. This means a ClassLoader may have more than one active ID.
This introduces the possibility of ID exhaustion. That unlikely case is handled by retiring the internal CacheInstance and starting anew.
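An illustrative sketch tying these pieces together -- the CompositeKey type and all names here are assumptions, not the actual internals:

```java
import com.google.common.cache.Cache;
import net.bytebuddy.pool.TypePool;

// Sketch: each ClassLoader gets a thin CacheProvider view onto one
// shared, strictly bounded Guava cache.
final class SharedCacheProvider implements TypePool.CacheProvider {
  private final long loaderId; // ID assigned to this ClassLoader
  private final Cache<CompositeKey, TypePool.Resolution> sharedCache;

  SharedCacheProvider(long loaderId, Cache<CompositeKey, TypePool.Resolution> sharedCache) {
    this.loaderId = loaderId;
    this.sharedCache = sharedCache;
  }

  @Override
  public TypePool.Resolution find(String name) {
    return sharedCache.getIfPresent(new CompositeKey(loaderId, name));
  }

  @Override
  public TypePool.Resolution register(String name, TypePool.Resolution resolution) {
    sharedCache.put(new CompositeKey(loaderId, name), resolution);
    return resolution;
  }

  @Override
  public void clear() {
    // No-op: eviction is handled by the shared cache's size bound,
    // not by per-loader clearing.
  }

  // Minimal composite key of (loader ID, class name).
  static final class CompositeKey {
    final long loaderId;
    final String className;

    CompositeKey(long loaderId, String className) {
      this.loaderId = loaderId;
      this.className = className;
    }

    @Override
    public int hashCode() {
      return 31 * Long.hashCode(loaderId) + className.hashCode();
    }

    @Override
    public boolean equals(Object o) {
      if (!(o instanceof CompositeKey)) return false;
      CompositeKey other = (CompositeKey) o;
      return loaderId == other.loaderId && className.equals(other.className);
    }
  }
}
```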
The flow for context propagation is as follows.
* <p>We inject into the StreamRemoteCall constructor used for invoking remote tasks. The injected
* code performs a backwards-compatible check to determine whether the other side is prepared to
* receive context propagation messages and, if so, sends a context propagation message
*
* <p>Context propagation consists of a serialized HashMap with all data set by the usual context
* injection, which includes things like the sampling priority and the trace and parent ids
*
* <p>Optional baggage items are included as well
*
* <p>On the other side of the communication, a special Dispatcher is created when a message with
* DD_CONTEXT_CALL_ID is received.
*
* <p>If the server is not instrumented, the first call will gracefully fail just like any other
* unknown call, with the small caveat that this first call needs to *not* have any parameters.
* Those parameters will not be read from the connection and will instead be interpreted as another
* remote instruction; that instruction will essentially be garbage data and will cause the parsing
* loop to throw an exception and shut down the connection, which we do not want
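For illustration, a minimal sketch of building such a payload; the key names follow the usual Datadog HTTP injection convention, and whether RMI reuses exactly these keys is an assumption:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

// Sketch: the usual context-injection fields plus optional baggage,
// serialized as a plain HashMap.
final class ContextPayloadSketch {
  static byte[] serializeContext(
      long traceId, long parentId, int samplingPriority, Map<String, String> baggage)
      throws IOException {
    HashMap<String, String> context = new HashMap<>();
    context.put("x-datadog-trace-id", Long.toString(traceId));
    context.put("x-datadog-parent-id", Long.toString(parentId));
    context.put("x-datadog-sampling-priority", Integer.toString(samplingPriority));
    // Optional baggage items ride along under a distinguishing prefix.
    for (Map.Entry<String, String> item : baggage.entrySet()) {
      context.put("ot-baggage-" + item.getKey(), item.getValue());
    }
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
      out.writeObject(context);
    }
    return bytes.toByteArray();
  }
}
```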
The problem was that on Zulu 8, loading OkHttp touches JFR, which in turn
touches the log manager - which would break things like JBoss.
The fix is to delay installing the agent (and writer) until log manager
things have settled down - in a way similar to jmxfetch.
Unfortunately, for the 'main' agent this turns out to be more involved
because of classloader shenanigans.
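As a hedged sketch of the gating idea only -- the class and method names are hypothetical, and the real agent bootstrap is more involved:

```java
import java.lang.reflect.Method;

// Hypothetical sketch: if the app requests a custom log manager that
// hasn't loaded yet, defer installing the agent rather than risk
// initializing logging too early.
final class LogManagerGate {
  static boolean safeToInstallAgent() {
    String customLogManager = System.getProperty("java.util.logging.manager");
    if (customLogManager == null) {
      return true; // no custom log manager requested; install immediately
    }
    return isClassLoaded(customLogManager, ClassLoader.getSystemClassLoader());
  }

  private static boolean isClassLoaded(String className, ClassLoader loader) {
    try {
      // findLoadedClass is protected; reflection reaches it on Java 8.
      Method findLoadedClass =
          ClassLoader.class.getDeclaredMethod("findLoadedClass", String.class);
      findLoadedClass.setAccessible(true);
      return findLoadedClass.invoke(loader, className) != null;
    } catch (Exception e) {
      return false; // assume not loaded if we cannot tell
    }
  }
}
```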
Disabled by default; only creates a span if an existing trace is detected.
To enable all of them:
* System Property: `-Ddd.integration.servlet.enabled=true`
* Environment Variable: `DD_INTEGRATION_SERVLET_ENABLED=true`
(They have independent configs as well. If needed, view the source below.)
For Spring:
* Move more logic to the decorator.
* Use a fixed operation name, but set the resource name.
* Rename the root span instead of the parent span (if there are other spans in between, this could make a difference). Not sure what impact this would have if multiple controllers are called (i.e., forward/include).
For JAX-RS:
* Rename the root span instead of the parent span (same concern as above with Spring)
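As an illustrative sketch of both naming decisions -- the decorator method, the fixed operation name, and the assumption that the caller has already resolved the local root span are all hypothetical:

```java
import io.opentracing.Span;

// Sketch: keep a fixed operation name per framework and let the matched
// handler drive the resource name, applied to the local root span rather
// than the immediate parent.
final class HandlerDecoration {
  static void decorate(Span localRootSpan, String handlerName) {
    localRootSpan.setOperationName("spring.handler"); // fixed operation name (illustrative)
    localRootSpan.setTag("resource.name", handlerName); // Datadog resource-name tag
  }
}
```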