Also move jdbc classes to bootstrap to reduce size and complexity of those reference checkers.
These changes reduce the total file size of these instrumentation classes by 635k, which should also result in decent memory savings.
Pulls out utility classes for reuse by other projects.
This also meant the dependency had to be bundled with dd-trace-ot since it isn't published as a separate dependency.
The MuzzlePlugin Groovy check verifies that no threads are spawned, because a spawned thread holds the ClassLoader live.
This check started breaking with the caching change because the cache no longer uses the Cleaner service.
That exposed a problem: the Thread behind the Cleaner is created lazily when the first task is scheduled, and without the cache scheduling tasks, that creation was delayed.
To solve this, I addressed the original cause of the leak. The newly created Thread automatically inherits the contextClassLoader of its parent, but that's unnecessary for a cleaner thread.
So I changed the ThreadFactory for cleaner to explicitly null out the contextClassLoader.
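The fix can be sketched as a ThreadFactory along these lines (class and thread names here are illustrative, not the actual dd-trace-java code):

```java
import java.util.concurrent.ThreadFactory;

// Illustrative sketch: a ThreadFactory that clears the inherited
// contextClassLoader so the cleaner thread does not pin its parent's loader.
public final class CleanerThreadFactory implements ThreadFactory {
  @Override
  public Thread newThread(Runnable task) {
    Thread thread = new Thread(task, "dd-cleaner");
    // New threads inherit the parent's contextClassLoader by default;
    // null it out so the parent's loader can be garbage collected.
    thread.setContextClassLoader(null);
    thread.setDaemon(true);
    return thread;
  }
}
```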
We should probably null out the contextClassLoader in other thread factories and also reduce our use of contextClassLoaders in general, but that will be left to another PR.
First pass at replacing ID generation with WeakReference reuse
In this first version, the Cache<ClassLoader, ID> was replaced with Cache<ClassLoader, WeakReference<ClassLoader>>.
The core cache is still a Cache<TypeCacheKey, TypePool.Resolution>, and TypeCacheKey logically remains a composite key of (ClassLoader, class name).
The removal of ID assignment means ID exhaustion is no longer an issue, so there's never a need to rebuild the cache. For that reason, CacheInstance has been removed and the core caching logic has been moved into DDCachingPoolStrategy.
While TypeCacheKey remains conceptually the same, the internals have changed somewhat. The TypeCacheKey now has 3 core fields...
- loaderHash
- loaderRef
- class name
Since loader refs are recycled, the fast path for key equivalence can use reference equivalence of the reference objects.
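A hedged sketch of what that fast path could look like, using the three fields listed above (this helper class is illustrative, not the actual implementation):

```java
import java.lang.ref.WeakReference;

// Illustrative sketch: because loader WeakReferences are recycled, two keys
// for the same live ClassLoader normally share the same reference object,
// so equals() can short-circuit on reference identity.
final class TypeCacheKey {
  private final int loaderHash;
  private final WeakReference<ClassLoader> loaderRef;
  private final String className;

  TypeCacheKey(int loaderHash, WeakReference<ClassLoader> loaderRef, String className) {
    this.loaderHash = loaderHash;
    this.loaderRef = loaderRef;
    this.className = className;
  }

  @Override
  public boolean equals(Object obj) {
    if (!(obj instanceof TypeCacheKey)) return false;
    TypeCacheKey other = (TypeCacheKey) obj;
    if (loaderHash != other.loaderHash) return false;
    if (!className.equals(other.className)) return false;
    // Fast path: recycled refs mean identical reference objects imply the same loader.
    if (loaderRef == other.loaderRef) return true;
    // Slow path: compare referents; either may already have been cleared by GC.
    ClassLoader loader = loaderRef.get();
    return loader != null && loader == other.loaderRef.get();
  }

  @Override
  public int hashCode() {
    return 31 * loaderHash + className.hashCode();
  }
}
```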
This change ripples through the CacheProvider-s which also have to store loaderHash and loaderRef.
It may be worth going a step further and switching to a Cache<Loader, TypePool> as well. That would still avoid the creation of many WeakReference-s, since the underlying CacheProvider would hold a canonical WeakReference per ClassLoader.
This change overhauls the core type cache
The new approach aims to achieve several things...
1 - cache is strictly bounded -- its size doesn't vary with the number of classes or ClassLoaders
2 - cache is significantly smaller
3 - cache doesn't compromise start-up time
4 - primary eviction policy isn't driven by time
5 - primary eviction policy isn't driven by GC
There are some slight compromises here.
In practice, start-up does increase slightly in a memory rich environment; however, start-up improves considerably in a memory poor environment.
The basic approach is to have a single unified Guava cache for all ClassLoaders -- nominally keyed by a composite of ClassLoader & class name.
The ByteBuddy CacheProvider-s are simply thin wrappers around the Guava cache, each associated with a particular ClassLoader.
However, rather than having a large number of WeakReferences floating around, the cache assigns an ID to each ClassLoader.
To further avoid consuming memory, the cache only preserves a small map of Loader-to-ID assignments. This means a ClassLoader may have more than one active ID over its lifetime.
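A minimal sketch of that shape (names are illustrative, and a plain ConcurrentHashMap stands in for the shared Guava cache):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch: one shared cache for all loaders, with per-loader
// providers that are thin views prefixing lookups with the loader's ID.
final class SharedTypeCache {
  private final ConcurrentMap<String, Object> resolutions = new ConcurrentHashMap<>();

  // Returns a provider scoped to a single ClassLoader's assigned ID.
  LoaderScopedProvider forLoader(long loaderId) {
    return new LoaderScopedProvider(loaderId);
  }

  final class LoaderScopedProvider {
    private final long loaderId;

    LoaderScopedProvider(long loaderId) {
      this.loaderId = loaderId;
    }

    // Looks up a cached resolution for this loader's view of the class.
    Object find(String className) {
      return resolutions.get(loaderId + ":" + className);
    }

    // Registers a resolution, keeping any value another thread raced in first.
    Object register(String className, Object resolution) {
      Object existing = resolutions.putIfAbsent(loaderId + ":" + className, resolution);
      return existing == null ? resolution : existing;
    }
  }
}
```

Because the composite key embeds the loader ID, two loaders that both define `com.example.Foo` get independent entries in the one bounded cache.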
This introduces the possibility of ID exhaustion. That unlikely case is handled by retiring the internal CacheInstance and starting anew.
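The retire-and-restart handling could be sketched roughly like this (the class names echo the description above, but the bound and structure are illustrative assumptions):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch: IDs come from a counter inside a CacheInstance; when
// the ID space is exhausted, the whole instance is swapped out, discarding
// prior assignments and cached types, and assignment starts over.
final class CachingStrategy {
  // Illustrative small bound on the ID space.
  static final long MAX_ID = 1L << 16;

  private final AtomicReference<CacheInstance> current =
      new AtomicReference<>(new CacheInstance());

  long assignId() {
    while (true) {
      CacheInstance instance = current.get();
      long id = instance.nextId();
      if (id != -1) {
        return id;
      }
      // Exhausted: retire this instance and start anew.
      current.compareAndSet(instance, new CacheInstance());
    }
  }

  static final class CacheInstance {
    private final AtomicLong counter = new AtomicLong();

    // Returns -1 when this instance's ID space is exhausted.
    long nextId() {
      long id = counter.getAndIncrement();
      return id >= MAX_ID ? -1 : id;
    }
  }
}
```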