* Fix Dropwizard conversion to OpenTelemetry API
* Finish converting JDBC to use OpenTelemetry API directly
* Finish converting Servlet to use OpenTelemetry API directly
* Convert Trace Annotation to use OpenTelemetry API directly
* Convert tests to use OpenTelemetry API directly
* Convert Grizzly to use OpenTelemetry API directly
* Convert Mongo to use OpenTelemetry API directly
* Convert SparkJava to use OpenTelemetry API directly
* Convert Spring Data to use OpenTelemetry API directly
* Convert Jetty to use OpenTelemetry API directly
* Convert JSP to use OpenTelemetry API directly
* Convert Kafka Clients to use OpenTelemetry API directly
* Convert Lettuce to use OpenTelemetry API directly
* Fix gRPC conversion to OpenTelemetry API
* Fix Akka conversion to OpenTelemetry API
* Convert JMS to use OpenTelemetry API directly
* Convert Netty 4.0 to use OpenTelemetry API directly
* Convert Netty 4.1 to use OpenTelemetry API directly
* Convert Play 2.4 to use OpenTelemetry API directly
* Convert Play 2.6 to use OpenTelemetry API directly
* Convert Play WS 1 to use OpenTelemetry API directly
* Convert Play WS 2 to use OpenTelemetry API directly
* Convert Play WS 2.1 to use OpenTelemetry API directly
* Convert RabbitMQ to use OpenTelemetry API directly
* Convert Ratpack to use OpenTelemetry API directly
* Convert RMI to use OpenTelemetry API directly
The MuzzlePlugin groovy check verifies that no threads are spawned, since a spawned thread holds the ClassLoader live.
This was breaking with the caching change because the cache no longer uses the Cleaner service.
That caused a problem because the Thread behind the cleaner is created lazily when the first task is scheduled; without the cache, that creation was delayed.
To solve this, I addressed the original cause of the leak: a newly created Thread automatically inherits the contextClassLoader of its parent, but that's unnecessary for a cleaner thread.
So I changed the ThreadFactory for the cleaner to explicitly null out the contextClassLoader, as sketched below.
We should probably null out the contextClassLoader in other thread factories and also reduce our use of contextClassLoaders in general, but that will be left to another PR.
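A minimal sketch of the idea (the class and thread names here are illustrative, not the actual agent code):

    import java.util.concurrent.ThreadFactory;

    // Illustrative ThreadFactory: the daemon cleaner thread would otherwise
    // inherit the contextClassLoader of whichever thread created it, pinning
    // that ClassLoader for the lifetime of the cleaner.
    final class CleanerThreadFactory implements ThreadFactory {
      @Override
      public Thread newThread(Runnable task) {
        Thread thread = new Thread(task, "cleaner");
        thread.setDaemon(true);
        // Explicitly drop the inherited contextClassLoader so the cleaner
        // thread doesn't keep its parent's ClassLoader alive.
        thread.setContextClassLoader(null);
        return thread;
      }
    }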
* Move OpenTelemetry SDK out of bootstrap loader
* Improve shading
After this change, the shaded opentelemetry-sdk is only used by test
modules, so it doesn't need to be published.
First pass at replacing ID generation with WeakReference reuse
In this first version, the Cache<ClassLoader, ID> was replaced with Cache<ClassLoader, WeakReference<ClassLoader>>.
The core cache is still a Cache&lt;TypeCacheKey, TypePool.Resolution&gt;, and TypeCacheKey logically remains a composite key of (ClassLoader, class name).
The removal of ID assignment means ID exhaustion is no longer an issue, so there's never a need to rebuild the cache. For that reason, CacheInstance has been removed and the core caching logic has been moved into DDCachingPoolStrategy.
While TypeCacheKey remains conceptually the same, the internals have changed somewhat. The TypeCacheKey now has 3 core fields...
- loaderHash
- loaderRef
- class name
Since loader refs are recycled, the fast path for key equivalence can use reference equality of the reference objects (see the sketch below).
This change ripples through the CacheProvider-s, which also have to store loaderHash and loaderRef.
It may be worth going a step further and switching to a Cache&lt;Loader, TypePool&gt; as well. That would still avoid the creation of many WeakReference-s, since the underlying CacheProvider will hold a canonical WeakReference per ClassLoader.
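A hedged sketch of the key structure and the two-tier equality check (the three fields follow the description above; everything else is an assumption for illustration):

    import java.lang.ref.WeakReference;

    // Illustrative composite key: (ClassLoader, class name), with the loader
    // held weakly. loaderHash is cached so hashCode() never touches the loader.
    final class TypeCacheKey {
      private final int loaderHash;
      private final WeakReference<ClassLoader> loaderRef;
      private final String className;

      TypeCacheKey(int loaderHash, WeakReference<ClassLoader> loaderRef, String className) {
        this.loaderHash = loaderHash;
        this.loaderRef = loaderRef;
        this.className = className;
      }

      @Override
      public int hashCode() {
        return 31 * loaderHash + className.hashCode();
      }

      @Override
      public boolean equals(Object obj) {
        if (!(obj instanceof TypeCacheKey)) return false;
        TypeCacheKey that = (TypeCacheKey) obj;
        if (!className.equals(that.className)) return false;
        // Fast path: because loader refs are recycled, two keys for the same
        // loader normally share the same WeakReference instance.
        if (loaderRef == that.loaderRef) return true;
        // Slow path: compare the referents themselves. Returns false once the
        // loader has been collected, which is fine since such entries are dead.
        ClassLoader loader = loaderRef.get();
        return loader != null && loader == that.loaderRef.get();
      }
    }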
* Fix sporadic test failure
* Remove RetryOnFailure from Elasticsearch tests
* Remove retry from Hystrix tests
* Improve test verification
* Fix sporadic span order not found failures
* Add RetryOnFailure to tests with sporadic failures
This change overhauls the core type cache
The new approach aims to achieve several things...
1 - cache is strictly bounded -- no variance with the number of classes or ClassLoaders
2 - cache is significantly smaller
3 - cache doesn't compromise start-up time
4 - primary eviction policy isn't driven by time
5 - primary eviction policy isn't driven by GC
There are some slight compromises here.
In practice, start-up time does increase slightly in a memory-rich environment; however, it improves considerably in a memory-poor environment.
The basic approach is to have a single unified Guava cache for all ClassLoaders -- nominally keyed by a composite of ClassLoader & class name
The ByteBuddy CacheProvider-s are simply thin wrappers around the Guava cache associated with a particular ClassLoader
However, rather than having a large number of WeakReferences floating around, the cache assigns an ID to each ClassLoader.
To further avoid consuming memory, the cache only preserves a small map of Loader-to-ID assignments. This means a ClassLoader may have more than one active ID.
This introduces the possibility of ID exhaustion. That unlikely case is handled by retiring the internal CacheInstance and starting anew.
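A rough sketch of the shape of that design, assuming Guava's CacheBuilder (the class and method names beyond Guava's own API are illustrative, and the real value type would be TypePool.Resolution):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    // Illustrative: one strictly bounded Guava cache shared by every
    // ClassLoader. Size-based eviction keeps memory use fixed, rather than
    // relying on time- or GC-driven eviction.
    final class UnifiedTypeCache {
      private final Cache<String, Object> cache =
          CacheBuilder.newBuilder()
              .maximumSize(64) // hard bound, independent of loader/class counts
              .build();

      // The composite key folds the per-loader ID into the class name
      // (simplified here to a string), so a single cache serves all loaders
      // without a WeakReference per entry.
      Object find(int loaderId, String className) {
        return cache.getIfPresent(loaderId + ":" + className);
      }

      void insert(int loaderId, String className, Object resolution) {
        cache.put(loaderId + ":" + className, resolution);
      }
    }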
* Refactor of twilio (WIP)
* Refactored hibernate instrumentation
* Finished refactoring hibernate instrumentation
* Minor changes
* Minor change
* Moved files after upstream restructuring
* Fixed typo and Twilio test issues
* Refactored hibernate tests
* Fixed formatting
* Moved span auto close functionality to SessionState
* Move things up a directory
* Scripted mass update
find -type f -name "*.gradle" | xargs sed -i 's/:java-agent:/:/g'
* Remove plugin version now that it's in root module
* Update java-agent and instrumentation configs
* Misc