Connection pools were creating proxy connection objects, so it was the proxies that were being cached. By unwrapping the proxies, the underlying connection objects are now cached and identified correctly.
Refactored tests to use connection pools, and to use hsqldb connections only for statements (since they do not make a USER() query in a getMetaData call).
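A minimal sketch of the unwrapping idea, assuming the pool's proxies implement JDBC's standard `java.sql.Wrapper` contract (the helper name is illustrative, not the actual agent code):

```java
import java.sql.Connection;
import java.sql.SQLException;

final class ConnectionUnwrapper {
  // Cache against the underlying physical connection rather than the
  // pool's proxy object, so repeated checkouts map to the same key.
  static Connection unwrapPhysicalConnection(Connection candidate) throws SQLException {
    if (candidate.isWrapperFor(Connection.class)) {
      Connection unwrapped = candidate.unwrap(Connection.class);
      // Some drivers return the proxy itself; only recurse on real progress.
      if (unwrapped != null && unwrapped != candidate) {
        return unwrapPhysicalConnection(unwrapped);
      }
    }
    return candidate;
  }
}
```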
Shadow relocations are no longer needed because of our new bootstrapping
process.
It's no longer possible for agent dependencies to interfere with the
user's classpath.
The immediate reason for this change is a bug introduced in the Cassandra
instrumentation.
The Cassandra instrumentation references guava classes pulled in
transitively by the datastax driver. These references are rewritten by
shadow, causing the instrumentation to reference
'datadog.agent.deps.google.*' instead of the real guava classes.
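A sketch of the failure mode, assuming the driver's guava-based async API (the callback body is illustrative):

```java
import com.datastax.driver.core.ResultSetFuture;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

final class CassandraCallbacks {
  // The instrumentation attaches a completion callback to the driver's
  // guava future. Shadow rewrites the guava imports here to
  // 'datadog.agent.deps.google.*', which no longer matches the real guava
  // types that the driver's ResultSetFuture is compiled against.
  static void attachCallback(ResultSetFuture future) {
    Futures.addCallback(future, new FutureCallback<Object>() {
      @Override public void onSuccess(Object result) { /* finish span */ }
      @Override public void onFailure(Throwable t) { /* tag error, finish span */ }
    });
  }
}
```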
It looks like the server is started lazily, and on a laptop it may take
over 5 seconds in a parallel build. This means it may take a long time
on CI as well.
It is in fact unlikely that the server will never return, so adding a
timeout introduces flakiness and doesn't really protect against any
real-life problems. Instead of hardcoding timeouts, just rely on the
build eventually giving up on its own one way or another.
Before: span start times had millisecond precision because we just used
`currentTimeMillis` to get the span start time.
This creates weird-looking traces where spans 'fly' way outside their
parent spans if they have sub-ms length.
After: we establish a 'trace start time' with millisecond precision and
measure all span start and stop times relative to that. This means all
relative times are maintained with nanosecond precision (or whatever the
OS clock gives us).
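A minimal sketch of the idea, with illustrative names rather than the actual agent classes:

```java
import java.util.concurrent.TimeUnit;

final class TraceClock {
  // Wall-clock anchor (millisecond precision), captured once per trace.
  private final long startTimeMicros =
      TimeUnit.MILLISECONDS.toMicros(System.currentTimeMillis());
  // Monotonic anchor for measuring offsets with nanosecond precision.
  private final long startNanoTicks = System.nanoTime();

  // Every span start/stop timestamp is derived from the same pair of
  // anchors, so relative times within the trace keep whatever resolution
  // the OS clock provides.
  long currentTimeMicros() {
    long elapsedNanos = System.nanoTime() - startNanoTicks;
    return startTimeMicros + TimeUnit.NANOSECONDS.toMicros(elapsedNanos);
  }
}
```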
This is a POC and some things are not yet fixed. E.g. the JMS1
instrumentation injects time into the span manually, and it is not
apparent how to make it do so relative to the trace clock.
Note: going forward this should allow us to completely get rid of the
'double time keeping' we currently have in `DDSpan`.
Before, those timeouts were set to 10ms, which can legitimately expire
before the server (even on localhost) has a chance to reply with
'connection refused'.
This should fix the flaky test.
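For illustration, a connect with a more generous budget (the host, port, and 5-second value are placeholders, not the actual test values):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

final class ConnectCheck {
  // With a 10 ms budget the timeout could fire before the OS even
  // delivered 'connection refused'; a larger budget lets the real
  // error surface instead.
  static boolean canConnect(String host, int port) {
    try (Socket socket = new Socket()) {
      socket.connect(new InetSocketAddress(host, port), 5_000); // was 10 ms
      return true;
    } catch (IOException refusedOrTimedOut) {
      return false;
    }
  }
}
```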
The Lettuce instrumentation is implemented in a way that after an
operation has been performed the `span` is not closed synchronously - in
fact this happens on a separate thread. This means that `spans` for even
synchronous operations may be closed in the opposite order.
This means that writing tests that perform two operations and expect
two traces is slightly more complicated. In many places we can just
avoid doing that by preparing the necessary data in `setup`.
This fixes some of the false negatives in tests.
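A self-contained sketch of the ordering problem (all names are illustrative, not Lettuce APIs): each 'span' is finished from an async completion callback, so two back-to-back operations can report in either order.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ThreadLocalRandom;

public class OutOfOrderFinish {
  public static void main(String[] args) {
    CompletableFuture<Void> first = operation("SET");
    CompletableFuture<Void> second = operation("GET");
    CompletableFuture.allOf(first, second).join();
  }

  // Simulates the driver completing a command (and closing its span)
  // on a background thread with variable latency.
  static CompletableFuture<Void> operation(String name) {
    return CompletableFuture.runAsync(() -> {
      try {
        Thread.sleep(ThreadLocalRandom.current().nextInt(20));
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      System.out.println("finished span: " + name); // order is not guaranteed
    });
  }
}
```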
It looks like automatic reconnection was enabled, which led to random
traces popping up in random places after the Redis server was shut down.
Also make sure the server persists only for the duration of a single
test to weed out all inter-test dependencies.
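A sketch of disabling reconnection on the test client, assuming Lettuce 5's `io.lettuce.core` package (the URI and factory name are placeholders):

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

final class TestClientFactory {
  // A client that will not silently reconnect after the test shuts the
  // server down, so no stray traces appear in later tests.
  static RedisClient createNonReconnectingClient(String uri) {
    RedisClient client = RedisClient.create(uri);
    client.setOptions(ClientOptions.builder().autoReconnect(false).build());
    return client;
  }
}
```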