Connection pools were creating proxy connection objects, so the proxies were being cached instead of the underlying connections. By unwrapping the proxies, the underlying connection objects are now correctly cached and identified.
Refactored tests to use connection pools, and to use hsqldb connections only on statements (since those will not make a USER() query in a getMetaData call).
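For illustration, a minimal sketch of the unwrapping step, assuming the pool's proxies implement the standard `java.sql.Wrapper` contract (the helper name is hypothetical):

```java
import java.sql.Connection;
import java.sql.SQLException;

final class ConnectionUnwrapper {
  // Returns the driver-level connection behind a pool's proxy so it can
  // serve as a stable cache key; falls back to the proxy itself.
  static Connection unwrap(Connection maybeProxy) throws SQLException {
    if (maybeProxy.isWrapperFor(Connection.class)) {
      return maybeProxy.unwrap(Connection.class);
    }
    return maybeProxy;
  }
}
```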
Shadow relocations are no longer needed because of our new bootstrapping
process.
It's no longer possible for agent dependencies to interfere with the
user's classpath.
The immediate reason for this change is a bug introduced in the Cassandra
instrumentation.
The Cassandra instrumentation references guava classes that come in as
transitive deps of the datastax driver. These references are rewritten by
shadow, causing the instrumentation to reference 'datadog.agent.deps.google.*'
instead of the actual guava classes.
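For illustration, a hedged sketch of the kind of reference that breaks, assuming the 3.x datastax driver and the guava version it pulls in (the advice shape is hypothetical). Shadow rewrites the `com.google.*` imports below so they no longer match the classes the driver actually exposes:

```java
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;

final class CassandraAdviceSketch {
  // executeAsync returns a ResultSetFuture, which extends guava's
  // ListenableFuture, so registering a callback forces the
  // instrumentation to reference com.google.* types directly.
  static void finishSpanOnCompletion(Session session, String query) {
    ResultSetFuture future = session.executeAsync(query);
    Futures.addCallback(future, new FutureCallback<ResultSet>() {
      @Override public void onSuccess(ResultSet result) { /* finish the span */ }
      @Override public void onFailure(Throwable error) { /* tag the span with the error */ }
    });
  }
}
```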
It looks like the server is started lazily, and on a laptop it may take
over 5 seconds in a parallel build. This means it may take a long time
on CI as well.
It is in fact unlikely that the server will never return, so adding a
timeout introduces flakiness and doesn't really protect against any
real-life problems. Instead of hardcoding timeouts, just rely on the
build eventually giving up on its own one way or another.
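As a hedged, JUnit-style illustration (the test name, helper, and 5s value are hypothetical; the real tests may be structured differently), the change amounts to dropping the per-test timeout:

```java
import org.junit.Test;

public class LazyServerSpec {
  // Before (flaky): @Test(timeout = 5_000) failed whenever the lazily
  // started server needed more than 5s under a parallel build.
  // After: no per-test timeout; if the server truly never responds,
  // the build's own global timeout eventually kills the run.
  @Test
  public void serverEventuallyResponds() throws Exception {
    awaitFirstResponse();
  }

  // Hypothetical helper that blocks until the server replies.
  private void awaitFirstResponse() throws Exception {
    // test-specific setup elided
  }
}
```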
Before: span start had millisecond precision because we just used
`currentTimeMillis` to get the span start time.
This creates weird-looking traces where spans 'fly' way outside their
parent spans if they have sub-ms length.
After: we establish a 'trace start time' with millisecond precision and
measure all span start and stop times relative to that. This means all
relative times are maintained with nanosecond precision (or whatever
the OS clock gives us).
This is a POC and some things are not yet fixed. E.g. the JMS1
instrumentation injects time into the span manually, and it is not
apparent how to make it do so relative to the trace clock.
Note: going forward this should allow us to completely get rid of the
'double time keeping' we currently have in `DDSpan`.
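A minimal sketch of the trace-clock idea described above (the names are hypothetical, not the actual `DDSpan` API): anchor the wall clock once per trace, then derive all span timestamps from monotonic `nanoTime` offsets:

```java
final class TraceClock {
  private final long traceStartMicros; // wall-clock anchor, ms precision
  private final long traceStartTicks;  // monotonic anchor

  TraceClock() {
    traceStartMicros = System.currentTimeMillis() * 1_000;
    traceStartTicks = System.nanoTime();
  }

  // Span start/stop timestamps derived from the monotonic clock, so
  // relative times within the trace keep sub-millisecond precision.
  long nowMicros() {
    return traceStartMicros + (System.nanoTime() - traceStartTicks) / 1_000;
  }
}
```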
Before, those timeouts were set to 10ms, which legitimately can expire
before the server (even on localhost) has a chance to reply with
'connection refused'.
This should fix the flaky test.
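As a hedged illustration of the race (the host, port, and helper are hypothetical): with a 10ms connect timeout, a `SocketTimeoutException` can win against the kernel's 'connection refused', so the test fails in a way it did not expect:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

final class ConnectTimeoutSketch {
  static void connect(int timeoutMillis) throws Exception {
    try (Socket socket = new Socket()) {
      // With timeoutMillis = 10, the timeout can fire before the local
      // refusal (ConnectException) is observed, making the expected
      // failure mode flaky; a more generous value avoids the race.
      socket.connect(new InetSocketAddress("localhost", 9999), timeoutMillis);
    }
  }
}
```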