- Get rid of `XdsLbState` and split config change handling into two layers: `LookasideLb` (handles balancer name) and `LookasideChannelLb` (handles child policy), under `XdsLoadBalancer` (the fallback manager layer)
- Move `XdsComms`/`AdsStream` to a layer under `LookasideChannelLb`. They no longer keep the helper, only `SyncCtx` and `ChannelLogger`
- For each layer, we pass in a `LoadBalancer.Factory` for the next/child layer. In tests (see the sketch after this list),
  + we mock/fake the factory, so we don't care about the implementation details of the child layer;
  + we capture the helper passed to `factory.newLoadBalancer(helper)`, so we can mimic `updateBalancingState()` from the child layer.
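A minimal sketch of that test pattern, assuming Mockito and a hypothetical `LookasideLb(helper, childFactory)` constructor; the point is mocking the child `LoadBalancer.Factory` and capturing the `Helper` it is given:

```java
// Sketch only: mock the child-layer factory and capture its Helper.
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer;
import io.grpc.LoadBalancer.Helper;
import io.grpc.LoadBalancer.SubchannelPicker;
import org.mockito.ArgumentCaptor;

class ChildPolicyCaptureSketch {
  void exercise(Helper parentHelper) {
    LoadBalancer.Factory childFactory = mock(LoadBalancer.Factory.class);
    when(childFactory.newLoadBalancer(any(Helper.class))).thenReturn(mock(LoadBalancer.class));

    // Hypothetical constructor: the layer under test receives the factory for its child layer.
    LoadBalancer lbUnderTest = new LookasideLb(parentHelper, childFactory);

    // ... drive lbUnderTest with resolved addresses / config here ...

    // Capture the Helper handed to the child layer so the test can mimic
    // updateBalancingState() coming from the child policy.
    ArgumentCaptor<Helper> helperCaptor = ArgumentCaptor.forClass(Helper.class);
    verify(childFactory).newLoadBalancer(helperCaptor.capture());
    helperCaptor.getValue()
        .updateBalancingState(ConnectivityState.READY, mock(SubchannelPicker.class));
  }
}
```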
Part 1 contains the fallback management logic; the logic itself is unchanged.
Examples and Android projects were left unchanged; they can be updated
later.
No plugin versions were changed, to keep this as close to a
non-functional change as possible. Upgrading Gradle to 5.6 was necessary
for pluginManagement in settings.gradle.
This allows plumbing information to the subchannel via Attributes, such as
an authority override. Round-robin (RR) still does not support multiple
EquivalentAddressGroups (EAGs) that differ only by attributes.
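A minimal sketch of attaching such information to an `EquivalentAddressGroup` via `Attributes`; the key below is a placeholder for illustration, not necessarily the key the subchannel actually reads:

```java
import io.grpc.Attributes;
import io.grpc.EquivalentAddressGroup;
import java.net.InetSocketAddress;

class EagAttributesSketch {
  // Placeholder key for illustration; the subchannel would read an agreed-upon key.
  static final Attributes.Key<String> AUTHORITY_OVERRIDE =
      Attributes.Key.create("authority-override");

  static EquivalentAddressGroup withAuthority(InetSocketAddress address, String authority) {
    // Attach the override to the EAG's Attributes so it can reach the subchannel.
    Attributes attrs = Attributes.newBuilder().set(AUTHORITY_OVERRIDE, authority).build();
    return new EquivalentAddressGroup(address, attrs);
  }
}
```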
* Added publish configurations in build file.
* Changed cronet dependency to cronet-api for implementation; only the full cronet implementation provider is used for testing.
* Set minSdkVersion to 16 as cronet dependency requires that.
* Added errorprone plugin.
* Use reflection to load and invoke setTrafficStats* and addRequestAnnotation methods for ExperimentalBidirectionalStream.Builder.
* Load reflection methods lazily and cache them for later use (see the sketch after this list).
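A minimal sketch of the lazy-load-and-cache reflection pattern, assuming the Cronet `ExperimentalBidirectionalStream.Builder#setTrafficStatsTag(int)` method named in the bullets above; error handling is simplified:

```java
import java.lang.reflect.Method;

final class TrafficStatsReflection {
  // Cached after the first successful lookup; volatile so later calls see the cached Method.
  private static volatile Method setTrafficStatsTagMethod;

  static void setTrafficStatsTag(Object streamBuilder, int tag) {
    try {
      Method m = setTrafficStatsTagMethod;
      if (m == null) {
        // Lazily resolve the method the first time it is needed, then cache it.
        m = Class.forName("org.chromium.net.ExperimentalBidirectionalStream$Builder")
            .getMethod("setTrafficStatsTag", int.class);
        setTrafficStatsTagMethod = m;
      }
      m.invoke(streamBuilder, tag);
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException("Unable to invoke setTrafficStatsTag reflectively", e);
    }
  }
}
```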
* Fixed mistaken method invocation for privateKeyId getter/setter.
* Added test coverage to verify JWT credentials are applied to request metadata correctly.
* No need to expose the serviceUri method for testing.
NETTY_LOCAL seems to have a flow control issue. Disable it, since we don't
really look at it very often and we care about the
streamingCallsMessageThroughput benchmark.
This replaces FlowControlledMessagesPerSecondBenchmark, except it does not
avoid local flow control issues via request(5). If hacking in a request(5),
this benchmark produces similar results (non-direct: 671k vs previously 641k
msg/s).
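For context, a minimal sketch of one way "hacking in a request(5)" could look on a streaming call, using grpc-java's `ClientResponseObserver` to request extra inbound messages up front; the message types here are placeholders:

```java
import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;

class RequestFiveSketch implements ClientResponseObserver<Object, Object> {
  @Override
  public void beforeStart(ClientCallStreamObserver<Object> requestStream) {
    // Ask the transport for 5 inbound messages up front instead of the default 1,
    // so delivery is not gated on one-at-a-time requests by local flow control.
    requestStream.request(5);
  }

  @Override
  public void onNext(Object value) {}

  @Override
  public void onError(Throwable t) {}

  @Override
  public void onCompleted() {}
}
```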
This is equivalent to UnaryCallResponseBandwidthBenchmark and
StreamingResponseBandwidthBenchmark, although without the interface selection
logic (which allows for traffic shaping).
TearDown is guaranteed to execute after Setup; there is no synchronization
necessary. Although the volatile doesn't hurt anything functionally, it is
misleading and confusing.
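A minimal sketch of the resulting pattern, assuming a JMH-style benchmark state; the field and resource are placeholders:

```java
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
public class LifecycleSketch {
  // Plain field: no volatile needed, since TearDown is guaranteed to run after Setup.
  private AutoCloseable resource;

  @Setup
  public void setUp() {
    resource = () -> { };
  }

  @TearDown
  public void tearDown() throws Exception {
    resource.close();
  }
}
```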
It is basically the same as TransportBenchmark, but without protobuf, with a
smaller payload, and only for Netty. It does show latencies around 66 µs instead
of TransportBenchmark's 70 µs on my laptop, but a quick conversion of
TransportBenchmark to ByteBufOutputMarshaller made it 66 µs as well.
This can be used to prevent duplicate classes in the classpath: one copy
pulled in via Maven and one via Bazel-native rules.
See census-instrumentation/opencensus-java#1963 and #5359