Almost all of these major version bumps are because they upgraded to
Node 16, which requires a new minimum version of the Runner (which
matters for those maintaining their own runners). The main outlier is
lock-threads, which changed the names of its input parameters.
This is essentially a repeat of b118e00c, but for our compiling
documentation. Protobuf has two versions nowadays: 3.21.7 for Java and
21.7 for protobuf as a whole. For 21.1 they tagged it both as 21.1 and
3.21.1, but they didn't do that for 3.21.7.
Fixes #9582
Second attempt at this, now with the understanding that RLS actually can
accept empty address lists.
This seems contrary to the behavior this LB advertises via its canHandleEmptyAddressListFromNameResolution() method: the method is not overridden, so the default response of false is preserved. Empty address lists are supported nonetheless, and the parent LB never actually calls canHandleEmptyAddressListFromNameResolution().
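For reference, a minimal sketch of the io.grpc.LoadBalancer hook involved; the class name is hypothetical, and the override shows what advertising empty-address support would look like (the LB discussed above does not currently override it):

```java
import io.grpc.LoadBalancer;
import io.grpc.Status;

// Hypothetical LB that claims support for empty address lists by overriding
// the io.grpc.LoadBalancer method whose default implementation returns false.
class EmptyAddressTolerantLb extends LoadBalancer {
  @Override
  public boolean canHandleEmptyAddressListFromNameResolution() {
    return true;  // the parent LB would then not treat an empty list as an error
  }

  @Override
  public void handleNameResolutionError(Status error) {}

  @Override
  public void shutdown() {}
}
```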
The xds resources are now dynamically managed by the resourceStore in XdsClient. Each resource type is an XdsResourceType singleton.
There is no longer a hardcoded static list of known resource types; the subscription list is the source of truth.
AbstractXdsClient, which manages the AdsStream, will only accept xds resource types that already have watchers subscribed, the same behaviour as before.
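A minimal sketch of that subscription-driven bookkeeping, with hypothetical names (the real resourceStore in XdsClient is more involved):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: subscriptions keyed by resource-type singleton; the key
// set doubles as the dynamic set of types the ADS stream accepts, replacing a
// hardcoded static list.
final class ResourceStoreSketch {
  /** Stand-in for the XdsResourceType singleton described above. */
  interface ResourceType {}

  private final Map<ResourceType, Map<String, Object>> subscribers = new HashMap<>();

  void subscribe(ResourceType type, String resourceName, Object watcher) {
    subscribers.computeIfAbsent(type, t -> new HashMap<>()).put(resourceName, watcher);
  }

  /** AbstractXdsClient accepts a response for `type` only if someone subscribed. */
  boolean isSubscribed(ResourceType type) {
    return subscribers.containsKey(type);
  }

  /** The subscription key set is the source of truth for known types. */
  Set<ResourceType> knownTypes() {
    return subscribers.keySet();
  }
}
```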
This aligns the C++ version we're using for gRPC-generated code with the
Java version. This should have no real impact on our users, as there
were no features added to .proto files or the like that would be visible
to users.
Creates "Adaptive" cumulator: cumulate ByteBuf's by dynamically switching between merge and compose strategies.
This cumulator applies a heuristic to make a decision whether to track a reference to the buffer with bytes received from the network stack in an array ("zero-copy"), or to merge into the last component (the tail) by performing a memory copy.
It is necessary as a protection from a potential attack on the COMPOSITE_CUMULATOR. Consider a pathological case where an attacker sends TCP packets containing a single byte of data each, forcing the cumulator to track each one in a separate buffer. In this case we pay a memory overhead for every buffer, as well as extra compute to read the cumulation.
The implemented heuristic establishes a minimal threshold for the combined size of the tail and the incoming buffer, below which they are merged. The sum of the tail and the incoming buffer is used to avoid a case where an attacker alternates the size of data packets to trick the cumulator into always selecting the compose strategy.
The merge strategy attempts to minimize unnecessary memory writes. When possible, it expands the tail capacity and only copies the incoming buffer into the available memory. Otherwise, when both the tail and the buffer must be copied, the tail is reallocated (or fully replaced) with a new buffer of exponentially increasing capacity (bounded by minComposeSize) so that the worst-case O(n^2) runtime amortizes to O(n).
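A minimal sketch of this decision logic against Netty's ByteBuf API; the name minComposeSize comes from the description above, while the helper names and the simplification of applying the threshold to the whole cumulation (rather than only the tail component) are illustrative, not the actual implementation:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;

final class AdaptiveCumulatorSketch {
  static ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in,
      int minComposeSize) {
    if (!cumulation.isReadable()) {
      cumulation.release();
      return in;
    }
    // Heuristic: compare the *sum* of tail and incoming sizes to the threshold,
    // so alternating packet sizes can't force the compose path every time.
    if (cumulation.readableBytes() + in.readableBytes() < minComposeSize) {
      return mergeWithTail(alloc, cumulation, in, minComposeSize);
    }
    // Compose ("zero-copy") path: track the buffer as a separate component.
    CompositeByteBuf composite = cumulation instanceof CompositeByteBuf
        ? (CompositeByteBuf) cumulation
        : alloc.compositeBuffer().addFlattenedComponents(true, cumulation);
    return composite.addFlattenedComponents(true, in);
  }

  // Merge path: reuse spare tail capacity when possible; otherwise reallocate
  // with exponentially growing capacity so repeated copies amortize to O(n).
  private static ByteBuf mergeWithTail(ByteBufAllocator alloc, ByteBuf tail, ByteBuf in,
      int minComposeSize) {
    int needed = tail.readableBytes() + in.readableBytes();
    ByteBuf merged;
    if (tail.refCnt() == 1 && tail.isWritable(in.readableBytes())) {
      merged = tail;  // enough spare capacity: copy only the incoming bytes
    } else {
      int newCapacity = Math.min(minComposeSize, Math.max(needed, tail.readableBytes() * 2));
      merged = alloc.buffer(newCapacity).writeBytes(tail);
      tail.release();
    }
    merged.writeBytes(in);
    in.release();
    return merged;
  }
}
```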
Note: this reintroduces https://github.com/grpc/grpc-java/pull/7532, addressing the subtle issue (ref b/155940949) with `CompositeByteBuf.component()` indexes getting out of sync, which results in the merge operation producing broken buffers.
This fixes a regression in commit e1ad984. I'd create a test, but the
NPE is swallowed in the current test setup, so a test can't be written
as quickly as we'd like to fix this. I have manually tested with a
custom reproduction to confirm the fix resolves the NPE.
Seen at b/248326695
```
java.lang.AssertionError: java.lang.NullPointerException
at io.grpc.xds.ClientXdsClient$1.uncaughtException(ClientXdsClient.java:89)
at io.grpc.SynchronizationContext.drain(SynchronizationContext.java:97)
at io.grpc.SynchronizationContext.execute(SynchronizationContext.java:127)
at io.grpc.xds.ClientXdsClient.cancelXdsResourceWatch(ClientXdsClient.java:327)
at io.grpc.xds.ClusterResolverLoadBalancer$ClusterResolverLbState$EdsClusterState.shutdown(ClusterResolverLoadBalancer.java:378)
at io.grpc.xds.ClusterResolverLoadBalancer$ClusterResolverLbState.shutdown(ClusterResolverLoadBalancer.java:206)
at io.grpc.util.GracefulSwitchLoadBalancer.shutdown(GracefulSwitchLoadBalancer.java:195)
at io.grpc.xds.ClusterResolverLoadBalancer.shutdown(ClusterResolverLoadBalancer.java:141)
at io.grpc.xds.CdsLoadBalancer2$CdsLbState.shutdown(CdsLoadBalancer2.java:136)
at io.grpc.xds.CdsLoadBalancer2.shutdown(CdsLoadBalancer2.java:110)
at io.grpc.util.GracefulSwitchLoadBalancer.shutdown(GracefulSwitchLoadBalancer.java:195)
at io.grpc.xds.ClusterManagerLoadBalancer$ChildLbState.shutdown(ClusterManagerLoadBalancer.java:256)
at io.grpc.xds.ClusterManagerLoadBalancer.shutdown(ClusterManagerLoadBalancer.java:138)
at io.grpc.internal.AutoConfiguredLoadBalancerFactory$AutoConfiguredLoadBalancer.shutdown(AutoConfiguredLoadBalancerFactory.java:164)
at io.grpc.internal.ManagedChannelImpl.shutdownNameResolverAndLoadBalancer(ManagedChannelImpl.java:381)
at io.grpc.internal.ManagedChannelImpl.access$8200(ManagedChannelImpl.java:118)
at io.grpc.internal.ManagedChannelImpl$DelayedTransportListener.transportTerminated(ManagedChannelImpl.java:2174)
at io.grpc.internal.DelayedClientTransport$3.run(DelayedClientTransport.java:122)
at io.grpc.SynchronizationContext.drain(SynchronizationContext.java:95)
at io.grpc.SynchronizationContext.execute(SynchronizationContext.java:127)
at io.grpc.internal.ManagedChannelImpl$RealChannel.shutdown(ManagedChannelImpl.java:1057)
at io.grpc.internal.ManagedChannelImpl.shutdown(ManagedChannelImpl.java:817)
at io.grpc.internal.ManagedChannelImpl.shutdownNow(ManagedChannelImpl.java:837)
at io.grpc.internal.ManagedChannelImpl.shutdownNow(ManagedChannelImpl.java:117)
at io.grpc.internal.ForwardingManagedChannel.shutdownNow(ForwardingManagedChannel.java:52)
at io.grpc.internal.ManagedChannelOrphanWrapper.shutdownNow(ManagedChannelOrphanWrapper.java:65)
at io.grpc.testing.integration.GrpclbFallbackTestClient.tearDown(GrpclbFallbackTestClient.java:178)
at io.grpc.testing.integration.GrpclbFallbackTestClient.main(GrpclbFallbackTestClient.java:67)
Caused by: java.lang.NullPointerException
at io.grpc.xds.ClientXdsClient.handleResourceResponse(ClientXdsClient.java:179)
at io.grpc.xds.AbstractXdsClient$AbstractAdsStream.handleRpcResponse(AbstractXdsClient.java:358)
at io.grpc.xds.AbstractXdsClient$AdsStreamV3$1$1.run(AbstractXdsClient.java:511)
at io.grpc.SynchronizationContext.drain(SynchronizationContext.java:95)
... 26 more
```
* xds: security code refactoring/renaming
1) Move the certprovider package under security
2) Refactor the inner Factory into CertProviderClientSslContextProviderFactory and CertProviderServerSslContextProviderFactory
3) Make CertProviderClientSslContextProvider and CertProviderServerSslContextProvider non-public
4) Use only public (non package-private) types such as SslContextProvider (instead of CertProviderClientSslContextProvider etc.)
This is mainly refactoring work to make type-specific xds resources generic, e.g.:
1. Define an abstract class XdsResourceType to be extended by pluggable new resources. It mainly contains the abstract method doParse(), which parses unpacked proto messages and produces a ResourceUpdate. The common proto-unpacking logic lives in the XdsResourceType default method parse() (see the sketch after this list).
2. Move the parsing/processing logic into the specific XdsResourceType implementations:
XdsListenerResource for LDS
XdsRouteConfigureResource for RDS
XdsClusterResource for CDS
XdsEndpointResource for EDS
3. The XdsResourceTypes are singletons; per-XdsClient context is passed in via parameters, defined by XdsResourceType.Args.
4. Watchers use the generic API watchXdsResource(XdsResourceType, resourceName, watcher) to subscribe to resources. Watcher and ResourceSubscriber become Java generic classes.
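A minimal sketch of that shape, assuming protobuf-java's Any/Message API; the Args fields and the unpackedProtoClass() helper are illustrative, not the exact grpc-java signatures:

```java
import com.google.protobuf.Any;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;

/** Each concrete type (LDS/RDS/CDS/EDS) extends this and is a singleton. */
abstract class XdsResourceType<T extends XdsResourceType.ResourceUpdate> {
  interface ResourceUpdate {}

  /** Typed watcher; generics replace per-resource watcher interfaces. */
  interface ResourceWatcher<U extends ResourceUpdate> {
    void onChanged(U update);
  }

  /** Per-call context so the singleton itself stays stateless (fields illustrative). */
  static final class Args {
    final String bootstrapServer;
    Args(String bootstrapServer) { this.bootstrapServer = bootstrapServer; }
  }

  /** The proto class this type unpacks, e.g. Listener for LDS. */
  abstract Class<? extends Message> unpackedProtoClass();

  /** Type-specific parsing of an already-unpacked proto message. */
  abstract T doParse(Args args, Message unpackedMessage);

  /** Common unpack-then-parse flow shared by all resource types. */
  final T parse(Args args, Any resource) throws InvalidProtocolBufferException {
    Message unpacked = resource.unpack(unpackedProtoClass());
    return doParse(args, unpacked);
  }
}
```

A subscription then takes the form watchXdsResource(listenerType, resourceName, watcher), with the update type delivered to the watcher tied to the resource type's generic parameter.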
When allocating bytes to streams within a flow control window we always
go through the streams in the same order. This can lead to large streams
hogging all the bytes and a smaller one down the list getting starved
out. This change shuffles the stream array to lower the chance of this
happening.
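A minimal sketch of the idea with hypothetical names (the actual change lives in the Netty flow-control write path):

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

final class FairAllocationSketch {
  interface Stream {
    int pendingBytes();        // bytes waiting to be written
    void allocate(int bytes);  // grant part of the flow-control window
  }

  // Shuffle before allocating so the same stream isn't always served first,
  // which otherwise lets large streams starve later ones in the list.
  static void allocateWindow(List<Stream> streams, int window, Random random) {
    Collections.shuffle(streams, random);
    for (Stream stream : streams) {
      if (window <= 0) {
        break;
      }
      int grant = Math.min(window, stream.pendingBytes());
      stream.allocate(grant);
      window -= grant;
    }
  }
}
```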
Fixes #9089