failOnVersionConflict has never been good for us. It is equivalent to
Maven's dependencyConvergence, which we discourage our users from using
because it is too temperamental and _creates_ version skew issues over
time. However, we had no real alternative for determining whether our
deps would be misinterpreted by Maven.
failOnVersionConflict has been a constant drain and makes it really hard
to do seemingly-trivial upgrades. As evidenced by protobuf/build.gradle
in this change, it also caused _us_ to introduce a version downgrade.
This introduces our own custom requireUpperBoundDeps implementation so
that we can get back to simple dependency upgrades _and_ increase our
confidence in a consistent dependency tree.
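
A conceptual sketch of the check, assuming a simplistic dotted-numeric version scheme (the real implementation has to handle richer version syntax):

```java
import java.util.List;

// Sketch of the requireUpperBoundDeps idea: resolution is acceptable only if
// the version selected for a module is at least every version that was
// requested anywhere in the dependency graph. Maven's "nearest wins" rule can
// silently violate this, which is the skew we want to catch.
final class UpperBoundCheckSketch {
  static void check(String module, String selected, List<String> requested) {
    for (String req : requested) {
      if (compareVersions(selected, req) < 0) {
        throw new IllegalStateException(module + " resolved to " + selected
            + " but " + req + " was requested; Maven could pick the lower version");
      }
    }
  }

  // Simplistic dotted-numeric comparison; real version schemes are messier.
  private static int compareVersions(String a, String b) {
    String[] as = a.split("\\.");
    String[] bs = b.split("\\.");
    for (int i = 0; i < Math.max(as.length, bs.length); i++) {
      int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
      int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
      if (ai != bi) {
        return Integer.compare(ai, bi);
      }
    }
    return 0;
  }
}
```
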
Internally this was package visible to retain strict control
over the available policies. However, that kind of strict control
doesn't work with the open-source version, since users will want
to create their own policies. There's at least one Google-specific
policy internally.
Use a multiplier of 1 for endpoints whose endpoint-level load balancing weight is unspecified when computing weights for mixing-locality load balancing. Therefore, if a locality has endpoints without endpoint-level load balancing weights, those endpoints are weighted equally within the locality.
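
A minimal sketch of that rule (the types here are illustrative, not the actual xDS classes):

```java
final class EndpointWeightSketch {
  // An unspecified endpoint-level weight is modeled as null here.
  static int effectiveWeight(int localityWeight, Integer endpointWeight) {
    int multiplier = (endpointWeight == null) ? 1 : endpointWeight;
    return localityWeight * multiplier;
  }

  public static void main(String[] args) {
    // Two endpoints without endpoint-level weights in a locality of weight 10
    // both get 10 * 1 = 10, i.e. equal weight within the locality.
    System.out.println(effectiveWeight(10, null)); // 10
    System.out.println(effectiveWeight(10, null)); // 10
    System.out.println(effectiveWeight(10, 3));    // 30
  }
}
```
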
Previously it required manually listing the direct deps of grpc-netty
which is error-prone, as evidenced by the fact that we were missing
multiple deps (guava, perfmark-api). This didn't cause a problem because
grpc-core happens to bring in these same deps.
Sets ring_hash LB config to its default values (min_ring_size = 1024 and max_ring_size = 8M) if not given by the control plane. This applies to both parsing RingHashLbConfig from the xDS proto and parsing RingHashConfig from the JSON config (currently not used). If the values are given by the control plane, they are validated such that min_ring_size is not greater than max_ring_size and neither exceeds the 8M limit.
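
A sketch of the defaulting and validation described above (method and constant names are illustrative):

```java
final class RingHashConfigSketch {
  static final long DEFAULT_MIN_RING_SIZE = 1024L;
  static final long DEFAULT_MAX_RING_SIZE = 8L * 1024 * 1024; // 8M
  static final long RING_SIZE_CAP = 8L * 1024 * 1024;         // hard 8M limit

  /** Returns {minRingSize, maxRingSize} after applying defaults and checks. */
  static long[] parse(Long minRingSize, Long maxRingSize) {
    long min = (minRingSize == null) ? DEFAULT_MIN_RING_SIZE : minRingSize;
    long max = (maxRingSize == null) ? DEFAULT_MAX_RING_SIZE : maxRingSize;
    if (min > max) {
      throw new IllegalArgumentException("min_ring_size is greater than max_ring_size");
    }
    if (min > RING_SIZE_CAP || max > RING_SIZE_CAP) {
      throw new IllegalArgumentException("ring size exceeds the 8M limit");
    }
    return new long[] {min, max};
  }
}
```
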
This version works around a warning about DuplicatesStrategy in Gradle 6
that will become an error in Gradle 7, caused by [a bug in the
plugin][1]. Bumping the version yields a clean build with
`--warning-mode all` (at least when skipping Android and codegen).
[1]: https://github.com/google/protobuf-gradle-plugin/issues/470
When aggregating the endpoint resolution errors of the list of clusters in ClusterResolverLoadBalancer, clusters should be processed in their original order as received in the LB config. The last cluster's error is used as the overall error status.
When an RPC is injected with a delay and then fails with DEADLINE_EXCEEDED (partially) due to the delay, it could confuse users if the error message does not mention the delay injection, because end users are normally not the same people who configured the fault injection policy in the control plane.
I've already limited the grpc-wide setting to read-only access, but
limiting it explicitly here seems like a good idea; all workflows should
explicitly set their permissions since any action can implicitly access
the GITHUB_TOKEN.
Following up on the changes in bbc5f61abb, the cluster_resolver LB policy uses the hostname received in CDS responses for discovering LOGICAL_DNS cluster endpoints.
Based on the new design, TD will generate a CFE cluster called "google_cfe_${service_name}" (e.g., for DirectPath service "cloud-bigtable.googleapis.com", the cluster name will be "google_cfe_cloud-bigtable.googleapis.com") for each DirectPath service. google_default/compute_engine creds will identify CFE clusters by the name having the prefix "google_cfe_".
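
For illustration, the prefix check the credentials would perform (names here are assumptions):

```java
final class CfeClusterSketch {
  static final String CFE_CLUSTER_PREFIX = "google_cfe_";

  // google_default/compute_engine creds would treat a cluster as CFE iff its
  // name carries the prefix, e.g. "google_cfe_cloud-bigtable.googleapis.com".
  static boolean isCfeCluster(String clusterName) {
    return clusterName != null && clusterName.startsWith(CFE_CLUSTER_PREFIX);
  }
}
```
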
Fixes the source of hostname used for DNS resolution in the cluster_resolver LB policy for LOGICAL_DNS clusters. The change includes:
- parse the single endpoint address from the embedded Cluster resource in CDS responses as the DNS hostname for the LOGICAL_DNS cluster, and include it in the CdsUpdate notified to the CDS LB policy.
- propagate the DNS hostname to the cluster_resolver LB policy via its LB config (the DiscoveryMechanism for the LOGICAL_DNS cluster).
- have the cluster_resolver LB policy take the DNS hostname from the DiscoveryMechanism for the LOGICAL_DNS cluster and use it as the name for DNS resolution (sketched below).
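
A minimal sketch of the propagated config, assuming illustrative type and field names (the actual DiscoveryMechanism class differs):

```java
final class DiscoveryMechanismSketch {
  enum Type { EDS, LOGICAL_DNS }

  final String cluster;
  final Type type;
  final String dnsHostName; // only set for LOGICAL_DNS clusters

  private DiscoveryMechanismSketch(String cluster, Type type, String dnsHostName) {
    this.cluster = cluster;
    this.type = type;
    this.dnsHostName = dnsHostName;
  }

  // Carries the hostname parsed from the CDS response down to the
  // cluster_resolver LB policy, which uses it for DNS resolution.
  static DiscoveryMechanismSketch forLogicalDns(String cluster, String dnsHostName) {
    return new DiscoveryMechanismSketch(cluster, Type.LOGICAL_DNS, dnsHostName);
  }
}
```
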
This adds support for creating a Netty Channel with SocketAddress and ChannelCredentials.
This aligns with NettyServerBuilder.forAddress(SocketAddress address, ServerCredentials creds).
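
Usage would look roughly like this, assuming the forAddress(SocketAddress, ChannelCredentials) overload this change adds (host and port are placeholders):

```java
import io.grpc.ChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.TlsChannelCredentials;
import io.grpc.netty.NettyChannelBuilder;
import java.net.InetSocketAddress;
import java.net.SocketAddress;

final class NettyAddressCredsExample {
  public static void main(String[] args) {
    SocketAddress address = new InetSocketAddress("greeter.example.com", 443);
    ChannelCredentials creds = TlsChannelCredentials.create();
    // Client-side counterpart of
    // NettyServerBuilder.forAddress(SocketAddress, ServerCredentials).
    ManagedChannel channel = NettyChannelBuilder.forAddress(address, creds).build();
    channel.shutdownNow();
  }
}
```
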
Do not propagate partial endpoint discovery results to the child LB policy of the cluster_resolver LB policy. This avoids premature RPC failures when connections to resolved endpoints fail while other endpoints are still unresolved. Also, endpoints should be attempted in the order of the clusters they belong to: endpoints from a lower-priority cluster should not be used before endpoints from a higher-priority cluster have been attempted. Most importantly, it should not fall back to DNS-resolved endpoints before all EDS-resolved endpoints have failed.
Enables a codepath for zero-copy protobuf deserialization. Two new InputStream extension interfaces are added:
- HasByteBuffer: allows access to the underlying buffers containing inbound bytes directly without copying
- Detachable: allows a custom marshaller to keep the buffers around until the application code is done using the protobuf messages
Applications can implement a custom marshaller that takes over the ownership of ByteBuffers and wraps them into ByteStrings with protobuf's UnsafeByteOperations support. Then a RopeByteString, which is an in-place composite of ByteStrings, can be created. This enables using the zero-copy codepath (which requires an indication that the ByteBuffer is immutable) of CodedInputStream for deserialization.
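
A rough sketch of such a marshaller's read path, assuming the inbound stream (and the stream returned by detach()) supports both interfaces; error handling and the actual parse into a message are omitted:

```java
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;
import io.grpc.Detachable;
import io.grpc.HasByteBuffer;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

final class ZeroCopyStreamSketch {
  /** Wraps the stream's inbound buffers into a (rope) ByteString without copying. */
  static ByteString wrapWithoutCopy(InputStream stream) throws IOException {
    if (!(stream instanceof HasByteBuffer)
        || !((HasByteBuffer) stream).byteBufferSupported()) {
      throw new IllegalArgumentException("zero-copy path unavailable");
    }
    // Detach so the buffers stay valid after the marshaller returns.
    InputStream detached = ((Detachable) stream).detach();
    ByteString result = ByteString.EMPTY;
    while (detached.available() != 0) {
      ByteBuffer buffer = ((HasByteBuffer) detached).getByteBuffer();
      // unsafeWrap avoids a copy; repeated concat yields a RopeByteString.
      result = result.concat(UnsafeByteOperations.unsafeWrap(buffer));
      detached.skip(buffer.remaining());
    }
    return result;
  }
}
```
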
When the ring_hash LB policy enters TRANSIENT_FAILURE, it tries to connect to one of the IDLE subchannels. Which subchannel gets connected is non-deterministic; it just chooses the first one from the subchannels map.
The existing test creates 4 subchannels and brings down 2 of them to make the ring_hash LB policy enter TRANSIENT_FAILURE. But which of the remaining two subchannels gets kicked off to connect is nondeterministic, which makes the behavior hard to verify. This change simplifies the test to create only 3 subchannels, so that a single subchannel remains IDLE after bringing the other two down. We can then easily verify that the ring_hash LB policy requests a connection for that one subchannel.
Since static methods are pseudo-inherited by Builder implementations but
are easily used by accident, we re-define static methods in each builder
to make them behave more like the caller would expect. However, not all
the methods actually work; some just throw, because the caller was
certainly not getting what they would expect.
Annotating with `@DoNotCall` can expose the problems at compile time
instead of runtime. While `@Deprecated` would also be an option, it is a
bit harder to figure out the ramifications and whether we want to go
that route.
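
Illustrative shape of the pattern, with simplified names (the real builders differ):

```java
import com.google.errorprone.annotations.DoNotCall;
import io.grpc.ServerCredentials;

public final class SecureServerBuilderSketch {
  private SecureServerBuilderSketch() {}

  // Re-defined pseudo-inherited factory: calling it was never going to give
  // the caller what they expect, so it throws; @DoNotCall additionally makes
  // the misuse a compile-time error under Error Prone.
  @DoNotCall("Always throws; use forPort(int, ServerCredentials) instead")
  public static SecureServerBuilderSketch forPort(int port) {
    throw new UnsupportedOperationException("use forPort(int, ServerCredentials)");
  }

  public static SecureServerBuilderSketch forPort(int port, ServerCredentials creds) {
    return new SecureServerBuilderSketch();
  }
}
```
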
This change was suggested by a lint tool for XdsServerBuilder and it
seems appropriate so I applied it to the other similar cases I could
find.
The pom.properties are apparently present to allow tooling to know what
Maven artifact corresponds to a JAR, just by looking at the JAR. Since
we shade Netty, that produces inaccurate results. This was noticed
in #8077.
`OkHttpClientTransport.ClientFrameHandler` will fail a stream with `Status.UNAVAILABLE.withDescription("End of stream or IOException")` when the socket is closed with an error. However, it does not include any further error detail. This PR provides more error detail when there is an existing goaway status, e.g. a Netty server can send a goaway with lastKnownStreamId=MAX_INT when the header size exceeds the max allowed size (netty/netty/pull/10775) and shut down the connection.
Test: `io.grpc.okhttp.OkHttpTransportTest.serverChecksInboundMetadataSize` with `netty-4.1.54.Final`
In normal cases, we only have a few header matchers, but the number of headers is completely up to the application. Indexing headers eagerly parses all headers, even those with no matcher matching the key. We should only parse header values whose keys match some header matcher (i.e., only call Metadata.get() with keys that some matcher is looking for).
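
A minimal sketch of the lazy approach (the matcher type is illustrative):

```java
import io.grpc.Metadata;
import java.util.List;

final class LazyHeaderMatchSketch {
  interface HeaderMatcher {
    String headerName();
    boolean matchesValue(String value); // value may be null if absent
  }

  // Only the keys that some matcher is looking for are ever parsed out of
  // the Metadata; headers nobody matches on are never touched.
  static boolean matchesAll(List<HeaderMatcher> matchers, Metadata headers) {
    for (HeaderMatcher matcher : matchers) {
      Metadata.Key<String> key =
          Metadata.Key.of(matcher.headerName(), Metadata.ASCII_STRING_MARSHALLER);
      if (!matcher.matchesValue(headers.get(key))) {
        return false;
      }
    }
    return true;
  }
}
```
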
Kicks off a connection for one of the IDLE subchannels (if any exist) when the ring_hash LB policy is reporting TRANSIENT_FAILURE to its upstream.
While the ring_hash policy is reporting TRANSIENT_FAILURE, it will not be getting any pick requests from the priority policy. However, because the ring_hash policy does not attempt to reconnect to subchannels unless it is getting pick requests, it needs special handling to ensure that it eventually recovers from the TRANSIENT_FAILURE state once the problem is resolved. Specifically, it makes sure that it is attempting to connect (after the applicable backoff period) to at least one subchannel at any given time.
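
A rough sketch of that rule (not the actual RingHashLoadBalancer code):

```java
import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer.Subchannel;
import java.util.Map;

final class RingHashRecoverySketch {
  // While the policy reports TRANSIENT_FAILURE, proactively ask one IDLE
  // subchannel to connect; connection backoff is handled by the subchannel.
  static void maybeKickOffConnection(Map<Subchannel, ConnectivityState> states) {
    for (Map.Entry<Subchannel, ConnectivityState> entry : states.entrySet()) {
      if (entry.getValue() == ConnectivityState.IDLE) {
        entry.getKey().requestConnection();
        return; // attempting one subchannel at a time is sufficient
      }
    }
  }
}
```
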