google-auth-library-java:0.25.0 strips the port and path parts in the
audience claim ("aud"). Updating the test to pass with both the old
and new versions of google-auth-library-java.
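A minimal sketch of the relaxed assertion, with hypothetical values and a hypothetical parseAudClaim helper (the real test inspects the JWT's "aud" claim):
```
// Accept the audience form produced by either library version:
// older versions keep the port and path; 0.25.0 strips them.
String aud = parseAudClaim(token); // hypothetical helper
assertTrue(
    aud.equals("https://example.com:443/v1/service") // pre-0.25.0
        || aud.equals("https://example.com"));       // 0.25.0+
```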
This commit does not upgrade google-auth-library-java because
it turned out that the upgrade pulls in a newer Guava version
(a google-auth-library-java dependency) that fails with DexingNoClasspathTransform.
Details: https://github.com/grpc/grpc-java/pull/8078#issuecomment-821566805
It's technically possible to exclude the newer Guava, but it is
good practice to avoid excluding the newer version of a library.
This change can have a large impact in two respects:
1. It calls out a _large_ impact on the _few_ Java 7 users.
2. It may have _small_ impact on the _many_ Android users.
https://github.com/grpc/grpc-java/issues/4671 tracks gRPC's removal of
Java 7 support. We are quite eager to drop Java 7 support as that would
allow using new language features like default methods. Guava is also
dropping Java 7 support and starting in 30.1 it will warn when used on
Java 7. The purpose of the warning is to help discover users that are
negatively impacted by dropping Java 7 before it becomes a bigger
problem.
The Guava logging check was implemented in such a way that there is an
optional class that uses Java 8 bytecode. While the class is optional at
runtime, the Android build system notices when dexing and fails if
Java 8 language features are not enabled. We believe this will not be a
problem for most Android users, but they may need to add to their build:
```
android {
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
}
```
See also https://github.com/google/guava/releases/tag/v30.1
To separately manage the classes in the services package with and without a protobuf dependency, we are moving the classes with a protobuf dependency into io.grpc.protobuf.services. This includes healthchecking, reflection, channelz, and binlogging.
Forwarding classes are created to avoid breaking existing users; they are marked as deprecated to prompt users to migrate.
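A simplified sketch of the forwarding pattern (the real classes differ in their details):
```
package io.grpc.services;

/**
 * Kept only so existing code keeps compiling after the move.
 *
 * @deprecated Use {@link io.grpc.protobuf.services.HealthStatusManager} instead.
 */
@Deprecated
public class HealthStatusManager extends io.grpc.protobuf.services.HealthStatusManager {}
```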
Fix a bug in StreamBufferingEncoder: when the client receives a GOAWAY while there are streams pending due to MAX_CONCURRENT_STREAMS, we see the following error:
io.netty.handler.codec.http2.Http2Exception$StreamException: Maximum active streams violated for this endpoint.
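A hedged sketch of the intended behavior, with hypothetical names (the actual fix lives in Netty's StreamBufferingEncoder):
```
// On GOAWAY, fail the streams still buffered while waiting for
// MAX_CONCURRENT_STREAMS capacity instead of attempting to create them,
// which is what tripped the "Maximum active streams violated" check.
void onGoAwayReceived(Throwable cause) {
  Iterator<PendingStream> it = pendingStreams.iterator();
  while (it.hasNext()) {
    PendingStream stream = it.next();
    it.remove();
    stream.close(cause); // failed locally; never reaches the connection
  }
}
```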
Update LB policy config generation to support ring hash policy as the endpoint-level LB policy.
- Changed the CDS LB policy to accept RING_HASH as the endpoint LB policy from CDS updates. This configuration is directly passed to its child policy (aka, ClusterResolverLoadBalancer) in its config.
- Changed ClusterResolverLoadBalancer to generate different LB configs for its downstream LB policies, depending on the endpoint-level LB policies.
- If the endpoint-level LB policy is ROUND_ROBIN, the downstream LB policy hierarchy is: PriorityLB -> ClusterImplLB -> WeightedTargetLB -> RoundRobinLB
- If the endpoint-level LB policy is RING_HASH, the downstream LB policy hierarchy is: PriorityLB -> ClusterImplLB -> RingHashLB (see the sketch after this list).
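A rough sketch of the branching, with hypothetical helper names:
```
// In ClusterResolverLoadBalancer: choose the downstream hierarchy based
// on the endpoint-level LB policy received from the CDS update.
Object childConfig;
if ("round_robin".equals(endpointLbPolicy)) {
  // PriorityLB -> ClusterImplLB -> WeightedTargetLB -> RoundRobinLB
  childConfig = priorityConfig(clusterImplConfig(weightedTargetConfig(roundRobinConfig())));
} else { // "ring_hash"
  // PriorityLB -> ClusterImplLB -> RingHashLB
  childConfig = priorityConfig(clusterImplConfig(ringHashConfig()));
}
```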
Currently each subchannel implicitly refreshes the name resolution when its state changes to IDLE or TRANSIENT_FAILURE. That is, this feature is built into the subchannel's internal implementation. Although it eliminates the burden of having LB implementations refresh the resolver when connections to backends are broken, it gives LB policies no chance to disable or override this refresh (e.g., in a complex load balancing hierarchy like xDS, LB policies may embed a resolver inside for resolving backends, so the refresh operation should be hooked to the resolver embedded in the LB policy instead of the one in the Channel).
To make this transition smooth, we add a check to SubchannelImpl that checks whether the LoadBalancer has explicitly called Helper.refreshNameResolution for broken subchannels created by it. If not, it logs a warning and does the refresh.
A temporary LoadBalancer.Helper API ignoreRefreshNameResolution() is added to avoid false-positive warnings for xDS, which intentionally does not want a refresh. Once the migration is done, this API should be deleted.
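A minimal sketch of what an LB policy is now expected to do explicitly, assuming the standard io.grpc LoadBalancer APIs (helper is the policy's LoadBalancer.Helper):
```
// In the subchannel state listener: refresh the resolver explicitly when
// a subchannel breaks, rather than relying on the implicit refresh that
// used to live inside the subchannel.
void onSubchannelState(ConnectivityStateInfo stateInfo) {
  ConnectivityState state = stateInfo.getState();
  if (state == ConnectivityState.IDLE
      || state == ConnectivityState.TRANSIENT_FAILURE) {
    helper.refreshNameResolution();
  }
  // ... continue with normal state handling ...
}
```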
This recently came up in https://stackoverflow.com/a/67062503/4690866,
but it has come up multiple times before. These docs aren't ideal, as
a reader may miss them, so references from other parts of the
API would probably be appropriate. There could also be something about
"Context is not a general-purpose map." But this is an improvement, and
I didn't want to let the perfect be the enemy of the good.
Fixes #8080. The address 0.0.0.0 (which comes from new ServerSocket(0)
.getLocalSocketAddress()) is for listening with a server; it
is not meant to be used as a destination address, as per
"3.2.1.3 Addressing" in RFC 1122.
Fix the following bug:
ManagedChannelImpl.ConfigSelectingClientCall may return early in start(), leaving delegate null, which makes the request() method fail after start().
Currently the bug can only be triggered when using xDS.
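A hedged sketch of the failure mode and fix (simplified; the names only approximate the real code):
```
// Before: start() could fail the RPC and return without ever assigning
// `delegate`, so a later request(n) dereferenced null.
@Override
public void start(Listener<RespT> observer, Metadata headers) {
  Result result = configSelector.selectConfig(args);
  Status status = result.getStatus();
  if (!status.isOk()) {
    delegate = new NoopClientCall<>(); // fix: never leave delegate null
    observer.onClose(status, new Metadata());
    return;
  }
  // ... normal path assigns the real delegate and starts it ...
}
```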
In the ring hash LB policy, building the ring is computationally heavy. Although using a larger ring makes the RPC distribution closer to the actual weights of hosts, it takes a long time to finish the test.
Internally, each test class is expected to finish within 1 minute, while each of the test cases for testing pick distribution takes about 30 sec. By reducing the ring size by a factor of 10, the time spent on those test cases drops to 1-2 seconds. We now need a larger tolerance for the distribution (three hosts with weights 1:10:100, for which the ideal split of 10000 RPCs is about 90 : 901 : 9009):
- With a ring size of 100000, the distribution of 10000 RPCs is close to 91 : 866 : 9043
- With a ring size of 10000, the distribution of 10000 RPCs is close to 104 : 808 : 9088
Roughly, this is still acceptable.
Implementation of the ring hash LB policy: a LoadBalancer that provides consistent-hashing-based load balancing to upstream hosts, using the "Ketama" hashing that maps hosts onto a circle (the "ring") by hashing their addresses. Each request is routed to a host by hashing some property of the request and finding the nearest corresponding host clockwise around the ring. Each host is placed on the ring some number of times proportional to its weight. With the ring partitioned appropriately, the addition or removal of one host from a set of N hosts will affect only 1/N of requests.
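A minimal sketch of the Ketama mapping, with hypothetical types (the real policy hashes with xx_hash and operates on weighted subchannels):
```
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

class RingSketch {
  private final NavigableMap<Long, String> ring = new TreeMap<>();

  void addHost(String address, int weight) {
    // Each host appears on the ring a number of times proportional to
    // its weight.
    for (int i = 0; i < weight; i++) {
      ring.put(hash(address + "_" + i), address);
    }
  }

  String pick(long requestHash) {
    // Nearest host clockwise: the first entry at or above the request's
    // hash, wrapping around to the start of the ring.
    Map.Entry<Long, String> entry = ring.ceilingEntry(requestHash);
    return (entry != null ? entry : ring.firstEntry()).getValue();
  }

  private static long hash(String s) {
    return s.hashCode() & 0xffffffffL; // stand-in for the real xx_hash
  }
}
```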
Enhance the error information reflected by the RPC status when failing to fall back (i.e., no fallback addresses were provided by the resolver) by including the original cause of entering fallback. Cases that trigger fallback:
- When the fallback timer fires before we have received the first response from the balancer.
  - If no fallback addresses are found, RPCs will fail with status {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n Timeout waiting for remote balancer", cause=null}
- When the balancer RPC finishes before receiving any backend addresses.
  - If no fallback addresses are found, RPCs will fail with status {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n <description from the status of the balancer RPC>", cause=<cause from the status of the balancer RPC>}
- When we get an explicit response from the balancer telling us to go into fallback.
  - If no fallback addresses are found, RPCs will fail with status {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n Fallback requested by balancer", cause=null}
- When the balancer call has finished *and* we cannot connect to any of the backends in the last response we received from the balancer.
  - Whichever of the two happened last is the reason for entering fallback. If no fallback addresses are found, RPCs will fail with status {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n <description from the status of the balancer RPC>", cause=<cause from the status of the balancer RPC>} or {UNAVAILABLE, description="Unable to fallback, no fallback addresses found\n <description from the status of one of the broken subchannels>", cause=<cause from the status of one of the broken subchannels>}
Note that all RPCs will fail with the UNAVAILABLE status code; the fallback reason is attached as the description and the cause (if any).
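A minimal sketch of folding the fallback reason into the RPC status, using io.grpc.Status (fallbackReason is assumed to be the Status captured when entering fallback):
```
// Build the status every pending/new RPC is failed with.
Status rpcStatus = Status.UNAVAILABLE
    .withDescription("Unable to fallback, no fallback addresses found\n"
        + fallbackReason.getDescription())
    .withCause(fallbackReason.getCause());
```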
Tested with the interop client on Zulu 8 and Zulu 11 with
-XX:+UseOpenJSSE (after disabling tcnative). I was unable to add a new
case to TlsTest because adding OpenJSSE as a dependency in a Gradle
build fails: https://github.com/openjsse/openjsse/issues/19
Fixes #7907
Fixes inconsistent state of XdsNameResolver with most recently received xDS configurations. The full suite of configurations should always be updated whenever receiving new resource updates, including updates that revoke currently in-use resources. Reference counts for currently in-use clusters should also be cleaned up properly. Otherwise, re-receiving (after being revoked) the same resource can be treated as if the configuration never changed.
This just adds the ServiceBinding class and
BindServiceFlags, internal utils.
Most binderchannel code relies heavily on Java 8 features,
so I'm keeping that requirement, since grpc-java plans to
require Java 8 eventually anyway.
okio 2.x is ABI compatible with 1.x but not API compatible. This hasn't
been a problem, as users consume binaries from Maven Central, so ABI
compatibility is the important part. However, when building with Bazel,
API compatibility is the important part.
Tested with okio 2.10.0
Fixes #8004
GoogleDefaultChannelCredentials and ComputeEngineChannelCredentials are effectively the same thing for DirectPath; both should behave the same when choosing the protocol negotiator for talking to backends given by Traffic Director.
Updates TestServiceClient to support creating channel without target port (mostly useful for xDS that uses the channel target as the resource name).
Adds an env var for overriding the TD URI used in cloud-to-prod test.
Initially we were going to support both xx_hash and murmur_hash, but later we decided to support xx_hash only. So any hashing we use in xDS will be xx_hash, and we do not need to define and carry this configuration around.