The pom.properties files are apparently present to allow tooling to
determine which Maven artifact corresponds to a JAR, just by looking at
the JAR. Since we shade Netty, that produces inaccurate results. This
was noticed in #8077.
`OkHttpClientTransport.ClientFrameHandler` fails a stream with `Status.UNAVAILABLE.withDescription("End of stream or IOException")` when the socket is closed with an error, but it does not include any further error detail. This PR provides more error detail when there is an existing goaway status, e.g. a netty server can send a goaway with lastKnownStreamId=MAX_INT when the header size exceeds the max allowed size (netty/netty/pull/10775) and then shut down the connection.
Test: `io.grpc.okhttp.OkHttpTransportTest.serverChecksInboundMetadataSize` with `netty-4.1.54.Final`
In normal cases we only have a few header matchers, but the number of headers is entirely up to the application. Indexing headers eagerly parses all headers, even those whose keys no matcher matches. We should only parse header values for keys that a header matcher is interested in (i.e., only call Metadata.get() with keys that some matcher is looking for).
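A minimal sketch of the intended lookup pattern (the matcher representation here is illustrative, not the actual xDS classes):
```
import java.util.Map;
import java.util.function.Predicate;

import io.grpc.Metadata;

// Illustrative only: value predicates are keyed by header name, and
// Metadata.get() is called only for those names, so headers that no matcher
// cares about are never parsed. (Binary "-bin" headers would need a different
// marshaller; omitted for brevity.)
final class HeaderMatching {
  static boolean matchesAll(Map<String, Predicate<String>> matchers, Metadata headers) {
    for (Map.Entry<String, Predicate<String>> entry : matchers.entrySet()) {
      Metadata.Key<String> key =
          Metadata.Key.of(entry.getKey(), Metadata.ASCII_STRING_MARSHALLER);
      if (!entry.getValue().test(headers.get(key))) {
        return false;
      }
    }
    return true;
  }
}
```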
Kicks off a connection for one of the IDLE subchannels (if any exist) when the ring_hash LB policy is reporting TRANSIENT_FAILURE to its upstream.
While the ring_hash policy is reporting TRANSIENT_FAILURE, it will not be getting any pick requests from the priority policy. However, because the ring_hash policy does not attempt to reconnect to subchannels unless it is getting pick requests, it needs special handling to ensure that it eventually recovers from the TRANSIENT_FAILURE state once the problem is resolved. Specifically, it makes sure that it is attempting to connect (after the applicable backoff period) to at least one subchannel at any given time.
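A rough sketch of the idea (simplified, not the actual RingHashLoadBalancer code):
```
import java.util.List;
import java.util.Map;

import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer.Subchannel;

// Simplified illustration: while the policy as a whole is in TRANSIENT_FAILURE,
// ask one IDLE subchannel to connect, since no picks will arrive to trigger
// connection attempts.
final class RingHashConnectionKicker {
  static void kickOffOneConnection(
      ConnectivityState overallState,
      List<Subchannel> subchannels,
      Map<Subchannel, ConnectivityState> subchannelStates) {
    if (overallState != ConnectivityState.TRANSIENT_FAILURE) {
      return;
    }
    for (Subchannel subchannel : subchannels) {
      if (subchannelStates.get(subchannel) == ConnectivityState.IDLE) {
        // Requesting a connection on an IDLE subchannel starts a (re)connection
        // attempt; the applicable backoff is handled inside the subchannel.
        subchannel.requestConnection();
        break;
      }
    }
  }
}
```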
Inject a standalone Context, independent of application RPCs, into GrpclbLoadBalancer for control-plane RPCs. Control-plane RPCs should be independent and not affected by the lifetime of the Context used for application RPCs.
Control-plane RPCs are independent of application RPCs and can have completely different lifetimes, so the Context used for making application RPCs should not be propagated to control-plane RPCs. This change makes control-plane RPCs use the ROOT Context.
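A minimal sketch of the pattern (the stub call is only indicated in a comment):
```
import io.grpc.Context;

final class ControlPlaneRpcStarter {
  // Illustrative only: run the code that starts the control-plane RPC under
  // Context.ROOT so it is not tied to the deadline/cancellation of whatever
  // application Context happens to be current.
  static void startUnderRootContext(Runnable startLbRpc) {
    Context previous = Context.ROOT.attach();
    try {
      startLbRpc.run();  // e.g. a lambda that calls lbStub.balanceLoad(...)
    } finally {
      Context.ROOT.detach(previous);
    }
  }
}
```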
LoadBalancers should not propagate balancing state updates after they have been shut down.
For LB policies that maintain a group of child LB policies, each with its own independent lifetime, balancing state updates from a child LB policy can easily outlive its parent, especially when the update is queued and not propagated up inline.
For LBs that are simple pass-throughs in the middle of the LB tree, this isn't a big issue, since their lifecycle is the same as their child's. Transitively, they behave correctly as long as their downstream does the right thing.
This change is a sanity cleanup for LB policies that maintain multiple child LB policies, preserving the invariant that balancing state updates from their child policies are no longer propagated after shutdown.
Similar to 368c43aec4.
Clean up subchannels after the RingHashLoadBalancer itself is shut down to prevent further balancing state updates from being propagated to the upstream.
Note this should not be considered a fix for any problem anybody is noticing. Upstreams of RingHashLoadBalancer should not rely on this; they should still have their own logic for managing the lifecycle of the downstream LB and ignore invalid upcalls when necessary.
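The invariant these cleanups preserve can be sketched as follows (a simplified illustration, not the actual grpc-java code):
```
import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer.Helper;
import io.grpc.LoadBalancer.SubchannelPicker;

// Simplified illustration: once the parent policy is shut down, balancing
// state updates arriving (possibly asynchronously) from child policies are
// dropped instead of being forwarded to the upstream helper.
final class ShutdownGuard {
  private final Helper upstreamHelper;
  private boolean shutdown;

  ShutdownGuard(Helper upstreamHelper) {
    this.upstreamHelper = upstreamHelper;
  }

  void updateBalancingState(ConnectivityState state, SubchannelPicker picker) {
    if (shutdown) {
      return;  // do not propagate updates after shutdown
    }
    upstreamHelper.updateBalancingState(state, picker);
  }

  void shutdown() {
    shutdown = true;
  }
}
```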
When creating the URI from the Channel authority for instantiating a DNS resolver in the cluster_resolver LB policy, a "dns" scheme needs to be attached manually and the Channel authority used as the URI path (the same as when creating a Channel with a target). Otherwise, the Channel authority is used as the scheme, causing the name resolver to not be found.
The change also handles the name resolver lookup more defensively. Although it should not happen, if a bug prevents the DNS resolver from being loaded, the cluster_resolver LB policy propagates an INTERNAL error to the upstream.
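The URI construction being described looks roughly like this (a sketch under assumed details, not a verbatim copy of the change):
```
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative only: turn a Channel authority such as "backends.example.com:443"
// into a target URI the DNS resolver can handle ("dns:///backends.example.com:443"),
// rather than letting the authority be misinterpreted as a URI scheme.
final class DnsTargetUris {
  static URI dnsTargetUri(String channelAuthority) {
    try {
      // scheme = "dns", empty authority, path = "/" + channelAuthority
      return new URI("dns", "", "/" + channelAuthority, null);
    } catch (URISyntaxException e) {
      throw new IllegalArgumentException("Invalid authority: " + channelAuthority, e);
    }
  }
}
```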
Triggering balancing state updates after round_robin has already been shut down can be confusing for its upstream. If the caller does not manage round_robin's lifecycle (e.g., does not ignore updates after it shuts round_robin down, which it should), the problem can become much worse, especially since round_robin actually propagates TRANSIENT_FAILURE with a picker that buffers RPCs.
This change only polishes round_robin to always preserve its invariant. Callers/LBs should not rely on this and should still handle balancing updates from their downstream correctly based on the downstream's lifetime.
pip 21.1, released on Apr 24, introduced a regression for Python 3.6.1.
The regression was identified on Apr 24 and the fix merged on Apr 25.
The fix is expected to be delivered in the 21.1.1 patch release.
There's no clear date for when 21.1.1 will be released.
Until then, pip is temporarily pinned to the previous release, 21.0.1.
Back in 10fb206, when this for loop was added, we didn't have the
subprojects list. That was added in 9bd7bab, but I failed to update one
reference to rootProject. So the all project has had an evaluation
dependency on every project, even though it only needs a subset.
This should have little impact, but would prevent weird scenarios like
an issue in :grpc-gae-interop-testing-jdk8 preventing :all-all from
being loaded. Not to say things wouldn't still fail to load, but that
this bug could distract from the real problem. I noticed this
during #8049.
google-auth-library-java:0.25.0 strips the port and path parts of the
audience claim ("aud"). Updating the test to pass with both the old
and new versions of google-auth-library-java.
This commit does not upgrade google-auth-library-java because
it turned out that the upgrade involves the newer Guava version
(google-auth-library-java's dependency) failing with DexingNoClasspathTransform.
Details: https://github.com/grpc/grpc-java/pull/8078#issuecomment-821566805
It's technically possible to exclude the newer Guava, but it's good
practice to avoid excluding the newer version of a library.
This change can have a large impact in two ways:
1. It calls out a _large_ impact on the _few_ Java 7 users.
2. It may have a _small_ impact on the _many_ Android users.
https://github.com/grpc/grpc-java/issues/4671 tracks gRPC's removal of
Java 7 support. We are quite eager to drop Java 7 support as that would
allow using new language features like default methods. Guava is also
dropping Java 7 support and starting in 30.1 it will warn when used on
Java 7. The purpose of the warning is to help discover users that are
negatively impacted by dropping Java 7 before it becomes a bigger
problem.
The Guava logging check was implemented in such a way that there is an
optional class that uses Java 8 bytecode. While the class is optional at
runtime, the Android build system notices it when dexing and fails if
Java 8 language features are not enabled. We believe this will not be a
problem for most Android users, but they may need to add to their build:
```
android {
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}
```
See also https://github.com/google/guava/releases/tag/v30.1
To separately manage services/classes with and without a protobuf dependency in the services package, we are moving classes with a protobuf dependency into io.grpc.protobuf.services. This includes health checking, reflection, channelz, and binary logging.
Forwarding classes are created to avoid breaking existing users, and they are marked as deprecated to prompt users to migrate.
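The forwarding pattern looks roughly like this (class names here are hypothetical, purely for illustration):
```
package io.grpc.services;

/**
 * Hypothetical example of the forwarding pattern: the implementation now lives
 * in io.grpc.protobuf.services, and the old location only delegates to it.
 *
 * @deprecated Use {@code io.grpc.protobuf.services.SomeProtoService} instead.
 */
@Deprecated
public final class SomeProtoService {
  private SomeProtoService() {}

  /** Forwards to the relocated implementation. */
  public static io.grpc.ServerServiceDefinition newService() {
    return io.grpc.protobuf.services.SomeProtoService.newService();
  }
}
```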
Fix a bug in StreamBufferingEncoder: when the client receives a GOAWAY while there are streams pending due to MAX_CONCURRENT_STREAMS, we see the following error:
io.netty.handler.codec.http2.Http2Exception$StreamException: Maximum active streams violated for this endpoint.
Update LB policy config generation to support the ring hash policy as the endpoint-level LB policy.
- Changed the CDS LB policy to accept RING_HASH as the endpoint LB policy from CDS updates. This configuration is passed directly to its child policy (aka, ClusterResolverLoadBalancer) in its config.
- Changed ClusterResolverLoadBalancer to generate different LB configs for its downstream LB policies, depending on the endpoint-level LB policy:
  - If the endpoint-level LB policy is ROUND_ROBIN, the downstream LB policy hierarchy is: PriorityLB -> ClusterImplLB -> WeightedTargetLB -> RoundRobinLB.
  - If the endpoint-level LB policy is RING_HASH, the downstream LB policy hierarchy is: PriorityLB -> ClusterImplLB -> RingHashLB.
Currently each subchannel implicitly refreshes the name resolution when its state changes to IDLE or TRANSIENT_FAILURE. That is, this feature is built into the subchannel's internal implementation. Although it relieves LB implementations of the burden of refreshing the resolver when connections to backends are broken, it gives LB policies no chance to disable or override the refresh (e.g., in a complex load balancing hierarchy like xDS, an LB policy may embed a resolver for resolving backends, so the refresh should be hooked to the resolver embedded in the LB policy instead of the one in the Channel).
To make this transition smooth, we add a check to SubchannelImpl that verifies whether the LoadBalancer has explicitly called Helper.refreshNameResolution for broken subchannels it created. If not, it logs a warning and does the refresh.
A temporary LoadBalancer.Helper API, ignoreRefreshNameResolution(), is added to avoid false-positive warnings for xDS, which intentionally does not want a refresh. Once the migration is done, this should be deleted.
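For LB policies that do want the Channel's resolver refreshed, the explicit call looks roughly like this (a simplified sketch, not the actual implementation):
```
import io.grpc.ConnectivityState;
import io.grpc.ConnectivityStateInfo;
import io.grpc.LoadBalancer.Helper;
import io.grpc.LoadBalancer.SubchannelStateListener;

// Simplified sketch: the LB policy itself refreshes name resolution when a
// subchannel it created breaks, instead of relying on the implicit refresh
// built into the subchannel (which now logs a warning).
final class RefreshOnBrokenSubchannel implements SubchannelStateListener {
  private final Helper helper;

  RefreshOnBrokenSubchannel(Helper helper) {
    this.helper = helper;
  }

  @Override
  public void onSubchannelState(ConnectivityStateInfo stateInfo) {
    ConnectivityState state = stateInfo.getState();
    if (state == ConnectivityState.IDLE || state == ConnectivityState.TRANSIENT_FAILURE) {
      helper.refreshNameResolution();
    }
    // ... the policy would also update its picker here ...
  }
}
```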
This recently came up in https://stackoverflow.com/a/67062503/4690866,
but it has come up multiple times before. These docs aren't ideal, as
they may be missed by a reader and so references in other parts of the
API would probably be appropriate. There could also be something about
"Context is not a general purpose map." But this is an improvement, and
I didn't want to let the perfect be the enemy of the good.
Fixes #8080. The address 0.0.0.0 (which comes from
new Socket(0).getLocalSocketAddress()) is for listening with a server,
but it is not meant to be used as the destination address, per
"3.2.1.3 Addressing" in RFC 1122.
Fix the following bug:
ManagedChannelImpl.ConfigSelectingClientCall may return early in start(), leaving the delegate null, which causes the request() method to fail after start().
Currently the bug can only be triggered when using xDS.
In the ring hash LB policy, building the ring is computationally heavy. Although a larger ring makes the RPC distribution closer to the actual weights of the hosts, it makes the test take a long time to finish.
Internally, each test class is expected to finish within 1 minute, while each of the test cases for pick distribution takes about 30 seconds. By reducing the ring size by a factor of 10, the time spent on those test cases drops to 1-2 seconds. We now need a larger tolerance for the distribution (three hosts with weights 1:10:100):
- With a ring size of 100000, the 10000 RPCs distribution is close to 91 : 866 : 9043
- With a ring size of 10000, the 10000 RPCs distribution is close to 104 : 808 : 9088
Roughly, this is still acceptable.
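For reference, with weights 1:10:100 (total weight 111), a perfectly weight-proportional split of 10000 RPCs would be about 90 : 901 : 9009, since 10000/111 ≈ 90.1.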
Implementation of the ring hash LB policy: a LoadBalancer that provides consistent-hashing-based load balancing to upstream hosts, using the "Ketama" hashing that maps hosts onto a circle (the "ring") by hashing their addresses. Each request is routed to a host by hashing some property of the request and finding the nearest corresponding host clockwise around the ring. Each host is placed on the ring a number of times proportional to its weight. With the ring partitioned appropriately, the addition or removal of one host from a set of N hosts affects only 1/N of the requests.
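A condensed sketch of the ring construction and lookup (illustrative only; the real policy uses a stronger 64-bit hash and its own data types):
```
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative Ketama-style ring: each host is hashed onto the ring a number
// of times proportional to its weight, and a request is served by the first
// host entry at or after the request's hash, wrapping around at the end.
// Assumes a non-empty address list with one weight per address.
final class ConsistentHashRing {
  private final TreeMap<Long, String> ring = new TreeMap<>();

  ConsistentHashRing(List<String> addresses, List<Integer> weights, int entriesPerUnitWeight) {
    for (int i = 0; i < addresses.size(); i++) {
      String address = addresses.get(i);
      int entries = weights.get(i) * entriesPerUnitWeight;
      for (int j = 0; j < entries; j++) {
        // The real policy uses a proper 64-bit hash of the address plus a
        // counter; a simple polynomial hash is used here purely for illustration.
        ring.put(hash(address + "_" + j), address);
      }
    }
  }

  String pick(long requestHash) {
    // Nearest host clockwise: the first entry with key >= requestHash, or wrap
    // around to the first entry on the ring.
    Map.Entry<Long, String> entry = ring.ceilingEntry(requestHash);
    return entry != null ? entry.getValue() : ring.firstEntry().getValue();
  }

  private static long hash(String s) {
    long h = 1125899906842597L;
    for (int i = 0; i < s.length(); i++) {
      h = 31 * h + s.charAt(i);
    }
    return h;
  }
}
```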