Starting in Netty 4.1.60, Netty will validate Content-Length headers
using getAll() and setLong(). While getAll() was documented as being used
only in tests, it doesn't appear it was actually used by any tests.
While Http2NettyTest.contentLengthPermitted() was added to confirm that
Content-Length works, it won't actually exercise any interesting
behavior until we upgrade to Netty 4.1.60. However, I did test with
Netty 4.1.60 and it reproduced the failure in
https://github.com/grpc/grpc-java/issues/7953 and passed with this
change.
Since Netty is now observing/modifying the headers, it would seem
appropriate to implement a substantial portion of the Http2Headers API.
However, the surface is much larger than we'd want to implement for a
'quick fix' that could be backported. In addition, it seems much of the
API is just convenience methods, so it is probably appropriate to split
out an AbstractHeaders class from DefaultHeaders in Netty that doesn't
make any assumptions about the header storage mechanism.
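For reference, a minimal sketch of the getAll() lookup that Netty 4.1.60 starts relying on for Content-Length validation. This is not grpc-java's actual header class; the flat namesAndValues storage is an assumption made for illustration:

    import io.netty.util.AsciiString;
    import java.util.ArrayList;
    import java.util.List;

    final class HeadersGetAllSketch {
      // Assumed storage layout: even indexes hold header names, odd indexes hold values.
      private final AsciiString[] namesAndValues;

      HeadersGetAllSketch(AsciiString[] namesAndValues) {
        this.namesAndValues = namesAndValues;
      }

      // Netty's Content-Length validation asks for every value stored under a name.
      public List<CharSequence> getAll(CharSequence name) {
        List<CharSequence> values = new ArrayList<>();
        AsciiString key = AsciiString.of(name);
        for (int i = 0; i < namesAndValues.length; i += 2) {
          if (key.contentEqualsIgnoreCase(namesAndValues[i])) {
            values.add(namesAndValues[i + 1]);
          }
        }
        return values;
      }
    }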
Turn the LB state into TRANSIENT_FAILURE if the list of backend addresses to be used is empty. This applies both when the address list is given by the balancer and when it is given by the resolver (i.e., the fallback list).
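A minimal sketch of the idea (not the actual grpclb code; the method and picker below are illustrative only):

    import io.grpc.ConnectivityState;
    import io.grpc.EquivalentAddressGroup;
    import io.grpc.LoadBalancer.Helper;
    import io.grpc.LoadBalancer.PickResult;
    import io.grpc.LoadBalancer.PickSubchannelArgs;
    import io.grpc.LoadBalancer.SubchannelPicker;
    import io.grpc.Status;
    import java.util.List;

    final class EmptyAddressHandlingSketch {
      static void onBackendAddresses(Helper helper, List<EquivalentAddressGroup> backends) {
        if (backends.isEmpty()) {
          // Fail RPCs right away instead of leaving them pending indefinitely.
          final Status error = Status.UNAVAILABLE.withDescription("no usable backend addresses");
          helper.updateBalancingState(
              ConnectivityState.TRANSIENT_FAILURE,
              new SubchannelPicker() {
                @Override
                public PickResult pickSubchannel(PickSubchannelArgs args) {
                  return PickResult.withError(error);
                }
              });
          return;
        }
        // ... otherwise create or update subchannels for the given addresses ...
      }
    }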
Using the new credentials API lets us use generic APIs instead of
Netty-specific ones and makes it possible to use grpc-netty-shaded. The
new API is still marked experimental, but it is intended to be
stabilized, which isn't the case for the Netty-specific API.
The client now looks more similar to HelloWorldClient.
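A sketch along the lines of the updated client, assuming the example's localhost:50051 target; the CA certificate path is a placeholder:

    import io.grpc.Grpc;
    import io.grpc.ManagedChannel;
    import io.grpc.TlsChannelCredentials;
    import java.io.File;
    import java.io.IOException;

    public class TlsClientSketch {
      public static void main(String[] args) throws IOException {
        // Transport-agnostic TLS credentials; work with grpc-netty and grpc-netty-shaded alike.
        TlsChannelCredentials.Builder tls = TlsChannelCredentials.newBuilder();
        if (args.length > 0) {
          tls.trustManager(new File(args[0])); // optional CA certificate for a test server
        }
        ManagedChannel channel =
            Grpc.newChannelBuilderForAddress("localhost", 50051, tls.build()).build();
        // ... create a stub on the channel, make RPCs, then channel.shutdown() ...
      }
    }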
We should treat both IDLE and CONNECTING subchannels as "connection in progress" when aggregating for the overall load balancing state. Otherwise, RPCs could fail prematurely if one subchannel enters TF while all the others are still in CONNECTING.
23d279660c made each individual subchannel stay in TF until READY if it previously was in TF, so subchannels in the CONNECTING state are those connecting for the first time. We should give them time to connect.
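A minimal sketch of the aggregation rule (illustrative only, not the actual implementation):

    import io.grpc.ConnectivityState;
    import java.util.Collection;

    final class StateAggregationSketch {
      static ConnectivityState aggregate(Collection<ConnectivityState> subchannelStates) {
        boolean connectionInProgress = false;
        for (ConnectivityState state : subchannelStates) {
          if (state == ConnectivityState.READY) {
            return ConnectivityState.READY;
          }
          // IDLE counts as "connection in progress" too: those subchannels are still
          // making their first connection attempt.
          if (state == ConnectivityState.CONNECTING || state == ConnectivityState.IDLE) {
            connectionInProgress = true;
          }
        }
        return connectionInProgress
            ? ConnectivityState.CONNECTING
            : ConnectivityState.TRANSIENT_FAILURE;
      }
    }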
When the LB stream has been closed and a retry task is scheduled, receiving a ResolvedAddresses update with LB addresses immediately creates a new LB RPC stream. Then, when the retry task fires, an LB stream already exists.
This change cancels the retry task when the address update causes a new LB stream to be created.
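A rough sketch of the fix (field and method names are hypothetical):

    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    final class LbStreamRetrySketch {
      private final ScheduledExecutorService timer;
      private ScheduledFuture<?> retryTask; // pending "re-open the LB stream" task, if any

      LbStreamRetrySketch(ScheduledExecutorService timer) {
        this.timer = timer;
      }

      // LB stream closed: schedule a retry after the backoff delay.
      void onLbStreamClosed(long delayNanos) {
        retryTask = timer.schedule(this::startLbStream, delayNanos, TimeUnit.NANOSECONDS);
      }

      // Address update with LB addresses: it starts a stream now, so a pending retry is redundant.
      void onLbAddressesUpdated() {
        if (retryTask != null) {
          retryTask.cancel(false);
          retryTask = null;
        }
        startLbStream();
      }

      private void startLbStream() {
        // ... open a new LB RPC stream to the balancer ...
      }
    }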
We find codecov.io generally more useful than coveralls.io, so make it a
bit easier to dig into the coverage.
We put it second simply because when people think "test coverage"
without any other details they (rightly) generally assume line coverage;
condition coverage is generally a separate metric. Codecov uses a
combined line+condition coverage metric which is pretty nice, but
if you are unfamiliar with it, it makes code coverage appear lower
than it actually is.
This reverts commit 61e0f30 (#7798).
Our stub/core implementation had a bug (#7921) that might make it possible to leak cancellation through to the executor multiple times, typically when a custom interceptor is used. We are reverting this commit because importing gRPC into Google's internal repository fails; we think the bug found by the import is legitimate and in our own code, so we revert this change to avoid hurting users until the underlying issue is fixed.
Implemented CloudToProdNameResolver, which will be used for DirectPath with the URI scheme "google-c2p". The resolver is only a wrapper that delegates name resolution to either the DNS or the xDS resolver, depending on the environment. If it is delegating to the xDS resolver, it sends HTTP requests (to a local HTTP server) to fetch metadata that is used to generate a bootstrap config. The self-generated bootstrap will be used for xDS.
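Rough shape of the delegation decision (helper names are hypothetical; the real resolver also queries the metadata server over HTTP to build the bootstrap):

    final class C2pDelegationSketch {
      // Returns which resolver scheme "google-c2p" should delegate to.
      static String chooseDelegate(boolean runningOnGcp, boolean bootstrapAlreadyProvided) {
        if (!runningOnGcp || bootstrapAlreadyProvided) {
          return "dns"; // plain DNS resolution
        }
        // Fetch metadata, self-generate an xDS bootstrap config, then delegate to xDS.
        return "xds";
      }
    }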
If a handshake is ongoing during shutdown, this would substantially
reduce the time it takes to shut down. Previously, you would need to use
channel.shutdownNow() to get fast shutdown behavior, which is an
unnecessary use of that more forceful variant.
When the current approach was written, WriteBufferingAndExceptionHandler
didn't exist, so it was hard to predict how the pipeline would react
to events (particularly because of the HTTP/2 handler's redefinition of
close()). Now that WBAEH exists, this is more straightforward.
Implement a simple allocation-free xx_hash utility class without using sun.misc.Unsafe. The hash function mainly targets the xDS use case, which mostly involves small strings (endpoint addresses, headers, etc.) and primitive types.
In gRPC's use case, string characters need to be treated as ASCII so that the produced hash values match what other implementations (Envoy, gRPC-Go, C-core, etc.) produce.
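A sketch of that treatment (hashAsciiString and hashBytes are hypothetical names; the real utility hashes the characters in place so it stays allocation-free, unlike this sketch):

    final class AsciiHashSketch {
      static long hashAsciiString(String s, long seed) {
        byte[] bytes = new byte[s.length()];
        for (int i = 0; i < s.length(); i++) {
          bytes[i] = (byte) s.charAt(i); // keep only the low 8 bits of each char
        }
        return hashBytes(bytes, seed);
      }

      private static long hashBytes(byte[] input, long seed) {
        // ... xxHash64 over the bytes, as in OpenHFT's XxHash ...
        throw new UnsupportedOperationException("sketch only");
      }
    }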
The hashing implementation and tests are borrowed from OpenHFT's XxHash implementation (https://github.com/OpenHFT/Zero-Allocation-Hashing/blob/master/src/main/java/net/openhft/hashing/XxHash.java, see commit 658079a50903c32c54f2ab5c86243244b3ac60ed), which is under Apache 2.0 license. For more details, see https://github.com/OpenHFT/Zero-Allocation-Hashing.
The code is placed in the third_party directory with LICENSE and NOTICE files.
The serviceName field in the oobChannel grpclb config should not be null; otherwise it will default to lbHelper.getAuthority(), which previously defaulted to the lookup service before #7852 but has been overridden to the backend service for authentication in #7852.