Since Netty v4.1.67, content-length header validation is enforced. So once gRPC upgrades Netty to that version or above, RPCs with an invalid content-length header will fail.
Some libraries, such as HTTP-to-gRPC adapters, blindly copy all HTTP headers to gRPC metadata, but the content-length header is one that shouldn't be forwarded because gRPC uses a different encoding. This mistake has existed for a long time.
Discard outbound content-length headers in gRPC, so that users who hit the invalid content-length issue when upgrading grpc-java on the server/client side can work around it by also upgrading grpc-java on the client/server side, without fixing the HTTP adapter.
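For illustration, this is roughly the effect of the fix, expressed as a client interceptor an adapter could have used to avoid forwarding the header; the interceptor and key names here are hypothetical and not part of the actual change.
```
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

final class StripContentLengthInterceptor implements ClientInterceptor {
  private static final Metadata.Key<String> CONTENT_LENGTH =
      Metadata.Key.of("content-length", Metadata.ASCII_STRING_MARSHALLER);

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
      @Override
      public void start(Listener<RespT> responseListener, Metadata headers) {
        // The HTTP content-length never matches the gRPC-encoded body, so drop it.
        headers.discardAll(CONTENT_LENGTH);
        super.start(responseListener, headers);
      }
    };
  }
}
```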
- Removes CallCredentials2
- Removes CallCredentials2ApplyingTest
- Adds two tests from CallCredentials2ApplyingTest to CallCredentialsApplyingTest
- Updates GoogleAuthLibraryCallCredentials to extend CallCredentials instead of CallCredentials2
This change ensures that the channel will remain in idle mode if there
are only calls on the real transport. Idle mode will be exited if there
are calls in the delayed transport, to allow them to be processed.
The semantics around cancel vary slightly between ServerCall and CancellableContext: the context should always be cancelled regardless of the outcome of the call, while the ServerCall should be cancelled only on a non-OK status.
This fixes a bug where the ServerCall was always marked cancelled regardless of call status.
Fixes #5882
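A minimal sketch of the resulting behavior, using hypothetical names (the real bookkeeping lives in the server call implementation):
```
import io.grpc.Context.CancellableContext;
import io.grpc.Status;

final class CallCompletionHandler {
  private final CancellableContext context;
  private boolean callCancelled;

  CallCompletionHandler(CancellableContext context) {
    this.context = context;
  }

  void onCallClosed(Status status) {
    // Only a non-OK outcome marks the ServerCall itself as cancelled.
    if (!status.isOk()) {
      callCancelled = true;
    }
    // The per-call context is always cancelled so resources tied to it are released.
    context.cancel(status.getCause());
  }
}
```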
Stabilize `enableRetry()` and `disableRetry()`.
Disable retry in `ManagedChannelImplTest`, because with retry enabled each call attempt forks the headers to a new instance and adds a ClientStreamTracer.Factory for bufferSizeLimit to the CallOptions, which makes verification less straightforward.
There has been an issue with flow control when retry is enabled.
Currently we call `masterListener.onReady()` whenever `substreamListener.onReady()` is called.
The user's `onReady()` implementation might do
```
while (observer.isReady()) {
  // send one more message
}
```
However, if the `RetriableStream` is still draining, `isReady()` is currently false and the user's `onReady()` exits immediately. And because `substreamListener.onReady()` has already been called, it may not be called again after draining completes.
This PR fixes the issue (see the sketch after this list) by:
- Using a SerializingExecutor to invoke all `masterListener` callbacks.
- Once `RetriableStream` is drained, checking `isReady()` and, if it returns true, calling `onReady()`.
- Once `substreamListener.onReady()` is called, checking `isReady()` and calling `masterListener.onReady()` only if it returns true.
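A minimal sketch of the guarded forwarding, with hypothetical names; the real RetriableStream tracks more state (draining, committed substream, etc.):
```
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.function.BooleanSupplier;

final class OnReadyForwarder {
  // Serializes all master-listener callbacks so they never run concurrently.
  private final Executor serializingExecutor = Executors.newSingleThreadExecutor();
  private final BooleanSupplier isReady;   // stands in for RetriableStream.isReady()
  private final Runnable masterOnReady;    // stands in for masterListener.onReady()

  OnReadyForwarder(BooleanSupplier isReady, Runnable masterOnReady) {
    this.isReady = isReady;
    this.masterOnReady = masterOnReady;
  }

  // Called both when a substream signals onReady() and when draining finishes.
  void maybeForwardOnReady() {
    serializingExecutor.execute(() -> {
      // Only forward when the retriable stream as a whole is ready, so the
      // user's while (isReady()) loop does not exit spuriously.
      if (isReady.getAsBoolean()) {
        masterOnReady.run();
      }
    });
  }
}
```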
While adding regression tests for #8386, I found a bug in an edge case: while a retry attempt is draining its last buffered entry, if the attempt is committed in the meantime and the call is then cancelled, the stream will never be cancelled. See the regression test case `commitAndCancelWhileDraining()`.
There is a bug in the scenario of the following sequence of events:
- `call.start()`
- received retryable status and about to retry
- The retry attempt's Substream is created but not yet drained via `drain()`
- `call.cancel()`
However, `stream.cancel()` cannot be called before `stream.start()`; otherwise retry fails with an IllegalStateException. The current RetriableStream code must be fixed so that it does not cancel a stream until `start()` has been called in `drain()`, as in the sketch below.
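A minimal sketch of deferring cancel() until start() has run, using hypothetical names; the real RetriableStream coordinates this under its own lock together with the drain state:
```
final class DeferredCancelStream {
  interface RealStream {
    void start();
    void cancel(String reason);
  }

  private final RealStream realStream;
  private boolean started;
  private String pendingCancelReason;  // null while no cancel is pending

  DeferredCancelStream(RealStream realStream) {
    this.realStream = realStream;
  }

  synchronized void start() {
    realStream.start();
    started = true;
    if (pendingCancelReason != null) {
      // Now that start() has run, cancelling is legal.
      realStream.cancel(pendingCancelReason);
    }
  }

  synchronized void cancel(String reason) {
    if (!started) {
      // Cancelling before start() would throw IllegalStateException in the real
      // transport, so remember the cancellation and apply it right after start().
      pendingCancelReason = reason;
      return;
    }
    realStream.cancel(reason);
  }
}
```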
Rebased PR #8343 into the first commit of this PR, then (in the 2nd commit) reverted the part for metric recording of retry attempts. The PR as a whole is a mechanical refactoring, with no behavior change (except that some of the old code path where the tracer is created has moved into the new method `streamCreated()`).
The API change is documented in go/grpc-stats-api-change-for-retry-java
This change adds a traditional try/finally block around readers and
streams to control the closing of those objects when the method has
completed rather than relying on the GC to deal with them.
This issue was flagged by an analysis tool via binary analysis of the
grpc-core package as part of a dependency from another project.
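An illustrative sketch of the pattern only (the methods actually touched live inside grpc-core); the point is deterministic closing rather than relying on the GC:
```
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

final class ReadAllExample {
  static String readAll(InputStream in) throws IOException {
    BufferedReader reader =
        new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
    try {
      StringBuilder sb = new StringBuilder();
      String line;
      while ((line = reader.readLine()) != null) {
        sb.append(line).append('\n');
      }
      return sb.toString();
    } finally {
      // Close explicitly when the method completes, whether normally or exceptionally.
      reader.close();
    }
  }
}
```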
We used to have two ClientStreamListener.closed() methods, one of which simply called the other with a default argument. This doubles debugging work (e.g. #7921) and sometimes unit-testing work. Delete the 2-arg method to clean up.
This PR is purely a refactoring.
Add a ServerCallExecutorSupplier interface to ServerBuilder to allow defining which executor handles the server call.
Split the streamCreated() context runnable into two to support this new feature: one for method lookup, the other for server call handling. methodLookup() runs on the default executor; handleServerCall() may run on the executor from the executorSupplier.
Callbacks are queued after methodLookup() and handleServerCall() on the serializing executor to ensure the stream listener is set when the callbacks start running.
Make the executor settable in the serializing executor so that the executor for the server-call-handling runnable can be switched based on the outcome of the method-lookup runnable.
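A usage sketch, assuming the ServerCallExecutorSupplier shape described above; returning null falls back to the server's default executor, and the method-name check is just an example:
```
import io.grpc.Metadata;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.ServerCall;
import io.grpc.ServerCallExecutorSupplier;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

final class PerCallExecutorExample {
  public static void main(String[] args) throws Exception {
    Executor heavyPool = Executors.newFixedThreadPool(8);
    Server server = ServerBuilder.forPort(50051)
        .callExecutor(new ServerCallExecutorSupplier() {
          @Override
          public Executor getExecutor(ServerCall<?, ?> call, Metadata metadata) {
            // Route "heavy" methods to a dedicated pool; others use the default.
            if (call.getMethodDescriptor().getFullMethodName().contains("Heavy")) {
              return heavyPool;
            }
            return null;
          }
        })
        .build()
        .start();
    server.awaitTermination();
  }
}
```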
failOnVersionConflict has never been good for us. It is equivalent to
Maven dependencyConvergence, which we discourage our users from using
because it is too temperamental and _creates_ version skew issues over
time. However, we had no real alternative for determining whether our
deps would be misinterpreted by Maven.
failOnVersionConflict has been a constant drain and makes it really hard
to do seemingly-trivial upgrades. As evidenced by protobuf/build.gradle
in this change, it also caused _us_ to introduce a version downgrade.
This introduces our own custom requireUpperBoundDeps implementation so
that we can get back to simple dependency upgrades _and_ increase our
confidence in a consistent dependency tree.
This adds support for creating a Netty Channel with SocketAddress and ChannelCredentials.
This aligns with NettyServerBuilder.forAddress(SocketAddress address, ServerCredentials creds).
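A usage sketch of the new overload; the address and InsecureChannelCredentials here are placeholders for illustration:
```
import io.grpc.ChannelCredentials;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;
import java.net.InetSocketAddress;
import java.net.SocketAddress;

final class NettyAddressCredsExample {
  public static void main(String[] args) {
    SocketAddress address = new InetSocketAddress("localhost", 50051);
    ChannelCredentials creds = InsecureChannelCredentials.create();
    ManagedChannel channel = NettyChannelBuilder.forAddress(address, creds).build();
    channel.shutdownNow();
  }
}
```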
Enables a codepath for zero-copy protobuf deserialization. Two new InputStream extension interfaces are added:
- HasByteBuffer: allows access to the underlying buffers containing inbound bytes directly without copying
- Detachable: allows a custom marshaller to keep the buffers around until the application code is done using the protobuf messages
Applications can implement a custom marshaller that takes ownership of the ByteBuffers and wraps them into ByteStrings with protobuf's UnsafeByteOperations support. Then a RopeByteString, an in-place composite of ByteStrings, can be created. This enables using CodedInputStream's zero-copy codepath (which requires the buffers to be marked immutable) for deserialization.
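A sketch of what such a custom marshaller's parse path might look like, assuming the stream supports both interfaces and that the detached stream still exposes its buffers; the helper name is hypothetical:
```
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;
import io.grpc.Detachable;
import io.grpc.HasByteBuffer;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

final class ZeroCopyBytes {
  static ByteString fromStream(InputStream stream) throws IOException {
    if (!(stream instanceof HasByteBuffer)
        || !((HasByteBuffer) stream).byteBufferSupported()
        || !(stream instanceof Detachable)) {
      return ByteString.readFrom(stream);  // fallback: copies the bytes
    }
    // Take ownership of the buffers so gRPC will not release them underneath us.
    InputStream detached = ((Detachable) stream).detach();
    ByteString result = ByteString.EMPTY;
    while (detached.available() > 0) {
      ByteBuffer buffer = ((HasByteBuffer) detached).getByteBuffer();
      // Wrap without copying; concatenation yields a RopeByteString composite.
      result = result.concat(UnsafeByteOperations.unsafeWrap(buffer));
      detached.skip(buffer.remaining());
    }
    return result;
  }
}
```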
Since static methods are pseudo-inherited by Builder implementations but
are easily used by accident, we re-define the static methods in each
builder to make them behave more like the caller would expect. However,
not all of the methods actually work; some just throw, because the
caller was certainly not getting what they would expect.
Annotating with `@DoNotCall` can expose the problems at compile time
instead of runtime. While `@Deprecated` would also be an option, it is a
bit harder to figure out the ramifications and whether we want to go
that route.
This change was suggested by a lint tool for XdsServerBuilder and it
seems appropriate so I applied it to the other similar cases I could
find.
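A minimal sketch of the pattern with a hypothetical builder (not a real grpc-java class):
```
import com.google.errorprone.annotations.DoNotCall;

public final class ExampleBuilder {
  // Pseudo-inherited static factory that would silently do the wrong thing;
  // re-declared here so misuse is flagged at compile time by Error Prone.
  @DoNotCall("Unsupported; use ExampleBuilder.newBuilder() instead")
  public static ExampleBuilder forPort(int port) {
    throw new UnsupportedOperationException("Use newBuilder() instead");
  }

  public static ExampleBuilder newBuilder() {
    return new ExampleBuilder();
  }

  private ExampleBuilder() {}
}
```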
Triggering balancing state updates after already being shut down can be confusing for the upstream of round_robin. When a caller does not manage round_robin's lifecycle correctly (e.g., does not ignore updates after it shuts round_robin down, which it should), this can make the problem much worse, especially given that round_robin actually propagates TRANSIENT_FAILURE with a picker that buffers RPCs.
This change only polishes round_robin by always preserving its invariant. Callers/LBs should not rely on this and should still handle the balancing updates from their downstream correctly based on the downstream's lifetime.
Currently each subchannel implicitly refreshes the name resolution when its state changes to IDLE or TRANSIENT_FAILURE. That is, this feature is built into the subchannel's internal implementation. Although it eliminates the burden of having LB implementations refresh the resolver when connections to backends are broken, it gives LB policies no chance to disable or override this refresh (e.g., in a complex load-balancing hierarchy like xDS, LB policies may embed a resolver inside for resolving backends, so the refresh operation should be hooked to the resolver embedded in the LB policy instead of the one in the Channel).
In order to make this transition smooth, we add a check to SubchannelImpl that verifies whether the LoadBalancer has explicitly called Helper.refreshNameResolution for broken subchannels it created. If not, it logs a warning and performs the refresh.
A temporary LoadBalancer.Helper API, ignoreRefreshNameResolution(), is added to avoid false-positive warnings for xDS, which intentionally does not want a refresh. Once the migration is done, this should be deleted.
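A minimal sketch of what an LB policy is now expected to do itself, assuming a `Helper` supplied at balancer creation; real policies also update the picker here:
```
import io.grpc.ConnectivityState;
import io.grpc.ConnectivityStateInfo;
import io.grpc.LoadBalancer.Helper;
import io.grpc.LoadBalancer.SubchannelStateListener;

final class RefreshOnFailureListener implements SubchannelStateListener {
  private final Helper helper;

  RefreshOnFailureListener(Helper helper) {
    this.helper = helper;
  }

  @Override
  public void onSubchannelState(ConnectivityStateInfo stateInfo) {
    ConnectivityState state = stateInfo.getState();
    if (state == ConnectivityState.IDLE || state == ConnectivityState.TRANSIENT_FAILURE) {
      // What subchannels used to do implicitly: ask for a resolver refresh
      // when the connection to the backend is broken.
      helper.refreshNameResolution();
    }
  }
}
```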
Fix the following bug:
ManagedChannelImpl.ConfigSelectingClientCall may return early in start(), leaving the delegate null, which causes the request() method to fail after start().
Currently the bug can only be triggered when using xDS.
ManagedChannelImpl should not override the authority for createResolvingOobChannel(target, creds), because ManagedChannelImpl does not know what the target and creds are.