There is a data race in `CensusStatsModule.CallAttemptsTracerFactory`:
If the client call is cancelled while an active stream on the transport is not committed, then a [noop substream](https://github.com/grpc/grpc-java/blob/v1.40.0/core/src/main/java/io/grpc/internal/RetriableStream.java#L486) will be committed and the active stream will be cancelled. Because cancelling the active stream triggers the stream listener's closed() on the _transport_ thread, closed() can be invoked concurrently with the call listener's onClose(). Therefore, one `CallAttemptsTracerFactory.attemptEnded()` can run concurrently with `CallAttemptsTracerFactory.callEnded()`, producing a data race on RETRY_DELAY_PER_CALL. See also the regression test added.
The same data race can happen in the hedging case: when one of the hedges is committed and completes the call, the other uncommitted hedges cancel themselves and trigger their stream listeners' closed() concurrently on the transport thread.
Fix the race by recording RETRY_DELAY_PER_CALL only once both of the following conditions are met (see the sketch below):
- callEnded is true
- the number of active streams is 0
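A minimal sketch of that logic, using hypothetical names rather than the actual `CensusStatsModule.CallAttemptsTracerFactory` internals: the metric is recorded exactly once, by whichever of `attemptEnded()` or `callEnded()` observes both conditions last.
```
final class RetryDelayRecorder {
  private final Object lock = new Object();
  private int activeStreams;
  private boolean isCallEnded;
  private boolean recorded;

  void attemptStarted() {
    synchronized (lock) {
      activeStreams++;
    }
  }

  void attemptEnded() {
    synchronized (lock) {
      // Record only if this was the last active stream AND the call has ended.
      if (--activeStreams > 0 || !isCallEnded || recorded) {
        return;
      }
      recorded = true;
    }
    recordRetryDelayPerCall();
  }

  void callEnded() {
    synchronized (lock) {
      isCallEnded = true;
      // Record only if no attempt is still active.
      if (activeStreams > 0 || recorded) {
        return;
      }
      recorded = true;
    }
    recordRetryDelayPerCall();
  }

  private void recordRetryDelayPerCall() {
    // Stand-in for recording the RETRY_DELAY_PER_CALL metric exactly once.
  }
}
```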
- Removes CallCredentials2
- Removes CallCredentials2ApplyingTest
- Adds two tests from CallCredentials2ApplyingTest to CallCredentialsApplyingTest
- Updates GoogleAuthLibraryCallCredentials to extend from CallCredentials instead of CallCredentials2
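For reference, a minimal sketch of the migrated shape, assuming the grpc-java `CallCredentials` API around v1.40; the header population is illustrative:
```
import io.grpc.CallCredentials;
import io.grpc.Metadata;
import java.util.concurrent.Executor;

final class MyCallCredentials extends CallCredentials {
  @Override
  public void applyRequestMetadata(
      RequestInfo requestInfo, Executor appExecutor, MetadataApplier applier) {
    appExecutor.execute(() -> {
      Metadata headers = new Metadata();
      // ...populate headers, e.g. with a fetched token...
      applier.apply(headers);
    });
  }

  @Override
  public void thisUsesUnstableApi() {}
}
```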
Added routing config discovery for the HttpConnectionManager (HCM) in LdsUpdate in XdsServerWrapper. The routing config can be inline in LDS or fetched through RDS. Handles in-flight SslContextProviderSupplier resources. The discovered routing config is propagated to the FilterChainSelectorRef.
Added a routing config data field in FilterChainSelector. Filter chain matching now results in setting a new attribute key for the server routing config. The filter chain matching logic is mostly unchanged.
Installed ConfigApplyingInterceptor in XdsServerWrapper's delegateBuilder. It fetches the server routing config attribute set above, performs route matching, and creates server interceptors for the HTTP filters as a result.
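A hedged sketch of what such an interceptor does; the attribute key name and the `ServerRoutingConfig` stand-in are illustrative, not the actual xds internals:
```
import io.grpc.Attributes;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;
import io.grpc.Status;

final class ConfigApplyingInterceptorSketch implements ServerInterceptor {
  interface ServerRoutingConfig {}  // stand-in for the discovered routing data

  // Hypothetical attribute key; filter chain matching sets the real one.
  static final Attributes.Key<ServerRoutingConfig> ROUTING_CONFIG_KEY =
      Attributes.Key.create("serverRoutingConfig");

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers,
      ServerCallHandler<ReqT, RespT> next) {
    ServerRoutingConfig config = call.getAttributes().get(ROUTING_CONFIG_KEY);
    if (config == null) {
      call.close(
          Status.UNAVAILABLE.withDescription("missing routing config"),
          new Metadata());
      return new ServerCall.Listener<ReqT>() {};
    }
    // Match the route and wrap `next` with interceptors generated from the
    // HTTP filter configs (elided in this sketch).
    return next.startCall(call, headers);
  }
}
```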
This change ensures that if there are only calls in the real transport, the
channel will remain in idle mode. Idle mode will be exited if there are
calls in the delayed transport, to allow them to be processed.
The semantics around cancellation differ slightly between ServerCall and CancellableContext: the Context should always be cancelled regardless of the outcome of the call, while the ServerCall should only be marked cancelled on a non-OK status.
This fixes a bug where the ServerCall was always marked cancelled regardless of the call's status.
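A sketch of the intended semantics; `closeCall()` is hypothetical, not the actual ServerImpl internals:
```
import io.grpc.Context;
import io.grpc.ServerCall;
import io.grpc.Status;

final class CancelSemanticsSketch {
  static void closeCall(
      Status status, Context.CancellableContext context,
      ServerCall.Listener<?> listener) {
    // The context is always cancelled so descendants are notified,
    // regardless of the call's outcome.
    context.cancel(status.getCause());
    if (status.isOk()) {
      listener.onComplete();
    } else {
      // Only a non-OK status marks the ServerCall as cancelled.
      listener.onCancel();
    }
  }
}
```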
Fixes #5882
The FINE logging was just repeating the exceptions. But really, it is
trivial to avoid exceptions in this case and that is beneficial because
it will avoid an expensive error handling path in something that is
trivial to trigger remotely.
The WARNING may be a bit much if connections don't match the filter
chains often in production, but it seems most likely a misconfiguration
and not something that would be seen often.
We've still been seeing random memory-related failures with the Android CI,
though nowhere near as severe as before. But even when running locally with
"-Xmx512m -XX:MaxMetaspaceSize=512m" I get failures. Our CI environment has
lots of RAM; let's use it.
Stabilize `enableRetry()` and `disableRetry()`.
Disable retry in `ManagedChannelImplTest` because each call attempt forks the headers to a new instance and adds a ClientStreamTracer.Factory for bufferSizeLimit to the CallOptions, which makes verification less straightforward.
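Usage of the stabilized methods is unchanged; a minimal example (the target address is a placeholder):
```
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

final class RetryConfigExample {
  static ManagedChannel newChannel() {
    return ManagedChannelBuilder.forTarget("dns:///example.com:443")
        .enableRetry()  // or .disableRetry()
        .build();
  }
}
```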
There has been an issue about flow control when retry is enabled.
Currently we call `masterListener.onReady()` whenever `substreamListener.onReady()` is called.
The user's `onReady()` implementation might do
```
while (observer.isReady()) {
  // send one more message
}
```
However, if the `RetriableStream` is still draining, `isReady()` is false and the user's `onReady()` exits immediately. And because `substreamListener.onReady()` has already been called, it may not be called again after draining completes.
This PR fixes the issue by:
- Using a SerializingExecutor to invoke all `masterListener` callbacks.
- Once `RetriableStream` is drained, checking `isReady()` and, if ready, calling `onReady()`.
- When `substreamListener.onReady()` is called, calling `masterListener.onReady()` only if `isReady()` is true.
A self-contained version of the user-side pattern above follows.
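This sketch uses grpc-java's stub API; the generic types and the message supplier are placeholders:
```
import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;
import java.util.function.Supplier;

final class FlowControlledObserver<ReqT, RespT>
    implements ClientResponseObserver<ReqT, RespT> {
  private final Supplier<ReqT> nextMessage;

  FlowControlledObserver(Supplier<ReqT> nextMessage) {
    this.nextMessage = nextMessage;
  }

  @Override
  public void beforeStart(ClientCallStreamObserver<ReqT> requestStream) {
    requestStream.setOnReadyHandler(() -> {
      // With the fix, if this loop exits while RetriableStream is still
      // draining, onReady() fires again once draining completes and the
      // stream is actually ready.
      while (requestStream.isReady()) {
        requestStream.onNext(nextMessage.get());
      }
    });
  }

  @Override public void onNext(RespT value) {}
  @Override public void onError(Throwable t) {}
  @Override public void onCompleted() {}
}
```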
This is split from #8318; the refactoring changes include:
1. FilterChainMatchingHandler
1.1. Previously, filter chain matching was built into the XdsServerCredential for the xDS server (but it does not have to be the XdsServerCredential): the protocol negotiator associated with the XdsServerCredential performed the filter chain match computation. Now filter chain matching goes through a FilterChainMatchingHandler and always runs. As a result, it sets the sslContextProviderSupplier attribute from the xDS config on the protocol negotiation event.
1.2. The protocol negotiator associated with the XdsServerCredential now just looks up the config in the attribute set above and decides whether to use the xDS config credential or the fallback credential.
1.3. Previously, a credential was required in the XdsServerBuilder; now it is optional, to allow the routing config to be fetched. The xDS TCP listener update is always used to run filter chain matching.
Later, we will add routing config to filter chain matching and apply the HTTP filter configs by installing ConfigApplyingInterceptor.
2. Removed xdsClientWrapperForServerXds, which was unnecessarily complicated.
3. Changed the event attribute key. Previously, filter chain matching happened in xdsClientWrapperForServerXds, and the xDS client wrapper was passed to the negotiation handler via attributes so the protocol negotiator could trigger the filter chain matching computation. Now the attribute is an atomic config-selector reference that xdsServerWrapper injects by watching xDS resource updates via the xDS client (see the sketch after this list).
4. Removed the multiple server-state enums previously in xdsServerWrapper, which were unnecessarily complicated. An isServing status remains, to avoid restarting the delegate upon listener updates.
5. Previously, xdsServerWrapper ignored any xDS updates once initially started; now dynamic updates are allowed even while the server is up, by updating the config selector atomic reference upon listener update.
6. Previously, xdsServerWrapper synchronized on the server object; this is changed to use syncContext, which is more manageable.
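A minimal sketch of the atomic config-selector handoff from items 3 and 5, with a stand-in for the real FilterChainSelector type:
```
import java.util.concurrent.atomic.AtomicReference;

final class SelectorRefSketch {
  interface FilterChainSelector {}  // stand-in for the real class

  private final AtomicReference<FilterChainSelector> selectorRef =
      new AtomicReference<>();

  // On each xDS listener update, XdsServerWrapper swaps in a new selector,
  // even while the server is up.
  void onListenerUpdate(FilterChainSelector newSelector) {
    selectorRef.set(newSelector);
  }

  // Each incoming connection's negotiation reads the latest selector.
  FilterChainSelector currentSelector() {
    return selectorRef.get();
  }
}
```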
While adding regression tests to #8386, I found a bug in an edge case: while a retry attempt is draining the last buffered entry, if the attempt is committed in the meantime and the call is then cancelled, the stream will never be cancelled. See the regression test case `commitAndCancelWhileDraining()`.
We don't want other APIs to copy the stub-based API for attaching the
interceptor. That API has a shorter name, but it isn't actually much
easier to use and isn't fluent like the interceptor API.
These are _very_ old methods, so we won't be quick to delete them. It
seems we should keep them deprecated for at least a year or two; they
are easy to maintain in the meantime.
See API Review notes in #1789
There is a bug in the following sequence of events:
- `call.start()`
- received retryable status and about to retry
- The retry attempt Substream is created but not yet `drain()`
- `call.cancel()`
But `stream.cancel()` cannot be called prior to `stream.start()`; otherwise the retry fails with an IllegalStateException. The RetriableStream code is fixed to not cancel a substream until `start()` has been called in `drain()`. A sketch of the deferred-cancel ordering follows.
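This is an illustrative sketch of the fixed ordering; `Stream` is a stand-in for the real substream type, and the field and method names are hypothetical rather than the actual RetriableStream internals:
```
import io.grpc.Status;

final class DeferredCancelSketch {
  interface Stream {
    void start();
    void cancel(Status reason);
  }

  private final Object lock = new Object();
  private final Stream stream;
  private boolean started;
  private Status pendingCancel;

  DeferredCancelSketch(Stream stream) {
    this.stream = stream;
  }

  // Called from drain(): start first, then apply any cancel that arrived
  // before the stream was started.
  void drainStart() {
    stream.start();
    Status deferred;
    synchronized (lock) {
      started = true;
      deferred = pendingCancel;
    }
    if (deferred != null) {
      stream.cancel(deferred);  // safe: start() has already been called
    }
  }

  void cancel(Status reason) {
    synchronized (lock) {
      if (!started) {
        pendingCancel = reason;  // defer until drain() starts the stream
        return;
      }
    }
    stream.cancel(reason);
  }
}
```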