Channels are added to channelz at creation and removed at termination. The same
goes for subchannels and
oobchannels. Everything is registered as a hard reference. This means
channels that are not properly cleaned up using `awaitTermination`
will remain visible in channelz.
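A minimal cleanup sketch (not part of this change), assuming a plaintext channel to a local server; until the shutdown/termination sequence completes, channelz keeps its hard reference to the channel:
```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.concurrent.TimeUnit;

public final class ChannelCleanupSketch {
  public static void main(String[] args) throws InterruptedException {
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();
    try {
      // ... issue RPCs ...
    } finally {
      // Without this, the channel never terminates and stays visible in channelz.
      channel.shutdown().awaitTermination(5, TimeUnit.SECONDS);
    }
  }
}
```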
In `FakeClock` we have `runDueTasks()` (without a filter), `numPendingTasks(filter)`, and `getPendingTasks(filter)`, which are useful. A `runDueTasks(filter)` method does not seem like the right way to test.
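A hedged fragment of how the unfiltered run method might be used in a test; `FakeClock` is an internal test utility, and `RETRY_FILTER` is a hypothetical `TaskFilter` used only for illustration:
```java
// Inspect what is scheduled without executing anything ...
assertEquals(1, fakeClock.numPendingTasks(RETRY_FILTER));
// ... then run everything that is due, rather than selectively executing a
// filtered subset of tasks.
fakeClock.runDueTasks();
```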
The okhttp and netty libraries retain hard references to
ManagedChannelImpl via its non-static nested classes. Moving the
orphan check to a wrapper class solves this problem.
- Added ForwardingManagedChannel + test
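A sketch of the forwarding-wrapper pattern (not the actual class added here): every call delegates to the wrapped `ManagedChannel`, so extra bookkeeping such as the orphan check can live in the wrapper without the transport libraries holding a hard reference to `ManagedChannelImpl`:
```java
import io.grpc.CallOptions;
import io.grpc.ClientCall;
import io.grpc.ManagedChannel;
import io.grpc.MethodDescriptor;
import java.util.concurrent.TimeUnit;

class ForwardingManagedChannelSketch extends ManagedChannel {
  private final ManagedChannel delegate;

  ForwardingManagedChannelSketch(ManagedChannel delegate) {
    this.delegate = delegate;
  }

  @Override public ManagedChannel shutdown() { delegate.shutdown(); return this; }
  @Override public ManagedChannel shutdownNow() { delegate.shutdownNow(); return this; }
  @Override public boolean isShutdown() { return delegate.isShutdown(); }
  @Override public boolean isTerminated() { return delegate.isTerminated(); }

  @Override
  public boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException {
    return delegate.awaitTermination(timeout, unit);
  }

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> newCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions) {
    return delegate.newCall(method, callOptions);
  }

  @Override
  public String authority() {
    return delegate.authority();
  }
}
```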
The channel enters this mode whenever there is an uncaught throwable from
its ChannelExecutor, which is where most of the channel's internal state,
such as load-balancing state, is mutated.
In panic mode, the channel will always report TRANSIENT_FAILURE as its
state, and will fail RPCs with an INTERNAL error code with the
uncaught throwable as the cause, which is helpful for investigating
bugs within gRPC and 3rd-party LoadBalancer implementations.
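As a hedged illustration of what a caller might observe while the channel is panicking (the blocking stub and request below are hypothetical):
```java
try {
  HypotheticalServiceGrpc.newBlockingStub(channel).unaryCall(request);
} catch (StatusRuntimeException e) {
  // Expected to be Status.Code.INTERNAL while the channel is in panic mode;
  // the cause should carry the throwable left uncaught in the ChannelExecutor.
  Status.Code code = e.getStatus().getCode();
  Throwable panicCause = e.getCause();
}
```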
## Change to existing behavior
Previously, if `LoadBalancer` threw from `handleResolvedAddressGroups()`, the
exception would be routed back to `LoadBalancer.handleNameResolutionError()`. Now
it makes the channel panic.
## Internal refactors
- Refactored out `shutdownNameResolverAndLoadBalancer()`, called from three code paths: `enterIdleMode()`, `delayedTransport.transportTerminated()`, and `panic()`.
- Refactored out `updateSubchannelPicker()`, called from both `updateBalancingState()` and `panic()`.
To avoid breakage, temporarily disable retry when stats or tracing is enabled.
This still agrees with the Javadoc of `ManagedChannelBuilder.enableRetry()`:
```java
/**
* Enables the retry mechanism provided by the gRPC library.
*
* <p>This method may not work as expected for the current release because retry is not fully
* implemented yet.
*
* @return this
* @since 1.11.0
*/
@ExperimentalApi("https://github.com/grpc/grpc-java/issues/3982")
public T enableRetry()
```
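For reference, a minimal opt-in sketch; with this change the flag is effectively ignored while stats or tracing is enabled (the target host and port are illustrative):
```java
ManagedChannel channel =
    ManagedChannelBuilder.forAddress("example.com", 443)
        .enableRetry()
        .build();
```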
This removes the debug string added in 5a4794f2c because the issue was resolved
and the debug string is confusing for Netty, producing results like:

    io.grpc.netty.NettyClientTransport$3: Frame size 216695976 exceeds maximum: 4194304

This makes it seem like it's an HTTP/2 frame problem and not a gRPC message
problem. Although it is clearer once you see the maximum is 4 MB, this already
bit me in #4086.
In general, we should talk about "messages" instead of "frames" when possible
as that's what is most comprehensible to the consumer of the description. I
also make sure to mention "gRPC" messages/frames because otherwise the message
is ambiguous.
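Relatedly, when a response really is larger than the 4 MiB default, the fix is the per-channel message size limit, not anything at the HTTP/2 frame level; the host and the 16 MiB value below are illustrative:
```java
ManagedChannel channel =
    ManagedChannelBuilder.forAddress("example.com", 443)
        .maxInboundMessageSize(16 * 1024 * 1024)
        .build();
```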
It is the responsibility of the caller to close the `InputStream`
returned by `Marshaller`.
`Stream` takes ownership of any `InputStream`s passed in.
`DelayedStream` and the streams of `InProcessTransport` already
handle ownership transfer, but `AbstractStream` needs to call
`close`.
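A hedged sketch of a simple `Marshaller` (the `byte[]` payload type is just for illustration), showing the ownership rule: the `InputStream` returned by `stream()` is handed off to, and closed by, the gRPC stream that receives it:
```java
import io.grpc.MethodDescriptor;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

final class BytesMarshaller implements MethodDescriptor.Marshaller<byte[]> {
  @Override
  public InputStream stream(byte[] value) {
    // Ownership of this stream transfers to the caller (the gRPC stream),
    // which is responsible for closing it.
    return new ByteArrayInputStream(value);
  }

  @Override
  public byte[] parse(InputStream stream) {
    try {
      ByteArrayOutputStream out = new ByteArrayOutputStream();
      byte[] buf = new byte[4096];
      int n;
      while ((n = stream.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
      return out.toByteArray();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```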
The `BinaryLog` will write `GrpcLogEntry`s to the sink, which is
intended as a pluggable interface. It is the sink's
responsibility to send the binlog protos to disk, to a remote
logging service, etc.
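A hypothetical sketch of such a sink; the interface name, method signature, and the file-backed implementation are illustrative, not the actual grpc-java API:
```java
import com.google.protobuf.MessageLite;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/** Hypothetical sink interface: receives each GrpcLogEntry proto from the BinaryLog. */
interface BinaryLogSinkSketch {
  void write(MessageLite entry);

  void close() throws IOException;
}

/** One possible sink: append length-delimited protos to a local file. */
final class FileSinkSketch implements BinaryLogSinkSketch {
  private final OutputStream out;

  FileSinkSketch(String path) throws IOException {
    this.out = new FileOutputStream(path, /* append= */ true);
  }

  @Override
  public synchronized void write(MessageLite entry) {
    try {
      entry.writeDelimitedTo(out);
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  @Override
  public synchronized void close() throws IOException {
    out.close();
  }
}
```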
This adds back functionality that was accidentally dropped when
porting from Travis:
- make sure generating protos does not lead to any uncommitted changes
- actually run the unit tests
netty-handler has SslHandler, which is really the piece that needs to
agree with netty-tcnative. But as mentioned, all of Netty should be kept at a
consistent version; otherwise it may randomly break due to internal API
changes.
Some users have reported "Channel closed but for unknown reason".
Adding this information doesn't tell us where the bug is, but may help
us narrow down why getShutdownStatus() is null.
Kokoro already has cmake installed, so it's not necessary in that
environment. We still keep the old code around because it's helpful for
setting up new Windows environments in general.