Per spec, this metric should be calculated on the server and sent back
to the client, but the mechanism for that is not currently defined. As
it's not a required metric, we remove the incorrect implementation for
now.
Internal ref: b/37208451
While we can use permit/deny in this one case, it isn't generalizable to
other cases. In order to avoid always questioning how to deal with
boolean config options, just pass the boolean in all cases.
This mirrors what is being done with the client-side's
keepAliveWithoutCalls.
These methods were very recently added, so there is a low risk of
breakage.
Background
==========
LoadBalancer needs to track RPC measurements and status for
load-reporting. We need to introduce a "Tracer" API for that.
Since such an API is very close to the current
Census (instrumentation)-based stats reporting mechanism in terms of
what is recorded, we will migrate the Census-based stats reporting onto
the new Tracer API.
Alternatives
============
We considered plumbing the LB-related information from the LoadBalancer
to the core, and recording that information to Census along with the
currently recorded stats. The LB-related information, such as the
LB_ID and the reason for dropping requests, would be added to the
Census StatsContext as tags.
Since tags are held by StatsContext before eventually being recorded by
providing the measurements, and StatsContext is immutable, this would
require a way for the LoadBalancer to override the StatsContext, which
means the LoadBalancer API would have a direct reference to the Census
StatsContext. This is undesirable because the Census API is not stable
yet.
Part of the LB-related information is whether the client has received
the initial headers from the server. While such information can be
grabbed by implementing a ClientInterceptor, it must be recorded along
with other information such as LB_ID to be useful, and LB_ID is only
available in GrpclbLoadBalancer.
Bottom line: trying to use solely the Census StatsContext API to record
LB load information would require extra data-plumbing channels between
the ClientInterceptor, the LoadBalancer and the gRPC core, as well as
exposing the Census API on the gRPC API. Even with those extensive
changes, we have yet to find a working solution. Therefore, we
abandoned this idea and propose this PR.
Summary of changes
==================
API summary
-----------
Introduce "StreamTracer" API, a callback interface for receiving stats
and tracing related updates concerning **a single stream**.
"ClientStreamTracer" and "ServerStreamTracer" add side-specific
events. A stream can have zero or more tracers and report to all of
them.
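As a rough illustration of the callback style, a per-stream tracer
might look like the sketch below. The callback and factory signatures
shown here are assumptions for illustration and may differ from the
actual interface:
```
import io.grpc.ClientStreamTracer;
import io.grpc.Metadata;
import io.grpc.Status;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: counts outbound wire bytes and logs the final status.
final class BytesCountingTracer extends ClientStreamTracer {
  private final AtomicLong outboundBytes = new AtomicLong();

  @Override
  public void outboundWireSize(long bytes) {
    outboundBytes.addAndGet(bytes);
  }

  @Override
  public void inboundHeaders() {
    // The stream has received the initial headers from the server.
  }

  @Override
  public void streamClosed(Status status) {
    System.out.println("stream closed with " + status.getCode()
        + ", sent " + outboundBytes.get() + " bytes");
  }

  // Factory invoked once per stream; the exact signature is assumed.
  static final class Factory extends ClientStreamTracer.Factory {
    @Override
    public ClientStreamTracer newClientStreamTracer(Metadata headers) {
      return new BytesCountingTracer();
    }
  }
}
```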
On the client side, CallOptions now takes a list of
ClientStreamTracer.Factory. Upon creating a ClientStream, each of the
factories creates a ClientStreamTracer for the stream. This allows
ClientInterceptors to install their own tracer factories by overriding
the CallOptions.
Since StreamTracer only tracks the span of a stream, tracking of a
ClientCall needs to be done in a ClientInterceptor. By installing its
own StreamTracer factory when a ClientCall is created, a
ClientInterceptor can associate the updates for a Call with the updates
for the Streams created for that Call. This is how we keep the
existing Census reporting mechanism in CensusStreamTracerModule.
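For example, an interceptor could attach a tracer factory to each call
roughly as follows (a sketch; it reuses the hypothetical
BytesCountingTracer.Factory from the earlier sketch and assumes
withStreamTracerFactory is the CallOptions method that appends to the
factory list):
```
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.MethodDescriptor;

// Sketch only: installs a per-call tracer factory via CallOptions.
final class TracingInterceptor implements ClientInterceptor {
  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    CallOptions tracedOptions =
        callOptions.withStreamTracerFactory(new BytesCountingTracer.Factory());
    return next.newCall(method, tracedOptions);
  }
}
```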
On the server side, ServerStreamTracer.Factory is added through the
ServerBuilder and is used to create a ServerStreamTracer for every
ServerStream.
The Tracer API supports propagation of stats/tracing information
through Context and metadata. Both client-side and server-side tracer
factories have access to the headers object. The client-side tracer
relies on the interceptor to read the Context, while the server-side
tracer has a filterContext() method that can override the Context.
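A server-side sketch of both pieces follows. The header name, Context
key, and the name of the ServerBuilder registration method are
illustrative assumptions based on the description above:
```
import io.grpc.Context;
import io.grpc.Metadata;
import io.grpc.ServerBuilder;
import io.grpc.ServerStreamTracer;

// Sketch only: stamps a header value into the Context seen by the app.
final class ServerTracingSetup {
  private static final Metadata.Key<String> TRACE_HEADER =
      Metadata.Key.of("x-example-trace", Metadata.ASCII_STRING_MARSHALLER);
  static final Context.Key<String> TRACE_CTX_KEY = Context.key("example-trace");

  static ServerBuilder<?> install(ServerBuilder<?> builder) {
    return builder.addStreamTracerFactory(new ServerStreamTracer.Factory() {
      @Override
      public ServerStreamTracer newServerStreamTracer(String fullMethodName, Metadata headers) {
        final String traceValue = headers.get(TRACE_HEADER);
        return new ServerStreamTracer() {
          @Override
          public Context filterContext(Context context) {
            // Make the header value visible to the application via the Context.
            return traceValue == null
                ? context
                : context.withValue(TRACE_CTX_KEY, traceValue);
          }
        };
      }
    });
  }
}
```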
Implementation details
----------------------
Only real streams report stats. Pseudo streams, such as the delayed
stream and the failing stream, don't report. InProcess transport
streams currently don't report stats.
"StatsTraceContext" which used to receive updates from core and report
directly to Census (StatsContext), now delegates to the StreamTracers of
a stream. On the client-side, the scope of a StatsTraceContext reduces
from ClientCall to a ClientStream to match the scope of StreamTracer.
The Census-specific logic that was in StatsTraceContext is moved into
CensusStreamTracerModule, which produces factories for StreamTracers
that report to Census.
Reporting with StatsTraceContext is moved out of the Channel/Call layer
into the Transport/Stream layer, to match the scope change of
StatsTraceContext.
Bug fixed
----------------
The end of a server-side call was reported in ServerCallImpl's
ServerStreamListenerImpl.closed(), which was wrong, because closed()
receiving OK doesn't necessarily mean the RPC ended with OK. Instead,
it means the server has successfully sent the final status, which may
be non-OK, to the client.
Now the end of the call is reported both in ServerStream.close(any
Status) and before calling ServerStreamListener.closed(non-OK).
Whichever happens first determines the reported status.
TODOs
=====
A follow-up change to the LoadBalancer API will add a
ClientStreamTracer.Factory to the PickResult to complete the API needed
by load-reporting.
Now that there is a config, the new defaults are being enabled.
Previously there were no default limits. Now keepalives may be no more
frequent than every 5 minutes and are permitted only when there are
outstanding RPCs.
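As a rough sketch of how a server could relax the new enforcement
defaults (the port and values are illustrative; permitKeepAliveTime and
permitKeepAliveWithoutCalls are assumed to be the relevant
NettyServerBuilder knobs):
```
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

final class KeepAliveEnforcementExample {
  static Server buildServer() throws IOException {
    return NettyServerBuilder.forPort(50051)        // port is illustrative
        // Allow client keepalive pings as often as once a minute.
        .permitKeepAliveTime(1, TimeUnit.MINUTES)
        // Allow pings even when there is no outstanding RPC.
        .permitKeepAliveWithoutCalls(true)
        .build()
        .start();
  }
}
```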
Each time helper.updatePicker() is called, the Channel will re-process
all pending streams with the new picker. If the new picker is
equivalent to the old one, that work is wasted.
This is also needed to make our internal integration test easier,
because the load balancer may send an address list that is identical to
the previous one, just to update the TTL. Without this change, the new
picker replaces the old picker even if they carry the same list, which
effectively resets the round-robin pointer. This causes a slight
imbalance between test backends, resulting in test failures.
To be in line with `NettyServerBuilder` APIs:
- Deprecated `enableKeepAlive(boolean enable)` and
`enableKeepAlive(boolean enable, long keepAliveDelay, TimeUnit delayUnit, long keepAliveTimeout,
TimeUnit timeoutUnit)`,
which never worked in v1.2
- Added `keepAliveTime(long keepAliveTime, TimeUnit timeUnit)` and
`keepAliveTimeout(long keepAliveTimeout, TimeUnit timeUnit)`
(usage sketched below)
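A minimal usage sketch of the new builder methods on a
NettyChannelBuilder; the host, port, and intervals are illustrative:
```
ManagedChannel channel = NettyChannelBuilder.forAddress("example.com", 443)
    .keepAliveTime(5, TimeUnit.MINUTES)       // replaces the enableKeepAlive(...) delay
    .keepAliveTimeout(20, TimeUnit.SECONDS)   // replaces the enableKeepAlive(...) timeout
    .build();
```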
Everything is currently permitted, but I've tested with other
configurations and all tests pass. I'll set the restrictive default at
the same time as adding a configuration API.
d116cc9 fixed the NPE, but the initialization of the manager happened
_after_ newHandler() was called, so a null manager was passed to the
handler.
Fixes #2828
executor.schedule() will "eat" any exceptions thrown by the Runnables,
because the Future is expected to be used to see them. However, we
never call get() on the Future, so we need to log the exceptions
ourselves, like we do elsewhere in this case.
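The idea is roughly the following sketch: wrap the scheduled Runnable
so any exception is logged even though the returned Future is ignored
(the class and logger names are illustrative, not the actual helper in
the codebase):
```
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch only: logs (and rethrows) exceptions from a wrapped task.
final class LoggingRunnable implements Runnable {
  private static final Logger log = Logger.getLogger(LoggingRunnable.class.getName());
  private final Runnable task;

  LoggingRunnable(Runnable task) {
    this.task = task;
  }

  @Override
  public void run() {
    try {
      task.run();
    } catch (Throwable t) {
      log.log(Level.SEVERE, "Exception from scheduled task", t);
      throw t;
    }
  }
}

// Usage: executor.schedule(new LoggingRunnable(task), delay, unit);
```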
We got a NullPointerException from ClientCallImpl#startDeadlineTimer
when a new Call is created after a Netty channel is terminated. Here
is a stacktrace:
INTERNAL: java.lang.NullPointerException
at io.grpc.internal.ClientCallImpl.startDeadlineTimer(ClientCallImpl.java:320)
at io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:253)
The following code snippet reproduces the bug:
```
ManagedChannel channel = NettyChannelBuilder.forAddress(host, port)
    .usePlaintext(true)
    .build();
channel.shutdown();
Thread.sleep(1000);
GreeterGrpc.GreeterBlockingStub stub =
    GreeterGrpc.newBlockingStub(channel)
        .withDeadlineAfter(10, TimeUnit.SECONDS);
stub.sayHello(HelloRequest.newBuilder().setName("world").build());
```
The issue was that ClientCallImpl is created from RealChannel#newCall
*after* ManagedChannelImpl#maybeTerminateChannel is called and
scheduledExecutor is set to null. In such a scenario,
deadlineCancellationExecutor is set to null.
I think there are several ways to fix this, but one way would be to
just avoid calling startDeadlineTimer() when
deadlineCancellationExecutor is null. DelayedClientTransport will
create a FailingClientStream with Status.UNAVAILABLE and we will get
```
Exception in thread "main" io.grpc.StatusRuntimeException:
UNAVAILABLE: Channel has shutdown (reported by delayed transport)
```
This removes a needless warning, and isn't much slower. This also
includes a benchmark for StatsTraceContext to measure the creation
overhead, which is about 40ns per RPC. Optimization will come after
structural changes are made to break the dependency on Census.
This appears to have been broken by 3df1446 (which was reverted and
later rolled forward again in 66ab956).
Without this fix, the ServerServiceDefinition.Builder notices that a
method is registered that isn't in the ServiceDescriptor and fails.
Swapping to a different constructor causes the builder to generate the
ServiceDescriptor for us.
java.lang.IllegalStateException: No entry in descriptor matching bound method E6Cq77iKGNKVCGyVOqq8DqEazX9AcBdPNoMj86c3I5zo4Tv77U/vLe7QS7mhUfaooN7eYdBW7gd9oyV.kc9I0zJumfuUbhyb7SR1u
at io.grpc.ServerServiceDefinition$Builder.build(ServerServiceDefinition.java:164)
at io.grpc.benchmarks.netty.HandlerRegistryBenchmark.setup(HandlerRegistryBenchmark.java:107)
Resolves #2716
- Add attributes to EquivalentAddressGroup
- Deprecate ResolvedServerInfoGroup in favor of EquivalentAddressGroup
- Deprecate ResolvedServerInfo, because attributes on a single address
within an address group have not been found to be useful
- The changes to the NameResolver and LoadBalancer interfaces are
backward-compatible for the next release, so implementors can switch to
the new API smoothly
As a related change, redefine the semantics of DnsNameResolver and
RoundRobinLoadBalancer:
- Before: DnsNameResolver returns all addresses in one address group.
RoundRobinLoadBalancer ignores the grouping of addresses and
round-robins on every single address. This doesn't work well with the
one-server-multiple-addresses setup, e.g., when both IPv4 and IPv6
addresses are returned for a single server, even if they are put in the
same address group by the NameResolver.
- After: DnsNameResolver returns every address in its own
EAG. RoundRobinLoadBalancer takes an EAG as a whole, and only
round-robins on the list of EAGs. The new behavior is a better
interpretation of the EAGs, and actually allows the case where one
server has more than one address (e.g., IPv4 and IPv6); see the sketch
after this list.
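A hedged sketch of the grouping under the new contract, from the point
of view of a custom NameResolver (the addresses are made up; the stock
DnsNameResolver still puts each address in its own group):
```
import io.grpc.EquivalentAddressGroup;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.Arrays;
import java.util.List;

final class GroupingExample {
  static List<EquivalentAddressGroup> resolvedGroups() {
    // One server reachable over both IPv4 and IPv6: both addresses go
    // into a single group, which RoundRobinLoadBalancer now treats as
    // one round-robin entry.
    SocketAddress v4 = new InetSocketAddress("203.0.113.1", 443);
    SocketAddress v6 = new InetSocketAddress("2001:db8::1", 443);
    EquivalentAddressGroup serverA =
        new EquivalentAddressGroup(Arrays.asList(v4, v6));

    // A second, distinct server gets its own group.
    SocketAddress other = new InetSocketAddress("203.0.113.2", 443);
    EquivalentAddressGroup serverB =
        new EquivalentAddressGroup(Arrays.asList(other));

    return Arrays.asList(serverA, serverB);
  }
}
```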
This change will affect users that use a custom LoadBalancer with the
stock DnsNameResolver, and those who use a custom NameResolver with the
stock RoundRobinLoadBalancer.
Users who use the stock DnsNameResolver with the stock
RoundRobinLoadBalancer or PickFirstBalancer will see no behavioral
change, because they will still round-robin on individual addresses
from DNS, or do pick-first on all addresses from DNS (PickFirstBalancer
flattens all addresses).
The result is a simpler API and a reduction of boilerplate.
`keepAliveManager#onTransportShutdown` should not be called in
`transport.shutdown()`, because there may still be open RPC streams,
possibly inactive, so keepalive is still needed.
Preparing to support server-side keepalive.
For convenience on the server side, the Ping `onSuccess()` callback is
no longer used to cancel shutdownFuture; instead, `onDataReceived()` is
regarded as the ping ack and shutdownFuture is cancelled there.