This makes it easier to convert over to Proto lite, which will be
replacing nano in the near future.
This CL is a port of work originally by @arielbackenroth.
* core: add finalizer checks for ManagedChannels
Cleaning up channels is something users should do. To promote this
behavior, add a log message to indicate that the channel has not
been properly cleaned.
This change uses WeakReferences to avoid keeping the channel
alive and retaining too much memory. Only the id and the target
are kept. Additionally, the lost references are only checked at
JVM shutdown and on new channel creation. This is done to avoid
Object finalizers.
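Roughly, the idea looks like this (a minimal sketch; the class
and method names here are illustrative, not the actual
internals):

    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.WeakReference;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.logging.Logger;

    // Sketch: track channels weakly; any reference that reaches the
    // queue belonged to a channel GC'd without shutdown() being called.
    final class ChannelLeakDetector {
      private static final Logger log =
          Logger.getLogger(ChannelLeakDetector.class.getName());
      private static final ReferenceQueue<Object> queue = new ReferenceQueue<>();
      // Keep the references themselves strongly reachable so they can
      // be enqueued by the garbage collector.
      private static final Set<ChannelRef> refs = ConcurrentHashMap.newKeySet();

      static final class ChannelRef extends WeakReference<Object> {
        final long id;        // only the id and the target are retained,
        final String target;  // never the channel itself

        ChannelRef(Object channel, long id, String target) {
          super(channel, queue);
          this.id = id;
          this.target = target;
          refs.add(this);
        }

        // Called from shutdown(); an explicitly cleared reference is
        // never enqueued, so shut-down channels are never reported.
        void markShutdown() {
          clear();
          refs.remove(this);
        }
      }

      // Called on new channel creation and from a JVM shutdown hook,
      // instead of relying on Object finalizers.
      static void drainQueue() {
        ChannelRef ref;
        while ((ref = (ChannelRef) queue.poll()) != null) {
          refs.remove(ref);
          log.warning("Channel " + ref.id + " to " + ref.target
              + " was garbage collected without being shut down");
        }
      }
    }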
The test added checks that the message is logged. Since
Java does not allow forcing a GC cycle, this check is best
effort, giving up after about a second. A custom log filter is
added to hook the log messages and check to see if the correct
one is present. Handlers are not used because they are
hierarchical, and would be annoying to restore their state after
the test.
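The best-effort GC loop looks roughly like this (a sketch; the
filter class and message text are made up for illustration):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.logging.Filter;
    import java.util.logging.LogRecord;

    // Sketch: a log filter that records whether the leak message
    // appeared, plus a GC-nudging loop that gives up after a second.
    final class LeakMessageCheck implements Filter {
      private final AtomicBoolean seen = new AtomicBoolean();

      @Override
      public boolean isLoggable(LogRecord record) {
        if (record.getMessage() != null
            && record.getMessage().contains("was garbage collected")) {
          seen.set(true);
        }
        return true; // observe the message without suppressing it
      }

      boolean awaitMessage() throws InterruptedException {
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(1);
        while (System.nanoTime() < deadline) {
          System.gc();          // a hint only; a GC cycle cannot be forced
          System.runFinalization();
          if (seen.get()) {
            return true;
          }
          Thread.sleep(10);
        }
        return seen.get();
      }
    }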
The other tests in the file contribute a lot of bad channels. This
is reasonable, because they aren't real channels. However, it does
mean that less than half of them are being cleaned up properly.
After trying to fix a few, it became clear that doing so is too
hard; it would only serve to massively complicate the tests.
Instead, this code just keeps track of how many it wasn't able to
clean up, and ignores them for the test. They are still logged,
because really they should be closed.
If a context has an unreasonable number of ancestors, then
chances are this is an application error. Log the stack trace to
notify the user and aid in debugging.
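A minimal sketch of such a check (the threshold and names are
illustrative):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Sketch: warn once a context chain grows suspiciously deep, and
    // attach a stack trace so the user can find where the contexts
    // are being created.
    final class ContextDepthCheck {
      private static final Logger log =
          Logger.getLogger(ContextDepthCheck.class.getName());
      private static final int DEPTH_WARN_THRESHOLD = 1000; // illustrative

      static void checkDepth(int ancestorCount) {
        if (ancestorCount == DEPTH_WARN_THRESHOLD) {
          log.log(Level.SEVERE,
              "Context ancestry chain is abnormally long; this is likely "
                  + "an application error",
              new Exception("stack trace of context creation"));
        }
      }
    }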
* netty: hide ProtocolNegotiator, and expose initial ChannelHandler
This change does two things: it hides the ProtocolNegotiator from
NSB, and exposes an internal "init channel" on NSB and NCB. The
reason for the change is that PN is not a powerful enough
abstraction for internal Google use (and for some other outside
users with highly specific uses).
The new API exposes adding a ChannelHandler to the pipeline upon
registration of the channel.
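In plain Netty terms, the mechanics look like this (a sketch of
the hook, not the actual builder API):

    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandler;
    import io.netty.channel.ChannelInitializer;

    // Sketch: a ChannelInitializer runs once when the channel is
    // registered, installs the caller-supplied handler, and then
    // removes itself from the pipeline.
    final class RegistrationInit extends ChannelInitializer<Channel> {
      private final ChannelHandler userHandler; // supplied by the caller

      RegistrationInit(ChannelHandler userHandler) {
        this.userHandler = userHandler;
      }

      @Override
      protected void initChannel(Channel ch) {
        // Runs on registration; the initializer removes itself after.
        ch.pipeline().addFirst(userHandler);
      }
    }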
To accomplish this, NettyClientTransport is modified to use
ChannelInitializer. There is a comment explaining why it cannot
be used, but after looking at the original discussion, I
believe the reasons for doing so are no longer applicable.
Specifically, at the time that CI was removed, there was no
WriteQueue class. The WQ class buffers all writes and executes
them on the EventLoop. Prior to WQ it was not the case that all
writes happened on the loop, so it could race. If the write was
not on the loop, it would be put on the loop's execution
queue, but with the CI handler as the target. Since CI removed
itself upon registration, the write would get fired on the
wrong handler.
With the addition of WQ, this is no longer a problem. All
writes go through WQ, and only execute on the loop, so pipeline
changes are no longer racy.
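A simplified sketch of the WQ pattern (not the real class):

    import io.netty.channel.Channel;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch: buffer writes from any thread and drain them on the
    // event loop, so pipeline mutations never race with writes.
    final class SimpleWriteQueue {
      private final Channel channel;
      private final Queue<Object> queue = new ConcurrentLinkedQueue<>();
      private final AtomicBoolean scheduled = new AtomicBoolean();
      private final Runnable flushTask = this::flush;

      SimpleWriteQueue(Channel channel) {
        this.channel = channel;
      }

      // Safe to call from any thread.
      void enqueue(Object msg) {
        queue.add(msg);
        if (scheduled.compareAndSet(false, true)) {
          channel.eventLoop().execute(flushTask); // drain on the loop
        }
      }

      private void flush() {
        scheduled.set(false); // before draining, so no wakeup is lost
        Object msg;
        while ((msg = queue.poll()) != null) {
          channel.write(msg);
        }
        channel.flush();
      }
    }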
...That is, except for the initial noop write. This does still
experience the race. If the channel is failed during
registration or connect, the lifecycle manager will fail for
differing, racy reasons.
====
To make things more uniform across NCT and NST, I have put them
both back to using CI. I have added listeners to each of the
bootstrap futures. I have also moved the initial write to the
CI, so that it always goes through the buffering negotiation
handler.
Lastly, racy shutdown errors will be logged so that if multiple
callbacks try to shutdown, it will be obvious where they came
from and in which order they happened.
I am not sure how to test the raciness of this code, but I *think*
it is deterministic. From my reading, Promises are resolved
before channel events so the first future to complete should be the
winner. Since listeners are always added from the same thread,
and resolved by the loop, I think this forces determinism.
One last note: the negotiator's scheme is hard-coded after the
transport has started. This makes it impossible to change
schemes after the channel is started. That's okay, but it
should be a use case we knowingly prevent. Others may want to
do something more bold than we do.
This reverts commit 671783f
The dependency on core caused some problems with
Proguard. There are android builds that should include
protobuf-* but expect the rest of gRPC to be bundled with the
runtime environment. In that case, when Proguard inspects the
output, it will find a reference to IoUtil but fail to find the
class itself. Avoiding this dependency altogether makes the
builds easier.
* inprocess,core: add ManagedChannelBuilder and ServerBuilder factory hiders
Because the factory for Channels and Servers resides on the builder
itself, it is easy for subclasses to accidentally inherit the
factory. This causes confusion, because calling a static method on
a specific class may result in a different class.
This change adds hiding static factories to each builder, and a test
to enforce that each subclass hides the factory. The test lives in
the interop tests, because it has a classpath dependency on all the
existing transports.
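The hiding pattern looks roughly like this, with stand-in
classes rather than the real builders:

    // Sketch of the problem and the fix.
    class BaseBuilder {
      public static BaseBuilder forTarget(String target) {
        return new BaseBuilder();
      }
    }

    class InProcessStyleBuilder extends BaseBuilder {
      // Without this declaration, InProcessStyleBuilder.forTarget("...")
      // would silently resolve to BaseBuilder.forTarget and return the
      // wrong type. Re-declaring ("hiding") the static factory makes
      // the mistake loud instead:
      public static BaseBuilder forTarget(String target) {
        throw new UnsupportedOperationException("call forName() instead");
      }

      public static InProcessStyleBuilder forName(String name) {
        return new InProcessStyleBuilder();
      }
    }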
Minor note: the test scans the classpath using a Beta Guava API.
The test can be disabled if the API goes away.
The benchmarks should be close to the code they're benchmarking, like
we do with tests.
This includes a bugfix to SerializingExecutorBenchmark to let it run.
The io.grpc.benchmarks.netty benchmarks in benchmarks/ depend on
ByteBufOutputMarshaller from benchmarks's main, so they were not moved.
While the code had correctly determined that full threads were
available, the call to MoreExecutors returned a request thread
factory, which has limitations.
Note that Async stub users may not be able to call GAE APIs in
callbacks. This is because the threads aren't request threads. They can
override the individual call's executor with
com.google.appengine.api.ThreadManager.currentRequestThreadFactory() in
an interceptor via callOptions.withExecutor().
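Such an interceptor could look roughly like this (a sketch;
note that GAE request threads can only be created while the
request is alive):

    import com.google.appengine.api.ThreadManager;
    import io.grpc.CallOptions;
    import io.grpc.Channel;
    import io.grpc.ClientCall;
    import io.grpc.ClientInterceptor;
    import io.grpc.MethodDescriptor;
    import java.util.concurrent.Executor;

    // Sketch: route per-call callbacks onto GAE request threads so
    // that callbacks may call GAE APIs.
    final class RequestThreadInterceptor implements ClientInterceptor {
      // Runs each task on a fresh thread from the request thread factory.
      private static final Executor REQUEST_THREAD_EXECUTOR =
          command ->
              ThreadManager.currentRequestThreadFactory().newThread(command).start();

      @Override
      public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
          MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
        return next.newCall(method, callOptions.withExecutor(REQUEST_THREAD_EXECUTOR));
      }
    }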
Fixes #3296
Previously, if two streams were added (but not yet active), the transport was marked inUse. After that, if one of them became active and was then closed and removed, the transport was marked notInUse and stayed that way, even though the other stream could later become active.
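The fixed bookkeeping amounts to aggregating the in-use state
over the set of streams, roughly like this (a sketch, not the
actual transport code):

    import java.util.HashSet;
    import java.util.Set;

    // Sketch: the transport is inUse exactly while the set of in-use
    // streams is non-empty, so removing one stream cannot mask
    // another stream later becoming active.
    final class InUseTracker {
      private final Set<Object> inUseStreams = new HashSet<>();

      synchronized void updateStream(Object stream, boolean inUse) {
        boolean wasEmpty = inUseStreams.isEmpty();
        if (inUse) {
          inUseStreams.add(stream);
        } else {
          inUseStreams.remove(stream);
        }
        boolean isEmpty = inUseStreams.isEmpty();
        if (wasEmpty != isEmpty) {
          transportInUseChanged(!isEmpty); // only fires on real transitions
        }
      }

      private void transportInUseChanged(boolean inUse) {
        // notify the channel; omitted in this sketch
      }
    }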
GrpclbLoadBalancer can work in non-GRPCLB (delegate) mode according to
name resolution results. Previously the policy selection, delegation,
and GRPCLB logic were all in the same file, which is not very readable;
it will only get worse as we are going to implement policy fallback
logic soon.
This PR refactors the GRPCLB logic out, and makes GrpclbLoadBalancer
focus on the policy selection and delegation logic.
This bump changelist is applied a bit late with respect to the
1.6.0 branch cut. Look at the 1.6.0 branch to see the source of
truth for where it was cut. Do not assume it is the commit that
precedes this one.
Now that we have the copy-on-write key-value store (#3368), there
is no need to keep the full parent chain. We only need a
reference to the nearest cancellable ancestor. This optimization
should in theory make cancellations more efficient and also make
our data structures more GC friendly.
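A sketch of the idea with stand-in classes:

    // Sketch: instead of keeping a parent pointer per context (the
    // full chain), each context keeps only the nearest cancellable
    // ancestor. Non-cancellable intermediate contexts become
    // collectible as soon as nothing else references them.
    class Ctx {
      final CancellableCtx cancellableAncestor; // nullable; nearest only

      Ctx(Ctx parent) {
        // Skip straight past non-cancellable parents.
        this.cancellableAncestor =
            (parent instanceof CancellableCtx)
                ? (CancellableCtx) parent
                : (parent == null ? null : parent.cancellableAncestor);
      }

      boolean isCancelled() {
        return cancellableAncestor != null && cancellableAncestor.isCancelled();
      }
    }

    class CancellableCtx extends Ctx {
      volatile boolean cancelled;

      CancellableCtx(Ctx parent) {
        super(parent);
      }

      @Override
      boolean isCancelled() {
        return cancelled || super.isCancelled();
      }
    }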