* inprocess,core: add ManagedChannelBuilder and ServerBuilder factory hiders
Because the factory methods for Channels and Servers reside on the builder
itself, it is easy for subclasses to accidentally inherit them. This
causes confusion, because calling a static factory method on a specific
class may return a builder of a different class than the one named at
the call site.
This change adds hiding static factories to each builder, and a test
to enforce that each subclass hides the factory. The test lives in
the interop tests, because it has a classpath dependency on all the
existing transports.
Minor note: the test scans the classpath using a Beta Guava API.
The test can be disabled if the API goes away.
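For illustration only, a minimal sketch of the hiding pattern using a
hypothetical FooChannelBuilder (each real builder re-declares the factories
with its own return type):
```java
import io.grpc.ManagedChannelBuilder;

// Hypothetical builder illustrating the hiding pattern described above.
abstract class FooChannelBuilder extends ManagedChannelBuilder<FooChannelBuilder> {
  // Hides ManagedChannelBuilder.forTarget(), so FooChannelBuilder.forTarget()
  // can no longer silently resolve to the inherited factory and return a
  // builder of a different class.
  public static FooChannelBuilder forTarget(String target) {
    // A real builder would construct and return its own type here; a builder
    // that does not support this factory can instead fail fast:
    throw new UnsupportedOperationException("use a FooChannelBuilder-specific factory");
  }
}
```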
The benchmarks should be close to the code they're benchmarking, like
we do with tests.
This includes a bugfix to SerializingExecutorBenchmark to let it run.
The io.grpc.benchmarks.netty benchmarks in benchmarks/ depend on
ByteBufOutputMarshaller from benchmarks' main, so they were not moved.
While the code had correctly determined that full threads were available,
the call to MoreExecutors returned a request thread factory, which has
limitations.
Note that async stub users may not be able to call GAE APIs in
callbacks, because the callback threads aren't request threads. They can
override the individual call's executor with
com.google.appengine.api.ThreadManager.currentRequestThreadFactory() in
an interceptor, via callOptions.withExecutor().
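A minimal sketch of such an interceptor, assuming interceptCall() itself
runs on a request thread (a requirement of currentRequestThreadFactory()):
```java
import com.google.appengine.api.ThreadManager;
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.MethodDescriptor;
import java.util.concurrent.Executor;

// Hypothetical interceptor: run callbacks on request threads so GAE APIs
// remain callable from them.
final class RequestThreadInterceptor implements ClientInterceptor {
  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    Executor requestThreadExecutor =
        task -> ThreadManager.currentRequestThreadFactory().newThread(task).start();
    return next.newCall(method, callOptions.withExecutor(requestThreadExecutor));
  }
}
```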
Fixes #3296
Previously, if two streams were added (but not yet active), the transport was changed to inUse; after that, if one of them became active and was then closed and removed, the transport would be changed to notInUse and stay there, even though the other stream could still become active later.
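A counting scheme along these lines (names hypothetical) avoids the bug,
because the transport only leaves the in-use state once no stream remains
at all:
```java
// Sketch: count every stream from creation to removal, not just active ones,
// so removing one stream cannot mark the transport notInUse while another
// added-but-not-yet-active stream still exists.
final class InUseTracker {
  private int streams;

  synchronized void streamAdded() {
    if (++streams == 1) {
      setInUse(true);
    }
  }

  synchronized void streamRemoved() {
    if (--streams == 0) {
      setInUse(false);
    }
  }

  private void setInUse(boolean inUse) {
    // Notify the channel that the transport's in-use state changed.
  }
}
```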
GrpclbLoadBalancer can work in non-GRPCLB (delegate) mode according to
name resolution results. Previously, the policy selection, delegation,
and GRPCLB logic were all in the same file, which was not very readable,
and it would only get worse as we implement the policy fallback logic
soon. This PR refactors the GRPCLB logic out and makes GrpclbLoadBalancer
focus on the policy selection and delegation logic.
This bump changelist is applied a bit late with respect to the
1.6.0 branch cut. Look at the 1.6.0 branch to see the source of truth
for where it was cut. Do not assume it is the commit that precedes
this one.
Now that we have the copy-on-write key-value store (#3368), there
is no need to keep the full parent chain. We only need a
reference to the nearest cancellable ancestor. This optimization
should in theory make cancellations more efficient and also make
our data structures more GC friendly.
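A toy sketch of the idea, with hypothetical names:
```java
// Each context keeps only a reference to its nearest cancellable ancestor
// instead of its full parent chain, so chains of value-only contexts between
// cancellation points become garbage collectable.
class Ctx {
  private final CancellableCtx cancellableAncestor;

  Ctx(Ctx parent) {
    this.cancellableAncestor = (parent == null) ? null : parent.nearestCancellable();
  }

  CancellableCtx nearestCancellable() {
    return cancellableAncestor;
  }

  static class CancellableCtx extends Ctx {
    CancellableCtx(Ctx parent) {
      super(parent);
    }

    @Override
    CancellableCtx nearestCancellable() {
      // A cancellable context is itself the nearest cancellable ancestor
      // for its descendants.
      return this;
    }
  }
}
```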
This is the hashtrie data structure authored by @ejona86.
The linked-list key-value store is known to cause problems in
pathological cases where users keep updating the same key(s) over and
over. This copy-on-write tree bounds reads at O(lg N), where N is
the number of keys in the map, rather than O(M), where M is the total
number of put operations.
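This is not the actual hashtrie, but a toy copy-on-write binary search tree
(balancing omitted) shows the principle: put() copies only the path from the
root to the affected node and shares the rest, so lookups depend on the
number of distinct keys rather than the number of puts:
```java
// Toy persistent (copy-on-write) BST; every put() returns a new root and
// never mutates existing nodes.
final class PersistentTree {
  final String key;
  final Object value;
  final PersistentTree left;
  final PersistentTree right;

  PersistentTree(String key, Object value, PersistentTree left, PersistentTree right) {
    this.key = key;
    this.value = value;
    this.left = left;
    this.right = right;
  }

  static PersistentTree put(PersistentTree t, String key, Object value) {
    if (t == null) {
      return new PersistentTree(key, value, null, null);
    }
    int c = key.compareTo(t.key);
    if (c == 0) {
      // Replacing a key copies one node; repeated puts do not grow the tree.
      return new PersistentTree(key, value, t.left, t.right);
    } else if (c < 0) {
      return new PersistentTree(t.key, t.value, put(t.left, key, value), t.right);
    } else {
      return new PersistentTree(t.key, t.value, t.left, put(t.right, key, value));
    }
  }

  static Object get(PersistentTree t, String key) {
    while (t != null) {
      int c = key.compareTo(t.key);
      if (c == 0) {
        return t.value;
      }
      t = (c < 0) ? t.left : t.right;
    }
    return null;
  }
}
```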
Also:
- added some unit tests
- ran a test putting random keys into the map and comparing the result
with a java.util.HashMap to verify sanity. The test passes but I
won't check it into the repo because it takes a long time to run:
https://gist.github.com/zpencer/12cb435235d171c1fe09aef18825fad0
NettyClientTransport needs to call close() on the Channel directly
instead of sending a message, since the message would typically be
delayed until negotiation completes.
The closeFuture() completes too early to be helpful, which is very
unfortunate. Using it squelches the negotiator's error handling. We now
rely on the handlers to report shutdown without any backup; the
handlers' error handling has matured, so this is probably okay.
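As a rough illustration of the distinction (the shutdown message type is
hypothetical):
```java
import io.netty.channel.Channel;

final class TransportShutdownSketch {
  static void shutdownTransport(Channel channel, Object shutdownMessage) {
    // A write would sit buffered behind the protocol negotiator until
    // negotiation completes:
    //   channel.writeAndFlush(shutdownMessage);

    // Closing the channel directly is not delayed by that buffering:
    channel.close();
  }
}
```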
Previously, the round-robin list that the client used (the effective
round-robin list, ERRL) was the received round-robin list (RRRL)
excluding non-READY backends; drop and backend entries were in the
same list.
The problem is that when not all backends are READY, drop entries take
a larger proportion of the ERRL than they do of the RRRL, resulting in
a larger drop ratio than intended.
To fix this, we employ a two-list scheme:
- A "drop list" (DL) derived from the RRRL, with the same size and
the same number of drop entries.
- A "backend list" (BL) that contains only the backend entries from
the RRRL, excluding non-READY ones.
For every pick, the client round-robins on the DL to determine
whether the pick should be dropped. Only if it is not dropped does the
client round-robin on the BL to pick the actual backend.
This way, the drop ratio always equals the proportion that drop
entries take in the RRRL.
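A toy sketch of the pick logic under this scheme (names hypothetical; a
null DL entry means "not a drop"):
```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// dropList mirrors the RRRL in size and drop-entry count; backendList holds
// only READY backends. Both are assumed non-empty here.
final class TwoListPicker {
  private final List<String> dropList;    // non-null entry => drop with that token
  private final List<String> backendList; // READY backend addresses
  private final AtomicInteger dropIndex = new AtomicInteger();
  private final AtomicInteger backendIndex = new AtomicInteger();

  TwoListPicker(List<String> dropList, List<String> backendList) {
    this.dropList = dropList;
    this.backendList = backendList;
  }

  /** Returns null to signal a drop, otherwise the backend to use. */
  String pick() {
    // Round-robin on the DL first; the drop ratio matches the RRRL exactly,
    // regardless of how many backends are READY.
    String drop = dropList.get(
        Math.floorMod(dropIndex.getAndIncrement(), dropList.size()));
    if (drop != null) {
      return null;
    }
    // Only non-dropped picks round-robin on the BL.
    return backendList.get(
        Math.floorMod(backendIndex.getAndIncrement(), backendList.size()));
  }
}
```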
The assertions are actually wrong and fail every time. This doesn't
cause test failures because SharedResourceHolder calls them in a
scheduled executor, due to its delayed-close feature.
It's better to remove them than to leave them there deceiving us.
This aligns with shutdownNow(), which already accepts a status.
The status will be propagated to the application when RPCs fail because
of transport shutdown, which will be useful information for debugging.
InputStream by contract can return zero if the requested length is zero:
```
If len is zero, then no bytes are read and 0 is returned;
otherwise, there is an attempt to read at least one byte.
If no byte is available because the stream is at end of file,
the value -1 is returned; otherwise, at least one byte is read
and stored into b.
```
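A caller that misreads 0 as end-of-stream can spin forever; a defensive
loop honoring the contract above might look like this (readFully is a
hypothetical helper):
```java
import java.io.IOException;
import java.io.InputStream;

final class StreamReads {
  // Returns the number of bytes read, which is less than len only at end of
  // stream; a return value of 0 from read() is possible only when len is 0.
  static int readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
    int total = 0;
    while (total < len) {
      int n = in.read(buf, off + total, len - total);
      if (n == -1) {
        break; // -1 is the sole end-of-stream signal
      }
      total += n;
    }
    return total;
  }
}
```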
Close #3323
In the `NettyHandlerTestBase` class, extended Netty's `EmbeddedChannel` by overriding `eventLoop()` to return an `EventLoop` that uses `FakeClock.getScheduledExecutorService()` to schedule tasks.
Resolves #3326