GrpcServerRule configures an in-process server and channel. It is
useful for asserting the requests made to a service: a consumer can
create a mock implementation of their service that records each
request, then make assertions on those records in their test.
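A minimal sketch of that pattern, assuming a generated service
`MyServiceGrpc` with a blocking stub (the service, request, and
recording class names here are hypothetical):

```java
import static org.junit.Assert.assertEquals;

import io.grpc.testing.GrpcServerRule;
import org.junit.Rule;
import org.junit.Test;

public class MyServiceTest {
  @Rule
  public final GrpcServerRule grpcServerRule = new GrpcServerRule();

  @Test
  public void serviceReceivesRequest() {
    // Hypothetical mock that records every request it receives.
    RecordingMyService service = new RecordingMyService();
    grpcServerRule.getServiceRegistry().addService(service);

    // Talk to the in-process server through the rule's channel.
    MyServiceGrpc.MyServiceBlockingStub stub =
        MyServiceGrpc.newBlockingStub(grpcServerRule.getChannel());
    stub.doSomething(SomeRequest.getDefaultInstance());

    // Assert on what the mock recorded.
    assertEquals(1, service.getRecordedRequests().size());
  }
}
```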
OutputStreamAdapter is a private class, and is only ever called from
two places: ByteStreams.copy, which never calls the single-byte write
method, and Drainable#drainTo, which potentially can. Two classes
implement Drainable, the primary one being ProtoInputStream, which
calls either MessageLite.writeTo(OutputStream) or falls back to
ByteStreams.copy. MessageLite.writeTo in turn wraps the OutputStream
in a CodedOutputStream.OutputStreamEncoder, which never calls the
single-byte version. Thus, all known implementations never call the
single-byte override.
Additionally, it is well known that the single-byte write is slow;
callers making many small writes are expected to wrap the stream in a
BufferedOutputStream.
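For illustration, a caller issuing many single-byte writes would
buffer them like this:

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class BufferedWrites {
  public static void main(String[] args) throws IOException {
    // Each write(int) lands in BufferedOutputStream's in-memory
    // buffer; the underlying stream only sees large chunks.
    try (OutputStream out =
        new BufferedOutputStream(new FileOutputStream("out.bin"))) {
      for (int i = 0; i < 1_000_000; i++) {
        out.write(i & 0xFF);
      }
    }
  }
}
```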
Currently, Context propagates in-thread via its own ThreadLocal, and
cross-thread propagation must be done with the mechanisms provided by
gRPC Context. However, there are frameworks (e.g. what we are using
inside Google) which have already established context-propagation
mechanisms. If gRPC Context could ride on top of them, it would be
propagated automatically without additional effort from the
application.
The Storage API allows gRPC Context to be attached to anything. The
default implementation still uses its own ThreadLocal. If an override
implementation is present, gRPC Context will use it.
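A minimal sketch of a Storage override, shaped like the default
ThreadLocal-backed implementation (method signatures are assumed from
the description above and may differ across versions); a framework
integration would swap the ThreadLocal for the framework's own
propagated carrier:

```java
import io.grpc.Context;

final class ThreadLocalContextStorage extends Context.Storage {
  // A framework override would replace this ThreadLocal with the
  // framework's established propagation mechanism.
  private final ThreadLocal<Context> local = new ThreadLocal<Context>();

  @Override
  public void attach(Context toAttach) {
    local.set(toAttach);
  }

  @Override
  public void detach(Context toDetach, Context toRestore) {
    local.set(toRestore);
  }

  @Override
  public Context current() {
    return local.get();
  }
}
```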
- Remove unused variable terminated from TransportListener#transportTerminated
- Do not mention getLock in the javadoc of InUseStateAggregator2#handleNotInUse
FakeClock used a PriorityQueue for storing tasks; PriorityQueue is
not thread-safe, and caused flakes such as:
> io.grpc.internal.ClientCallImplTest > deadlineExceededBeforeCallStarted FAILED
> java.lang.ArrayIndexOutOfBoundsException: -1
> at java.util.PriorityQueue.removeAt(PriorityQueue.java:619)
> at java.util.PriorityQueue.remove(PriorityQueue.java:378)
> at io.grpc.internal.FakeClock$ScheduledTask.cancel(FakeClock.java:89)
> at io.grpc.internal.ClientCallImpl.removeContextListenerAndCancelDeadlineFuture(ClientCallImpl.java:296)
> at io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:250)
> at io.grpc.internal.ClientCallImplTest.deadlineExceededBeforeCallStarted(ClientCallImplTest.java:615)
Document the threading requirements.
Turn all metrics volatile, because callEnded() may not be
synchronized with other metric-updating methods when the RPC is
closed because of a transport error or a cancellation from the other
side.
Remove precondition checks, thus allowing metric-updating methods to
be called after callEnded() has been called, which may happen since
the application may not be aware when the RPC is closed by the
transport.
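A minimal sketch of the resulting pattern (field and method names are
illustrative):

```java
class CallMetricsSketch {
  // Volatile so a value written on the transport thread is visible
  // to callEnded(), which may run unsynchronized on another thread
  // after a transport error or cancellation. Each metric still has a
  // single writer.
  private volatile long wireBytesSent;

  // May legitimately run after callEnded(); no precondition check
  // rejects the update anymore.
  void addWireBytesSent(long bytes) {
    wireBytesSent += bytes;  // single writer; volatile for visibility
  }

  void callEnded() {
    long snapshot = wireBytesSent;  // sees the latest completed update
    // ... report snapshot ...
  }
}
```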
The 32-bit Windows grpc codegen built with protobuf 3.0.2 depends on
USER32.DLL, while the previous release built with protobuf 3.0.0
didn't. Both protobuf and grpc codegen are built under msys2-mingw32.
"objdump -x libprotobuf-10.dll | grep DLL" also shows the USER32.DLL
dependency being brought in by protobuf:
3.0.0
DLL Name: KERNEL32.dll
DLL Name: msvcrt.dll
DLL Name: zlib1.dll
DLL Name: libgcc_s_dw2-1.dll
DLL Name: libstdc++-6.dll
3.0.2
DLL Name: KERNEL32.dll
DLL Name: msvcrt.dll
DLL Name: USER32.dll
DLL Name: zlib1.dll
DLL Name: libgcc_s_dw2-1.dll
DLL Name: libstdc++-6.dll
While unexpected, USER32.dll ships with Windows, so having such a
dependency shouldn't cause any trouble.
This is the first step of a major refactor for the LoadBalancer-related part of Channel impl (#1600). It forks TransportSet into InternalSubchannel and makes changes to it.
What's changed:
- InternalSubchannel no longer has delayed transport, thus will not buffer
streams when a READY real transport is absent.
- When there isn't a READY real transport, obtainActiveTransport() will
return null.
- InternalSubchannel is no longer a ManagedChannel
- Overhauled Callback interface, with onStateChange() replacing the
ad-hoc transport event methods (see the sketch after this list).
- Call out channelExecutor, which is a serializing executor that runs
the Callback.
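A rough sketch of the new Callback shape (only onStateChange() is
named by this change; the second method and the exact signatures are
assumptions):

```java
import io.grpc.ConnectivityStateInfo;

// All methods run on channelExecutor, the serializing executor
// called out above.
abstract class SubchannelCallbackSketch {
  // Replaces the ad-hoc transport event methods.
  void onStateChange(ConnectivityStateInfo newState) {}

  // Assumed companion notification for subchannel termination.
  void onTerminated() {}
}
```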
The first commit is an unmodified copy of the files that are being forked. Please review the second commit, which makes the changes to the forked files.
Discovered when importing grpc into the internal repo:
1. ErrorProne: Compound assignments from floating point to integral
types hide dangerous casts.
2. found raw type: io.grpc.ManagedChannelBuilder missing type arguments
for generic class io.grpc.ManagedChannelBuilder<T>
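For illustration, the compound-assignment finding boils down to the
following (values made up), while the raw-type finding is fixed by
writing `ManagedChannelBuilder<?>` instead of the raw type:

```java
class CompoundAssignmentExample {
  public static void main(String[] args) {
    int total = 0;
    double weight = 0.5;
    // ErrorProne flags this: it compiles, but is equivalent to
    // total = (int) (total + weight), silently truncating the double.
    total += weight;
    // The fix is to spell out the narrowing cast:
    total = (int) (total + weight);
    System.out.println(total);
  }
}
```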
* interop-testing: modify the stress test client to accept the same --use_tls, --use_test_ca, and --server_host_override arguments as the interop client.
Clarify usage of --use_test_ca=true for stress and interop clients.
NoopCensusContextFactory used to use a shared zero-sized ByteBuffer,
which has mutable state, such as its position, and thus is not
thread-safe. Sharing it means it is used by all calls, potentially
from different threads, which makes TSAN unhappy in tests.
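For illustration, even a relative read mutates a ByteBuffer:

```java
import java.nio.ByteBuffer;

class ByteBufferStateExample {
  public static void main(String[] args) {
    ByteBuffer shared = ByteBuffer.allocate(8);
    shared.getInt();  // advances position from 0 to 4
    // Another caller, possibly on another thread, now observes
    // position 4; sharing without synchronization is a data race.
    System.out.println(shared.position());  // prints 4
  }
}
```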
Resolves #2377
Resolves grpc/grpc#8715. Now that setListener is called prior to
`JumpToApplicationThreadServerStreamListener` being completely ready
to use, we should not call `AbstractStream2#onStreamAllocated()`
inside `setListener()` anymore, but call it after
`ServerImpl#streamCreated()` has completed.
Resolves #1936
Two bugs fixed:
- NPE in `ServerImpl#streamCreated()` when the stream listener is not
set before the stream is closed.
- `internalCancel()` may be called during
`InProcessClientStream#start()` due to an early server `onComplete()`
or `onError()`; in this case the stream must not be enlisted in
`streams`, otherwise the channel cannot be shut down by `shutdown()`.
Perhaps by accident, each LoadClient channel got its own executor
pool rather than sharing one between all of them. The benchmarks are
currently set up with C++ idioms, creating 64 "channels" per client.
C++ uses one thread per channel, while Java does not, leading to a
threading-model mismatch.
ForkJoinPool is the current executor because it has good contention
characteristics. However, FJP is also very sensitive to the
number of actively running tasks. Too many, and the contention
will go up. Too few, and threads will repeatedly go to sleep and
wake up.
This change basically makes the FJP shared for all clients in the
same process. It doesn't try at all to shut it down properly or
be generally useful; it's just a benchmark. Additionally, this
also groups the Netty and OkHttp specific channel options into
helper functions. This means OkHttp benchmarks will also use
the correct executor now.
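A sketch of the shared-executor idea, shown for the Netty path (the
class and method names below are illustrative, not the benchmark's
actual helpers):

```java
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;
import java.util.concurrent.ForkJoinPool;

final class SharedExecutorChannels {
  // One process-wide pool instead of one pool per channel, keeping
  // the number of actively running FJP tasks in a healthy range.
  private static final ForkJoinPool SHARED = new ForkJoinPool();

  static ManagedChannel newBenchmarkChannel(String target) {
    return NettyChannelBuilder.forTarget(target)
        .executor(SHARED)  // all 64 channels share this pool
        .build();
  }
}
```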
A quick test shows QPS going from ~107kqps to 149kqps.
Fixes: #2406
Proxies can trigger errors, and we'd like to categorize common
proxy-generated errors into gRPC status codes for sane application
handling.
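A sketch of the kind of categorization intended (the mapping below is
illustrative, not necessarily the one this change adopts):

```java
import io.grpc.Status;

final class ProxyStatusMapper {
  // Map HTTP statuses commonly returned by proxies onto gRPC codes.
  static Status fromHttpStatus(int httpStatus) {
    switch (httpStatus) {
      case 401: return Status.UNAUTHENTICATED;
      case 403: return Status.PERMISSION_DENIED;
      case 502:
      case 503:
      case 504: return Status.UNAVAILABLE;  // proxy or upstream down
      default:  return Status.UNKNOWN;
    }
  }
}
```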
Fixes #2264