examples/android/app/build.gradle is now
examples/android/helloworld/app/build.gradle and
examples/android/routeguide/app/build.gradle.
Since the list of files is getting a bit long, move it into a variable.
CodedInputStream is risk averse in ways that hurt performance when
parsing large messages. gRPC knows the size of the input as it is being
read from the wire, and only tries to parse it once the entire message
has been read in. The message is represented as chunks of
memory strung together in a CompositeReadableBuffer, and then wrapped
in a custom BufferInputStream.
When passed to Protobuf, CodedInputStream attempts to read data out
of this InputStream into CIS's internal 4K buffer. For messages much
larger than that, CIS copies from the input in 4K chunks and saves them
in an ArrayList. Once the entire message has been read in, it is
re-copied into one large byte array and passed back up. This only
happens for ByteStrings and ByteBuffers that are read out of CIS. (See
CIS.readRawBytesSlowPath for the implementation.)
gRPC doesn't need this overhead, since we already have the entire
message in memory, albeit in chunks. This change copies the composite
buffer into a single heap byte buffer, and passes this (via
UnsafeByteOperations) into CodedInputStream. This pays one copy to
build the heap buffer, but avoids the two copies in CIS. This also
ensures that the buffer is considered "immutable" from CIS's point of
view.
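A minimal sketch of the approach, assuming the whole message is already
buffered in memory and its length is known up front (the class, method, and
variable names are illustrative, not the actual grpc code):

```java
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.Parser;
import com.google.protobuf.UnsafeByteOperations;
import java.io.IOException;
import java.io.InputStream;

final class KnownLengthParsing {
  // Drain the buffered stream into one heap array, then let CodedInputStream
  // read straight from it instead of re-buffering through its 4K chunks.
  static <T> T parseKnownLength(Parser<T> parser, InputStream stream, int size)
      throws IOException {
    byte[] buf = new byte[size];
    int filled = 0;
    while (filled < size) {
      int read = stream.read(buf, filled, size - filled);  // the single copy
      if (read == -1) {
        throw new IOException("stream ended before " + size + " bytes");
      }
      filled += read;
    }
    // unsafeWrap avoids another defensive copy; safe because buf is not reused.
    CodedInputStream cis = UnsafeByteOperations.unsafeWrap(buf).newCodedInput();
    T message = parser.parseFrom(cis);
    cis.checkLastTagWas(0);  // verify the whole input was consumed cleanly
    return message;
  }
}
```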
Because CIS does not have ByteString aliasing turned on, this large
buffer will not accidentally be kept in memory even if only tiny fields
from the proto are still referenced. Instead, reading ByteStrings out
of CIS will always copy. (This copying, and the protection it provides,
can be turned off by calling CIS.enableAliasing.)
Benchmark results will come shortly, but initial testing shows
significant speedup in throughput tests. Profiling has shown that
copying memory was a large time consumer for messages of size 1MB.
Comments should really use '#', since it is shell. Also, we avoid
telling users to clone the git repo since 1) this is basically implicit
already and 2) it encourages them to check out master instead of using
the latest release. This is especially helpful when the document is
referenced from http://grpc.io/docs, since those docs specify checking
out the latest release (something that is much easier to maintain with
Jekyll, which is not an option here).
After debugging #2153, it would have been nice to know what the exact
parameter was that was null. This change adds a name for each
checkNotNull (and tries to normalize on static imports in order to
shorten lines).
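For illustration, a hedged before/after (the class, field, and parameter names
are made up, not taken from the change):

```java
import static com.google.common.base.Preconditions.checkNotNull;

import java.util.concurrent.Executor;

final class NamedCheckNotNullExample {
  private final Executor executor;

  NamedCheckNotNullExample(Executor executor) {
    // Before: checkNotNull(executor) fails with a bare NullPointerException.
    // After: the name makes the failure self-describing:
    //   java.lang.NullPointerException: executor
    this.executor = checkNotNull(executor, "executor");
  }
}
```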
io.grpc should not be depending on anything from internal. Also, the
convenience method on Deadline is part of our public API and shouldn't
use LogExceptionRunnable, since that behavior would surprise our users.
Swapped to lower-case 'log' since the logger is not immutable.
If there are multiple versions available, cmake won't choose the Visual
Studio version selected by vsvars. So we have to explicitly specify the
generator to use.
This allows grpc-java to run on the shared Windows workers instead of
its own specialized instance.
Implementations of ManagedClientTransport.start() are restricted from
calling the passed listener until start() returns, in order to avoid
reentrancy problems with locks. For most transports this isn't a
problem, because they need additional threads anyway. InProcess
naturally uses no additional threads, so it ended up needing a thread
just to call notifyReady. Now transports can just return a Runnable that can be run
after locks are dropped.
This was originally intended to be a performance optimization, but the
thread also causes nondeterminism because RPCs are delayed until
notifyReady is called. So avoiding the thread reduces needless flakiness
during tests.
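A rough sketch of the new shape of start() (simplified and illustrative, not
the real InProcessTransport):

```java
import io.grpc.internal.ManagedClientTransport;

final class StartRunnableSketch {
  private ManagedClientTransport.Listener listener;

  // start() only records the listener; the returned Runnable is executed by
  // the caller after its locks are released, so the callback cannot re-enter
  // the channel while those locks are held.
  public Runnable start(final ManagedClientTransport.Listener listener) {
    this.listener = listener;
    return new Runnable() {
      @Override
      public void run() {
        listener.transportReady();
      }
    };
  }
}
```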
Protobuf-lite since beta-4 is now more of a fork than a subset of
protobuf-java, which may cause us problems later since lite API is not
stable. Also, lite-generated code may now depend on APIs only in
protobuf-lite, so our users must depend on the protobuf-lite runtime.
Having all our users explicitly override the dependency is bothersome to
them and can easily expose problems only after we do a release.
So now we are doing the dependency overriding; most users should "just
work" and pick up the correct protobuf artifact. I've confirmed the
exclusion is listed in the grpc-protobuf pom and "gradle dependencies"
and "mvn dependency:tree" do not include protobuf-lite for the examples.
Vanilla protobuf users are most likely to experience any breakage, which
should surface problems more quickly since we use protobuf-java more
frequently than protobuf-lite during development.
protobuf-lite does not include pre-generated code for the well-known
protos, so users will need to generate them themselves for the moment
(google/protobuf#1889).
Note that today changing deps does not noticeably reduce the method code
for our users, since ProGuard already is stripping most classes. The
difference in output is only a reduction of 3 classes and 6 methods for
the android example.
The != should have been ==. However, it is provable that the exception
won't be null; we want to make that fact obvious when auditing, so we
now just fail if the exception is ever null.
780b2696 caused all failures for blocking unary stubs to have a
StatusRuntimeException as the cause of the StatusRuntimeException, with
the two exceptions having almost the same status.
`ClientTransport.newStream()` and
`CallCredentials.applyRequestMetadata()` are now called under the context
of the call. This can be used to pass any call-specific information to
`CallCredentials`.
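A hedged sketch of what this enables (the key name and helper class are
illustrative, not part of gRPC):

```java
import io.grpc.Context;

final class CallScopedValues {
  // A hypothetical call-scoped key, e.g. populated by an interceptor.
  static final Context.Key<String> TENANT_ID = Context.key("tenant-id");

  // Because applyRequestMetadata() now runs with the call's Context current,
  // a CallCredentials implementation can read call-scoped values directly.
  static String currentTenant() {
    return TENANT_ID.get();  // shorthand for TENANT_ID.get(Context.current())
  }
}
```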
The value of nodeCount depended on deadlines expiring after the chain
was constructed. This is effectively the same as using Thread.sleep()
and would commonly fail if the machine was under load.
Instead of checking nodeCount after the deadline expires, we now wait
for the chain to be constructed and then cancel the RPC. This also
ensures that the cancel propagates instead of each hop just enforcing
the deadline. As a bonus, this also reduces test execution time by one
second. A new test was added for deadline propagation.
Fixes #1852
MessageFramer calls Drainable.drainTo with a special output stream of
OutputStreamAdapter. Currently, ByteBufInputStream writes to this output
stream by allocating a heap buffer in UnsafeByteBufUtil.getBytes, copying
into it from BBIS's direct byte buffer, and then copying that into the
direct byte buffer from MessageFramer.writeRaw().
This change is an easy way to cut down on wasted memory, even though
ideally there would be some way to make fewer copies. The actual data is
only around 10 bytes, but it causes tens of megabytes of allocation in
the heap pool.
For #2062
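To make the data path concrete, here is a hedged sketch of a Drainable that
hands its bytes straight to the target stream; it illustrates the interface
being discussed, not the actual change:

```java
import io.grpc.Drainable;
import io.netty.buffer.ByteBuf;
import java.io.IOException;
import java.io.OutputStream;

final class ByteBufDrainableSketch implements Drainable {
  private final ByteBuf buf;

  ByteBufDrainableSketch(ByteBuf buf) {
    this.buf = buf;
  }

  @Override
  public int drainTo(OutputStream target) throws IOException {
    // Let the ByteBuf write itself to the framer's OutputStreamAdapter in one
    // step rather than materializing a separate intermediate copy first.
    int readable = buf.readableBytes();
    buf.readBytes(target, readable);
    return readable;
  }
}
```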
We are no longer using resources to load providers on Android. Instead,
we are calling Class.forName() for known providers. ProGuard is able to
detect these usages automatically.
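Roughly the pattern involved (a hedged sketch; the helper is illustrative,
error handling is simplified, and OkHttp is just an example provider):

```java
import io.grpc.ManagedChannelProvider;

final class ProviderLoading {
  // The literal class name lets ProGuard detect the reflective usage and keep
  // the provider class when it is present in the app.
  static ManagedChannelProvider loadOkHttpProvider() {
    try {
      return Class.forName("io.grpc.okhttp.OkHttpChannelProvider")
          .asSubclass(ManagedChannelProvider.class)
          .getConstructor()
          .newInstance();
    } catch (ClassNotFoundException e) {
      return null;  // okhttp transport not on the classpath
    } catch (ReflectiveOperationException e) {
      throw new AssertionError(e);
    }
  }
}
```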
The benchmarks today do not have a good way to record metrics with precision
or to shut down safely when the benchmark is over. This change alters the
AbstractBenchmark class to return a latch that can be waited upon when ending
the benchmark.
Benchmarks also would accidentally request way too many messages from the
server by calling request(1) explicitly in addition to the implicit one
in the StreamObserver to Call adapter. This change adds a few outstanding
requests, but otherwise keeps the request count bounded.
Additionally, benchmark calls would ignore errors and just shut down in such
cases. This change makes them log the error and wait for the benchmark to
complete. In the successful case, the benchmark client now notifies the server
by half-closing (via onCompleted), where it previously did not. It is also
careful to only do this once.
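A hedged sketch of the completion handling described above (the class and
method names are illustrative, not the actual AbstractBenchmark code):

```java
import io.grpc.stub.StreamObserver;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;

final class BenchmarkCompletionSketch {
  private static final Logger logger =
      Logger.getLogger(BenchmarkCompletionSketch.class.getName());

  // One latch count per outstanding call; teardown waits on the latch instead
  // of guessing when the calls have drained.
  static StreamObserver<Object> observerFor(final CountDownLatch done) {
    return new StreamObserver<Object>() {
      @Override public void onNext(Object response) {}

      @Override public void onError(Throwable t) {
        logger.log(Level.WARNING, "call failed", t);
        done.countDown();  // log the failure and let the benchmark finish
      }

      @Override public void onCompleted() {
        done.countDown();
      }
    };
  }

  static boolean awaitCalls(CountDownLatch done) throws InterruptedException {
    return done.await(10, TimeUnit.SECONDS);
  }
}
```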
Lastly, benchmarks have been changed to enable and disable recording at exact
points in the benchmark method, rather than waiting for teardown to occur.
Also, recording begins inside the recording method, not in Setup. JMH may
do other processing before, between, and after iterations.