mirror of https://github.com/grpc/grpc-java.git
CodedInputStream is risk-averse in ways that hurt performance when parsing large messages. gRPC knows how large the message is as it is read off the wire, and only tries to parse it once the entire message has been received. The message is represented as chunks of memory strung together in a CompositeReadableBuffer and then wrapped in a custom BufferInputStream. When that is passed to Protobuf, CodedInputStream reads data out of the InputStream into CIS's internal 4K buffer. For messages much larger than that, CIS copies from the input in 4K chunks and saves the chunks in an ArrayList. Once the entire message has been read, the chunks are re-copied into one large byte array and passed back up. This only happens for ByteStrings and ByteBuffers read out of CIS (see CIS.readRawBytesSlowPath for the implementation). gRPC doesn't need this overhead, since we already have the entire message in memory, albeit in chunks.

This change copies the composite buffer into a single heap byte buffer and passes it (via UnsafeByteOperations) into CodedInputStream. This pays one copy to build the heap buffer, but avoids the two copies in CIS. It also ensures that the buffer is considered "immutable" from CIS's point of view.

Because CIS does not have ByteString aliasing turned on, the large buffer will not accidentally be kept in memory even if only tiny fields from the proto are still referenced. Instead, reading ByteStrings out of CIS will always copy. (This copying, and the problems it avoids, can be turned off by calling CIS.enableAliasing.)

Benchmark results will come shortly, but initial testing shows a significant speedup in throughput tests. Profiling has shown that memory copies were a large time consumer for messages of size 1MB.
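For illustration, here is a minimal sketch of the single-copy parse path described above. It is not the actual gRPC marshaller code; the `SingleCopyParse` class, method name, and draining loop are assumptions, but `UnsafeByteOperations.unsafeWrap` and `ByteString.newCodedInput` are the protobuf APIs involved.

```java
import com.google.protobuf.ByteString;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.UnsafeByteOperations;

import java.io.IOException;
import java.io.InputStream;

// Sketch only: the stream and size come from gRPC's transport, which already
// knows the full message length before parsing starts.
final class SingleCopyParse {
  static CodedInputStream codedInputFor(InputStream message, int size) throws IOException {
    // One copy: drain the chunked message into a single heap buffer.
    byte[] buf = new byte[size];
    int filled = 0;
    while (filled < size) {
      int read = message.read(buf, filled, size - filled);
      if (read == -1) {
        throw new IOException("stream ended before " + size + " bytes were read");
      }
      filled += read;
    }
    // Wrap without another copy; UnsafeByteOperations trusts the caller not to
    // mutate buf afterwards, so CIS can treat it as immutable.
    ByteString wrapped = UnsafeByteOperations.unsafeWrap(buf);
    // Aliasing stays off (the default), so ByteString fields read from this
    // CodedInputStream are copied out rather than pinning the whole buffer.
    return wrapped.newCodedInput();
  }
}
```

If aliasing were desired instead, calling enableAliasing(true) on the returned CodedInputStream would let ByteString fields reference the wrapped buffer directly rather than copy, at the cost of keeping the entire buffer alive as long as any field is referenced.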