grpc-java/benchmarks
core: speed up Status code and message parsing (Carl Mastrangelo)
This introduces the idea of a "Trusted" Ascii Marshaller, which is
known to always produce valid ASCII byte arrays. This saves a
surprising amount of garbage, since String conversion involves
creating new java.lang.StringCoding and sun.nio.cs.US_ASCII objects.

There are other types that can be converted (notably
Http2ClientStream's :status marshaller, which is particularly
wasteful).
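
A rough sketch of the idea (the names below are illustrative, not the
actual API introduced by this change):

    // A marshaller that converts directly between a value and its US-ASCII
    // bytes, bypassing java.lang.String and the charset machinery entirely.
    interface TrustedAsciiMarshaller<T> {
      /** Must produce only bytes in the US-ASCII range. */
      byte[] toAsciiBytes(T value);

      T parseAsciiBytes(byte[] serialized);
    }

    // Status codes are small non-negative integers, so encoding can index
    // into a precomputed table instead of allocating a String per call.
    final class StatusCodeMarshaller implements TrustedAsciiMarshaller<Integer> {
      private static final byte[][] CODES = new byte[20][];
      static {
        for (int i = 0; i < CODES.length; i++) {
          String s = Integer.toString(i);
          byte[] bytes = new byte[s.length()];
          for (int j = 0; j < s.length(); j++) {
            bytes[j] = (byte) s.charAt(j);
          }
          CODES[i] = bytes;
        }
      }

      @Override
      public byte[] toAsciiBytes(Integer code) {
        return CODES[code];  // no Integer.toString/String.getBytes per call
      }

      @Override
      public Integer parseAsciiBytes(byte[] serialized) {
        int value = 0;
        for (byte b : serialized) {
          value = value * 10 + (b - '0');  // trusted input: ASCII digits only
        }
        return value;
      }
    }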

Before:
Benchmark                              Mode     Cnt     Score    Error  Units
StatusBenchmark.codeDecode           sample  641278    88.889 ±  9.673  ns/op
StatusBenchmark.codeEncode           sample  430800    73.014 ±  1.444  ns/op
StatusBenchmark.messageDecodeEscape  sample  433467   441.078 ± 58.373  ns/op
StatusBenchmark.messageDecodePlain   sample  676526   268.620 ±  7.849  ns/op
StatusBenchmark.messageEncodeEscape  sample  547350  1211.243 ± 29.907  ns/op
StatusBenchmark.messageEncodePlain   sample  419318   223.263 ±  9.673  ns/op

After:
Benchmark                              Mode     Cnt    Score    Error  Units
StatusBenchmark.codeDecode           sample  442241   48.310 ±  2.409  ns/op
StatusBenchmark.codeEncode           sample  622026   35.475 ±  0.642  ns/op
StatusBenchmark.messageDecodeEscape  sample  595572  312.407 ± 15.870  ns/op
StatusBenchmark.messageDecodePlain   sample  565581   99.090 ±  8.799  ns/op
StatusBenchmark.messageEncodeEscape  sample  479147  201.422 ± 10.765  ns/op
StatusBenchmark.messageEncodePlain   sample  560957   94.722 ±  1.187  ns/op

Also fixes #2237

Before:
Result "unaryCall1024":
  mean = 155710.268 ±(99.9%) 149.278 ns/op

  Percentiles, ns/op:
      p(0.0000) =  63552.000 ns/op
     p(50.0000) = 151552.000 ns/op
     p(90.0000) = 188672.000 ns/op
     p(95.0000) = 207360.000 ns/op
     p(99.0000) = 260608.000 ns/op
     p(99.9000) = 358912.000 ns/op
     p(99.9900) = 1851425.792 ns/op
     p(99.9990) = 11161178.767 ns/op
     p(99.9999) = 14985005.383 ns/op
    p(100.0000) = 17235968.000 ns/op

Benchmark                         (direct)  (transport)    Mode      Cnt       Score     Error  Units
TransportBenchmark.unaryCall1024      true        NETTY  sample  3205966  155710.268 ± 149.278  ns/op

After:
Result "unaryCall1024":
  mean = 147474.794 ±(99.9%) 128.733 ns/op

  Percentiles, ns/op:
      p(0.0000) =  59520.000 ns/op
     p(50.0000) = 144640.000 ns/op
     p(90.0000) = 176128.000 ns/op
     p(95.0000) = 190464.000 ns/op
     p(99.0000) = 236544.000 ns/op
     p(99.9000) = 314880.000 ns/op
     p(99.9900) = 1113084.723 ns/op
     p(99.9990) = 10783126.979 ns/op
     p(99.9999) = 13887153.242 ns/op
    p(100.0000) = 15253504.000 ns/op

Benchmark                         (direct)  (transport)    Mode      Cnt       Score     Error  Units
TransportBenchmark.unaryCall1024      true        NETTY  sample  3385015  147474.794 ± 128.733  ns/op

grpc Benchmarks

QPS Benchmark

The "Queries Per Second Benchmark" allows you to get a quick overview of the throughput and latency characteristics of grpc.

To build the benchmark, type

$ ./gradlew :grpc-benchmarks:installDist

from the grpc-java directory.

You can now find the client and the server executables in benchmarks/build/install/grpc-benchmarks/bin.
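
For example (the qps_server and qps_client script names here are assumed from the install layout and may differ between versions; both take additional flags, such as the address to use, which are omitted):

$ ./benchmarks/build/install/grpc-benchmarks/bin/qps_server
$ ./benchmarks/build/install/grpc-benchmarks/bin/qps_client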

The C++ counterpart can be found at https://github.com/grpc/grpc/tree/master/test/cpp/qps

Visualizing the Latency Distribution

The QPS client comes with the option --dump_histogram=FILE; if set, it serializes the histogram to FILE, which can then be used with a plotter to visualize the latency distribution. The histogram is stored in the file format of HdrHistogram, so it can easily be plotted using a browser-based tool like http://hdrhistogram.github.io/HdrHistogram/plotFiles.html. Simply upload the generated file and it will generate a beautiful graph for you. It also allows you to plot two or more histograms on the same surface in order to easily compare latency distributions.
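
For example, reusing the assumed client script name from above (all other client flags are elided here):

$ ./benchmarks/build/install/grpc-benchmarks/bin/qps_client --dump_histogram=/tmp/latency.hgrm

The resulting /tmp/latency.hgrm file can then be uploaded directly to the plotFiles page linked above.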

JVM Options

When running a benchmark it's often useful to adjust some JVM options to improve performance and to gain some insight into what's happening. Passing JVM options to the QPS server and client is as easy as setting the JAVA_OPTS environment variable. Below are some options that I find very useful (a worked example follows the list):

  • -Xms gives a lower bound on the heap to allocate and -Xmx gives an upper bound. If your program uses more than what's specified in -Xmx, the JVM will exit with an OutOfMemoryError. Always set -Xms and -Xmx to the same value: the young and old generations are sized according to the total available heap space, so if the total heap gets resized they also have to be resized, which triggers a full GC.
  • -verbose:gc prints some basic information about garbage collection. It logs to stdout whenever a GC happens and tells you about the kind of GC, the pause time, and memory compaction.
  • -XX:+PrintGCDetails prints out very detailed GC and heap usage information before the program terminates.
  • -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=path: when you are pushing the JVM hard, it sometimes crashes due to a lack of available heap space. These options allow you to dive into the details of why it happened. The heap dump can be viewed with e.g. the Eclipse Memory Analyzer.
  • -XX:+PrintCompilation will give you a detailed overview of what gets compiled, when it gets compiled, by which HotSpot compiler it gets compiled, and so on. It's a lot of output. I usually just redirect it to a file and look at it with less and grep.
  • -XX:+PrintInlining will give you a detailed overview of what gets inlined and why some methods didn't get inlined. The output is very verbose and, like -XX:+PrintCompilation, useful to look at after some major changes or when a drop in performance occurs.
  • It sometimes happens that a benchmark just doesn't make any progress: no bytes are transferred over the network, there is hardly any CPU utilization, memory usage is low, but the benchmark is still running. In that case it's useful to get a thread dump and see what's going on. HotSpot ships with tools called jps and jstack. jps tells you the process ids of all running JVMs on the machine, which you can then pass to jstack to print a thread dump of that JVM.
  • Taking a heap dump of a running JVM is similarly straightforward. First get the process id with jps and then use jmap to take the heap dump. You will almost always want to run it with -dump:live in order to only dump live objects. If possible, try to size the heap of your JVM (-Xmx) as small as possible in order to also keep the heap dump small. Large heap dumps are very painful and slow to analyze.
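
As a concrete example of the options above (the heap size is arbitrary, and the script path reuses the assumed install layout from earlier):

$ export JAVA_OPTS="-Xms4g -Xmx4g -verbose:gc -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
$ ./benchmarks/build/install/grpc-benchmarks/bin/qps_server

And the thread- and heap-dump workflow from the last two points, where <pid> is the process id reported by jps:

$ jps
$ jstack <pid>
$ jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>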

Profiling

Newer JVMs come with a built-in profiler called Java Flight Recorder. It's an excellent profiler and it can be used to start a recording directly on the command line, from within Java Mission Control or with jcmd.
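
For example, to start a one-minute recording on a running benchmark JVM with jcmd, where <pid> comes from jps (the duration and file name are arbitrary; on Oracle JDK 8 the JVM must also be started with -XX:+UnlockCommercialFeatures -XX:+FlightRecorder):

$ jcmd <pid> JFR.start duration=60s filename=/tmp/recording.jfr

The resulting file can be opened in Java Mission Control.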

Good introductions to how it works and how to use it are http://hirt.se/blog/?p=364 and http://hirt.se/blog/?p=370.