Commit Graph

294 Commits

Author SHA1 Message Date
Lidi Zheng 13b378bc45
internal: add global DialOptions and ServerOptions for all clients and servers (#5352) 2022-06-02 16:17:01 -07:00
Doug Fawley 9711b148c4
server: clarify documentation around setting and sending headers and ServerStream errors (#5302) 2022-04-08 13:11:40 -07:00
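
The ordering rules that commit documents can be illustrated with a short Go sketch of a unary handler; the EchoRequest/EchoResponse types and the metadata values are placeholders, not part of the commit:

    package example

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/metadata"
    )

    // EchoRequest and EchoResponse are placeholder types standing in for
    // generated protobuf messages.
    type EchoRequest struct{ Message string }
    type EchoResponse struct{ Message string }

    // UnaryEcho shows the documented ordering: grpc.SetHeader buffers header
    // metadata until the first response message (or an explicit grpc.SendHeader)
    // puts it on the wire; after that the headers can no longer be changed.
    func UnaryEcho(ctx context.Context, req *EchoRequest) (*EchoResponse, error) {
    	if err := grpc.SetHeader(ctx, metadata.Pairs("stage", "start")); err != nil {
    		return nil, err
    	}
    	// Optionally flush the headers before the response message is sent.
    	if err := grpc.SendHeader(ctx, metadata.Pairs("stage", "flushed")); err != nil {
    		return nil, err
    	}
    	return &EchoResponse{Message: req.Message}, nil
    }
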
Easwar Swaminathan a73725f42d
channelz: include channelz identifier in logs (#5192) 2022-02-23 07:30:06 -08:00
Menghan Li 61a6a06b88
server: handle context errors returned by service handler (#5156) 2022-01-26 11:02:23 -08:00
Doug Fawley f068a13ef0
server: add missing conn.Close if the connection dies before reading the HTTP/2 preface (#4837) 2021-10-04 11:22:00 -07:00
Evan Jones e6246c22eb
server: optimize chain interceptors (-1 allocation, -10% time/call) (#4746) 2021-09-22 13:30:27 -07:00
Zach Reyes 606403ded2
transport: fix log spam from Server Authentication Handshake errors (#4798)
2021-09-21 19:33:18 -04:00
Zach Reyes c361e9ea16
Move Server Credentials Handshake to transport (#4692)
2021-08-23 19:39:14 -04:00
吴亲库里 52cea24534
server: fix net.conn closed twice (#4663) 2021-08-18 13:31:22 -07:00
Menghan Li c052940bcd
server: fix leaked net.Conn (#4633)
This happens when NewServerTransport() returns nil, nil. The rawConn is
closed when the transport is closed, which will never happen in this
case (since the returned transport is nil).
2021-08-02 13:05:02 -07:00
Aliaksandr Mianzhynski 9b2fa9f8d3
server: improve chained interceptors performance (#4524) 2021-06-24 22:11:47 -07:00
Iskandarov Lev 4faa31f0a5
stats: add stream info inside stats.Begin (#4533) 2021-06-18 13:21:07 -07:00
Easwar Swaminathan 174b1c28af
internal/transport: skip log on EOF when reading client preface (#4458) 2021-06-02 16:47:35 -07:00
Easwar Swaminathan 728364accf
server: return UNIMPLEMENTED on receipt of malformed method name (#4464) 2021-05-24 17:30:40 -07:00
Ehsan Afzali a8e85e0d57
server: allow PreparedMsgs to work for server streams (#3480) 2021-05-21 15:54:24 -07:00
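
A minimal sketch of the PreparedMsg usage this commit enables on server streams; the helper name is illustrative, and resp is assumed to be a message of the stream's declared response type:

    package example

    import "google.golang.org/grpc"

    // sendPrepared sketches using grpc.PreparedMsg on a server stream: the
    // response is marshaled (and compressed, if configured) once by Encode,
    // and SendMsg then writes the prepared bytes without re-encoding.
    func sendPrepared(stream grpc.ServerStream, resp interface{}) error {
    	pm := &grpc.PreparedMsg{}
    	if err := pm.Encode(stream, resp); err != nil {
    		return err
    	}
    	return stream.SendMsg(pm)
    }
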
Doug Fawley 328b1d171a
transport: allow InTapHandle to return status errors (#4365) 2021-05-07 14:37:52 -07:00
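
A hedged sketch of a tap handler taking advantage of this change; the admit check is hypothetical and not part of the commit:

    package example

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    	"google.golang.org/grpc/tap"
    )

    // admit is a hypothetical admission check.
    func admit(fullMethod string) bool { return true }

    // newTappedServer installs a tap handler. Per the commit title, a status
    // error returned here can now be propagated to the client as that status
    // rather than as a bare transport-level failure.
    func newTappedServer() *grpc.Server {
    	return grpc.NewServer(grpc.InTapHandle(
    		func(ctx context.Context, info *tap.Info) (context.Context, error) {
    			if !admit(info.FullMethodName) {
    				return nil, status.Errorf(codes.ResourceExhausted, "%q rejected before processing", info.FullMethodName)
    			}
    			return ctx, nil
    		},
    	))
    }
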
Mikhail Mazurskiy d2d6bdae07
server: add ForceServerCodec() to set a custom encoding.Codec on the server (#4205) 2021-05-06 09:40:54 -07:00
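
A small sketch of ForceServerCodec with a codec that merely delegates to the registered proto codec; the wrappedCodec type is illustrative only:

    package example

    import (
    	"google.golang.org/grpc"
    	"google.golang.org/grpc/encoding"
    	_ "google.golang.org/grpc/encoding/proto" // registers the default "proto" codec
    )

    // wrappedCodec is a hypothetical encoding.Codec; a real one might add
    // instrumentation or use a different wire format.
    type wrappedCodec struct{ base encoding.Codec }

    func (c wrappedCodec) Marshal(v interface{}) ([]byte, error)      { return c.base.Marshal(v) }
    func (c wrappedCodec) Unmarshal(data []byte, v interface{}) error { return c.base.Unmarshal(data, v) }
    func (c wrappedCodec) Name() string                               { return c.base.Name() }

    // newServerWithCodec forces the server to use wrappedCodec for all RPCs.
    func newServerWithCodec() *grpc.Server {
    	return grpc.NewServer(grpc.ForceServerCodec(wrappedCodec{base: encoding.GetCodec("proto")}))
    }
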
Easwar Swaminathan 52a707c0da
xds: serving mode changes outlined in gRFC A36 (#4328) 2021-04-26 14:29:06 -07:00
Easwar Swaminathan 2fad6bf4da
xds: Implement server-side security (#4092) 2020-12-16 10:27:18 -08:00
Gaurav Gahlot d9063e7af3
standardized experimental warnings (#3917) 2020-10-02 09:11:08 -07:00
Stephen L. White e6c98a478e
stats: include message header in stats.InPayload.WireLength (#3886) 2020-09-25 10:06:54 -07:00
Doug Fawley 44d73dff99
cmd/protoc-gen-go-grpc: rework service registration (#3828) 2020-08-25 09:28:01 -07:00
Garrett Gutierrez 0e72e09474
server: prevent hang in Go HTTP transport in some error cases (#3833) 2020-08-21 18:04:04 -07:00
Easwar Swaminathan f30caa90ad
server: Add ServiceRegistrar interface. (#3816) 2020-08-14 10:26:20 -07:00
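
A sketch of the pattern the new interface enables, using a hypothetical helper that accepts any registrar:

    package example

    import "google.golang.org/grpc"

    // registerOn shows the pattern: generated RegisterFooServer functions take
    // a grpc.ServiceRegistrar, so the same registration code works for
    // *grpc.Server and for wrappers that also implement RegisterService.
    func registerOn(r grpc.ServiceRegistrar, desc *grpc.ServiceDesc, impl interface{}) {
    	r.RegisterService(desc, impl)
    }
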
Easwar Swaminathan a5514c9e50
grpc: Minor refactor in server code. (#3779) 2020-08-06 13:10:09 -07:00
Menghan Li c95dc4da23
doc: mark CustomCodec as deprecated (#3698) 2020-06-26 12:56:03 -07:00
Garrett Gutierrez 506b773066
Implemented component logging (#3617) 2020-06-26 12:04:47 -07:00
IceberGu 636b0d84dd
internal: fix typos (#3581) 2020-05-19 19:24:38 -07:00
Adhityaa Chandrasekar a0cdc21e61
server.go: use worker goroutines for fewer stack allocations (#3204)
Currently (go1.13.4), the default stack size for newly spawned
goroutines is 2048 bytes. This is insufficient when processing gRPC
requests, as we often require more than 4 KiB of stack. This causes the
Go runtime to call runtime.morestack at least twice per RPC, which
causes performance to suffer needlessly as stack reallocations require
all sorts of internal work such as changing pointers to point to new
addresses.

Since this stack growth is guaranteed to happen at least twice per RPC,
reusing goroutines gives us two wins:

  1. The stack is already grown to 8 KiB after the first RPC, so
     subsequent RPCs do not call runtime.morestack.
  2. We eliminate the need to spawn a new goroutine for each request
     (even though they're relatively inexpensive).

Performance improves across the board. The improvement is especially
visible in small, unary requests as the overhead of stack reallocation
is higher, percentage-wise. QPS is up anywhere between 3% and 5%
depending on the number of concurrent RPC requests in flight. Latency is
down ~3%. There is even a 1% decrease in memory footprint in some cases,
though that is an unintended but happy coincidence.

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_1m0s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_8-reqSize_1B-respSize_1B-compressor_off-channelz_false-preloader_false
               Title       Before        After Percentage
            TotalOps      2613512      2701705     3.37%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op      8657.00      8654.17    -0.03%
           Allocs/op       173.37       173.28     0.00%
             ReqT/op    348468.27    360227.33     3.37%
            RespT/op    348468.27    360227.33     3.37%
            50th-Lat    174.601µs    167.378µs    -4.14%
            90th-Lat    233.132µs    229.087µs    -1.74%
            99th-Lat     438.98µs    441.857µs     0.66%
             Avg-Lat    183.263µs     177.26µs    -3.28%
2020-04-23 15:50:02 -07:00
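
A simplified sketch of the worker-goroutine idea described in the commit above, not grpc-go's internal implementation:

    package example

    import "sync"

    // workerPool reuses a fixed set of goroutines whose stacks have already
    // grown, instead of paying runtime.morestack on a fresh 2 KiB stack for
    // every request.
    type workerPool struct {
    	tasks chan func()
    	wg    sync.WaitGroup
    }

    func newWorkerPool(n int) *workerPool {
    	p := &workerPool{tasks: make(chan func(), n)}
    	p.wg.Add(n)
    	for i := 0; i < n; i++ {
    		go func() {
    			defer p.wg.Done()
    			for task := range p.tasks {
    				task() // the worker's stack stays grown across tasks
    			}
    		}()
    	}
    	return p
    }

    // Serve hands a request off to one of the long-lived workers.
    func (p *workerPool) Serve(task func()) { p.tasks <- task }

    // Stop drains the pool and waits for in-flight tasks to finish.
    func (p *workerPool) Stop() {
    	close(p.tasks)
    	p.wg.Wait()
    }
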
Garrett Gutierrez fff75ae40f
channelz: log on channelz trace events and trace on channelz relevant logs. (#3329)
2020-02-14 10:11:26 -08:00
Doug Fawley 6b9bf4296e
Revert "profiling: add hooks within grpc (#3159)" (#3378)
This reverts commit 83263d17f7.
2020-02-14 07:56:46 -08:00
tukeJonny d0235e4d6b
interceptor: new APIs for chaining server interceptors. (#3336) 2020-02-12 11:11:50 -08:00
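
A usage sketch of the chaining option added here (ChainUnaryInterceptor shown; ChainStreamInterceptor is analogous); the logging and timing interceptors are hypothetical:

    package example

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    )

    func logging(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    	log.Printf("call: %s", info.FullMethod)
    	return handler(ctx, req)
    }

    func timing(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    	start := time.Now()
    	resp, err := handler(ctx, req)
    	log.Printf("%s took %v", info.FullMethod, time.Since(start))
    	return resp, err
    }

    // newChainedServer installs both interceptors; they run in the order given,
    // each wrapping the next via the handler argument.
    func newChainedServer() *grpc.Server {
    	return grpc.NewServer(grpc.ChainUnaryInterceptor(logging, timing))
    }
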
Adhityaa Chandrasekar 83263d17f7
profiling: add hooks within grpc (#3159) 2020-02-12 11:10:44 -08:00
Menghan Li 8c50fc2565
revert buffer reuse (#3338)
* Revert "stream: fix returnBuffers race during retry (#3293)"

This reverts commit ede71d589c.

* Revert "codec/proto: reuse of marshal byte buffers (#3167)"

This reverts commit 642675125e.
2020-01-27 13:30:41 -08:00
Menghan Li ede71d589c
stream: fix returnBuffers race during retry (#3293)
And release the buffer after Write(), unless the buffer needs to be kept for retries.
2020-01-07 17:17:22 -08:00
Adhityaa Chandrasekar 642675125e codec/proto: reuse of marshal byte buffers (#3167)
Performance benchmarks can be found below. Obviously, an 8 KiB
request/response is tailored to showcase this improvement as this is
where codec buffer reuse shines, but I've run other benchmarks too (like
1-byte requests and responses) and there's no discernible impact on
performance.

We do not allow reuse of buffers when stat handlers or binlogs are
turned on. This is because those two may need access to the data and
payload even after the data has been written to the wire. In such cases,
we never return the data back to the pool.

A buffer reuse threshold of 1 KiB was determined after several
experiments. There are diminishing returns when buffer reuse is enabled for
smaller messages (actually, a negative impact).

unary-networkMode_none-bufConn_false-keepalive_false-benchTime_40s-trace_false-latency_0s-kbps_0-MTU_0-maxConcurrentCalls_6-reqSize_8192B-respSize_8192B-compressor_off-channelz_false-preloader_false
               Title       Before        After Percentage
            TotalOps       839638       906223     7.93%
             SendOps            0            0      NaN%
             RecvOps            0            0      NaN%
            Bytes/op    103788.29     80592.47   -22.35%
           Allocs/op       183.33       189.30     3.27%
             ReqT/op 1375662899.20 1484755763.20     7.93%
            RespT/op 1375662899.20 1484755763.20     7.93%
            50th-Lat    238.746µs    225.019µs    -5.75%
            90th-Lat    514.253µs    456.439µs   -11.24%
            99th-Lat    711.083µs    702.466µs    -1.21%
             Avg-Lat     285.45µs    264.456µs    -7.35%
2019-12-20 09:41:23 -08:00
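
A rough sketch of the pooling technique described above, with the 1 KiB threshold the commit mentions; this illustrates the idea rather than reproducing grpc-go's codec code:

    package example

    import "sync"

    // reuseThreshold mirrors the 1 KiB cutoff from the commit's experiments:
    // below it, pooling costs more than it saves.
    const reuseThreshold = 1024

    var bufPool = sync.Pool{New: func() interface{} { return new([]byte) }}

    // getBuffer returns a buffer to marshal into; the caller appends into it,
    // growing it if the pooled capacity is too small.
    func getBuffer(size int) []byte {
    	if size < reuseThreshold {
    		return make([]byte, 0, size)
    	}
    	buf := bufPool.Get().(*[]byte)
    	return (*buf)[:0]
    }

    // putBuffer returns a buffer to the pool once its bytes are on the wire;
    // it is skipped entirely when stats handlers or binary logging still need
    // the payload.
    func putBuffer(buf []byte) {
    	if cap(buf) < reuseThreshold {
    		return
    	}
    	bufPool.Put(&buf)
    }
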
Adhityaa Chandrasekar 3180dcb49d server.go: combine defers to reduce stack usage (#3208)
Continuing the war on stacks, we can reduce the amount of stack required
per-RPC by combining defers from different components into one.

Each defer statement in process{Unary,Streaming}RPC goes on the stack
and occupies about 56-64 bytes for the entire lifetime of an RPC, which
could be very long. More importantly, a call to runtime.morestack is
often required to allocate a new, larger stack when the handler
goroutine runs out of stack memory (Go's default stack size is 2 KiB).

Before:

    $ go tool objdump <binary> | grep "TEXT.*processUnaryRPC(SB)" -A 10 | grep "SUBQ.*SP"
      server.go:867   0x9132fb    4881ec80030000      SUBQ $0x380, SP
    $ go tool objdump <binary> | grep "TEXT.*processStreamingRPC(SB)" -A 10 | grep "SUBQ.*SP"
      server.go:1099  0x9151bb    4881ec68020000      SUBQ $0x268, SP

After:

    $ go tool objdump <binary> | grep "TEXT.*processUnaryRPC(SB)" -A 10 | grep "SUBQ.*SP"
      server.go:867   0x9132fb    4881ecd0020000      SUBQ $0x2d0, SP
    $ go tool objdump <binary> | grep "TEXT.*processStreamingRPC(SB)" -A 10 | grep "SUBQ.*SP"
      server.go:1116  0x9150fb    4881ecf8010000      SUBQ $0x1f8, SP

As one can observe, processUnaryRPC's stack goes down from 0x380
bytes to 0x2d0 bytes (896 - 720 = 176 bytes), while processStreamingRPC's
stack goes down from 0x268 bytes to 0x1f8 bytes (616 - 504 = 112 bytes).

There are probably other things we can do here, but these are some
low-hanging fruit to pick off.
2019-12-05 14:50:20 -08:00
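
A generic Go sketch of the defer-combining technique, with hypothetical lock/unlock helpers standing in for per-RPC cleanup:

    package example

    import "os"

    // processCombined registers a single deferred closure instead of several
    // defer statements, each of which would occupy stack space for the
    // lifetime of the call.
    func processCombined(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	var locked bool
    	defer func() {
    		// One defer record; the closure decides what to clean up based on
    		// how far the function got.
    		if locked {
    			unlock()
    		}
    		f.Close()
    	}()
    	lock()
    	locked = true
    	// ... handler body ...
    	return nil
    }

    // lock and unlock are hypothetical helpers.
    func lock()   {}
    func unlock() {}
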
Doug Fawley 24f6331d7e
server: correct doc regarding unknown handlers and interceptors (#3195) 2019-11-19 14:30:36 -08:00
Mo Zhonghua fb2e5cdc85 server: add ServerOption HeaderTableSize (#2931) 2019-10-03 16:08:31 -07:00
David Zbarsky 92635fa6bf server: avoid call to trace.FromContext and resulting allocations when tracing is disabled (#2926) 2019-07-30 10:14:53 -07:00
ajwerner b5748caae7 server: populate WireLength on stats.InPayload for unary RPCs (#2932)
Fixes #2692 which was incompletely fixed by #2711.

Also updates stats/stat_test.go to sanity-check WireLength.
2019-07-24 16:24:45 -07:00
David Zbarsky 04c71b7aac server: avoid an unnecessary allocation per-RPC for OK status (#2920) 2019-07-22 09:53:08 -07:00
Doug Fawley 59fd1f3d41
server: immediately close all connections created after GracefulStop (#2903)
Internal cleanup: replace quit/quitOnce/done/doneOnce with grpcsync.Events.
2019-07-12 13:14:19 -07:00
Can Guler 915d20dcdb
grpc: change type of Server.conns
Change Server.conns from a map[io.Closer]bool to a map[transport.ServerTransport]bool.
2019-06-26 11:09:45 -07:00
Menghan Li 1e6ab1e96e
server: define ServerOption as interfaces (#2784)
Define them as interfaces instead of functions, so custom server options can
be made by wrapping an EmptyServerOption.
2019-04-26 10:33:22 -07:00
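
A sketch of the wrapping pattern this enables; customOption and the wrapper function are hypothetical:

    package example

    import "google.golang.org/grpc"

    // customOption embeds grpc.EmptyServerOption so it satisfies
    // grpc.ServerOption (as a no-op for grpc.NewServer), while wrapper code
    // can type-assert on it to pull out extra configuration.
    type customOption struct {
    	grpc.EmptyServerOption
    	tag string
    }

    // newWrappedServer scans for the options it recognizes before passing
    // everything through to grpc.NewServer.
    func newWrappedServer(opts ...grpc.ServerOption) *grpc.Server {
    	for _, o := range opts {
    		if co, ok := o.(customOption); ok {
    			_ = co.tag // wrapper-specific handling would go here
    		}
    	}
    	return grpc.NewServer(opts...)
    }
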
David Symonds f1437f7cc5 server: Improve error message when an unknown method is invoked. (#2723)
Previously only `unknown service <x>` was returned, which is misleading
when the service is known but the method is unknown.
2019-03-27 16:19:28 -07:00
David Symonds 9a2caafd93 client: restore remote address in traces (#2718)
The client-side traces were otherwise only showing `RPC: to <nil>`,
which is not helpful.

Also clean up construction of traceInfo and firstLine in a few places.
2019-03-27 09:52:40 -07:00
JP Sugarbroad a618c37a27 server: Don't log errors on ErrConnDispatched (#2656)
ErrConnDispatched is a normal error -- we should not fill up logs with it.
2019-03-07 13:22:17 -08:00
Doug Fawley ed70822b12 keepalive: apply minimum ping time of 10s to client and 1s to server (#2642)
2019-02-21 13:09:37 -08:00
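
A sketch showing where these keepalive knobs live; the durations are examples only, and the clamping the title describes happens inside grpc:

    package example

    import (
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/keepalive"
    )

    // newServerWithKeepalive sets the server-side keepalive ping interval;
    // values below 1s are raised to the new minimum.
    func newServerWithKeepalive() *grpc.Server {
    	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
    		Time: 2 * time.Hour,
    	}))
    }

    // dialWithKeepalive sets the client-side keepalive ping interval; values
    // below 10s are raised to the new minimum. WithInsecure is used only to
    // keep the sketch short.
    func dialWithKeepalive(target string) (*grpc.ClientConn, error) {
    	return grpc.Dial(target, grpc.WithInsecure(), grpc.WithKeepaliveParams(keepalive.ClientParameters{
    		Time:    30 * time.Second,
    		Timeout: 5 * time.Second,
    	}))
    }
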
Wu Kai 4cad6a6283 comment: default MaxSendMsgSize should be math.MaxInt32 instead of 4MB (#2586) 2019-01-22 10:48:59 -08:00