Rebased PR #8343 into the first commit of this PR, then (in the 2nd commit) reverted the metric recording of retry attempts. The PR as a whole is mechanical refactoring. No behavior change (except that some of the old code path run when the tracer is created is moved into the new method `streamCreated()`).
The API change is documented in go/grpc-stats-api-change-for-retry-java
Nginx and C core don't do graceful GOAWAY, and retries have matured such
that transparent retries may soon be on by default. Refining the
workaround thus can reduce the error rate for users.
Fixes #8310
This has been wrong since the introduction of the code in df357cb8.
Noticed as part of https://github.com/grpc/grpc-go/pull/4491. The error
text is generally ASCII, so this probably doesn't matter much.
We used to have two ClientStreamListener.closed() methods; one simply called the other with a default argument. This doubles debugging (e.g. #7921) and sometimes unit testing work. Deleting the 2-arg overload to clean up.
This PR is purely refactoring.
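A minimal sketch of the resulting listener shape, assuming the surviving overload is the three-argument one carrying RpcProgress (the real interface lives in io.grpc.internal and may differ in detail):
```java
import io.grpc.Metadata;
import io.grpc.Status;

// Sketch only, not the verbatim grpc-java interface.
interface ClientStreamListenerSketch {
  // Removed 2-arg overload that just forwarded with a default RpcProgress:
  //   void closed(Status status, Metadata trailers);

  // The single remaining method; callers now state the RpcProgress explicitly.
  void closed(Status status, RpcProgress rpcProgress, Metadata trailers);

  enum RpcProgress {
    PROCESSED,
    REFUSED,
    DROPPED
  }
}
```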
This is essentially re-adding #5901 after its behavior was "reverted"
in #8242. At the time grpc-core was the only dependency, so this change
only applies to grpc-core to make it a tiny bit easier to swap to
runtime scope in the future.
failOnVersionConflict has never been good for us. It is equivalent to
Maven dependencyConvergence, which we discourage our users from using
because it is too temperamental and _creates_ version skew issues over
time. However, we had no real alternative for determining whether our
deps would be misinterpreted by Maven.
failOnVersionConflict has been a constant drain and makes it really hard
to do seemingly-trivial upgrades. As evidenced by protobuf/build.gradle
in this change, it also caused _us_ to introduce a version downgrade.
This introduces our own custom requireUpperBoundDeps implementation so
that we can get back to simple dependency upgrades _and_ increase our
confidence in a consistent dependency tree.
Previously it required manually listing the direct deps of grpc-netty,
which is error-prone, as evidenced by the fact that we were missing
multiple deps (guava, perfmark-api). This didn't cause a problem because
grpc-core happens to bring in these same deps.
This adds support for creating a Netty Channel with SocketAddress and ChannelCredentials.
This aligns with NettyServerBuilder.forAddress(SocketAddress address, ServerCredentials creds).
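A hedged usage sketch of the new builder method (the host, port, and choice of InsecureChannelCredentials are just placeholders):
```java
import io.grpc.ChannelCredentials;
import io.grpc.InsecureChannelCredentials;
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;
import java.net.InetSocketAddress;
import java.net.SocketAddress;

// Build a channel from a SocketAddress plus ChannelCredentials, mirroring
// NettyServerBuilder.forAddress(SocketAddress, ServerCredentials).
public final class ForAddressExample {
  public static void main(String[] args) {
    SocketAddress address = new InetSocketAddress("localhost", 50051);
    ChannelCredentials creds = InsecureChannelCredentials.create();
    ManagedChannel channel = NettyChannelBuilder.forAddress(address, creds).build();
    channel.shutdownNow();
  }
}
```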
Enables a codepath for zero-copy protobuf deserialization. Two new InputStream extension interfaces are added:
- HasByteBuffer: allows access to the underlying buffers containing inbound bytes directly without copying
- Detachable: allows a custom marshaller to keep the buffers around until the application code is done using the protobuf messages
Applications can implement a custom marshaller that takes ownership of the ByteBuffers and wraps them into ByteStrings with protobuf's UnsafeByteOperations support. Then a RopeByteString, which is an in-place composite of ByteStrings, can be created. This enables using the zero-copy codepath (which requires an immutable ByteBuffer indication) of CodedInputStream for deserialization.
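Roughly, such a marshaller could look like the hedged sketch below. The class name and constructor are made up and this is not the exact documented example; it just shows the HasByteBuffer/Detachable checks, the UnsafeByteOperations wrapping, and the aliasing-enabled CodedInputStream.
```java
import com.google.protobuf.ByteString;
import com.google.protobuf.CodedInputStream;
import com.google.protobuf.MessageLite;
import com.google.protobuf.Parser;
import com.google.protobuf.UnsafeByteOperations;
import io.grpc.Detachable;
import io.grpc.HasByteBuffer;
import io.grpc.MethodDescriptor;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

// Hypothetical zero-copy marshaller sketch for any protobuf message type.
final class ZeroCopyMarshaller<T extends MessageLite> implements MethodDescriptor.Marshaller<T> {
  private final Parser<T> parser;

  @SuppressWarnings("unchecked")
  ZeroCopyMarshaller(T defaultInstance) {
    this.parser = (Parser<T>) defaultInstance.getParserForType();
  }

  @Override
  public InputStream stream(T value) {
    // Serialization side left simple; only deserialization is zero-copy here.
    return value.toByteString().newInput();
  }

  @Override
  public T parse(InputStream stream) {
    try {
      if (stream instanceof Detachable && stream instanceof HasByteBuffer
          && ((HasByteBuffer) stream).byteBufferSupported()) {
        // Detach so the transport does not reclaim the buffers when parse() returns.
        InputStream detached = ((Detachable) stream).detach();
        ByteString bytes = ByteString.EMPTY;
        while (detached.available() > 0) {
          ByteBuffer buffer = ((HasByteBuffer) detached).getByteBuffer();
          // unsafeWrap() avoids copying; concat() builds a RopeByteString in place.
          bytes = bytes.concat(UnsafeByteOperations.unsafeWrap(buffer));
          detached.skip(buffer.remaining());
        }
        // Aliasing lets CodedInputStream hand back sub-ranges without copying.
        CodedInputStream coded = bytes.newCodedInput();
        coded.enableAliasing(true);
        return parser.parseFrom(coded);
      }
      return parser.parseFrom(stream);  // copying fallback
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```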
The pom.properties are apparently present to allow tooling to know what
Maven artifact corresponds to a JAR, just by looking at the JAR. Since
we shade Netty, that produces inaccurate results. This was noticed
in #8077.
Fix a bug in StreamBufferingEncoder: when the client receives a GOAWAY while there are streams pending due to MAX_CONCURRENT_STREAMS, we see the following error:
io.netty.handler.codec.http2.Http2Exception$StreamException: Maximum active streams violated for this endpoint.
Tested with the interop client on Zulu 8 and Zulu 11 with
-XX:+UseOpenJSSE (after disabling tcnative). I was unable to add a new
case to TlsTest because adding OpenJSSE as a dependency in a Gradle
build fails: https://github.com/openjsse/openjsse/issues/19
Fixes #7907
Starting in Netty 4.1.60, Netty will validate Content-Length headers
using getAll() and setLong(). While getAll() was documented as only being
used in tests, it does not appear to have actually been used in any tests.
While Http2NettyTest.contentLengthPermitted() was added to confirm that
Content-Length works, it won't actually exercise any interesting
behavior until we upgrade to Netty 4.1.60. However, I did test with
Netty 4.1.60 and it reproduced the failure in
https://github.com/grpc/grpc-java/issues/7953 and passed with this
change.
Since Netty is now observing/modifying the headers, it would seem
appropriate to implement a substantial portion of the Http2Headers API.
However, the surface is much larger than we'd want to implement for a
'quick fix' that could be backported. In addition, it seems much of the
API is just convenience methods, so it is probably appropriate to split
out an AbstractHeaders class from DefaultHeaders in Netty that doesn't
make any assumptions about the header storage mechanism.
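Purely as an illustration of the minimal surface the quick fix needs (this is not gRPC's actual header class; the flat alternating name/value storage is hypothetical), getAll() and setLong() can be written without Netty's DefaultHeaders machinery:
```java
import io.netty.util.AsciiString;
import java.util.ArrayList;
import java.util.List;

final class FlatHeaders {
  // [name0, value0, name1, value1, ...]
  private AsciiString[] namesAndValues = new AsciiString[0];

  List<CharSequence> getAll(CharSequence name) {
    List<CharSequence> values = new ArrayList<>();
    for (int i = 0; i < namesAndValues.length; i += 2) {
      if (AsciiString.contentEqualsIgnoreCase(namesAndValues[i], name)) {
        values.add(namesAndValues[i + 1]);
      }
    }
    return values;
  }

  FlatHeaders setLong(CharSequence name, long value) {
    // Drop any existing entries for the name, then append the new value.
    List<AsciiString> kept = new ArrayList<>();
    for (int i = 0; i < namesAndValues.length; i += 2) {
      if (!AsciiString.contentEqualsIgnoreCase(namesAndValues[i], name)) {
        kept.add(namesAndValues[i]);
        kept.add(namesAndValues[i + 1]);
      }
    }
    kept.add(AsciiString.of(name));
    kept.add(AsciiString.of(Long.toString(value)));
    namesAndValues = kept.toArray(new AsciiString[0]);
    return this;
  }
}
```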
If a handshake is ongoing during shutdown, this would substantially
reduce the time it takes to shut down. Previously, you would need to use
channel.shutdownNow() to get fast shutdown behavior, which is an
unnecessary use of that variant.
When the current approach was written WriteBufferingAndExceptionHandler
didn't exist and so it was hard to predict how the pipeline would react
to events (particularly because of HTTP/2 handler's re-definition of
close()). Now that WBAEH exists, this is more straightforward.
See this PR in Netty: https://github.com/netty/netty/pull/9798. It's
possible that one peer has closed the stream, yet another frame from
that peer arrives after the close. This is largely harmless, as explained
in the PR in the Netty repository. If we don't do this, the log will be
polluted with these harmless warnings.
Example that would no longer be logged:
```
Jan 25, 2021 6:23:51 PM io.grpc.netty.NettyServerHandler onStreamError
WARNING: Stream Error
io.netty.handler.codec.http2.Http2Exception$StreamException: Received DATA frame for an unknown stream 27
at io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:147)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.shouldIgnoreHeadersOrDataFrame(DefaultHttp2ConnectionDecoder.java:596)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:239)
...
```
- Add APIs to `ClientTransportFactory`:
```java
public interface ClientTransportFactory {
  /**
   * Swaps to a new ChannelCredentials with all other settings unchanged. Returns null if the
   * ChannelCredentials is not supported by the current ClientTransportFactory settings.
   */
  SwapChannelCredentialsResult swapChannelCredentials(ChannelCredentials channelCreds);

  final class SwapChannelCredentialsResult {
    final ClientTransportFactory transportFactory;
    @Nullable final CallCredentials callCredentials;
  }
}
```
- Add `ChannelCredentials` to constructor args of `ManagedChannelImplBuilder`:
```java
public ManagedChannelImplBuilder(
    String target, @Nullable ChannelCredentials channelCreds,
    @Nullable CallCredentials callCreds, ...)
```
Enable this feature by setting the system property
-Dio.netty.ssl.masterKeyHandler=true
or
System.setProperty(SslMasterKeyHandler.SYSTEM_PROP_KEY, "true");
The keys will be written to the logger named "io.netty.wireshark" at
the warning level. To export the keys to a file, you can configure
the logging framework like this (with log4j.xml, for example):
<appender name="key-file" class="org.apache.log4j.RollingFileAppender">
<param name="file" value="d:/keyfile.txt"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%m%n"/>
</layout>
</appender>
<category name="io.netty.wireshark">
<priority value="DEBUG" />
<appender-ref ref="key-file" />
</category>
Wireshark can analyze the gRPC messages over TLS with this
key log file.
Closes #7199
Change InternalServer to handle multiple addresses, and implement this in NettyServer.
This makes ServerImpl have a single transport server; this single transport server (NettyServer) binds to all listening addresses during bootstrap. (#7674)
The tiny cache size was removed from the bytebuf allocator and so was
deprecated. TLSv1.3 was enabled by the upgrade, which fails mTLS
connections at different times. Conscrypt is incompatible with the
default TrustManager when TLSv1.3 is enabled so we explicitly disable
TLSv1.3 when Conscrypt is used for the moment.
API change (See go/grpc-rls-callcreds-to-server):
- Add `ChannelCredentials.withoutBearerTokens()`
- Add `createResolvingOobChannelBuilder(String, ChannelCredentials)`, `getChannelCredentials()` and `getUnsafeChannelCredentials()` for `LoadBalancer.Helper`
This PR does not include the implementation of `createResolvingOobChannelBuilder(String, ChannelCredentials)`.
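A hedged sketch of how a policy such as RLS might combine this new surface. It assumes getUnsafeChannelCredentials() returns ChannelCredentials and that createResolvingOobChannelBuilder(String, ChannelCredentials) returns a ManagedChannelBuilder like the existing single-argument overload; the target "lookup.example.com" is made up.
```java
import io.grpc.ChannelCredentials;
import io.grpc.LoadBalancer;
import io.grpc.ManagedChannel;

final class ControlPlaneChannels {
  static ManagedChannel create(LoadBalancer.Helper helper) {
    // Strip bearer tokens before talking to the control plane.
    ChannelCredentials creds = helper.getUnsafeChannelCredentials().withoutBearerTokens();
    return helper.createResolvingOobChannelBuilder("lookup.example.com", creds).build();
  }
}
```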
This is to be used for xDS to inject configuration for the
XdsServerCredentials. We'd like a cleaner approach, but they mostly seem
to be more heavy-weight. We will probably address this at the same time
we handle the Executor being passed for TLS. In the mean time this is
easy, doesn't hurt much, and can easily be changed in the future.
* fix channel builders ABI backward compatibility broken in v1.33.0
* fix server builders ABI backward compatibility broken in v1.33.0
* make ForwardingServerBuilder package-private
With this, it will be clear if the RPC failed because the server didn't
use a double-GOAWAY or if it failed because of MAX_CONCURRENT_STREAMS or
if it was due to a local race. It also fixes the status code to be
UNAVAILABLE except for the RPCs included in the GOAWAY error (modulo the
Netty bug).
Fixes #5855
The stream creation was failing because the stream id was disallowed:
Caused by: io.grpc.StatusRuntimeException: INTERNAL: http2 exception
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:629)
... 16 more
Caused by: io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception$StreamException: Cannot create stream 222691 greater than Last-Stream-ID 222689 from GOAWAY.
The problem was introduced in 9ead606. Fixes #7357
Normally the first exception/event experienced is the cause and is
followed by a stampede of ClosedChannelExceptions. In this case,
SslHandler is manufacturing a ClosedChannelException of its own and
propagating it _before_ the trigger event. This might be considered a
bug, but changing SslHandler's behavior would be very risky and almost
certainly break someone's code.
Fixes#7376
It's hoped that this resolves the "too_many_pings" issue some users are
seeing that is worked around by GRPC_EXPERIMENTAL_AUTOFLOWCONTROL=false.
This change also avoids resetting the ping count for empty data frames
(which shouldn't really happen with gRPC).
The previous code failed to reset the ping count on HEADERS and
WINDOW_UPDATE. The code _appeared_ to have callbacks for WINDOW_UPDATE,
but was layered above the Http2Connection so was never called. Thus,
this version is much more aggressive than the previous version while
also addressing the correctness issue.
JUnit 4.13 deprecates ExpectedException and Assert.assertThat(T, org.hamcrest.Matcher).
Without Java 8 we don't want to migrate away from ExpectedException at
this time. We tend to prefer Truth over Hamcrest, so I swapped the one
instance of Assert.assertThat() to use Truth. With this change we get a
warning-less build with JUnit 4.13. We don't upgrade yet because we
still need to support JUnit 4.12 for some use cases, but we will be able
to upgrade to 4.13 soon, once those use cases upgrade.
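For illustration, the shape of the swap from the deprecated assertThat to Truth (the class and values here are made up):
```java
import static com.google.common.truth.Truth.assertThat;

import org.junit.Test;

public class AssertStyleExampleTest {
  @Test
  public void truthInsteadOfHamcrest() {
    String actual = "hello";
    // Deprecated in JUnit 4.13:
    //   org.junit.Assert.assertThat(actual, org.hamcrest.CoreMatchers.is("hello"));
    assertThat(actual).isEqualTo("hello");
  }
}
```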
This is a very simple change to test for the IBMJSSE2 security provider in addition to the others. The IBM JRE does not support the Sun provider, but instead has IBMJSSE2, which supports the same API calls.
I tested this on a z/OS machine; it now works, whereas before it couldn't find a security provider.
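A rough sketch of the idea only; the class, method, and exact provider list are illustrative, and the real check lives in gRPC's TLS setup code:
```java
import java.security.Provider;
import java.security.Security;

// Probe for a usable JSSE provider by name, accepting the IBM JRE's IBMJSSE2
// alongside the Sun provider. The actual list and lookup order in grpc-java may differ.
final class JsseProviderProbe {
  private static final String[] CANDIDATE_PROVIDERS = {"SunJSSE", "IBMJSSE2"};

  static Provider findJsseProvider() {
    for (String name : CANDIDATE_PROVIDERS) {
      Provider provider = Security.getProvider(name);
      if (provider != null) {
        return provider;
      }
    }
    return null;  // caller falls back to other negotiation options
  }
}
```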
96ad6338 accidentally caused the javadoc and sources jars to no longer
be published for grpc-netty-shaded. It would appear to be due to the
jars being empty. This commit causes them to be published again.