* core, netty, okhttp: implement new logic for the nameResolverFactory API in channelBuilder
* fix ManagedChannelImpl to use NameResolverRegistry instead of NameResolverFactory
* fix ManagedChannelImplBuilder and remove nameResolverFactory
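A minimal sketch of the replacement pattern, with a hypothetical ExampleProvider; instead of handing the channel builder a factory, the provider is registered with the global NameResolverRegistry:
```
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.NameResolver;
import io.grpc.NameResolverProvider;
import io.grpc.NameResolverRegistry;
import java.net.URI;

class RegistryExample {
  // Hypothetical provider; a real one would return a working resolver.
  static final class ExampleProvider extends NameResolverProvider {
    @Override public String getDefaultScheme() { return "example"; }
    @Override protected boolean isAvailable() { return true; }
    @Override protected int priority() { return 5; }
    @Override
    public NameResolver newNameResolver(URI targetUri, NameResolver.Args args) {
      return null; // null means this provider can't handle the target
    }
  }

  static ManagedChannel newChannel() {
    // Register with the registry instead of calling the removed
    // nameResolverFactory() method on the channel builder.
    NameResolverRegistry.getDefaultRegistry().register(new ExampleProvider());
    return ManagedChannelBuilder.forTarget("example:///backend").build();
  }
}
```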
* Integrate target parsing and NameResolverProvider searching
Actually creating the name resolver is now delayed to the end of
ManagedChannelImpl.getNameResolver; we don't want to call into the name
resolver just to determine whether we should use it.
Added getDefaultScheme() to NameResolverRegistry to avoid needing
NameResolver.Factory.
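A sketch of that lookup (how visible getDefaultScheme() is outside the channel is an assumption here):
```
import io.grpc.NameResolverRegistry;

class DefaultSchemeExample {
  public static void main(String[] args) {
    // The default scheme now comes straight from the registry, with no
    // NameResolver.Factory involved; "dns" is the usual answer.
    String scheme = NameResolverRegistry.getDefaultRegistry().getDefaultScheme();
    System.out.println("default scheme: " + scheme);
  }
}
```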
---------
Co-authored-by: Eric Anderson <ejona@google.com>
Currently, the gRPC compiler uses the bare name `String` instead of the
fully qualified `java.lang.String`. Update the generator to use the
`$String$` alias to avoid compile issues with protobuf messages called
String.
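To illustrate the collision, with a plain nested class standing in for the generated message (all names hypothetical):
```
class Outer {
  // Stand-in for a protobuf message declared as `message String {}`.
  static class String {}

  // A bare `String` here resolves to Outer.String, so this would not
  // compile: String s = "hello";

  // The $String$ alias expands to the fully qualified name, which is
  // unambiguous regardless of the nested class:
  java.lang.String s = "hello";
}
```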
Fixes #10316.
TlsTesting.loadCert() is a public API and so should be preferred over
our internal utility. It avoids creating a temp file that has to be
deleted by a shutdown hook. Usages that needed a file were not migrated.
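For example (the ca.pem resource name is illustrative):
```
import io.grpc.ChannelCredentials;
import io.grpc.TlsChannelCredentials;
import io.grpc.testing.TlsTesting;
import java.io.IOException;

class LoadCertExample {
  static ChannelCredentials testCredentials() throws IOException {
    // loadCert() returns an InputStream over the named test certificate,
    // so no temp file or shutdown hook is involved.
    return TlsChannelCredentials.newBuilder()
        .trustManager(TlsTesting.loadCert("ca.pem"))
        .build();
  }
}
```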
The version used by protoc-gen-grpc-java will be upgraded separately,
because large C++ build changes are necessary. But that won't impact
users at all. We are upgrading to protoc 22.3; only the grpc plugin is
not upgraded.
Bazel is upgraded for both Java and C++.
LoadWorkerTest.runUnaryBlockingClosedLoop and Http2NettyTest.tlsInfo are
failing every CI run. It appears they are the unfortunate tests that run
first, so they are slowest to start while classloading proceeds. There
are definitely other tests that probably need adjustment, but fixing
these two gives us some hope of having a green run occasionally.
Introduce an AsyncService interface in the generated code and move the methods from <service>ImplBase to default implementations on the interface.
* update pom files to allow java 1.8
* Add a bindService(<service>Async) method
* Change TestServiceImpl to use the interface and include a bind method instead of extending TestServiceImplBase (see the sketch below).
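A sketch with a hypothetical Greeter service, following the generated-name pattern above (GreeterGrpc, AsyncService, bindService; message types assumed):
```
import io.grpc.BindableService;
import io.grpc.ServerServiceDefinition;
import io.grpc.stub.StreamObserver;

// Implements the generated interface instead of extending GreeterImplBase.
class GreeterService implements GreeterGrpc.AsyncService, BindableService {
  @Override
  public void sayHello(HelloRequest request, StreamObserver<HelloReply> responseObserver) {
    responseObserver.onNext(
        HelloReply.newBuilder().setMessage("Hello " + request.getName()).build());
    responseObserver.onCompleted();
  }

  @Override
  public ServerServiceDefinition bindService() {
    return GreeterGrpc.bindService(this); // the new static bind method
  }
}
```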
Trying to upgrade Gradle to 7.6 improved the checkstyle plugin such that
it appears to run in situations where it previously didn't. That in turn
exposed us to https://github.com/checkstyle/checkstyle/issues/5088. That
bug was fixed in 8.28, which also fixed lots of other bugs. So now we
have better checking, and some existing violations needed fixing. Since
the code style fixes generated a lot of noise, this is a pre-fix to
reduce the size of the Gradle upgrade.
I did not upgrade past 8.28 because at some point other bugs were
introduced, in particular in the Indentation module. I chose the oldest
version that fixed the particular bug impacting me. Upgrading to this
old-but-newer version still makes it easier to move to a newer version
in the future.
runUnaryBlockingClosedLoop is failing after 10.3s, which means 5.3s was
probably spent loading the LoadWorker. That means things are likely
indeed slow enough that 5s isn't enough time to wait for the server to
start. A successful execution of runUnaryBlockingClosedLoop takes 12.1
seconds, which again points to generally slow execution.
This should fix test failures on aarch64.
```
expected to be less than: 0.0
but was : 0.0
at app//io.grpc.benchmarks.driver.LoadWorkerTest.assertWorkOccurred(LoadWorkerTest.java:198)
at app//io.grpc.benchmarks.driver.LoadWorkerTest.runUnaryBlockingClosedLoop(LoadWorkerTest.java:90)
```
runUnaryBlockingClosedLoop() has been failing while the other tests
succeed. The failure is complaining that getCount() == 0, which means no
RPCs completed. The slowest successful test has a mean RPC time of
226 ms (the unit was logged incorrectly), and compared to the x86 tests,
runUnaryBlockingClosedLoop() is ~2x as slow because it executes first.
So this is probably _barely_ failing, and 4 attempts instead of 3 would
be sufficient. While the test tries to wait for 10 RPCs to complete, it
seems likely it is stopping early even for the successful runs on
aarch64. There are 4 concurrent RPCs, so to get 10 RPCs we need to wait
for 3 batches of RPCs to complete, which would be 1346 ms (5 loops)
assuming a 452 ms mean latency. Bumping the timeout by 10x to give lots
of headroom.
This allows using LoadClient with xDS.
The changes to SocketAddressValidator are a bit hacky, but we really
don't care about cleanliness there. Eventually on client-side, we should
be deleting the unix special case entirely, as we'll have a unix name
resolver.
Fixes #8877
This can be used by annotation processors to avoid processing the
gRPC-generated code. The normal Generated annotation only has SOURCE
retention, so it isn't available to annotation processors.
I don't include the service name within the annotation as that assumes
we'll never have need for any other type of generated class. If there's
a request for exposing service name via an annotation in the future, we
can make an RpcService annotation or the like.
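For instance, a processor could skip annotated elements; the annotation's name and package (io.grpc.stub.annotations.GrpcGenerated below) are assumptions here:
```
import io.grpc.stub.annotations.GrpcGenerated;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;

class SkipGeneratedProcessor extends AbstractProcessor {
  @Override
  public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
    for (Element element : roundEnv.getRootElements()) {
      if (element.getAnnotation(GrpcGenerated.class) != null) {
        continue; // gRPC-generated stub: nothing for this processor to do
      }
      // ... handle hand-written classes ...
    }
    return false; // don't claim any annotations
  }
}
```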
Fixes #8158
Resolves #7741
Some of the static methods in the generated code have the same method name but live in different classes, such as `ClientCalls.asyncClientStreamingCall` and `ServerCalls.asyncClientStreamingCall`. Using a static import is less readable than using the fully qualified method name in place.
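For example, the qualified call site is self-documenting (the wrapper method is illustrative):
```
import io.grpc.ClientCall;
import io.grpc.stub.StreamObserver;

class QualifiedCallExample {
  static <ReqT, RespT> StreamObserver<ReqT> start(
      ClientCall<ReqT, RespT> call, StreamObserver<RespT> responses) {
    // The reader immediately sees this is the client-side variant and not
    // io.grpc.stub.ServerCalls.asyncClientStreamingCall; with a static
    // import, both call sites would read identically.
    return io.grpc.stub.ClientCalls.asyncClientStreamingCall(call, responses);
  }
}
```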
It's a lot of code, and there are classes in Guava that are better. This
was noticed with a lint checker. This commit does change the
error-handling behavior, as previously the code wrongly cancelled the
Future instead of setting it to have an exception.
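A minimal sketch of the behavior change, assuming a Guava SettableFuture (names hypothetical):
```
import com.google.common.util.concurrent.SettableFuture;

class FutureErrorHandlingExample {
  static SettableFuture<String> run() {
    SettableFuture<String> result = SettableFuture.create();
    try {
      result.set(doWork());
    } catch (RuntimeException e) {
      // Previously the code effectively did result.cancel(false), hiding
      // the cause; setException() lets callers of get() see the failure.
      result.setException(e);
    }
    return result;
  }

  static String doWork() {
    return "ok";
  }
}
```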
NETTY_LOCAL seems to have a flow control issue. Disable it, since we
don't really look at it very often and we care about the
streamingCallsMessageThroughput benchmark.
This replaces FlowControlledMessagesPerSecondBenchmark, except it does
not avoid local flow control issues via request(5). With a request(5)
hacked in, this benchmark produces similar results (non-direct: 671k vs
previously 641k msg/s).
This is equivalent to UnaryCallResponseBandwidthBenchmark and
StreamingResponseBandwidthBenchmark, although without the interface selection
logic (which allows for traffic shaping).
TearDown is guaranteed to execute after Setup; there is no synchronization
necessary. Although the volatile doesn't hurt anything functionally, it is
misleading and confusing.
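A minimal JMH sketch of the point, with illustrative names; a plain field suffices because @TearDown is ordered after @Setup on the same state object:
```
import java.util.concurrent.atomic.AtomicLong;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
public class LifecycleExample {
  private AtomicLong counter; // a plain field; volatile would be misleading

  @Setup
  public void setUp() {
    counter = new AtomicLong();
  }

  @TearDown
  public void tearDown() {
    // JMH guarantees this runs after setUp() on the same state instance,
    // so counter is visible here without extra synchronization.
    counter.set(0);
  }
}
```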
It is basically the same as TransportBenchmark, but without protobuf,
with a smaller payload, and only on Netty. It does show latencies around
66 µs instead of TransportBenchmark's 70 µs on my laptop, but a quick
conversion of TransportBenchmark to ByteBufOutputMarshaller made it
66 µs as well.
* compiler: Use 'SERVICE_NAME' instead of duplicated '$Package$$service_name$'
* compiler: Align indentation
* Fix typo
* Add modified golden files and all re-generated code to meet Travis CI and Windows build requirements
See PR #5943
* Polishing
This commit swaps to using a Sync task to place generated code in the
src/generated folder instead of the gradle-protobuf-plugin's
generatedFilesBaseDir. This provides much nicer results on failed
builds, and you will no longer see all the generated files deleted.
At the same time, the Sync task makes it easy to copy only the
grpc-generated code. This was not previously done because we were lazy
and using generatedFilesBaseDir, which made it difficult to treat the
services differently from the messages.
For security, we should change http links into https links.
Co-Authored-By: Nguyen Van Trung <trungnvfet@outlook.com>
Signed-off-by: Nguyen Quang Huy <huynq0911@gmail.com>
For Bazel, we upgrade to protobuf 3.6.1.2 and javalite HEAD to fix
incompatibilities in newer Bazel releases.
compiler/Dockerfile is unused, so it was removed instead of being updated.
protoc no longer includes codegen for nano, so we remain on the older protoc
any time nano is used.
Protobuf now requires C++11 when compiling, so Windows was swapped to
VC 14.
This will allow enabling Error Prone on JDK 10+ (after
updating the net.ltgt.errorprone plugin), and is also a
prerequisite to that plugin update.
Also remove net.ltgt.apt plugin, as Gradle has native
support for annotationProcessor.