- Remove from the factory the validation and default-value logic that already exists in the providers.
- Use the PolicySelection in CdsUpdate instead of the JSON config.
This refactoring is done in preparation for a larger change where LB
configuration will be provided in the xDS Cluster proto message
load_balancing_policy field. This field will allow for the configuration
of custom LB policies with arbitrary configuration data.
- Instead of directly creating Java configuration objects, the client
delegates to a new builder to generate JSON configurations
- This factory is considered a "legacy" one as a separate factory will
be introduced to build configs based on the new load_balancing_policy
field
- The client will use a LoadBalancerProvider to parse the generated
config to ensure it is valid (sketched after this list).
- CdsLoadBalancer2 will parse the config again to produce the LB config
object passed down to child LBs.
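As a rough illustration of the delegation, here is a minimal sketch; the policy name and config contents are placeholders, not the actual generated config:

```java
import com.google.common.collect.ImmutableMap;
import io.grpc.LoadBalancerProvider;
import io.grpc.LoadBalancerRegistry;
import io.grpc.NameResolver.ConfigOrError;

final class LbConfigValidationSketch {
  // Generate a JSON-like config (a Map, as produced by the builder) and let
  // the policy's own provider validate it instead of duplicating checks.
  static Object validate(String policyName, ImmutableMap<String, ?> rawConfig) {
    LoadBalancerProvider provider =
        LoadBalancerRegistry.getDefaultRegistry().getProvider(policyName);
    ConfigOrError parsed = provider.parseLoadBalancingPolicyConfig(rawConfig);
    if (parsed.getError() != null) {
      throw parsed.getError().asRuntimeException();
    }
    return parsed.getConfig();
  }
}
```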
This would limit LRS stream creation to one per second, even if the
old stream was considered good as it received a response. This is the
same change as made to ADS in 957079194a.
b/224833499
1. Unnecessary fully qualified names
Currently in XdsCredentialsRegistry, the child classes are referred to by
their fully qualified names, i.e.
'io.grpc.xds.internal.GoogleDefaultXdsCredentialsProvider', instead of
importing GoogleDefaultXdsCredentialsProvider and just using
GoogleDefaultXdsCredentialsProvider.class.
2. Use immutable interfaces instead of the generic collection interfaces,
i.e. ImmutableMap instead of just Map (see the sketch below).
These improvements are related to #8924.
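A small sketch of what the cleaned-up registry code might look like; the provider class is stubbed locally so the snippet is self-contained:

```java
import com.google.common.collect.ImmutableMap;

final class CredentialsRegistryStyleSketch {
  // Stand-in for the imported io.grpc.xds.internal provider class.
  static final class GoogleDefaultXdsCredentialsProvider {}

  // With the class imported, use the class literal instead of a fully
  // qualified string, and expose ImmutableMap rather than the Map interface.
  static final ImmutableMap<String, Class<?>> DEFAULT_PROVIDERS =
      ImmutableMap.of("google_default", GoogleDefaultXdsCredentialsProvider.class);
}
```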
Just using 100ms is mostly sufficient, but could potentially still flake
on the start()s that should return successfully. Waiting for the
XdsServingStatusListener to be called greatly reduces the amount of
processing that must complete before start() reacts, and thus should
greatly reduce flakiness.
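A sketch of the test pattern, assuming grpc-java's XdsServingStatusListener callback interface; the latch wiring is illustrative:

```java
import io.grpc.xds.XdsServerBuilder.XdsServingStatusListener;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

final class ServingStatusTestSketch {
  final CountDownLatch onServing = new CountDownLatch(1);

  final XdsServingStatusListener listener = new XdsServingStatusListener() {
    @Override public void onServing() {
      onServing.countDown(); // start() has fully taken effect
    }
    @Override public void onNotServing(Throwable throwable) {}
  };

  // After starting the server built with this listener, wait on the callback
  // instead of sleeping a fixed 100ms.
  boolean awaitServing() throws InterruptedException {
    return onServing.await(10, TimeUnit.SECONDS);
  }
}
```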
This would limit ADS stream creation to one per second, even if the
old stream was considered good as it received a response. This shouldn't
really ever trigger, and if it does 1 QPS is probably still too high.
But 1 QPS is _substantially_ better than a closed loop and there's very
few additional signals we could use to avoid resetting the backoff.
b/224833499
Currently the credentials used for xDS communications are hardcoded in the BootstrapperImpl, and the bootstrap config chooses one of the possible hardcoded credentials.
This commit adds support for a credential plugin which allows users to register custom credentials through XdsCredentialsProviders. gRPC will automatically discover the implementations via Java's SPI mechanism.
Bootstrapper will use XdsCredentialsRegistry to retrieve the list of supported credentials. The current hardcoded list of credentials (google_default, insecure, and tls) is registered by default to keep the behavior as is.
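A rough sketch of what a plugin might look like; the exact XdsCredentialsProvider surface (method names and config type) is an assumption for illustration:

```java
import io.grpc.ChannelCredentials;
import io.grpc.InsecureChannelCredentials;
import java.util.Map;

// Hypothetical provider discovered via SPI by listing its class name in
// META-INF/services/io.grpc.xds.XdsCredentialsProvider.
public final class MyXdsCredentialsProvider extends XdsCredentialsProvider {
  @Override
  protected String getName() {
    return "my_creds"; // the name the bootstrap's channel_creds refers to
  }

  @Override
  protected ChannelCredentials newChannelCredentials(Map<String, ?> jsonConfig) {
    // Build credentials from the bootstrap-supplied config; insecure here
    // only to keep the sketch short.
    return InsecureChannelCredentials.create();
  }
}
```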
We want to ignore the route in these situations, which is achieved by returning null. The current behavior of returning an error triggers a NACK of the update.
`setCall()` returns the drainPendingCalls runnable only when there are calls to drain; otherwise it returns null. Preserved the behaviour of `start()` and `cancel()`, as they are protected by `delayOrExecute()`.
GoogleCloudToProdNameResolver has a hard dependency on alts whereas xds
only has a weak dependency on alts that can be solved by a
ChannelCredentialsRegistry. So split out the code to a separate
artifact.
2a45524 introduced '.' to the end of some status descriptions. We
typically don't end status descriptions in periods, but that's minor. In
this case though if the causal status ends in period then the new status
will end in two periods, which could easily be confusing to users.
Added a Java control plane for end-to-end xds tests.
The FakeControlPlaneService manages full sets of xds resources. Use the `setXdsConfig()` method to update the latest xds configuration; the method can be called dynamically, at any time and multiple times. The fake control plane allows multiple clients to connect, and delivers xds responses (the data resources, or ACK/NACK) for the xds client requests.
The `FakeControlPlaneXdsIntegrationTest` only has one pingPong test case now. Other test cases can be added in a similar way.
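A minimal sketch of driving the fake control plane from a test; the `setXdsConfig()` signature shown here is an assumption for illustration:

```java
import com.google.common.collect.ImmutableMap;
import io.envoyproxy.envoy.config.listener.v3.Listener;

final class FakeControlPlaneUsageSketch {
  static void updateListeners(FakeControlPlaneService controlPlane, Listener listener) {
    // Can be called at any time, repeatedly; connected clients receive the
    // updated resources (or ACK/NACK) via their open ADS streams.
    controlPlane.setXdsConfig(
        "type.googleapis.com/envoy.config.listener.v3.Listener",
        ImmutableMap.of("listener0", listener));
  }
}
```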
Workaround for #8886, as we wait on a real fix. The regular load
balancing disconnections are confusing users and will train users to
start ignoring gRPC warnings. At present, it is better to have no log
than to log excessively.
Adopting the change in the [spec](367ba33a0a/A47-xds-federation.md (xds-api-changes)):
>Currently, for the ConfigSource fields in the LDS resource that points to the RDS resource and in the CDS resource that points to the EDS resource, gRPC requires the ConfigSource to have its ads field set. As part of supporting federation, gRPC will now also allow the ConfigSource to have its self field set. Both fields will have the same meaning.
This was noticed because Mockito can't mock Random in Java 17, so it was
replaced with actual Random. But when doing that change it exposed that
negative numbers would cause the id to have a double '-'.
Previous versions of error prone were incompatible with Java 17 javac.
In grpc-api, errorprone is now an api dependency because it is on a public
API. I was happy to see that Gradle failed the build without the dep
change, although the error message wasn't super clear as to the cause.
It seems that previously -PerrorProne=false did nothing. I'm guessing
this is due to a behavior change of Gradle at some point. Swapping to
using the project does build without errorProne, although the build
fails with Javac complaining certain classes are unavailable. It's
unclear why. It doesn't seem to be caused by the error-prone plugin.
I've left it failing as a pre-existing issue.
ClientCalls/ServerCalls had Deprecated removed from some methods because
they were only deprecated in the internal class, not the API. And with
Deprecated, InlineMeSuggester complained.
I'm finding InlineMeSuggester to be overzealous, complaining about
package-private methods. In time we may figure out how to use it better,
or we may request changes to the checker in error-prone.
* xds: fix a concurrency issue in CSDS ClientStatus responses
Fixes an issue with ClientXdsClient.getSubscribedResourcesMetadata()
being executed outside of the shared synchronization context, leading to:
- each individual config dump containing outdated data when
an xDS resource is updated during CsdsService preparing the response
- config dumps for different services being out-of-sync with each
other when any of the related xDS resources is updated during
CsdsService preparing the response
The fix replaces getSubscribedResourcesMetadata(ResourceType type)
with atomic getSubscribedResourcesMetadataSnapshot() returning
a snapshot of all resources for each type as they are
at the moment of a CSDS request.
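Illustrative sketch of the snapshot approach (field and type names here are simplified stand-ins): the copy is taken inside the shared SynchronizationContext, so the dumps for all resource types are mutually consistent.

```java
import com.google.common.collect.ImmutableMap;
import com.google.common.util.concurrent.SettableFuture;
import io.grpc.SynchronizationContext;
import java.util.Map;

final class MetadataSnapshotSketch {
  static ImmutableMap<String, Object> snapshot(
      SynchronizationContext syncContext, Map<String, Object> subscribedResources)
      throws Exception {
    SettableFuture<ImmutableMap<String, Object>> future = SettableFuture.create();
    // All mutations of subscribedResources also happen on syncContext, so
    // this copy observes a consistent point-in-time state.
    syncContext.execute(() -> future.set(ImmutableMap.copyOf(subscribedResources)));
    return future.get();
  }
}
```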
Ring hash can only be used from within xds currently, because that's
the only way to get a hash assigned to RPCs which is required for it
to function. So it should be using the _experimental suffix like the
other only-used-from-xds policies.
This is to keep consistent the names of the top-level process* functions
that are called from the handle*Response functions and return *Update resources:
- `handleLdsResponse()` -> `LdsUpdate processClientSideListener()`
`LdsUpdate processServerSideListener()`
- `handleCdsResponse()` -> `CdsUpdate processCluster()`
- `handleRdsResponse()` -> `RdsUpdate processRouteConfiguration()`
- `handleEdsResponse()` -> `EdsUpdate processClusterLoadAssignment()`
For some reason, processCluster() was renamed to parseCluster() in
fa4b980e0.
Implement applying `server_listener_resource_name_template` and `client_listener_resource_name_template` with the xdstp scheme, extracting authorities from the xdstp resource URI and looking up the authorities map in the bootstrap.
- Partially revert the change to RlsProtoData.java in #8612 by removing the `public` accessor
- Have grpc-xds no longer strongly depend on grpc-rls. Applications will need grpc-rls as a runtime dependency if they need the route lookup feature in xds.
- Parse the RouteLookupServiceClusterSpecifierPlugin config to the JSON/Map representation of `io.grpc.lookup.v1.RouteLookupClusterSpecifier` instead of `io.grpc.rls.RlsProtoData.RouteLookupConfig` (one way to do this is sketched below)
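For the last point, something like protobuf's JsonFormat can produce the JSON representation; this is a sketch of one approach, not necessarily the actual implementation:

```java
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.util.JsonFormat;
import io.grpc.lookup.v1.RouteLookupClusterSpecifier;

final class SpecifierToJsonSketch {
  // Convert the proto config to JSON so grpc-xds can hand it off without a
  // compile-time dependency on grpc-rls classes.
  static String toJson(RouteLookupClusterSpecifier specifier)
      throws InvalidProtocolBufferException {
    return JsonFormat.printer().print(specifier);
  }
}
```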
Fix bugs:
1. For an invalid resource at the XdsClient, the watcher should have been delivered an error instead of resource-not-found.
2. If the resource is properly determined to not exist, it shouldn't cause start() to fail. From A36 xDS for Servers:
"XdsServer's start must not fail due to transient xDS issues, like missing xDS configuration from the xDS server."
Generating a uuid in filterChain breaks the de-duplication detection, which causes XdsServer to cycle connections, so remove it.
An empty name is now allowed. The name is currently only used for debug purposes.
Add RlsClusterSpecifierPlugin as per the [design doc](http://go/grpc-rls-in-xds#heading=h.dmyrvi6ohebx)
The structure of `ClusterSpecifierPlugin` is very similar to `io.grpc.xds.Filter`.
The following changes to the existing code are made:
- move `ConfigOrError` class out of `Filter` class to be shared with `ClusterSpecifierPlugin`
- make `io.grpc.rls.RlsProtoData` public to be accessible by `io.grpc.xds`
- treat an empty defaultTarget in `io.grpc.rls.RlsProtoData.RouteLookupConfig` as null, to support both json and proto configs without the defaultTarget field specified.
As many new fields will be added to `BootstrapInfo` for xds federation support, refactor `Bootstrapper.java` to use `AutoValue`. All the other files are just mechanical changes due to the refactoring.
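A minimal sketch of the AutoValue pattern adopted; the field names here are illustrative, not the full BootstrapInfo surface:

```java
import com.google.auto.value.AutoValue;

@AutoValue
abstract class BootstrapInfoExample {
  // Each field becomes an abstract accessor; AutoValue generates
  // equals/hashCode/toString and the concrete implementation.
  abstract String serverUri();

  static Builder builder() {
    return new AutoValue_BootstrapInfoExample.Builder();
  }

  @AutoValue.Builder
  abstract static class Builder {
    abstract Builder serverUri(String serverUri);
    abstract BootstrapInfoExample build();
  }
}
```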
Ideally we should plumb this through Filter, but FilterRegistry will
need to be plumbed to XdsClient and it started becoming non-trivial
compared to the "just add two lines." Expediency is helpful as the XDS
logs are pretty hard to read without the pretty-printing.
The sslContextProviderSupplier is used by the xds creds themselves when
the control plane has security configured. But the fallback credentials
don't use such a supplier and may not even be using TLS.
Language tweak following #8554.
The internal build fails with "reference to assertThat is ambiguous". It
isn't clear why the internal build fails while the external one is okay,
but it is clear that the wildcard T return of readOutbound() is probably
confusing things as javac is considering assertThat(BigDecimal) as a
possible match.
The T return type is a hidden, convenience cast. We force the type
passed to assertThat() to be Object to avoid any ambiguity.
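The workaround in a nutshell, using Truth and Netty's EmbeddedChannel as in the affected tests:

```java
import static com.google.common.truth.Truth.assertThat;

import io.netty.channel.embedded.EmbeddedChannel;

final class ReadOutboundAssertSketch {
  static void verifyOutbound(EmbeddedChannel channel, Object expected) {
    // Pin the wildcard T to Object so javac resolves assertThat(Object)
    // instead of weighing overloads like assertThat(BigDecimal).
    Object actual = channel.readOutbound();
    assertThat(actual).isEqualTo(expected);
  }
}
```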
An Envoy RetryPolicy with an empty retryOn should not be ignored as if there were no retry config when selecting the Route config. Therefore, if an xDS update for a route contains a RetryPolicy with no RetryOn value that we support, but the virtual host config has one, the xds client should still choose the Envoy RetryPolicy from the route (even with no RetryOn) rather than the one from the virtual host, try to convert it into a gRPC RetryPolicy, and end up with no retry.
Added routing config discovery for the HCM in LdsUpdate in XdsServerWrapper. This can be LDS inline or through RDS. Handles in-flight SslContextProviderSupplier resources. The discovered routing config is published to FilterChainSelectorRef.
Added a routing config data field in FilterChainSelector. Filter chain matching now results in setting a new attribute key for the server routing config. The filter chain matching logic is mostly unchanged.
Installed ConfigApplyingInterceptor in XdsServerWrapper's delegateBuilder. It fetches the server routing config attribute set above, performs route matching, and creates server interceptors for the http filters as a result (see the sketch below).
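A sketch of the interceptor shape; the attribute key and routing-config type are simplified placeholders:

```java
import io.grpc.Attributes;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;

final class ConfigApplyingInterceptorSketch implements ServerInterceptor {
  // Placeholder for the key set during filter chain matching.
  static final Attributes.Key<Object> SERVER_ROUTING_CONFIG =
      Attributes.Key.create("serverRoutingConfig");

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers,
      ServerCallHandler<ReqT, RespT> next) {
    Object routingConfig = call.getAttributes().get(SERVER_ROUTING_CONFIG);
    // Match the route for this call, then wrap `next` with the interceptors
    // created from the matched route's http filter configs.
    return next.startCall(call, headers);
  }
}
```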
The FINE logging was just repeating the exceptions. But really, it is
trivial to avoid exceptions in this case and that is beneficial because
it will avoid an expensive error handling path in something that is
trivial to trigger remotely.
The WARNING may be a bit much if connections don't match the filter
chains often in production, but it seems most likely a misconfiguration
and not something that would be seen often.
This is split from #8318; the refactoring changes include:
1. FilterChainMatchingHandler
1.1. Previously, filter chain matching was built into XdsServerCredential for the xds server (but it does not have to be XdsServerCredential): the protocol negotiator associated with the XdsServerCredential did the filter chain match computation. Now filter chain matching goes through a FilterChainMatchingHandler and always runs. As a result, it sets the sslContextProviderSupplier attribute from the xds config in the protocol negotiation event.
1.2. The previous protocol negotiator associated with the XdsServerCredential is modified to just look up the config in the attribute set above and decide whether to use the xds config credential or the fallback credential.
1.3. Previously a credential was required in XdsBuilder. Now the credential is optional, to allow the routing config to be fetched. The xds TCP listener update will always be used to run filter chain matching.
Later, we will add routing config to filter chain matching and apply http filter configs by installing ConfigApplyingInterceptor.
2. Removed xdsClientWrapperForServerXds, which was unnecessarily complicated.
3. Changed the event attribute key. Previously, filter chain matching happened in xdsClientWrapperForServerXds, and the xds client wrapper was passed to the negotiation handler via attributes to allow the protocol negotiator to trigger the filter chain matching computation.
Now the attribute becomes an atomic config selector reference that xdsServerWrapper injects by watching xds resource updates via the xds client.
4. Previously there were multiple server-state enums in xdsServerWrapper; these are removed because they were unnecessarily complicated. An isServing status remains to avoid re-starting the delegate upon listener update.
5. Previously, xdsServerWrapper ignored any xds updates once initially started; now we allow dynamic updates even while the server is up. This is done by updating the config selector atomic reference upon listener update.
6. Previously, xdsServerWrapper synchronized on the server object; this is changed to syncContext to be more manageable.
Rebased PR #8343 into the first commit of this PR, then (in the 2nd commit) reverted the part for metric recording of retry attempts. The PR as a whole is a mechanical refactoring. No behavior change (except that some of the old code path when the tracer is created is moved into the new method `streamCreated()`).
The API change is documented in go/grpc-stats-api-change-for-retry-java
* xds: sync envoy proto to commit 62ca8bd2b5960ed1c6ce2be97d3120cee719ecab
* Suppress warnings for newly deprecated xDS proto fields
Sync to the latest update to pick up https://github.com/envoyproxy/envoy/pull/16942 for forward compatibility with upcoming xDS Rate Limiting features.
Internal Envoy import CL for `62ca8bd2b5960ed1c6ce2be97d3120cee719ecab`: cl/381356375
Suppressed warnings for newly deprecated xDS proto fields:
1) `PerXdsConfig xds_config` to be replaced with `GenericXdsConfig generic_xds_configs`, but this work is yet to be planned
2) `HttpConnectionManager`'s `uint32 setXffNumTrustedHops` to be replaced with `TypedExtensionConfig OriginalIpDetectionExtensions`: https://github.com/envoyproxy/envoy/pull/14855
Extend XdsTestServer features as specified in go/xds-retry-interop-test
See also the xds retry interop test case implementations grpc/grpc#26746 and grpc/grpc#26791
Previously, rpc-behavior values in the request headers were handled in two different places: one in the interceptor and the other in the service implementation via Context. I moved all rpc-behavior handling into the interceptor, so Context is no longer needed.
This can be used by annotation processors to avoid processing the
gRPC-generated code. The normal Generated annotation only has SOURCE
retention, so isn't available to annotation processors.
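A sketch of such an annotation; the actual name and retention chosen in grpc-java may differ:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// CLASS (or RUNTIME) retention keeps the annotation in the class file, so
// annotation processors can see it and skip gRPC-generated types.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface GrpcGenerated {}
```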
I don't include the service name within the annotation as that assumes
we'll never have need for any other type of generated class. If there's
a request for exposing service name via an annotation in the future, we
can make an RpcService annotation or the like.
Fixes #8158
Enables parsing the HttpConnectionManager filter for the server-side TCP listener, sharing the codepath that handles it on the client side. Major changes include:
- Remodeled LdsUpdate with HttpConnectionManager. Now LdsUpdate is a oneof of HttpConnectionManager (for the client side) or Listener (for the server side). Each of the Listener's FilterChains contains an HttpConnectionManager (required).
Refactored the code for validating and parsing the TCP Listener (for the server side) and put it into ClientXdsClient. The common part of validating/parsing HttpConnectionManager is reused/shared with the client side.
- Included the name of the FilterChain in the parsed form. As specified by the API, each FilterChain has a unique name. If the name is not provided by the control plane, a UUID is used. FilterChain names can be used for bookkeeping a set of FilterChains easily (e.g., used as map keys).
- Added methods isSupportedOnClients() and isSupportedOnServers() to the Filter interface. Parsing the top-level HttpFilter requires knowing whether the HttpFilter implementation is supported for the target usage (client-side or server-side). Note, parsing override HttpFilter configs does not need to know whether the config is used for an HttpFilter that is only supported on the client side or the server side.
- Added a new kind of Route: Route with a non-forwarding action. Updated XdsNameResolver to handle Routes with non-forwarding action: if such a Route is matched to an RPC, that RPC is failed. Note, it is possible that XdsNameResolver receives xDS updates in which all Routes have non-forwarding action; that is, the service config will not reference any cluster. Such a case can be handled by the cluster_manager LB policy's LB config parser: the parser returns the error to the Channel and the Channel will handle it as an error service config.
failOnVersionConflict has never been good for us. It is equivalent to
Maven dependencyConvergence which we discourage our users to use because
it is too temperamental and _creates_ version skew issues over time.
However, we had no real alternative for determining if our deps would be
misinterpreted by Maven.
failOnVersionConflict has been a constant drain and makes it really hard
to do seemingly-trivial upgrades. As evidenced by protobuf/build.gradle
in this change, it also caused _us_ to introduce a version downgrade.
This introduces our own custom requireUpperBoundDeps implementation so
that we can get back to simple dependency upgrades _and_ increase our
confidence in a consistent dependency tree.
Use a multiplier of 1 for endpoints whose endpoint-level load balancing weight is unspecified when computing weights for load balancing across localities. Therefore, if a locality has endpoints without endpoint-level load balancing weights, they are weighted equally within the locality.
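The rule in code form, as a hedged sketch with illustrative names:

```java
final class EndpointWeightSketch {
  // An unset endpoint weight (modeled as null) counts as a multiplier of 1,
  // so unweighted endpoints split their locality's weight equally.
  static long effectiveWeight(long localityWeight, Integer endpointWeight) {
    int multiplier = (endpointWeight == null) ? 1 : endpointWeight;
    return localityWeight * multiplier;
  }
}
```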
Sets the ring_hash LB config to its default values (min_ring_size = 1024 and max_ring_size = 8M) if not given by the control plane. This applies to both parsing RingHashLbConfig from the xDS proto and parsing RingHashConfig from the JSON config (currently not used). If the values are given by the control plane, they are validated such that min_ring_size does not exceed max_ring_size and neither exceeds the 8M limit.
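A sketch of the defaulting and validation, with unset values modeled as null and names chosen for illustration:

```java
final class RingHashConfigSketch {
  static final long DEFAULT_MIN_RING_SIZE = 1024L;
  static final long DEFAULT_MAX_RING_SIZE = 8L * 1024 * 1024; // 8M, also the cap

  // Returns {minRingSize, maxRingSize} after applying defaults and checks.
  static long[] resolve(Long min, Long max) {
    long minRingSize = (min == null) ? DEFAULT_MIN_RING_SIZE : min;
    long maxRingSize = (max == null) ? DEFAULT_MAX_RING_SIZE : max;
    if (minRingSize > maxRingSize || maxRingSize > DEFAULT_MAX_RING_SIZE) {
      throw new IllegalArgumentException("invalid ring size bounds");
    }
    return new long[] {minRingSize, maxRingSize};
  }
}
```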
When aggregating the endpoint resolution errors of the list of clusters in ClusterResolverLoadBalancer, clusters should be processed in their original order as received in the LB config. The last cluster's error is used as the overall error status.
When an RPC is injected with a delay and then fails with DEADLINE_EXCEEDED (partially) due to the delay, it could confuse users if the error message does not mention the delay injection, because end users are normally not the same people who configured the fault injection policy in the control plane.
Fixes the source of hostname used for DNS resolution in the cluster_resolver LB policy for LOGICAL_DNS clusters. The change includes:
- parse the single endpoint address from the embedded Cluster resource in CDS responses as the DNS hostname for the LOGICAL_DNS cluster and include it in the CdsUpdate notified to the CDS LB policy.
- propagate the DNS hostname to the cluster_resolver LB policy via its LB config (DiscoveryMechanism for the LOGICAL_DNS cluster).
- the cluster_resolver LB policy takes the DNS hostname from the DiscoveryMechanism for the LOGICAL_DNS cluster and uses it as the name for DNS resolution.
Do not propagate partial endpoint discovery results to the child LB policy of the cluster_resolver LB policy. This avoids premature RPC failures when connections to resolved endpoints fail while there are other unresolved endpoints. Also, endpoints should be attempted in the order of the clusters they belong to: endpoints from a lower-priority cluster should not be used before endpoints from a higher-priority cluster are attempted. Most importantly, it should not fall back to DNS-resolved endpoints before all EDS-resolved endpoints have failed.
When the ring_hash LB policy enters TRANSIENT_FAILURE, it tries to connect one of the IDLE subchannels. Which subchannel gets connected is non-deterministic; it just chooses the first one from the subchannels map.
The existing test creates 4 subchannels and brings down 2 of them to let the ring_hash LB policy enter TRANSIENT_FAILURE. But which of the remaining two subchannels gets its connection kicked off is nondeterministic, which makes verifying the behavior troublesome. This change simplifies the test to create only 3 subchannels, so that only a single subchannel remains IDLE after bringing the other two down. We can then easily verify that the ring_hash LB policy requests a connection for that one subchannel.
Since static methods are pseudo-inherited by Builder implementations but
are trivially accidentally used, we re-define static methods in each
builder to make them behave more like the caller would expect. However,
not all the methods actually work; some just throw because the caller
was certainly not getting what they would expect.
Annotating with `@DoNotCall` can expose the problems at compile time
instead of runtime. While `@Deprecated` would also be an option, it is a
bit harder to figure out the ramifications and whether we want to go
that route.
This change was suggested by a lint tool for XdsServerBuilder and it
seems appropriate so I applied it to the other similar cases I could
find.
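A hypothetical builder illustrating the pattern: the inherited-looking static factory is re-defined so it shadows the superclass version, but since it cannot work for this builder it throws, and `@DoNotCall` surfaces the mistake at compile time for error-prone users.

```java
import com.google.errorprone.annotations.DoNotCall;

public final class ExampleServerBuilder {
  public static ExampleServerBuilder forPort(int port) {
    return new ExampleServerBuilder();
  }

  // Pseudo-inherited static factory that does not apply to this builder.
  @DoNotCall("Unsupported; use forPort() instead")
  public static ExampleServerBuilder forTarget(String target) {
    throw new UnsupportedOperationException("forTarget() is not supported");
  }

  private ExampleServerBuilder() {}
}
```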
In normal cases we only have a few header matchers, but the number of headers is completely up to the application. Indexing headers eagerly parses all headers, even those whose key no matcher matches. We should only parse header values for keys that match a header matcher (that is, only call Metadata.get() with keys that some matcher is looking for).
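A sketch of the lazy lookup using the real io.grpc.Metadata API, with the matcher plumbing simplified:

```java
import io.grpc.Metadata;
import javax.annotation.Nullable;

final class LazyHeaderLookupSketch {
  // Only the header named by a matcher is parsed; all other headers in the
  // Metadata are left untouched instead of being eagerly indexed.
  @Nullable
  static String headerValueForMatcher(Metadata headers, String matcherKeyName) {
    Metadata.Key<String> key =
        Metadata.Key.of(matcherKeyName, Metadata.ASCII_STRING_MARSHALLER);
    return headers.get(key);
  }
}
```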
Kicks off a connection for one of the IDLE subchannels (if any exist) when the ring_hash LB policy is reporting TRANSIENT_FAILURE to its upstream.
While the ring_hash policy is reporting TRANSIENT_FAILURE, it will not be getting any pick requests from the priority policy. However, because the ring_hash policy does not attempt to reconnect to subchannels unless it is getting pick requests, it will need special handling to ensure that it will eventually recover from TRANSIENT_FAILURE state once the problem is resolved. Specifically, it will make sure that it is attempting to connect (after applicable backoff period) to at least one subchannel at any given time.
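A sketch of that rule against the io.grpc LB API, with the state tracking simplified to a map:

```java
import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer.Subchannel;
import java.util.Map;

final class RingHashRecoverySketch {
  // While reporting TRANSIENT_FAILURE, ensure at least one IDLE subchannel
  // is attempting to connect so the policy can recover without picks.
  static void maybeKickOffConnection(Map<Subchannel, ConnectivityState> states) {
    for (Map.Entry<Subchannel, ConnectivityState> e : states.entrySet()) {
      if (e.getValue() == ConnectivityState.IDLE) {
        e.getKey().requestConnection();
        return; // one pending attempt at a time is enough
      }
    }
  }
}
```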