This can be used by annotation processors to avoid processing the
gRPC-generated code. The normal Generated annotation only has SOURCE
retention, so it isn't available to annotation processors.
I don't include the service name within the annotation, as that would assume
we'll never need any other type of generated class. If there's
a request for exposing the service name via an annotation in the future, we
can add an RpcService annotation or the like.
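A rough sketch of what such an annotation could look like; the name, package, and retention here are assumptions (CLASS and RUNTIME retention both remain visible to annotation processors reading compiled code, unlike SOURCE):

```java
package io.grpc.stub.annotations;  // assumed package; the real location may differ

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marks classes emitted by the gRPC code generator so annotation processors can
 * skip them. Unlike javax.annotation.Generated (SOURCE retention), this survives
 * into the class files that processors see.
 */
@Documented
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
public @interface GrpcGenerated {}
```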
Fixes #8158
Enables parsing the HttpConnectionManager filter for the server-side TCP Listener, sharing the codepath used for handling it on the client side. Major changes include:
- Remodeled LdsUpdate with HttpConnectionManager. Now LdsUpdate is a oneof of HttpConnectionManager (for the client side) or Listener (for the server side). Each of the Listener's FilterChains contains an HttpConnectionManager (required). (A minimal shape of the remodeled LdsUpdate is sketched after this list.)
- Refactored the code for validating and parsing the TCP Listener (for the server side) and moved it into ClientXdsClient. The common logic for validating/parsing HttpConnectionManager is shared with the client side.
- Included the name of the FilterChain in the parsed form. As specified by the API, each FilterChain has a unique name. If the name is not provided by the control plane, a UUID is used. FilterChain names make it easy to bookkeep a set of FilterChains (e.g., as map keys).
- Added methods isSupportedOnClients() and isSupportedOnServers() to the Filter interface. Parsing the top-level HttpFilter requires knowing whether the HttpFilter implementation is supported for the target usage (client side or server side). Note that parsing HttpFilter override configs does not need to know whether the config is used by an HttpFilter that is only supported on the client side or the server side.
- Added a new kind of Route: Route with a non-forwarding action. Updated XdsNameResolver to handle Routes with non-forwarding action: if such a Route is matched to an RPC, that RPC is failed. Note that it is possible for XdsNameResolver to receive xDS updates in which all Routes have non-forwarding actions; in that case, the service config will not reference any cluster. Such a case is handled by the cluster_manager LB policy's LB config parser: the parser returns an error to the Channel, and the Channel handles it as an erroneous service config.
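A minimal sketch of the remodeled LdsUpdate as a oneof, assuming the parsed HttpConnectionManager and Listener types from the description; the factory names are illustrative, not necessarily the actual ones in ClientXdsClient:

```java
import javax.annotation.Nullable;

// Exactly one of the two fields is set, mirroring a proto "oneof".
final class LdsUpdate {
  @Nullable final HttpConnectionManager httpConnectionManager;  // client side
  @Nullable final Listener listener;                            // server side (TCP Listener)

  private LdsUpdate(@Nullable HttpConnectionManager hcm, @Nullable Listener listener) {
    this.httpConnectionManager = hcm;
    this.listener = listener;
  }

  static LdsUpdate forApiListener(HttpConnectionManager hcm) {
    return new LdsUpdate(hcm, null);
  }

  static LdsUpdate forTcpListener(Listener listener) {
    return new LdsUpdate(null, listener);
  }
}
```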
Use a multiplier of 1 for endpoints whose endpoint-level load balancing weight is unspecified when computing the effective weights used for load balancing across localities. Therefore, if a locality has endpoints without endpoint-level load balancing weights, they are weighted equally within the locality.
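A minimal sketch of the weight computation under this rule (method and parameter names are illustrative):

```java
// Effective weight for an endpoint: locality weight times the endpoint weight,
// where an unspecified endpoint-level weight defaults to a multiplier of 1 so
// that such endpoints end up weighted equally within their locality.
static long effectiveWeight(
    long localityWeight, boolean endpointWeightSpecified, long endpointWeight) {
  long multiplier = endpointWeightSpecified ? endpointWeight : 1;
  return localityWeight * multiplier;
}
```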
Sets ring_hash LB config to its default values (min_ring_size = 1024 and max_ring_size = 8M) if not given by the control plane. This applies to both parsing RingHashLbConfig from xDS proto and parsing RingHashConfig from the JSON config (currently not used). If the values are given by the control plane, they are validated to ensure that min_ring_size does not exceed max_ring_size and that neither exceeds the 8M limit.
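A rough sketch of how the defaults and validation could look, assuming the 1024/8M values from the description (names are illustrative, not the actual ClientXdsClient code):

```java
// Defaults and validation for ring_hash sizes per the description.
private static final long DEFAULT_MIN_RING_SIZE = 1024L;
private static final long DEFAULT_MAX_RING_SIZE = 8L * 1024 * 1024;  // 8M
private static final long MAX_RING_SIZE_CAP = 8L * 1024 * 1024;      // absolute limit

static void checkRingSizes(Long minRingSize, Long maxRingSize) {
  long min = minRingSize != null ? minRingSize : DEFAULT_MIN_RING_SIZE;
  long max = maxRingSize != null ? maxRingSize : DEFAULT_MAX_RING_SIZE;
  if (min <= 0 || min > max || max > MAX_RING_SIZE_CAP) {
    throw new IllegalArgumentException(
        "invalid ring_hash config: min_ring_size=" + min + ", max_ring_size=" + max);
  }
}
```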
When aggregating the endpoint resolution errors of the list of clusters in ClusterResolverLoadBalancer, clusters should be processed in their original order as received in the LB config. The last cluster's error is used as the overall error status.
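A minimal sketch of the aggregation order, with hypothetical helper and parameter names:

```java
import io.grpc.Status;
import java.util.List;
import java.util.Map;

// Walk the clusters in the order they appear in the LB config; the last cluster
// with an error determines the overall status.
static Status aggregateEndpointErrors(
    List<String> clustersInConfigOrder, Map<String, Status> errorsByCluster) {
  Status overall = Status.OK;
  for (String cluster : clustersInConfigOrder) {
    Status error = errorsByCluster.get(cluster);
    if (error != null && !error.isOk()) {
      overall = error;  // later clusters overwrite earlier ones
    }
  }
  return overall;
}
```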
When an RPC is injected with a delay and then fails with DEADLINE_EXCEEDED (partially) due to the delay, it could confuse users if the error message does not mention the injected delay, because end users are normally not the same people who configured the fault injection policy in the control plane.
Fixes the source of the hostname used for DNS resolution in the cluster_resolver LB policy for LOGICAL_DNS clusters. The change includes:
- parse the single endpoint address from the embedded Cluster resource in CDS responses as the DNS hostname for the LOGICAL_DNS cluster and include it in the CdsUpdate notified to the CDS LB policy (see the sketch after this list).
- propagate the DNS hostname to the cluster_resolver LB policy via its LB config (the DiscoveryMechanism for the LOGICAL_DNS cluster).
- have the cluster_resolver LB policy take the DNS hostname from the DiscoveryMechanism for the LOGICAL_DNS cluster and use it as the name for DNS resolution.
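A rough sketch of deriving the hostname from the Cluster's load_assignment using the standard Envoy v3 proto accessors; error handling and validation of the single-endpoint requirement are elided, and this is not necessarily the exact ClientXdsClient code:

```java
import io.envoyproxy.envoy.config.cluster.v3.Cluster;
import io.envoyproxy.envoy.config.core.v3.SocketAddress;

// Derive the DNS hostname from the single endpoint in the Cluster's load_assignment.
static String dnsHostName(Cluster cluster) {
  SocketAddress socketAddress =
      cluster.getLoadAssignment()
          .getEndpoints(0)
          .getLbEndpoints(0)
          .getEndpoint()
          .getAddress()
          .getSocketAddress();
  return socketAddress.getAddress() + ":" + socketAddress.getPortValue();
}
```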
Do not propagate partial endpoint discovery results to the child LB policy of the cluster_resolver LB policy. This avoids premature RPC failures when connections to resolved endpoints fail while there are other still-unresolved endpoints. Also, endpoints should be attempted in the order of the clusters they belong to: endpoints from a lower-priority cluster should not be used before endpoints from a higher-priority cluster have been attempted. Most importantly, it should not fall back to DNS-resolved endpoints before all EDS-resolved endpoints have failed.
When the ring_hash LB policy enters TRANSIENT_FAILURE, it tries to connect one of the IDLE subchannels. Which subchannel gets connected is non-deterministic; it just chooses the first one from the subchannels map.
The existing test creates 4 subchannels and brings down 2 of them to let the ring_hash LB policy enter TRANSIENT_FAILURE. But which of the remaining two subchannels has a connection kicked off is non-deterministic, which makes it hard to verify the behavior. This change simplifies the test to create only 3 subchannels, so that only a single subchannel remains IDLE after bringing the other two down. We can then easily verify that the ring_hash LB policy requests a connection for that one subchannel.
Since static methods are pseudo-inherited by Builder implementations but
are easy to use accidentally, we re-define static methods in each
builder to make them behave more like the caller would expect. However,
not all the methods actually work; some just throw because the caller
was certainly not getting what they would expect.
Annotating with `@DoNotCall` can expose the problems at compile time
instead of runtime. While `@Deprecated` would also be an option, it is a
bit harder to figure out the ramifications and whether we want to go
that route.
This change was suggested by a lint tool for XdsServerBuilder, and it
seemed appropriate, so I applied it to the other similar cases I could
find.
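A sketch of the pattern on a hypothetical builder (FooServerBuilder and its methods are illustrative; the real builders such as XdsServerBuilder follow the same shape for their re-declared static methods):

```java
import com.google.errorprone.annotations.DoNotCall;
import io.grpc.ServerCredentials;

// Hypothetical builder showing the re-declared static method marked @DoNotCall.
public final class FooServerBuilder {
  /** Always throws; present only to shadow the pseudo-inherited static method. */
  @DoNotCall("Unsupported; use forPort(int, ServerCredentials) instead")
  public static FooServerBuilder forPort(int port) {
    throw new UnsupportedOperationException("Use forPort(int, ServerCredentials) instead");
  }

  public static FooServerBuilder forPort(int port, ServerCredentials credentials) {
    return new FooServerBuilder();  // a real builder would wire up the server here
  }
}
```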
In normal cases, we only have a few header matchers, but the number of headers is completely up to the application. Indexing headers eagerly parses all headers, even those whose keys no matcher matches. We should only parse header values for keys that a header matcher is looking for (i.e., only call Metadata.get() with keys that some matcher cares about).
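A minimal sketch of the lazy lookup, where HeaderMatcher stands in for the parsed matcher type and binary ("-bin") and multi-valued headers are ignored for brevity:

```java
import io.grpc.Metadata;
import java.util.List;

// Only parse the headers that some matcher targets, rather than indexing all of them.
static boolean matchesAllHeaders(List<HeaderMatcher> matchers, Metadata headers) {
  for (HeaderMatcher matcher : matchers) {
    Metadata.Key<String> key =
        Metadata.Key.of(matcher.headerName(), Metadata.ASCII_STRING_MARSHALLER);
    String value = headers.get(key);  // may be null if the header is absent
    if (!matcher.matches(value)) {
      return false;
    }
  }
  return true;
}
```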
Kicks off a connection for one of the IDLE subchannels (if any exist) when the ring_hash LB policy is reporting TRANSIENT_FAILURE to its upstream.
While the ring_hash policy is reporting TRANSIENT_FAILURE, it will not be getting any pick requests from the priority policy. However, because the ring_hash policy does not attempt to reconnect to subchannels unless it is getting pick requests, it needs special handling to ensure that it will eventually recover from the TRANSIENT_FAILURE state once the problem is resolved. Specifically, it will make sure that it is attempting to connect (after the applicable backoff period) to at least one subchannel at any given time.
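A minimal sketch of that rule, assuming the policy tracks each subchannel's last reported state (names are illustrative):

```java
import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer.Subchannel;
import java.util.Map;

// Whenever the policy reports TRANSIENT_FAILURE, proactively connect one IDLE
// subchannel (if any), since no picks will arrive to trigger reconnects.
static void maybeKickOffConnection(
    ConnectivityState overallState, Map<Subchannel, ConnectivityState> subchannelStates) {
  if (overallState != ConnectivityState.TRANSIENT_FAILURE) {
    return;
  }
  for (Map.Entry<Subchannel, ConnectivityState> entry : subchannelStates.entrySet()) {
    if (entry.getValue() == ConnectivityState.IDLE) {
      entry.getKey().requestConnection();  // connect backoff applies per subchannel
      return;  // one pending connection attempt at a time is sufficient
    }
  }
}
```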
Control-plane RPCs are independent of application RPCs and can have completely different lifetimes. So the Context for making application RPCs should not be propagated to control-plane RPCs. This change makes control-plane RPCs use the ROOT Context.
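A minimal sketch of the pattern using io.grpc.Context (the helper itself is hypothetical):

```java
import io.grpc.Context;

// Hypothetical helper: run the code that starts a control-plane (xDS) RPC under
// the ROOT Context so application deadlines/cancellation do not propagate to it.
final class ControlPlaneRpcHelper {
  static void runInRootContext(Runnable startRpc) {
    Context previous = Context.ROOT.attach();
    try {
      startRpc.run();
    } finally {
      Context.ROOT.detach(previous);
    }
  }
}
```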
LoadBalancers should not propagate balancing state updates after they have been shut down.
For LB policies that maintain a group of child LB policies, each with its own independent lifetime, balancing state updates from a child LB policy can easily outlive the parent, especially when the balancing state update is put at the back of a queue rather than propagated up inline.
For LBs that are simple pass-throughs in the middle of the LB tree structure, this isn't a big issue, as their lifecycle is the same as their child's. Transitively, they behave correctly as long as their downstream does the right thing.
This change is a sanity cleanup for LB policies that maintain multiple child LB policies, preserving the invariant that, after shutdown, further balancing state updates from their child policies will not be propagated.
Similar to 368c43aec4.
Clean up subchannels after the RingHashLoadBalancer itself is shut down to prevent further balancing state updates from being propagated to the upstream.
Note that this should not be considered a fix for any problem anybody is noticing. Upstreams of RingHashLoadBalancer should not rely on this; they should still have their own logic for maintaining the lifecycle of the downstream LB and ignore invalid upcalls when necessary.
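A rough sketch of the invariant, with illustrative field names (a shutdown flag, a subchannels map, and the LoadBalancer.Helper) rather than the actual RingHashLoadBalancer code:

```java
// Illustrative fields assumed on the LB: private boolean shutdown;
// private final Map<?, Subchannel> subchannels; private final Helper helper;

@Override
public void shutdown() {
  shutdown = true;
  for (Subchannel subchannel : subchannels.values()) {
    subchannel.shutdown();
  }
  subchannels.clear();
}

private void updateBalancingState(ConnectivityState state, SubchannelPicker picker) {
  if (shutdown) {
    return;  // never propagate balancing state upstream after shutdown
  }
  helper.updateBalancingState(state, picker);
}
```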
When creating the URI from the Channel authority for instantiating a DNS resolver in the cluster_resolver LB policy, a "dns" scheme needs to be manually attached and the Channel authority used as the URI path (the same as when creating a Channel with a target string). Otherwise, the Channel authority would be interpreted as the scheme, causing the name resolver to not be found.
The change also handles name resolver lookup more defensively. Although it should not happen, if there is a bug that prevents the DNS resolver from being loaded, the cluster_resolver LB policy propagates an INTERNAL error to the upstream.
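A minimal sketch of the URI construction (method and variable names are illustrative):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Build the DNS resolver target from the Channel authority: attach the "dns"
// scheme and put the authority in the path, as a Channel target "dns:///host" would.
static URI dnsResolverTarget(String channelAuthority) {
  try {
    return new URI("dns", /* authority= */ "", "/" + channelAuthority,
        /* query= */ null, /* fragment= */ null);
  } catch (URISyntaxException e) {
    // Should not happen for a valid authority; surfaced as Status.INTERNAL upstream.
    throw new IllegalArgumentException(e);
  }
}
```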