Circuit breakers should be applied per cluster at the global (process-wide) scope. However, the LB hierarchy may cause the LB policy that applies circuit breaking (currently EDS, cluster_impl in the future) to be instantiated multiple times. Also, in multi-channel cases the circuit breaking threshold should still be shared across all channels in the process.
This change creates a global map for accessing the circuit breaking atomics that count the number of outstanding requests on a per-cluster basis. Atomics in the global map are held by WeakReferences, so LB policies/Pickers/StreamTracers do not need to manage the counters' lifecycle or refcounting.
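As a rough illustration of the idea only (the class and method names below, such as `CallCounterRegistry`, are made up for this sketch and are not the actual grpc-java classes), a process-wide registry of weakly-held per-cluster counters might look like:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

/** Process-wide registry of per-cluster in-flight request counters (illustrative sketch). */
final class CallCounterRegistry {
  private static final CallCounterRegistry INSTANCE = new CallCounterRegistry();

  // Counters are held weakly: once no LB policy/picker/stream tracer keeps a strong
  // reference, the entry becomes eligible for GC and is pruned lazily on next access.
  private final Map<String, WeakReference<AtomicLong>> counters = new HashMap<>();

  static CallCounterRegistry getInstance() {
    return INSTANCE;
  }

  /** Returns the shared counter for the given cluster, creating one if necessary. */
  synchronized AtomicLong getOrCreate(String cluster) {
    WeakReference<AtomicLong> ref = counters.get(cluster);
    AtomicLong counter = ref == null ? null : ref.get();
    if (counter == null) {
      counter = new AtomicLong();
      counters.put(cluster, new WeakReference<>(counter));
    }
    return counter;
  }
}
```

Callers keep a strong reference to the returned AtomicLong for as long as they need it; the weak reference in the map only serves lookups, so no explicit refcounting is required.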
Previously the EDS LB policy did not propagate an updated picker that uses the new circuit breaker threshold and drop policies when those values changed. As a result, new circuit breaker/drop policies were not applied to new RPCs unless the subchannel state changed. This change fixes that problem: whenever the EDS LB policy receives a config update, it immediately pushes a picker with the corresponding circuit breakers and drop policies to the channel, so the channel always picks up the latest configuration.
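For illustration only, a minimal LB policy that re-publishes its picker on every config update via the standard Helper.updateBalancingState call could look like the sketch below. `ConfigRefreshingLoadBalancer` and `buildPickerFromConfig` are hypothetical names, and the actual drop/circuit-breaker wrapping is elided.

```java
import io.grpc.ConnectivityState;
import io.grpc.LoadBalancer;
import io.grpc.Status;

/** Illustrative LB policy that re-publishes its picker on every config update. */
final class ConfigRefreshingLoadBalancer extends LoadBalancer {
  private final Helper helper;
  private ConnectivityState currentState = ConnectivityState.CONNECTING;

  ConfigRefreshingLoadBalancer(Helper helper) {
    this.helper = helper;
  }

  @Override
  public void handleResolvedAddresses(ResolvedAddresses resolvedAddresses) {
    // Rebuild the picker from the newest drop/circuit-breaker settings and hand it to
    // the channel immediately, rather than waiting for a subchannel state change.
    Object config = resolvedAddresses.getLoadBalancingPolicyConfig();
    helper.updateBalancingState(currentState, buildPickerFromConfig(config));
  }

  private SubchannelPicker buildPickerFromConfig(final Object config) {
    // Placeholder picker; a real EDS/cluster_impl policy would wrap its child picker
    // with the drop policies and circuit-breaker threshold parsed from 'config'.
    return new SubchannelPicker() {
      @Override
      public PickResult pickSubchannel(PickSubchannelArgs args) {
        return PickResult.withNoResult();
      }
    };
  }

  @Override
  public void handleNameResolutionError(final Status error) {
    currentState = ConnectivityState.TRANSIENT_FAILURE;
    helper.updateBalancingState(currentState, new SubchannelPicker() {
      @Override
      public PickResult pickSubchannel(PickSubchannelArgs args) {
        return PickResult.withError(error);
      }
    });
  }

  @Override
  public void shutdown() {}
}
```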
This change refactors the client-side XdsClient's unit tests. The main testing logic (test cases) lives in an abstract class, while the extended classes provide the xDS version-specific services and messages. With this approach we do not have to maintain two copies of the test logic to cover both the v2 and v3 xDS protocols; whenever XdsClient's own logic changes, only the corresponding test logic in the abstract class needs to be modified. This approach also remains sustainable for future xDS protocol version upgrades without re-implementing the test logic.
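A rough sketch of the intended test layout, with made-up class and method names (not the actual test files), under the assumption that subclasses only supply version-specific message builders:

```java
import org.junit.Test;

/** Shared, version-agnostic test cases live here. */
public abstract class XdsClientTestBase {

  /** Builds a version-specific (v2 or v3) discovery response for the given resource. */
  protected abstract Object buildDiscoveryResponse(String versionInfo, String resourceName);

  @Test
  public void cdsResourceUpdated_notifiesWatcher() {
    Object response = buildDiscoveryResponse("1", "cluster.googleapis.com");
    // ... drive the XdsClient under test with 'response' and assert watcher callbacks ...
  }
}

// v2 flavor of the shared tests:
// public class XdsClientV2Test extends XdsClientTestBase { ... }
// v3 flavor of the shared tests:
// public class XdsClientV3Test extends XdsClientTestBase { ... }
```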
Implemented xDS circuit breaking for the maximum number of requests that can be in flight. The threshold is retrieved from CDS responses and is configured at the cluster level. It is implemented by wrapping the picker spawned by the EDS LB policy (which resolves endpoints for a single cluster) with stream-limiting logic: when the picker is about to create a new stream (i.e., a new call), the decision is gated on the number of open streams created by the current EDS LB policy. RPCs dropped by circuit breakers are recorded in the cluster-level total drop count and will be reported to TD via LRS.
In the future, multiple gRPC channels may load balance requests to the same (global) cluster. Those requests should share the same quota for the maximum number of in-flight requests, so we will use a global counter to aggregate the number of currently in-flight requests per cluster.
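A minimal sketch of the picker-side enforcement, assuming a shared per-cluster AtomicLong counter (for example, one obtained from a global registry like the earlier sketch). The class name is illustrative, and LRS drop recording plus chaining of the delegate's stream tracer factory are elided.

```java
import io.grpc.ClientStreamTracer;
import io.grpc.LoadBalancer.PickResult;
import io.grpc.LoadBalancer.PickSubchannelArgs;
import io.grpc.LoadBalancer.SubchannelPicker;
import io.grpc.Metadata;
import io.grpc.Status;
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative picker wrapper that enforces a per-cluster in-flight request limit. */
final class CircuitBreakingPicker extends SubchannelPicker {
  private final SubchannelPicker delegate;
  private final AtomicLong inFlight;  // shared per-cluster counter
  private final long maxConcurrentRequests;

  CircuitBreakingPicker(SubchannelPicker delegate, AtomicLong inFlight, long maxConcurrentRequests) {
    this.delegate = delegate;
    this.inFlight = inFlight;
    this.maxConcurrentRequests = maxConcurrentRequests;
  }

  @Override
  public PickResult pickSubchannel(PickSubchannelArgs args) {
    if (inFlight.get() >= maxConcurrentRequests) {
      // Drop the RPC; a real implementation would also record the drop for LRS reporting.
      return PickResult.withDrop(
          Status.UNAVAILABLE.withDescription("Cluster max concurrent requests limit exceeded"));
    }
    PickResult result = delegate.pickSubchannel(args);
    if (result.getSubchannel() == null) {
      return result;  // error, drop, or no-result picks pass through unchanged
    }
    inFlight.incrementAndGet();
    // Decrement the counter when the stream finishes. A full implementation would also
    // chain result.getStreamTracerFactory() so the delegate's tracers still run.
    ClientStreamTracer.Factory countingTracer = new ClientStreamTracer.Factory() {
      @Override
      public ClientStreamTracer newClientStreamTracer(
          ClientStreamTracer.StreamInfo info, Metadata headers) {
        return new ClientStreamTracer() {
          @Override
          public void streamClosed(Status status) {
            inFlight.decrementAndGet();
          }
        };
      }
    };
    return PickResult.withSubchannel(result.getSubchannel(), countingTracer);
  }
}
```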
The xDS resource version info persists across ADS stream recreation, so the management server may choose not to resend resources it has already sent on the previous stream. The client should therefore not consider previously received (resolved) resources to be absent just because it does not receive them again on the new ADS stream. Accordingly, initial resource fetch timers should only be scheduled for unresolved resources when the ADS stream is recreated.
Use a global factory to create a shared XdsClient object pool that can be used by multiple client channels. The object pool is thread-safe and holds a single XdsClient that is returned to each client channel, so at most one XdsClient instance is created per process and it is shared between client channels.
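A simplified, hypothetical sketch of such a ref-counted shared pool (the class and method names are illustrative, not the actual grpc-java object pool):

```java
/** Process-wide, ref-counted pool that hands the same instance to every channel. */
final class SharedXdsClientPool<T> {
  interface Factory<T> {
    T create();
    void destroy(T instance);
  }

  private final Factory<T> factory;
  private T instance;
  private int refCount;

  SharedXdsClientPool(Factory<T> factory) {
    this.factory = factory;
  }

  /** Called by each channel; creates the shared instance on first use. */
  synchronized T getObject() {
    if (instance == null) {
      instance = factory.create();
    }
    refCount++;
    return instance;
  }

  /** Called when a channel shuts down; destroys the instance once nobody uses it. */
  synchronized void returnObject(T returned) {
    if (returned != instance) {
      throw new IllegalArgumentException("returned object does not belong to this pool");
    }
    refCount--;
    if (refCount == 0) {
      factory.destroy(instance);
      instance = null;
    }
  }
}
```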
LoadReportClient is a subcomponent of XdsClient. Since the XdsClient uses a SynchronizationContext to synchronize its operations, calls to LoadReportClient APIs should all be made from that SynchronizationContext. We can therefore pass that SynchronizationContext into LoadReportClient to synchronize its RPC operations as well, which eliminates the synchronization LoadReportClient would otherwise need itself.
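A small sketch of the intended usage, with an illustrative `LoadReporter` class standing in for LoadReportClient and the LRS stream plumbing elided:

```java
import io.grpc.SynchronizationContext;

/** Component that reuses its owner's SynchronizationContext instead of its own locking. */
final class LoadReporter {
  private final SynchronizationContext syncContext;
  private boolean started;

  LoadReporter(SynchronizationContext syncContext) {
    // The same context the owning XdsClient uses for all of its operations.
    this.syncContext = syncContext;
  }

  /** Must be called from syncContext; no additional locking is needed. */
  void startLoadReporting() {
    syncContext.throwIfNotInThisSynchronizationContext();
    if (started) {
      return;
    }
    started = true;
    // ... start the LRS RPC; stream callbacks are also dispatched into syncContext ...
  }
}
```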
A LoadStatsStore instance is used for recording client stats for a global cluster. A single instance may be shared by multiple client channels, so it must be thread-safe.
Two major changes are involved:
- Separated the client-side and server-side XdsClient code paths. Currently the single XdsClientImpl2 implementation runs separate code paths for client-side and server-side usages. Due to the different implementation progress of client-side and server-side development, the two implementations diverge in whether they support multiple watchers, watcher removal, response data caching, the synchronization model, etc. It became cumbersome to keep them in a single class. The separation effectively duplicates the XdsClientImpl2 class for the client and server sides so that the two can evolve independently, while an AbstractXdsClient is introduced to reuse shared code such as the xDS RPC stream logic. More details can be found in go/separate-client-server-xds-client.
- Changed the synchronization model for the client-side APIs. Multiple gRPC channels will share a single XdsClient instance, so the client-side APIs need to be thread-safe. The XdsClient also needs to synchronize API calls and xDS RPC callbacks without relying on any particular channel's SynchronizationContext; this is done by using XdsClient's own lock.
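A minimal sketch of the lock-based model described in the second bullet, with illustrative class and method names (not the actual XdsClient API):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Client-side APIs guarded by the client's own lock rather than a channel's SynchronizationContext. */
final class ThreadSafeXdsClientSketch {
  interface ResourceWatcher {
    void onChanged(String resourceName);
  }

  private final Object lock = new Object();
  private final Map<String, Set<ResourceWatcher>> watchers = new HashMap<>();

  /** May be called from any channel's thread. */
  void watchResource(String resourceName, ResourceWatcher watcher) {
    synchronized (lock) {
      Set<ResourceWatcher> set = watchers.get(resourceName);
      if (set == null) {
        set = new HashSet<>();
        watchers.put(resourceName, set);
      }
      set.add(watcher);
    }
  }

  /** Invoked from xDS RPC callbacks; takes the same lock as the watch/cancel APIs. */
  void onResourceUpdate(String resourceName) {
    synchronized (lock) {
      Set<ResourceWatcher> set = watchers.get(resourceName);
      if (set != null) {
        for (ResourceWatcher watcher : set) {
          watcher.onChanged(resourceName);
        }
      }
    }
  }
}
```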
The xDS timeout support retrieves the per-route timeout value from RouteAction.max_stream_duration.grpc_timeout_header_max, or from RouteAction.max_stream_duration.max_stream_duration if the former is not set. If neither is set, it falls back to the max_stream_duration setting in HttpConnectionManager.common_http_protocol_options retrieved from the Listener resource the route was obtained from. The final timeout value applied to the call is the minimum of the xDS timeout and the per-call deadline set by the application.
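A sketch of the selection order and the min-with-application-deadline step, using illustrative helper names and assuming the durations have already been extracted from the xDS protos as nanosecond values (null meaning unset):

```java
import io.grpc.CallOptions;
import io.grpc.Deadline;
import java.util.concurrent.TimeUnit;

final class XdsTimeoutResolution {

  /**
   * @param grpcTimeoutHeaderMaxNanos RouteAction.max_stream_duration.grpc_timeout_header_max, or null
   * @param maxStreamDurationNanos RouteAction.max_stream_duration.max_stream_duration, or null
   * @param httpMaxStreamDurationNanos HttpConnectionManager-level max_stream_duration fallback, or null
   */
  static Long selectTimeoutNanos(
      Long grpcTimeoutHeaderMaxNanos, Long maxStreamDurationNanos, Long httpMaxStreamDurationNanos) {
    if (grpcTimeoutHeaderMaxNanos != null) {
      return grpcTimeoutHeaderMaxNanos;
    }
    if (maxStreamDurationNanos != null) {
      return maxStreamDurationNanos;
    }
    return httpMaxStreamDurationNanos;
  }

  /** The effective deadline is the earlier of the xDS timeout and the application's deadline. */
  static CallOptions applyTimeout(CallOptions callOptions, Long timeoutNanos) {
    if (timeoutNanos == null || timeoutNanos <= 0) {
      return callOptions;  // no xDS timeout; keep whatever the application set
    }
    Deadline xdsDeadline = Deadline.after(timeoutNanos, TimeUnit.NANOSECONDS);
    Deadline existing = callOptions.getDeadline();
    if (existing == null || xdsDeadline.isBefore(existing)) {
      return callOptions.withDeadline(xdsDeadline);
    }
    return callOptions;
  }
}
```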
Migrate Java proto map getters from get to getMap.
This is part of a set of changes to java proto map API described here: go/java-proto-maplike
More information: go/java-proto-maplike-getFooMap
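For example, for a hypothetical generated message with a map<string, string> field named "labels" (MyMessage is made up for this sketch), the migration looks like:

```java
import java.util.Map;

/** Hypothetical generated proto message with a map<string, string> field named "labels". */
interface MyMessage {
  @Deprecated
  Map<String, String> getLabels();     // old-style map getter being migrated away from
  Map<String, String> getLabelsMap();  // preferred getFooMap() accessor
}

final class MapGetterMigrationExample {
  static int labelCount(MyMessage msg) {
    // Before: msg.getLabels().size();
    return msg.getLabelsMap().size();
  }
}
```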
Add XdsClient implementation of watching LDS/RDS resources, replacing the ConfigWatcher API. This makes LDS/RDS/CDS/EDS resource watchers work similarly. This change also cleans up XdsClientImpl's tests.
JUnit 4.13 deprecates ExpectedException and Assert.assertThat(T, org.hamcrest.Matcher).
Without Java 8 we don't want to migrate away from ExpectedException at
this time. We tend to prefer Truth over Hamcrest, so I swapped the one
instance of Assert.assertThat() to use Truth. With this change we get a
warning-less build with JUnit 4.13. We don't yet upgrade because we
still need to support JUnit 4.12 for some use-cases, but will be able to
upgrade to 4.13 soon when they upgrade.
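The swapped assertion follows this shape (an illustrative test, not the actual changed file):

```java
import static com.google.common.truth.Truth.assertThat;

import org.junit.Test;

public class TruthMigrationExample {
  @Test
  public void swappedAssertion() {
    String actual = "OK";
    // Previously: org.junit.Assert.assertThat(actual, org.hamcrest.CoreMatchers.is("OK"));
    assertThat(actual).isEqualTo("OK");
  }
}
```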
Introduce ResourceSubscriber for tracking the state of a single resource.
Each time the client newly subscribes to a resource, a corresponding ResourceSubscriber is created. Note that it does not control the resource discovery RPCs; it is still the XdsClient that sends the RPCs with all subscribed resource names for each type. A ResourceSubscriber can be in the following states:
- When the initial resource fetch timer (respTimer) is pending, the resource is under discovery and the resource data is unknown. Even if the XdsClient receives a response that does not contain the corresponding resource, that does not mean the resource is absent; we still need to wait until either a response containing the resource data arrives or the timer fires. The timer is scheduled when the ResourceSubscriber is created, so the XdsClient should always create the corresponding ResourceSubscriber when it starts subscribing to a new resource.
- If the resource fetch timer is not pending, the existence of the resource is known. If the data field is set, it holds the most recently received resource data (i.e., the cached entry); otherwise the absent field is set to true, indicating the resource does not exist. The exceptional case is when the ADS stream is closed and in its retry backoff period: during that period respTimer is cancelled and the resource existence may or may not be known. Once the backoff finishes, the XdsClient reschedules respTimer when it recreates the ADS stream and re-requests all the resources.
Watchers can be added to existing ResourceSubscribers. At the time a watcher is added, its callback is invoked if the existence of the resource is already known; otherwise the watcher simply waits for data or an absence notification to arrive in the future.
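A condensed sketch of the subscriber states described above. Field names follow the description (respTimer, data, absent); the RPC and timer-scheduling plumbing is elided and the class/watcher names are illustrative.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ScheduledFuture;

/** Sketch of ResourceSubscriber state tracking for a single resource. */
final class ResourceSubscriberSketch<T> {
  interface Watcher<T> {
    void onData(T data);
    void onAbsent();
  }

  private final Set<Watcher<T>> watchers = new HashSet<>();
  private ScheduledFuture<?> respTimer;  // pending => resource existence still unknown
  private T data;                        // most recently received resource data (cache)
  private boolean absent;                // true once the fetch timer fired without data

  void onRespTimerScheduled(ScheduledFuture<?> timer) {
    respTimer = timer;  // scheduled by the XdsClient when the subscription starts
  }

  void addWatcher(Watcher<T> watcher) {
    watchers.add(watcher);
    // Notify immediately only if existence is already known; otherwise the watcher waits.
    if (data != null) {
      watcher.onData(data);
    } else if (absent) {
      watcher.onAbsent();
    }
  }

  void onData(T newData) {
    if (respTimer != null) {
      respTimer.cancel(false);
      respTimer = null;
    }
    data = newData;
    absent = false;
    for (Watcher<T> watcher : watchers) {
      watcher.onData(newData);
    }
  }

  void onFetchTimeout() {
    respTimer = null;
    absent = true;
    for (Watcher<T> watcher : watchers) {
      watcher.onAbsent();
    }
  }
}
```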
The PROXYLESS_CLIENT_HOSTNAME node metadata was a temporary workaround to keep the management server from sending back all backend services as load-reporting clusters. Now the management server can use the `send_all_clusters` field to let the client side decide which group of clusters it reports loads for, so this node metadata is no longer needed.