The allocator has a circular reference that prevents it from being
garbage collected, thus causing a memory leak if gRPC Channels are
created and shut down (even cleanly) on a regular basis.
See https://github.com/netty/netty/issues/6891#issuecomment-457809308
and internal b/146074696.
The dns scheme is only the default scheme in grpc-java. Other
libraries could add more NameResolvers and thus change the default. For
compatibility reasons, the scheme should therefore be specified
explicitly.
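For example, instead of relying on whichever resolver is registered as the default, the target string can carry the scheme explicitly (a minimal sketch; the host and port are placeholders):

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Explicit "dns" scheme: resolution behavior no longer depends on
// which NameResolver happens to be the registered default.
ManagedChannel channel =
    ManagedChannelBuilder.forTarget("dns:///example.com:443")
        .build();
```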
First, add a new Metadata.BinaryStreamMarshaller interface which
serializes to/from instances of InputStream, and a corresponding
Key.of() factory method.
Values set with this type of key will be kept unserialized internally,
alongside a reference to the Marshaller. A new method,
InternalMetadata.serializePartial(), returns values which are either
byte[] or InputStream, allowing transport-specific handling of
lazily-serialized values.
For the regular serialize() method, stream-marshalled values are
converted to byte[] by draining the InputStream.
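A minimal sketch of how such a key might be created and used. The value type and marshaller below are hypothetical stand-ins, not part of the change itself:

```java
import io.grpc.Metadata;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

final class BlobMetadata {
  /** Hypothetical value type; stands in for e.g. a large binary payload. */
  static final class Blob {
    final byte[] bytes;
    Blob(byte[] bytes) { this.bytes = bytes; }
  }

  /** Marshals Blob to/from InputStream instead of byte[]. */
  static final class BlobMarshaller
      implements Metadata.BinaryStreamMarshaller<Blob> {
    @Override
    public InputStream toStream(Blob value) {
      // Serialization is deferred: the stream is only drained if the
      // transport cannot pass InputStream values through directly.
      return new ByteArrayInputStream(value.bytes);
    }

    @Override
    public Blob parseStream(InputStream stream) {
      try {
        return new Blob(stream.readAllBytes());  // Java 9+
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  }

  // Binary keys must end in "-bin"; values set with this key are kept
  // unserialized internally, alongside a reference to the marshaller.
  static final Metadata.Key<Blob> BLOB_KEY =
      Metadata.Key.of("blob-bin", new BlobMarshaller());
}
```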
This change integrates invocation of client-side load reporting into XdsClient's implementation:
- Changed the LRS client implementation based on LRS design changes. In the new design, the first LRS request contains a single ClusterStats message with cluster_name set to the cluster (i.e., the CDS cluster) that this LRS client reports loads for, with no stats data in that first request. The server then responds with the name of the cluster service (i.e., the EDS service) to report loads for; see the sketch after this list.
- Implemented the newly proposed LRS client API for adding/removing sources of load stats data.
- Implemented XdsClient APIs for initiating/stopping load reporting.
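A rough illustration of the initial request in the new handshake, built with the Envoy LRS v2 protos. The node id and cluster name are placeholders, and this is a sketch of the message shape, not the exact implementation:

```java
import io.envoyproxy.envoy.api.v2.core.Node;
import io.envoyproxy.envoy.api.v2.endpoint.ClusterStats;
import io.envoyproxy.envoy.service.load_stats.v2.LoadStatsRequest;

final class LrsHandshakeSketch {
  // First LRS request: a single ClusterStats naming the CDS cluster,
  // with no stats data attached. The LoadStatsResponse then names the
  // cluster service (EDS service) that subsequent reports should cover.
  static LoadStatsRequest initialRequest(String cdsClusterName) {
    return LoadStatsRequest.newBuilder()
        .setNode(Node.newBuilder().setId("example-grpc-client"))
        .addClusterStats(
            ClusterStats.newBuilder().setClusterName(cdsClusterName))
        .build();
  }
}
```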
Move helper methods for building xDS protobuf messages into a utility class for code sharing. Tests for gRPC components that use an `XdsClient` instance may want to use these methods as well.
`GracefulSwitchLoadBalancer` was doing the switch based on `LoadBalancerProvider.getPolicyName()`. This turned out to be very awkward when I had to synthesize a policy name for the provider; what I actually care about is the identity of the LB provider, not necessarily the policy name.
Now `GracefulSwitchLoadBalancer` is doing switch based on identity of `LoadBalancer.Factory`, which is simpler.
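A minimal sketch of the new usage, assuming `helper` and a child `newChildLbFactory` are already in scope:

```java
import io.grpc.util.GracefulSwitchLoadBalancer;

GracefulSwitchLoadBalancer gracefulSwitchLb =
    new GracefulSwitchLoadBalancer(helper);

// Keyed on Factory identity: passing the same Factory instance again is
// a no-op, while a different instance triggers a graceful switch to the
// new policy.
gracefulSwitchLb.switchTo(newChildLbFactory);
```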
`XdsNameResolver` will eventually send out a CDS config instead of an EDS config, but balancer_name should never be sent out from `XdsNameResolver` anyway.
This PR is mainly to unblock current staging test.
Support a bootstrap file containing multiple xDS servers, each with its own server URI and channel credential options. Multiple xDS servers are provided in case one is not reachable. For now, we only use the first one.
This change also formats JSON strings in bootstrap-related tests and adds several tests for parsing bootstrap JSON, for completeness.
The implementation of XdsClient is changed to take in a list of xDS servers. But still, we only use the first one. An example bootstrap file with two servers is sketched below.
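For illustration, a bootstrap file with two servers might look like this; the server URIs, node id, and credential types are placeholders:

```json
{
  "node": {
    "id": "example-grpc-client"
  },
  "xds_servers": [
    {
      "server_uri": "trafficdirector.googleapis.com:443",
      "channel_creds": [
        { "type": "google_default" }
      ]
    },
    {
      "server_uri": "backup-xds-server.example.com:443",
      "channel_creds": [
        { "type": "insecure" }
      ]
    }
  ]
}
```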
- Contains the `ClusterWatcher` implementation. On a cluster update, `ClusterWatcherImpl` spawns an EDS child balancer if one has not been created; then, based on the ClusterUpdate data, it sends an edsConfig to the EDS child balancer.
- `CdsLoadBalancer` reads `XdsClientRef` and `CdsConfig` from `resolvedAddresses`. Based on `resolvedAddresses`, it registers a `ClusterWatcherImpl` with the `XdsClient`.
- When `handleResolvedAddresses()` sees a different cluster resource name in CdsConfig, `CdsLoadBalancer` gracefully switches to the new cluster. A condensed sketch of the watcher follows this list.
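A condensed, hypothetical sketch of the watcher logic described above. `buildEdsConfig()`, `ErrorPicker`, and the surrounding fields (`helper`, `edsLbFactory`, `edsBalancer`, `resolvedAddresses`) are illustrative assumptions, not the verified implementation:

```java
private final class ClusterWatcherImpl implements XdsClient.ClusterWatcher {
  @Override
  public void onClusterChanged(ClusterUpdate newUpdate) {
    if (edsBalancer == null) {
      // Spawn the EDS child balancer on the first cluster update.
      edsBalancer = edsLbFactory.newLoadBalancer(helper);
    }
    // Translate the ClusterUpdate into an edsConfig for the child.
    Object edsConfig = buildEdsConfig(newUpdate);
    edsBalancer.handleResolvedAddresses(
        resolvedAddresses.toBuilder()
            .setLoadBalancingPolicyConfig(edsConfig)
            .build());
  }

  @Override
  public void onError(Status error) {
    // Without cluster information the CDS policy cannot proceed.
    helper.updateBalancingState(
        ConnectivityState.TRANSIENT_FAILURE, new ErrorPicker(error));
  }
}
```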
Add two APIs to the LoadReportClient interface:
- addLoadStatsStore(String, LoadStatsStore)
- removeLoadStatsStore(String)
Each LoadReportClient is responsible for reporting loads for a single cluster. But a gRPC client can spread load across multiple EDS services per cluster, and loads for each EDS service should be aggregated separately.
With this change, starting a LoadReportClient only starts the LRS RPC; the sources of load data to be reported are added via the addLoadStatsStore API. The resulting interface is sketched below.
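The two new methods come straight from the list above; the start/stop method names are assumptions based on the description:

```java
/** Reports loads for a single cluster. A rough sketch of the interface. */
interface LoadReportClient {
  // Starts the LRS RPC; no load data flows until a source is added.
  void startLoadReporting();

  void stopLoadReporting();

  // Adds a source of load stats, keyed by cluster service (EDS service)
  // name, so loads for each EDS service are aggregated separately.
  void addLoadStatsStore(String clusterServiceName, LoadStatsStore loadStatsStore);

  // Removes the source for the given cluster service.
  void removeLoadStatsStore(String clusterServiceName);
}
```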
Bazel was stuck on the javalite branch of protobuf because the master
branch was lacking the toolchain necessary for javalite. The fix was
backported to 3.11 in protocolbuffers/protobuf#6976
Add CDS protocol integration into XdsClient. A gRPC client interacts with XdsClient for CDS results by adding a cluster watcher to receive cluster information from the traffic director. Multiple watchers can exist to receive cluster information for the same or different clusters.
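Registration might look roughly like this; the method name `watchClusterData()` is an assumption for illustration, and the cluster name is a placeholder:

```java
// Multiple watchers may watch the same or different clusters concurrently.
xdsClient.watchClusterData(
    "cluster-foo.googleapis.com",
    new XdsClient.ClusterWatcher() {
      @Override
      public void onClusterChanged(ClusterUpdate update) {
        // React to cluster information pushed from the traffic director.
      }

      @Override
      public void onError(Status error) {
        // Surface the error, e.g. report TRANSIENT_FAILURE upstream.
      }
    });
```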
This change implements the EDS protocol in XdsClient, which requests endpoint information for a given cluster for gRPC's load-balancing usage. Note that EDS is a quasi-incremental protocol: each response may contain only a subset of the present resources, so the absence of a resource can only be revealed by a response timeout. Currently, the timeout is not implemented, and endpoint watchers will wait indefinitely if the endpoints for the cluster they are interested in do not exist.
Currently, `LoadStatsStore` is created by `LocalityStore`, and `LoadReportClient` retrieves the `LoadStatsStore` from `LocalityStore.getLoadStatsStore()`.
But `LocalityStore` is created by the EDS policy, whereas `LoadReportClient` and `LoadStatsStore` should be created by CDS (if not EDS-only), before `LocalityStore` is created. If `LoadReportClient` is embedded in `XdsClientImpl`, it needs a `LoadStatsStore` that shouldn't be created by `LocalityStore`.
Instead, `LoadStatsStore` should be created before `LocalityStore` and passed to `LocalityStore`'s constructor; a getter is then not needed. The intended wiring is roughly as follows.
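An illustrative sketch of that wiring; the `LocalityStoreImpl` constructor shape and the surrounding variables are assumptions:

```java
// Create the store up front (e.g. in CDS) and share it with both consumers.
LoadStatsStore loadStatsStore = new LoadStatsStoreImpl();

// The LRS client reads from the store...
loadReportClient.addLoadStatsStore(clusterServiceName, loadStatsStore);

// ...while LocalityStore writes into the same store, received through its
// constructor rather than exposed through a getter.
LocalityStore localityStore =
    new LocalityStoreImpl(helper, lbRegistry, loadStatsStore);
```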