- xdsclient.New returns the interface now
- xdsclient.SetClient and xdsclient.FromResolverState take and return the interface now
- clean up xds balancer tests to pass xds_client in resolver state
Also changed circuit breaking counter implementation to move max_count into the
picker, because this is how cluster_impl is designed. Implementation in EDS is
also modified to keep max_count in picker.
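A minimal sketch of keeping `max_count` in the picker, with a shared in-flight counter. All names here (`requestCounter`, `dropPicker`, `startRequest`) are illustrative, not the actual grpc-go identifiers:

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// requestCounter tracks in-flight requests for a cluster; it is shared
// across picker updates.
type requestCounter struct{ numRequests int64 }

func (c *requestCounter) startRequest() { atomic.AddInt64(&c.numRequests, 1) }
func (c *requestCounter) endRequest()   { atomic.AddInt64(&c.numRequests, -1) }

// dropPicker keeps maxCount itself, so changing the limit only requires
// building a new picker, while the counter's state lives on.
type dropPicker struct {
	counter  *requestCounter
	maxCount int64
}

var errCircuitBreaking = errors.New("circuit breaking: max concurrent requests reached")

// pick drops the RPC when the in-flight count has reached maxCount.
// (The check and increment are not atomic together; this is a sketch.)
func (p *dropPicker) pick() error {
	if atomic.LoadInt64(&p.counter.numRequests) >= p.maxCount {
		return errCircuitBreaking
	}
	p.counter.startRequest()
	// A real picker would arrange for endRequest() in the RPC's Done callback.
	return nil
}

func main() {
	c := &requestCounter{}
	p := &dropPicker{counter: c, maxCount: 1}
	fmt.Println(p.pick() == nil)                         // true: first RPC admitted
	fmt.Println(errors.Is(p.pick(), errCircuitBreaking)) // true: second dropped
}
```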
Part of C2P fallback, to support falling back to a DNS cluster.
This PR adds an implementation of xds_cluster_impl_balancer, which will be responsible for circuit breaking and RPC dropping.
This PR only adds RPC dropping; circuit breaking will be done in a follow-up PR, after some necessary refactoring.
- in xds_client, accept (not NACK) RDS resp with case_insensitive=true
- pass case_insensitive to xds resolver and routing balancer
- Note that after the config selector change, the routing balancer will be removed, and
this will be handled in the resolver config selector
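A sketch of what passing `case_insensitive` down to route matching amounts to, for a prefix route. The `match` helper is hypothetical, not the actual resolver/balancer code:

```go
package main

import (
	"fmt"
	"strings"
)

// match checks an RDS prefix route against an RPC path. When the route
// carries case_insensitive=true, the comparison folds case; otherwise
// it is an exact prefix match.
func match(prefix, path string, caseInsensitive bool) bool {
	if caseInsensitive {
		return strings.HasPrefix(strings.ToLower(path), strings.ToLower(prefix))
	}
	return strings.HasPrefix(path, prefix)
}

func main() {
	fmt.Println(match("/Service/", "/service/Method", true))  // true
	fmt.Println(match("/Service/", "/service/Method", false)) // false
}
```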
- unexport `perClusterStore` and its `stats()`
- add `Store.Stats(clusterNames)` to report loads for the given clusters
- refactor store's map to a two layer map
- move `lastLoadReportAt` from the client to the load store, because a client can now have multiple clusters, each with a different `lastLoadReportAt`
- all tests will ignore `ReportInterval` when comparing Data
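A minimal sketch of the two-layer map and `Stats(clusterNames)`, assuming the store is keyed first by cluster name and then by EDS service name. The field and helper names are illustrative, not the exported grpc-go API:

```go
package main

import (
	"fmt"
	"sync"
)

// perClusterStore holds load data for one (cluster, EDS service) pair.
type perClusterStore struct {
	callsFinished uint64
}

// Store keys per-cluster stores first by cluster name, then by EDS
// service name: the two-layer map described above.
type Store struct {
	mu       sync.Mutex
	clusters map[string]map[string]*perClusterStore
}

func NewStore() *Store {
	return &Store{clusters: make(map[string]map[string]*perClusterStore)}
}

// perCluster returns (creating if needed) the store for a pair.
func (s *Store) perCluster(cluster, service string) *perClusterStore {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.clusters[cluster] == nil {
		s.clusters[cluster] = make(map[string]*perClusterStore)
	}
	if s.clusters[cluster][service] == nil {
		s.clusters[cluster][service] = &perClusterStore{}
	}
	return s.clusters[cluster][service]
}

// Stats reports loads only for the requested clusters, which is what
// lets one client serve multiple clusters with independent reports.
func (s *Store) Stats(clusterNames []string) map[string]uint64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	out := make(map[string]uint64)
	for _, c := range clusterNames {
		for _, p := range s.clusters[c] {
			out[c] += p.callsFinished
		}
	}
	return out
}

func main() {
	s := NewStore()
	s.perCluster("cluster-a", "eds-1").callsFinished = 3
	s.perCluster("cluster-b", "eds-1").callsFinished = 5
	fmt.Println(s.Stats([]string{"cluster-a"})) // only cluster-a is reported
}
```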
* One subtest should not depend on a previous one. So, refactored the test to be a single one, instead of having multiple subtests.
* Added a channel on the fake server to notify when it receives a connection. This test was sometimes failing because it went ahead and sent EDS requests and expected responses, before the connection to the fake server was up.
- Stop sending empty update to sub-balancers at init time
- At init, when only one sub-balancer reports TransientFailure, wait for the other sub-balancers
- When aggregating states, treat a sub-balancer that moves from TransientFailure to Connecting as still in TransientFailure, so the aggregated state doesn't stay Connecting for a long time
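The aggregation rule above can be sketched as follows. The types here (`subState`, `aggregate`) are illustrative, assuming each sub-balancer tracks whether its last stable state was TransientFailure:

```go
package main

import "fmt"

type state int

const (
	connecting state = iota
	ready
	transientFailure
)

// subState records a sub-balancer's current state plus whether its most
// recent stable state was TransientFailure.
type subState struct {
	current    state
	reportedTF bool
}

// aggregate treats a sub-balancer that moved TransientFailure ->
// Connecting as still failing, so the parent doesn't report Connecting
// for a long time while a broken child keeps retrying.
func aggregate(subs []subState) state {
	var numReady, numConnecting int
	for _, s := range subs {
		switch {
		case s.current == ready:
			numReady++
		case s.current == connecting && !s.reportedTF:
			numConnecting++
		}
	}
	switch {
	case numReady > 0:
		return ready
	case numConnecting > 0:
		return connecting
	default:
		return transientFailure
	}
}

func main() {
	// One child retrying after failure, one still failing: the aggregate
	// is TransientFailure, not Connecting.
	fmt.Println(aggregate([]subState{
		{current: connecting, reportedTF: true},
		{current: transientFailure, reportedTF: true},
	}) == transientFailure) // true
}
```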
This PR refactors xds_client to support multiple watches. Those watches can be for the same type and the same resource_name.
There's upper level `Client` and lower level `v2client`. Before this change, all logic was in `v2client`, and `Client` was a thin wrapper.
This PR moves some of the functionality from `v2client` to `Client`. New layers:
- Upper level `Client`
- keeps a list of watchers
- provides method `func WatchXXX() (cancel func())`
- has `WatchService()` which involves `LDS` and `RDS`
    - handles resources from the xDS responses and dispatches them to the watchers
- including multiple watchers for the same resource_name
- keeps cache
- and checks cache for new watches
- Lower level `v2client`
- is a dumb client that
- manages ADS stream
        - sends a new xDS request when a watch is added or removed
- parses xDS responses
- It doesn't call watchers, but forwards all parsed results to upper Client
- handles ACK/NACK
- supports `addWatch(type, name)` and `removeWatch(type, name)`
- instead of `func watchCDS() func()`, which is now moved up to upper `Client`
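A minimal sketch of this layering, with illustrative stand-in names (`topClient` for the upper `Client`, `fakeV2` for `v2client`; synchronization and error handling omitted). The upper layer keeps watcher lists and a cache, and only calls `addWatch`/`removeWatch` on the lower layer when the first watcher for a resource arrives or the last one cancels:

```go
package main

import "fmt"

// fakeV2 stands in for the lower-level v2client: it only manages
// per-(type, name) subscriptions on the ADS stream.
type fakeV2 struct{ subscriptions map[string]bool }

func (v *fakeV2) addWatch(typ, name string)    { v.subscriptions[typ+"/"+name] = true }
func (v *fakeV2) removeWatch(typ, name string) { delete(v.subscriptions, typ+"/"+name) }

// topClient is the upper-level Client: watcher lists, cache, and the
// decision of when to touch the lower layer.
type topClient struct {
	lower    *fakeV2
	watchers map[string]map[int]func(update string)
	cache    map[string]string
	nextID   int
}

func newTopClient(lower *fakeV2) *topClient {
	return &topClient{
		lower:    lower,
		watchers: make(map[string]map[int]func(string)),
		cache:    make(map[string]string),
	}
}

// WatchCluster registers cb for a cluster resource and returns a cancel
// func, mirroring the `func WatchXXX() (cancel func())` shape above.
func (c *topClient) WatchCluster(name string, cb func(string)) (cancel func()) {
	key := "cluster/" + name
	if len(c.watchers[key]) == 0 {
		c.watchers[key] = make(map[int]func(string))
		c.lower.addWatch("cluster", name) // first watcher: subscribe
	}
	id := c.nextID
	c.nextID++
	c.watchers[key][id] = cb
	if u, ok := c.cache[key]; ok {
		cb(u) // new watches are served from the cache immediately
	}
	return func() {
		delete(c.watchers[key], id)
		if len(c.watchers[key]) == 0 {
			c.lower.removeWatch("cluster", name) // last watcher: unsubscribe
		}
	}
}

// onClusterUpdate is what the lower layer calls with a parsed response;
// the upper layer caches it and fans out to every watcher.
func (c *topClient) onClusterUpdate(name, update string) {
	key := "cluster/" + name
	c.cache[key] = update
	for _, w := range c.watchers[key] {
		w(update)
	}
}

func main() {
	c := newTopClient(&fakeV2{subscriptions: make(map[string]bool)})
	var got []string
	cancel := c.WatchCluster("a", func(u string) { got = append(got, "w1:"+u) })
	c.WatchCluster("a", func(u string) { got = append(got, "w2:"+u) }) // same resource
	c.onClusterUpdate("a", "u1")
	cancel()
	fmt.Println(len(got)) // 2: both watchers saw the update
}
```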
Also includes other changes:
- Corresponding test changes (some tests for `v2client` were moved to `Client`)
- Method and type renaming
- CDS/EDS -> Cluster/Endpoints
- callback functions all accept updates as non-pointers
Before this change, in EDS balancer, child balancer's state update is handled synchronously, which includes priority handling. This would cause a deadlock if the child policy sends a state update inline when handling addresses (e.g. when roundrobin handles empty address list).
This change moves the child balancer state handling into a goroutine, to avoid the problem.
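The shape of the fix can be sketched as below; the type and field names are hypothetical, not the actual edsbalancer code. The key property is that the callback only enqueues, so a child reporting state inline while the parent holds its mutex cannot deadlock:

```go
package main

import (
	"fmt"
	"sync"
)

// edsImpl processes child state updates on a dedicated goroutine instead
// of inline, so a child policy that calls back while the parent holds mu
// (e.g. roundrobin seeing an empty address list) can't deadlock.
type edsImpl struct {
	mu        sync.Mutex // guards priority-handling state
	processed int
	updates   chan string
	wg        sync.WaitGroup
}

func newEDSImpl() *edsImpl {
	b := &edsImpl{updates: make(chan string, 16)}
	b.wg.Add(1)
	go b.run()
	return b
}

func (b *edsImpl) run() {
	defer b.wg.Done()
	for u := range b.updates {
		b.mu.Lock()
		_ = u // priority handling would happen here, under mu
		b.processed++
		b.mu.Unlock()
	}
}

// handleChildState only enqueues, so it is safe to call even while mu is
// held, which is exactly the inline-callback case that used to deadlock.
func (b *edsImpl) handleChildState(s string) { b.updates <- s }

func (b *edsImpl) close() {
	close(b.updates)
	b.wg.Wait()
}

func main() {
	b := newEDSImpl()
	b.mu.Lock()
	b.handleChildState("CONNECTING") // would deadlock if handled inline
	b.mu.Unlock()
	b.close()
	fmt.Println(b.processed) // 1
}
```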
- More logs in xds bootstrap/resolver/cds/eds
- Bootstrap file content/error
- Request/response on ADS stream
- Actions by client/resolver/balancer
- Content of updates
- Logs prefixed with component name and id
- `[xds-bootstrap]`
- `[xds-client <address>]`
- `[cds-lb <address>]`
- `[eds-lb <address>]`
- The LRS client will find known clusters from the response, and send the loads
- The LRS client supports only one cluster now; it will be extended to support multiple
* Modified tests to use tlogger.
* Fail on errors, with error expectations.
* Added expects and MixedCapsed grpclb_config tests
* Moved tlogger to grpctest, moved leakcheck tester to grpctest.go
* Added ExpectErrorN()
* Removed redundant leak checks
* Fixed new test
* Made tlogger globals into tlogger methods
* ErrorsLeft -> EndTest
* Removed some redundant lines
* Fixed error in test and empty map in EndTest
edsBalancer (the old xds balancer) was in `package balancer`, one level above the eds implementation. It's a thin wrapper of the eds impl (and fallback in the future).
This change moves the thin wrapper to `package edsbalancer`, and also renames some structs.
Simplified the tests by only testing what is required and faking out
whatever can be faked out.
Also added a fakexds.Server implementation. Will switch other users of
the existing fakeserver implementation after this PR is merged.
Errors will be handled specifically, depending on whether it's a
connection error or other types of errors.
Without this fix, balancer's callback will be called with <nil> update,
causing nil panic later.
This PR removes the xds_client implementation from the eds balancer, and replaces it with an xds_client wrapper. (The xds_client wrapper has a very similar API to the old xds_client implementation, so the change in the eds balancer is minimal.)
The eds balancer currently doesn't look for the xds_client in attributes, and always creates a new xds_client. The attributes change will be done in a follow-up change.
The xds client will parse the EDS response, and give the parse result to eds balancer, so the balancer doesn't need to deal with proto directly.
Also moved `ClusterLoadAssignmentBuilder` to another package to be shared by tests in different packages.
Generated protobuf messages contain internal data structures
that general purpose comparison functions (e.g., reflect.DeepEqual,
pretty.Compare, etc) do not properly compare. It is already the case
today that these functions may report a difference when two messages
are actually semantically equivalent.
Fix all usages by either calling proto.Equal directly if
the top-level types are themselves proto.Message, or by calling
cmp.Equal with the cmp.Comparer(proto.Equal) option specified.
This option teaches cmp to use proto.Equal anytime it encounters
proto.Message types.
Each priority maps to a balancer group.
When a priority is in use, its balancer group is started, and the balancer groups with lower priorities are closed. When a priority is down (no connection ready), the next priority's balancer group is started.
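The failover logic can be sketched as follows, with illustrative names (`priorityGroup`, `pickPriority`) rather than the actual edsbalancer types:

```go
package main

import "fmt"

// priorityGroup stands in for one priority's balancer group.
type priorityGroup struct {
	started bool
	ready   bool // true when this priority has a READY connection
}

// pickPriority walks priorities from highest (index 0) down: each group
// is started when reached; the first ready priority is used and all
// lower priorities are closed. A down priority falls through to start
// the next one.
func pickPriority(groups []*priorityGroup) int {
	for i, g := range groups {
		g.started = true
		if g.ready {
			for _, lower := range groups[i+1:] {
				lower.started = false // close lower-priority groups
			}
			return i
		}
		// no connection ready at this priority: start the next one
	}
	return -1 // every priority failed
}

func main() {
	groups := []*priorityGroup{
		{ready: false},  // p0 down
		{ready: true},   // p1 healthy
		{started: true}, // p2 was running; will be closed
	}
	fmt.Println(pickPriority(groups), groups[0].started, groups[2].started) // 1 true false
}
```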
Fields are added in: https://github.com/grpc/grpc-proto/pull/64
Other changes:
- Move XDSConfig from internal to balancer
- Later we will add a separate config for CDS balancer
- generate service_config.pb.go and test with json generated from proto message
When a locality is removed from the EDS response, its corresponding
sub-balancer will be removed from the balancer group.
With this change, the sub-balancer won't be removed immediately. It will
be kept in a cache (for 15 minutes by default). If the locality is
re-added within the timeout, the sub-balancer in cache will be picked
and re-used.
* Implement missing pieces for connection backoff.
Spec can be found here:
https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md
Summary of changes:
* Added a new type (marked experimental), ConnectParams, which contains
the knobs defined in the spec (except for minConnectTimeout).
* Added a new API (marked experimental), WithConnectParams() to return a
DialOption to dial with the provided parameters.
* Added new fields to the implementation of the exponential backoff in
internal/backoff which mirror the ones in ConnectParams.
* Marked existing APIs WithBackoffMaxDelay() and WithBackoffConfig() as
deprecated.
* Added a default exponential backoff implementation, for easy use of
internal callers.
Added a new backoff package which defines the backoff configuration
options, and is used by both the grpc package and the internal/backoff
package. This allows us to have all backoff related options in a
separate package.