Multi-tenant interceptor and scaler (#206)

* multi-tenant interceptor and scaler

Signed-off-by: Aaron Schlesinger <aaron@ecomaz.net>

* specifying host in XKCD ingress

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* routing the xkcd chart to the interceptor properly

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* check host header first

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* sending true active response in stream

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* passing target pending requests through to the underlying ScaledObject (so the scaler can read it later)

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* removing broken target pending requests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* using getHost in proxy handler

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding integration test

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding more tests to the integration test

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* splitting up integration tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* more checks

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* mark new test TODO

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* expanding interceptor integration tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* error messages

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* refactor test

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* more test improvements

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* rolling back target pending requests in ScaledObject

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* removing target metric error. it's not used anymore

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* improving waitFunc test

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* Refactoring the deployment cache to add better error handling and resilience.

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding doc comment

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* refactoring deploy cache and adding tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* Using interfaces for deployment watch & list

this makes tests easier

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding more deploy cache tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* Fixing up TestK8sDeploymentCacheRewatch

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* shutting down everything else when one thing errors, and adding a deployments cache endpoint

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* removing commented code

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* clarifying deployment cache JSON output, and simplifying deployment watch function

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding TODO tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* error logs and restoring the count middleware

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* using consistent net/http package name throughout main.go

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* Refactoring deployment cache deployment storage

Also, running go mod tidy and adding new TODO (i.e. failing) tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* using deployment.Status.ReadyReplicas, instead of just replicas

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* integration_tets ==> proxy_handlers_integration_test

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding some resilience to tests

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding deployment cache endpoint documentation

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* running the global test target with sh.RunV

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* adding timeout to magefile test target

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>

* finishing one TODO test and adding an issue for the rest

Signed-off-by: Aaron Schlesinger <70865+arschles@users.noreply.github.com>
Aaron Schlesinger 2021-09-03 01:19:20 -07:00 committed by GitHub
parent a0a7969a6d
commit c211da9bd1
103 changed files with 5503 additions and 1764 deletions

@@ -9,5 +9,8 @@
/examples
/docs
/.envrc
+/.github
+/README.md
+/RELEASE_PROCESS.md
CONTRIBUTING.md
Makefile

@@ -19,15 +19,19 @@ We've split this project into a few different major areas of functionality, whic

We've introduced a new [Custom Resource (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `HTTPScaledObject.http.keda.sh` - `HTTPScaledObject` for short. Fundamentally, this resource allows an application developer to submit their HTTP-based application name and container image to the system, and have the system deploy all the necessary internal machinery required to deploy their HTTP application and expose it to the public internet.

-The [operator](../operator) runs inside the Kubernetes namespace to which they're deploying their application and watches for these `HTTPScaledObject` resources. When one is created, it will create a `Deployment` and `Service` for the app, interceptor, and scaler, and a [`ScaledObject`](https://keda.sh/docs/2.1/concepts/scaling-deployments/) which KEDA then uses to scale the application.
-When the `HTTPScaledObject` is deleted, the operator then removes all of the aforementioned resources.
+The [operator](../operator) runs inside the Kubernetes namespace to which they're deploying their application and watches for these `HTTPScaledObject` resources. When one is created, it does the following:
+
+- Update an internal routing table that maps incoming HTTP hostnames to internal applications.
+- Furnish this routing table information to interceptors so that they can properly route requests.
+- Create a [`ScaledObject`](https://keda.sh/docs/2.3/concepts/scaling-deployments/#scaledobject-spec) for the `Deployment` specified in the `HTTPScaledObject` resource.
+
+When the `HTTPScaledObject` is deleted, the operator reverses all of the aforementioned actions.

### Autoscaling for HTTP Apps

-After an `HTTPScaledObject` is created and the operator creates the appropriate resources, there is a public IP address (and DNS entry, if configured) and the interceptor takes over. When HTTP traffic enters the system from the public internet, the interceptor accepts it and forwards it to the app's `Service` IP (it is most commonly configured as a `ClusterIP` service).
+After an `HTTPScaledObject` is created and the operator creates the appropriate resources, you must send HTTP requests through the interceptor so that the application is scaled. A Kubernetes `Service` called `keda-add-ons-http-interceptor-proxy` was created when you `helm install`ed the addon. Send requests to that service.
-At the same time, the interceptor keeps track of the size of the pending HTTP requests - HTTP requests that it has forwarded but the app hasn't returned. The scaler periodically makes HTTP requests to the interceptor via an internal HTTP endpoint - on a separate port from the public server - to get the size of the pending queue. Based on this queue size, it reports scaling metrics as appropriate to KEDA. As the queue size increases, the scaler instructs KEDA to scale up as appropriate. Similarly, as the queue size decreases, the scaler instructs KEDA to scale down.
+The interceptor keeps track of the number of pending HTTP requests - HTTP requests that it has forwarded but the app hasn't returned. The scaler periodically makes HTTP requests to the interceptor via an internal RPC endpoint - on a separate port from the public server - to get the size of the pending queue. Based on this queue size, it reports scaling metrics as appropriate to KEDA. As the queue size increases, the scaler instructs KEDA to scale up as appropriate. Similarly, as the queue size decreases, the scaler instructs KEDA to scale down.

## Architecture Overview
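
The pending-request count described above is the core scaling signal. As a rough illustration of the idea - a minimal, hypothetical Go sketch, not the interceptor's actual code - a reverse proxy can track in-flight requests like this:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strconv"
	"sync/atomic"
)

// pendingCounter wraps a handler and tracks in-flight requests:
// incremented when a request is forwarded, decremented when the
// app's response comes back.
type pendingCounter struct {
	pending int64
	next    http.Handler
}

func (p *pendingCounter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	atomic.AddInt64(&p.pending, 1)
	defer atomic.AddInt64(&p.pending, -1)
	p.next.ServeHTTP(w, r)
}

func main() {
	// hypothetical backend Service; the real interceptor resolves
	// targets from its routing table
	target, err := url.Parse("http://my-app.my-namespace.svc.cluster.local:8080")
	if err != nil {
		log.Fatal(err)
	}
	counter := &pendingCounter{next: httputil.NewSingleHostReverseProxy(target)}

	// internal endpoint on a separate port, where a scaler-like
	// component could read the queue size
	go http.ListenAndServe(":9090", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(strconv.FormatInt(atomic.LoadInt64(&counter.pending), 10)))
	}))
	log.Fatal(http.ListenAndServe(":8080", counter))
}
```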

@@ -72,4 +72,99 @@ Some of the above commands require several environment variables to be set. You
- `KEDAHTTP_OPERATOR_IMAGE`: the fully qualified name of the [operator](../operator) image. This is used to build, push, and install the operator into a Kubernetes cluster (required)
- `KEDAHTTP_NAMESPACE`: the Kubernetes namespace to which to install the add on and other required components (optional, defaults to `kedahttp`)

->Suffic any `*_IMAGE` variable with `<keda-git-sha>` and the build system will automatically replace it with `sha-$(git rev-parse --short HEAD)`
+>Suffix any `*_IMAGE` variable with `<keda-git-sha>` and the build system will automatically replace it with `sha-$(git rev-parse --short HEAD)`

## Helpful Tips

The below tips assist with debugging, introspecting, or observing the current state of a running HTTP add-on installation. They involve making network requests to cluster-internal services (i.e. `ClusterIP` `Service`s).

There are generally two ways to communicate with these services.

### Use `kubectl proxy`

`kubectl proxy` establishes an authenticated connection to the Kubernetes API server, runs a local web server, and lets you execute REST API requests against `localhost` as if you were executing them against the Kubernetes API server.

To establish one, run the following command in a separate terminal window:

```shell
kubectl proxy -p 9898
```

>You'll keep this proxy running throughout all of your testing, so make sure you keep this terminal window open.

### Use a dedicated running pod

The second way to communicate with these services is almost the opposite of the previous one. Instead of bringing the API server to you with `kubectl proxy`, you'll be creating an execution environment closer to the API server.

First, launch a container with an interactive shell in Kubernetes with the following command (substituting your namespace in for `$NAMESPACE`):

```shell
kubectl run -it alpine --image=alpine -n $NAMESPACE
```

Then, when you see a `curl` command below, replace the entire path up to and including the `/proxy/` segment with just the name of the service and its port. For example, `curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/routing_ping` would just become `curl -L keda-add-ons-http-interceptor-admin:9090/routing_ping`.

### Routing Table - Interceptor

Any interceptor pod has both a _proxy_ and an _admin_ server running inside it. The proxy server is where users send HTTP requests, and the admin server is for internal use. You can use this server to:

1. Prompt the interceptor to re-fetch the routing table from the operator, or
2. Print out the interceptor's current routing table (useful for debugging)

Assuming you've run `kubectl proxy` in a separate terminal window, prompt for a re-fetch with the below command (substituting your namespace in for `$NAMESPACE`):

```shell
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/routing_ping
```

>To print out the current routing table without a re-fetch, replace `routing_ping` with `routing_table`.

### Queue Counts - Interceptor

You can use the same `kubectl proxy` that you established in the previous section to fetch the HTTP pending queue counts table. This is the same table that the external scaler requests. See the "Queue Counts - Scaler" section below for more details on that.

To fetch the queue counts from an interceptor, ensure you've established a `kubectl proxy` on port 9898 and use the below `curl` command (again, substituting your preferred namespace for `$NAMESPACE`):

```shell
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/queue
```

### Deployment Cache - Interceptor

You can use the same `kubectl proxy` to fetch a short summary of the state of the interceptor's deployment cache (the data that it uses to determine whether and how long to hold requests prior to forwarding them). To do so, ensure that you've established a `kubectl proxy` on port 9898 and use the below `curl` command (again, substituting your preferred namespace for `$NAMESPACE`):

```shell
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-interceptor-admin:9090/proxy/deployments
```

The output of this command is a JSON map where the keys are deployment names and the values are the latest known number of replicas for each deployment.

### Routing Table - Operator

The operator pod (whose name looks like `keda-add-ons-http-controller-manager-1234567`) has a similar `/routing_table` endpoint to the interceptor's. The data returned from this endpoint, however, is the source of truth: interceptors fetch their copies of the routing table from it. Accessing it is similar.

Ensure that you are running `kubectl proxy -p 9898` and then, in a separate terminal window, fetch the routing table from the operator with this `curl` command (again, substituting your namespace in for `$NAMESPACE`):

```shell
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-operator-admin:9090/proxy/routing_table
```

### Queue Counts - Scaler

The external scaler fetches pending queue counts from each interceptor in the system, aggregates and stores them, and then returns them to KEDA when requested. KEDA fetches these data via the [standard gRPC external scaler interface](https://keda.sh/docs/2.3/concepts/external-scalers/#external-scaler-grpc-interface).

For convenience, the scaler also provides a plain HTTP server from which you can also fetch these metrics.

Ensure that you are running `kubectl proxy -p 9898` and then, in a separate terminal window, fetch the queue counts from the scaler with this `curl` command (again, substituting your namespace in for `$NAMESPACE`):

```shell
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-external-scaler:9091/proxy/queue
```

Or, you can prompt the scaler to fetch counts from all interceptors, aggregate, store, and return the counts:

```shell
curl -L localhost:9898/api/v1/namespaces/$NAMESPACE/services/keda-add-ons-http-external-scaler:9091/proxy/queue_ping
```
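
For illustration, the deployment-cache map described above could also be consumed programmatically. This is a hypothetical Go sketch that assumes `kubectl proxy -p 9898` is running and uses an illustrative namespace of `my-namespace`; the map shape follows the description in the docs above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// assumes `kubectl proxy -p 9898` is running; namespace is illustrative
	url := "http://localhost:9898/api/v1/namespaces/my-namespace/services/" +
		"keda-add-ons-http-interceptor-admin:9090/proxy/deployments"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// per the docs above: a JSON map of deployment name -> latest
	// known replica count
	replicas := map[string]int32{}
	if err := json.NewDecoder(resp.Body).Decode(&replicas); err != nil {
		log.Fatal(err)
	}
	for name, n := range replicas {
		fmt.Printf("%s: %d replicas\n", name, n)
	}
}
```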

@@ -1,6 +1,6 @@
# The `HTTPScaledObject`

->This document reflects the specification of the `HTTPScaledObject` resource for the latest version
+>This document reflects the specification of the `HTTPScaledObject` resource for the `v0.1.0` version.

Each `HTTPScaledObject` looks approximately like the below:

docs/ref/v0.2.0/http_scaled_object.md (new file)

@@ -0,0 +1,44 @@
# The `HTTPScaledObject`

>This document reflects the specification of the `HTTPScaledObject` resource for the `v0.2.0` version.

Each `HTTPScaledObject` looks approximately like the below:

```yaml
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: xkcd
spec:
  host: "myhost.com"
  scaleTargetRef:
    deployment: xkcd
    service: xkcd
    port: 8080
```

This document is a narrated reference guide for the `HTTPScaledObject`, and we'll focus on the `spec` field.

## `host`

This is the host to apply this scaling rule to. All incoming requests with this value in their `Host` header will be forwarded to the `Service` and port specified in the below `scaleTargetRef`, and that same `scaleTargetRef`'s `Deployment` will be scaled accordingly.

## `scaleTargetRef`

This is the primary and most important part of the `spec` because it describes:

1. The incoming host to apply this scaling rule to.
2. What `Deployment` to scale.
3. The service to which to route HTTP traffic.

### `deployment`

This is the name of the `Deployment` to scale. It must exist in the same namespace as this `HTTPScaledObject` and shouldn't be managed by any other autoscaling system. This means that there should not be any `ScaledObject` already created for this `Deployment`. The HTTP add on will manage a `ScaledObject` internally.

### `service`

This is the name of the service to route traffic to. The add on will create autoscaling and routing components that route to this `Service`. It must exist in the same namespace as this `HTTPScaledObject` and should route to the same `Deployment` as you entered in the `deployment` field.

### `port`

This is the port to route to on the service that you specified in the `service` field. It should be exposed on the service and should route to a valid `containerPort` on the `Deployment` you gave in the `deployment` field.
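
The `host` field above is what the interceptor keys its routing decision on. Here is a minimal, hypothetical Go sketch of that lookup rule - the names and URLs are illustrative, not the add-on's implementation:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// routingTable maps incoming Host headers to backend Service URLs,
// mirroring how an HTTPScaledObject's `host` selects its scaleTargetRef.
// The entry below is illustrative only.
var routingTable = map[string]string{
	"myhost.com": "http://xkcd.my-namespace.svc.cluster.local:8080",
}

func route(w http.ResponseWriter, r *http.Request) {
	// r.Host carries the request's Host header (it may include a port)
	target, ok := routingTable[r.Host]
	if !ok {
		http.Error(w, "no route for host "+r.Host, http.StatusNotFound)
		return
	}
	u, err := url.Parse(target)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(route)))
}
```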

@@ -8,7 +8,7 @@ If you haven't installed KEDA and the HTTP Add On (this project), please do so f

## Creating An Application

-You'll need to install a `Deployment` and `Service` first. You'll tell the add on to begin scaling it up and down after this step. Use the below [Helm](https://helm.sh) command to create the resources you need.
+You'll need to install a `Deployment` and `Service` first. You'll tell the add on to begin scaling it up and down after this step. We've provided a [Helm](https://helm.sh) chart in this repository that you can use to try it out. Use this command to create the resources you need.

```shell
helm install xkcd ./examples/xkcd -n ${NAMESPACE}
```

@@ -26,14 +26,14 @@ You interact with the operator via a CRD called `HTTPScaledObject`. This CRD obj

```shell
kubectl create -f -n $NAMESPACE examples/v0.0.2/httpscaledobject.yaml
```

->If you'd like to learn more about this object, please see the [`HTTPScaledObject` reference](./ref/http_scaled_object.md).
+>If you'd like to learn more about this object, please see the [`HTTPScaledObject` reference](./ref/v0.2.0/http_scaled_object.md).

## Testing Your Installation

You've now installed a web application and activated autoscaling by creating an `HTTPScaledObject` for it. For autoscaling to work properly, HTTP traffic needs to route through the `Service` that the add on has set up. You can use `kubectl port-forward` to quickly test things out:

```shell
-kubectl port-forward svc/xkcd-interceptor-proxy -n ${NAMESPACE} 8080:80
+kubectl port-forward svc/keda-add-ons-http-interceptor-proxy -n ${NAMESPACE} 8080:80
```

### Routing to the Right `Service`

@@ -41,10 +41,10 @@ kubectl port-forward svc/xkcd-interceptor-proxy -n ${NAMESPACE} 8080:80

As said above, you need to route your HTTP traffic to the `Service` that the add on has created. If you have existing systems - like an ingress controller - you'll need to anticipate the name of these created `Service`s. Each one will be named consistently like so, in the same namespace as the `HTTPScaledObject` and your application (i.e. `$NAMESPACE`):

```shell
-<deployment name>-interceptor-proxy
+keda-add-ons-http-interceptor-proxy
```

->The service will always be a `ClusterIP` type and will be created in the same namespace as the `HTTPScaledObject` you created.
+>This is installed by the [Helm chart](https://github.com/kedacore/charts/tree/master/http-add-on) as a `ClusterIP` `Service` by default.

#### Installing and Using the [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/#using-helm) Ingress Controller
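
Because routing is driven by the `Host` header, a quick way to exercise it through the port-forward above is to set that header explicitly, e.g. `curl -H "Host: myhost.com" localhost:8080` (assuming the example chart's default host of `myhost.com`).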

e2e/config.go (new file)

@@ -0,0 +1,41 @@
package e2e

import "strings"

type config struct {
	Namespace               string `envconfig:"NAMESPACE"`
	RunSetupTeardown        bool   `envconfig:"RUN_SETUP_TEARDOWN" default:"false"`
	AddonChartLocation      string `envconfig:"ADD_ON_CHART_LOCATION" required:"true"`
	ExampleAppChartLocation string `envconfig:"EXAMPLE_APP_CHART_LOCATION" required:"true"`
	OperatorImg             string `envconfig:"KEDAHTTP_OPERATOR_IMAGE"`
	InterceptorImg          string `envconfig:"KEDAHTTP_INTERCEPTOR_IMAGE"`
	ScalerImg               string `envconfig:"KEDAHTTP_SCALER_IMAGE"`
	HTTPAddOnImageTag       string `envconfig:"KEDAHTTP_IMAGE_TAG"`
	NumReqsAgainstProxy     int    `envconfig:"NUM_REQUESTS_TO_EXECUTE" default:"10000"`
}

func (c *config) httpAddOnHelmVars() map[string]string {
	ret := map[string]string{}
	if c.OperatorImg != "" {
		ret["images.operator"] = strings.Split(c.OperatorImg, ":")[0]
	}
	if c.InterceptorImg != "" {
		ret["images.interceptor"] = strings.Split(c.InterceptorImg, ":")[0]
	}
	if c.ScalerImg != "" {
		ret["images.scaler"] = strings.Split(c.ScalerImg, ":")[0]
	}
	if c.HTTPAddOnImageTag != "" {
		ret["images.tag"] = c.HTTPAddOnImageTag
	}
	return ret
}
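
One design note on the helper above: each `*Img` value is split on `:` and only the repository part is passed via the `images.*` Helm values, while the tag travels separately through `images.tag` - so all three images share a single tag.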

e2e/e2e_test.go (new file)

@@ -0,0 +1,97 @@
package e2e

import (
	"context"
	"fmt"
	"os"
	"testing"
	"time"

	"github.com/kelseyhightower/envconfig"
	"github.com/stretchr/testify/require"
	"k8s.io/apimachinery/pkg/util/uuid"
)

func TestE2E(t *testing.T) {
	shouldRun := os.Getenv("KEDA_HTTP_E2E_SHOULD_RUN")
	if shouldRun != "true" {
		t.Logf("Not running E2E Tests")
		t.SkipNow()
	}
	ctx, cancel := context.WithCancel(context.Background())
	r := require.New(t)
	ns := fmt.Sprintf("keda-http-add-on-e2e-%s", uuid.NewUUID())
	cfg := new(config)
	envconfig.MustProcess("KEDA_HTTP_E2E", cfg)
	if cfg.Namespace != "" {
		ns = cfg.Namespace
	}
	t.Logf("E2E Tests Starting")
	t.Logf("Using namespace: %s", ns)

	// setup and register teardown functionality.
	// register cleanup before executing setup, so that
	// if setup times out, we'll still clean up
	t.Cleanup(func() {
		cancel()
		if cfg.RunSetupTeardown {
			teardown(t, ns)
		}
	})
	if cfg.RunSetupTeardown {
		t.Logf("Running setup and teardown scripts")
		setup(t, ns, cfg)
	}

	cl, restCfg, err := getClient()
	r.NoError(err)

	// wait until all expected deployments are available
	r.NoError(waitUntilDeploymentsAvailable(
		ctx,
		cl,
		remainingDurInTest(t, 20*time.Second),
		ns,
		[]string{
			"keda-operator",
			"keda-add-ons-http-controller-manager",
			"keda-add-ons-http-external-scaler",
			"keda-add-ons-http-interceptor",
			"keda-operator-metrics-apiserver",
			"xkcd",
		},
	))

	// ensure that the interceptor and XKCD scaledobjects exist
	_, err = getScaledObject(ctx, cl, ns, "keda-add-ons-http-interceptor")
	r.NoError(err)
	_, err = getScaledObject(ctx, cl, ns, "xkcd-app")
	r.NoError(err)

	// issue requests to the XKCD service directly to make
	// sure it's up and properly configured
	r.NoError(makeRequestsToSvc(
		ctx, restCfg, ns, "xkcd", 8080, cfg.NumReqsAgainstProxy,
	))

	// issue requests to the proxy service to make sure
	// it's forwarding properly
	r.NoError(makeRequestsToSvc(
		ctx, restCfg, ns, "keda-add-ons-http-interceptor-proxy", 8080, cfg.NumReqsAgainstProxy,
	))
}
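
To run this suite locally, you would set `KEDA_HTTP_E2E_SHOULD_RUN=true` plus the `KEDA_HTTP_E2E_`-prefixed variables that `config` reads (for example, the required `KEDA_HTTP_E2E_ADD_ON_CHART_LOCATION` and `KEDA_HTTP_E2E_EXAMPLE_APP_CHART_LOCATION`), then run `go test ./e2e/...`.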

e2e/helm.go (new file)

@@ -0,0 +1,64 @@
package e2e

import (
	"fmt"

	"github.com/magefile/mage/sh"
)

func helmDelete(namespace, chartName string) error {
	return sh.RunV("helm", "delete", "-n", namespace, chartName)
}

func helmRepoAdd(name, url string) error {
	return sh.RunV("helm", "repo", "add", name, url)
}

func helmRepoUpdate() error {
	return sh.RunV("helm", "repo", "update")
}

func emptyHelmVars() map[string]string {
	return map[string]string{}
}

func helmInstall(
	namespace,
	chartName,
	chartLoc string,
	vars map[string]string,
) error {
	helmArgs := []string{
		"install",
		chartName,
		chartLoc,
		"-n",
		namespace,
		"--create-namespace",
	}
	for k, v := range vars {
		// pass --set and its key=value as separate arguments; a single
		// "--set k=v" argument would not be parsed as a flag by helm
		helmArgs = append(helmArgs, "--set", fmt.Sprintf("%s=%s", k, v))
	}
	return sh.RunV("helm", helmArgs...)
}

e2e/k8s.go (new file)

@@ -0,0 +1,67 @@
package e2e

import (
	"context"
	"strconv"

	"github.com/kedacore/http-add-on/pkg/k8s"
	"github.com/magefile/mage/sh"
	"github.com/pkg/errors"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client"
	clientconfig "sigs.k8s.io/controller-runtime/pkg/client/config"
)

func getClient() (client.Client, *rest.Config, error) {
	cfg, err := clientconfig.GetConfig()
	if err != nil {
		return nil, nil, err
	}
	cl, err := client.New(cfg, client.Options{})
	if err != nil {
		return nil, nil, errors.Wrap(err, "getClient")
	}
	return cl, cfg, nil
}

func deleteNS(ns string) error {
	return sh.RunV("kubectl", "delete", "namespace", ns)
}

func getPortStrings(svc *corev1.Service) []string {
	ret := []string{}
	for _, port := range svc.Spec.Ports {
		ret = append(ret, strconv.Itoa(int(port.Port)))
	}
	return ret
}

func getScaledObject(
	ctx context.Context,
	cl client.Client,
	ns,
	name string,
) (*unstructured.Unstructured, error) {
	// build an empty ScaledObject shell (the scale params here are
	// placeholders), then fetch the live object into it
	scaledObject, err := k8s.NewScaledObject(ns, name, "", "", "", 1, 2)
	if err != nil {
		return nil, err
	}
	if err := cl.Get(ctx, k8s.ObjKey(ns, name), scaledObject); err != nil {
		return nil, err
	}
	return scaledObject, nil
}

e2e/proxy.go (new file)

@@ -0,0 +1,37 @@
package e2e

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

func makeRequestsToSvc(
	ctx context.Context,
	cfg *rest.Config,
	ns,
	svcName string,
	svcPort,
	numReqs int,
) error {
	cls, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	restCl := cls.CoreV1().RESTClient()
	makeReq := func(ctx context.Context) error {
		// GET the service through the API server's proxy subresource
		req := restCl.
			Get().
			Namespace(ns).
			Resource("services").
			Name(fmt.Sprintf("%s:%d", svcName, svcPort)).
			SubResource("proxy").
			Throttle(flowcontrol.NewFakeAlwaysRateLimiter())
		res := req.Do(ctx)
		return res.Error()
	}
	g, ctx := errgroup.WithContext(ctx)
	for i := 0; i < numReqs; i++ {
		g.Go(func() error {
			return makeReq(ctx)
		})
	}
	return g.Wait()
}
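
`makeRequestsToSvc` reaches the `Service` through the API server's `proxy` subresource - the same mechanism behind the `kubectl proxy` tips in the docs above - so the test can hit a `ClusterIP` service from outside the cluster without a port-forward. The fake rate limiter disables client-side throttling so the `errgroup` can issue all `numReqs` requests concurrently.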

e2e/setup_teardown.go (new file)

@@ -0,0 +1,52 @@
package e2e

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func setup(t *testing.T, ns string, cfg *config) {
	empty := emptyHelmVars()
	t.Helper()
	// use assert rather than require so that everything
	// gets run even if something fails
	a := assert.New(t)
	a.NoError(helmRepoAdd("kedacore", "https://kedacore.github.io/charts"))
	a.NoError(helmRepoUpdate())
	t.Logf("Installing KEDA")
	a.NoError(helmInstall(ns, "keda", "kedacore/keda", empty))
	t.Logf("Installing HTTP addon")
	a.NoError(helmInstall(ns, "http-add-on", cfg.AddonChartLocation, cfg.httpAddOnHelmVars()))
	t.Logf("Installing XKCD")
	a.NoError(helmInstall(ns, "xkcd", cfg.ExampleAppChartLocation, empty))
}

func teardown(t *testing.T, ns string) {
	t.Helper()
	// use assert rather than require so that everything
	// gets run even if something fails
	a := assert.New(t)
	t.Logf("Cleaning up")
	// always delete the charts in LIFO order
	a.NoError(helmDelete(ns, "xkcd"))
	a.NoError(helmDelete(ns, "http-add-on"))
	a.NoError(helmDelete(ns, "keda"))
	a.NoError(deleteNS(ns))
}

e2e/timeout.go (new file)

@@ -0,0 +1,20 @@
package e2e

import (
	"testing"
	"time"
)

// remainingDurInTest returns the time remaining before the test's
// deadline, def if no deadline is set, or 0 if it has already passed.
func remainingDurInTest(t *testing.T, def time.Duration) time.Duration {
	expireTime, ok := t.Deadline()
	if !ok {
		return def
	}
	if time.Now().After(expireTime) {
		return 0
	}
	return time.Until(expireTime)
}

e2e/wait.go (new file)

@@ -0,0 +1,35 @@
package e2e

import (
	"context"
	"fmt"
	"time"
)

// waitUntil calls fn repeatedly until it returns nil, or returns the
// last error once dur elapses.
func waitUntil(
	ctx context.Context,
	dur time.Duration,
	fn func(context.Context) error,
) error {
	ctx, cancel := context.WithTimeout(ctx, dur)
	defer cancel()
	lastErr := fmt.Errorf(
		"timeout after %s waiting for condition",
		dur,
	)
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf(
				"timed out after %s. last error: %w",
				dur,
				lastErr,
			)
		default:
		}
		lastErr = fn(ctx)
		if lastErr == nil {
			return nil
		}
	}
}
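
One thing to note about `waitUntil`: it re-invokes `fn` in a tight loop for as long as it keeps failing. If the condition were expensive to check, an interval-based variant could be used instead - a hypothetical sketch, assuming the same package and imports as above:

```go
// waitUntilTicking is a hypothetical interval-based variant of waitUntil.
func waitUntilTicking(
	ctx context.Context,
	dur time.Duration,
	interval time.Duration,
	fn func(context.Context) error,
) error {
	ctx, cancel := context.WithTimeout(ctx, dur)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	lastErr := fmt.Errorf("timeout after %s waiting for condition", dur)
	for {
		// check the condition, then sleep until the next tick or timeout
		if lastErr = fn(ctx); lastErr == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out after %s. last error: %w", dur, lastErr)
		case <-ticker.C:
		}
	}
}
```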

e2e/wait_deployment.go (new file)

@@ -0,0 +1,84 @@
package e2e

import (
	"context"
	"fmt"
	"time"

	"github.com/kedacore/http-add-on/pkg/k8s"
	"golang.org/x/sync/errgroup"
	appsv1 "k8s.io/api/apps/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func waitUntilDeployment(
	ctx context.Context,
	cl client.Client,
	dur time.Duration,
	ns,
	name string,
	fn func(context.Context, *appsv1.Deployment) error,
) error {
	depl := &appsv1.Deployment{}
	if err := cl.Get(
		ctx,
		k8s.ObjKey(ns, name),
		depl,
	); err != nil {
		return err
	}
	return waitUntil(ctx, dur, func(ctx context.Context) error {
		return fn(ctx, depl)
	})
}

func waitUntilDeploymentAvailable(
	ctx context.Context,
	cl client.Client,
	dur time.Duration,
	ns,
	name string,
) error {
	return waitUntilDeployment(
		ctx,
		cl,
		dur,
		ns,
		name,
		func(ctx context.Context, depl *appsv1.Deployment) error {
			if depl.Status.UnavailableReplicas == 0 {
				return nil
			}
			// report the deployment actually being waited on rather
			// than a hardcoded name
			return fmt.Errorf(
				"deployment %s still has %d unavailable replicas",
				name,
				depl.Status.UnavailableReplicas,
			)
		},
	)
}

func waitUntilDeploymentsAvailable(
	ctx context.Context,
	cl client.Client,
	dur time.Duration,
	ns string,
	names []string,
) error {
	ctx, done := context.WithTimeout(ctx, dur)
	defer done()
	g, ctx := errgroup.WithContext(ctx)
	for _, name := range names {
		n := name
		g.Go(func() error {
			return waitUntilDeploymentAvailable(ctx, cl, dur, ns, n)
		})
	}
	return g.Wait()
}

@@ -0,0 +1,12 @@
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: xkcd
spec:
  scaleTargetRef:
    deployment: xkcd
    service: xkcd
    port: 8080
  replicas:
    min: 5
    max: 10

@@ -0,0 +1,13 @@
kind: HTTPScaledObject
apiVersion: http.keda.sh/v1alpha1
metadata:
  name: xkcd
spec:
  host: myhost.com
  scaleTargetRef:
    deployment: xkcd
    service: xkcd
    port: 8080
  replicas:
    min: 5
    max: 10

@@ -3,6 +3,7 @@ apiVersion: http.keda.sh/v1alpha1
metadata:
  name: {{ include "xkcd.fullname" . }}
spec:
+  host: {{ .Values.host }}
  scaleTargetRef:
    deployment: {{ include "xkcd.fullname" . }}
    service: {{ include "xkcd.fullname" . }}

@@ -6,12 +6,13 @@ metadata:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
-  - http:
+  - host: {{ .Values.host }}
+    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
-            name: {{ include "xkcd.fullname" . }}
+            name: keda-add-ons-http-interceptor-proxy
            port:
-              number: 80
+              number: 8080

@@ -1,5 +1,6 @@
replicaCount: 1
+host: myhost.com
image:
  repository: arschles/xkcd
  pullPolicy: Always

@@ -38,7 +38,7 @@ service:
autoscaling:
  http:
-    minReplicas: 5
+    minReplicas: 0
    maxReplicas: 10
ingress:

go.mod

@@ -4,19 +4,22 @@ go 1.16
require (
	github.com/go-logr/logr v0.4.0
+	github.com/go-logr/zapr v0.4.0
	github.com/golang/protobuf v1.5.2
+	github.com/google/uuid v1.1.2
	github.com/kelseyhightower/envconfig v1.4.0
-	github.com/labstack/echo/v4 v4.5.0
	github.com/magefile/mage v1.11.0
	github.com/onsi/ginkgo v1.16.4
	github.com/onsi/gomega v1.16.0
	github.com/pkg/errors v0.9.1
	github.com/stretchr/testify v1.7.0
+	go.uber.org/zap v1.17.0
	golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
	google.golang.org/grpc v1.33.2
-	google.golang.org/protobuf v1.27.1
+	google.golang.org/protobuf v1.26.0
-	k8s.io/api v0.20.4
+	k8s.io/api v0.21.3
-	k8s.io/apimachinery v0.20.4
+	k8s.io/apimachinery v0.21.3
-	k8s.io/client-go v0.20.2
+	k8s.io/client-go v0.21.3
-	sigs.k8s.io/controller-runtime v0.8.3
+	k8s.io/component-base v0.21.3 // indirect
+	sigs.k8s.io/controller-runtime v0.9.2
)

227
go.sum
View File

@ -25,18 +25,16 @@ cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohl
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8= github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.1/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw= github.com/Azure/go-autorest/autorest v0.11.12/go.mod h1:eipySxLmqSyC5s5k1CLupqet0PSENBEDP93LQ9a8QYw=
github.com/Azure/go-autorest/autorest/adal v0.9.0/go.mod h1:/c022QCutn2P7uY+/oQWWNcK9YU+MH96NgK+jErpbcg=
github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A= github.com/Azure/go-autorest/autorest/adal v0.9.5/go.mod h1:B7KF7jKIeC9Mct5spmyCB/A8CG/sEz1vwIRGv/bbw7A=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74= github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.0/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k= github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8= github.com/Azure/go-autorest/logger v0.2.0/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU= github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ= github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU= github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0= github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE= github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
@ -44,6 +42,7 @@ github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuy
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o= github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY= github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8= github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
@ -77,12 +76,13 @@ github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfc
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA= github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY= github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ= github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no= github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE= github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk= github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@ -93,44 +93,45 @@ github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymF
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4= github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98= github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.9.0+incompatible h1:kLcOMZeuLAJvL2BPWLMIj5oaZQobrkAqrL+WFZwQses=
github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.11.0+incompatible h1:glyUF9yIYtMHzn8xaKw5rMhdWcwsYV8dZHIq5567/xs=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k= github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4= github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas= github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v0.3.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc= github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU= github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/zapr v0.2.0 h1:v6Ji8yBW77pva6NkJKQdHLAJKrIJKRHz0RXwPqCHSR4= github.com/go-logr/zapr v0.4.0 h1:uc1uML3hRYL9/ZZPdgHS/n8Nzo+eaYL/Efxkkamf7OM=
github.com/go-logr/zapr v0.2.0/go.mod h1:qhKdvif7YF5GI9NWEpyxTSSBdGmzkNguibrdCNVPunU= github.com/go-logr/zapr v0.4.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg= github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg= github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc= github.com/go-openapi/jsonreference v0.19.2/go.mod h1:jMjeRr2HHw6nAVajTXJ4eiUwohSTlpa0o73RUL1owJc=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8= github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo= github.com/go-openapi/spec v0.19.3/go.mod h1:FpwSN1ksY1eteniUU7X0N/BgJ7a4WvBFVA8Lj9mJglo=
github.com/go-openapi/spec v0.19.5/go.mod h1:Hm2Jr4jv8G1ciIAo+frC/Ft+rR2kQDh8JHKHb3gWUSk=
github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.2/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk= github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE= github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4= github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt v3.2.2+incompatible/go.mod h1:8pz2t5EyA70fFQQSrl6XZXzqecmYZeUEB8OUGHkxJ+I=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@ -166,7 +167,7 @@ github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMyw
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU= github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@ -186,8 +187,8 @@ github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg= github.com/googleapis/gnostic v0.4.1/go.mod h1:LRhVm6pbyptWbWbuZ38d1eyptfvIytN3ir6b65WBswg=
github.com/googleapis/gnostic v0.5.1 h1:A8Yhf6EtqTv9RMsU6MQTyrtV1TjWlR6xU9BsZIwuTCM= github.com/googleapis/gnostic v0.5.5 h1:9fHAtK0uDfpveeqqo1hkEZJcFvYXAiCN3UutL8F9xHw=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU= github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
@@ -222,37 +223,37 @@ github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/J
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.10 h1:6q5mVkdH/vYmqngx7kZQTjJ5HRsx+ImorDIEQ+beJgc=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10 h1:Kz6Cvnvv2wGdaG/V8yMvfkmNiXq9Ya2KUv4rouJJr68=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11 h1:uVUAXhF2To8cbw/3xN3pxj6kk7TYKs98NIrTqPlMWAQ=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/labstack/echo/v4 v4.5.0 h1:JXk6H5PAw9I3GwizqUHhYyS4f45iyGebR/c1xNCeOCY=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/labstack/echo/v4 v4.5.0/go.mod h1:czIriw4a0C1dFun+ObrXp7ok03xON0N1awStJ6ArI7Y=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/labstack/gommon v0.3.0 h1:JEeO0bvc78PKdyHxloTKiF8BD5iGrH8T6MSeGvSgob0=
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/magefile/mage v1.11.0 h1:C/55Ywp9BpgVVclD3lRnSYCwXTYxmSppIgLeDYlNuls=
github.com/magefile/mage v1.11.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
@@ -260,15 +261,8 @@ github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN
github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.7.0/go.mod h1:KAzv3t3aY1NaHWoQz1+4F1ccyAH66Jk7yos7ldAVICs=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ0s8=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHXY=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
@@ -282,7 +276,8 @@ github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS4
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20201216013528-df9cb8a40635/go.mod h1:FBS0z0QWA44HXygs7VXDUOGoN/1TV3RuWkLO04am3wc=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -292,7 +287,10 @@ github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3Rllmb
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e h1:fD57ERR4JtEqsWbfPhv4DMiApHyliiK5xCTNVSPiaAs=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
@@ -302,14 +300,14 @@ github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.1/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.2/go.mod h1:CObGmKUOKaSC0RjmoAK7tKyn4Azo5P2IWuoMnvwxz1E=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
github.com/onsi/ginkgo v1.16.4/go.mod h1:dX+/inL/fNMqNlz0e9LfyB9TswhZpCVdJM/Z6Vvnwo0=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.10.2/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/onsi/gomega v1.13.0/go.mod h1:lRk9szgn8TxENtWd0Tp4c3wjlRfMTMH27I+3Je41yGY=
github.com/onsi/gomega v1.16.0 h1:6gjqkI8iiRHMvdccRJM8rVKjCWk6ZIm6FTm3ddIe4/c=
github.com/onsi/gomega v1.16.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
@@ -326,8 +324,9 @@ github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prY
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.1 h1:NTGy1Ja9pByO+xAeH/qiWnLrKtr3hJPNjaVUwnjpdpA=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -336,14 +335,16 @@ github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6T
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0 h1:RyRA7RzGXQZiW+tGMr7sxa85G1z0yOpM1qq5c8lNawc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0 h1:iMAkS2TDoNWnKM+Kopnx/8tnEStIfpYA0ur0xQzzhMQ=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0 h1:wH4vA7pcjKuZzjF7lM8awk4fnuJO6idemZXoKnULUx4=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
@@ -354,6 +355,7 @@ github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeV
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
@@ -385,11 +387,6 @@ github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6KllzawFIhcdPw=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/valyala/fasttemplate v1.2.1 h1:TVEnxayobAdVkhQfrfes2IzOB6o+z4roRkPF52WA1u4=
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
@@ -403,19 +400,16 @@ go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0=
go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A=
go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.8.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.15.0 h1:ZZCA22JRF2gQE5FoNmhmrf7jeJJ2uhqDUNRYKm8dvmM=
go.uber.org/zap v1.17.0 h1:MTjgFu6ZLKvY6Pvaqk97GlxNBuMpV4Hy/3P6tRGlI2U=
go.uber.org/zap v1.15.0/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
@@ -425,8 +419,7 @@ golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2 h1:It14KIkyBFYkHkwZ7k45minvA9aorojkyjGk9KJ5B/w=
golang.org/x/crypto v0.0.0-20210220033148-5ea612d1eb83/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -457,8 +450,8 @@ golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0 h1:RM4zey1++hCTbCVQfnWeKs9/IEsaBLA8vTkd0WVtmH4=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.1-0.20200828183125-ce943fd02449/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -486,10 +479,9 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781 h1:DzZ89McO9/gWPsQXS/FVKAlG02ZjaQ6AlZRBimEYOd0=
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -505,6 +497,7 @@ golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -515,7 +508,6 @@ golang.org/x/sys v0.0.0-20181026203630-95b1ffbd15a5/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -525,35 +517,38 @@ golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200831180312-196b9ba8737a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201112073958-5cba982894dd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da h1:b3NXsE2LusjYGGjL5bxEVZZORm/YEFFrWFjR8eFrw/c=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40 h1:JWgyZ1qgdTaF3N3oxC+MdTV7qvEEgHo3otj+HB5CM7Q=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -566,12 +561,11 @@ golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324 h1:Hir2P/De0WpUhtrKGGjvSb2YxUgyZ7EFOSLIcSSpiwE=
golang.org/x/time v0.0.0-20210611083556-38a9dc6acbc6 h1:Vv0JUPWTyeqUq42B2WJ1FeIDjjvGKoA2Ss+Ts0lAVbs=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210611083556-38a9dc6acbc6/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
@@ -589,8 +583,6 @@ golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191108193012-7d206e10da11/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191112195655-aa38f8e97acc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -609,16 +601,18 @@ golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200616133436-c1934b75d054/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e h1:4nW4NLDYnU28ojHaHO8OVxFHk/aQ33U01a9cjED+pzE=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0 h1:po9/4sTYwZU9lPhi1tOrb4hCv3qrhiQ77LZfGa2OjwY=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gomodules.xyz/jsonpatch/v2 v2.1.0 h1:Phva6wqu+xR//Njw6iorylFFgn/z547tw5Ne3HZPQ+k=
gomodules.xyz/jsonpatch/v2 v2.2.0 h1:4pT439QV83L+G9FkcCriY6EkpcK6r6bK+A5FBUMI7qY=
gomodules.xyz/jsonpatch/v2 v2.1.0/go.mod h1:IhYNNY4jnS53ZnfE4PAmpKtDpTCj1JFXc+3mwe7XcUU=
gomodules.xyz/jsonpatch/v2 v2.2.0/go.mod h1:WXp+iVDkoLQqPudfQ9GBlwB2eZ5DKOnjQZCYdOS8GPY=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -634,8 +628,8 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
@@ -655,6 +649,7 @@ google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfG
google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a h1:pOwg4OoaRYScjmR4LlLgdtnyoHYTSAVhhqe5uPdpII8=
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -678,14 +673,14 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f h1:BLraFXnmrev5lT+xlilqcH8XK9/i0At2xKjWk4p6zsU=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
@@ -707,54 +702,54 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776 h1:tQIYjPdBoyREyB9XMu+nnTclpTYkz2zFM+lzLJFO4gQ=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3 h1:sXmLre5bzIR6ypkjXCDI3jHPssRhc8KD/Ome589sc3U=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo=
k8s.io/api v0.21.2/go.mod h1:Lv6UGJZ1rlMI1qusN8ruAp9PUBFyBwpEHAdG24vIsiU=
k8s.io/api v0.20.2/go.mod h1:d7n6Ehyzx+S+cE3VhTGfVNNqtGc/oL9DCdYYahlurV8=
k8s.io/api v0.21.3 h1:cblWILbLO8ar+Fj6xdDGr603HRsf8Wu9E9rngJeprZQ=
k8s.io/api v0.20.4 h1:xZjKidCirayzX6tHONRQyTNDVIR55TYVqgATqo6ZULY=
k8s.io/api v0.21.3/go.mod h1:hUgeYHUbBp23Ue4qdX9tR8/ANi/g3ehylAqDn9NWVOg=
k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ=
k8s.io/apiextensions-apiserver v0.21.2 h1:+exKMRep4pDrphEafRvpEi79wTnCFMqKf8LBtlA3yrE=
k8s.io/apiextensions-apiserver v0.20.1 h1:ZrXQeslal+6zKM/HjDXLzThlz/vPSxrfK3OqL8txgVQ=
k8s.io/apiextensions-apiserver v0.21.2/go.mod h1:+Axoz5/l3AYpGLlhJDfcVQzCerVYq3K3CvDMvw6X1RA=
k8s.io/apiextensions-apiserver v0.20.1/go.mod h1:ntnrZV+6a3dB504qwC5PN/Yg9PBiDNt1EVqbW2kORVk=
k8s.io/apimachinery v0.21.2/go.mod h1:CdTY8fU/BlvAbJ2z/8kBwimGki5Zp8/fbVuLY8gJumM=
k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.21.3 h1:3Ju4nvjCngxxMYby0BimUk+pQHPOQp3eCGChk5kfVII=
k8s.io/apimachinery v0.20.2/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.21.3/go.mod h1:H/IM+5vH9kZRNJ4l3x/fXP/5bOPJaVP/guptnZPeCFI=
k8s.io/apimachinery v0.20.4 h1:vhxQ0PPUUU2Ns1b9r4/UFp13UPs8cw2iOoTjnY9faa0=
k8s.io/apiserver v0.21.2/go.mod h1:lN4yBoGyiNT7SC1dmNk0ue6a5Wi6O3SWOIw91TsucQw=
k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/client-go v0.21.2/go.mod h1:HdJ9iknWpbl3vMGtib6T2PyI/VYxiZfq936WNVHBRrA=
k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
k8s.io/client-go v0.21.3 h1:J9nxZTOmvkInRDCzcSNQmPJbDYN/PjlxXT9Mos3HcLg=
k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
k8s.io/client-go v0.21.3/go.mod h1:+VPhCgTsaFmGILxR/7E1N0S+ryO010QBeNCv5JwRGYU=
k8s.io/client-go v0.20.2 h1:uuf+iIAbfnCSw8IGAv/Rg0giM+2bOzHLOsbbrwrdhNQ=
k8s.io/code-generator v0.21.2/go.mod h1:8mXJDCB7HcRo1xiEQstcguZkbxZaqeUOrO9SsicWs3U=
k8s.io/client-go v0.20.2/go.mod h1:kH5brqWqp7HDxUFKoEgiI4v8G1xzbe9giaCenUWJzgE=
k8s.io/component-base v0.21.2/go.mod h1:9lvmIThzdlrJj5Hp8Z/TOgIkdfsNARQ1pT+3PByuiuc=
k8s.io/code-generator v0.20.1/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg=
k8s.io/component-base v0.21.3 h1:4WuuXY3Npa+iFfi2aDRiOz+anhNvRfye0859ZgfC5Og=
k8s.io/component-base v0.20.1/go.mod h1:guxkoJnNoh8LNrbtiQOlyp2Y2XFCZQmrcg2n/DeYNLk=
k8s.io/component-base v0.21.3/go.mod h1:kkuhtfEHeZM6LkX0saqSK8PbdO7A0HigUngmhhrwfGQ=
k8s.io/component-base v0.20.2 h1:LMmu5I0pLtwjpp5009KLuMGFqSc2S2isGw8t1hpYKLE=
k8s.io/component-base v0.20.2/go.mod h1:pzFtCiwe/ASD0iV7ySMu8SYVJjCapNM9bjvk7ptpKh0=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo v0.0.0-20201214224949-b6c5ce23f027/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.4.0 h1:7+X0fUguPyrKEC4WjH8iGDg3laWgMo5tMnRTIGTTxGQ=
k8s.io/klog/v2 v2.8.0 h1:Q3gmuM9hKEjefWFFYF0Mat+YyFJvsUyYuwyNNJ5C9Ts=
k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/klog/v2 v2.8.0/go.mod h1:hy9LJ/NvuK+iVyP4Ehqva4HxZG/oXyIS3n3Jmire4Ec=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd h1:sOHNzJIkytDF6qadMNKhhDRpc6ODik8lVC6nOur7B2c=
k8s.io/kube-openapi v0.0.0-20210305001622-591a79e4bda7 h1:vEx13qjvaZ4yfObSSXW7BrMc/KQBBT/Jyee8XtLf4x0=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kube-openapi v0.0.0-20210305001622-591a79e4bda7/go.mod h1:wXW5VT87nVfh/iLV8FpR2uDvrFyomxbtb1KivDbvPTE=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210111153108-fddb29f9d009 h1:0T5IaWHO3sJTEmCP6mUlBvMukxPKUQWqiI/YuiBNMiQ= k8s.io/utils v0.0.0-20210527160623-6fdb442a123b h1:MSqsVQ3pZvPGTqCjptfimO2WjG7A9un2zcpiHkA6M/s=
k8s.io/utils v0.0.0-20210111153108-fddb29f9d009/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20210527160623-6fdb442a123b/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg= sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.19/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg=
sigs.k8s.io/controller-runtime v0.8.3 h1:GMHvzjTmaWHQB8HadW+dIvBoJuLvZObYJ5YoZruPRao= sigs.k8s.io/controller-runtime v0.9.2 h1:MnCAsopQno6+hI9SgJHKddzXpmv2wtouZz6931Eax+Q=
sigs.k8s.io/controller-runtime v0.8.3/go.mod h1:U/l+DUopBc1ecfRZ5aviA9JDmGFQKvLf5YkZNx2e0sU= sigs.k8s.io/controller-runtime v0.9.2/go.mod h1:TxzMCHyEUpaeuOiZx/bIdc2T81vfs/aKdvJt9wuu0zk=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2 h1:YHQV7Dajm86OuqnIR6zAelnDWBRjo+YhYV9PmGrh1s8=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw= sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.0/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2 h1:Hr/htKFmJEbtMgS/UD0N+gtgctAqz81t3nu+sPzynno=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2/go.mod h1:j/nl6xW8vLS49O8YvXW1ocPhZawJtm+Yrr7PPRQ0Vg4=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o= sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q= sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc= sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=

View File

@@ -6,7 +6,7 @@ ARG GOOS=linux
 FROM golang:${GOLANG_VERSION}-alpine AS builder
-WORKDIR $GOPATH/src/github.com/kedahttp/http-add-on
+WORKDIR $GOPATH/src/github.com/kedacore/http-add-on
 COPY go.mod go.mod
 COPY go.sum go.sum

View File

@@ -1,25 +0,0 @@
package main
import (
"log"
"github.com/kedacore/http-add-on/pkg/http"
echo "github.com/labstack/echo/v4"
)
// newQueueSizeHandler returns an echo handler that responds with the
// current size of the pending request queue, read from the given
// http.QueueCountReader. It's intended to be deployed and scaled
// alongside the application itself
func newQueueSizeHandler(q http.QueueCountReader) echo.HandlerFunc {
return func(c echo.Context) error {
cur, err := q.Current()
if err != nil {
log.Printf("Error getting queue size (%s)", err)
c.Error(err)
return err
}
return c.JSON(200, map[string]int{
"current_size": cur,
})
}
}

View File

@@ -1,46 +0,0 @@
package main
import (
"encoding/json"
"errors"
"testing"
"github.com/stretchr/testify/require"
)
func TestQueueSizeHandlerSuccess(t *testing.T) {
r := require.New(t)
reader := &fakeQueueCountReader{
current: 123,
err: nil,
}
handler := newQueueSizeHandler(reader)
_, echoCtx, rec := newTestCtx("GET", "/queue")
err := handler(echoCtx)
r.NoError(err)
r.Equal(200, rec.Code, "response code")
respMap := map[string]int{}
decodeErr := json.NewDecoder(rec.Body).Decode(&respMap)
r.NoError(decodeErr)
r.Equalf(1, len(respMap), "response JSON length was not 1")
sizeVal, ok := respMap["current_size"]
r.Truef(ok, "'current_size' entry not available in return JSON")
r.Equalf(reader.current, sizeVal, "returned JSON queue size was wrong")
reader.err = errors.New("test error")
r.Error(handler(echoCtx))
}
func TestQueueSizeHandlerFail(t *testing.T) {
r := require.New(t)
reader := &fakeQueueCountReader{
current: 0,
err: errors.New("test error"),
}
handler := newQueueSizeHandler(reader)
_, echoCtx, rec := newTestCtx("GET", "/queue")
err := handler(echoCtx)
r.Error(err)
r.Equal(500, rec.Code, "response code")
}

View File

@@ -1,38 +0,0 @@
package config
import (
"fmt"
"net/url"
"github.com/kelseyhightower/envconfig"
)
// Origin is the configuration for where and how the proxy forwards
// requests to a backing Kubernetes service
type Origin struct {
// AppServiceName is the name of the service that fronts the user's app
AppServiceName string `envconfig:"KEDA_HTTP_APP_SERVICE_NAME" required:"true"`
// AppServicePort is the port that the proxy should forward to
AppServicePort string `envconfig:"KEDA_HTTP_APP_SERVICE_PORT" required:"true"`
// TargetDeploymentName is the name of the backing deployment that the interceptor
// should forward to
TargetDeploymentName string `envconfig:"KEDA_HTTP_TARGET_DEPLOYMENT_NAME" required:"true"`
// Namespace is the namespace that this interceptor is running in
Namespace string `envconfig:"KEDA_HTTP_NAMESPACE" required:"true"`
}
// ServiceURL formats the app service name and port into a URL
func (o *Origin) ServiceURL() (*url.URL, error) {
urlStr := fmt.Sprintf("http://%s:%s", o.AppServiceName, o.AppServicePort)
u, err := url.Parse(urlStr)
if err != nil {
return nil, err
}
return u, nil
}
func MustParseOrigin() *Origin {
ret := new(Origin)
envconfig.MustProcess("", ret)
return ret
}

View File

@@ -7,12 +7,29 @@ import (
 // Serving is configuration for how the interceptor serves the proxy
 // and admin server
 type Serving struct {
+	// CurrentNamespace is the namespace that the interceptor is
+	// currently running in
+	CurrentNamespace string `envconfig:"KEDA_HTTP_CURRENT_NAMESPACE" required:"true"`
 	// ProxyPort is the port that the public proxy should run on
 	ProxyPort int `envconfig:"KEDA_HTTP_PROXY_PORT" required:"true"`
 	// AdminPort is the port that the internal admin server should run on.
 	// This is the server that the external scaler will issue metrics
 	// requests to
 	AdminPort int `envconfig:"KEDA_HTTP_ADMIN_PORT" required:"true"`
+	// RoutingTableUpdateDurationMS is the interval (in milliseconds) representing how
+	// often to do a complete update of the routing table ConfigMap.
+	//
+	// The interceptor will also open a watch stream to the routing table
+	// ConfigMap and attempt to update the routing table on every update.
+	//
+	// Since it does full updates alongside watch stream updates, it can
+	// only process one at a time. Therefore, this is a best-effort interval.
+	RoutingTableUpdateDurationMS int `envconfig:"KEDA_HTTP_ROUTING_TABLE_UPDATE_DURATION_MS" default:"500"`
+	// The interceptor has an internal process that periodically fetches the state
+	// of the deployment that is running the servers it forwards to.
+	//
+	// This is the interval (in milliseconds) representing how often to do a fetch.
+	DeploymentCachePollIntervalMS int `envconfig:"KEDA_HTTP_DEPLOYMENT_CACHE_POLLING_INTERVAL_MS" default:"250"`
 }

 // Parse parses standard configs using envconfig and returns a pointer to the
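For orientation, here is a minimal sketch (not part of the diff, env values made up) of how envconfig populates the two new interval fields and applies their defaults when the variables are unset:

package main

import (
	"fmt"
	"os"
	"time"

	"github.com/kelseyhightower/envconfig"
)

// mirrors only the two new interval fields from the Serving struct above
type servingIntervals struct {
	RoutingTableUpdateDurationMS  int `envconfig:"KEDA_HTTP_ROUTING_TABLE_UPDATE_DURATION_MS" default:"500"`
	DeploymentCachePollIntervalMS int `envconfig:"KEDA_HTTP_DEPLOYMENT_CACHE_POLLING_INTERVAL_MS" default:"250"`
}

func main() {
	os.Setenv("KEDA_HTTP_ROUTING_TABLE_UPDATE_DURATION_MS", "1000")
	var s servingIntervals
	envconfig.MustProcess("", &s)
	// the fields arrive as plain ints; callers convert them to durations
	fmt.Println(time.Duration(s.RoutingTableUpdateDurationMS) * time.Millisecond)  // 1s (from the env)
	fmt.Println(time.Duration(s.DeploymentCachePollIntervalMS) * time.Millisecond) // 250ms (the default)
}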

View File

@@ -1,40 +1,35 @@
 package main

 import (
+	"context"
 	"fmt"
 	"log"
-	"time"

 	"github.com/kedacore/http-add-on/pkg/k8s"
 	appsv1 "k8s.io/api/apps/v1"
 )

-type forwardWaitFunc func() error
+type forwardWaitFunc func(context.Context, string) error

 func newDeployReplicasForwardWaitFunc(
 	deployCache k8s.DeploymentCache,
-	deployName string,
-	totalWait time.Duration,
 ) forwardWaitFunc {
-	return func() error {
+	return func(ctx context.Context, deployName string) error {
 		deployment, err := deployCache.Get(deployName)
 		if err != nil {
 			// if we didn't get the initial deployment state, bail out
-			return fmt.Errorf("Error getting state for deployment %s (%s)", deployName, err)
+			return fmt.Errorf("error getting state for deployment %s (%s)", deployName, err)
 		}
 		// if there is 1 or more replica, we're done waiting
-		if moreThanPtr(deployment.Spec.Replicas, 0) {
+		if deployment.Status.ReadyReplicas > 0 {
 			return nil
 		}
 		watcher := deployCache.Watch(deployName)
 		if err != nil {
-			return fmt.Errorf("Error getting the stream of deployment changes")
+			return fmt.Errorf("error getting the stream of deployment changes")
 		}
 		defer watcher.Stop()
 		eventCh := watcher.ResultChan()
-		timer := time.NewTimer(totalWait)
-		defer timer.Stop()
 		for {
 			select {
 			case event := <-eventCh:
@@ -42,12 +37,17 @@ func newDeployReplicasForwardWaitFunc(
 				if !ok {
 					log.Println("Didn't get a deployment back in event")
 				}
-				if moreThanPtr(deployment.Spec.Replicas, 0) {
+				if deployment.Status.ReadyReplicas > 0 {
 					return nil
 				}
-			case <-timer.C:
-				// otherwise, if we hit the end of the timeout, fail
-				return fmt.Errorf("Timeout expired waiting for deployment %s to reach > 0 replicas", deployName)
+			case <-ctx.Done():
+				// otherwise, if the context is marked done before
+				// we're done waiting, fail.
+				return fmt.Errorf(
+					"context marked done while waiting for deployment %s to reach > 0 replicas (%w)",
+					deployName,
+					ctx.Err(),
+				)
 			}
 		}
 	}
 }
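The caller now owns the deadline. A minimal sketch of driving the new signature (deployCache stands in for any k8s.DeploymentCache, as in the main.go changes later in this diff):

	waitFunc := newDeployReplicasForwardWaitFunc(deployCache)

	// the timeout that used to be the totalWait argument now comes
	// from the caller's context
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	if err := waitFunc(ctx, "myapp"); err != nil {
		// the deployment never reported a ready replica before the
		// deadline; don't forward the request
		log.Printf("wait failed: %v", err)
	}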

View File

@@ -16,11 +16,14 @@ import (
 // Test to make sure the wait function returns a nil error if there is immediately
 // one replica on the target deployment
 func TestForwardWaitFuncOneReplica(t *testing.T) {
+	ctx := context.Background()
+	const waitFuncWait = 1 * time.Second
 	r := require.New(t)
 	const ns = "testNS"
 	const deployName = "TestForwardingHandlerDeploy"
-	cache := k8s.NewMemoryDeploymentCache(map[string]*appsv1.Deployment{
-		deployName: k8s.NewDeployment(
+	cache := k8s.NewMemoryDeploymentCache(map[string]appsv1.Deployment{
+		deployName: *newDeployment(
 			ns,
 			deployName,
 			"myimage",
@@ -30,26 +33,30 @@ func TestForwardWaitFuncOneReplica(t *testing.T) {
 			corev1.PullAlways,
 		),
 	})
+	ctx, done := context.WithTimeout(ctx, waitFuncWait)
+	defer done()
+	group, ctx := errgroup.WithContext(ctx)
 	waitFunc := newDeployReplicasForwardWaitFunc(
 		cache,
-		deployName,
-		1*time.Second,
 	)
-	ctx, done := context.WithTimeout(context.Background(), 100*time.Millisecond)
-	defer done()
-	group, _ := errgroup.WithContext(ctx)
-	group.Go(waitFunc)
-	r.NoError(group.Wait())
+	group.Go(func() error {
+		return waitFunc(ctx, deployName)
+	})
+	r.NoError(group.Wait(), "wait function failed, but it shouldn't have")
 }

 // Test to make sure the wait function returns an error if there are no replicas, and that doesn't change
 // within a timeout
 func TestForwardWaitFuncNoReplicas(t *testing.T) {
+	ctx := context.Background()
+	const waitFuncWait = 1 * time.Second
 	r := require.New(t)
 	const ns = "testNS"
 	const deployName = "TestForwardingHandlerHoldsDeployment"
-	deployment := k8s.NewDeployment(
+	deployment := newDeployment(
 		ns,
 		deployName,
 		"myimage",
@@ -58,28 +65,29 @@ func TestForwardWaitFuncNoReplicas(t *testing.T) {
 		map[string]string{},
 		corev1.PullAlways,
 	)
-	deployment.Spec.Replicas = k8s.Int32P(0)
-	cache := k8s.NewMemoryDeploymentCache(map[string]*appsv1.Deployment{
-		deployName: deployment,
+	deployment.Status.ReadyReplicas = 0
+	cache := k8s.NewMemoryDeploymentCache(map[string]appsv1.Deployment{
+		deployName: *deployment,
 	})
+	ctx, done := context.WithTimeout(ctx, waitFuncWait)
+	defer done()
 	waitFunc := newDeployReplicasForwardWaitFunc(
 		cache,
-		deployName,
-		1*time.Second,
 	)
-	err := waitFunc()
+	err := waitFunc(ctx, deployName)
 	r.Error(err)
 }

 func TestWaitFuncWaitsUntilReplicas(t *testing.T) {
+	ctx := context.Background()
 	r := require.New(t)
 	totalWaitDur := 500 * time.Millisecond
 	const ns = "testNS"
 	const deployName = "TestForwardingHandlerHoldsDeployment"
-	deployment := k8s.NewDeployment(
+	deployment := newDeployment(
 		ns,
 		deployName,
 		"myimage",
@@ -89,10 +97,14 @@ func TestWaitFuncWaitsUntilReplicas(t *testing.T) {
 		corev1.PullAlways,
 	)
 	deployment.Spec.Replicas = k8s.Int32P(0)
-	cache := k8s.NewMemoryDeploymentCache(map[string]*appsv1.Deployment{
-		deployName: deployment,
+	cache := k8s.NewMemoryDeploymentCache(map[string]appsv1.Deployment{
+		deployName: *deployment,
 	})
-	waitFunc := newDeployReplicasForwardWaitFunc(cache, deployName, totalWaitDur)
+	ctx, done := context.WithTimeout(ctx, totalWaitDur)
+	defer done()
+	waitFunc := newDeployReplicasForwardWaitFunc(
+		cache,
+	)
 	// this channel will be closed immediately after the replicas were increased
 	replicasIncreasedCh := make(chan struct{})
 	go func() {
@@ -105,5 +117,5 @@ func TestWaitFuncWaitsUntilReplicas(t *testing.T) {
 		watcher.Action(watch.Modified, modifiedDeployment)
 		close(replicasIncreasedCh)
 	}()
-	r.NoError(waitFunc())
+	r.NoError(waitFunc(ctx, deployName))
 }

View File

@@ -1,36 +0,0 @@
package main
import (
"net/http/httptest"
echo "github.com/labstack/echo/v4"
)
func newTestCtx(method, path string) (*echo.Echo, echo.Context, *httptest.ResponseRecorder) {
req := httptest.NewRequest(method, path, nil)
rec := httptest.NewRecorder()
e := echo.New()
return e, e.NewContext(req, rec), rec
}
type fakeQueueCounter struct {
resizedCh chan int
}
func (f *fakeQueueCounter) Resize(i int) error {
f.resizedCh <- i
return nil
}
func (f *fakeQueueCounter) Current() (int, error) {
return 0, nil
}
type fakeQueueCountReader struct {
current int
err error
}
func (f *fakeQueueCountReader) Current() (int, error) {
return f.current, f.err
}

View File

@@ -0,0 +1,66 @@
package main
import (
"github.com/kedacore/http-add-on/pkg/k8s"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// newDeployment creates a new deployment object
// with the given name and the given image. This does not actually create
// the deployment in the cluster, it just creates the deployment object
// in memory
func newDeployment(
namespace,
name,
image string,
ports []int32,
env []corev1.EnvVar,
labels map[string]string,
pullPolicy corev1.PullPolicy,
) *appsv1.Deployment {
containerPorts := make([]corev1.ContainerPort, len(ports))
for i, port := range ports {
containerPorts[i] = corev1.ContainerPort{
ContainerPort: port,
}
}
deployment := &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: labels,
},
Spec: appsv1.DeploymentSpec{
Selector: &metav1.LabelSelector{
MatchLabels: labels,
},
Replicas: k8s.Int32P(1),
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: labels,
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Image: image,
Name: name,
ImagePullPolicy: pullPolicy,
Ports: containerPorts,
Env: env,
},
},
},
},
},
Status: appsv1.DeploymentStatus{
ReadyReplicas: 1,
},
}
return deployment
}
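A usage note (not in the diff): the returned object is pre-marked with one ready replica, so tests that need a scaled-to-zero deployment flip the status by hand, as the wait-func tests above do. For example:

	d := newDeployment(
		"testNS",                          // namespace
		"myapp",                           // name
		"myimage",                         // image
		[]int32{8080},                     // container ports
		nil,                               // env
		map[string]string{"app": "myapp"}, // labels
		corev1.PullAlways,
	)
	// simulate scaled-to-zero: the wait function looks only at this field
	d.Status.ReadyReplicas = 0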

View File

@@ -2,19 +2,22 @@ package main

 import (
 	"context"
+	"encoding/json"
 	"fmt"
-	"log"
 	"math/rand"
 	nethttp "net/http"
-	"net/url"
+	"os"
 	"time"

+	"github.com/go-logr/logr"
 	"github.com/kedacore/http-add-on/interceptor/config"
-	"github.com/kedacore/http-add-on/pkg/http"
+	kedahttp "github.com/kedacore/http-add-on/pkg/http"
 	"github.com/kedacore/http-add-on/pkg/k8s"
+	pkglog "github.com/kedacore/http-add-on/pkg/log"
 	kedanet "github.com/kedacore/http-add-on/pkg/net"
-	echo "github.com/labstack/echo/v4"
+	"github.com/kedacore/http-add-on/pkg/queue"
+	"github.com/kedacore/http-add-on/pkg/routing"
+	"golang.org/x/sync/errgroup"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/rest"
 )
@@ -24,89 +27,216 @@ func init() {
 }

 func main() {
+	lggr, err := pkglog.NewZapr()
+	if err != nil {
+		fmt.Println("Error building logger", err)
+		os.Exit(1)
+	}
 	timeoutCfg := config.MustParseTimeouts()
-	originCfg := config.MustParseOrigin()
 	servingCfg := config.MustParseServing()
-	ctx := context.Background()
-	deployName := originCfg.TargetDeploymentName
-	ns := originCfg.Namespace
+	ctx, ctxDone := context.WithCancel(
+		context.Background(),
+	)
 	proxyPort := servingCfg.ProxyPort
 	adminPort := servingCfg.AdminPort
-	svcURL, err := originCfg.ServiceURL()
-	if err != nil {
-		log.Fatalf("Invalid origin service URL: %s", err)
-	}
-	q := http.NewMemoryQueue()

 	cfg, err := rest.InClusterConfig()
 	if err != nil {
-		log.Fatalf("Kubernetes client config not found (%s)", err)
+		lggr.Error(err, "Kubernetes client config not found")
+		os.Exit(1)
 	}
 	cl, err := kubernetes.NewForConfig(cfg)
 	if err != nil {
-		log.Fatalf("Error creating new Kubernetes ClientSet (%s)", err)
+		lggr.Error(err, "creating new Kubernetes ClientSet")
+		os.Exit(1)
 	}
-	deployInterface := cl.AppsV1().Deployments(ns)
+	deployInterface := cl.AppsV1().Deployments(
+		servingCfg.CurrentNamespace,
+	)
 	deployCache, err := k8s.NewK8sDeploymentCache(
 		ctx,
+		lggr,
 		deployInterface,
 	)
 	if err != nil {
-		log.Fatalf("Error creating new deployment cache (%s)", err)
+		lggr.Error(err, "creating new deployment cache")
+		os.Exit(1)
 	}
-	waitFunc := newDeployReplicasForwardWaitFunc(deployCache, deployName, 1*time.Second)
-	log.Printf(
-		"Interceptor started, forwarding to service %s:%s, watching deployment %s",
-		originCfg.AppServiceName,
-		originCfg.AppServicePort,
-		originCfg.TargetDeploymentName,
-	)
+	configMapsInterface := cl.CoreV1().ConfigMaps(servingCfg.CurrentNamespace)

-	go runAdminServer(q, adminPort)
+	waitFunc := newDeployReplicasForwardWaitFunc(deployCache)

-	go runProxyServer(
-		q,
-		deployName,
-		waitFunc,
-		svcURL,
-		timeoutCfg,
-		proxyPort,
-	)
+	lggr.Info("Interceptor starting")

-	select {}
+	q := queue.NewMemory()
+	routingTable := routing.NewTable()
+	lggr.Info(
+		"Fetching initial routing table",
+	)
+	if err := routing.GetTable(
+		ctx,
+		lggr,
+		configMapsInterface,
+		routingTable,
+		q,
+	); err != nil {
+		lggr.Error(err, "fetching routing table")
+		os.Exit(1)
+	}
+
+	errGrp, ctx := errgroup.WithContext(ctx)
+
+	// start the deployment cache updater
+	errGrp.Go(func() error {
+		defer ctxDone()
+		err := deployCache.StartWatcher(
+			ctx,
+			lggr,
+			time.Duration(servingCfg.DeploymentCachePollIntervalMS)*time.Millisecond,
+		)
+		lggr.Error(err, "deployment cache watcher failed")
+		return err
+	})
+
+	// start the update loop that updates the routing table from
+	// the ConfigMap that the operator updates as HTTPScaledObjects
+	// enter and exit the system
+	errGrp.Go(func() error {
+		defer ctxDone()
+		err := routing.StartConfigMapRoutingTableUpdater(
+			ctx,
+			lggr,
+			time.Duration(servingCfg.RoutingTableUpdateDurationMS)*time.Millisecond,
+			configMapsInterface,
+			routingTable,
+			q,
+		)
+		lggr.Error(err, "config map routing table updater failed")
+		return err
+	})
+
+	// start the administrative server. this is the server
+	// that serves the queue size API
+	errGrp.Go(func() error {
+		defer ctxDone()
+		lggr.Info(
+			"starting the admin server",
+			"port",
+			adminPort,
+		)
+		err := runAdminServer(
+			ctx,
+			lggr,
+			configMapsInterface,
+			q,
+			routingTable,
+			deployCache,
+			adminPort,
+		)
+		lggr.Error(err, "admin server failed")
+		return err
+	})
+
+	// start the proxy server. this is the server that
+	// accepts, holds and forwards user requests
+	errGrp.Go(func() error {
+		defer ctxDone()
+		lggr.Info(
+			"starting the proxy server",
+			"port",
+			proxyPort,
+		)
+		err := runProxyServer(
+			ctx,
+			lggr,
+			q,
+			waitFunc,
+			routingTable,
+			timeoutCfg,
+			proxyPort,
+		)
+		lggr.Error(err, "proxy server failed")
+		return err
+	})
+
+	// errGrp.Wait() should hang forever for healthy admin and proxy servers.
+	// if it returns an error, log and exit immediately.
+	waitErr := errGrp.Wait()
+	lggr.Error(waitErr, "error with interceptor")
+	os.Exit(1)
 }

-func runAdminServer(q http.QueueCountReader, port int) {
-	adminServer := echo.New()
-	adminServer.GET("/queue", newQueueSizeHandler(q))
+func runAdminServer(
+	ctx context.Context,
+	lggr logr.Logger,
+	cmGetter k8s.ConfigMapGetter,
+	q queue.Counter,
+	routingTable *routing.Table,
+	deployCache k8s.DeploymentCache,
+	port int,
+) error {
+	lggr = lggr.WithName("runAdminServer")
+	adminServer := nethttp.NewServeMux()
+	queue.AddCountsRoute(
+		lggr,
+		adminServer,
+		q,
+	)
+	routing.AddFetchRoute(
+		lggr,
+		adminServer,
+		routingTable,
+	)
+	routing.AddPingRoute(
+		lggr,
+		adminServer,
+		cmGetter,
+		routingTable,
+		q,
+	)
+	adminServer.HandleFunc(
+		"/deployments",
+		func(w nethttp.ResponseWriter, r *nethttp.Request) {
+			if err := json.NewEncoder(w).Encode(deployCache); err != nil {
+				lggr.Error(err, "encoding deployment cache")
+			}
+		},
+	)

 	addr := fmt.Sprintf("0.0.0.0:%d", port)
-	log.Printf("admin server running on %s", addr)
-	log.Fatal(adminServer.Start(addr))
+	lggr.Info("admin server starting", "address", addr)
+	return kedahttp.ServeContext(ctx, addr, adminServer)
 }

 func runProxyServer(
-	q http.QueueCounter,
-	targetDeployName string,
+	ctx context.Context,
+	lggr logr.Logger,
+	q queue.Counter,
 	waitFunc forwardWaitFunc,
-	svcURL *url.URL,
+	routingTable *routing.Table,
 	timeouts *config.Timeouts,
 	port int,
-) {
+) error {
+	lggr = lggr.WithName("runProxyServer")
 	dialer := kedanet.NewNetDialer(timeouts.Connect, timeouts.KeepAlive)
 	dialContextFunc := kedanet.DialContextWithRetry(dialer, timeouts.DefaultBackoff())
-	proxyHdl := newForwardingHandler(
-		svcURL,
-		dialContextFunc,
-		waitFunc,
-		timeouts.DeploymentReplicas,
-		timeouts.ResponseHeader,
+	proxyHdl := countMiddleware(
+		lggr,
+		q,
+		newForwardingHandler(
+			lggr,
+			routingTable,
+			dialContextFunc,
+			waitFunc,
+			timeouts.DeploymentReplicas,
+			timeouts.ResponseHeader,
+		),
 	)
 	addr := fmt.Sprintf("0.0.0.0:%d", port)
-	log.Printf("proxy server starting on %s", addr)
-	nethttp.ListenAndServe(addr, countMiddleware(q, proxyHdl))
+	lggr.Info("proxy server starting", "address", addr)
+	return kedahttp.ServeContext(ctx, addr, proxyHdl)
 }
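kedahttp.ServeContext is not shown in this diff. A plausible shape for it, assuming it only ties the server's lifetime to the context so that one failing component (via the errgroup above) shuts down the others, would be something like this sketch, not the add-on's actual source:

	// hypothetical sketch of pkg/http's ServeContext
	func ServeContext(ctx context.Context, addr string, hdl nethttp.Handler) error {
		srv := &nethttp.Server{Addr: addr, Handler: hdl}
		go func() {
			// when the shared context is cancelled, stop accepting traffic
			<-ctx.Done()
			srv.Shutdown(context.Background())
		}()
		return srv.ListenAndServe()
	}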

interceptor/main_test.go (new file)
View File

@@ -0,0 +1,20 @@
package main
import (
"testing"
)
func TestRunProxyServerCountMiddleware(t *testing.T) {
// r := require.New(t)
// ctx, done := context.WithCancel(
// context.Background(),
// )
// defer done()
// r.NoError(runProxyServer(ctx, logr.Discard(), q, waitFunc, routingTable, timeouts, port))
// see https://github.com/kedacore/http-add-on/issues/245
}
func TestRunAdminServerDeploymentsEndpoint(t *testing.T) {
// see https://github.com/kedacore/http-add-on/issues/245
}

View File

@@ -1,28 +1,47 @@
 package main

 import (
+	"fmt"
 	"log"
 	nethttp "net/http"

-	"github.com/kedacore/http-add-on/pkg/http"
+	"github.com/go-logr/logr"
+	"github.com/kedacore/http-add-on/pkg/queue"
 )

-// countMiddleware takes que MemoryQueue previously initiated and increments the
-// size of it before sending the request to the original app, after the request
-// is finished, it decrements the queue size
-func countMiddleware(q http.QueueCounter, next nethttp.Handler) nethttp.Handler {
+func getHost(r *nethttp.Request) (string, error) {
+	// check the host header first, then the request host
+	// field (which may contain the actual URL if there is no
+	// host header)
+	if r.Header.Get("Host") != "" {
+		return r.Header.Get("Host"), nil
+	}
+	if r.Host != "" {
+		return r.Host, nil
+	}
+	return "", fmt.Errorf("host not found")
+}
+
+// countMiddleware adds 1 to the given queue counter, executes next
+// (by calling ServeHTTP on it), then decrements the queue counter
+func countMiddleware(
+	lggr logr.Logger,
+	q queue.Counter,
+	next nethttp.Handler,
+) nethttp.Handler {
 	return nethttp.HandlerFunc(func(w nethttp.ResponseWriter, r *nethttp.Request) {
-		// TODO: need to figure out a way to get the increment
-		// to happen before fn(w, r) happens below. otherwise,
-		// the counter won't get incremented right away and the actual
-		// handler will hang longer than it needs to
-		go func() {
-			if err := q.Resize(+1); err != nil {
-				log.Printf("Error incrementing queue for %q (%s)", r.RequestURI, err)
-			}
-		}()
+		host, err := getHost(r)
+		if err != nil {
+			lggr.Error(err, "not forwarding request")
+			w.WriteHeader(400)
+			w.Write([]byte("Host not found, not forwarding request"))
+			return
+		}
+		if err := q.Resize(host, +1); err != nil {
+			log.Printf("Error incrementing queue for %q (%s)", r.RequestURI, err)
+		}
 		defer func() {
-			if err := q.Resize(-1); err != nil {
+			if err := q.Resize(host, -1); err != nil {
 				log.Printf("Error decrementing queue for %q (%s)", r.RequestURI, err)
 			}
 		}()
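To make getHost's precedence concrete, here is a hand-built sketch mirroring how the tests below drive it (on a real server Go promotes the incoming Host header into r.Host, so the explicit header check mostly matters for hand-constructed requests):

	req, _ := nethttp.NewRequest("GET", "http://example.com/something", nil)
	fmt.Println(getHost(req)) // "example.com" <nil>: taken from req.Host, parsed out of the URL

	req.Header.Set("Host", "override.test")
	fmt.Println(getHost(req)) // "override.test" <nil>: the header is checked first

	bare, _ := nethttp.NewRequest("GET", "/something", nil)
	fmt.Println(getHost(bare)) // "" host not found: the middleware responds 400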

View File

@@ -1,65 +1,125 @@
 package main

 import (
+	"context"
+	"fmt"
 	"math"
 	"net/http"
 	"net/http/httptest"
-	"sync"
 	"testing"
 	"time"

+	"github.com/go-logr/logr"
+	"github.com/kedacore/http-add-on/pkg/queue"
 	"github.com/stretchr/testify/require"
+	"golang.org/x/sync/errgroup"
 )

 func TestCountMiddleware(t *testing.T) {
+	ctx := context.Background()
+	const host = "testingkeda.com"
 	r := require.New(t)
-	queueCounter := &fakeQueueCounter{}
-	var wg sync.WaitGroup
-	wg.Add(1)
+	queueCounter := queue.NewFakeCounter()
 	middleware := countMiddleware(
+		logr.Discard(),
 		queueCounter,
 		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			wg.Done()
 			w.WriteHeader(200)
 			w.Write([]byte("OK"))
 		}),
 	)

+	// no host in the request
 	req, err := http.NewRequest("GET", "/something", nil)
 	r.NoError(err)
-	rec := httptest.NewRecorder()
-
-	go func() {
-		middleware.ServeHTTP(rec, req)
-	}()
-
-	// after the handler was called, first wait for it to complete.
-	// then check to make sure the pending queue size was increased and decreased.
-	//
-	// the increase and decrease operations happen in goroutines, so the ordering
-	// isn't guaranteed
-	wg.Wait()
-	timer := time.NewTimer(200 * time.Millisecond)
-	defer timer.Stop()
-	resizes := []int{}
-	done := false
-	for i := 0; i < 2; i++ {
-		if len(resizes) == 2 || done {
-			break
-		}
-		select {
-		case i := <-queueCounter.resizedCh:
-			resizes = append(resizes, i)
-		case <-timer.C:
-			// effectively breaks out of the outer loop.
-			// putting a 'break' here will only break out
-			// of the select block
-			done = true
-		}
-	}
-	agg := 0
-	for _, delta := range resizes {
-		r.Equal(1, math.Abs(float64(delta)))
-		agg += delta
-	}
-	r.Equal(0, agg, "sum of all the resize operations")
+	agg, respRecorder := expectResizes(
+		ctx,
+		t,
+		0,
+		middleware,
+		req,
+		queueCounter,
+		func(t *testing.T, hostAndCount queue.HostAndCount) {},
+	)
+	r.Equal(400, respRecorder.Code)
+	r.Equal("Host not found, not forwarding request", respRecorder.Body.String())
+	r.Equal(0, agg)
+
+	// run middleware with the host in the request
+	req, err = http.NewRequest("GET", "/something", nil)
+	r.NoError(err)
+	req.Host = host
+	// for a valid request, we expect the queue to be resized twice.
+	// once to mark a pending HTTP request, then a second time to remove it.
+	// by the end of both sends, resize1 + resize2 should be 0,
+	// or in other words, the queue size should be back to zero
+	agg, respRecorder = expectResizes(
+		ctx,
+		t,
+		2,
+		middleware,
+		req,
+		queueCounter,
+		func(t *testing.T, hostAndCount queue.HostAndCount) {
+			t.Helper()
+			r := require.New(t)
+			r.Equal(float64(1), math.Abs(float64(hostAndCount.Count)))
+			r.Equal(host, hostAndCount.Host)
+		},
+	)
+	r.Equal(200, respRecorder.Code)
+	r.Equal("OK", respRecorder.Body.String())
+	r.Equal(0, agg)
+}
+
+// expectResizes creates a new httptest.ResponseRecorder, then passes req through
+// the middleware. every time the middleware calls fakeCounter.Resize(), it calls
+// resizeCheckFn with t and the queue.HostAndCount that represents the resize call
+// that was made. it also maintains an aggregate delta of the counts passed to
+// Resize. If, for example, the following integers were passed to resize over
+// 4 calls: [-1, 1, 1, 2], the aggregate would be -1+1+1+2=3
+//
+// this function returns the aggregate and the httptest.ResponseRecorder that was
+// created and used with the middleware
+func expectResizes(
+	ctx context.Context,
+	t *testing.T,
+	nResizes int,
+	middleware http.Handler,
+	req *http.Request,
+	fakeCounter *queue.FakeCounter,
+	resizeCheckFn func(*testing.T, queue.HostAndCount),
+) (int, *httptest.ResponseRecorder) {
+	t.Helper()
+	r := require.New(t)
+	const timeout = 1 * time.Second
+	ctx, cancel := context.WithTimeout(ctx, timeout)
+	defer cancel()
+	grp, ctx := errgroup.WithContext(ctx)
+	agg := 0
+	grp.Go(func() error {
+		// we expect the queue to be resized nResizes times
+		for i := 0; i < nResizes; i++ {
+			select {
+			case hostAndCount := <-fakeCounter.ResizedCh:
+				agg += hostAndCount.Count
+				resizeCheckFn(t, hostAndCount)
+			case <-ctx.Done():
+				return fmt.Errorf(
+					"timed out waiting for the count middleware. expected %d resizes, timeout was %s, iteration %d",
+					nResizes,
+					timeout,
+					i,
+				)
+			}
+		}
+		return nil
+	})
+	respRecorder := httptest.NewRecorder()
+	middleware.ServeHTTP(respRecorder, req)
+	r.NoError(grp.Wait())
+	return agg, respRecorder
 }

View File

@@ -3,13 +3,12 @@ package main

 import (
 	"context"
 	"fmt"
-	"log"
 	"net/http"
-	"net/url"
 	"time"

+	"github.com/go-logr/logr"
 	kedanet "github.com/kedacore/http-add-on/pkg/net"
-	"golang.org/x/sync/errgroup"
+	"github.com/kedacore/http-add-on/pkg/routing"
 )

 func moreThanPtr(i *int32, target int32) bool {
@@ -23,7 +22,8 @@ func moreThanPtr(i *int32, target int32) bool {
 // fwdSvcURL must have a valid scheme in it. The best way to do this is
 // create a URL with url.Parse("https://...")
 func newForwardingHandler(
-	fwdSvcURL *url.URL,
+	lggr logr.Logger,
+	routingTable *routing.Table,
 	dialCtxFunc kedanet.DialContextFunc,
 	waitFunc forwardWaitFunc,
 	waitTimeout time.Duration,
@@ -40,18 +40,33 @@ func newForwardingHandler(
 		ResponseHeaderTimeout: respHeaderTimeout,
 	}
 	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		ctx, done := context.WithTimeout(r.Context(), waitTimeout)
-		defer done()
-		grp, _ := errgroup.WithContext(ctx)
-		grp.Go(waitFunc)
-		waitErr := grp.Wait()
-		if waitErr != nil {
-			log.Printf("Error, not forwarding request")
-			w.WriteHeader(502)
-			w.Write([]byte(fmt.Sprintf("error on backend (%s)", waitErr)))
+		host, err := getHost(r)
+		if err != nil {
+			w.WriteHeader(400)
+			w.Write([]byte("Host not found in request"))
 			return
 		}
-
-		forwardRequest(w, r, roundTripper, fwdSvcURL)
+		routingTarget, err := routingTable.Lookup(host)
+		if err != nil {
+			w.WriteHeader(404)
+			w.Write([]byte(fmt.Sprintf("Host %s not found", r.Host)))
+			return
+		}
+
+		ctx, done := context.WithTimeout(r.Context(), waitTimeout)
+		defer done()
+		if err := waitFunc(ctx, routingTarget.Deployment); err != nil {
+			lggr.Error(err, "wait function failed, not forwarding request")
+			w.WriteHeader(502)
+			w.Write([]byte(fmt.Sprintf("error on backend (%s)", err)))
+			return
+		}
+		targetSvcURL, err := routingTarget.ServiceURL()
+		if err != nil {
+			lggr.Error(err, "forwarding failed")
+			w.WriteHeader(500)
+			w.Write([]byte("error getting backend service URL"))
+			return
+		}
+		forwardRequest(w, r, roundTripper, targetSvcURL)
 	})
 }
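The routing.Table API this handler leans on isn't defined in this file. Pieced together from the tests in this PR, the lookup flow is roughly as follows (the host and service names here are examples, and the ServiceURL output is a presumption from the handler code above):

	table := routing.NewTable()
	table.AddTarget("myapp.example.com", routing.Target{
		Service:    "myapp-svc",
		Port:       8080,
		Deployment: "myapp",
	})

	target, err := table.Lookup("myapp.example.com")
	if err != nil {
		// the handler answers 404: the host isn't routed
		return
	}
	u, _ := target.ServiceURL() // presumably http://myapp-svc:8080
	_ = u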

View File

@@ -0,0 +1,363 @@
package main
import (
"context"
"fmt"
"io"
"net"
"net/http"
"net/http/httptest"
"net/url"
"strconv"
"strings"
"testing"
"time"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/k8s"
kedanet "github.com/kedacore/http-add-on/pkg/net"
"github.com/kedacore/http-add-on/pkg/routing"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
)
// happy path - deployment is scaled to 1 and host in routing table
func TestIntegrationHappyPath(t *testing.T) {
const (
deploymentReplicasTimeout = 200 * time.Millisecond
responseHeaderTimeout = 1 * time.Second
deplName = "testdeployment"
)
r := require.New(t)
h, err := newHarness(deploymentReplicasTimeout, responseHeaderTimeout)
r.NoError(err)
defer h.close()
t.Logf("Harness: %s", h.String())
h.deplCache.Set(deplName, appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{Name: deplName},
Spec: appsv1.DeploymentSpec{
// note that the forwarding wait function doesn't care about
// the replicas field, it only cares about ReadyReplicas in the status.
// regardless, we're setting this because in a running cluster,
// it's likely that most of the time, this is equal to ReadyReplicas
Replicas: i32Ptr(3),
},
Status: appsv1.DeploymentStatus{
ReadyReplicas: 3,
},
})
originHost, originPort, err := splitHostPort(h.originURL.Host)
r.NoError(err)
h.routingTable.AddTarget(hostForTest(t), routing.Target{
Service: originHost,
Port: originPort,
Deployment: deplName,
})
// happy path
res, err := doRequest(
http.DefaultClient,
"GET",
h.proxyURL.String(),
hostForTest(t),
nil,
)
r.NoError(err)
r.Equal(200, res.StatusCode)
}
// deployment scaled to 1 but host not in routing table
//
// NOTE: the interceptor needs to check in the routing table
// _before_ checking the deployment cache, so we don't technically
// need to set the replicas to 1, but we're doing so anyway to
// isolate the routing table behavior
func TestIntegrationNoRoutingTableEntry(t *testing.T) {
host := fmt.Sprintf("%s.integrationtest.interceptor.kedahttp.dev", t.Name())
r := require.New(t)
h, err := newHarness(time.Second, time.Second)
r.NoError(err)
defer h.close()
h.deplCache.Set(host, appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{Name: host},
Spec: appsv1.DeploymentSpec{
Replicas: i32Ptr(1),
},
})
// not in the routing table
res, err := doRequest(
http.DefaultClient,
"GET",
h.proxyURL.String(),
"not-in-the-table",
nil,
)
r.NoError(err)
r.Equal(404, res.StatusCode)
}
// host in the routing table but deployment has no replicas
func TestIntegrationNoReplicas(t *testing.T) {
const (
deployTimeout = 100 * time.Millisecond
)
host := hostForTest(t)
deployName := "testdeployment"
r := require.New(t)
h, err := newHarness(deployTimeout, time.Second)
r.NoError(err)
originHost, originPort, err := splitHostPort(h.originURL.Host)
r.NoError(err)
h.routingTable.AddTarget(hostForTest(t), routing.Target{
Service: originHost,
Port: originPort,
Deployment: deployName,
})
// 0 replicas
h.deplCache.Set(deployName, appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{Name: deployName},
Spec: appsv1.DeploymentSpec{
Replicas: i32Ptr(0),
},
})
start := time.Now()
res, err := doRequest(
http.DefaultClient,
"GET",
h.proxyURL.String(),
host,
nil,
)
r.NoError(err)
r.Equal(502, res.StatusCode)
elapsed := time.Since(start)
// we should have slept more than the deployment replicas wait timeout
r.GreaterOrEqual(elapsed, deployTimeout)
r.Less(elapsed, deployTimeout+50*time.Millisecond)
}
// the request comes in while there are no replicas, and one is added
// while it's pending
func TestIntegrationWaitReplicas(t *testing.T) {
const (
deployTimeout = 2 * time.Second
responseTimeout = 1 * time.Second
deployName = "testdeployment"
)
ctx := context.Background()
r := require.New(t)
h, err := newHarness(deployTimeout, responseTimeout)
r.NoError(err)
// add host to routing table
originHost, originPort, err := splitHostPort(h.originURL.Host)
r.NoError(err)
h.routingTable.AddTarget(hostForTest(t), routing.Target{
Service: originHost,
Port: originPort,
Deployment: deployName,
})
// set up a deployment with zero replicas and create
// a watcher we can use later to fake-send a deployment
// event
initialDeployment := appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{Name: deployName},
Spec: appsv1.DeploymentSpec{
Replicas: i32Ptr(0),
},
}
h.deplCache.Set(deployName, initialDeployment)
watcher := h.deplCache.SetWatcher(deployName)
// make the request in one goroutine, and in the other, wait a bit
// and then add replicas to the deployment cache
var response *http.Response
grp, _ := errgroup.WithContext(ctx)
grp.Go(func() error {
resp, err := doRequest(
http.DefaultClient,
"GET",
h.proxyURL.String(),
hostForTest(t),
nil,
)
if err != nil {
return err
}
response = resp
return nil
})
const sleepDur = deployTimeout / 4
grp.Go(func() error {
t.Logf("Sleeping for %s", sleepDur)
time.Sleep(sleepDur)
t.Logf("Woke up, setting replicas to 10")
modifiedDeployment := initialDeployment.DeepCopy()
// note that the wait function only cares about Status.ReadyReplicas
// but we're setting Spec.Replicas to 10 as well because the common
// case in the cluster is that they would be equal
modifiedDeployment.Spec.Replicas = i32Ptr(10)
modifiedDeployment.Status.ReadyReplicas = 10
// send a watch event (instead of setting replicas) so that the watch
// func sees that it can forward the request now
watcher.Modify(modifiedDeployment)
return nil
})
start := time.Now()
r.NoError(grp.Wait())
elapsed := time.Since(start)
// assert here so that we can check all of these cases
// rather than just failing at the first one
a := assert.New(t)
a.GreaterOrEqual(elapsed, sleepDur)
a.Less(
elapsed,
sleepDur*2,
"the handler took too long. this is usually because it timed out, not because it didn't find the watch event in time",
)
a.Equal(200, response.StatusCode)
}
func doRequest(
cl *http.Client,
method,
urlStr,
host string,
body io.ReadCloser,
) (*http.Response, error) {
req, err := http.NewRequest("GET", urlStr, nil)
if err != nil {
return nil, err
}
req.Host = host
res, err := cl.Do(req)
if err != nil {
return nil, err
}
return res, nil
}
type harness struct {
lggr logr.Logger
proxyHdl http.Handler
proxySrv *httptest.Server
proxyURL *url.URL
originHdl http.Handler
originSrv *httptest.Server
originURL *url.URL
routingTable *routing.Table
dialCtxFunc kedanet.DialContextFunc
deplCache *k8s.FakeDeploymentCache
waitFunc forwardWaitFunc
}
func newHarness(
deployReplicasTimeout,
responseHeaderTimeout time.Duration,
) (*harness, error) {
lggr := logr.Discard()
routingTable := routing.NewTable()
dialContextFunc := kedanet.DialContextWithRetry(
&net.Dialer{
Timeout: 2 * time.Second,
},
wait.Backoff{
Steps: 2,
Duration: time.Second,
},
)
deplCache := k8s.NewFakeDeploymentCache()
waitFunc := newDeployReplicasForwardWaitFunc(deplCache)
proxyHdl := newForwardingHandler(
lggr,
routingTable,
dialContextFunc,
waitFunc,
deployReplicasTimeout,
responseHeaderTimeout,
)
originHdl := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("hello!"))
})
testOriginSrv, originSrvURL, err := kedanet.StartTestServer(originHdl)
if err != nil {
return nil, err
}
proxySrv, proxySrvURL, err := kedanet.StartTestServer(proxyHdl)
if err != nil {
return nil, err
}
return &harness{
lggr: lggr,
proxyHdl: proxyHdl,
proxySrv: proxySrv,
proxyURL: proxySrvURL,
originHdl: originHdl,
originSrv: testOriginSrv,
originURL: originSrvURL,
routingTable: routingTable,
dialCtxFunc: dialContextFunc,
deplCache: deplCache,
waitFunc: waitFunc,
}, nil
}
func (h *harness) close() {
h.proxySrv.Close()
h.originSrv.Close()
}
func (h *harness) String() string {
return fmt.Sprintf(
"harness{proxy: %s, origin: %s}",
h.proxyURL.String(),
h.originURL.String(),
)
}
func i32Ptr(i int32) *int32 {
return &i
}
func hostForTest(t *testing.T) string {
t.Helper()
return fmt.Sprintf("%s.integrationtest.interceptor.kedahttp.dev", t.Name())
}
// similar to net.SplitHostPort (https://pkg.go.dev/net#SplitHostPort)
// but returns the port as an int, not a string.
//
// useful because url.Host can contain the port, so this ensures we only get the actual host
func splitHostPort(hostPortStr string) (string, int, error) {
spl := strings.Split(hostPortStr, ":")
if len(spl) != 2 {
return "", 0, fmt.Errorf("invalid host:port: %s", hostPortStr)
}
host := spl[0]
port, err := strconv.Atoi(spl[1])
if err != nil {
return "", 0, errors.Wrap(err, "port was invalid")
}
return host, port, nil
}
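For example:

	host, port, err := splitHostPort("127.0.0.1:8080")
	// host == "127.0.0.1", port == 8080, err == nil

	_, _, err = splitHostPort("no-port-here")
	// err != nil: "invalid host:port: no-port-here"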

View File

@ -1,34 +1,52 @@
package main package main
import ( import (
"context"
"fmt"
"net/http" "net/http"
"net/url" "strconv"
"strings"
"testing" "testing"
"time" "time"
"github.com/go-logr/logr"
kedanet "github.com/kedacore/http-add-on/pkg/net" kedanet "github.com/kedacore/http-add-on/pkg/net"
"github.com/kedacore/http-add-on/pkg/routing"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
// the proxy should successfully forward a request to a running server // the proxy should successfully forward a request to a running server
func TestImmediatelySuccessfulProxy(t *testing.T) { func TestImmediatelySuccessfulProxy(t *testing.T) {
const host = "TestImmediatelySuccessfulProxy.testing"
r := require.New(t) r := require.New(t)
originHdl := kedanet.NewTestHTTPHandlerWrapper(func(w http.ResponseWriter, r *http.Request) { originHdl := kedanet.NewTestHTTPHandlerWrapper(
w.WriteHeader(200) http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("test response")) w.WriteHeader(200)
}) w.Write([]byte("test response"))
}),
)
srv, originURL, err := kedanet.StartTestServer(originHdl) srv, originURL, err := kedanet.StartTestServer(originHdl)
r.NoError(err) r.NoError(err)
defer srv.Close() defer srv.Close()
routingTable := routing.NewTable()
portInt, err := strconv.Atoi(originURL.Port())
r.NoError(err)
target := routing.Target{
Service: strings.Split(originURL.Host, ":")[0],
Port: portInt,
Deployment: "testdepl",
}
routingTable.AddTarget(host, target)
timeouts := defaultTimeouts() timeouts := defaultTimeouts()
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff()) dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
waitFunc := func() error { waitFunc := func(context.Context, string) error {
return nil return nil
} }
hdl := newForwardingHandler( hdl := newForwardingHandler(
originURL, logr.Discard(),
routingTable,
dialCtxFunc, dialCtxFunc,
waitFunc, waitFunc,
timeouts.DeploymentReplicas, timeouts.DeploymentReplicas,
@ -36,28 +54,41 @@ func TestImmediatelySuccessfulProxy(t *testing.T) {
) )
const path = "/testfwd" const path = "/testfwd"
res, req, err := reqAndRes(path) res, req, err := reqAndRes(path)
req.Host = host
r.NoError(err) r.NoError(err)
hdl.ServeHTTP(res, req) hdl.ServeHTTP(res, req)
r.Equal(200, res.Code, "response code was unexpected") r.Equal(200, res.Code, "expected response code 200")
r.Equal("test response", res.Body.String()) r.Equal("test response", res.Body.String())
} }
// the proxy should wait for a timeout and fail if there is no origin to connect // the proxy should wait for a timeout and fail if there is no
// to // origin to which to connect
func TestWaitFailedConnection(t *testing.T) { func TestWaitFailedConnection(t *testing.T) {
const host = "TestWaitFailedConnection.testing"
r := require.New(t) r := require.New(t)
timeouts := defaultTimeouts() timeouts := defaultTimeouts()
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff()) backoff := timeouts.DefaultBackoff()
waitFunc := func() error { backoff.Steps = 2
dialCtxFunc := retryDialContextFunc(
timeouts,
backoff,
)
waitFunc := func(context.Context, string) error {
return nil return nil
} }
noSuchURL, err := url.Parse("http://localhost:60002") routingTable := routing.NewTable()
r.NoError(err) routingTable.AddTarget(host, routing.Target{
Service: "nosuchdepl",
Port: 8081,
Deployment: "nosuchdepl",
})
hdl := newForwardingHandler( hdl := newForwardingHandler(
noSuchURL, logr.Discard(),
routingTable,
dialCtxFunc, dialCtxFunc,
waitFunc, waitFunc,
timeouts.DeploymentReplicas, timeouts.DeploymentReplicas,
@ -65,6 +96,7 @@ func TestWaitFailedConnection(t *testing.T) {
) )
const path = "/testfwd" const path = "/testfwd"
res, req, err := reqAndRes(path) res, req, err := reqAndRes(path)
req.Host = host
r.NoError(err) r.NoError(err)
hdl.ServeHTTP(res, req) hdl.ServeHTTP(res, req)
@ -72,27 +104,29 @@ func TestWaitFailedConnection(t *testing.T) {
r.Equal(502, res.Code, "response code was unexpected") r.Equal(502, res.Code, "response code was unexpected")
} }
// the proxy handler should wait for the wait function until it hits
// a timeout, then it should fail
func TestTimesOutOnWaitFunc(t *testing.T) { func TestTimesOutOnWaitFunc(t *testing.T) {
r := require.New(t) r := require.New(t)
timeouts := defaultTimeouts() timeouts := defaultTimeouts()
timeouts.DeploymentReplicas = 10 * time.Millisecond timeouts.DeploymentReplicas = 1 * time.Millisecond
timeouts.ResponseHeader = 1 * time.Millisecond
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff()) dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
// the wait func will close this channel immediately after it's called, but before it starts waitFunc, waitFuncCalledCh, finishWaitFunc := notifyingFunc()
// waiting for waitFuncCh defer finishWaitFunc()
waitFuncCalledCh := make(chan struct{}) noSuchHost := fmt.Sprintf("%s.testing", t.Name())
// the wait func will wait for waitFuncCh to receive or be closed before it proceeds
waitFuncCh := make(chan struct{}) routingTable := routing.NewTable()
waitFunc := func() error { routingTable.AddTarget(noSuchHost, routing.Target{
close(waitFuncCalledCh) Service: "nosuchsvc",
<-waitFuncCh Port: 9091,
return nil Deployment: "nosuchdepl",
} })
noSuchURL, err := url.Parse("http://localhost:60002")
r.NoError(err)
hdl := newForwardingHandler( hdl := newForwardingHandler(
noSuchURL, logr.Discard(),
routingTable,
dialCtxFunc, dialCtxFunc,
waitFunc, waitFunc,
timeouts.DeploymentReplicas, timeouts.DeploymentReplicas,
@ -101,44 +135,61 @@ func TestTimesOutOnWaitFunc(t *testing.T) {
const path = "/testfwd" const path = "/testfwd"
res, req, err := reqAndRes(path) res, req, err := reqAndRes(path)
r.NoError(err) r.NoError(err)
req.Host = noSuchHost
start := time.Now() start := time.Now()
waitDur := timeouts.DeploymentReplicas * 2
go func() {
time.Sleep(waitDur)
close(waitFuncCh)
}()
hdl.ServeHTTP(res, req) hdl.ServeHTTP(res, req)
elapsed := time.Since(start)
t.Logf("elapsed time was %s", elapsed)
// serving should take at least timeouts.DeploymentReplicas, but no more than
// timeouts.DeploymentReplicas*2
// elapsed time should be more than the deployment replicas wait time
// but not an amount that is much greater than that
r.GreaterOrEqual(elapsed, timeouts.DeploymentReplicas)
r.LessOrEqual(elapsed, timeouts.DeploymentReplicas*4)
r.Equal(502, res.Code, "response code was unexpected")
// waitFunc should have been called, even though it timed out
waitFuncCalled := false
select { select {
case <-waitFuncCalledCh: case <-waitFuncCalledCh:
case <-time.After(1 * time.Second): waitFuncCalled = true
r.Fail("the wait function wasn't called") default:
} }
r.GreaterOrEqual(time.Since(start), waitDur)
r.Equal(502, res.Code, "response code was unexpected") r.True(waitFuncCalled, "wait function was not called")
} }
// Test to make sure the proxy handler will wait for the waitFunc to
// complete
func TestWaitsForWaitFunc(t *testing.T) { func TestWaitsForWaitFunc(t *testing.T) {
r := require.New(t) r := require.New(t)
timeouts := defaultTimeouts() timeouts := defaultTimeouts()
dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff()) dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
// the wait func will close this channel immediately after it's called, but before it starts waitFunc, waitFuncCalledCh, finishWaitFunc := notifyingFunc()
// waiting for waitFuncCh noSuchHost := "TestWaitsForWaitFunc.test"
waitFuncCalledCh := make(chan struct{}) const originRespCode = 201
// the wait func will wait for waitFuncCh to receive or be closed before it proceeds testSrv, testSrvURL, err := kedanet.StartTestServer(
waitFuncCh := make(chan struct{}) http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
waitFunc := func() error { w.WriteHeader(originRespCode)
close(waitFuncCalledCh) }),
<-waitFuncCh )
return nil
}
noSuchURL, err := url.Parse("http://localhost:60002")
r.NoError(err) r.NoError(err)
defer testSrv.Close()
originHost, originPort, err := splitHostPort(testSrvURL.Host)
r.NoError(err)
routingTable := routing.NewTable()
routingTable.AddTarget(noSuchHost, routing.Target{
Service: originHost,
Port: originPort,
Deployment: "nosuchdepl",
})
hdl := newForwardingHandler( hdl := newForwardingHandler(
noSuchURL, logr.Discard(),
routingTable,
dialCtxFunc, dialCtxFunc,
waitFunc, waitFunc,
timeouts.DeploymentReplicas, timeouts.DeploymentReplicas,
@ -147,22 +198,29 @@ func TestWaitsForWaitFunc(t *testing.T) {
const path = "/testfwd" const path = "/testfwd"
res, req, err := reqAndRes(path) res, req, err := reqAndRes(path)
r.NoError(err) r.NoError(err)
req.Host = noSuchHost
start := time.Now() // make the wait function finish after a short duration
waitDur := 10 * time.Millisecond const waitDur = 100 * time.Millisecond
go func() { go func() {
time.Sleep(waitDur) time.Sleep(waitDur)
close(waitFuncCh) finishWaitFunc()
}() }()
hdl.ServeHTTP(res, req)
select {
case <-waitFuncCalledCh:
case <-time.After(1 * time.Second):
r.Fail("the wait function wasn't called")
}
r.GreaterOrEqual(time.Since(start), waitDur)
r.Equal(502, res.Code, "response code was unexpected") start := time.Now()
hdl.ServeHTTP(res, req)
elapsed := time.Since(start)
r.NoError(waitForSignal(waitFuncCalledCh, 1*time.Second))
// should take at least waitDur, but no more than waitDur*4
r.GreaterOrEqual(elapsed, waitDur)
r.Less(elapsed, waitDur*4)
r.Equal(
originRespCode,
res.Code,
"response code was unexpected",
)
} }
// the proxy should connect to a server, and then time out if the server doesn't
@@ -173,22 +231,32 @@ func TestWaitHeaderTimeout(t *testing.T) {
	// the origin will wait for this channel to receive or close before it sends any data back to the
	// proxy
	originHdlCh := make(chan struct{})
-	originHdl := kedanet.NewTestHTTPHandlerWrapper(func(w http.ResponseWriter, r *http.Request) {
-		<-originHdlCh
-		w.WriteHeader(200)
-		w.Write([]byte("test response"))
-	})
+	originHdl := kedanet.NewTestHTTPHandlerWrapper(
+		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			<-originHdlCh
+			w.WriteHeader(200)
+			w.Write([]byte("test response"))
+		}),
+	)
	srv, originURL, err := kedanet.StartTestServer(originHdl)
	r.NoError(err)
	defer srv.Close()
	timeouts := defaultTimeouts()
	dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
-	waitFunc := func() error {
+	waitFunc := func(context.Context, string) error {
		return nil
	}
+	routingTable := routing.NewTable()
+	target := routing.Target{
+		Service:    "testsvc",
+		Port:       9094,
+		Deployment: "testdepl",
+	}
+	routingTable.AddTarget(originURL.Host, target)
	hdl := newForwardingHandler(
-		originURL,
+		logr.Discard(),
+		routingTable,
		dialCtxFunc,
		waitFunc,
		timeouts.DeploymentReplicas,
@@ -197,6 +265,7 @@ func TestWaitHeaderTimeout(t *testing.T) {
	const path = "/testfwd"
	res, req, err := reqAndRes(path)
	r.NoError(err)
+	req.Host = originURL.Host
	hdl.ServeHTTP(res, req)
@@ -216,3 +285,40 @@ func ensureSignalBeforeTimeout(signalCh <-chan struct{}, timeout time.Duration)
		return true
	}
}
func waitForSignal(sig <-chan struct{}, waitDur time.Duration) error {
tmr := time.NewTimer(waitDur)
defer tmr.Stop()
select {
case <-sig:
return nil
case <-tmr.C:
return fmt.Errorf("signal didn't happen within %s", waitDur)
}
}
// notifyingFunc creates a new function to be used as a waitFunc in the
// newForwardingHandler function. it also returns a channel that will
// be closed immediately after the function is called (not necessarily
// before it returns).
//
// the _returned_ wait function won't itself return until the returned
// finish func() is called, or the context that is passed to it is done
// (e.g. cancelled, timed out, etc...). in the former case, the wait
// function returns nil. in the latter, it returns ctx.Err()
func notifyingFunc() (func(context.Context, string) error, <-chan struct{}, func()) {
calledCh := make(chan struct{})
finishCh := make(chan struct{})
finishFunc := func() {
close(finishCh)
}
return func(ctx context.Context, _ string) error {
close(calledCh)
select {
case <-finishCh:
return nil
case <-ctx.Done():
return fmt.Errorf("TEST FUNCTION CONTEXT ERROR: %w", ctx.Err())
}
}, calledCh, finishFunc
}
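To make notifyingFunc's contract concrete, here is a hypothetical caller-side sketch (the deployment name below is invented for illustration):

	waitFunc, calledCh, finish := notifyingFunc()
	go waitFunc(context.Background(), "some-deployment") // error discarded in this sketch
	<-calledCh // closed as soon as waitFunc is invoked
	finish()   // unblocks waitFunc, which then returns nil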

View File

@@ -26,7 +26,10 @@ func forwardRequest(
	}
	proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
		w.WriteHeader(502)
-		errMsg := fmt.Errorf("Error on backend (%w)", err).Error()
+		// note: we can only use the '%w' directive inside of fmt.Errorf,
+		// not Sprintf or anything similar. this means we have to create the
+		// failure string in this slightly convoluted way.
+		errMsg := fmt.Errorf("error on backend (%w)", err).Error()
		w.Write([]byte(errMsg))
	}
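To illustrate the note above (this is general Go fmt behavior, not something introduced by this change): only fmt.Errorf interprets the %w verb and wraps the error; fmt.Sprintf leaves it unexpanded.

	err := errors.New("boom")
	wrapped := fmt.Errorf("error on backend (%w)", err)
	fmt.Println(errors.Is(wrapped, err))  // true: the error chain is preserved
	fmt.Println(fmt.Sprintf("(%w)", err)) // "(%!w(*errors.errorString=&{boom}))": no wrapping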

View File

@@ -41,7 +41,10 @@ func retryDialContextFunc(
	timeouts config.Timeouts,
	backoff wait.Backoff,
) kedanet.DialContextFunc {
-	dialer := kedanet.NewNetDialer(timeouts.Connect, timeouts.KeepAlive)
+	dialer := kedanet.NewNetDialer(
+		timeouts.Connect,
+		timeouts.KeepAlive,
+	)
	return kedanet.DialContextWithRetry(dialer, backoff)
}
@@ -61,11 +64,13 @@ func TestForwarderSuccess(t *testing.T) {
	reqRecvCh := make(chan struct{})
	const respCode = 302
	const respBody = "TestForwardingHandler"
-	originHdl := kedanet.NewTestHTTPHandlerWrapper(func(w http.ResponseWriter, r *http.Request) {
-		close(reqRecvCh)
-		w.WriteHeader(respCode)
-		w.Write([]byte(respBody))
-	})
+	originHdl := kedanet.NewTestHTTPHandlerWrapper(
+		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			close(reqRecvCh)
+			w.WriteHeader(respCode)
+			w.Write([]byte(respBody))
+		}),
+	)
	testServer := httptest.NewServer(originHdl)
	defer testServer.Close()
	forwardURL, err := url.Parse(testServer.URL)
@@ -106,16 +111,21 @@ func TestForwarderHeaderTimeout(t *testing.T) {
	r := require.New(t)
	// the origin will wait until this channel receives or is closed
	originWaitCh := make(chan struct{})
-	hdl := kedanet.NewTestHTTPHandlerWrapper(func(w http.ResponseWriter, r *http.Request) {
-		<-originWaitCh
-		w.WriteHeader(200)
-	})
+	hdl := kedanet.NewTestHTTPHandlerWrapper(
+		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			<-originWaitCh
+			w.WriteHeader(200)
+		}),
+	)
	srv, originURL, err := kedanet.StartTestServer(hdl)
	r.NoError(err)
	defer srv.Close()
	timeouts := defaultTimeouts()
-	dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
+	timeouts.Connect = 10 * time.Millisecond
+	timeouts.ResponseHeader = 10 * time.Millisecond
+	backoff := timeouts.Backoff(2, 2, 1)
+	dialCtxFunc := retryDialContextFunc(timeouts, backoff)
	res, req, err := reqAndRes("/testfwd")
	r.NoError(err)
	forwardRequest(
@@ -128,7 +138,7 @@ func TestForwarderHeaderTimeout(t *testing.T) {
	forwardedRequests := hdl.IncomingRequests()
	r.Equal(0, len(forwardedRequests))
	r.Equal(502, res.Code)
-	r.Contains(res.Body.String(), "Error on backend")
+	r.Contains(res.Body.String(), "error on backend")
	// the proxy has bailed out, so tell the origin to stop
	close(originWaitCh)
}
@@ -140,19 +150,21 @@ func TestForwarderWaitsForSlowOrigin(t *testing.T) {
	originWaitCh := make(chan struct{})
	const originRespCode = 200
	const originRespBodyStr = "Hello World!"
-	hdl := kedanet.NewTestHTTPHandlerWrapper(func(w http.ResponseWriter, r *http.Request) {
-		<-originWaitCh
-		w.WriteHeader(originRespCode)
-		w.Write([]byte(originRespBodyStr))
-	})
+	hdl := kedanet.NewTestHTTPHandlerWrapper(
+		http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+			<-originWaitCh
+			w.WriteHeader(originRespCode)
+			w.Write([]byte(originRespBodyStr))
+		}),
+	)
	srv, originURL, err := kedanet.StartTestServer(hdl)
	r.NoError(err)
	defer srv.Close()
	// the origin is gonna wait this long, and we'll make the proxy
	// have a much longer timeout than this to account for timing issues
-	const originDelay = 500 * time.Millisecond
+	const originDelay = 5 * time.Millisecond
	timeouts := config.Timeouts{
-		Connect:   500 * time.Millisecond,
+		Connect:   originDelay,
		KeepAlive: 2 * time.Second,
		// the handler is going to take 500 milliseconds to respond, so make the
		// forwarder wait much longer than that
@@ -161,7 +173,6 @@ func TestForwarderWaitsForSlowOrigin(t *testing.T) {
	dialCtxFunc := retryDialContextFunc(timeouts, timeouts.DefaultBackoff())
	go func() {
-		// wait for 100ms less than
		time.Sleep(originDelay)
		close(originWaitCh)
	}()
@@ -224,5 +235,5 @@ func TestForwarderConnectionRetryAndTimeout(t *testing.T) {
		"unexpected code (response body was '%s')",
		res.Body.String(),
	)
-	r.Contains(res.Body.String(), "Error on backend")
+	r.Contains(res.Body.String(), "error on backend")
}

View File

@@ -1,11 +1,11 @@
-//+build mage
+//go:build mage
+// +build mage

package main

import (
	"context"
	"fmt"
-	"log"

	"github.com/kedacore/http-add-on/pkg/build"
	"github.com/kedacore/http-add-on/pkg/env"
@@ -13,10 +13,23 @@ import (
	"github.com/magefile/mage/sh"
)
// Note for Mac M1 users building Docker images:
// If you want to build images for Linux (like, for example, an
// AKS/GKE/EKS, DOKS cluster), you need to use docker's buildx driver
// to do so. For example, this would be the command to build the
// interceptor for 64-bit AMD and ARM platforms on Linux:
//
// docker buildx build --platform linux/amd64,linux/arm64 --push -t testingkeda.azurecr.io/interceptor:testing -f interceptor/Dockerfile .
//
// See
// https://blog.jaimyn.dev/how-to-build-multi-architecture-docker-images-on-an-m1-mac/
// for more details.
// Global consts
const (
	DEFAULT_NAMESPACE string = "kedahttp"
+	ACR_REGISTRY_NAME         = "KEDAHTTP_ACR_REGISTRY"
	SCALER_IMAGE_ENV_VAR      = "KEDAHTTP_SCALER_IMAGE"
	INTERCEPTOR_IMAGE_ENV_VAR = "KEDAHTTP_INTERCEPTOR_IMAGE"
	OPERATOR_IMAGE_ENV_VAR    = "KEDAHTTP_OPERATOR_IMAGE"
@@ -60,6 +73,27 @@ func (Scaler) DockerBuild(ctx context.Context) error {
	return build.DockerBuild(img, "scaler/Dockerfile", ".")
}
// Build the scaler docker image using ACR tasks.
//
// This command reads the values of the following environment variables:
//
//	- KEDAHTTP_ACR_REGISTRY - for the value of the --registry flag
//	- KEDAHTTP_SCALER_IMAGE - for the value of the --image flag
//
// It returns an error if either of the env vars is not set, or if
// both are set but the build fails.
func (Scaler) DockerBuildACR(ctx context.Context) error {
registry, err := env.Get(ACR_REGISTRY_NAME)
if err != nil {
return err
}
image, err := build.GetImageName(SCALER_IMAGE_ENV_VAR)
if err != nil {
return err
}
return build.DockerBuildACR(registry, image, "scaler/Dockerfile", ".")
}
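Assuming mage's usual case-insensitive target matching (the registry and image names below are placeholders), the target above could be invoked as:

	KEDAHTTP_ACR_REGISTRY=myregistry \
	KEDAHTTP_SCALER_IMAGE=myregistry.azurecr.io/scaler:testing \
	mage scaler:dockerBuildACR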
func (Scaler) DockerPush(ctx context.Context) error {
	image, err := build.GetImageName(SCALER_IMAGE_ENV_VAR)
	if err != nil {
@@ -103,6 +137,27 @@ func (Operator) DockerBuild(ctx context.Context) error {
	return build.DockerBuild(img, "operator/Dockerfile", ".")
}
// Build the operator docker image using ACR tasks.
//
// This command reads the values of the following environment variables:
//
//	- KEDAHTTP_ACR_REGISTRY - for the value of the --registry flag
//	- KEDAHTTP_OPERATOR_IMAGE - for the value of the --image flag
//
// It returns an error if either of the env vars is not set, or if
// both are set but the build fails.
func (Operator) DockerBuildACR(ctx context.Context) error {
registry, err := env.Get(ACR_REGISTRY_NAME)
if err != nil {
return err
}
image, err := build.GetImageName(OPERATOR_IMAGE_ENV_VAR)
if err != nil {
return err
}
return build.DockerBuildACR(registry, image, "operator/Dockerfile", ".")
}
func (Operator) DockerPush(ctx context.Context) error {
	image, err := build.GetImageName(OPERATOR_IMAGE_ENV_VAR)
	if err != nil {
@@ -138,9 +193,9 @@ func (Interceptor) Test(ctx context.Context) error {
	return nil
}

-// Build the interceptor docker image. This command reads the value of the
-// KEDAHTTP_INTERCEPTOR_IMAGE environment variable to get the interceptor image
-// name. It fails otherwise
+// DockerBuild builds the interceptor docker image. It looks for the
+// KEDAHTTP_INTERCEPTOR_IMAGE environment variable and builds the image with
+// that as the name
func (Interceptor) DockerBuild(ctx context.Context) error {
	image, err := build.GetImageName(INTERCEPTOR_IMAGE_ENV_VAR)
	if err != nil {
@@ -149,6 +204,27 @@ func (Interceptor) DockerBuild(ctx context.Context) error {
	return build.DockerBuild(image, "interceptor/Dockerfile", ".")
}
// Build the interceptor docker image using ACR tasks.
//
// This command reads the values of the following environment variables:
//
//	- KEDAHTTP_ACR_REGISTRY - for the value of the --registry flag
//	- KEDAHTTP_INTERCEPTOR_IMAGE - for the value of the --image flag
//
// It returns an error if either of the env vars is not set, or if
// both are set but the build fails.
func (Interceptor) DockerBuildACR(ctx context.Context) error {
registry, err := env.Get(ACR_REGISTRY_NAME)
if err != nil {
return err
}
image, err := build.GetImageName(INTERCEPTOR_IMAGE_ENV_VAR)
if err != nil {
return err
}
return build.DockerBuildACR(registry, image, "interceptor/Dockerfile", ".")
}
func (Interceptor) DockerPush(ctx context.Context) error {
	image, err := build.GetImageName(INTERCEPTOR_IMAGE_ENV_VAR)
	if err != nil {
@@ -168,23 +244,26 @@ func Build() {

// Run tests on all the components in this project
func Test() error {
-	out, err := sh.Output("go", "test", "./...")
-	if err != nil {
-		return err
-	}
-	log.Print(out)
-	return nil
+	return sh.RunV("go", "test", "-timeout=20s", "./...")
}

// --- Docker --- //

-// Builds a docker image specified by the name argument with the repository prefix
+// DockerBuild builds the operator, scaler and interceptor images in parallel
func DockerBuild(ctx context.Context) error {
	scaler, operator, interceptor := Scaler{}, Interceptor{}, Operator{}
	mg.Deps(scaler.DockerBuild, operator.DockerBuild, interceptor.DockerBuild)
	return nil
}

+// DockerBuildACR builds the operator, scaler and interceptor images in parallel,
+// all using ACR tasks
+func DockerBuildACR(ctx context.Context) error {
+	scaler, operator, interceptor := Scaler{}, Interceptor{}, Operator{}
+	mg.Deps(scaler.DockerBuildACR, operator.DockerBuildACR, interceptor.DockerBuildACR)
+	return nil
+}

// Pushes a given image name to a given repository
func DockerPush(ctx context.Context) error {
	scaler, operator, interceptor := Scaler{}, Interceptor{}, Operator{}
@@ -291,6 +370,27 @@ func DeleteKeda(ctx context.Context) error {
	return nil
}
func InstallXKCD(ctx context.Context) error {
namespace, err := env.Get(NAMESPACE_ENV_VAR)
if err != nil {
namespace = DEFAULT_NAMESPACE
}
if err := sh.RunV(
"helm",
"upgrade",
"xkcd",
"./examples/xkcd",
"--install",
"--namespace",
namespace,
"--create-namespace",
); err != nil {
return err
}
return nil
}
// --- Operator tasks --- //

// Generates the operator
@@ -354,3 +454,7 @@ func DeleteHTTPSO(ctx context.Context, namespace string) error {
		"kubectl", "delete", "httpscaledobject", "xkcd", "-n", namespace,
	)
}
func TestE2E(ctx context.Context) error {
return sh.RunV("go", "test", "-test.v", "./e2e...")
}

View File

@@ -6,7 +6,7 @@ ARG GOOS=linux

FROM golang:${GOLANG_VERSION}-alpine AS builder

-WORKDIR $GOPATH/src/github.com/kedahttp/http-add-on
+WORKDIR $GOPATH/src/github.com/kedacore/http-add-on

COPY go.mod go.mod
COPY go.sum go.sum

View File

@@ -26,38 +26,17 @@ import (
type HTTPScaledObjectCreationStatus string

// HTTPScaledObjectConditionReason describes the reason why the condition transitioned
-// +kubebuilder:validation:Enum=ErrorCreatingExternalScaler;ErrorCreatingExternalScalerService;CreatedExternalScaler;ErrorCreatingInterceptorScaledObject;ErrorCreatingAppScaledObject;AppScaledObjectCreated;InterceptorScaledObjectCreated;ErrorCreatingInterceptor;ErrorCreatingInterceptorAdminService;ErrorCreatingInterceptorProxyService;InterceptorCreated;TerminatingResources;InterceptorDeploymentTerminated;InterceptorDeploymentTerminationError;InterceptorAdminServiceTerminationError;InterceptorAdminServiceTerminated;InterceptorProxyServiceTerminationError;InterceptorProxyServiceTerminated;ExternalScalerDeploymentTerminationError;ExternalScalerDeploymentTerminated;ExternalScalerServiceTerminationError;ExternalScalerServiceTerminated;InterceptorScaledObjectTerminated;AppScaledObjectTerminated;AppScaledObjectTerminationError;InterceptorScaledObjectTerminationError;PendingCreation;HTTPScaledObjectIsReady;
+// +kubebuilder:validation:Enum=ErrorCreatingAppScaledObject;AppScaledObjectCreated;TerminatingResources;AppScaledObjectTerminated;AppScaledObjectTerminationError;PendingCreation;HTTPScaledObjectIsReady;
type HTTPScaledObjectConditionReason string

const (
-	ErrorCreatingExternalScaler          HTTPScaledObjectConditionReason = "ErrorCreatingExternalScaler"
-	ErrorCreatingExternalScalerService   HTTPScaledObjectConditionReason = "ErrorCreatingExternalScalerService"
-	CreatedExternalScaler                HTTPScaledObjectConditionReason = "CreatedExternalScaler"
-	ErrorCreatingInterceptorScaledObject HTTPScaledObjectConditionReason = "ErrorCreatingInterceptorScaledObject"
	ErrorCreatingAppScaledObject    HTTPScaledObjectConditionReason = "ErrorCreatingAppScaledObject"
	AppScaledObjectCreated          HTTPScaledObjectConditionReason = "AppScaledObjectCreated"
-	InterceptorScaledObjectCreated       HTTPScaledObjectConditionReason = "InterceptorScaledObjectCreated"
-	ErrorCreatingInterceptor             HTTPScaledObjectConditionReason = "ErrorCreatingInterceptor"
-	ErrorCreatingInterceptorAdminService HTTPScaledObjectConditionReason = "ErrorCreatingInterceptorAdminService"
-	ErrorCreatingInterceptorProxyService HTTPScaledObjectConditionReason = "ErrorCreatingInterceptorProxyService"
-	InterceptorCreated                   HTTPScaledObjectConditionReason = "InterceptorCreated"
	TerminatingResources            HTTPScaledObjectConditionReason = "TerminatingResources"
-	InterceptorDeploymentTerminated          HTTPScaledObjectConditionReason = "InterceptorDeploymentTerminated"
-	InterceptorDeploymentTerminationError    HTTPScaledObjectConditionReason = "InterceptorDeploymentTerminationError"
-	InterceptorAdminServiceTerminationError  HTTPScaledObjectConditionReason = "InterceptorAdminServiceTerminationError"
-	InterceptorAdminServiceTerminated        HTTPScaledObjectConditionReason = "InterceptorAdminServiceTerminated"
-	InterceptorProxyServiceTerminationError  HTTPScaledObjectConditionReason = "InterceptorProxyServiceTerminationError"
-	InterceptorProxyServiceTerminated        HTTPScaledObjectConditionReason = "InterceptorProxyServiceTerminated"
-	ExternalScalerDeploymentTerminationError HTTPScaledObjectConditionReason = "ExternalScalerDeploymentTerminationError"
-	ExternalScalerDeploymentTerminated       HTTPScaledObjectConditionReason = "ExternalScalerDeploymentTerminated"
-	ExternalScalerServiceTerminationError    HTTPScaledObjectConditionReason = "ExternalScalerServiceTerminationError"
-	ExternalScalerServiceTerminated          HTTPScaledObjectConditionReason = "ExternalScalerServiceTerminated"
-	InterceptorScaledObjectTerminated        HTTPScaledObjectConditionReason = "InterceptorScaledObjectTerminated"
	AppScaledObjectTerminated       HTTPScaledObjectConditionReason = "AppScaledObjectTerminated"
	AppScaledObjectTerminationError HTTPScaledObjectConditionReason = "AppScaledObjectTerminationError"
-	InterceptorScaledObjectTerminationError  HTTPScaledObjectConditionReason = "InterceptorScaledObjectTerminationError"
	PendingCreation                 HTTPScaledObjectConditionReason = "PendingCreation"
	HTTPScaledObjectIsReady         HTTPScaledObjectConditionReason = "HTTPScaledObjectIsReady"
)
const (
@@ -109,6 +88,10 @@ type ReplicaStruct struct {

// HTTPScaledObjectSpec defines the desired state of HTTPScaledObject
type HTTPScaledObjectSpec struct {
+	// The host to route. All requests with this host in the "Host"
+	// header will be routed to the Service and Port specified
+	// in the scaleTargetRef
+	Host string `json:"host"`
	// The name of the deployment to route HTTP requests to (and to autoscale). Either this
	// or Image must be set
	ScaleTargetRef *ScaleTargetRef `json:"scaleTargetRef"`

View File

@@ -29,7 +29,7 @@ func (in *HTTPScaledObject) DeepCopyInto(out *HTTPScaledObject) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
-	out.Spec = in.Spec
+	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}
@@ -43,12 +43,8 @@ func (in *HTTPScaledObject) DeepCopy() *HTTPScaledObject {
	return out
}

-// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *HTTPScaledObject) DeepCopyObject() runtime.Object {
-	if c := in.DeepCopy(); c != nil {
-		return c
-	}
-	return nil
+	return in.DeepCopy()
}

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
@@ -101,6 +97,12 @@ func (in *HTTPScaledObjectList) DeepCopyObject() runtime.Object {

// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HTTPScaledObjectSpec) DeepCopyInto(out *HTTPScaledObjectSpec) {
	*out = *in
+	if in.ScaleTargetRef != nil {
+		in, out := &in.ScaleTargetRef, &out.ScaleTargetRef
+		*out = new(ScaleTargetRef)
+		**out = **in
+	}
+	out.Replicas = in.Replicas
}

// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HTTPScaledObjectSpec.
@@ -132,3 +134,33 @@ func (in *HTTPScaledObjectStatus) DeepCopy() *HTTPScaledObjectStatus {
	in.DeepCopyInto(out)
	return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ReplicaStruct) DeepCopyInto(out *ReplicaStruct) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ReplicaStruct.
func (in *ReplicaStruct) DeepCopy() *ReplicaStruct {
if in == nil {
return nil
}
out := new(ReplicaStruct)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleTargetRef) DeepCopyInto(out *ScaleTargetRef) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleTargetRef.
func (in *ScaleTargetRef) DeepCopy() *ScaleTargetRef {
if in == nil {
return nil
}
out := new(ScaleTargetRef)
in.DeepCopyInto(out)
return out
}

View File

@@ -59,6 +59,11 @@ spec:
      spec:
        description: HTTPScaledObjectSpec defines the desired state of HTTPScaledObject
        properties:
+         host:
+           description: The host to route. All requests with this host in the
+             "Host" header will be routed to the Service and Port specified in
+             the scaleTargetRef
+           type: string
          replicas:
            description: (optional) Replica information
            properties:
@@ -98,6 +103,7 @@ spec:
              format: int32
              type: integer
        required:
+       - host
        - scaleTargetRef
        type: object
      status:
@@ -115,32 +121,11 @@ spec:
          reason:
            description: The reason for the condition's last transition.
            enum:
-           - ErrorCreatingExternalScaler
-           - ErrorCreatingExternalScalerService
-           - CreatedExternalScaler
-           - ErrorCreatingInterceptorScaledObject
            - ErrorCreatingAppScaledObject
            - AppScaledObjectCreated
-           - InterceptorScaledObjectCreated
-           - ErrorCreatingInterceptor
-           - ErrorCreatingInterceptorAdminService
-           - ErrorCreatingInterceptorProxyService
-           - InterceptorCreated
            - TerminatingResources
-           - InterceptorDeploymentTerminated
-           - InterceptorDeploymentTerminationError
-           - InterceptorAdminServiceTerminationError
-           - InterceptorAdminServiceTerminated
-           - InterceptorProxyServiceTerminationError
-           - InterceptorProxyServiceTerminated
-           - ExternalScalerDeploymentTerminationError
-           - ExternalScalerDeploymentTerminated
-           - ExternalScalerServiceTerminationError
-           - ExternalScalerServiceTerminated
-           - InterceptorScaledObjectTerminated
            - AppScaledObjectTerminated
            - AppScaledObjectTerminationError
-           - InterceptorScaledObjectTerminationError
            - PendingCreation
            - HTTPScaledObjectIsReady
            type: string

View File

@@ -6,12 +6,6 @@ import (
	"github.com/kedacore/http-add-on/operator/api/v1alpha1"
)

-// DeploymentName is a convenience function for
-// a.HTTPScaledObject.Spec.ScaleTargetRef.Deployment
-func DeploymentName(httpso v1alpha1.HTTPScaledObject) string {
-	return httpso.Spec.ScaleTargetRef.Deployment
-}

// AppInfo contains configuration for the Interceptor and External Scaler, and holds
// data about the name and namespace of the scale target.
type AppInfo struct {
@@ -21,40 +15,6 @@ type AppInfo struct {
	ExternalScalerConfig ExternalScaler
}
// ExternalScalerServiceName is a convenience method to get the name of the external scaler
// service in Kubernetes
func (a AppInfo) ExternalScalerServiceName() string {
return fmt.Sprintf("%s-external-scaler", a.Name)
}
// ExternalScalerDeploymentName is a convenience method to get the name of the external scaler
// deployment in Kubernetes
func (a AppInfo) ExternalScalerDeploymentName() string {
return fmt.Sprintf("%s-external-scaler", a.Name)
}
// InterceptorAdminServiceName is a convenience method to get the name of the interceptor
// service for the admin endpoints in Kubernetes
func (a AppInfo) InterceptorAdminServiceName() string {
return fmt.Sprintf("%s-interceptor-admin", a.Name)
}
// InterceptorProxyServiceName is a convenience method to get the name of the interceptor
// service for the proxy in Kubernetes
func (a AppInfo) InterceptorProxyServiceName() string {
return fmt.Sprintf("%s-interceptor-proxy", a.Name)
}
// InterceptorDeploymentName is a convenience method to get the name of the interceptor
// deployment in Kubernetes
func (a AppInfo) InterceptorDeploymentName() string {
return fmt.Sprintf("%s-interceptor", a.Name)
}
func AppScaledObjectName(httpso *v1alpha1.HTTPScaledObject) string {
	return fmt.Sprintf("%s-app", httpso.Spec.ScaleTargetRef.Deployment)
}
func InterceptorScaledObjectName(httpso *v1alpha1.HTTPScaledObject) string {
return fmt.Sprintf("%s-interceptor", httpso.Spec.ScaleTargetRef.Deployment)
}

View File

@@ -0,0 +1,28 @@
package config
import (
"fmt"
"testing"
"github.com/kedacore/http-add-on/operator/api/v1alpha1"
"github.com/stretchr/testify/require"
)
func TestAppScaledObjectName(t *testing.T) {
r := require.New(t)
obj := &v1alpha1.HTTPScaledObject{
Spec: v1alpha1.HTTPScaledObjectSpec{
ScaleTargetRef: &v1alpha1.ScaleTargetRef{
Deployment: "TestAppScaledObjectNameDeployment",
},
},
}
name := AppScaledObjectName(obj)
r.Equal(
fmt.Sprintf(
"%s-app",
obj.Spec.ScaleTargetRef.Deployment,
),
name,
)
}

View File

@@ -2,76 +2,69 @@ package config

import (
	"fmt"
+	"strconv"

	"github.com/kedacore/http-add-on/pkg/env"
-	corev1 "k8s.io/api/core/v1"
)

// Interceptor holds static configuration info for the interceptor
type Interceptor struct {
-	Image      string
-	ProxyPort  int32
-	AdminPort  int32
-	PullPolicy corev1.PullPolicy
+	ServiceName string `envconfig:"INTERCEPTOR_SERVICE_NAME" required:"true"`
+	ProxyPort   int32  `envconfig:"INTERCEPTOR_PROXY_PORT" required:"true"`
+	AdminPort   int32  `envconfig:"INTERCEPTOR_ADMIN_PORT" required:"true"`
}

-func ensureValidPolicy(policy string) error {
-	converted := corev1.PullPolicy(policy)
-	switch (converted) {
-	case corev1.PullAlways, corev1.PullIfNotPresent, corev1.PullNever:
-		return nil
-	}
-	return fmt.Errorf("Policy %q is not a valid Pull Policy. Accepted values are: %s, %s, %s", policy, corev1.PullAlways, corev1.PullIfNotPresent, corev1.PullNever)
-}
+// ExternalScaler holds static configuration info for the external scaler
+type ExternalScaler struct {
+	ServiceName string `envconfig:"EXTERNAL_SCALER_SERVICE_NAME" required:"true"`
+	Port        int32  `envconfig:"EXTERNAL_SCALER_PORT" required:"true"`
+}
+
+func (e ExternalScaler) HostName(namespace string) string {
+	return fmt.Sprintf(
+		"%s.%s.svc.cluster.local:%d",
+		e.ServiceName,
+		namespace,
+		e.Port,
+	)
+}
+
+// AdminPortString returns i.AdminPort in string format, rather than
+// as an int32.
+func (i Interceptor) AdminPortString() string {
+	return strconv.Itoa(int(i.AdminPort))
+}

// NewInterceptorFromEnv gets interceptor configuration values from environment variables and/or
// sensible defaults if values were missing, and returns the Interceptor struct
// to match. Returns an error if required values were missing.
func NewInterceptorFromEnv() (*Interceptor, error) {
-	image, err := env.Get("KEDAHTTP_OPERATOR_INTERCEPTOR_IMAGE")
+	serviceName, err := env.Get("KEDAHTTP_INTERCEPTOR_SERVICE")
	if err != nil {
-		return nil, fmt.Errorf("missing KEDAHTTP_OPERATOR_INTERCEPTOR_IMAGE")
+		return nil, fmt.Errorf("missing 'KEDAHTTP_INTERCEPTOR_SERVICE'")
	}
-	adminPort := env.GetInt32Or("KEDAHTTP_OPERATOR_INTERCEPTOR_ADMIN_PORT", 8090)
-	proxyPort := env.GetInt32Or("KEDAHTTP_OPERATOR_INTERCEPTOR_PROXY_PORT", 8091)
-	pullPolicy := env.GetOr("INTERCEPTOR_PULL_POLICY", "Always")
-	if policyErr := ensureValidPolicy(pullPolicy); policyErr != nil {
-		return nil, policyErr
-	}
+	adminPort := env.GetInt32Or("KEDAHTTP_INTERCEPTOR_ADMIN_PORT", 8090)
+	proxyPort := env.GetInt32Or("KEDAHTTP_INTERCEPTOR_PROXY_PORT", 8091)
	return &Interceptor{
-		Image:      image,
+		ServiceName: serviceName,
		AdminPort:   adminPort,
		ProxyPort:   proxyPort,
-		PullPolicy: corev1.PullPolicy(pullPolicy),
	}, nil
}

-// ExternalScaler holds static configuration info for the external scaler
-type ExternalScaler struct {
-	Image      string
-	Port       int32
-	PullPolicy corev1.PullPolicy
-}
-
// NewExternalScalerFromEnv gets external scaler configuration values from environment variables and/or
// sensible defaults if values were missing, and returns the ExternalScaler struct
// to match. Returns an error if required values were missing.
func NewExternalScalerFromEnv() (*ExternalScaler, error) {
-	image, err := env.Get("KEDAHTTP_OPERATOR_EXTERNAL_SCALER_IMAGE")
-	port := env.GetInt32Or("KEDAHTTP_OPERATOR_EXTERNAL_SCALER_PORT", 8091)
+	serviceName, err := env.Get("KEDAHTTP_OPERATOR_EXTERNAL_SCALER_SERVICE")
	if err != nil {
-		return nil, fmt.Errorf("Missing KEDAHTTP_EXTERNAL_SCALER_IMAGE")
+		return nil, fmt.Errorf("missing 'KEDAHTTP_OPERATOR_EXTERNAL_SCALER_SERVICE'")
	}
-	pullPolicy := env.GetOr("SCALER_PULL_POLICY", "Always")
-	if policyErr := ensureValidPolicy(pullPolicy); policyErr != nil {
-		return nil, policyErr
-	}
+	port := env.GetInt32Or("KEDAHTTP_OPERATOR_EXTERNAL_SCALER_PORT", 8091)
	return &ExternalScaler{
-		Image:      image,
+		ServiceName: serviceName,
		Port:        port,
-		PullPolicy: corev1.PullPolicy(pullPolicy),
	}, nil
}

View File

@@ -0,0 +1,27 @@
package config
import (
"fmt"
"strings"
"testing"
"github.com/stretchr/testify/require"
)
func TestExternalScalerHostName(t *testing.T) {
r := require.New(t)
sc := ExternalScaler{
ServiceName: "TestExternalScalerHostNameSvc",
Port: int32(8098),
}
const ns = "testns"
hst := sc.HostName(ns)
spl := strings.Split(hst, ".")
r.Equal(5, len(spl), "HostName should return a hostname with 5 parts")
r.Equal(sc.ServiceName, spl[0])
r.Equal(ns, spl[1])
r.Equal("svc", spl[2])
r.Equal("cluster", spl[3])
r.Equal(fmt.Sprintf("local:%d", sc.Port), spl[4])
}
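For reference, a hypothetical call (the service name, namespace, and port below are placeholders) shows the shape of the hostname this test asserts on:

	sc := ExternalScaler{ServiceName: "myscaler", Port: 9090}
	sc.HostName("myns") // "myscaler.myns.svc.cluster.local:9090"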

View File

@@ -1,203 +0,0 @@
package controllers
import (
"context"
"fmt"
"time"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/operator/api/v1alpha1"
"github.com/kedacore/http-add-on/operator/controllers/config"
"github.com/kedacore/http-add-on/pkg/k8s"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
// creates the external scaler and returns the cluster-DNS hostname of it.
// if something went wrong creating it, returns empty string and a non-nil error
func createExternalScaler(
ctx context.Context,
appInfo config.AppInfo,
cl client.Client,
logger logr.Logger,
httpso *v1alpha1.HTTPScaledObject,
) (string, error) {
scalerPort := appInfo.ExternalScalerConfig.Port
healthCheckPort := scalerPort + 1
scalerDeployment := k8s.NewDeployment(
appInfo.Namespace,
appInfo.ExternalScalerDeploymentName(),
appInfo.ExternalScalerConfig.Image,
[]int32{
appInfo.ExternalScalerConfig.Port,
},
[]corev1.EnvVar{
{
Name: "KEDA_HTTP_SCALER_PORT",
Value: fmt.Sprintf("%d", scalerPort),
},
{
Name: "KEDA_HTTP_HEALTH_PORT",
Value: fmt.Sprintf("%d", healthCheckPort),
},
{
Name: "KEDA_HTTP_SCALER_TARGET_ADMIN_NAMESPACE",
Value: appInfo.Namespace,
},
{
Name: "KEDA_HTTP_SCALER_TARGET_ADMIN_SERVICE",
Value: appInfo.InterceptorAdminServiceName(),
},
{
Name: "KEDA_HTTP_SCALER_TARGET_ADMIN_PORT",
Value: fmt.Sprintf("%d", appInfo.InterceptorConfig.AdminPort),
},
{
Name: "KEDA_HTTP_SCALER_TARGET_PENDING_REQUESTS",
Value: fmt.Sprintf("%d", httpso.Spec.TargetPendingRequests),
},
},
k8s.Labels(appInfo.ExternalScalerDeploymentName()),
appInfo.ExternalScalerConfig.PullPolicy,
)
if err := k8s.AddLivenessProbe(
scalerDeployment,
"/livez",
int(healthCheckPort),
); err != nil {
logger.Error(err, "Creating liveness check")
condition := v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingExternalScaler).SetMessage(err.Error())
httpso.AddCondition(*condition)
return "", err
}
if err := k8s.AddReadinessProbe(
scalerDeployment,
"/healthz",
int(healthCheckPort),
); err != nil {
logger.Error(err, "Creating readiness check")
condition := v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingExternalScaler).SetMessage(err.Error())
httpso.AddCondition(*condition)
return "", err
}
logger.Info("Creating external scaler Deployment", "Deployment", *scalerDeployment)
if err := cl.Create(ctx, scalerDeployment); err != nil {
if errors.IsAlreadyExists(err) {
logger.Info("External scaler deployment already exists, moving on")
} else {
logger.Error(err, "Creating scaler deployment")
condition := v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingExternalScaler).SetMessage(err.Error())
httpso.AddCondition(*condition)
return "", err
}
}
// NOTE: Scaler port is fixed here because it's a fixed on the scaler main (@see ../scaler/main.go:17)
servicePorts := []corev1.ServicePort{
k8s.NewTCPServicePort(
"externalscaler",
appInfo.ExternalScalerConfig.Port,
appInfo.ExternalScalerConfig.Port,
),
}
scalerService := k8s.NewService(
appInfo.Namespace,
appInfo.ExternalScalerServiceName(),
servicePorts,
corev1.ServiceTypeClusterIP,
k8s.Labels(appInfo.ExternalScalerDeploymentName()),
)
logger.Info("Creating external scaler Service", "Service", *scalerService)
if err := cl.Create(ctx, scalerService); err != nil {
if errors.IsAlreadyExists(err) {
logger.Info("External scaler service already exists, moving on")
} else {
logger.Error(err, "Creating scaler service")
condition := v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingExternalScalerService).SetMessage(err.Error())
httpso.AddCondition(*condition)
return "", err
}
}
condition := v1alpha1.CreateCondition(v1alpha1.Created, metav1.ConditionTrue, v1alpha1.CreatedExternalScaler).SetMessage("External scaler object is created")
httpso.AddCondition(*condition)
externalScalerHostName := fmt.Sprintf(
"%s.%s.svc.cluster.local:%d",
appInfo.ExternalScalerServiceName(),
appInfo.Namespace,
appInfo.ExternalScalerConfig.Port,
)
return externalScalerHostName, nil
}
// waitForScaler uses the gRPC scaler client's IsActive call to determine
// whether the scaler is active, retrying numRetries times with retryDelay
// in between each retry.
//
// This function considers the scaler to be active when IsActive returns
// a nil error and a non-nil IsActiveResponse type. If that happens, it immediately
// returns a nil error. If that doesn't happen after all retries, returns a non-nil error.
//
// waitForScaler also establishes a gRPC client connection and may return
// a non-nil error if that fails.
func waitForScaler(
ctx context.Context,
cl client.Client,
scalerDeplNS,
scalerDeplName string,
retries uint,
retryDelay time.Duration,
) error {
checkStatus := func() error {
depl := &appsv1.Deployment{}
if err := cl.Get(ctx, client.ObjectKey{
Namespace: scalerDeplNS,
Name: scalerDeplName,
}, depl); err != nil {
return err
}
if depl.Status.ReadyReplicas > 0 {
return nil
}
return fmt.Errorf(
"No replicas ready for scaler deployment %s/%s",
scalerDeplNS,
scalerDeplName,
)
}
// this returns an error if the context is done, so need to
// always bail out if this gets a non-nil
waitForRetry := func(ctx context.Context) error {
t := time.NewTimer(retryDelay)
defer t.Stop()
select {
case <-t.C:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
for tryNum := uint(0); tryNum < retries; tryNum++ {
statusErr := checkStatus()
if statusErr == nil {
return nil
}
if retryErr := waitForRetry(ctx); retryErr != nil {
return retryErr
}
}
return fmt.Errorf(
"Scaler failed to start up within %d",
retryDelay*time.Duration(retries),
)
}

View File

@@ -1,77 +0,0 @@
package controllers
import (
"fmt"
"time"
"github.com/kedacore/http-add-on/operator/api/v1alpha1"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
var _ = Describe("ExternalScaler", func() {
Context("Creating the external scaler", func() {
var testInfra *commonTestInfra
BeforeEach(func() {
testInfra = newCommonTestInfra("testns", "testapp")
})
It("Should properly create the Deployment and Service", func() {
scalerHostName, err := createExternalScaler(
testInfra.ctx,
testInfra.cfg,
testInfra.cl,
testInfra.logger,
&testInfra.httpso,
)
Expect(err).To(BeNil())
cfg := testInfra.cfg
Expect(scalerHostName).To(Equal(fmt.Sprintf(
"%s.%s.svc.cluster.local:%d",
cfg.ExternalScalerServiceName(),
cfg.Namespace,
cfg.ExternalScalerConfig.Port,
)))
// // make sure that httpso has the right conditions on it
Expect(len(testInfra.httpso.Status.Conditions)).To(Equal(1))
cond1 := testInfra.httpso.Status.Conditions[0]
cond1ts, err := time.Parse(time.RFC3339, cond1.Timestamp)
Expect(err).To(BeNil())
Expect(time.Now().Sub(cond1ts) >= 0).To(BeTrue())
Expect(cond1.Type).To(Equal(v1alpha1.Created))
Expect(cond1.Status).To(Equal(metav1.ConditionTrue))
Expect(cond1.Reason).To(Equal(v1alpha1.CreatedExternalScaler))
// check that the external scaler deployment was created
deployment := new(appsv1.Deployment)
err = testInfra.cl.Get(testInfra.ctx, client.ObjectKey{
Name: testInfra.cfg.ExternalScalerDeploymentName(),
Namespace: testInfra.cfg.Namespace,
}, deployment)
Expect(err).To(BeNil())
// check that the external scaler service deployment object has liveness
// and readiness probes set to the correct values
Expect(len(deployment.Spec.Template.Spec.Containers)).To(Equal(1))
container := deployment.Spec.Template.Spec.Containers[0]
Expect(container.LivenessProbe).To(Not(BeNil()))
Expect(container.LivenessProbe.Handler.HTTPGet).To(Not(BeNil()))
Expect(container.LivenessProbe.Handler.HTTPGet.Path).To(Equal("/livez"))
Expect(container.ReadinessProbe).To(Not(BeNil()))
Expect(container.ReadinessProbe.Handler.HTTPGet).To(Not(BeNil()))
Expect(container.ReadinessProbe.Handler.HTTPGet.Path).To(Equal("/healthz"))
// check that the external scaler service was created
service := new(corev1.Service)
err = testInfra.cl.Get(testInfra.ctx, client.ObjectKey{
Name: testInfra.cfg.ExternalScalerServiceName(),
Namespace: testInfra.cfg.Namespace,
}, service)
Expect(err).To(BeNil())
})
})
})

View File

@@ -31,6 +31,7 @@ import (
	httpv1alpha1 "github.com/kedacore/http-add-on/operator/api/v1alpha1"
	"github.com/kedacore/http-add-on/operator/controllers/config"
+	"github.com/kedacore/http-add-on/pkg/routing"
)

// HTTPScaledObjectReconciler reconciles a HTTPScaledObject object
@@ -40,6 +41,7 @@ type HTTPScaledObjectReconciler struct {
	Scheme               *runtime.Scheme
	InterceptorConfig    config.Interceptor
	ExternalScalerConfig config.ExternalScaler
+	RoutingTable         *routing.Table
}

// +kubebuilder:rbac:groups=keda.sh,resources=scaledobjects,verbs=get;list;watch;create;update;patch;delete

View File

@@ -2,15 +2,12 @@ package controllers

import (
	"context"
-	"time"

	"github.com/go-logr/logr"
	"github.com/kedacore/http-add-on/operator/api/v1alpha1"
	"github.com/kedacore/http-add-on/operator/controllers/config"
-	appsv1 "k8s.io/api/apps/v1"
-	corev1 "k8s.io/api/core/v1"
+	"github.com/kedacore/http-add-on/pkg/routing"
	apierrs "k8s.io/apimachinery/pkg/api/errors"
-	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
@@ -40,136 +37,6 @@ func (rec *HTTPScaledObjectReconciler) removeApplicationResources(
		appInfo.Namespace,
	)
// Delete interceptor deployment
interceptorDeployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: appInfo.InterceptorDeploymentName(),
Namespace: appInfo.Namespace,
},
}
if err := rec.Client.Delete(ctx, interceptorDeployment); err != nil {
if apierrs.IsNotFound(err) {
logger.Info("Interceptor deployment not found, moving on")
} else {
logger.Error(err, "Deleting interceptor deployment")
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Error,
v1.ConditionFalse,
v1alpha1.InterceptorDeploymentTerminationError,
).SetMessage(err.Error()))
return err
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Terminated,
v1.ConditionTrue,
v1alpha1.InterceptorDeploymentTerminated,
))
// Delete externalscaler deployment
externalScalerDeployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: appInfo.ExternalScalerDeploymentName(),
Namespace: appInfo.Namespace,
},
}
if err := rec.Client.Delete(ctx, externalScalerDeployment); err != nil {
if apierrs.IsNotFound(err) {
logger.Info("External scaler not found, moving on")
} else {
logger.Error(err, "Deleting external scaler deployment")
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Error,
v1.ConditionFalse,
v1alpha1.ExternalScalerDeploymentTerminationError,
).SetMessage(err.Error()))
return err
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Terminated,
v1.ConditionTrue,
v1alpha1.ExternalScalerDeploymentTerminated,
))
// Delete interceptor admin and proxy services
interceptorAdminService := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: appInfo.InterceptorAdminServiceName(),
Namespace: appInfo.Namespace,
},
}
interceptorProxyService := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: appInfo.InterceptorProxyServiceName(),
Namespace: appInfo.Namespace,
},
}
if err := rec.Client.Delete(ctx, interceptorAdminService); err != nil {
if apierrs.IsNotFound(err) {
logger.Info("Interceptor admin service not found, moving on")
} else {
logger.Error(err, "Deleting interceptor admin service")
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Error,
v1.ConditionFalse,
v1alpha1.InterceptorAdminServiceTerminationError,
).SetMessage(err.Error()))
return err
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Terminated,
v1.ConditionTrue,
v1alpha1.InterceptorAdminServiceTerminated,
))
if err := rec.Client.Delete(ctx, interceptorProxyService); err != nil {
if apierrs.IsNotFound(err) {
logger.Info("Interceptor proxy service not found, moving on")
} else {
logger.Error(err, "Deleting interceptor proxy service")
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Error,
v1.ConditionFalse,
v1alpha1.InterceptorProxyServiceTerminationError,
).SetMessage(err.Error()))
return err
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Terminated,
v1.ConditionTrue,
v1alpha1.InterceptorProxyServiceTerminated,
))
// Delete external scaler service
externalScalerService := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: appInfo.ExternalScalerServiceName(),
Namespace: appInfo.Namespace,
},
}
if err := rec.Client.Delete(ctx, externalScalerService); err != nil {
if apierrs.IsNotFound(err) {
logger.Info("External scaler service not found, moving on")
} else {
logger.Error(err, "Deleting external scaler service")
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Error,
v1.ConditionFalse,
v1alpha1.ExternalScalerServiceTerminationError,
).SetMessage(err.Error()))
return err
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Terminated,
v1.ConditionTrue,
v1alpha1.ExternalScalerServiceTerminated,
))
	// Delete App ScaledObject
	scaledObject := &unstructured.Unstructured{}
	scaledObject.SetNamespace(appInfo.Namespace)
@@ -198,26 +65,16 @@ func (rec *HTTPScaledObjectReconciler) removeApplicationResources(
		v1alpha1.AppScaledObjectTerminated,
	))

-	// delete interceptor ScaledObject
-	scaledObject.SetName(config.InterceptorScaledObjectName(httpso))
-	if err := rec.Client.Delete(ctx, scaledObject); err != nil {
-		if apierrs.IsNotFound(err) {
-			logger.Info("Interceptor ScaledObject not found, moving on")
-		} else {
-			logger.Error(err, "Deleting interceptor scaledobject")
-			httpso.AddCondition(*v1alpha1.CreateCondition(
-				v1alpha1.Error,
-				v1.ConditionFalse,
-				v1alpha1.InterceptorScaledObjectTerminationError,
-			).SetMessage(err.Error()))
-			return err
-		}
-	}
-	httpso.AddCondition(*v1alpha1.CreateCondition(
-		v1alpha1.Terminated,
-		v1.ConditionTrue,
-		v1alpha1.InterceptorScaledObjectTerminated,
-	))
+	if err := removeAndUpdateRoutingTable(
+		ctx,
+		logger,
+		rec.Client,
+		rec.RoutingTable,
+		httpso.Spec.Host,
+		httpso.ObjectMeta.Namespace,
+	); err != nil {
+		return err
+	}

	return nil
}
@@ -245,37 +102,8 @@ func (rec *HTTPScaledObjectReconciler) createOrUpdateApplicationResources(
		v1alpha1.PendingCreation,
	).SetMessage("Identified HTTPScaledObject creation signal"))

-	// CREATING INTERNAL ADD-ON OBJECTS
-	// Creating the dedicated interceptor
-	if err := createInterceptor(ctx, appInfo, rec.Client, logger, httpso); err != nil {
-		return err
-	}
-
-	// create dedicated external scaler for this app
-	externalScalerHostName, createScalerErr := createExternalScaler(
-		ctx,
-		appInfo,
-		rec.Client,
-		logger,
-		httpso,
-	)
-	if createScalerErr != nil {
-		return createScalerErr
-	}
-
-	if err := waitForScaler(
-		ctx,
-		rec.Client,
-		appInfo.Namespace,
-		appInfo.ExternalScalerDeploymentName(),
-		5,
-		500*time.Millisecond,
-	); err != nil {
-		return err
-	}
-
-	// create the KEDA core ScaledObjects (not the HTTP one) for the app deployment
-	// and the interceptor deployment.
+	// create the KEDA core ScaledObjects (not the HTTP one) for
+	// the app deployment and the interceptor deployment.
	// this needs to be submitted so that KEDA will scale both the app and
	// interceptor
	if err := createScaledObjects(
@@ -283,12 +111,26 @@ func (rec *HTTPScaledObjectReconciler) createOrUpdateApplicationResources(
		appInfo,
		rec.Client,
		logger,
-		externalScalerHostName,
+		appInfo.ExternalScalerConfig.HostName(appInfo.Namespace),
		httpso,
	); err != nil {
		return err
	}

+	if err := addAndUpdateRoutingTable(
+		ctx,
+		logger,
+		rec.Client,
+		rec.RoutingTable,
+		httpso.Spec.Host,
+		routing.NewTarget(
+			httpso.Spec.ScaleTargetRef.Service,
+			int(httpso.Spec.ScaleTargetRef.Port),
+			httpso.Spec.ScaleTargetRef.Deployment,
+		),
+		httpso.ObjectMeta.Namespace,
+	); err != nil {
+		return err
+	}

	return nil
}

View File

@@ -1,133 +0,0 @@
package controllers
import (
"context"
"fmt"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/operator/api/v1alpha1"
"github.com/kedacore/http-add-on/operator/controllers/config"
"github.com/kedacore/http-add-on/pkg/k8s"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func createInterceptor(
ctx context.Context,
appInfo config.AppInfo,
cl client.Client,
logger logr.Logger,
httpso *v1alpha1.HTTPScaledObject,
) error {
interceptorEnvs := []corev1.EnvVar{
// timeouts all have reasonable defaults in the interceptor config
// config regarding the origin
{
Name: "KEDA_HTTP_APP_SERVICE_NAME",
Value: appInfo.Name,
},
{
Name: "KEDA_HTTP_APP_SERVICE_PORT",
Value: fmt.Sprintf("%d", httpso.Spec.ScaleTargetRef.Port),
},
{
Name: "KEDA_HTTP_TARGET_DEPLOYMENT_NAME",
Value: httpso.Spec.ScaleTargetRef.Deployment,
},
{
Name: "KEDA_HTTP_NAMESPACE",
Value: httpso.Namespace,
},
// config about how the interceptor should serve
{
Name: "KEDA_HTTP_PROXY_PORT",
Value: fmt.Sprintf("%d", appInfo.InterceptorConfig.ProxyPort),
},
{
Name: "KEDA_HTTP_ADMIN_PORT",
Value: fmt.Sprintf("%d", appInfo.InterceptorConfig.AdminPort),
},
}
deployment := k8s.NewDeployment(
appInfo.Namespace,
appInfo.InterceptorDeploymentName(),
appInfo.InterceptorConfig.Image,
[]int32{
appInfo.InterceptorConfig.AdminPort,
appInfo.InterceptorConfig.ProxyPort,
},
interceptorEnvs,
k8s.Labels(appInfo.InterceptorDeploymentName()),
appInfo.InterceptorConfig.PullPolicy,
)
logger.Info("Creating interceptor Deployment", "Deployment", *deployment)
if err := cl.Create(ctx, deployment); err != nil {
if errors.IsAlreadyExists(err) {
logger.Info("Interceptor deployment already exists, moving on")
} else {
logger.Error(err, "Creating interceptor deployment")
httpso.AddCondition(*v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingInterceptor).SetMessage(err.Error()))
return err
}
}
// create two services for the interceptor:
// - for the public proxy
// - for the admin server (that has the /queue endpoint)
publicPorts := []corev1.ServicePort{
k8s.NewTCPServicePort(
"proxy",
// TODO: make this the public port - probably 80
80,
appInfo.InterceptorConfig.ProxyPort,
),
}
publicProxyService := k8s.NewService(
appInfo.Namespace,
appInfo.InterceptorProxyServiceName(),
publicPorts,
corev1.ServiceTypeClusterIP,
k8s.Labels(appInfo.InterceptorDeploymentName()),
)
adminPorts := []corev1.ServicePort{
k8s.NewTCPServicePort(
"admin",
appInfo.InterceptorConfig.AdminPort,
appInfo.InterceptorConfig.AdminPort,
),
}
adminService := k8s.NewService(
appInfo.Namespace,
appInfo.InterceptorAdminServiceName(),
adminPorts,
corev1.ServiceTypeClusterIP,
k8s.Labels(appInfo.InterceptorDeploymentName()),
)
adminErr := cl.Create(ctx, adminService)
proxyErr := cl.Create(ctx, publicProxyService)
if adminErr != nil {
if errors.IsAlreadyExists(adminErr) {
logger.Info("interceptor admin service already exists, moving on")
} else {
logger.Error(adminErr, "Creating interceptor admin service")
httpso.AddCondition(*v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingInterceptorAdminService).SetMessage(adminErr.Error()))
return adminErr
}
}
if proxyErr != nil {
if errors.IsAlreadyExists(adminErr) {
logger.Info("interceptor proxy service already exists, moving on")
} else {
logger.Error(proxyErr, "Creating interceptor proxy service")
httpso.AddCondition(*v1alpha1.CreateCondition(v1alpha1.Error, metav1.ConditionFalse, v1alpha1.ErrorCreatingInterceptorProxyService).SetMessage(proxyErr.Error()))
return proxyErr
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(v1alpha1.Created, metav1.ConditionTrue, v1alpha1.InterceptorCreated).SetMessage("Created interceptor"))
return nil
}

View File

@@ -0,0 +1,42 @@
package controllers
import (
"context"
"fmt"
"net/http"
"github.com/kedacore/http-add-on/pkg/k8s"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func pingInterceptors(
ctx context.Context,
cl client.Client,
httpCl *http.Client,
ns,
interceptorSvcName,
interceptorPort string,
) error {
endpointURLs, err := k8s.EndpointsForService(
ctx,
ns,
interceptorSvcName,
interceptorPort,
k8s.EndpointsFuncForControllerClient(cl),
)
if err != nil {
return errors.Wrap(err, "pingInterceptors")
}
errGrp, _ := errgroup.WithContext(ctx)
for _, endpointURL := range endpointURLs {
endpointStr := endpointURL.String()
errGrp.Go(func() error {
fullAddr := fmt.Sprintf("%s/routing_ping", endpointStr)
resp, err := httpCl.Get(fullAddr)
if err != nil {
return err
}
// close the body so the underlying connection can be reused
return resp.Body.Close()
})
}
return errGrp.Wait()
}
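For context, a hedged sketch of a hypothetical caller inside the same package; the namespace, Service name, and port literals are assumptions for the sketch, not values from this patch (the real values come from the operator's config):

// notifyInterceptors pings every interceptor replica so each one
// re-fetches the routing table after it changes.
// (assumes "time" is added to the imports above)
func notifyInterceptors(ctx context.Context, cl client.Client) error {
	httpCl := &http.Client{Timeout: 5 * time.Second}
	return pingInterceptors(
		ctx,
		cl,
		httpCl,
		"keda-http",         // namespace (assumed)
		"interceptor-admin", // admin Service name (assumed)
		"9090",              // admin port (assumed)
	)
}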


@ -0,0 +1,44 @@
package controllers
import (
"context"
"net/http"
"testing"
"github.com/kedacore/http-add-on/pkg/k8s"
kedanet "github.com/kedacore/http-add-on/pkg/net"
"github.com/stretchr/testify/require"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
func TestPingInterceptors(t *testing.T) {
const (
ns = "testns"
svcName = "testsvc"
)
r := require.New(t)
// create a new server (that we can introspect later on) to act
// like a fake interceptor. we expect that pingInterceptors()
// will make requests to this server
hdl := kedanet.NewTestHTTPHandlerWrapper(
http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(200)
}),
)
srv, url, err := kedanet.StartTestServer(hdl)
r.NoError(err)
defer srv.Close()
ctx := context.Background()
endpoints := k8s.FakeEndpointsForURL(url, ns, svcName, 2)
cl := fake.NewClientBuilder().WithObjects(endpoints).Build()
r.NoError(pingInterceptors(
ctx,
cl,
srv.Client(),
ns,
svcName,
url.Port(),
))
reqs := hdl.IncomingRequests()
r.Equal(len(endpoints.Subsets[0].Addresses), len(reqs))
}


@ -0,0 +1,119 @@
package controllers
import (
"context"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/k8s"
"github.com/kedacore/http-add-on/pkg/routing"
pkgerrs "github.com/pkg/errors"
"k8s.io/apimachinery/pkg/api/errors"
"sigs.k8s.io/controller-runtime/pkg/client"
)
func removeAndUpdateRoutingTable(
ctx context.Context,
lggr logr.Logger,
cl client.Client,
table *routing.Table,
host,
namespace string,
) error {
lggr = lggr.WithName("removeAndUpdateRoutingTable")
if err := table.RemoveTarget(host); err != nil {
lggr.Error(
err,
"could not remove host from routing table, progressing anyway",
"host",
host,
)
}
return updateRoutingMap(ctx, lggr, cl, namespace, table)
}
func addAndUpdateRoutingTable(
ctx context.Context,
lggr logr.Logger,
cl client.Client,
table *routing.Table,
host string,
target routing.Target,
namespace string,
) error {
lggr = lggr.WithName("addAndUpdateRoutingTable")
if err := table.AddTarget(host, target); err != nil {
lggr.Error(
err,
"could not add host to routing table, progressing anyway",
"host",
host,
)
}
return updateRoutingMap(ctx, lggr, cl, namespace, table)
}
func updateRoutingMap(
ctx context.Context,
lggr logr.Logger,
cl client.Client,
namespace string,
table *routing.Table,
) error {
lggr = lggr.WithName("updateRoutingMap")
routingConfigMap, err := k8s.GetConfigMap(ctx, cl, namespace, routing.ConfigMapRoutingTableName)
// if there is an error other than not found on the ConfigMap, we should
// fail
if err != nil && !errors.IsNotFound(err) {
lggr.Error(
err,
"other issue fetching the routing table ConfigMap",
"configMapName",
routing.ConfigMapRoutingTableName,
)
return pkgerrs.Wrap(err, "routing table ConfigMap fetch error")
}
// if either the routing table ConfigMap doesn't exist or for some reason it's
// nil in memory, we need to create it
if errors.IsNotFound(err) || routingConfigMap == nil {
lggr.Info(
"routing table ConfigMap didn't exist, creating it",
"configMapName",
routing.ConfigMapRoutingTableName,
)
routingTableLabels := map[string]string{
"control-plane": "operator",
"keda.sh/addon": "http-add-on",
"app": "http-add-on",
"name": "http-add-on-routing-table",
}
cm := k8s.NewConfigMap(
namespace,
routing.ConfigMapRoutingTableName,
routingTableLabels,
map[string]string{},
)
if err := routing.SaveTableToConfigMap(table, cm); err != nil {
return err
}
if err := k8s.CreateConfigMap(
ctx,
lggr,
cl,
cm,
); err != nil {
return err
}
} else {
newCM := routingConfigMap.DeepCopy()
if err := routing.SaveTableToConfigMap(table, newCM); err != nil {
return err
}
if _, patchErr := k8s.PatchConfigMap(ctx, lggr, cl, routingConfigMap, newCM); patchErr != nil {
return patchErr
}
}
return nil
}


@ -0,0 +1,63 @@
package controllers
import (
"context"
"testing"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/routing"
"github.com/stretchr/testify/require"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
func TestRoutingTable(t *testing.T) {
table := routing.NewTable()
const (
host = "myhost.com"
ns = "testns"
svcName = "testsvc"
deplName = "testdepl"
)
r := require.New(t)
ctx := context.Background()
cl := fake.NewClientBuilder().Build()
target := routing.Target{
Service: svcName,
Port: 8080,
Deployment: deplName,
}
r.NoError(addAndUpdateRoutingTable(
ctx,
logr.Discard(),
cl,
table,
host,
target,
ns,
))
// TODO: ensure that the ConfigMap was updated.
// requires
// https://github.com/kubernetes-sigs/controller-runtime/issues/1633
// to be implemented.
retTarget, err := table.Lookup(host)
r.NoError(err)
r.Equal(target, retTarget)
r.NoError(removeAndUpdateRoutingTable(
ctx,
logr.Discard(),
cl,
table,
host,
ns,
))
// TODO: ensure that the ConfigMap was updated.
// requires
// https://github.com/kubernetes-sigs/controller-runtime/issues/1633
// to be implemented
_, err = table.Lookup(host)
r.Error(err)
}


@ -29,6 +29,7 @@ func createScaledObjects(
config.AppScaledObjectName(httpso), config.AppScaledObjectName(httpso),
appInfo.Name, appInfo.Name,
externalScalerHostName, externalScalerHostName,
httpso.Spec.Host,
httpso.Spec.Replicas.Min, httpso.Spec.Replicas.Min,
httpso.Spec.Replicas.Max, httpso.Spec.Replicas.Max,
) )
@ -36,18 +37,6 @@ func createScaledObjects(
return appErr return appErr
} }
interceptorScaledObject, interceptorErr := k8s.NewScaledObject(
appInfo.Namespace,
config.InterceptorScaledObjectName(httpso),
appInfo.InterceptorDeploymentName(),
externalScalerHostName,
httpso.Spec.Replicas.Min,
httpso.Spec.Replicas.Max,
)
if interceptorErr != nil {
return interceptorErr
}
logger.Info("Creating App ScaledObject", "ScaledObject", *appScaledObject) logger.Info("Creating App ScaledObject", "ScaledObject", *appScaledObject)
if err := cl.Create(ctx, appScaledObject); err != nil { if err := cl.Create(ctx, appScaledObject); err != nil {
if errors.IsAlreadyExists(err) { if errors.IsAlreadyExists(err) {
@ -69,27 +58,5 @@ func createScaledObjects(
v1alpha1.AppScaledObjectCreated, v1alpha1.AppScaledObjectCreated,
).SetMessage("App ScaledObject created")) ).SetMessage("App ScaledObject created"))
// Interceptor ScaledObject
logger.Info("Creating Interceptor ScaledObject", "ScaledObject", *interceptorScaledObject)
if err := cl.Create(ctx, interceptorScaledObject); err != nil {
if errors.IsAlreadyExists(err) {
logger.Info("Interceptor ScaledObject already exists, moving on")
} else {
logger.Error(err, "Creating Interceptor ScaledObject")
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Error,
v1.ConditionFalse,
v1alpha1.ErrorCreatingInterceptorScaledObject,
).SetMessage(err.Error()))
return err
}
}
httpso.AddCondition(*v1alpha1.CreateCondition(
v1alpha1.Created,
v1.ConditionTrue,
v1alpha1.InterceptorScaledObjectCreated,
).SetMessage("Interceptor Scaled object created"))
return nil return nil
} }


@ -33,11 +33,9 @@ var _ = Describe("UserApp", func() {
) )
Expect(err).To(BeNil()) Expect(err).To(BeNil())
// make sure that httpso has the right conditions on it: // make sure that httpso has the AppScaledObjectCreated
// // condition on it
// - AppScaledObjectCreated Expect(len(testInfra.httpso.Status.Conditions)).To(Equal(1))
// - InterceptorScaledObjectCreated
Expect(len(testInfra.httpso.Status.Conditions)).To(Equal(2))
cond1 := testInfra.httpso.Status.Conditions[0] cond1 := testInfra.httpso.Status.Conditions[0]
cond1ts, err := time.Parse(time.RFC3339, cond1.Timestamp) cond1ts, err := time.Parse(time.RFC3339, cond1.Timestamp)
@ -47,14 +45,6 @@ var _ = Describe("UserApp", func() {
Expect(cond1.Status).To(Equal(metav1.ConditionTrue)) Expect(cond1.Status).To(Equal(metav1.ConditionTrue))
Expect(cond1.Reason).To(Equal(v1alpha1.AppScaledObjectCreated)) Expect(cond1.Reason).To(Equal(v1alpha1.AppScaledObjectCreated))
cond2 := testInfra.httpso.Status.Conditions[1]
cond2ts, err := time.Parse(time.RFC3339, cond2.Timestamp)
Expect(err).To(BeNil())
Expect(time.Since(cond2ts) >= 0).To(BeTrue())
Expect(cond2.Type).To(Equal(v1alpha1.Created))
Expect(cond2.Status).To(Equal(metav1.ConditionTrue))
Expect(cond2.Reason).To(Equal(v1alpha1.InterceptorScaledObjectCreated))
// check that the app ScaledObject was created // check that the app ScaledObject was created
u := &unstructured.Unstructured{} u := &unstructured.Unstructured{}
u.SetGroupVersionKind(schema.GroupVersionKind{ u.SetGroupVersionKind(schema.GroupVersionKind{
@ -78,22 +68,6 @@ var _ = Describe("UserApp", func() {
Expect(err).To(BeNil()) Expect(err).To(BeNil())
Expect(spec["minReplicaCount"]).To(BeNumerically("==", testInfra.httpso.Spec.Replicas.Min)) Expect(spec["minReplicaCount"]).To(BeNumerically("==", testInfra.httpso.Spec.Replicas.Min))
Expect(spec["maxReplicaCount"]).To(BeNumerically("==", testInfra.httpso.Spec.Replicas.Max)) Expect(spec["maxReplicaCount"]).To(BeNumerically("==", testInfra.httpso.Spec.Replicas.Max))
// check that the interceptor ScaledObject was created
objectKey.Name = config.InterceptorScaledObjectName(&testInfra.httpso)
err = testInfra.cl.Get(testInfra.ctx, objectKey, u)
Expect(err).To(BeNil())
metadata, err = getKeyAsMap(u.Object, "metadata")
Expect(err).To(BeNil())
Expect(metadata["namespace"]).To(Equal(testInfra.ns))
Expect(metadata["name"]).To(Equal(config.InterceptorScaledObjectName(&testInfra.httpso)))
spec, err = getKeyAsMap(u.Object, "spec")
Expect(err).To(BeNil())
Expect(spec["minReplicaCount"]).To(BeNumerically("==", testInfra.httpso.Spec.Replicas.Min))
Expect(spec["maxReplicaCount"]).To(BeNumerically("==", testInfra.httpso.Spec.Replicas.Max))
}) })
}) })
}) })


@ -17,9 +17,13 @@ limitations under the License.
package main package main
import ( import (
"context"
"flag" "flag"
"fmt"
"net/http"
"os" "os"
"golang.org/x/sync/errgroup"
"k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme" clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp" _ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
@ -29,6 +33,7 @@ import (
httpv1alpha1 "github.com/kedacore/http-add-on/operator/api/v1alpha1" httpv1alpha1 "github.com/kedacore/http-add-on/operator/api/v1alpha1"
"github.com/kedacore/http-add-on/operator/controllers" "github.com/kedacore/http-add-on/operator/controllers"
"github.com/kedacore/http-add-on/operator/controllers/config" "github.com/kedacore/http-add-on/operator/controllers/config"
"github.com/kedacore/http-add-on/pkg/routing"
// +kubebuilder:scaffold:imports // +kubebuilder:scaffold:imports
) )
@ -47,10 +52,17 @@ func init() {
func main() { func main() {
var metricsAddr string var metricsAddr string
var enableLeaderElection bool var enableLeaderElection bool
var adminPort int
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.") flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false, flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. "+ "Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller manager.") "Enabling this will ensure there is only one active controller manager.")
flag.IntVar(
&adminPort,
"admin-port",
9090,
"The port on which to run the admin server. This is the port on which RPCs will be accepted to get the routing table",
)
flag.Parse() flag.Parse()
ctrl.SetLogger(zap.New(zap.UseDevMode(true))) ctrl.SetLogger(zap.New(zap.UseDevMode(true)))
@ -77,21 +89,42 @@ func main() {
setupLog.Error(err, "unable to get external scaler configuration") setupLog.Error(err, "unable to get external scaler configuration")
os.Exit(1) os.Exit(1)
} }
if err = (&controllers.HTTPScaledObjectReconciler{ routingTable := routing.NewTable()
if err := (&controllers.HTTPScaledObjectReconciler{
Client: mgr.GetClient(), Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("HTTPScaledObject"), Log: ctrl.Log.WithName("controllers").WithName("HTTPScaledObject"),
Scheme: mgr.GetScheme(), Scheme: mgr.GetScheme(),
InterceptorConfig: *interceptorCfg, InterceptorConfig: *interceptorCfg,
ExternalScalerConfig: *externalScalerCfg, ExternalScalerConfig: *externalScalerCfg,
RoutingTable: routingTable,
}).SetupWithManager(mgr); err != nil { }).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "HTTPScaledObject") setupLog.Error(err, "unable to create controller", "controller", "HTTPScaledObject")
os.Exit(1) os.Exit(1)
} }
// +kubebuilder:scaffold:builder // +kubebuilder:scaffold:builder
setupLog.Info("starting manager") ctx := context.Background()
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { errGrp, _ := errgroup.WithContext(ctx)
setupLog.Error(err, "problem running manager")
os.Exit(1) // start the control loop
} errGrp.Go(func() error {
setupLog.Info("starting manager")
return mgr.Start(ctrl.SetupSignalHandler())
})
// start the admin server to serve routing table information
// to the interceptors
errGrp.Go(func() error {
mux := http.NewServeMux()
routing.AddFetchRoute(setupLog, mux, routingTable)
addr := fmt.Sprintf(":%d", adminPort)
setupLog.Info(
"starting admin RPC server",
"port",
adminPort,
)
return http.ListenAndServe(addr, mux)
})
setupLog.Error(errGrp.Wait(), "running the operator")
} }
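The net effect of this diff is easier to see as one block: the controller manager and the new admin server run under a single errgroup, so a failure in either tears down the whole process. A simplified restatement, not a literal excerpt:

errGrp, _ := errgroup.WithContext(context.Background())
// the reconciling control loop
errGrp.Go(func() error {
	return mgr.Start(ctrl.SetupSignalHandler())
})
// the admin server that serves the routing table to interceptors
errGrp.Go(func() error {
	mux := http.NewServeMux()
	routing.AddFetchRoute(setupLog, mux, routingTable)
	return http.ListenAndServe(fmt.Sprintf(":%d", adminPort), mux)
})
// Wait returns the first non-nil error from either goroutine
setupLog.Error(errGrp.Wait(), "running the operator")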


@ -27,7 +27,22 @@ func DockerBuild(image, dockerfileLocation, context string) error {
image, image,
"-f", "-f",
dockerfileLocation, dockerfileLocation,
".", context,
)
}
func DockerBuildACR(registry, image, dockerfileLocation, context string) error {
return sh.RunV(
"az",
"acr",
"build",
"--image",
image,
"--registry",
registry,
"--file",
dockerfileLocation,
context,
) )
} }


@ -1,58 +0,0 @@
package http
import "sync"
// QueueCountReader represents the size of a virtual HTTP queue, possibly
// distributed across multiple HTTP server processes. It only can access
// the current size of the queue, not any other information about requests.
//
// It is concurrency safe.
type QueueCountReader interface {
Current() (int, error)
}
// QueueCounter represents a virtual HTTP queue, possibly distributed across
// multiple HTTP server processes. It can only increase or decrease the
// size of the queue or read the current size of the queue, but not read
// or modify any other information about it.
//
// Both the mutation and read functionality is concurrency safe, but
// the read functionality is point-in-time only
type QueueCounter interface {
QueueCountReader
Resize(int) error
}
// MemoryQueue is a reference QueueCounter implementation that holds the
// HTTP queue in memory only. Always use NewMemoryQueue to create one
// of these.
type MemoryQueue struct {
count int
mut *sync.RWMutex
}
// NewMemoryQueue creates a new empty memory queue
func NewMemoryQueue() *MemoryQueue {
lock := new(sync.RWMutex)
return &MemoryQueue{
count: 0,
mut: lock,
}
}
// Resize changes the size of the queue. Further calls to Current() return
// the newly calculated size if no other Resize() calls were made in the
// interim.
func (r *MemoryQueue) Resize(delta int) error {
r.mut.Lock()
defer r.mut.Unlock()
r.count += delta
return nil
}
// Current returns the current size of the queue.
func (r *MemoryQueue) Current() (int, error) {
r.mut.RLock()
defer r.mut.RUnlock()
return r.count, nil
}

pkg/http/server.go (new file)

@ -0,0 +1,19 @@
package http
import (
"context"
"net/http"
)
func ServeContext(ctx context.Context, addr string, hdl http.Handler) error {
srv := &http.Server{
Handler: hdl,
Addr: addr,
}
go func() {
<-ctx.Done()
// ctx is already canceled at this point, so shut down with a
// fresh context to let in-flight requests drain
_ = srv.Shutdown(context.Background())
}()
return srv.ListenAndServe()
}
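A hedged sketch of a typical call site — the route and address are illustrative; canceling ctx shuts the server down and makes ServeContext return http.ErrServerClosed:

// runAdminServer is a hypothetical caller that ties the server's
// lifetime to ctx
func runAdminServer(ctx context.Context) error {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return ServeContext(ctx, ":9090", mux)
}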

pkg/http/server_test.go (new file)

@ -0,0 +1,36 @@
package http
import (
"context"
"errors"
"net/http"
"testing"
"time"
"github.com/stretchr/testify/require"
)
func TestServeContext(t *testing.T) {
r := require.New(t)
ctx, done := context.WithCancel(
context.Background(),
)
hdl := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("foo", "bar")
w.Write([]byte("hello world"))
})
addr := "localhost:1234"
const cancelDur = 500 * time.Millisecond
go func() {
time.Sleep(cancelDur)
done()
}()
start := time.Now()
err := ServeContext(ctx, addr, hdl)
elapsed := time.Since(start)
r.Error(err)
r.True(errors.Is(err, http.ErrServerClosed), "error is not http.ErrServerClosed (%v)", err)
r.Greater(elapsed, cancelDur)
r.Less(elapsed, cancelDur*4)
}

pkg/http/test_utils.go (new file)

@ -0,0 +1,15 @@
package http
import (
nethttp "net/http"
"net/http/httptest"
)
func NewTestCtx(
method,
path string,
) (*nethttp.Request, *httptest.ResponseRecorder) {
req := httptest.NewRequest(method, path, nil)
rec := httptest.NewRecorder()
return req, rec
}


@ -5,6 +5,7 @@ import (
"k8s.io/client-go/dynamic" "k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes" "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest" "k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
) )
// NewClientset gets a new Kubernetes clientset, or calls log.Fatal // NewClientset gets a new Kubernetes clientset, or calls log.Fatal
@ -27,3 +28,9 @@ func NewClientset() (*kubernetes.Clientset, dynamic.Interface, error) {
} }
return clientset, dynamic, nil return clientset, dynamic, nil
} }
// ObjKey creates a new client.ObjectKey with the given
// name and namespace
func ObjKey(ns, name string) client.ObjectKey {
return client.ObjectKey{Namespace: ns, Name: name}
}

pkg/k8s/config_map.go (new file)

@ -0,0 +1,145 @@
package k8s
import (
"context"
"github.com/go-logr/logr"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
"sigs.k8s.io/controller-runtime/pkg/client"
)
// ConfigMapGetter is a pared down version of a ConfigMapInterface
// (found here: https://pkg.go.dev/k8s.io/client-go@v0.21.3/kubernetes/typed/core/v1#ConfigMapInterface).
//
// Pass this whenever possible to functions that only need to get individual ConfigMaps
// from Kubernetes, and nothing else.
type ConfigMapGetter interface {
Get(ctx context.Context, name string, opts metav1.GetOptions) (*corev1.ConfigMap, error)
}
// ConfigMapWatcher is a pared down version of a ConfigMapInterface
// (found here: https://pkg.go.dev/k8s.io/client-go@v0.21.3/kubernetes/typed/core/v1#ConfigMapInterface).
//
// Pass this whenever possible to functions that only need to watch for ConfigMaps
// from Kubernetes, and nothing else.
type ConfigMapWatcher interface {
Watch(ctx context.Context, opts metav1.ListOptions) (watch.Interface, error)
}
// ConfigMapGetterWatcher is a pared down version of a ConfigMapInterface
// (found here: https://pkg.go.dev/k8s.io/client-go@v0.21.3/kubernetes/typed/core/v1#ConfigMapInterface).
//
// Pass this whenever possible to functions that only need to watch for ConfigMaps
// from Kubernetes, and nothing else.
type ConfigMapGetterWatcher interface {
ConfigMapGetter
ConfigMapWatcher
}
// NewConfigMap creates a new ConfigMap structure in memory; it does no I/O
func NewConfigMap(
namespace string,
name string,
labels map[string]string,
data map[string]string,
) *corev1.ConfigMap {
configMap := &corev1.ConfigMap{
TypeMeta: metav1.TypeMeta{
Kind: "ConfigMap",
},
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: labels,
},
Data: data,
}
return configMap
}
// CreateConfigMap sends a request to Kubernetes using the client cl
// to create configMap. Returns a non-nil error if anything failed with the creation,
// including if the config map already existed.
func CreateConfigMap(
ctx context.Context,
logger logr.Logger,
cl client.Writer,
configMap *corev1.ConfigMap,
) error {
logger = logger.WithName("pkg.k8s.CreateConfigMap")
if err := cl.Create(ctx, configMap); err != nil {
logger.Error(
err,
"failed to create ConfigMap",
"configMap",
*configMap,
)
return err
}
return nil
}
func DeleteConfigMap(
ctx context.Context,
cl client.Writer,
configMap *corev1.ConfigMap,
logger logr.Logger,
) error {
logger = logger.WithName("pkg.k8s.DeleteConfigMap")
err := cl.Delete(ctx, configMap)
if err != nil {
logger.Error(
err,
"failed to delete configmap",
"configMap",
*configMap,
)
return err
}
return nil
}
func PatchConfigMap(
ctx context.Context,
logger logr.Logger,
cl client.Writer,
originalConfigMap *corev1.ConfigMap,
patchConfigMap *corev1.ConfigMap,
) (*corev1.ConfigMap, error) {
logger = logger.WithName("pkg.k8s.PatchConfigMap")
if err := cl.Patch(
ctx,
patchConfigMap,
client.MergeFrom(originalConfigMap),
); err != nil {
logger.Error(
err,
"failed to patch ConfigMap",
"originalConfigMap",
*originalConfigMap,
"patchConfigMap",
*patchConfigMap,
)
return nil, err
}
return patchConfigMap, nil
}
func GetConfigMap(
ctx context.Context,
cl client.Client,
namespace string,
name string,
) (*corev1.ConfigMap, error) {
configMap := &corev1.ConfigMap{}
err := cl.Get(ctx, client.ObjectKey{Name: name, Namespace: namespace}, configMap)
if err != nil {
return nil, err
}
return configMap, nil
}
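These helpers compose into the fetch-copy-patch round trip used by updateRoutingMap above; a condensed, hedged sketch — the namespace, name, and data key are illustrative, and the "time" import is assumed:

func touchConfigMap(
	ctx context.Context,
	lggr logr.Logger,
	cl client.Client,
) error {
	original, err := GetConfigMap(ctx, cl, "keda-http", "my-config")
	if err != nil {
		return err
	}
	// mutate a deep copy so MergeFrom can diff against the
	// unmodified original
	updated := original.DeepCopy()
	if updated.Data == nil {
		updated.Data = map[string]string{}
	}
	updated.Data["updated-at"] = time.Now().Format(time.RFC3339)
	_, err = PatchConfigMap(ctx, lggr, cl, original, updated)
	return err
}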


@ -2,36 +2,37 @@ package k8s
import ( import (
"context" "context"
"errors"
"fmt"
"sigs.k8s.io/controller-runtime/pkg/client"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1" corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr" "k8s.io/apimachinery/pkg/watch"
) )
// DeleteDeployment deletes the deployment given using the client given type DeploymentLister interface {
func DeleteDeployment(ctx context.Context, namespace, name string, cl client.Client) error { List(ctx context.Context, options metav1.ListOptions) (*appsv1.DeploymentList, error)
deployment := &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
}
if err := cl.Delete(ctx, deployment, &client.DeleteOptions{}); err != nil {
return err
}
return nil
} }
// NewDeployment creates a new deployment object // DeploymentWatcher knows how to watch deployments. This interface is
// implemented by Kubernetes client-go
type DeploymentWatcher interface {
Watch(ctx context.Context, options metav1.ListOptions) (watch.Interface, error)
}
// DeploymentListerWatcher knows how to list and watch deployments. This
// interface is implemented by Kubernetes client-go
type DeploymentListerWatcher interface {
DeploymentLister
DeploymentWatcher
}
// newDeployment creates a new deployment object
// with the given name and the given image. This does not actually create // with the given name and the given image. This does not actually create
// the deployment in the cluster, it just creates the deployment object // the deployment in the cluster, it just creates the deployment object
// in memory // in memory
func NewDeployment( //
// this function is only used in tests
func newDeployment(
namespace, namespace,
name, name,
image string, image string,
@ -81,84 +82,3 @@ func NewDeployment(
return deployment return deployment
} }
// AddLivenessProbe adds a liveness probe to the first container on depl.
// the probe will do an HTTP GET to path on port.
//
// returns a non-nil error if there is not at least one container on the given
// deployment's container list (depl.Spec.Template.Spec.Containers)
func AddLivenessProbe(
depl *appsv1.Deployment,
path string,
port int,
) error {
if len(depl.Spec.Template.Spec.Containers) < 1 {
return errors.New("no conatiners to set liveness/readiness checks on")
}
depl.Spec.Template.Spec.Containers[0].LivenessProbe = &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: path,
Port: intstr.FromInt(port),
},
},
PeriodSeconds: 1,
}
return nil
}
// AddReadinessProbe adds a readiness probe to the first container on depl.
// the probe will do an HTTP GET to path on port.
//
// returns a non-nil error if there is not at least one container on the given
// deployment's container list (depl.Spec.Template.Spec.Containers)
func AddReadinessProbe(
depl *appsv1.Deployment,
path string,
port int,
) error {
depl.Spec.Template.Spec.Containers[0].ReadinessProbe = &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: path,
Port: intstr.FromInt(port),
},
},
PeriodSeconds: 1,
}
return nil
}
func ensureLeadingSlash(str string) string {
if len(str) == 0 {
return str
}
if str[0] != '/' {
str = fmt.Sprintf("/%s", str)
}
return str
}
func AddHTTPLivenessProbe(depl *appsv1.Deployment, httpPath string, port int) {
httpPath = ensureLeadingSlash(httpPath)
depl.Spec.Template.Spec.Containers[0].LivenessProbe = &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: httpPath,
Port: intstr.FromInt(port),
},
},
}
}
func AddHTTPReadinessProbe(depl *appsv1.Deployment, httpPath string, port int) {
httpPath = ensureLeadingSlash(httpPath)
depl.Spec.Template.Spec.Containers[0].ReadinessProbe = &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: httpPath,
Port: intstr.FromInt(port),
},
},
}
}


@ -2,82 +2,194 @@ package k8s
import ( import (
"context" "context"
"encoding/json"
"fmt" "fmt"
"sync" "sync"
"time"
"github.com/go-logr/logr"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch" "k8s.io/apimachinery/pkg/watch"
typedappsv1 "k8s.io/client-go/kubernetes/typed/apps/v1"
) )
type DeploymentCache interface { type DeploymentCache interface {
Get(name string) (*appsv1.Deployment, error) Get(name string) (appsv1.Deployment, error)
Watch(name string) watch.Interface Watch(name string) watch.Interface
} }
type K8sDeploymentCache struct { type K8sDeploymentCache struct {
latestEvts map[string]watch.Event latest map[string]appsv1.Deployment
rwm *sync.RWMutex rwm *sync.RWMutex
cl DeploymentListerWatcher
broadcaster *watch.Broadcaster broadcaster *watch.Broadcaster
} }
func NewK8sDeploymentCache( func NewK8sDeploymentCache(
ctx context.Context, ctx context.Context,
cl typedappsv1.DeploymentInterface, lggr logr.Logger,
cl DeploymentListerWatcher,
) (*K8sDeploymentCache, error) { ) (*K8sDeploymentCache, error) {
deployList, err := cl.List(ctx, metav1.ListOptions{}) lggr = lggr.WithName("pkg.k8s.NewK8sDeploymentCache")
if err != nil {
return nil, err
}
latestEvts := map[string]watch.Event{}
for _, depl := range deployList.Items {
latestEvts[depl.ObjectMeta.Name] = watch.Event{
Type: watch.Added,
Object: &depl,
}
}
bcaster := watch.NewBroadcaster(5, watch.DropIfChannelFull) bcaster := watch.NewBroadcaster(5, watch.DropIfChannelFull)
watcher, err := cl.Watch(ctx, metav1.ListOptions{})
if err != nil {
return nil, err
}
ret := &K8sDeploymentCache{ ret := &K8sDeploymentCache{
latestEvts: latestEvts, latest: map[string]appsv1.Deployment{},
rwm: new(sync.RWMutex), rwm: new(sync.RWMutex),
broadcaster: bcaster, broadcaster: bcaster,
cl: cl,
} }
go func() { deployList, err := cl.List(ctx, metav1.ListOptions{})
defer watcher.Stop() if err != nil {
ch := watcher.ResultChan() lggr.Error(
for { err,
// TODO: add a timeout "failed to fetch initial deployment list",
evt := <-ch )
ret.broadcaster.Action(evt.Type, evt.Object) return nil, err
ret.rwm.Lock() }
depl, ok := evt.Object.(*appsv1.Deployment) ret.mergeAndBroadcastList(deployList)
// if we didn't get back a deployment in the event,
// something is wrong that we can't fix, so just continue
if !ok {
continue
}
ret.latestEvts[depl.GetObjectMeta().GetName()] = evt
ret.rwm.Unlock()
}
}()
return ret, nil return ret, nil
} }
func (k *K8sDeploymentCache) Get(name string) (*appsv1.Deployment, error) { func (k *K8sDeploymentCache) MarshalJSON() ([]byte, error) {
k.rwm.RLock() k.rwm.RLock()
defer k.rwm.RUnlock() defer k.rwm.RUnlock()
evt, ok := k.latestEvts[name] ret := map[string]int32{}
if !ok { for name, depl := range k.latest {
return nil, fmt.Errorf("no deployment %s found", name) ret[name] = depl.Status.ReadyReplicas
} }
return evt.Object.(*appsv1.Deployment), nil return json.Marshal(ret)
}
func (k *K8sDeploymentCache) StartWatcher(
ctx context.Context,
lggr logr.Logger,
fetchTickDur time.Duration,
) error {
lggr = lggr.WithName(
"pkg.k8s.K8sDeploymentCache.StartWatcher",
)
watcher, err := k.cl.Watch(ctx, metav1.ListOptions{})
if err != nil {
lggr.Error(
err,
"couldn't create new watch stream",
)
return errors.Wrap(
err,
"error creating new watch stream",
)
}
ch := watcher.ResultChan()
fetchTicker := time.NewTicker(fetchTickDur)
defer fetchTicker.Stop()
for {
select {
case <-fetchTicker.C:
deplList, err := k.cl.List(ctx, metav1.ListOptions{})
if err != nil {
lggr.Error(
err,
"error with periodic deployment fetch",
)
return errors.Wrap(
err,
"error with periodic deployment fetch",
)
}
k.mergeAndBroadcastList(deplList)
case evt, validRecv := <-ch:
// handle closed watch stream
if !validRecv {
newWatcher, err := k.cl.Watch(ctx, metav1.ListOptions{})
if err != nil {
lggr.Error(
err,
"watch stream was closed and couldn't re-open it",
)
return errors.Wrap(
err,
"failed to re-open watch stream",
)
}
ch = newWatcher.ResultChan()
} else {
if err := k.addEvt(evt); err != nil {
lggr.Error(
err,
"couldn't add event to the deployment cache",
)
return errors.Wrap(
err,
"error adding event to the deployment cache",
)
}
k.broadcaster.Action(evt.Type, evt.Object)
}
case <-ctx.Done():
lggr.Error(
ctx.Err(),
"context is done",
)
return errors.Wrap(
ctx.Err(),
"context is marked done",
)
}
}
}
// mergeAndBroadcastList adds each deployment in lst to the
// internal cache and broadcasts a new event for each one.
func (k *K8sDeploymentCache) mergeAndBroadcastList(
lst *appsv1.DeploymentList,
) {
k.rwm.Lock()
defer k.rwm.Unlock()
for _, depl := range lst.Items {
depl := depl // copy: &depl below must not alias the loop variable
// if the deployment isn't already in the cache, broadcast an
// ADDED event, otherwise a MODIFIED event. check existence
// before inserting, or the entry would always appear to exist
// and every event would be broadcast as MODIFIED
_, ok := k.latest[depl.ObjectMeta.Name]
evtType := watch.Modified
if !ok {
evtType = watch.Added
}
k.latest[depl.ObjectMeta.Name] = depl
k.broadcaster.Action(evtType, &depl)
}
}
// addEvt checks to make sure evt.Object is an actual
// Deployment. if it isn't, returns a descriptive error.
// otherwise, adds evt to the internal events list
func (k *K8sDeploymentCache) addEvt(evt watch.Event) error {
k.rwm.Lock()
defer k.rwm.Unlock()
depl, ok := evt.Object.(*appsv1.Deployment)
// if we didn't get back a deployment in the event,
// something is wrong that we can't fix, so return an error
if !ok {
return fmt.Errorf(
"watch event did not contain a Deployment",
)
}
k.latest[depl.GetObjectMeta().GetName()] = *depl
return nil
}
func (k *K8sDeploymentCache) Get(name string) (appsv1.Deployment, error) {
k.rwm.RLock()
defer k.rwm.RUnlock()
depl, ok := k.latest[name]
if !ok {
return appsv1.Deployment{}, fmt.Errorf("no deployment %s found", name)
}
return depl, nil
} }
func (k *K8sDeploymentCache) Watch(name string) watch.Interface { func (k *K8sDeploymentCache) Watch(name string) watch.Interface {
@ -87,10 +199,7 @@ func (k *K8sDeploymentCache) Watch(name string) watch.Interface {
if !ok { if !ok {
return evt, false return evt, false
} }
if depl.ObjectMeta.Name != name { return evt, depl.ObjectMeta.Name == name
return evt, false
}
return evt, true
}) })
} }
@ -110,19 +219,19 @@ type MemoryDeploymentCache struct {
// Deployments holds the deployments to be returned in calls to Get. If Get is called // Deployments holds the deployments to be returned in calls to Get. If Get is called
// with a name that exists as a key in this map, the corresponding value will be returned. // with a name that exists as a key in this map, the corresponding value will be returned.
// Otherwise, an error will be returned // Otherwise, an error will be returned
Deployments map[string]*appsv1.Deployment Deployments map[string]appsv1.Deployment
} }
// NewMemoryDeploymentCache creates a new MemoryDeploymentCache with the Deployments map set to // NewMemoryDeploymentCache creates a new MemoryDeploymentCache with the Deployments map set to
// initialDeployments, and the Watchers map initialized with a newly created and otherwise // initialDeployments, and the Watchers map initialized with a newly created and otherwise
// untouched FakeWatcher for each key in the initialDeployments map // untouched FakeWatcher for each key in the initialDeployments map
func NewMemoryDeploymentCache( func NewMemoryDeploymentCache(
initialDeployments map[string]*appsv1.Deployment, initialDeployments map[string]appsv1.Deployment,
) *MemoryDeploymentCache { ) *MemoryDeploymentCache {
ret := &MemoryDeploymentCache{ ret := &MemoryDeploymentCache{
RWM: new(sync.RWMutex), RWM: new(sync.RWMutex),
Watchers: make(map[string]*watch.RaceFreeFakeWatcher), Watchers: make(map[string]*watch.RaceFreeFakeWatcher),
Deployments: make(map[string]*appsv1.Deployment), Deployments: make(map[string]appsv1.Deployment),
} }
ret.Deployments = initialDeployments ret.Deployments = initialDeployments
for deployName := range initialDeployments { for deployName := range initialDeployments {
@ -131,12 +240,25 @@ func NewMemoryDeploymentCache(
return ret return ret
} }
func (m *MemoryDeploymentCache) Get(name string) (*appsv1.Deployment, error) { func (m *MemoryDeploymentCache) MarshalJSON() ([]byte, error) {
m.RWM.RLock()
defer m.RWM.RUnlock()
ret := map[string]int32{}
for name, depl := range m.Deployments {
ret[name] = depl.Status.ReadyReplicas
}
return json.Marshal(ret)
}
func (m *MemoryDeploymentCache) Get(name string) (appsv1.Deployment, error) {
m.RWM.RLock() m.RWM.RLock()
defer m.RWM.RUnlock() defer m.RWM.RUnlock()
val, ok := m.Deployments[name] val, ok := m.Deployments[name]
if !ok { if !ok {
return nil, fmt.Errorf("Deployment %s not found", name) return appsv1.Deployment{}, fmt.Errorf(
"deployment %s not found",
name,
)
} }
return val, nil return val, nil
} }
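Putting the new pieces together, a hedged sketch of how a binary might wire up the cache with a real clientset — the namespace and tick interval are illustrative, and the kubernetes and time imports are assumed:

func startDeploymentCache(
	ctx context.Context,
	lggr logr.Logger,
	cl *kubernetes.Clientset, // assumes k8s.io/client-go/kubernetes
) (*K8sDeploymentCache, error) {
	deplCl := cl.AppsV1().Deployments("keda-http") // namespace assumed
	cache, err := NewK8sDeploymentCache(ctx, lggr, deplCl)
	if err != nil {
		return nil, err
	}
	go func() {
		// StartWatcher blocks until ctx is done or an unrecoverable
		// error occurs, so it gets its own goroutine
		if err := cache.StartWatcher(ctx, lggr, 5*time.Second); err != nil {
			lggr.Error(err, "deployment cache watcher stopped")
		}
	}()
	return cache, nil
}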


@ -0,0 +1,68 @@
package k8s
import (
"fmt"
"sync"
appsv1 "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/watch"
)
type FakeDeploymentCache struct {
Mut *sync.RWMutex
Current map[string]appsv1.Deployment
Watchers map[string]*watch.RaceFreeFakeWatcher
}
func NewFakeDeploymentCache() *FakeDeploymentCache {
return &FakeDeploymentCache{
Mut: &sync.RWMutex{},
Current: make(map[string]appsv1.Deployment),
Watchers: make(map[string]*watch.RaceFreeFakeWatcher),
}
}
func (f *FakeDeploymentCache) Get(name string) (appsv1.Deployment, error) {
f.Mut.RLock()
defer f.Mut.RUnlock()
ret, ok := f.Current[name]
if ok {
return ret, nil
}
return appsv1.Deployment{}, fmt.Errorf("no deployment %s found", name)
}
func (f *FakeDeploymentCache) Watch(name string) watch.Interface {
f.Mut.RLock()
defer f.Mut.RUnlock()
watcher, ok := f.Watchers[name]
if !ok {
return watch.NewRaceFreeFake()
}
return watcher
}
func (f *FakeDeploymentCache) Set(name string, deployment appsv1.Deployment) {
f.Mut.Lock()
defer f.Mut.Unlock()
f.Current[name] = deployment
}
func (f *FakeDeploymentCache) SetWatcher(name string) *watch.RaceFreeFakeWatcher {
f.Mut.Lock()
defer f.Mut.Unlock()
watcher := watch.NewRaceFreeFake()
f.Watchers[name] = watcher
return watcher
}
func (f *FakeDeploymentCache) SetReplicas(name string, num int32) error {
f.Mut.Lock()
defer f.Mut.Unlock()
// look up the map directly; calling f.Get here would try to
// RLock a mutex we already hold for writing and deadlock
deployment, ok := f.Current[name]
if !ok {
return fmt.Errorf("no deployment %s found", name)
}
deployment.Spec.Replicas = &num
// the map stores values, so write the modified copy back
f.Current[name] = deployment
return nil
}


@ -2,27 +2,31 @@ package k8s
import ( import (
"context" "context"
"errors"
"sync"
"testing" "testing"
"time" "time"
"github.com/go-logr/logr"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
appsv1 "k8s.io/api/apps/v1" appsv1 "k8s.io/api/apps/v1"
core "k8s.io/api/core/v1" core "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/watch"
k8sfake "k8s.io/client-go/kubernetes/fake" k8sfake "k8s.io/client-go/kubernetes/fake"
) )
func TestK8DeploymentCacheGet(t *testing.T) { func TestK8DeploymentCacheGet(t *testing.T) {
r := require.New(t) r := require.New(t)
ctx := context.Background() ctx, done := context.WithCancel(context.Background())
defer done()
const ns = "testns" const ns = "testns"
const name = "testdepl" const name = "testdepl"
expectedDepl := NewDeployment( expectedDepl := newDeployment(
ns, ns,
name, name,
"testimg", "testing",
nil, nil,
nil, nil,
make(map[string]string), make(map[string]string),
@ -31,28 +35,193 @@ func TestK8DeploymentCacheGet(t *testing.T) {
fakeClientset := k8sfake.NewSimpleClientset(expectedDepl) fakeClientset := k8sfake.NewSimpleClientset(expectedDepl)
fakeApps := fakeClientset.AppsV1() fakeApps := fakeClientset.AppsV1()
cache, err := NewK8sDeploymentCache(ctx, fakeApps.Deployments(ns)) cache, err := NewK8sDeploymentCache(
ctx,
logr.Discard(),
fakeApps.Deployments(ns),
)
r.NoError(err) r.NoError(err)
depl, err := cache.Get(name) depl, err := cache.Get(name)
r.NoError(err) r.NoError(err)
r.Equal(name, depl.ObjectMeta.Name) r.Equal(name, depl.ObjectMeta.Name)
none, err := cache.Get(name + "noexist") noneRet, err := cache.Get("noexist")
r.NotNil(err) r.NotNil(err)
r.Nil(none) // note: the returned deployment will be empty, not nil,
// because this function doesn't return a pointer. so,
// we have to check some of the fields inside the deployment
// to make sure they're empty
r.Nil(noneRet.Spec.Replicas)
r.Empty(noneRet.ObjectMeta.Name)
} }
func TestK8sDeploymentCacheWatch(t *testing.T) { func TestK8sDeploymentCacheMergeAndBroadcastList(t *testing.T) {
r := require.New(t) r := require.New(t)
ctx := context.Background() ctx, done := context.WithCancel(
context.Background(),
)
defer done()
cache, err := NewK8sDeploymentCache(ctx, logr.Discard(), newFakeDeploymentListerWatcher())
r.NoError(err)
depl := newDeployment("testns", "testdepl1", "testing", nil, nil, nil, core.PullAlways)
deplList := &appsv1.DeploymentList{
Items: []appsv1.Deployment{*depl},
}
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
cache.mergeAndBroadcastList(deplList)
}()
evts := []watch.Event{}
go func() {
defer wg.Done()
watcher := cache.Watch(depl.ObjectMeta.Name)
watchCh := watcher.ResultChan()
for i := 0; i < len(deplList.Items); i++ {
func() {
tmr := time.NewTimer(1 * time.Second)
defer tmr.Stop()
select {
case <-tmr.C:
t.Error("timeout waiting for event")
case evt := <-watchCh:
evts = append(evts, evt)
}
}()
}
}()
wg.Wait()
r.Equal(len(deplList.Items), len(evts))
for i := 0; i < len(deplList.Items); i++ {
evt := evts[i]
depl, ok := evt.Object.(*appsv1.Deployment)
if !ok {
t.Fatal("event came through with no deployment")
}
r.Equal(deplList.Items[i].Name, depl.Name)
}
}
func TestK8sDeploymentCacheAddEvt(t *testing.T) {
// see https://github.com/kedacore/http-add-on/issues/245
}
// test to make sure that, even when no events come through, the
// update loop eventually fetches the latest state of deployments
func TestK8sDeploymentCachePeriodicFetch(t *testing.T) {
r := require.New(t)
ctx, done := context.WithCancel(
context.Background(),
)
defer done()
lw := newFakeDeploymentListerWatcher()
cache, err := NewK8sDeploymentCache(ctx, logr.Discard(), lw)
r.NoError(err)
const tickDur = 10 * time.Millisecond
go cache.StartWatcher(ctx, logr.Discard(), tickDur)
depl := newDeployment("testns", "testdepl", "testing", nil, nil, nil, core.PullAlways)
// add the deployment without sending an event, to make sure that
// the internal loop won't receive any events and will rely on
// just the ticker
lw.addDeployment(*depl, false)
time.Sleep(tickDur * 2)
// make sure that the deployment was fetched
fetched, err := cache.Get(depl.ObjectMeta.Name)
r.NoError(err)
r.Equal(*depl, fetched)
r.Equal(0, len(lw.getWatcher().getEvents()))
}
// test to make sure that the update loop tries to re-establish watch
// streams when they're broken
func TestK8sDeploymentCacheRewatch(t *testing.T) {
r := require.New(t)
ctx, done := context.WithCancel(
context.Background(),
)
defer done()
lw := newFakeDeploymentListerWatcher()
cache, err := NewK8sDeploymentCache(ctx, logr.Discard(), lw)
r.NoError(err)
// start up the cache watcher with a very long tick duration,
// to ensure that the only way it will get updates is from the
// watch stream
const tickDur = 1000 * time.Second
watcherErrCh := make(chan error)
go func() {
watcherErrCh <- cache.StartWatcher(ctx, logr.Discard(), tickDur)
}()
// wait 1/2 second to make sure the watcher goroutine can start up
// and doesn't return any errors
select {
case err := <-watcherErrCh:
r.NoError(err)
case <-time.After(500 * time.Millisecond):
}
// close all open watch channels after waiting a bit for the watcher to start.
// in this call we're allowing channels to be reopened
lw.getWatcher().closeOpenChans(true)
time.Sleep(500 * time.Millisecond)
// add the deployment and send an event.
depl := newDeployment("testns", "testdepl", "testing", nil, nil, nil, core.PullAlways)
lw.addDeployment(*depl, true)
// sleep for a bit to make sure the watcher has had time to re-establish the watch
// and receive the event
time.Sleep(500 * time.Millisecond)
// make sure that an event came through
r.Equal(1, len(lw.getWatcher().getEvents()))
// make sure that the deployment was fetched
fetched, err := cache.Get(depl.ObjectMeta.Name)
r.NoError(err)
r.Equal(*depl, fetched)
}
// test to make sure that when the context is closed, the deployment
// cache stops
func TestK8sDeploymentCacheStopped(t *testing.T) {
r := require.New(t)
ctx, done := context.WithCancel(context.Background())
fakeClientset := k8sfake.NewSimpleClientset()
fakeApps := fakeClientset.AppsV1()
cache, err := NewK8sDeploymentCache(
ctx,
logr.Discard(),
fakeApps.Deployments("doesn't matter"),
)
r.NoError(err)
done()
err = cache.StartWatcher(ctx, logr.Discard(), time.Millisecond)
r.Error(err, "deployment cache watcher didn't return an error")
r.True(errors.Is(err, context.Canceled), "expected a context cancel error")
}
func TestK8sDeploymentCacheBasicWatch(t *testing.T) {
r := require.New(t)
ctx, done := context.WithCancel(
context.Background(),
)
defer done()
const ns = "testns" const ns = "testns"
const name = "testdepl" const name = "testdepl"
expectedDepl := NewDeployment( expectedDepl := newDeployment(
ns, ns,
name, name,
"testimg", "testing",
nil, nil,
nil, nil,
make(map[string]string), make(map[string]string),
@ -61,8 +230,13 @@ func TestK8sDeploymentCacheWatch(t *testing.T) {
fakeClientset := k8sfake.NewSimpleClientset() fakeClientset := k8sfake.NewSimpleClientset()
fakeDeployments := fakeClientset.AppsV1().Deployments(ns) fakeDeployments := fakeClientset.AppsV1().Deployments(ns)
cache, err := NewK8sDeploymentCache(ctx, fakeDeployments) cache, err := NewK8sDeploymentCache(
ctx,
logr.Discard(),
fakeDeployments,
)
r.NoError(err) r.NoError(err)
go cache.StartWatcher(ctx, logr.Discard(), time.Millisecond)
watcher := cache.Watch(name) watcher := cache.Watch(name)
defer watcher.Stop() defer watcher.Stop()
@ -83,7 +257,8 @@ func TestK8sDeploymentCacheWatch(t *testing.T) {
} }
}() }()
// first make sure that the send happened, and there was no error // first make sure that the send happened, and there was
// no error
select { select {
case <-createSentCh: case <-createSentCh:
case err := <-createErrCh: case err := <-createErrCh:
@ -92,7 +267,8 @@ func TestK8sDeploymentCacheWatch(t *testing.T) {
r.Fail("the create operation didn't happen after 400 ms") r.Fail("the create operation didn't happen after 400 ms")
} }
// then make sure that the deployment was actually received // then make sure that the deployment was actually
// received
select { select {
case obj := <-watcher.ResultChan(): case obj := <-watcher.ResultChan():
depl, ok := obj.Object.(*appsv1.Deployment) depl, ok := obj.Object.(*appsv1.Deployment)
@ -103,12 +279,3 @@ func TestK8sDeploymentCacheWatch(t *testing.T) {
r.Fail("didn't get a watch event after 500 ms") r.Fail("didn't get a watch event after 500 ms")
} }
} }
func gvrForDeployment(depl *appsv1.Deployment) schema.GroupVersionResource {
gvk := depl.GroupVersionKind()
return schema.GroupVersionResource{
Group: gvk.Group,
Version: gvk.Version,
Resource: "Deployment",
}
}

pkg/k8s/endpoints.go (new file)

@ -0,0 +1,82 @@
package k8s
import (
"context"
"fmt"
"net/url"
"github.com/pkg/errors"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"sigs.k8s.io/controller-runtime/pkg/client"
)
// GetEndpointsFunc is a type that represents a function that can
// fetch endpoints
type GetEndpointsFunc func(
ctx context.Context,
namespace,
serviceName string,
) (*v1.Endpoints, error)
func EndpointsForService(
ctx context.Context,
ns,
serviceName,
servicePort string,
endpointsFn GetEndpointsFunc,
) ([]*url.URL, error) {
endpoints, err := endpointsFn(ctx, ns, serviceName)
if err != nil {
return nil, errors.Wrap(err, "pkg.k8s.EndpointsForService")
}
ret := []*url.URL{}
for _, subset := range endpoints.Subsets {
for _, addr := range subset.Addresses {
u, err := url.Parse(
fmt.Sprintf("http://%s:%s", addr.IP, servicePort),
)
if err != nil {
return nil, err
}
ret = append(ret, u)
}
}
return ret, nil
}
// EndpointsFuncForControllerClient returns a new GetEndpointsFunc
// that uses the controller-runtime client.Client to fetch endpoints
func EndpointsFuncForControllerClient(
cl client.Client,
) GetEndpointsFunc {
return func(
ctx context.Context,
namespace,
serviceName string,
) (*v1.Endpoints, error) {
endpts := &v1.Endpoints{}
if err := cl.Get(ctx, client.ObjectKey{
Namespace: namespace,
Name: serviceName,
}, endpts); err != nil {
return nil, err
}
return endpts, nil
}
}
func EndpointsFuncForK8sClientset(
cl *kubernetes.Clientset,
) GetEndpointsFunc {
return func(
ctx context.Context,
namespace,
serviceName string,
) (*v1.Endpoints, error) {
endpointsCl := cl.CoreV1().Endpoints(namespace)
return endpointsCl.Get(ctx, serviceName, metav1.GetOptions{})
}
}
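Neither test below exercises the clientset-backed variant, so here is a brief hedged sketch of its use; the namespace, Service name, and port are assumptions for the sketch:

// interceptorEndpointURLs is a hypothetical helper
func interceptorEndpointURLs(
	ctx context.Context,
	cl *kubernetes.Clientset,
) ([]*url.URL, error) {
	return EndpointsForService(
		ctx,
		"keda-http",         // namespace (assumed)
		"interceptor-admin", // Service name (assumed)
		"9090",              // Service port (assumed)
		EndpointsFuncForK8sClientset(cl),
	)
}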

pkg/k8s/endpoints_test.go (new file)

@ -0,0 +1,112 @@
package k8s
import (
"context"
"fmt"
"testing"
"github.com/stretchr/testify/require"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
)
func TestGetEndpoints(t *testing.T) {
r := require.New(t)
ctx := context.Background()
const (
ns = "testns"
svcName = "testsvc"
svcPort = "8081"
)
endpoints := &v1.Endpoints{
ObjectMeta: metav1.ObjectMeta{
Name: svcName,
Namespace: ns,
},
Subsets: []v1.EndpointSubset{
{
Addresses: []v1.EndpointAddress{
{
IP: "1.2.3.4",
Hostname: "testhost1",
},
},
},
{
Addresses: []v1.EndpointAddress{
{
IP: "2.3.4.5",
Hostname: "testhost2",
},
},
},
},
}
urls, err := EndpointsForService(
ctx,
ns,
svcName,
svcPort,
func(context.Context, string, string) (*v1.Endpoints, error) {
return endpoints, nil
},
)
r.NoError(err)
addrLookup := map[string]*v1.EndpointAddress{}
for _, subset := range endpoints.Subsets {
for _, addr := range subset.Addresses {
addr := addr // copy so the map doesn't alias the loop variable
key := fmt.Sprintf("http://%s:%s", addr.IP, svcPort)
addrLookup[key] = &addr
}
}
r.Equal(len(addrLookup), len(urls))
for _, url := range urls {
_, ok := addrLookup[url.String()]
r.True(ok, "address %s was returned but not expected", url)
}
}
func TestEndpointsFuncForControllerClient(t *testing.T) {
ctx := context.Background()
const (
ns = "testns"
svcName = "testsvc"
svcPort = "8081"
)
r := require.New(t)
endpoints := &v1.Endpoints{
ObjectMeta: metav1.ObjectMeta{
Name: svcName,
Namespace: ns,
},
Subsets: []v1.EndpointSubset{
{
Addresses: []v1.EndpointAddress{
{
IP: "1.2.3.4",
Hostname: "testhost1",
},
},
},
{
Addresses: []v1.EndpointAddress{
{
IP: "2.3.4.5",
Hostname: "testhost2",
},
},
},
},
}
cl := fake.NewClientBuilder().WithObjects(
endpoints,
).Build()
fn := EndpointsFuncForControllerClient(cl)
ret, err := fn(ctx, ns, svcName)
r.NoError(err)
r.Equal(len(endpoints.Subsets), len(ret.Subsets))
// we don't need to introspect the return value, because we
// do so in depth in the above TestGetEndpoints test
}

pkg/k8s/fake_endpoints.go (new file)

@ -0,0 +1,38 @@
package k8s
import (
"net/url"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// FakeEndpointsForURL creates and returns a new *v1.Endpoints with a
// single v1.EndpointSubset in it, which has num v1.EndpointAddresses
// in it. Each of those EndpointAddresses has a Hostname and IP both
// equal to u.Hostname()
func FakeEndpointsForURL(
u *url.URL,
namespace,
name string,
num int,
) *v1.Endpoints {
addrs := make([]v1.EndpointAddress, num)
for i := 0; i < num; i++ {
addrs[i] = v1.EndpointAddress{
Hostname: u.Hostname(),
IP: u.Hostname(),
}
}
return &v1.Endpoints{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
},
Subsets: []v1.EndpointSubset{
{
Addresses: addrs,
},
},
}
}


@ -15,14 +15,6 @@ import (
//go:embed templates //go:embed templates
var scaledObjectTemplateFS embed.FS var scaledObjectTemplateFS embed.FS
func kedaGVR() schema.GroupVersionResource {
return schema.GroupVersionResource{
Group: "keda.sh",
Version: "v1alpha1",
Resource: "scaledobjects",
}
}
// DeleteScaledObject deletes a scaled object with the given name // DeleteScaledObject deletes a scaled object with the given name
func DeleteScaledObject(ctx context.Context, name string, namespace string, cl client.Client) error { func DeleteScaledObject(ctx context.Context, name string, namespace string, cl client.Client) error {
scaledObj := &unstructured.Unstructured{} scaledObj := &unstructured.Unstructured{}
@ -45,8 +37,9 @@ func NewScaledObject(
namespace, namespace,
name, name,
deploymentName, deploymentName,
scalerAddress string, scalerAddress,
minReplicas int32, host string,
minReplicas,
maxReplicas int32, maxReplicas int32,
) (*unstructured.Unstructured, error) { ) (*unstructured.Unstructured, error) {
// https://keda.sh/docs/1.5/faq/ // https://keda.sh/docs/1.5/faq/
@ -68,13 +61,14 @@ func NewScaledObject(
var scaledObjectTemplateBuffer bytes.Buffer var scaledObjectTemplateBuffer bytes.Buffer
if tplErr := tpl.Execute(&scaledObjectTemplateBuffer, map[string]interface{}{ if tplErr := tpl.Execute(&scaledObjectTemplateBuffer, map[string]interface{}{
"Name": name, "Name": name,
"Namespace": namespace, "Namespace": namespace,
"Labels": labels, "Labels": labels,
"MinReplicas": minReplicas, "MinReplicas": minReplicas,
"MaxReplicas": maxReplicas, "MaxReplicas": maxReplicas,
"DeploymentName": deploymentName, "DeploymentName": deploymentName,
"ScalerAddress": scalerAddress, "ScalerAddress": scalerAddress,
"Host": host,
}); tplErr != nil { }); tplErr != nil {
return nil, tplErr return nil, tplErr
} }


@ -1,52 +0,0 @@
package k8s
import (
context "context"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
k8scorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
)
func NewTCPServicePort(name string, port int32, targetPort int32) corev1.ServicePort {
return corev1.ServicePort{
Name: name,
Protocol: corev1.ProtocolTCP,
Port: port,
TargetPort: intstr.IntOrString{
Type: intstr.Int,
IntVal: targetPort,
},
}
}
func DeleteService(ctx context.Context, name string, cl k8scorev1.ServiceInterface) error {
return cl.Delete(ctx, name, metav1.DeleteOptions{})
}
// NewService creates a new Service object in memory according to the input parameters.
// This function operates in memory only and doesn't do any I/O whatsoever.
func NewService(
namespace,
name string,
servicePorts []corev1.ServicePort,
svcType corev1.ServiceType,
selector map[string]string,
) *corev1.Service {
return &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
},
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: name,
Labels: selector,
},
Spec: corev1.ServiceSpec{
Ports: servicePorts,
Selector: selector,
Type: svcType,
},
}
}


@ -10,7 +10,7 @@ metadata:
spec: spec:
minReplicaCount: {{ .MinReplicas }} minReplicaCount: {{ .MinReplicas }}
maxReplicaCount: {{ .MaxReplicas }} maxReplicaCount: {{ .MaxReplicas }}
pollingInterval: 250 pollingInterval: 1
scaleTargetRef: scaleTargetRef:
name: {{ .DeploymentName }} name: {{ .DeploymentName }}
kind: Deployment kind: Deployment
@ -18,3 +18,4 @@ spec:
- type: external - type: external
metadata: metadata:
scalerAddress: {{ .ScalerAddress }} scalerAddress: {{ .ScalerAddress }}
host: {{ .Host }}


@ -5,7 +5,3 @@ package k8s
func Int32P(i int32) *int32 { func Int32P(i int32) *int32 {
return &i return &i
} }
func str(s string) *string {
return &s
}

pkg/k8s/watch_test.go (new file)

@ -0,0 +1,144 @@
package k8s
import (
"context"
"fmt"
"sync"
"github.com/google/uuid"
appsv1 "k8s.io/api/apps/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
)
// closeableWatcher is a watch.Interface that can be closed
// and optionally reopened
type closeableWatcher struct {
uid uuid.UUID
mut *sync.RWMutex
ch chan watch.Event
events []watch.Event
closed bool
allowReopen bool
}
func newCloseableWatcher() *closeableWatcher {
return &closeableWatcher{
uid: uuid.New(),
mut: new(sync.RWMutex),
ch: make(chan watch.Event),
closed: false,
allowReopen: true,
}
}
func (w *closeableWatcher) String() string {
return fmt.Sprintf(
"closeableWatcher %s. events = %v",
w.uid.String(),
w.events,
)
}
func (w *closeableWatcher) Stop() {
w.mut.Lock()
defer w.mut.Unlock()
// guard against a double close, which would panic
if !w.closed {
close(w.ch)
w.closed = true
}
}
func (w *closeableWatcher) ResultChan() <-chan watch.Event {
w.mut.Lock()
defer w.mut.Unlock()
if w.closed && w.allowReopen {
w.ch = make(chan watch.Event)
w.closed = false
}
return w.ch
}
func (w *closeableWatcher) closeOpenChans(allowReopen bool) {
w.mut.Lock()
defer w.mut.Unlock()
close(w.ch)
w.closed = true
w.allowReopen = allowReopen
}
func (w *closeableWatcher) Add(d *appsv1.Deployment) {
// take the write lock: this method mutates w.events
w.mut.Lock()
defer w.mut.Unlock()
evt := watch.Event{
Type: watch.Added,
Object: d,
}
w.ch <- evt
w.events = append(w.events, evt)
}
func (w *closeableWatcher) Modify(d *appsv1.Deployment) {
// take the write lock: this method mutates w.events
w.mut.Lock()
defer w.mut.Unlock()
evt := watch.Event{
Type: watch.Modified,
Object: d,
}
w.ch <- evt
w.events = append(w.events, evt)
}
func (w *closeableWatcher) getEvents() []watch.Event {
w.mut.RLock()
defer w.mut.RUnlock()
return w.events
}
type fakeDeploymentListerWatcher struct {
mut *sync.RWMutex
watcher *closeableWatcher
items map[string]appsv1.Deployment
}
func newFakeDeploymentListerWatcher() *fakeDeploymentListerWatcher {
w := newCloseableWatcher()
return &fakeDeploymentListerWatcher{
mut: new(sync.RWMutex),
watcher: w,
items: map[string]appsv1.Deployment{},
}
}
func (lw *fakeDeploymentListerWatcher) List(ctx context.Context, options metav1.ListOptions) (*appsv1.DeploymentList, error) {
lw.mut.Lock()
defer lw.mut.Unlock()
lst := []appsv1.Deployment{}
for _, depl := range lw.items {
lst = append(lst, depl)
}
return &appsv1.DeploymentList{Items: lst}, nil
}
func (lw *fakeDeploymentListerWatcher) Watch(ctx context.Context, options metav1.ListOptions) (watch.Interface, error) {
return lw.watcher, nil
}
func (lw *fakeDeploymentListerWatcher) getWatcher() *closeableWatcher {
return lw.watcher
}
// addDeployment adds d to the internal deployments list, or overwrites it if it
// already existed. in either case, it will be returned by a future call to List.
// in the former case, an ADD event is sent if sendEvent is true, and in the latter
// case, a MODIFY event is sent if sendEvent is true
func (lw *fakeDeploymentListerWatcher) addDeployment(d appsv1.Deployment, sendEvent bool) {
lw.mut.Lock()
defer lw.mut.Unlock()
_, existed := lw.items[d.ObjectMeta.Name]
lw.items[d.ObjectMeta.Name] = d
if sendEvent {
if existed {
lw.watcher.Modify(&d)
} else {
lw.watcher.Add(&d)
}
}
}

pkg/log/zapr.go (new file)

@ -0,0 +1,20 @@
package log
import (
"github.com/go-logr/logr"
"github.com/go-logr/zapr"
"go.uber.org/zap"
)
func NewZapr() (logr.Logger, error) {
zapCfg := zap.NewProductionConfig()
zapCfg.Sampling = &zap.SamplingConfig{
Initial: 1,
Thereafter: 5,
}
zapLggr, err := zapCfg.Build()
if err != nil {
return nil, err
}
return zapr.NewLogger(zapLggr), nil
}
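Usage from another package is a one-liner; a sketch, with illustrative key/value pairs:

lggr, err := log.NewZapr()
if err != nil {
	// no logger is available yet, so fall back to a panic
	panic(err)
}
lggr.Info("operator starting", "adminPort", 9090)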


@ -34,7 +34,7 @@ func (t *TestHTTPHandlerWrapper) IncomingRequests() []http.Request {
return retSlice return retSlice
} }
func NewTestHTTPHandlerWrapper(hdl http.HandlerFunc) *TestHTTPHandlerWrapper { func NewTestHTTPHandlerWrapper(hdl http.Handler) *TestHTTPHandlerWrapper {
return &TestHTTPHandlerWrapper{ return &TestHTTPHandlerWrapper{
rwm: new(sync.RWMutex), rwm: new(sync.RWMutex),
hdl: hdl, hdl: hdl,

pkg/queue/queue.go (new file)

@ -0,0 +1,90 @@
package queue
import (
"sync"
)
// CountReader represents the size of a virtual HTTP queue, possibly
// distributed across multiple HTTP server processes. It can only access
// the current size of the queue, not any other information about requests.
//
// It is concurrency safe.
type CountReader interface {
// Current returns a snapshot of the current counts of
// pending requests, keyed by hostname
Current() (*Counts, error)
}
// Counter represents a virtual HTTP queue, possibly distributed across
// multiple HTTP server processes. It can only increase or decrease the
// size of the queue or read the current size of the queue, but not read
// or modify any other information about it.
//
// Both the mutation and read functionality are concurrency safe, but
// the read functionality is point-in-time only.
type Counter interface {
CountReader
// Resize resizes the queue size by delta for the given host.
Resize(host string, delta int) error
// Ensure ensures that host is represented in this counter.
// If host already has a nonzero value, then it is unchanged. If
// it is missing, it is set to 0.
Ensure(host string)
// Remove tries to remove the given host and its
// associated counts from the queue. returns true if it existed,
// false otherwise.
Remove(host string) bool
}
// Memory is a reference Counter implementation that holds the
// HTTP queue in memory only. Always use NewMemory to create one
// of these.
type Memory struct {
countMap map[string]int
mut *sync.RWMutex
}
// NewMemory creates a new empty in-memory queue
func NewMemory() *Memory {
lock := new(sync.RWMutex)
return &Memory{
countMap: make(map[string]int),
mut: lock,
}
}
// Resize changes the queue count for host by delta. Further calls to
// Current() return the newly calculated count if no other Resize()
// calls were made in the interim.
func (r *Memory) Resize(host string, delta int) error {
r.mut.Lock()
defer r.mut.Unlock()
r.countMap[host] += delta
return nil
}
func (r *Memory) Ensure(host string) {
r.mut.Lock()
defer r.mut.Unlock()
_, ok := r.countMap[host]
if !ok {
r.countMap[host] = 0
}
}
func (r *Memory) Remove(host string) bool {
r.mut.Lock()
defer r.mut.Unlock()
_, ok := r.countMap[host]
delete(r.countMap, host)
return ok
}
// Current returns a point-in-time snapshot of the per-host queue counts.
func (r *Memory) Current() (*Counts, error) {
r.mut.RLock()
defer r.mut.RUnlock()
cts := NewCounts()
// copy the internal map so that callers can't mutate, or race on,
// the queue's state after the lock is released
for host, count := range r.countMap {
cts.Counts[host] = count
}
return cts, nil
}
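For illustration, a minimal usage sketch of the in-memory queue above, written as a Go testable example (the host name is made up):

package queue_test

import (
    "fmt"

    "github.com/kedacore/http-add-on/pkg/queue"
)

func ExampleMemory() {
    q := queue.NewMemory()
    // register the host with a zero count
    q.Ensure("myhost.com")
    // two requests arrive, then one completes
    if err := q.Resize("myhost.com", 2); err != nil {
        panic(err)
    }
    if err := q.Resize("myhost.com", -1); err != nil {
        panic(err)
    }
    cts, err := q.Current()
    if err != nil {
        panic(err)
    }
    fmt.Println(cts.Counts["myhost.com"])
    // Output: 1
}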

pkg/queue/queue_counts.go
@@ -0,0 +1,41 @@
package queue
import (
"encoding/json"
"fmt"
)
// Counts is a snapshot of the HTTP pending request queue counts
// for each host.
// This is a json.Marshaler, json.Unmarshaler, and fmt.Stringer
// implementation.
//
// Use NewCounts to create a new one of these.
type Counts struct {
json.Marshaler
json.Unmarshaler
fmt.Stringer
Counts map[string]int
}
// NewCounts creates a new empty Counts struct
func NewCounts() *Counts {
return &Counts{
Counts: map[string]int{},
}
}
// MarshalJSON implements json.Marshaler
func (q *Counts) MarshalJSON() ([]byte, error) {
return json.Marshal(q.Counts)
}
// UnmarshalJSON implements json.Unmarshaler
func (q *Counts) UnmarshalJSON(data []byte) error {
return json.Unmarshal(data, &q.Counts)
}
// String implements fmt.Stringer
func (q *Counts) String() string {
return fmt.Sprintf("%v", q.Counts)
}
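To make the wire format concrete, here's a hedged round-trip sketch: Counts marshals to a plain JSON object keyed by host, which is also what the /queue endpoint added below returns (host name and count are made up):

package queue_test

import (
    "encoding/json"
    "fmt"

    "github.com/kedacore/http-add-on/pkg/queue"
)

func ExampleCounts() {
    cts := queue.NewCounts()
    cts.Counts["myhost.com"] = 3
    b, err := json.Marshal(cts)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
    ret := queue.NewCounts()
    if err := json.Unmarshal(b, ret); err != nil {
        panic(err)
    }
    fmt.Println(ret.Counts["myhost.com"])
    // Output:
    // {"myhost.com":3}
    // 3
}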

pkg/queue/queue_fakes.go
@@ -0,0 +1,60 @@
package queue
var _ Counter = &FakeCounter{}
type HostAndCount struct {
Host string
Count int
}
type FakeCounter struct {
RetMap map[string]int
ResizedCh chan HostAndCount
}
func NewFakeCounter() *FakeCounter {
return &FakeCounter{
RetMap: map[string]int{},
ResizedCh: make(chan HostAndCount),
}
}
func (f *FakeCounter) Resize(host string, i int) error {
f.RetMap[host] = i
f.ResizedCh <- HostAndCount{Host: host, Count: i}
return nil
}
func (f *FakeCounter) Ensure(host string) {
f.RetMap[host] = 0
}
func (f *FakeCounter) Remove(host string) bool {
_, ok := f.RetMap[host]
delete(f.RetMap, host)
return ok
}
func (f *FakeCounter) Current() (*Counts, error) {
ret := NewCounts()
retMap := f.RetMap
if len(retMap) == 0 {
retMap["sample.com"] = 0
}
ret.Counts = retMap
return ret, nil
}
var _ CountReader = &FakeCountReader{}
type FakeCountReader struct {
current int
err error
}
func (f *FakeCountReader) Current() (*Counts, error) {
ret := NewCounts()
ret.Counts = map[string]int{
"sample.com": f.current,
}
return ret, f.err
}
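One subtlety worth calling out: FakeCounter.Resize sends on the unbuffered ResizedCh before returning, so a test must receive on that channel concurrently or the call blocks forever (the scaler tests later in this diff note the same constraint). A sketch:

package queue_test

import (
    "fmt"

    "github.com/kedacore/http-add-on/pkg/queue"
)

func ExampleFakeCounter() {
    f := queue.NewFakeCounter()
    errCh := make(chan error)
    go func() {
        // blocks until the HostAndCount below is received
        errCh <- f.Resize("sample.com", 1)
    }()
    evt := <-f.ResizedCh
    if err := <-errCh; err != nil {
        panic(err)
    }
    fmt.Println(evt.Host, evt.Count)
    // Output: sample.com 1
}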

pkg/queue/queue_rpc.go
@@ -0,0 +1,83 @@
package queue
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"github.com/go-logr/logr"
"github.com/pkg/errors"
)
const countsPath = "/queue"
func AddCountsRoute(lggr logr.Logger, mux *http.ServeMux, q CountReader) {
lggr = lggr.WithName("pkg.queue.AddCountsRoute")
lggr.Info("adding queue counts route", "path", countsPath)
mux.Handle(countsPath, newSizeHandler(lggr, q))
}
// newSizeHandler returns an http.Handler that fetches the current
// queue counts from q and writes them to the client as JSON
func newSizeHandler(
lggr logr.Logger,
q CountReader,
) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
cur, err := q.Current()
if err != nil {
lggr.Error(err, "getting queue size")
w.WriteHeader(500)
w.Write([]byte(
"error getting queue size",
))
return
}
if err := json.NewEncoder(w).Encode(cur); err != nil {
lggr.Error(err, "encoding QueueCounts")
w.WriteHeader(500)
w.Write([]byte(
"error encoding queue counts",
))
return
}
})
}
// GetCounts issues an RPC call to get the queue counts from the
// interceptor at the given interceptorURL. Note that the URL should
// not end with a "/" and shouldn't include a path.
func GetCounts(
ctx context.Context,
lggr logr.Logger,
httpCl *http.Client,
interceptorURL url.URL,
) (*Counts, error) {
interceptorURL.Path = countsPath
resp, err := httpCl.Get(interceptorURL.String())
if err != nil {
errMsg := fmt.Sprintf(
"requesting the queue counts from %s",
interceptorURL.String(),
)
return nil, errors.Wrap(err, errMsg)
}
defer resp.Body.Close()
counts := NewCounts()
if err := json.NewDecoder(resp.Body).Decode(counts); err != nil {
return nil, errors.Wrap(
err,
fmt.Sprintf(
"decoding response from the interceptor at %s",
interceptorURL.String(),
),
)
}
return counts, nil
}
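A small end-to-end sketch of the two halves above, using net/http/httptest to stand in for an interceptor (much like the integration test below does; the host name is made up):

package queue_test

import (
    "context"
    "fmt"
    "net/http"
    "net/http/httptest"
    "net/url"

    "github.com/go-logr/logr"
    "github.com/kedacore/http-add-on/pkg/queue"
)

func ExampleGetCounts() {
    lggr := logr.Discard()
    q := queue.NewMemory()
    q.Ensure("myhost.com")
    mux := http.NewServeMux()
    // serve the counts on /queue ...
    queue.AddCountsRoute(lggr, mux, q)
    srv := httptest.NewServer(mux)
    defer srv.Close()
    // ... and fetch them back with the RPC helper
    u, err := url.Parse(srv.URL)
    if err != nil {
        panic(err)
    }
    cts, err := queue.GetCounts(context.Background(), lggr, srv.Client(), *u)
    if err != nil {
        panic(err)
    }
    fmt.Println(cts.Counts["myhost.com"])
    // Output: 0
}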

@@ -0,0 +1,78 @@
package queue
import (
"context"
"encoding/json"
"errors"
"testing"
"github.com/go-logr/logr"
pkghttp "github.com/kedacore/http-add-on/pkg/http"
kedanet "github.com/kedacore/http-add-on/pkg/net"
"github.com/stretchr/testify/require"
)
func TestQueueSizeHandlerSuccess(t *testing.T) {
lggr := logr.Discard()
r := require.New(t)
reader := &FakeCountReader{
current: 123,
err: nil,
}
handler := newSizeHandler(lggr, reader)
req, rec := pkghttp.NewTestCtx("GET", "/queue")
handler.ServeHTTP(rec, req)
r.Equal(200, rec.Code, "response code")
respMap := map[string]int{}
decodeErr := json.NewDecoder(rec.Body).Decode(&respMap)
r.NoError(decodeErr)
r.Equalf(1, len(respMap), "response JSON length was not 1")
sizeVal, ok := respMap["sample.com"]
r.Truef(ok, "'sample.com' entry not available in return JSON")
r.Equalf(reader.current, sizeVal, "returned JSON queue size was wrong")
reader.err = errors.New("test error")
req, rec = pkghttp.NewTestCtx("GET", "/queue")
handler.ServeHTTP(rec, req)
r.Equal(500, rec.Code, "response code was not expected")
}
func TestQueueSizeHandlerFail(t *testing.T) {
lggr := logr.Discard()
r := require.New(t)
reader := &FakeCountReader{
current: 0,
err: errors.New("test error"),
}
handler := newSizeHandler(lggr, reader)
req, rec := pkghttp.NewTestCtx("GET", "/queue")
handler.ServeHTTP(rec, req)
r.Equal(500, rec.Code, "response code")
}
func TestQueueSizeHandlerIntegration(t *testing.T) {
ctx := context.Background()
lggr := logr.Discard()
r := require.New(t)
reader := &FakeCountReader{
current: 50,
err: nil,
}
hdl := kedanet.NewTestHTTPHandlerWrapper(newSizeHandler(lggr, reader))
srv, url, err := kedanet.StartTestServer(hdl)
r.NoError(err)
defer srv.Close()
httpCl := srv.Client()
counts, err := GetCounts(ctx, lggr, httpCl, *url)
r.NoError(err)
r.Equal(1, len(counts.Counts))
for _, val := range counts.Counts {
r.Equal(reader.current, val)
}
reqs := hdl.IncomingRequests()
r.Equal(1, len(reqs))
}

pkg/routing/config_map.go
@@ -0,0 +1,156 @@
package routing
import (
"context"
"fmt"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/k8s"
"github.com/kedacore/http-add-on/pkg/queue"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
// the name of the ConfigMap that stores the routing table
ConfigMapRoutingTableName = "keda-http-routing-table"
// the key in the ConfigMap data that stores the JSON routing table
configMapRoutingTableKey = "routing-table"
)
// SaveTableToConfigMap saves the contents of table to the Data field in
// configMap
func SaveTableToConfigMap(table *Table, configMap *corev1.ConfigMap) error {
tableAsJSON, err := table.MarshalJSON()
if err != nil {
return err
}
configMap.Data[configMapRoutingTableKey] = string(tableAsJSON)
return nil
}
// FetchTableFromConfigMap fetches the Data field from configMap, converts it
// to a routing table, and returns it
func FetchTableFromConfigMap(configMap *corev1.ConfigMap, q queue.Counter) (*Table, error) {
data, found := configMap.Data[configMapRoutingTableKey]
if !found {
return nil, fmt.Errorf(
"no '%s' key found in the %s ConfigMap",
configMapRoutingTableKey,
ConfigMapRoutingTableName,
)
}
ret := NewTable()
if err := ret.UnmarshalJSON([]byte(data)); err != nil {
retErr := errors.Wrap(
err,
fmt.Sprintf(
"error decoding '%s' key in %s ConfigMap",
configMapRoutingTableKey,
ConfigMapRoutingTableName,
),
)
return nil, retErr
}
return ret, nil
}
// updateQueueFromTable ensures that every host in the routing table
// exists in the given queue, and no hosts exist in the queue that
// don't exist in the routing table. It uses q.Ensure() and q.Remove()
// to do those things, respectively.
func updateQueueFromTable(
lggr logr.Logger,
table *Table,
q queue.Counter,
) error {
// ensure that every host is in the queue, even if it has
// zero pending requests. This is important so that the
// scaler can report on all applications.
for host := range table.m {
q.Ensure(host)
}
// ensure that the queue doesn't have any extra hosts that don't exist in the table
qCur, err := q.Current()
if err != nil {
lggr.Error(
err,
"failed to get current queue counts (in order to prune it of missing routing table hosts)",
)
return errors.Wrap(err, "pkg.routing.updateQueueFromTable")
}
for host := range qCur.Counts {
if _, err := table.Lookup(host); err != nil {
q.Remove(host)
}
}
return nil
}
// GetTable fetches the contents of the appropriate ConfigMap that stores
// the routing table, then tries to decode it into a temporary routing table
// data structure.
//
// If that succeeds, it calls table.Replace(newTable), then ensures that
// every host in the routing table exists in the given queue, and no hosts
// exist in the queue that don't exist in the routing table. It uses q.Ensure()
// and q.Remove() to do those things, respectively.
func GetTable(
ctx context.Context,
lggr logr.Logger,
getter k8s.ConfigMapGetter,
table *Table,
q queue.Counter,
) error {
lggr = lggr.WithName("pkg.routing.GetTable")
cm, err := getter.Get(
ctx,
ConfigMapRoutingTableName,
metav1.GetOptions{},
)
if err != nil {
lggr.Error(
err,
"failed to fetch routing table config map",
"configMapName",
ConfigMapRoutingTableName,
)
return errors.Wrap(
err,
fmt.Sprintf(
"failed to fetch ConfigMap %s",
ConfigMapRoutingTableName,
),
)
}
newTable, err := FetchTableFromConfigMap(cm, q)
if err != nil {
lggr.Error(
err,
"failed decoding routing table ConfigMap",
"configMapName",
ConfigMapRoutingTableName,
)
return errors.Wrap(
err,
fmt.Sprintf(
"failed decoding ConfigMap %s into a routing table",
ConfigMapRoutingTableName,
),
)
}
table.Replace(newTable)
if err := updateQueueFromTable(lggr, table, q); err != nil {
lggr.Error(
err,
"unable to update the queue from the new routing table",
)
return errors.Wrap(err, "pkg.routing.GetTable")
}
return nil
}
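A hedged round-trip sketch of the save/fetch pair above, against an in-memory ConfigMap (host, service, and deployment names are made up):

package routing_test

import (
    "fmt"

    "github.com/kedacore/http-add-on/pkg/queue"
    "github.com/kedacore/http-add-on/pkg/routing"
    corev1 "k8s.io/api/core/v1"
)

func ExampleSaveTableToConfigMap() {
    table := routing.NewTable()
    if err := table.AddTarget("myhost.com", routing.NewTarget("mysvc", 8080, "mydepl")); err != nil {
        panic(err)
    }
    // SaveTableToConfigMap writes into Data, so it must be non-nil
    cm := &corev1.ConfigMap{Data: map[string]string{}}
    if err := routing.SaveTableToConfigMap(table, cm); err != nil {
        panic(err)
    }
    ret, err := routing.FetchTableFromConfigMap(cm, queue.NewFakeCounter())
    if err != nil {
        panic(err)
    }
    tgt, err := ret.Lookup("myhost.com")
    if err != nil {
        panic(err)
    }
    fmt.Println(tgt.Service, tgt.Port, tgt.Deployment)
    // Output: mysvc 8080 mydepl
}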

@@ -0,0 +1,88 @@
package routing
import (
"context"
"time"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/k8s"
"github.com/kedacore/http-add-on/pkg/queue"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
)
// StartConfigMapRoutingTableUpdater starts a loop that does the following:
//
// - On every tick of updateEvery, fetches a full version of the ConfigMap called
// ConfigMapRoutingTableName and calls table.Replace(newTable) with its contents
// - Uses getterWatcher to watch for all ADDED or MODIFIED events on the ConfigMap
// called ConfigMapRoutingTableName. On either of those events, decodes
// that ConfigMap into a routing table and stores the new table into table
// using table.Replace(newTable)
// - Returns an appropriate non-nil error when ctx.Done() is closed
func StartConfigMapRoutingTableUpdater(
ctx context.Context,
lggr logr.Logger,
updateEvery time.Duration,
getterWatcher k8s.ConfigMapGetterWatcher,
table *Table,
q queue.Counter,
) error {
lggr = lggr.WithName("pkg.routing.StartConfigMapRoutingTableUpdater")
watchIface, err := getterWatcher.Watch(ctx, metav1.ListOptions{})
if err != nil {
return err
}
defer watchIface.Stop()
ticker := time.NewTicker(updateEvery)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "context is done")
case <-ticker.C:
if err := GetTable(ctx, lggr, getterWatcher, table, q); err != nil {
return errors.Wrap(err, "failed to fetch routing table")
}
case evt := <-watchIface.ResultChan():
evtType := evt.Type
obj := evt.Object
if evtType == watch.Added || evtType == watch.Modified {
cm, ok := obj.(*corev1.ConfigMap)
// by definition of watchIface, all returned objects should
// be assertable to a ConfigMap. In the unlikely case that
// one isn't, just ignore it and move on.
// This check is just to be defensive.
if !ok {
continue
}
// the watcher is open on all ConfigMaps in the namespace, so
// bail out of this loop iteration immediately if the event
// isn't for the routing table ConfigMap.
if cm.Name != ConfigMapRoutingTableName {
continue
}
newTable, err := FetchTableFromConfigMap(cm, q)
if err != nil {
return err
}
table.Replace(newTable)
if err := updateQueueFromTable(lggr, table, q); err != nil {
// if we couldn't update the queue, just log but don't bail.
// we want to give the loop a chance to tick (or receive a new event)
// and update the table & queue again
lggr.Error(
err,
"failed to update queue from table on ConfigMap change event",
)
continue
}
}
}
}
}
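A sketch of how a caller might supervise this loop with an errgroup, as the test below does; runTableUpdater is a hypothetical helper, and this assumes the caller already has something satisfying k8s.ConfigMapGetterWatcher:

package example

import (
    "context"
    "time"

    "github.com/go-logr/logr"
    "github.com/kedacore/http-add-on/pkg/k8s"
    "github.com/kedacore/http-add-on/pkg/queue"
    "github.com/kedacore/http-add-on/pkg/routing"
    "golang.org/x/sync/errgroup"
)

// runTableUpdater is a hypothetical supervisor for the update loop
func runTableUpdater(
    ctx context.Context,
    lggr logr.Logger,
    getterWatcher k8s.ConfigMapGetterWatcher,
    table *routing.Table,
    q queue.Counter,
) error {
    grp, ctx := errgroup.WithContext(ctx)
    grp.Go(func() error {
        return routing.StartConfigMapRoutingTableUpdater(
            ctx,
            lggr,
            500*time.Millisecond, // periodic full re-fetch interval
            getterWatcher,
            table,
            q,
        )
    })
    return grp.Wait()
}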

@@ -0,0 +1,193 @@
package routing
import (
"context"
"errors"
"testing"
"time"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/k8s"
"github.com/kedacore/http-add-on/pkg/queue"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/kubernetes/fake"
clgotesting "k8s.io/client-go/testing"
)
// fake adapters for the k8s.GetterWatcher interface.
//
// Note that there is another way to fake the k8s getter and
// watcher types.
//
// we could use the "fake" package in k8s.io/client-go
// (https://pkg.go.dev/k8s.io/client-go@v0.22.0/kubernetes/fake)
// instead of creating and using these structs, but doing so
// requires internal knowledge of several layers of the client-go
// module, since it's not well documented (even if it were,
// you would need to touch a few different packages to get it
// working).
//
// I've (arschles) chosen to create these structs and sidestep
// the entire process, since this approach is explicit and only
// requires knowledge of the k8s.GetterWatcher interface in this
// codebase, the standard k8s/client-go package (which you
// already need to know to understand this codebase), and the
// fake watcher, which you would need to understand using either
// approach. The fake watcher documentation is linked below:
//
// https://pkg.go.dev/k8s.io/apimachinery@v0.21.3/pkg/watch#NewFake
type fakeCMGetterWatcher struct {
k8s.ConfigMapGetter
k8s.ConfigMapWatcher
}
type fakeConfigMapWatcher struct {
watchIface watch.Interface
}
func (c fakeConfigMapWatcher) Watch(
ctx context.Context,
opts metav1.ListOptions,
) (watch.Interface, error) {
return c.watchIface, nil
}
func TestStartUpdateLoop(t *testing.T) {
r := require.New(t)
a := assert.New(t)
lggr := logr.Discard()
ctx, done := context.WithCancel(context.Background())
// ensure that we call done so that we clean
// up running test resources like the update loop, etc...
defer done()
const (
interval = 10 * time.Millisecond
ns = "testns"
)
q := queue.NewFakeCounter()
table := NewTable()
table.AddTarget("host1", NewTarget(
"svc1",
8080,
"depl1",
))
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: ConfigMapRoutingTableName,
Namespace: ns,
},
Data: map[string]string{},
}
r.NoError(SaveTableToConfigMap(table, cm))
fakeWatcher := watch.NewFake()
fakeGetter := fake.NewSimpleClientset(cm)
getterWatcher := fakeCMGetterWatcher{
ConfigMapGetter: fakeGetter.
CoreV1().
ConfigMaps(ns),
ConfigMapWatcher: fakeConfigMapWatcher{fakeWatcher},
}
defer fakeWatcher.Stop()
grp, ctx := errgroup.WithContext(ctx)
grp.Go(func() error {
err := StartConfigMapRoutingTableUpdater(
ctx,
lggr,
interval,
getterWatcher,
table,
q,
)
// we purposefully cancel the context below,
// so we need to ignore that error.
if !errors.Is(err, context.Canceled) {
return err
}
return nil
})
// send a watch event in parallel. we'll ensure that it
// made it through in the below loop
grp.Go(func() error {
fakeWatcher.Add(cm)
return nil
})
cmGetActions := []clgotesting.Action{}
otherGetActions := []clgotesting.Action{}
const waitDur = interval * 5
time.Sleep(waitDur)
for _, action := range fakeGetter.Actions() {
verb := action.GetVerb()
resource := action.GetResource().Resource
// record, then ignore all actions that were not for
// ConfigMaps.
// the loop should not do anything with other resources
if resource != "configmaps" {
otherGetActions = append(otherGetActions, action)
continue
} else if verb == "get" {
cmGetActions = append(cmGetActions, action)
}
}
// assert (don't require) these conditions so that
// we can check them, fail if necessary, but continue onward
// to check the result of the error group afterward
a.Equal(
0,
len(otherGetActions),
"unexpected actions on non-ConfigMap resources: %s",
otherGetActions,
)
a.Greater(
len(cmGetActions),
0,
"no get actions for ConfigMaps",
)
done()
// if this test returns without timing out,
// then we can be sure that the fakeWatcher was
// able to send a watch event. if that times out
// or otherwise fails, the update loop was not properly
// listening for these events.
r.NoError(grp.Wait())
// ensure that the queue and table host lists match
// exactly
table.l.RLock()
curTable := table.m
table.l.RUnlock()
curQCounts, err := q.Current()
r.NoError(err)
// check that the queue has every host in the table
for tableHost := range curTable {
_, ok := curQCounts.Counts[tableHost]
r.True(
ok,
"host %s not found in queue",
tableHost,
)
}
// check that the table has every host in the queue
for qHost := range curQCounts.Counts {
_, ok := curTable[qHost]
r.True(
ok,
"host %s not found in table",
qHost,
)
}
}

pkg/routing/table.go
@@ -0,0 +1,129 @@
package routing
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"net/url"
"sync"
)
var ErrTargetNotFound = errors.New("Target not found")
type Target struct {
Service string `json:"service"`
Port int `json:"port"`
Deployment string `json:"deployment"`
}
func NewTarget(svc string, port int, depl string) Target {
return Target{
Service: svc,
Port: port,
Deployment: depl,
}
}
func (t *Target) ServiceURL() (*url.URL, error) {
urlStr := fmt.Sprintf("http://%s:%d", t.Service, t.Port)
u, err := url.Parse(urlStr)
if err != nil {
return nil, err
}
return u, nil
}
type Table struct {
fmt.Stringer
m map[string]Target
l *sync.RWMutex
}
func NewTable() *Table {
return &Table{
m: make(map[string]Target),
l: new(sync.RWMutex),
}
}
func (t *Table) String() string {
t.l.RLock()
defer t.l.RUnlock()
return fmt.Sprintf("%v", t.m)
}
func (t *Table) MarshalJSON() ([]byte, error) {
t.l.RLock()
defer t.l.RUnlock()
var b bytes.Buffer
err := json.NewEncoder(&b).Encode(t.m)
if err != nil {
return nil, err
}
return b.Bytes(), nil
}
func (t *Table) UnmarshalJSON(data []byte) error {
t.l.Lock()
defer t.l.Unlock()
t.m = map[string]Target{}
b := bytes.NewBuffer(data)
return json.NewDecoder(b).Decode(&t.m)
}
func (t *Table) Lookup(host string) (Target, error) {
t.l.RLock()
defer t.l.RUnlock()
ret, ok := t.m[host]
if !ok {
return Target{}, ErrTargetNotFound
}
return ret, nil
}
// AddTarget registers target for host in the routing table t
// if it didn't already exist.
//
// returns a non-nil error if it did already exist
func (t *Table) AddTarget(
host string,
target Target,
) error {
t.l.Lock()
defer t.l.Unlock()
_, ok := t.m[host]
if ok {
return fmt.Errorf(
"host %s is already registered in the routing table",
host,
)
}
t.m[host] = target
return nil
}
// RemoveTarget removes host, if it exists, and its corresponding Target entry in
// the routing table. If it does not exist, returns a non-nil error
func (t *Table) RemoveTarget(host string) error {
t.l.Lock()
defer t.l.Unlock()
_, ok := t.m[host]
if !ok {
return fmt.Errorf("host %s did not exist in the routing table", host)
}
delete(t.m, host)
return nil
}
// Replace replaces t's routing table with newTable's.
//
// This function is concurrency safe for t, but not for newTable.
// The caller must ensure that no other goroutine is writing to
// newTable at the time at which they call this function.
func (t *Table) Replace(newTable *Table) {
t.l.Lock()
defer t.l.Unlock()
t.m = newTable.m
}
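For illustration, a usage sketch of the table API above as a Go testable example (all names are made up):

package routing_test

import (
    "fmt"

    "github.com/kedacore/http-add-on/pkg/routing"
)

func ExampleTable() {
    tbl := routing.NewTable()
    if err := tbl.AddTarget("myhost.com", routing.NewTarget("mysvc", 8080, "mydepl")); err != nil {
        panic(err)
    }
    tgt, err := tbl.Lookup("myhost.com")
    if err != nil {
        panic(err)
    }
    u, err := tgt.ServiceURL()
    if err != nil {
        panic(err)
    }
    fmt.Println(u.String())
    // lookups for unregistered hosts fail with ErrTargetNotFound
    if _, err := tbl.Lookup("missing.com"); err == routing.ErrTargetNotFound {
        fmt.Println("not found")
    }
    // Output:
    // http://mysvc:8080
    // not found
}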

pkg/routing/table_rpc.go
@@ -0,0 +1,82 @@
package routing
import (
"encoding/json"
"net/http"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/k8s"
"github.com/kedacore/http-add-on/pkg/queue"
)
const (
routingPingPath = "/routing_ping"
routingFetchPath = "/routing_table"
)
// AddFetchRoute adds a route to mux that fetches the current state of table,
// encodes it as JSON, and returns it to the HTTP client
func AddFetchRoute(
lggr logr.Logger,
mux *http.ServeMux,
table *Table,
) {
lggr = lggr.WithName("pkg.routing.AddFetchRoute")
lggr.Info("adding routing ping route", "path", routingPingPath)
mux.Handle(routingFetchPath, newTableHandler(lggr, table))
}
// AddPingRoute adds a route to mux that will accept an empty GET request,
// fetch the current state of the routing table from the standard routing
// table ConfigMap (ConfigMapRoutingTableName), save it to local memory, and
// return the contents of the routing table to the client.
func AddPingRoute(
lggr logr.Logger,
mux *http.ServeMux,
getter k8s.ConfigMapGetter,
table *Table,
q queue.Counter,
) {
lggr = lggr.WithName("pkg.routing.AddPingRoute")
lggr.Info("adding interceptor routing ping route", "path", routingPingPath)
mux.HandleFunc(routingPingPath, func(w http.ResponseWriter, r *http.Request) {
err := GetTable(
r.Context(),
lggr,
getter,
table,
q,
)
if err != nil {
lggr.Error(err, "fetching new routing table")
w.WriteHeader(500)
w.Write([]byte(
"error fetching routing table",
))
return
}
w.WriteHeader(200)
if err := json.NewEncoder(w).Encode(table); err != nil {
w.WriteHeader(500)
lggr.Error(err, "writing new routing table to the client")
return
}
})
}
func newTableHandler(
lggr logr.Logger,
table *Table,
) http.Handler {
lggr = lggr.WithName("pkg.routing.TableHandler")
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
err := json.NewEncoder(w).Encode(table)
if err != nil {
w.WriteHeader(500)
lggr.Error(err, "encoding logging table JSON")
w.Write([]byte(
"error encoding and transmitting the routing table",
))
}
})
}
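A sketch of exercising the fetch route over HTTP with an httptest server; the JSON shape comes from Table.MarshalJSON above, and the host/target values are made up:

package routing_test

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"

    "github.com/go-logr/logr"
    "github.com/kedacore/http-add-on/pkg/routing"
)

func ExampleAddFetchRoute() {
    lggr := logr.Discard()
    table := routing.NewTable()
    if err := table.AddTarget("myhost.com", routing.NewTarget("mysvc", 8080, "mydepl")); err != nil {
        panic(err)
    }
    mux := http.NewServeMux()
    routing.AddFetchRoute(lggr, mux, table)
    srv := httptest.NewServer(mux)
    defer srv.Close()
    resp, err := http.Get(srv.URL + "/routing_table")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    b, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(b))
    // Output: {"myhost.com":{"service":"mysvc","port":8080,"deployment":"mydepl"}}
}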

@@ -0,0 +1,91 @@
package routing
import (
"context"
"testing"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/queue"
"github.com/stretchr/testify/require"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes/fake"
)
func newTableFromMap(m map[string]Target) *Table {
table := NewTable()
for host, target := range m {
table.AddTarget(host, target)
}
return table
}
func TestRPCIntegration(t *testing.T) {
const ns = "testns"
ctx := context.Background()
lggr := logr.Discard()
r := require.New(t)
// fetch an empty table
retTable := NewTable()
k8sCl, err := fakeConfigMapClientForTable(
NewTable(),
ns,
ConfigMapRoutingTableName,
)
r.NoError(err)
r.NoError(GetTable(
ctx,
lggr,
k8sCl.CoreV1().ConfigMaps("testns"),
retTable,
queue.NewFakeCounter(),
))
r.Equal(0, len(retTable.m))
// fetch a table with lots of targets in it
targetMap := map[string]Target{
"host1": {
Service: "svc1",
Port: 1234,
Deployment: "depl1",
},
"host2": {
Service: "svc2",
Port: 2345,
Deployment: "depl2",
},
}
retTable = NewTable()
k8sCl, err = fakeConfigMapClientForTable(
newTableFromMap(targetMap),
ns,
ConfigMapRoutingTableName,
)
r.NoError(err)
r.NoError(GetTable(
ctx,
lggr,
k8sCl.CoreV1().ConfigMaps("testns"),
retTable,
queue.NewFakeCounter(),
))
r.Equal(len(targetMap), len(retTable.m))
r.Equal(targetMap, retTable.m)
}
func fakeConfigMapClientForTable(t *Table, ns, name string) (*fake.Clientset, error) {
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns,
},
Data: map[string]string{},
}
if err := SaveTableToConfigMap(t, cm); err != nil {
return nil, err
}
return fake.NewSimpleClientset(cm), nil
}

pkg/routing/table_test.go
@@ -0,0 +1,82 @@
package routing
import (
"encoding/json"
"testing"
"github.com/stretchr/testify/require"
)
func TestTableJSONRoundTrip(t *testing.T) {
const host = "testhost"
r := require.New(t)
tbl := NewTable()
tgt := Target{
Service: "testsvc",
Port: 8082,
Deployment: "testdepl",
}
tbl.AddTarget(host, tgt)
b, err := json.Marshal(&tbl)
r.NoError(err)
returnTbl := NewTable()
r.NoError(json.Unmarshal(b, returnTbl))
retTarget, err := returnTbl.Lookup(host)
r.NoError(err)
r.Equal(tgt.Service, retTarget.Service)
r.Equal(tgt.Port, retTarget.Port)
r.Equal(tgt.Deployment, retTarget.Deployment)
}
func TestTableRemove(t *testing.T) {
const host = "testrm"
r := require.New(t)
tgt := Target{
Service: "testrm",
Port: 8084,
Deployment: "testrmdepl",
}
tbl := NewTable()
// add the target to the table and ensure that you can look it up
tbl.AddTarget(host, tgt)
retTgt, err := tbl.Lookup(host)
r.Equal(tgt, retTgt)
r.NoError(err)
// remove the target and ensure that you can't look it up
r.NoError(tbl.RemoveTarget(host))
retTgt, err = tbl.Lookup(host)
r.Equal(Target{}, retTgt)
r.Equal(ErrTargetNotFound, err)
}
func TestTableReplace(t *testing.T) {
r := require.New(t)
const host1 = "testreplhost1"
const host2 = "testreplhost2"
tgt1 := Target{
Service: "tgt1",
Port: 9090,
Deployment: "depl1",
}
tgt2 := Target{
Service: "tgt2",
Port: 9091,
Deployment: "depl2",
}
// create two routing tables, each with different targets
tbl1 := NewTable()
tbl1.AddTarget(host1, tgt1)
tbl2 := NewTable()
tbl2.AddTarget(host2, tgt2)
// replace the second table with the first and ensure that the tables
// are now equal
tbl2.Replace(tbl1)
r.Equal(tbl1, tbl2)
}

@@ -0,0 +1,24 @@
package routing
import (
"fmt"
"testing"
"github.com/stretchr/testify/require"
)
func TestTargetServiceURL(t *testing.T) {
r := require.New(t)
target := Target{
Service: "testsvc",
Port: 8081,
Deployment: "testdeploy",
}
svcURL, err := target.ServiceURL()
r.NoError(err)
r.Equal(
fmt.Sprintf("%s:%d", target.Service, target.Port),
svcURL.Host,
)
}

@@ -7,7 +7,7 @@ ARG GOOS=linux
 FROM golang:${GOLANG_VERSION}-alpine AS builder
-WORKDIR $GOPATH/src/github.com/kedahttp/http-add-on
+WORKDIR $GOPATH/src/github.com/kedacore/http-add-on
 COPY go.mod go.mod
 COPY go.sum go.sum

@@ -6,9 +6,11 @@ package main
 import (
 context "context"
+"fmt"
 "math/rand"
 "time"
+"github.com/go-logr/logr"
 empty "github.com/golang/protobuf/ptypes/empty"
 externalscaler "github.com/kedacore/http-add-on/proto"
 )
@@ -18,13 +20,22 @@ func init() {
 }
 type impl struct {
+lggr logr.Logger
 pinger *queuePinger
 targetMetric int64
 externalscaler.UnimplementedExternalScalerServer
 }
-func newImpl(pinger *queuePinger, targetMetric int64) *impl {
-return &impl{pinger: pinger, targetMetric: targetMetric}
+func newImpl(
+lggr logr.Logger,
+pinger *queuePinger,
+targetMetric int64,
+) *impl {
+return &impl{
+lggr: lggr,
+pinger: pinger,
+targetMetric: targetMetric,
+}
 }
 func (e *impl) Ping(context.Context, *empty.Empty) (*empty.Empty, error) {
@@ -35,13 +46,33 @@ func (e *impl) IsActive(
 ctx context.Context,
 scaledObject *externalscaler.ScaledObjectRef,
 ) (*externalscaler.IsActiveResponse, error) {
+lggr := e.lggr.WithName("IsActive")
+host, ok := scaledObject.ScalerMetadata["host"]
+if !ok {
+err := fmt.Errorf("no 'host' field found in ScaledObject metadata")
+lggr.Error(err, "returning immediately from IsActive RPC call", "ScaledObject", scaledObject)
+return nil, err
+}
+if host == "interceptor" {
+return &externalscaler.IsActiveResponse{
+Result: true,
+}, nil
+}
+allCounts := e.pinger.counts()
+hostCount, ok := allCounts[host]
+if !ok {
+err := fmt.Errorf("host '%s' not found in counts", host)
+lggr.Error(err, "Given host was not found in queue count map", "host", host, "allCounts", allCounts)
+return nil, err
+}
+active := hostCount > 0
 return &externalscaler.IsActiveResponse{
-Result: true,
+Result: active,
 }, nil
 }
 func (e *impl) StreamIsActive(
-in *externalscaler.ScaledObjectRef,
+scaledObject *externalscaler.ScaledObjectRef,
 server externalscaler.ExternalScaler_StreamIsActiveServer,
 ) error {
 // this function communicates with KEDA via the 'server' parameter.
@@ -54,8 +85,16 @@ func (e *impl) StreamIsActive(
 case <-server.Context().Done():
 return nil
 case <-ticker.C:
+active, err := e.IsActive(server.Context(), scaledObject)
+if err != nil {
+e.lggr.Error(
+err,
+"error getting active status in stream, continuing",
+)
+continue
+}
 server.Send(&externalscaler.IsActiveResponse{
-Result: true,
+Result: active.Result,
 })
 }
 }
@@ -65,14 +104,22 @@ func (e *impl) GetMetricSpec(
 _ context.Context,
 sor *externalscaler.ScaledObjectRef,
 ) (*externalscaler.GetMetricSpecResponse, error) {
-targetMetricValue := e.targetMetric
-return &externalscaler.GetMetricSpecResponse{
-MetricSpecs: []*externalscaler.MetricSpec{
-{
-MetricName: "queueSize",
-TargetSize: targetMetricValue,
-},
+lggr := e.lggr.WithName("GetMetricSpec")
+host, ok := sor.ScalerMetadata["host"]
+if !ok {
+err := fmt.Errorf("'host' not found in ScaledObject metadata")
+lggr.Error(err, "no 'host' found in ScaledObject metadata")
+return nil, err
+}
+metricSpecs := []*externalscaler.MetricSpec{
+{
+MetricName: host,
+TargetSize: int64(e.targetMetric),
 },
+}
+return &externalscaler.GetMetricSpecResponse{
+MetricSpecs: metricSpecs,
 }, nil
 }
@@ -80,13 +127,31 @@ func (e *impl) GetMetrics(
 _ context.Context,
 metricRequest *externalscaler.GetMetricsRequest,
 ) (*externalscaler.GetMetricsResponse, error) {
-size := int64(e.pinger.count())
-return &externalscaler.GetMetricsResponse{
-MetricValues: []*externalscaler.MetricValue{
-{
-MetricName: "queueSize",
-MetricValue: size,
-},
+lggr := e.lggr.WithName("GetMetrics")
+host, ok := metricRequest.ScaledObjectRef.ScalerMetadata["host"]
+if !ok {
+err := fmt.Errorf("no 'host' field found in ScaledObject metadata")
+lggr.Error(err, "ScaledObjectRef", metricRequest.ScaledObjectRef)
+return nil, err
+}
+allCounts := e.pinger.counts()
+hostCount, ok := allCounts[host]
+if !ok {
+if host == "interceptor" {
+hostCount = e.pinger.aggregate()
+} else {
+err := fmt.Errorf("host '%s' not found in counts", host)
+lggr.Error(err, "allCounts", allCounts)
+return nil, err
+}
+}
+metricValues := []*externalscaler.MetricValue{
+{
+MetricName: host,
+MetricValue: int64(hostCount),
 },
+}
+return &externalscaler.GetMetricsResponse{
+MetricValues: metricValues,
 }, nil
 }

scaler/handlers_test.go
@@ -0,0 +1,288 @@
package main
import (
context "context"
"fmt"
"strconv"
"testing"
"time"
"github.com/go-logr/logr"
"github.com/kedacore/http-add-on/pkg/queue"
externalscaler "github.com/kedacore/http-add-on/proto"
"github.com/stretchr/testify/require"
)
func TestIsActive(t *testing.T) {
const host = "TestIsActive.testing.com"
r := require.New(t)
ctx := context.Background()
lggr := logr.Discard()
ticker, pinger := newFakeQueuePinger(ctx, lggr)
defer ticker.Stop()
pinger.pingMut.Lock()
pinger.allCounts[host] = 0
pinger.pingMut.Unlock()
hdl := newImpl(
lggr,
pinger,
123,
)
res, err := hdl.IsActive(
ctx,
&externalscaler.ScaledObjectRef{
ScalerMetadata: map[string]string{
"host": host,
},
},
)
r.NoError(err)
r.NotNil(res)
// initially, IsActive should return false since the
// count for the host is 0
r.False(res.Result)
// increment the count for the host and then expect
// active to be true
pinger.pingMut.Lock()
pinger.allCounts[host]++
pinger.pingMut.Unlock()
res, err = hdl.IsActive(
ctx,
&externalscaler.ScaledObjectRef{
ScalerMetadata: map[string]string{
"host": host,
},
},
)
r.NoError(err)
r.NotNil(res)
r.True(res.Result)
}
func TestGetMetricSpec(t *testing.T) {
const (
host = "abcd"
target = int64(200)
)
r := require.New(t)
ctx := context.Background()
lggr := logr.Discard()
ticker, pinger := newFakeQueuePinger(ctx, lggr)
defer ticker.Stop()
hdl := newImpl(lggr, pinger, 123)
meta := map[string]string{
"host": host,
"targetPendingRequests": strconv.Itoa(int(target)),
}
ref := &externalscaler.ScaledObjectRef{
ScalerMetadata: meta,
}
ret, err := hdl.GetMetricSpec(ctx, ref)
r.NoError(err)
r.NotNil(ret)
r.Equal(1, len(ret.MetricSpecs))
spec := ret.MetricSpecs[0]
r.Equal(host, spec.MetricName)
// NOTE: spec.TargetSize should equal the 'target' const, but
// currently doesn't. Fixing this is tracked in
// https://github.com/kedacore/http-add-on/issues/234
r.Equal(int64(123), spec.TargetSize)
}
// GetMetrics with a ScaledObjectRef in the RPC request that has
// no 'host' field in its metadata
func TestGetMetricsMissingHostInMetadata(t *testing.T) {
r := require.New(t)
ctx := context.Background()
lggr := logr.Discard()
req := &externalscaler.GetMetricsRequest{
ScaledObjectRef: &externalscaler.ScaledObjectRef{},
}
ticker, pinger := newFakeQueuePinger(ctx, lggr)
defer ticker.Stop()
hdl := newImpl(lggr, pinger, 123)
// no 'host' in the ScaledObjectRef's metadata field
res, err := hdl.GetMetrics(ctx, req)
r.Error(err)
r.Nil(res)
r.Contains(
err.Error(),
"no 'host' field found in ScaledObject metadata",
)
}
// 'host' field found in ScaledObjectRef.ScalerMetadata, but
// not found in the queuePinger
func TestGetMetricsMissingHostInQueue(t *testing.T) {
r := require.New(t)
ctx := context.Background()
lggr := logr.Discard()
const host = "TestGetMetricsMissingHostInQueue.com"
meta := map[string]string{
"host": host,
}
ticker, pinger := newFakeQueuePinger(ctx, lggr)
defer ticker.Stop()
hdl := newImpl(lggr, pinger, 123)
req := &externalscaler.GetMetricsRequest{
ScaledObjectRef: &externalscaler.ScaledObjectRef{},
}
req.ScaledObjectRef.ScalerMetadata = meta
res, err := hdl.GetMetrics(ctx, req)
r.Error(err)
r.Contains(err.Error(), fmt.Sprintf(
"host '%s' not found in counts", host,
))
r.Nil(res)
}
// GetMetrics RPC call with host found in both the incoming
// ScaledObject and in the queue counter
func TestGetMetricsHostFoundInQueueCounts(t *testing.T) {
const (
ns = "testns"
svcName = "testsrv"
pendingQLen = 203
)
host := fmt.Sprintf("%s.scaler.testing.com", t.Name())
// create a request for the GetMetrics RPC call. it instructs
// GetMetrics to return the counts for one specific host.
// below, we do setup to ensure that we have a fake
// interceptor, and that interceptor knows about the given host
req := &externalscaler.GetMetricsRequest{
ScaledObjectRef: &externalscaler.ScaledObjectRef{
ScalerMetadata: map[string]string{
"host": host,
},
},
}
r := require.New(t)
ctx := context.Background()
lggr := logr.Discard()
// we need to create a new queuePinger with valid endpoints
// to query this time, so that when counts are requested by
// the internal queuePinger logic, there is a valid host from
// which to request those counts
q := queue.NewFakeCounter()
// NOTE: don't call .Resize here or you'll have to make sure
// to receive on q.ResizedCh
q.RetMap[host] = pendingQLen
// create a fake interceptor
fakeSrv, fakeSrvURL, endpoints, err := startFakeQueueEndpointServer(
ns,
svcName,
q,
1,
)
r.NoError(err)
defer fakeSrv.Close()
// create a fake queue pinger. this is the simulated
// scaler that pings the above fake interceptor
ticker, pinger := newFakeQueuePinger(
ctx,
lggr,
func(opts *fakeQueuePingerOpts) { opts.endpoints = endpoints },
func(opts *fakeQueuePingerOpts) { opts.tickDur = 1 * time.Millisecond },
func(opts *fakeQueuePingerOpts) { opts.port = fakeSrvURL.Port() },
)
defer ticker.Stop()
// sleep for more than enough time for the pinger to do its
// first tick
time.Sleep(50 * time.Millisecond)
hdl := newImpl(lggr, pinger, 123)
res, err := hdl.GetMetrics(ctx, req)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricValues))
metricVal := res.MetricValues[0]
r.Equal(host, metricVal.MetricName)
r.Equal(int64(pendingQLen), metricVal.MetricValue)
}
// Ensure that the queue pinger returns the aggregate request
// count when the host is set to "interceptor"
func TestGetMetricsInterceptorReturnsAggregate(t *testing.T) {
const (
ns = "testns"
svcName = "testsrv"
pendingQLen = 203
)
// create a request for the GetMetrics RPC call. setting 'host'
// to "interceptor" instructs GetMetrics to return the aggregate
// count across all hosts. below, we do setup to ensure that we
// have a fake interceptor that knows about several hosts
req := &externalscaler.GetMetricsRequest{
ScaledObjectRef: &externalscaler.ScaledObjectRef{
ScalerMetadata: map[string]string{
"host": "interceptor",
},
},
}
r := require.New(t)
ctx := context.Background()
lggr := logr.Discard()
// we need to create a new queuePinger with valid endpoints
// to query this time, so that when counts are requested by
// the internal queuePinger logic, there is a valid host from
// which to request those counts
q := queue.NewFakeCounter()
// NOTE: don't call .Resize here or you'll have to make sure
// to receive on q.ResizedCh
q.RetMap["host1"] = pendingQLen
q.RetMap["host2"] = pendingQLen
// create a fake interceptor
fakeSrv, fakeSrvURL, endpoints, err := startFakeQueueEndpointServer(
ns,
svcName,
q,
1,
)
r.NoError(err)
defer fakeSrv.Close()
// create a fake queue pinger. this is the simulated
// scaler that pings the above fake interceptor
const tickDur = 5 * time.Millisecond
ticker, pinger := newFakeQueuePinger(
ctx,
lggr,
func(opts *fakeQueuePingerOpts) { opts.endpoints = endpoints },
func(opts *fakeQueuePingerOpts) { opts.tickDur = tickDur },
func(opts *fakeQueuePingerOpts) { opts.port = fakeSrvURL.Port() },
)
defer ticker.Stop()
// sleep for more than enough time for the pinger to do its
// first tick
time.Sleep(tickDur * 5)
hdl := newImpl(lggr, pinger, 123)
res, err := hdl.GetMetrics(ctx, req)
r.NoError(err)
r.NotNil(res)
r.Equal(1, len(res.MetricValues))
metricVal := res.MetricValues[0]
r.Equal("interceptor", metricVal.MetricName)
aggregate := pinger.aggregate()
r.Equal(int64(aggregate), metricVal.MetricValue)
}

@@ -6,13 +6,17 @@ package main
 import (
 "context"
+"encoding/json"
 "fmt"
 "log"
 "net"
 "net/http"
+"os"
 "time"
+"github.com/go-logr/logr"
 "github.com/kedacore/http-add-on/pkg/k8s"
+pkglog "github.com/kedacore/http-add-on/pkg/log"
 externalscaler "github.com/kedacore/http-add-on/proto"
 "golang.org/x/sync/errgroup"
 "google.golang.org/grpc"
@@ -20,7 +24,10 @@
 )
 func main() {
+lggr, err := pkglog.NewZapr()
+if err != nil {
+log.Fatalf("error creating new logger (%v)", err)
+}
 ctx := context.Background()
 cfg := mustParseConfig()
 grpcPort := cfg.GRPCPort
@@ -32,11 +39,13 @@ func main() {
 k8sCl, _, err := k8s.NewClientset()
 if err != nil {
-log.Fatalf("Couldn't get a Kubernetes client (%s)", err)
+lggr.Error(err, "getting a Kubernetes client")
+os.Exit(1)
 }
 pinger := newQueuePinger(
 context.Background(),
-k8sCl,
+lggr,
+k8s.EndpointsFuncForK8sClientset(k8sCl),
 namespace,
 svcName,
 targetPortStr,
@@ -44,27 +53,39 @@ func main() {
 )
 grp, ctx := errgroup.WithContext(ctx)
-grp.Go(startGrpcServer(ctx, grpcPort, pinger, int64(targetPendingRequests)))
-grp.Go(startHealthcheckServer(ctx, healthPort))
-log.Fatalf("One or more of the servers failed: %s", grp.Wait())
+grp.Go(
+startGrpcServer(
+ctx,
+lggr,
+grpcPort,
+pinger,
+int64(targetPendingRequests),
+),
+)
+grp.Go(startHealthcheckServer(ctx, lggr, healthPort, pinger))
+lggr.Error(grp.Wait(), "one or more of the servers failed")
 }
 func startGrpcServer(
 ctx context.Context,
+lggr logr.Logger,
 port int,
 pinger *queuePinger,
 targetPendingRequests int64,
 ) func() error {
 return func() error {
 addr := fmt.Sprintf("0.0.0.0:%d", port)
-log.Printf("Serving external scaler on %s", addr)
+lggr.Info("starting grpc server", "address", addr)
 lis, err := net.Listen("tcp", addr)
 if err != nil {
-log.Fatalf("failed to listen: %v", err)
+return err
 }
 grpcServer := grpc.NewServer()
-externalscaler.RegisterExternalScalerServer(grpcServer, newImpl(pinger, targetPendingRequests))
+externalscaler.RegisterExternalScalerServer(
+grpcServer,
+newImpl(lggr, pinger, targetPendingRequests),
+)
 reflection.Register(grpcServer)
 go func() {
 <-ctx.Done()
@@ -74,7 +95,13 @@ func startGrpcServer(
 }
 }
-func startHealthcheckServer(ctx context.Context, port int) func() error {
+func startHealthcheckServer(
+ctx context.Context,
+lggr logr.Logger,
+port int,
+pinger *queuePinger,
+) func() error {
+lggr = lggr.WithName("startHealthcheckServer")
 return func() error {
 mux := http.NewServeMux()
 mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
@@ -83,11 +110,38 @@ func startHealthcheckServer(ctx context.Context, port int) func() error {
 mux.HandleFunc("/livez", func(w http.ResponseWriter, r *http.Request) {
 w.WriteHeader(200)
 })
+mux.HandleFunc("/queue", func(w http.ResponseWriter, r *http.Request) {
+lggr = lggr.WithName("route.counts")
+cts := pinger.counts()
+lggr.Info("counts endpoint", "counts", cts)
+if err := json.NewEncoder(w).Encode(&cts); err != nil {
+lggr.Error(err, "writing counts information to client")
+w.WriteHeader(500)
+}
+})
+mux.HandleFunc("/queue_ping", func(w http.ResponseWriter, r *http.Request) {
+ctx := r.Context()
+lggr := lggr.WithName("route.counts_ping")
+if err := pinger.requestCounts(ctx); err != nil {
+lggr.Error(err, "requesting counts failed")
+w.WriteHeader(500)
+w.Write([]byte("error requesting counts from interceptors"))
+return
+}
+cts := pinger.counts()
+lggr.Info("counts ping endpoint", "counts", cts)
+if err := json.NewEncoder(w).Encode(&cts); err != nil {
+lggr.Error(err, "writing counts data to caller")
+w.WriteHeader(500)
+w.Write([]byte("error writing counts data to caller"))
+}
+})
 srv := &http.Server{
 Addr: fmt.Sprintf(":%d", port),
 Handler: mux,
 }
-log.Printf("Serving health check server on port %d", port)
+lggr.Info("starting health check server", "port", port)
 go func() {
 <-ctx.Done()
@@ -7,17 +7,24 @@ import (
 "testing"
 "time"
+"github.com/go-logr/logr"
 "github.com/stretchr/testify/require"
 "golang.org/x/sync/errgroup"
 )
 func TestHealthChecks(t *testing.T) {
-ctx := context.Background()
-ctx, done := context.WithCancel(ctx)
-defer done()
+lggr := logr.Discard()
 r := require.New(t)
 const port = 8080
+ctx, done := context.WithCancel(context.Background())
+defer done()
 errgrp, ctx := errgroup.WithContext(ctx)
-srvFunc := startHealthcheckServer(ctx, port)
+ticker, pinger := newFakeQueuePinger(ctx, lggr)
+defer ticker.Stop()
+srvFunc := startHealthcheckServer(ctx, lggr, port, pinger)
 errgrp.Go(srvFunc)
 time.Sleep(500 * time.Millisecond)