Compare commits


3 Commits
main...v0.2.1

Author           SHA1        Message                                          Date
Lancelot Robson  65394d8fc8  Update versions for v0.2.1 release               2025-05-13 11:01:50 +01:00
Lance Robson     79644b46e1  Don't swallow errors without logging them (#12)  2025-05-13 10:58:26 +01:00
Lancelot Robson  6287a5f4d6  Update versions for release-v0.2                 2025-04-17 10:14:41 +01:00

Full commit message for 79644b46e1:

* Don't swallow errors without logging them
* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
11 changed files with 104 additions and 66 deletions

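The bulk of this compare is PR #12's change to stop swallowing errors: `return fmt.Errorf(...)` calls that discarded the underlying error are replaced with a log-then-return, as the file diffs below show. A minimal, hypothetical sketch of that pattern (assuming a sirupsen/logrus-style logger, which the `log.WithError(...).Errorf(...)` calls in the diffs suggest; `writeFile` is an invented stand-in for the repo's own functions):

```go
package main

import (
	"os"

	log "github.com/sirupsen/logrus"
)

// writeFile is a condensed, hypothetical stand-in for the functions touched
// by PR #12. The pattern: instead of returning a fresh fmt.Errorf message
// that drops the original error, log the failure with its cause attached and
// return the error unchanged, so both the logs and the caller see it.
func writeFile(filename string, data []byte) error {
	file, err := os.Create(filename)
	if err != nil {
		// before: return fmt.Errorf("failed to open output file: %s", filename)
		log.WithError(err).Errorf("failed to open output file: %s", filename)
		return err
	}
	defer file.Close()

	if _, err := file.Write(data); err != nil {
		log.WithError(err).Errorf("failed to write results to file: %s", filename)
		return err
	}
	return nil
}

func main() {
	if err := writeFile("results.json", []byte("{}\n")); err != nil {
		os.Exit(1)
	}
}
```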
View File

@@ -12,9 +12,9 @@ It also provides a framework for us to extend later with other tests.
`docker pull <image>`, `docker tag <private-image-name>`, `docker push <private-image-name>`
The images are:
`quay.io/tigeradev/tiger-bench-nginx:latest`
`quay.io/tigeradev/tiger-bench-perf:latest`
`quay.io/tigeradev/tiger-bench:latest` - this is the tool itself.
`quay.io/tigeradev/tiger-bench-nginx:v0.2.1`
`quay.io/tigeradev/tiger-bench-perf:v0.2.1`
`quay.io/tigeradev/tiger-bench:v0.2.1` - this is the tool itself.
1. Create a `testconfig.yaml` file, containing a list of test definitions you'd like to run (see example provided)
1. Run the tool, substituting the image names in the command below if needed, and modifying the test parameters if desired:
@@ -28,9 +28,9 @@ It also provides a framework for us to extend later with other tests.
-e AWS_ACCESS_KEY_ID \
-e AWS_SESSION_TOKEN \
-e LOG_LEVEL=INFO \
-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:latest" \
-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:latest" \
quay.io/tigeradev/tiger-bench:latest
-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:v0.2.1" \
-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:v0.2.1" \
quay.io/tigeradev/tiger-bench:v0.2.1
```
1. See results in the `results.json` file in your local directory!
@@ -63,9 +63,9 @@ docker run --rm --net=host \
-v $HOME/.aws:/root/.aws \
-e AWS_SECRET_ACCESS_KEY \
-e AWS_ACCESS_KEY_ID \
-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:latest" \
-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:latest" \
quay.io/tigeradev/tiger-bench:latest
-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:v0.2.1" \
-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:v0.2.1" \
quay.io/tigeradev/tiger-bench:v0.2.1
```
The tool runs in the host's network namespace to ensure it has the same access as a user running kubectl on the host.
@@ -137,11 +137,11 @@ external: false
`direct` is a boolean, which determines whether the test should run a direct pod-to-pod test.
`service` is a boolean, which determines whether the test should run a pod-to-service-to-pod test.
`external` is a boolean, which determines whether the test should run from wherever this test is being run to an externally exposed service.
If `external=true`, you must also supply `ExternalIPOrFQDN`, `TestPort` and `ControlPort` (for a thruput-latency test) to tell the test the IP and ports it should connect to. The ExternalIPOrFQDN will be whatever is exposed to the world, and might be a LoadBalancer IP, a node IP, or something else, depending on how you exposed the service. The Test and Control ports need to be the same as those used on the test server pod (because the test tools were not designed to work in an environment with NAT).
Note that the tool will NOT expose the services for you, because there are too many different ways to expose services to the world. You will need to expose pods with the label `app: qperf` in the test namespace to the world for this test to work. An example of exposing these pods using NodePorts can be found in `external_service_example.yaml`. If you wanted to change that to use a LoadBalancer, simply change `type: NodePort` to `type: LoadBalancer`.
For `thruput-latency` tests, you will need to expose 2 ports from those pods: a TCP `TestPort` and a `ControlPort`. You must not remap the port numbers between the pod and the external service, but they do NOT need to be consecutive. For example, if you specify TestPort=32221, the pod will listen on port 32221, and whatever method you use to expose that service to the outside world must also use that port number.
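To make the required fields concrete, here is a hedged Go sketch mirroring the `PerfConfig` struct shown in the config diff further down (the field names come from that diff; the IP and the `ControlPort` value are placeholders you would replace with whatever you actually exposed, and in practice these values are supplied via `testconfig.yaml` rather than Go code):

```go
package main

import "fmt"

// PerfConfig mirrors the struct in this repo's config package (see the
// config file diff below); only the field names are taken from that diff.
type PerfConfig struct {
	Direct           bool
	Service          bool
	External         bool
	ControlPort      int
	TestPort         int
	ExternalIPOrFQDN string
}

func main() {
	// Hypothetical values for an external-only thruput-latency test.
	// The ports are not remapped by the tool, so 32221/32222 must be the
	// same numbers the qperf server pod listens on and the numbers you
	// exposed to the outside world (e.g. via NodePort or LoadBalancer).
	perf := PerfConfig{
		External:         true,
		ExternalIPOrFQDN: "203.0.113.10", // placeholder; LoadBalancer or node IP
		TestPort:         32221,
		ControlPort:      32222, // placeholder control port
	}
	fmt.Printf("external test target: %+v\n", perf)
}
```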
### Settings which can reconfigure your cluster

View File

@@ -198,7 +198,8 @@ func writeResultToFile(filename string, results []results.Result) (err error) {
log.Debug("entering writeResultToFile function")
file, err := os.Create(filename)
if err != nil {
return fmt.Errorf("failed to open output file: %s", filename)
log.WithError(err).Errorf("failed to open output file: %s", filename)
return err
}
defer func() {
closeErr := file.Close()
@@ -208,11 +209,13 @@ func writeResultToFile(filename string, results []results.Result) (err error) {
}()
output, err := json.MarshalIndent(results, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal results: %s", err)
log.WithError(err).Errorf("failed to marshal results: %s", err)
return err
}
_, err = file.Write(output)
if err != nil {
return fmt.Errorf("failed to write results to file: %s", err)
log.WithError(err).Errorf("failed to write results to file: %s", err)
return err
}
return nil
}

View File

@@ -1,7 +1,7 @@
all: image
REGISTRY?=quay.io
VERSION?=latest
VERSION?=v0.2.1
ORG?=tigeradev
NAME?=nginx

View File

@@ -1,7 +1,7 @@
all: image
REGISTRY?=quay.io
VERSION?=latest
VERSION?=v0.2.1
ORG?=tigeradev
NAME?=perf

View File

@@ -1,7 +1,7 @@
all: image
REGISTRY?=quay.io
VERSION?=latest
VERSION?=v0.2.1
ORG?=tigeradev
NAME?=ttfr

View File

@@ -43,17 +43,20 @@ func ConfigureCluster(ctx context.Context, cfg config.Config, clients config.Cli
log.Debug("entering configureCluster function")
err := updateEncap(ctx, cfg, clients, testConfig.Encap)
if err != nil {
return fmt.Errorf("failed to update encapsulation")
log.WithError(err).Error("failed to update encapsulation")
return err
}
if testConfig.Dataplane == config.DataPlaneBPF {
err = enableBPF(ctx, cfg, clients)
if err != nil {
return fmt.Errorf("failed to enable BPF")
log.WithError(err).Error("failed to enable BPF")
return err
}
} else if testConfig.Dataplane == config.DataPlaneIPTables {
err = enableIptables(ctx, clients)
if err != nil {
return fmt.Errorf("failed to enable iptables")
log.WithError(err).Error("failed to enable iptables")
return err
}
} else if testConfig.Dataplane == config.DataPlaneUnset {
log.Info("No dataplane specified, using whatever is already set")
@@ -65,7 +68,8 @@ func ConfigureCluster(ctx context.Context, cfg config.Config, clients config.Cli
if testConfig.DNSPerf.Mode != config.DNSPerfModeUnset {
err = patchFelixConfig(ctx, clients, testConfig)
if err != nil {
return fmt.Errorf("failed to patch felixconfig")
log.WithError(err).Error("failed to patch felixconfig")
return err
}
} else {
log.Warn("No DNSPerfMode specified, using whatever is already set")
@@ -74,7 +78,8 @@ func ConfigureCluster(ctx context.Context, cfg config.Config, clients config.Cli
if testConfig.CalicoNodeCPULimit != "" {
err = SetCalicoNodeCPULimit(ctx, clients, testConfig.CalicoNodeCPULimit)
if err != nil {
return fmt.Errorf("failed to set calico-node CPU limit")
log.WithError(err).Error("failed to set calico-node CPU limit")
return err
}
} else {
log.Warn("No CalicoNodeCPULimit specified, using whatever is already set")
@@ -88,7 +93,8 @@ func patchFelixConfig(ctx context.Context, clients config.Clients, testConfig co
felixconfig := &v3.FelixConfiguration{}
err := clients.CtrlClient.Get(ctx, ctrlclient.ObjectKey{Name: "default"}, felixconfig)
if err != nil {
return fmt.Errorf("failed to get felixconfig")
log.WithError(err).Error("failed to get felixconfig")
return err
}
log.Debug("felixconfig is", felixconfig)
dnsPolicyMode := testConfig.DNSPerf.Mode
@@ -192,7 +198,8 @@ func SetCalicoNodeCPULimit(ctx context.Context, clients config.Clients, limit st
err = waitForTigeraStatus(ctx, clients)
if err != nil {
return fmt.Errorf("error waiting for tigera status")
log.WithError(err).Error("error waiting for tigera status")
return err
}
return err
}
@@ -427,7 +434,8 @@ func SetupStandingConfig(ctx context.Context, clients config.Clients, testConfig
log.Info("Waiting for all pods to be running")
err = utils.WaitForDeployment(ctx, clients, deployment)
if err != nil {
return fmt.Errorf("error waiting for pods to deploy in standing-deployment")
log.WithError(err).Error("error waiting for pods to deploy in standing-deployment")
return err
}
// Deploy services
@@ -435,17 +443,20 @@ func SetupStandingConfig(ctx context.Context, clients config.Clients, testConfig
deployment = makeDeployment(namespace, "standing-svc", 10, false, webServerImage, []string{})
deployment, err = utils.GetOrCreateDeployment(ctx, clients, deployment)
if err != nil {
return fmt.Errorf("error creating deployment standing-svc")
log.WithError(err).Error("error creating deployment standing-svc")
return err
}
err = utils.ScaleDeployment(ctx, clients, deployment, 10) // When deployment exists but isn't scaled right, this might be needed.
if err != nil {
return fmt.Errorf("error scaling deployment standing-svc")
log.WithError(err).Error("error scaling deployment standing-svc")
return err
}
//wait for pods to deploy
log.Info("Waiting for all pods to be running")
err = utils.WaitForDeployment(ctx, clients, deployment)
if err != nil {
return fmt.Errorf("error waiting for pods to deploy in standing-svc")
log.WithError(err).Error("error waiting for pods to deploy in standing-svc")
return err
}
// Spin up a channel with multiple threads to create services, because a single thread is limited to 5 actions per second
const numThreads = 10

View File

@@ -44,7 +44,8 @@ func enableBPF(ctx context.Context, cfg config.Config, clients config.Clients) e
defer cancel()
err := clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "default"}, installation)
if err != nil {
return fmt.Errorf("failed to get installation")
log.WithError(err).Error("failed to get installation")
return err
}
if *installation.Spec.CalicoNetwork.LinuxDataplane == operatorv1.LinuxDataplaneBPF {
log.Info("BPF already enabled")
@@ -57,7 +58,8 @@ func enableBPF(ctx context.Context, cfg config.Config, clients config.Clients) e
kubesvc := &corev1.Endpoints{}
err := clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "kubernetes", Namespace: "default"}, kubesvc)
if err != nil {
return fmt.Errorf("failed to get kubernetes service endpoints")
log.WithError(err).Error("failed to get kubernetes service endpoints")
return err
}
log.Infof("first kubernetes service endpoint IP is %v", kubesvc.Subsets[0].Addresses[0].IP)
log.Infof("first kubernetes service endpoint port is %v", kubesvc.Subsets[0].Ports[0].Port)
@@ -71,7 +73,8 @@ func enableBPF(ctx context.Context, cfg config.Config, clients config.Clients) e
// if it doesn't exist already, create configMap with k8s endpoint data in it
err = createOrUpdateCM(childCtx, clients, host, port)
if err != nil {
return fmt.Errorf("failed to create or update configMap")
log.WithError(err).Error("failed to create or update configMap")
return err
}
// kubectl patch ds -n kube-system kube-proxy -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": "true"}}}}}'
@@ -80,13 +83,15 @@ func enableBPF(ctx context.Context, cfg config.Config, clients config.Clients) e
log.Debug("Getting kube-proxy ds")
err = clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Namespace: "kube-system", Name: "kube-proxy"}, proxyds)
if err != nil {
return fmt.Errorf("failed to get kube-proxy ds")
log.WithError(err).Error("failed to get kube-proxy ds")
return err
}
log.Debugf("patching with %v", string(patch[:]))
log.Info("enabling BPF dataplane")
err = clients.CtrlClient.Patch(childCtx, proxyds, ctrlclient.RawPatch(ctrlclient.Merge.Type(), patch))
if err != nil {
return fmt.Errorf("failed to patch kube-proxy ds")
log.WithError(err).Error("failed to patch kube-proxy ds")
return err
}
// kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'
@@ -96,16 +101,19 @@ func enableBPF(ctx context.Context, cfg config.Config, clients config.Clients) e
log.Debug("Getting installation")
err = clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "default"}, installation)
if err != nil {
return fmt.Errorf("failed to get installation")
log.WithError(err).Error("failed to get installation")
return err
}
log.Debugf("patching with %v", string(patch[:]))
err = clients.CtrlClient.Patch(childCtx, installation, ctrlclient.RawPatch(ctrlclient.Merge.Type(), patch))
if err != nil {
return fmt.Errorf("failed to patch installation")
log.WithError(err).Error("failed to patch installation")
return err
}
err = waitForTigeraStatus(ctx, clients)
if err != nil {
return fmt.Errorf("error waiting for tigera status")
log.WithError(err).Error("error waiting for tigera status")
return err
}
return nil
}
@@ -119,7 +127,8 @@ func enableIptables(ctx context.Context, clients config.Clients) error {
log.Debug("Getting installation")
err := clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "default"}, installation)
if err != nil {
return fmt.Errorf("failed to get installation")
log.WithError(err).Error("failed to get installation")
return err
}
if *installation.Spec.CalicoNetwork.LinuxDataplane == operatorv1.LinuxDataplaneIptables {
log.Info("IPtables already enabled")
@@ -133,13 +142,15 @@ func enableIptables(ctx context.Context, clients config.Clients) error {
log.Debug("Getting installation")
err = clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "default"}, installation)
if err != nil {
return fmt.Errorf("failed to get installation")
log.WithError(err).Error("failed to get installation")
return err
}
log.Debugf("patching with %v", string(patch[:]))
log.Info("enabling iptables dataplane")
err = clients.CtrlClient.Patch(childCtx, installation, ctrlclient.RawPatch(ctrlclient.Merge.Type(), patch))
if err != nil {
return fmt.Errorf("failed to patch installation")
log.WithError(err).Error("failed to patch installation")
return err
}
// kubectl patch ds -n kube-system kube-proxy --type merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": null}}}}}'
@@ -148,17 +159,20 @@ func enableIptables(ctx context.Context, clients config.Clients) error {
log.Debug("Getting kube-proxy ds")
err = clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Namespace: "kube-system", Name: "kube-proxy"}, proxyds)
if err != nil {
return fmt.Errorf("failed to get kube-proxy ds")
log.WithError(err).Error("failed to get kube-proxy ds")
return err
}
log.Debugf("patching with %v", string(patch[:]))
err = clients.CtrlClient.Patch(childCtx, proxyds, ctrlclient.RawPatch(ctrlclient.Merge.Type(), patch))
if err != nil {
return fmt.Errorf("failed to patch kube-proxy ds")
log.WithError(err).Error("failed to patch kube-proxy ds")
return err
}
err = waitForTigeraStatus(ctx, clients)
if err != nil {
return fmt.Errorf("error waiting for tigera status")
log.WithError(err).Error("error waiting for tigera status")
return err
}
return nil
}
@@ -208,11 +222,13 @@ func waitForTigeraStatus(ctx context.Context, clients config.Clients) error {
time.Sleep(10 * time.Second)
err := clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "apiserver"}, apiStatus)
if err != nil {
return fmt.Errorf("failed to get apiserver status")
log.WithError(err).Error("failed to get apiserver status")
return err
}
err = clients.CtrlClient.Get(childCtx, ctrlclient.ObjectKey{Name: "calico"}, calicoStatus)
if err != nil {
return fmt.Errorf("failed to get calico status")
log.WithError(err).Error("failed to get calico status")
return err
}
for _, apiCondition := range apiStatus.Status.Conditions {
log.Debugf("apiserver condition: %v", apiCondition)
@@ -243,21 +259,24 @@ func updateEncap(ctx context.Context, cfg config.Config, clients config.Clients,
patch = []byte(`{"spec":{"ipipMode":"Never","vxlanMode":"Never"}}`)
err = patchInstallation(ctx, clients, "None")
if err != nil {
return fmt.Errorf("failed to patch installation")
log.WithError(err).Error("failed to patch installation")
return err
}
} else if encap == config.EncapIPIP {
// kubectl patch ippool default-ipv4-ippool -p '{"spec": {"ipipMode": "Always"}, {vxlanMode: "Never"}}'
patch = []byte(`{"spec":{"ipipMode":"Always","vxlanMode":"Never"}}`)
err = patchInstallation(ctx, clients, "IPIP")
if err != nil {
return fmt.Errorf("failed to patch installation")
log.WithError(err).Error("failed to patch installation")
return err
}
} else if encap == config.EncapVXLAN {
// kubectl patch ippool default-ipv4-ippool -p '{"spec": {"ipipMode": "Never"}, {vxlanMode: "Always"}}'
patch = []byte(`{"spec":{"ipipMode":"Never","vxlanMode":"Always"}}`)
err = patchInstallation(ctx, clients, "VXLAN")
if err != nil {
return fmt.Errorf("failed to patch installation")
log.WithError(err).Error("failed to patch installation")
return err
}
} else if encap == config.EncapUnset {
log.Info("No encapsulation specified, using whatever is already set")
@@ -269,12 +288,14 @@ func updateEncap(ctx context.Context, cfg config.Config, clients config.Clients,
log.Debug("Calico version is less than v3.28.0, patching IPPool")
err = patchIPPool(ctx, clients, patch)
if err != nil {
return fmt.Errorf("failed to patch IPPool")
log.WithError(err).Error("failed to patch IPPool")
return err
}
}
err = waitForTigeraStatus(ctx, clients)
if err != nil {
return fmt.Errorf("error waiting for tigera status")
log.WithError(err).Error("error waiting for tigera status")
return err
}
return nil
}
@@ -293,7 +314,8 @@ func patchInstallation(ctx context.Context, clients config.Clients, encap string
installation := &operatorv1.Installation{}
err := clients.CtrlClient.Get(ctx, ctrlclient.ObjectKey{Name: "default"}, installation)
if err != nil {
return fmt.Errorf("failed to get installation")
log.WithError(err).Error("failed to get installation")
return err
}
log.Debug("installation is", installation)
installation.Spec.CalicoNetwork.IPPools[0].Encapsulation = v1encap

View File

@@ -57,9 +57,9 @@ type Config struct {
ProxyAddress string `envconfig:"HTTP_PROXY" default:""`
TestConfigFile string `envconfig:"TESTCONFIGFILE" required:"true"`
LogLevel string `envconfig:"LOG_LEVEL" default:"info"`
WebServerImage string `envconfig:"WEBSERVER_IMAGE" default:"quay.io/tigeradev/tiger-bench-nginx:latest"`
PerfImage string `envconfig:"PERF_IMAGE" default:"quay.io/tigeradev/tiger-bench-perf:latest"`
TTFRImage string `envconfig:"TTFR_IMAGE" default:"quay.io/tigeradev/ttfr:latest"`
WebServerImage string `envconfig:"WEBSERVER_IMAGE" default:"quay.io/tigeradev/tiger-bench-nginx:v0.2.1"`
PerfImage string `envconfig:"PERF_IMAGE" default:"quay.io/tigeradev/tiger-bench-perf:v0.2.1"`
TTFRImage string `envconfig:"TTFR_IMAGE" default:"quay.io/tigeradev/ttfr:v0.2.1"`
TestConfigs testConfigs
}
@@ -135,11 +135,11 @@ type TestConfig struct {
// PerfConfig details which tests to run in thruput-latency and iperf tests.
type PerfConfig struct {
Direct bool // Whether to do a direct pod-pod test
Service bool // Whether to do a pod-service-pod test
External bool // Whether to test from this container to the external IP for an external-service-pod test
ControlPort int // The port to use for the control connection in tests. Used by qperf tests.
TestPort int // The port to use for the test connection in tests. Used by qperf and iperf tests
ExternalIPOrFQDN string // The external IP or DNS name to connect to for an external-service-pod test
}
@@ -243,7 +243,7 @@ func defaultAndValidate(cfg *Config) error {
}
if tcfg.TestKind == "thruput-latency" || tcfg.TestKind == "iperf" {
if tcfg.Perf == nil {
tcfg.Perf = &PerfConfig{true, true, false, 32000, 0, ""} // Default so that old configs don't break
continue
}
if tcfg.Perf.External {

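The `envconfig:"..."`, `default:"..."` and `required:"true"` struct tags above match the conventions of the kelseyhightower/envconfig library; assuming that is the loader in use (an assumption, not confirmed by this diff), a minimal sketch of how the new v0.2.1 image defaults are picked up and overridden from the environment, trimmed to just the image fields:

```go
package main

import (
	"fmt"
	"log"

	"github.com/kelseyhightower/envconfig"
)

// Images is a trimmed-down, hypothetical copy of the image fields from the
// Config struct in the diff above, so the default/override behaviour can be
// shown in isolation.
type Images struct {
	WebServerImage string `envconfig:"WEBSERVER_IMAGE" default:"quay.io/tigeradev/tiger-bench-nginx:v0.2.1"`
	PerfImage      string `envconfig:"PERF_IMAGE" default:"quay.io/tigeradev/tiger-bench-perf:v0.2.1"`
	TTFRImage      string `envconfig:"TTFR_IMAGE" default:"quay.io/tigeradev/ttfr:v0.2.1"`
}

func main() {
	var imgs Images
	// Values come from the environment; unset variables fall back to the
	// v0.2.1 defaults baked into the struct tags.
	if err := envconfig.Process("", &imgs); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", imgs)
}
```

Those are the same environment variables the `docker run` commands earlier in this compare pass with `-e`, which is how the README examples pin the image tags per run.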
View File

@@ -269,7 +269,8 @@ func DeployIperfPods(ctx context.Context, clients config.Clients, namespace stri
nodelist := &corev1.NodeList{}
err := clients.CtrlClient.List(ctx, nodelist)
if err != nil {
return fmt.Errorf("failed to list nodes: %w", err)
log.WithError(err).Error("failed to list nodes")
return err
}
for _, node := range nodelist.Items {
if node.Labels["tigera.io/test-nodepool"] == "default-pool" {

View File

@@ -315,7 +315,8 @@ func DeployQperfPods(ctx context.Context, clients config.Clients, namespace stri
nodelist := &corev1.NodeList{}
err := clients.CtrlClient.List(ctx, nodelist)
if err != nil {
return fmt.Errorf("failed to list nodes: %w", err)
log.WithError(err).Error("failed to list nodes")
return err
}
for _, node := range nodelist.Items {
if node.Labels["tigera.io/test-nodepool"] == "default-pool" {

run.sh (8 lines changed)
View File

@@ -1,6 +1,6 @@
#!/bin/bash
set -ex
docker build -t quay.io/tigeradev/tiger-bench:latest .
docker build -t quay.io/tigeradev/tiger-bench:v0.2.1 .
docker run --rm --net=host \
-v "${PWD}":/results \
-v ${KUBECONFIG}:/kubeconfig \
@@ -10,6 +10,6 @@ docker run --rm --net=host \
-e AWS_ACCESS_KEY_ID \
-e AWS_SESSION_TOKEN \
-e LOG_LEVEL=INFO \
-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:main" \
-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:main" \
quay.io/tigeradev/tiger-bench:latest
-e WEBSERVER_IMAGE="quay.io/tigeradev/tiger-bench-nginx:v0.2.1" \
-e PERF_IMAGE="quay.io/tigeradev/tiger-bench-perf:v0.2.1" \
quay.io/tigeradev/tiger-bench:v0.2.1