add -b branch and fix codeblocks (#1339)

RichieEscarez 2019-05-16 22:52:15 -07:00 committed by Knative Prow Robot
parent 8586421657
commit f1174b1b51
2 changed files with 145 additions and 145 deletions


@@ -11,7 +11,7 @@ ContainerSource will start a container image which will generate events under ce
Knative [event-sources](https://github.com/knative/eventing-sources) includes a sample heartbeats event source. You can clone the source code with
```
git clone -b "release-0.6" https://github.com/knative/eventing-sources.git
```
Then build a heartbeats image and publish it to your image repo with
```
@@ -63,7 +63,7 @@ spec:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
name: event-display
args:
- --period=1
env:
- name: POD_NAME
@@ -130,4 +130,4 @@ The container image can be developed with any language, build and publish with a
### Create the ContainerSource using this container image
When the container image is ready, a YAML file will be used to create a concrete ContainerSource. Use [heartbeats-source.yaml](./heartbeats-source.yaml) as a sample for reference. You can get more details about ContainerSource specification [here](https://github.com/knative/docs/tree/master/docs/eventing#containersource).
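As a rough orientation, a minimal ContainerSource manifest along the lines of the referenced sample might look like the sketch below. The `apiVersion` matches the 0.6-era eventing-sources API, the image path is a placeholder for your own repo, and the `event-display` sink is assumed from the surrounding sample:

```yaml
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: ContainerSource
metadata:
  name: heartbeats-source
spec:
  # Replace with the heartbeats image you built and published above.
  image: <your-image-repo>/heartbeats:latest
  args:
    - --period=1
  sink:
    apiVersion: serving.knative.dev/v1alpha1
    kind: Service
    name: event-display
```

Apply it with `kubectl apply --filename heartbeats-source.yaml` once the image reference points at your published image.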


@@ -9,28 +9,28 @@ A demonstration of the autoscaling capabilities of a Knative Serving Revision.
1. The `hey` load generator installed (`go get -u github.com/rakyll/hey`).
1. Clone this repository, and move into the sample directory:
   ```shell
   git clone -b "release-0.6" https://github.com/knative/docs knative-docs
   cd knative-docs
   ```
## Deploy the Service
1. Deploy the [sample](./service.yaml) Knative Service:
   ```
   kubectl apply --filename docs/serving/samples/autoscale-go/service.yaml
   ```
1. Obtain both the hostname and IP address of the `istio-ingressgateway` service in the `istio-system` namespace,
   and then `export` them into the `IP_ADDRESS` environment variable.

   Note that each platform where you run your Kubernetes cluster is configured differently. Details about the
   various ways that you can obtain your ingress hostname and IP address are available in the Istio documentation
   under the [Control Ingress Traffic](https://istio.io/docs/tasks/traffic-management/ingress/) topic.
**Examples**:
* For GKE, you run the following commands:
```shell
@@ -46,95 +46,95 @@ A demonstration of the autoscaling capabilities of a Knative Serving Revision.
export IP_ADDRESS=`kubectl get svc $INGRESSGATEWAY --namespace istio-system --output jsonpath="{.status.loadBalancer.ingress[*].ip}"`
```
* For Minikube, you run the following command:
```shell
export IP_ADDRESS=$(minikube ip)
```
* For Docker Desktop, you run the following command:
```shell
# The value can be 127.0.0.1 as well
export IP_ADDRESS=localhost
```
## Load the Service
1. Make a request to the autoscale app to see it consume some resources.
   ```shell
   curl --header "Host: autoscale-go.default.example.com" "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"
   ```

   ```
   Allocated 5 Mb of memory.
   The largest prime less than 10000 is 9973.
   Slept for 100.13 milliseconds.
   ```
1. Send 30 seconds of traffic maintaining 50 in-flight requests.
   ```shell
   hey -z 30s -c 50 \
     -host "autoscale-go.default.example.com" \
     "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5" \
     && kubectl get pods
   ```

   ```shell
   Summary:
     Total:        30.3379 secs
     Slowest:      0.7433 secs
     Fastest:      0.1672 secs
     Average:      0.2778 secs
     Requests/sec: 178.7861

     Total data:   542038 bytes
     Size/request: 99 bytes

   Response time histogram:
     0.167 [1]    |
     0.225 [1462] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
     0.282 [1303] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■
     0.340 [1894] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
     0.398 [471]  |■■■■■■■■■■
     0.455 [159]  |■■■
     0.513 [68]   |■
     0.570 [18]   |
     0.628 [14]   |
     0.686 [21]   |
     0.743 [13]   |

   Latency distribution:
     10% in 0.1805 secs
     25% in 0.2197 secs
     50% in 0.2801 secs
     75% in 0.3129 secs
     90% in 0.3596 secs
     95% in 0.4020 secs
     99% in 0.5457 secs

   Details (average, fastest, slowest):
     DNS+dialup:  0.0007 secs, 0.1672 secs, 0.7433 secs
     DNS-lookup:  0.0000 secs, 0.0000 secs, 0.0000 secs
     req write:   0.0001 secs, 0.0000 secs, 0.0045 secs
     resp wait:   0.2766 secs, 0.1669 secs, 0.6633 secs
     resp read:   0.0002 secs, 0.0000 secs, 0.0065 secs

   Status code distribution:
     [200] 5424 responses
   ```

   ```shell
   NAME                                             READY   STATUS    RESTARTS   AGE
   autoscale-go-00001-deployment-78cdc67bf4-2w4sk   3/3     Running   0          26s
   autoscale-go-00001-deployment-78cdc67bf4-dd2zb   3/3     Running   0          24s
   autoscale-go-00001-deployment-78cdc67bf4-pg55p   3/3     Running   0          18s
   autoscale-go-00001-deployment-78cdc67bf4-q8bf9   3/3     Running   0          1m
   autoscale-go-00001-deployment-78cdc67bf4-thjbq   3/3     Running   0          26s
   ```
## Analysis
@@ -182,56 +182,56 @@ autoscaler classes built into Knative:
2. `hpa.autoscaling.knative.dev` which delegates to the Kubernetes HPA which
autoscales on CPU usage.
Example of a Service scaled on CPU:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: autoscale-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Standard Kubernetes CPU-based autoscaling.
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: cpu
    spec:
      containers:
        - image: gcr.io/knative-samples/autoscale-go:0.1
```
Additionally the autoscaler targets and scaling bounds can be specified in
annotations. Example of a Service with custom targets and scale bounds:

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: autoscale-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Knative concurrency-based autoscaling (default).
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/metric: concurrency
        # Target 10 requests in-flight per pod.
        autoscaling.knative.dev/target: "10"
        # Disable scale to zero with a minScale of 1.
        autoscaling.knative.dev/minScale: "1"
        # Limit scaling to 100 pods.
        autoscaling.knative.dev/maxScale: "100"
    spec:
      containers:
        - image: gcr.io/knative-samples/autoscale-go:0.1
```
Note: for an `hpa.autoscaling.knative.dev` class service, the
`autoscaling.knative.dev/target` specifies the CPU percentage target (default
`"80"`).
#### Demo
@@ -254,45 +254,45 @@ kubectl port-forward --namespace knative-monitoring $(kubectl get pods --namespa
1. Send 60 seconds of traffic maintaining 100 concurrent requests.
   ```shell
   hey -z 60s -c 100 \
     -host "autoscale-go.default.example.com" \
     "http://${IP_ADDRESS?}?sleep=100&prime=10000&bloat=5"
   ```
1. Send 60 seconds of traffic maintaining 100 qps with short requests (10 ms).
   ```shell
   hey -z 60s -q 100 \
     -host "autoscale-go.default.example.com" \
     "http://${IP_ADDRESS?}?sleep=10"
   ```
1. Send 60 seconds of traffic maintaining 100 qps with long requests (1 sec).
   ```shell
   hey -z 60s -q 100 \
     -host "autoscale-go.default.example.com" \
     "http://${IP_ADDRESS?}?sleep=1000"
   ```
1. Send 60 seconds of traffic with heavy CPU usage (~1 cpu/sec/request, total
100 cpus).
   ```shell
   hey -z 60s -q 100 \
     -host "autoscale-go.default.example.com" \
     "http://${IP_ADDRESS?}?prime=40000000"
   ```
1. Send 60 seconds of traffic with heavy memory usage (1 gb/request, total 5
gb).
   ```shell
   hey -z 60s -c 5 \
     -host "autoscale-go.default.example.com" \
     "http://${IP_ADDRESS?}?bloat=1000"
   ```
## Cleanup