Update the doc on how to test for flakiness to actually work and to use kubectl.

Alex Robinson 2015-01-09 22:38:14 +00:00
parent d45be03704
commit 3c3d2468b9
1 changed file with 6 additions and 4 deletions


@@ -12,6 +12,8 @@ There is a testing image ```brendanburns/flake``` up on the docker hub. We will
 Create a replication controller with the following config:
 ```yaml
 id: flakeController
+kind: ReplicationController
+apiVersion: v1beta1
 desiredState:
   replicas: 24
   replicaSelector:
@@ -37,14 +39,14 @@ labels:
   name: flake
 ```
-```./cluster/kubecfg.sh -c controller.yaml create replicaControllers```
+```./cluster/kubectl.sh create -f controller.yaml```
 This will spin up 100 instances of the test. They will run to completion, then exit, the kubelet will restart them, eventually you will have sufficient
-runs for your purposes, and you can stop the replication controller:
+runs for your purposes, and you can stop the replication controller by setting the ```replicas``` field to 0 and then running:
 ```sh
-./cluster/kubecfg.sh stop flakeController
-./cluster/kubecfg.sh rm flakeController
+./cluster/kubectl.sh update -f controller.yaml
+./cluster/kubectl.sh delete -f controller.yaml
 ```
 Now examine the machines with ```docker ps -a``` and look for tasks that exited with non-zero exit codes (ignore those that exited -1, since that's what happens when you stop the replica controller)
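
For reference, a minimal sketch of the stop step the updated doc describes, i.e. zeroing out ```replicas``` in ```controller.yaml``` before the ```update```/```delete``` commands above (everything else in the file stays as shown in the diff):
```yaml
id: flakeController
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 0   # was 24; zero replicas stops the running pods
  # remainder of the spec (replicaSelector, podTemplate, labels) unchanged
```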
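
And one possible way to do the final scan for non-zero exit codes, assuming a Docker version whose ```ps``` supports the ```--filter``` flag (the exact output columns may differ):
```sh
# Show only exited containers, then drop the clean exits; what remains are the
# failures. Lines showing "Exited (-1)" are containers killed when the
# replication controller was stopped and can be ignored.
docker ps -a --filter 'status=exited' | grep -v 'Exited (0)'
```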