Eventually we want our framework to work nicely with just `go test`. To
get there we need to
- inject KUBE_ASSETS_DIR
- make the framework work when run multiple times in parallel (port
collisions, exposing the bound ports to the subject under test, ...)
We decided to make sure our tests run in sequence (and not in parallel
with anything else using etcd, for that matter) by making this explicit
in `pre-commit.sh` - for now.
As soon as we are there, we can roll back the change to `pre-commit.sh`
and have the test framework be tested the same way as everything else.
[#153248975]
While doing that we found that we needed to refactor the fakes to handle
command line arguments which are not known up front; we do this by
matching them against regular expressions.
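A sketch of what that matching could look like; the package and helper
name here are hypothetical, not the framework's actual fake
implementation:
```
// Hypothetical helper: verify a command line whose exact values (e.g. a
// randomly chosen data directory or port) are only known as patterns.
package fakes

import (
	"fmt"
	"regexp"
)

func MatchArgs(actual []string, patterns []string) error {
	if len(actual) != len(patterns) {
		return fmt.Errorf("expected %d arguments, got %d", len(patterns), len(actual))
	}
	for i, pattern := range patterns {
		ok, err := regexp.MatchString(pattern, actual[i])
		if err != nil {
			return err
		}
		if !ok {
			return fmt.Errorf("argument %d (%q) does not match %q", i, actual[i], pattern)
		}
	}
	return nil
}
```
A fake etcd invocation could then be checked with patterns like
`--data-dir=/tmp/.+` or `--listen-client-urls=http://127\.0\.0\.1:\d+`.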
We use the standard Go client for Kubernetes, `client-go`. To vendor it
and all its dependencies we use
```
dep ensure -add k8s.io/client-go@5.0.0
```
We create a new cobra command with
```
cobra add listPods
```
Note: The new command in cmd/listPods.go uses [the "magic" function
init()](https://golang.org/ref/spec#Package_initialization) to register
itself.
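The generated file looks roughly like this (trimmed; depending on the
cobra version the root command variable may be named `RootCmd` or
`rootCmd`):
```
package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
)

// listPodsCmd represents the listPods command.
var listPodsCmd = &cobra.Command{
	Use:   "listPods",
	Short: "A brief description of your command",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println("listPods called")
	},
}

// init() runs during package initialization and registers the new
// subcommand with the root command - no explicit wiring needed elsewhere.
func init() {
	RootCmd.AddCommand(listPodsCmd)
}
```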
- Start() should only return when the process is actually up and
listening
- It may take some time to tear down a process, so we increased the
timeout for Stop() (to a generous, admittedly arbitrary value)
- We make sure Std{Out,Err} are properly initialized; we should not rely
on Ginkgo/Gomega to do that for us
- Store stdout and stderr in private buffers
- Configure the etcdURL on construction instead of at start time
- Handle the creation of the temporary directory (for the data
directory) internally (see the sketch after this list)
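Taken together, the etcd fixture now looks roughly like the following
sketch; the exact field and method names in the framework may differ:
```
package test

import (
	"bytes"
	"os/exec"
)

// Etcd sketches the shape of the fixture after these changes; the names
// here are illustrative.
type Etcd struct {
	Path    string
	EtcdURL string // configured on construction, not at Start() time

	dataDir string        // temporary data directory, managed internally
	stdOut  *bytes.Buffer // private buffers, not borrowed from Ginkgo/Gomega
	stdErr  *bytes.Buffer
	command *exec.Cmd
}

func NewEtcd(pathToEtcd, etcdURL string) *Etcd {
	return &Etcd{
		Path:    pathToEtcd,
		EtcdURL: etcdURL,
		stdOut:  &bytes.Buffer{},
		stdErr:  &bytes.Buffer{},
	}
}

// Start creates the data directory, launches etcd and only returns once
// the process is actually up and listening on EtcdURL.
func (e *Etcd) Start() error { /* elided */ return nil }

// Stop tears the process down (with a generous timeout) and removes the
// temporary data directory again.
func (e *Etcd) Stop() error { /* elided */ return nil }
```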
Test the lifecycle of our fixtures with the real binaries: check that we
can start the fixtures, that the processes actually listen on their
ports, and that we can tear down all the parts successfully again.
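Such a lifecycle test could look like this sketch; the binary locations,
the framework's import path and the apiserver port are assumptions:
```
package integration_test

import (
	"net"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"

	// The import path of the test framework is a placeholder.
	"github.com/ourorg/democli/test"
)

var _ = Describe("fixtures lifecycle", func() {
	It("starts etcd and the apiserver, then tears them down again", func() {
		// Binary names under ./assets/bin are assumptions for illustration.
		f := test.NewFixtures("./assets/bin/etcd", "./assets/bin/kube-apiserver")

		Expect(f.Start()).To(Succeed())

		// While the fixtures are up, the apiserver should be listening on
		// its (default, insecure) port - 8080 is an assumption here.
		conn, err := net.Dial("tcp", "127.0.0.1:8080")
		Expect(err).NotTo(HaveOccurred())
		conn.Close()

		Expect(f.Stop()).To(Succeed())
	})
})
```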
We're not exercising the test framework yet, but it's in place.
Our democli expects its test assets to be in `./assets/bin`. We have a
script `./scripts/download-binaries.sh` which will populate that directory
from a Google storage bucket.
Once those assets are in place, you can run tests with
`./scripts/run-tests.sh`.
Create a new set of test fixtures by doing:
```
f := test.NewFixtures("/path/to/etcd", "/path/to/apiserver")
```
Before running your integration tests, start all your fixtures:
```
err := f.Start()
Expect(err).NotTo(HaveOccurred())
```
Now that you have started your etcd and apiserver, you'll find the
apiserver listening locally on the default port. When you're done with
your testing, stop and clean up:
```
err := f.Stop()
Expect(err).NotTo(HaveOccurred())
```
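In a Ginkgo suite these start/stop calls typically go into
`BeforeSuite`/`AfterSuite` so the fixtures are shared across all specs.
A sketch, assuming the usual ginkgo/gomega dot-imports and that the
constructor returns a `*test.Fixtures` (the type name is an assumption):
```
// fixtures is shared by all specs in the suite; the type name Fixtures
// is an assumption based on test.NewFixtures.
var fixtures *test.Fixtures

var _ = BeforeSuite(func() {
	fixtures = test.NewFixtures("/path/to/etcd", "/path/to/apiserver")
	Expect(fixtures.Start()).To(Succeed())
})

var _ = AfterSuite(func() {
	Expect(fixtures.Stop()).To(Succeed())
})
```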
To start an apiserver:
```
apiServer := APIServer{Path: "/path/to/my/apiserver/binary"}
session, err := apiServer.Start("tcp://wherever.is.my.etcd:port")
Expect(err).NotTo(HaveOccurred())
```
When you're done testing against that apiserver:
```
session.Terminate().Wait()
```
...or if you prefer:
```
gexec.Terminate()
```
...which will terminate not only this apiserver, but also all other
command sessions you started in this test.
We use [ginkgo](http://onsi.github.io/ginkgo/) and
[gomega](http://onsi.github.io/gomega/) for testing. We generate some
boilerplate with:
```
mkdir integration
cd integration
ginkgo bootstrap
ginkgo generate integration
```
We use
[gexec](http://onsi.github.io/gomega/#gexec-testing-external-processes)
to compile and run the CLI under test, and to inspect its output.
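Compiling and exercising the CLI with gexec looks roughly like this; the
import path of the CLI under test is a placeholder and the `--help`
assertion is just an example:
```
package integration_test

import (
	"os/exec"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
	"github.com/onsi/gomega/gexec"
)

var _ = Describe("democli", func() {
	It("prints usage information", func() {
		// gexec.Build compiles the package and returns the path to the
		// binary; the import path here is a placeholder.
		pathToCLI, err := gexec.Build("github.com/ourorg/democli")
		Expect(err).NotTo(HaveOccurred())
		defer gexec.CleanupBuildArtifacts()

		session, err := gexec.Start(exec.Command(pathToCLI, "--help"), GinkgoWriter, GinkgoWriter)
		Expect(err).NotTo(HaveOccurred())

		// Wait for the process to exit and inspect what it wrote to stdout.
		Eventually(session).Should(gexec.Exit(0))
		Expect(string(session.Out.Contents())).To(ContainSubstring("Usage"))
	})
})
```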
We use `dep ensure` to ensure that all our dependencies are properly
vendored. From now on, this will be our workflow with every commit.