* Viewer controller is now namespaced, so a ClusterRole is no longer needed
* The default namespaced install (kubeflow namespace) can also use a Role instead of a ClusterRole
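The namespaced permissions could look roughly like the following Role (a hedged sketch: the role name, API group, and verbs are illustrative, not the exact manifest from this change):

```yaml
# Illustrative Role granting the viewer controller access to Viewer
# objects within a single namespace (names and verbs are assumptions).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ml-pipeline-viewer-controller-role
  namespace: kubeflow
rules:
- apiGroups: ["kubeflow.org"]
  resources: ["viewers"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Because a Role is namespace-scoped, the controller can only touch Viewer objects in its own namespace, which is the point of this change.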
* Run the Viewer CRD controller under a namespace
* Change the Dockerfile and add a manifest deployment YAML to support the new namespace flag
* Change the Dockerfile to support the new namespace flag for the viewer CRD controller
* Modify kustomization.yaml and namespaced-install.yaml
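As a sketch, the new flag might be wired into the deployment patch like this (the flag spelling, deployment name, and container name are assumptions, not the exact manifest):

```yaml
# Illustrative kustomize patch passing the controller the namespace it
# should watch (names and flag spelling are assumptions).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-pipeline-viewer-crd
spec:
  template:
    spec:
      containers:
      - name: ml-pipeline-viewer-crd
        args:
        - -namespace=kubeflow
```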
* Change file name from ml-pipeline-viewer-crd-deployment to ml-pipeline-viewer-crd-deployment-patch
* Fix typo
* Remove some duplicate configs in namespaced-install
* Run `go vet` as part of Travis CI.
Also fix existing issues found by Go vet.
* Explicitly check for shadowing
* Fix shadowing problems throughout codebase
* Actually run all checks including shadow
* Re-apply an earlier change.
That change was submitted in parallel with the PR that moved everything
to Bazel and so wasn't included.
Along the way, fix conflicting imports of the controller-runtime library. It can be imported either through github.com/kubernetes-sigs or sigs.k8s.io/controller-runtime, but using both paths at once was causing conflicts at build time.
* Add initial CRD types for Viewer resource, and generate corresponding
code.
* Use controller-runtime to scaffold out a controller main
* Start adding a deployment
* Clean up and separate reconciler logic into its own package for future testing.
* Clean up code and add comments
* Run dep ensure
* Update auto-generate script. Only deepcopy funcs are needed for the viewer CRD types
* Clean up previously generated but unused viewer client code
* [WIP] Adding tests
* More tests
* Completed unit tests for reconciler with logic for max viewers
* Add CRD definition, sample instance and update README.
* Fix merge conflict
* Fix README typo for kube and add direct port-forwarding instructions.
* Add tests for when persistent volume is used with Tensorboard viewer.
Also add a sample YAML to show how to mount and use a GCE persistent
disk in the viewer CRD.
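A Viewer instance mounting a GCE persistent disk might look roughly like this (a hedged sketch: the apiVersion, spec field names, and disk name are assumptions, not the sample YAML from this change):

```yaml
# Illustrative Viewer mounting a GCE persistent disk for Tensorboard
# logs (field names and values are assumptions).
apiVersion: kubeflow.org/v1beta1
kind: Viewer
metadata:
  name: viewer-example
  namespace: kubeflow
spec:
  type: tensorboard
  tensorboardSpec:
    logDir: /mnt/logs
  podTemplateSpec:
    spec:
      containers:
      - name: viewer
        volumeMounts:
        - name: log-volume
          mountPath: /mnt/logs
      volumes:
      - name: log-volume
        gcePersistentDisk:
          pdName: my-tensorboard-logs
          fsType: ext4
```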
* Remove vendor directory
* Use Bazel to build the entire backend.
This also uses Bazel to generate code from the API definition in the
proto files.
The Makefile is replaced with a script that uses Bazel to first generate
the code and then copy it back into the source tree.
Most of the BUILD files were generated automatically using Gazelle.
* Fix indentation in generate_api.sh
* Clean up WORKSPACE
* Add README for building/testing backend.
Also fix the missing licenses in the generated proto files.
* Add license to files under go_http_client
The code generator should not be run from HEAD, as it will generate code
that diverges from the pinned version of client-go, and also any
previously generated CRD controller clients.
This change pins both the code generator and client-go to the same specified
Kubernetes release, and ensures the update-codegen.sh script uses the
code-generator from the vendor directory rather than HEAD. This
keeps the build reproducible.
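With dep, the pinning described above might look like this in Gopkg.toml (the version tags are illustrative; the actual constraints may differ):

```toml
# Illustrative Gopkg.toml constraints pinning client-go and
# code-generator to the same Kubernetes release (versions are
# assumptions, not the real pinned release).
[[constraint]]
  name = "k8s.io/client-go"
  version = "kubernetes-1.11.2"

[[constraint]]
  name = "k8s.io/code-generator"
  version = "kubernetes-1.11.2"
```

Keeping both libraries on matching release tags is what prevents the generated clients from diverging from the client-go APIs they call.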