Change to add a base framework for cleaning up resources in a GCP project.
The resources to clean up are specified declaratively in a YAML file.
Per current requirements, this change only adds cleanup of GKE clusters.
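A declarative spec for such a framework might look like the sketch below. This is illustrative only: the field names (`resources`, `type`, `name-regex`, `max-age-hours`) are assumptions, not the actual schema from this change.

```yaml
# Hypothetical cleanup spec; all field names are illustrative.
resources:
  - type: gke-cluster          # only GKE clusters are supported for now
    zones: [us-central1-a]
    name-regex: "test-cluster-.*"
    max-age-hours: 24          # delete matching clusters older than this
```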
* Backend - Marking auto-added artifacts as optional
* Updated Argo version in the WORKSPACE file
* Updated WORKSPACE using gazelle
* Added the package that gazelle has missed
* Fixed syntax error
* Updated Argo package to v2.3.0-rc3
* Reworded the comment
* WIP: ML Metadata in KFP
* Move metadata tracking to its own package.
* Clean up
* Address review comments, update travis.yml
* Add dependencies for building in Dockerfile
* Log errors but continue updating the run when metadata storing fails.
* Update workspace to get latest ml-metadata version.
* Update errors
If sort-by or filtering criteria are specified in conjunction with a next-page
token, ensure they match the criteria embedded in the token; otherwise return
an error.
Also, change the errors to InvalidInputErrors instead of plain Go string
errors, for consistency with the rest of the apiserver.
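The cross-check described above can be sketched as follows. The `token` struct, field names, and `validateListOptions` helper are hypothetical stand-ins for the apiserver's actual types, and `errInvalidInput` stands in for its InvalidInputError constructor.

```go
package main

import (
	"fmt"
	"reflect"
)

// token captures the listing options serialized into a next-page token.
// Field names are illustrative, not the real apiserver types.
type token struct {
	SortBy string
	Filter string
}

// errInvalidInput stands in for the apiserver's InvalidInputError.
func errInvalidInput(format string, a ...interface{}) error {
	return fmt.Errorf("InvalidInputError: "+format, a...)
}

// validateListOptions rejects requests where sort-by or filter criteria
// supplied alongside a next-page token disagree with the criteria
// embedded in that token.
func validateListOptions(t *token, sortBy, filter string) error {
	if t == nil {
		return nil // first page: nothing to cross-check
	}
	want := token{SortBy: sortBy, Filter: filter}
	if !reflect.DeepEqual(*t, want) {
		return errInvalidInput("criteria %+v do not match page token %+v", want, *t)
	}
	return nil
}

func main() {
	t := &token{SortBy: "name asc", Filter: `name contains "foo"`}
	fmt.Println(validateListOptions(t, "name asc", `name contains "foo"`) == nil)
	fmt.Println(validateListOptions(t, "created_at desc", "") != nil)
}
```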
* Add IS_SUBSTRING operator for use in API resource filtering.
This allows substring matches on fields such as names and labels.
Also bump the version of Masterminds/squirrel so we get the new 'like'
operator for use when building the SQL query.
Additionally, I had to fix the generate_api.sh script, which had a bug (it
previously modified the wrong file's permissions), and add a dummy service to
generate Swagger definitions for the Filter itself (this was a hack in the
previous Makefile that was lost in the move to Bazel).
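Translating IS_SUBSTRING into SQL amounts to a LIKE predicate with the value wrapped in `%` wildcards, which is what squirrel's Like expression emits. The stdlib-only sketch below shows the shape of that translation; `likeClause` and `escapeLike` are helpers invented for illustration (squirrel itself does not escape wildcard characters in the argument for you).

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike escapes SQL LIKE wildcards so user input matches literally.
// This helper is illustrative; it is not part of squirrel.
func escapeLike(s string) string {
	r := strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`)
	return r.Replace(s)
}

// likeClause builds the predicate an IS_SUBSTRING filter would compile to.
func likeClause(column, substring string) (sql string, arg string) {
	return fmt.Sprintf("%s LIKE ?", column), "%" + escapeLike(substring) + "%"
}

func main() {
	sql, arg := likeClause("name", "train")
	fmt.Println(sql, arg)
}
```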
* Add comments for DummyFilterService
* Add more comments
* change errors returned
* fix import
change.
That change was submitted in parallel with the PR that moved everything
to Bazel and so wasn't included.
Along the way, fix conflicting imports of the controller-runtime library. It
can be imported either through github.com/kubernetes-sigs or
sigs.k8s.io/controller-runtime, but we shouldn't use both paths, which was
causing conflicts at build time.
* Add initial CRD types for Viewer resource, and generate corresponding
code.
* Use controller-runtime to scaffold out a controller main
* Start adding a deployment
* Clean up and separate reconciler logic into its own package for future testing.
* Clean up with comments
* Run dep ensure
* Update auto-generate script. Only need deepcopy funcs for viewer crd types
* Cleanup previously generated but unused viewer client code
* [WIP] Adding tests
* More tests
* Completed unit tests for reconciler with logic for max viewers
* Add CRD definition, sample instance and update README.
* Fix merge conflict
* Fix readme typo for kube and add direct port-forwarding instructions.
* Add tests for when persistent volume is used with Tensorboard viewer.
Also add a sample YAML to show how to mount and use a GCE persistent
disk in the viewer CRD.
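A sample instance of that kind might look like the sketch below. The group/version and field names (`tensorboardSpec`, `podTemplateSpec`, etc.) are assumptions about the Viewer CRD schema described above, not a verbatim copy of the sample YAML from this change.

```yaml
# Illustrative Viewer instance; exact schema may differ.
apiVersion: kubeflow.org/v1beta1
kind: Viewer
metadata:
  name: viewer-example
spec:
  type: tensorboard
  tensorboardSpec:
    logDir: /mnt/logs
  podTemplateSpec:
    spec:
      containers:
        - name: viewer
          volumeMounts:
            - name: logs
              mountPath: /mnt/logs
      volumes:
        - name: logs
          gcePersistentDisk:       # mount an existing GCE persistent disk
            pdName: my-gce-pd
            fsType: ext4
```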
* Remove vendor directory
* Use Bazel to build the entire backend.
This also uses Bazel to generate code from the API definitions in the proto
files.
The Makefile is replaced with a script that uses Bazel to first generate the
code and then copy it back into the source tree.
Most of the BUILD files were generated automatically using Gazelle.
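The generate-then-copy flow could look roughly like this. The target labels and output paths below are placeholders, not the actual contents of generate_api.sh.

```shell
#!/bin/bash
# Sketch of the generate-and-copy flow; labels/paths are illustrative.
set -ex

# Build the generated Go sources from the proto definitions.
bazel build //backend/api:go_client

# Copy generated files out of bazel-bin back into the source tree so
# they can be checked in alongside the handwritten code.
cp -f bazel-bin/backend/api/*.pb.go backend/api/go_client/
```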
* Fix indentation in generate_api.sh
* Clean up WORKSPACE
* Add README for building/testing backend.
Also fix the missing licenses in the generated proto files.
* Add license to files under go_http_client
* add vendor to gitignore
* switch to go module
* prune go mod
* turn on go mod for test
* enable go module in docker image
* fix images
* debug
* update image