* Fix API package names and regenerate checked-in proto files. Also bump the version of gRPC gateway used.
* Fix BUILD.bazel file for api as well.
* Update Bazel version
* clean up
* argo
* expose configuration for max number of viewers
* add sample how to configure
* Revert "argo"
This reverts commit 3ff0d07679.
* update namespaced-install.yaml
* Backend - Marking auto-added artifacts as optional
* Updated Argo version in the WORKSPACE file
* Updated WORKSPACE using gazelle
* Added the package that gazelle has missed
* Fixed syntax error
* Updated Argo package to v2.3.0-rc3
* Reworded the comment
* SDK - Separated the generated api client package
* Splitting the package build scripts
* Pinning the API client package version
* Moved import kfp_server_api to the top of the file
* Added the Mac OS X prerequisite install instructions
* Moved the build_kfp_server_api_python_package.sh script to the backend dir
* Updated the dependency version range
* Clear default exp table on delete and create default exp on run create if no default exists
With this change, if the delete experiment API is called on the default
experiment, then the ID will also be removed from the default_experiments
table.
Additionally, if the default experiment doesn't exist and a new run is
created without an experiment, a new default experiment will be created,
and the run will be placed within this experiment.
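A minimal Go sketch of that lifecycle, with hypothetical store and method names standing in for the real kfp backend types:

```go
package main

import "fmt"

type Experiment struct{ UUID, Name string }

// DefaultExperimentStore is a hypothetical in-memory stand-in for the
// default_experiments table described above.
type DefaultExperimentStore struct{ id string }

func (s *DefaultExperimentStore) GetDefaultExperimentId() (string, error) { return s.id, nil }
func (s *DefaultExperimentStore) SetDefaultExperimentId(id string) error  { s.id = id; return nil }

// Called from the delete-experiment path: if the deleted experiment was the
// default, clear the recorded ID so the next run creation recreates one.
func (s *DefaultExperimentStore) UnsetDefaultExperimentIdIfIdMatches(id string) {
	if s.id == id {
		s.id = ""
	}
}

// getOrCreateDefaultExperiment returns the default experiment ID, creating a
// fresh default experiment first if none is currently recorded.
func getOrCreateDefaultExperiment(s *DefaultExperimentStore) (string, error) {
	id, err := s.GetDefaultExperimentId()
	if err != nil || id != "" {
		return id, err
	}
	exp := Experiment{UUID: "exp-123", Name: "Default"} // real code persists the experiment
	if err := s.SetDefaultExperimentId(exp.UUID); err != nil {
		return "", err
	}
	return exp.UUID, nil
}

func main() {
	s := &DefaultExperimentStore{} // no default recorded yet
	id, _ := getOrCreateDefaultExperiment(s)
	fmt.Println("runs without an experiment land in:", id)
}
```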
* Adds integration test for creating a run without an experiment
* Fixes failure to close database connection and adds tests for recreating and deleting default experiment
* Rename function
* Revert some row.Close() calls
* Guard against metadata fields that do not match the expected format
Previously we assumed the fields 'artifact_type' and 'artifact' always
exist. This change ensures we guard against the case when one or both of
these required fields aren't present.
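A hedged sketch of that guard; the field names come from the text above, but the surrounding parsing code is illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// extractArtifact refuses to proceed when either required field is absent,
// instead of assuming both are always present.
func extractArtifact(raw map[string]interface{}) (artifactType, artifact interface{}, err error) {
	artifactType, ok := raw["artifact_type"]
	if !ok {
		return nil, nil, errors.New("metadata missing required field 'artifact_type'")
	}
	artifact, ok = raw["artifact"]
	if !ok {
		return nil, nil, errors.New("metadata missing required field 'artifact'")
	}
	return artifactType, artifact, nil
}

func main() {
	_, _, err := extractArtifact(map[string]interface{}{"artifact_type": "Model"})
	fmt.Println(err) // metadata missing required field 'artifact'
}
```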
* WIP - Create default experiment upon API server initialization
* Fixed crashes in default experiment initialization when the API server pod was restarted without clearing the DB
* Adding new table to store default experiment ID
* Add default experiment type model definition
* Minor fixes, everything seems to work now
* Clean up. Renamed to default_experiment_store
* Adds tests for the default_experiment_store
* Add integration test for verifying initial cluster state. Currently only covers existence of default experiment
* Don't run initialization tests except as integration tests
* Fixes comments
* PR comments and cleanup
* Extract code in resource_manager to helper func
Without setting it to 0, the finished-at field could be NULL if the Argo workflow has already been evicted from the cluster.
This results in errors when parsing the table.
Alternatively, we could use the sql.NullInt64 type to parse the SQL result, but that's less elegant.
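Both options, sketched in Go against an illustrative table/column name (not the actual kfp schema); the IFNULL form is MySQL-flavored:

```go
package runstore

import "database/sql"

// Option A (what the note above prefers): coalesce NULL to 0 in SQL so a
// plain int64 scan always succeeds, even when the Argo workflow was evicted.
func scanFinishedAt(db *sql.DB) (int64, error) {
	var finishedAt int64
	err := db.QueryRow(
		`SELECT IFNULL(FinishedAtInSec, 0) FROM run_details LIMIT 1`).Scan(&finishedAt)
	return finishedAt, err
}

// Option B: sql.NullInt64 handles NULL without touching the query, at the
// cost of unwrapping at every call site.
func scanFinishedAtNullable(db *sql.DB) (int64, error) {
	var v sql.NullInt64
	err := db.QueryRow(
		`SELECT FinishedAtInSec FROM run_details LIMIT 1`).Scan(&v)
	if err != nil || !v.Valid {
		return 0, err
	}
	return v.Int64, nil
}
```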
* add finished time for list runs
* add finished time for list runs
* fix tests
* add finished time for list runs
* Update run.proto
* address comments
* make query more robust
* fix e2e test
* fix e2e test
* update query
* fix test
* return run details for list run
* return run details
* Revert "return run details"
This reverts commit 085ead3530.
* Revert "return run details for list run"
This reverts commit f6b8139e19.
* Update swagger definitions
* WIP - Adds ability to terminate runs to frontend
* Update snapshots
* Adds tests. Also changes warning message color to orange rather than red
* Remove refresh button from run details page
* Elaborate terminate confirmation message
* Minor fixes
* Remove references to refresh button from integration tests
* Enable pipeline packages with multiple files
* Added tests
* Initialize the variables to nil
* Trying to read the archive file entry immediately
* Fixed the pipeline packages used by the `TestPipelineAPI` test.
Also added a failing test case. Will disable it in the next commit.
* Disabling the test for the UploadFile bug I've discovered
* Fixed the pipeline name.
* Removed the disabled extra test.
* Addressed the feedback.
* Removed the "header == nil" check (feedback).
* Fixed typo
* Addressed the PR feedback
Added space before comment.
Checking for the error again.
* Added tests for metadata recording
These tests check for correct recording of metadata (specifically metadata produced by TFX components).
Also, ensure that we check for non-nil output parameter values before attempting to parse them for metadata.
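A small sketch of the nil guard, using a simplified stand-in for an Argo output parameter rather than the real type:

```go
package main

import "fmt"

// Parameter mirrors the relevant shape of an Argo output parameter: Value
// is a pointer and may be nil when the step produced no value.
type Parameter struct {
	Name  string
	Value *string
}

// collectValues skips nil-valued parameters before any parsing happens,
// so we never dereference a missing output value.
func collectValues(params []Parameter) []string {
	var values []string
	for _, p := range params {
		if p.Value == nil {
			continue
		}
		values = append(values, *p.Value)
	}
	return values
}

func main() {
	v := "model_uri: gs://bucket/model"
	fmt.Println(collectValues([]Parameter{{Name: "empty"}, {Name: "md", Value: &v}}))
}
```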
* Added the terminate run command to backend and CLI.
No generated files for now.
* Added the generated files.
* Moved the code to run_store.go
Now the call chain is run_client->run_server->resource_manager->run_store
* Using the backoff package for retries.
* Trying to update run status in the DB to "Terminating" before patching the workflow.
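Roughly, that terminate flow sketched in Go; updateRunStatus and patchWorkflow are hypothetical helpers, and the retry uses github.com/cenkalti/backoff as the entries above mention:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/cenkalti/backoff"
)

// terminateRun marks the run "Terminating" in the DB first, then patches
// the Argo workflow, retrying the patch with exponential backoff.
func terminateRun(runID string) error {
	if err := updateRunStatus(runID, "Terminating"); err != nil {
		return err
	}
	return backoff.Retry(func() error {
		return patchWorkflow(runID) // e.g. set spec.activeDeadlineSeconds to 0
	}, backoff.NewExponentialBackOff())
}

var attempts int

func updateRunStatus(id, status string) error { return nil }

func patchWorkflow(id string) error {
	attempts++
	if attempts < 3 {
		return errors.New("transient apiserver error") // retried by backoff
	}
	return nil
}

func main() {
	fmt.Println(terminateRun("run-1"), "after", attempts, "attempts")
}
```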
* Stopped using the Argo errors module.
* Fixed the compilation errors due to recent backend changes.
* RILEY - WIP - Implementation of workflow_fake.go and first test
Added a successful test in resource_manager_test.go and completed, barring nits and conventions, the implementation of Patch() and isTerminated() within workflow_fake.go.
Additional tests and lots of clean-up are still necessary
* Adds a few more tests to resource_manager_test and cleans up a bit
* Further clean up. Stopped using squirrel for UPDATE query. Added run_store_tests
* Adds terminate run integration test
* Undo changes to go.sum
* Fixes path to long-running.yaml in integration test
* Allow runs with no Conditions to be terminated
* Add fake metadata store and fix tests.
Also, add instructions on how to build/run the backend with Bazel.
Note that the fake metadata store works, but I need to add proper tests
that exercise it. That'll be done in a separate PR.
One thing I'm missing here is how to make Bazel run well in Travis. I
will send a follow-up PR for doing this.
* move select for update to the db interface
* Detecting file format using signature instead of file extension
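A sketch of signature-based sniffing, assuming the usual zip/gzip/tar magic bytes; the exact set of formats the loader recognizes may differ:

```go
package main

import (
	"bytes"
	"fmt"
)

// detectFormat inspects leading bytes instead of trusting the file
// extension. Callers should read at least 262 bytes so the tar check
// (the "ustar" marker at offset 257) can run.
func detectFormat(header []byte) string {
	switch {
	case bytes.HasPrefix(header, []byte("PK\x03\x04")):
		return "zip"
	case bytes.HasPrefix(header, []byte{0x1f, 0x8b}):
		return "gzip (possibly .tar.gz)"
	case len(header) >= 262 && bytes.Equal(header[257:262], []byte("ustar")):
		return "tar"
	default:
		return "plain file (treated as a single YAML pipeline)"
	}
}

func main() {
	fmt.Println(detectFormat([]byte("PK\x03\x04rest-of-archive")))
}
```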
* Added tests for extension-independent pipeline loading
* Added another malformed zip test
* WIP: ML Metadata in KFP
* Move metadata tracking to its own package.
* Clean up
* Address review comments, update travis.yml
* Add dependencies for building in Dockerfile
* Log errors but continue updating the run when metadata storing fails.
* Update workspace to get latest ml-metadata version.
* Update errors
* add count to protos and libs
* close db rows before second query
* count -> total_size
* int32 -> int
* move scan count row to util
* add comments
* add logs when transactions fail
* dedup from and where clauses
* simplify job count query
* job count queries
* run count queries
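The list-plus-count pattern behind these entries, sketched with an illustrative table name; the rows.Close() before the second query is the fix called out above:

```go
package list

import "database/sql"

// listWithTotalSize runs the paginated list query, closes its rows, then
// issues a COUNT query for total_size. Closing the first result set before
// the second query matters on drivers that allow only one open result set
// per connection.
func listWithTotalSize(db *sql.DB, pageSize int) (names []string, totalSize int, err error) {
	rows, err := db.Query(`SELECT Name FROM jobs ORDER BY CreatedAtInSec LIMIT ?`, pageSize)
	if err != nil {
		return nil, 0, err
	}
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			rows.Close()
			return nil, 0, err
		}
		names = append(names, name)
	}
	rows.Close() // must happen before the COUNT query below

	err = db.QueryRow(`SELECT COUNT(*) FROM jobs`).Scan(&totalSize)
	return names, totalSize, err
}
```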
* add job_store total size test
* added tests for list util
* pr comments
* list_utils -> list
* fix clients and fake clients to support TotalSize
* added TotalSize checks in api integration tests
* Add Dockerfile for building Viewer CRD controller.
Also build it as part of the CloudBuild process.
* Revert change to add build to bootstrapper script
* Validate sort and filter criteria against the next page token
If sort-by or filtering criteria are specified in conjunction with a next page token, ensure they match; otherwise return an error.
Also, change the errors to be InvalidInputErrors instead of standard Go string errors, for consistency with the rest of the apiserver.
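A minimal sketch of that validation, assuming a simplified Options struct in place of the real serialized token contents:

```go
package list

import "fmt"

// Options is a stand-in for the listing options baked into a next page
// token: sort-by field, sort direction, and filter.
type Options struct {
	SortBy    string
	Ascending bool
	Filter    string
}

// matchesToken errors out when the caller's criteria disagree with the
// token's; the real apiserver wraps this in its InvalidInputError type.
func matchesToken(fromRequest, fromToken Options) error {
	if fromRequest != fromToken {
		return fmt.Errorf("invalid input: sort or filter criteria do not match the page token")
	}
	return nil
}
```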
* Add IS_SUBSTRING operator for use in API resource filtering.
This should allow substring matches on fields like names and labels and
so on.
Also bump the version of Masterminds/squirrel so we get the new 'like'
operator for use when building the SQL query.
Additionally, I had to fix the generate_api.sh script, which had a bug
(it previously modified the wrong file permissions), and add a dummy
service to generate Swagger definitions for the Filter itself (this was
a hack in the previous Makefile that was lost when we moved to Bazel).
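A sketch of how IS_SUBSTRING can map onto SQL LIKE via squirrel's Like condition (the reason for the Masterminds/squirrel bump); escaping of '%' and '_' in the user value is omitted:

```go
package filter

import sq "github.com/Masterminds/squirrel"

// addIsSubstring appends a substring predicate to a query builder by
// wrapping the value in wildcards and using squirrel's Like condition.
func addIsSubstring(b sq.SelectBuilder, column, value string) sq.SelectBuilder {
	return b.Where(sq.Like{column: "%" + value + "%"})
}

// Example:
//   q := addIsSubstring(sq.Select("UUID", "Name").From("runs"), "Name", "train")
//   sqlStr, args, _ := q.ToSql()
//   // sqlStr: SELECT UUID, Name FROM runs WHERE Name LIKE ?
//   // args:   ["%train%"]
```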
* Add comments for DummyFilterService
* Add more comments
* change errors returned
* fix import
* Run `go vet` as part of the Travis CI.
Also fix existing issues found by Go vet.
* Explicitly check for shadowing
* Fix shadowing problems throughout codebase
* Actually run all checks including shadow
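For illustration, the kind of bug the shadow check catches:

```go
package main

import "fmt"

// shadowed demonstrates the classic ':=' shadowing bug: the inner err is a
// new variable, so the outer err is never assigned and the failure is
// silently dropped.
func shadowed() error {
	var err error
	if true {
		_, err := fmt.Println("doing work") // shadows the outer err
		_ = err                             // the inner err dies here
	}
	return err // always nil, even when the inner call fails
}

func main() {
	fmt.Println(shadowed()) // <nil>
}
```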
* Fix the Bazel build to incorporate the go vet change
That change was submitted in parallel with the PR that moved everything
to Bazel and so wasn't included.
Along the way, fix conflicting imports of the controller-runtime
library. We can import it either through github.com/kubernetes-sigs or
sigs.k8s.io/controller-runtime, but shouldn't be using both imports,
which was causing conflicts at build time.