* enable pagination when expanding experiment in both the home page and the archive page
* Revert "enable pagination when expanding experiment in both the home page and the archive page"
This reverts commit 5b672739dd.
* add a quick guide on how to generate api reference from kfp api definition
* remove trailing lines
* Generate python client package into repo using kfp_api_single_file.swagger.json
* Commit python client generated by swagger
* Use openapi-generator instead
* Regenerate using openapi-generator
* Add extra info into swagger single file
* Update more info
* update
* Move python http client to upper folder
* Clean up build script
* Update kfp_server_api from new swagger.json
* list experiment desc
* changes should be made in proto
* add comments and descriptions
* comments/descriptions in run.proto
* comments in job.proto and pipeline.proto
* try starting a new line
* newline doesn't help
* add swagger gen'ed file
* address comments
* regenerate json and client via swagger
* address comments
* regenerate go_http_client and swagger from proto
* two periods
* re-generate
* include namespace in CreateVisualization
* include namespace in post body
* put namespace in the path and in front of visualization
* post /apis/v1beta1/visualizations/{namespace}
* Add namespace field to CreateVisualizationRequest
* Support getting visualization service URL with namespace
* fix typo
* Add auth checking & allow empty namespace
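A minimal sketch of how the two items above might fit together, assuming a per-namespace visualization service DNS name (the service name, port, and function name here are illustrative, not the actual implementation):

```go
package visualization

import "fmt"

// Hypothetical sketch: resolve the visualization service URL per
// namespace, falling back to a shared service when the namespace is
// empty (which the auth change above allows).
func visualizationServiceURL(namespace string) string {
	if namespace == "" {
		return "http://ml-pipeline-visualizationserver:8888"
	}
	return fmt.Sprintf("http://ml-pipeline-visualizationserver.%s:8888", namespace)
}
```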
* add description to client interface
* autogen
* version doesn't have description field
* swagger autogen
* remove two accidentally committed local python packages
* add new field in db schema and api schema
* auto generated types for experiment storage state
* add archive and unarchive methods to backend for experiments.
* auto generated archive/unarchive methods for experiments
* add archive and unarchive to client
* set proper storage state when creating experiment
* retrieve storage state when we get/list experiment(s)
* change expectation in test to have storage state
* add storage state in resource manager test
* revise experiment server test
* revise api converter test
* integration test of experiment archive
* archiving/unarchiving an experiment affects the storage state of the runs in it
* test all the runs in archive/unarchive experiment
* test all runs are archived/unarchived with their experiment in experiment server
* integration test
* integration test: value type mismatch in assertion
* unused import; default value for storage state
* autogen code for frontend
* reorder the fields in api experiment schema
* switch the position of the two enums to verify a hypothesis
* Put a placeholder to prevent any valid item from taking the value 0
* Get rid of the placeholder since the cause of the issue related to value 0 was found and fixed.
* The returned api experiment now has the storage state field
* create experiment return doesn't contain storage state
* Cleanup needs to clean runs and pipelines now
* a missing client
* use resource reference as filter instead of experiment uuid
* use same namespace in archive unit test
* Leave archive/unarchive experiment integration test to a separate PR
* also need to update jobs when experiments are archived
* Change unarchiving logic: when an experiment is unarchived, the jobs/runs in
it stay archived
* add unit test for the job status in archived/unarchived experiment
* change archive state to a 3-value enum; add experiment integration test
* make archive state a 3-value enum to avoid the 0 value being mapped to available; add integration test
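The reasoning behind the 3-value enum, sketched in Go terms. The constant names are illustrative; the proto3 default behavior is the real point:

```go
package api

// Proto3 enums default to 0 when the field is unset, so if AVAILABLE
// were 0, an experiment that never set the field would silently read
// as AVAILABLE. Reserving 0 for an UNSPECIFIED sentinel keeps "not
// set" distinguishable from a real state.
type StorageState int32

const (
	StorageStateUnspecified StorageState = 0 // proto3 default for unset
	StorageStateAvailable   StorageState = 1
	StorageStateArchived    StorageState = 2
)
```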
* run swagger autogen
* fix an expected value
* fix experiment server test
* add job check in experiment server test
* update job crds
* fix a typo
* remove accidentally included irrelevant changes
* GetNextScheduledEpochNoCatchup implementation
* Add tests for cron schedule nocatchup
* Add tests for periodic schedule and fix a corner case
* Integrate no catchup behavior in swf controller
* Update job api proto
* Regenerate backend client
* Pass catchup parameter in backend api
* Rename proto field to no_catchup, so that it has backward compatible default value
* Update generated backend api
* Use no_catchup field instead
* Add some comments
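Why the rename to no_catchup matters: proto3 booleans default to false, so jobs created before this change implicitly keep the existing catch-up behavior. A minimal sketch of the scheduling decision, with function and parameter names assumed:

```go
package scheduler

// nextScheduledEpoch sketches the no-catchup decision: when noCatchup
// is set and the next scheduled time is already in the past, skip the
// missed runs and schedule relative to now. next() maps an epoch to
// the following scheduled epoch (cron or periodic).
func nextScheduledEpoch(lastEpoch, nowEpoch int64, noCatchup bool, next func(int64) int64) int64 {
	nextEpoch := next(lastEpoch)
	if noCatchup && nextEpoch < nowEpoch {
		return next(nowEpoch)
	}
	return nextEpoch
}
```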
* add upload pipeline version to upload_pipeline_server and http main
* add apiPipelineVersion to pipeline upload swagger json
* add apiResourceReference to pipeline upload swagger json
* Add yet more types to pipeline upload swagger json
* Unit tests
* add namespace to some run APIs
* update only the create run api
* add resource reference for namespaced runs
* add variables in const
* add types to toModel func
* bug fix
* strip the namespace resource reference when mapping to the db model
* add unit tests
* use gofmt
* replace belonging relationship reference with owner
* put a todo for further investigation of using namespace or uuid
* apply gofmt
* revert minor change
* Update model_converter.go
* Open the version API in BE for a later FE PR to use, including
auto-generated BE and FE code.
* format FE
* re-generate
* remove an unnecessary auto-generated file
* format
* add version api
* unit tests
* remove debug fmt
* remove unused func
* remove another unused method
* formatting
* remove unused consts
* some comments
* build
* unit tests
* unit tests
* formatting
* unit tests
* run from pipeline version
* pipeline version as resource type
* run store and resource reference store
* formatting and removing debug traces
* run server test
* job created from pipeline version
* variable names
* address comments
* Get pipeline version template is used on the pipeline details page, which fetches the pipeline version file.
* a temp revert
* address comment
* address comment
* add comment
* get pipeline version template
* verify pipeline version in resource reference
* add unit test for create run from pipeline version
* unit test for create job from pipeline version
* remove some comments
* reformat
* reformat again
* Remove an unrelated change
* change method name
* Add necessary data types/tables for pipeline version. Mostly based
on Yang's branch at https://github.com/IronPan/pipelines/tree/kfpci/.
Backward compatible.
* Modified comment
* Modify api converter in accordance with the new pipeline (version) definition
* Change pipeline_store for DefaultVersionId field
* Add pipeline spec to pipeline version
* fix model converter
* fix a comment
* Add foreign key, pagination of list request, refactor code source
* Refactor code source
* Foreign key
* Change code source and package source type
* Fix ; separator
* Add versions table and modify existing pipeline apis
* Remove api pipeline definition change and leave it for a later PR
* Add comment
* Make schema changing and data backfilling a single transaction
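A minimal sketch of the single-transaction migration, assuming database/sql and illustrative table/column names (note that engines which auto-commit DDL weaken the atomicity guarantee):

```go
package storage

import "database/sql"

// migratePipelines sketches the migration: add the new column and
// backfill existing rows inside one transaction so a failure leaves
// the table in its old state.
func migratePipelines(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	if _, err := tx.Exec(`ALTER TABLE pipelines ADD COLUMN DefaultVersionId VARCHAR(255)`); err != nil {
		tx.Rollback()
		return err
	}
	// Backfill: each existing pipeline becomes its own default version.
	if _, err := tx.Exec(`UPDATE pipelines SET DefaultVersionId = UUID`); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}
```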
* Tolerate null default version id in code
* fix status
* Revise delete pipeline func
* Use raw query to migrate data
* No need to update versions status
* rename and minor changes
* Restore a where clause that was accidentally removed
* Fix a model name prefix
* Refine comments
* Revise if condition
* Address comments
* address more comments
* Rearrange pipeline and version related parts inside CreatePipeline, to make them more separate.
* Add package url to pipeline version. Required in CreatePipelineVersionRequest
* Single code source url; remove pipeline id as sorting field; reformat
* resolve remote branch and local branch diff
* remove unused func
* Remove an empty line
* Added custom visualization type
* Added support for custom visualizations to the VisualizationCreator component
* Re-generated API
* Updated VisualizationCreator.test.tsx.snap
* Updated VisualizationCreator.test.tsx to have new and more specific tests
* Added tests to ensure Editor component is visible when specifying visualization type
* Updated test to properly validate provided source is rendered
* Added unit test to ensure that an argument placeholder is provided for every visualization type
* Fixed linting error
* Simplified canGenerate logic
* Added table and tfdv visualization
Also fixed issue surrounding ApiVisualizationType enum
* Fixed table visualization
* Removed byte limit
* Fixed issue where headers would not properly be applied
* Fixed issue where table would not be interactive
* Updated table visualization to reflect changes made to dependency injection
* Fixed bug where checking if headers is provided to table visualizations could crash visualization
* Added TFMA visualization
* Updated new visualizations to match syntax of #1878
* Updated test snapshots to account for TFMA visualization
* Small if statement syntax changes
* Add flake8 noqa comments to table.py and tfma.py
* InputPath -> Source
* Changed name of data path/pattern variable from InputPath to Source to improve consistency with current visualization method
* Updated unit tests to reflect name change
* Regenerated swagger definitions to reflect name change
* Re-added test that was removed with the previous commit
It was deleted by mistake
* String array -> string for arguments parameter in visualization.proto
Switching from a repeated string to a plain string allows stringified JSON to be used for specifying visualization arguments. This gives a more generic way to pass arguments within Python: rather than using argparse, json can decode the arguments without them having to be declared beforehand.
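A sketch of what the single-string arguments field enables on the client side (argument keys here are hypothetical): arbitrary, schema-free arguments are marshalled to JSON once, and the Python service can json.loads them without an argparse spec.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Arguments of any shape are serialized into the single string
	// field, instead of a repeated string parsed by argparse.
	args, err := json.Marshal(map[string]interface{}{
		"headers":   []string{"label", "score"},
		"threshold": 0.5,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(args)) // value for the visualization's arguments field
}
```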
* Ran generate_api.sh
* Created visualization.proto
* Addressed most of PR feedback
* Fixed comments
* Addressed additional PR feedback
* Changed output from path to html
* Removed id parameter from visualization and changed inputPaths to inputPath
* Added support for command line arguments to be passed via the API
These are required for the new ROC curve and will become important for passing any parameters from a user to the visualization.
* Fixed typo
* Fix API package names and regenerate checked-in proto files. Also bump version of GRPC gateway used.
* Fix BUILD.bazel file for api as well.
* Update Bazel version
* SDK - Separated the generated api client package
* Splitting the package build scripts
* Pinning the API client package version
* Moved import kfp_server_api to the top of the file
* Added the Mac OS X prerequisite install instructions
* Moved the build_kfp_server_api_python_package.sh script to the backend dir
* Updated the dependency version span
* add finished time for list runs
* add finished time for list runs
* fix tests
* add finished time for list runs
* Update run.proto
* address comments
* make query more robust
* fix e2e test
* fix e2e test
* Added the terminate run command to backend and CLI.
No generated files for now.
* Added the generated files.
* Moved the code to run_store.go
Now the call chain is run_client->run_server->resource_manager->run_store
* Using the backoff package for retries.
* Trying to update run status in the DB to "Terminating" before patching the workflow.
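A minimal sketch of the retry around the workflow patch, assuming the cenkalti/backoff package; the two helper functions are hypothetical stand-ins for the DB update and the Argo patch:

```go
package resource

import (
	"time"

	"github.com/cenkalti/backoff"
)

// Hypothetical helpers standing in for the DB update and the Argo patch.
func updateRunStatus(runID, state string) error { return nil }
func patchWorkflowTerminate(runID string) error { return nil }

// terminateRun marks the run Terminating in the DB first, then retries
// the workflow patch with exponential backoff so transient API-server
// errors don't fail the terminate call outright.
func terminateRun(runID string) error {
	if err := updateRunStatus(runID, "Terminating"); err != nil {
		return err
	}
	b := backoff.NewExponentialBackOff()
	b.MaxElapsedTime = 30 * time.Second
	return backoff.Retry(func() error {
		return patchWorkflowTerminate(runID)
	}, b)
}
```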
* Stopped using the Argo errors module.
* Fixed the compilation errors due to recent backend changes.
* RILEY - WIP - Implementation of workflow_fake.go and first test
Added successful test in resource_manager_test.go and completed, barring nits and conventions, the implementation of Patch() and isTerminated() within workflow_fake.go
Additional tests and lots of clean-up are still necessary
* Adds a few more tests to resource_manager_test and cleans up a bit
* Further clean up. Stopped using squirrel for UPDATE query. Added run_store_tests
* Adds terminate run integration test
* Undo changes to go.sum
* Fixes path to long-running.yaml in integration test
* Allow runs with no Conditions to be terminated
* add count to protos and libs
* close db rows before second query
* count -> total_size
* int32 -> int
* move scan count row to util
* add comments
* add logs when transactions fail
* dedup from and where clauses
* simplify job count query
* job count queries
* run count queries
* add job_store total size test
* added tests for list util
* pr comments
* list_utils -> list
* fix clients and fake clients to support TotalSize
* added TotalSize checks in api integration tests
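A sketch of the count half of the two-query pattern behind total_size (helper name assumed): the page query runs first, then a count(*) query over the same from/where clauses, and the first result set must be closed before the second query runs, per the fix above.

```go
package list

import "database/sql"

// scanRowToTotalSize reads the single row returned by the count(*)
// query; the page query's rows must be closed before this second
// query runs on the same connection.
func scanRowToTotalSize(rows *sql.Rows) (int, error) {
	defer rows.Close()
	total := 0
	if rows.Next() {
		if err := rows.Scan(&total); err != nil {
			return 0, err
		}
	}
	return total, rows.Err()
}
```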
* Add IS_SUBSTRING operator for use in API resource filtering.
This should allow substring matches on fields like names and labels and
so on.
Also bump the version of Mastermind/squirrel so we get the new 'like'
operator for use when building the SQL query.
Additionally, I also had to fix the generate_api.sh script which had a
bug (it modified the wrong file permissions before), and add a dummy
service to generate Swagger definitions for the Filter itself (this was
a hack in the previous Makefile that I lost when we moved to Bazel).
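A minimal sketch of how IS_SUBSTRING might lower to SQL via the Like expression from the bumped Masterminds/squirrel (the function name and column handling are illustrative):

```go
package filter

import sq "github.com/Masterminds/squirrel"

// addSubstringPredicate lowers an IS_SUBSTRING predicate to SQL: the
// value is wrapped in wildcards and applied with squirrel's Like
// expression. Real code would also escape % and _ inside substr.
func addSubstringPredicate(sel sq.SelectBuilder, col, substr string) sq.SelectBuilder {
	return sel.Where(sq.Like{col: "%" + substr + "%"})
}
```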
* Add comments for DummyFilterService
* Add more comments
* change errors returned
* fix import
* Use Bazel to build the entire backend.
This also uses Bazel to generate code from the API definition in the
proto files.
The Makefile is replaced with a script that uses Bazel to first generate
the code, and then copy them back into the source tree.
Most of the BUILD files were generated automatically using Gazelle.
* Fix indentation in generate_api.sh
* Clean up WORKSPACE
* Add README for building/testing backend.
Also fix the missing licenses in the generated proto files.
* Add license to files under go_http_client
* Make all ListXXX operations use POST instead of GET.
Generate new swagger definitions and use these to generate the frontend
APIs using `npm run apis`.
This is to support filtering in List requests, as the current
grpc-gateway swagger generator tool does not support repeated fields in
requests used in GET endpoints.
* Use base64-encoded JSON-stringified version of Filter instead.
This lets us keep filter as a simple parameter in the ListXXX requests,
and gets around having to use POST for List requests.
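A sketch of the round-trip this enables (the filter payload shape is assumed; the real message is the Filter proto): the client JSON-stringifies the filter, base64-encodes it, and passes it as an ordinary query parameter on the GET List request, with the server reversing both steps.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical filter payload; the real message is the Filter proto.
	filter := map[string]interface{}{
		"predicates": []map[string]interface{}{
			{"key": "name", "op": "EQUALS", "string_value": "my-run"},
		},
	}
	b, _ := json.Marshal(filter)
	// StdEncoding matches a later fix in this list.
	encoded := base64.StdEncoding.EncodeToString(b)
	fmt.Println("?filter=" + encoded)
}
```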
* refactor filter parsing to parseAPIFilter and add tests
* Hack to ensure correct Swagger definitions are generated for Filter.
* Fix merge conflicts with master after rebase
* fix indentation
* Fix hack so frontend apis compile.
* print failing experiments
* try print again.
* revert experiment_api_test
* Use StdEncoding for base64 encoding
* Fix nil pointer dereference error caused by err variable shadowing
* skip integration tests when unit test flag is set to true
* wip
* add StorageState enum to proto
* add StorageState to model
* archive proto/model changes
* wip archive endpoint
* wip adding tests
* archive test
* unarchive proto and implementation
* cleanup
* make storage state required, with a default value
* remove unspecified value from storage state enum
* pr comments
* pr comments
* fix archive/unarchive endpoints, add api integration test
* typo
* WIP: Add filter package with tests.
* Add tests for IN predicate.
* Add listing functions
* Try updating list experiments
* Cleanup and finalize list API.
Add tests for list package, and let ExperimentStore use this new API.
Update tests for the latter as well.
* Add comments. BuildSQL -> AddToSelect for flexibility
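A sketch of the AddToSelect shape (type and field names assumed, not the actual implementation): the list options decorate a squirrel SelectBuilder instead of emitting a full SQL string, so each store can compose its own base query.

```go
package list

import sq "github.com/Masterminds/squirrel"

// Options sketches the list package's options; real fields differ.
type Options struct {
	SortByField string
	PageSize    int
}

// AddToSelect decorates the caller's query with ordering and a limit
// of PageSize+1 (the extra row signals whether a next page exists),
// rather than building the full SQL string itself.
func (o *Options) AddToSelect(b sq.SelectBuilder) sq.SelectBuilder {
	return b.OrderBy(o.SortByField).Limit(uint64(o.PageSize + 1))
}
```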
* Run dep ensure
* Add filter proto to all other resources
* Add filtering for pipeline server
* Add filtering for job server
* Add filtering for run server
* Try to fix integration tests
This change pins the versions of the libraries that were used to
generate the proto definitions using dep. The Makefile is then modified
so that the tool and library versions used to build the proto generated
files are from the vendor directory. This is a hacky, short-term
solution to ensure a reproducible build while we work on switching to
bazel.
The versions in the Gopkg.toml file were chosen based on my experiments
that generated proto files that did not change from what is already
checked in.