* Add pipeline_id to run and recurring run protos
* Add pipeline_id to run and recurring_run
* Set status when creating a new job. Closes #9125.
* Enable sample tests
* SDK - Client - Added a way to set experiment name using environment variables
This is useful for launching notebooks or pipeline files that submit
themselves for execution.
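As a rough illustration of that workflow, a self-submitting script might pick the experiment name up from the environment; the variable name `KF_PIPELINES_DEFAULT_EXPERIMENT_NAME` follows the KFP v1 client's convention, but treat the exact name and fallback behavior as assumptions here:

```python
import os
import kfp

# Assumption: when no experiment_name is passed, the client falls back to
# this environment variable (KFP v1 client convention).
os.environ['KF_PIPELINES_DEFAULT_EXPERIMENT_NAME'] = 'sample-test-experiment'

client = kfp.Client()
# No experiment_name argument: the env var above supplies the default.
client.create_run_from_pipeline_package('pipeline.yaml', arguments={})
```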
* Switched to subprocess.run which supports env
* Setting the environment variable differently
Looks like `subprocess.run` uses `PATH` to search for the program.
* Convert return code to string
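A sketch of the `subprocess.run` change described above; because the program is resolved via `PATH` from the child environment, the safe pattern is to copy `os.environ` and add the new variable rather than replace the environment wholesale (file names are placeholders):

```python
import os
import subprocess

# Copy the parent environment so PATH survives; subprocess.run resolves
# the program name against PATH, so passing a minimal env dict can break
# program lookup entirely.
env = os.environ.copy()
env['KF_PIPELINES_DEFAULT_EXPERIMENT_NAME'] = 'sample-test-experiment'

result = subprocess.run(['python3', 'sample_pipeline.py'], env=env)
exit_code = str(result.returncode)  # converted to string for the test report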
* Changed the way the experiment name is being set
* Changed how the notebook installs the SDK
The notebook was overriding the SDK being tested.
* Not installing the KFP SDK package
* Removed the experiment_name from samples and configs.
* Changed the SDK installation lines in samples
Otherwise the sample tests do not correctly test the new SDK code.
* Add logic to detect extension name.
* Rename notebook samples
* Change to use config yaml for papermill preprocess.
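Roughly, the papermill preprocessing step looks like the sketch below; the notebook name and parameter keys are placeholders, with the real values read from each sample's config yaml:

```python
import papermill as pm

# Placeholder parameters; in the sample tests these come from config.yaml.
params = {'experiment_name': 'notebook-sample-test',
          'output': 'gs://<test-bucket>/output'}

pm.execute_notebook(
    'sample.ipynb',          # input notebook (placeholder name)
    'sample.output.ipynb',   # executed copy written by papermill
    parameters=params,
)
```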
* Remove ad hoc logic
* Remove duplicated logic
* Refactor
* Add run_pipeline flag in config yaml
* Add run pipeline flag for .py sample as well.
* Fix extension name
* Fix
* Fix problems in docstring.
* refactor run_sample_test.py into two functions
* Refactor the procedure into 3 steps
* Fix bug in exit code format
* Remove two redundant functions.
* Clean unused import
* nit
* nit: improve docstring.
* Refactor sample test into digesting params from config yaml files.
* Fix argument assignment.
* Fix path.
* Fix output params. Not every test uses them.
* Add output placeholder in yaml config.
* Fix yaml config.
* Minor fix.
* Minor fix.
* Move timeout info to config.yaml, too
* Fix import in check_notebook_results.py
* Add type hints in config.yaml
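A sketch of how a test driver might consume such a config; the key names below are hypothetical, since the commits only state that run flags, output placeholders, and timeouts live in config.yaml:

```python
import yaml

with open('config.yaml') as f:
    config = yaml.safe_load(f)

# Hypothetical keys, annotated with the type hints the config documents.
test_name: str = config['test_name']
run_pipeline: bool = config.get('run_pipeline', True)
timeout: int = config.get('timeout', 1800)   # seconds
output: str = config.get('output', '')       # GCS output placeholder
```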
* Remove redundant close.
* Remove redundant import.
* Simplify sample_test.yaml by using withItems syntax.
* Change dict to str in withItems.
* remove redundant sed options.
* Fix format/style issues
* [WIP] Refactor repeated logic into two utility functions.
* [WIP] Add a utility function to validate the test results from a notebook test.
* [WIP] Refactor test cases (except for notebook sample tests) into adopting utility functions.
TODO: Need to move the functions of run_*_test.py into a unified run_sample_test.py.
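A minimal sketch of what such a shared notebook-test utility could look like; the function name, signature, and error handling here are assumptions, not the PR's actual code:

```python
import papermill as pm
from papermill.exceptions import PapermillExecutionError

def run_notebook_sample(notebook: str, params: dict, output: str) -> int:
    """Execute one notebook sample and map the result to an exit code."""
    try:
        pm.execute_notebook(notebook, output, parameters=params)
        return 0
    except PapermillExecutionError:
        # A failing cell marks the sample test as failed.
        return 1
```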
* [WIP] Fix a typo in test name and incorporate tfx-cab-classification, kubeflow-training-classification, xgboost-training-cm and basic ones into one run_sample_test.py
* Fix/add some comments.
* Refactor notebook tests into using utility functions
* lint
* Unify naming in sample_test.yaml
* Remove old *_test.py files
* Fix tests by fixing test names.
* Fix string formatting, per Ark-kun's comment.
* Fix names of the papermill-generated Python notebooks.
* Fix tests
* Fix test by fixing experiment names, and test names in yaml.
* Fix test by fixing experiment names.
* Fix dsl type checking test that does not require experiment set-up.
* Remove redundant commands and usage of ipython
* Revert "Remove redundant commands and usage of ipython"
This reverts commit 23a0e014
* Remove redundant string substitutions and edit an AI.
* Move image name injection to a utility function to improve readability.
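A rough sketch of such an injection helper; the regex, function name, and image registry path are illustrative only:

```python
import re

def inject_image_tag(sample_path: str, tag: str) -> None:
    """Rewrite component image tags in a sample file to the tag under test."""
    with open(sample_path) as f:
        content = f.read()
    # Swap whatever tag the sample pins for the freshly built one.
    content = re.sub(r'(gcr\.io/ml-pipeline/[\w-]+):[\w.-]+',
                     r'\1:' + tag, content)
    with open(sample_path, 'w') as f:
        f.write(content)
```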
* Revert lint changes of check_notebook_results.py
* Unify test case naming convention to underscore.
* Fix .py name
* Fix README.md
* Fix test
* Add TODO items.
* Add a utility function to inject image names into the kubeflow_training_classification Python sample file.
* Remove a redundant cd command.
* Fix indentation.
* Fix test names in component_test.yaml
* Remove redundant clean_cmle_models.py
* Fix a nit.
* Fix comment.
* add type checking sample to sample tests
* Add the test script exit code to the sample test result; update the check_notebook_result script to skip validating pipeline runs when the experiment arg is not provided
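The guard described above amounts to something like this sketch (the argument and helper names are assumptions):

```python
# Inside check_notebook_results.py (hypothetical names):
if args.experiment is None:
    print('No experiment provided; skipping pipeline run validation.')
else:
    validate_pipeline_runs(args.experiment)  # hypothetical helper
```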
* fix typo
* add get_experiment_id and list_runs_by_experiment
* offer only one get_experiment function
* return experiment body instead of id
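With the KFP v1 client, this consolidation means callers fetch the experiment body once and derive everything else from it; a usage sketch, with the experiment name as a placeholder:

```python
import kfp

client = kfp.Client()

# get_experiment returns the full experiment body rather than only an id,
# and accepts either a name or an id.
experiment = client.get_experiment(experiment_name='sample-test-experiment')

# Runs are then listed through the experiment's id.
runs = client.list_runs(experiment_id=experiment.id, page_size=50)
```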
* simplify code
* simplify code, part 2
* remove experiment_id check in the while loop
* fix a minor bug
* add notebook sample tests for tfx
* parameterize component image tag
* parameterize base and target image tags
* install tensorflow package for the notebook tfx sample test
* bug fixes
* start debug mode
* fix bugs
* add namespace arg to check_notebook_results, copy test results to gcs, fix minor bugs
add CMLE model deletion
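Copying results to GCS presumably shells out to gsutil along these lines; the bucket path and result file name are placeholders:

```python
import subprocess

subprocess.run(
    ['gsutil', 'cp', 'junit_SampleTestOutput.xml',
     'gs://<results-bucket>/artifacts/'],
    check=True,  # fail the test step if the upload fails
)
```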
* install the correct KFP version in the notebook; parameterize deployer model name and version
* fix CMLE model name bug
* add notebook sample test in v2
* add gcp sa in notebook tfx sample and shut down debug mode
* import kfp.gcp
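In KFP v1, importing kfp.gcp lets sample pipelines attach the user-gcp-sa secret to their steps; a minimal sketch, where the op below is a placeholder rather than the TFX sample's real component:

```python
from kfp import dsl, gcp

@dsl.pipeline(name='tfx-notebook-sample')
def tfx_sample_pipeline():
    # Placeholder op standing in for the sample's real components.
    train = dsl.ContainerOp(
        name='train',
        image='gcr.io/ml-pipeline/placeholder:latest',
    )
    # Mount the 'user-gcp-sa' secret so the step can call GCP APIs.
    train.apply(gcp.use_gcp_secret('user-gcp-sa'))
```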