* Add runtime resource request for GPUs
* clean up
* Updated docs and added check
* updated with test
* remove from branch
* run tests
* fix gpu vendor format
* Update after feedback
* add unit test
* remove integration test
* clean up
* Clean up
* Updated to resource_constraints instead of resource
* fix(launcher): handle parameter values with special characters stably
* include new test case
* add tensorboard minio test case
* fix go unit tests
* update test golden
* address feedback
* fix tests
* feat(sample): Add markdown visualization example compatible with v1 and v2
* address comment
* use multi line markdown
* Update markdown.py
Co-authored-by: Yuan (Bob) Gong <4957653+Bobgy@users.noreply.github.com>
* added resource request at runtime
* fixed things
* Update to use read-only parameter instead
* added test case and better example
* Updated again
* add the validation
* add to the test suite
* work in progress
* update after feedback
* fix the test
* clean up
* clean up
* fix the path
* add the test again
* clean up
* fix tests
* feedback fix
* comment out and clean up
* Update pipeline names in sample/core directory to contain only lowercase letters, hyphens, and numbers.
Signed-off-by: Diana Atanasova <dianaa@vmware.com>
* feat(samples): sample to use [[RunUUID]] macro in a pipeline
* use kfp.dsl.RUN_ID_PLACEHOLDER instead
* fix, use latest test infra tools
* fix
* address comments
* Update use_run_id.py
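The run-ID macro above works by token substitution at submission time. A minimal sketch of that idea, where the resolver function is hypothetical and the placeholder value mirrors the constant the kfp v1 SDK uses for `dsl.RUN_ID_PLACEHOLDER`:

```python
# Placeholder value matching kfp v1's dsl.RUN_ID_PLACEHOLDER (an Argo macro).
RUN_ID_PLACEHOLDER = "{{workflow.uid}}"

def resolve_placeholders(command: str, run_id: str) -> str:
    # Replace the macro token with the concrete run ID when the run starts.
    return command.replace(RUN_ID_PLACEHOLDER, run_id)

print(resolve_placeholders("echo run={{workflow.uid}}", "abc-123"))
# → echo run=abc-123
```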
* feat: customizable tensorboard image and env vars
* feat: sample pipeline using tensorboard visualization with minio
* change podtemplatespec format to be JSON in mlpipeline-ui-metadata
* fix default value
* update test config
* increase test timeout
* fix test
* fix args
* fix
* address comments
* improve component logging
* Escape strings in RuntimeInfo.
This allows strings to contain serialized dictionaries, etc.
Also, re-enable a couple of v2 tests.
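Escaping via JSON serialization is one way to make arbitrary strings, including serialized dictionaries, safe to embed as literals; this is a hedged sketch of the idea, not the actual RuntimeInfo code:

```python
import json

# Hypothetical sketch: a parameter value that is itself a serialized
# dictionary, containing quotes and a newline.
value = '{"lr": 0.01, "note": "line1\nline2 \\"quoted\\""}'

escaped = json.dumps(value)          # one quoted literal, inner quotes/newlines escaped
round_tripped = json.loads(escaped)  # recovers the original string exactly
assert round_tripped == value
```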
* update goldens.
* Update exit_handler.py
* add comment.
* keep last error
Co-authored-by: Yuan (Bob) Gong <4957653+Bobgy@users.noreply.github.com>
* test: add exit_handler to v2 test infra
* test: add loop_parallelism_test
* test: add output_a_directory_test
* load common component from url
* add tests to samples/test/config.yaml
* enable output directory test
* test: set up sample test for many samples
* test: rm loop_* tests from v1 sample test, they are already covered in v2 sample test
* fix condition pipeline for basic e2e test
* remove condition from e2e test
* Samples - Added the caching sample
* Added the sample to presubmit tests
* Replaced the sample with the one that is compatible with the current compiler and execution caching
Currently, cached executions cannot be reused within the same pipeline because the compiler generates unique output names for every task.
* Fixed the max_cache_staleness value
* Fixed the time format
* Set max_cache_staleness to 0
* Set max_cache_staleness to P0D
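`P0D` is an ISO-8601 duration meaning zero days, so every cached execution is considered stale and never reused. A plain-Python sketch of the staleness check (the function name is illustrative, not the actual caching backend):

```python
from datetime import datetime, timedelta, timezone

def is_cache_reusable(cached_at: datetime, max_staleness: timedelta) -> bool:
    # A cached result is reusable only if it is younger than max_staleness.
    return datetime.now(timezone.utc) - cached_at < max_staleness

cached_at = datetime.now(timezone.utc)
print(is_cache_reusable(cached_at, timedelta(0)))       # "P0D": never reuse → False
print(is_cache_reusable(cached_at, timedelta(days=1)))  # "P1D": fresh entry → True
```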
* Switched to 60-second work time to avoid rare but possible flakiness
* Switched parameter type to float
This makes it possible to add small random noise so that the sample retries are separate (not implemented).
* Replaced the sample with a notebook to overcome sample test infra limitation
Currently only notebook can launch pipelines itself. Python code files can only compile the pipeline.
* Added a sample test config
Perhaps we should make them unnecessary.
* Fixed the pipeline parameter type
The sample started failing with "ImportError: cannot import name 'collections_abc' from 'six.moves' (unknown location)".
Updating the sample and fixing the issue.
This sample demonstrates a common training scenario.
New models are being trained starting from the production model (if it
exists).
This sample produces two runs:
1. The trainer will train the model from scratch and set as prod after
testing it
2. Exact same configuration, but the pipeline will discover the existing
prod model (published by the 1st run) and warm-start the training from
it.
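The two-run behavior described above hinges on a single check: does a production model already exist? A hedged sketch of that branch (the path and function names are illustrative, not the sample's real components):

```python
import os

def choose_training_mode(prod_model_path: str) -> str:
    # Run 1: no prod model has been published yet → train from scratch.
    # Run 2: the model published by run 1 exists → warm-start from it.
    if os.path.exists(prod_model_path):
        return "warm-start"
    return "from-scratch"
```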
* Samples - Added Output a directory sample
* Added the explanation of the sample
* Added examples with non-python components
* Added the license header
* Samples - Added the Train until good pipeline
This sample demonstrates continuous training using a train-eval-check recursive loop.
The main pipeline trains the initial model and then gradually trains the model some more until the model evaluation metrics are good enough.
* Addressed the review feedback
* Backend - Only compiling the preloaded samples
Fixes https://github.com/kubeflow/pipelines/issues/4117
* Fixed the paths
* Removed -o pipefail for now since sh does not support it
* Fixed the quotes
* Removed the __future__ imports
Python 2 is no longer supported.
The annotations cause compilation problems:
```
File "/samples/core/iris/iris.py", line 18
from __future__ import absolute_import
^
SyntaxError: from __future__ imports must occur at the beginning of the file
```
* enable pagination when expanding experiment in both the home page and the archive page
* Revert "enable pagination when expanding experiment in both the home page and the archive page"
This reverts commit 5b672739dd.
* tfx 0.21.2 -> 0.22.2
* tfx 0.20.2 -> 0.22.0
* update requirements.txt
* init
* update comment
* fix module file
* clean up
* update to beam sample
* add doc of default bucket
* bump viz server tfma version
* update iris sample to keras native version
* update iris sample to keras native version
* pin TFMA
* add readme
* add to sample test corpus
* add prebuilt && update some config
* sync frontend
* update snapshot
* update snapshot
* fix gettingstarted page
* fix unit test
* fix unit test
* update description
* update some comments
* add some dependencies.