* Support per-workflow TTL (ttl_seconds_after_finished) with the new format of the Argo workflow manifest
* Update test for TTL
* Declare fix in release note of SDK
* update syntax
* Update RELEASE.md
Add breaking change note: incompatible with KFP pre-1.7 due to Argo 2.X
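A minimal sketch of setting the per-workflow TTL through the SDK, assuming the existing `PipelineConf.set_ttl_seconds_after_finished` API:

```python
import kfp.dsl as dsl
from kfp.compiler import Compiler

@dsl.pipeline(name='ttl-example')
def ttl_pipeline():
    pass

# Ask Argo to garbage-collect the workflow 300 seconds after it finishes.
pipeline_conf = dsl.PipelineConf()
pipeline_conf.set_ttl_seconds_after_finished(seconds=300)

Compiler().compile(ttl_pipeline, 'pipeline.yaml', pipeline_conf=pipeline_conf)
```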
* Fix podSpecPatch bug
* Fix wrong structure for nodeSelector and bring back integration test
* updated python compiler test
* updated to use the correct GPU type
* Commit the updated test config that was missed earlier
* Updated release notes
* remove test to see if it solved the issue
* Reformat sdk only using the new yapf config.
* Reformat docstrings using docformatter.
* update golden files to resolve diff caused by whitespaces
* fix some tests
* format .py files under sdk/python/tests using yapf
* additional docformatter
* fix some tests
* feat(sdk): add default schema_version to pipeline
* sync api for go
* Fix tests and address comments
* Bump pipeline_spec version
* Fix v1 tests
* rebase to master
* Refactor and move all v2 related code to under the v2 namespace.
Most of the changes are around imports and restructuring of the
codebase. While it looks like a lot of code was added, most of the code
already existed and was simply moved or copied over to v2. The only
exceptions are:
- under kfp/v2/components/component_factory.py: some helper functions
were copied with simplification from _python_op.py
- we no longer strip the `_path` suffix in v2 components.
Note: there is still some duplication of code (particularly between
component_factory.py and _python_op.py), but it's ok for now since we
intend to replace some of this with v2 ComponentSpec + BaseComponent.
* Update setup.py.
* update tests.
* revert accidental change of gcpc
* Fix component entrypoint.
* Update goldens.
* fix tests.
* fix merge conflict.
* revert gcpc change.
* fix tests.
* fix tests.
* Add type aliases for moved files.
* merge and update goldens.
* Add runtime resource request for GPUs
* clean up
* Updated docs and added check
* updated with test
* remove from branch
* run tests
* fix gpu vendor format
* Update after feedback
* add unit test
* remove integration test
* clean up
* Clean up
* Updated to use resource_constraints instead of resource
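A hedged sketch of requesting GPUs at runtime via a pipeline parameter, assuming `ContainerOp.set_gpu_limit(gpu, vendor)` accepts a pipeline parameter after this change; the `train` op is illustrative:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='gpu-example')
def gpu_pipeline(num_gpus: int = 1):
    train = dsl.ContainerOp(name='train', image='alpine', command=['echo', 'train'])
    # The GPU count is resolved from the pipeline parameter at runtime.
    train.set_gpu_limit(num_gpus, vendor='nvidia')
```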
* fix uri placeholder in v2 compatible mode
* fix tests
* fix path generation
* fix tests
* fix test
* cleanup
* clean up
* fix test
* fix test
* fix test
* Update argoproj/argo URLs to argoproj/argo-workflows
* Update link to workflows.ts
* Update license.txt to reduce # of changed lines
* Revert changes to backend Dockerfile & license.txt
* Update license.txt, keep line endings
* feat(sdk/dsl/compiler): support --mode flag which can turn on v2 compatible mode
* override compiler default mode using KF_PIPELINES_COMPILER_MODE env var
* update V1_LEGACY to V1
* add unit tests
* address feedback
* clean up
* cleanup again
* use absl.testing.parameterized for table driven tests
* update
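A sketch of choosing the compiler mode programmatically, using the `mode` argument on `kfp.compiler.Compiler` and the `kfp.dsl.PipelineExecutionMode` enum; the `KF_PIPELINES_COMPILER_MODE` environment variable overrides the default:

```python
import kfp.dsl as dsl
from kfp.compiler import Compiler

@dsl.pipeline(name='mode-example')
def my_pipeline():
    pass

# Compile in v2-compatible mode instead of the default V1 mode.
Compiler(mode=dsl.PipelineExecutionMode.V2_COMPATIBLE).compile(
    my_pipeline, 'pipeline.yaml')
```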
* added resource request at runtime
* fixed things
* Update to use read-only parameter instead
* added test case and better example
* Updated again
* add the validation
* add to the test suite
* work in progress
* update after feedback
* fix the test
* clean up
* clean up
* fix the path
* add the test again
* clean up
* fix tests
* feedback fix
* comment out and clean up
* feat(sdk): Support backoffs in retry strategy
Signed-off-by: Stefano Fioravanzo <stefano@arrikto.com>
* Add Optional type hint
Signed-off-by: Stefano Fioravanzo <stefano@arrikto.com>
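A sketch of the backoff options, assuming `ContainerOp.set_retry` grows the `backoff_*` keyword arguments described above; the `echo` op is illustrative:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='retry-backoff-example')
def retry_pipeline():
    echo = dsl.ContainerOp(name='echo', image='alpine', command=['echo', 'hi'])
    echo.set_retry(
        num_retries=3,
        policy='Always',
        backoff_duration='2m',      # initial delay between retries
        backoff_factor=2.0,         # multiplier applied to the delay
        backoff_max_duration='1h',  # upper bound on the delay
    )
```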
* add test for keyword-only arguments in pipeline func
* fix: kwargs-only argument for pipeline func
* test: kwargs generate same yaml as args
* remove whole metadata
* assert -> self.assertEqual
* programmatic example --> fixed example
* same name for both
Co-authored-by: Alexey Volkov <alexey.volkov@ark-kun.com>
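A minimal example of the keyword-only pipeline signature being tested here; the bare `*` forces callers to pass arguments by name:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='kwargs-only-example')
def my_pipeline(*, msg: str = 'hello', count: int = 1):
    # The compiled YAML should be identical to the positional-args variant.
    pass
```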
* add placeholder to spec
* add output_directory to pipeline
* respect uri placeholder in file outputs
* wip: add data passing rewriting logic to respect the uri semantics
* merge input_uri and paths when instantiating ContainerOp
* fix
* fix workflow rewriting
* Add topology rewriting
* add a test case, and various fixes
* make the test case more complex
* Fix the case when working with OpsGroup
* Fix test case
* fix resolving test
* fix redundant cmd lines
* fix redundant cmd lines
* resolve comments
* fix file outputs
* resolve comments
* copy file outputs instead of modifying inplace.
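An illustrative component using the URI placeholders that this series teaches the compiler to respect; the component definition itself is hypothetical:

```python
from kfp.components import load_component_from_text

# {inputUri: ...} and {outputUri: ...} resolve to artifact URIs rather
# than local file paths.
copy_op = load_component_from_text('''
name: Copy by URI
inputs:
- {name: data}
outputs:
- {name: out}
implementation:
  container:
    image: alpine
    command: [sh, -c, 'echo "in: $0, out: $1"', {inputUri: data}, {outputUri: out}]
''')
```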
* feat(sdk): add ability to set retry policy
This fixes the second part of the issue described in #4333
The first part was addressed in #4392
* feat(sdk): validate retry policy name
* feat(sdk): simplify retry policy interface
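For reference, a sketch of the simplified interface with an explicit policy; the accepted names are assumed to mirror Argo's retry policies ('Always', 'OnFailure', 'OnError', 'OnTransientError'):

```python
import kfp.dsl as dsl

task = dsl.ContainerOp(name='flaky', image='alpine', command=['false'])
# An unknown policy name should now fail validation at authoring time.
task.set_retry(num_retries=2, policy='OnError')
```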
ContainerOp has no concept of inputs, so it loses any information about them, such as input names and, in some cases, even the passed argument values (which are just injected into the command line).
This commit fixes that issue by preserving the parameter arguments map and ultimately storing it in an Argo template annotation.
Fixes https://github.com/kubeflow/pipelines/issues/4556
* SDK - Compiler - Fixed the input argument mapping when using dsl.graph_component
Fixes https://github.com/kubeflow/pipelines/issues/3915
* Stopped relying on the argument order at all
This can make the compilation less fragile.
* SDK - Compiler - Added support for volume-based data passing
Currently artifact passing is performed by Argo sidecar containers that download input data and upload output data to the artifact repository (usually S3-compatible blob storage like Minio).
The performance of this method is not optimal and it requires that pod disks have enough capacity to hold all artifact data.
This commit adds support for volume-based data passing.
This method involves using a single multi-write Kubernetes data volume to pass all intermediate data.
Parts of the volume are mounted to the input/output artifact directories, so when the user program reads and writes files, the files actually reside in the data volume.
This method improves the performance and reduces storage resource requirements.
The data volume must exist and support "ReadWriteMany".
Limitations:
* All artifact file names must be the same (e.g. "data"). All auto-generated paths are already consistent. Avoid using any hard-coded paths.
* Passing constant values (text) as arguments for artifact inputs is not supported.
* The feature is experimental.
* Added data_passing_methods.KubernetesVolume
This class represents a configured volume-based artifact passing method.
* Added PipelineConf.data_passing_method
This property allows setting the method that will be used for intermediate data passing.
Added the compiler support for the new feature.
Example:
```python
from kfp.dsl import PipelineConf, data_passing_methods
from kubernetes.client.models import V1Volume, V1PersistentVolumeClaimVolumeSource

pipeline_conf = PipelineConf()
pipeline_conf.data_passing_method = data_passing_methods.KubernetesVolume(
    volume=V1Volume(
        name='data',
        persistent_volume_claim=V1PersistentVolumeClaimVolumeSource(
            claim_name='data-volume'),
    ),
    path_prefix='artifact_data/',
)
```
* Added unit test
* Fixed bug in the unit test
Kubernetes does not validate the structures at all...
* Fixed bug in the result structure
* Fixed the test data
The class should be V1PersistentVolumeClaimVolumeSource, not V1PersistentVolumeClaimSpec.
* Fixed the test
* add OOB component dict and utility function
* add test
* add a transformer, which appends the component name label
* add transformer function, compiler and test
* move telemetry test
* fix none uri
* applies comments
* revert dependency on frozendict
* fixes some tests
* resolve comments
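A hedged sketch of wiring such a transformer through the existing `PipelineConf.add_op_transformer` hook; the label key is illustrative, not necessarily the exact one added here:

```python
import kfp.dsl as dsl

def add_component_name_label(op):
    # Attach the component name as a pod label so telemetry can pick it up.
    op.add_pod_label('component-name', op.name)
    return op

pipeline_conf = dsl.PipelineConf()
pipeline_conf.add_op_transformer(add_component_name_label)
```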
* SDK - Annotate pods with component_ref
This preserves the information about the digest of the component and the location from which the component was loaded.
* Fixed compiler tests
* SDK - Compiler - Fixed ParallelFor name clashes
The ParallelFor argument reference resolving was really broken.
The logic "worked" like this: if the name of the referenced output
contained the name of the loop collection source output, then it was
considered to be a reference to the loop item.
This broke lots of scenarios, especially in cases where there were
multiple components with the same output name (e.g. the default "Output"
output name). The logic also did not distinguish between references to
the loop collection item vs. references to the loop collection source
itself.
I've rewritten the argument resolving logic, to fix the issues.
* Argo cannot use {{item}} when withParams items are dicts
* Stabilize the loop template names
* Renamed the test case
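A sketch of the pattern affected by the fix; both components are defined inline for illustration:

```python
import kfp.dsl as dsl
from kfp.components import load_component_from_text

produce_op = load_component_from_text('''
name: Produce list
outputs:
- {name: Output}
implementation:
  container:
    image: alpine
    command: [sh, -c, 'echo "[1, 2, 3]" > "$0"', {outputPath: Output}]
''')

consume_op = load_component_from_text('''
name: Consume item
inputs:
- {name: item}
implementation:
  container:
    image: alpine
    command: [echo, {inputValue: item}]
''')

@dsl.pipeline(name='parallelfor-example')
def my_pipeline():
    source = produce_op()
    # `item` must resolve to the loop item, not to `source.output` itself,
    # even when multiple components use the default "Output" name.
    with dsl.ParallelFor(source.output) as item:
        consume_op(item)
```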
* SDK - Improve errors when ContainerOp.output is unavailable
ContainerOp.output is only available when there is exactly one output.
Right now, when there are multiple outputs it just holds `None` instead of a task output reference.
In this case, however, it is indistinguishable from just passing a None argument.
This PR gives a quick fix that makes accessing the nonexistent `.output` a compile-time error.
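A small sketch of the behavior change; the two-output component is hypothetical:

```python
from kfp.components import load_component_from_text

two_outputs_op = load_component_from_text('''
name: Two outputs
outputs:
- {name: a}
- {name: b}
implementation:
  container:
    image: alpine
    command: [sh, -c, 'echo 1 > "$0"; echo 2 > "$1"', {outputPath: a}, {outputPath: b}]
''')

task = two_outputs_op()
task.outputs['a']  # OK: reference an output by name
# task.output      # previously silently None; now an error at compile time
```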
* Fixed the implementation and added tests
* Trigger retests