* fix(sdk): compile ParallelFor in a deterministic manner
During compilation, ParallelFor components end up with randomized names,
which makes it very inconvenient to compare two compiled versions of a pipeline.
This commit makes the generated names deterministic, fixing the issue.
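A hypothetical sketch of the approach (not the SDK's actual internals): derive the unique suffix from the loop's content instead of a random ID, so recompiling the same pipeline yields the same names:
```python
import hashlib

def _deterministic_group_suffix(loop_args_repr: str) -> str:
    # Hypothetical helper: hash the textual form of the loop arguments
    # instead of generating a random UUID, so two compilations of an
    # identical pipeline produce identical ParallelFor group names.
    return hashlib.sha256(loop_args_repr.encode('utf-8')).hexdigest()[:8]
```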
* fix(sdk): fix new parallel-for test cases
* add test for keyword-only arguments in pipeline func
* fix: kwargs-only argument for pipeline func
* test: kwargs generate same yaml as args
* remove whole metadata
* assert -> self.assertEqual
* programmatic example --> fixed example
* same name for both
Co-authored-by: Alexey Volkov <alexey.volkov@ark-kun.com>
* SDK - Components - Fixed python components that use \n
The escape sequence was being replaced by the `echo` command.
Apparently, unlike in the `bash` shell, the `echo` command of the `sh` shell expands escape sequences by default and does not support an option to turn that off. (For some reason, the `-n` option works properly even though it should not.)
Fixes https://github.com/kubeflow/pipelines/issues/4939
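A small reproduction of the difference, shown via Python's subprocess (assumes a POSIX `/bin/sh` such as dash):
```python
import subprocess

code = 'print("line1\\nline2")'  # program text containing a literal \n escape

# sh's echo expands \n into a real newline, corrupting the program text:
print(subprocess.run(['sh', '-c', 'echo "$0"', code],
                     capture_output=True, text=True).stdout)

# bash's echo leaves the escape sequence intact:
print(subprocess.run(['bash', '-c', 'echo "$0"', code],
                     capture_output=True, text=True).stdout)
```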
* Fixed the test data
* Fixed the deprecated container component builder
* Fixed the new compiler test case
* Added test
* add placeholder to spec
* add output_directory to pipeline
* respect uri placeholder in file outputs
* wip: add data passing rewriting logic to respect the uri semantics
* merge input_uri and paths when instantiating ContainerOp
* fix
* fix workflow rewriting
* Add topology rewriting
* add a test case, and various fixes
* make the test case more complex
* Fix the case when working with OpsGroup
* Fix test case
* fix resolving test
* fix redundant cmd lines
* fix redundant cmd lines
* resolve comments
* fix file outputs
* resolve comments
* copy file outputs instead of modifying inplace.
When calling the delete() method of a ResourceOp we need to ensure we do
not wait for its deletion.
The reason for this is described in [1]: If a pipeline creates a
resource which is being consumed by its steps (e.g., a PVC), the step
deleting the resource will hang waiting for the Kubernetes resource
deletion which, in turn, is waiting for the other steps to get deleted.
As a result, the pipeline never finishes.
This commit allows specifying flags for the ResourceOp kubectl commands
and defaults to the '--wait=false' flag for the deletion.
Specifying flags for a ResourceTemplate is not supported in Argo v2.7,
which we currently deploy, but it will be supported once we upgrade to
v2.11+ [2]. This does not affect the delete() method because we don't
rely on Argo's ResourceTemplate for it.
[1] https://github.com/kubeflow/pipelines/issues/4506
[2] https://github.com/kubeflow/pipelines/issues/4553
Signed-off-by: Ilias Katsakioris <elikatsis@arrikto.com>
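A sketch of the usage this enables (the pipeline itself is illustrative; the `delete()` defaulting to '--wait=false' is per this commit's description):
```python
import kfp.dsl as dsl

@dsl.pipeline(name='pvc-cleanup-example')
def pvc_pipeline():
    vop = dsl.VolumeOp(
        name='create-volume',
        resource_name='my-pvc',
        size='1Gi',
    )
    consumer = dsl.ContainerOp(
        name='consumer',
        image='library/bash:4.4.23',
        command=['sh', '-c', 'echo hello > /data/out.txt'],
        pvolumes={'/data': vop.volume},
    )
    # Delete the PVC after the consumer finishes. The deletion defaults
    # to kubectl's '--wait=false', so this step does not hang waiting
    # for the resource to be gone (see [1] above).
    delete_op = vop.delete()
    delete_op.after(consumer)
```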
Currently we're running the Python code inline using `python -c <code>`.
This has two issues:
1) Python does not show source code line in exception stack traces
2) inspect.getsource does not work. This method is used in PyTorch JIT for example.
We solve these issues by writing the code into a file before executing it.
The disadvantage of the new approach is that it adds complexity, a filesystem write operation and also requires the `sh` executable to be present (we could replace it with python-based program if needed).
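The generated container command now looks roughly like this (a sketch; `program_source` is a stand-in for the generated program text):
```python
program_source = 'print("hello")'  # stand-in for the generated program text

# Write the program to a temp file and run it from there, so tracebacks
# show source lines and inspect.getsource() works inside the program.
command = [
    'sh', '-ec',
    'program_path=$(mktemp)\n'
    'printf "%s" "$0" > "$program_path"\n'
    'python3 -u "$program_path" "$@"\n',
    program_source,  # becomes $0 inside the sh script
    # ...component arguments follow and become "$@"
]
```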
* feat(sdk): add ability to set retry policy
This fixes the second part of the issue described in #4333
The first part was addressed in #4392
* feat(sdk): validate retry policy name
* feat(sdk): simplify retry policy interface
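Usage looks like this (policy names follow Argo's retry policies, e.g. 'Always', 'OnError', 'OnFailure'):
```python
import kfp.dsl as dsl

@dsl.pipeline(name='retry-example')
def retry_pipeline():
    flaky = dsl.ContainerOp(
        name='flaky-step',
        image='python:3.7',
        command=['python', '-c', 'import sys; sys.exit(1)'],
    )
    # Retry the task up to 3 times; the policy name is validated by the SDK.
    flaky.set_retry(num_retries=3, policy='Always')
```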
* Compile IR proto in setup.py
* compile to IR
* Fix importer node logic and lint
* cleanup and lint
* merge, undo setup.py change
* cleanup and lint
* remove currently unused code
* format _component_bridge.py
* cleanup and format
* cleanup
* upgrade protobuf in test
* restructure and test
* address review comments
* fix bug
* avoid f-strings formatting
* address review comments
* address review comments
* limit the primitive types to only int, double, and string.
* Fix test for python3.5
* use instance_schema instead of schema_title
* add v2 to setup.py
* address review comments
* move the tests closer to the code
* add more tests
* cleanup and linting
* add more tests
* fix bug on input parameter connection
* linting
* restructure tests
* fix python3.5 test failure
* support outputs.parameters placeholder
* remove pipeline decorator from v2.dsl
ContainerOp has no concept of inputs, so it loses any information about them, such as input names and, in some cases, even the passed argument values (which are just injected into the command line).
This commit fixes that issue by preserving the parameter arguments map and ultimately storing it in an Argo template annotation.
Fixes https://github.com/kubeflow/pipelines/issues/4556
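Roughly what gets preserved (a sketch; the argument values are made up):
```python
import json

# The user wrote e.g.: task = my_op(model_name='my-model', epochs=10)
# The compiler now records that map on the Argo template, roughly as:
annotations = {
    'pipelines.kubeflow.org/arguments.parameters': json.dumps({
        'model_name': 'my-model',
        'epochs': '10',
    }),
}
```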
* add tests for pythonic and non-pythonic component outputs
* fix: graph for non-pythonic container output's names
Loading a container component from component.yaml creates both
pythonic and original output names. The graph component iterated over
all outputs, applying the pythonic-to-original name conversion to each.
If some of the names were not identical to their pythonic versions,
the lookup table raised a KeyError.
This commit fixes the problem by using a default value for the lookup.
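The fix boils down to a defaulted dictionary lookup (table and names below are illustrative):
```python
# Before: a plain lookup raised KeyError when an output had no entry in
# the conversion table:
#     original_name = pythonic_to_original[name]
# After: default to the name itself when there is nothing to convert:
pythonic_to_original = {'model_uri': 'Model URI'}
name = 'output_1'
original_name = pythonic_to_original.get(name, name)
```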
* remove depythonification of outputs - not needed anymore
* SDK - Added warning when not using components
We have long advised our users to create reusable components.
Creating reusable components is as easy as creating ContainerOp instances, but components are shareable, portable, and easier to support going forward.
* Disable warning for TFX
* Fixed the warning disabling logic
* Added tests
* SDK - Compiler - Fixed the input argument mapping when using dsl.graph_component
Fixes https://github.com/kubeflow/pipelines/issues/3915
* Stopped relying on the argument order at all
This can make the compilation less fragile.
* SDK - Tests - Restored the ParallelFor compiler test data
Fixes https://github.com/kubeflow/pipelines/issues/4102
* Removed the pipeline-sdk-type annotations
* Fixed the test_artifact_passing_using_volume test data
* SDK - Compiler - Added support for volume-based data passing
Currently artifact passing is performed by Argo sidecar containers that download input data and upload output data to an artifact repository (usually S3-compatible blob storage like Minio).
The performance of this method is not optimal and it requires that pod disks have enough capacity to hold all artifact data.
This commit adds support for volume-based data passing.
This method involves using a single multi-write Kubernetes data volume to pass all intermediate data.
Parts of the volume are mounted to the input/output artifact directories, so when the user program reads and writes files, the files actually reside in the data volume.
This method improves the performance and reduces storage resource requirements.
The data volume must already exist and must support the "ReadWriteMany" access mode.
Limitations:
* All artifact file names must be the same (e.g. "data"). All auto-generated paths are already consistent. Avoid using any hard-coded paths.
* Passing constant values (text) as arguments for artifact inputs is not supported.
* The feature is experimental.
* Added data_passing_methods.KubernetesVolume
This class represents a configured volume-based artifact passing method.
* Added PipelineConf.data_passing_method
This property allows setting the method that will be used for intermediate data passing.
Added the compiler support for the new feature.
Example:
```python
from kfp.dsl import PipelineConf, data_passing_methods
from kubernetes.client.models import V1Volume, V1PersistentVolumeClaimVolumeSource

pipeline_conf = PipelineConf()
pipeline_conf.data_passing_method = data_passing_methods.KubernetesVolume(
    volume=V1Volume(
        name='data',
        persistent_volume_claim=V1PersistentVolumeClaimVolumeSource(
            claim_name='data-volume',
        ),
    ),
    path_prefix='artifact_data/',
)
```
* Added unit test
* Fixed bug in the unit test
Kubernetes does not validate the structures at all...
* Fixed bug in the result structure
* Fixed the test data
The class should be V1PersistentVolumeClaimVolumeSource, not V1PersistentVolumeClaimSpec.
* Fixed the test
Previously the default image was set to an old version of the TensorFlow image. That image is now outdated; it's also framework-specific and pretty big.
We're switching to the official Python image, which is small and framework-agnostic.
Users can easily switch back to the old behavior by specifying `base_image='tensorflow/tensorflow:1.13.2-py3'` during component creation.
* SDK - Components - Stabilize JSON serialization by sorting keys
Otherwise serialization of the default values of the component/pipeline inputs is unstable on Python 3.5.
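A minimal illustration of the technique:
```python
import json

# Sorted keys make serialization deterministic regardless of dict
# insertion order (dicts are unordered on Python 3.5):
serialized = json.dumps({'b': 1, 'a': 2}, sort_keys=True)
assert serialized == '{"a": 2, "b": 1}'
```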
* Fixed the test data
* add OOB component dict and utility function
* add test
* add a transformer, which appends the component name label
* add transformer function, compiler and test
* move telemetry test
* fix none uri
* applies comments
* revert dependency on frozendict
* fixes some tests
* resolve comments
In some cases the input and output names need to be converted (for example, the input names need to be converted to python function parameter names).
With naive renaming, multiple inputs might be mapped to the same parameter name in some edge cases. The `generate_unique_name_conversion_table` function creates a correct, collision-free mapping.
However, in some really rare cases the resulting mapping could be confusing since it might rename an input whose name was already a correct parameter name and map a different input name to that parameter. E.g. {'AAA' -> 'aaa', 'aaa' -> 'aaa_2'}.
This PR fixes that. Names that do not change when applying the conversion_func will remain unchanged in the mapping. {'AAA' -> 'aaa_2', 'aaa' -> 'aaa'}.
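A simplified sketch of the fixed behavior (not the SDK's actual implementation):
```python
def generate_unique_name_conversion_table(names, conversion_func):
    table = {}
    taken = set()
    # Pass 1: names that the conversion leaves unchanged keep themselves,
    # e.g. 'aaa' -> 'aaa'.
    for name in names:
        if conversion_func(name) == name:
            table[name] = name
            taken.add(name)
    # Pass 2: convert the remaining names, suffixing to avoid collisions,
    # e.g. 'AAA' -> 'aaa_2'.
    for name in names:
        if name in table:
            continue
        converted = conversion_func(name)
        candidate, i = converted, 2
        while candidate in taken:
            candidate = '{}_{}'.format(converted, i)
            i += 1
        table[name] = candidate
        taken.add(candidate)
    return table
```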
* SDK - Made outputs with original names available in ContainerOp.outputs
Previously, ContainerOp had strict requirements for the output names, so we had to convert all the names before passing them to the ContainerOp constructor. Outputs with non-pythonic names could not be accessed using their original names.
Now that ContainerOp supports arbitrary output names, we use the original output names.
However, to support legacy pipelines, we're also adding output references with pythonic names.
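For example, with a hypothetical component whose output is declared as "Model URI" (loaded here from inline text), both references now resolve:
```python
import kfp.components as comp

my_op = comp.load_component_from_text('''
name: Trainer
outputs:
- {name: Model URI}
implementation:
  container:
    image: python:3.7
    command: [sh, -c, 'echo uri > "$0"', {outputPath: Model URI}]
''')

def pipeline():
    task = my_op()
    task.outputs['Model URI']   # original name, now supported
    task.outputs['model_uri']   # pythonic alias kept for legacy pipelines
```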
* Fixed the compiler test data
* Fixed the duplicate parameter outputs in the compiled workflow
* Fixed long line
* Stabilized the output naming conflict resolution
* Fix case of missing special outputs
* SDK - Components - Calculate component hash digest
The digest is calculated when loading the component from a URL, file, or text.
Slightly refactored component loading - streams are no longer used, only bytes.
TODO: Calculate the digest if missing
TODO: Report possible digest conflicts
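A sketch of the calculation (assuming SHA-256 over the raw component bytes; the helper name is illustrative):
```python
import hashlib

def calculate_component_digest(component_bytes: bytes) -> str:
    # Hash the raw component text bytes; this is why loading was
    # refactored to pass bytes around instead of streams.
    return hashlib.sha256(component_bytes).hexdigest()
```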
* Updated the test graph component
* Using the actual digest in the test
* SDK - Prioritize lib2to3 when stripping type annotations
It's a standard Python library (although not well supported) and it does not leave trailing spaces.
* Fixed compiler test data
* SDK - Annotate pods with component_ref
This preserves the information about the digest of the component and the location from which the component was loaded.
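Roughly what gets attached to each pod (a sketch; values are placeholders):
```python
import json

annotations = {
    'pipelines.kubeflow.org/component_ref': json.dumps({
        'digest': '0123abcd...',                       # placeholder digest
        'url': 'https://example.com/component.yaml',   # placeholder source
    }),
}
```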
* Fixed compiler tests