* fix(sdk): compile ParallelFor in a deterministic manner
During compilation, ParallelFor components end up with randomized names,
which makes it very inconvenient to compare two versions of a pipeline.
This commit fixes this issue.
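For illustration, a minimal sketch of the property this enables (the pipeline and the metadata-stripping step are assumptions based on the tests mentioned below, not code from this commit):

```python
import kfp
import kfp.dsl as dsl
import yaml

@dsl.pipeline(name="loop-example")
def loop_pipeline():
    with dsl.ParallelFor([1, 2, 3]) as item:
        dsl.ContainerOp(name="echo", image="alpine",
                        command=["echo", item])

def compile_to_dict(path):
    kfp.compiler.Compiler().compile(loop_pipeline, path)
    with open(path) as f:
        workflow = yaml.safe_load(f)
    # The compiler stamps a compilation-time annotation, so drop the
    # metadata section before comparing (as the tests below also do).
    del workflow["metadata"]
    return workflow

# With deterministic ParallelFor naming, two compilations are identical:
assert compile_to_dict("a.yaml") == compile_to_dict("b.yaml")
```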
* fix(sdk): fix new parallel-for test cases
* add test for keyword-only arguments in pipeline func
* fix: kwargs-only argument for pipeline func
* test: kwargs generate same yaml as args
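A sketch of the case these tests cover (names are illustrative): a pipeline function whose parameters are keyword-only, which should compile to the same YAML as the positional form:

```python
import kfp.dsl as dsl

@dsl.pipeline(name="kwargs-example")
def kwargs_pipeline(*, msg: str = "hello"):  # bare * makes msg keyword-only
    dsl.ContainerOp(name="echo", image="alpine",
                    command=["echo", msg])
```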
* remove whole metadata
* assert -> self.assertEqual
* programmatic example --> fixed example
* same name for both
Co-authored-by: Alexey Volkov <alexey.volkov@ark-kun.com>
* SDK - Components - Fixed python components that use \n
The escape sequence was being replaced by the `echo` command.
Apparently, unlike the `bash` built-in, the `echo` command of the `sh` shell expands escape sequences by default and does not support an option to turn this off. (For some reason the `-n` option works properly even though it should not.)
Fixes https://github.com/kubeflow/pipelines/issues/4939
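A minimal repro of the behavior (assumes `/bin/sh` is a POSIX shell such as dash; `printf '%s'` is shown as one portable way to pass the source through verbatim):

```python
import subprocess

code = r"print('line1\nline2')"

echoed = subprocess.run(["sh", "-c", 'echo "$0"', code],
                        capture_output=True, text=True).stdout
printed = subprocess.run(["sh", "-c", 'printf "%s" "$0"', code],
                         capture_output=True, text=True).stdout

print(repr(echoed))   # with dash: the \n was expanded into a real newline
print(repr(printed))  # the Python source survives byte-for-byte
```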
* Fixed the test data
* Fixed the deprecated container component builder
* Fixed the new compiler test case
* Added test
The component specification has always supported component annotations, but there was no way to specify them for components generated from Python. This PR fixes that.
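Hypothetical usage, assuming the fix exposes an `annotations` argument on the Python component factory (the exact parameter name is an assumption based on the description above):

```python
from kfp.components import create_component_from_func

def add(a: float, b: float) -> float:
    return a + b

# `annotations` here is assumed; the values end up in the metadata of
# the generated component specification.
add_op = create_component_from_func(
    add,
    annotations={"author": "Jane Doe <jane@example.com>"},
)
```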
* add placeholder to spec
* add output_directory to pipeline
* respect uri placeholder in file outputs
* wip: add data passing rewriting logic to respect the uri semantics
* merge input_uri and paths when instantiating ContainerOp
* fix
* fix workflow rewriting
* Add topology rewriting
* add a test case, and various fixes
* make the test case more complex
* Fix the case when working with OpsGroup
* Fix test case
* fix resolving test
* fix redundant cmd lines
* fix redundant cmd lines
* resolve comments
* fix file outputs
* resolve comments
* copy file outputs instead of modifying them in place.
When calling the delete() method of a ResourceOp we need to ensure we do
not wait for its deletion.
The reason for this is described in [1]: If a pipeline creates a
resource which is being consumed by its steps (e.g., a PVC), the step
deleting the resource will hang waiting for the Kubernetes resource
deletion which, in turn, is waiting for the other steps to get deleted.
As a result, the pipeline never finishes.
This commit allows specifying flags for the ResourceOp kubectl commands
and defaults to the '--wait=false' flag for the deletion.
Specifying flags for a ResourceTemplate is not supported in Argo v2.7, which we currently deploy, but it will be once we upgrade to v2.11+ [2]. This does not affect the delete() method because we don't rely on Argo's ResourceTemplate for it.
[1] https://github.com/kubeflow/pipelines/issues/4506
[2] https://github.com/kubeflow/pipelines/issues/4553
Signed-off-by: Ilias Katsakioris <elikatsis@arrikto.com>
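A sketch of the scenario from [1] and the new default behavior (the VolumeOp/ContainerOp usage is illustrative, not taken from this commit):

```python
import kfp.dsl as dsl

@dsl.pipeline(name="pvc-lifecycle")
def pvc_pipeline():
    vop = dsl.VolumeOp(name="create-pvc",
                       resource_name="my-pvc",
                       size="1Gi")

    step = dsl.ContainerOp(name="consumer",
                           image="alpine",
                           command=["sh", "-c", "echo hi > /data/out.txt"],
                           pvolumes={"/data": vop.volume})

    # delete() now runs `kubectl delete` with `--wait=false` by default,
    # so this step does not hang waiting for the PVC (still mounted by
    # `step`) to be finalized.
    vop.delete().after(step)
```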
* allow calling ContainerOp.after with multiple ops
* m: simplify signature of OpsGroup.after
Since it's not public anyway, the chances of calling it with a keyword argument are small compared to calling it by unpacking an arbitrary list, which could be empty (the previous signature would fail for an empty list).
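A minimal sketch of the new call style:

```python
import kfp.dsl as dsl

@dsl.pipeline(name="after-example")
def after_pipeline():
    op1 = dsl.ContainerOp(name="a", image="alpine", command=["true"])
    op2 = dsl.ContainerOp(name="b", image="alpine", command=["true"])
    op3 = dsl.ContainerOp(name="c", image="alpine", command=["true"])
    # `after` now accepts several upstream ops at once; unpacking an
    # empty list is also valid and simply adds no dependencies.
    op3.after(op1, op2)
```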
Currently we're running the Python code inline using `python -c <code>`.
This has two issues:
1) Python does not show the source code line in exception stack traces.
2) `inspect.getsource` does not work. This method is used in PyTorch JIT, for example.
We solve these issues by writing the code into a file before executing it.
The disadvantage of the new approach is that it adds complexity and a filesystem write operation, and it requires the `sh` executable to be present (we could replace it with a Python-based program if needed).
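A sketch of the write-to-a-file launcher described above (the exact command the SDK generates may differ; this just demonstrates the mechanism, and `printf` also sidesteps the `echo` escape-sequence expansion fixed earlier):

```python
import subprocess

python_source = "import sys\nprint('args:', sys.argv[1:])"

command = [
    "sh", "-ec",
    'program_path=$(mktemp)\n'
    'printf "%s" "$0" > "$program_path"\n'
    'python3 -u "$program_path" "$@"\n',
    python_source,      # becomes $0 inside the sh script
    "--input", "x",     # remaining args are forwarded to the program as $@
]
subprocess.run(command, check=True)
```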
* feat(sdk): add ability to set retry policy
This fixes the second part of the issue described in #4333.
The first part was addressed in #4392.
* feat(sdk): validate retry policy name
* feat(sdk): simplify retry policy interface
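Hypothetical usage of the resulting interface (the `policy` argument name and its values are assumptions based on Argo's retryStrategy, not verified against this commit):

```python
import kfp.dsl as dsl

@dsl.pipeline(name="retry-example")
def retry_pipeline():
    op = dsl.ContainerOp(name="flaky", image="alpine",
                         command=["sh", "-c", "exit 1"])
    # Assumed mapping: `policy` corresponds to Argo's
    # retryStrategy.retryPolicy ("Always", "OnError", "OnFailure").
    op.set_retry(num_retries=3, policy="Always")
```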