* Reformat sdk only using the new yapf config.
* Reformat docstrings using docformatter.
* update golden files to resolve diffs caused by whitespace changes
* fix some tests
* format .py files under sdk/python/tests using yapf
* additional docformatter
* fix some tests
* add local runner which will run ops in docker or locally
* use str.format rather than f-string
* add some brief doc string in local client
* comment out the unit test that runs an op in Docker, which is not supported in the CI environment for now
* Add some brief docstring about DAG used in local client
* make graph/reverse_graph of DAG properties to keep them in sync
* make some methods of LocalClient static
* remove circular reference in local client
* Encapsulate the artifact storage root in the constructor of LocalClient
* Add Alpha notice for kfp.run_pipeline_func_locally
* Support list of local images in kfp.run_pipeline_func_locally
* make staticmethod to module level private method
* Trivial modifications from code review: some renaming and docstring updates
* local runner supports components without '--' as the argument prefix
* make output file of op in loop unique
* Local runner decides whether to run a component in Docker or in a local process based on ExecutionMode
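The Docker-vs-local decision described in the bullet above can be sketched roughly as follows. The `ExecutionMode` values and the `run_op` dispatcher here are illustrative stand-ins, not the actual SDK API:

```python
from enum import Enum


class ExecutionMode(Enum):
    # Illustrative modes; the real SDK enum may differ.
    DOCKER = "docker"
    LOCAL = "local"


def run_op(op_name: str, mode: ExecutionMode) -> str:
    """Dispatch an op to a Docker container or a local subprocess (sketch)."""
    if mode is ExecutionMode.DOCKER:
        return f"running {op_name} in a docker container"
    return f"running {op_name} as a local process"


print(run_op("train", ExecutionMode.LOCAL))
```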
* SDK - Compiler - Allow creating portable pipelines
This change allows passing the PipelineConf instance directly to the compiler or launcher, which makes it easier to create portable pipelines by letting environment-specific configuration be passed directly to the environment-specific launcher.
Background:
PipelineConf holds all pipeline-level configuration, including `op_transformers`, `image_pull_secrets`, etc. Some of these are specific to a particular execution environment (e.g. a GCP secret, an Argo artifact location, or Kubernetes-specific options).
Previously, the only way to modify `PipelineConf` was to do it inside the pipeline function. That tied the pipeline function to a specific execution environment (e.g. GCP, Argo or Kubernetes).
Solution: Allow passing the PipelineConf instance directly to the compiler or launcher. This makes it possible to write environment-agnostic pipeline functions and move all environment-specific configuration to the launching stage.
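To illustrate the op-transformer mechanism that PipelineConf carries, here is a minimal sketch. The `Op` and `PipelineConf` classes below are simplified stand-ins that mimic the KFP SDK pattern (a transformer is a callable applied to every op), not the real implementation, and `add_team_label` is a hypothetical transformer:

```python
class Op:
    """Stand-in for a pipeline op (e.g. a ContainerOp)."""

    def __init__(self, name):
        self.name = name
        self.labels = {}


class PipelineConf:
    """Stand-in for pipeline-level configuration."""

    def __init__(self):
        self.op_transformers = []

    def add_op_transformer(self, transformer):
        # A transformer is a callable applied to every op at compile time.
        self.op_transformers.append(transformer)
        return self


def add_team_label(op):
    # Example transformer: attach an environment-specific label to each op.
    op.labels["team"] = "ml-platform"
    return op


conf = PipelineConf()
conf.add_op_transformer(add_team_label)

op = Op("train")
for transform in conf.op_transformers:
    op = transform(op)
print(op.labels)  # {'team': 'ml-platform'}
```

Because the transformer is attached to the conf object rather than called inside the pipeline function, the same pipeline can be launched with different conf objects.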
Before:
```python
# Defining pipeline
def my_pipeline():
    # portable pipeline code
    dsl.get_pipeline_conf().add_op_transformer(gcp.use_gcp_secret('user-gcp-sa'))

# Launching pipeline
kfp.Client().create_run_from_pipeline_func(my_pipeline, arguments={})
```
After:
```python
# Defining pipeline
def my_pipeline():
    # portable pipeline code
    ...

# Launching pipeline
pipeline_conf = dsl.PipelineConf()
pipeline_conf.add_op_transformer(gcp.use_gcp_secret('user-gcp-sa'))
kfp.Client().create_run_from_pipeline_func(my_pipeline, arguments={}, pipeline_conf=pipeline_conf)
```
After 2 (launching the same portable pipeline using different launchers):
```python
# Loading portable pipeline
from portable_pipeline import my_pipeline
# Launching pipeline on Kubeflow
pipeline_conf = dsl.PipelineConf()
pipeline_conf.add_op_transformer(gcp.use_gcp_secret('user-gcp-sa'))
kfp.Client().create_run_from_pipeline_func(my_pipeline, arguments={}, pipeline_conf=pipeline_conf)
# Launching pipeline locally (not implemented yet)
kfp.run_pipeline_func_locally(my_pipeline, arguments={})
```
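The "portable pipeline" split above can be sketched with plain Python. The module and function names (`portable_pipeline`, `launch_with_conf`) are illustrative, not part of the SDK; the point is that the pipeline module contains no environment-specific configuration, and each launcher supplies its own:

```python
# portable_pipeline.py (sketch): defines only the pipeline function;
# no environment-specific configuration appears in this module.
def my_pipeline():
    # portable pipeline code only
    return "pipeline steps"


# launcher (sketch): each environment-specific launcher pairs the portable
# pipeline with its own configuration at launch time.
def launch_with_conf(pipeline_func, conf):
    # A real launcher would compile and submit; here we just record the pairing.
    return (pipeline_func.__name__, conf)


print(launch_with_conf(my_pipeline, conf={"op_transformers": ["use_gcp_secret"]}))
```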
* Added parameter docstring
This commit adds an alias for the kfp.Client.create_run_from_pipeline_func method as the root-level kfp.run_pipeline_func_on_cluster function.
In the future, more runners can be added (local, etc.).
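The alias pattern described above can be sketched as a thin module-level wrapper. The `Client` class and return value here are simplified stand-ins, not the real KFP client:

```python
class Client:
    """Stand-in for kfp.Client; the real client submits runs to a cluster."""

    def create_run_from_pipeline_func(self, pipeline_func, arguments):
        return f"submitted {pipeline_func.__name__} with {arguments}"


def run_pipeline_func_on_cluster(pipeline_func, arguments, client=None):
    """Module-level convenience wrapper around Client.create_run_from_pipeline_func."""
    client = client or Client()
    return client.create_run_from_pipeline_func(pipeline_func, arguments)


def my_pipeline():
    pass


print(run_pipeline_func_on_cluster(my_pipeline, arguments={}))
```

Exposing the method as a module-level function keeps the call site one line and leaves room for sibling runners (e.g. a local one) behind the same naming scheme.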