Restructure samples (#1710)

* restructure samples
* update apiserver and sample test for the new location
@@ -29,7 +29,7 @@ COPY ./samples .
 #I think it's better to just use a shell loop though.
 #RUN for pipeline in $(find . -maxdepth 2 -name '*.py' -type f); do dsl-compile --py "$pipeline" --output "$pipeline.tar.gz"; done
 #The "for" loop breaks on all whitespace, so we either need to override IFS or use the "read" command instead.
-RUN find . -maxdepth 2 -name '*.py' -type f | while read pipeline; do dsl-compile --py "$pipeline" --output "$pipeline.tar.gz"; done
+RUN find . -maxdepth 3 -name '*.py' -type f | while read pipeline; do dsl-compile --py "$pipeline" --output "$pipeline.tar.gz"; done
 
 FROM debian:stretch
 
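The whitespace caveat discussed in the hunk above can be demonstrated with a few lines of plain shell. This is a standalone sketch (the scratch file name is hypothetical), not part of the image build: `$(find ...)` is split on every run of whitespace, so a `for` loop mangles paths containing spaces, while piping into `while read` keeps each line intact.

```shell
#!/bin/sh
# Scratch directory with a file whose name contains a space (hypothetical).
dir=$(mktemp -d)
touch "$dir/my pipeline.py"

# Word splitting: the for loop sees two tokens instead of one path.
for_tokens=$(for f in $(find "$dir" -name '*.py'); do echo "$f"; done | wc -l | tr -d ' ')

# Line-wise reading: the path survives whole, spaces included.
read_paths=$(find "$dir" -name '*.py' | while read -r p; do echo "$p"; done | wc -l | tr -d ' ')

echo "for saw $for_tokens tokens; while-read saw $read_paths path(s)"
rm -rf "$dir"
```

This is why the Dockerfile pipes `find` into `while read` rather than iterating over `$(find ...)` directly.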
@@ -1,32 +1,32 @@
 [
   {
     "name":"[Sample] ML - XGBoost - Training with Confusion Matrix",
-    "description":"A trainer that does end-to-end distributed training for XGBoost models. For source code, refer to https://github.com/kubeflow/pipelines/tree/master/samples/xgboost-spark",
-    "file":"/samples/xgboost-spark/xgboost-training-cm.py.tar.gz"
+    "description":"A trainer that does end-to-end distributed training for XGBoost models. For source code, refer to https://github.com/kubeflow/pipelines/tree/master/samples/core/xgboost-spark",
+    "file":"/samples/core/xgboost-spark/xgboost-training-cm.py.tar.gz"
   },
   {
     "name":"[Sample] ML - TFX - Taxi Tip Prediction Model Trainer",
-    "description":"Example pipeline that does classification with model analysis based on a public tax cab BigQuery dataset. For source code, refer to https://github.com/kubeflow/pipelines/tree/master/samples/tfx",
-    "file":"/samples/tfx/taxi-cab-classification-pipeline.py.tar.gz"
+    "description":"Example pipeline that does classification with model analysis based on a public tax cab BigQuery dataset. For source code, refer to https://github.com/kubeflow/pipelines/tree/master/samples/core/tfx",
+    "file":"/samples/core/tfx/taxi-cab-classification-pipeline.py.tar.gz"
   },
   {
     "name":"[Sample] Basic - Sequential execution",
-    "description":"A pipeline with two sequential steps. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/basic/sequential.py",
-    "file":"/samples/basic/sequential.py.tar.gz"
+    "description":"A pipeline with two sequential steps. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/core/sequential/sequential.py",
+    "file":"/samples/core/sequential/sequential.py.tar.gz"
   },
   {
     "name":"[Sample] Basic - Parallel execution",
-    "description":"A pipeline that downloads two messages in parallel and prints the concatenated result. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/basic/parallel_join.py",
-    "file":"/samples/basic/parallel_join.py.tar.gz"
+    "description":"A pipeline that downloads two messages in parallel and prints the concatenated result. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/core/parallel_join/parallel_join.py",
+    "file":"/samples/core/parallel_join/parallel_join.py.tar.gz"
   },
   {
     "name":"[Sample] Basic - Conditional execution",
-    "description":"A pipeline shows how to use dsl.Condition. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/basic/condition.py",
-    "file":"/samples/basic/condition.py.tar.gz"
+    "description":"A pipeline shows how to use dsl.Condition. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/core/condition/condition.py",
+    "file":"/samples/core/condition/condition.py.tar.gz"
   },
   {
     "name":"[Sample] Basic - Exit Handler",
-    "description":"A pipeline that downloads a message and prints it out. Exit Handler will run at the end. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/basic/exit_handler.py",
-    "file":"/samples/basic/exit_handler.py.tar.gz"
+    "description":"A pipeline that downloads a message and prints it out. Exit Handler will run at the end. For source code, refer to https://github.com/kubeflow/pipelines/blob/master/samples/core/exit_handler/exit_handler.py",
+    "file":"/samples/core/exit_handler/exit_handler.py.tar.gz"
   }
 ]
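The file above is the JSON manifest of pre-compiled samples that the API server loads. A quick shell sketch (the `file` paths are copied from the manifest; the check itself is hypothetical, not part of the PR) verifying that every packaged sample now lives under the new `/samples/core/` prefix:

```shell
#!/bin/sh
# "file" values from the manifest entries above, one per line.
files='/samples/core/xgboost-spark/xgboost-training-cm.py.tar.gz
/samples/core/tfx/taxi-cab-classification-pipeline.py.tar.gz
/samples/core/sequential/sequential.py.tar.gz
/samples/core/parallel_join/parallel_join.py.tar.gz
/samples/core/condition/condition.py.tar.gz
/samples/core/exit_handler/exit_handler.py.tar.gz'

# Count entries that do NOT match the new /samples/core/<sample>/<pkg>.tar.gz layout.
bad=$(printf '%s\n' "$files" | grep -cv '^/samples/core/[^/]*/[^/]*\.tar\.gz$')
echo "non-conforming paths: $bad"
```

A check like this makes it easy to catch a manifest entry that missed the restructure.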
@@ -1,3 +1,35 @@
 The sample pipelines give you a quick start to building and deploying machine learning pipelines with Kubeflow.
 * Follow the guide to [deploy the Kubeflow pipelines service](https://www.kubeflow.org/docs/guides/pipelines/deploy-pipelines-service/).
 * Build and deploy your pipeline [using the provided samples](https://www.kubeflow.org/docs/guides/pipelines/pipelines-samples/).
+
+
+
+
+This page tells you how to use the _basic_ sample pipelines contained in the repo.
+
+## Compile the pipeline specification
+
+Follow the guide to [building a pipeline](https://www.kubeflow.org/docs/guides/pipelines/build-pipeline/) to install the Kubeflow Pipelines SDK and compile the sample Python into a workflow specification. The specification takes the form of a YAML file compressed into a `.tar.gz` file.
+
+For convenience, you can download a pre-compiled, compressed YAML file containing the
+specification of the `core/sequential.py` pipeline. This saves you the steps required
+to compile and compress the pipeline specification:
+[sequential.tar.gz](https://storage.googleapis.com/sample-package/sequential.tar.gz)
+
+## Deploy
+
+Open the Kubeflow pipelines UI, and follow the prompts to create a new pipeline and upload the generated workflow
+specification, `my-pipeline.tar.gz` (example: `sequential.tar.gz`).
+
+## Run
+
+Follow the pipeline UI to create pipeline runs.
+
+Useful parameter values:
+
+* For the "exit_handler" and "sequential" samples: `gs://ml-pipeline-playground/shakespeare1.txt`
+* For the "parallel_join" sample: `gs://ml-pipeline-playground/shakespeare1.txt` and `gs://ml-pipeline-playground/shakespeare2.txt`
+
+## Components source
+
+All samples use pre-built components. The command to run for each container is built into the pipeline file.
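To make the README's "YAML file compressed into a `.tar.gz` file" phrasing concrete: a compiled pipeline package is simply a gzipped tar with the workflow spec inside. A minimal shell sketch (file names and YAML content hypothetical; a real spec comes from `dsl-compile`):

```shell
#!/bin/sh
# Stand-in for a compiled workflow spec (a real one is produced by dsl-compile).
work=$(mktemp -d)
printf 'apiVersion: argoproj.io/v1alpha1\nkind: Workflow\n' > "$work/pipeline.yaml"

# Pack the YAML into the .tar.gz form the pipeline UI expects on upload.
tar -czf "$work/my-pipeline.tar.gz" -C "$work" pipeline.yaml

# List the archive to confirm the spec is inside.
contents=$(tar -tzf "$work/my-pipeline.tar.gz")
echo "$contents"
rm -rf "$work"
```

Unpacking a downloaded package the same way (`tar -xzf`) is a handy way to inspect the generated workflow before uploading it.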
@@ -1,30 +0,0 @@
-
-
-This page tells you how to use the _basic_ sample pipelines contained in the repo.
-
-## Compile the pipeline specification
-
-Follow the guide to [building a pipeline](https://www.kubeflow.org/docs/guides/pipelines/build-pipeline/) to install the Kubeflow Pipelines SDK and compile the sample Python into a workflow specification. The specification takes the form of a YAML file compressed into a `.tar.gz` file.
-
-For convenience, you can download a pre-compiled, compressed YAML file containing the
-specification of the `sequential.py` pipeline. This saves you the steps required
-to compile and compress the pipeline specification:
-[sequential.tar.gz](https://storage.googleapis.com/sample-package/sequential.tar.gz)
-
-## Deploy
-
-Open the Kubeflow pipelines UI, and follow the prompts to create a new pipeline and upload the generated workflow
-specification, `my-pipeline.tar.gz` (example: `sequential.tar.gz`).
-
-## Run
-
-Follow the pipeline UI to create pipeline runs.
-
-Useful parameter values:
-
-* For the "exit_handler" and "sequential" samples: `gs://ml-pipeline-playground/shakespeare1.txt`
-* For the "parallel_join" sample: `gs://ml-pipeline-playground/shakespeare1.txt` and `gs://ml-pipeline-playground/shakespeare2.txt`
-
-## Components source
-
-All samples use pre-built components. The command to run for each container is built into the pipeline file.
samples/ai-platform/ai_platform_training/trainer/__init__.py → samples/contrib/ai-platform/__init__.py (mode changed: executable file → normal file)
Seven binary image files were moved unchanged (sizes: 259 KiB, 72 KiB, 463 KiB, 81 KiB, 144 KiB, 64 KiB, 154 KiB).