
# Kubeflow pipeline components

Kubeflow pipeline components are implementations of Kubeflow pipeline tasks. Each task takes one or more artifacts as input and may produce one or more artifacts as output.

Example: XGBoost DataProc components

Each task usually includes two parts:

* **Client code**: the code that talks to service endpoints to submit jobs. For example, code that calls the Google Cloud Dataproc API to submit a Spark job.
* **Runtime code**: the code that does the actual work and usually runs in the cluster. For example, Spark code that transforms raw data into preprocessed data.

A **container** image packages and runs the client code.
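To illustrate the client-code side, here is a minimal, hypothetical `mytask.py` sketch. The file name, flags, and the `build_job_spec` helper are illustrative assumptions, not code from this repository; a real client would call the service API instead of printing the request.

```python
import argparse
import json


def build_job_spec(project: str, region: str, main_file: str) -> dict:
    """Build the request body the client code would submit to a job API.

    The shape loosely mirrors a Dataproc Spark job request, but the exact
    fields here are illustrative, not the real API schema.
    """
    return {
        "projectId": project,
        "region": region,
        "job": {"sparkJob": {"mainPythonFileUri": main_file}},
    }


def main(argv=None):
    # Client code: parse the task parameters, build the request, and (in a
    # real component) call the service endpoint to submit the job.
    parser = argparse.ArgumentParser(description="Submit a job (sketch)")
    parser.add_argument("--project", required=True)
    parser.add_argument("--region", default="us-central1")
    parser.add_argument("--main-file", required=True)
    args = parser.parse_args(argv)

    spec = build_job_spec(args.project, args.region, args.main_file)
    # A real client would submit `spec` to the service and poll for
    # completion; this sketch just prints the request it would send.
    print(json.dumps(spec, indent=2))


if __name__ == "__main__":
    main(["--project", "my-project", "--main-file", "gs://bucket/task.py"])
```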

Note the naming convention for client code and runtime code. For a task named "mytask":

* The `mytask.py` program contains the client code.
* The `mytask` directory contains all the runtime code.
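Following this convention, the layout for a task named "mytask" would look like this (a sketch; the contents of the runtime directory vary by task):

```text
mytask.py   # client code: submits the job to the service endpoint
mytask/     # runtime code: runs in the cluster
```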

See how to use the Kubeflow Pipelines SDK and build your own components.
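For reference, a component packaged this way is typically described to the SDK with a component specification. A minimal, hypothetical `component.yaml` sketch is shown below; the component name, image, paths, and parameters are assumptions for illustration, not taken from this repository.

```yaml
name: My task
description: Submits a job using the client code in mytask.py.
inputs:
- {name: project, type: String}
- {name: main_file, type: String}
outputs:
- {name: job_id, type: String}
implementation:
  container:
    # Hypothetical image built from the client code via build_image.sh
    image: gcr.io/my-project/mytask:latest
    command: [python, /ml/mytask.py]
    args: [
      --project, {inputValue: project},
      --main-file, {inputValue: main_file},
      --job-id-output-path, {outputPath: job_id},
    ]
```

The `{inputValue: ...}` and `{outputPath: ...}` placeholders are how the pipeline system wires pipeline parameters and output artifact paths into the container's command line.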