This directory contains:

- arena
- aws
- deprecated
- gcp
- ibm-components
- kubeflow
- local
- nuclio
- sample/keras/train_classifier
- OWNERS
- README.md
- build_image.sh
- license.sh
- release.sh
- test_load_all_components.sh
- third_party_licenses.csv
# Kubeflow pipeline components
Kubeflow pipeline components are implementations of Kubeflow pipeline tasks. Each task takes one or more artifacts as input and may produce one or more artifacts as output.
Example: XGBoost DataProc components
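Prebuilt components in this repo ship a `component.yaml` specification that the Kubeflow Pipelines SDK can load directly. The sketch below assumes the kfp v1 SDK; the component URL points at a Dataproc component in this repo, and the parameter names are taken from that component's YAML, so treat both as illustrative rather than exact.

```python
import kfp
from kfp import components, dsl

# Load a prebuilt component from its component.yaml (URL is illustrative;
# any raw component.yaml in this repo can be loaded the same way).
dataproc_submit_spark_op = components.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/master/'
    'components/gcp/dataproc/submit_spark_job/component.yaml')

@dsl.pipeline(name='spark-job', description='Submits one Spark job to Dataproc.')
def spark_pipeline(project_id: str, region: str, cluster_name: str):
    # The loaded factory creates a pipeline task; its inputs and outputs
    # are the artifacts declared in the component specification.
    dataproc_submit_spark_op(
        project_id=project_id,
        region=region,
        cluster_name=cluster_name)

if __name__ == '__main__':
    # Compile to a package that can be uploaded to a Kubeflow Pipelines cluster.
    kfp.compiler.Compiler().compile(spark_pipeline, 'spark_pipeline.tar.gz')
```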
Each task usually includes two parts:

- **Client code**: the code that talks to endpoints to submit jobs. For example, code that talks to the Google Dataproc API to submit a Spark job.
- **Runtime code**: the code that does the actual job and usually runs in the cluster. For example, Spark code that transforms raw data into preprocessed data.

Each task also ships a **container** image that runs the client code; a minimal client-code sketch follows this list.
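To make the split concrete, here is a minimal sketch of client code in the kfp v1 style. The image name, script paths, and file names are placeholders; the real runtime code would live in the task's directory and be baked into the image (for example by `build_image.sh`).

```python
# mytask.py -- client code: defines the pipeline task that launches the
# container holding the runtime code. All names below are placeholders.
import kfp.dsl as dsl

def mytask_op(input_path: str, output_path: str) -> dsl.ContainerOp:
    return dsl.ContainerOp(
        name='mytask',
        # Image built from the runtime code in the mytask/ directory.
        image='gcr.io/<your-project>/mytask:latest',
        command=['python', '/app/main.py'],
        arguments=['--input', input_path, '--output', output_path],
        # Declares the file the runtime code writes as an output artifact.
        file_outputs={'output': '/tmp/output.txt'})
```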
Note the naming convention for client code and runtime code. For a task named "mytask":

- The `mytask.py` program contains the client code.
- The `mytask` directory contains all the runtime code.
See how to use the Kubeflow Pipelines SDK and build your own components.
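For simple components you can skip the separate client/runtime split entirely: the kfp v1 SDK can wrap a plain Python function into a containerized task. A minimal sketch, assuming kfp v1 is installed (`pip3 install kfp`); the function and pipeline names are invented for illustration:

```python
import kfp
from kfp import components, dsl

def add(a: float, b: float) -> float:
    """Runtime logic: runs inside a container generated by the SDK."""
    return a + b

# Wrap the function as a reusable component (a task factory).
add_op = components.func_to_container_op(add)

@dsl.pipeline(name='add-example', description='Chains two add tasks.')
def add_pipeline(a: float = 1.0, b: float = 2.0):
    first = add_op(a, b)
    # One task's output artifact feeds the next task's input.
    add_op(first.output, b)
```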