* Update training to use Kubeflow 0.4 and add testing.
* To support testing we create a ksonnet template to train the
model so we can easily substitute different parameters during
training.
* We create a ksonnet component for just training; we don't use Argo.
This makes the example much simpler.
* To support S3 we add a generic ksonnet parameter that takes environment
variables as a comma-separated list. This should make it easy for users
to set the environment variables needed to talk to S3.
This is compatible with the existing Argo workflow, which supports S3.
* By default the training job runs non-distributed; running distributed
training requires a shared filesystem (e.g. S3/GCS/NFS).
* Update the mnist workflow to correctly build the images.
* We didn't update the workflow in the previous example to actually
build the correct images.
* Update the workflow to run the tfjob_test.
* Related to #460 (E2E test for mnist).
* Add a parameter to specify a secret that can be used to mount
a secret such as the GCP service account key.
* Update the README with instructions for GCS and S3.
* Remove the instructions about Argo; the Argo workflow is outdated.
Using Argo adds complexity to the example, so we remove it to provide
a simpler example and to mirror the PyTorch example.
* Add a TOC to the README
* Update prerequisite instructions.
* Delete instructions for installing Kubeflow; just link to the
getting started guide.
* Argo CLI should no longer be needed.
* GitHub token shouldn't be needed; I think that was only needed
for ksonnet to pull the registry.
* Fix instructions; access keys shouldn't be stored as ksonnet parameters
as these will get checked into source control.
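The secret parameter mentioned above would ultimately surface as a volume in the generated pod spec. The fragment below is illustrative only; the volume, secret, and mount names are hypothetical and the actual fields produced by the component may differ:

```yaml
# Illustrative pod-spec fragment: mounting a GCP service account key
# from a Kubernetes Secret (all names here are hypothetical).
volumes:
  - name: user-gcp-sa
    secret:
      secretName: user-gcp-sa
containers:
  - name: tensorflow
    volumeMounts:
      - name: user-gcp-sa
        mountPath: /var/secrets
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /var/secrets/user-gcp-sa.json
```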
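The comma-separated environment-variable parameter described above could be consumed roughly like this. This is a minimal sketch of the assumed `NAME=VALUE,NAME=VALUE` format; the function name and parsing rules are illustrative, not the actual component code:

```python
def parse_env_list(value):
    """Split a comma-separated list like 'A=1,B=2' into (name, value) pairs.

    Assumed format for the generic ksonnet environment-variable parameter;
    the real component may parse it differently.
    """
    pairs = []
    for item in value.split(","):
        if not item:
            continue
        # partition() keeps any '=' inside the value intact.
        name, _, val = item.partition("=")
        pairs.append((name, val))
    return pairs

# Example: variables a user might set to talk to S3.
env = parse_env_list("AWS_REGION=us-west-2,S3_ENDPOINT=s3.us-west-2.amazonaws.com")
```

Each resulting pair would be injected into the training container's environment.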
README.md
# kubeflow-examples
A repository to share extended Kubeflow examples and tutorials to demonstrate machine learning concepts, data science workflows, and Kubeflow deployments. The examples illustrate the happy path, acting as a starting point for new users and a reference guide for experienced users.
This repository is home to the following types of examples and demos:
## End-to-end
### GitHub issue summarization
Author: Hamel Husain
This example covers the following concepts:
- Natural Language Processing (NLP) with Keras and TensorFlow
- Connecting to JupyterHub
- Shared persistent storage
- Training a TensorFlow model
- CPU
- GPU
- Serving with Seldon Core
- Flask front-end
### PyTorch MNIST
Author: David Sabater
This example covers the following concepts:
- Distributed Data Parallel (DDP) training with PyTorch on CPU and GPU
- Shared persistent storage
- Training a PyTorch model
- CPU
- GPU
- Serving with Seldon Core
- Flask front-end
### MNIST
Author: Elson Rodriguez
This example covers the following concepts:
- Image recognition of handwritten digits
- S3 storage
- Training automation with Argo
- Monitoring with Argo UI and Tensorboard
- Serving with TensorFlow
### Distributed Object Detection
Author: Daniel Castellanos
This example covers the following concepts:
- Gathering and preparing the data for model training using K8s jobs
- Using Kubeflow tf-job and tf-operator to launch a distributed object training job
- Serving the model through Kubeflow's tf-serving
### Financial Time Series
Author: Sven Degroote
This example covers the following concepts:
- Deploying Kubeflow to a GKE cluster
- Exploration via JupyterHub (prospect data, preprocess data, develop ML model)
- Training several TensorFlow models at scale with TF-jobs
- Deploy and serve with TF-serving
- Iterate training and serving
- Training on GPU
## Component-focused
### XGBoost - Ames housing price prediction
Author: Puneith Kaul
This example covers the following concepts:
- Training an XGBoost model
- Shared persistent storage
- GCS and GKE
- Serving with Seldon Core
## Demos
Demos are for showing Kubeflow or one of its components publicly, with the intent of highlighting product vision, not necessarily teaching. In contrast, the goal of the examples is to provide a self-guided walkthrough of Kubeflow or one of its components, for the purpose of teaching you how to install and use the product.
In an example, all commands should be embedded in the process and explained. In a demo, most details should be handled behind the scenes, to optimize for on-stage rhythm and limited time.
You can find the demos in the /demos directory.
## Third-party hosted
| Source | Example | Description |
|---|---|---|
## Get Involved
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
The Kubeflow community is guided by our Code of Conduct, which we encourage everybody to read before participating.