mirror of https://github.com/kubeflow/examples.git
* Create a test for submitting the TFJob for the GitHub issue summarization example.
* This test needs to be run manually for now; a follow-on PR will integrate it into CI.
* We use the image built from Dockerfile.estimator because that is the image we run train_test.py in.
  * Note: the current version of the code requires Python 3 (likely due to an earlier PR that refactored the code into a shared implementation covering both the TF Estimator and non-Estimator paths).
* Create a TFJob component for TFJob v1beta1; this is the version in Kubeflow 0.4.

TFJob component:
  * Upgrade to v1beta1 to work with Kubeflow 0.4.
  * Update the command-line arguments to match the current code:
      * input and output are now single parameters rather than separate parameters for bucket and name.
  * Change the default input to a CSV file because the current version of the code doesn't handle unzipping it.

* Use ks_util from kubeflow/testing.
* Address review comments.
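As a sketch of what such a component resolves to, a TFJob v1beta1 manifest for Kubeflow 0.4 might look like the following. The image name, script name, bucket paths, and flag names are illustrative assumptions, not the exact values used in the component:

```yaml
# Hypothetical sketch of a TFJob v1beta1 manifest (Kubeflow 0.4).
# Image, paths, and flag names are illustrative assumptions.
apiVersion: kubeflow.org/v1beta1
kind: TFJob
metadata:
  name: tfjob-issue-summarization
spec:
  tfReplicaSpecs:
    Master:
      replicas: 1
      template:
        spec:
          containers:
          - name: tensorflow
            # Built from Dockerfile.estimator, the image train_test.py runs in.
            image: gcr.io/kubeflow-examples/tf-job-issue-summarization:latest
            command:
            - python3  # the current code requires Python 3
            - train.py
            # input and output are single parameters rather than separate
            # bucket/name parameters; the default input is a CSV because the
            # code no longer unzips the data itself.
            - --input=gs://my-bucket/github-issues.csv
            - --output=gs://my-bucket/model
          restartPolicy: OnFailure
```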
# End-to-End Kubeflow tutorial using a Sequence-to-Sequence model

This example demonstrates how you can use Kubeflow end-to-end to train and
serve a Sequence-to-Sequence model on an existing Kubernetes cluster. This
tutorial is based upon @hamelsmu's article "How To Create Data Products That
Are Magical Using Sequence-to-Sequence Models".
## Goals

There are two primary goals for this tutorial:

- Demonstrate an end-to-end Kubeflow example
- Present an end-to-end Sequence-to-Sequence model
By the end of this tutorial, you should know how to:

- Set up a Kubeflow cluster on an existing Kubernetes deployment
- Spawn a Jupyter Notebook on the cluster
- Provision shared persistent storage across the cluster to store large datasets
- Train a Sequence-to-Sequence model using TensorFlow and GPUs on the cluster
- Serve the model using Seldon Core
- Query the model from a simple front-end application
## Steps

- Set up a Kubeflow cluster
- Train the model, using either a Jupyter Notebook or a TFJob
- Serve the model
- Query the model
- Teardown
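To give a concrete sense of the final querying step, the sketch below builds the JSON body that Seldon Core's REST prediction API expects and posts it to the serving endpoint. The endpoint URL and deployment name are assumptions for illustration; the actual request requires a running cluster:

```python
import json
import urllib.request


def build_seldon_payload(issue_body: str) -> bytes:
    """Build the JSON body for Seldon Core's REST prediction API.

    Seldon wraps the input in a "data" object; here the raw issue text is
    passed as an ndarray of strings.
    """
    payload = {"data": {"ndarray": [[issue_body]]}}
    return json.dumps(payload).encode("utf-8")


def query_model(url: str, issue_body: str) -> dict:
    """POST the payload to a (hypothetical) Seldon endpoint and decode the reply."""
    req = urllib.request.Request(
        url,
        data=build_seldon_payload(issue_body),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Illustrative assumption: a Seldon deployment named "issue-summarization"
    # reachable through the cluster's gateway.
    url = "http://localhost:8080/seldon/issue-summarization/api/v0.1/predictions"
    print(build_seldon_payload("Installing tensorflow fails with a pip error").decode("utf-8"))
```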