Merge remote-tracking branch 'upstream/master' into third-party

Michelle Casbon 2018-03-01 15:05:54 -05:00
commit adad73bad0
5 changed files with 127 additions and 1477 deletions

OWNERS (new file, 21 lines)

@@ -0,0 +1,21 @@
# TODO(jlewi): We should probably have OWNERS files in subdirectories that
# list approvers for individual components (e.g. Seldon folks for the Seldon component)
approvers:
- ankushagarwal
- DjangoPeng
- gaocegege
- jlewi
- llunn
- ScorpioCPH
reviewers:
- ankushagarwal
- DjangoPeng
- gaocegege
- Jimexist
- jlewi
- llunn
- nkashy1
- ScorpioCPH
- texasmichelle
- wbuchwalter
- zjj2wry


@@ -26,4 +26,5 @@ By the end of this tutorial, you should learn how to:
## Steps:
1. [Setup a Kubeflow cluster](setup_a_kubeflow_cluster.md)
1. [Training the model](training_the_model.md)
1. [Teardown](teardown.md)

File diff suppressed because it is too large


@@ -51,7 +51,7 @@ For this example, provision a `10GB` NFS mount with the name
After the NFS is ready, delete the `tf-hub-0` pod so that it gets recreated and
picks up the NFS mount. You can delete it by running `kubectl delete pod
-tf-hub-0 -n={NAMESPACE}`
+tf-hub-0 -n=${NAMESPACE}`
At this point you should have a 10GB mount `/mnt/github-issues-data` in your
Jupyter Notebook pod. Check this by running `!df` in your Jupyter Notebook.
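The change above from `{NAMESPACE}` to `${NAMESPACE}` matters because without the leading `$` the shell performs no substitution and kubectl receives the braces literally. A minimal sketch (the `kubeflow` namespace value is illustrative only):

```shell
NAMESPACE=kubeflow

# Without $, the braces are passed through as literal text:
echo "kubectl delete pod tf-hub-0 -n={NAMESPACE}"
# → kubectl delete pod tf-hub-0 -n={NAMESPACE}

# With ${NAMESPACE}, the shell substitutes the value first:
echo "kubectl delete pod tf-hub-0 -n=${NAMESPACE}"
# → kubectl delete pod tf-hub-0 -n=kubeflow
```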


@@ -0,0 +1,9 @@
# Training the model
By this point, you should have a Jupyter Notebook running at `http://127.0.0.1:8000`.
Open the Jupyter Notebook interface and create a new Terminal by clicking on New -> Terminal. In the Terminal, clone this git repo by executing: `git clone https://github.com/kubeflow/examples.git`.
Now you should have all the code required to complete this tutorial in the `examples/issue_summarization_github_isses/notebooks` folder. Navigate to this folder. Here you should see two files: `Tutorial.ipynb` and `seq2seq_utils.py`. Open `Tutorial.ipynb`: it walks through downloading the training data, preprocessing it, and training the model.
Next: [Serving the model](serving_the_model.md)
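The terminal steps above can be sketched as follows (the repo URL and folder path are taken verbatim from this tutorial; run these inside the Jupyter terminal opened via New -> Terminal):

```shell
# Clone the examples repo into the notebook pod.
git clone https://github.com/kubeflow/examples.git

# Move into the tutorial folder (path as given in this tutorial).
cd examples/issue_summarization_github_isses/notebooks

# You should see Tutorial.ipynb and seq2seq_utils.py listed here.
ls
```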