[GitHub Issue Summarization] (very) simple front-end web app (#53)

* Add barebones frontend

Add instructions for querying the trained model via a simple frontend
deployed locally.

* Add instructions for running the ui in-cluster

TODO: Resolve ksonnet namespace collisions for deployed-service
prototype

* Remove reference to running trained model locally
This commit is contained in:
Michelle Casbon 2018-03-21 15:22:04 -07:00 committed by k8s-ci-robot
parent 611e98ef1e
commit 1d6946ead8
8 changed files with 144 additions and 1 deletions

View File

@@ -22,10 +22,12 @@ By the end of this tutorial, you should learn how to:
* Train a Sequence-to-Sequence model using TensorFlow on the cluster using
GPUs
* Serve the model using [Seldon Core](https://github.com/SeldonIO/seldon-core/)
* Query the model from a simple front-end application
## Steps:
1. [Setup a Kubeflow cluster](setup_a_kubeflow_cluster.md)
1. [Training the model](training_the_model.md)
1. [Serving the model](serving_the_model.md)
1. [Querying the model](querying_the_model.md)
1. [Teardown](teardown.md)

View File

@@ -0,0 +1,15 @@
FROM python:alpine
COPY ./flask_web/requirements.txt /app/
WORKDIR /app
RUN pip install -r requirements.txt
RUN pip install requests
COPY ./flask_web /app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]

View File

@@ -0,0 +1,39 @@
"""
Simple app that parses predictions from a trained model and displays them.
"""
from flask import Flask, json, render_template, request
import requests
app = Flask(__name__)
@app.route("/")
def index():
return render_template("index.html")
@app.route("/summary", methods=['GET', 'POST'])
def summary():
if request.method == 'POST':
issue_text = request.form["issue_text"]
url = "http://ambassador:80/seldon/issue-summarization/api/v0.1/predictions"
headers = { 'content-type': 'application/json' }
json_data = {
"data" : {
"ndarray" : [[ issue_text ]]
}
}
r = requests.post(url = url,
headers = headers,
data = json.dumps(json_data))
rjs = json.loads(r.text)
summary = rjs["data"]["ndarray"][0][0]
return render_template("summary.html",
issue_text = issue_text,
summary = summary)
if __name__ == '__main__':
app.run(debug = True, host = '0.0.0.0', port = 80)

View File

@@ -0,0 +1,2 @@
Flask==0.12.2

View File

@@ -0,0 +1,10 @@
<h1>Issue text</h1>
<form action="summary" method="post">
<p>Enter GitHub issue text:</p>
<p><textarea class="scrollabletextbox" name="issue_text" rows=5 cols=100></textarea></p>
<p><input type="submit" value="Submit"/></p>
</form>

View File

@@ -0,0 +1,7 @@
<h1>Summary</h1>
<p>{{summary}}</p>
<h2>Issue text</h2>
<p>{{issue_text}}</p>

View File

@@ -0,0 +1,67 @@
# Querying the model
In this section, you will set up a barebones web server that displays the
prediction provided by the previously deployed model.
The following steps describe how to build a Docker image and deploy it to your
cluster, where it accepts arbitrary text as input and displays a
machine-generated summary.
## Prerequisites
Ensure that your model is live and listening for HTTP requests as described in
[serving](serving_the_model.md).
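As a quick sanity check, the model can also be queried directly, without the frontend. The sketch below mirrors the request shape used by the frontend's `app.py`; the `localhost:8080` URL is an assumption (e.g. a port-forward to the ambassador service), not part of this tutorial's setup.

```python
# Sketch: query the Seldon prediction endpoint directly. The localhost
# URL is an assumed port-forward to ambassador; in-cluster, app.py uses
# http://ambassador:80/... instead.
import json


def build_payload(issue_text):
    # Seldon expects the input wrapped in a 2-D ndarray.
    return {"data": {"ndarray": [[issue_text]]}}


def summarize(issue_text,
              url="http://localhost:8080/seldon/issue-summarization/api/v0.1/predictions"):
    import requests  # third-party package, already used by the frontend
    r = requests.post(url,
                      headers={"content-type": "application/json"},
                      data=json.dumps(build_payload(issue_text)))
    r.raise_for_status()
    return r.json()["data"]["ndarray"][0][0]
```

If this returns a plausible summary string, the serving stack is healthy and any remaining issues are in the frontend itself.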
## Build the frontend image
To build the frontend image, issue the following commands:
```
cd docker
docker build -t gcr.io/gcr-repository-name/issue-summarization-ui:0.1 .
```
## Store the frontend image
To store the image in a location accessible to GKE, push it to the container
registry of your choice. Here, it is pushed to Google Container Registry.
```
gcloud docker -- push gcr.io/gcr-repository-name/issue-summarization-ui:0.1
```
## Deploy the frontend image to your Kubernetes cluster
To serve the frontend interface, deploy the image using the following commands:
```
ks generate deployed-service issue-summarization-ui \
--image gcr.io/gcr-repository-name/issue-summarization-ui:0.1 \
--type ClusterIP
ks param set issue-summarization-ui namespace $NAMESPACE
ks apply cloud -c issue-summarization-ui
```
TODO: Figure out why the deployed-service prototype does not pick up the
namespace parameter. The workaround is to generate the YAML for the
issue-summarization-ui service and deployment objects, insert the
namespace parameter, and apply them manually to the cluster.
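Until that is resolved, the workaround can be sketched as follows (a sketch only; the output file name is illustrative, and the component and environment names match the commands above):

```shell
# Hedged sketch of the workaround: render the component to YAML,
# set the namespace by hand, then apply it directly.
ks show cloud -c issue-summarization-ui > issue-summarization-ui.yaml
# Edit issue-summarization-ui.yaml: set metadata.namespace on both the
# service and deployment objects to your $NAMESPACE value.
kubectl apply -f issue-summarization-ui.yaml
```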
## View results from the frontend
To set up a proxy to the UI port running in k8s, issue the following command:
```
kubectl port-forward $(kubectl get pods -n ${NAMESPACE} -l app=issue-summarization-ui -o jsonpath='{.items[0].metadata.name}') -n ${NAMESPACE} 8081:80
```
In a browser, navigate to `http://localhost:8081`, where you will be greeted by
"Issue text". Enter text into the input box and click Submit. You should see a
summary provided by your trained model.
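The same check can be scripted instead of done in a browser. This sketch assumes the port-forward above is running; `encode_form` builds the form body that the UI's `/summary` route expects.

```python
# Sketch: POST issue text to the forwarded UI port from a script.
# Assumes `kubectl port-forward ... 8081:80` (above) is active.
from urllib import parse, request


def encode_form(issue_text):
    # The frontend's form posts a single field named "issue_text".
    return parse.urlencode({"issue_text": issue_text}).encode()


def fetch_summary_page(issue_text, base="http://localhost:8081"):
    # Returns the rendered summary.html for the given issue text.
    with request.urlopen(base + "/summary", data=encode_form(issue_text)) as resp:
        return resp.read().decode()
```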
Next: [Teardown](teardown.md)

View File

@@ -102,4 +102,5 @@ Response
}
```
Next: [Teardown](teardown.md)
Next: [Querying the model](querying_the_model.md)