* Follow the ArgoCD instructions at
https://github.com/argoproj/argo-cd/blob/master/docs/getting_started.md
to install ArgoCD on the cluster.
* Download the ArgoCD manifest and update the namespace to argocd.
* Check it in so ArgoCD can be deployed declaratively.
* Update README.md with the instructions for deploying ArgoCD.
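A minimal sketch of that declarative flow, assuming the manifest is checked in under a local `argocd/` directory (the path is illustrative):

```bash
# Create the namespace the checked-in manifest targets.
kubectl create namespace argocd

# Download the upstream install manifest so it can be checked into the repo.
mkdir -p argocd
curl -Lo argocd/install.yaml \
  https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/install.yaml

# Deploy ArgoCD declaratively from the checked-in copy.
kubectl apply -n argocd -f argocd/install.yaml
```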
* Move the web app components into their own ksonnet app.
* We do this because we want to be able to sync the web app components using ArgoCD.
* ArgoCD doesn't let us enable autosync at any granularity finer than the app,
and we don't want to sync any of the components except the servers.
* Rename the t2t-code-search-serving component to query-embed-server because
this is more descriptive.
* Check in a YAML spec defining the ksonnet application for the web UI.
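A sketch of what that Application spec might look like; the repo URL, paths, and environment name are placeholders, and the exact fields should be checked against the ArgoCD version in use:

```bash
# Hypothetical ArgoCD Application spec for the web UI ksonnet app.
cat <<'EOF' > ks-web-app/app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: code-search-ui
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/kubeflow/examples   # placeholder repo
    path: code_search/ks-web-app                    # placeholder path
    ksonnet:
      environment: default                          # placeholder environment
  destination:
    server: https://kubernetes.default.svc
    namespace: kubeflow
  syncPolicy:
    automated: {}    # autosync; applies to the whole app, hence the split
EOF
```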
* Update the instructions in the notebook code-search.ipynb
* Provide updated instructions for deploying the web app, since the web app
is now a separate component.
* Improve code-search.ipynb
* Use gcloud to get sensible defaults for parameters like the project.
* Provide more information about what the variables mean.
* Use double quotes for field values (ks convention)
* Recreate the ksonnet application from scratch
* Fix pip commands so they find requirements and redo the installation; fix ks param set
* Use sed replace instead of ks param set (sketched below)
* Add cells to first show JobSpec and then apply
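Illustrative shell for the notebook changes above; parameter names and file paths are assumptions:

```bash
# Use gcloud to derive a sensible default instead of hard-coding the project.
PROJECT=$(gcloud config get-value project)

# Install requirements from an explicit path so pip can find them.
pip install -r src/requirements.txt   # path is illustrative

# Set a ksonnet parameter with sed instead of `ks param set`, writing
# double-quoted field values directly into the checked-in params file.
sed -i "s/project: .*/project: \"${PROJECT}\",/" ks_app/components/params.libsonnet
```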
* Upgrade T2T, fix conflicting problem types
* Update Docker images
* Reduce to 200k samples for vocab
* Use Jupyter notebook service account
* Add illustrative gsutil commands to show output files, specify index files glob explicitly
* List files after index creation step
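For example (bucket and paths are placeholders):

```bash
# Show the output files produced by the embeddings job.
gsutil ls "gs://${BUCKET}/code_search/data/*.csv"

# Spell out the index files glob explicitly and list them after index creation.
gsutil ls "gs://${BUCKET}/code_search/index/embeddings-*.csv"
```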
* Use the model in current repository and not upstream t2t
* Update Docker images
* Expose TF Serving Rest API at 9001
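With the REST port exposed, queries can hit TF Serving's standard predict endpoint directly; the service name, model name, and payload shape below are placeholders that depend on the exported signature:

```bash
# Forward the serving port locally (service name is illustrative).
kubectl port-forward svc/query-embed-server 9001:9001 &

# TF Serving's REST predict endpoint has the form /v1/models/<model>:predict.
curl -s -X POST http://localhost:9001/v1/models/t2t-code-search:predict \
  -d '{"instances": [{"input": "read a csv file"}]}'
```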
* Spawn a terminal from the notebooks UI; no need to go to JupyterLab
* Add a Jupyter notebook to be used for Kubeflow codelabs
* Add help command for create_function_embeddings module
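The module path below follows the name above but is an assumption about the package layout:

```bash
# Print usage for the function-embeddings pipeline.
python -m code_search.dataflow.cli.create_function_embeddings --help
```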
* Update README to point to Jupyter Notebook
* Add prerequisites to readme
* Update README and getting started with notebook guide
* [WIP]
* Update notebook with BigQuery previews
* Update notebook to automatically select the latest MODEL_VERSION
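One way to do that, assuming models are exported to numbered version directories of equal width (paths are placeholders):

```bash
EXPORT_DIR="gs://${BUCKET}/code_search/model/export"   # placeholder path
# Version directories are timestamps of equal length, so a lexical sort works.
MODEL_VERSION=$(gsutil ls "${EXPORT_DIR}/" | sort | tail -n1 | xargs basename)
echo "Using MODEL_VERSION=${MODEL_VERSION}"
```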
* Upgrade TFJob and Ksonnet app
* Container name should be tensorflow. See #563.
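A hedged sketch of the relevant TFJob snippet; the job and image names are placeholders, and the API version depends on the Kubeflow release:

```bash
cat <<'EOF' > t2t-job.yaml
apiVersion: kubeflow.org/v1alpha2
kind: TFJob
metadata:
  name: t2t-trainer
spec:
  tfReplicaSpecs:
    Master:
      replicas: 1
      template:
        spec:
          containers:
          - name: tensorflow   # TFJob requires this container name; see #563
            image: gcr.io/my-project/code-search:latest   # placeholder image
EOF
kubectl apply -f t2t-job.yaml
```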
* Working single node training and serving on Kubeflow
* Add issue link for fixme
* Remove redundant create secrets and use Kubeflow provided secrets
* Refactor the dataflow package
* Create placeholder for new prediction pipeline
* [WIP] add dofn for encoding
* Merge all modules under single package
* Pipeline dataflow complete; prediction values still WIP
* Fall back to custom commands for the extra dependency
* Working Dataflow runner installs, separate docker-related folder
* [WIP] Update the local user journey in README: fully working commands, easy translation to containers
* Working Batch Predictions.
* Remove docstring embeddings
* Complete batch prediction pipeline
* Update Dockerfiles and T2T Ksonnet components
* Fix linting
* Downgrade runtime to Python 2; use less data to work around WIP memory issues
* Pin master to index 0.
* Working batch prediction pipeline
* Modular Github Batch Prediction Pipeline, stores back to BigQuery
* Fix lint errors
* Fix module-wide imports, pin batch-prediction version
* Fix relative import, update docstrings
* Add references to issue and current workaround for Batch Prediction dependency.
* Add similarity transformer body
* Update pipeline to Write a single CSV file
* Fix lint errors
* Use CSV writer to handle formatting rows
* Use direct transformer encoding methods with variable scopes
* Complete end-to-end training with new model and problem
* Read from multiple CSV files
* Add new TF-Serving component with sample task
* Unify the nmslib and t2t packages so they are cohesive
* [WIP] update references to the package
* Replace old T2T problem
* Add representative code for encoding/decoding from tf serving service
* Add rest API port to TF serving (replaces custom http proxy)
* Fix linting
* Add NMSLib creator and server components
* Add docs to CLI module
* Add jobs derived from t2t component, GCP credentials assumed
* Add script to create IAM role bindings for Docker container to use
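The script presumably wraps gcloud IAM calls along these lines; the service-account name and role are illustrative:

```bash
SA="code-search"   # placeholder service-account name
gcloud iam service-accounts create "${SA}" --display-name "code search jobs"

# Grant the container the access it needs (role shown is illustrative).
gcloud projects add-iam-policy-binding "${PROJECT}" \
  --member "serviceAccount:${SA}@${PROJECT}.iam.gserviceaccount.com" \
  --role roles/storage.objectAdmin
```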
* Fix names to use hyphens
* Add t2t-exporter wrapper
* Fix typos
* A temporary workaround for tensorflow/tensor2tensor#879
* Complete working pipeline of datagen, trainer and exporter
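The three stages chain together roughly as follows; the problem name, hparams set, and paths are placeholders, and flags vary across T2T versions:

```bash
# 1. Generate training data for the registered problem.
t2t-datagen --problem=github_function_docstring \
  --data_dir="${DATA_DIR}" --tmp_dir=/tmp/t2t

# 2. Train the transformer model.
t2t-trainer --problem=github_function_docstring \
  --model=transformer --hparams_set=transformer_base \
  --data_dir="${DATA_DIR}" --output_dir="${OUTPUT_DIR}" --train_steps=1000

# 3. Export a serving-ready SavedModel via the t2t-exporter wrapper.
t2t-exporter --problem=github_function_docstring \
  --model=transformer --hparams_set=transformer_base \
  --data_dir="${DATA_DIR}" --output_dir="${OUTPUT_DIR}"
```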
* Add docstring to create_secrets.sh
* [WIP] initialize ksonnet app
* Push images to GCR
* Upgrade Docker container to run T2T entrypoint with appropriate env vars
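For example (image name and Dockerfile path are placeholders):

```bash
IMAGE="gcr.io/${PROJECT}/code-search:v1"
docker build -t "${IMAGE}" -f docker/t2t/Dockerfile .
docker push "${IMAGE}"
```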
* Add a TFJob-based t2t-job
* Fix GPU parameters
* New tensor2tensor problem for function summarization
* Consolidate README with improved docs
* Remove old readme
* Add T2T Trainer using Transformer Networks
* Fix missing requirement for t2t-trainer