* An Argo workflow to use as the E2E test for the code_search example.
* The workflow builds the Docker images and then runs the python test
to train and export a model.
* Move common utilities into util.libsonnet.
* Add the workflow to the set of triggered workflows.
* Update the test environment used by the test ksonnet app; we've since
changed the location of the app.
Related to #295
* Refactor the jsonnet file defining the GCB build workflow
* Use an external variable to conditionally pull and use a previous
Docker image as a cache
* Reduce code duplication by building a shared template for all the different
workflows.
* BUILD_ID needs to be defined in the default parameters; otherwise we get an error when adding a new environment.
* Define suitable defaults.
* Create a script to count lines of code.
* This is used in the presentation to get an estimate of where the human effort is involved.
* Fix lint issues.
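For illustration, a minimal sketch of such a line-counting script, assuming we simply walk the tree and bucket non-blank lines by file extension (the extension map and root directory are assumptions, not the actual script in the repo):

```
import os

# Extensions we attribute to hand-written sources; this mapping is an
# assumption for illustration, not the repo's actual configuration.
EXTENSIONS = {".py": "python", ".jsonnet": "jsonnet",
              ".libsonnet": "jsonnet", ".sh": "shell"}

def count_lines(root):
    """Walk `root` and return a {language: non_blank_line_count} summary."""
    counts = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1]
            if ext not in EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                lines = sum(1 for line in f if line.strip())
            counts[EXTENSIONS[ext]] = counts.get(EXTENSIONS[ext], 0) + lines
    return counts

if __name__ == "__main__":
    print(count_lines("code_search"))
```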
* We need to set the parameters for the model and index.
* It looks like when we split up the web app into its own ksonnet app
we forgot to set the parameters.
* Since the web app is being deployed in a separate namespace, we need to
copy the GCP credential to that namespace. Add instructions to the
demo README.md on how to do that.
* It looks like the pods were never getting started because the secret
couldn't be mounted.
* We need to disable TLS in the app (it's handled by the ingress) because
otherwise we get endless redirects.
* ArgoCD is running in the argo-cd namespace, but Ambassador is running in a
different namespace and is currently configured with RBAC to monitor only
a single namespace.
* So we add a service in namespace kubeflow just to define the Ambassador mapping.
* Dataflow job should support writing embeddings to a different location (Fix #366).
* Dataflow job to compute code embeddings needs to have parameters controlling
the location of the outputs independent of the inputs. Prior to this fix the
same table in the dataset was always written and the files were always created
in the data dir.
* This made it very difficult to rerun the embeddings job on the latest GitHub
data (e.g. to regularly update the code embeddings) without overwriting
the current embeddings.
* Refactor how we create BQ sinks and sources in this pipeline
* Rather than create a wrapper class that bundles together a sink and schema,
we should have a separate helper class for creating BQ schemas and then
use WriteToBigQuery directly.
* Similarly, for ReadTransforms we don't need a wrapper class that bundles
a query and source. We can just create a class/constant to represent
queries and pass them directly to the appropriate source.
* Change the BQ write disposition to write-if-empty so we don't overwrite existing data (see the sketch below).
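A rough sketch of the shape this refactor points at, using Beam's Python SDK; the query constant, schema helper, table and project names, and the placeholder transform are all hypothetical, not the actual pipeline code:

```
import apache_beam as beam

# Hypothetical constant representing the query, passed straight to the source.
FUNCTION_DOCSTRINGS_QUERY = """
SELECT nwo, path, function_name, docstring
FROM `my-project.my_dataset.functions`
"""

def embeddings_schema():
    """Hypothetical helper that only builds the BQ schema string."""
    return ("nwo:STRING, path:STRING, function_name:STRING, "
            "function_embedding:STRING")

with beam.Pipeline() as p:
    rows = (p
            | "ReadFunctions" >> beam.io.Read(
                beam.io.BigQuerySource(query=FUNCTION_DOCSTRINGS_QUERY,
                                       use_standard_sql=True))
            | "ComputeEmbeddings" >> beam.Map(lambda row: row))  # placeholder transform

    # Use WriteToBigQuery directly instead of a wrapper class; WRITE_EMPTY
    # refuses to overwrite a table that already contains data.
    _ = rows | "WriteEmbeddings" >> beam.io.WriteToBigQuery(
        table="my_dataset.function_embeddings",
        project="my-project",
        schema=embeddings_schema(),
        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        write_disposition=beam.io.BigQueryDisposition.WRITE_EMPTY)
```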
* Fix #390: worker setup fails because requirements.dataflow.txt is not found.
* Dataflow always uses the local file requirements.txt regardless of the
local file used as the source.
* When the job is submitted it will also try to build an sdist package on
the client, which invokes setup.py.
* So in setup.py we always refer to requirements.txt.
* If trying to install the package in other contexts,
requirements.dataflow.txt should be renamed to requirements.txt
* We do this in the Dockerfile.
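A minimal sketch of the arrangement described above; the package name and version are placeholders. setup.py only ever reads requirements.txt, and the Dockerfile is expected to have renamed requirements.dataflow.txt to requirements.txt before installation:

```
# setup.py -- minimal sketch; Dataflow invokes this when building the sdist
# shipped to the workers, so it must only refer to requirements.txt.
from setuptools import find_packages, setup

with open("requirements.txt") as f:
    # In the Dataflow image, requirements.dataflow.txt has already been
    # renamed to requirements.txt by the Dockerfile.
    requirements = [line.strip() for line in f if line.strip()]

setup(
    name="code-search",   # placeholder name
    version="0.0.1",      # placeholder version
    packages=find_packages(),
    install_requires=requirements,
)
```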
* Refactor the CreateFunctionEmbeddings code so that writing to BQ
is not part of the compute function embeddings code;
(will make it easier to test.)
* Fix typo in jsonnet with output dir; missing an "=".
* Follow argocd instructions
https://github.com/argoproj/argo-cd/blob/master/docs/getting_started.md
to install ArgoCD on the cluster
* Download the argocd manifest and update the namespace to argocd.
* Check it in so ArgoCD can be deployed declaratively.
* Update README.md with the instructions for deploying ArgoCD.
Move the web app components into their own ksonnet app.
* We do this because we want to be able to sync the web app components using
Argo CD
* ArgoCD doesn't allow us to apply autosync with granularity less than the
app. We don't want to sync any of the components except the servers.
* Rename the t2t-code-search-serving component to query-embed-server because
this is more descriptive.
* Check in a YAML spec defining the ksonnet application for the web UI.
Update the instructions in the notebook code-search.ipynb
* Provide updated instructions for deploying the web app now that it is
a separate component.
* Improve code-search.ipynb
* Use gcloud to get sensible defaults for parameters like the project.
* Provide more information about what the variables mean.
* This script will be the last step in a pipeline to continuously update
the index for serving.
* The script updates the parameters of the search index server to point
to the supplied index files. It then commits them and creates a PR
to push those commits.
* Restructure the parameters for the search index server so that we can use
ks param set to override the indexFile and lookupFile.
* We do this because we want to be able to push a new index by doing
ks param set in a continuously running pipeline
* Remove default parameters from search-index-server
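A minimal sketch of what that update step could look like, shelling out to `ks param set` and committing the change; the branch name and the way the PR is opened are simplified assumptions:

```
import subprocess

def update_search_index_params(app_dir, index_file, lookup_file):
    """Point the search-index-server component at new index files and commit."""
    for param, value in [("indexFile", index_file), ("lookupFile", lookup_file)]:
        subprocess.check_call(
            ["ks", "param", "set", "search-index-server", param, value],
            cwd=app_dir)

    # Commit the modified ksonnet parameters on a new branch.
    subprocess.check_call(["git", "checkout", "-b", "update-search-index"],
                          cwd=app_dir)
    subprocess.check_call(["git", "commit", "-a", "-m",
                           "Update the search index files"], cwd=app_dir)
    # Pushing the branch and opening the PR (e.g. via the GitHub API) is elided.
```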
* Create a dockerfile suitable for running this script.
* The latest changes to the ksonnet components require certain values
to be defined as defaults.
* This is part of the move away from using a fake component to define
parameters that should be reused across different modules.
see #308
* Verify we can run ks show on a new environment and can evaluate the ksonnet.
Fix #353
* Upgrade and fix the serving components.
* Install a new version of the TFServing package so we can use the new template.
* Fix the UI image. Use the same requirements file as for Dataflow so we are
consistent w.r.t. the version of TF and Tensor2Tensor.
* remove nms.libsonnet; move all the manifests into the actual component
files rather than using a shared library.
* Fix the name of the TFServing service and deployment; need to use the same
name as used by the front end server.
* Change the port of TFServing; we are now using the built in http server
in TFServing which uses port 8500 as opposed to our custom http proxy.
* We encountered an error importing nmslib; moving the import to the top of
the file appears to fix this.
* Fix lint.
* Install nmslib in the Dataflow container so it's suitable for running
the index creation job.
* Use command not args in the job specs.
* Dockerfile.dataflow should install nmslib so that we can use that Docker
image to create the index.
* build.jsonnet should tag images as latest so we can use the latest images
as a layer cache to speed up builds.
* Set logging level to info for start_search_server.py and
create_search_index.py
* The search index creation pod kept getting evicted because the node ran
out of memory.
* Add a new node pool consisting of n1-standard-32 nodes to the demo cluster.
These have 120 GB of RAM compared to 30 GB in our default pool of n1-standard-8 nodes.
* Set requests and limits on the search index creator pod.
* Move all the config for the search-index-creator job into the
search-index-creator.jsonnet file. We need to customize the memory resources
so there's not much value in trying to share config with other components.
In order to build a pipeline that can run ksonnet commands, the ksonnet registry needs to be containerized.
Remove it from .dockerignore to unblock the work.
* Create a component to submit the Dataflow job to compute embeddings for code search.
* Update Beam to 2.8.0
* Remove nmslib from the Apache Beam requirements.txt; it's not needed and appears
to have problems installing on the Dataflow workers.
* Spacy download was failing on Dataflow workers; reinstalling the spacy
package as a pip package appears to fix this.
* Fix some bugs in the workflow for building the Docker images.
* Split requirements.txt into separate requirements files for the Dataflow
workers and the UI.
* We don't want to install unnecessary dependencies in the Dataflow workers.
Some unnecessary dependencies, e.g. nmslib, were also having problems
being installed in the workers.
* Modify the K8s manifests to export the models; add tensorboard manifests.
* Use a K8s job not a TFJob to export the model.
* Start an experiments.libsonnet file to define groups of parameters for
different experiments that should be reused
* Need to install tensorflow_hub in the Docker image because it is
required by t2t exporter.
* Address review comments.
Otherwise, when I want to execute Dataflow code
```
python2 -m code_search.dataflow.cli.create_function_embeddings \
```
it complains that there is no setup.py.
I could work around this by using the workingDir container API, but setting it to the default would be more convenient.
* Make distributed training work; Create some components to train models
* Check in a ksonnet component to train a model using the tinyparam
hyperparameter set.
* We want to check in the ksonnet component to facilitate reproducibility.
We need a better way to separate the particular experiments used for
the CS search demo effort from the jobs we want customers to try.
Related to #239 (train a high-quality model).
* Check in the cs_demo ks environment; this was being ignored as a result of
.gitignore
Make distributed training work #208
* We got distributed synchronous training to work with Tensor2Tensor 1.10.
* This required creating a simple python script to start the TF standard
server and run it as a sidecar of the chief pod and as the main container
for the workers/ps (see the sketch after this list).
* Rename the model to kf_similarity_transformer to be consistent with other
code.
* We don't want to use the default name because we don't want to inadvertently
use the SimilarityTransformer model defined in the Tensor2Tensor project.
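A minimal sketch of that kind of helper script, assuming TF 1.x: it reads the TF_CONFIG environment variable that TFJob sets on each replica, builds a ClusterSpec, and starts a tf.train.Server that just joins. The actual script in the repo may differ in details.

```
import json
import os

import tensorflow as tf

def main():
    # TFJob sets TF_CONFIG on every replica; it contains the cluster layout
    # plus this replica's task type and index.
    tf_config = json.loads(os.environ["TF_CONFIG"])
    cluster = tf.train.ClusterSpec(tf_config["cluster"])
    task = tf_config["task"]

    server = tf.train.Server(cluster,
                             job_name=task["type"],
                             task_index=task["index"])
    # As a sidecar on the chief and the main container on workers/ps,
    # we simply keep the in-process TF server running.
    server.join()

if __name__ == "__main__":
    main()
```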
* Replace build.sh with a Makefile. This makes it easier to add variant commands.
* Use the GitHash not a random id as the tag.
* Add a label to the docker image to indicate the git version.
* Put the Makefile at the top of the code_search tree; makes it easier
to pull all the different sources for the Docker images.
* Add an option to build the Docker images with GCB; this is more efficient
when you are on a poor network connection because you don't have to download
images locally.
* Use jsonnet to define and parameterize the GCB workflow.
* Build separate docker images for running Dataflow and for running the trainer.
This helps avoid versioning conflicts caused by different versions of protobuf
pulled in by the TF version used as the base image vs. the version used
with apache beam.
Fix #310 - Training fails with GPUs.
* Changes to support distributed training.
* Simplify t2t-entrypoint.sh so that all we do is parse TF_CONFIG
and pass requisite config information as command line arguments;
everything else can be set in the K8s spec.
* Upgrade to T2T 1.10.
* Add ksonnet prototypes for tensorboard.
* Update the datagen component.
* We should use a K8s job rather than a TFJob. We can also simplify the
ksonnet by just putting the spec into the jsonnet file rather than trying
to share various bits of the spec with the TFJob for training.
Related to kubeflow/examples#308: use globals to allow parameters to be shared
across components (e.g. the working directory).
* Update the README with information about data.
* Fix table markdown.
* Fix performance of dataflow preprocessing job.
* Fix #300; the Dataflow job for preprocessing is really slow.
* The problem is we are loading the spacy tokenization model on every
invocation of the tokenization function and this is really expensive.
* We should be doing this once per module import.
* After fixing this issue, the job completed in approximately 20 minutes using
5 workers.
* We can process all 1.3 million records in ~20 minutes (elapsed time) using five 32-CPU workers and about 1 hour of CPU time altogether.
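The fix amounts to caching the spacy model at module scope instead of loading it inside the tokenization function; a minimal sketch (the model name and function names are illustrative, not the exact pipeline code):

```
import spacy

# Load the tokenizer once per worker process (i.e. once per module import),
# not on every call; loading the model is the expensive step.
_NLP = None

def _get_nlp():
    global _NLP
    if _NLP is None:
        _NLP = spacy.load("en_core_web_sm")  # model name is an assumption
    return _NLP

def tokenize(text):
    """Tokenize a docstring/code string using the cached spacy model."""
    return [token.text for token in _get_nlp()(text)]
```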
* Add options to the Dataflow job to read from files as opposed to BigQuery
and to skip BigQuery writes. This is useful for testing.
* Add a "unittest" that verifies the Dataflow preprocessing job can run
successfully using the DirectRunner.
* Update the Docker image and a ksonnet component for a K8s job that
can be used to submit the Dataflow job.
* Fix #299; add logging to the Dataflow preprocessing job to indicate that
a Dataflow job was submitted.
* Add an option to the preprocessing Dataflow job to read an entire
BigQuery table as the input rather than running a query to get the input.
This is useful in the case where the user wants to run a different
query to select the repo paths and contents to process and write them
to some table to be processed by the Dataflow job.
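A small sketch of the two input modes, again assuming Beam's Python SDK (the argument and table names are illustrative):

```
import apache_beam as beam

def make_input_source(input_table=None, query=None):
    """Read a whole BigQuery table when input_table is set; otherwise run the query."""
    if input_table:
        # e.g. "my-project:my_dataset.github_files"
        return beam.io.BigQuerySource(table=input_table)
    return beam.io.BigQuerySource(query=query, use_standard_sql=True)
```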
* Fix lint.
* More lint fixes.