Commit Graph

2 Commits

Jeremy Lewi e15bfffca4 An Argo workflow to use as the E2E test for the code_search example. (#446)
* An Argo workflow to use as the E2E test for the code_search example.

* The workflow builds the Docker images and then runs the Python test
  to train and export a model.

* Move common utilities into util.libsonnet.

* Add the workflow to the set of triggered workflows.

* Update the test environment used by the test ksonnet app; we've since
  changed the location of the app.

Related to #295

* Refactor the jsonnet file defining the GCB build workflow

  * Use an external variable to conditionally pull and use a previous
    Docker image as a cache

  * Reduce code duplication by building a shared template for all the different
    workflows.

* BUILD_ID needs to be defined in the default parameters; otherwise we get an error when adding a new environment.

* Define suitable defaults.
2018-12-28 16:12:32 -08:00
Jeremy Lewi acd8007717 Use conditionals and add test for code search (#291)
* Fix model export and the loss function, and add some manual tests.

Fix model export to support computing code embeddings; fixes #260.

* The previous exported model was always using the embeddings trained for
  the search query.

* But we need to be able to compute embedding vectors for both the query
  and code.

* To support this, we add a new input feature "embed_code" and conditional
  ops. The exported model uses the value of the embed_code feature to decide
  whether to treat the inputs as a query string or as code, and computes
  the embeddings accordingly (see the sketch below).

* Originally based on #233 by @activatedgeek
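
A minimal sketch of this conditional-embedding idea, assuming TF 1.x-style graph code; the function and encoder names are illustrative, not the ones used in SimilarityTransformer:

```python
import tensorflow as tf

def conditional_embedding(inputs, embed_code, embed_query_fn, embed_code_fn):
  """Embed `inputs` as a query or as code, selected by `embed_code`.

  embed_code is a scalar int feature: 0 => treat inputs as a search query,
  1 => treat inputs as code. Both branches must return same-shaped tensors.
  """
  return tf.cond(
      tf.equal(tf.reshape(embed_code, []), 1),
      lambda: embed_code_fn(inputs),   # code embedding tower
      lambda: embed_query_fn(inputs))  # query embedding tower
```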

Loss function improvements

* See #259 for a long discussion about different loss functions.

* @activatedgeek was experimenting with different loss functions in #233
  and this pulls in some of those changes.

Add manual tests

* Related to #258

* We add a smoke test for T2T steps so we can catch bugs in the code.
* We also add a smoke test for serving the model with TFServing.
* We add a sanity check to ensure we get different values for the same
  input based on which embeddings we are computing.
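
A minimal sketch of that sanity check, assuming the exported model is served via TF Serving's REST API; the URL, model name, and input/output keys are illustrative:

```python
import numpy as np
import requests

# Assumed TF Serving REST endpoint; adjust host, port, and model name.
URL = "http://localhost:8501/v1/models/code_search:predict"

def embed(text, embed_code):
  # embed_code selects the embedding: 0 => query, 1 => code.
  resp = requests.post(
      URL, json={"instances": [{"input": text, "embed_code": embed_code}]})
  resp.raise_for_status()
  return np.array(resp.json()["predictions"][0])

query_vec = embed("read a file into a string", 0)
code_vec = embed("read a file into a string", 1)

# The same input must yield different vectors for query vs. code embeddings.
assert not np.allclose(query_vec, code_vec)
```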

Change Problem/Model name

* Register the problem github_function_docstring with a different name
  to distinguish it from the version inside the Tensor2Tensor library.
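
For context, Tensor2Tensor registers a problem under the snake_cased class name by default, so giving the class a distinct name avoids the collision. A minimal sketch with an illustrative class name:

```python
from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

# Registered as "github_function_docstring_extended" (the snake_cased class
# name), so it no longer collides with the problem built into Tensor2Tensor.
@registry.register_problem
class GithubFunctionDocstringExtended(text_problems.Text2TextProblem):
  pass
```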

* Skip the test when running under Prow because it's a manual test.
* Fix some lint errors.

* Fix lint and skip tests.

* Fix lint.

* Fix lint.
* Revert loss function changes; we can do that in a follow-on PR.

* Run generate_data as part of the test rather than reusing a cached
  vocab and processed input file.

* Modify SimilarityTransformer so we can easily override the number of
  shards used, to facilitate testing (see the sketch below).

* Comment out py-test for now.
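
A minimal sketch of the shard-override idea mentioned above, using the common Tensor2Tensor pattern on a problem class; the names are illustrative and the example's actual change may be structured differently:

```python
from tensor2tensor.data_generators import problem, text_problems

class GithubFunctionDocstringExtended(text_problems.Text2TextProblem):
  # Tests can override this (e.g. set it to 1) so generate_data writes
  # only a few shards and finishes quickly.
  num_train_shards = 100

  @property
  def dataset_splits(self):
    return [
        {"split": problem.DatasetSplit.TRAIN, "shards": self.num_train_shards},
        {"split": problem.DatasetSplit.EVAL, "shards": 1},
    ]
```
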
2018-11-02 09:52:11 -07:00