mirror of https://github.com/kubeflow/examples.git
4 Commits
95db89ad74
Update Dockerfile ENTRYPOINT to use /usr/local/bin/python (#744)

Base image `FROM tensorflow/tensorflow:1.15.2-py3` uses Python 3, and therefore the python binary location is `/usr/bin/python3`. However, the tensorflow base image creates a symlink at `/usr/local/bin/python`, which is the path the updated ENTRYPOINT uses.
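The fix above can be illustrated with a minimal Dockerfile sketch; the copied script and its path are hypothetical, and the key point is the interpreter path used in the exec-form `ENTRYPOINT`:

```dockerfile
FROM tensorflow/tensorflow:1.15.2-py3

# Hypothetical application script; only the ENTRYPOINT path matters here.
COPY model.py /opt/model.py

# /usr/local/bin/python is a symlink provided by the base image, so this
# resolves correctly even though the real binary is /usr/bin/python3.
ENTRYPOINT ["/usr/local/bin/python", "/opt/model.py"]
```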
cc93a80420
Create a notebook for mnist E2E on GCP (#723)

* A notebook to run the mnist E2E example on GCP. This fixes a number of issues with the example.
* Use ISTIO instead of Ambassador to add reverse proxy routes.
* The training job needs to be updated to run in a profile-created namespace in order to have the required service accounts. See kubeflow/examples#713.
* Running inside a notebook on Kubeflow should ensure the user is running inside an appropriately set up namespace.
* With ISTIO, the default RBAC rules prevent the web UI from sending requests to the model server.
  * A short-term fix was to not include the ISTIO sidecar.
  * In the future we can add an appropriate ISTIO RBAC policy.
* Using a notebook allows us to eliminate the use of kustomize. This resolves kubeflow/examples#713, which required people to use an old version of kustomize.
* Rather than using kustomize, we can use Python f-strings to write the YAML specs and then easily substitute in user-specific values. This should be more informative; it avoids introducing kustomize, and users can see the resource specs.
* I've opted to make the notebook GCP-specific. I think it's less confusing to users to have separate notebooks focused on specific platforms rather than one notebook with a lot of caveats about what to do under different conditions.
* I've deleted the kustomize overlays for GCS since we don't want users to use them anymore.
* I used fairing and kaniko to eliminate the use of docker to build the images, so that everything can run from a notebook running inside the cluster.
* k8s_utils.py has some reusable functions to hide some details from users (e.g. low-level calls to K8s APIs).
* Change the mnist test to just run the notebook.
* Copy the notebook test infra for xgboost_synthetic to py/kubeflow/examples/notebook_test to make it more reusable.
* Fix lint. Update for lint.
* A notebook to run the mnist E2E example. Related to: kubeflow/website#1553.
  1. Use fairing to build the model.
  2. Construct the YAML spec directly in the notebook.
  3. Use the TFJob python SDK.
* Fix the ISTIO rule.
* Fix UI and serving; need to update TF serving to match the version trained on.
* Get the IAP endpoint.
* Start writing some helper python functions for K8s.
* Commit before switching from replace to delete.
* Create a library to bulk-create objects.
* Cleanup. Add back k8s_util.py.
* Delete train.yaml; this shouldn't have been added.
* Update the notebook image.
* Refactor code into k8s_util; print out links.
* Clean up the notebook. Should be working E2E.
* Added section to get logs from stackdriver.
* Add comment about profile.
* Latest.
* Override mnist_gcp.ipynb with mnist.ipynb. I accidentally put my latest changes in mnist.ipynb even though that file was deleted.
* More fixes.
* Resolve some conflicts from the rebase; override with changes on the remote branch.
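The commit above replaces kustomize overlays with Python f-strings that template the YAML specs directly in the notebook. A minimal sketch of that pattern, assuming illustrative names and values (the namespace, image, and job name below are not taken from the repo):

```python
# Hypothetical user-supplied values a notebook would substitute into the spec.
namespace = "kubeflow-user"
train_image = "gcr.io/my-project/mnist-train:v1"

# f-string templating of a TFJob-style YAML spec, in place of kustomize;
# the substituted values are visible inline, so users can read the final spec.
tfjob_spec = f"""\
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train
  namespace: {namespace}
spec:
  tfReplicaSpecs:
    Chief:
      replicas: 1
      template:
        spec:
          containers:
          - name: tensorflow
            image: {train_image}
"""

print(tfjob_spec)
```

The resulting string can be written to a file or passed straight to a Kubernetes client, which is what lets the notebook drop kustomize entirely.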
7434bb55ba
Updating mnist example to fix minio compatibility (#108)

* Updating mnist example to fix minio compatibility.
* Changing default sa user for ksonnet entrypoint.
* Updating mnist example based on PR feedback.
1be7ccb142
Fixes #2: End to end model training/serving example using S3, Argo, and Kubeflow (#42)

* Add awscli tools container.
* Add initial readme.
* Add argo skeleton.
* Run an argo job.
* Artifact support and argo test.
* Use built container (#3).
* Fix artifacts and secrets.
* Add work-in-progress tfflow (#14).
* Add kvc deployment to workflow.
* Switch aws repo.
* wip.
* Add working tfflow job.
* Add sidecar that waits for MASTER completion.
* Pass in job-name.
* Add volumemanager info step.
* Add input parameters to step.
* Adds node affinity and hostpath.
* Add fixes for workflow (#17):
  - Use correct images for worker and ps
  - Use correct aws keys
  - Change volumemanager to mnist
  - Comment unused steps
  - Fix volume mount to correct containers
* Fix hostpath for tfjob.
* Download all mnist files.
* Added GCS-stored artifacts compatibility to Argo.
* Add initial inference workflow. (#30)
* Initial serving step (#31).
* Adds fixes to initial serving step.
* Ready for rough demo: workflow in working state.
* Move conflicting readme.
* Initial commit, everything boots without crashing.
* Working, with some python errors.
* Adding explicit flags.
* Working with ins-outs.
* Letting training job exit on success.
* Adding documentation skeleton.
* Trying to properly save model.
* Almost working.
* Working.
* Adding export script, refactored to allow the model more reusability.
* Starting documentation.
* A little further on docs.
* More doc updates, fixing sleep logic.
* Adding urls for mnist data.
* Removing download logic; it's too tied in with built-in tf examples.
* Added argo workflow instructions, minor cleanups.
* Adding mnist client.
* Fixing typos.
* Adding instructions for installing components.
* Added ksonnet container.
* Adding new entrypoint.
* Added helm install instructions for kvc.
* Doing things with variables.
* Typos.
* Added better namespace support.
* S3 refactor.
* Added missing region variables.
* Adding tensorboard support.
* Adding container for Tensorboard.
* Added temporary flag, added install instructions for CLI.
* Removing invalid ksonnet environment.
* Updating readme.
* Cleanup currently unused pieces.
* Add missing cluster-role.
* Minor cleanup.
* Adding more parameters.
* Added changes to allow model to train on multiple workers and fixed some doc typos.
* Adding flag to enable/disable model serving. Adding s3 urls as outputs for future querying, renaming info step.
* Adding separate deployer workflow.
* Split serving working.
* Adding split workflow.
* More parameters.
* Updates as per Elson's comments.
* Revert "added changes to allow model to train on multiple workers and fixed s…".
* Initial working pure-s3 workflow.
* Removed wait sidecars.
* Remove unused flag.
* Added part two, minor doc fixes.
* Inverted links...
* Adding diff.
* Fix url syntax.
* Documentation updates.
* Added AWS CLI.
* Parameterized export.
* Fixing image in s3 version.
* Fixed documentation issues.
* KVC snippet changes, need to find last working helm chart.
* Temporarily pinning kvc version.
* Working master model and some doc typo fixes (#13).
* Added changes to allow model to train on multiple workers and fixed some doc typos.
* Adding flag to enable/disable model serving. Adding s3 urls as outputs for future querying, renaming info step.
* Adding separate deployer workflow.
* Split serving working.
* Adding split workflow.
* More parameters.
* Updates as per Elson's comments.
* Working master model and some doc typos.
* Fixes as per Elson.
* Removing whitespace differences.
* Updating diff.
* Changing parameters.
* Undoing whitespace.
* Changing termination policy on s3 version due to unknown issue.
* Updating mnist diff.
* Changing train steps.
* Syncing demo changes.
* Update README.md.
* Going S3-native for initial example. Getting rid of Master.
* Minor documentation tweaks, adding params, swapping aws cli for minio.
* Updating KVC version.
* Switching ksonnet repo, removing model name from client.
* Updating git url.
* Adding certificate hack to avoid RBAC errors.
* Pinning KVC to commit while working on PR.
* Updating version.
* Updates README with additional details (#14):
  - Updates README with additional details
  - Adding clarity to kubectl config commands
  - Fixed comma placement
* Refactoring notes for github and kubernetes credentials.
* Forgot to add an overview of the argo template.
* Updating example based on feedback:
  - Removed superfluous images
  - Clarified use of KVC
  - Added unaltered model
  - Variable cleanup
* Refactored grpc image into generic base image.
* Minor cleanup of resubmitting section.
* Switching Argo deployment to ksonnet, consolidating install instructions.
* Removing old cruft, clarifying cluster requirements.
* [WIP] Switching out model (#15):
  - Switching to new mnist example.
  - Parameterized model, testing export.
  - Got CNN model exporting.
  - Attempting to do distributed training with Estimator, removed separate export.
  - Adding master back, otherwise Estimator complains about not having a chief.
  - Switching to tf.estimator.train_and_evaluate.
* Minor path/var name refactor.
* Adding test data and new client.
* Fixed documentation to reflect new client.
* Getting rid of tf job shim.
* Removing KVC from example, renaming directory.
* Modifying parent README.
* Removed reference to export.
* Adding reference to export.
* Removing unused Dockerfile.
* Removing unneeded files, simplifying how to get status, refactoring model serving workflow step.
* Renaming directory.
* Minor doc improvements, removed extra CLIs.
* Making SSL configurable for clusters without secured s3 endpoints.
* Added a tf-user account for workflow. Fixed serving bug.
* Updating gke version.
* Re-ran through instructions, fixed errata.
* Fixing lint issues.
* Pylint errors.
* Pylint errors.
* Adding parentheses back.
* Pylint hacks.
* Disabling argument filter; model bombs without empty arg.
* Removing unneeded lambdas.
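One bullet above, "Making SSL configurable for clusters without secured s3 endpoints", amounts to choosing the URL scheme for the S3/minio endpoint from a flag. A minimal sketch of that idea, assuming a hypothetical `make_endpoint` helper and an illustrative in-cluster minio host (neither name is taken from the repo):

```python
def make_endpoint(host: str, use_ssl: bool) -> str:
    """Return the endpoint URL an S3-compatible client would be pointed at."""
    scheme = "https" if use_ssl else "http"
    return f"{scheme}://{host}"

# A cluster whose minio endpoint is not secured would disable SSL:
print(make_endpoint("minio-service.kubeflow:9000", use_ssl=False))

# A real AWS S3 endpoint would keep SSL on:
print(make_endpoint("s3.amazonaws.com", use_ssl=True))
```

Threading a flag like this through the workflow parameters lets the same training/serving steps run against both secured and unsecured object stores.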