pipelines/manifests/kustomize/env/gcp

README.md

TL;DR

  1. To access GCP services, the application needs a GCP service account key. Download the key into the current folder, manifests/kustomize/env/gcp (see the gcloud reference documentation for details):

     gcloud iam service-accounts keys create application_default_credentials.json \
       --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com

  2. Create a Cloud SQL instance or use an existing one. The service account must have access to that instance.
  3. Fill in gcp-configurations-patch.yaml with your Cloud SQL and GCS configuration.
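The service account from step 1 also needs IAM roles for Cloud SQL and GCS. As a minimal sketch (the service account name and project ID are placeholders, and the script only prints the gcloud commands rather than running them):

```shell
#!/usr/bin/env sh
# Hypothetical helper: prints the gcloud commands that grant the
# service account the roles it needs for Cloud SQL and GCS.
# SA_NAME and PROJECT_ID are placeholders; set them for your project.
SA_NAME=${SA_NAME:-pipeline-runner}
PROJECT_ID=${PROJECT_ID:-my-project}
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"

# roles/cloudsql.client lets the API server connect to Cloud SQL;
# roles/storage.objectAdmin lets it read and write artifacts in GCS.
for role in roles/cloudsql.client roles/storage.objectAdmin; do
  echo "gcloud projects add-iam-policy-binding ${PROJECT_ID}" \
       "--member serviceAccount:${SA_EMAIL} --role ${role}"
done
```

Run the printed commands (or the script with echo removed) once per project; the bindings persist across deployments.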

Why Cloud SQL and GCS

Kubeflow Pipelines stores its metadata in a MySQL database and its artifacts in S3-compatible object storage. Using Cloud SQL and GCS to persist this data improves reliability and performance, and adds features such as automated backups and usage monitoring. This is the recommended setup, especially for production environments.
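If you do not yet have a Cloud SQL instance or a GCS bucket, provisioning them can be sketched as below. The instance name, region, and bucket name are placeholders, and the script only prints the commands rather than running them:

```shell
#!/usr/bin/env sh
# Hypothetical provisioning sketch: prints the commands that create a
# Cloud SQL MySQL instance (for pipeline metadata) and a GCS bucket
# (for pipeline artifacts). All names below are placeholders.
PROJECT_ID=${PROJECT_ID:-my-project}
SQL_INSTANCE=${SQL_INSTANCE:-kfp-metadata}
BUCKET="gs://${PROJECT_ID}-kfp-artifacts"

echo "gcloud sql instances create ${SQL_INSTANCE}" \
     "--database-version=MYSQL_5_7 --region=us-central1"
echo "gsutil mb -p ${PROJECT_ID} ${BUCKET}"
```

Once both exist, record the instance connection name and the bucket name in gcp-configurations-patch.yaml.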