ML pipeline test infrastructure

This folder contains the integration/e2e tests for ML pipelines. We use Argo workflows to run the tests.

At a high level, a typical test workflow will

  • build Docker images for all components
  • create a dedicated test namespace in the cluster
  • deploy ML pipelines using the newly built components
  • run the test
  • delete the namespace
  • delete the images
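
The steps above map naturally onto an Argo Workflow. A minimal skeleton is sketched below; the template names are illustrative and their bodies are omitted — the real definition lives in e2e_test_gke_v2.yaml and differs in detail:

```yaml
# Illustrative sketch only, not the actual test workflow.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: e2e-test-
spec:
  entrypoint: e2e
  templates:
  - name: e2e
    steps:
    - - name: build-images        # build Docker images for all components
        template: build-images
    - - name: create-namespace    # dedicated namespace, named after the workflow
        template: create-namespace
    - - name: deploy-pipeline     # deploy ML pipelines from the new images
        template: deploy-pipeline
    - - name: run-tests
        template: run-tests
    - - name: cleanup             # delete the namespace and the images
        template: cleanup
```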

All these steps take place in the same Kubernetes cluster. You can use GKE to test against the code in a GitHub branch. The images are temporarily stored in the GCR repository of the same project.

Tests run automatically on each commit in a Kubernetes cluster using Prow. They can also be run manually; see the next section.

Run tests using GKE

You can run the tests against a specific commit.

Setup

Here are the one-time steps to prepare your GKE testing cluster:

  • Follow the main page to create a GKE cluster.
  • Install Argo in the cluster.
  • Create a cluster role binding:
    kubectl create clusterrolebinding default-as-admin --clusterrole=cluster-admin --serviceaccount=default:default
    
  • Follow the guideline to create an SSH deploy key, and store it as a Kubernetes secret in your cluster so the job can later access the code. Note that adding a deploy key to a GitHub repo requires admin permission. This step is not needed when the project is public.
    kubectl create secret generic ssh-key-secret \
    --from-file=id_rsa=/path/to/your/id_rsa \
    --from-file=id_rsa.pub=/path/to/your/id_rsa.pub
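
If you prefer declarative setup, the cluster role binding created with kubectl above can equivalently be applied as a manifest:

```yaml
# Equivalent to:
#   kubectl create clusterrolebinding default-as-admin \
#     --clusterrole=cluster-admin --serviceaccount=default:default
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-as-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
```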
    

Run tests

Simply submit the test workflow to the GKE cluster, with a parameter specifying the commit you want to test (master HEAD by default):

argo submit integration_test_gke.yaml -p commit-sha=<commit>
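
If you want to sanity-check the commit value before submitting, a small hypothetical pre-flight snippet (not part of the test infrastructure) could look like:

```shell
# Hypothetical pre-flight check: accept any 7-40 character hex string
# as a plausible git commit SHA. COMMIT_SHA here is a placeholder value.
COMMIT_SHA="1234abcd"
if echo "${COMMIT_SHA}" | grep -Eq '^[0-9a-f]{7,40}$'; then
  RESULT="valid"
else
  RESULT="invalid"
fi
echo "${RESULT}"
```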

You can check the result by running:

argo list

The workflow will create a temporary namespace with the same name as the Argo workflow. All the images will be stored in gcr.io/project_id/workflow_name/branch_name/*. By default, when the test is finished, the namespace and images will be deleted. However, you can keep them by providing an additional parameter:

argo submit integration_test_gke.yaml -p branch="my-branch" -p cleanup="false"
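
The image path convention can be illustrated with a short sketch; all values below are hypothetical placeholders (the real workflow name is generated by Argo at submit time):

```shell
# Hypothetical placeholder values for illustration only.
PROJECT_ID="my-gcp-project"
WORKFLOW_NAME="integration-test-abc12"
BRANCH_NAME="my-branch"
COMPONENT="frontend"
# Images are pushed under gcr.io/project_id/workflow_name/branch_name/*
IMAGE="gcr.io/${PROJECT_ID}/${WORKFLOW_NAME}/${BRANCH_NAME}/${COMPONENT}"
echo "${IMAGE}"
```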

Run presubmit-tests-with-pipeline-deployment.sh locally

Run the following commands from the root of the kubeflow/pipelines repo.

# $WORKSPACE is an env variable set by Prow
export WORKSPACE=$(pwd) # root of kubeflow/pipelines git repo
export SA_KEY_FILE=PATH/TO/YOUR/GCP/PROJECT/SERVICE/ACCOUNT/KEY
# (optional) uncomment the following to keep reusing the same cluster
# export TEST_CLUSTER=YOUR_PRECONFIGURED_CLUSTER_NAME
# (optional) uncomment the following to disable built image caching
# export DISABLE_IMAGE_CACHING=true

# You can specify other workflow files you want to test too.
./test/presubmit-tests-with-pipeline-deployment.sh \
  --workflow_file e2e_test_gke_v2.yaml \
  --test_result_folder ${FOLDER_NAME_TO_HOLD_TEST_RESULT} \
  --test_result_bucket ${YOUR_GCS_TEST_RESULT_BUCKET} \
  --project ${YOUR_GCS_PROJECT}
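
As a rough sketch of how an opt-out flag like DISABLE_IMAGE_CACHING is typically consumed (the actual script's logic may differ):

```shell
# Hedged sketch only; the real presubmit script's caching logic may differ.
DISABLE_IMAGE_CACHING=true
if [ "${DISABLE_IMAGE_CACHING:-false}" = "true" ]; then
  BUILD_MODE="rebuild all images"
else
  BUILD_MODE="reuse cached images"
fi
echo "${BUILD_MODE}"
```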

Troubleshooting

Q: Why is my test taking so long on GKE?

The cluster downloads a number of images the first time the test runs; subsequent runs are faster because the images are cached. The image building steps run in parallel and usually take 2-3 minutes in total. If you are experiencing high latency, it might be due to resource constraints on your GKE cluster. In that case, deploy a larger cluster.