{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Copyright 2019 Google Inc. All Rights Reserved.\n", "#\n", "# Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# http://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License.\n", "# ==============================================================================" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Reusable components\n", "\n", "This tutorial describes the manual way of writing a full component program (in any language) and a component definition for it. Below is a summary of the steps involved in creating and using a component:\n", "\n", "- Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.\n", "- Containerize the program.\n", "- Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.\n", "- Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline." 
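, "\n", "\n", "For example, once the component specification exists, the final step looks like this sketch (the file name `component.yaml` and the operation name are placeholders for whatever your component defines):\n", "\n", "```python\n", "import kfp\n", "import kfp.components as comp\n", "\n", "# Load the component from its specification file.\n", "my_op = comp.load_component_from_file('component.yaml')\n", "\n", "# Use it as a step inside a pipeline definition.\n", "@kfp.dsl.pipeline(name='My pipeline')\n", "def my_pipeline():\n", "    my_task = my_op()  # pass the component's inputs as arguments here\n", "```"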
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: If you want to build the image locally, ensure that you have Docker installed by running the following command:\n", "\n", "`which docker`\n", "\n", "The result should be something like:\n", "\n", "`/usr/bin/docker`" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import kfp\n", "import kfp.gcp as gcp\n", "import kfp.dsl as dsl\n", "import kfp.compiler as compiler\n", "import kfp.components as comp\n", "import datetime\n", "\n", "import kubernetes as k8s" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [ "parameter" ] }, "outputs": [], "source": [ "# Required Parameters\n", "PROJECT_ID=''\n", "GCS_BUCKET='gs://'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create client\n", "\n", "If you run this notebook **outside** of a Kubeflow cluster, create the client with the following parameters:\n", "- `host`: The URL of your Kubeflow Pipelines instance, for example \"https://`<deployment-name>`.endpoints.`<project-id>`.cloud.goog/pipeline\".\n", "- `client_id`: The client ID used by Identity-Aware Proxy.\n", "- `other_client_id`: The client ID used to obtain the auth codes and refresh tokens.\n", "- `other_client_secret`: The client secret used to obtain the auth codes and refresh tokens.\n", "\n", "```python\n", "client = kfp.Client(host, client_id, other_client_id, other_client_secret)\n", "```\n", "\n", "If you run this notebook **within** a Kubeflow cluster, simply run:\n", "```python\n", "client = kfp.Client()\n", "```\n", "\n", "You'll need to create OAuth client ID credentials of type `Other` to get `other_client_id` and `other_client_secret`. 
Learn more about [creating OAuth credentials](https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional parameters, required only when running outside a Kubeflow cluster.\n", "HOST = ''\n", "CLIENT_ID = ''\n", "OTHER_CLIENT_ID = ''\n", "OTHER_CLIENT_SECRET = ''" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create the KFP client.\n", "in_cluster = True\n", "try:\n", "    k8s.config.load_incluster_config()\n", "except Exception:\n", "    in_cluster = False\n", "\n", "if in_cluster:\n", "    client = kfp.Client()\n", "else:\n", "    client = kfp.Client(host=HOST,\n", "                        client_id=CLIENT_ID,\n", "                        other_client_id=OTHER_CLIENT_ID,\n", "                        other_client_secret=OTHER_CLIENT_SECRET)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing the program code" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following cell creates a file `app.py` that contains a Python script. The script downloads the MNIST dataset, trains a neural-network classification model, writes the training log, and exports the trained model to Google Cloud Storage.\n", "\n", "Your component can create outputs that downstream components can use as inputs. Each output must be a string, and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as `/output.txt`."
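, "\n", "\n", "For example, the component's program might end with something like this (a sketch; the output path and value are placeholders):\n", "\n", "```python\n", "# Write the output value to a local text file so that the Kubeflow\n", "# Pipelines system can pass it to downstream components.\n", "model_path = 'gs://my-bucket/trained-model'\n", "with open('/output.txt', 'w') as f:\n", "    f.write(model_path)\n", "```"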
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash\n", "\n", "# Create folders if they don't exist.\n", "mkdir -p tmp/reuse_components/mnist_training\n", "\n", "# Create the Python file with the training code. A minimal sketch: it\n", "# downloads MNIST, trains a simple classifier, exports the model to GCS,\n", "# and writes the model path to a local file for downstream components.\n", "cat > ./tmp/reuse_components/mnist_training/app.py <<HERE\n", "import argparse\n", "\n", "import tensorflow as tf\n", "\n", "def main():\n", "    parser = argparse.ArgumentParser()\n", "    parser.add_argument('--model_path', type=str, required=True,\n", "                        help='GCS path to export the trained model to.')\n", "    args = parser.parse_args()\n", "\n", "    # Download the MNIST dataset and normalize it.\n", "    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n", "    x_train, x_test = x_train / 255.0, x_test / 255.0\n", "\n", "    # Train a simple neural-network classifier; the training log goes to stdout.\n", "    model = tf.keras.models.Sequential([\n", "        tf.keras.layers.Flatten(input_shape=(28, 28)),\n", "        tf.keras.layers.Dense(128, activation='relu'),\n", "        tf.keras.layers.Dense(10, activation='softmax')])\n", "    model.compile(optimizer='adam',\n", "                  loss='sparse_categorical_crossentropy',\n", "                  metrics=['accuracy'])\n", "    model.fit(x_train, y_train, epochs=5)\n", "    model.evaluate(x_test, y_test)\n", "\n", "    # Export the trained model to Google Cloud Storage.\n", "    model.save(args.model_path)\n", "\n", "    # Write the output path to a local file so that it can be passed\n", "    # to downstream components.\n", "    with open('/output.txt', 'w') as f:\n", "        f.write(args.model_path)\n", "\n", "if __name__ == '__main__':\n", "    main()\n", "HERE\n", "\n", "# Create the Dockerfile that packages the script into a container image.\n", "cat > ./tmp/reuse_components/mnist_training/Dockerfile <<HERE\n", "FROM tensorflow/tensorflow:2.0.0-py3\n", "WORKDIR /app\n", "COPY app.py /app/app.py\n", "ENTRYPOINT [\"python\", \"app.py\"]\n", "HERE" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Build the container image\n", "\n", "If you are using **Kubeflow version >= 0.7**, you need to ensure that valid credentials are created within your notebook's namespace.\n", "- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically when the notebook is created through `Configurations`, but this does not work properly at the time of writing.\n", "- You can also add credentials to the new namespace by either [copying credentials from an existing Kubeflow namespace, or by creating a new service account](https://www.kubeflow.org/docs/gke/authentication/#kubeflow-v0-6-and-before-gcp-service-account-key-as-secret).\n", "- The following cell demonstrates how to copy the default secret to your own namespace.\n", "\n", "```bash\n", "%%bash\n", "\n", "NAMESPACE=\n", "SOURCE=kubeflow\n", "NAME=user-gcp-sa\n", "SECRET=$(kubectl get secrets \\${NAME} -n \\${SOURCE} -o jsonpath=\"{.data.\\${NAME}\\.json}\" | base64 -d)\n", "kubectl create -n \\${NAMESPACE} secret generic \\${NAME} --from-literal=\"\\${NAME}.json=\\${SECRET}\"\n", "```" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "IMAGE_NAME=\"mnist_training_kf_pipeline\"\n", "TAG=\"latest\" # \"v_$(date +%Y%m%d_%H%M%S)\"\n", "\n", "GCR_IMAGE=\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}\".format(\n", "    PROJECT_ID=PROJECT_ID,\n", "    IMAGE_NAME=IMAGE_NAME,\n", "    TAG=TAG\n", ")\n", "\n", "builder = kfp.containers._container_builder.ContainerBuilder(\n", "    gcs_staging=GCS_BUCKET + \"/kfp_container_build_staging\")\n", "\n", "image_name = kfp.containers.build_image_from_working_dir(\n", "    image_name=GCR_IMAGE,\n", "    
working_dir='./tmp/reuse_components/mnist_training/',\n", "    builder=builder\n", ")\n", "\n", "image_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### If you want to use Docker to build the image\n", "Run the following in a cell:\n", "```bash\n", "%%bash -s \"{PROJECT_ID}\"\n", "\n", "IMAGE_NAME=\"mnist_training_kf_pipeline\"\n", "TAG=\"latest\" # \"v_$(date +%Y%m%d_%H%M%S)\"\n", "\n", "# Create a script that builds the Docker image and pushes it to the\n", "# Container Registry of the project passed in as the first argument.\n", "cat > ./tmp/reuse_components/mnist_training/build_image.sh <<HERE\n", "#!/bin/bash -e\n", "docker build -t ${IMAGE_NAME} .\n", "docker tag ${IMAGE_NAME} gcr.io/${1}/${IMAGE_NAME}:${TAG}\n", "docker push gcr.io/${1}/${IMAGE_NAME}:${TAG}\n", "HERE\n", "\n", "cd ./tmp/reuse_components/mnist_training\n", "bash build_image.sh\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Writing your component definition file\n", "To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.\n", "\n", "For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubeflow.org/docs/pipelines/reference/component-spec/). However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.\n", "\n", "Start writing the component definition (`component.yaml`) by specifying your container image in the component’s implementation section:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%bash -s \"{image_name}\"\n", "\n", "GCR_IMAGE=\"${1}\"\n", "echo ${GCR_IMAGE}\n", "\n", "# Create the component definition YAML.\n", "# The image URI below must match the image that was pushed above.\n", "\n", "cat > mnist_component.yaml <