{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Name\n",
"Data preparation using PySpark on Cloud Dataproc\n",
"\n",
"\n",
"# Label\n",
"Cloud Dataproc, GCP, Cloud Storage, PySpark, Kubeflow, pipelines, components\n",
"\n",
"\n",
"# Summary\n",
"A Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc.\n",
"\n",
"\n",
"# Details\n",
"## Intended use\n",
"Use the component to run a PySpark job on Cloud Dataproc as one preprocessing step in a Kubeflow Pipeline.\n",
"\n",
"\n",
"## Runtime arguments\n",
"| Argument | Description | Optional | Data type | Accepted values | Default |\n",
"|----------------------|------------|----------|--------------|-----------------|---------|\n",
"| project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | |\n",
"| region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |\n",
"| cluster_name | The name of the cluster to run the job. | No | String | | |\n",
"| main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | |\n",
"| args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None |\n",
"| pyspark_job | The payload of a [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob). | Yes | Dict | | None |\n",
"| job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None |\n",
"| wait_interval | The number of seconds to wait between polling the status of the job. | Yes | Integer | | 30 |\n",
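"\n",
"In the sample pipeline below, `pyspark_job` and `job` are passed as JSON-serialized strings (`'{}'`). A non-empty payload can be built the same way; the following is an illustrative sketch only, where the bucket paths and Spark property are placeholders rather than values required by the component:\n",
"\n",
"```python\n",
"import json\n",
"\n",
"# Hypothetical driver arguments, serialized as a JSON-encoded list.\n",
"args = json.dumps(['gs://my-bucket/input', 'gs://my-bucket/output'])\n",
"\n",
"# Hypothetical PySparkJob payload that sets a Spark property.\n",
"pyspark_job = json.dumps({'properties': {'spark.executor.memory': '2g'}})\n",
"```\n",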
"\n",
"## Output\n",
"Name | Description | Type\n",
":--- | :---------- | :---\n",
"job_id | The ID of the created job. | String\n",
"\n",
"## Cautions & requirements\n",
"\n",
"To use the component, you must:\n",
"* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).\n",
"* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).\n",
"* Ensure that the component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.\n",
"* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project.\n",
"\n",
"## Detailed description\n",
"\n",
"This component submits a PySpark job to Cloud Dataproc through the [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).\n",
"\n",
"Follow these steps to use the component in a pipeline:\n",
"\n",
"1. Install the Kubeflow Pipelines SDK:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"\n",
"!pip3 install kfp --upgrade"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"2. Load the component using the KFP SDK:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import kfp.components as comp\n",
"\n",
"dataproc_submit_pyspark_job_op = comp.load_component_from_url(\n",
"    'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0-rc.1/components/gcp/dataproc/submit_pyspark_job/component.yaml')\n",
"help(dataproc_submit_pyspark_job_op)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Sample\n",
"\n",
"Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to run the component.\n",
"\n",
"\n",
"#### Set up a Dataproc cluster\n",
"\n",
"[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code.\n",
"\n",
"\n",
"#### Prepare a PySpark job\n",
"\n",
"Upload your PySpark code file to a Cloud Storage bucket. For example, this is a publicly accessible `hello-world.py` in Cloud Storage:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py"
]
},
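{
"cell_type": "markdown",
"metadata": {},
"source": [
"The driver is an ordinary PySpark program. As an illustrative sketch only (the cell above prints the actual contents of the public `hello-world.py`), a minimal driver might look like this:\n",
"\n",
"```python\n",
"# Hypothetical minimal PySpark driver; not the contents of the public sample file.\n",
"from pyspark.sql import SparkSession\n",
"\n",
"spark = SparkSession.builder.appName('hello-world').getOrCreate()\n",
"df = spark.createDataFrame([('hello',), ('world',)], ['word'])\n",
"df.show()\n",
"spark.stop()\n",
"```"
]
},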
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set sample parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"PROJECT_ID = '<Please put your project ID here>'\n",
"CLUSTER_NAME = '<Please put your existing cluster name here>'\n",
"REGION = 'us-central1'\n",
"PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py'\n",
"ARGS = ''\n",
"EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Example pipeline that uses the component"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import kfp.dsl as dsl\n",
"import json\n",
"@dsl.pipeline(\n",
"    name='Dataproc submit PySpark job pipeline',\n",
"    description='Dataproc submit PySpark job pipeline'\n",
")\n",
"def dataproc_submit_pyspark_job_pipeline(\n",
"    project_id = PROJECT_ID,\n",
"    region = REGION,\n",
"    cluster_name = CLUSTER_NAME,\n",
"    main_python_file_uri = PYSPARK_FILE_URI,\n",
"    args = ARGS,\n",
"    pyspark_job='{}',\n",
"    job='{}',\n",
"    wait_interval='30'\n",
"):\n",
"    dataproc_submit_pyspark_job_op(\n",
"        project_id=project_id,\n",
"        region=region,\n",
"        cluster_name=cluster_name,\n",
"        main_python_file_uri=main_python_file_uri,\n",
"        args=args,\n",
"        pyspark_job=pyspark_job,\n",
"        job=job,\n",
"        wait_interval=wait_interval)\n",
"    "
]
},
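{
"cell_type": "markdown",
"metadata": {},
"source": [
"The step's `job_id` output can be passed to a later step in the same pipeline. A minimal sketch, assuming a hypothetical downstream component `report_op` that accepts the job ID as an input:\n",
"\n",
"```python\n",
"# Sketch only: report_op is a hypothetical downstream component.\n",
"@dsl.pipeline(name='Dataproc PySpark job with a downstream step')\n",
"def pyspark_then_report_pipeline():\n",
"    submit_task = dataproc_submit_pyspark_job_op(\n",
"        project_id=PROJECT_ID,\n",
"        region=REGION,\n",
"        cluster_name=CLUSTER_NAME,\n",
"        main_python_file_uri=PYSPARK_FILE_URI)\n",
"    report_op(job_id=submit_task.outputs['job_id'])\n",
"```"
]
},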
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Compile the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_func = dataproc_submit_pyspark_job_pipeline\n",
"pipeline_filename = pipeline_func.__name__ + '.zip'\n",
"import kfp.compiler as compiler\n",
"compiler.Compiler().compile(pipeline_func, pipeline_filename)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit the pipeline for execution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Specify pipeline argument values\n",
"arguments = {}\n",
"\n",
"#Get or create an experiment and submit a pipeline run\n",
"import kfp\n",
"client = kfp.Client()\n",
"experiment = client.create_experiment(EXPERIMENT_NAME)\n",
"\n",
"#Submit a pipeline run\n",
"run_name = pipeline_func.__name__ + ' run'\n",
"run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"\n",
"* [Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster)\n",
"* [PySparkJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/PySparkJob)\n",
"* [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs)\n",
"\n",
"## License\n",
"By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}