pipelines/components/gcp/dataproc/submit_hadoop_job/sample.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Name\n",
"Data preparation using Hadoop MapReduce on YARN with Cloud Dataproc\n",
"\n",
"# Label\n",
"Cloud Dataproc, GCP, Cloud Storage, Hadoop, YARN, Apache, MapReduce\n",
"\n",
"\n",
"# Summary\n",
"A Kubeflow Pipeline component to prepare data by submitting an Apache Hadoop MapReduce job on Apache Hadoop YARN to Cloud Dataproc.\n",
"\n",
"# Details\n",
"## Intended use\n",
"Use the component to run an Apache Hadoop MapReduce job as one preprocessing step in a Kubeflow Pipeline. \n",
"\n",
"## Runtime arguments\n",
"| Argument | Description | Optional | Data type | Accepted values | Default |\n",
"|----------|-------------|----------|-----------|-----------------|---------|\n",
"| project_id | The Google Cloud Platform (GCP) project ID that the cluster belongs to. | No | GCPProjectID | | |\n",
"| region | The Dataproc region to handle the request. | No | GCPRegion | | |\n",
"| cluster_name | The name of the cluster to run the job. | No | String | | |\n",
"| main_jar_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the JAR file containing the main class to execute. | No | List | | |\n",
"| main_class | The name of the driver's main class. The JAR file that contains the class must be either in the default CLASSPATH or specified in `hadoop_job.jarFileUris`. | No | String | | |\n",
"| args | The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None |\n",
"| hadoop_job | The payload of a [HadoopJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/HadoopJob). | Yes | Dict | | None |\n",
"| job | The payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs). | Yes | Dict | | None |\n",
"| wait_interval | The number of seconds to pause between polling the operation. | Yes | Integer | | 30 |\n",
"\n",
"Note: \n",
"`main_jar_file_uri`: The examples for the files are : \n",
"- `gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar` \n",
"- `hdfs:/tmp/test-samples/custom-wordcount.jarfile:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar`\n",
"\n",
"\n",
"## Output\n",
"Name | Description | Type\n",
":--- | :---------- | :---\n",
"job_id | The ID of the created job. | String\n",
"\n",
"## Cautions & requirements\n",
"To use the component, you must:\n",
"* Set up a GCP project by following this [guide](https://cloud.google.com/dataproc/docs/guides/setup-project).\n",
"* [Create a new cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster).\n",
"* The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details.\n",
"* Grant the Kubeflow user service account the role `roles/dataproc.editor` on the project.\n",
"\n",
"## Detailed description\n",
"\n",
"This component creates a Hadoop job from [Dataproc submit job REST API](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs/submit).\n",
"\n",
"Follow these steps to use the component in a pipeline:\n",
"\n",
"1. Install the Kubeflow Pipeline SDK:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"\n",
"!pip3 install kfp --upgrade"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"2. Load the component using KFP SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import kfp.components as comp\n",
"\n",
"dataproc_submit_hadoop_job_op = comp.load_component_from_url(\n",
" 'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0-rc.1/components/gcp/dataproc/submit_hadoop_job/component.yaml')\n",
"help(dataproc_submit_hadoop_job_op)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Sample\n",
"Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template.\n",
"\n",
"\n",
"### Setup a Dataproc cluster\n",
"[Create a new Dataproc cluster](https://cloud.google.com/dataproc/docs/guides/create-cluster) (or reuse an existing one) before running the sample code.\n",
"\n",
"\n",
"### Prepare a Hadoop job\n",
"Upload your Hadoop JAR file to a Cloud Storage bucket. In the sample, we will use a JAR file that is preinstalled in the main cluster, so there is no need to provide `main_jar_file_uri`. \n",
"\n",
"Here is the [WordCount example source code](https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordCount.java).\n",
"\n",
"To package a self-contained Hadoop MapReduce application from the source code, follow the [MapReduce Tutorial](https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html).\n",
"\n",
"\n",
"### Set sample parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"tags": [
"parameters"
]
},
"outputs": [],
"source": [
"PROJECT_ID = '<Please put your project ID here>'\n",
"CLUSTER_NAME = '<Please put your existing cluster name here>'\n",
"OUTPUT_GCS_PATH = '<Please put your output GCS path here>'\n",
"REGION = 'us-central1'\n",
"MAIN_CLASS = 'org.apache.hadoop.examples.WordCount'\n",
"INTPUT_GCS_PATH = 'gs://ml-pipeline-playground/shakespeare1.txt'\n",
"EXPERIMENT_NAME = 'Dataproc - Submit Hadoop Job'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Insepct Input Data\n",
"The input file is a simple text file:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!gsutil cat $INTPUT_GCS_PATH"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Clean up the existing output files (optional)\n",
"This is needed because the sample code requires the output folder to be a clean folder. To continue to run the sample, make sure that the service account of the notebook server has access to the `OUTPUT_GCS_PATH`.\n",
"\n",
"CAUTION: This will remove all blob files under `OUTPUT_GCS_PATH`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!gsutil rm $OUTPUT_GCS_PATH/**"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Example pipeline that uses the component"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import kfp.dsl as dsl\n",
"import json\n",
"@dsl.pipeline(\n",
" name='Dataproc submit Hadoop job pipeline',\n",
" description='Dataproc submit Hadoop job pipeline'\n",
")\n",
"def dataproc_submit_hadoop_job_pipeline(\n",
" project_id = PROJECT_ID, \n",
" region = REGION,\n",
" cluster_name = CLUSTER_NAME,\n",
" main_jar_file_uri = '',\n",
" main_class = MAIN_CLASS,\n",
" args = json.dumps([\n",
" INTPUT_GCS_PATH,\n",
" OUTPUT_GCS_PATH\n",
" ]), \n",
" hadoop_job='', \n",
" job='{}', \n",
" wait_interval='30'\n",
"):\n",
" dataproc_submit_hadoop_job_op(\n",
" project_id=project_id, \n",
" region=region, \n",
" cluster_name=cluster_name, \n",
" main_jar_file_uri=main_jar_file_uri, \n",
" main_class=main_class,\n",
" args=args, \n",
" hadoop_job=hadoop_job, \n",
" job=job, \n",
" wait_interval=wait_interval)"
]
},
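{
"cell_type": "markdown",
"metadata": {},
"source": [
"The component also returns a `job_id` output (see the Output table above), which downstream steps in the same pipeline can consume. A minimal sketch, not part of the original sample; the `log-job-id` op and its image are hypothetical:\n",
"\n",
"```python\n",
"    # Inside the pipeline function, keep a handle on the submit task:\n",
"    submit_task = dataproc_submit_hadoop_job_op(\n",
"        project_id=project_id,\n",
"        region=region,\n",
"        cluster_name=cluster_name,\n",
"        main_class=main_class,\n",
"        args=args)\n",
"    # Any downstream op can take the job ID as a regular input.\n",
"    dsl.ContainerOp(\n",
"        name='log-job-id',\n",
"        image='bash:4.4',\n",
"        command=['sh', '-c'],\n",
"        arguments=['echo %s' % submit_task.outputs['job_id']])\n",
"```"
]
},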
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Compile the pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pipeline_func = dataproc_submit_hadoop_job_pipeline\n",
"pipeline_filename = pipeline_func.__name__ + '.zip'\n",
"import kfp.compiler as compiler\n",
"compiler.Compiler().compile(pipeline_func, pipeline_filename)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Submit the pipeline for execution"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Specify pipeline argument values\n",
"arguments = {}\n",
"\n",
"#Get or create an experiment and submit a pipeline run\n",
"import kfp\n",
"client = kfp.Client()\n",
"experiment = client.create_experiment(EXPERIMENT_NAME)\n",
"\n",
"#Submit a pipeline run\n",
"run_name = pipeline_func.__name__ + ' run'\n",
"run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)"
]
},
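{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Wait for the run to finish (optional)\n",
"Before inspecting the output, you can block until the run completes. A minimal sketch, assuming the `kfp.Client` created above and that `client.run_pipeline` returned a run object with an `id` attribute; the timeout value is arbitrary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Block until the run finishes (or the 1-hour timeout elapses).\n",
"client.wait_for_run_completion(run_result.id, timeout=3600)"
]
},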
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Inspect the output\n",
"The sample in the notebook will count the words in the input text and save them in sharded files. The command to inspect the output is:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!gsutil cat $OUTPUT_GCS_PATH/*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## References\n",
"* [Component Python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/dataproc/_submit_hadoop_job.py)\n",
"* [Component Docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)\n",
"* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/dataproc/submit_hadoop_job/sample.ipynb)\n",
"* [Dataproc HadoopJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/HadoopJob)\n",
"\n",
"## License\n",
"By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}