Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.


This is not an officially supported Google product.

Community

Project Status

Project status: beta

Current API version: v1beta2

If you are currently using the v1alpha1 or v1beta1 version of the APIs in your manifests, please update them to use the v1beta2 version by changing apiVersion: "sparkoperator.k8s.io/<version>" to apiVersion: "sparkoperator.k8s.io/v1beta2". You will also need to delete the previous version of the CustomResourceDefinitions named sparkapplications.sparkoperator.k8s.io and scheduledsparkapplications.sparkoperator.k8s.io, and replace them with the v1beta2 version either by installing the latest version of the operator or by running kubectl create -f manifest/crds.
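
Concretely, assuming you run the commands from the root of this repository and have no old-version objects you still need to keep, the CRD replacement looks like this:

$ kubectl delete crd sparkapplications.sparkoperator.k8s.io
$ kubectl delete crd scheduledsparkapplications.sparkoperator.k8s.io
$ kubectl create -f manifest/crds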

Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is implemented using a Kubernetes Mutating Admission Webhook, which became beta in Kubernetes 1.9. The mutating admission webhook is enabled by default if you install the operator using the Helm chart. Check out the Quick Start Guide on how to enable the webhook.
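
As an illustration of what the webhook enables, the snippet below mounts a ConfigMap-backed volume into the driver pod. The ConfigMap name my-cm and the mount path are placeholders; the volumes and volumeMounts fields follow the SparkApplication spec described in the User Guide:

spec:
  volumes:
    - name: config-vol
      configMap:
        name: my-cm
  driver:
    volumeMounts:
      - name: config-vol
        mountPath: /opt/spark/mycm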

Prerequisites

Kubernetes version 1.13 or above, as listed in the Version Matrix below.

Installation

The easiest way to install the Kubernetes Operator for Apache Spark is to use the Helm chart.

$ helm repo add incubator https://storage.googleapis.com/kubernetes-charts-incubator
$ helm install incubator/sparkoperator --namespace spark-operator
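
Assuming the operator was deployed into the spark-operator namespace as above, you can verify that its pod is running with:

$ kubectl get pods -n spark-operator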

For configuration options available in the Helm chart, please refer to Configuration.

Version Matrix

The following table lists the most recent few versions of the operator.

Operator Version             | API Version | Kubernetes Version | Base Spark Version | Operator Image Tag
---------------------------- | ----------- | ------------------ | ------------------ | ------------------
latest (master HEAD)         | v1beta2     | 1.13+              | 2.4.5-SNAPSHOT     | latest
v1beta2-1.0.2-2.4.5-SNAPSHOT | v1beta2     | 1.13+              | 2.4.5-SNAPSHOT     | v1beta2-1.0.2-2.4.5-SNAPSHOT
v1beta2-1.0.1-2.4.4          | v1beta2     | 1.13+              | 2.4.4              | v1beta2-1.0.1-2.4.4
v1beta2-1.0.0-2.4.4          | v1beta2     | 1.13+              | 2.4.4              | v1beta2-1.0.0-2.4.4
v1beta1-0.9.0                | v1beta1     | 1.13+              | 2.4.0              | v2.4.0-v1beta1-0.9.0

When installing using the Helm chart, you can choose to use a specific image tag instead of the default one, using the following option:

--set operatorVersion=<operator image tag>
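
For example, to pin the operator to one of the released tags from the table above:

$ helm install incubator/sparkoperator --namespace spark-operator --set operatorVersion=v1beta2-1.0.1-2.4.4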

Get Started

Get started quickly with the Kubernetes Operator for Apache Spark using the Quick Start Guide.

If you are running the Kubernetes Operator for Apache Spark on Google Kubernetes Engine and want to use Google Cloud Storage (GCS) and/or BigQuery for reading/writing data, also refer to the GCP guide.

For more information, check the Design, API Specification and detailed User Guide.

Overview

The Kubernetes Operator for Apache Spark aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on Kubernetes. It uses Kubernetes custom resources for specifying, running, and surfacing the status of Spark applications. For a complete reference of the custom resource definitions, please refer to the API Definition. For details on its design, please refer to the design doc. It requires Spark 2.3 or above, which supports Kubernetes as a native scheduler backend.
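
As a concrete illustration, below is a minimal SparkApplication modeled on the spark-pi example in the examples directory. The image tag, jar path, and spark service account are assumptions matching the Spark 2.4.4 base image and the Quick Start Guide setup; adjust them for your cluster:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: gcr.io/spark-operator/spark:v2.4.4
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar
  sparkVersion: "2.4.4"
  restartPolicy:
    type: Never
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark
  executor:
    cores: 1
    instances: 1
    memory: 512m

Once saved to a file, it can be submitted with kubectl apply -f <file> or with sparkctl create <file>.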

The Kubernetes Operator for Apache Spark currently supports the following features:

  • Supports Spark 2.3 and up.
  • Enables declarative specification and management of applications through custom resources.
  • Automatically runs spark-submit on behalf of users for each SparkApplication eligible for submission.
  • Provides native cron support for running scheduled applications (see the example after this list).
  • Supports customization of Spark pods beyond what Spark natively supports via the mutating admission webhook, e.g., mounting ConfigMaps and volumes, and setting pod affinity/anti-affinity.
  • Supports automatic application re-submission for SparkApplication objects with updated specifications.
  • Supports automatic application restart with a configurable restart policy.
  • Supports automatic retries of failed submissions with optional linear back-off.
  • Supports mounting local Hadoop configuration as a Kubernetes ConfigMap automatically via sparkctl.
  • Supports automatically staging local application dependencies to Google Cloud Storage (GCS) via sparkctl.
  • Supports collecting and exporting application-level metrics and driver/executor metrics to Prometheus.
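
As an example of the cron support referenced above, a ScheduledSparkApplication wraps an ordinary application spec in a template and adds a schedule. This minimal sketch is modeled on the spark-pi-scheduled example and reuses the image and jar assumptions from the earlier spark-pi sketch:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
  name: spark-pi-scheduled
  namespace: default
spec:
  schedule: "@every 5m"
  concurrencyPolicy: Allow
  template:
    type: Scala
    mode: cluster
    image: gcr.io/spark-operator/spark:v2.4.4
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar
    sparkVersion: "2.4.4"
    restartPolicy:
      type: Never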

Contributing

Please check out CONTRIBUTING.md and the Developer Guide.