Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.

This is not an officially supported Google product.

Project Status

Project status: alpha

The Kubernetes Operator for Apache Spark is still under active development. Backward compatibility of the APIs is not guaranteed for alpha releases.

Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is currently experimental and implemented using a Kubernetes Mutating Admission Webhook, which became beta in Kubernetes 1.9. The mutating admission webhook is disabled by default but can be enabled if there are needs for pod customization. Check out the Quick Start Guide on how to enable the webhook.

Prerequisites

  • Version >= 1.8 of Kubernetes.
  • Version >= 1.9 of Kubernetes if using the mutating admission webhook for Spark pod customization.

The Kubernetes Operator for Apache Spark relies on garbage collection support for custom resources that is available in Kubernetes 1.8+ and optionally the Mutating Admission Webhook which is available in Kubernetes 1.9+.

Due to this bug in Kubernetes 1.9 and earlier, CRD objects with escaped quotes (e.g., spark.ui.port\") in map keys can cause serialization problems in the API server. Make sure your SparkApplication objects contain no such escaped quotes, particularly if you run Kubernetes prior to 1.10.

Installation

The easiest way to install the Kubernetes Operator for Apache Spark is to use the Helm chart.

```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ helm install incubator/sparkoperator
```
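Once the operator is running, a Spark application is described declaratively as a custom resource. A minimal sketch of a SparkApplication manifest follows; the name, image, main class, jar path, and resource values are illustrative assumptions, not verbatim from this repository (see the examples directory for authoritative manifests):

```yaml
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: SparkApplication
metadata:
  name: spark-pi            # hypothetical example name
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: gcr.io/my-project/spark:v2.3.0   # assumed image; substitute your own Spark 2.3+ image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
  driver:
    cores: 0.1
    memory: 512m
  executor:
    cores: 1
    instances: 1
    memory: 512m
  restartPolicy: Never
```

Such a manifest can be submitted with `kubectl apply -f` or with the `sparkctl create` command, after which the operator runs spark-submit on the user's behalf.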

Get Started

Get started quickly with the Kubernetes Operator for Apache Spark using the Quick Start Guide.

If you are running the Kubernetes Operator for Apache Spark on Google Kubernetes Engine and want to use Google Cloud Storage (GCS) and/or BigQuery for reading/writing data, also refer to the GCP guide.

For more information, check the Design, API Specification and detailed User Guide.

Overview

The Kubernetes Operator for Apache Spark aims to make specifying and running Spark applications as easy and idiomatic as running other workloads on Kubernetes. It uses Kubernetes custom resources for specifying, running, and surfacing the status of Spark applications. For a complete reference of the custom resource definitions, please refer to the API Definition. For details on its design, please refer to the design doc. It requires Spark 2.3 or above, which supports Kubernetes as a native scheduler backend.

The Kubernetes Operator for Apache Spark currently supports the following list of features:

  • Supports Spark 2.3 and up.
  • Enables declarative application specification and management of applications through custom resources.
  • Automatically runs spark-submit on behalf of users for each SparkApplication eligible for submission.
  • Provides native cron support for running scheduled applications.
  • Supports customization of Spark pods beyond what Spark natively supports through the mutating admission webhook, e.g., mounting ConfigMaps and volumes, and setting pod affinity/anti-affinity.
  • Supports automatic re-submission of SparkApplication objects with updated specifications.
  • Supports automatic application restart with a configurable restart policy.
  • Supports automatic retries of failed submissions with optional linear back-off.
  • Supports mounting local Hadoop configuration as a Kubernetes ConfigMap automatically via sparkctl.
  • Supports automatically staging local application dependencies to Google Cloud Storage (GCS) via sparkctl.
  • Supports collecting and exporting application-level metrics and driver/executor metrics to Prometheus.
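The cron support in the list above is exposed through a separate ScheduledSparkApplication resource. A hedged sketch, assuming the v1alpha1 API group; the name, schedule, and template fields are illustrative:

```yaml
apiVersion: sparkoperator.k8s.io/v1alpha1
kind: ScheduledSparkApplication
metadata:
  name: spark-pi-scheduled        # hypothetical example name
spec:
  schedule: "@every 10m"          # standard cron expressions also work
  concurrencyPolicy: Allow
  template:                       # same shape as a SparkApplication spec
    type: Scala
    mode: cluster
    image: gcr.io/my-project/spark:v2.3.0   # assumed image
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
    restartPolicy: Never
```

The operator creates a SparkApplication from the template on each scheduled trigger, and sparkctl can also create a one-off SparkApplication from a ScheduledSparkApplication, as noted in the feature list.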

Contributing

Please check out CONTRIBUTING.md and the Developer Guide.