* Add driverIngressOptions in SparkApplication CRD
Signed-off-by: Bo (AIML) Yang <bo_yang6@apple.com>
* Update chart version to 1.3.0
Signed-off-by: Bo (AIML) Yang <bo_yang6@apple.com>
* Update helm chart README
Signed-off-by: Bo (AIML) Yang <bo_yang6@apple.com>
* Fix make detect-crds-drift
Signed-off-by: Bo (AIML) Yang <bo_yang6@apple.com>
* Update api-docs.md
Signed-off-by: Bo (AIML) Yang <bo_yang6@apple.com>
---------
Signed-off-by: Bo (AIML) Yang <bo_yang6@apple.com>
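For illustration, the new field might be used roughly as in the sketch below; the nested field names (`servicePort`, `servicePortName`, `ingressURLFormat`) and the application name are assumptions based on the commit subject and typical chart api-docs, not verified against this exact CRD version:

```yaml
# Hypothetical sketch of driverIngressOptions usage; field names assumed.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi            # example name
spec:
  driverIngressOptions:
  - servicePort: 4040       # assumed field: port exposed by the driver service
    servicePortName: spark-driver-ui
    ingressURLFormat: spark-ui.example.com  # assumed field: hostname for the ingress
```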
* Add emptyDir sizeLimit support
Signed-off-by: Jacob Salway <jacob.salway@rokt.com>
* Bump appVersion and add sizeLimit example
Signed-off-by: Jacob Salway <jacob.salway@rokt.com>
---------
Signed-off-by: Jacob Salway <jacob.salway@rokt.com>
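As an illustration of the new sizeLimit support, an emptyDir volume in a SparkApplication could be capped like this (the volume and application names here are made up for the sketch; `emptyDir.sizeLimit` itself is standard Kubernetes):

```yaml
# Sketch: emptyDir with a sizeLimit in a SparkApplication; names are examples.
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  volumes:
  - name: spark-scratch
    emptyDir:
      sizeLimit: 2Gi        # cap the emptyDir at 2Gi
  driver:
    volumeMounts:
    - name: spark-scratch
      mountPath: /tmp/scratch
```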
* Update workflow to publish Helm charts on chart changes, irrespective of image updates
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Fixed the chart name by adding a prefix
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
---------
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Add helm unit tests
Signed-off-by: Yi Chen <github@chenyicn.net>
* Fix: failed to render spark service account when extra annotations are specified
Signed-off-by: Yi Chen <github@chenyicn.net>
* Update developer guide
Signed-off-by: Yi Chen <github@chenyicn.net>
* Bump helm chart version
Signed-off-by: Yi Chen <github@chenyicn.net>
---------
Signed-off-by: Yi Chen <github@chenyicn.net>
* Added issue templates to the repo
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* removed Google CLA requirement
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Updated ghcr.io registry references in the workflow
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Added Pull request template
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Updated Main README.md with Kubeflow header and new Slack channel link
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Removed the License header and it will be replaced with Kubeflow guidelines
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Revert "Removed the License header and it will be replaced with Kubeflow guidelines"
This reverts commit b892f5c7fa0398cff8b85f961bd292313ef47953.
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* Reverted a README line for the GCP docs
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* pre-commit run -a updates
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* fixed the helm lint issue by upgrading the Helm chart version
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* fixed docker image tag and updated chart docs (#1969)
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
* rebase from master
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
---------
Signed-off-by: Vara Bonthu <vara.bonthu@gmail.com>
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
Co-authored-by: Andrii Chubatiuk <andrew.chubatiuk@gmail.com>
* Publish chart independently; incremented both chart and image versions to trigger builds of both
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
* bump chart version
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
---------
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
* added id for a build job to fix digests artifact creation
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
* configure user for helm publish
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
---------
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
* support multiple namespaces
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
* bump helm chart version
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
---------
Signed-off-by: Andrew Chubatiuk <andrew.chubatiuk@gmail.com>
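A sketch of how multiple job namespaces might be enabled through Helm values; the exact value key (`sparkJobNamespaces`) is an assumption about this chart version, not confirmed by the commit message:

```yaml
# values.yaml (key name assumed) -- watch SparkApplications in several namespaces
sparkJobNamespaces:
  - team-a
  - team-b
```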
* fix: resolve issue #1723 where spark-operator does not work with Volcano on OCP
Signed-off-by: disaster37 <linuxworkgroup@hotmail.com>
* Update volcano_scheduler.go
---------
Signed-off-by: disaster37 <linuxworkgroup@hotmail.com>
Resolves #1344
Spark 3.4 supports IPv6:
- https://github.com/apache/spark/pull/36868
This change makes the operator support IPv6 as well. I can confirm that it submits Spark jobs successfully in an IPv6-only environment, although the following environment variables must be added to the operator Deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-on-k8s-spark-operator
spec:
  template:
    spec:
      containers:
      - name: spark-operator
        env:
        - name: _JAVA_OPTIONS
          value: "-Djava.net.preferIPv6Addresses=true"
        - name: KUBERNETES_DISABLE_HOSTNAME_VERIFICATION
          value: "true"
```