Cherry pick PRs to v1.3-branch (#1920)

* Add kimwnasptd to the owners file (#1898)

Signed-off-by: Kimonas Sotirchos <kimwnasptd@arrikto.com>

* docs: Add 1.3 release retrospective (#1900)

The original document is:
https://docs.google.com/document/d/1KRF4IE48Ueb61DPBKK6fryRWSaNz_urXQOxWf4G_qD8

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Change user name to user@example.com in README (#1902)

* Change user name to user@example.com in README

* Fix user namespace

* Change user name to email

* Add linguist configuration (#1905)

* Update Knative serving/eventing to 0.17 (#1768)

Added README to explain changes from upstream.

YAML anchors and aliases are also removed/expanded as they
sometimes cause breakages in kustomize v3.9+.
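As a hypothetical illustration of what "removed/expanded" means here (the keys below are invented, not taken from the upstream manifests):

```yaml
# With an anchor (&) and alias (*) -- some kustomize v3.9+ versions mishandle these:
defaults: &defaults
  memory: 1Gi
limits: *defaults
---
# Expanded form, as now committed -- identical content, plain YAML:
defaults:
  memory: 1Gi
limits:
  memory: 1Gi
```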

* Update cert-manager manifests to 1.3.1 version (#1820)

* update cert-manager manifests to latest version

* review: make suggested change and update to 1.3.1

* Update cert-manager to 1.4.0

* Make suggested changes

* Downgrade to 1.3.1

* Fix image tags and add comment about preserveUnknownFields patch

* Remove images section and cleanup
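The `preserveUnknownFields` patch mentioned above typically looks something like the sketch below; the CRD name and exact mechanism are assumptions for illustration, not the committed patch:

```yaml
# Sketch of a strategic-merge patch clearing preserveUnknownFields on a
# v1beta1 CRD; the target CRD name here is illustrative.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificaterequests.cert-manager.io
spec:
  preserveUnknownFields: false
```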

* cert-manager: Split kubeflow-issuer into separate kustomization (#1916)

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
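With the split, cert-manager itself and the Kubeflow issuer are applied from separate kustomizations (commands as they appear in the updated README below):

```sh
kustomize build common/cert-manager/cert-manager/base | kubectl apply -f -
kustomize build common/cert-manager/kubeflow-issuer/base | kubectl apply -f -
```

This lets distributions that bring their own cert-manager skip the first command and still install the issuer.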

* Upgrade Istio to 1.9.5 (#1908)

* common: Upgrade to Istio 1.9.5

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* common: Fix all references to new Istio 1.9.5

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Knative 0.17 (#1917)

* knative: Document how to update Knative

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* knative: Restructure to eventing and serving kustomizations

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* knative: Apply changes according to guide

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* knative: Remove old kustomizations

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* knative: Enable sidecar injection for serving

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* knative: Fix example kustomization and guide

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
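One common way to enable Istio sidecar injection for Knative Serving components is labeling the namespace; the sketch below shows that mechanism, though the actual commit may instead use pod-level annotations in the serving kustomization:

```yaml
# Sketch (assumption): automatic Istio sidecar injection for the
# knative-serving namespace via the standard injection label.
apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
  labels:
    istio-injection: enabled
```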

* Sync manifests for 1.3.1 (#1919)

* Notebook Controller: Sync manifests

Sync manifests for application "Notebook Controller".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/notebook-controller/config
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Tensorboard Controller: Sync manifests

Sync manifests for application "Tensorboard Controller".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/tensorboard-controller/config
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Central Dashboard: Sync manifests

Sync manifests for application "Central Dashboard".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/centraldashboard/manifests
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Profiles + KFAM: Sync manifests

Sync manifests for application "Profiles + KFAM".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/profile-controller/config
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* PodDefaults Webhook: Sync manifests

Sync manifests for application "PodDefaults Webhook".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/admission-webhook/manifests
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Jupyter Web App: Sync manifests

Sync manifests for application "Jupyter Web App".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/crud-web-apps/jupyter/manifests
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Tensorboards Web App: Sync manifests

Sync manifests for application "Tensorboards Web App".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/crud-web-apps/tensorboards/manifests
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Volumes Web App: Sync manifests

Sync manifests for application "Volumes Web App".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kubeflow
- Path: components/crud-web-apps/volumes/manifests
- Revision: v1.3.1-rc.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Katib: Sync manifests

Sync manifests for application "Katib".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/katib
- Path: manifests/v1beta1
- Revision: v0.11.1

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Kubeflow Pipelines: Sync manifests

Sync manifests for application "Kubeflow Pipelines".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/pipelines
- Path: manifests/kustomize
- Revision: 1.5.1

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* Kubeflow Tekton Pipelines: Sync manifests

Sync manifests for application "Kubeflow Tekton Pipelines".
Upstream manifests are copied from:
- Repo: https://github.com/kubeflow/kfp-tekton
- Path: manifests/kustomize
- Revision: v0.8.0

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>

* README: Update component versions

Signed-off-by: Yannis Zarkadas <yanniszark@arrikto.com>
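The per-component sync steps above all follow the same pattern: check out the upstream repo at the pinned revision, then replace the vendored `upstream/` directory with a fresh copy. A minimal local simulation of the copy step (scratch directories stand in for the real clone of kubeflow/kubeflow at v1.3.1-rc.0 and the `apps/<component>` path):

```shell
#!/bin/sh
# Simulate syncing vendored manifests: replace DST/upstream with a fresh
# copy of the checked-out SRC tree. In the real flow, SRC would be e.g.
# components/notebook-controller/config inside a pinned upstream checkout.
set -eu
SRC=$(mktemp -d)            # stand-in for the upstream checkout path
DST=$(mktemp -d)            # stand-in for apps/<component>
printf 'kind: Kustomization\n' > "$SRC/kustomization.yaml"
rm -rf "$DST/upstream"      # drop the previously vendored copy
cp -r "$SRC" "$DST/upstream"
cat "$DST/upstream/kustomization.yaml"
```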
Yannis Zarkadas, 2021-06-25 19:32:53 +03:00, committed by GitHub
parent c41f71e01e
commit 4b4281900b
161 changed files with 35487 additions and 11677 deletions

.gitattributes (new file)

@@ -0,0 +1,2 @@
*.yaml linguist-detectable=true
*.json linguist-detectable=true

OWNERS

@@ -1,12 +1,12 @@
approvers:
- elikatsis
- kimwnasptd
- PatrickXYS
- StefanoFioravanzo
- vkoukis
- yanniszark
reviewers:
- elikatsis
- kimwnasptd
- PatrickXYS
- StefanoFioravanzo
- vkoukis
- yanniszark

@@ -44,21 +44,20 @@ This repo periodically syncs all official Kubeflow components from their respect
| TFJob Operator | apps/tf-training/upstream | [v1.1.0](https://github.com/kubeflow/tf-operator/tree/v1.1.0/manifests) |
| PyTorch Operator | apps/pytorch-job/upstream | [v0.7.0](https://github.com/kubeflow/pytorch-operator/tree/v0.7.0/manifests) |
| MPI Operator | apps/mpi-job/upstream | [b367aa55886d2b042f5089df359d8e067e49e8d1](https://github.com/kubeflow/mpi-operator/tree/b367aa55886d2b042f5089df359d8e067e49e8d1/manifests) |
| MXNet Operator | apps/mxnet-job/upstream | [v1.1.0](https://github.com/kubeflow/mxnet-operator/v1.1.0/manifests) |
| MXNet Operator | apps/mxnet-job/upstream | [v1.1.0](https://github.com/kubeflow/mxnet-operator/tree/v1.1.0/manifests) |
| XGBoost Operator | apps/xgboost-job/upstream | [v0.2.0](https://github.com/kubeflow/xgboost-operator/tree/v0.2.0/manifests) |
| Notebook Controller | apps/jupyter/notebook-controller/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/notebook-controller/config) |
| Tensorboard Controller | apps/tensorboard/tensorboard-controller/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/tensorboard-controller/config) |
| Central Dashboard | apps/centraldashboard/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/centraldashboard/manifests) |
| Profiles + KFAM | apps/profiles/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/profile-controller/config) |
| PodDefaults Webhook | apps/admission-webhook/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/admission-webhook/manifests) |
| Jupyter Web App | apps/jupyter/jupyter-web-app/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/crud-web-apps/jupyter/manifests) |
| Tensorboards Web App | apps/tensorboard/tensorboards-web-app/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/crud-web-apps/tensorboards/manifests) |
| Volumes Web App | apps/volumes-web-app/upstream | [v1.3.0-rc.1](https://github.com/kubeflow/kubeflow/tree/v1.3.0-rc.1/components/crud-web-apps/volumes/manifests) |
| Katib | apps/katib/upstream | [origin/release-0.11 (7d7c34c72ab8bce74262c7abbe55ef9312291219)](https://github.com/kubeflow/katib/tree/7d7c34c72ab8bce74262c7abbe55ef9312291219/manifests/v1beta1) |
| KFServing | apps/kfserving/upstream | [origin/release-0.5 (e189a510121c09f764f749143b80f6ee6baaf48b)](https://github.com/kubeflow/kfserving/tree/e189a510121c09f764f749143b80f6ee6baaf48b/config) |
| Kubeflow Pipelines | apps/pipeline/upstream | [1.5.0](https://github.com/kubeflow/pipelines/tree/1.5.0/manifests/kustomize) |
| Kubeflow Tekton Pipelines | apps/kfp-tekton/upstream | [v0.8.0-rc0](https://github.com/kubeflow/kfp-tekton/tree/v0.8.0-rc0/manifests/kustomize) |
| Notebook Controller | apps/jupyter/notebook-controller/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/notebook-controller/config) |
| Tensorboard Controller | apps/tensorboard/tensorboard-controller/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/tensorboard-controller/config) |
| Central Dashboard | apps/centraldashboard/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/centraldashboard/manifests) |
| Profiles + KFAM | apps/profiles/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/profile-controller/config) |
| PodDefaults Webhook | apps/admission-webhook/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/admission-webhook/manifests) |
| Jupyter Web App | apps/jupyter/jupyter-web-app/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/crud-web-apps/jupyter/manifests) |
| Tensorboards Web App | apps/tensorboard/tensorboards-web-app/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/crud-web-apps/tensorboards/manifests) |
| Volumes Web App | apps/volumes-web-app/upstream | [v1.3.1-rc.0](https://github.com/kubeflow/kubeflow/tree/v1.3.1-rc.0/components/crud-web-apps/volumes/manifests) |
| Katib | apps/katib/upstream | [v0.11.1](https://github.com/kubeflow/katib/tree/v0.11.1/manifests/v1beta1) |
| KFServing | apps/kfserving/upstream | [e189a510121c09f764f749143b80f6ee6baaf48b (release-0.5)](https://github.com/kubeflow/kfserving/tree/e189a510121c09f764f749143b80f6ee6baaf48b/config) |
| Kubeflow Pipelines | apps/pipeline/upstream | [1.5.1](https://github.com/kubeflow/pipelines/tree/1.5.1/manifests/kustomize) |
| Kubeflow Tekton Pipelines | apps/kfp-tekton/upstream | [v0.8.0](https://github.com/kubeflow/kfp-tekton/tree/v0.8.0/manifests/kustomize) |
## Installation
Starting Kubeflow 1.3, the Manifests WG provides two options for installing Kubeflow official components and common services with kustomize. The aim is to help end users install easily and to help distribution owners build their opinionated distributions from a tested starting point:
@@ -71,7 +70,7 @@ Option 2 targets customization and ability to pick and choose individual compone
The `example` directory contains an example kustomization that enables the single-command installation.
:warning: In both options, we use a default username (`user`) and password (`12341234`). For any production Kubeflow deployment, you should change the default password by following [the relevant section](#change-default-user-password).
:warning: In both options, we use a default email (`user@example.com`) and password (`12341234`). For any production Kubeflow deployment, you should change the default password by following [the relevant section](#change-default-user-password).
### Prerequisites
@@ -116,9 +115,8 @@ admission webhooks.
Install cert-manager:
```sh
kustomize build common/cert-manager/cert-manager-kube-system-resources/base | kubectl apply -f -
kustomize build common/cert-manager/cert-manager-crds/base | kubectl apply -f -
kustomize build common/cert-manager/cert-manager/overlays/self-signed | kubectl apply -f -
kustomize build common/cert-manager/cert-manager/base | kubectl apply -f -
kustomize build common/cert-manager/kubeflow-issuer/base | kubectl apply -f -
```
#### Istio
@@ -129,14 +127,14 @@ network authorization and implement routing policies.
Install Istio:
```sh
kustomize build common/istio-1-9-0/istio-crds/base | kubectl apply -f -
kustomize build common/istio-1-9-0/istio-namespace/base | kubectl apply -f -
kustomize build common/istio-1-9-0/istio-install/base | kubectl apply -f -
kustomize build common/istio-1-9/istio-crds/base | kubectl apply -f -
kustomize build common/istio-1-9/istio-namespace/base | kubectl apply -f -
kustomize build common/istio-1-9/istio-install/base | kubectl apply -f -
```
#### Dex
Dex is an OpenID Connect (OIDC) identity provider with multiple authentication backends. In this default installation, it includes a static user named `user`. By default, the user's password is `12341234`. For any production Kubeflow deployment, you should change the default password by following [the relevant section](#change-default-user-password).
Dex is an OpenID Connect (OIDC) identity provider with multiple authentication backends. In this default installation, it includes a static user with email `user@example.com`. By default, the user's password is `12341234`. For any production Kubeflow deployment, you should change the default password by following [the relevant section](#change-default-user-password).
Install Dex:
@@ -159,16 +157,14 @@ Knative is used by the KFServing official Kubeflow component.
Install Knative Serving:
```sh
kustomize build common/knative/knative-serving-crds/base | kubectl apply -f -
kustomize build common/knative/knative-serving-install/base | kubectl apply -f -
kustomize build common/istio-1-9-0/cluster-local-gateway/base | kubectl apply -f -
kustomize build common/knative/knative-serving/base | kubectl apply -f -
kustomize build common/istio-1-9/cluster-local-gateway/base | kubectl apply -f -
```
Optionally, you can install Knative Eventing which can be used for inference request logging:
```sh
kustomize build common/knative/knative-eventing-crds/base | kubectl apply -f -
kustomize build common/knative/knative-eventing-install/base | kubectl apply -f -
kustomize build common/knative/knative-eventing/base | kubectl apply -f -
```
#### Kubeflow Namespace
@@ -204,7 +200,7 @@ well.
Install istio resources:
```sh
kustomize build common/istio-1-9-0/kubeflow-istio-resources/base | kubectl apply -f -
kustomize build common/istio-1-9/kubeflow-istio-resources/base | kubectl apply -f -
```
#### Kubeflow Pipelines
@@ -361,7 +357,7 @@ kustomize build apps/xgboost-job/upstream/overlays/kubeflow | kubectl apply -f -
#### User Namespace
Finally, create a new namespace for the default user (named `user`).
Finally, create a new namespace for the default user (named `kubeflow-user-example-com`).
```sh
kustomize build common/user-namespace/base | kubectl apply -f -

@@ -16,7 +16,7 @@ commonLabels:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/admission-webhook
newName: public.ecr.aws/j1r0q0g6/notebooks/admission-webhook
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0
namespace: kubeflow
generatorOptions:
disableNameSuffixHash: true

@@ -18,7 +18,7 @@ commonLabels:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/central-dashboard
newName: public.ecr.aws/j1r0q0g6/notebooks/central-dashboard
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0
configMapGenerator:
- envs:
- params.env

@@ -17,23 +17,23 @@
spawnerFormDefaults:
image:
# The container Image for the user's Jupyter Notebook
value: public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-scipy:v1.3.0-rc.1
value: public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-scipy:v1.3.1-rc.0
# The list of available standard container Images
options:
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-scipy:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-pytorch-full:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-pytorch-cuda-full:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-tensorflow-full:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-tensorflow-cuda-full:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-scipy:v1.3.1-rc.0
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-pytorch-full:v1.3.1-rc.0
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-pytorch-cuda-full:v1.3.1-rc.0
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-tensorflow-full:v1.3.1-rc.0
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/jupyter-tensorflow-cuda-full:v1.3.1-rc.0
imageGroupOne:
# The container Image for the user's Group One Server
# The annotation `notebooks.kubeflow.org/http-rewrite-uri: /`
# is applied to notebook in this group, configuring
# the Istio rewrite for containers that host their web UI at `/`
value: public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.3.0-rc.1
value: public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.3.1-rc.0
# The list of available standard container Images
options:
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/codeserver-python:v1.3.1-rc.0
imageGroupTwo:
# The container Image for the user's Group Two Server
# The annotation `notebooks.kubeflow.org/http-rewrite-uri: /`
@@ -42,10 +42,10 @@ spawnerFormDefaults:
# The annotation `notebooks.kubeflow.org/http-headers-request-set`
# is applied to notebook in this group, configuring Istio
# to add the `X-RStudio-Root-Path` header to requests
value: public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/rstudio-tidyverse:v1.3.0-rc.1
value: public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/rstudio-tidyverse:v1.3.1-rc.0
# The list of available standard container Images
options:
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/rstudio-tidyverse:v1.3.0-rc.1
- public.ecr.aws/j1r0q0g6/notebooks/notebook-servers/rstudio-tidyverse:v1.3.1-rc.0
allowCustomImage: true
imagePullPolicy:
value: IfNotPresent

@@ -17,7 +17,7 @@ spec:
volumeMounts:
- mountPath: /etc/config
name: config-volume
- mountPath: /src/apps/default/static/assets
- mountPath: /src/apps/default/static/assets/logos
name: logos-volume
env:
- name: APP_PREFIX

@@ -23,7 +23,7 @@ commonLabels:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/jupyter-web-app
newName: public.ecr.aws/j1r0q0g6/notebooks/jupyter-web-app
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0
# We need the name to be unique without the suffix because the original name is what
# gets used with patches
configMapGenerator:

@@ -5,4 +5,4 @@ resources:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/notebook-controller
newName: public.ecr.aws/j1r0q0g6/notebooks/notebook-controller
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0

@@ -7,13 +7,13 @@ data:
metrics-collector-sidecar: |-
{
"StdOut": {
"image": "docker.io/kubeflowkatib/file-metrics-collector:v0.11.0"
"image": "docker.io/kubeflowkatib/file-metrics-collector:v0.11.1"
},
"File": {
"image": "docker.io/kubeflowkatib/file-metrics-collector:v0.11.0"
"image": "docker.io/kubeflowkatib/file-metrics-collector:v0.11.1"
},
"TensorFlowEvent": {
"image": "docker.io/kubeflowkatib/tfevent-metrics-collector:v0.11.0",
"image": "docker.io/kubeflowkatib/tfevent-metrics-collector:v0.11.1",
"resources": {
"limits": {
"memory": "1Gi"
@@ -24,25 +24,25 @@ data:
suggestion: |-
{
"random": {
"image": "docker.io/kubeflowkatib/suggestion-hyperopt:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-hyperopt:v0.11.1"
},
"tpe": {
"image": "docker.io/kubeflowkatib/suggestion-hyperopt:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-hyperopt:v0.11.1"
},
"grid": {
"image": "docker.io/kubeflowkatib/suggestion-chocolate:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-chocolate:v0.11.1"
},
"hyperband": {
"image": "docker.io/kubeflowkatib/suggestion-hyperband:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-hyperband:v0.11.1"
},
"bayesianoptimization": {
"image": "docker.io/kubeflowkatib/suggestion-skopt:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-skopt:v0.11.1"
},
"cmaes": {
"image": "docker.io/kubeflowkatib/suggestion-goptuna:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-goptuna:v0.11.1"
},
"enas": {
"image": "docker.io/kubeflowkatib/suggestion-enas:v0.11.0",
"image": "docker.io/kubeflowkatib/suggestion-enas:v0.11.1",
"resources": {
"limits": {
"memory": "200Mi"
@@ -50,12 +50,12 @@ data:
}
},
"darts": {
"image": "docker.io/kubeflowkatib/suggestion-darts:v0.11.0"
"image": "docker.io/kubeflowkatib/suggestion-darts:v0.11.1"
}
}
early-stopping: |-
{
"medianstop": {
"image": "docker.io/kubeflowkatib/earlystopping-medianstop:v0.11.0"
"image": "docker.io/kubeflowkatib/earlystopping-medianstop:v0.11.1"
}
}

@@ -21,13 +21,13 @@ resources:
images:
- name: docker.io/kubeflowkatib/katib-controller
newName: docker.io/kubeflowkatib/katib-controller
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-db-manager
newName: docker.io/kubeflowkatib/katib-db-manager
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-ui
newName: docker.io/kubeflowkatib/katib-ui
newTag: v0.11.0
newTag: v0.11.1
patchesStrategicMerge:
- patches/katib-cert-injection.yaml

@@ -19,16 +19,16 @@ resources:
images:
- name: docker.io/kubeflowkatib/katib-controller
newName: docker.io/kubeflowkatib/katib-controller
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-db-manager
newName: docker.io/kubeflowkatib/katib-db-manager
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-ui
newName: docker.io/kubeflowkatib/katib-ui
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/cert-generator
newName: docker.io/kubeflowkatib/cert-generator
newTag: v0.11.0
newTag: v0.11.1
patchesStrategicMerge:
- db-manager-patch.yaml
# Modify katib-mysql-secrets with parameters for the DB.

@@ -21,13 +21,13 @@ resources:
images:
- name: docker.io/kubeflowkatib/katib-controller
newName: docker.io/kubeflowkatib/katib-controller
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-db-manager
newName: docker.io/kubeflowkatib/katib-db-manager
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-ui
newName: docker.io/kubeflowkatib/katib-ui
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/cert-generator
newName: docker.io/kubeflowkatib/cert-generator
newTag: v0.11.0
newTag: v0.11.1

@@ -9,13 +9,13 @@ resources:
images:
- name: docker.io/kubeflowkatib/katib-controller
newName: docker.io/kubeflowkatib/katib-controller
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-db-manager
newName: docker.io/kubeflowkatib/katib-db-manager
newTag: v0.11.0
newTag: v0.11.1
- name: docker.io/kubeflowkatib/katib-ui
newName: docker.io/kubeflowkatib/katib-ui
newTag: v0.11.0
newTag: v0.11.1
patchesStrategicMerge:
- patches/remove-resources-patch.yaml

@@ -10,4 +10,4 @@ commonLabels:
app: cache-deployer
images:
- name: gcr.io/ml-pipeline/cache-deployer
newTag: 1.5.0-rc.2
newTag: 1.5.0

@@ -10,4 +10,5 @@ commonLabels:
app: cache-server
images:
- name: gcr.io/ml-pipeline/cache-server
newTag: 1.5.0-rc.2
newName: docker.io/aipipeline/cache-server
newTag: 0.8.0

@@ -11,26 +11,6 @@ resources:
- mysql-secret.yaml
images:
- name: gcr.io/ml-pipeline/api-server
newName: docker.io/aipipeline/api-server
newTag: latest
- name: gcr.io/ml-pipeline/persistenceagent
newName: docker.io/aipipeline/persistenceagent
newTag: latest
- name: gcr.io/ml-pipeline/frontend
newName: docker.io/aipipeline/frontend
newTag: latest
- name: gcr.io/ml-pipeline/metadata-writer
newName: docker.io/aipipeline/metadata-writer
newTag: latest
- name: gcr.io/ml-pipeline/scheduledworkflow
newName: docker.io/aipipeline/scheduledworkflow
newTag: latest
- name: gcr.io/ml-pipeline/cache-server
newName: docker.io/aipipeline/cache-server
newTag: latest
# Used by Kustomize
vars:
- name: kfp-namespace

@@ -4,7 +4,7 @@ metadata:
name: pipeline-install-config
data:
appName: pipeline
appVersion: 1.5.0-rc.2
appVersion: 1.5.0
dbHost: mysql
dbPort: "3306"
mlmdDb: metadb
@@ -25,5 +25,5 @@ data:
## cacheImage is the image that the mutating webhook will use to patch
## cached steps with. Will be used to echo a message announcing that
## the cached step result will be used. If not set it will default to
## 'gcr.io/google-containers/busybox'
cacheImage: "gcr.io/google-containers/busybox"
## 'registry.access.redhat.com/ubi8/ubi-minimal'
cacheImage: "registry.access.redhat.com/ubi8/ubi-minimal"

@@ -29,3 +29,14 @@ rules:
- watch
- update
- patch
- apiGroups:
- tekton.dev
resources:
- taskruns
- taskruns/status
verbs:
- get
- list
- watch
- update
- patch

@@ -70,19 +70,6 @@ spec:
rules:
- {}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: metadata-grpc-service
spec:
action: ALLOW
selector:
matchLabels:
component: metadata-grpc-server
rules:
- {}
---
apiVersion: "networking.istio.io/v1alpha3"
kind: DestinationRule

@@ -19,27 +19,3 @@ spec:
port:
number: 80
timeout: 300s
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: metadata-grpc
namespace: kubeflow
spec:
gateways:
- kubeflow-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /ml_metadata
rewrite:
uri: /ml_metadata
route:
- destination:
host: ml-pipeline-ui.$(kfp-namespace).svc.cluster.local
port:
number: 80

@@ -9,4 +9,4 @@ resources:
- metadata-grpc-sa.yaml
images:
- name: gcr.io/ml-pipeline/metadata-envoy
newTag: 1.5.0-rc.2
newTag: 1.5.0

@@ -0,0 +1,9 @@
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: metadata-grpc-service
spec:
host: metadata-grpc-service.kubeflow.svc.cluster.local
trafficPolicy:
tls:
mode: ISTIO_MUTUAL

@@ -0,0 +1,11 @@
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: metadata-grpc-service
spec:
action: ALLOW
selector:
matchLabels:
component: metadata-grpc-server
rules:
- {}

@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- istio-authorization-policy.yaml
- destination-rule.yaml
- virtual-service.yaml

@@ -0,0 +1,21 @@
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: metadata-grpc
namespace: kubeflow
spec:
gateways:
- kubeflow-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /ml_metadata
rewrite:
uri: /ml_metadata
route:
- destination:
host: metadata-envoy-service.kubeflow.svc.cluster.local
port:
number: 9090

@@ -14,14 +14,20 @@ data:
artifact_script: |-
#!/usr/bin/env sh
push_artifact() {
tar -cvzf $1.tgz $2
mc cp $1.tgz storage/$ARTIFACT_BUCKET/artifacts/$PIPELINERUN/$PIPELINETASK/$1.tgz
if [ -f "$2" ]; then
tar -cvzf $1.tgz $2
mc cp $1.tgz storage/$ARTIFACT_BUCKET/artifacts/$PIPELINERUN/$PIPELINETASK/$1.tgz
else
echo "$2 file does not exist. Skip artifact tracking for $1"
fi
}
push_log() {
cat /var/log/containers/$PODNAME*$NAMESPACE*step-main*.log > step-main.log
push_artifact main-log step-main.log
}
strip_eof() {
awk 'NF' $2 | head -c -1 > $1_temp_save && cp $1_temp_save $2
if [ -f "$2" ]; then
awk 'NF' $2 | head -c -1 > $1_temp_save && cp $1_temp_save $2
fi
}
mc config host add storage ${ARTIFACT_ENDPOINT_SCHEME}${ARTIFACT_ENDPOINT} $AWS_ACCESS_KEY_ID $AWS_SECRET_ACCESS_KEY

@@ -43,23 +43,23 @@ patchesStrategicMerge:
images:
- name: gcr.io/ml-pipeline/api-server
newName: docker.io/aipipeline/api-server
newTag: 0.8.0-rc0
newTag: 0.8.0
- name: gcr.io/ml-pipeline/persistenceagent
newName: docker.io/aipipeline/persistenceagent
newTag: 0.8.0-rc0
newTag: 0.8.0
- name: gcr.io/ml-pipeline/scheduledworkflow
newName: docker.io/aipipeline/scheduledworkflow
newTag: 0.8.0-rc0
newTag: 0.8.0
- name: gcr.io/ml-pipeline/frontend
newName: docker.io/aipipeline/frontend
newTag: 0.8.0-rc0
newTag: 0.8.0
- name: gcr.io/ml-pipeline/viewer-crd-controller
newTag: 1.5.0-rc.2
newTag: 1.5.0
- name: gcr.io/ml-pipeline/visualization-server
newTag: 1.5.0-rc.2
newTag: 1.5.0
- name: gcr.io/ml-pipeline/metadata-writer
newName: docker.io/aipipeline/metadata-writer
newTag: 0.8.0-rc0
newTag: 0.8.0
- name: gcr.io/ml-pipeline/cache-server
newName: docker.io/aipipeline/cache-server
newTag: 0.8.0-rc0
newTag: 0.8.0

@@ -7,4 +7,4 @@ resources:
- metadata-writer-sa.yaml
images:
- name: gcr.io/ml-pipeline/metadata-writer
newTag: 1.5.0-rc.2
newTag: 1.5.0

@@ -2,7 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: gcr.io/ml-pipeline/inverse-proxy-agent
newTag: 1.5.0-rc.2
newTag: 1.5.0
resources:
- proxy-configmap.yaml
- proxy-deployment.yaml

@@ -1,6 +0,0 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
labels:
kubeflow/crd-install: "true"
name: applications.app.k8s.io

@@ -9,7 +9,6 @@ resources:
- namespace.yaml
patchesStrategicMerge:
- application-crd.yaml
- scheduled-workflow-crd.yaml
- viewer-crd.yaml

@@ -4,6 +4,7 @@ kind: Kustomization
bases:
- ../../base/installs/multi-user
- ../../base/metadata/base
- ../../base/metadata/options/istio
- ../../third-party/mysql/base
- ../../third-party/mysql/options/istio
- ../../third-party/minio/base

@@ -1,4 +1,3 @@
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:

@@ -0,0 +1,56 @@
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
annotations:
kubernetes.io/description: kubeflow-anyuid provides all features of the restricted SCC
but allows users to run with any UID and any GID.
name: kubeflow-anyuid-kfp-tekton
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: null
defaultAddCapabilities: null
fsGroup:
type: RunAsAny
groups:
- system:cluster-admins
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
runAsUser:
type: RunAsAny
seLinuxContext:
type: MustRunAs
supplementalGroups:
type: RunAsAny
users:
#Metadata DB accesses files owned by root
- system:serviceaccount:kubeflow:metadatadb
#Minio accesses files owned by root
- system:serviceaccount:kubeflow:minio
#Katib injects container into pods which does not run as non-root user, trying to find Dockerfile for that image and fix it
#- system:serviceaccount:kubeflow:default
- system:serviceaccount:kubeflow:default
- system:serviceaccount:kubeflow:kubeflow-pipelines-cache
- system:serviceaccount:kubeflow:kubeflow-pipelines-cache-deployer-sa
- system:serviceaccount:kubeflow:metadata-grpc-server
- system:serviceaccount:kubeflow:kubeflow-pipelines-metadata-writer
- system:serviceaccount:kubeflow:ml-pipeline
- system:serviceaccount:kubeflow:ml-pipeline-persistenceagent
- system:serviceaccount:kubeflow:ml-pipeline-scheduledworkflow
- system:serviceaccount:kubeflow:ml-pipeline-ui
- system:serviceaccount:kubeflow:ml-pipeline-viewer-crd-service-account
- system:serviceaccount:kubeflow:ml-pipeline-visualizationserver
- system:serviceaccount:kubeflow:mysql
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret

@@ -1,5 +1,6 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- crd.yaml
- anyuid-scc.yaml
- privileged-scc.yaml

@@ -0,0 +1,57 @@
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
annotations:
kubernetes.io/description: kubeflow-privileged provides all features of the restricted SCC
but allows users to run with any UID and any GID, as well as hostPath volumes.
name: kubeflow-privileged-kfp-tekton
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities: null
defaultAddCapabilities: null
fsGroup:
type: RunAsAny
groups:
- system:cluster-admins
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities:
- MKNOD
runAsUser:
type: RunAsAny
seLinuxContext:
type: MustRunAs
supplementalGroups:
type: RunAsAny
users:
# Metadata DB accesses files owned by root
- system:serviceaccount:kubeflow:metadatadb
# Minio accesses files owned by root
- system:serviceaccount:kubeflow:minio
# Katib injects a container into pods that does not run as a non-root user;
# TODO: find the Dockerfile for that image and fix it
#- system:serviceaccount:kubeflow:default
- system:serviceaccount:kubeflow:default
- system:serviceaccount:kubeflow:kubeflow-pipelines-cache
- system:serviceaccount:kubeflow:kubeflow-pipelines-cache-deployer-sa
- system:serviceaccount:kubeflow:metadata-grpc-server
- system:serviceaccount:kubeflow:kubeflow-pipelines-metadata-writer
- system:serviceaccount:kubeflow:ml-pipeline
- system:serviceaccount:kubeflow:ml-pipeline-persistenceagent
- system:serviceaccount:kubeflow:ml-pipeline-scheduledworkflow
- system:serviceaccount:kubeflow:ml-pipeline-ui
- system:serviceaccount:kubeflow:ml-pipeline-viewer-crd-service-account
- system:serviceaccount:kubeflow:ml-pipeline-visualizationserver
- system:serviceaccount:kubeflow:mysql
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
- hostPath


@@ -8,6 +8,6 @@ namespace: tekton-pipelines
images:
- name: docker.io/aipipeline/pipelineloop-controller
newTag: 0.8.0-rc0
newTag: 0.8.0
- name: docker.io/aipipeline/pipelineloop-webhook
newTag: 0.8.0-rc0
newTag: 0.8.0


@@ -8,4 +8,4 @@ commonLabels:
app: cache-deployer
images:
- name: gcr.io/ml-pipeline/cache-deployer
newTag: 1.5.0
newTag: 1.5.1


@@ -10,4 +10,4 @@ commonLabels:
app: cache-server
images:
- name: gcr.io/ml-pipeline/cache-server
newTag: 1.5.0
newTag: 1.5.1


@@ -4,7 +4,7 @@ metadata:
name: pipeline-install-config
data:
appName: pipeline
appVersion: 1.5.0
appVersion: 1.5.1
dbHost: mysql
dbPort: "3306"
mlmdDb: metadb
@@ -27,3 +27,7 @@ data:
## the cached step result will be used. If not set it will default to
## 'gcr.io/google-containers/busybox'
cacheImage: "gcr.io/google-containers/busybox"
## ConMaxLifeTimeSec sets the maximum connection lifetime for MySQL.
## This is very important to set up when using external databases.
## See this issue for more details: https://github.com/kubeflow/pipelines/issues/5329
ConMaxLifeTimeSec: "120"
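The new `ConMaxLifeTimeSec` key reaches the api-server as the `DBCONFIG_CONMAXLIFETIMESEC` environment variable (see the deployment hunk further down). A minimal sketch of how a consumer might parse it — the variable names come from these manifests, but the parsing itself is only illustrative, not the api-server's actual code:

```python
import os
from datetime import timedelta

# Value injected from the pipeline-install-config ConfigMap key
# ConMaxLifeTimeSec; set here only to make the sketch self-contained.
os.environ["DBCONFIG_CONMAXLIFETIMESEC"] = "120"

# The server would hand a duration like this to its MySQL connection pool.
max_lifetime = timedelta(seconds=int(os.environ["DBCONFIG_CONMAXLIFETIMESEC"]))
print(max_lifetime)  # 0:02:00
```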


@@ -9,4 +9,4 @@ resources:
- metadata-grpc-sa.yaml
images:
- name: gcr.io/ml-pipeline/metadata-envoy
newTag: 1.5.0
newTag: 1.5.1


@@ -36,14 +36,14 @@ resources:
- viewer-sa.yaml
images:
- name: gcr.io/ml-pipeline/api-server
newTag: 1.5.0
newTag: 1.5.1
- name: gcr.io/ml-pipeline/persistenceagent
newTag: 1.5.0
newTag: 1.5.1
- name: gcr.io/ml-pipeline/scheduledworkflow
newTag: 1.5.0
newTag: 1.5.1
- name: gcr.io/ml-pipeline/frontend
newTag: 1.5.0
newTag: 1.5.1
- name: gcr.io/ml-pipeline/viewer-crd-controller
newTag: 1.5.0
newTag: 1.5.1
- name: gcr.io/ml-pipeline/visualization-server
newTag: 1.5.0
newTag: 1.5.1


@@ -7,4 +7,4 @@ resources:
- metadata-writer-sa.yaml
images:
- name: gcr.io/ml-pipeline/metadata-writer
newTag: 1.5.0
newTag: 1.5.1


@@ -58,6 +58,11 @@ spec:
configMapKeyRef:
name: pipeline-install-config
key: dbPort
- name: DBCONFIG_CONMAXLIFETIMESEC
valueFrom:
configMapKeyRef:
name: pipeline-install-config
key: ConMaxLifeTimeSec
- name: OBJECTSTORECONFIG_ACCESSKEY
valueFrom:
secretKeyRef:


@@ -2,7 +2,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: gcr.io/ml-pipeline/inverse-proxy-agent
newTag: 1.5.0
newTag: 1.5.1
resources:
- proxy-configmap.yaml
- proxy-deployment.yaml


@@ -9,4 +9,4 @@ resources:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/profile-controller
newName: public.ecr.aws/j1r0q0g6/notebooks/profile-controller
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0


@@ -28,4 +28,4 @@ vars:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/access-management
newName: public.ecr.aws/j1r0q0g6/notebooks/access-management
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0


@@ -36,4 +36,4 @@ patches:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/tensorboard-controller
newName: public.ecr.aws/j1r0q0g6/notebooks/tensorboard-controller
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0


@@ -3,6 +3,8 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.2.5
creationTimestamp: null
name: tensorboards.tensorboard.kubeflow.org
spec:
@@ -12,7 +14,7 @@ spec:
listKind: TensorboardList
plural: tensorboards
singular: tensorboard
scope: ""
scope: Namespaced
subresources:
status: {}
validation:


@@ -14,7 +14,7 @@ commonLabels:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/tensorboards-web-app
newName: public.ecr.aws/j1r0q0g6/notebooks/tensorboards-web-app
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0
# We need the name to be unique without the suffix because the original name is what
# gets used with patches
configMapGenerator:


@@ -14,7 +14,7 @@ commonLabels:
images:
- name: public.ecr.aws/j1r0q0g6/notebooks/volumes-web-app
newName: public.ecr.aws/j1r0q0g6/notebooks/volumes-web-app
newTag: v1.3.0-rc.1
newTag: v1.3.1-rc.0
# We need the name to be unique without the suffix because the original name is what
# gets used with patches
configMapGenerator:

File diff suppressed because it is too large


@@ -1,4 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crd.yaml


@@ -1,24 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- role-binding.yaml
- role.yaml
commonLabels:
kustomize.component: cert-manager
configMapGenerator:
- name: cert-manager-kube-params-parameters
envs:
- params.env
generatorOptions:
disableNameSuffixHash: true
vars:
- name: certManagerNamespace
objref:
kind: ConfigMap
name: cert-manager-kube-params-parameters
apiVersion: v1
fieldref:
fieldpath: data.certManagerNamespace
configurations:
- params.yaml


@@ -1 +0,0 @@
certManagerNamespace=cert-manager


@@ -1,3 +0,0 @@
varReference:
- path: subjects/namespace
kind: RoleBinding


@@ -1,58 +0,0 @@
# grant cert-manager permission to manage the leaderelection configmap in the
# leader election namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: cert-manager-cainjector:leaderelection
labels:
app: cainjector
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cert-manager-cainjector:leaderelection
subjects:
- apiGroup: ""
kind: ServiceAccount
name: cert-manager-cainjector
namespace: $(certManagerNamespace)
---
# grant cert-manager permission to manage the leaderelection configmap in the
# leader election namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: cert-manager:leaderelection
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cert-manager:leaderelection
subjects:
- apiGroup: ""
kind: ServiceAccount
name: cert-manager
namespace: $(certManagerNamespace)
---
# apiserver gets the ability to read authentication. This allows it to
# read the specific configmap that has the requestheader-* entries to
# api agg
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: cert-manager-webhook:webhook-authentication-reader
labels:
app: webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- apiGroup: ""
kind: ServiceAccount
name: cert-manager-webhook
namespace: $(certManagerNamespace)


@@ -1,28 +0,0 @@
# leader election rules
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: cert-manager-cainjector:leaderelection
labels:
app: cainjector
rules:
# Used for leader election by the controller
# TODO: refine the permission to *just* the leader election configmap
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: cert-manager:leaderelection
labels:
app: cert-manager
rules:
# Used for leader election by the controller
# TODO: refine the permission to *just* the leader election configmap
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "update", "patch"]


@@ -1,16 +0,0 @@
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.webhook.cert-manager.io
labels:
app: webhook
annotations:
cert-manager.io/inject-ca-from-secret: "cert-manager/cert-manager-webhook-tls"
spec:
group: webhook.cert-manager.io
groupPriorityMinimum: 1000
versionPriority: 15
service:
name: cert-manager-webhook
namespace: $(namespace)
version: v1beta1

File diff suppressed because it is too large


@@ -1,135 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-controller-issuers
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-controller-issuers
subjects:
- name: cert-manager
namespace: $(namespace)
kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-controller-clusterissuers
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-controller-clusterissuers
subjects:
- name: cert-manager
namespace: $(namespace)
kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-controller-certificates
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-controller-certificates
subjects:
- name: cert-manager
namespace: $(namespace)
kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-controller-orders
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-controller-orders
subjects:
- name: cert-manager
namespace: $(namespace)
kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-controller-challenges
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-controller-challenges
subjects:
- name: cert-manager
namespace: $(namespace)
kind: ServiceAccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-controller-ingress-shim
labels:
app: cert-manager
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-controller-ingress-shim
subjects:
- name: cert-manager
namespace: $(namespace)
kind: ServiceAccount
---
# apiserver gets the auth-delegator role to delegate auth decisions to
# the core apiserver
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-webhook:auth-delegator
labels:
app: webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- apiGroup: ""
kind: ServiceAccount
name: cert-manager-webhook
namespace: $(namespace)
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cert-manager-cainjector
labels:
app: cainjector
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cert-manager-cainjector
subjects:
- name: cert-manager-cainjector
namespace: $(namespace)
kind: ServiceAccount


@@ -1,265 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-cainjector
labels:
app: cainjector
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "create", "update", "patch"]
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["apiregistration.k8s.io"]
resources: ["apiservices"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch", "update"]
---
# Issuer controller role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-controller-issuers
labels:
app: cert-manager
rules:
- apiGroups: ["cert-manager.io"]
resources: ["issuers", "issuers/status"]
verbs: ["update"]
- apiGroups: ["cert-manager.io"]
resources: ["issuers"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
# ClusterIssuer controller role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-controller-clusterissuers
labels:
app: cert-manager
rules:
- apiGroups: ["cert-manager.io"]
resources: ["clusterissuers", "clusterissuers/status"]
verbs: ["update"]
- apiGroups: ["cert-manager.io"]
resources: ["clusterissuers"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
# Certificates controller role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-controller-certificates
labels:
app: cert-manager
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificates/status", "certificaterequests", "certificaterequests/status"]
verbs: ["update"]
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificaterequests", "clusterissuers", "issuers"]
verbs: ["get", "list", "watch"]
# We require these rules to support users with the OwnerReferencesPermissionEnforcement
# admission controller enabled:
# https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement
- apiGroups: ["cert-manager.io"]
resources: ["certificates/finalizers"]
verbs: ["update"]
- apiGroups: ["acme.cert-manager.io"]
resources: ["orders"]
verbs: ["create", "delete", "get", "list", "watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
# Orders controller role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-controller-orders
labels:
app: cert-manager
rules:
- apiGroups: ["acme.cert-manager.io"]
resources: ["orders", "orders/status"]
verbs: ["update"]
- apiGroups: ["acme.cert-manager.io"]
resources: ["orders", "challenges"]
verbs: ["get", "list", "watch"]
- apiGroups: ["cert-manager.io"]
resources: ["clusterissuers", "issuers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["acme.cert-manager.io"]
resources: ["challenges"]
verbs: ["create", "delete"]
# We require these rules to support users with the OwnerReferencesPermissionEnforcement
# admission controller enabled:
# https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement
- apiGroups: ["acme.cert-manager.io"]
resources: ["orders/finalizers"]
verbs: ["update"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
# Challenges controller role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-controller-challenges
labels:
app: cert-manager
rules:
# Use to update challenge resource status
- apiGroups: ["acme.cert-manager.io"]
resources: ["challenges", "challenges/status"]
verbs: ["update"]
# Used to watch challenge resources
- apiGroups: ["acme.cert-manager.io"]
resources: ["challenges"]
verbs: ["get", "list", "watch"]
# Used to watch challenges, issuer and clusterissuer resources
- apiGroups: ["cert-manager.io"]
resources: ["issuers", "clusterissuers"]
verbs: ["get", "list", "watch"]
# Need to be able to retrieve ACME account private key to complete challenges
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
# Used to create events
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
# HTTP01 rules
- apiGroups: [""]
resources: ["pods", "services"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: ["extensions", "networking.k8s.io/v1"]
resources: ["ingresses"]
verbs: ["get", "list", "watch", "create", "delete", "update"]
# We require these rules to support users with the OwnerReferencesPermissionEnforcement
# admission controller enabled:
# https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement
- apiGroups: ["acme.cert-manager.io"]
resources: ["challenges/finalizers"]
verbs: ["update"]
# DNS01 rules (duplicated above)
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
---
# ingress-shim controller role
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cert-manager-controller-ingress-shim
labels:
app: cert-manager
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificaterequests"]
verbs: ["create", "update", "delete"]
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificaterequests", "issuers", "clusterissuers"]
verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io/v1"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
# We require these rules to support users with the OwnerReferencesPermissionEnforcement
# admission controller enabled:
# https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#ownerreferencespermissionenforcement
- apiGroups: ["networking.k8s.io/v1"]
resources: ["ingresses/finalizers"]
verbs: ["update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cert-manager-webhook:webhook-requester
labels:
app: webhook
rules:
- apiGroups:
- admission.cert-manager.io
resources:
- certificates
- certificaterequests
- issuers
- clusterissuers
verbs:
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cert-manager-view
labels:
app: cert-manager
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificaterequests", "issuers"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cert-manager-edit
labels:
app: cert-manager
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["cert-manager.io"]
resources: ["certificates", "certificaterequests", "issuers"]
verbs: ["create", "delete", "deletecollection", "patch", "update"]


@@ -1,124 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: cert-manager-cainjector
labels:
app: cainjector
spec:
replicas: 1
selector:
matchLabels:
app: cainjector
template:
metadata:
labels:
app: cainjector
annotations:
spec:
serviceAccountName: cert-manager-cainjector
containers:
- name: cainjector
image: "quay.io/jetstack/cert-manager-cainjector:v0.11.0"
imagePullPolicy: IfNotPresent
args:
- --v=2
- --leader-election-namespace=kube-system
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
{}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cert-manager
labels:
app: cert-manager
spec:
replicas: 1
selector:
matchLabels:
app: cert-manager
template:
metadata:
labels:
app: cert-manager
annotations:
prometheus.io/path: "/metrics"
prometheus.io/scrape: 'true'
prometheus.io/port: '9402'
spec:
serviceAccountName: cert-manager
containers:
- name: cert-manager
image: "quay.io/jetstack/cert-manager-controller:v0.11.0"
imagePullPolicy: IfNotPresent
args:
- --v=2
- --cluster-resource-namespace=$(POD_NAMESPACE)
- --leader-election-namespace=kube-system
- --webhook-namespace=$(POD_NAMESPACE)
- --webhook-ca-secret=cert-manager-webhook-ca
- --webhook-serving-secret=cert-manager-webhook-tls
- --webhook-dns-names=cert-manager-webhook,cert-manager-webhook.$(namespace),cert-manager-webhook.$(namespace).svc
ports:
- containerPort: 9402
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
requests:
cpu: 10m
memory: 32Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cert-manager-webhook
labels:
app: webhook
spec:
replicas: 1
selector:
matchLabels:
app: webhook
template:
metadata:
labels:
app: webhook
annotations:
spec:
serviceAccountName: cert-manager-webhook
containers:
- name: cert-manager
image: "quay.io/jetstack/cert-manager-webhook:v0.11.0"
imagePullPolicy: IfNotPresent
args:
- --v=2
- --secure-port=6443
- --tls-cert-file=/certs/tls.crt
- --tls-private-key-file=/certs/tls.key
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
resources:
{}
volumeMounts:
- name: certs
mountPath: /certs
volumes:
- name: certs
secret:
secretName: cert-manager-webhook-tls


@@ -1,41 +1,18 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: cert-manager
resources:
- namespace.yaml
- api-service.yaml
- cluster-role-binding.yaml
- cluster-role.yaml
- deployment.yaml
- mutating-webhook-configuration.yaml
- service-account.yaml
- service.yaml
- validating-webhook-configuration.yaml
commonLabels:
kustomize.component: cert-manager
images:
- name: quay.io/jetstack/cert-manager-controller
newName: quay.io/jetstack/cert-manager-controller
newTag: v0.11.0
- name: quay.io/jetstack/cert-manager-webhook
newName: quay.io/jetstack/cert-manager-webhook
newTag: v0.11.0
- name: quay.io/jetstack/cert-manager-cainjector
newName: quay.io/jetstack/cert-manager-cainjector
newTag: v0.11.0
configMapGenerator:
- name: cert-manager-parameters
envs:
- params.env
generatorOptions:
disableNameSuffixHash: true
vars:
- name: namespace
objref:
kind: ConfigMap
name: cert-manager-parameters
apiVersion: v1
fieldref:
fieldpath: data.namespace
configurations:
- params.yaml
# Manifests downloaded from:
# https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
- cert-manager.yaml
# XXX: Do NOT use the namespace transformer, as cert-manager defines resources
# in two namespaces, 'cert-manager' and 'kube-system'.
# For more information, see https://github.com/jetstack/cert-manager/issues/4102.
# Patch upstream manifests to explicitly disable 'preserveUnknownFields',
# otherwise upgrade with 'kubectl apply' fails.
patches:
- path: patches/crd-preserve-unknown-fields.yaml
target:
kind: CustomResourceDefinition


@@ -1,32 +0,0 @@
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
name: cert-manager-webhook
labels:
app: webhook
annotations:
cert-manager.io/inject-apiserver-ca: "true"
webhooks:
- name: webhook.cert-manager.io
rules:
- apiGroups:
- "cert-manager.io"
apiVersions:
- v1alpha2
operations:
- CREATE
- UPDATE
resources:
- certificates
- issuers
- clusterissuers
- orders
- challenges
- certificaterequests
failurePolicy: Fail
clientConfig:
service:
name: kubernetes
namespace: default
path: /apis/webhook.cert-manager.io/v1beta1/mutations
caBundle: ""


@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: $(namespace)


@@ -1 +0,0 @@
namespace=cert-manager


@@ -1,9 +0,0 @@
varReference:
- path: subjects/namespace
kind: ClusterRoleBinding
- path: spec/template/spec/containers/args
kind: Deployment
- path: metadata/name
kind: Namespace
- path: spec/service/namespace
kind: APIService


@@ -0,0 +1,3 @@
- op: add
path: /spec/preserveUnknownFields
value: false
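The three-line patch above is a JSON6902 `add` operation that kustomize applies to every resource matching the `CustomResourceDefinition` target in the kustomization. A rough Python illustration of what that single op does to each CRD document — the tiny `apply_add` helper is hypothetical, not kustomize's implementation:

```python
def apply_add(doc, path, value):
    # Walk to the parent of the target path, then set the final key,
    # mirroring the JSON6902 "add" semantics for this simple case.
    keys = path.strip("/").split("/")
    for key in keys[:-1]:
        doc = doc.setdefault(key, {})
    doc[keys[-1]] = value

crd = {"kind": "CustomResourceDefinition", "spec": {"group": "cert-manager.io"}}
apply_add(crd, "/spec/preserveUnknownFields", False)
print(crd["spec"]["preserveUnknownFields"])  # False
```

Setting `preserveUnknownFields: false` explicitly is what lets a later `kubectl apply` upgrade succeed, as the kustomization comment notes.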


@@ -1,24 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: cert-manager-cainjector
labels:
app: cainjector
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cert-manager
labels:
app: cert-manager
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cert-manager-webhook
labels:
app: webhook


@@ -1,30 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: cert-manager
labels:
app: cert-manager
spec:
type: ClusterIP
ports:
- protocol: TCP
port: 9402
targetPort: 9402
selector:
app: cert-manager
---
apiVersion: v1
kind: Service
metadata:
name: cert-manager-webhook
labels:
app: webhook
spec:
type: ClusterIP
ports:
- name: https
port: 443
targetPort: 6443
selector:
app: webhook


@@ -1,31 +0,0 @@
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
name: cert-manager-webhook
labels:
app: webhook
annotations:
cert-manager.io/inject-apiserver-ca: "true"
webhooks:
- name: webhook.certmanager.k8s.io
rules:
- apiGroups:
- "cert-manager.io"
apiVersions:
- v1alpha2
operations:
- CREATE
- UPDATE
resources:
- certificates
- issuers
- clusterissuers
- certificaterequests
failurePolicy: Fail
sideEffects: None
clientConfig:
service:
name: kubernetes
namespace: default
path: /apis/webhook.cert-manager.io/v1beta1/validations
caBundle: ""


@@ -1,11 +0,0 @@
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: $(acmeEmail)
http01: {}
privateKeySecretRef:
name: letsencrypt-prod-secret
server: $(acmeUrl)


@@ -1,35 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
namespace: cert-manager
resources:
- cluster-issuer.yaml
commonLabels:
kustomize.component: cert-manager
app.kubernetes.io/component: cert-manager
app.kubernetes.io/name: cert-manager
configMapGenerator:
- name: cert-manager-parameters
behavior: merge
envs:
- params.env
generatorOptions:
disableNameSuffixHash: true
vars:
- name: acmeEmail
objref:
kind: ConfigMap
name: cert-manager-parameters
apiVersion: v1
fieldref:
fieldpath: data.acmeEmail
- name: acmeUrl
objref:
kind: ConfigMap
name: cert-manager-parameters
apiVersion: v1
fieldref:
fieldpath: data.acmeUrl
configurations:
- params.yaml


@@ -1,2 +0,0 @@
acmeEmail=
acmeUrl=https://acme-v02.api.letsencrypt.org/directory


@@ -1,5 +0,0 @@
varReference:
- path: spec/acme/email
kind: ClusterIssuer
- path: spec/acme/server
kind: ClusterIssuer


@@ -1,13 +0,0 @@
# TODO(https://github.com/kubeflow/manifests/issues/1052) clean up
# the manifests after the refactor is done. We should move
# cluster-issuer into the kubeflow-issuer package.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
resources:
- cluster-issuer.yaml
commonLabels:
kustomize.component: cert-manager
app.kubernetes.io/component: cert-manager
app.kubernetes.io/name: cert-manager


@@ -7,4 +7,4 @@ commonLabels:
app.kubernetes.io/component: cert-manager
app.kubernetes.io/name: cert-manager
resources:
- ../overlays/self-signed/cluster-issuer.yaml
- cluster-issuer.yaml


@@ -1,21 +0,0 @@
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
name: cluster-local-gateway
namespace: istio-system
spec:
rules:
- services:
- cluster-local-gateway.istio-system.svc.cluster.local
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
name: cluster-local-gateway
namespace: istio-system
spec:
roleRef:
kind: ServiceRole
name: cluster-local-gateway
subjects:
- user: '*'


@@ -12,8 +12,8 @@ old version is `X1.Y1.Z1`:
kustomization for the new Istio version:
$ export MANIFESTS_SRC=<path/to/manifests/repo>
$ export ISTIO_OLD=$MANIFESTS_SRC/common/istio-X1-Y1-Z1
$ export ISTIO_NEW=$MANIFESTS_SRC/common/istio-X-Y-Z
$ export ISTIO_OLD=$MANIFESTS_SRC/common/istio-X1-Y1
$ export ISTIO_NEW=$MANIFESTS_SRC/common/istio-X-Y
$ cp -a $ISTIO_OLD $ISTIO_NEW
2. Download `istioctl` for version `X.Y.Z`:
@@ -26,6 +26,7 @@ old version is `X1.Y1.Z1`:
3. Use `istioctl` to generate an `IstioOperator` resource, the
CustomResource used to describe the Istio Control Plane:
$ cd $ISTIO_NEW
$ istioctl profile dump demo > profile.yaml
---
@@ -57,8 +58,8 @@ old version is `X1.Y1.Z1`:
---
**NOTE**
`split-istio-packages` is a python script under `scripts/` that is
included in `PATH` in a bootstrapped env.
`split-istio-packages` is a python script in the same folder as this file.
The `ruamel.yaml` version used is 0.16.12.
---
@@ -87,3 +88,6 @@ The Istio kustomizations make the following changes:
- Add Istio AuthorizationPolicy in Istio's root namespace, so that sidecars deny traffic by default (explicit deny-by-default authorization model).
- Add Gateway CRs for the Istio Ingressgateway and the Istio cluster-local gateway, as `istioctl` stopped generating them in later versions.
- Add the istio-system namespace object to `istio-namespace`, as `istioctl` stopped generating it in later versions.
- Configure TCP KeepAlives.
- Disable tracing as it causes DNS breakdown. See:
https://github.com/istio/istio/issues/29898


@@ -1,4 +1,3 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -10,7 +9,7 @@ metadata:
release: istio
istio.io/rev: default
install.operator.istio.io/owning-resource: unknown
operator.istio.io/component: IngressGateways
operator.istio.io/component: "IngressGateways"
---
apiVersion: apps/v1
kind: Deployment
@@ -37,9 +36,9 @@ spec:
metadata:
annotations:
prometheus.io/path: /stats/prometheus
prometheus.io/port: '15020'
prometheus.io/scrape: 'true'
sidecar.istio.io/inject: 'false'
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/inject: "false"
labels:
app: cluster-local-gateway
chart: gateways
@@ -51,7 +50,7 @@ spec:
release: istio
service.istio.io/canonical-name: cluster-local-gateway
service.istio.io/canonical-revision: latest
sidecar.istio.io/inject: 'false'
sidecar.istio.io/inject: "false"
spec:
affinity:
nodeAffinity:
@@ -146,12 +145,12 @@ spec:
- name: ISTIO_META_OWNER
value: kubernetes://apis/apps/v1/namespaces/istio-system/deployments/cluster-local-gateway
- name: ISTIO_META_UNPRIVILEGED_POD
value: 'true'
value: "true"
- name: ISTIO_META_ROUTER_MODE
value: sni-dnat
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
image: docker.io/istio/proxyv2:1.9.0
image: docker.io/istio/proxyv2:1.9.5
name: istio-proxy
ports:
- containerPort: 15020
@@ -269,7 +268,7 @@ metadata:
release: istio
istio.io/rev: default
install.operator.istio.io/owning-resource: unknown
operator.istio.io/component: IngressGateways
operator.istio.io/component: "IngressGateways"
spec:
minAvailable: 1
selector:
@@ -286,11 +285,11 @@ metadata:
release: istio
istio.io/rev: default
install.operator.istio.io/owning-resource: unknown
operator.istio.io/component: IngressGateways
operator.istio.io/component: "IngressGateways"
rules:
- apiGroups: ['']
resources: [secrets]
verbs: [get, watch, list]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -301,7 +300,7 @@ metadata:
release: istio
istio.io/rev: default
install.operator.istio.io/owning-resource: unknown
operator.istio.io/component: IngressGateways
operator.istio.io/component: "IngressGateways"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@@ -313,6 +312,7 @@ subjects:
apiVersion: v1
kind: Service
metadata:
annotations:
labels:
app: cluster-local-gateway
install.operator.istio.io/owning-resource: unknown


@@ -1,9 +1,8 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -234,7 +233,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -1414,7 +1413,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -1665,7 +1664,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -1889,7 +1888,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -1983,7 +1982,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -2093,7 +2092,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -2258,7 +2257,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -2408,7 +2407,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio
@@ -2966,7 +2965,8 @@ spec:
format: int32
type: integer
perTryTimeout:
description: Timeout per retry attempt for a given request.
description: Timeout per attempt for a given request, including
the initial call and any retries.
type: string
retryOn:
description: Specifies the conditions under which retry takes
@@ -3226,7 +3226,7 @@ apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
helm.sh/resource-policy: keep
"helm.sh/resource-policy": keep
labels:
app: istio-pilot
chart: istio


@@ -12,3 +12,4 @@ namespace: istio-system
patchesStrategicMerge:
- patches/service.yaml
- patches/remove-pdb.yaml
- patches/istio-configmap-disable-tracing.yaml


@@ -0,0 +1,20 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: istio
namespace: istio-system
data:
  # Mesh-wide configuration; the empty `tracing: {}` block in defaultConfig disables tracing.
mesh: |-
accessLogFile: /dev/stdout
defaultConfig:
discoveryAddress: istiod.istio-system.svc:15012
proxyMetadata: {}
tracing: {}
enablePrometheusMerge: true
rootNamespace: istio-system
tcpKeepalive:
interval: 5s
probes: 3
time: 10s
trustDomain: cluster.local
