Merge pull request #17332 from rifelpet/delete-addons

Remove legacy addons
Kubernetes Prow Robot 2025-04-10 12:32:42 -07:00 committed by GitHub
commit 886a0ef951
63 changed files with 5 additions and 69531 deletions

View File

@ -1,5 +0,0 @@
## Addons
**Legacy addons are deprecated and unmaintained!**
Use [managed addons](../docs/addons.md) instead.
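Many managed addons are enabled directly in the cluster spec. A minimal sketch (illustrative field names from the kOps cluster spec; see the linked docs for the full list):
```yaml
# excerpt of a kOps cluster spec enabling managed addons
spec:
  clusterAutoscaler:
    enabled: true
  certManager:
    enabled: true
```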

View File

@ -1,57 +0,0 @@
# Ambassador
The [Ambassador API Gateway](https://getambassador.io/) provides all the functionality of a traditional ingress
controller (e.g., path-based routing) while exposing many additional capabilities such as authentication, URL rewriting,
CORS, rate limiting, and automatic metrics collection.
## Ambassador Addon
[Ambassador Operator](https://github.com/datawire/ambassador-operator) is a Kubernetes Operator that controls the
complete lifecycle of Ambassador in your cluster. It also automates many of the repeatable tasks you have to perform for
Ambassador. Once installed, the Operator will automatically complete rapid installations and seamless upgrades to new
versions of Ambassador.
This addon deploys Ambassador Operator which installs Ambassador in a kOps cluster.
##### Note:
The operator requires widely scoped permissions in order to install and manage Ambassador's lifecycle. Both the
operator and Ambassador are deployed in the `ambassador` namespace. You can review the permissions granted to the
operator [here](https://github.com/kubernetes/kops/blob/master/addons/ambassador/ambassador-operator.yaml).
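For example, once the addon is installed you can inspect those permissions in the cluster directly (role names as defined in the manifest below):
```console
kubectl describe clusterrole ambassador-operator-cluster
kubectl -n ambassador describe role ambassador-operator
```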
### Usage
#### As a kOps addon
To deploy the addon, run the following before creating the cluster:
```console
kops edit cluster <cluster-name>
```
Now add the addon specification to the cluster manifest under `spec.addons`:
```
addons:
- manifest: ambassador
```
##### Note:
If you've already created the cluster, you'll have to run:
```console
kops update cluster <cluster-name> --yes
```
followed by:
```console
kops rolling-update cluster --yes
```
to install the addon.
For more information on how to enable addons during cluster creation, refer to the [kOps addons guide](https://github.com/kubernetes/kops/blob/master/docs/operations/addons.md#installing-kubernetes-addons).
#### Deploying using `kubectl`
After cluster creation, you can deploy Ambassador using the following command:
```console
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ambassador/ambassador-operator.yaml
```
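You can then confirm the operator pod is running and that the `AmbassadorInstallation` resource was created (resource names as in the manifest):
```console
kubectl -n ambassador get pods
kubectl -n ambassador get ambassadorinstallations
```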

View File

@ -1,9 +0,0 @@
kind: Addons
metadata:
name: ambassador
spec:
addons:
- version: 1.1.0
selector:
k8s-addon: ambassador.addons.k8s.io
manifest: ambassador-operator.yaml

View File

@ -1,445 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: ambassador
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ambassadorinstallations.getambassador.io
spec:
additionalPrinterColumns:
- JSONPath: .spec.version
name: VERSION
type: string
- JSONPath: .spec.updateWindow
name: UPDATE-WINDOW
type: integer
- JSONPath: .status.lastCheckTime
description: Last time checked
name: LAST-CHECK
type: string
- JSONPath: .status.conditions[?(@.type=='Deployed')].status
description: Indicates if deployment has completed
name: DEPLOYED
type: string
- JSONPath: .status.conditions[?(@.type=='Deployed')].reason
description: Reason for deployment completed
name: REASON
priority: 1
type: string
- JSONPath: .status.conditions[?(@.type=='Deployed')].message
description: Message for deployment completed
name: MESSAGE
priority: 1
type: string
- JSONPath: .status.deployedRelease.appVersion
description: Deployed version of Ambassador
name: DEPLOYED-VERSION
type: string
- JSONPath: .status.deployedRelease.flavor
description: Deployed flavor of Ambassador (OSS or AES)
name: DEPLOYED-FLAVOR
type: string
group: getambassador.io
names:
kind: AmbassadorInstallation
listKind: AmbassadorInstallationList
plural: ambassadorinstallations
singular: ambassadorinstallation
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: AmbassadorInstallation is the Schema for the ambassadorinstallations
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: AmbassadorInstallationSpec defines the desired state of AmbassadorInstallation
properties:
baseImage:
description: An (optional) image to use instead of the image specified
in the Helm chart.
type: string
helmRepo:
description: An (optional) Helm repository.
type: string
installOSS:
description: 'Installs [Ambassador OSS](https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/)
instead of [AES](https://www.getambassador.io/docs/latest/topics/install/).
Default is false which means it installs AES by default. TODO: 1.
AES/AOSS is not installed and the user installs using `installOSS:
true`, then we straightaway install AOSS. 2. AOSS is installed via
operator and the user sets `installOSS: false`, then we perform the
migration as detailed here - https://www.getambassador.io/docs/latest/topics/install/upgrade-to-edge-stack/
3. AES is installed and the user sets `installOSS: true`, then we
point users to the docs which gives them pointers on how to do
that themselves.'
type: boolean
logLevel:
description: 'An (optional) log level: debug, info...'
enum:
- info
- debug
- warn
- warning
- error
- critical
- fatal
type: string
updateWindow:
description: "`updateWindow` is an optional item that will control when
the updates can take place. This is used to force system updates to
happen late at night if thats what the sysadmins want. \n * There
can be any number of `updateWindow` entries (separated by commas).
\ * `Never` turns off automatic updates even if there are other entries
in the comma-separated list. `Never` is used by sysadmins to disable
all updates during blackout periods by doing a `kubectl apply`
or using our Edge Policy Console to set this. * Each `updateWindow`
is in crontab format (see https://crontab.guru/) Some examples of
`updateWindows` are: - `* 0-6 * * * SUN`: every Sunday, from _0am_
to _6am_ - `* 5 1 * * *`: every first day of the month, at _5am_
* The Operator cannot guarantee minute time granularity, so specifying
\ a minute in the crontab expression can lead to some updates happening
\ sooner/later than expected."
type: string
version:
description: "We are using SemVer for the version number and it can
be specified with any level of precision and can optionally end in
`*`. These are interpreted as: \n * `1.0` = exactly version 1.0 *
`1.1` = exactly version 1.1 * `1.1.*` = version 1.1 and any bug fix
versions `1.1.1`, `1.1.2`, `1.1.3`, etc. * `2.*` = version 2.0 and
any incremental and bug fix versions `2.0`, `2.0.1`, `2.0.2`, `2.1`,
`2.2`, `2.2.1`, etc. * `*` = all versions. * `3.0-ea` = version `3.0-ea1`
and any subsequent EA releases on `3.0`. Also selects the final
3.0 once the final GA version is released. * `4.*-ea` = version `4.0-ea1`
and any subsequent EA release on `4.0`. Also selects the final GA
`4.0`. Also selects any incremental and bug fix versions `4.*` and
`4.*.*`. Also selects the most recent `4.*` EA release i.e., if
`4.0.5` is the last GA version and there is a `4.1-EA3`, then this
\ selects `4.1-EA3` over the `4.0.5` GA. \n You can find the reference
docs about the SemVer syntax accepted [here](https://github.com/Masterminds/semver#basic-comparisons)."
type: string
type: object
status:
description: AmbassadorInstallationStatus defines the observed state of
AmbassadorInstallation
properties:
conditions:
description: List of conditions the installation has experienced.
items:
description: AmbInsCondition defines an Ambassador installation condition,
as well as the last time there was a transition to this condition..
properties:
lastTransitionTime:
format: date-time
type: string
message:
type: string
reason:
type: string
status:
type: string
type:
type: string
required:
- status
- type
type: object
type: array
deployedRelease:
description: the currently deployed Helm chart
nullable: true
properties:
appVersion:
type: string
flavor:
type: string
manifest:
type: string
name:
type: string
version:
type: string
type: object
lastCheckTime:
description: Last time a successful update check was performed.
format: date-time
nullable: true
type: string
required:
- conditions
type: object
type: object
version: v2
versions:
- name: v2
served: true
storage: true
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: static-helm-values
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
data:
values.yaml: |+
deploymentTool: amb-oper-manifest
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ambassador-operator
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ambassador-operator-cluster
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
rules:
- apiGroups: ['*']
resources: ['*']
verbs: ['*']
- nonResourceURLs: ['*']
verbs: ['*']
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ambassador-operator-cluster
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
subjects:
- kind: ServiceAccount
name: ambassador-operator
namespace: ambassador
roleRef:
kind: ClusterRole
name: ambassador-operator-cluster
apiGroup: rbac.authorization.k8s.io
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: ambassador-operator
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
rules:
- apiGroups:
- ""
resources:
- pods
- services
- services/finalizers
- endpoints
- persistentvolumeclaims
- events
- configmaps
- secrets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- deployments
- daemonsets
- replicasets
- statefulsets
- customresourcedefinitions
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- get
- create
- apiGroups:
- apps
resourceNames:
- ambassador-operator
resources:
- deployments/finalizers
verbs:
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- apps
resources:
- replicasets
- deployments
verbs:
- get
- apiGroups:
- getambassador.io
resources:
- '*'
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ambassador-operator
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
subjects:
- kind: ServiceAccount
name: ambassador-operator
roleRef:
kind: Role
name: ambassador-operator
apiGroup: rbac.authorization.k8s.io
---
# Source: ambassador-operator/templates/ambassador-operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ambassador-operator
namespace: ambassador
labels:
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
spec:
replicas: 1
selector:
matchLabels:
name: ambassador-operator
template:
metadata:
labels:
name: ambassador-operator
app.kubernetes.io/name: ambassador-operator
app.kubernetes.io/part-of: ambassador
helm.sh/chart: ambassador-operator-0.2.0
app.kubernetes.io/instance: ambassador
app.kubernetes.io/managed-by: Helm
getambassador.io/installer: operator
spec:
serviceAccountName: ambassador-operator
containers:
- name: ambassador-operator
# Replace this with the built image name
image: docker.io/datawire/ambassador-operator:v1.2.6
command:
- ambassador-operator
imagePullPolicy: IfNotPresent
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: OPERATOR_NAME
value: "ambassador-operator"
volumeMounts:
- name: static-helm-values
mountPath: /tmp/helm
volumes:
- name: static-helm-values
configMap:
name: static-helm-values
---
apiVersion: getambassador.io/v2
kind: AmbassadorInstallation
metadata:
name: ambassador
namespace: ambassador
spec:
installOSS: true
helmValues:
deploymentTool: amb-oper-kops
namespace:
name: ambassador

View File

@ -1,39 +0,0 @@
# Cluster Autoscaler Addon
**This addon is deprecated. See https://kops.sigs.k8s.io/addons/#cluster-autoscaler**
We strongly recommend using Cluster Autoscaler with the Kubernetes version for which it was intended. Refer to the [Cluster Autoscaler compatibility matrix](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/README.md#releases).
Note that you likely want to change `AWS_REGION` and `GROUP_NAME`, and probably `MIN_NODES` and `MAX_NODES`. Here is an example of how you may wish to do so:
```bash
CLOUD_PROVIDER=aws
IMAGE=registry.k8s.io/cluster-autoscaler:v1.2.2
MIN_NODES=1
MAX_NODES=5
AWS_REGION=us-east-1
# For AWS, GROUP_NAME should be the name of the ASG as seen in the AWS console
GROUP_NAME="nodes.k8s.example.com"
SSL_CERT_PATH="/etc/ssl/certs/ca-certificates.crt" # (/etc/ssl/certs for gce, /etc/ssl/certs/ca-bundle.crt for RHEL7.X)
addon=cluster-autoscaler.yml
wget -O ${addon} https://raw.githubusercontent.com/kubernetes/kops/master/addons/cluster-autoscaler/v1.8.0.yaml
sed -i -e "s@{{CLOUD_PROVIDER}}@${CLOUD_PROVIDER}@g" "${addon}"
sed -i -e "s@{{IMAGE}}@${IMAGE}@g" "${addon}"
sed -i -e "s@{{MIN_NODES}}@${MIN_NODES}@g" "${addon}"
sed -i -e "s@{{MAX_NODES}}@${MAX_NODES}@g" "${addon}"
sed -i -e "s@{{GROUP_NAME}}@${GROUP_NAME}@g" "${addon}"
sed -i -e "s@{{AWS_REGION}}@${AWS_REGION}@g" "${addon}"
sed -i -e "s@{{SSL_CERT_PATH}}@${SSL_CERT_PATH}@g" "${addon}"
kubectl apply -f ${addon}
```
An enhanced script that also adds the IAM policies is included here: [cluster-autoscaler.sh](cluster-autoscaler.sh)
Question: Which ASG should be autoscaled?
Answer: By default, kOps creates a "nodes" instance group and a corresponding ASG, which will have a name such as "nodes.$CLUSTER_NAME", visible in the AWS Console. That ASG is a good choice to begin with. Optionally, you may also create a new instance group with `kops create ig <newgroupname>` and configure that instead. Set the maxSize of the kOps instance group, and update the cluster so the maxSize propagates to the ASG, as sketched below.
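For example (a sketch; `nodes` and `$CLUSTER_NAME` are placeholders):
```bash
kops edit ig nodes --name $CLUSTER_NAME
# set spec.minSize / spec.maxSize in the editor, then propagate the change to the ASG:
kops update cluster $CLUSTER_NAME --yes
```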
Question: The cluster-autoscaler [documentation](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws) mentions an IAM policy. Which IAM role should the policy be attached to?
Answer: kOps creates two roles, `nodes.$CLUSTER_NAME` and `masters.$CLUSTER_NAME`. Currently the example scripts run the autoscaler process on the Kubernetes master node, so the IAM policy should be attached to `masters.$CLUSTER_NAME` (substituting your actual cluster name).
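For example, attaching an already-created policy to the masters role could look like this (the policy ARN is a placeholder; the enhanced script above does this automatically):
```bash
aws iam attach-role-policy \
  --role-name masters.$CLUSTER_NAME \
  --policy-arn arn:aws:iam::<account-id>:policy/aws-cluster-autoscaler
```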

View File

@ -1,17 +0,0 @@
kind: Addons
metadata:
name: cluster-autoscaler
spec:
addons:
- version: 1.4.0
selector:
k8s-addon: cluster-autoscaler.addons.k8s.io
manifest: v1.4.0.yaml
- version: 1.6.0
selector:
k8s-addon: cluster-autoscaler.addons.k8s.io
manifest: v1.6.0.yaml
- version: 1.8.0
selector:
k8s-addon: cluster-autoscaler.addons.k8s.io
manifest: v1.8.0.yaml

View File

@ -1,121 +0,0 @@
#!/bin/bash
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -e
#Set all the variables in this section
CLUSTER_NAME="myfirstcluster.k8s.local"
CLOUD_PROVIDER=aws
IMAGE=registry.k8s.io/cluster-autoscaler:v1.1.0
MIN_NODES=2
MAX_NODES=20
AWS_REGION=us-east-1
INSTANCE_GROUP_NAME="nodes"
ASG_NAME="${INSTANCE_GROUP_NAME}.${CLUSTER_NAME}" #ASG_NAME should be the name of ASG as seen on AWS console.
IAM_ROLE="masters.${CLUSTER_NAME}" #Where will the cluster-autoscaler process run? Currently on the master node.
SSL_CERT_PATH="/etc/ssl/certs/ca-certificates.crt" #(/etc/ssl/certs for gce, /etc/ssl/certs/ca-bundle.crt for RHEL7.X)
#KOPS_STATE_STORE="s3://___" #KOPS_STATE_STORE might already be set as an environment variable, in which case it doesn't have to be changed.
# Best-effort install of script prerequisites; otherwise they will need to be installed manually.
if [[ -f /usr/bin/apt-get && ! -f /usr/bin/jq ]]
then
sudo apt-get update
sudo apt-get install -y jq
fi
if [[ -f /bin/yum && ! -f /bin/jq ]]
then
echo "This may fail if epel cannot be installed. In that case, correct/install epel and retry."
sudo yum install -y epel-release
sudo yum install -y jq || exit
fi
if [[ -f /usr/local/bin/brew && ! -f /usr/local/bin/jq ]]
then
brew install jq || exit
fi
echo "7⃣ Set up Autoscaling"
echo " First, we need to update the minSize and maxSize attributes for the kops instancegroup."
echo " The next command will open the instancegroup config in your default editor, please save and exit the file once you're done…"
sleep 1
kops edit ig $INSTANCE_GROUP_NAME --state ${KOPS_STATE_STORE} --name ${CLUSTER_NAME}
echo " Running kops update cluster --yes"
kops update cluster --yes --state ${KOPS_STATE_STORE} --name ${CLUSTER_NAME}
printf "\n"
printf " a) Creating IAM policy to allow aws-cluster-autoscaler access to AWS autoscaling groups…\n"
cat > asg-policy.json << EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup"
],
"Resource": "*"
}
]
}
EOF
ASG_POLICY_NAME=aws-cluster-autoscaler
unset TESTOUTPUT
TESTOUTPUT=$(aws iam list-policies --output json | jq -r '.Policies[] | select(.PolicyName == "aws-cluster-autoscaler") | .Arn')
if [[ $? -eq 0 && -n "$TESTOUTPUT" ]]
then
printf " ✅ Policy already exists\n"
ASG_POLICY_ARN=$TESTOUTPUT
else
printf " ✅ Policy does not yet exist, creating now.\n"
ASG_POLICY=$(aws iam create-policy --policy-name $ASG_POLICY_NAME --policy-document file://asg-policy.json --output json)
ASG_POLICY_ARN=$(echo $ASG_POLICY | jq -r '.Policy.Arn')
printf " ✅ \n"
fi
printf " b) Attaching policy to IAM Role…\n"
aws iam attach-role-policy --policy-arn $ASG_POLICY_ARN --role-name $IAM_ROLE
printf " ✅ \n"
addon=cluster-autoscaler.yml
manifest_url=https://raw.githubusercontent.com/kubernetes/kops/master/addons/cluster-autoscaler/v1.8.0.yaml
if [[ $(which wget) ]]; then
wget -O ${addon} ${manifest_url}
elif [[ $(which curl) ]]; then
curl -s -o ${addon} ${manifest_url}
else
echo "No curl or wget available. Can't get the manifest."
exit 1
fi
sed -i -e "s@{{CLOUD_PROVIDER}}@${CLOUD_PROVIDER}@g" "${addon}"
sed -i -e "s@{{IMAGE}}@${IMAGE}@g" "${addon}"
sed -i -e "s@{{MIN_NODES}}@${MIN_NODES}@g" "${addon}"
sed -i -e "s@{{MAX_NODES}}@${MAX_NODES}@g" "${addon}"
sed -i -e "s@{{GROUP_NAME}}@${ASG_NAME}@g" "${addon}"
sed -i -e "s@{{AWS_REGION}}@${AWS_REGION}@g" "${addon}"
sed -i -e "s@{{SSL_CERT_PATH}}@${SSL_CERT_PATH}@g" "${addon}"
kubectl apply -f ${addon}
printf "Done\n"

View File

@ -1,174 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
resources: ["events","endpoints"]
verbs: ["create", "patch"]
- apiGroups: [""]
resources: ["pods/eviction"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["endpoints"]
resourceNames: ["cluster-autoscaler"]
verbs: ["get","update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["watch","list","get","update"]
- apiGroups: [""]
resources: ["pods","services","replicationcontrollers","persistentvolumeclaims","persistentvolumes"]
verbs: ["watch","list","get"]
- apiGroups: ["extensions"]
resources: ["replicasets","daemonsets"]
verbs: ["watch","list","get"]
- apiGroups: ["policy"]
resources: ["poddisruptionbudgets"]
verbs: ["watch","list"]
- apiGroups: ["apps"]
resources: ["statefulsets","replicasets","daemonsets"]
verbs: ["watch","list","get"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["watch","list","get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["watch","list","get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["cluster-autoscaler-status"]
verbs: ["delete","get","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
app: cluster-autoscaler
spec:
replicas: 1
selector:
matchLabels:
app: cluster-autoscaler
template:
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
annotations:
# For 1.6, we keep the old tolerations in case of a downgrade to 1.5
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated", "value":"master"}]'
prometheus.io/scrape: 'true'
prometheus.io/port: '8085'
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
nodeSelector:
kubernetes.io/role: master
serviceAccountName: cluster-autoscaler
containers:
- image: {{IMAGE}}
name: cluster-autoscaler
livenessProbe:
httpGet:
path: /health-check
port: 8085
readinessProbe:
httpGet:
path: /health-check
port: 8085
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 300Mi
command:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider={{CLOUD_PROVIDER}}
- --skip-nodes-with-local-storage=false
- --nodes={{MIN_NODES}}:{{MAX_NODES}}:{{GROUP_NAME}}
env:
- name: AWS_REGION
value: {{AWS_REGION}}
volumeMounts:
- name: ssl-certs
mountPath: {{SSL_CERT_PATH}}
readOnly: true
imagePullPolicy: "Always"
volumes:
- name: ssl-certs
hostPath:
path: {{SSL_CERT_PATH}}
dnsPolicy: "Default"

View File

@ -1,61 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-app: cluster-autoscaler
spec:
replicas: 1
selector:
matchLabels:
k8s-app: cluster-autoscaler
template:
metadata:
labels:
k8s-app: cluster-autoscaler
annotations:
# For 1.6, we keep the old tolerations in case of a downgrade to 1.5
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated", "value":"master"}]'
prometheus.io/scrape: 'true'
prometheus.io/port: '8085'
spec:
containers:
- name: cluster-autoscaler
image: {{IMAGE}}
livenessProbe:
httpGet:
path: /health-check
port: 8085
readinessProbe:
httpGet:
path: /health-check
port: 8085
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 300Mi
command:
- ./cluster-autoscaler
- --cloud-provider={{CLOUD_PROVIDER}}
- --nodes={{MIN_NODES}}:{{MAX_NODES}}:{{GROUP_NAME}}
env:
- name: AWS_REGION
value: {{AWS_REGION}}
volumeMounts:
- name: ssl-certs
mountPath: {{SSL_CERT_PATH}}
readOnly: true
volumes:
- name: ssl-certs
hostPath:
path: {{SSL_CERT_PATH}}
dnsPolicy: "Default"
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: "node-role.kubernetes.io/master"
effect: NoSchedule

View File

@ -1,224 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
rules:
- apiGroups:
- ""
resources:
- events
- endpoints
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
- apiGroups:
- ""
resources:
- pods/status
verbs:
- update
- apiGroups:
- ""
resources:
- endpoints
resourceNames:
- cluster-autoscaler
verbs:
- get
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- watch
- list
- get
- update
- apiGroups:
- ""
resources:
- pods
- services
- replicationcontrollers
- persistentvolumeclaims
- persistentvolumes
verbs:
- watch
- list
- get
- apiGroups:
- extensions
resources:
- replicasets
- daemonsets
verbs:
- watch
- list
- get
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- watch
- list
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- watch
- list
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- cluster-autoscaler-status
verbs:
- delete
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
spec:
replicas: 1
selector:
matchLabels:
k8s-app: cluster-autoscaler
template:
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
k8s-app: cluster-autoscaler
annotations:
# For 1.6, we keep the old tolerations in case of a downgrade to 1.5
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated", "value":"master"}]'
prometheus.io/scrape: 'true'
prometheus.io/port: '8085'
spec:
serviceAccountName: cluster-autoscaler
containers:
- name: cluster-autoscaler
image: {{IMAGE}}
livenessProbe:
httpGet:
path: /health-check
port: 8085
readinessProbe:
httpGet:
path: /health-check
port: 8085
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 300Mi
command:
- ./cluster-autoscaler
- --cloud-provider={{CLOUD_PROVIDER}}
- --nodes={{MIN_NODES}}:{{MAX_NODES}}:{{GROUP_NAME}}
env:
- name: AWS_REGION
value: {{AWS_REGION}}
volumeMounts:
- name: ssl-certs
mountPath: {{SSL_CERT_PATH}}
readOnly: true
volumes:
- name: ssl-certs
hostPath:
path: {{SSL_CERT_PATH}}
dnsPolicy: "Default"
nodeSelector:
node-role.kubernetes.io/master: ""
tolerations:
- key: "node-role.kubernetes.io/master"
effect: NoSchedule

View File

@ -1,235 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
rules:
- apiGroups:
- ""
resources:
- events
- endpoints
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
- apiGroups:
- ""
resources:
- pods/status
verbs:
- update
- apiGroups:
- ""
resources:
- endpoints
resourceNames:
- cluster-autoscaler
verbs:
- get
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- watch
- list
- get
- update
- apiGroups:
- ""
resources:
- pods
- services
- replicationcontrollers
- persistentvolumeclaims
- persistentvolumes
verbs:
- watch
- list
- get
- apiGroups:
- extensions
resources:
- replicasets
- daemonsets
verbs:
- watch
- list
- get
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- watch
- list
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- watch
- list
- get
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- watch
- list
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
- cluster-autoscaler-status
verbs:
- delete
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: cluster-autoscaler
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cluster-autoscaler
subjects:
- kind: ServiceAccount
name: cluster-autoscaler
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: cluster-autoscaler
namespace: kube-system
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
spec:
replicas: 1
selector:
matchLabels:
app: cluster-autoscaler
template:
metadata:
labels:
k8s-addon: cluster-autoscaler.addons.k8s.io
app: cluster-autoscaler
annotations:
# For 1.6, we keep the old tolerations in case of a downgrade to 1.5
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"dedicated", "value":"master"}]'
prometheus.io/scrape: 'true'
prometheus.io/port: '8085'
spec:
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
nodeSelector:
kubernetes.io/role: master
serviceAccountName: cluster-autoscaler
containers:
- name: cluster-autoscaler
image: {{IMAGE}}
livenessProbe:
httpGet:
path: /health-check
port: 8085
readinessProbe:
httpGet:
path: /health-check
port: 8085
resources:
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 300Mi
command:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider={{CLOUD_PROVIDER}}
- --skip-nodes-with-local-storage=false
- --nodes={{MIN_NODES}}:{{MAX_NODES}}:{{GROUP_NAME}}
env:
- name: AWS_REGION
value: {{AWS_REGION}}
volumeMounts:
- name: ssl-certs
mountPath: {{SSL_CERT_PATH}}
readOnly: true
volumes:
- name: ssl-certs
hostPath:
path: {{SSL_CERT_PATH}}
dnsPolicy: "Default"

View File

@ -1,30 +0,0 @@
# Deploying Citrix Ingress Controller through kOps
This guide explains how to deploy the [Citrix Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) through a kOps addon.
## Quick Deploy using `kops`
You can enable the Citrix Ingress Controller addon when creating the Kubernetes cluster through kOps.
Edit the cluster before creating it:
```
kops edit cluster <cluster-name>
```
Now add the addon specification to the cluster manifest under `spec.addons`:
```
addons:
- manifest: ingress-citrix
```
For more information on how to enable addons during cluster creation, refer to the [kOps addons guide](https://github.com/kubernetes/kops/blob/master/docs/operations/addons.md#installing-kubernetes-addons).
## Quick Deploy using `kubectl`
After cluster creation, you can deploy the [Citrix Ingress Controller](https://github.com/citrix/citrix-k8s-ingress-controller) using the following command:
```
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-citrix/v1.1.1.yaml
```
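To verify the deployment came up, you can check the pods and the service created by the manifest (label and service name as defined in `v1.1.1.yaml`):
```
kubectl get pods -l app=cpx-ingress
kubectl get service cpx-service
```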

View File

@ -1,9 +0,0 @@
kind: Addons
metadata:
name: ingress-citrix
spec:
addons:
- version: 1.1.1
selector:
k8s-addon: ingress-citrix.addons.k8s.io
manifest: v1.1.1.yaml

View File

@ -1,189 +0,0 @@
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: cpx-ingress-k8s-role
rules:
- apiGroups: [""]
resources: ["endpoints", "ingresses", "pods", "secrets", "nodes", "routes", "namespaces", "configmaps"]
verbs: ["get", "list", "watch"]
# services/status is needed to update the loadbalancer IP in service status for integrating
# service of type LoadBalancer with external-dns
- apiGroups: [""]
resources: ["services/status"]
verbs: ["patch"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
- apiGroups: ["extensions"]
resources: ["ingresses", "ingresses/status"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch"]
- apiGroups: ["citrix.com"]
resources: ["rewritepolicies", "canarycrds", "authpolicies", "ratelimits", "listeners", "httproutes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["citrix.com"]
resources: ["rewritepolicies/status", "canarycrds/status", "ratelimits/status", "authpolicies/status", "listeners/status", "httproutes/status"]
verbs: ["get", "list", "patch"]
- apiGroups: ["citrix.com"]
resources: ["vips"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: ["route.openshift.io"]
resources: ["routes"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: cpx-ingress-k8s-role
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cpx-ingress-k8s-role
subjects:
- kind: ServiceAccount
name: cpx-ingress-k8s-role
namespace: default
apiVersion: rbac.authorization.k8s.io/v1
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cpx-ingress-k8s-role
namespace: default
---
apiVersion: v1
kind: Secret
metadata:
name: nslogin
namespace: default
type: Opaque
data:
password: bnNyb290
username: bnNyb290
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cpx-ingress
spec:
selector:
matchLabels:
app: cpx-ingress
replicas: 1
template:
metadata:
name: cpx-ingress
labels:
app: cpx-ingress
annotations:
spec:
serviceAccountName: cpx-ingress-k8s-role
containers:
- name: cpx-ingress
image: "quay.io/citrix/citrix-k8s-cpx-ingress:13.0-58.30"
securityContext:
privileged: true
env:
- name: "EULA"
value: "yes"
- name: "KUBERNETES_TASK_ID"
value: ""
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: nitro-http
containerPort: 9080
- name: nitro-https
containerPort: 9443
#This is required for Health check to succeed
readinessProbe:
tcpSocket:
port: 9080
initialDelaySeconds: 60
periodSeconds: 5
failureThreshold: 5
successThreshold: 1
imagePullPolicy: Always
volumeMounts:
- mountPath: /cpx/conf/
name: cpx-volume1
- mountPath: /cpx/crash/
name: cpx-volume2
# Add cic as a sidecar
- name: cic
image: "quay.io/citrix/citrix-k8s-ingress-controller:1.9.9"
env:
- name: "EULA"
value: "yes"
- name: "NS_IP"
value: "127.0.0.1"
- name: "NS_PROTOCOL"
value: "HTTP"
- name: "NS_PORT"
value: "80"
- name: "NS_DEPLOYMENT_MODE"
value: "SIDECAR"
- name: "NS_ENABLE_MONITORING"
value: "YES"
- name: "NS_USER"
valueFrom:
secretKeyRef:
name: nslogin
key: username
- name: "NS_PASSWORD"
valueFrom:
secretKeyRef:
name: nslogin
key: password
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
imagePullPolicy: Always
volumes:
- name: cpx-volume1
emptyDir: {}
- name: cpx-volume2
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: cpx-service
labels:
app: cpx-service
spec:
type: LoadBalancer
ports:
- port: 80
protocol: TCP
name: http
- port: 443
protocol: TCP
name: https
selector:
app: cpx-ingress

View File

@ -1,28 +0,0 @@
## Deployment
### AWS
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0.yaml
```
### GCE
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/ingress-nginx/v1.6.0-gce.yaml
```
## Creating a simple ingress
```
kubectl run echoheaders --image=registry.k8s.io/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
kubectl apply -f https://raw.githubusercontent.com/kubernetes/contrib/master/ingress/controllers/nginx/examples/ingress.yaml
kubectl get services ingress-nginx -owide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx 100.71.196.44 a29c28f4b8b0811e685cb0a924c5a8a1-1593015597.us-east-1.elb.amazonaws.com 80/TCP,443/TCP 13m app=ingress-nginx
curl -v -H "Host: bar.baz.com" http://a29c28f4b8b0811e685cb0a924c5a8a1-1593015597.us-east-1.elb.amazonaws.com/bar
```
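The referenced `ingress.yaml` maps hosts and paths to the two echoheaders services; a minimal sketch of such an ingress (using the then-current `extensions/v1beta1` API; the exact rules in the upstream example may differ):
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheaders-y
          servicePort: 80
```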

View File

@ -1,9 +0,0 @@
kind: Addons
metadata:
name: ingress-nginx
spec:
addons:
- version: 1.4.0
selector:
k8s-addon: ingress-nginx.addons.k8s.io
manifest: v1.4.0.yaml

View File

@ -1,133 +0,0 @@
kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
labels:
k8s-addon: ingress-nginx.addons.k8s.io
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx-default-backend
labels:
k8s-addon: ingress-nginx.addons.k8s.io
spec:
replicas: 1
template:
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: registry.k8s.io/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-nginx
labels:
k8s-addon: ingress-nginx.addons.k8s.io
data:
use-proxy-protocol: "true"
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
labels:
k8s-addon: ingress-nginx.addons.k8s.io
annotations:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: ingress-nginx
labels:
k8s-addon: ingress-nginx.addons.k8s.io
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
k8s-addon: ingress-nginx.addons.k8s.io
spec:
terminationGracePeriodSeconds: 60
containers:
- image: registry.k8s.io/nginx-ingress-controller:0.8.3
name: ingress-nginx
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
- --nginx-configmap=$(POD_NAMESPACE)/ingress-nginx

View File

@ -1,312 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-controller
namespace: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-controller
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:serviceaccount:kube-ingress:nginx-ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-controller
subjects:
- kind: ServiceAccount
name: nginx-ingress-controller
namespace: kube-ingress
---
kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
namespace: kube-ingress
labels:
k8s-app: default-http-backend
k8s-addon: ingress-nginx.addons.k8s.io
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: nginx-default-backend
namespace: kube-ingress
labels:
k8s-app: default-http-backend
k8s-addon: ingress-nginx.addons.k8s.io
spec:
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
labels:
k8s-app: default-http-backend
k8s-addon: ingress-nginx.addons.k8s.io
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: registry.k8s.io/defaultbackend:1.4
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-nginx
namespace: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
data:
use-proxy-protocol: "false"
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
spec:
# Forces nodes without Service endpoints to remove themselves from the list of nodes eligible. See https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: ingress-nginx
namespace: kube-ingress
labels:
k8s-app: nginx-ingress-controller
k8s-addon: ingress-nginx.addons.k8s.io
spec:
replicas: 1
template:
metadata:
labels:
app: ingress-nginx
k8s-app: nginx-ingress-controller
k8s-addon: ingress-nginx.addons.k8s.io
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 60
serviceAccountName: nginx-ingress-controller
containers:
- image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0
name: nginx-ingress-controller
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
- --configmap=$(POD_NAMESPACE)/ingress-nginx
- --publish-service=$(POD_NAMESPACE)/ingress-nginx

View File

@ -1,322 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-controller
namespace: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-controller
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: system:serviceaccount:kube-ingress:nginx-ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
labels:
k8s-addon: ingress-nginx.addons.k8s.io
name: nginx-ingress-controller
namespace: kube-ingress
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-controller
subjects:
- kind: ServiceAccount
name: nginx-ingress-controller
namespace: kube-ingress
---
kind: Service
apiVersion: v1
metadata:
name: nginx-default-backend
namespace: kube-ingress
labels:
k8s-app: default-http-backend
k8s-addon: ingress-nginx.addons.k8s.io
spec:
ports:
- port: 80
targetPort: http
selector:
app: nginx-default-backend
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: nginx-default-backend
namespace: kube-ingress
labels:
k8s-app: default-http-backend
k8s-addon: ingress-nginx.addons.k8s.io
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-default-backend
template:
metadata:
labels:
k8s-app: default-http-backend
k8s-addon: ingress-nginx.addons.k8s.io
app: nginx-default-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
image: registry.k8s.io/defaultbackend:1.4
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
ports:
- name: http
containerPort: 8080
protocol: TCP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-nginx
namespace: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
data:
use-proxy-protocol: "true"
---
kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: kube-ingress
labels:
k8s-addon: ingress-nginx.addons.k8s.io
annotations:
# Enable PROXY protocol
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
# Increase the ELB idle timeout to avoid issues with WebSockets or Server-Sent Events.
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
type: LoadBalancer
selector:
app: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: ingress-nginx
namespace: kube-ingress
labels:
k8s-app: nginx-ingress-controller
k8s-addon: ingress-nginx.addons.k8s.io
spec:
replicas: 3
selector:
matchLabels:
app: ingress-nginx
template:
metadata:
labels:
app: ingress-nginx
k8s-app: nginx-ingress-controller
k8s-addon: ingress-nginx.addons.k8s.io
annotations:
prometheus.io/port: '10254'
prometheus.io/scrape: 'true'
spec:
terminationGracePeriodSeconds: 60
serviceAccountName: nginx-ingress-controller
containers:
- image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0
name: nginx-ingress-controller
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
- --configmap=$(POD_NAMESPACE)/ingress-nginx
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=ingress.kubernetes.io

View File

@ -1,388 +0,0 @@
# Creating ingress with kube-ingress-aws-controller and skipper
[Kube AWS Ingress Controller](https://github.com/zalando-incubator/kubernetes-on-aws)
creates AWS Application Load Balancers (ALBs) that are used to terminate TLS connections using
[AWS Certificate Manager (ACM)](https://aws.amazon.com/certificate-manager/) or
[AWS Identity and Access Management (IAM)](https://docs.aws.amazon.com/IAM/latest/APIReference/Welcome.html)
certificates. The ALBs route traffic to an ingress HTTP router, for example
[skipper](https://github.com/zalando/skipper/), which routes
traffic to Kubernetes services and implements
[advanced features](https://opensource.zalando.com/skipper/kubernetes/ingress-controller/)
like blue-green deployments, feature toggles, rate limits,
circuit breakers, metrics, access logs, the OpenTracing API, shadow traffic, and A/B tests.
Advantages:
- it uses CloudFormation instead of direct API calls for safety reasons, because if you run Kubernetes on AWS at scale you will get rate-limited by AWS sooner or later
- it is not subject to AWS route limitations
- you can use managed certificates like ACM, but also certificates you purchased, uploaded as IAM server certificates
- it automatically finds the best matching ACM or IAM certificate for your ingress, but you can also provide hostnames or the ARN to influence the certificate/ALB lookup
- you are free to use an HTTP router implementation of your choice, which can implement more features like blue-green deployments
This tutorial assumes you have GNU sed installed; if not, read the
`sed` commands and apply the corresponding modifications to the files
by hand. If you are running BSD or macOS you can use `gsed`.
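On macOS, for example, GNU sed is available via Homebrew (assuming Homebrew is installed):
```
brew install gnu-sed
# then use gsed wherever the tutorial uses sed
```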
## kOps cluster with cloud labels
Cloud labels are required to make Kube AWS Ingress Controller work,
because it has to find the AWS Application Load Balancers it manages
by AWS tags, which are called cloud labels in kOps.
You have to set some environment variables to choose the AZs to deploy to,
your S3 bucket name for the kOps configuration, and your kOps cluster name:
```
export AWS_AVAILABILITY_ZONES=eu-central-1b,eu-central-1c
export S3_BUCKET=kops-aws-workshop-<your-name>
export KOPS_CLUSTER_NAME=example.cluster.k8s.local
```
You have two options; please skip the section that does not apply:
1. You can create a new cluster with cloud labels
2. You can modify an existing cluster and add cloud labels
### Create a new cluster
Next, you create the kOps cluster and validate that everything is set up properly.
```
export KOPS_STATE_STORE=s3://${S3_BUCKET}
kops create cluster --name $KOPS_CLUSTER_NAME --zones $AWS_AVAILABILITY_ZONES --cloud-labels kubernetes.io/cluster/$KOPS_CLUSTER_NAME=owned --yes
kops validate cluster
```
### Modify an existing cluster
Next, you modify your existing kOps cluster and update it.
```
export KOPS_STATE_STORE=s3://${S3_BUCKET}
kops edit cluster $KOPS_CLUSTER_NAME
```
Add the `cloudLabels` matching your `$KOPS_CLUSTER_NAME`, here `example.cluster.k8s.local`:
```
spec:
cloudLabels:
kubernetes.io/cluster/example.cluster.k8s.local: owned
```
Update the cluster with the new configuration:
```
kops update cluster $KOPS_CLUSTER_NAME --yes
```
### IAM role
This is the effective policy that you need on your EC2 nodes for the
kube-ingress-aws-controller:
```
{
"Effect": "Allow",
"Action": [
"acm:ListCertificates",
"acm:DescribeCertificate",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLoadBalancerTargetGroups",
"autoscaling:AttachLoadBalancers",
"autoscaling:DetachLoadBalancers",
"autoscaling:DetachLoadBalancerTargetGroups",
"autoscaling:AttachLoadBalancerTargetGroups",
"cloudformation:*",
"elasticloadbalancing:*",
"elasticloadbalancingv2:*",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeVpcs",
"iam:GetServerCertificate",
"iam:ListServerCertificates"
],
"Resource": [
"*"
]
}
```
To apply the mentioned policy you have to add [additionalPolicies with kOps](https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md) to your cluster, so edit your cluster:
```
kops edit cluster $KOPS_CLUSTER_NAME
```
and add this to your node policy:
```
additionalPolicies:
node: |
[
{
"Effect": "Allow",
"Action": [
"acm:ListCertificates",
"acm:DescribeCertificate",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLoadBalancerTargetGroups",
"autoscaling:AttachLoadBalancers",
"autoscaling:DetachLoadBalancers",
"autoscaling:DetachLoadBalancerTargetGroups",
"autoscaling:AttachLoadBalancerTargetGroups",
"cloudformation:*",
"elasticloadbalancing:*",
"elasticloadbalancingv2:*",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeVpcs",
"iam:GetServerCertificate",
"iam:ListServerCertificates"
],
"Resource": ["*"]
}
]
```
After that make sure this was applied to your cluster with:
```
kops update cluster $KOPS_CLUSTER_NAME --yes
kops rolling-update cluster
```
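To verify that the additional policy actually landed on the node role (kOps names it `nodes.<cluster-name>` by default), you can list the role's inline policies; a quick sanity check:
```
aws iam list-role-policies --role-name nodes.$KOPS_CLUSTER_NAME
```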
### Security Group for Ingress
To be able to route traffic from the ALB to your nodes you need to create
an Amazon EC2 security group with Kubernetes tags that allows ingress on
ports 80 and 443 from the internet and everything from the ALBs to your
nodes. You also need to allow traffic to leave the ALB towards the internet and the Kubernetes nodes.
The tags are used by Kubernetes components to find the AWS components
owned by the cluster. We will do this with the AWS CLI:
```
aws ec2 create-security-group --description ingress.$KOPS_CLUSTER_NAME --group-name ingress.$KOPS_CLUSTER_NAME
aws ec2 describe-security-groups --group-names ingress.$KOPS_CLUSTER_NAME
sgidingress=$(aws ec2 describe-security-groups --filters Name=group-name,Values=ingress.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0]["GroupId"]' -r)
sgidnode=$(aws ec2 describe-security-groups --filters Name=group-name,Values=nodes.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0]["GroupId"]' -r)
aws ec2 authorize-security-group-ingress --group-id $sgidingress --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $sgidingress --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-egress --group-id $sgidingress --protocol all --port -1 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $sgidnode --protocol all --port -1 --source-group $sgidingress
aws ec2 create-tags --resources $sgidingress --tags Key="kubernetes.io/cluster/${KOPS_CLUSTER_NAME}",Value="owned" Key="kubernetes:application",Value="kube-ingress-aws-controller"
```
If your cluster is not running in the default VPC, the commands for creating the security groups will look a little different:
```
VPC_ID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=nodes.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0].VpcId' -r)
aws ec2 create-security-group --description ingress.$KOPS_CLUSTER_NAME --group-name ingress.$KOPS_CLUSTER_NAME --vpc-id $VPC_ID
aws ec2 describe-security-groups --filter Name=vpc-id,Values=$VPC_ID Name=group-name,Values=ingress.$KOPS_CLUSTER_NAME
sgidingress=$(aws ec2 describe-security-groups --filter Name=vpc-id,Values=$VPC_ID Name=group-name,Values=ingress.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0]["GroupId"]' -r)
sgidnode=$(aws ec2 describe-security-groups --filter Name=vpc-id,Values=$VPC_ID Name=group-name,Values=nodes.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0]["GroupId"]' -r)
aws ec2 authorize-security-group-ingress --group-id $sgidingress --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $sgidingress --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-egress --group-id $sgidingress --protocol all --port -1 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $sgidnode --protocol all --port -1 --source-group $sgidingress
aws ec2 create-tags --resources $sgidingress --tags Key="kubernetes.io/cluster/${KOPS_CLUSTER_NAME}",Value="owned" Key="kubernetes:application",Value="kube-ingress-aws-controller"
```
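The controller discovers this security group by its tags, so it is worth double-checking that they were applied (reusing the `$sgidingress` variable from above):
```
aws ec2 describe-security-groups --group-ids $sgidingress --query 'SecurityGroups[0].Tags'
```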
### AWS Certificate Manager (ACM)
To have TLS termination you can use AWS managed certificates. If you
are unsure whether you have at least one certificate provisioned, use the
following command to list your ACM certificates:
```
aws acm list-certificates
```
If you have one, you can move on to the next section.
To create an ACM certificate, you have to request a certificate for a domain name that you own in [route53](https://aws.amazon.com/route53/), for example `example.org`. Here we request one wildcard certificate for `example.org`:
```
aws acm request-certificate --domain-name "*.example.org"
```
You will have to complete a challenge to prove ownership of the
given domain. In most cases you have to click on a link in an e-mail
sent by certificates.amazon.com. The e-mail subject will be `Certificate approval for <example.org>`.
If you completed the challenge successfully, you can now check the status of
your certificate. Find the ARN of the new certificate:
```
aws acm list-certificates
```
Describe the certificate and check the Status value:
```
aws acm describe-certificate --certificate-arn arn:aws:acm:<snip> | jq '.["Certificate"]["Status"]'
```
If this is not "ISSUED", your certificate is not valid yet and you have to fix that.
To resend the validation e-mail, you can use:
```
aws acm resend-validation-email
```
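If you only want to see certificates that are already usable, you can filter the listing by status; a convenience, assuming your default region is configured:
```
aws acm list-certificates --certificate-statuses ISSUED
```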
### Install components kube-ingress-aws-controller and skipper
kube-ingress-aws-controller will be deployed as a Deployment with 1
replica, which is OK for production, because it only configures
ALBs. Skipper will be deployed as a DaemonSet, and we create 2 ingresses, 2
services and 2 deployments to show green-blue deployments.
With the addon's `v1.0.0.yaml` manifest in your working directory (its
content is shown further below), change the region and hostnames depending on
your route53 domain and ACM certificate:
```
REGION=${AWS_AVAILABILITY_ZONES#*,}   # drop everything up to and including the first comma
REGION=${REGION:0:-1}                 # strip the trailing AZ letter to get the region
sed -i "s/<REGION>/$REGION/" v1.0.0.yaml
sed -i "s/<HOSTNAME>/demo-app.example.org/" v1.0.0.yaml
sed -i "s/<HOSTNAME2>/demo-green-blue.example.org/" v1.0.0.yaml
kubectl create -f v1.0.0.yaml
```
If your VPC CIDR is different from 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, 127.0.0.1/8, fd00::/8 or ::1/128 you may
get a "Readiness probe failed: HTTP probe failed with statuscode: 404" from the skipper pods with the *latest* or
*v0.10.7* tag of skipper.
To prevent this, uncomment the `-whitelisted-healthcheck-cidr=<CIDR_BLOCK>` argument in v1.0.0.yaml and set it to your VPC CIDR.
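To look up your VPC CIDR, you can reuse the query for the `$VPC_ID` from the security group section:
```
VPC_ID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=nodes.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0].VpcId' -r)
aws ec2 describe-vpcs --vpc-ids $VPC_ID --query 'Vpcs[0].CidrBlock' --output text
```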
Check if the installation was successful:
```
kops validate cluster
```
If not, and you are sure all previous steps were done, please check the logs of the pod that is not in a running state:
```
kubectl -n kube-system get pods -l component=ingress
kubectl -n kube-system logs <podname>
```
### Test features
#### Base features
Check if your deployment was successful:
```
kubectl get pods,svc -l application=demo
```
To check if your Ingress created an ALB, check the `ADDRESS` column:
```
kubectl get ing -l application=demo -o wide
NAME HOSTS ADDRESS PORTS AGE
demo-app-v1 myapp.example.org example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com 80 1m
```
Once it is provisioned you can check with curl; the HTTP to HTTPS redirect is created automatically by Skipper:
```
curl -L -H"Host: myapp.example.org" example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com
<body style='color: green; background-color: white;'><h1>Hello!</h1>
```
Check if Kops dns-controller created a DNS record:
```
curl -L myapp.example.org
<body style='color: green; background-color: white;'><h1>Hello!</h1>
```
#### Feature toggle and rate limits
We assume you have all components running that were applied in `Base features`.
Now, you can test the feature toggle to access the new v2 application:
```
curl "https://myapp.example.org/?version=v2"
<body style='color: white; background-color: green;'><h1>Hello AWS!</h1>
```
If you run this more often, you can easily trigger the rate limit, which stops your calls from being proxied to the backend:
```
for i in {0..9}; do curl -v "https://myapp.example.org/?version=v2"; done
```
You should see output similar to:
```
* Trying 52.222.161.4...
-------- a lot of TLS output --------
> GET /?version=v2 HTTP/1.1
> Host: myapp.example.org
> User-Agent: curl/7.49.0
> Accept: */*
>
< HTTP/1.1 429 Too Many Requests
< Content-Type: text/plain; charset=utf-8
< Server: Skipper
< X-Content-Type-Options: nosniff
< X-Rate-Limit: 60
< Date: Mon, 27 Nov 2017 18:19:26 GMT
< Content-Length: 18
<
Too Many Requests
* Connection #0 to host myapp.example.org left intact
```
Your endpoint is now protected: the `zalando.org/skipper-filter: ratelimit(2, "1m")` annotation on the `demo-feature-toggle` ingress (see its manifest below) allows only two matching requests per minute before Skipper answers with `429 Too Many Requests`.
#### Green-Blue traffic Deployments
Next we will show traffic switching.
Deploy an ingress with traffic switching: 80% of the traffic goes to v1 and
20% to v2. Change the hostname depending on your route53 domain and
ACM certificate as before.
To check if your Ingress has an ALB, check the `ADDRESS` column:
```
kubectl get ing -l application=demo-tf -o wide
NAME HOSTS ADDRESS PORTS AGE
demo-traffic-switching demo-green-blue.example.org example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com 80 1m
```
Once it is provisioned you can check with curl; the HTTP to HTTPS redirect is created automatically by Skipper:
```
curl -L -H"Host: demo-green-blue.example.org" example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com
<body style='color: green; background-color: white;'><h1>Hello!</h1>
```
Check if Kops dns-controller (in case you have it installed) created a DNS record:
```
curl -L demo-green-blue.example.org
<body style='color: green; background-color: white;'><h1>Hello!</h1>
```
You can now open your browser at
[https://demo-green-blue.example.org](https://demo-green-blue.example.org/) (depending
on your `hostname`) and reload it maybe 5 times to see it switch from
a white background to a green background. If you modify the
`zalando.org/backend-weights` annotation you can control the chance
of hitting the v1 or the v2 application. Use `kubectl annotate` to change this:
```
kubectl annotate --overwrite ingress demo-traffic-switching zalando.org/backend-weights='{"demo-app-v1": 20, "demo-app-v2": 80}'
```
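Once you are happy with v2, the same pattern shifts all traffic to it (`--overwrite` is needed because the annotation already exists):
```
kubectl annotate --overwrite ingress demo-traffic-switching zalando.org/backend-weights='{"demo-app-v1": 0, "demo-app-v2": 100}'
```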

@ -1,9 +0,0 @@
kind: Addons
metadata:
name: kube-ingress-aws-controller
spec:
addons:
- version: v1.0.0
selector:
k8s-addon: kube-ingress-aws-controller.addons.k8s.io
manifest: v1.0.0.yaml

@ -1,222 +0,0 @@
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: kube-ingress-aws-controller
namespace: kube-system
labels:
application: kube-ingress-aws-controller
component: ingress
spec:
replicas: 1
selector:
matchLabels:
application: kube-ingress-aws-controller
component: ingress
template:
metadata:
labels:
application: kube-ingress-aws-controller
component: ingress
spec:
containers:
- name: controller
image: registry.opensource.zalan.do/teapot/kube-ingress-aws-controller:latest
env:
- name: AWS_REGION
value: <REGION>
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: skipper-ingress
namespace: kube-system
labels:
component: ingress
spec:
selector:
matchLabels:
component: ingress
updateStrategy:
type: RollingUpdate
template:
metadata:
name: skipper-ingress
labels:
component: ingress
spec:
hostNetwork: true
containers:
- name: skipper-ingress
image: registry.opensource.zalan.do/pathfinder/skipper:latest
ports:
- name: ingress-port
containerPort: 9999
hostPort: 9999
args:
- "skipper"
- "-kubernetes"
- "-kubernetes-in-cluster"
- "-address=:9999"
- "-proxy-preserve-host"
- "-serve-host-metrics"
- "-enable-ratelimits"
- "-experimental-upgrade"
- "-metrics-exp-decay-sample"
- "-kubernetes-https-redirect=true"
# - "-whitelisted-healthcheck-cidr=<CIDR_BLOCK>"
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 25m
memory: 25Mi
readinessProbe:
httpGet:
path: /kube-system/healthz
port: 9999
initialDelaySeconds: 5
timeoutSeconds: 5
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: demo-app-v1
spec:
replicas: 1
template:
metadata:
labels:
application: demo
version: v1
spec:
containers:
- name: skipper-demo
image: registry.opensource.zalan.do/pathfinder/skipper:latest
args:
- "skipper"
- "-inline-routes"
- "* -> inlineContent(\"<body style='color: green; background-color: white;'><h1>Hello!</h1>\") -> <shunt>"
ports:
- containerPort: 9090
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: demo-app-v2
spec:
replicas: 1
template:
metadata:
labels:
application: demo
version: v2
spec:
containers:
- name: skipper-demo
image: registry.opensource.zalan.do/pathfinder/skipper:latest
args:
- "skipper"
- "-inline-routes"
- "* -> inlineContent(\"<body style='color: white; background-color: green;'><h1>Hello AWS!</h1>\") -> <shunt>"
ports:
- containerPort: 9090
---
apiVersion: v1
kind: Service
metadata:
name: demo-app-v1
labels:
application: demo
version: v1
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
targetPort: 9090
name: external
selector:
application: demo
version: v1
---
apiVersion: v1
kind: Service
metadata:
name: demo-app-v2
labels:
application: demo
version: v2
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
targetPort: 9090
name: external
selector:
application: demo
version: v2
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "demo-v1"
labels:
application: demo
spec:
rules:
- host: "<HOSTNAME>"
http:
paths:
- backend:
serviceName: "demo-app-v1"
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "demo-feature-toggle"
labels:
application: demo
annotations:
zalando.org/skipper-predicate: QueryParam("version", "^v2$")
zalando.org/skipper-filter: ratelimit(2, "1m")
spec:
rules:
- host: "<HOSTNAME>"
http:
paths:
- backend:
serviceName: "demo-app-v2"
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "demo-traffic-switching"
labels:
application: demo
annotations:
zalando.org/backend-weights: |
{"demo-app-v1": 80, "demo-app-v2": 20}
spec:
rules:
- host: "<HOSTNAME2>"
http:
paths:
- backend:
serviceName: "demo-app-v1"
servicePort: 80
- backend:
serviceName: "demo-app-v2"
servicePort: 80

@ -1,2 +0,0 @@
## Usage
channels apply channel kube-state-metrics --yes

@ -1,21 +0,0 @@
kind: Addons
metadata:
name: kube-state-metrics
spec:
addons:
- version: v1.0.1
selector:
k8s-addon: kube-state-metrics.addons.k8s.io
manifest: v1.0.1.yaml
- version: v1.1.0-rc.0
selector:
k8s-addon: kube-state-metrics.addons.k8s.io
manifest: v1.1.0-rc.0.yaml
- version: v1.1.0
selector:
k8s-addon: kube-state-metrics.addons.k8s.io
manifest: v1.1.0.yaml
- version: v1.9.5
selector:
k8s-addon: kube-state-metrics.addons.k8s.io
manifest: v1.9.5.yaml

@ -1,158 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kube-state-metrics
rules:
- apiGroups: [""]
resources:
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- namespaces
verbs: ["list", "watch"]
- apiGroups: ["extensions"]
resources:
- daemonsets
- deployments
- replicasets
verbs: ["list", "watch"]
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs: ["list", "watch"]
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: kube-state-metrics
spec:
serviceAccountName: kube-state-metrics
containers:
- name: kube-state-metrics
image: quay.io/coreos/kube-state-metrics:v1.0.1
ports:
- name: http-metrics
containerPort: 8080
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
requests:
memory: 100Mi
cpu: 100m
limits:
memory: 200Mi
cpu: 200m
- name: addon-resizer
image: registry.k8s.io/addon-resizer:1.0
resources:
limits:
cpu: 100m
memory: 30Mi
requests:
cpu: 100m
memory: 30Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --container=kube-state-metrics
- --cpu=100m
- --extra-cpu=1m
- --memory=100Mi
- --extra-memory=2Mi
- --threshold=5
- --deployment=kube-state-metrics
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: kube-state-metrics
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
namespace: kube-system
name: kube-state-metrics-resizer
rules:
- apiGroups: [""]
resources:
- pods
verbs: ["get"]
- apiGroups: ["extensions"]
resources:
- deployments
resourceNames: ["kube-state-metrics"]
verbs: ["get", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-state-metrics
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
name: kube-state-metrics
namespace: kube-system
labels:
k8s-app: kube-state-metrics
annotations:
prometheus.io/scrape: 'true'
spec:
ports:
- name: http-metrics
port: 8080
targetPort: http-metrics
protocol: TCP
selector:
k8s-app: kube-state-metrics

@ -1,158 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kube-state-metrics
rules:
- apiGroups: [""]
resources:
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- namespaces
verbs: ["list", "watch"]
- apiGroups: ["extensions"]
resources:
- daemonsets
- deployments
- replicasets
verbs: ["list", "watch"]
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs: ["list", "watch"]
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: kube-state-metrics
spec:
serviceAccountName: kube-state-metrics
containers:
- name: kube-state-metrics
image: quay.io/coreos/kube-state-metrics:v1.1.0-rc.0
ports:
- name: http-metrics
containerPort: 8080
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
requests:
memory: 100Mi
cpu: 100m
limits:
memory: 200Mi
cpu: 200m
- name: addon-resizer
image: registry.k8s.io/addon-resizer:1.0
resources:
limits:
cpu: 100m
memory: 30Mi
requests:
cpu: 100m
memory: 30Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --container=kube-state-metrics
- --cpu=100m
- --extra-cpu=1m
- --memory=100Mi
- --extra-memory=2Mi
- --threshold=5
- --deployment=kube-state-metrics
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: kube-state-metrics
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
namespace: kube-system
name: kube-state-metrics-resizer
rules:
- apiGroups: [""]
resources:
- pods
verbs: ["get"]
- apiGroups: ["extensions"]
resources:
- deployments
resourceNames: ["kube-state-metrics"]
verbs: ["get", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-state-metrics
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
name: kube-state-metrics
namespace: kube-system
labels:
k8s-app: kube-state-metrics
annotations:
prometheus.io/scrape: 'true'
spec:
ports:
- name: http-metrics
port: 8080
targetPort: http-metrics
protocol: TCP
selector:
k8s-app: kube-state-metrics

@ -1,158 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: kube-state-metrics
rules:
- apiGroups: [""]
resources:
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- namespaces
verbs: ["list", "watch"]
- apiGroups: ["extensions"]
resources:
- daemonsets
- deployments
- replicasets
verbs: ["list", "watch"]
- apiGroups: ["apps"]
resources:
- statefulsets
verbs: ["list", "watch"]
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs: ["list", "watch"]
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: kube-state-metrics
spec:
serviceAccountName: kube-state-metrics
containers:
- name: kube-state-metrics
image: quay.io/coreos/kube-state-metrics:v1.1.0
ports:
- name: http-metrics
containerPort: 8080
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
resources:
requests:
memory: 100Mi
cpu: 100m
limits:
memory: 500Mi
cpu: 300m
- name: addon-resizer
image: registry.k8s.io/addon-resizer:1.8.1
resources:
limits:
cpu: 100m
memory: 30Mi
requests:
cpu: 100m
memory: 30Mi
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- /pod_nanny
- --container=kube-state-metrics
- --cpu=100m
- --extra-cpu=50m
- --memory=100Mi
- --extra-memory=100Mi
- --threshold=3
- --deployment=kube-state-metrics
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: kube-state-metrics
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
namespace: kube-system
name: kube-state-metrics-resizer
rules:
- apiGroups: [""]
resources:
- pods
verbs: ["get"]
- apiGroups: ["extensions"]
resources:
- deployments
resourceNames: ["kube-state-metrics"]
verbs: ["get", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-state-metrics
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
name: kube-state-metrics
namespace: kube-system
labels:
k8s-app: kube-state-metrics
annotations:
prometheus.io/scrape: 'true'
spec:
ports:
- name: http-metrics
port: 8080
targetPort: http-metrics
protocol: TCP
selector:
k8s-app: kube-state-metrics

@ -1,207 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 1.9.5
name: kube-state-metrics
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-state-metrics
subjects:
- kind: ServiceAccount
name: kube-state-metrics
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 1.9.5
name: kube-state-metrics
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- nodes
- pods
- services
- resourcequotas
- replicationcontrollers
- limitranges
- persistentvolumeclaims
- persistentvolumes
- namespaces
- endpoints
verbs:
- list
- watch
- apiGroups:
- extensions
resources:
- daemonsets
- deployments
- replicasets
- ingresses
verbs:
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
- daemonsets
- deployments
- replicasets
verbs:
- list
- watch
- apiGroups:
- batch
resources:
- cronjobs
- jobs
verbs:
- list
- watch
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- list
- watch
- apiGroups:
- authentication.k8s.io
resources:
- tokenreviews
verbs:
- create
- apiGroups:
- authorization.k8s.io
resources:
- subjectaccessreviews
verbs:
- create
- apiGroups:
- policy
resources:
- poddisruptionbudgets
verbs:
- list
- watch
- apiGroups:
- certificates.k8s.io
resources:
- certificatesigningrequests
verbs:
- list
- watch
- apiGroups:
- storage.k8s.io
resources:
- storageclasses
- volumeattachments
verbs:
- list
- watch
- apiGroups:
- admissionregistration.k8s.io
resources:
- mutatingwebhookconfigurations
- validatingwebhookconfigurations
verbs:
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- networkpolicies
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 1.9.5
name: kube-state-metrics
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kube-state-metrics
template:
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 1.9.5
spec:
containers:
- image: quay.io/coreos/kube-state-metrics:v1.9.5
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
timeoutSeconds: 5
name: kube-state-metrics
ports:
- containerPort: 8080
name: http-metrics
- containerPort: 8081
name: telemetry
readinessProbe:
httpGet:
path: /
port: 8081
initialDelaySeconds: 5
timeoutSeconds: 5
securityContext:
runAsUser: 65534
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: kube-state-metrics
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 1.9.5
name: kube-state-metrics
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: kube-state-metrics
app.kubernetes.io/version: 1.9.5
name: kube-state-metrics
namespace: kube-system
spec:
clusterIP: None
ports:
- name: http-metrics
port: 8080
targetPort: http-metrics
- name: telemetry
port: 8081
targetPort: telemetry
selector:
app.kubernetes.io/name: kube-state-metrics

@ -1,4 +0,0 @@
Changes:
* Switch to _not_ use a NodePort; we access through the kube-api proxy instead
* Add label `k8s-addon: kubernetes-dashboard.addons.k8s.io`

@ -1,58 +0,0 @@
kind: Addons
metadata:
name: kubernetes-dashboard
spec:
addons:
- version: 1.1.0
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.1.0.yaml
- version: 1.4.0
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.4.0.yaml
- version: 1.5.0
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.5.0.yaml
- version: 1.6.0
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.6.0.yaml
- version: 1.6.1
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.6.1.yaml
- version: 1.6.3
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.6.3.yaml
- version: 1.7.1
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.7.1.yaml
- version: 1.8.0
kubernetesVersion: ">=1.8.0"
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.8.0.yaml
- version: 1.8.1
kubernetesVersion: ">=1.8.0"
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.8.1.yaml
- version: 1.8.3
kubernetesVersion: ">=1.8.0"
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.8.3.yaml
- version: 1.10.1
kubernetesVersion: ">=1.10.0"
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v1.10.1.yaml
- version: 2.0.1
kubernetesVersion: ">=1.18.0"
selector:
k8s-addon: kubernetes-dashboard.addons.k8s.io
manifest: v2.0.1.yaml

@ -1,73 +0,0 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
app: kubernetes-dashboard
k8s-addon: kubernetes-dashboard.addons.k8s.io
version: v1.1.0
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: kubernetes-dashboard
template:
metadata:
labels:
app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.1.0
imagePullPolicy: Always
ports:
- name: http
containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 30
timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
labels:
app: kubernetes-dashboard
k8s-addon: kubernetes-dashboard.addons.k8s.io
# kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
selector:
app: kubernetes-dashboard

@ -1,167 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.10.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.10.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard

@ -1,62 +0,0 @@
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: kubernetes-dashboard-v1.4.0
namespace: kube-system
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.4.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090

@ -1,62 +0,0 @@
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.5.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.5.0
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.5.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
ports:
- port: 80
targetPort: 9090

@ -1,104 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
k8s-addon: kubernetes-dashboard.addons.k8s.io
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.6.0
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.6.0
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.6.0
ports:
- containerPort: 9090
protocol: TCP
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard

View File

@ -1,104 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
k8s-addon: kubernetes-dashboard.addons.k8s.io
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.6.1
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.6.1
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.6.1
ports:
- containerPort: 9090
protocol: TCP
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard

@ -1,103 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
k8s-addon: kubernetes-dashboard.addons.k8s.io
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.6.3
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
version: v1.6.3
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.6.3
ports:
- containerPort: 9090
protocol: TCP
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-addon: kubernetes-dashboard.addons.k8s.io
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard

@ -1,129 +0,0 @@
# Copyright 2017 The Kubernetes Dashboard Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.7.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create and watch for changes of 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "watch"]
- apiGroups: [""]
resources: ["secrets"]
# Allow Dashboard to get, update and delete 'kubernetes-dashboard-key-holder' secret.
resourceNames: ["kubernetes-dashboard-key-holder"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.7.0
ports:
- containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard

@ -1,129 +0,0 @@
# Copyright 2017 The Kubernetes Dashboard Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.7.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create and watch for changes of 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "watch"]
- apiGroups: [""]
resources: ["secrets"]
# Allow Dashboard to get, update and delete 'kubernetes-dashboard-key-holder' secret.
resourceNames: ["kubernetes-dashboard-key-holder"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.7.1
ports:
- containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 9090
selector:
k8s-app: kubernetes-dashboard

@ -1,163 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.8.0
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard

@ -1,167 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.8.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard

View File

@ -1,167 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-dashboard-minimal
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: registry.k8s.io/kubernetes-dashboard-amd64:v1.8.3
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard

View File

@ -1,302 +0,0 @@
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.0.1
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}

View File

@ -1,20 +0,0 @@
kind: Addons
metadata:
name: logging-elasticsearch
spec:
addons:
- version: 1.5.0
selector:
k8s-addon: logging-elasticsearch.addons.k8s.io
manifest: v1.5.0.yaml
kubernetesVersion: ">=1.5.0 <1.6.0" # We use statefulsets
- version: 1.6.0
selector:
k8s-addon: logging-elasticsearch.addons.k8s.io
manifest: v1.6.0.yaml
kubernetesVersion: ">=1.6.0" # RBAC v1beta1 is introduced as of v1.6.0
- version: 1.7.0
selector:
k8s-addon: logging-elasticsearch.addons.k8s.io
manifest: v1.7.0.yaml
kubernetesVersion: ">=1.6.0"

View File

@ -1,185 +0,0 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd-es
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v1.22
spec:
template:
metadata:
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v1.22
spec:
containers:
- name: fluentd-es
image: registry.k8s.io/fluentd-elasticsearch:1.22
command:
- '/bin/sh'
- '-c'
- '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
#nodeSelector:
# alpha.kubernetes.io/fluentd-ds-ready: "true"
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceName: elasticsearch-logging
replicas: 2
template:
metadata:
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: registry.k8s.io/elasticsearch:v2.4.1-2
name: elasticsearch-logging
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: es-persistent-storage
mountPath: /data
env:
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeClaimTemplates:
- metadata:
name: es-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "default"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana-logging
template:
metadata:
labels:
k8s-app: kibana-logging
spec:
containers:
- name: kibana-logging
image: registry.k8s.io/kibana:v4.6.1-1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
requests:
cpu: 100m
env:
- name: "ELASTICSEARCH_URL"
value: "http://elasticsearch-logging:9200"
- name: "KIBANA_BASE_URL"
value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
ports:
- containerPort: 5601
name: ui
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Kibana"
spec:
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging

View File

@ -1,280 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
rules:
- apiGroups:
- ""
resources:
- "services"
- "namespaces"
- "endpoints"
verbs:
- "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: kube-system
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
subjects:
- kind: ServiceAccount
name: elasticsearch-logging
namespace: kube-system
apiGroup: ""
roleRef:
kind: ClusterRole
name: elasticsearch-logging
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd-es
namespace: kube-system
labels:
k8s-app: fluentd-es
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
rules:
- apiGroups:
- ""
resources:
- "namespaces"
- "pods"
verbs:
- "get"
- "watch"
- "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
subjects:
- kind: ServiceAccount
name: fluentd-es
namespace: kube-system
apiGroup: ""
roleRef:
kind: ClusterRole
name: fluentd-es
apiGroup: ""
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd-es
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v1.22
spec:
template:
metadata:
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v1.22
spec:
serviceAccountName: fluentd-es
containers:
- name: fluentd-es
image: registry.k8s.io/fluentd-elasticsearch:1.22
command:
- '/bin/sh'
- '-c'
- '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
#nodeSelector:
# alpha.kubernetes.io/fluentd-ds-ready: "true"
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceName: elasticsearch-logging
replicas: 2
template:
metadata:
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: elasticsearch-logging
containers:
- image: registry.k8s.io/elasticsearch:v2.4.1-2
name: elasticsearch-logging
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: es-persistent-storage
mountPath: /data
env:
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeClaimTemplates:
- metadata:
name: es-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "default"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana-logging
template:
metadata:
labels:
k8s-app: kibana-logging
spec:
containers:
- name: kibana-logging
image: registry.k8s.io/kibana:v4.6.1-1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
requests:
cpu: 100m
env:
- name: "ELASTICSEARCH_URL"
value: "http://elasticsearch-logging:9200"
- name: "KIBANA_BASE_URL"
value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
ports:
- containerPort: 5601
name: ui
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Kibana"
spec:
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging

View File

@ -1,284 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
rules:
- apiGroups:
- ""
resources:
- "services"
- "namespaces"
- "endpoints"
verbs:
- "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: kube-system
name: elasticsearch-logging
labels:
k8s-app: elasticsearch-logging
subjects:
- kind: ServiceAccount
name: elasticsearch-logging
namespace: kube-system
apiGroup: ""
roleRef:
kind: ClusterRole
name: elasticsearch-logging
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd-es
namespace: kube-system
labels:
k8s-app: fluentd-es
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
rules:
- apiGroups:
- ""
resources:
- "namespaces"
- "pods"
verbs:
- "get"
- "watch"
- "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: fluentd-es
labels:
k8s-app: fluentd-es
subjects:
- kind: ServiceAccount
name: fluentd-es
namespace: kube-system
apiGroup: ""
roleRef:
kind: ClusterRole
name: fluentd-es
apiGroup: ""
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: fluentd-es
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v2.0.4
spec:
template:
metadata:
labels:
k8s-app: fluentd-es
kubernetes.io/cluster-service: "true"
version: v2.0.4
spec:
serviceAccountName: fluentd-es
containers:
- name: fluentd-es
image: registry.k8s.io/fluentd-elasticsearch:1.22
command:
- '/bin/sh'
- '-c'
- '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
#nodeSelector:
# alpha.kubernetes.io/fluentd-ds-ready: "true"
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceName: elasticsearch-logging
replicas: 2
template:
metadata:
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: elasticsearch-logging
containers:
- image: registry.k8s.io/elasticsearch:v5.6.4
name: elasticsearch-logging
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: es-persistent-storage
mountPath: /data
env:
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeClaimTemplates:
- metadata:
name: es-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "default"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana-logging
template:
metadata:
labels:
k8s-app: kibana-logging
spec:
containers:
- name: kibana-logging
image: docker.elastic.co/kibana/kibana:5.6.4
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
requests:
cpu: 100m
env:
- name: "ELASTICSEARCH_URL"
value: "http://elasticsearch-logging:9200"
- name: "SERVER_BASEPATH"
value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
- name: "XPACK_MONITORING_ENABLED"
value: "false"
- name: "XPACK_SECURITY_ENABLED"
value: "false"
ports:
- containerPort: 5601
name: ui
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-addon: logging-elasticsearch.addons.k8s.io
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Kibana"
spec:
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging

View File

@ -1,98 +0,0 @@
# Kubernetes Metrics Server
**This addon is deprecated. Set `spec.metricsServer.enabled: true` instead.**
## User guide
You can find the user guide in
[the official Kubernetes documentation](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/).
## Design
The detailed design of the project can be found in the following docs:
- [Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md)
- [Metrics Server](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/metrics-server.md)
For a broader view of monitoring in Kubernetes, take a look at
[Monitoring architecture](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/monitoring_architecture.md)
## Prerequisites
You must allow service account tokens to communicate with the kubelet. Edit your cluster configuration:
```console
$ kops edit cluster
```
Add the configuration below to your cluster spec:
```
kubelet:
anonymousAuth: false
authorizationMode: Webhook
authenticationTokenWebhook: true
```
Then update your cluster:
```console
$ kops update cluster --yes
$ kops rolling-update cluster --yes
```
## Deployment
Compatibility matrix:
Metrics Server | Metrics API group/version | Supported Kubernetes version
---------------|---------------------------|-----------------------------
0.3.x | `metrics.k8s.io/v1beta1` | 1.8+
To deploy metrics-server in your cluster, run the command matching your Kubernetes version:
```console
# Kubernetes >=1.8 and <1.16
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/metrics-server/v1.8.x.yaml
# Kubernetes 1.16+
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/metrics-server/v1.16.x.yaml
```
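Once the pods are running, you can sanity-check the Metrics API; `kubectl top` reads from `metrics.k8s.io`, so these commands only succeed once metrics-server is serving:
```console
$ kubectl top nodes
$ kubectl top pods --all-namespaces
```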
## Flags
Metrics Server supports all the standard Kubernetes API server flags, as
well as the standard Kubernetes `glog` logging flags. The most
commonly-used ones are:
- `--logtostderr`: log to standard error instead of files in the
container. You generally want this on.
- `--v=<X>`: set log verbosity. It's generally a good idea to run at log
  level 1 or 2 unless you're encountering errors. At log level 10, large
  amounts of diagnostic information will be reported, including API request
and response bodies, and raw metric results from Kubelet.
- `--secure-port=<port>`: set the secure port. If you're not running as
root, you'll want to set this to something other than the default (port
443).
- `--tls-cert-file`, `--tls-private-key-file`: the serving certificate and
key files. If not specified, self-signed certificates will be
generated, but it's recommended that you use non-self-signed
certificates in production.
Additionally, Metrics Server defines a number of flags for configuring its
behavior:
- `--metric-resolution=<duration>`: the interval at which metrics will be
scraped from Kubelets (defaults to 60s).
- `--kubelet-insecure-tls`: skip verifying Kubelet CA certificates. Not
recommended for production usage, but can be useful in test clusters
with self-signed Kubelet serving certificates.
- `--kubelet-port`: the port to use to connect to the Kubelet (defaults to
the default secure Kubelet port, 10250).
- `--kubelet-preferred-address-types`: the order in which to consider
different Kubelet node address types when connecting to Kubelet.
Functions similarly to the flag of the same name on the API server.
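As a hedged sketch (not part of the original README), these flags are set as arguments on the metrics-server container in its Deployment; the values below are illustrative, not recommendations:
```
containers:
- name: metrics-server
  image: registry.k8s.io/metrics-server-amd64:v0.3.6
  command:
  - /metrics-server
  - --logtostderr
  - --v=2
  - --metric-resolution=30s
  - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
```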

View File

@ -1,17 +0,0 @@
kind: Addons
metadata:
name: metrics-server
spec:
addons:
- version: 0.3.6
selector:
k8s-addon: metrics-server.addons.k8s.io
id: pre-k8s-1-16
kubernetesVersion: "<1.16.0"
manifest: v1.8.x.yaml
- version: 0.3.6
selector:
k8s-addon: metrics-server.addons.k8s.io
manifest: v1.16.x.yaml
id: k8s-1-16
kubernetesVersion: ">=1.16.0"

View File

@ -1,147 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- "apps"
resources:
- deployments
verbs:
- get
- list
- watch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: 443
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: registry.k8s.io/metrics-server-amd64:v0.3.6
imagePullPolicy: Always
command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
- --kubelet-insecure-tls
volumeMounts:
- name: tmp-dir
mountPath: /tmp

View File

@ -1,147 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- deployments
verbs:
- get
- list
- watch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:aggregated-metrics-reader
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.metrics.k8s.io
spec:
service:
name: metrics-server
namespace: kube-system
group: metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: v1
kind: Service
metadata:
name: metrics-server
namespace: kube-system
labels:
kubernetes.io/name: "Metrics-server"
spec:
selector:
k8s-app: metrics-server
ports:
- port: 443
protocol: TCP
targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: metrics-server
namespace: kube-system
labels:
k8s-app: metrics-server
spec:
selector:
matchLabels:
k8s-app: metrics-server
template:
metadata:
name: metrics-server
labels:
k8s-app: metrics-server
spec:
serviceAccountName: metrics-server
volumes:
# mount in tmp so we can safely use from-scratch images and/or read-only containers
- name: tmp-dir
emptyDir: {}
containers:
- name: metrics-server
image: registry.k8s.io/metrics-server-amd64:v0.3.6
imagePullPolicy: Always
command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
- --kubelet-insecure-tls
volumeMounts:
- name: tmp-dir
mountPath: /tmp

View File

@ -1,27 +0,0 @@
# Prometheus Operator Addon
[Prometheus Operator](https://coreos.com/operators/prometheus) creates, configures, and manages Prometheus clusters atop Kubernetes. This addon deploys prometheus-operator and [kube-prometheus](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/README.md) in a kOps cluster.
## Prerequisites
Version `>=0.18.0` of the Prometheus Operator requires a Kubernetes
cluster of version `>=1.8.0`. If you are just starting out with the
Prometheus Operator, it is highly recommended to use the latest version.
If you have an older version of Kubernetes and the Prometheus Operator running,
we recommend upgrading Kubernetes first and then the Prometheus Operator.
## Usage
### Deploy To Cluster
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/prometheus-operator/v0.26.0.yaml
```
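To verify the rollout, you can list the pods the operator creates; the `monitoring` namespace is assumed here from the kube-prometheus defaults:
```console
kubectl get pods -n monitoring
```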
### Updating the addon
Run the sync script below, passing the desired release tag:
```console
addons/prometheus-operator/sync-repo.sh "v0.26.0"
```

View File

@ -1,25 +0,0 @@
kind: Addons
metadata:
name: prometheus-operator
spec:
addons:
- version: 0.19.0
selector:
k8s-addon: prometheus-operator.addons.k8s.io
manifest: v0.19.0.yaml
kubernetesVersion: ">=1.8.0"
- version: 0.26.0
selector:
k8s-addon: prometheus-operator.addons.k8s.io
manifest: v0.26.0.yaml
kubernetesVersion: ">=1.8.0"
- version: 0.29.0
selector:
k8s-addon: prometheus-operator.addons.k8s.io
manifest: v0.29.0.yaml
kubernetesVersion: ">=1.8.0"
- version: 0.42.1
selector:
k8s-addon: prometheus-operator.addons.k8s.io
manifest: v0.42.1.yaml
kubernetesVersion: ">=1.8.0"

View File

@ -1,35 +0,0 @@
#!/bin/bash
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
P_OPERATOR_VERSION=${1:-"v0.26.0"}
P_OPERATOR_ADDON_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
cd "${P_OPERATOR_ADDON_DIR}"

# Fetch the requested release and start the combined manifest from the operator bundle.
git clone -b "${P_OPERATOR_VERSION}" --depth 1 https://github.com/coreos/prometheus-operator
cp prometheus-operator/bundle.yaml "${P_OPERATOR_VERSION}.yaml"

# Append each kube-prometheus manifest as a separate YAML document.
mkdir tmp
cp prometheus-operator/contrib/kube-prometheus/manifests/* tmp
for i in tmp/*
do
  echo "---" >> "${P_OPERATOR_VERSION}.yaml"
  cat "$i" >> "${P_OPERATOR_VERSION}.yaml"
done

rm -rf "${P_OPERATOR_ADDON_DIR}/prometheus-operator" "${P_OPERATOR_ADDON_DIR}/tmp/"
cd -

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@ -4,8 +4,7 @@ kOps incorporates management of some addons; we _have_ to manage some addons whi
the kubernetes API is functional.
In addition, kOps offers end-user management of addons via the `channels` tool (which is still experimental,
but we are working on making it a recommended part of kubernetes addon management). We ship some
curated addons in the [addons directory](https://github.com/kubernetes/kops/tree/master/addons), more information in the [addons document](/addons.md).
but we are working on making it a recommended part of kubernetes addon management). More information can be found in the [addons document](/addons.md).
kOps uses the `channels` tool for system addon management also. Because kOps uses the same tool
@ -34,9 +33,7 @@ If you want to update the bootstrap addons, you can run the following command to
The channels tool adds a manifest-of-manifests file, of `Kind: Addons`, which allows for a description
of the various manifest versions that are available. In this way kOps can manage updates
as new versions of the addon are released. For example,
the [dashboard addon](https://github.com/kubernetes/kops/blob/master/addons/kubernetes-dashboard/addon.yaml)
lists multiple versions.
as new versions of the addon are released.
For example, a typical addons declaration might look like this:
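(The declaration itself falls outside this hunk; what follows is a minimal sketch, modeled on the addon.yaml files removed in this commit, with an illustrative name and version.)
```
kind: Addons
metadata:
  name: example
spec:
  addons:
  - version: 1.6.0
    selector:
      k8s-addon: example.addons.k8s.io
    manifest: v1.6.0.yaml
    kubernetesVersion: ">=1.6.0"
```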

View File

@ -275,7 +275,7 @@ curl http://34.200.247.63
```
**NOTE:** If you are replicating this exercise in a production environment, use a "real" load balancer to expose your replicated services. We are only testing things here, but for a real production environment use either an AWS ELB service or an nginx ingress controller, as described in our documentation: [NGINX Based ingress controller](https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx).
**NOTE:** If you are replicating this exercise in a production environment, use a "real" load balancer to expose your replicated services. We are only testing things here, but for a real production environment use either an AWS ELB service or an nginx ingress controller.
Now, let's delete our recently-created deployment:

View File

@ -38,8 +38,7 @@ Specifically:
### Support For Multiple Metrics
To enable the resource metrics API for scaling on CPU and memory, install metrics-server
([installation instruction here][k8s-metrics-server]). The
To enable the resource metrics API for scaling on CPU and memory, enable metrics-server by setting `spec.metricsServer.enabled=true` in the Cluster spec. The
compatibility matrix is as follows:
Metrics Server | Metrics API group/version | Supported Kubernetes version
@ -54,5 +53,4 @@ Prometheus, checkout the [custom metrics adapter for Prometheus][k8s-prometheus-
[k8s-aggregation-layer]: https://kubernetes.io/docs/tasks/access-kubernetes-api/configure-aggregation-layer/
[k8s-extend-api]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/
[k8s-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[k8s-metrics-server]: https://github.com/kubernetes/kops/blob/master/addons/metrics-server/README.md
[k8s-prometheus-custom-metrics-adapter]: https://github.com/DirectXMan12/k8s-prometheus-adapter
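As a hedged sketch of the Cluster spec change described above (only `spec.metricsServer.enabled` is taken from the text; the surrounding structure is assumed):
```
spec:
  metricsServer:
    enabled: true
```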

View File

@ -32,7 +32,7 @@ This is a document to gather the release notes prior to the release.
## Other breaking changes
* TODO
* Legacy addons have been removed from the kOps repo. These were only referenced by kOps <1.22 ([17332](https://github.com/kubernetes/kops/pull/17332))
# Known Issues

View File

@ -1,5 +1,3 @@
./addons/cluster-autoscaler/cluster-autoscaler.sh
./addons/prometheus-operator/sync-repo.sh
./hack/dev-build.sh
./hooks/nvidia-bootstrap/image/run.sh
./hooks/nvidia-device-plugin/image/files/01-aws-nvidia-driver.sh