Re-add kubectl docs to kubectl staging

Kubernetes-commit: f33ac853423106fcc64a7aa9897a90349113602f
Sean Sullivan 2019-06-25 16:28:04 -07:00 committed by Kubernetes Publisher
parent feda2a6898
commit 7b766ac503
55 changed files with 14445 additions and 0 deletions

1
docs/book/.gitignore vendored Normal file

@ -0,0 +1 @@
node_modules

73
docs/book/CONTRIBUTING.md Normal file

@ -0,0 +1,73 @@
# Contributing
## Process
### Fixing Issues
1. Open an Issue
1. Create a PR
1. Email PR to sig-cli@googlegroups.com with subject `Kubectl Book: Fix Issue <Issue> in <PR>`
1. Optional: Come to sig-cli meeting to discuss
### Adding New Content
1. Open an Issue with proposed content
1. Email sig-cli@googlegroups.com with subject `Kubectl Book: Proposed Content <Issue>`
1. Optional: Come to sig-cli meeting to discuss
## Editing
### Running Locally
- Install [GitBook Toolchain](https://toolchain.gitbook.com/setup.html)
- From `docs/book` run `npm install` to install node_modules locally (don't run install, it updates the shrinkwrap.json)
- From `docs/book` run `gitbook serve`
- Go to `http://localhost:4000` in a browser
### Adding a Section
- Update `SUMMARY.md` with a new section formatted as `## Section Name`
### Adding a Chapter
- Update `SUMMARY.md` under section with chapter formatted as `* [Name of Chapter](pages/section_chapter.md)`
- Add file `pages/section_chapter.md`
### Adding Examples to a Chapter
```bash
{% method %}
Text Explaining Example
{% sample lang="yaml" %}
Formatted code
{% endmethod %}
```
### Adding Notes to a Chapter
```bash
{% panel style="info", title="Title of Note" %}
Note text
{% endpanel %}
```
Notes may have the following styles:
- success
- info
- warning
- danger
### Building and Publishing a release
- Run `gitbook build`
- Push files in `_book` to a server
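One possible publish flow, as a minimal sketch (assuming the site is hosted with Firebase Hosting per this repo's `firebase.json`, and the Firebase CLI is installed and authenticated):
```sh
# build the static site into _book/
gitbook build
# publish _book/ (the "public" directory configured in firebase.json)
firebase deploy --only hosting
```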
### Adding GitBook plugins
- Update `book.json` with the plugin
- Run `npm install <npm-plugin-name>`
### Cool plugins
See https://github.com/swapagarwal/awesome-gitbook-plugins for more plugins.

27
docs/book/Dockerfile Normal file

@ -0,0 +1,27 @@
# Copyright 2019 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM python:3.7
EXPOSE 4000
RUN curl -sL https://deb.nodesource.com/setup_11.x | bash
RUN apt-get update && apt-get install -y nodejs npm && apt-get clean;
RUN npm install gitbook-cli -g
WORKDIR /opt/book/
COPY . /opt/book/
RUN npm install
CMD ["gitbook", "serve"]
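As a usage sketch (the image tag `kubectl-book` is an arbitrary name chosen for illustration), the image can be built and run from `docs/book/`:
```sh
# build the image
docker build -t kubectl-book .
# serve the book on http://localhost:4000
docker run --rm -p 4000:4000 kubectl-book
```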

81
docs/book/README.md Normal file

@ -0,0 +1,81 @@
{% panel style="success", title="Feedback and Contributing" %}
**Provide feedback on new kubectl docs at the [survey](https://www.surveymonkey.com/r/JH35X82)**
See [CONTRIBUTING](https://github.com/kubernetes/kubectl/blob/master/docs/book/CONTRIBUTING.md) for
instructions on filing/fixing issues and adding new content.
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Kubectl is the Kubernetes CLI
- Kubectl provides a Swiss Army knife of functionality for working with Kubernetes clusters
- Kubectl may be used to deploy and manage applications on Kubernetes
- Kubectl may be used for scripting and building higher-level frameworks
{% endpanel %}
# Kubectl
Kubectl is the Kubernetes CLI version of a Swiss Army knife, and can do many things.
While this Book is focused on using Kubectl to declaratively manage Applications in Kubernetes, it
also covers other Kubectl functions.
## Command Families
Kubectl commands typically fall into one of a few categories:
| Type | Used For | Description |
|----------------------------------------|----------------------------|----------------------------------------------------|
| Declarative Resource Management | Deployment and Operations (e.g. GitOps) | Declaratively manage Kubernetes Workloads using Resource Config |
| Imperative Resource Management | Development Only | Run commands to manage Kubernetes Workloads using Command Line arguments and flags |
| Printing Workload State | Debugging | Print information about Workloads |
| Interacting with Containers | Debugging | Exec, Attach, Cp, Logs |
| Cluster Management | Cluster Ops | Drain and Cordon Nodes |
## Declarative Application Management
The preferred approach for managing Resources is through
declarative files called Resource Config used with the Kubectl *Apply* command.
This command reads a local (or remote) file structure and modifies cluster state to
reflect the declared intent.
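For example, a minimal sketch (the directory `./deploy/` is illustrative and assumed to contain Resource Config):
```sh
# apply all Resource Config under ./deploy/ to the cluster
kubectl apply -f ./deploy/
# or apply a directory containing a kustomization.yaml
kubectl apply -k ./deploy/
```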
{% panel style="info", title="Apply" %}
Apply is the preferred mechanism for managing Resources in a Kubernetes cluster.
{% endpanel %}
## Printing State about Workloads
Users will need to view Workload state. Representative commands are sketched below.
- Printing summarized state and information about Resources
- Printing complete state and information about Resources
- Printing specific fields from Resources
- Querying Resources matching labels
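A few representative commands, as a sketch (the Deployment name `nginx` and the label are illustrative):
```sh
# print summarized state
kubectl get deployments
# print complete state
kubectl get deployment nginx -o yaml
# print a specific field
kubectl get deployment nginx -o jsonpath='{.spec.replicas}'
# query Resources matching a label
kubectl get pods -l app=nginx
```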
## Debugging Workloads
Kubectl supports debugging with commands for the following (sketched below):
- Printing Container logs
- Printing cluster events
- Execing into or attaching to a Container
- Copying files from Containers in the cluster to a user's filesystem
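A few representative commands, as a sketch (the Pod name and file paths are illustrative):
```sh
# print logs from a Container in a Pod
kubectl logs nginx-76d6c9b8c-abcde
# print cluster events
kubectl get events
# exec a command in a running Container
kubectl exec -it nginx-76d6c9b8c-abcde -- /bin/sh
# copy a file from a Container to the local filesystem
kubectl cp nginx-76d6c9b8c-abcde:/var/log/app.log ./app.log
```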
## Cluster Management
On occasion, users may need to perform operations on the Nodes of a cluster. Kubectl supports
commands to drain Workloads from a Node so that it can be decommissioned or debugged (sketched below).
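For example, a sketch (the Node name `node-1` is illustrative):
```sh
# mark the Node unschedulable
kubectl cordon node-1
# evict Workloads from the Node before maintenance
kubectl drain node-1 --ignore-daemonsets
# mark the Node schedulable again when done
kubectl uncordon node-1
```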
## Porcelain
Users may find using Resource Config overly verbose for *Development* and prefer to work with
the cluster *imperatively* using a shell-like workflow. Kubectl offers porcelain commands for
generating and modifying Resources (sketched below).
- Generating + creating Resources such as Deployments, StatefulSets, Services, ConfigMaps, etc.
- Setting fields on Resources
- Editing (live) Resources in a text editor
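For example, a sketch (names and image tags are illustrative):
```sh
# generate and create a Deployment
kubectl create deployment nginx --image nginx
# set a field on a live Resource
kubectl set image deployment/nginx nginx=nginx:1.16.1
# edit a live Resource in a text editor
kubectl edit deployment/nginx
```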
{% panel style="danger", title="Porcelain For Dev Only" %}
Porcelain commands are time-saving for experimenting with workloads in a dev cluster, but shouldn't
be used for production.
{% endpanel %}

71
docs/book/SUMMARY.md Normal file

@ -0,0 +1,71 @@
# Resource Management With Kubectl
## Background Information
* [Getting Started with Kubectl](pages/kubectl_book/getting_started.md)
* [Resources + Controllers Overview](pages/kubectl_book/resources_and_controllers.md)
## App Management
* [Introduction](pages/app_management/introduction.md)
* [Apply](pages/app_management/apply.md)
* [Secrets and ConfigMaps](pages/app_management/secrets_and_configmaps.md)
* [Container Images](pages/app_management/container_images.md)
* [Namespaces and Names](pages/app_management/namespaces_and_names.md)
* [Labels and Annotations](pages/app_management/labels_and_annotations.md)
* [Field Merge Semantics](pages/app_management/field_merge_semantics.md)
## Resource Printing
* [Summaries](pages/resource_printing/summaries.md)
* [Raw](pages/resource_printing/raw.md)
* [Fields](pages/resource_printing/fields.md)
* [Describe](pages/resource_printing/describe.md)
* [Queries and Options](pages/resource_printing/queries_and_options.md)
* [Watch](pages/resource_printing/watch.md)
* [Cluster Information](pages/resource_printing/cluster_information.md)
## Container Debugging
* [Container Logs](pages/container_debugging/container_logs.md)
* [Copying Container Files](pages/container_debugging/copying_container_files.md)
* [Executing a Command in a Container](pages/container_debugging/executing_a_command_in_a_container.md)
* [Port Forward to Pods](pages/container_debugging/port_forward_to_pods.md)
* [Proxying Traffic to Services](pages/container_debugging/proxying_traffic_to_services.md)
## App Customization
* [Introduction](pages/app_customization/introduction.md)
* [Bases and Variations](pages/app_customization/bases_and_variants.md)
* [Customizing Pod Templates](pages/app_customization/customizing_pod_templates.md)
* [Customizing Arbitrary Fields](pages/app_customization/customizing_arbitrary_fields.md)
* [Config Reflection](pages/app_customization/config_reflection.md)
## App Structure
* [Introduction](pages/app_composition_and_deployment/structure_introduction.md)
* [Directory Layout](pages/app_composition_and_deployment/structure_directories.md)
* [Branches Layout](pages/app_composition_and_deployment/structure_branches.md)
* [Repository Layout](pages/app_composition_and_deployment/structure_repositories.md)
* [Shared Base Layout](pages/app_composition_and_deployment/structure_multi_tier_apps.md)
## App Deployment
* [Diffing Local and Remote State](pages/app_composition_and_deployment/diffing_local_and_remote_resources.md)
* [Accessing Multiple Clusters](pages/app_composition_and_deployment/accessing_multiple_clusters.md)
* [Publishing Config](pages/app_composition_and_deployment/publishing_bases.md)
## Reference
* [kustomization.yaml](pages/reference/kustomize.md)
## Examples
* [kustomization.yaml](pages/examples/kustomize.md)
## Miscellaneous Imperative Commands
* [Introduction](pages/imperative_porcelain/introduction.md)
* [Creating Resources](pages/imperative_porcelain/creating_resources.md)
* [Setting Fields](pages/imperative_porcelain/setting_fields.md)
* [Editing Workloads](pages/imperative_porcelain/editing_workloads.md)

31
docs/book/book.json Normal file

File diff suppressed because one or more lines are too long


@ -0,0 +1,34 @@
kind: Service
apiVersion: v1
metadata:
name: nginx
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 9376
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80

BIN
docs/book/favicon.ico Normal file

Binary file not shown (3.3 KiB).

16
docs/book/firebase.json Normal file

@ -0,0 +1,16 @@
{
"hosting": {
"public": "_book",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
],
"rewrites": [
{
"source": "**",
"destination": "/index.html"
}
]
}
}

6075
docs/book/npm-shrinkwrap.json generated Normal file

File diff suppressed because it is too large.

31
docs/book/package.json Normal file

@ -0,0 +1,31 @@
{
"name": "book",
"private": true,
"version": "1.0.0",
"description": "**Note:** Impatient readers head straight to [Quick Start](quick_start.md).",
"main": "index.js",
"dependencies": {
"cryptiles": "^4.1.2",
"gitbook-cli": "^2.3.2",
"gitbook-plugin-copy-code-button": "0.0.2",
"gitbook-plugin-custom-favicon": "0.0.4",
"gitbook-plugin-ga": "^1.0.1",
"gitbook-plugin-insert-logo": "^0.1.5",
"gitbook-plugin-mermaid": "0.0.9",
"gitbook-plugin-mermaid-gb3": "^2.1.0",
"gitbook-plugin-page-toc": "^1.1.0",
"gitbook-plugin-panel": "^0.0.1",
"gitbook-plugin-sequence-diagrams": "^1.1.0",
"gitbook-plugin-theme-api": "^1.1.2",
"gitbook-plugin-toc": "0.0.2",
"lodash": "4.17.11",
"phantomjs-prebuilt": "^2.1.16",
"underscore.string": "^3.3.5"
},
"devDependencies": {},
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "Apache-2.0"
}


@ -0,0 +1,142 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Target a cluster for a rollout with the `--context` flag
- Target a cluster for a rollout with the `--kubeconfig` flag
{% endpanel %}
# Multi-Cluster Targeting
## Motivation
It is common for users to need to deploy **different Variants of an Application to different clusters**.
This can be done by configuring the different Variants using different `kustomization.yaml`'s,
and targeting each variant using the `--context` or `--kubeconfig` flag.
**Note:** The examples shown in this chapter store the Resource Config in a directory
matching the name of the cluster (i.e. the name of the context used to refer to it).
## Targeting a Cluster via Context
The kubeconfig file allows multiple contexts to be specified, each with a different cluster + auth.
### List Contexts
{% method %}
List the contexts in the kubeconfig file
{% sample lang="yaml" %}
```sh
kubectl config get-contexts
```
```sh
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
us-central1-c us-central1-c us-central1-c
* us-east1-c us-east1-c us-east1-c
us-west2-c us-west2-c us-west2-c
```
{% endmethod %}
### Print a Context
{% method %}
Print information about the current context
{% sample lang="yaml" %}
```sh
kubectl config --kubeconfig=config-demo view --minify
```
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority: fake-ca-file
server: https://1.2.3.4
name: development
contexts:
- context:
cluster: development
namespace: frontend
user: developer
name: dev-frontend
current-context: dev-frontend
kind: Config
preferences: {}
users:
- name: developer
user:
client-certificate: fake-cert-file
client-key: fake-key-file
```
{% endmethod %}
### Specify a Context Flag
{% method %}
Specify the kubeconfig context as part of the command.
**Note:** In this example the `kustomization.yaml` exists in a directory whose name matches
the name of the context.
{% sample lang="yaml" %}
```sh
export CLUSTER=us-west2-c; kubectl apply -k ${CLUSTER} --context=${CLUSTER}
```
{% endmethod %}
### Switch to use a Context
{% method %}
Switch the current context before running the command.
**Note:** In this example the `kustomization.yaml` exists in a directory whose name matches
the name of the context.
{% sample lang="yaml" %}
```sh
# change the context to us-west2-c
kubectl config use-context us-west2-c
# deploy Resources from the ./us-west2-c/kustomization.yaml
kubectl apply -k ./us-west2-c
```
{% endmethod %}
## Targeting a Cluster via Kubeconfig
{% method %}
Alternatively, different kubeconfig files may be used for different clusters. The
kubeconfig may be specified with the `--kubeconfig` flag.
**Note:** In this example the `kustomization.yaml` exists in a directory whose name matches
the name of the directory containing the kubeconfig.
{% sample lang="yaml" %}
```sh
kubectl apply -k ./us-west2-c --kubeconfig /path/to/us-west2-c/config
```
{% endmethod %}
{% panel style="info", title="More Info" %}
For more information on configuring kubeconfig and contexts, see the
[Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
k8s.io document.
{% endpanel %}


@ -0,0 +1,44 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- View diff of changes before they are Applied to the cluster
{% endpanel %}
# Diffing Local and Cluster State
## Motivation
The ability to view what changes will be made before applying them to a cluster can be useful.
{% method %}
## Generating a Diff
Use the `diff` program in the user's path to display a diff of the changes that will be
made by Apply.
{% sample lang="yaml" %}
```sh
kubectl diff -k ./dir/
```
{% endmethod %}
{% method %}
## Setting the Diff Program
The `KUBECTL_EXTERNAL_DIFF` environment variable can be used to select your own diff command.
By default, the "diff" command available in your path will be run with "-u" (unified) and "-N"
(treat new files as empty) options.
{% sample lang="yaml" %}
```sh
export KUBECTL_EXTERNAL_DIFF=meld; kubectl diff -k ./dir/
```
{% endmethod %}


@ -0,0 +1,96 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="warning", title="Experimental" %}
**Content in this chapter is experimental and will evolve based on user feedback.**
Leave feedback on the conventions by creating an issue in the [kubectl](https://github.com/kubernetes/kubectl/issues)
GitHub repository.
Also provide feedback on new kubectl docs at the [survey](https://www.surveymonkey.com/r/JH35X82)
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Publish a White Box Application as a Base for other users to Kustomize
{% endpanel %}
# Publishing Bases
## Motivation
Users may want to run a common White Box Application without writing the Resource Config
for the Application from scratch. Instead, they may want to consume ready-made Resource
Config published specifically for the White Box Application, and add customizations for
their specific needs.
- Run a White Box Application (e.g. Cassandra, MongoDB) instance from ready-made Resource Config
- Publish Resource Config to run an Application
## Publishing a White Box Base
{% method %}
White Box Applications may be published to a URL and consumed as Bases in a `kustomization.yaml`. They
can then be consumed in the following manner.
**Use Case:** Run a White Box Application published to GitHub.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
# GitHub URL
- github.com/kubernetes-sigs/kustomize/examples/multibases/dev/?ref=v1.0.6
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Resource comes from the Remote Base
apiVersion: v1
kind: Pod
metadata:
labels:
app: myapp
name: dev-myapp-pod
spec:
containers:
- image: nginx:1.7.9
name: nginx
```
{% endmethod %}
## Customizing White Box Bases
The White Box Application may be customized using the same techniques described in
[Bases and Variations](../app_customization/bases_and_variants.md).
## Versioning White Box Bases
White Box Bases may be versioned using the well known versioning techniques provided by Git.
**Tag:**
Bases may be versioned by applying a tag to the repo and modifying the url to point to the tag:
`github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6`
**Branch:**
Bases may be versioned by creating a branch and modifying the url to point to the branch:
`github.com/Liujingfang1/kustomize/examples/helloWorld?ref=repoUrl2`
**Commit:**
If the White Box Base has not been explicitly versioned by the maintainer, users may pin the
base to a specific commit:
`github.com/Liujingfang1/kustomize/examples/helloWorld?ref=7050a45134e9848fca214ad7e7007e96e5042c03`
## Forking a White Box Base
Users may fork a White Box Base hosted on GitHub by forking the GitHub repo. This gives the user
complete control over changes to the Base. Users should periodically pull changes from the
upstream repo back into the fork to get bug fixes and optimizations.
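A git sketch of maintaining such a fork (the fork URL is illustrative; the upstream URL is the kustomize repo referenced above):
```sh
# clone the fork and track the upstream repo
git clone https://github.com/my-org/kustomize.git
cd kustomize
git remote add upstream https://github.com/kubernetes-sigs/kustomize.git
# periodically pull upstream changes back into the fork
git fetch upstream
git merge upstream/master
git push origin master
```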


@ -0,0 +1,275 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="warning", title="Experimental" %}
**Content in this chapter is experimental and will evolve based on user feedback.**
Leave feedback on the conventions by creating an issue in the [kubectl](https://github.com/kubernetes/kubectl/issues)
GitHub repository.
Also provide feedback on new kubectl docs at the [survey](https://www.surveymonkey.com/r/JH35X82)
{% endpanel %}
{% panel style="info", title="TL;DR" %}
Decouple changes to Config that are deployed to separate Environments.
{% endpanel %}
# Branch Structure Based Layout
## Motivation
**Why use branches?** Decouple changes that are rolled out with releases (e.g. new flags) from changes that are
rolled out in response to production events (e.g. resource tuning).
## Branch Structure
The convention shown here should be changed and adapted as needed.
| Branch Type Name | Deployed to a Cluster | Purpose | Example Config Change | Example Branch Name |
|----------------------------------------|----|-----------|--------|----|
| Base | **No**. Merged into other Branches only. | Changes that should be rolled out as part of a release. | Add *pubsub topic* flag | `master`, `release-1.14`, `i1026` |
| Deploy | **Yes**. Manually or Continuously. | Base + Changes required to respond to "production" events (or dev, staging, etc). | Increase *memory resources* - e.g. for crashing Containers | `deploy-test`, `deploy-staging`, `deploy-prod` |
Use with techniques described in [Directories](structure_directories.md) and [Branches](structure_branches.md)
## Workflow Example
### Diagram
#### Scenario
1. Live Prod App version is *v1*
1. *v2* changes committed to Base Branch Config
1. *v2* rolled out to Staging
- Deployed by continuous deployment
1. Live Prod App requires change to *v1* (unrelated to *v2*)
- Change memory resources in Prod
1. Prod Branch Config Updated at *v1*
- Deployed immediately by continuous deployment
1. *v2* changes rolled out separately
- Tag on Base Branch merged into Prod Branch
- Prod Branch continuously deployed
{% sequence width=1000 %}
participant Base Branch as BB
participant Staging Branch as SB
participant Staging Clusters as SC
participant Prod Branch as PB
participant Prod Clusters as PC
Note over SC: At v1 release
Note over PC: At v1 release
Note left of BB: Bob: App Dev
Note over BB: Bob Adds Flag
Note over BB: Bob Tags v2
Note over SB: Bob Releases v2
BB-->SB: Merge v2
SB-->SC: Deploy
Note over SC: At v2 release
Note over BB,PC: Prod Outage
Note left of PB: Alice: App SRE
Note over PB: Alice fixes Config
PB-->PC: Alice's changes (only)
Note over PC: At v1* release
Note over BB,PC: Prod Outage resolved
Note over PB: Alice Releases v2
BB-->PB: Merge v2
PB-->PC: Deploy v2
Note over PC: At v2 release
{% endsequence %}
### Description
**Note:** Starting version of Application is *v1*
1. Developer Bob introduces new app flag for release with *v2*
- e.g. PubSub topic name
1. Bob updates the Base Config with the new flag
- Add staging topic for Staging (e.g. `staging-topic`)
- Add prod topic for Prod (e.g. `prod-topic`)
- Flag should be rolled out with *v2* release
1. *v2* is cut
- Base tagged with *v2* tag
1. *v2* rolled out to Staging
- Merge *v2* Tag -> Staging Branch
- Deploy Staging Branch to Staging Clusters
1. SRE Alice identifies issue in Prod (at *v1*)
- Fix is to increase memory of containers
1. Alice updates the Prod branch Config by increasing memory resources
- Changes go directly into Prod Branch without going into Base
1. *v1* changes rolled out to Prod (*v1++*)
- Include Alice's changes, but not Bob's
1. *v2* rolled out to Prod
- Merge *v2* Tag -> Prod Branch
- Deploy Prod Branch to Prod Clusters
{% method %}
Techniques:
- Add new required flags and environment variables to the Resource Config in the Base branch at the
time they are added to the code.
- Will be rolled out when the code is rolled out.
- Adjust flags and configuration in the Resource Config in the deploy directory of the Deploy branch.
- Will be rolled out immediately independent of versions.
- Merge code from the Base branch to the Deploy branches to perform a Rollout.
## Directory and Branch Layout
Structure:
- Base branch (e.g. `master`, `app-version`, etc) for Config changes tied to releases.
- Looks like [Directories](structure_directories.md)
- Separate Deploy branches for separate Environments (e.g. `deploy-<env>`).
- A new **Directory in each branch will contain overlay customizations** - e.g. `deploy-<env>`.
{% sample lang="yaml" %}
**Base Branch:** `master`
```bash
tree
.
├── bases
│   ├── ...
├── prod
│   ├── bases
│   │   ├── ...
│   ├── us-central
│   │   ├── kustomization.yaml
│   │   └── backend
│   │   └── deployment-patch.yaml
│   ├── us-east
│   │   └── kustomization.yaml
│   └── us-west
│   └── kustomization.yaml
├── staging
│   ├── bases
│   │   ├── ...
│   └── us-west
│   └── kustomization.yaml
└── test
├── bases
│   ├── ...
└── us-west
└── kustomization.yaml
```
**Deploy Branches:**
Prod Branch: `deploy-prod`
```bash
tree
.
├── bases # From Base Branch
│   └── ...
└── deploy-prod # Prod deploy folder
│   ├── us-central
│   │   ├── kustomization.yaml # Uses bases: ["../../prod/us-central"]
│   ├── us-east
│   │   └── kustomization.yaml # Uses bases: ["../../prod/us-east"]
│ └── us-west
│ └── kustomization.yaml # Uses bases: ["../../prod/us-west"]
├── prod # From Base Branch
│   └── ...
├── staging # From Base Branch
│   └── ...
└── test # From Base Branch
└── ...
```
Staging Branch: `deploy-staging`
```bash
tree
.
├── bases # From Base Branch
│   ├── ...
├── deploy-staging # Staging deploy folder
│ └── us-west
│ └── kustomization.yaml # Uses bases: ["../../staging/us-west"]
├── prod # From Base Branch
│   └── ...
├── staging # From Base Branch
│   └── ...
└── test # From Base Branch
└── ...
```
Test Branch: `deploy-test`
```bash
tree
.
├── bases # From Base Branch
│   ├── ...
├──deploy-test # Test deploy folder
│ └── us-west
│ └── kustomization.yaml # Uses bases: ["../../test/us-west"]
├── prod # From Base Branch
│   └── ...
├── staging # From Base Branch
│   └── ...
└── test # From Base Branch
└── ...
```
{% endmethod %}
## Rollback Workflow Example
Summary of rollback workflow with Branches:
1. Live Prod App version is *v1*
1. Changes are introduced to Base Branch Config
- To be released with version *v2*
1. Release *v2* is cut to be rolled out
- Tag Base *v2* and build artifacts (e.g. images)
1. Changes are introduced into the Base Branch Config
- To be released with version *v3*
1. *v2* is pushed to Prod (eventually)
- *v2* Tag merged into Prod Branch
1. *v2* has issues in Prod and must be rolled back
- *v2* changes are rolled back in new commit to Prod Branch
1. Base Branch is unaffected
- Fix introduced in *v3*
**Note:** New changes committed to the Base for "v3" did not make the rollback from
"v2" -> "v1" more challenging, as they had not been merged into the Prod Branch.
### Diagram
{% sequence width=1000 %}
participant Base Branch as BB
participant Staging Branch as SB
participant Staging Clusters as SC
participant Prod Branch as PB
participant Prod Clusters as PC
Note over SC: At v1 release
Note over PC: At v1 release
Note left of BB: Bob: App Dev
Note over BB: Bob Adds Flag (for v2)
Note over BB: Bob Tags v2
Note over SB: Bob Releases v2
BB-->SB: Merge v2
SB-->SC: Deploy
Note over SC: At v2 release
Note over BB: Bob Adds another Flag (for v3)
Note over PB: Bob Releases v2
BB-->PB: Merge v2
PB-->PC: Deploy v2
Note over PC: At v2 release
Note over BB,PC: Unrelated Prod Outage
Note left of PB: Alice: App SRE
Note over PB: Alice rolls back v2 merge commit
PB-->PC: Deploy v1
Note over PC: At v1 release
Note over BB,PC: Prod Outage resolved
{% endsequence %}


@ -0,0 +1,203 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="warning", title="Experimental" %}
**Content in this chapter is experimental and will evolve based on user feedback.**
Leave feedback on the conventions by creating an issue in the [kubectl](https://github.com/kubernetes/kubectl/issues)
GitHub repository.
Also provide feedback on new kubectl docs at the [survey](https://www.surveymonkey.com/r/JH35X82)
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Use **directory hierarchy to structure Resource Config**
- Separate directories for separate Environment and Cluster [Config Variants](../app_customization/bases_and_variants.md)
{% endpanel %}
# Directory Structure Based Layout
## Motivation
{% panel style="success", title="Which is right for my organization?" %}
This chapter focuses on conventions for using Directories. Branches and
Repositories should be used together with Directories as needed.
{% endpanel %}
{% panel style="info", title="Config Repo or Mono Repo?" %}
The techniques and conventions in this Chapter work regardless of whether or not the Resource Config
exists in the same Repository as the source code that is being deployed.
{% endpanel %}
## Directory Structure
| Dir Type | Deployed to a Cluster | Contains | Example Names |
|----------------|----------------------------------|----------|---------------|
| Base | **No** - Used as base | Shared Config. | `base/` |
| Env | **No** - Contains other dirs | Base and Cluster dirs. | `test/`, `staging/`, `prod/` |
| Cluster | **Yes** - Manually or Continuously | Deployable Config. | `us-west1`, `us-east1`, `us-central1` |
### Bases
A Kustomize Base (e.g. `bases:`) provides shared Config that is customized by some consuming `kustomization.yaml`.
The directory structure outlined in this chapter organizes Bases into a hierarchy as:
`app-bases/environment-bases/cluster`
## Workflow Example
- Changes made to *env/cluster/* roll out to **only that specific env-cluster**
- Changes made to *env/bases/* roll out to **all clusters for that env**
- Changes made to *bases/* roll out to **all clusters in all envs**
## Diagram
```mermaid
graph TD;
B("bases/ ")---|base|P("prod/bases/ ");
B("bases/ ")---|base|S("staging/bases/ ");
B("bases/ ")---|base|T("test/bases/ ");
P("prod/bases/ ")---|base|PUW("prod/us-west/ ");
P("prod/bases/ ")---|base|PUE("prod/us-east/ ");
P("prod/bases/ ")---|base|PUC("prod/us-central/ ");
S("staging/bases/ ")---|base|SUW("staging/us-west/ ");
T("test/bases/ ")---|base|TUW("test/us-west/ ");
```
### Scenario
1. Alice modifies prod/us-west1 with change A
- Change gets pushed to prod us-west1 cluster by continuous deployment
1. Alice modifies prod/bases with change B
- Change gets pushed to all prod clusters by continuous deployment
1. Alice modifies bases with change C
- Change gets pushed to all clusters by continuous deployment
{% sequence width=1000 %}
participant Config in Git as B
participant Test Cluster as TC
participant Staging Cluster as SC
participant US West Prod Cluster as WC
participant US East Prod Cluster as EC
Note over B: Alice modifies prod/us-west1 with change A
B-->WC: A deployed
Note over B: Alice modifies prod/bases with change B
B-->EC: B deployed
B-->WC: B deployed
Note over B: Alice modifies bases/ with change C
B-->EC: C deployed
B-->TC: C deployed
B-->WC: C deployed
B-->SC: C deployed
{% endsequence %}
{% method %}
Techniques:
- Each Layer adds a [namePrefix](../app_management/namespaces_and_names.md#setting-a-name-prefix-or-suffix-for-all-resources) and [commonLabels](../app_management/labels_and_annotations.md#setting-labels-for-all-resources).
- Each Layer adds labels and annotations.
- Each deployable target sets a [namespace](../app_management/namespaces_and_names.md#setting-the-namespace-for-all-resources).
- Override [Pod Environment Variables and Arguments](../app_customization/customizing_pod_templates.md) using `configMapGenerator`s with `behavior: merge`.
- Perform Last-mile customizations with [patches / overlays](../app_customization/customizing_arbitrary_fields.md)
Structure:
- Put reusable bases under `*/bases/`
- `<project-name>/bases/`
- `<project-name>/<environment>/bases/`
- Put deployable targets under `<project-name>/<environment>/<cluster>/`
{% sample lang="yaml" %}
```bash
tree
.
├── bases # Used as a Base only
│   ├── kustomization.yaml
│   ├── backend
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   ├── frontend
│   │   ├── deployment.yaml
│   │   ├── ingress.yaml
│   │   └── service.yaml
│   └── storage
│   ├── service.yaml
│   └── statefulset.yaml
├── prod # Production
│   ├── bases
│   │   ├── kustomization.yaml # Uses bases: ["../../bases"]
│   │   ├── backend
│   │   │   └── deployment-patch.yaml # Production Env specific backend overrides
│   │   ├── frontend
│   │   │   └── deployment-patch.yaml # Production Env specific frontend overrides
│   │   └── storage
│   │   └── statefulset-patch.yaml # Production Env specific storage overrides
│   ├── us-central
│   │   ├── kustomization.yaml # Uses bases: ["../bases"]
│   │   └── backend
│   │   └── deployment-patch.yaml # us-central cluster specific backend overrides
│   ├── us-east
│   │   └── kustomization.yaml # Uses bases: ["../bases"]
│   └── us-west
│   └── kustomization.yaml # Uses bases: ["../bases"]
├── staging # Staging
│   ├── bases
│   │   ├── kustomization.yaml # Uses bases: ["../../bases"]
│   └── us-west
│   └── kustomization.yaml # Uses bases: ["../bases"]
└── test # Test
├── bases
│   ├── kustomization.yaml # Uses bases: ["../../bases"]
└── us-west
└── kustomization.yaml # Uses bases: ["../bases"]
```
{% endmethod %}
{% panel style="warning", title="Applying Environment + Cluster" %}
Though the directory structure contains the cluster in the path, this won't be used by
Apply to determine the cluster context. To Apply to a specific cluster, add that cluster to the
kubectl config, and specify the corresponding context when running Apply.
For more information see [Multi-Cluster](accessing_multiple_clusters.md).
{% endpanel %}
{% panel style="success", title="Code Owners" %}
Some git hosting services provide the concept of *Code Owners* for a finer-grained permissions model.
*Code Owners* may be used to provide separate permissions for separate environments - e.g. dev, test, prod.
{% endpanel %}
## Rollback Diagram
{% sequence width=1000 %}
participant Config in Git as B
participant Test Cluster as TC
participant Staging Cluster as SC
participant US West Prod Cluster as WC
participant US East Prod Cluster as EC
Note over B: Bob modifies bases/ with change B
B-->EC: B deployed
B-->SC: B deployed
B-->WC: B deployed
Note over B,EC: Prod Outage caused by B
B-->TC: B deployed
Note over B: Bob rolls back bases/ git commits to A
B-->WC: A deployed
B-->TC: A deployed
B-->EC: A deployed
Note over B,EC: Prod Outage resolved
B-->SC: A deployed
{% endsequence %}


@ -0,0 +1,43 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Resource Config is stored in one or more git repositories
- Directory hierarchy, git branches and git repositories may be used for loose coupling
{% endpanel %}
# Resource Config Structure
The chapters in this section cover how to structure Resource Config using git.
Users may start with a pure Directory Hierarchy approach, and later include Branches
and / or Repositories as part of the structure.
## Background
Terms:
- *Bases:* provide **common or shared Resource Config to be factored out** that can be
imported into multiple projects.
- *Overlays and Customizations:* tailor **common or shared Resource Config** to a specific
application, environment or purpose.
| Technique | Decouple Changes | Used For | Workflow |
|---------------------------------------------|-----------------------------|----------------------------------------------------|----------|
| [Directories](structure_directories.md) | NA | Foundational structure. | Changes are immediately propagated globally. |
| [Branches](structure_branches.md) | *Across Environments* | Promoting changes across Environments. | Changes are promoted across linear stages. |
| [Repositories](structure_repositories.md)   | *Across Teams*              | Fetching changes to Config shared across Teams.      | Changes are pulled by consumers (like upgrades). |
Concepts:
- Resource Config may be initially structured using only Directory Hierarchy for organization.
- Use Bases with Overlays / Customizations for factoring across Directories
- Different Deployment environments for the same app may be loosely coupled
- Use separate **Branches for separate environments**.
- Use Bases with Overlays / Customization for factoring across Branches
- Different Teams owning or sharing Config may be loosely coupled
- Use separate **Repositories for separate teams**.
- Use Bases with Overlays / Customization for factoring across Repositories


@ -0,0 +1,354 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="warning", title="Experimental" %}
**Content in this chapter is experimental and will evolve based on user feedback.**
Leave feedback on the conventions by creating an issue in the [kubectl](https://github.com/kubernetes/kubectl/issues)
GitHub repository.
Also provide feedback on new kubectl docs at the [survey](https://www.surveymonkey.com/r/JH35X82)
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- The same Base may be used multiple times for different Applications within the same project.
{% endpanel %}
# Composition with Shared Bases
## Motivation
Users may want to reuse the **same base multiple times within the same Apply Project**. Examples:
- Define a very generic base (e.g. "Java Application") used by multiple Applications within a Project.
- Define multiple Environments (e.g. Staging, Canary, Prod) within a Project.
## Composition With A Shared Base
```mermaid
graph TD;
B("B ")---|base|A1("A1 ");
B("B ")---|base|A2("A2 ");
A1("A1 ")---|base|C("A ");
A2("A2 ")---|base|C("A ");
```
{% method %}
It is possible to reuse the same base multiple times within the same project by using a 3-tier
structure to compose multiple Variants of the base.
1. Generic Base in a `kustomization.yaml`.
1. Variants of the Generic Base in multiple `kustomization.yaml`'s.
1. Compose Variants as Bases to a single `kustomization.yaml`.
Each layer may add customizations and resources to the preceding layers.
Generic Base Layer: **../base/java**
- define the java app base Deployment
- define the java app base Service
Variant Layers: **../app1/ + ../app2/**
- inherit the generic base
- set a namePrefix
- set labels and selectors
- overlay an image on the base
- set the image tag
Composition Layer: **kustomization.yaml**
- compose the 2 apps as bases
- set the namespace for Resources in the project
- set a namePrefix for Resources in the project
{% sample lang="yaml" %}
**Generic Base Layer:**
```yaml
# base/java/kustomization.yaml
resources:
- deployment.yaml
- service.yaml
```
```yaml
# base/java/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: java
name: java
spec:
selector:
matchLabels:
app: java
template:
metadata:
labels:
app: java
spec:
containers:
- image: java
name: java
ports:
- containerPort: 8010
livenessProbe:
httpGet:
path: /health
port: 8010
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ready
port: 8010
initialDelaySeconds: 30
timeoutSeconds: 1
```
```yaml
# base/java/service.yaml
kind: Service
apiVersion: v1
metadata:
name: java
spec:
selector:
app: app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
```
**Variant Layers 1 and 2:**
```yaml
# app1/kustomization.yaml
namePrefix: 1-
commonLabels:
app: app1
bases:
- ../base/java
patchesStrategicMerge:
- overlay.yaml
images:
- name: myapp1
newTag: v2
```
```yaml
# app1/overlay.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: java
spec:
template:
spec:
containers:
- image: myapp1
name: java
```
```yaml
# ../app2/kustomization.yaml
namePrefix: 2-
commonLabels:
app: app2
bases:
- ../base/java
patchesStrategicMerge:
- overlay.yaml
images:
- name: myapp2
newTag: v1
```
```yaml
# app2/overlay.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: java
spec:
template:
spec:
containers:
- image: myapp2
name: java
```
**Composition Layer:**
```yaml
# kustomization.yaml
namePrefix: app-
namespace: app
bases:
- app1
- app2
```
{% endmethod %}
{% method %}
**Result**:
- 2 Deployments are created
- Each Deployment has a different image
- Each Deployment has different labels / selectors
- Each Deployment has a different name
- 2 Services are created
- Each Service has different selectors, matching the corresponding Deployment
- All Resource names share the same prefix
- All Resources share the same namespace
**Summary**
- Most of the complexity lives in the shared common base
- Cross Team or Cross Org conventions can be canonized in the common base
- Variations of the Base are much simpler and can modify pieces bespoke to the Variation - e.g. images, args, etc
- Variations may be Composed to form a Project where project-wide conventions are applied
**Benefits**
- Reduced maintenance through propagating updates to Base downstream
- Reduced complexity in Variations through separation of concerns
{% sample lang="yaml" %}
**Applied:**
```yaml
apiVersion: v1
kind: Service
metadata:
# name has both app1 and project kustomization.yaml namePrefixes
name: app-1-java
# namespace updated by namespace in project kustomization.yaml
namespace: app
# labels updated by commonLabels in app1 kustomization.yaml
labels:
app: app1
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
# selector updated by commonLabels in app1 kustomization.yaml
selector:
app: app1
---
apiVersion: v1
kind: Service
metadata:
# name has both app2 and project kustomization.yaml namePrefixes
name: app-2-java
# namespace updated by namespace in project kustomization.yaml
namespace: app
# labels updated by commonLabels in app2 kustomization.yaml
labels:
app: app2
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
# selector updated by commonLabels in app2 kustomization.yaml
selector:
app: app2
---
apiVersion: apps/v1
kind: Deployment
metadata:
# namespace updated by namespace in project kustomization.yaml
namespace: app
# name has both app1 and project kustomization.yaml namePrefixes
name: app-1-java
# labels updated by commonLabels in app1 kustomization.yaml
labels:
app: app1
spec:
# selector updated by commonLabels in app1 kustomization.yaml
selector:
matchLabels:
app: app1
template:
metadata:
# labels updated by commonLabels in app1 kustomization.yaml
labels:
app: app1
spec:
containers:
# Image is updated by Overlay
# ImageTag is updated by images in app1 kustomization.yaml
- image: myapp1:v2
name: java
# ports and probes inherited from the base
ports:
- containerPort: 8010
livenessProbe:
httpGet:
path: /health
port: 8010
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ready
port: 8010
initialDelaySeconds: 30
timeoutSeconds: 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
# namespace updated by namespace in project kustomization.yaml
namespace: app
# name has both app2 and project kustomization.yaml namePrefixes
name: app-2-java
# labels updated by commonLabels in app2 kustomization.yaml
labels:
app: app2
spec:
# selector updated by commonLabels in app2 kustomization.yaml
selector:
matchLabels:
app: app2
template:
metadata:
# labels updated by commonLabels in app2 kustomization.yaml
labels:
app: app2
spec:
containers:
# Image is updated by Overlay
# ImageTag is updated by images in app2 kustomization.yaml
- image: myapp2:v1
name: java
# ports and probes inherited from the base
ports:
- containerPort: 8010
livenessProbe:
httpGet:
path: /health
port: 8010
initialDelaySeconds: 30
timeoutSeconds: 1
readinessProbe:
httpGet:
path: /ready
port: 8010
initialDelaySeconds: 30
timeoutSeconds: 1
```
{% endmethod %}
{% panel style="info", title="Use Cases" %}
- Defining Generic Per-Application Archetype Bases
- Composing multiple Projects pushed together into a meta-Project
{% endpanel %}


@ -0,0 +1,121 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="warning", title="Experimental" %}
**Content in this chapter is experimental and will evolve based on user feedback.**
Leave feedback on the conventions by creating an issue in the [kubectl](https://github.com/kubernetes/kubectl/issues)
GitHub repository.
Also provide feedback on new kubectl docs at the [survey](https://www.surveymonkey.com/r/JH35X82)
{% endpanel %}
{% panel style="info", title="TL;DR" %}
Decouple changes to Config owned by separate Teams.
{% endpanel %}
# Repository Structure Based Layout
## Motivation
- **Isolation between teams** managing separate Environments
- Permissions
- **Fine-grained control** over
- PRs
- Issues
- Projects
- Automation
## Directory Structure
### Resource Config
| Repo Type | Deployed to a Cluster | Contains | Example Names |
|-----------------|------------------------------------|----------|---------------|
| Base | **No** - Used as Base | Config shared with other teams. | `platform` |
| App | **Yes** - Manually or Continuously | Deployable Config. | `guest-book` |
Use with techniques described in [Directories](structure_directories.md) and [Branches](structure_branches.md)
## Workflow Example
1. Alice on the Java Platform team updates the Java Base used by other teams
1. Alice creates a Git Tag for the new release
1. Bob on the GuestBook App team switches to the new Java Base by updating the reference
## Diagram
### Scenario
1. Alice modifies java Base Repo and tags it v2
- Change doesn't get pushed anywhere yet
1. Bob modifies GuestBook App Repo to use v2 of the java Base
- Change gets pushed by continuous deployment
{% sequence width=1000 %}
participant Base Repo as BR
participant App Repo as AR
participant Cluster as C
Note left of BR: Alice: Platform Dev
Note over BR: Alice modifies java Base
Note over BR: Alice tags java Base v2
Note left of AR: Bob: App Dev
Note over AR: Uses java Base v1
BR-->AR: Bob updates to reference Base v2
Note over AR: Uses java Base v2
AR-->C: java Base v2 changes deployed
{% endsequence %}
{% method %}
{% sample lang="yaml" %}
Structure:
- Platform teams create Base Repositories for shared Config
- App teams create App Repositories for their Apps
- Remotely reference the Base Repository
**Base Repository:** Platform Team
```bash
tree
.
├── bases # Used as a Base only
│   ├── kustomization.yaml
│   └── ...
├── java # Java Bases
│   ├── kustomization.yaml # Uses bases: ["../bases"]
│   └── ...
└── python # Python Bases
```
**App Repository:** GuestBook Team
```bash
tree
.
├── bases # References Base Repositories
│   └── ...
├── prod
│   └── ...
├── staging
│   └── ...
└── test
└── ...
```
{% endmethod %}
{% panel style="info", title="Remote URLs vs Vendoring" %}
- Repositories owned and controlled by the same organization may be referenced by their URL
- Repositories owned or controlled by separate organizations should be vendored and referenced
by path to the vendor directory.
{% endpanel %}
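One possible vendoring approach, as a sketch (the Base Repository URL and vendor path are illustrative, and git submodules are only one way to vendor):
```sh
# vendor a Base Repository into the App Repository by path
git submodule add https://github.com/example-org/platform.git vendor/platform
# a kustomization.yaml may then reference the vendored Base by path, e.g. bases: ["./vendor/platform/java"]
```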


@ -0,0 +1,181 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/C855WZW)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Reuse Resource Config as Bases to `kustomization.yaml`'s.
- Customize Base for different Environments.
- Reuse a Base across multiple Projects.
{% endpanel %}
# Bases and Variations
## Motivation
It is common for users to deploy several **Variants of the same Resource Config**, or for different projects
to **reuse the same Resource Config**. The Resource Config produced by a `kustomization.yaml` can be
reused across multiple projects by using the `kustomization.yaml` as a *Base*.
Examples:
- a project may be deployed to **dev, test, staging, canary and production environments**,
but with variants between the environments.
- a project may be deployed to **different clusters** that are tuned differently or running
different versions of the project.
{% panel style="info", title="Reference" %}
- [bases](../reference/kustomize.md#bases)
{% endpanel %}
## Bases
Bases are shared Resource Config in a `kustomization.yaml` to be used and customized by another `kustomization.yaml`.
Examples of Bases:
- Common Java Base
- Used in multiple Apps (e.g. Guest Book, Calendar, Auth)
- Common Guest Book App Base
- Used in multiple Environments (e.g. Test, Staging, Prod)
- Common Prod Guest Book App Base
- Used in multiple clusters (e.g. us-west, us-east, us-canary)
## Referring to a Base
A project can add a Base by adding a path (relative to the `kustomization.yaml`) to **`bases` that
points to a directory** containing another `kustomization.yaml` file. This will automatically
add and kustomize all of the Resources from the base project to the current project.
Bases can be:
- Relative paths from the `kustomization.yaml` - e.g. `../base`
- URLs - e.g. `github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6`
### Diagrams
Single Base inherited by a single Application
```mermaid
graph TD;
B[B]-->A[A];
```
Shared Bases inherited by different Applications
```mermaid
graph TD;
B1[B1]-->A1[A1];
B2[B2]-->A1[A1];
B2[B2]-->A2[A2];
B3[B3]-->A2[A2];
```
{% method %}
**Example:** Add a `kustomization.yaml` as a Base.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
```
**Base: kustomization.yaml and Resource Config**
```yaml
# ../base/kustomization.yaml
configMapGenerator:
- name: my-java-server-env-vars
literals:
- JAVA_HOME=/opt/java/jdk
- JAVA_TOOL_OPTIONS=-agentlib:hprof
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /etc/config
name: config-volume
volumes:
- configMap:
name: my-java-server-env-vars
name: config-volume
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Unmodified Generated Base Resource
apiVersion: v1
kind: ConfigMap
metadata:
name: my-java-server-env-vars-k44mhd6h5f
data:
JAVA_HOME: /opt/java/jdk
JAVA_TOOL_OPTIONS: -agentlib:hprof
---
# Unmodified Config Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /etc/config
name: config-volume
volumes:
- configMap:
name: my-java-server-env-vars-k44mhd6h5f
name: config-volume
```
{% endmethod %}
{% panel style="info", title="Bases in Bases" %}
Bases themselves may also be Variants and have their own Bases. See [Advanced Composition](../app_composition_and_deployment/structure_multi_tier_apps.md)
for more information.
{% endpanel %}
```mermaid
graph TD;
B1[B1]-->B2[B2];
B2[B2]-->A[A];
```


@ -0,0 +1,149 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/C855WZW)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Inject the values of other Resource Config fields into Pod Env Vars and Command Args with `vars`.
{% endpanel %}
# Config Reflection
## Motivation
Applications running in Pods may need to know about Application context or configuration.
For example, a **Pod may take the name of a Service defined in the Project as a command argument**.
Instead of hard coding the value of the Service directly into the PodSpec, users can **reference
the Service value using a `vars` entry**. If the value is updated or transformed by the
`kustomization.yaml` file (e.g. by setting a `namePrefix`), the value will be propagated
to where it is referenced in the PodSpec.
{% panel style="info", title="Reference" %}
- [vars](../reference/kustomize.md#var)
{% endpanel %}
## Vars
The `vars` section contains variable references to Resource Config fields within the project. They require
the following to be defined:
- Resource Kind
- Resource Version
- Resource name
- Field path
{% method %}
**Example:** Set the Pod command argument to the value of a Service name.
Apply will resolve `$(BACKEND_SERVICE_NAME)` to a value using the object reference
specified in `vars`.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml, deployment.yaml and service.yaml files
```yaml
# kustomization.yaml
namePrefix: "test-"
vars:
# Name of the variable so it can be referenced
- name: BACKEND_SERVICE_NAME
# GVK of the object with the field
objref:
kind: Service
name: backend-service
apiVersion: v1
# Path to the field
fieldref:
fieldpath: metadata.name
resources:
- deployment.yaml
- service.yaml
```
```yaml
# service.yaml
kind: Service
apiVersion: v1
metadata:
# Value of the variable. This will be customized with
# a namePrefix, and change the Variable value.
name: backend-service
spec:
selector:
app: backend
ports:
- protocol: TCP
port: 80
targetPort: 9376
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: curl-deployment
labels:
app: curl
spec:
selector:
matchLabels:
app: curl
template:
metadata:
labels:
app: curl
spec:
containers:
- name: curl
image: ubuntu
# Reference the Service name field value as a variable
command: ["curl", "$(BACKEND_SERVICE_NAME)"]
```
**Applied:** The Resources that are Applied to the cluster
```yaml
apiVersion: v1
kind: Service
metadata:
name: test-backend-service
spec:
ports:
- port: 80
protocol: TCP
targetPort: 9376
selector:
app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-curl-deployment
labels:
app: curl
spec:
selector:
matchLabels:
app: curl
template:
metadata:
labels:
app: curl
spec:
containers:
- command:
- curl
# $(BACKEND_SERVICE_NAME) has been resolved to
# test-backend-service
- test-backend-service
image: ubuntu
name: curl
```
{% endmethod %}
{% panel style="warning", title="Referencing Variables" %}
Variables are intended only to inject Resource Config into Pods. They are
**not** intended as a general templating mechanism. Overriding values should be done with
patches instead of variables. See [Bases and Variations](bases_and_variants.md).
{% endpanel %}


@ -0,0 +1,235 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/C855WZW)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Customize arbitrary fields from arbitrary Resources in a Base.
{% endpanel %}
# Customizing Resource Fields
## Motivation
Users often need to **modify arbitrary fields** from a Base, such
as resource reservations for Pods, replicas on Deployments, etc. Overlays and patches can
be used by Variants to specify field values that will override the Base field values.
{% panel style="info", title="Reference" %}
- [patchesjson6902](../reference/kustomize.md#patchesjson6902)
- [patchesStrategicMerge](../reference/kustomize.md#patchesstrategicmerge)
{% endpanel %}
## Customizing Arbitrary Fields with Overlays
{% method %}
Arbitrary **fields may be added, changed, or deleted** by supplying *Overlays* against the
Resources provided by the Base. **Overlays are sparse Resource definitions** that
allow arbitrary customizations to be performed without requiring a base to expose
the customization as a template.
Overlays require the *Group, Version, Kind* and *Name* of the Resource to be specified, as
well as any fields that should be set on the base Resource. Overlays are applied using
*StrategicMergePatch*.
**Use Case:** Different Environments (test, dev, staging, canary, prod) require fields such as
replicas or resources to be overridden.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file and overlay
```yaml
# kustomization.yaml
bases:
- ../base
patchesStrategicMerge:
- overlay.yaml
```
```yaml
# overlay.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
# override replicas
replicas: 3
template:
spec:
containers:
- name: nginx
# override resources
resources:
limits:
cpu: "1"
requests:
cpu: "0.5"
```
**Base:**
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
resources:
limits:
cpu: "0.2"
requests:
cpu: "0.1"
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Overlayed Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
# replicas field has been added
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
# resources have been overridden
resources:
limits:
cpu: "1"
requests:
cpu: "0.5"
```
{% endmethod %}
{% panel style="info", title="Merge Semantics for Overlays" %}
Overlays use the same [merge semantics](../app_management/field_merge_semantics.md) as Applying Resource Config to cluster. One difference
is that there is no *Last Applied Resource Config* when merging overlays, so fields may only be deleted
if they are explicitly set to nil.
{% endpanel %}
## Customizing Arbitrary Fields with JsonPatch
{% method %}
Arbitrary fields may be added, changed, or deleted by supplying *JSON Patches* against the
Resources provided by the base.
**Use Case:** Different Environments (test, dev, staging, canary, prod) require fields such as
replicas or resources to be overridden.
JSON Patches are [RFC 6902](https://tools.ietf.org/html/rfc6902) patches that are applied
to resources. Patches require the *Group, Version, Kind* and *Name* of the Resource to be
specified in addition to the Patch. Patches offer a number of powerful imperative operations
for modifying the base Resources.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
patchesJson6902:
- target:
group: apps
version: v1
kind: Deployment
name: nginx-deployment
path: patch.yaml
```
```yaml
# patch.yaml
- op: add
path: /spec/replicas
value: 3
```
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Patched Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
# replicas field has been added
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}

View File

@ -0,0 +1,281 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/C855WZW)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Customize Base Resource Namespaces
- Customize Base Resource Names with Prefixes or Suffixes
- Customize Base Resource Labels or Annotations
{% endpanel %}
# Customizing Resource Metadata
## Motivation
It is common for users to customize the metadata of their Applications - including
the **names, namespaces, labels and annotations**.
Examples:
- Overriding the Namespace
- Overriding the Names of Resources by supplying a Prefix or Suffix
- Overriding Labels and Annotations
- Running **multiple instances of the same White-Box Base** using the above techniques
{% panel style="info", title="Reference" %}
- [namespace](../reference/kustomize.md#namespace)
- [namePrefix](../reference/kustomize.md#nameprefix)
- [nameSuffix](../reference/kustomize.md#namesuffix)
{% endpanel %}
## Customizing Resource Namespaces
{% method %}
**Use Case:**
- Change the Namespace for Resources from Base.
Customize the Namespace of all Resources in the Base by adding `namespace`.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
namespace: test
```
**Base:**
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
namespace: default
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Modified Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
  # Namespace has been changed to test
namespace: test
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}
## Customizing Resource Name Prefixes and Suffixes
{% method %}
**Use Case:**
- Run multiple instances of the same Base.
- Create naming conventions for different Environments (test, dev, staging, canary, prod).
Customize the Name of all Resources in the Base by adding `namePrefix` or `nameSuffix` in Variants.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
namePrefix: test-
```
**Base:**
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Modified Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
# Name has been prefixed with the environment
name: test-nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}
See [Namespaces and Names](../app_management/namespaces_and_names.md).
{% panel style="success", title="Chaining Name Prefixes" %}
Name Prefixes and Suffixes in Bases will be concatenated with Name Prefixes
and Suffixes specified in Variants - e.g. if a Base has a Name Prefix of `app-name-`
and the Variant has a Name Prefix of `test-`, the Applied Resources will have
a Name Prefix of `test-app-name-`, as shown in the sketch below.
{% endpanel %}
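A minimal sketch of this chaining (hypothetical layout):

```yaml
# base/kustomization.yaml
namePrefix: app-name-
resources:
- deployment.yaml
```

```yaml
# variant/kustomization.yaml
bases:
- ../base
namePrefix: test-
```

A Deployment named `nginx-deployment` in the Base would be Applied as `test-app-name-nginx-deployment`.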
## Customizing Resource Labels and Annotations
{% method %}
**Use Case:**
- Create Label or Annotation conventions for different Environments (test, dev, staging, canary, prod).
Customize the Labels and Annotations of all Resources in the Base by adding a
`commonLabels` or `commonAnnotations` in the variants.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
commonLabels:
app: test-nginx
environment: test
commonAnnotations:
oncallPager: 800-555-1212
```
**Base:**
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
base: label
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Modified Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
# labels have been overridden
labels:
app: test-nginx
environment: test
base: label
# annotations have been overridden
annotations:
oncallPager: 800-555-1212
name: nginx-deployment
spec:
selector:
matchLabels:
app: test-nginx
environment: test
base: label
template:
metadata:
labels:
app: test-nginx
environment: test
base: label
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}

View File

@ -0,0 +1,357 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/C855WZW)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Override Base Pod and PodTemplate Image **Names** and **Tags**
- Override Base Pod and PodTemplate Environment Variables and Arguments
{% endpanel %}
# Customizing Pods
## Motivation
It is common for users to customize their Applications for specific environments.
Simple customizations to Pod Templates may be made through **Images, Environment Variables and
Command Line Arguments**.
Common examples include:
- Running **different versions of an Image** for dev, test, canary, production
- Configuring **different Pod Environment Variables and Arguments** for dev, test, canary, production
{% panel style="info", title="Reference" %}
- [images](../reference/kustomize.md#images)
- [configMapGenerator](../reference/kustomize.md#configmapgenerator)
- [secretGenerator](../reference/kustomize.md#secretgenerator)
{% endpanel %}
## Customizing Images
{% method %}
**Use Case:** Different Environments (test, dev, staging, canary, prod) use images with different tags.
Override the name or tag for an `image` field from a [Pod Template](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#pod-templates)
in a base by specifying the `images` field in the `kustomization.yaml`.
| Field | Description | Example Field | Example Result |
|-----------|--------------------------------------------------------------------------|----------| --- |
| `name` | Match images with this image name| `name: nginx`| |
| `newTag` | Override the image **tag** or **digest** for images whose image name matches `name` | `newTag: new` | `nginx:old` -> `nginx:new` |
| `newName` | Override the image **name** for images whose image name matches `name` | `newName: nginx-special` | `nginx:old` -> `nginx-special:old` |
{% sample lang="yaml" %}
**Input:** The `kustomization.yaml` file
```yaml
# kustomization.yaml
bases:
- ../base
images:
- name: nginx-pod
newTag: 1.15
newName: nginx-pod-2
```
**Base:** Resources to be modified by the `kustomization.yaml`
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx-pod
```
**Applied:** The Resource that is Applied to the cluster
```yaml
# Modified Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
      # The image name and tag have been changed for the container
- name: nginx
image: nginx-pod-2:1.15
```
{% endmethod %}
{% panel style="info", title="Replacing Images" %}
`newName` allows an image name to be replaced with another arbitrary image name. e.g. you could
call your image `webserver` or `database` and replace it with `nginx` or `mysql`.
For more information on customizing images, see [Container Images](../app_management/container_images.md).
{% endpanel %}
## Customizing Pod Environment Variables
{% method %}
**Use Case:** Different Environments (test, dev, staging, canary, prod) are configured with
different Environment Variables.
Override Pod Environment Variables.
- Base uses ConfigMap data in Pods as Environment Variables
- Each Variant overrides or extends ConfigMap data
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
configMapGenerator:
- name: special-config
behavior: merge
literals:
- special.how=very # override the base value
- special.type=charm # add a value to the base
```
**Base: kustomization.yaml and Resources**
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: special-config
behavior: merge
literals:
- special.how=some # this value is overridden
- special.other=that # this value is added
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
envFrom:
- configMapRef:
name: special-config
```
**Applied:** The Resources that are Applied to the cluster
```yaml
# Generated Variant Resource
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config-82tc88cmcg
data:
special.how: very
special.type: charm
special.other: that
---
# Unmodified Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
envFrom:
# Container env will have the overridden ConfigMap values
- configMapRef:
name: special-config-82tc88cmcg
```
{% endmethod %}
See [ConfigMaps and Secrets](../app_management/secrets_and_configmaps.md).
## Customizing Pod Command Arguments
{% method %}
**Use Case:** Different Environments (test, dev, staging, canary, prod) provide different Command Line
Arguments to a Pod.
Override Pod Command Arguments.
- Base uses ConfigMap data in Pods as Command Arguments
- Each Variant defines different ConfigMap data
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
bases:
- ../base
configMapGenerator:
- name: special-config
behavior: merge
literals:
- SPECIAL_LEVEL=very
- SPECIAL_TYPE=charm
```
```yaml
# ../base/kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: special-config
literals:
- SPECIAL_LEVEL=override.me
- SPECIAL_TYPE=override.me
```
```yaml
# ../base/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh" ]
# Use the ConfigMap Environment Variables in the Command
args: ["-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: SPECIAL_LEVEL
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: SPECIAL_TYPE
```
**Applied:** The Resources that are Applied to the cluster
```yaml
# Generated Variant Resource
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config-82tc88cmcg
data:
SPECIAL_LEVEL: very
SPECIAL_TYPE: charm
---
# Unmodified Base Resource
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: k8s.gcr.io/busybox
name: test-container
command:
- /bin/sh
args:
- -c
# Container args will have the overridden ConfigMap values
- echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
key: SPECIAL_LEVEL
name: special-config-82tc88cmcg
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
key: SPECIAL_TYPE
name: special-config-82tc88cmcg
```
{% endmethod %}
{% panel style="info", title="More Info" %}
See [Secrets and ConfigMaps](../app_management/secrets_and_configmaps.md) for more information on ConfigMap and Secret generation.
{% endpanel %}

View File

@ -0,0 +1,16 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/C855WZW)**
{% endpanel %}
# Introduction
This section of the book covers how to build Projects and Applications with Config
shared across teams. Applying Kustomizations allows the authoring of Resource Config to
be a collaboration across teams.
Kustomizations facilitate:
- Developing Variations of Resource Config for multiple Environments with different configurations
- Developing Variations of Resource Config for multiple Clusters with different configurations
- Developing and Publishing Ready-Made Resource Config for users to extend and customize
- Composing Resource Config from multiple sources

View File

@ -0,0 +1,152 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Apply Creates and Updates Resources in a cluster through running `kubectl apply` on Resource Config.
- Apply manages complexity such as ordering of operations and merging user defined and cluster defined state.
{% endpanel %}
# Apply
## Motivation
Apply is a command that will update a Kubernetes cluster to match state defined locally in files.
```bash
kubectl apply
```
- Fully declarative - don't need to specify create or update - just manage files
- Merges user owned state (e.g. Service `selector`) with state owned by the cluster (e.g. Service `clusterIp`)
## Definitions
- **Resources**: *Objects* in a cluster - e.g. Deployments, Services, etc.
- **Resource Config**: *Files* declaring the desired state for Resources - e.g. deployment.yaml.
Resources are created and updated using Apply with these files.
*kubectl apply* Creates and Updates Resources through local or remote files. This may be through
either raw Resource Config or *kustomization.yaml*.
## Usage
{% method %}
Though Apply can be run directly against Resource Config files or directories using `-f`, it is recommended
to run Apply against a `kustomization.yaml` using `-k`. The `kustomization.yaml` allows users to define
configuration that cuts across many Resources (e.g. namespace).
{% sample lang="yaml" %}
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# list of Resource Config to be Applied
resources:
- deployment.yaml
# namespace to deploy all Resources to
namespace: default
# labels added to all Resources
commonLabels:
app: example
env: test
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
component: nginx
tier: frontend
spec:
selector:
matchLabels:
component: nginx
tier: frontend
template:
metadata:
labels:
component: nginx
tier: frontend
spec:
containers:
- name: nginx
image: nginx:1.15.4
```
{% endmethod %}
{% method %}
Users run Apply on directories containing `kustomization.yaml` files using `-k` or on raw
Resource Config files using `-f`.
{% sample lang="yaml" %}
```bash
# Apply the Resource Config
kubectl apply -k .
# View the Resources
kubectl get -k .
```
{% endmethod %}
{% panel style="info", title="Multi-Resource Configs" %}
A single Resource Config file may declare multiple Resources separated by `\n---\n`.
{% endpanel %}
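For example, a single file (hypothetical `app.yaml`) declaring two Resources:

```yaml
# app.yaml - a ConfigMap and a Service in one file
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  FOO: bar
---
apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: nginx
  ports:
  - port: 80
```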
## CRUD Operations
### Creating Resources
Any Resources that do not exist and are declared in Resource Config when Apply is run will be Created.
### Updating Resources
Any Resources that already exist and are declared in Resource Config when Apply is run may be Updated.
**Added Fields**
Any fields that have been added to the Resource Config will be set on the Resource.
**Updated Fields**
Any fields that contain different values for the fields specified locally in the Resource Config from what is
in the Resource will be updated by merging the Resource Config into the live Resource. See [merging](field_merge_semantics.md)
for more details.
**Deleted Fields**
Fields that were present in the Resource Config the last time Apply was run will be deleted from the Resource, and
will return to their default values (if any).
**Unmanaged Fields**
Fields that were not specified in the Resource Config but are set on the Resource will be left unmodified.
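A compact sketch of these four cases (values illustrative; assumes `paused` was present the last time Apply was run):

```yaml
# Resource Config (Local)
spec:
  replicas: 2          # updated - the live value differs
  minReadySeconds: 3   # added - not yet set on the Resource
  # paused has been removed from the Resource Config
```

```yaml
# Live Resource before Apply
spec:
  replicas: 1
  paused: true
  progressDeadlineSeconds: 600   # unmanaged - set by the apiserver
```

```yaml
# Live Resource after Apply
spec:
  replicas: 2
  minReadySeconds: 3
  progressDeadlineSeconds: 600   # left unmodified
  # paused has been deleted
```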
### Deleting Resources
Declarative deletion of Resources does not yet exist in a usable form, but is under development.
{% panel style="info", title="Continuously Applying The Hard Way" %}
In some cases, it may be useful to automatically Apply changes whenever the Resource Config changes.
This example uses the unix `watch` command to periodically invoke Apply against a target.
`watch -n 60 kubectl apply -k https://github.com/myorg/myrepo`
{% endpanel %}
## Resource Creation Ordering
Certain Resource Types may be dependent on other Resource Types being created first - e.g. Namespaced
Resources depend on their Namespaces, RoleBindings on their Roles, Custom Resources on their CRDs, etc.
When used with a `kustomization.yaml`, Apply sorts the Resources by Resource type to ensure Resources
with these dependencies are created in the correct order.
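For example, with a `kustomization.yaml` like the following sketch (hypothetical file names), Apply creates the Namespace before the namespaced Deployment, regardless of the order the files are listed in:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml   # namespaced Resource - created after the Namespace
- namespace.yaml    # created first
```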

View File

@ -0,0 +1,202 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Override or set the Name and Tag for Container Images
{% endpanel %}
# Container Images
## Motivation
It may be useful to define the tags or digests of container images which are used across many Workloads.
Container image tags and digests are used to refer to a specific version or instance of a container
image - e.g. for the `nginx` container image you might use the tag `1.15.9` or `1.14.2`.
- Update the container image name or tag for multiple Workloads at once
- Increase visibility of the versions of container images being used within
the project
- Set the image tag from external sources - such as environment variables
- Copy or Fork an existing Project and change the Image Tag for a container
- Change the registry used for an image
See [Bases and Variations](../app_customization/bases_and_variants.md) for more details on Copying Projects.
{% panel style="info", title="Reference" %}
- [images](../reference/kustomize.md#images)
{% endpanel %}
## images
It is possible to set image tags for container images through
the `kustomization.yaml` using the `images` field. When `images` are
specified, Apply will override the images whose image name matches `name` with a new
tag.
| Field | Description | Example Field | Example Result |
|-----------|--------------------------------------------------------------------------|----------| --- |
| `name` | Match images with this image name| `name: nginx`| |
| `newTag` | Override the image **tag** or **digest** for images whose image name matches `name` | `newTag: new` | `nginx:old` -> `nginx:new` |
| `newName` | Override the image **name** for images whose image name matches `name` | `newName: nginx-special` | `nginx:old` -> `nginx-special:old` |
{% method %}
**Example:** Use `images` in the `kustomization.yaml` to update the container
images in `deployment.yaml`
Apply will set the `nginx` image to have the tag `1.8.0` - e.g. `nginx:1.8.0` and
change the image name to `nginx-special`.
This will set the name and tag for *all* images matching the *name*.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: nginx # match images with this name
newTag: 1.8.0 # override the tag
newName: nginx-special # override the name
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
# The image has been changed
- image: nginx-special:1.8.0
name: nginx
```
{% endmethod %}
## Setting a Name
{% method %}
The name for an image may be set by specifying `newName` and the name of the old container image.
{% sample lang="yaml" %}
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: mycontainerregistry/myimage
newName: differentregistry/myimage
```
{% endmethod %}
## Setting a Tag
{% method %}
The tag for an image may be set by specifying `newTag` and the name of the container image.
{% sample lang="yaml" %}
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: mycontainerregistry/myimage
newTag: v1
```
{% endmethod %}
## Setting a Digest
{% method %}
The digest for an image may be set by specifying `digest` and the name of the container image.
{% sample lang="yaml" %}
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: alpine
digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
```
{% endmethod %}
## Setting a Tag from the latest commit SHA
{% method %}
A common CI/CD pattern is to tag container images with the git commit SHA of source code. e.g. if
the image name is `foo` and an image was built for the source code at commit `1bb359ccce344ca5d263cd257958ea035c978fd3`
then the container image would be `foo:1bb359ccce344ca5d263cd257958ea035c978fd3`.
A simple way to push an image that was just built without manually updating the image tags is to
download the [kustomize standalone](https://github.com/kubernetes-sigs/kustomize/) tool and run
the `kustomize edit set image` command to update the tags for you.
**Example:** Set the latest git commit SHA as the image tag for `foo` images.
{% sample lang="yaml" %}
```bash
kustomize edit set image foo:$(git log -n 1 --pretty=format:"%H")
kubectl apply -k .
```
{% endmethod %}
## Setting a Tag from an Environment Variable
{% method %}
It is also possible to set a Tag from an environment variable using the same technique for setting from a commit SHA.
**Example:** Set the tag for the `foo` image to the value in the environment variable `FOO_IMAGE_TAG`.
{% sample lang="yaml" %}
```bash
kustomize edit set image foo:$FOO_IMAGE_TAG
kubectl apply -k .
```
{% endmethod %}
{% panel style="info", title="Committing Image Tag Updates" %}
The `kustomization.yaml` changes *may* be committed back to git so that they
can be audited. When committing the image tag updates that have already
been pushed by a CI/CD system, be careful not to trigger new builds +
deployments for these changes.
{% endpanel %}

View File

@ -0,0 +1,481 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Fields set and deleted from Resource Config are merged into Resources by Apply
- If a Resource already exists, Apply updates the Resources by merging the local Resource Config into the remote Resources
- Fields removed from the Resource Config will be deleted from the remote Resource
{% endpanel %}
# Merging Fields
{% panel style="warning", title="Advanced Section" %}
This chapter contains advanced material that readers may want to skip and come back to later.
{% endpanel %}
## When are fields merged?
This page describes how Resource Config is merged with Resources or other Resource Config. This
may occur when:
- Applying Resource Config updates to the live Resources in the cluster
- Defining Patches in the `kustomization.yaml` which are overlayed on `resources` and [bases](../app_customization/bases_and_variants.md)
### Applying Resource Config Updates
Rather than replacing the Resource with the new Resource Config, **Apply will merge the new Resource Config
into the live Resource**. This retains values which may be set by the control plane - such as `replicas` values
set by autoscalers.
### Defining Patches
`patches` are sparse Resource Config which **contain a subset of fields that override values
defined in other Resource Config** with the same Group/Version/Kind/Namespace/Name.
This is used to alter values defined on Resource Config without having to fork it.
## Motivation (Apply)
This page describes the semantics for merging Resource Config.
Ownership of Resource fields is shared between declarative Resource Config authored by human
users, and values set by Controllers running in the cluster. Some fields, such as the `status`
and `clusterIp` fields, are owned exclusively by Controllers. Fields, such as the `name`
and `namespace` fields, are owned exclusively by the human user managing the Resource.
Other fields, such as `replicas`, may be owned by either human users, the apiserver or
Controllers. For example, `replicas` may be explicitly set by a user, implicitly set
to a default value by the apiserver, or continuously adjusted by a Controller such as
a HorizontalPodAutoscaler.
{% method %}
### Last Applied Resource Config
When Apply creates or updates a Resource, it writes the Resource Config it Applied to an annotation on the
Resource. This allows it to compare the last Resource Config it Applied to the current Resource
Config and identify fields that have been deleted.
{% sample lang="yaml" %}
```yaml
# deployment.yaml (Resource Config)
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
```
```yaml
# Original Resource
Doesn't Exist
```
```yaml
# Applied Resource
kind: Deployment
metadata:
annotations:
# ...
# This is the deployment.yaml Resource Config written as an annotation on the object
# It was written by kubectl apply when the object was created
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment",
"metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},
"spec":{"selector":{"matchLabels":{"app":nginx}},"template":{"metadata":{"labels":{"app":"nginx"}},
"spec":{"containers":[{"image":"nginx:1.7.9","name":"nginx"}]}}}}
# ...
spec:
# ...
status:
# ...
```
{% endmethod %}
## Merging Resources
Following are the merge semantics for Resources:
{% method %}
**Adding Fields:**
- Fields present in the Resource Config that are missing from the Resource will be added to the
Resource.
- Fields will be added to the Last Applied Resource Config
{% sample lang="yaml" %}
```yaml
# deployment.yaml (Resource Config)
apiVersion: apps/v1
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
minReadySeconds: 3
```
```yaml
# Original Resource
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
status:
# ...
```
```yaml
# Applied Resource
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
minReadySeconds: 3
status:
# ...
```
{% endmethod %}
{% method %}
**Updating Fields**
- Fields present in the Resource Config that are also present in the Resource will be merged recursively
until a primitive field is updated, or a field is added / deleted.
- Fields will be updated in the Last Applied Resource Config
{% sample lang="yaml" %}
```yaml
# deployment.yaml (Resource Config)
apiVersion: apps/v1
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
replicas: 2
```
```yaml
# Original Resource
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
# could be defaulted or set by Resource Config
replicas: 1
status:
# ...
```
```yaml
# Applied Resource
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
# updated
replicas: 2
status:
# ...
```
{% endmethod %}
{% method %}
**Deleting Fields**
- Fields present in the **Last Applied Resource Config** that have been removed from the Resource Config
will be deleted from the Resource.
- Fields set to *null* in the Resource Config that are present in the Resource will be deleted from the
  Resource.
- Fields will be removed from the Last Applied Resource Config
{% sample lang="yaml" %}
```yaml
# deployment.yaml (Resource Config)
apiVersion: apps/v1
kind: Deployment
metadata:
# ...
name: nginx-deployment
spec:
# ...
```
```yaml
# Original Resource
kind: Deployment
metadata:
# ...
name: nginx-deployment
  # Contains replicas and minReadySeconds
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment", "spec":{"replicas": "2", "minReadySeconds": "3", ...}, "metadata": {...}}
spec:
# ...
minReadySeconds: 3
replicas: 2
status:
# ...
```
```yaml
# Applied Resource
kind: Deployment
metadata:
# ...
name: nginx-deployment
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment", "spec":{...}, "metadata": {...}}
spec:
# ...
# deleted and then defaulted, but not in Last Applied
replicas: 1
# minReadySeconds deleted
status:
# ...
```
{% endmethod %}
{% panel style="danger", title="Removing Fields from Resource Config" %}
Simply removing a field from the Resource Config will *not* transfer the ownership to the cluster.
Instead it will delete the field from the Resource. If a field is set in the Resource Config and
the user wants to give up ownership (e.g. removing `replicas` from the Resource Config and using
an autoscaler), the user must first remove it from the Last Applied Resource Config stored by the
cluster.
This can be performed using `kubectl apply edit-last-applied` to delete the `replicas` field from
the **Last Applied Resource Config**, and then deleting it from the **Resource Config.**
{% endpanel %}
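As a sketch, using the Deployment from the examples above:

```bash
# Opens an editor on the Last Applied Resource Config - delete the replicas field there
kubectl apply edit-last-applied deployment/nginx-deployment

# Then also remove replicas from deployment.yaml before the next Apply
```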
## Field Merge Semantics
### Merging Primitives
Primitive fields are merged by replacing the current value with the new value.
**Field Creation:** Add the primitive field
**Field Update:** Change the primitive field value
**Field Deletion:** Delete the primitive field
| Field in Resource Config | Field in Resource | Field in Last Applied | Action |
|---------------------------|-------------------|-----------------------|-----------------------------------------|
| Yes | Yes | - | Set live to the Resource Config value. |
| Yes | No | - | Set live to the Resource Config value. |
| No | - | Yes | Remove from Resource. |
| No | - | No | Do nothing. |
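An illustrative example of merging a primitive field (values hypothetical):

```yaml
# Resource Config (Local)
minReadySeconds: 5
```

```yaml
# Resource (Live)
minReadySeconds: 3
```

```yaml
# Applied Resource - primitive set to the Resource Config value
minReadySeconds: 5
```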
### Merging Objects
Objects fields are updated by merging the sub-fields recursively (by field name) until a primitive field is found or
the field is added / deleted.
**Field Creation:** Add the object field
**Field Update:** Recursively compare object sub-field values and merge them
**Field Deletion:** Delete the object field
**Merge Table:** For each field merge Resource Config and Resource values with the same name
| Field in Resource Config | Field in Resource | Field in Last Applied | Action |
|---------------------------|-------------------|-----------------------|-------------------------------------------|
| Yes | Yes | - | Recursively merge the Resource Config and Resource values. |
| Yes | No | - | Set live to the Resource Config value. |
| No | - | Yes | Remove field from Resource. |
| No | - | No | Do nothing. |
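An illustrative example of merging an object field (a Deployment `strategy`, values hypothetical):

```yaml
# Resource Config (Local)
strategy:
  type: RollingUpdate
```

```yaml
# Resource (Live)
strategy:
  rollingUpdate:
    maxSurge: 1
```

```yaml
# Applied Resource - sub-fields merged recursively
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
```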
### Merging Maps
Map fields are updated by merging the elements (by key) until a primitive field is found or the value is
added / deleted.
**Field Creation:** Add the map field
**Field Update:** Recursively compare map values by key and merge them
**Field Deletion:** Delete the map field
**Merge Table:** For each map element merge Resource Config and Resource values with the same key
| Key in Resource Config | Key in Resource | Key in Last Applied | Action |
|---------------------------|-------------------|-----------------------|-------------------------------------------|
| Yes | Yes | - | Recursively merge the Resource Config and Resource values. |
| Yes | No | - | Set live to the Resource Config value. |
| No | - | Yes | Remove map element from Resource. |
| No | - | No | Do nothing. |
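An illustrative example of merging a map field (`metadata.labels`, values hypothetical):

```yaml
# Resource Config (Local)
labels:
  app: nginx
  tier: frontend
```

```yaml
# Resource (Live)
labels:
  app: nginx-old
  owner: team-a
```

```yaml
# Applied Resource - elements merged by key
labels:
  app: nginx        # updated
  tier: frontend    # added
  owner: team-a     # retained - not in the Resource Config or Last Applied
```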
### Merging Lists of Primitives
Lists of primitives will be merged if they have a `patch strategy: merge` on the field otherwise they will
be replaced. [Finalizer list example](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#objectmeta-v1-meta)
**Merge Strategy:**
- Merged primitive lists behave like ordered sets
- Primitive lists without the *merge* patch strategy are replaced when merged
**Ordering:** Uses the ordering specified in the Resource Config. Elements not specified in the Resource Config
do not have ordering guarantees with respect to the elements in the Resource Config.
**Merge Table:** For each list element merge Resource Config and Resource element with the same value
| Element in Resource Config | Element in Resource | Element in Last Applied | Action |
|---------------------------|-------------------|-----------------------|-----------------------------------------|
| Yes | Yes | - | Do nothing |
| Yes | No | - | Add to list. |
| No | - | Yes | Remove from list. |
| No | - | No | Do nothing. |
{% method %}
When a primitive list is merged, the primitive values themselves are used to identify list elements.
The `patch strategy` is defined in the [Kubernetes API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#objectmeta-v1-meta)
on the field.
{% sample lang="yaml" %}
```yaml
# Last Applied
args: ["a", "b"]
```
```yaml
# Resource Config (Local)
args: ["a", "c"]
```
```yaml
# Resource (Live)
args: ["a", "b", "d"]
```
```yaml
# Applied Resource
args: ["a", "c", "d"]
```
{% endmethod %}
### Merging Lists of Objects
**Merge Strategy:** Lists of objects may be merged or replaced. Lists are merged if the list has a `patch strategy` of *merge*
and a `patch merge key` on the list field. [Container list example](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#podspec-v1-core).
**Merge Key:** The `patch merge key` is used to identify same elements in a list. Unlike map elements (keyed by key) and object fields
(keyed by field name), lists don't have a built-in merge identity for elements (index does not define identity).
Instead, an object field is used as a synthetic *key/value* for merging elements. This field is the
`patch merge key`. List elements with the same patch merge key will be merged when lists are merged.
**Ordering:** Uses the ordering specified in the Resource Config. Elements not specified in the Resource Config
do not have ordering guarantees.
**Merge Table:** For each list element merge Resource Config and Resource element where the elements have the same
value for the `patch merge key`
| Element in Resource Config | Element in Resource | Element in Last Applied | Action |
|---------------------------|-------------------|-----------------------|-----------------------------------------|
| Yes | Yes | - | Recursively merge the Resource Config and Resource values. |
| Yes | No | - | Add to list. |
| No | - | Yes | Remove from list. |
| No | - | No | Do nothing. |
{% method %}
This merge strategy uses the patch merge key to identify container elements in a list and merge them.
The `patch merge key` is defined in the [Kubernetes API](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#podspec-v1-core)
on the field.
{% sample lang="yaml" %}
```yaml
# Last Applied Resource Config
containers:
- name: nginx # key: nginx
image: nginx:1.10
- name: nginx-helper-a # key: nginx-helper-a; will be deleted in result
image: helper:1.3
- name: nginx-helper-b # key: nginx-helper-b; will be retained
image: helper:1.3
```
```yaml
# Resource Config (Local)
containers:
- name: nginx
image: nginx:1.10
- name: nginx-helper-b
image: helper:1.3
- name: nginx-helper-c # key: nginx-helper-c; will be added in result
image: helper:1.3
```
```yaml
# Resource (Live)
containers:
- name: nginx
image: nginx:1.10
- name: nginx-helper-a
image: helper:1.3
- name: nginx-helper-b
image: helper:1.3
args: ["run"] # Field will be retained
- name: nginx-helper-d # key: nginx-helper-d; will be retained
image: helper:1.3
```
```yaml
# Applied Resource
containers:
- name: nginx
image: nginx:1.10
# Element nginx-helper-a was Deleted
- name: nginx-helper-b
image: helper:1.3
# Field was Ignored
args: ["run"]
# Element was Added
- name: nginx-helper-c
image: helper:1.3
# Element was Ignored
- name: nginx-helper-d
image: helper:1.3
```
{% endmethod %}
{% panel style="info", title="Edit and Set" %}
While `kubectl edit` and `kubectl set` ignore the Last Applied Resource Config, Apply will
overwrite any values set by `kubectl edit` or `kubectl set` with the values from the Resource Config.
To ignore values set by `kubectl edit` or `kubectl set`:
- Use `kubectl apply edit-last-applied` to remove the value from the Last Applied (if it is present)
- Remove the field from the Resource Config
This is the same technique for retaining values set by cluster components such as autoscalers.
{% endpanel %}

View File

@ -0,0 +1,48 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Apply manages Applications through files defining Kubernetes Resources (i.e. Resource Config)
- Kustomize is used to author Resource Config
{% endpanel %}
# Declarative Application Management
This section covers how to declaratively manage Workloads and Applications.
Workloads in a cluster may be configured through files called *Resource Config*. These files are
typically checked into source control, and allow cluster state changes to be reviewed and audited
before they are applied.
There are 2 components to Application Management.
## Client Component
The client component consists of authoring Resource Config which defines the desired state
of an Application. This may be done as a collection of raw Resource Config files, or by
composing and overlaying Resource Config authored by separate teams
(using the `-k` flag with a `kustomization.yaml`).
Kustomize offers low-level tooling for simplifying the authoring of Resource Config. It provides:
- **Generating Resource Config** from other canonical sources - e.g. ConfigMaps, Secrets
- **Reusing and Composing one or more collections of Resource Config**
- **Customizing Resource Config**
- **Setting cross-cutting fields** - e.g. namespace, labels, annotations, name-prefixes, etc
**Example:** One user may define a Base for an application, while another user may customize
a specific instance of the Base.
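A minimal sketch of this split (hypothetical paths):

```yaml
# base/kustomization.yaml - Base authored by one team
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
```

```yaml
# instance/kustomization.yaml - a customized instance of the Base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
namePrefix: test-
```

```bash
# Apply the customized instance
kubectl apply -k instance/
```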
## Server Component
The server component consists of a human applying the authored Resource Config to the cluster
to create or update Resources. Once Applied, the Kubernetes cluster will set additional desired
state on the Resource - e.g. *defaulting unspecified fields, filling in IP addresses, autoscaling
replica count, etc.*
Note that the process of Application Management is a collaborative one between users and the
Kubernetes system itself - where each may contribute to defining the desired state.
**Example**: An Autoscaler Controller in the cluster may set the scale field on a Deployment managed by a user.

View File

@ -0,0 +1,197 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Set Labels for all Resources declared within a Project with `commonLabels`
- Set Annotations for all Resources declared within a Project with `commonAnnotations`
{% endpanel %}
# Setting Labels and Annotations
## Motivation
Users may want to define a common set of labels or annotations for all the Resources in a project.
- Identify the Resources within a project by querying their labels.
- Set metadata for all Resources within a project (e.g. environment=test).
- Copy or Fork an existing Project and add or change labels and annotations.
See [Bases and Variations](../app_customization/bases_and_variants.md) for more details on Copying Projects.
{% panel style="info", title="Reference" %}
- [commonLabels](../reference/kustomize.md#commonlabels)
- [commonAnnotations](../reference/kustomize.md#commonannotations)
{% endpanel %}
## Setting Labels for all Resources
{% method %}
**Example:** Add the labels declared in `commonLabels` to all Resources in the project.
**Important:** Once set, commonLabels should not be changed so as not to change the Selectors for Services
or Workloads.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
app: foo
environment: test
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
bar: baz
spec:
selector:
matchLabels:
app: nginx
bar: baz
template:
metadata:
labels:
app: nginx
bar: baz
spec:
containers:
- name: nginx
image: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: foo # Label was changed
environment: test # Label was added
bar: baz # Label was ignored
name: nginx-deployment
spec:
selector:
matchLabels:
app: foo # Selector was changed
environment: test # Selector was added
bar: baz # Selector was ignored
template:
metadata:
labels:
app: foo # Label was changed
environment: test # Label was added
bar: baz # Label was ignored
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}
{% panel style="warning", title="Propagating Labels to Selectors" %}
In addition to updating the labels for each Resource, any selectors will also be updated to target the
labels. e.g. the selectors for Services in the project will be updated to include the commonLabels
*in addition* to the other labels.
**Note:** Once set, commonLabels should not be changed so as not to change the Selectors for Services
or Workloads.
{% endpanel %}
{% panel style="success", title="Common Labels" %}
The k8s.io documentation defines a set of [Common Labeling Conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
that may be applied to Applications.
**Note:** commonLabels should only be set for **immutable** labels, since they will be applied to Selectors.
Labeling Workload Resources makes it simpler to query Pods - e.g. for the purpose of getting their logs.
{% endpanel %}
## Setting Annotations for all Resources
{% method %}
**Example:** Add the annotations declared in `commonAnnotations` to all Resources in the project.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
oncallPager: 800-555-1212
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
# Annotation added to the Deployment
annotations:
oncallPager: 800-555-1212
labels:
app: nginx
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
# Annotation also added to PodTemplate
annotations:
oncallPager: 800-555-1212
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}
{% panel style="info", title="Propagating Annotations" %}
In addition to updating the annotations for each Resource, any fields that contain ObjectMeta
(e.g. PodTemplate) will also have the annotations added.
{% endpanel %}

View File

@ -0,0 +1,280 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Set the Namespace for all Resources within a Project with `namespace`
- Prefix the Names of all Resources within a Project with `namePrefix`
- Suffix the Names of all Resources within a Project with `nameSuffix`
{% endpanel %}
# Setting Namespaces and Names
## Motivation
It may be useful to enforce consistency across the namespace and names of all Resources within
a Project.
- Ensure all Resources are in the correct Namespace
- Ensure all Resources share a common naming convention
- Copy or Fork an existing Project and change the Namespace / Names
See [Bases and Variations](../app_customization/bases_and_variants.md) for more details on Copying Projects.
{% panel style="info", title="Reference" %}
- [namespace](../reference/kustomize.md#namespace)
- [namePrefix](../reference/kustomize.md#nameprefix)
- [nameSuffix](../reference/kustomize.md#namesuffix)
{% endpanel %}
## Setting the Namespace for all Resources
The Namespace for all namespaced Resources declared in the Resource Config may be set with `namespace`.
This sets the namespace for both generated Resources (e.g. ConfigMaps and Secrets) and non-generated
Resources.
{% method %}
**Example:** Set the `namespace` specified in the `kustomization.yaml` on the namespaced Resources.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
# The namespace has been added
namespace: my-namespace
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}
{% panel style="info", title="Overriding Namespaces" %}
Setting the namespace will override the namespace on Resources if it is already set.
{% endpanel %}
## Setting a Name prefix or suffix for all Resources
A name prefix or suffix can be set for all resources using `namePrefix` or
`nameSuffix`.
{% method %}
**Example:** Prefix the names of all Resources.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: foo-
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
# The name has been prefixed with "foo-"
name: foo-nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
```
{% endmethod %}
{% panel style="info", title="Propagation of the Name to Object References" %}
Resources such as Deployments and StatefulSets may reference other Resources such as
ConfigMaps and Secrets in the Pod Spec.
This sets a name prefix or suffix for both generated Resources (e.g. ConfigMaps
and Secrets) and non-generated Resources.
The namePrefix or nameSuffix that is applied is propagated to references to updated resources -
e.g. references to Secrets and ConfigMaps are updated with the namePrefix and nameSuffix.
{% endpanel %}
{% method %}
**Example:** Prefix the names of all Resources.
This will update the ConfigMap reference in the Deployment to have the `foo-` prefix.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: foo-
configMapGenerator:
- name: props
literals:
- BAR=baz
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
env:
- name: BAR
valueFrom:
configMapKeyRef:
name: props
key: BAR
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: v1
data:
BAR: baz
kind: ConfigMap
metadata:
creationTimestamp: null
name: foo-props-44kfh86dgg
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: foo-nginx-deployment
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- env:
- name: BAR
valueFrom:
configMapKeyRef:
key: BAR
name: foo-props-44kfh86dgg
image: nginx
name: nginx
```
{% endmethod %}
{% panel style="info", title="References" %}
Apply will propagate the `namePrefix` to any place Resources within the project are referenced by other Resources
including:
- Service references from StatefulSets
- ConfigMap references from PodSpecs
- Secret references from PodSpecs
{% endpanel %}

View File

@ -0,0 +1,380 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/CLQBQHR)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Generate Secrets from files and literals with `secretGenerator`
- Generate ConfigMaps from files and literals with `configMapGenerator`
- Rolling out changes to Secrets and ConfigMaps
{% endpanel %}
# Secrets and ConfigMaps
{% panel style="info", title="Reference" %}
- [secretGenerators](../reference/kustomize.md#secretgenerator)
- [configMapGenerators](../reference/kustomize.md#configmapgenerator)
- [generatorOptions](../reference/kustomize.md#generatoroptions)
{% endpanel %}
## Motivation
The source of truth for Secret and ConfigMap Resources typically resides
somewhere else, such as a `.properties` file. Apply offers native support
for generating both Secrets and ConfigMaps from other sources such as files and
literals.
Additionally, Secrets and ConfigMaps require rollouts to be performed
differently than for most other Resources in order for the changes to be
rolled out to Pods consuming them.
## Generators
Secret and ConfigMap Resources can be generated by adding `secretGenerator`
or `configMapGenerator` entries to the `kustomization.yaml` file.
**The generated Resources' names will have suffixes that change when their data
changes. See [Rollouts](#rollouts) for more on this.**
### ConfigMaps From Files
{% method %}
ConfigMap Resources may be generated from files - such as a java `.properties` file. To generate a ConfigMap
Resource for a file, add an entry to `configMapGenerator` with the filename.
**Example:** Generate a ConfigMap with a data item containing the contents of a file.
The ConfigMaps will have data values populated from the file contents. The contents of each file will
appear as a single data item in the ConfigMap keyed by the filename.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: my-application-properties
files:
- application.properties
```
```yaml
# application.properties
FOO=Bar
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# The name has had a suffix applied
name: my-application-properties-c79528k426
# The data has been populated from each file's contents
data:
application.properties: |
FOO=Bar
```
{% endmethod %}
### ConfigMaps From Literals
ConfigMap Resources may be generated from literal key-value pairs - such as `JAVA_HOME=/opt/java/jdk`.
To generate a ConfigMap Resource from literal key-value pairs, add an entry to `configMapGenerator` with a
list of `literals`.
{% panel style="info", title="Literal Syntax" %}
- The key/value are separated by a `=` sign (left side is the key)
- The value of each literal will appear as a data item in the ConfigMap keyed by its key.
{% endpanel %}
{% method %}
**Example:** Create a ConfigMap with 2 data items generated from literals.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: my-java-server-env-vars
literals:
- JAVA_HOME=/opt/java/jdk
- JAVA_TOOL_OPTIONS=-agentlib:hprof
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# The name has had a suffix applied
name: my-java-server-env-vars-k44mhd6h5f
# The data has been populated from each literal pair
data:
JAVA_HOME: /opt/java/jdk
JAVA_TOOL_OPTIONS: -agentlib:hprof
```
{% endmethod %}
### ConfigMaps From Environment Files
ConfigMap Resources may be generated from key-value pairs much the same as using the literals option
but taking the key-value pairs from an environment file. These generally end in `.env`.
To generate a ConfigMap Resource from an environment file, add an entry to `configMapGenerator` with a
single `env` entry, e.g. `env: config.env`.
{% panel style="info", title="Environment File Syntax" %}
- The key/value pairs inside of the environment file are separated by a `=` sign (left side is the key)
- The value of each line will appear as a data item in the ConfigMap keyed by its key.
{% endpanel %}
{% method %}
**Example:** Create a ConfigMap with 3 data items generated from an environment file.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: tracing-options
env: tracing.env
```
```bash
# tracing.env
ENABLE_TRACING=true
SAMPLER_TYPE=probabilistic
SAMPLER_PARAMETERS=0.1
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# The name has had a suffix applied
name: tracing-options-6bh8gkdf7k
# The data has been populated from each key-value pair in the environment file
data:
ENABLE_TRACING: "true"
SAMPLER_TYPE: "probabilistic"
SAMPLER_PARAMETERS: "0.1"
```
{% endmethod %}
{% panel style="success", title="Overriding Base ConfigMap Values" %}
ConfigMaps Values from Bases may be overridden by adding another generator for the ConfigMap
in the Variant and specifying the `behavior` field. `behavior` may be
one of `create` (default value), `replace` (replace the base ConfigMap),
or `merge` (add or update the values in the ConfigMap). See [Bases and Variations](../app_customization/bases_and_variants.md)
for more on using Bases. e.g. `behavior: "merge"` (see the sketch below).
{% endpanel %}
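A sketch of a Variant overriding a Base ConfigMap (the ConfigMap name is hypothetical and must match the name used in the Base generator):

```yaml
# kustomization.yaml (Variant)
bases:
- ../base
configMapGenerator:
- name: my-config          # same name as in the Base generator
  behavior: merge          # merge these values into the Base ConfigMap
  literals:
  - FOO=override
```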
### Secrets from Files
Secret Resources may be generated much like ConfigMaps can. This includes generating them
from literals, files or environment files.
{% panel style="info", title="Secret Syntax" %}
Secret type is set using the `type` field.
{% endpanel %}
{% method %}
**Example:** Generate a `kubernetes.io/tls` Secret from local files
{% sample lang="yaml" %}
**Input:** The kustomization.yaml file
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: app-tls
files:
- "secret/tls.cert"
- "secret/tls.key"
type: "kubernetes.io/tls"
```
**Applied:** The Resource that is Applied to the cluster
```yaml
apiVersion: v1
kind: Secret
metadata:
# The name has had a suffix applied
name: app-tls-4tc9tcbd8k
type: kubernetes.io/tls
# The data has been populated from each file's contents
data:
tls.crt: LS0tLS1CRUd...tCg==
tls.key: LS0tLS1CRUd...0tLQo=
```
{% endmethod %}
### Generator Options
{% method %}
It is also possible to specify cross-cutting options for generated objects
using `generatorOptions`.
{% sample lang="yaml" %}
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generatorOptions:
# labels to add to all generated resources
labels:
kustomize.generated.resources: somevalue
# annotations to add to all generated resources
annotations:
kustomize.generated.resource: somevalue
  # disableNameSuffixHash, if set to true, disables the default behavior
  # of appending a content-hash suffix to the names of generated resources.
disableNameSuffixHash: true
```
{% endmethod %}
### Propagating the Name Suffix
{% method %}
Workloads that reference a generated ConfigMap or Secret need to know the name of the generated Resource,
including its suffix. Apply takes care of this automatically: it identifies
references to generated ConfigMaps and Secrets, and updates them.
The generated ConfigMap name will be `my-java-server-env-vars` with a suffix unique to its contents.
Changes to the contents change the name suffix, resulting in the creation of a new ConfigMap,
and Workloads are transformed to point to the new one.
The PodTemplate volume references the ConfigMap by the name specified in the generator (excluding the suffix).
Apply will update the name to include the suffix applied to the ConfigMap name.
{% sample lang="yaml" %}
**Input:** The kustomization.yaml and deployment.yaml files
```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: my-java-server-env-vars
literals:
- JAVA_HOME=/opt/java/jdk
- JAVA_TOOL_OPTIONS=-agentlib:hprof
resources:
- deployment.yaml
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-deployment
labels:
app: test
spec:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: my-java-server-env-vars
```
**Applied:** The Resources that are Applied to the cluster.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
# The name has been updated to include the suffix
name: my-java-server-env-vars-k44mhd6h5f
data:
JAVA_HOME: /opt/java/jdk
JAVA_TOOL_OPTIONS: -agentlib:hprof
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: test
name: test-deployment
spec:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- command:
- /bin/sh
- -c
- ls /etc/config/
image: k8s.gcr.io/busybox
name: container
volumeMounts:
- mountPath: /etc/config
name: config-volume
volumes:
- configMap:
# The name has been updated to include the
# suffix matching the ConfigMap
name: my-java-server-env-vars-k44mhd6h5f
name: config-volume
```
{% endmethod %}
## Rollouts
ConfigMap values are consumed by Pods as environment variables, command line arguments and files.
This is important because updating a ConfigMap will:
- immediately update the files mounted by *all* Pods consuming them
- not update the environment variables or command line arguments until the Pod is restarted
Typically users want to perform a rolling update of the ConfigMap changes to Pods as soon as
the ConfigMap changes are pushed.
Apply facilitates rolling updates for ConfigMaps by creating a new ConfigMap
for each change to the data. Workloads (e.g. Deployments, StatefulSets, etc.) are updated to point to the new
ConfigMap instead of the old one. This allows the change to be rolled out gradually, the same way
other Pod Template changes are rolled out.
Each generated Resource's name has a suffix appended by hashing the contents. This approach ensures a new
ConfigMap is generated each time the contents are modified.
**Note:** Because the Resource names will contain a suffix, when looking for them with `kubectl get`,
their names will not match exactly what is specified in the kustomization.yaml file.
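To illustrate, listing the ConfigMaps shows the generated, hash-suffixed names. The output below is
illustrative only, using the ConfigMap generated earlier in this chapter.

```bash
kubectl get configmaps
```

```bash
NAME                                 DATA   AGE
my-java-server-env-vars-k44mhd6h5f   2      5m
```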
View File

@@ -0,0 +1,165 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Print the Logs of a Container in a cluster
{% endpanel %}
# Container Logs
## Motivation
Debugging Workloads by printing out the Logs of containers in a cluster.
{% method %}
## Print Logs for a Container in a Pod
Print the logs for a Pod running a single Container
{% sample lang="yaml" %}
```bash
kubectl logs echo-c6bc8ccff-nnj52
```
```bash
hello
hello
```
{% endmethod %}
{% panel style="success", title="Crash Looping Containers" %}
If a container is crash looping and you want to print its logs after it
exits, use the `-p` flag to look at the **logs from containers that have
exited**. e.g. `kubectl logs -p -c ruby web-1`
{% endpanel %}
---
{% method %}
## Print Logs for all Pods for a Workload
Print the logs for all Pods for a Workload
{% sample lang="yaml" %}
```bash
# Print logs from all containers matching label
kubectl logs -l app=nginx
```
{% endmethod %}
{% panel style="success", title="Workloads Logs" %}
Print all logs from **all containers for a Workload** by passing the
Workload label selector to the `-l` flag. e.g. if your Workload
label selector is `app=nginx`, use `-l "app=nginx"` to print logs
for all the Pods from that Workload.
{% endpanel %}
---
{% method %}
## Follow Logs for a Container
Stream logs from a container.
{% sample lang="yaml" %}
```bash
# Follow logs from container
kubectl logs nginx-78f5d695bd-czm8z -f
```
{% endmethod %}
---
{% method %}
## Printing Logs for a Container that has exited
Print the logs for the previously running container. This is useful for viewing the logs of containers
that have crashed or are crash looping.
{% sample lang="yaml" %}
```bash
# Print logs from exited container
kubectl logs nginx-78f5d695bd-czm8z -p
```
{% endmethod %}
---
{% method %}
## Selecting a Container in a Pod
Print the logs from a specific container within a Pod. This is necessary for Pods running multiple
containers.
{% sample lang="yaml" %}
```bash
# Print logs from the nginx container in the nginx-78f5d695bd-czm8z Pod
kubectl logs nginx-78f5d695bd-czm8z -c nginx
```
{% endmethod %}
---
{% method %}
## Printing Logs After a Time
Print the logs that occurred after an absolute time.
{% sample lang="yaml" %}
```bash
# Print logs since a date
kubectl logs nginx-78f5d695bd-czm8z --since-time=2018-11-01T15:00:00Z
```
{% endmethod %}
---
{% method %}
## Printing Logs Since a Time
Print the logs that are newer than a duration.
Examples:
- 0s: 0 seconds
- 1m: 1 minute
- 2h: 2 hours
{% sample lang="yaml" %}
```bash
# Print logs for the past hour
kubectl logs nginx-78f5d695bd-czm8z --since=1h
```
{% endmethod %}
---
{% method %}
## Include Timestamps
Include timestamps in the log lines
{% sample lang="yaml" %}
```bash
# Print logs with timestamps
kubectl logs -l app=echo --timestamps
```
```bash
2018-11-16T05:26:31.38898405Z hello
2018-11-16T05:27:13.363932497Z hello
```
{% endmethod %}
View File

@@ -0,0 +1,80 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Copy files to and from Containers in a cluster
{% endpanel %}
# Copying Container Files
## Motivation
- Copying files from Containers in a cluster to a local filesystem
- Copying files from a local filesystem to Containers in a cluster
{% panel style="warning", title="Install Tar" %}
Copy requires that *tar* be installed in the container image.
{% endpanel %}
{% method %}
## Local to Remote
Copy a local file to a remote Pod in a cluster.
- Local file format is `<path>`
- Remote file format is `<pod-name>:<path>`
{% sample lang="yaml" %}
```bash
kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
```
{% endmethod %}
{% method %}
## Remote to Local
Copy a remote file from a Pod to a local file.
- Local file format is `<path>`
- Remote file format is `<pod-name>:<path>`
{% sample lang="yaml" %}
```bash
kubectl cp <some-pod>:/tmp/foo /tmp/bar
```
{% endmethod %}
{% method %}
## Specify the Container
Specify the Container within a Pod running multiple containers.
- `-c <container-name>`
{% sample lang="yaml" %}
```bash
kubectl cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container>
```
{% endmethod %}
{% method %}
## Namespaces
Set the Pod namespace by prefixing the Pod name with `<namespace>/` .
- `<pod-namespace>/<pod-name>:<path>`
{% sample lang="yaml" %}
```bash
kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
```
{% endmethod %}
View File

@@ -0,0 +1,54 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Execute a Command in a Container
- Get a Shell in a Container
{% endpanel %}
# Executing Commands
## Motivation
Debugging Workloads by running commands within the Container. Commands may be a Shell with
a tty.
{% method %}
## Exec Command
Run a command in a Container in the cluster by specifying the **Pod name**.
{% sample lang="yaml" %}
```bash
kubectl exec nginx-78f5d695bd-czm8z ls
```
```bash
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
```
{% endmethod %}
{% method %}
## Exec Shell
To get a Shell in a Container, use the `-t -i` options to get a tty and attach STDIN.
{% sample lang="yaml" %}
```bash
kubectl exec -t -i nginx-78f5d695bd-czm8z bash
```
```bash
root@nginx-78f5d695bd-czm8z:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
```
{% endmethod %}
{% panel style="info", title="Specifying the Container" %}
For Pods running multiple Containers, the Container should be specified with `-c <container-name>`.
{% endpanel %}
View File

@@ -0,0 +1,68 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Port Forward local connections to Pods running in a cluster
{% endpanel %}
# Port Forward
## Motivation
Connect to ports of Pods running in a cluster by forwarding local ports.
{% method %}
## Forward Multiple Ports
Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
{% sample lang="yaml" %}
```bash
kubectl port-forward pod/mypod 5000 6000
```
{% endmethod %}
---
{% method %}
## Pod in a Workload
Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the
deployment
{% sample lang="yaml" %}
```bash
kubectl port-forward deployment/mydeployment 5000 6000
```
{% endmethod %}
---
{% method %}
## Different Local and Remote Ports
Listen on port 8888 locally, forwarding to 5000 in the pod
{% sample lang="yaml" %}
```bash
kubectl port-forward pod/mypod 8888:5000
```
{% endmethod %}
---
{% method %}
## Random Local Port
Listen on a random port locally, forwarding to 5000 in the pod
{% sample lang="yaml" %}
```bash
kubectl port-forward pod/mypod :5000
```
{% endmethod %}
View File

@@ -0,0 +1,77 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Proxy local connections to Services running in the cluster
{% endpanel %}
# Connecting to Services
## Motivation
Not all Services running in a Kubernetes cluster are exposed externally. However, Services
exposed only internally to a cluster with a *clusterIP* are accessible through an
apiserver proxy.
Users may use Proxy to **connect to Kubernetes Services in a cluster that are not
externally exposed**.
**Note:** Services of type LoadBalancer or type NodePort may be exposed externally and
accessed without the need for a Proxy.
{% method %}
## Connecting to an internal Service
Connect to an internal Service using the Proxy command and the Service Proxy URL.
To visit the nginx service go to the Proxy URL at
`http://127.0.0.1:8001/api/v1/namespaces/default/services/nginx/proxy/`
{% sample lang="yaml" %}
```bash
kubectl proxy
Starting to serve on 127.0.0.1:8001
```
```bash
curl http://127.0.0.1:8001/api/v1/namespaces/default/services/nginx/proxy/
```
{% endmethod %}
{% panel style="info", title="Literal Syntax" %}
To connect to a Service through a proxy the user must build the Proxy URL. The Proxy URL format is:
`http://<apiserver-address>/api/v1/namespaces/<service-namespace>/services/[https:]<service-name>[:<port-name>]/proxy`
- The apiserver-address should be the URL printed by the Proxy command
- The Port is optional if you havent specified a name for your port
- The Protocol is optional if you are using `http`
{% endpanel %}
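For example, a hypothetical Service named `nginx` in the `default` namespace that exposes a named port
`http` could be reached through the proxy as follows (the port name is assumed for illustration):

```bash
curl http://127.0.0.1:8001/api/v1/namespaces/default/services/nginx:http/proxy/
```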
## Builtin Cluster Services
A common use case is to connect to Services running as part of the cluster itself. A user can print out these
Services and their Proxy Urls with `kubectl cluster-info`.
```bash
kubectl cluster-info
Kubernetes master is running at https://104.197.5.247
GLBCDefaultBackend is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
```
{% panel style="info", title="More Info" %}
For more information on connecting to a cluster, see
[Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/).
{% endpanel %}
View File

@@ -0,0 +1,317 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Examples for `kustomization.yaml`
{% endpanel %}
# Kustomization.yaml Examples
{% method %}
This file declares the customization provided by the kustomize program.
Since customization is, by definition, _custom_, there are no default
values that should be copied from this file or that are recommended.
In practice, fields with no value should simply be omitted from kustomization.yaml
to reduce the content visible in configuration reviews.
Example copied from the [kustomize repo](https://github.com/kubernetes-sigs/kustomize/blob/master/docs/kustomization.yaml)
{% sample lang="yaml" %}
```yaml
# ----------------------------------------------------
# apiVersion and kind of Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Adds namespace to all resources.
namespace: my-namespace
# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
namePrefix: alices-
# Value of this field is appended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "wordpress-v2".
# The suffix is appended before content hash
# if resource type is ConfigMap or Secret.
nameSuffix: -v2
# Labels to add to all resources and selectors.
commonLabels:
someName: someValue
owner: alice
app: bingo
# Annotations (non-identifying metadata)
# to add to all resources. Like labels,
# these are key value pairs.
commonAnnotations:
oncallPager: 800-555-1212
# Each entry in this list must resolve to an existing
# resource definition in YAML. These are the resource
# files that kustomize reads, modifies and emits as a
# YAML string, with resources separated by document
# markers ("---").
resources:
- some-service.yaml
- sub-dir/some-deployment.yaml
# Each entry in this list results in the creation of
# one ConfigMap resource (it's a generator of n maps).
# The example below creates two ConfigMaps. One with the
# names and contents of the given files, the other with
# key/value as data.
# Each configMapGenerator item accepts a parameter of
# behavior: [create|replace|merge]. This allows an overlay to modify or
# replace an existing configMap from the parent.
configMapGenerator:
- name: my-java-server-props
files:
- application.properties
- more.properties
- name: my-java-server-env-vars
literals:
- JAVA_HOME=/opt/java/jdk
- JAVA_TOOL_OPTIONS=-agentlib:hprof
# Each entry in this list results in the creation of
# one Secret resource (it's a generator of n secrets).
secretGenerator:
- name: app-tls
files:
- secret/tls.cert
- secret/tls.key
type: "kubernetes.io/tls"
- name: app-tls-namespaced
# you can define a namespace to generate secret in, defaults to: "default"
namespace: apps
files:
- tls.crt=catsecret/tls.cert
- tls.key=secret/tls.key
type: "kubernetes.io/tls"
- name: env_file_secret
# env is a path to a file to read lines of key=val
# you can only specify one env file per secret.
env: env.txt
type: Opaque
# generatorOptions modify behavior of all ConfigMap and Secret generators
generatorOptions:
# labels to add to all generated resources
labels:
kustomize.generated.resources: somevalue
# annotations to add to all generated resources
annotations:
kustomize.generated.resource: somevalue
  # disableNameSuffixHash, if set to true, disables the default behavior
  # of appending a content-hash suffix to the names of generated resources.
disableNameSuffixHash: true
# Each entry in this list should resolve to a directory
# containing a kustomization file, else the
# customization fails.
#
# The entry could be a relative path pointing to a local directory
# or a url pointing to a directory in a remote repo.
# The url should follow hashicorp/go-getter URL format
# https://github.com/hashicorp/go-getter#url-format
#
# The presence of this field means this file (the file
# you a reading) is an _overlay_ that further
# customizes information coming from these _bases_.
#
# Typical use case: a dev, staging and production
# environment that are mostly identical but differing
# crucial ways (image tags, a few server arguments,
# etc. that differ from the common base).
bases:
- ../../base
- github.com/kubernetes-sigs/kustomize/examples/multibases?ref=v1.0.6
- github.com/Liujingfang1/mysql
- github.com/Liujingfang1/kustomize/examples/helloWorld?ref=test-branch
# Each entry in this list should resolve to
# a partial or complete resource definition file.
#
# The names in these (possibly partial) resource files
# must match names already loaded via the `resources`
# field or via `resources` loaded transitively via the
# `bases` entries. These entries are used to _patch_
# (modify) the known resources.
#
# Small patches that do one thing are best, e.g. modify
# a memory request/limit, change an env var in a
# ConfigMap, etc. Small patches are easy to review and
# easy to mix together in overlays.
patchesStrategicMerge:
- service_port_8888.yaml
- deployment_increase_replicas.yaml
- deployment_increase_memory.yaml
# Each entry in this list should resolve to
# a kubernetes object and a JSON patch that will be applied
# to the object.
# The JSON patch is documented at https://tools.ietf.org/html/rfc6902
#
# target field points to a kubernetes object within the same kustomization
# by the object's group, version, kind, name and namespace.
# path field is a relative file path of a JSON patch file.
# The content in this patch file can be either in JSON format as
#
# [
# {"op": "add", "path": "/some/new/path", "value": "value"},
# {"op": "replace", "path": "/some/existing/path", "value": "new value"}
# ]
#
# or in YAML format as
#
# - op: add
# path: /some/new/path
# value: value
#   - op: replace
# path: /some/existing/path
# value: new value
#
patchesJson6902:
- target:
version: v1
kind: Deployment
name: my-deployment
path: add_init_container.yaml
- target:
version: v1
kind: Service
name: my-service
path: add_service_annotation.yaml
# Each entry in this list should be a relative path to
# a file for custom resource definition(CRD) in openAPI definition.
#
# The presence of this field is to allow kustomize be
# aware of CRDs and apply proper
# transformation for any objects in those types.
#
# Typical use case: A CRD object refers to a ConfigMap object.
# In kustomization, the ConfigMap object name may change by adding namePrefix, nameSuffix, or hashing
# The name reference for this ConfigMap object in CRD object need to be
# updated with namePrefix, nameSuffix, or hashing in the same way.
#
# The annotations can be put into openAPI definitions are:
# "x-kubernetes-annotation": ""
# "x-kubernetes-label-selector": ""
# "x-kubernetes-identity": ""
# "x-kubernetes-object-ref-api-version": "v1",
# "x-kubernetes-object-ref-kind": "Secret",
# "x-kubernetes-object-ref-name-key": "name",
crds:
- crds/typeA.json
- crds/typeB.json
# Vars are used to capture text from one resource's field
# and insert that text elsewhere.
#
# For example, suppose someone specifies the name of a k8s Service
# object in a container's command line, and the name of a
# k8s Secret object in a container's environment variable,
# so that the following would work:
#
# containers:
# - image: myimage
# command: ["start", "--host", "$(MY_SERVICE_NAME)"]
# env:
# - name: SECRET_TOKEN
# value: $(SOME_SECRET_NAME)
#
#
# To do so, add an entry to `vars:` as follows:
#
vars:
- name: SOME_SECRET_NAME
objref:
kind: Secret
name: my-secret
apiVersion: v1
- name: MY_SERVICE_NAME
objref:
kind: Service
name: my-service
apiVersion: v1
fieldref:
fieldpath: metadata.name
- name: ANOTHER_DEPLOYMENTS_POD_RESTART_POLICY
objref:
kind: Deployment
name: my-deployment
apiVersion: apps/v1
fieldref:
fieldpath: spec.template.spec.restartPolicy
#
# A var is a tuple of variable name, object reference and field
# reference within that object. That's where the text is found.
#
# The field reference is optional; it defaults to `metadata.name`,
# a normal default, since kustomize is used to generate or
# modify the names of resources.
#
# At time of writing, only string type fields are supported.
# No ints, bools, arrays etc. It's not possible to, say,
# extract the name of the image in container number 2 of
# some pod template.
#
# A variable reference, i.e. the string '$(FOO)', can only
# be placed in particular fields of particular objects as
# specified by kustomize's configuration data.
#
# The default config data for vars is at
# https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/config/defaultconfig/varreference.go
# Long story short, the default targets are all
# container command args and env value fields.
#
# Vars should _not_ be used for inserting names in places
# where kustomize is already handling that job. E.g.,
# a Deployment may reference a ConfigMap by name, and
# if kustomize changes the name of a ConfigMap, it knows
# to change the name reference in the Deployment.
# Images modify the name, tags and/or digest for images without creating patches.
# E.g. Given this kubernetes Deployment fragment:
#
# containers:
# - name: mypostgresdb
# image: postgres:8
# - name: nginxapp
# image: nginx:1.7.9
# - name: myapp
# image: my-demo-app:latest
# - name: alpine-app
# image: alpine:3.7
#
# one can change the `image` in the following ways:
#
# - `postgres:8` to `my-registry/my-postgres:v1`,
# - nginx tag `1.7.9` to `1.8.0`,
# - image name `my-demo-app` to `my-app`,
# - alpine's tag `3.7` to a digest value
#
# all with the following *kustomization*:
images:
- name: postgres
newName: my-registry/my-postgres
newTag: v1
- name: nginx
newTag: 1.8.0
- name: my-demo-app
newName: my-app
- name: alpine
digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
```
{% endmethod %}
View File

@@ -0,0 +1,172 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Imperatively Create Resources
{% endpanel %}
# Creating Resources
## Motivation
Create Resources directly from the command line for the purposes of development or debugging.
Not for production Application Management.
{% method %}
## Deployment
A Deployment can be created with the `create deployment` command.
{% sample lang="yaml" %}
```bash
kubectl create deployment my-dep --image=busybox
```
{% endmethod %}
{% panel style="success", title="Running and Attaching" %}
It is possible to run a container and immediately attach to it using the `-i -t` flags. e.g.
`kubectl run -t -i my-dep --image ubuntu -- bash`
{% endpanel %}
{% method %}
## ConfigMap
Create a configmap based on a file, directory, or specified literal value.
A single configmap may package one or more key/value pairs.
When creating a configmap based on a file, the key will default to the basename of the file, and the value will default
to the file content. If the basename is an invalid key, you may specify an alternate key.
When creating a configmap based on a directory, each file whose basename is a valid key in the directory will be
packaged into the configmap. Any directory entries except regular files are ignored (e.g. subdirectories, symlinks,
devices, pipes, etc).
{% sample lang="yaml" %}
```bash
# Create a new configmap named my-config based on folder bar
kubectl create configmap my-config --from-file=path/to/bar
```
```bash
# Create a new configmap named my-config with specified keys instead of file basenames on disk
kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt
```
```bash
# Create a new configmap named my-config with key1=config1 and key2=config2
kubectl create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2
```
```bash
# Create a new configmap named my-config from an env file
kubectl create configmap my-config --from-env-file=path/to/bar.env
```
{% endmethod %}
{% method %}
## Secret
Create a new secret named my-secret with keys for each file in folder bar
{% sample lang="yaml" %}
```bash
kubectl create secret generic my-secret --from-file=path/to/bar
```
{% endmethod %}
{% panel style="success", title="Bootstrapping Config" %}
Imperative commands can be used to bootstrap config by using `--dry-run -o yaml`.
`kubectl create secret generic my-secret --from-file=path/to/bar --dry-run -o yaml`
{% endpanel %}
{% method %}
## Namespace
Create a new namespace named my-namespace
{% sample lang="yaml" %}
```bash
kubectl create namespace my-namespace
```
{% endmethod %}
## Auth Resources
{% method %}
### ClusterRole
Create a ClusterRole named "foo" with API Group specified.
{% sample lang="yaml" %}
```bash
kubectl create clusterrole foo --verb=get,list,watch --resource=rs.extensions
```
{% endmethod %}
{% method %}
### ClusterRoleBinding
Create a role binding to give a user cluster admin permissions.
{% sample lang="yaml" %}
```bash
kubectl create clusterrolebinding <choose-a-name> --clusterrole=cluster-admin --user=<your-cloud-email-account>
```
{% endmethod %}
{% panel style="info", title="Required Admin Permissions" %}
The cluster-admin role may be required for creating new RBAC bindings.
{% endpanel %}
{% method %}
### Role
Create a Role named "foo" with API Group specified.
{% sample lang="yaml" %}
```bash
kubectl create role foo --verb=get,list,watch --resource=rs.extensions
```
{% endmethod %}
{% method %}
### RoleBinding
Create a RoleBinding for user1, user2, and group1 using the admin ClusterRole.
{% sample lang="yaml" %}
```bash
kubectl create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1
```
{% endmethod %}
{% method %}
### ServiceAccount
Create a new service account named my-service-account
{% sample lang="yaml" %}
```bash
kubectl create serviceaccount my-service-account
```
{% endmethod %}
View File

@@ -0,0 +1,44 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Edit a live Resource in an editor
{% endpanel %}
# Editing Resources
## Motivation
Directly modify a Resource in the cluster by opening its Config in an editor.
{% method %}
## Edit
Edit allows a user to directly edit a Resource in a cluster rather than
editing it through a local file.
{% sample lang="yaml" %}
```bash
# Edit the service named 'docker-registry':
kubectl edit svc/docker-registry
```
```bash
# Use an alternative editor
KUBE_EDITOR="nano" kubectl edit svc/docker-registry
```
```bash
# Edit the job 'myjob' in JSON using the v1 API format:
kubectl edit job.v1.batch/myjob -o json
```
```bash
# Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation:
kubectl edit deployment/mydeployment -o yaml --save-config
```
{% endmethod %}
View File

@@ -0,0 +1,15 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
# Introduction
While Declarative Management of Applications is the recommended pattern for production
use cases, imperative porcelain commands may be helpful for development or debugging
issues. These commands are particularly helpful for learning about Kubernetes when coming
from an imperative system.
**Note:** Some imperative commands can be run with `--dry-run -o yaml` to display the declarative
form.
This section describes imperative commands that will generate or patch Resource Config.
View File

@@ -0,0 +1,172 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Imperatively Set fields on Resources
{% endpanel %}
# Setting Fields
## Motivation
Set fields on Resources directly from the command line for the purposes of development or debugging.
Not for production Application Management.
{% method %}
## Scale
The Replicas field on a Resource can be set using the `kubectl scale` command.
{% sample lang="yaml" %}
```bash
# Scale a replicaset named 'foo' to 3.
kubectl scale --replicas=3 rs/foo
```
```sh
# Scale a resource identified by type and name specified in "foo.yaml" to 3.
kubectl scale --replicas=3 -f foo.yaml
```
```sh
# If the deployment named mysql's current size is 2, scale mysql to 3.
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
```
```sh
# Scale multiple replication controllers.
kubectl scale --replicas=5 rc/foo rc/bar rc/baz
```
```sh
# Scale statefulset named 'web' to 3.
kubectl scale --replicas=3 statefulset/web
```
{% endmethod %}
{% panel style="info", title="Conditional Scale Update" %}
It is possible to conditionally update the replicas if and only if the
replicas haven't changed from their last known value using the `--current-replicas` flag.
e.g. `kubectl scale --current-replicas=2 --replicas=3 deployment/mysql`
{% endpanel %}
{% method %}
## Labels
Labels can be set using the `kubectl label` command. Multiple Resources can
be updated in a single command using the `-l` flag.
{% sample lang="yaml" %}
```sh
# Update pod 'foo' with the label 'unhealthy' and the value 'true'.
kubectl label pods foo unhealthy=true
```
```sh
# Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value.
kubectl label --overwrite pods foo status=unhealthy
```
```sh
# Update all pods in the namespace
kubectl label pods --all status=unhealthy
```
```sh
# Update a pod identified by the type and name in "pod.json"
kubectl label -f pod.json status=unhealthy
```
```sh
# Update pod 'foo' only if the resource is unchanged from version 1.
kubectl label pods foo status=unhealthy --resource-version=1
```
```sh
# Update pod 'foo' by removing a label named 'bar' if it exists.
# Does not require the --overwrite flag.
kubectl label pods foo bar-
```
{% endmethod %}
{% method %}
## Annotations
Annotations can be set using the `kubectl annotate` command.
{% sample lang="yaml" %}
```sh
# Update pod 'foo' with the annotation 'description' and the value 'my frontend'.
# If the same annotation is set multiple times, only the last value will be applied
kubectl annotate pods foo description='my frontend'
```
```sh
# Update a pod identified by type and name in "pod.json"
kubectl annotate -f pod.json description='my frontend'
```
```sh
# Update pod 'foo' with the annotation 'description' and the value
# 'my frontend running nginx', overwriting any existing value.
kubectl annotate --overwrite pods foo description='my frontend running nginx'
```
```sh
# Update all pods in the namespace
kubectl annotate pods --all description='my frontend running nginx'
```
```sh
# Update pod 'foo' only if the resource is unchanged from version 1.
kubectl annotate pods foo description='my frontend running nginx' --resource-version=1
```
```sh
# Update pod 'foo' by removing an annotation named 'description' if it exists.
# Does not require the --overwrite flag.
kubectl annotate pods foo description-
```
{% endmethod %}
{% method %}
## Patches
Arbitrary fields can be set using the `kubectl patch` command.
{% sample lang="yaml" %}
```sh
# Partially update a node using a strategic merge patch. Specify the patch as JSON.
kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
```
```sh
# Partially update a node using a strategic merge patch. Specify the patch as YAML.
kubectl patch node k8s-node-1 -p $'spec:\n unschedulable: true'
```
```sh
# Partially update a node identified by the type and name specified in "node.json" using strategic merge patch.
kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}'
```
```sh
# Update a container's image; spec.containers[*].name is required because it's a merge key.
kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
```
```sh
# Update a container's image using a json patch with positional arrays.
kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"newimage"}]'
```
{% endmethod %}
View File

@@ -0,0 +1,207 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Creating Resources
- Printing Resources
- Debugging Containers
{% endpanel %}
# Getting Started With Kubectl
**Note**: If you are already familiar with Kubectl, you can skip this section.
This section provides a brief overview of the most basic Kubectl commands, which are
described in more detail in later chapters.
For more background on the Kubernetes APIs themselves, see the docs at [k8s.io](https://k8s.io).
## Listing Kubernetes Resources
{% method %}
List the Kubernetes *Deployment* Resources that are in the kube-system namespace.
**Note:** Deployments are Resources which manage Pod replicas (Pods run Containers)
{% sample lang="yaml" %}
```bash
kubectl get deployments --namespace kube-system
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
event-exporter-v0.2.3 1 1 1 1 14d
fluentd-gcp-scaler 1 1 1 1 14d
heapster-v1.6.0-beta.1 1 1 1 1 14d
kube-dns 2 2 2 2 14d
kube-dns-autoscaler 1 1 1 1 14d
l7-default-backend 1 1 1 1 14d
metrics-server-v0.3.1 1 1 1 1 14d
```
{% endmethod %}
{% method %}
Print detailed information about the kube-dns Deployment in the kube-system namespace.
{% sample lang="yaml" %}
```bash
kubectl describe deployment kube-dns --namespace kube-system
```
```bash
Name: kube-dns
Namespace: kube-system
CreationTimestamp: Wed, 06 Mar 2019 17:36:05 -0800
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
Annotations: deployment.kubernetes.io/revision: 2
...
```
{% endmethod %}
## Creating a Resource from Config
{% method %}
Create or Update Kubernetes Resources from Remote Config.
{% sample lang="yaml" %}
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubectl/master/docs/book/examples/nginx/nginx.yaml
```
```bash
service/nginx created
deployment.apps/nginx-deployment created
```
{% endmethod %}
{% method %}
Create or Update Kubernetes Resources from Local Config.
{% sample lang="yaml" %}
```bash
kubectl apply -f ./examples/nginx/nginx.yaml
```
```bash
service/nginx created
deployment.apps/nginx-deployment created
```
{% endmethod %}
{% method %}
Print the Resources that were Applied.
{% sample lang="yaml" %}
```bash
kubectl get -f ./examples/nginx/nginx.yaml --show-labels
```
```bash
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
service/nginx ClusterIP 10.59.245.201 <none> 80/TCP 11m <none>
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE LABELS
deployment.apps/nginx-deployment 3 3 3 3 11m app=nginx
```
{% endmethod %}
## Generating a Config from a Command
{% method %}
Generate Config for a Deployment Resource. This could be Applied to a cluster by writing the output
to a file, and then running `kubectl apply -f <yaml-file>`
**Note:** The generated Config has extra boilerplate that users shouldn't include but exists
due to the serialization process of go objects.
{% sample lang="yaml" %}
```bash
kubectl create deployment nginx --dry-run -o yaml --image nginx
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null # delete this
labels:
app: nginx
name: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
strategy: {} # delete this
template:
metadata:
creationTimestamp: null # delete this
labels:
app: nginx
spec:
containers:
- image: nginx
name: nginx
resources: {} # delete this
status: {} # delete this
```
{% endmethod %}
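As a concrete sketch of the flow described above, the generated Config can be written to a file and then
Applied; the output file name below is illustrative.

```bash
# Write the generated Config to a file
kubectl create deployment nginx --dry-run -o yaml --image nginx > nginx-deployment.yaml
# (optionally clean up the boilerplate fields noted above, then Apply)
kubectl apply -f nginx-deployment.yaml
```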
## Viewing Pods Associated with Resources
{% method %}
View the Pods created by the Deployment using the Pod labels.
{% sample lang="yaml" %}
```bash
kubectl get pods -l app=nginx
```
```bash
NAME READY STATUS RESTARTS AGE
nginx-deployment-5c689d88bb-b2xfk 1/1 Running 0 10m
nginx-deployment-5c689d88bb-rx569 1/1 Running 0 10m
nginx-deployment-5c689d88bb-s7xcv 1/1 Running 0 10m
```
{% endmethod %}
## Debugging Containers
{% method %}
Get the logs from all Pods managed by the Deployment.
{% sample lang="yaml" %}
```bash
kubectl logs -l app=nginx
```
{% endmethod %}
{% method %}
Get a shell into a specific Pod's Container
{% sample lang="yaml" %}
```bash
kubectl exec -i -t nginx-deployment-5c689d88bb-s7xcv bash
```
```bash
root@nginx-deployment-5c689d88bb-s7xcv:/#
```
{% endmethod %}
View File

@@ -0,0 +1,228 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- A Kubernetes API has 2 parts - a Resource Type and a Controller
- Resources are objects declared as json or yaml and written to a cluster
- Controllers asynchronously actuate Resources after they are stored
{% endpanel %}
# Kubernetes Resources and Controllers Overview
This section provides background on the Kubernetes Resource model. This information
is also available at the [kubernetes.io](https://kubernetes.io/docs/home/) docs site.
For more information on Kubernetes Resources see: [kubernetes.io Concepts](https://kubernetes.io/docs/concepts/).
## Resources
Instances of Kubernetes objects (e.g. Deployment, Services, Namespaces, etc)
are called **Resources**.
Resources which run containers are referred to as **Workloads**.
Examples of Workloads:
- [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
- [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
- [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/)
- [CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/)
- [DaemonSets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
**Users work with Resource APIs by declaring them in files which are then Applied to a Kubernetes
cluster. These declarative files are called Resource Config.**
Resource Config is *Applied* (declarative Create/Update/Delete) to a Kubernetes cluster using
tools such as Kubectl, and then actuated by a *Controller*.
Resources are uniquely identified:
- **apiVersion** (API Type Group and Version)
- **kind** (API Type Name)
- **metadata.namespace** (Instance namespace)
- **metadata.name** (Instance name)
{% panel style="warning", title="Default Namespace" %}
If namespace is omitted from the Resource Config, the *default* namespace is used. Users
should almost always explicitly specify the namespace for their Application using a
`kustomization.yaml`.
{% endpanel %}
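A minimal sketch of declaring the namespace in a `kustomization.yaml` follows; the namespace and resource
file names are illustrative.

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# explicitly set the namespace for all Resources
namespace: my-app
resources:
- deployment.yaml
- service.yaml
```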
{% method %}
### Resources Structure
Resources have the following components.
**TypeMeta:** Resource Type **apiVersion** and **kind**.
**ObjectMeta:** Resource **name** and **namespace** + other metadata (labels, annotations, etc).
**Spec:** the desired state of the Resource - intended state the user provides to the cluster.
**Status:** the observed state of the object - recorded state the cluster provides to the user.
Resource Config written by the user omits the Status field.
**Example Deployment Resource Config**
{% sample lang="yaml" %}
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.15.4
```
{% endmethod %}
{% panel style="info", title="Spec and Status" %}
Resources such as ConfigMaps and Secrets do not have a Status,
and as a result their Spec is implicit (i.e. they don't have a spec field).
{% endpanel %}
## Controllers
Controllers actuate Kubernetes APIs. They observe the state of the system and look for
changes either to desired state of Resources (create, update, delete) or the system
(Pod or Node dies).
Controllers then make changes to the cluster to fulfill the intent specified by the user
(e.g. in Resource Config) or automation (e.g. changes from Autoscalers).
**Example:** After a user creates a Deployment, the Deployment Controller will see
that the Deployment exists and verify that the corresponding ReplicaSet it expects
to find exists. The Controller will see that the ReplicaSet does not exist and will
create one.
{% panel style="warning", title="Asynchronous Actuation" %}
Because Controllers run asynchronously, issues such as a bad
Container Image or unschedulable Pods will not be present in the CRUD response.
Tooling must facilitate processes for watching the state of the system until changes are
completely actuated by Controllers. Once the changes have been fully actuated such
that the desired state matches the observed state, the Resource is considered *Settled*.
{% endpanel %}
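For example, one way for tooling (or a user) to wait until a Deployment's rollout has been fully actuated
is `kubectl rollout status`, which blocks until the observed state matches the desired state. The
Deployment name below is illustrative.

```bash
# Block until the Deployment's rollout is complete
kubectl rollout status deployment/nginx-deployment
```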
### Controller Structure
**Reconcile**
Controllers actuate Resources by reading the Resource they are Reconciling + related Resources,
such as those that they create and delete.
**Controllers *do not* Reconcile events, rather they Reconcile the expected
cluster state to the observed cluster state at the time Reconcile is run.**
1. Deployment Controller creates/deletes ReplicaSets
1. ReplicaSet Controller creates/delete Pods
1. Scheduler (Controller) writes Nodes to Pods
1. Node (Controller) runs Containers specified in Pods on the Node
**Watch**
Controllers actuate Resources *after* they are written by Watching Resource Types, and then
triggering Reconciles from Events. After a Resource is created/updated/deleted, Controllers
Watching the Resource Type will receive a notification that the Resource has been changed,
and they will read the state of the system to see what has changed (instead of relying on
the Event for this information).
- Deployment Controller watches Deployments + ReplicaSets (+ Pods)
- ReplicaSet Controller watches ReplicaSets + Pods
- Scheduler (Controller) watches Pods
- Node (Controller) watches Pods (+ Secrets + ConfigMaps)
{% panel style="info", title="Level vs Edge Based Reconciliation" %}
Because Controllers don't respond to individual Events, but instead Reconcile the state
of the system at the time that Reconcile is run, **changes from several different events may be observed
and Reconciled together.** This is referred to as a *Level Based* system, whereas a system that
responds to each event individually would be referred to as an *Edge Based* system.
{% endpanel %}
## Overview of Kubernetes Resource APIs
### Pods
Containers are run in [*Pods*](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) which are
scheduled to run on *Nodes* (i.e. worker machines) in a cluster.
Pods run a *single replica* of an Application and provide:
- Compute Resources (cpu, memory, disk)
- Environment Variables
- Readiness and Health Checking
- Network (IP address shared by containers in the Pod)
- Mounting Shared Configuration and Secrets
- Mounting Storage Volumes
- Initialization
{% panel style="warning", title="Multi Container Pods" %}
Multiple replicas of an Application should be created using a Workload API to manage
creation and deletion of Pod replicas using a PodTemplate.
In some cases a Pod may contain multiple Containers forming a single instance of an Application. These
containers may coordinate with one another through shared network (IP) and storage.
{% endpanel %}
### Workloads
Pods are typically managed by higher level abstractions that handle concerns such as
replication, identity, persistent storage, custom scheduling, rolling updates, etc.
The most common out-of-the-box Workload APIs (which manage Pods) are:
- [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) (Stateless Applications)
- replication + rollouts
- [StatefulSets](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) (Stateful Applications)
- replication + rollouts + persistent storage + identity
- [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/) (Batch Work)
- run to completion
- [CronJobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) (Scheduled Batch Work)
- scheduled run to completion
- [DaemonSets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) (Per-Machine)
- per-Node scheduling
{% panel style="success", title="API Abstraction Layers" %}
High-level Workload APIs may manage lower-level Workload APIs instead of directly managing Pods
(e.g. Deployments manage ReplicaSets).
{% endpanel %}
### Service Discovery and Load Balancing
Service discovery and Load Balancing may be managed by a *Service* object. Services provide a single
virtual IP address and DNS name, load balanced to a collection of Pods matching Labels.
{% panel style="info", title="Internal vs External Services" %}
- [Services Resources](https://kubernetes.io/docs/concepts/services-networking/service/)
(L4) may expose Pods internally within a cluster or externally through an HA proxy.
- [Ingress Resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) (L7)
may expose URI endpoints and route them to Services.
{% endpanel %}
### Configuration and Secrets
Shared Configuration and Secret data may be provided by ConfigMaps and Secrets. This allows
Environment Variables, Command Line Arguments and Files to be loosely injected into
the Pods and Containers that consume them.
{% panel style="info", title="ConfigMaps vs Secrets" %}
- [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/)
are for providing non-sensitive data to Pods.
- [Secrets](https://kubernetes.io/docs/concepts/configuration/secret/)
are for providing sensitive data to Pods.
{% endpanel %}
View File

@@ -0,0 +1,806 @@
{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Reference for `kustomization.yaml`
{% endpanel %}
# Kustomization.yaml Reference
#### Terms:
- **Generators**: Provide Resource Config to Kustomize - e.g. `resources`, `bases`, `secretGenerators`.
- **Transformers**: Modify Resource Config by adding, updating or deleting fields - e.g. `namespace`, `commonLabels`, `images`.
- **Meta**: Configure behavior of Generators and Transformers - e.g. `generatorOptions`, `crds`, `configurations`.
## Table of Contents
| Name | Type | Descriptions | Guides |
| :----------------------------------------------- | :--------------- | -------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| [bases](#bases) | Generator | Add Resource Configs from another `kustomization.yaml` | [Bases and Variants](../app_customization/bases_and_variants.md) |
| [commonAnnotations](#commonannotations) | Transformer | Set annotations on all Resources and Selectors. | [Labels and Annotations](../app_management/labels_and_annotations.md#setting-annotations-for-all-resources) |
| [commonLabels](#commonlabels) | Transformer | Set labels on all Resources and Selectors. | [Labels and Annotations](../app_management/labels_and_annotations.md#setting-labels-for-all-resources) |
| [configMapGenerator](#configmapgenerator) | Generator | Generate ConfigMap Resources. | [Secrets and ConfigMaps](../app_management/secrets_and_configmaps.md#configmaps-from-files) |
| [configurations](#configurations) | Meta | Extend functionality of builtin Transformers to work with additional types (e.g. CRDs).| |
| [generatorOptions](#generatoroptions) | Meta | Configure how ConfigMaps and Secrets are generated. | |
| [images](#images) | Transformer | Override image names and tags. | [Container Images](../app_management/container_images.md) |
| [namespace](#namespace)                           | Transformer      | Override namespaces on all Resources.                                                    | [Namespaces and Names](../app_management/namespaces_and_names.md#setting-a-namespace-for-all-resources)                         |
| [namePrefix](#nameprefix) | Transformer | Add a prefix to the names of all Resources and References. | [Namespaces and Names](../app_management/namespaces_and_names.md#setting-a-name-prefix-or-suffix-for-all-resources) |
| [nameSuffix](#namesuffix) | Transformer | Add a suffix to the name of all Resources and References. | [Namespaces and Names](../app_management/namespaces_and_names.md#setting-a-name-prefix-or-suffix-for-all-resources) |
| [patchesJson6902](#patchesjson6902) | Transformer | Patch Resource Config using json patch. | [Customizing Resource Fields](../app_customization/customizing_arbitrary_fields.md#customizing-arbitrary-fields-with-jsonpatch) |
| [patchesStrategicMerge](#patchesstrategicmerge) | Transformer | Patch Resource Config using an overlay. | [Customizing Resource Fields](../app_customization/customizing_arbitrary_fields.md#customizing-arbitrary-fields-with-overlays) |
| [resources](#resources) | Generator | Add Raw Resource Configs. | [Apply](../app_management/apply.md#usage) |
| [secretGenerator](#secretgenerator) | Generator | Generate Secret Resources. | [Secrets and ConfigMaps](../app_management/secrets_and_configmaps.md#secrets-from-files) |
| [vars](#vars) | Transformer | Substitute Resource Config field values into Pod Arguments. | [Config Reflection](../app_customization/config_reflection.md) |
See this [example kustomization.yaml](../examples/kustomize.md)
## Resource Generators
Resource Generators provide Resource Configs to Kustomize from sources such as files, urls, or
`kustomization.yaml` fields.
### bases
{% method %}
`bases` contains a list of paths to **directories or git repositories** containing `kustomization.yaml`s.
`bases` produce Resource Config by running Kustomize against the target. The provided Resource Config
will then have Transformers from the current `kustomization.yaml` applied.
`bases` are conceptually similar to a base image referenced by `FROM` in a Dockerfile.
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **bases**     | []string  | List of paths. Each path must point to a directory or git repository containing a `kustomization.yaml`. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- path/to/dir/with/kust/
- https://github.com/org/repo/dir/
```
{% endmethod %}
### configMapGenerator
{% method %}
`configMapGenerator` contains a list of ConfigMaps to generate.
By default, generated ConfigMaps will have a hash appended to the name. The ConfigMap hash is
appended after a `nameSuffix`, if one is specified. Changes to ConfigMap data will cause a ConfigMap
with a new name to be generated, triggering a rolling update to Workloads referencing the ConfigMap.
Resources such as PodTemplates should reference ConfigMaps by the `name` ConfigMapGenerator field,
and Kustomize will update the reference to match the generated name,
as well as `namePrefix`'s and `nameSuffix`'s.
**Note:** Hash suffix generation can be disabled for a subset of ConfigMaps by creating a separate
`kustomization.yaml` and generating these ConfigMaps there. This `kustomization.yaml` must set
`generatorOptions.disableNameSuffixHash=true`, and be used as a `base`. See
[generatorOptions](#generatoroptions) for more details.
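A minimal sketch of that setup is shown below; the directory and ConfigMap names are illustrative. The
top-level `kustomization.yaml` would then list this directory under its `bases`.

```yaml
# no-hash-configmaps/kustomization.yaml (used as a base)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generatorOptions:
  # do not append a content-hash suffix to generated names
  disableNameSuffixHash: true
configMapGenerator:
- name: static-config
  files:
  - application.properties
```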
| Name | Type | Desc |
| :--------------------- | :------------------------ | :---------------------------------- |
| **configMapGenerator** | []ConfigMapGeneratorArgs | List of ConfigMaps to generate. |
##### ConfigMapGeneratorArgs
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **behavior**  | string    | Merge behavior when the ConfigMap generator is defined in a base. May be one of `create`, `replace`, `merge`. |
| **env** | string | Single file to generate ConfigMap data entries from. Should be a path to a local *env* file, e.g. `path/to/file.env`, where each line of the file is a `key=value` pair. *Each line* will appear as an entry in the ConfigMap data field. |
| **files** | []string | List of files to generate ConfigMap data entries from. Each item should be a path to a local file, e.g. `path/to/file.config`, and the filename will appear as an entry in the ConfigMap data field with its contents as a value. |
| **literals** | []string | List of literal ConfigMap data entries. Each item should be a key and literal value, e.g. `somekey=somevalue`, and the key/value will appear as an entry in the ConfigMap data field.|
| **name** | string | Name for the ConfigMap. Modified by the `namePrefix` and `nameSuffix` fields. |
| **namespace** | string | Namespace for the ConfigMap. Overridden by kustomize-wide `namespace` field.|
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
# generate a ConfigMap named my-java-server-props-<some-hash> where each file
# in the list appears as a data entry (keyed by base filename).
- name: my-java-server-props
files:
- application.properties
- more.properties
# generate a ConfigMap named my-java-server-env-vars-<some-hash> where each literal
# in the list appears as a data entry (keyed by literal key).
- name: my-java-server-env-vars
literals:
- JAVA_HOME=/opt/java/jdk
- JAVA_TOOL_OPTIONS=-agentlib:hprof
# generate a ConfigMap named my-system-env-<some-hash> where each key/value pair in the
# env.txt appears as a data entry (separated by \n).
- name: my-system-env
env: env.txt
```
{% endmethod %}
### resources
{% method %}
`resources` contains a list of Resource Config file paths to be customized. Each file may contain multiple
Resource Config definitions separated by `\n---\n`.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **resources** | []string | Paths to Resource Config files. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# list of files containing Resource Config to add
resources:
- path/to/resource.yaml
- another/path/to/resource.yaml
```
{% endmethod %}
### secretGenerator
{% method %}
`secretGenerator` contains a list of Secrets to generate.
By default, generated Secrets will have a hash appended to the name. The Secret hash is
appended after a `nameSuffix`, if one is specified. Changes to Secret data will cause a Secret
with a new name to be generated, triggering a rolling update to Workloads referencing the Secret.
Resources such as PodTemplates should reference Secrets by the secretGenerator `name` field,
and Kustomize will update the reference to match the generated name,
as well as `namePrefix`'s and `nameSuffix`'s.
**Note:** Hash suffix generation can be disabled for a subset of Secrets by creating a separate
`kustomization.yaml` and generating these Secrets there. This `kustomization.yaml` must set
`generatorOptions.disableNameSuffixHash=true`, and be used as a `base`. See
[generatorOptions](#generatoroptions) for more details.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **secretGenerator** | []SecretGeneratorArgs | List of Secrets to generate. |
##### SecretGeneratorArgs
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **behavior**  | string    | Merge behavior when the Secret generator is defined in a base. May be one of `create`, `replace`, `merge`. |
| **env** | string | Single file to generate Secret data entries from. Should be a path to a local *env* file, e.g. `path/to/file.env`, where each line of the file is a `key=value` pair. *Each line* will appear as an entry in the Secret data field. |
| **files**    | []string  | List of files to generate Secret data entries from. Each item should be a path to a local file, e.g. `path/to/file.config`, and the filename will appear as an entry in the Secret data field with its contents as a value. |
| **literals** | []string | List of literal Secret data entries. Each item should be a key and literal value, e.g. `somekey=somevalue`, and the key/value will appear as an entry in the Secret data field.|
| **name** | string | Name for the Secret. Modified by the `namePrefix` and `nameSuffix` fields. |
| **namespace** | string | Namespace for the Secret. Overridden by kustomize-wide `namespace` field.|
| **type** | string | Type of Secret. If type is "kubernetes.io/tls", then "literals" or "files" must have exactly two keys: "tls.key" and "tls.crt". |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
# generate a tls Secret
- name: app-tls
files:
- secret/tls.crt
- secret/tls.key
type: "kubernetes.io/tls"
- name: env-file-secret
# env is a path to a file to read lines of key=val
# you can only specify one env file per secret.
env: env.txt
type: Opaque
```
{% endmethod %}
## Transformers
Transformers modify Resources by adding, updating, or deleting fields. Transformers operate on the Resource
Config that is read in or generated by - e.g.
- `resources`
- `bases`
- `configMapGenerator`
- `secretGenerator`
### commonAnnotations
{% method %}
`commonAnnotations` sets annotations on all Resources. `commonAnnotations`'s from bases will stack - e.g.
if a `commonAnnotations` was set in a `base`, the new `commonAnnotations` will be added
to or override the base `commonAnnotations`.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **commonAnnotations** | map[string]string | Keys/Values for annotations. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonAnnotations:
annotationKey1: "annotationValue1"
annotationKey2: "annotationValue2"
```
{% endmethod %}
### commonLabels
{% method %}
This field sets labels on all Resources. `commonLabels`'s from bases will stack - e.g.
if a `commonLabels` was set in a `base`, the new `commonLabels` will be added
to or override the base `commonLabels`.
`commonLabels` will also be applied both to Label Selector fields and Label fields in PodTemplates.
**Note:** Because `commonLabels` are also applied to Selectors, they cannot be changed for some objects once those objects have been created (e.g. Deployment Selectors are immutable).
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **commonLabels** | map[string]string | Keys/Values for labels. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
labelKey1: "labelValue1"
labelKey2: "labelValue2"
```
{% endmethod %}
### images
{% method %}
`images` overrides image names and tags in all `[spec.template.]spec.containers.image` fields matching the
`name`. This is an alternative to creating patches to change images.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **images** | []Image | Images to override. |
##### Image
Definitions:
- *name*: portion of the `image` field value before the `:` - e.g. for `foo:v1` the name would be `foo`.
- *tag*: portion of the `image` field value after the `:` - e.g. for `foo:v1` the tag would be `v1`.
- *digest*: alternative to tag for referencing an image.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **name** | string | Match all `image` fields with this value for the *name*|
| **newName** | string | Replace the `image` field *name* with this value. |
| **newTag** | string | Replace the `image` field *tag* with this tag value. |
| **digest** | string | Replace the `image` field *tag* with this digest value. Includes the `sha256:` portion of the digest. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: postgres
newName: my-registry/my-postgres
newTag: v1
- name: nginx
newTag: 1.8.0
- name: my-demo-app
newName: my-app
- name: alpine
digest: sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
```
{% endmethod %}
### patchesJson6902
{% method %}
Each entry in this list should resolve to a kubernetes object and a JSON patch that will be applied
to the object. The JSON patch schema is documented at https://tools.ietf.org/html/rfc6902
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **patchesJson6902** | []Json6902 | List of patch definitions. |
##### Json6902
The target field points to a Kubernetes object by the object's group, version, kind, name and namespace.
The path field is a relative file path to a JSON patch file. The file contents can be either JSON or YAML.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **target** | Target | Target Resource for the patch. |
| **path** | string | Path to the JSON patch file. May be JSON or YAML. |
Example patch file:
```yaml
- op: add
path: /some/new/path
value: value
- op: replace
path: /some/existing/path
value: new value
```
##### Target
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **group** | string | Group of the Resource to patch. |
| **kind** | string | Kind of the Resource to patch. |
| **name** | string | Name of the Resource to patch. |
| **namespace** | string | Namespace of the Resource to patch. |
| **version** | string | Version of the Resource to patch. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- target:
version: v1
kind: Deployment
name: my-deployment
path: add_init_container.yaml
- target:
version: v1
kind: Service
name: my-service
path: add_service_annotation.yaml
```
{% endmethod %}
### patchesStrategicMerge
{% method %}
`patchesStrategicMerge` applies patches to the matching Resource Config (by Group/Version/Kind + Name/Namespace). Patch
files contain sparse Resource Config definitions - i.e. containing only the Resource Config fields to
add or override. Strategic merge patches are also called *overlays*.
Small patches that do one thing are best, e.g. modify a memory request/limit.
Small patches are easy to review and easy to compose together.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **patchesStrategicMerge** | []string | Paths to files containing sparse Resource Config. |
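For reference, a sparse patch file such as the `deployment_increase_memory.yaml` listed in the sample might look like the sketch below; the Deployment and container names are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          limits:
            memory: 512Mi
```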
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesStrategicMerge:
- service_port_8888.yaml
- deployment_increase_replicas.yaml
- deployment_increase_memory.yaml
```
{% endmethod %}
### namespace
{% method %}
This field sets the `namespace` of all namespaced Resources. If the namespace has already been set in the
Resource Config, this will override the namespace.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **namespace** | String | Namespace |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: "my-app-namespace"
```
{% endmethod %}
### namePrefix
{% method %}
`namePrefix` sets a name prefix on all Resources. `namePrefix`'s from bases will stack -
e.g. if a `namePrefix` was set in a `base`, the new `namePrefix` will be prepended to the `namePrefix` in the
`base`.
Fields that reference another Resource will also have the `namePrefix` applied so that the reference is
updated.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **namePrefix** | String | Value to prepend to all Resource names and references. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namePrefix: "my-app-name-prefix-"
```
{% endmethod %}
### nameSuffix
{% method %}
`nameSuffix` sets a name suffix on all Resources. `nameSuffix`'s from bases will stack -
e.g. if a `nameSuffix` was set in a `base`, the new `nameSuffix` will be appended to the `nameSuffix` in the
`base`.
Fields that reference another Resource will also have the `nameSuffix` applied so that the reference is
updated.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **nameSuffix** | String | Value to append to all Resource names and references. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
nameSuffix: "-my-app-name-suffix"
```
{% endmethod %}
### vars
{% method %}
`vars` defines values that can be substituted into Pod container arguments and environment variables.
This is necessary for wiring post-transformed fields into container arguments and environment variables.
e.g. Services names may be transformed by `namePrefix` and containers may need to refer to Service names
at runtime.
Vars are similar to the Kubernetes [Downward API](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-container-fields-as-values-for-environment-variables)
in that they allow Pods to reference information about the environment in which they are run.
Variables are referenced from container arguments using `$(MY_VAR_NAME)`.
Example:
```yaml
containers:
- image: myimage
command: ["start", "--host", "$(MY_SERVICE_NAME)"]
env:
- name: SECRET_TOKEN
value: $(SOME_SECRET_NAME)
```
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **vars** | []Var | List of variable declarations that may be referenced in container arguments. |
##### Var
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **name** | string | Name of the variable. Referenced by `$(NAME)`. |
| **objref** | object | Reference to the object containing the field to be referenced. The objref should use the original (un-transformed) object name. |
| **fieldref** | object | Reference to the field in the object. Defaults to `metadata.name` if unspecified. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
vars:
- name: SOME_SECRET_NAME
objref:
kind: Secret
name: my-secret
apiVersion: v1
- name: MY_SERVICE_NAME
objref:
kind: Service
name: my-service
apiVersion: v1
fieldref:
fieldpath: metadata.name
- name: ANOTHER_DEPLOYMENTS_POD_RESTART_POLICY
objref:
kind: Deployment
name: my-deployment
apiVersion: apps/v1
fieldref:
fieldpath: spec.template.spec.restartPolicy
```
{% endmethod %}
## Meta Options
Meta Options control how Kustomize generates and transforms Resource Config.
### configurations
`configurations` is used to configure the built-in Kustomize Transformers to work with CRDs. The built-in
Kustomize configurations can be found [here](https://github.com/kubernetes-sigs/kustomize/tree/master/pkg/transformers/config/defaultconfig)
Examples:
- *images* that should be updated by the `images` Transformer
- *object references* that should be updated by `namePrefix`, `nameSuffix`
- *secret* and *configmap* references that should be updated by `secretGenerator` and `configMapGenerator`
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **configurations** | []string | List of paths to yaml files containing Kustomize meta configuration. |
> kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configurations:
- mykind_configuration.yaml
```
##### commonAnnotations
{% method %}
Specify `commonAnnotations` in the **configuration file** to configure the Kustomize `commonAnnotations` field
to find additional annotation fields on CRDs.
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **commonAnnotations** | []Annotation | List of paths to annotations fields. |
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **create** | bool | If true, create the annotation field if it is not present on the Resource Config. |
| **group** | string | API Group of the object to add the annotation to. If unset, applies to all API Groups. |
| **kind** | string | Kind of the object to add the annotation to. If unset, applies to all Kinds. |
| **path** | string | Path to annotation field. |
| **version** | string | API Version of the object to add the annotation to. If unset, applies to all Versions. |
[Built-in examples](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/config/defaultconfig/commonannotations.go)
{% sample lang="yaml" %}
> mykind_configuration.yaml file referenced by the configurations field
```yaml
commonAnnotations:
# set annotations at metadata.annotations for all types
- path: metadata/annotations
# create metadata.annotations if it doesn't exist
create: true
```
{% endmethod %}
##### commonLabels
{% method %}
Specify `commonLabels` in the **configuration file** to configure the Kustomize `commonLabels` field to find
additional labels and selector fields on CRDs.
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **commonLabels** | []Label | List of paths to label fields. |
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **create** | bool | If true, create the label field if it is not present on the Resource Config. |
| **group** | string | API Group of the object to add the label to. If unset, applies to all API Groups. |
| **kind** | string | Kind of the object to add the label to. If unset, applies to all Kinds. |
| **path** | string | Path to label field. |
| **version** | string | API Version of the object to add the label to. If unset, applies to all Versions. |
[Built-in examples](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/config/defaultconfig/commonlabels.go)
{% sample lang="yaml" %}
> mykind_configuration.yaml file referenced by the configurations field
```yaml
commonLabels:
# set labels at metadata.labels for all types
- path: metadata/labels
# create metadata.labels if it doesn't exist
create: true
# set labels at spec.selector for v1.Service types
- path: spec/selector
create: true
version: v1
kind: Service
# set labels at spec.selector.matchLabels for Deployment types
- path: spec/selector/matchLabels
create: true
kind: Deployment
# set labels at spec...podAffinity...matchLabels for apps.Deployment types
- path: spec/template/spec/affinity/podAffinity/preferredDuringSchedulingIgnoredDuringExecution/podAffinityTerm/labelSelector/matchLabels
# do NOT create spec...podAffinity...matchLabels if it doesn't exist on the Deployment Resource Config
create: false
group: apps
kind: Deployment
```
{% endmethod %}
##### images
{% method %}
Specify `images` in the **configuration file** to configure the Kustomize `images` field to find additional
image fields on CRDs.
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **images** | []Image |List of paths to image fields. |
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **group** | string | API Group of the object containing the image field. If unset, applies to all API Groups. |
| **kind** | string | Kind of the object containing the image field. If unset, applies to all Kinds. |
| **path** | string | Path to the image field. |
| **version** | string | API Version of the object containing the image field. If unset, applies to all Versions. |
{% sample lang="yaml" %}
> mykind_configuration.yaml file referenced by the configurations field
```yaml
images:
# set images at spec.runLatest.container.image for MyKind types
- path: spec/runLatest/container/image
kind: MyKind
```
{% endmethod %}
##### Name References
{% method %}
Specify `nameReference` in the **configuration file** for CRDs that reference other objects by name - e.g.
Secrets, ConfigMaps, Services, etc.
`nameReference` registers, for a given type, that **it is referenced by name from another type** - e.g.
Secrets are referenced by Pods.
Doing so will configure Generators and Transformers to update the field value with a new name when
names are modified - e.g. `namePrefix`, `secretGenerator`.
| Name | Type | Desc |
| :------------ | :-------- | :---------------------------------- |
| **nameReference** | []Reference |List of types of objects that are referenced by other objects. |
| Name | Type | Desc |
| :------------- | :-------- | :---------------------------------- |
| **group** | string | API Group of the object **that is being referenced**. If unset, applies to all API Groups. |
| **kind** | string | Kind of the object **that is being referenced - e.g. Secret, ConfigMap**. |
| **fieldSpecs** | []FieldSpec | Object types that reference this object type. |
| **version** | string | API Version of the object **that is being referenced**. If unset, applies to all Versions. |
| Name | Type | Desc |
| :------------- | :-------- | :---------------------------------- |
| **group** | string | API Group of the object **that contains a reference**. If unset, applies to all API Groups. |
| **kind** | string | Kind of the object **that contains a reference - e.g. Pod, Deployment**. If unset, applies to all Kinds. |
| **path** | string | Path to the name field that is a reference. |
| **version** | string | API Version of the object **that contains a reference**. If unset, applies to all Versions. |
[Built-In Examples](https://github.com/kubernetes-sigs/kustomize/blob/master/pkg/transformers/config/defaultconfig/namereference.go)
{% sample lang="yaml" %}
> mykind_configuration.yaml file referenced by the configurations field
```yaml
nameReference:
# Configure named references to Secret objects to be updated by Transformers and Generators - e.g. namePrefix, secretGenerator, etc
- kind: Secret
version: v1
fieldSpecs:
# v1.Pods that reference a Secret in spec.volumes.secret.secretName will have it updated
- path: spec/volumes/secret/secretName
version: v1
kind: Pod
# v1.Pods that reference a Secret in spec.containers.env.valueFrom.secretKeyRef.name will have it updated
- path: spec/containers/env/valueFrom/secretKeyRef/name
version: v1
kind: Pod
```
{% endmethod %}
### generatorOptions
{% method %}
`generatorOptions` modifies behavior of all ConfigMap and Secret generators in the current `kustomization.yaml`.
generatorOptions from `bases` apply **only** to the Secrets and ConfigMaps generated within **the same
`kustomization.yaml`**.
**Note** It is possible to define generatorOptions for a subset of generated Resources by defining a `base` to generate
the Resources and setting the options there. This supports generating some ConfigMaps with hash-suffixes, and some
without.
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **generatorOptions** | GeneratorOptions | Options to define how Secrets and ConfigMaps are generated. |
##### GeneratorOptions
| Name | Type | Desc |
| :------------ | :------ | :---------------------------------- |
| **labels** | map[string]string | Labels to add to all Resources generated from this `kustomization.yaml`. |
| **annotations** | map[string]string | Annotations to add to all Resources generated from this `kustomization.yaml`. |
| **disableNameSuffixHash** | bool | If set to true, don't add a hash suffix to any Resources generated from this `kustomization.yaml`. |
{% sample lang="yaml" %}
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generatorOptions:
# labels to add to all generated resources
labels:
kustomize.generated.resources: somevalue
# annotations to add to all generated resources
annotations:
kustomize.generated.resource: somevalue
# disableNameSuffixHash if true disables the default behavior of adding a
# suffix to the names of generated resources that is a hash of
# the resource contents.
disableNameSuffixHash: true
```
{% endmethod %}

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Print information about the Cluster and Client versions
- Print information about the Control Plane
- Print information about Nodes
- Print information about APIs
{% endpanel %}
# Cluster Info
## Motivation
It may be necessary to learn about the Kubernetes cluster itself, rather
than just the workloads running in it. This can be useful for debugging
unexpected behavior.
## Versions
{% method %}
The `kubectl version` command prints the client and server versions. Note that
the client version may not be present for clients built locally from
source.
{% sample lang="yaml" %}
```bash
kubectl version
```
```bash
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T19:38:17Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.6-gke.2", GitCommit:"04ad69a117f331df6272a343b5d8f9e2aee5ab0c", GitTreeState:"clean", BuildDate:"2019-01-04T16:19:46Z", GoVersion:"go1.10.3b4", Compiler:"gc", Platform:"linux/amd64"}
```
{% endmethod %}
{% panel style="warning", title="Version Skew" %}
Kubectl supports +/-1 version skew with the Kubernetes cluster. Kubectl
versions that are more than 1 version ahead of or behind the cluster are
not guaranteed to be compatible.
{% endpanel %}
## Control Plane and Addons
{% method %}
The `kubectl cluster-info` command prints information about the control plane and
add-ons.
{% sample lang="yaml" %}
```bash
kubectl cluster-info
```
```bash
Kubernetes master is running at https://1.1.1.1
GLBCDefaultBackend is running at https://1.1.1.1/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://1.1.1.1/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://1.1.1.1/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://1.1.1.1/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
```
{% endmethod %}
{% panel style="info", title="Kube Proxy" %}
The URLs printed by `cluster-info` can be accessed at `127.0.0.1:8001` by
running `kubectl proxy`.
{% endpanel %}
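For example, assuming `curl` is installed locally, the proxy can be started in one terminal and an addon URL fetched through it from another:

```bash
# start a local proxy to the apiserver on 127.0.0.1:8001 (runs until interrupted)
kubectl proxy
```

```bash
# in a second terminal, the URLs printed by cluster-info are reachable through the proxy
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
```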
## Nodes
{% method %}
The `kubectl top node` and `kubectl top pod` commands print resource usage
information for Nodes and Pods.
{% sample lang="yaml" %}
```bash
kubectl top node
```
```bash
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-dev-default-pool-e1e7bf6a-cc8b 37m 1% 571Mi 10%
gke-dev-default-pool-e1e7bf6a-f0xh 103m 5% 1106Mi 19%
gke-dev-default-pool-e1e7bf6a-jfq5 139m 7% 1252Mi 22%
gke-dev-default-pool-e1e7bf6a-x37l 112m 5% 982Mi 17%
```
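`kubectl top pod` produces a similar per-Pod summary; the output below is illustrative:

```bash
kubectl top pod
```

```bash
NAME                     CPU(cores)   MEMORY(bytes)
nginx-78f5d695bd-7xzpq   1m           17Mi
```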
{% endmethod %}
## APIs
The `kubectl api-versions` and `kubectl api-resources` commands print information
about the available Kubernetes APIs. This information is read from the
Discovery Service.
{% method %}
Print the Resource Types available in the cluster.
{% sample lang="yaml" %}
```bash
kubectl api-resources
```
```bash
NAME SHORTNAMES APIGROUP NAMESPACED KIND
bindings true Binding
componentstatuses cs false ComponentStatus
configmaps cm true ConfigMap
endpoints ep true Endpoints
events ev true Event
limitranges limits true LimitRange
namespaces ns false Namespace
...
```
{% endmethod %}
{% method %}
Print the API versions available in the cluster.
{% sample lang="yaml" %}
```bash
kubectl api-versions
```
```bash
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
...
```
{% endmethod %}
{% panel style="info", title="Discovery" %}
The discovery information can be viewed at `127.0.0.1:8001/` by running
`kubectl proxy`. The Discovery information for a specific API can be found under either
`/api/v1` or `/apis/<group>/<version>`, depending on the API group -
e.g. `127.0.0.1:8001/apis/apps/v1`
{% endpanel %}
{% method %}
The `kubectl explain` command can be used to print metadata about specific
Resource types. This is useful for learning about the type.
{% sample lang="yaml" %}
```bash
kubectl explain deployment --api-version apps/v1
```
```bash
KIND: Deployment
VERSION: apps/v1
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
metadata <Object>
Standard object metadata.
spec <Object>
Specification of the desired behavior of the Deployment.
status <Object>
Most recently observed status of the Deployment.
```
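`kubectl explain` can also drill into a specific field by appending a dotted path to the type; for example:

```bash
kubectl explain deployment.spec.strategy --api-version apps/v1
```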
{% endmethod %}

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Print verbose debug information about a Resource
{% endpanel %}
# Describe Resources
## Motivation
{% method %}
Describe is a **higher level printing operation that may aggregate data from other sources** in addition
to the Resource being queried (e.g. Events).
Describe pulls out the most important information about a Resource from the Resource itself and related
Resources, and formats and prints this information on multiple lines.
- Aggregates data from related Resources
- Formats Verbose Output for debugging
{% sample lang="yaml" %}
```bash
kubectl describe deployments
```
```bash
Name: nginx
Namespace: default
CreationTimestamp: Thu, 15 Nov 2018 10:58:03 -0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=1
Selector: app=nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-78f5d695bd (1/1 replicas created)
Events: <none>
```
{% endmethod %}
{% panel style="info", title="Get vs Describe" %}
When Describing a Resource, it may aggregate information from several other Resources. For instance Describing
a Node will aggregate Pod Resources to print the Node utilization.
When Getting a Resource, it will only print information available from reading that Resource. While Get may aggregate
data from the *fields* of that Resource, it won't look at fields from other Resources.
{% endpanel %}
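For example, describing a Node (the node name below is illustrative) includes an *Allocated resources* section aggregated from the Pods scheduled onto it:

```bash
kubectl describe node gke-dev-default-pool-e1e7bf6a-cc8b
```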

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Format and print specific fields from Resources
- Use when scripting with Get
{% endpanel %}
# Print Resource Fields
## Motivation
Kubectl Get is able to pull out fields from Resources it queries and format them as output.
This may be **useful for scripting or gathering data** about Resources from a Kubernetes cluster.
## Get
The `kubectl get` command reads Resources from the cluster and formats them as output. The examples in
this chapter will query for Resources by providing Get the *Resource Type* with the
Version and Group as an argument.
For more query options see [Queries and Options](queries_and_options.md).
Kubectl can format and print specific fields from Resources using Json Path.
{% panel style="warning", title="Scripting Pitfalls" %}
By default, if no API group or version is specified, kubectl will use the group and version preferred by
the apiserver.
Because the **Resource structure may change between API groups and Versions**, users *should* specify the
API Group and Version when emitting fields from `kubectl get` to make sure the command does not break
in future releases.
Failure to do this may result in the different API group / version being used after a cluster upgrade, and
this group / version may have changed the representation of fields.
{% endpanel %}
### JSON Path
Print the fields selected by a JSONPath expression.
**Note:** The JSONPath template can also be read from a file using `-o jsonpath-file` (see the sketch after the list below).
- JSON Path template is composed of JSONPath expressions enclosed by {}. In addition to the original JSONPath syntax, several capabilities are added:
- The `$` operator is optional (the expression starts from the root object by default).
- Use "" to quote text inside JSONPath expressions.
- Use range operator to iterate lists.
- Use negative slice indices to step backwards through a list. Negative indices do not “wrap around” a list. They are valid as long as -index + listLength >= 0.
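A minimal sketch of reading the template from a file, as mentioned in the Note above; the file name is an assumption:

```bash
# store a JSONPath template in a file and pass it with -o jsonpath-file
echo -n '{range .items[*]}{.metadata.name}{"\n"}{end}' > jsonpath.txt
kubectl get deployment.v1.apps -o jsonpath-file=jsonpath.txt
```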
### JSON Path Symbols Table
| Function | Description | Example | Result |
|---|---|---|---|
| text | the plain text | kind is {.kind} | kind is List |
| @ | the current object | {@} | the same as input |
| . or [] | child operator | {.kind} or {[kind]} | List |
| .. | recursive descent | {..name} | 127.0.0.1 127.0.0.2 myself e2e |
| * | wildcard. Get all objects | {.items[*].metadata.name} | [127.0.0.1 127.0.0.2] |
| [start:end:step] | subscript operator | {.users[0].name} | myself |
| [,] | union operator | {.items[*][metadata.name, status.capacity]} |127.0.0.1 127.0.0.2 map[cpu:4] map[cpu:8] |
| ?() | filter | {.users[?(@.name=="e2e")].user.password} | secret |
| range, end | iterate list | {range .items[*]}[{.metadata.name}, {.status.capacity}] {end} | [127.0.0.1, map[cpu:4]] [127.0.0.2, map[cpu:8]] |
| " | quote interpreted string | {range .items[*]}{.metadata.name}{"\t"}{end} | 127.0.0.1 127.0.0.2 |
---
{% method %}
Print the JSON representation of the first Deployment in the list on a single line.
{% sample lang="yaml" %}
```bash
kubectl get deployment.v1.apps -o=jsonpath='{.items[0]}{"\n"}'
```
```bash
map[apiVersion:apps/v1 kind:Deployment...replicas:1 updatedReplicas:1]]
```
{% endmethod %}
---
{% method %}
Print the `metadata.name` field for the first Deployment in the list.
{% sample lang="yaml" %}
```bash
kubectl get deployment.v1.apps -o=jsonpath='{.items[0].metadata.name}{"\n"}'
```
```bash
nginx
```
{% endmethod %}
---
{% method %}
For each Deployment, print its `metadata.name` field and a newline afterward.
{% sample lang="yaml" %}
```bash
kubectl get deployment.v1.apps -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
```
```bash
nginx
nginx2
```
{% endmethod %}
---
{% method %}
For each Deployment, print its `metadata.name` and `.status.availableReplicas`.
{% sample lang="yaml" %}
```bash
kubectl get deployment.v1.apps -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.availableReplicas}{"\n"}{end}'
```
```bash
nginx 1
nginx2 1
```
{% endmethod %}
---
{% method %}
Print the list of Deployments as a single line.
{% sample lang="yaml" %}
```bash
kubectl get deployment.v1.apps -o=jsonpath='{@}{"\n"}'
```
```bash
map[kind:List apiVersion:v1 metadata:map[selfLink: resourceVersion:] items:[map[apiVersion:apps/v1 kind:Deployment...replicas:1 updatedReplicas:1]]]]
```
{% endmethod %}
---
{% method %}
Print each Deployment on a new line.
{% sample lang="yaml" %}
```bash
kubectl get deployment.v1.apps -o=jsonpath='{range .items[*]}{@}{"\n"}{end}'
```
```bash
map[kind:Deployment...readyReplicas:1]]
map[kind:Deployment...readyReplicas:1]]
```
{% endmethod %}
---
{% panel style="info", title="Literal Syntax" %}
On Windows, you must double quote any JSONPath template that contains spaces (not single quote as shown above for bash).
This in turn means that you must use a single quote or escaped double quote around any literals in the template.
For example:
```bash
C:\> kubectl get pods -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.startTime}{'\n'}{end}"
```
{% endpanel %}

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Queries for Getting or Describing Resources
{% endpanel %}
# Matching Objects from Get and Describing
## Motivation
Match Resources with Queries when Getting or Describing them.
{% method %}
## Resource Config By `kustomization.yaml`
Get all Resources provided by the `kustomization.yaml` in project/.
{% sample lang="yaml" %}
```bash
kubectl get -k project/
```
{% endmethod %}
{% method %}
## Resource Config By Dir
Get all Resources present in the Resource Config for a directory.
{% sample lang="yaml" %}
```bash
kubectl get -f configs/
```
{% endmethod %}
{% method %}
## Resource Types
Get **all** Resources in a namespace for a given type.
The Group and Version for the Resource are determined by the apiserver discovery service.
The Singular, Plural, and Short Name forms may also be used with *Types with Name* and *Types with Selectors*.
{% sample lang="yaml" %}
```bash
# Plural
kubectl get deployments
```
```bash
# Singular
kubectl get deployment
```
```bash
# Short name
kubectl get deploy
```
{% endmethod %}
{% method %}
## Resource Types with Group / Version
Get **all** Resources in a namespace for a given type.
The Group and Version for the Resource are explicit.
{% sample lang="yaml" %}
```bash
kubectl get deployments.apps
```
```bash
kubectl get deployments.v1.apps
```
{% endmethod %}
{% method %}
## Resource Types with Name
Get named Resources in a namespace for a given type.
{% sample lang="yaml" %}
```bash
kubectl get deployment nginx
```
{% endmethod %}
{% method %}
## Label Selector
Get **all** Resources in a namespace **matching a label selector** for a given type.
{% sample lang="yaml" %}
```bash
kubectl get deployments -l app=nginx
```
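Multiple labels can be required at once by separating them with commas; the label values here are illustrative:

```bash
kubectl get deployments -l app=nginx,environment=prod
```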
{% endmethod %}
{% method %}
## Namespaces
By default, Get and Describe will fetch Resources in the default namespace or the namespace specified
with `--namespace`.
The `--all-namespaces` flag will **fetch Resources from all namespaces**.
{% sample lang="yaml" %}
```bash
kubectl get deployments --all-namespaces
```
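To fetch from a single, explicitly named namespace instead (the namespace name is illustrative):

```bash
kubectl get deployments --namespace my-app-namespace
```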
{% endmethod %}
{% method %}
## List multiple Resource types
Get and Describe can accept **multiple Resource types**, and will print them in separate sections.
{% sample lang="yaml" %}
```bash
kubectl get deployments,services
```
{% endmethod %}
{% method %}
## List multiple Resource types by name
Get and Describe can accept **multiple Resource types and names**.
{% sample lang="yaml" %}
```bash
kubectl get rc/web service/frontend pods/web-pod-13je7
```
{% endmethod %}
{% method %}
## Uninitialized
Kubernetes **Resources may be hidden until they have gone through an initialization process**.
These Resources can be viewed with the `--include-uninitialized` flag.
{% sample lang="yaml" %}
```bash
kubectl get deployments --include-uninitialized
```
{% endmethod %}
{% method %}
## Not Found
By default, Get or Describe **will return an error if an object is requested and doesn't exist**.
The `--ignore-not-found` flag will cause kubectl to exit 0 if the Resource is not found.
{% sample lang="yaml" %}
```bash
kubectl get deployment nginx --ignore-not-found
```
{% endmethod %}

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Get or List Raw Resources in a cluster as Yaml or Json
{% endpanel %}
# Print Raw Resource
## Motivation
Inspecting or Debugging Resources.
The Kubernetes Resources stored in etcd by the apiserver have **many more fields than
are shown in the summarized views**. Users can learn much more about a Resource by
viewing the Raw Resource as Yaml or Json. The Raw Resource will contain:
- fields specified by the **user** in the Resource Config (e.g. `metadata.name`)
- metadata fields owned by the **apiserver** (e.g. `metadata.creationTimestamp`)
- fields defaulted by the **apiserver** (e.g. `spec..imagePullPolicy`)
- fields set by **Controllers** (e.g. `spec.clusterIP`, `status`)
## Get
The `kubectl get` command reads Resources from the cluster and formats them as output. The examples in
this chapter will query for Resources by providing Get the *Resource Type* as an argument.
For more query options see [Queries and Options](queries_and_options.md).
{% method %}
### YAML
Print the Raw Resource formatting it as YAML.
{% sample lang="yaml" %}
```bash
kubectl get deployments -o yaml
```
```yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: 2018-11-15T18:58:03Z
generation: 1
labels:
app: nginx
name: nginx
namespace: default
resourceVersion: "1672574"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
uid: 6131547f-e908-11e8-9ff6-42010a8a00d1
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: nginx
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: 2018-11-15T18:58:10Z
lastUpdateTime: 2018-11-15T18:58:10Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2018-11-15T18:58:03Z
lastUpdateTime: 2018-11-15T18:58:10Z
message: ReplicaSet "nginx-78f5d695bd" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
kind: List
metadata:
resourceVersion: ""
selfLink: ""
```
{% endmethod %}
---
{% method %}
### JSON
Print the Raw Resource formatting it as JSON.
{% sample lang="yaml" %}
```bash
kubectl get deployments -o json
```
```json
{
"apiVersion": "v1",
"items": [
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "1"
},
"creationTimestamp": "2018-11-15T18:58:03Z",
"generation": 1,
"labels": {
"app": "nginx"
},
"name": "nginx",
"namespace": "default",
"resourceVersion": "1672574",
"selfLink": "/apis/extensions/v1beta1/namespaces/default/deployments/nginx",
"uid": "6131547f-e908-11e8-9ff6-42010a8a00d1"
},
"spec": {
"progressDeadlineSeconds": 600,
"replicas": 1,
"revisionHistoryLimit": 10,
"selector": {
"matchLabels": {
"app": "nginx"
}
},
"strategy": {
"rollingUpdate": {
"maxSurge": "25%",
"maxUnavailable": "25%"
},
"type": "RollingUpdate"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "nginx"
}
},
"spec": {
"containers": [
{
"image": "nginx",
"imagePullPolicy": "Always",
"name": "nginx",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File"
}
],
"dnsPolicy": "ClusterFirst",
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {},
"terminationGracePeriodSeconds": 30
}
}
},
"status": {
"availableReplicas": 1,
"conditions": [
{
"lastTransitionTime": "2018-11-15T18:58:10Z",
"lastUpdateTime": "2018-11-15T18:58:10Z",
"message": "Deployment has minimum availability.",
"reason": "MinimumReplicasAvailable",
"status": "True",
"type": "Available"
},
{
"lastTransitionTime": "2018-11-15T18:58:03Z",
"lastUpdateTime": "2018-11-15T18:58:10Z",
"message": "ReplicaSet \"nginx-78f5d695bd\" has successfully progressed.",
"reason": "NewReplicaSetAvailable",
"status": "True",
"type": "Progressing"
}
],
"observedGeneration": 1,
"readyReplicas": 1,
"replicas": 1,
"updatedReplicas": 1
}
}
],
"kind": "List",
"metadata": {
"resourceVersion": "",
"selfLink": ""
}
}
```
{% endmethod %}

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Get a Summary of Resources Running in the Cluster
{% endpanel %}
# Summarizing Resources
## Motivation
Quickly summarizing a collection of Resources and their state.
Summarizing Resource State using a columnar format is the most common way to view cluster
state when developing applications or triaging issues. The **columnar view gives a compact
summary of the most relevant information** for a collection of Resources.
## Get
The `kubectl get` command reads Resources from the cluster and formats them as output. The examples in
this chapter will query for Resources by providing Get the *Resource Type* as an argument.
For more query options see [Queries and Options](queries_and_options.md).
{% method %}
### Default
If no output format is specified, Get will print a default set of columns.
**Note:** Some columns *may* not directly map to fields on the Resource, but instead may
be a summary of fields.
{% sample lang="yaml" %}
```bash
kubectl get deployments nginx
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 0 5s
```
{% endmethod %}
---
{% method %}
### Wide
Print the default columns plus some additional columns.
**Note:** Some columns *may* not directly map to fields on the Resource, but instead may
be a summary of fields.
{% sample lang="yaml" %}
```bash
kubectl get -o=wide deployments nginx
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 1 1 1 1 26s nginx nginx app=nginx
```
{% endmethod %}
---
{% method %}
### Custom Columns
Print out specific fields as Columns.
**Note:** Custom Columns can also be read from a file using `-o custom-columns-file`.
{% sample lang="yaml" %}
```bash
kubectl get deployments -o custom-columns="Name:metadata.name,Replicas:spec.replicas,Strategy:spec.strategy.type"
```
```bash
Name Replicas Strategy
nginx 1 RollingUpdate
```
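As mentioned in the Note above, the same columns can also be read from a template file; a minimal sketch (the file name is an assumption):

```bash
# first line of the template holds the column headers, second line the field paths
cat > columns.txt <<'EOF'
Name            Replicas        Strategy
metadata.name   spec.replicas   spec.strategy.type
EOF
kubectl get deployments -o custom-columns-file=columns.txt
```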
{% endmethod %}
---
{% method %}
#### Labels
Print out specific labels, each as its own column.
{% sample lang="yaml" %}
```bash
kubectl get deployments -L=app
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE APP
nginx 1 1 1 1 8m nginx
```
{% endmethod %}
---
{% method %}
### Show Labels
Print out all labels on each Resource in a single column (last).
{% sample lang="yaml" %}
```bash
kubectl get deployment --show-labels
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE LABELS
nginx 1 1 1 1 7m app=nginx
```
{% endmethod %}
---
{% method %}
### Show Kind
Print out the Group.Kind as part of the Name column.
**Note:** This can be useful if the user did not specify the group in the command and
they want to know which API is being used.
{% sample lang="yaml" %}
```bash
kubectl get deployments --show-kind
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/nginx 1 1 1 1 8m
```
{% endmethod %}

{% panel style="success", title="Providing Feedback" %}
**Provide feedback at the [survey](https://www.surveymonkey.com/r/JH35X82)**
{% endpanel %}
{% panel style="info", title="TL;DR" %}
- Continuously Watch and print Resources as they change
{% endpanel %}
# Watching Resources for changes
## Motivation
Print Resources as they are updated.
{% method %}
It is possible to have `kubectl get` **continuously watch for changes to objects**, and print the objects
when they are changed or when the watch is reestablished.
{% sample lang="yaml" %}
```bash
kubectl get deployments --watch
```
```bash
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 6h
nginx2 1 1 1 1 21m
```
{% endmethod %}
{% panel style="danger", title="Watch Timeouts" %}
Watch **times out after 5 minutes**, after which kubectl will re-establish the watch and print the
resources.
{% endpanel %}
{% method %}
It is possible to have `kubectl get` continuously watch for changes to objects **without fetching them first**
using the `--watch-only` flag.
{% sample lang="yaml" %}
```bash
kubectl get deployments --watch-only
```
{% endmethod %}

# SIG cli maintainers Guide
## Sustaining engineering tasks
The following tasks need to be performed consistently as a part of maintaining the health
of SIG cli. We will be developing an oncall rotation for working on these tasks, where
the oncall is responsible for doing each task daily.
### Issue triage
Routinely monitor the newly filed issues and triage them to make sure we identify regressions.
[Kubectl repo](https://github.com/kubernetes/kubectl/issues)
[Kubernetes repo](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3Asig%2Fcli)
Look for:
- Requests for help
- Don't spend a lot of time on these, but answer and close them if it is easy
- Regressions and bugs
- Find the root cause
- Triage the severity
- Issues only occurring in old versions but not in new versions are less severe
- Simple issues for new contributors
- Label these with "for-new-contributors"
- Give them a priority
- Make sure they are
- Small
- Well scoped
- In areas of code with minimal technical debt
- In areas of code with strong ownership already
- Feature requests
- Do one of
- Close them with an explanation along the lines of "Don't have capacity right now, try reopening in 6 months"
- Label them with a "priority"
### Test triage
Monitor [test grid](https://k8s-testgrid.appspot.com/sig-cli-master)
and make sure the tests are passing.
If any tests are failing, debug them and send a fix. Ask for help if you get stuck.
### PR review
Make sure PRs aren't getting stuck without attention. If reviewers routinely don't respond
to PRs within a few days, we should take those reviewers out of the list.
Look through the PR list with [SIG cli](https://github.com/kubernetes/kubernetes/pulls?utf8=%E2%9C%93&q=is%3Apr%20is%3Aopen%20label%3Asig%2Fcli)
## New contributor assistance
- Look through issues labeled "for-new-contributors" that are assigned, and make sure they are active.
If they haven't had activity in a couple days, ping the assignee and ask if help is needed.
- Identify issues for new contributors to pick up
- Figure out a progression for new contributors to become reviewers
## Per-release tasks
### At the start of the dev cycle
- Write planned features for each release
- Use the [template](../template.md)
### During code-freeze
- Daily look at issues labeled with [sig/cli in the milestone](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3Asig%2Fcli%20milestone%3Av1.9%20) and make sure they are owned and make progress
- **Note:** You will need to update the milestone in the link to the current milestone
## Every 3-6 months tasks
### (3 months) Report about SIG cli at the community meeting
TODO: fill this in
### (6 months) Setup a SIG cli face-to-face
TODO: fill this in

# SIG CLI Issue Backlog
Grooming work for new and existing contributors
- Link: https://goo.gl/YEq33R
- Author: @pwittrock
- Last updated: 10/25/17
## Background
A goal of SIG CLI is to keep the SIG open to contributions from anyone willing to do the work to get involved.
While many folks have shown interest in contributing, we have struggled to keep most folks engaged.
Kubernetes recently conducted a survey of new contributors, and major themes were that:
- Contributors don't know where to start
- There are too many details to learn before contributing
- Communication is hard / everyone is too busy to help
- Hard to get reviewers on PRs
These challenges can be reduced by:
- Providing a backlog for contributors to browse and pull work from.
- Scoping work that can be done with minimal experience so folks can pick up and work on it.
- Marking issues as good first time issues if they can be done by someone with no experience.
- Reducing the need for constant communication by having the work be well defined and clearly scoped in the issues
themselves.
- Ensuring each issue has a stake holder that is committed to seeing that changes are reviewed.
[New Contributors Project]: https://github.com/kubernetes/kubectl/projects/3
## Contribution lifecycle
1. A [good issue](#what-makes-a-good-issue) is created with description and labels.
1. SIG agrees that work for issue will be accepted.
1. Issue moved to the _backlog_ column in the [New Contributors Project].
1. Contributor assigns issue to self, or asks issue be assigned if they are not a Kubernetes org member.
1. Issue moved to the _assigned_ column in the [New Contributors Project].
1. Contributor updates issue weekly with status updates, and pushes work to fork
- Periodic feedback provided
- Discussion between contributor and stakeholder occurs on issue
1. Contributor sends PR for review
- Stakeholder ensures the appropriate reviewers exist
- Discussion and updates occur on the PR
1. PR accepted and merged
## What makes a good issue?
### Stakeholder and Contributor
A stakeholder typically files the issue, wants to see the work done, and will
find reviewers for PRs that address the issue.
The contributor is the issue assignee - they provide PRs for review
to close the issue.
The stakeholder may become the contributor. They must find a new stakeholder to
review the work and help follow through on issue closure.
### Encapsulated
Issues that require modifying large pieces of existing code are typically hard
to accept without multiple reviewers, require a high degree of communication and require knowledge of the
existing codebase.
This makes them bad candidates for contributors looking to get started independently.
Issues with good encapsulation have the following properties:
- Minimal wiring or changes to existing code
- Can disable / enable with a flag
- Easy to review the contribution on its own without needing to examine other parts of the system
- Low chance of needing to rebase or conflicting with changes made in parallel
### Consensus on work within the SIG
Work described in issues in the backlog should be agreed upon by the SIG. PRs
sent for review should have the code reviewed, not the _reason_ for doing the
PR.
SIG CLI needs to come up with a low-overhead process for accepting proposed work.
1. Create an issue for the work
2. SIG agrees to accept work for the issue (as described) if it is completed
3. Add issue to the issue backlog
## Types of code contributions
### Code documentation
Documenting code is an excellent starter task. It is easier to merge and get consensus on than writing tests or
features. It also provides a structured approach to learning about the system and its components.
For the packages that need it most, understanding the code base well enough to document it may be an involved task.
- Adding doc.go files to packages that are missing them
- Updating doc.go files that are empty placeholders
- Adding examples of how to use types and functions to packages
- Documenting functions with their purpose and usage
### Test coverage
Improving test coverage and augmenting e2e tests with integration tests is also a good candidate for 1st and
2nd time contributors. Writing tests for libraries requires understanding how they behave and are meant to be used.
Improving code coverage allows the project to move more quickly by reducing regression issues that the SIG must field,
and by providing a safety net for code reviewers ensuring changes don't break existing functionality.
[e2e tests]: https://github.com/kubernetes/community/blob/master/contributors/devel/e2e-tests.md
- Write unit tests for functionality currently only covered by integration and [e2e tests].
> Integration tests may run processes, such as the apiserver, but do so locally.
> E2e tests run a full Kubernetes cluster (remote).
- Write integration tests for functionality currently only covered by [e2e tests].
- Improve coverage for edge cases and different inputs to functions.
- Improve handling of invalid arguments.
- Refactoring existing tests to pull out common code into reusable functions.
> _This should be very well scoped as it impacts existing tests and reviewers need to make sure nothing breaks._
### New libraries
Encapsulated libraries (collections of functions devoted to one simple purpose - e.g. date/time utils)
are great contributions for experienced contributors - either programming in Go, or
with Kubernetes.
Because the libraries are encapsulated, it is easier for reviewers to determine the correctness
of their interactions with the existing system. If the functionality is new or can be disabled with a flag, the
risk of accepting the change is reduced, improving the chance the change will be accepted.
### Modifying existing libraries
Tasks to perform non-trivial changes to existing libraries should be reserved only for folks who have made
multiple successful contributions of code - either tests or libraries. PRs to modify existing libraries typically
have multiple reviewers, and can have subtle side effects that need to be carefully checked for.
Improvements in documentation and testing (above) reduces the burden to modify existing code.
## Managing a backlog issue
### Setting Labels
For contributors to pick up new tasks independently, the scope and complexity of the task must be well documented
on the issue. We use labels to define the most important metadata about the issues in the backlog.
#### Size
- size/S
> 4-10 hours
- size/M
> 10-20 hours
- size/L
> 20+ hours
#### Type (docs / tests / feature)
- type/code-cleanup
> Usually some refactoring or small rewrites of code.
- type/code-documentation
> Write doc.go with package overview and examples or document types and functions.
- type/code-feature
> Usually a new go package / library for some functionality. Should be encapsulated.
- type/code-test-coverage
> Audit tests for a package. Run coverage tools and also manually look at what functions are missing unit or
> integration tests. Write tests for these functions.
### Description
- Clear description of what the outcome is
- Pointers to examples if they exist
- Clear stakeholder who will be responsive to the issue and is committed to getting the functionality added
## Assigning an issue once work has started
1. Contributor messages stakeholder on the issue, and maybe on slack
1. Stakeholder moves issue from backlog to assigned
1. Contributor updates issue weekly and publishes work in progress to a fork
> The issue should be updated with a link to the fork.
1. Once work is ready for review, contributor files a PR and notifies the stakeholder
## What is expected as part of contributing a library?
### Documentation
- doc.go
- Comments on all functions and types
### Tests
- Unit tests for functions and types
- Integration tests with a local control plane (forthcoming)
- Maybe e2e tests (only a couple)
### Ownership of addressing issues
- Fix bugs discovered in the contribution after it has been accepted

Use this template for writing roadmaps for releases
# Release 1.X roadmap
## Planned Features
### Feature Y
Short description
Owners
- Link to design proposal
- Link to issue
### Feature Z
Short description
- Link to design proposal
- Link to issue
## Planned Technical Debt Cleanup
## Planned Bug Fixes