Sync published with master (#8822)

* Updated Windows Release that supports Kubernetes

Changed from the outdated Edge release to a stable release. The Kubernetes page already reflects this version (so it's an error on this page only).

* Interlock link fixes (#8798)

* Remove outdated links/fix links

* Next steps link fix

* Next steps link fixes

* Logging driver 920 (#8625)

* Logging driver port from vnext-engine

* Update json-file.md

* Update json-file.md

* Port changes from vnext-engine

* Updates based on feedback

* Added note back in

* Added note back in

* Added limitations per Anusha

* New dual logging info

* Added link to new topic

Needs verification.

* Changes per feedback.

* Updates per feedback

* Updates per feedback

* Updated 20m

* Added CE version

* Added missing comma

* Updates per feedback

* Add raw tag
Add TOC entry - subject to change

* Add entry for local logging driver

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Update config/containers/logging/configure.md

Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>

* Updates per feedback

* Updates per feedback

* Update zfs-driver.md (#8735)

* Update zfs-driver.md

* Add suggested correction

* Removed HA Proxy Link

* Added Azure Disk and Azure File Storage for UCP Workloads (#8774)

* Added Azure Disk and Azure File

I have added Azure Disk and Azure file documentation for use with UCP
3.0 or newer.

* Added the Azure Disk Content
* Added the Azure File Content
* Updated the Toc to include Azure Disk and Azure File

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Responding to feedback, inc changing Azure File to Azure Files

Following on from Steven and Deeps' feedback, this commit addresses those
nits, including changing `Operators` to `Platform Operators`, switching
`Azure File` to `Azure Files`, and many small formatting changes.

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Minor style updates

* Minor style updates

* Final edits

* Removed Ubuntu 14.04 warnings from Docker UCP install Page (#8804)

We dropped support for Ubuntu 14.04 in Enterprise 2.1 / UCP 3.1; however,
the installation instructions still carry 14.04 warnings.

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Fix broken link (#8801)

* ubuntu.md: remove old docker-ce-cli (#8665)

I hit the following error when "upgrading" docker-ce 18.09 to docker-ee 17.06:

> dpkg: error processing archive /var/cache/apt/archives/docker-ee_3%3a17.06.2~ee~19~3-0~ubuntu_amd64.deb (--unpack):
> trying to overwrite '/usr/share/fish/vendor_completions.d/docker.fish', which is also in package docker-ce-cli 5:18.09.4~2.1.rc1-0~ubuntu-xenial

This commit adds `docker-ce-cli` to the list in "uninstall old packages" to fix this.

* Updated UCP CLI Reference to 3.1.7 (#8805)

- Updated all of the UCP 3.1.7 references.
- Alphabetized each reference.
- Added whether a value is expected or not after each variable.

Signed-off-by: Olly Pomeroy <olly@docker.com>

* Fix numbering issue

* Fix formatting

* Added UCP Kubernetes Secure RBAC Defaults (#8810)

* Added Kubernetes Secure RBAC Defaults

* Style updates

* Final edits
Maria Bermudez 2019-05-20 18:53:05 -07:00 committed by GitHub
parent da6c0eb2c4
commit be059a0c15
23 changed files with 670 additions and 155 deletions

View File

@ -1377,8 +1377,12 @@ manuals:
path: /ee/ucp/kubernetes/kubernetes-network-encryption/
- sectiontitle: Persistent Storage
section:
- title: Use NFS storage
- title: Use NFS Storage
path: /ee/ucp/kubernetes/storage/use-nfs-volumes/
- title: Use Azure Disk Storage
path: /ee/ucp/kubernetes/storage/use-azure-disk/
- title: Use Azure Files Storage
path: /ee/ucp/kubernetes/storage/use-azure-files/
- title: Use AWS EBS Storage
path: /ee/ucp/kubernetes/storage/configure-aws-storage/
- title: API reference

View File

@ -27,7 +27,8 @@ Starting with Docker Engine Enterprise 18.03.1-ee-1, you can use `docker logs` t
logs regardless of the configured logging driver or plugin. This capability, sometimes referred to
as dual logging, allows you to use `docker logs` to read container logs locally in a consistent format,
regardless of the remote log driver used, because the engine is configured to log information to the “local”
logging driver. Refer to [Configure the default logging driver](/configure) for additional information.
logging driver. Refer to [Configure the default logging driver](/config/containers/logging/configure) for additional information.
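As a sketch of the dual logging setup described above, a `daemon.json` that ships container logs to a remote Splunk collector (the token and URL values here are illustrative placeholders) could look like the following. With dual logging enabled, `docker logs <container>` still reads a consistent local copy from the "local" driver's cache even though the configured driver is remote:

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "<token>",
    "splunk-url": "https://splunk.example.com:8088"
  }
}
```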
## Prerequisites

View File

@ -16,18 +16,18 @@ Docker Trusted Registry has a global setting for repository event auto-deletion.
3. Scroll down to **Repository Events** and turn on ***Auto-Deletion***.
![](../../images/auto-delete-repo-events-0.png){: .img-fluid .with-border}
4. Specify the conditions with which an event auto-deletion will be triggered.
![](../../images/auto-delete-repo-events-1.png){: .img-fluid .with-border}
DTR allows you to set your auto-deletion conditions based on the following optional repository event attributes:
| Name | Description | Example |
|:----------------|:---------------------------------------------------| :----------------|
| Age | Lets you remove events older than your specified number of hours, days, weeks or months| `2 months` |
| Max number of events | Lets you specify the maximum number of events allowed in the repositories. | `6000` |
If you check and specify both, events in your repositories will be removed during garbage collection if either condition is met. You should see a confirmation message right away.
@ -35,7 +35,7 @@ If you check and specify both, events in your repositories will be removed durin
6. Navigate to **System > Job Logs** to confirm that `onlinegc` has happened.
![](../../images/auto-delete-repo-events-2.png){: .img-fluid .with-border}
## Where to go next

View File

@ -52,17 +52,6 @@ Make sure you follow the [UCP System requirements](system-requirements.md)
for opening networking ports. Ensure that your hardware or software firewalls
are open appropriately or disabled.
> Ubuntu 14.04 mounts
>
> For UCP to install correctly on Ubuntu 14.04, `/mnt` and other mounts
> must be shared:
> ```
> sudo mount --make-shared /mnt
> sudo mount --make-shared /
> sudo mount --make-shared /run
> sudo mount --make-shared /dev
> ```
To install UCP:
1. Use ssh to log in to the host where you want to install UCP.

View File

@ -23,7 +23,7 @@ For an example, see [Deploy stateless app with RBAC](deploy-stateless-app.md).
## Subjects
A subject represents a user, team, organization, or service account. A subject
A subject represents a user, team, organization, or a service account. A subject
can be granted a role that defines permitted operations against one or more
resource sets.
@ -34,19 +34,19 @@ resource sets.
- **Organization**: A group of teams that share a specific set of permissions,
defined by the roles of the organization.
- **Service account**: A Kubernetes object that enables a workload to access
cluster resources that are assigned to a namespace.
cluster resources which are assigned to a namespace.
Learn to [create and configure users and teams](create-users-and-teams-manually.md).
## Roles
Roles define what operations can be done by whom. A role is a set of permitted
operations against a type of resource, like a container or volume, that's
assigned to a user or team with a grant.
operations against a type of resource, like a container or volume, which is
assigned to a user or a team with a grant.
For example, the built-in role, **Restricted Control**, includes permission to
For example, the built-in role, **Restricted Control**, includes permissions to
view and schedule nodes but not to update nodes. A custom **DBA** role might
include permissions to `r-w-x` volumes and secrets.
include permissions to `r-w-x` (read, write, and execute) volumes and secrets.
Most organizations use multiple roles to fine-tune the appropriate access. A
given team or user may have different roles provided to them depending on what
@ -71,7 +71,7 @@ To control user access, cluster resources are grouped into Docker Swarm
is a logical area for a Kubernetes cluster. Kubernetes comes with a `default`
namespace for your cluster objects, plus two more namespaces for system and
public resources. You can create custom namespaces, but unlike Swarm
collections, namespaces _can't be nested_. Resource types that users can
collections, namespaces _cannot be nested_. Resource types that users can
access in a Kubernetes namespace include pods, deployments, network policies,
nodes, services, secrets, and many more.
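As a brief illustration of the flat (non-nested) namespace model described above, a sketch using `kubectl` with a hypothetical namespace name:

```bash
# Create a custom namespace; namespaces are flat and cannot be nested
kubectl create namespace myteam

# Resource types such as pods, deployments, and secrets are scoped per namespace
kubectl get pods --namespace myteam
```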
@ -80,11 +80,12 @@ Together, collections and namespaces are named *resource sets*. Learn to
## Grants
A grant is made up of *subject*, *role*, and *resource set*.
A grant is made up of a *subject*, a *role*, and a *resource set*.
Grants define which users can access what resources in what way. Grants are
effectively Access Control Lists (ACLs), and when grouped together, they
provide comprehensive access policies for an entire organization.
effectively **Access Control Lists** (ACLs) which
provide comprehensive access policies for an entire organization when grouped
together.
Only an administrator can manage grants, subjects, roles, and access to
resources.
@ -96,6 +97,37 @@ resources.
> and applies grants to users and teams.
{: .important}
## Secure Kubernetes defaults
For cluster security, only users and service accounts granted the `cluster-admin` ClusterRole for
all Kubernetes namespaces via a ClusterRoleBinding can deploy pods with privileged options. This prevents a
platform user from being able to bypass the Universal Control Plane Security Model.
These privileged options include:
- `PodSpec.hostIPC` - Prevents a user from deploying a pod in the host's IPC
Namespace.
- `PodSpec.hostNetwork` - Prevents a user from deploying a pod in the host's
Network Namespace.
- `PodSpec.hostPID` - Prevents a user from deploying a pod in the host's PID
Namespace.
- `SecurityContext.allowPrivilegeEscalation` - Prevents a child process
of a container from gaining more privileges than its parent.
- `SecurityContext.capabilities` - Prevents additional [Linux
Capabilities](https://docs.docker.com/engine/security/security/#linux-kernel-capabilities)
from being added to a pod.
- `SecurityContext.privileged` - Prevents a user from deploying a [Privileged
Container](https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities).
- `Volume.hostPath` - Prevents a user from mounting a path from the host into
the container. This could be a file, a directory, or even the Docker Socket.
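For illustration, a hypothetical pod manifest using two of the privileged options listed above (`hostNetwork` and `SecurityContext.privileged`), which a user without the `cluster-admin` ClusterRole would be blocked from deploying:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  hostNetwork: true          # privileged option: pod joins the host's network namespace
  containers:
  - name: mypod
    image: nginx
    securityContext:
      privileged: true       # privileged option: privileged container
```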
If a user without a cluster admin role tries to deploy a pod with any of these
privileged options, an error similar to the following example is displayed:
```bash
Error from server (Forbidden): error when creating "pod.yaml": pods "mypod" is forbidden: user "<user-id>" is not an admin and does not have permissions to use privileged mode for resource
```
## Where to go next
- [Create and configure users and teams](create-users-and-teams-manually.md)

View File

@ -120,7 +120,6 @@ Because Interlock passes the extension configuration directly to the extension,
different configuration options available. Refer to the documentation for each extension for supported options:
- [Nginx](nginx-config.md)
- [HAproxy](haproxy-config.md)
#### Customize the default proxy service
The default proxy service used by UCP to provide layer 7 routing is NGINX. If users try to access a route that hasn't been configured, they will see the default NGINX 404 page:

View File

@ -0,0 +1,239 @@
---
title: Configuring Azure Disk Storage for Kubernetes
description: Learn how to add persistent storage to your Docker Enterprise clusters running on Azure with Azure Disk.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
redirect_from:
---
Platform operators can provide persistent storage for workloads running on
Docker Enterprise and Microsoft Azure by using Azure Disk. Platform
operators can either pre-provision Azure Disks to be consumed by Kubernetes
Pods, or can use the Azure Kubernetes integration to dynamically provision Azure
Disks on demand.
## Prerequisites
This guide assumes you have already provisioned a UCP environment on
Microsoft Azure. The cluster must be provisioned after meeting all of the
prerequisites listed in [Install UCP on
Azure](/ee/ucp/admin/install/install-on-azure.md).
Additionally, this guide uses the Kubernetes command line tool, `kubectl`, to
provision Kubernetes objects within a UCP cluster. Therefore, this tool must be
downloaded, along with a UCP client bundle. For more information on configuring
CLI access for UCP, see [CLI Based Access](/ee/ucp/user-access/cli/).
## Manually provision Azure Disks
An operator can use existing Azure Disks or manually provision new ones to
provide persistent storage for Kubernetes Pods. Azure Disks can be manually
provisioned in the Azure Portal, using ARM Templates or the Azure CLI. The
following example uses the Azure CLI to manually provision an Azure
Disk.
```bash
$ RG=myresourcegroup
$ az disk create \
--resource-group $RG \
--name k8s_volume_1 \
--size-gb 20 \
--query id \
--output tsv
```
The Azure CLI command in the previous example returns the Azure ID of the Azure
Disk object. If you are provisioning Azure resources using an alternative method,
make sure you retrieve the Azure ID of the Azure Disk, because it is needed for another step.
```
/subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
```
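If the disk already exists and you no longer have its ID at hand, one way to look it up with the Azure CLI, reusing the resource group and disk name from the example above (a sketch, assuming those names):

```bash
# Look up the Azure ID of an existing managed disk
RG=myresourcegroup
az disk show \
  --resource-group $RG \
  --name k8s_volume_1 \
  --query id \
  --output tsv
```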
You can now create Kubernetes Objects that refer to this Azure Disk. The following
example uses a Kubernetes Pod. However, the same Azure Disk syntax can be
used for DaemonSets, Deployments, and StatefulSets. In the following example, the
Azure Disk Name and ID reflect the manually created Azure Disk.
```bash
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: mypod-azuredisk
spec:
containers:
- image: nginx
name: mypod
volumeMounts:
- name: mystorage
mountPath: /data
volumes:
- name: mystorage
azureDisk:
kind: Managed
diskName: k8s_volume_1
diskURI: /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
EOF
```
## Dynamically provision Azure Disks
### Define the Azure Disk Storage Class
Kubernetes can dynamically provision Azure Disks using the Azure Kubernetes
integration, which was configured when UCP was installed. For Kubernetes
to determine which APIs to use when provisioning storage, you must
create Kubernetes Storage Classes specific to each storage backend. For more
information on Kubernetes Storage Classes, see [Storage
Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
In Azure, there are two different Azure Disk types that can be consumed by
Kubernetes: Azure Disk Standard Volumes and Azure Disk Premium Volumes. For more
information on their differences, see [Azure
Disks](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/disks-types).
Depending on your use case, you can deploy one or both of the Azure Disk Storage Classes (Standard and Premium).
To create a Standard Storage Class:
```bash
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: standard
provisioner: kubernetes.io/azure-disk
parameters:
storageaccounttype: Standard_LRS
kind: Managed
EOF
```
To create a Premium Storage Class:
```bash
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: premium
provisioner: kubernetes.io/azure-disk
parameters:
storageaccounttype: Premium_LRS
kind: Managed
EOF
```
To determine which Storage Classes have been provisioned:
```bash
$ kubectl get storageclasses
NAME PROVISIONER AGE
premium kubernetes.io/azure-disk 1m
standard kubernetes.io/azure-disk 1m
```
### Create an Azure Disk with a Persistent Volume Claim
After you create a Storage Class, you can use Kubernetes
Objects to dynamically provision Azure Disks. This is done using Kubernetes
Persistent Volumes Claims. For more information on Kubernetes Persistent Volume
Claims, see
[PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction).
The following example uses the standard storage class and creates a 5 GiB Azure Disk. Alter these values to fit your use case.
```bash
$ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: azure-disk-pvc
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
EOF
```
At this point, you should see a new Persistent Volume Claim and Persistent Volume
inside of Kubernetes. You should also see a new Azure Disk created in the Azure
Portal.
```bash
$ kubectl get persistentvolumeclaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
azure-disk-pvc Bound pvc-587deeb6-6ad6-11e9-9509-0242ac11000b 5Gi RWO standard 1m
$ kubectl get persistentvolume
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-587deeb6-6ad6-11e9-9509-0242ac11000b 5Gi RWO Delete Bound default/azure-disk-pvc standard 3m
```
### Attach the new Azure Disk to a Kubernetes pod
Now that a Kubernetes Persistent Volume has been created, you can mount this into
a Kubernetes Pod. The disk can be consumed by any Kubernetes object type, including
a Deployment, DaemonSet, or StatefulSet. However, the following example just mounts
the persistent volume into a standalone pod.
```bash
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
name: mypod-dynamic-azuredisk
spec:
containers:
- name: mypod
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: storage
volumes:
- name: storage
persistentVolumeClaim:
claimName: azure-disk-pvc
EOF
```
### Azure Virtual Machine data disk capacity
In Azure, there are limits to the number of data disks that can be attached to
each Virtual Machine. These limits are listed in [Azure Virtual Machine
Sizes](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes-general).
Kubernetes is aware of these restrictions, and prevents pods from
deploying on Nodes that have reached their maximum Azure Disk Capacity.
This can be seen if a pod is stuck in the `ContainerCreating` stage:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mypod-azure-disk 0/1 ContainerCreating 0 4m
```
Describing the pod displays troubleshooting logs, showing the node has
reached its capacity:
```bash
$ kubectl describe pods mypod-azure-disk
<...>
Warning FailedAttachVolume 7s (x11 over 6m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" : Attach volume "kubernetes-dynamic-pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" to instance "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/worker-03" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="The maximum number of data disks allowed to be attached to a VM of this size is 4." Target="dataDisks"
```
## Where to go next
- [Deploy an Ingress Controller on
Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)
- [Discover Network Encryption on
Kubernetes](/ee/ucp/kubernetes/kubernetes-network-encryption/)

View File

@ -0,0 +1,246 @@
---
title: Configuring Azure Files Storage for Kubernetes
description: Learn how to add persistent storage to your Docker Enterprise clusters running on Azure with Azure Files.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
redirect_from:
---
Platform operators can provide persistent storage for workloads running on
Docker Enterprise and Microsoft Azure by using Azure Files. You can either
pre-provision Azure Files Shares to be consumed by Kubernetes Pods, or you can
use the Azure Kubernetes integration to dynamically provision Azure Files
Shares on demand.
## Prerequisites
This guide assumes you have already provisioned a UCP environment on
Microsoft Azure. The cluster must be provisioned after meeting all
prerequisites listed in [Install UCP on
Azure](/ee/ucp/admin/install/install-on-azure.md).
Additionally, this guide uses the Kubernetes command line tool, `kubectl`, to
provision Kubernetes objects within a UCP cluster. Therefore, you must download
this tool along with a UCP client bundle. For more information on configuring
CLI access to UCP, see [CLI Based Access](/ee/ucp/user-access/cli/).
## Manually Provisioning Azure Files
You can use existing Azure Files Shares or manually provision new ones to
provide persistent storage for Kubernetes Pods. Azure Files Shares can be
manually provisioned in the Azure Portal using ARM Templates or using the Azure
CLI. The following example uses the Azure CLI to manually provision
Azure Files Shares.
### Creating an Azure Storage Account
When manually creating an Azure Files Share, first create an Azure
Storage Account for the file shares. If you have already provisioned
a Storage Account, you can skip to [Creating an Azure Files
Share](#creating-an-azure-file-share).
> **Note**: The Azure Kubernetes Driver does not support Azure Storage Accounts
> created using Azure Premium Storage.
```bash
$ REGION=ukwest
$ SA=mystorageaccount
$ RG=myresourcegroup
$ az storage account create \
--name $SA \
--resource-group $RG \
--location $REGION \
--sku Standard_LRS
```
### Creating an Azure Files Share
Next, provision an Azure Files Share. The size of this share can be
adjusted to fit the end user's requirements. If you have already created an
Azure Files Share, you can skip to [Configuring a Kubernetes
Secret](#configuring-a-kubernetes-secret).
```bash
$ SA=mystorageaccount
$ RG=myresourcegroup
$ FS=myfileshare
$ SIZE=5
# This Azure Collection String can also be found in the Azure Portal
$ export AZURE_STORAGE_CONNECTION_STRING=`az storage account show-connection-string --name $SA --resource-group $RG -o tsv`
$ az storage share create \
--name $FS \
--quota $SIZE \
--connection-string $AZURE_STORAGE_CONNECTION_STRING
```
### Configuring a Kubernetes Secret
After a File Share has been created, you must load the Azure Storage
Account Access key as a Kubernetes Secret into UCP. This provides access to
the file share when Kubernetes attempts to mount the share into a pod. This key
can be found in the Azure Portal, or retrieved with the Azure CLI as shown in the following example:
```bash
$ SA=mystorageaccount
$ RG=myresourcegroup
$ FS=myfileshare
# The Azure Storage Account Access Key can also be found in the Azure Portal
$ STORAGE_KEY=$(az storage account keys list --resource-group $RG --account-name $SA --query "[0].value" -o tsv)
$ kubectl create secret generic azure-secret \
--from-literal=azurestorageaccountname=$SA \
--from-literal=azurestorageaccountkey=$STORAGE_KEY
```
### Mount the Azure Files Share into a Kubernetes Pod
The final step is to mount the Azure Files Share, using the Kubernetes Secret, into
a Kubernetes Pod. The following code creates a standalone Kubernetes pod, but you
can also use alternative Kubernetes Objects such as Deployments, DaemonSets, or
StatefulSets, with the existing Azure Files Share.
```bash
$ FS=myfileshare
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: mypod-azurefile
spec:
containers:
- image: nginx
name: mypod
volumeMounts:
- name: mystorage
mountPath: /data
volumes:
- name: mystorage
azureFile:
secretName: azure-secret
shareName: $FS
readOnly: false
EOF
```
## Dynamically Provisioning Azure Files Shares
### Defining the Azure Files Storage Class
Kubernetes can dynamically provision Azure Files Shares using the Azure
Kubernetes integration, which was configured when UCP was installed. For
Kubernetes to know which APIs to use when provisioning storage, you must
create Kubernetes Storage Classes specific to each storage
backend. For more information on Kubernetes Storage Classes, see [Storage
Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
> Today, only the Standard Storage Class is supported when using the Azure
> Kubernetes Plugin. File shares using the Premium Storage Class will fail to
> mount.
```bash
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: standard
provisioner: kubernetes.io/azure-file
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=1000
- gid=1000
parameters:
skuName: Standard_LRS
EOF
```
To see which Storage Classes have been provisioned:
```bash
$ kubectl get storageclasses
NAME PROVISIONER AGE
standard              kubernetes.io/azure-file   1m
```
### Creating an Azure Files Share using a Persistent Volume Claim
After you create a Storage Class, you can use Kubernetes
Objects to dynamically provision Azure Files Shares. This is done using
Kubernetes Persistent Volumes Claims
[PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction).
Kubernetes uses an existing Azure Storage Account if one exists inside of the
Azure Resource Group. If an Azure Storage Account does not exist,
Kubernetes creates one.
The following example uses the standard storage class and creates a 5 GB Azure
File Share. Alter these values to fit your use case.
```bash
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: azure-file-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: standard
resources:
requests:
storage: 5Gi
EOF
```
At this point, you should see a newly created Persistent Volume Claim and Persistent Volume:
```bash
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
azure-file-pvc Bound pvc-f7ccebf0-70e0-11e9-8d0a-0242ac110007 5Gi RWX standard 22s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-f7ccebf0-70e0-11e9-8d0a-0242ac110007 5Gi RWX Delete Bound default/azure-file-pvc standard 2m
```
### Attach the new Azure Files Share to a Kubernetes Pod
Now that a Kubernetes Persistent Volume has been created, mount this into
a Kubernetes Pod. The file share can be consumed by any Kubernetes object type
such as a Deployment, DaemonSet, or StatefulSet. However, the following
example just mounts the persistent volume into a standalone pod.
```bash
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: storage
volumes:
- name: storage
persistentVolumeClaim:
claimName: azure-file-pvc
EOF
```
## Where to go next
- [Deploy an Ingress Controller on
Kubernetes](/ee/ucp/kubernetes/layer-7-routing/)
- [Discover Network Encryption on
Kubernetes](/ee/ucp/kubernetes/kubernetes-network-encryption/)

View File

@ -120,7 +120,7 @@ is available in [17.12 Edge (mac45)](/docker-for-mac/edge-release-notes/#docker-
[17.12 Stable (mac46)](/docker-for-mac/release-notes/#docker-community-edition-17120-ce-mac46-2018-01-09){: target="_blank" class="_"} and higher.
> - [Kubernetes on Docker Desktop for Windows](/docker-for-windows/kubernetes/){: target="_blank" class="_"}
is available in
[18.02 Edge (win50)](/docker-for-windows/edge-release-notes/#docker-community-edition-18020-ce-rc1-win50-2018-01-26){: target="_blank" class="_"} and higher edge channels only.
[18.06.0 CE (win70)](/docker-for-windows/release-notes/){: target="_blank" class="_"} and higher as well as edge channels.
[Install Docker](/engine/installation/index.md){: class="button outline-btn"}
<div style="clear:left"></div>

View File

@ -47,7 +47,7 @@ Older versions of Docker were called `docker` or `docker-engine`. In addition,
if you are upgrading from Docker CE to Docker EE, remove the Docker CE package.
```bash
$ sudo apt-get remove docker docker-engine docker-ce docker.io
$ sudo apt-get remove docker docker-engine docker-ce docker-ce-cli docker.io
```
It's OK if `apt-get` reports that none of these packages are installed.

View File

@ -37,10 +37,11 @@ Note:
## Options
| Option | Description |
|:--------------------------|:---------------------------|
|`--debug, D`|Enable debug mode|
|`--jsonlog`|Produce json formatted output for easier parsing|
|`--interactive, i`|Run in interactive mode and prompt for configuration values|
|`--id`|The ID of the UCP instance to back up|
|`--passphrase`|Encrypt the tar file with a passphrase|
| Option | Description |
|:-----------------------|:--------------------------------------------------------------------|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--interactive, -i` | Run in interactive mode and prompt for configuration values |
| `--id` *value* | The ID of the UCP instance to back up |
| `--no-passphrase`      | Do not encrypt the tar file with a passphrase (not recommended)     |
| `--passphrase` *value* | Encrypt the tar file with a passphrase |

View File

@ -27,9 +27,9 @@ to configure DTR.
## Options
| Option | Description |
|:--------------------------|:---------------------------|
|`--debug, D`|Enable debug mode|
|`--jsonlog`|Produce json formatted output for easier parsing|
|`--ca`|Only print the contents of the ca.pem file|
|`--cluster`|Print the internal UCP swarm root CA and cert instead of the public server cert|
| Option | Description |
|:-------------|:--------------------------------------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--ca` | Only print the contents of the ca.pem file |
| `--cluster` | Print the internal UCP swarm root CA and cert instead of the public server cert |

View File

@ -23,3 +23,9 @@ a client bundle.
This ID is used by other commands as confirmation.
## Options
| Option | Description |
|:-------------|:-------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |

View File

@ -24,11 +24,11 @@ the ones that are missing.
## Options
| Option | Description |
|:--------------------------|:---------------------------|
|`--debug, D`|Enable debug mode|
|`--jsonlog`|Produce json formatted output for easier parsing|
|`--pull`|Pull UCP images: `always`, when `missing`, or `never`|
|`--registry-username`|Username to use when pulling images|
|`--registry-password`|Password to use when pulling images|
|`--list`|List all images used by UCP but don't pull them|
| Option | Description |
|:------------------------------|:-----------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--list` | List all images used by UCP but don't pull them |
| `--pull` *value*              | Pull UCP images: `always`, when `missing`, or `never` |
| `--registry-password` *value* | Password to use when pulling images |
| `--registry-username` *value* | Username to use when pulling images |
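A minimal sketch combining these options (the image tag is a placeholder): listing the images UCP needs on this node without pulling them.

```shell
# Sketch only: --list prints the required UCP images but does not pull.
docker container run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 images --list
```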

| Option           | Description                                                |
|:-----------------|:-----------------------------------------------------------|
| `backup`         | Create a backup of a UCP manager node                      |
| `dump-certs`     | Print the public certificates used by this UCP web server  |
| `example-config` | Display an example configuration file for UCP              |
| `help`           | Shows a list of commands or help for one command           |
| `id`             | Print the ID of UCP running on this node                   |
| `images`         | Verify the UCP images on this node                         |
| `install`        | Install UCP on this node                                   |
| `restart`        | Start or restart UCP components running on this node       |
| `restore`        | Restore a UCP cluster from a backup                        |
| `stop`           | Stop UCP components running on this node                   |
| `support`        | Create a support dump for this UCP node                    |
| `uninstall-ucp`  | Uninstall UCP from this swarm                              |
| `upgrade`        | Upgrade the UCP cluster                                    |

## Options
| Option | Description |
|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--debug, -D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--interactive, -i` | Run in interactive mode and prompt for configuration values |
| `--admin-password` *value* | The UCP administrator password [$UCP_ADMIN_PASSWORD] |
| `--admin-username` *value* | The UCP administrator username [$UCP_ADMIN_USER] |
| `--binpack` | Set the Docker Swarm scheduler to binpack mode. Used for backwards compatibility |
| `--cloud-provider` *value* | The cloud provider for the cluster |
| `--cni-installer-url` *value*   | A URL pointing to a Kubernetes YAML file to be used as an installer for the CNI plugin of the cluster. If specified, the default CNI plugin is not installed. If the URL uses the HTTPS scheme, no certificate verification is performed |
| `--controller-port` *value* | Port for the web UI and API (default: 443) |
| `--data-path-addr` *value* | Address or interface to use for data path traffic. Format: IP address or network interface name [$UCP_DATA_PATH_ADDR] |
| `--disable-tracking` | Disable anonymous tracking and analytics |
| `--disable-usage` | Disable anonymous usage reporting |
| `--dns-opt` *value* | Set DNS options for the UCP containers [$DNS_OPT] |
| `--dns-search` *value* | Set custom DNS search domains for the UCP containers [$DNS_SEARCH] |
| `--dns` *value* | Set custom DNS servers for the UCP containers [$DNS] |
| `--enable-profiling` | Enable performance profiling |
| `--existing-config` | Use the latest existing UCP config during this installation. The install will fail if a config is not found |
| `--external-server-cert` | Customize the certificates used by the UCP web server |
| `--external-service-lb` *value* | Set the IP address of the load balancer that published services are expected to be reachable on |
| `--force-insecure-tcp` | Force install to continue even with unauthenticated Docker Engine ports. |
| `--force-minimums` | Force the install/upgrade even if the system does not meet the minimum requirements |
| `--host-address` *value* | The network address to advertise to other nodes. Format: IP address or network interface name [$UCP_HOST_ADDRESS] |
| `--kube-apiserver-port` *value* | Port for the Kubernetes API server (default: 6443) |
| `--kv-snapshot-count` *value* | Number of changes between key-value store snapshots (default: 20000) [$KV_SNAPSHOT_COUNT] |
| `--kv-timeout` *value* | Timeout in milliseconds for the key-value store (default: 5000) [$KV_TIMEOUT] |
| `--license` *value*             | Add a license: e.g. `--license "$(cat license.lic)"` [$UCP_LICENSE] |
| `--nodeport-range` *value*      | Allowed port range for Kubernetes services of type `NodePort` (default: `32768-35535`) |
| `--pod-cidr` *value*            | Kubernetes cluster IP pool for the pods to allocate IPs from (default: `192.168.0.0/16`) |
| `--preserve-certs` | Don't generate certificates if they already exist |
| `--pull` *value*                | Pull UCP images: `always`, when `missing`, or `never` (default: `missing`) |
| `--random` | Set the Docker Swarm scheduler to random mode. Used for backwards compatibility |
| `--registry-password` *value* | Password to use when pulling images [$REGISTRY_PASSWORD] |
| `--registry-username` *value* | Username to use when pulling images [$REGISTRY_USERNAME] |
| `--san` *value* | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) [$UCP_HOSTNAMES] |
| `--skip-cloud-provider-check`   | Disable checks that rely on detecting the cloud provider (if any) on which the cluster is currently running |
| `--swarm-experimental` | Enable Docker Swarm experimental features. Used for backwards compatibility |
| `--swarm-grpc-port` *value* | Port for communication between nodes (default: 2377) |
| `--swarm-port` *value* | Port for the Docker Swarm manager. Used for backwards compatibility (default: 2376) |
| `--unlock-key` *value* | The unlock key for this swarm-mode cluster, if one exists. [$UNLOCK_KEY] |
| `--unmanaged-cni`               | The default value of `false` indicates that Kubernetes networking is managed by UCP with its default managed CNI plugin, Calico. When set to `true`, UCP does not deploy or manage the lifecycle of the default CNI plugin; the CNI plugin is deployed and managed independently of UCP. Note that when `unmanaged-cni=true`, Kubernetes networking does not function in the cluster until a CNI plugin is deployed |
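As a hedged sketch of a typical install invocation using these options (the image tag and host address are placeholders, not prescribed values):

```shell
# Sketch only: -i prompts interactively for any values not supplied on
# the command line; the tag should match the UCP version to install.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 install \
  --host-address <node-ip-or-interface> \
  --interactive
```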

## Options
| Option | Description |
|:-------------|:-------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |

## Options
| Option | Description |
|:---------------------------|:----------------------------------------------------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--interactive, i` | Run in interactive mode and prompt for configuration values |
| `--data-path-addr` *value* | Address or interface to use for data path traffic |
| `--host-address` *value* | The network address to advertise to other nodes. Format: IP address or network interface name |
| `--passphrase` *value* | Decrypt the backup tar file with the provided passphrase |
| `--san` *value* | Add subject alternative names to certificates (e.g. --san www1.acme.com --san www2.acme.com) |
| `--unlock-key` *value* | The unlock key for this swarm-mode cluster, if one exists. |
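For example, a sketch of restoring from an encrypted backup read on stdin (the image tag, passphrase, and backup path are placeholders):

```shell
# Sketch only: the backup tar created earlier is piped in on stdin and
# decrypted with the same passphrase used during backup.
docker container run --rm -i \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 restore \
  --passphrase "secret12chars" < /tmp/backup.tar
```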

## Options
| Option | Description |
|:-------------|:-------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |

## Options
| Option | Description |
|:-------------|:-------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
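A minimal usage sketch (the image tag is a placeholder): capturing the support dump, which is printed to stdout, into a local archive.

```shell
# Sketch only: redirect stdout to keep the generated support dump.
docker container run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 support > docker-support.tgz
```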

## Options
| Option | Description |
|:------------------------------|:----------------------------------------------------------- |
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--interactive, i` | Run in interactive mode and prompt for configuration values |
| `--id` *value* | The ID of the UCP instance to uninstall |
| `--pull` *value* | Pull UCP images: `always`, when `missing`, or `never` |
| `--purge-config` | Remove UCP configs during uninstallation |
| `--registry-password` *value* | Password to use when pulling images |
| `--registry-username` *value* | Username to use when pulling images |
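As a hedged sketch of an uninstall invocation (the image tag is a placeholder; in interactive mode the instance ID is prompted for if not given with `--id`):

```shell
# Sketch only: -i prompts for any required values, such as the UCP
# instance ID, before uninstalling.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 uninstall-ucp --interactive
```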

## Options
| Option | Description |
|:------------------------------|:------------------------------------------------------------------------------------|
| `--debug, D` | Enable debug mode |
| `--jsonlog` | Produce json formatted output for easier parsing |
| `--interactive, i` | Run in interactive mode and prompt for configuration values |
| `--admin-password` *value* | The UCP administrator password |
| `--admin-username` *value* | The UCP administrator username |
| `--force-minimums` | Force the install/upgrade even if the system does not meet the minimum requirements |
| `--host-address` *value* | Override the previously configured host address with this IP or network interface |
| `--id` *value*                | The ID of the UCP instance to upgrade                                                |
| `--pull` *value*              | Pull UCP images: `always`, when `missing`, or `never`                                |
| `--registry-password` *value* | Password to use when pulling images |
| `--registry-username` *value* | Username to use when pulling images |
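For example, a sketch of an interactive upgrade (the image tag is a placeholder and should match the UCP version you are upgrading to):

```shell
# Sketch only: run the target version's image so its upgrade command
# brings the cluster up to that version.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.7 upgrade --interactive
```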

- **Fragmentation**: Fragmentation is a natural byproduct of copy-on-write
  filesystems like ZFS. ZFS mitigates this by using a small block size of 128k.
The ZFS intent log (ZIL) and the coalescing of writes (delayed writes) also
help to reduce fragmentation. You can monitor fragmentation using
`zpool status`. However, there is no way to defragment ZFS without reformatting
and restoring the filesystem.
- **Use the native ZFS driver for Linux**: The ZFS FUSE implementation is not