Merge branch 'amberjack' of github.com:docker/docs-private into amberjack

Maria Bermudez 2019-05-16 11:08:42 -07:00
commit 959ff17d7e
19 changed files with 643 additions and 45 deletions

View File

@ -325,12 +325,16 @@ guides:
title: View a container's logs
- path: /config/containers/logging/configure/
title: Configure logging drivers
- path: /config/containers/logging/dual-logging/
title: Use docker logs with a logging driver
- path: /config/containers/logging/plugins/
title: Use a logging driver plugin
- path: /config/containers/logging/log_tags/
title: Customize log driver output
- sectiontitle: Logging driver details
section:
- path: /config/containers/logging/local/
title: Local file logging driver
- path: /config/containers/logging/logentries/
title: Logentries logging driver
- path: /config/containers/logging/json-file/
@ -1484,7 +1488,9 @@ manuals:
- title: Use NFS storage
path: /ee/ucp/kubernetes/storage/use-nfs-volumes/
- title: Use AWS EBS Storage
path: /ee/ucp/kubernetes/storage/configure-aws-storage/
- title: Configure iSCSI
path: /ee/ucp/kubernetes/storage/use-iscsi/
- title: API reference
path: /reference/ucp/3.1/api/
nosync: true

View File

@ -529,7 +529,7 @@ an error.
### credential_spec
> **Note**: This option was added in v3.3. Using group Managed Service Account (gMSA) configurations with Compose files is supported in Compose file format version 3.8.
Configure the credential spec for a managed service account. This option is only
used for services using Windows containers. The `credential_spec` must be in the
@ -558,6 +558,23 @@ credential_spec:
registry: my-credential-spec
```
#### Example gMSA configuration
When configuring a gMSA credential spec for a service, you only need
to specify a credential spec with `config`, as shown in the following example:
```
version: "3.8"
services:
myservice:
image: myimage:latest
credential_spec:
config: my_credential_spec
configs:
my_credential_spec:
file: ./my-credential-spec.json
```
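Because the `configs` top-level element requires swarm mode, a Compose file that uses a config-based credential spec is deployed with `docker stack deploy` (a minimal sketch; the file and stack names are illustrative):
```
docker stack deploy --compose-file docker-compose.yml mystack
```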
### depends_on
Express dependency between services. Service dependencies cause the following

View File

@ -19,7 +19,6 @@ unless you configure it to use a different logging driver.
In addition to using the logging drivers included with Docker, you can also
implement and use [logging driver plugins](/engine/admin/logging/plugins.md).
## Configure the default logging driver
To configure the Docker daemon to default to a specific logging driver, set the
@ -60,7 +59,7 @@ the default output for commands such as `docker inspect <CONTAINER>` is JSON.
To find the current default logging driver for the Docker daemon, run
`docker info` and search for `Logging Driver`. You can use the following
command on Linux, macOS, or PowerShell on Windows:
{% raw %}
```bash
@ -146,8 +145,8 @@ see more options.
| Driver | Description |
|:------------------------------|:--------------------------------------------------------------------------------------------------------------|
| `none` | No logs are available for the container and `docker logs` does not return any output. |
| [`local`](local.md) | Logs are stored in a custom format designed for minimal overhead. |
| [`json-file`](json-file.md) | The logs are formatted as JSON. The default logging driver for Docker. |
| [`syslog`](syslog.md) | Writes logging messages to the `syslog` facility. The `syslog` daemon must be running on the host machine. |
| [`journald`](journald.md) | Writes log messages to `journald`. The `journald` daemon must be running on the host machine. |
| [`gelf`](gelf.md) | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. |
@ -160,6 +159,25 @@ see more options.
## Limitations of logging drivers
- Users of Docker Enterprise can make use of "dual logging", which enables you to use the `docker logs`
command for any logging driver. Refer to
[Reading logs when using remote logging drivers](/config/containers/logging/dual-logging/) for information about
using `docker logs` to read container logs locally for many third-party logging solutions, including:
- syslog
- gelf
- fluentd
- awslogs
- splunk
- etwlogs
- gcplogs
- Logentries
- When using Docker Community Engine, the `docker logs` command is only available on the following drivers:
- `local`
- `json-file`
- `journald`
- Reading log information requires decompressing rotated log files, which causes a temporary increase in disk usage (until the log entries from the rotated files are read) and increased CPU usage while decompressing.
- The capacity of the host storage where the Docker data directory resides determines the maximum size of the log file information.

View File

@ -0,0 +1,114 @@
---
description: Learn how to read container logs locally when using a third party logging solution.
keywords: docker, logging, driver
title: Using docker logs to read container logs for remote logging drivers
---
## Overview
Prior to Docker Engine Enterprise 18.03, the `json-file` and `journald` log drivers supported reading
container logs using `docker logs`. However, many third party logging drivers had no
support for locally reading logs using `docker logs`, including:
- syslog
- gelf
- fluentd
- awslogs
- splunk
- etwlogs
- gcplogs
- Logentries
This created multiple problems, especially with UCP, when attempting to gather log data in an
automated and standard way. Log information could only be accessed and viewed through the
third-party solution in the format specified by that third-party tool.
Starting with Docker Engine Enterprise 18.03.1-ee-1, you can use `docker logs` to read container
logs regardless of the configured logging driver or plugin. This capability, sometimes referred to
as dual logging, allows you to use `docker logs` to read container logs locally in a consistent format,
regardless of the remote log driver used, because the engine is configured to log information to the `local`
logging driver. Refer to [Configure the default logging driver](/config/containers/logging/configure/) for additional information.
## Prerequisites
- Dual logging is only supported for Docker Enterprise, and is enabled by default starting with
Docker Engine Enterprise 18.03.1-ee-1.
## Usage
Dual logging is enabled by default. You must configure either the Docker daemon or the container with a remote logging driver.
The following example shows the results of running a `docker logs` command with and without dual logging availability:
### Without dual logging capability:
When a container or `dockerd` was configured with a remote logging driver such as `splunk`, an error was
displayed when attempting to read container logs locally:
- Step 1: Configure Docker daemon
```
$ cat /etc/docker/daemon.json
{
"log-driver": "splunk",
"log-opts": {
...
}
}
```
- Step 2: Start the container
```
$ docker run -d --name testlog busybox top
```
- Step 3: Read the container logs
```
$ docker logs 7d6ac83a89a0
Error response from daemon: configured logging driver does not support reading
```
### With dual logging capability:
To configure a container or the Docker daemon with a remote logging driver such as `splunk`:
- Step 1: Configure Docker daemon
```
$ cat /etc/docker/daemon.json
{
"log-driver": "splunk",
"log-opts": {
...
}
}
```
- Step 2: Start the container
```
$ docker run -d --name testlog busybox top
```
- Step 3: Read the container logs
```
$ docker logs 7d6ac83a89a0
2019-02-04T19:48:15.423Z [INFO] core: marked as sealed
2019-02-04T19:48:15.423Z [INFO] core: pre-seal teardown starting
2019-02-04T19:48:15.423Z [INFO] core: stopping cluster listeners
2019-02-04T19:48:15.423Z [INFO] core: shutting down forwarding rpc listeners
2019-02-04T19:48:15.423Z [INFO] core: forwarding rpc listeners stopped
2019-02-04T19:48:15.599Z [INFO] core: rpc listeners successfully shut down
2019-02-04T19:48:15.599Z [INFO] core: cluster listeners successfully shut down
```
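To confirm which logging driver a container is using, you can inspect its log configuration (a quick check; the container name assumes the steps above):
```
$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' testlog
splunk
```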
> **Note**: For local drivers, such as `json-file` and `journald`, there is no difference in functionality
> before or after the dual logging capability became available. The log is locally visible in both scenarios.
## Limitations
- You cannot specify more than one log driver.
- If a container using a logging driver or plugin that sends logs remotely suddenly has a "network" issue,
no write to the local cache occurs.
- If a write to the log driver fails for any reason (file system full, write permissions removed),
the cache write fails and is logged in the daemon log. The log entry to the cache is not retried.
- Some logs might be lost from the cache in the default configuration because a ring buffer is used to
prevent blocking the stdio of the container in case of slow file writes. An admin must repair these while the daemon is shut down.

View File

@ -26,22 +26,20 @@ located in `/etc/docker/` on Linux hosts or
configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
The following example sets the log driver to `json-file` and sets the `max-size` and `max-file` options.
```json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
"max-file": "3"
}
}
```
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
Restart Docker for the changes to take effect for newly created containers. Existing containers do not use the new logging configuration.
@ -65,6 +63,7 @@ The `json-file` logging driver supports the following logging options:
| `labels` | Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon accepts. Used for advanced [log tag options](log_tags.md). | `--log-opt labels=production_status,geo` |
| `env` | Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon accepts. Used for advanced [log tag options](log_tags.md). | `--log-opt env=os,customer` |
| `env-regex` | Similar to and compatible with `env`. A regular expression to match logging-related environment variables. Used for advanced [log tag options](log_tags.md). | `--log-opt env-regex=^(os|customer).` |
| `compress` | Toggles compression for rotated logs. Default is `disabled`. | `--log-opt compress=true` |
### Examples
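For example, you can set these options on a single container at run time instead of in `daemon.json` (a sketch using the options described above):

```bash
$ docker run -it --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 --log-opt compress=true alpine ash
```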

View File

@ -1,46 +1,53 @@
---
description: Describes how to use the local logging driver.
keywords: local, docker, logging, driver
redirect_from:
- /engine/reference/logging/local/
- /engine/admin/logging/local/
title: Local file logging driver
---
The `local` logging driver captures output from a container's stdout/stderr and
writes it to an internal storage that is optimized for performance and disk
use.
By default, the `local` driver preserves 100MB of log messages per container and
uses automatic compression to reduce the size on disk.
> **Note**: The `local` logging driver currently uses file-based storage. The
> file-format and storage mechanism are designed to be exclusively accessed by
> the Docker daemon, and should not be used by external tools as the
> implementation may change in future releases.
## Usage
To use the `local` driver as the default logging driver, set the `log-driver`
and `log-opt` keys to appropriate values in the `daemon.json` file, which is
located in `/etc/docker/` on Linux hosts or
`C:\ProgramData\docker\config\daemon.json` on Windows Server. For more information about
configuring Docker using `daemon.json`, see
[daemon.json](/engine/reference/commandline/dockerd.md#daemon-configuration-file).
The following example sets the log driver to `local` and sets the `max-size`
option.
```json
{
"log-driver": "local",
"log-opts": {}
"log-opts": {
"max-size": "10m"
}
}
```
> **Note**: `log-opt` configuration options in the `daemon.json` configuration
> file must be provided as strings. Boolean and numeric values (such as the value
> for `max-size` in the example above) must therefore be enclosed in quotes (`"`).
Restart Docker for the changes to take effect for newly created containers. Existing containers do not use the new logging configuration.
You can set the logging driver for a specific container by using the
`--log-driver` flag to `docker container create` or `docker run`:
```bash
$ docker run \
--log-driver local --log-opt max-size=10m \
alpine echo hello world
```
@ -50,6 +57,15 @@ The `local` logging driver supports the following logging options:
| Option | Description | Example value |
|:------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------|
| `max-size` | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (`k`, `m`, or `g`). Defaults to 20m. | `--log-opt max-size=10m` |
| `max-file` | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. **Only effective when `max-size` is also set.** A positive integer. Defaults to 5. | `--log-opt max-file=3` |
| `compress` | Toggle compression of rotated log files. Enabled by default. | `--log-opt compress=false` |
### Examples
This example starts an `alpine` container which can have a maximum of 3 log
files no larger than 10 megabytes each.
```bash
$ docker run -it --log-opt max-size=10m --log-opt max-file=3 alpine ash
```

View File

@ -4,7 +4,13 @@ description: Learn about Docker Desktop Enterprise
keywords: Docker EE, Windows, Mac, Docker Desktop, Enterprise
---
Welcome to Docker Desktop Enterprise. This page contains information about the Docker Desktop Enterprise (DDE) release. For information about Docker Desktop Community, see:
- [Docker Desktop for Mac (Community)](/docker-for-mac/){: target="_blank" class="_"}
- [Docker Desktop for Windows (Community)](/docker-for-windows/){: target="_blank" class="_"}
Docker Desktop Enterprise provides local development, testing, and building of Docker applications on Mac and Windows. With work performed locally, developers can leverage a rapid feedback loop before pushing code or Docker images to shared servers or continuous integration infrastructure.
Docker Desktop Enterprise takes Docker Desktop Community, formerly known as Docker for Windows and Docker for Mac, a step further with simplified enterprise application development and maintenance. With DDE, IT organizations can ensure developers are working with the same version of Docker Desktop and can easily distribute Docker Desktop to large teams using third-party endpoint management applications. With the Docker Desktop graphical user interface (GUI), developers do not have to work with lower-level Docker commands and can auto-generate Docker artifacts.

View File

@ -16,6 +16,28 @@ For Docker Enterprise Engine release notes, see [Docker Engine release notes](/e
## Docker Desktop Enterprise Releases of 2019
### Docker Desktop Enterprise 2.0.0.4
2019-05-16
- Upgrades
- [Docker 19.03.0-beta4](https://docs.docker.com/engine/release-notes/) in Enterprise 3.0 version pack
- [Docker 18.09.6](https://docs.docker.com/engine/release-notes/), [Kubernetes 1.11.10](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v11110) in Enterprise 2.1 version pack
- [LinuxKit v0.7](https://github.com/linuxkit/linuxkit/releases/tag/v0.7)
- Bug fixes and minor changes
- Fixed a stability issue with the DNS resolver.
- Fixed a race condition where Kubernetes sometimes failed to start after restarting the application.
- Fixed a bug that caused Docker Compose to fail when a user logs out after logging in. See [docker/compose#6517](https://github.com/docker/compose/issues/6517).
- Improved the reliability of the `com.docker.osxfs trace` performance profiling command.
- Docker Desktop now supports large lists of resource DNS records on Mac. See [docker/for-mac#2160](https://github.com/docker/for-mac/issues/2160#issuecomment-431571031).
- Users can now run a Docker registry in a container. See [docker/for-mac#3611](https://github.com/docker/for-mac/issues/3611).
- For Linux containers on Windows (LCOW), one physical computer system running Windows 10 Professional or Windows 10 Enterprise version 1809 or later is required.
- Added a dialog box during startup when a shared drive fails to mount. This allows users to retry mounting the drive or remove it from the shared drive list.
- Removed the ability to log in using an email address as a username as this is not supported by the Docker command line.
### Docker Desktop Enterprise 2.0.0.3
2019-04-26
@ -39,19 +61,14 @@ For Docker Enterprise Engine release notes, see [Docker Engine release notes](/e
- Upgrades
- [Docker Compose 1.24.0](https://github.com/docker/compose/releases/tag/1.24.0)
- [Docker Engine 18.09.5](https://docs.docker.com/engine/release-notes/), [Kubernetes 1.11.7](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1117) and [Compose on Kubernetes 0.4.22](https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.22) for Version Pack Enterprise 2.1
- [Docker Engine 17.06.2-ee-21](https://docs.docker.com/engine/release-notes/) for Version Pack Enterprise 2.0
- Bug fixes and minor changes
- For security, only administrators can install or upgrade Version Packs using the `dockerdesktop-admin` tool.
- Truncate UDP DNS responses which are over 512 bytes in size
- Fixed airgap install of Kubernetes in Version Pack Enterprise 2.0
- Reset to factory default now resets to admin defaults
- Known issues
@ -69,7 +86,6 @@ For Docker Enterprise Engine release notes, see [Docker Engine release notes](/e
Upgrades:
- Docker 18.09.3 for Version Pack Enterprise 2.1, fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
- Docker 17.06.2-ee-20 for Version Pack Enterprise 2.0, fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
Bug fixes and minor changes:
@ -88,7 +104,6 @@ New features:
Upgrades:
- Docker 18.09.3 for Version Pack Enterprise 2.1, fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
- Docker 17.06.2-ee-20 for Version Pack Enterprise 2.0, fixes [CVE-2019-5736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5736)
Bug fixes and minor changes:

View File

@ -15,10 +15,69 @@ known issues for each DTR version.
You can then use [the upgrade instructions](admin/upgrade.md),
to upgrade your installation to the latest release.
* [Version 2.7](#version-27)
* [Version 2.6](#version-26)
* [Version 2.5](#version-25)
* [Version 2.4](#version-24)
# Version 2.7
## 2.7.0-beta4
(2019-05-16)
### New Features
* **Web Interface**
* Users can now filter events by object type. (docker/dhe-deploy#10231)
* Docker artifacts such as apps, plugins, images, and multi-arch images are shown as distinct types with granular views into app details including metadata and scan results for an application's constituent images. [Learn more](https://beta.docs.docker.com/app/working-with-app/).
* Users can now import a client certificate and key to the browser in order to access the web interface without using their credentials.
* The **Logout** menu item is hidden from the left navigation pane if client certificates are used for DTR authentication instead of user credentials. (docker/dhe-deploy#10147)
* **App Distribution**
* It is now possible to distribute [docker apps](https://github.com/docker/app) via DTR. This includes application pushes, pulls, and general management features like promotions, mirroring, and pruning.
* **Registry CLI**
* The Docker CLI now includes a `docker registry` management command which lets you interact with Docker Hub and trusted registries.
* Features supported on both DTR and Hub include listing remote tags and inspecting image manifests.
* Features supported on DTR alone include removing tags, listing repository events (such as image pushes and pulls), listing asynchronous jobs (such as mirroring pushes and pulls), and reviewing job logs. [Learn more](https://beta.docs.docker.com/engine/reference/commandline/registry/).
* **Client Cert-based Authentication**
* Users can now use UCP client bundles for DTR authentication.
* Users can now add their client certificate and key to their local Engine for performing pushes and pulls without logging in.
* Users can now use client certificates to make API requests to DTR instead of providing their credentials.
### Enhancements
* Users can now edit mirroring policies. (docker/dhe-deploy#10157)
* `docker run -it --rm docker/dtr:2.7.0-beta4` now includes a global option, `--version`, which prints the DTR version and associated commit hash. (docker/dhe-deploy#10144)
* Users can now set up push and pull mirroring policies via the API using an authentication token instead of their credentials. (docker/dhe-deploy#10002)
* DTR is now on Golang `1.12.4`. (docker/dhe-deploy#10274)
* For new mirroring policies, the **Mirror direction** now defaults to the Pull tab instead of Push. (docker/dhe-deploy#10004)
### Bug Fixes
* Fixed an issue where a webhook notification was triggered twice on non-scanning image promotion events on a repository with scan on push enabled. (docker/dhe-deploy#9909)
### Known issues
* **Registry CLI**
* `docker registry info` throws an authentication error even after the user has authenticated to the registry. (ENG-DTR #912)
### Deprecations
* **Upgrade**
* The `--no-image-check` flag has been removed from the `upgrade` command as image check is no longer a part of the upgrade process.
# Version 2.6
## 2.6.6

View File

@ -209,3 +209,12 @@ components. Assigning these values overrides the settings in a container's
*dev indicates that the functionality is only for development and testing. Arbitrary Kubernetes configuration parameters are not tested and supported under the Docker Enterprise Software Support Agreement.
### iSCSI (optional)
Configures iSCSI options for UCP.
| Parameter | Required | Description |
|:------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `--storage-iscsi=true` | no | Enables iSCSI-based persistent volumes in Kubernetes. Default value is `false`. |
| `--iscsiadm-path=<path>` | no | Specifies the path of the `iscsiadm` binary on the host. Default value is `/usr/sbin/iscsiadm`. |
| `--iscsidb-path=<path>` | no | Specifies the path of the iSCSI database on the host. Default value is `/etc/iscsi`. |
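When managing these settings through the UCP configuration file instead of at install time, the equivalent keys would sit under the `[cluster_config]` section (a hedged sketch; the exact key names are an assumption mirroring the flags above, so verify them against your UCP version):

```
[cluster_config]
  # Assumed TOML equivalents of the iSCSI flags above
  storage_iscsi = true
  iscsiadm_path = "/usr/sbin/iscsiadm"
  iscsidb_path = "/etc/iscsi"
```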

(Two binary image files added: 136 KiB and 119 KiB.)

View File

@ -128,4 +128,4 @@ to provide more bandwidth for the user services.
## Next steps
- [Configure Interlock](../config/index.md)
- [Deploy applications](./index.md)

View File

@ -151,13 +151,12 @@ able to start using the service from your browser.
## Next steps
- [Publish a service as a canary instance](./canary.md)
- [Use context or path-based routing](./context.md)
- [Publish a default host service](./interlock-vip-mode.md)
- [Specify a routing mode](./interlock-vip-mode.md)
- [Use routing labels](./labels-reference.md)
- [Implement redirects](./redirects.md)
- [Implement a service cluster](./service-clusters.md)
- [Implement persistent (sticky) sessions](./sessions.md)
- [Implement SSL](./ssl.md)
- [Secure services with TLS](./tls.md)
- [Configure websockets](./websockets.md)

View File

@ -0,0 +1,299 @@
---
title: Configuring iSCSI for Kubernetes
description: Learn how to configure iSCSI.
keywords: Universal Control Plane, UCP, Docker EE, Kubernetes, storage, volume
---
Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides block-level access
to storage devices. iSCSI takes requests from clients and fulfills these requests on remote SCSI devices.
iSCSI support in UCP enables Kubernetes workloads to consume persistent storage from iSCSI targets.
## iSCSI Components
- iSCSI Initiator: Any client that consumes storage and sends iSCSI commands. In a UCP cluster,
the iSCSI initiator must be installed and running on any node where pods can be scheduled.
Configuration, target discovery, and login/logout to a target are primarily performed
by two software components: `iscsid` (service) and `iscsiadm` (CLI tool). These two components are typically
packaged as part of `open-iscsi` on Debian systems and `iscsi-initiator-utils` on RHEL/CentOS/Fedora systems.
- `iscsid` is the iSCSI initiator daemon and implements the control path of the iSCSI protocol.
It communicates with `iscsiadm` and kernel modules.
- `iscsiadm` is a CLI tool that allows discovery, login to iSCSI targets, session management, and access
and management of the `open-iscsi` database.
**Note**: iSCSI kernel modules implement the data path. The most common modules used across Linux distributions
are `scsi_transport_iscsi.ko`, `libiscsi.ko` and `iscsi_tcp.ko`. These modules need to be loaded on the host
for proper functioning of the iSCSI initiator.
- iSCSI Target: Any server that shares storage and receives iSCSI commands from an initiator.
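You can confirm that the required kernel modules mentioned above are loaded on a node with `lsmod`, for example:

```
lsmod | grep iscsi
```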
## Prerequisites
- Basic Kubernetes and iSCSI knowledge is assumed.
- iSCSI storage provider hardware and software setup is complete:
- **Note**: There is no significant demand for RAM/Disk when running external provisioners in UCP clusters. For
setup information specific to a storage vendor, refer to the vendor documentation.
- Kubectl must be set up on clients.
- The iSCSI server must be accessible to UCP worker nodes.
## Limitations
- Not supported on Windows.
## Usage
The following steps are required for configuring iSCSI in Kubernetes via UCP:
1. [Configure iSCSI target](#configure-iscsi-target)
2. [Configure generic iSCSI initiator](#configure-generic-iscsi-initiator)
3. [Configure UCP](#configure-ucp)
Other information included in this topic:
- [In-tree iSCSI volumes](#in-tree-iscsi-volumes)
- [External provisioner and Kubernetes objects](#external-provisioner-and-kubernetes-objects)
- [Authentication](#authentication)
- [Troubleshooting](#troubleshooting)
- [Example](#example)
### Configure iSCSI target
An iSCSI target can run on dedicated/stand-alone hardware, or can be configured in a hyper-converged
manner to run alongside container workloads on UCP nodes. To provide access to the storage device,
each target is configured with one or more logical unit numbers (LUNs).
iSCSI targets are specific to the storage vendor. Refer to the vendor
documentation for setup instructions, including applicable RAM and disk space requirements, and
expose the targets to the UCP cluster.
Exposing iSCSI targets to the UCP cluster involves the following steps:
- The target is configured with client IQNs, if necessary, for access control.
- Challenge-Handshake Authentication Protocol (CHAP) secrets must be configured for authentication.
- Each iSCSI LUN must be accessible by all nodes in the cluster. Configure the iSCSI service to
expose storage as an iSCSI LUN to all nodes in the cluster. This can be done by allowing all UCP nodes,
and essentially their IQNs, to be part of the target's ACL.
### Configure generic iSCSI initiator
Every Linux distribution packages the iSCSI initiator software in a particular way. Follow the
instructions specific to the storage provider, using the following steps as a guideline.
1. Prepare all UCP nodes:
- Install OS-specific iSCSI packages and load the necessary iSCSI kernel modules. In the following
example, `scsi_transport_iscsi.ko` and `libiscsi.ko` are pre-loaded by the Linux distro. The `iscsi_tcp` kernel
module must be loaded with a separate command:
1. For CentOS/Red Hat systems:
```
sudo yum install -y iscsi-initiator-utils
sudo modprobe iscsi_tcp
```
2. For Ubuntu systems:
```
sudo apt install open-iscsi
sudo modprobe iscsi_tcp
```
2. Set up UCP nodes as iSCSI initiators.
- Configure initiator names for each node as follows:
```
sudo sh -c 'echo "InitiatorName=iqn.2019-01.com.example:<uniqueID>" > /etc/iscsi/initiatorname.iscsi'
sudo systemctl restart iscsid
```
**Note**: The `iqn` must be in the following format: `iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier`.
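With the initiator configured, you can optionally verify connectivity from a node by discovering the target with `iscsiadm` (the portal address `192.0.2.100:3260` is an example; substitute your target's address):

```
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.100:3260
```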
### Configure UCP
Using the instructions in the [UCP configuration file](https://docs.docker.com/ee/ucp/admin/configure/ucp-configuration-file/)
help topic, update the UCP configuration file with the following options:
- `--storage-iscsi=true`: enables iSCSI-based persistent volumes in Kubernetes.
- `--iscsiadm-path=<path>`: specifies the path of the `iscsiadm` binary on the host. Default value is `/usr/sbin/iscsiadm`.
- `--iscsidb-path=<path>`: specifies the path of the iSCSI database on the host. Default value is `/etc/iscsi`.
### In-tree iSCSI volumes
The Kubernetes in-tree iSCSI plugin only supports static provisioning. For static provisioning:
1. You must ensure the desired iSCSI LUNs are pre-provisioned in the iSCSI targets.
2. You must create iSCSI PV objects, which correspond to the pre-provisioned LUNs, with the appropriate iSCSI configuration.
3. As PVCs are created to consume storage, the iSCSI PVs bind to the PVCs and satisfy the request for persistent storage.
![iSCSI in-tree architecture](/ee/ucp/images/in-tree-arch.png)
The following example shows how to configure and create a `PersistentVolume` object:
1. Create a YAML file for the `PersistentVolume` object:
```
apiVersion: v1
kind: PersistentVolume
metadata:
name: iscsi-pv
spec:
capacity:
storage: 12Gi
accessModes:
- ReadWriteOnce
iscsi:
targetPortal: 192.0.2.100:3260
iqn: iqn.2017-10.local.example.server:disk1
lun: 0
fsType: 'ext4'
readOnly: false
```
Replace the following values with information appropriate for your environment:
- `12Gi` with the size of the storage available.
- `192.0.2.100:3260` with the IP address and port number of the iSCSI target in your environment. Refer to the
storage provider documentation for port information.
- `iqn.2017-10.local.example.server:disk1` with the IQN of the iSCSI target that exposes the LUN to the
UCP worker nodes. The IQN must be in the following format:
`iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier`.
2. Create the `PersistentVolume` using your YAML file by running the following command on the master node:
```
kubectl create -f pv-iscsi.yml
persistentvolume/iscsi-pv created
```
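A `PersistentVolumeClaim` with a matching access mode and a request no larger than the volume's capacity then binds to this `PersistentVolume` (a minimal sketch; the claim name is illustrative):

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: iscsi-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi
```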
### External provisioner and Kubernetes objects
An external provisioner is a piece of software running out of process from Kubernetes that is responsible for
creating and deleting Persistent Volumes. External provisioners monitor the Kubernetes API server for PV claims
and create PVs accordingly.
![iSCSI external provisioner architecture](/ee/ucp/images/ext-prov-arch.png)
When using an external provisioner, you must perform the following additional steps:
1. Configure external provisioning based on your storage provider. Refer to your storage provider documentation
for deployment information.
2. Define storage classes. Refer to your storage provider dynamic provisioning documentation
for configuration information.
3. Define a PersistentVolumeClaim (PVC) and a Pod.
- When you define a PVC to use the storage class, a PV is created and bound.
4. Start a Pod using the PVC that you defined.
**Note**: Some on-premises storage providers have external provisioners for PV provisioning to backend storage.
### Authentication
CHAP secrets are supported for both iSCSI discovery and session management.
### Troubleshooting
Frequently encountered issues are highlighted in the following list:
- The host might not have the iSCSI kernel modules loaded. To avoid this, always prepare your UCP worker nodes
by installing the iSCSI packages and the iSCSI kernel modules
*prior* to installing UCP. If worker nodes are not prepared correctly *prior* to UCP install, prepare the nodes
and restart the `ucp-kubelet` container for changes to take effect.
- Some hosts have `depmod` confusion. On some Linux distros, the kernel modules cannot be loaded
until the kernel sources are installed and `depmod` is run. If you experience problems with loading
kernel modules, make sure you run `depmod` after kernel module installation.
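For example, you can rebuild the module dependency list and then retry loading the module:

```
sudo depmod
sudo modprobe iscsi_tcp
```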
### Example
1. See https://github.com/kubernetes-incubator/external-storage/tree/master/iscsi/targetd for a reference external provisioner implementation based on `targetd`.
2. On your client machine with `kubectl` installed and the configuration specifying the IP address of a master node,
perform the following steps:
1. Create and apply the storage class:
1. Create a `StorageClass` object in a YAML file named `iscsi-storageclass.yaml`, as shown in the following example:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd
parameters:
targetPortal: 172.31.8.88
iqn: iqn.2019-01.org.iscsi.docker:targetd
iscsiInterface: default
volumeGroup: vg-targetd
initiators: iqn.2019-01.com.example:node1, iqn.2019-01.com.example:node2
chapAuthDiscovery: "false"
chapAuthSession: "false"
```
2. Use the `StorageClass` YAML file and run the following command.
```
$ kubectl apply -f iscsi-storageclass.yaml
storageclass "iscsi-targetd-vg-targetd" created
$ kubectl get sc
NAME PROVISIONER AGE
iscsi-targetd-vg-targetd iscsi-targetd 30s
```
2. Create and apply a PersistentVolumeClaim
1. Create a `PersistentVolumeClaim` object in a YAML file named `pvc-iscsi.yml` on the master node, open it in an editor, and include the following content:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: iscsi-claim
spec:
storageClassName: "iscsi-targetd-vg-targetd"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
```
Supported `accessModes` values for iSCSI include `ReadWriteOnce` and `ReadOnlyMany`. You can also change the requested
storage size by changing the `storage` value to a different value.
**Note**: The scheduler automatically ensures that pods with the same PersistentVolumeClaim run on the same
worker node.
2. Apply the `PersistentVolumeClaim` YAML file by running the following command on the master node:
```
kubectl apply -f pvc-iscsi.yml -n $NS
persistentvolumeclaim "iscsi-claim" created
```
3. Verify the `PersistentVolume` and `PersistentVolumeClaim` were created successfully and that
the `PersistentVolumeClaim` is bound to the correct volume:
```
$ kubectl get pv,pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
iscsi-claim Bound pvc-b9560992-24df-11e9-9f09-0242ac11000e 100Mi RWO iscsi-targetd-vg-targetd 1m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b9560992-24df-11e9-9f09-0242ac11000e 100Mi RWO Delete Bound default/iscsi-claim iscsi-targetd-vg-targetd 36s
```
4. Set up pods to use the `PersistentVolumeClaim` when binding to the `PersistentVolume`. Here
a `ReplicationController` is created and used to set up two replica pods running web servers that use
the `PersistentVolumeClaim` to mount the `PersistentVolume` onto a mountpath containing shared resources.
1. Create a ReplicationController object in a YAML file named `rc-iscsi.yml` and open it in an editor
to include the following content:
```
apiVersion: v1
kind: ReplicationController
metadata:
name: rc-iscsi-test
spec:
replicas: 2
selector:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- name: nginx
containerPort: 80
volumeMounts:
- name: iscsi
mountPath: "/usr/share/nginx/html"
volumes:
- name: iscsi
persistentVolumeClaim:
claimName: iscsi-claim
```
2. Use the ReplicationController YAML file and run the following command on the master node:
```
$ kubectl create -f rc-iscsi.yml
replicationcontroller "rc-iscsi-test" created
```
3. Verify pods were created:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rc-iscsi-test-05kdr 1/1 Running 0 9m
rc-iscsi-test-wv4p5 1/1 Running 0 9m
```

View File

@ -43,6 +43,10 @@ examples below. Keep the following notable differences in mind:
UID, GID, and mode are not supported for configs. Configs are currently only
accessible by administrators and users with `system` access within the
container.
- On Windows, create or update a service using `--credential-spec` with the `config://<config-name>` format.
This passes the gMSA credentials file directly to nodes before a container starts. No gMSA credentials are written
to disk on worker nodes. For more information, refer to [Deploy services to a swarm](/engine/swarm/services/).
## How Docker manages configs

View File

@ -73,7 +73,6 @@ examples below. Keep the following notable differences in mind:
accessible by administrators and users with `system` access within the
container.
## How Docker manages secrets
When you add a secret to the swarm, Docker sends the secret to the swarm manager

View File

@ -94,6 +94,44 @@ This passes the login token from your local client to the swarm nodes where the
service is deployed, using the encrypted WAL logs. With this information, the
nodes are able to log into the registry and pull the image.
### Provide credential specs for managed service accounts
In Enterprise Edition 3.0, security is improved through the centralized distribution and management of Group Managed Service Account (gMSA) credentials using Docker Config functionality. Swarm now allows using a Docker Config as a gMSA credential spec, which reduces the burden of distributing credential specs to the nodes on which they are used.
**Note**: This option is only applicable to services using Windows containers.
Credential spec files are applied at runtime, eliminating the need for host-based credential spec files or registry entries; no gMSA credentials are written to disk on worker nodes. You can make credential specs available to Docker Engine running swarm kit worker nodes before a container starts. When deploying a service using a gMSA-based config, the credential spec is passed directly to the runtime of containers in that service.
The `--credential-spec` must be one of the following formats:
- `file://<filename>`: The referenced file must be present in the `CredentialSpecs` subdirectory in the docker data directory, which defaults to `C:\ProgramData\Docker\` on Windows. For example, specifying `file://spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`.
- `registry://<value-name>`: The credential spec is read from the Windows registry on the daemon's host.
- `config://<config-name>`: The config name is automatically converted to the config ID in the CLI.
The credential spec contained in the specified `config` is used.
The following simple example retrieves the gMSA name and JSON contents from your Active Directory (AD) instance:
```
name="mygmsa"
contents="{...}"
echo $contents > contents.json
```
Make sure that the nodes to which you are deploying are correctly configured for the gMSA.
To use a config as a credential spec, create a Docker config from a credential spec file named `credspec.json`.
You can specify any name for the `config`.
```
docker config create credspec credspec.json
```
Now you can create a service using this credential spec. Specify the `--credential-spec` flag with the config name:
```
docker service create --credential-spec="config://credspec" <your image>
```
Your service uses the gMSA credential spec when it starts, but unlike a typical Docker config (used by passing the `--config` flag), the credential spec is not mounted into the container.
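To confirm that the credential spec is attached to the service, you can inspect it (a sketch; the template path assumes the standard `docker service inspect` JSON layout):

```
$ docker service inspect --format '{{ json .Spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec }}' myservice
```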
## Update a service
You can change almost everything about an existing service using the

View File

@ -38,7 +38,7 @@ free-to-use, hosted Registry, plus additional features (organization accounts,
automated builds, and more).
Users looking for a commercially supported version of the Registry should look
into [Docker Trusted Registry](/ee/dtr/).
## Requirements