Merge branch 'master' into ENGDOCS-268-create-note-for-Enterprise-docs

DocKor 2020-01-06 17:47:59 +01:00 committed by GitHub
commit c83b3e0792
39 changed files with 5040 additions and 1233 deletions

File diff suppressed because it is too large.


@ -654,7 +654,7 @@ auto-magically bump the version of the software in your container.
Each `ENV` line creates a new intermediate layer, just like `RUN` commands. This
means that even if you unset the environment variable in a future layer, it
still persists in this layer and its value can be dumped. You can test this by
creating a Dockerfile like the following, and then building it.
```Dockerfile
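# A minimal sketch reconstructing the test described above; the variable
# name and value are illustrative.
FROM alpine
ENV ADMIN_USER="mark"
RUN echo $ADMIN_USER > ./mark
RUN unset ADMIN_USER
```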


@ -1,26 +0,0 @@
---
title: Docker for IBM Cloud
description: Docker for IBM Cloud has been deprecated. Check Docker Certified Infrastructure
redirect_from:
- /docker-for-ibm-cloud/administering-swarms/
- /docker-for-ibm-cloud/binding-services/
- /docker-for-ibm-cloud/cli-ref/
- /docker-for-ibm-cloud/deploy/
- /docker-for-ibm-cloud/dtr-ibm-cos/
- /docker-for-ibm-cloud/faqs/
- /docker-for-ibm-cloud/ibm-registry/
- /docker-for-ibm-cloud/index/
- /docker-for-ibm-cloud/load-balancer/
- /docker-for-ibm-cloud/logging/
- /docker-for-ibm-cloud/opensource/
- /docker-for-ibm-cloud/persistent-data-volumes/
- /docker-for-ibm-cloud/quickstart/
- /docker-for-ibm-cloud/registry/
- /docker-for-ibm-cloud/release-notes/
- /docker-for-ibm-cloud/scaling/
- /docker-for-ibm-cloud/why/
- /v17.12/docker-for-ibm-cloud/quickstart/
---
Docker for IBM Cloud has been replaced by
[Docker Certified Infrastructure](/ee/supported-platforms.md).

File diff suppressed because it is too large.


@ -1,18 +1,14 @@
---
description: Home page for Docker Enterprise documentation
keywords: Docker Enterprise, documentation, manual, guide, reference, api, CLI
title: Docker Enterprise
description: Learn about Docker Enterprise, the industry-leading container platform to securely build, share, and run any application, on any infrastructure.
keywords: Docker EE, Docker Enterprise, UCP, DTR, orchestration, cluster, Kubernetes, CaaS
redirect_from:
- /enterprise/
- /manuals/
---
>{% include enterprise_label_shortform.md %}
The Docker Enterprise platform is the leading container platform for continuous, high-velocity innovation. Docker is the only independent container platform that enables developers to seamlessly build and share any application — from legacy to modern — and operators to securely run them anywhere - from hybrid cloud to the edge.
## Docker Enterprise platform
Docker Enterprise is a secure, scalable, and supported container platform for building and
orchestrating applications across multi-tenant Linux, Windows Server 2016, and Windows Server 2019.
The Docker Enterprise platform is the leading container platform for continuous, high-velocity innovation. Docker Enterprise is the only independent container platform that enables developers to seamlessly build and share any application — from legacy to modern — and operators to securely run them anywhere, from hybrid cloud to the edge.
Docker Enterprise enables deploying highly available workloads using either the Docker Kubernetes Service or Docker Swarm. Docker Enterprise automates many of the tasks that orchestration requires, like provisioning pods, containers, and cluster
resources. Self-healing components ensure that Docker Enterprise clusters remain highly available.
@ -29,17 +25,9 @@ cluster and applications through a single interface.
![](images/docker-ee-overview-1.png){: .with-border}
## Docker Enterprise features
Docker Enterprise provides multi-architecture orchestration using the Docker Kubernetes Service and
Docker Swarm orchestrators. Docker Enterprise enables a secure software supply chain, with policy-based image
promotion, image mirroring between registries - including Docker Hub, and signing & scanning enforcement for container images.
### Docker Kubernetes Service
The Docker Kubernetes Service fully supports all Docker Enterprise features, including
role-based access control, LDAP/AD integration, image scanning and signing enforcement policies,
and security policies.
The Docker Kubernetes Service fully supports all Docker Enterprise features, including role-based access control, LDAP/AD integration, image scanning and signing enforcement policies, and security policies.
Docker Kubernetes Service features include:
@ -182,12 +170,3 @@ KubeDNS is running at https://54.200.115.43:6443/api/v1/namespaces/kube-system/s
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
## Docker Context
A new Docker CLI plugin called `docker context` is available with client version 19.03.0. `docker context` helps manage connections to multiple environments so you do not have to remember and type out connection strings. [Read more](../engine/reference/commandline/context/) about `docker context`.
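For example, a minimal sketch of the workflow (the context name and SSH host are placeholders, not values from this page):

```bash
# Create a context that points at a remote Docker engine over SSH
docker context create my-remote --docker "host=ssh://admin@remote-host"

# Switch to it; subsequent commands target the remote endpoint
docker context use my-remote
docker info
```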
## Where to go next
- [Supported platforms](supported-platforms.md)
- [Docker Enterprise architecture](docker-ee-architecture.md)

ee/overview.md Normal file

@ -0,0 +1,168 @@
---
title: Docker Enterprise
description: Learn about Docker Enterprise, the industry-leading container platform to securely build, share, and run any application, on any infrastructure.
keywords: Docker Enterprise, UCP, DTR, orchestration, cluster, Kubernetes
---
The Docker Enterprise platform is the leading container platform for continuous, high-velocity innovation. Docker Enterprise is the only independent container platform that enables developers to seamlessly build and share any application — from legacy to modern — and operators to securely run them anywhere, from hybrid cloud to the edge.
Docker Enterprise enables deploying highly available workloads using either the Docker Kubernetes Service or Docker Swarm. Docker Enterprise automates many of the tasks that orchestration requires, like provisioning pods, containers, and cluster
resources. Self-healing components ensure that Docker Enterprise clusters remain highly available.
Role-based access control (RBAC) applies to Kubernetes and Swarm orchestrators, and
communication within the cluster is secured with TLS.
[Docker Content Trust](/engine/security/trust/content_trust/) is enforced
for images on all of the orchestrators.
Docker Enterprise includes Docker Universal Control Plane (UCP), the
cluster management solution from Docker. UCP can be installed
on-premises or in your public cloud of choice, and helps manage your
cluster and applications through a single interface.
![](images/docker-ee-overview-1.png){: .with-border}
### Docker Kubernetes Service
The Docker Kubernetes Service fully supports all Docker Enterprise features, including role-based access control, LDAP/AD integration, image scanning and signing enforcement policies, and security policies.
Docker Kubernetes Service features include:
- Kubernetes orchestration full feature set
- CNCF Certified Kubernetes conformance
- Kubernetes app deployment via UCP web UI or CLI (`kubectl`)
- Compose stack deployment for Swarm and Kubernetes apps (`docker stack deploy`)
- Role-based access control for Kubernetes workloads
- Blue-Green deployments, for load balancing to different app versions
- Ingress Controllers with Kubernetes L7 routing
- [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) to define a set of conditions that a pod must run with in order to be accepted into the system
- Note: Pod Security Policies currently have `Beta` status in Kubernetes 1.14
- Container Storage Interface (CSI) support
- iSCSI support for Kubernetes
- Non-disruptive Docker Enterprise platform upgrades (blue-green upgrades)
- Experimental features (planned for full GA in subsequent Docker Enterprise releases):
- Kubernetes-native ingress (Istio)
In addition, UCP integrates with Kubernetes by using admission controllers,
which enable:
- Authenticating user client bundle certificates when communicating directly
with the Kubernetes API server
- Authorizing requests via the UCP role-based access control model
- Assigning nodes to a namespace by automatically injecting a `NodeSelector`
into workloads via admission control
- Keeping all nodes in both Kubernetes and Swarm orchestrator inventories
- Fine-grained access control and privilege escalation prevention without
the `PodSecurityPolicy` admission controller
- Resolving images of deployed workloads automatically, and accepting or
rejecting images based on UCP's signing-policy feature
The default Docker Enterprise installation includes both Kubernetes and Swarm
components across the cluster, so every newly joined worker node is ready
to schedule Kubernetes or Swarm workloads.
### Orchestration platform features
![](images/docker-ee-overview-4.png){: .with-border}
- Docker Enterprise manager nodes are both Swarm managers and Kubernetes masters,
to enable high availability
- Allocate worker nodes for Swarm or Kubernetes workloads (or both)
- Single pane of glass for monitoring apps
- Enhanced Swarm hostname routing mesh with Interlock 2.0
- One platform-wide management plane: secure software supply chain, secure
multi-tenancy, and secure and highly available node management
### Secure supply chain
![](images/docker-ee-overview-3.png){: .with-border}
- DTR support for the Docker App format, based on the [CNAB](https://cnab.io) specification
- Note: Docker Apps can be deployed to clusters managed by UCP, where they will be displayed as _Stacks_
- Image signing and scanning of Kubernetes and Swarm images and Docker Apps for validating and verifying content
- Image promotion with mirroring between registries as well as Docker Hub
- Define policies for automating image promotions across the app development
lifecycle of Kubernetes and Swarm apps
### Centralized cluster management
With Docker, you can join thousands of physical or virtual machines
together to create a cluster, allowing you to deploy your
applications at scale. Docker Enterprise extends the functionality provided by Docker
Engine to make it easier to manage your cluster from a centralized place.
You can manage and monitor your container cluster using a graphical web interface.
### Deploy, manage, and monitor
With Docker Enterprise, you can manage all of the infrastructure
resources you have available, like nodes, volumes, and networks, from a central console.
You can also deploy and monitor your applications and services.
### Built-in security and access control
Docker Enterprise has its own built-in authentication mechanism with role-based access
control (RBAC), so that you can control who can access and make changes to your
cluster and applications. Also, Docker Enterprise authentication integrates with LDAP
services and supports SAML and SCIM to proactively synchronize with authentication providers.
[Learn about role-based access control](./ucp/authorization/). You can also opt to enable [PKI authentication](./enable-client-certificate-authentication/) to use client certificates, rather than username and password.
![](images/docker-ee-overview-2.png){: .with-border}
Docker Enterprise integrates with Docker Trusted Registry so that you can keep the
Docker images you use for your applications behind your firewall, where they
are safe and can't be tampered with. You can also enforce security policies and only allow running applications
that use Docker images you know and trust.
#### Windows Application Security
Windows applications typically require Active Directory authentication in order to communicate with other services on the network. Container-based applications use Group Managed Service Accounts (gMSA) to provide this authentication. Docker Swarm fully supports the use of gMSAs with Windows containers.
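With Swarm, this is wired up through a credential spec at service-creation time; the following is a hedged sketch in which the service name and spec file are placeholders:

```bash
# Attach a gMSA credential spec to a Windows service
docker service create \
  --name iis-app \
  --credential-spec file://webapp-gmsa.json \
  mcr.microsoft.com/windows/servercore/iis
```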
## Docker Enterprise and the CLI
Docker Enterprise exposes the standard Docker API, so you can continue using the tools
that you already know, [including the Docker CLI client](./ucp/user-access/cli/),
to deploy and manage your applications.
For example, you can use the `docker info` command to check the
status of a Swarm managed by Docker Enterprise:
```bash
docker info
```
This produces output similar to the following:
```bash
Containers: 38
Running: 23
Paused: 0
Stopped: 15
Images: 17
Server Version: 17.06
...
Swarm: active
NodeID: ocpv7el0uz8g9q7dmw8ay4yps
Is Manager: true
ClusterID: tylpv1kxjtgoik2jnrg8pvkg6
Managers: 1
```
## Use the Kubernetes CLI
Docker Enterprise exposes the standard Kubernetes API, so you can use [kubectl
to manage your Kubernetes workloads](./ucp/user-access/cli/):
```bash
kubectl cluster-info
```
This produces output similar to the following:
```bash
Kubernetes master is running at https://54.200.115.43:6443
KubeDNS is running at https://54.200.115.43:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```


@ -1,5 +1,5 @@
---
title: Docker Enterprise release notes
title: Install or upgrade Docker Enterprise components
description: Learn about the new features, bug fixes, and breaking changes for Docker Enterprise.
keywords: engine enterprise, ucp, dtr, desktop enterprise, whats new, release notes
---
@ -10,28 +10,7 @@ This page provides information about Docker Enterprise 3.0. For
detailed information about each enterprise component, refer to the individual component release notes
pages listed in the following **Docker Enterprise components install and upgrade** section.
## Whats New?
| Feature | Component | Component version |
|---------|-----------|-------------------|
| [Group Managed Service Accounts (gMSA)](/engine/swarm/services.md#gmsa-for-swarm) | UCP | 3.2.0 |
| [Open Security Controls Assessment Language (OSCAL)](/compliance/oscal/) | UCP | 3.2.0 |
| [Container storage interface (CSI)](/ee/ucp/kubernetes/storage/use-csi/) | UCP | 3.2.0 |
| [Internet Small Computer System Interface (iSCSI)](/ee/ucp/kubernetes/storage/use-iscsi/) | UCP | 3.2.0 |
| [System for Cross-domain Identity Management (SCIM)](/ee/ucp/admin/configure/integrate-scim/) | UCP | 3.2.0 |
| [Pod Security Policies](/ee/ucp/kubernetes/pod-security-policies/) | UCP | 3.2.0 |
| [Docker Registry CLI (Experimental)](/engine/reference/commandline/registry/) | DTR | 2.7.0 |
| [App Distribution](/ee/dtr/user/manage-applications/) | DTR | 2.7.0 |
| [Client certificate-based Authentication](/ee/enable-client-certificate-authentication/) | DTR and UCP|2.7.0 (DTR) and 3.2.0 (UCP)|
| [Application Designer](/ee/desktop/app-designer/) | Docker Desktop Enterprise | 0.1.4 |
| [Docker App (Experimental)](/app/working-with-app/) |CLI | 0.8.0 |
| [Docker Assemble (Experimental)](/assemble/install/) | CLI | 0.36.0 |
| [Docker Buildx (Experimental)](/buildx/working-with-buildx/)| CLI | 0.2.2 |
| [Docker Cluster](/cluster/) | CLI | 1.0.0 |
| [Docker Template CLI (Experimental)](/app-template/working-with-template/) | CLI | 0.1.4 |
## Docker Enterprise components install and upgrade
This page provides information about Docker Enterprise 3.0. For detailed information about each enterprise component, refer to the individual component release notes below.
| Component Release Notes | Version | Install | Upgrade |
|---------|-----------|-------------------|-------------- |
@ -40,8 +19,4 @@ pages listed in the following **Docker Enterprise components install and upgrade
| [DTR](/ee/dtr/release-notes/) | 2.7 | [Install DTR](/ee/dtr/admin/install/) | [Upgrade DTR](/ee/dtr/admin/upgrade/) |
| [Docker Desktop Enterprise](/ee/desktop/release-notes/) | 2.1.0 |Install Docker Desktop Enterprise [Mac](/ee/desktop/admin/install/mac/), [Windows](/ee/desktop/admin/install/windows/) | Upgrade Docker Desktop Enterprise [Mac](/ee/desktop/admin/install/mac/), [Windows](/ee/desktop/admin/install/windows/) |
Refer to the [Compatibility Matrix](https://success.docker.com/article/compatibility-matrix) and the [Maintenance Lifecycle](https://success.docker.com/article/maintenance-lifecycle) for compatibility and software maintenance details.


@ -8,7 +8,7 @@ keywords: cluster, node, label, swarm, metadata
With Docker UCP, you can add labels to your nodes. Labels are metadata that
describe the node, like its role (development, QA, production), its region
(US, EU, APAC), or the kind of disk (hdd, ssd). Once you have labeled your
(US, EU, APAC), or the kind of disk (HDD, SSD). Once you have labeled your
nodes, you can add deployment constraints to your services, to ensure they
are scheduled on a node with a specific label.
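The same flow can also be scripted with the Swarm CLI; the following is a hedged sketch in which the node ID, label value, and service details are illustrative:

```bash
# Attach the ssd label to a node
docker node update --label-add ssd=true node-1

# Deploy a service that is only scheduled on nodes carrying that label
docker service create --name db \
  --constraint 'node.labels.ssd==true' \
  mysql:5.7
```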
@ -24,7 +24,7 @@ to organize access to your cluster.
## Apply labels to a node
In this example we'll apply the `ssd` label to a node. Then we'll deploy
In this example, we'll apply the `ssd` label to a node. Next, we'll deploy
a service with a deployment constraint to make sure the service is always
scheduled to run on a node that has the `ssd` label.
@ -33,7 +33,7 @@ scheduled to run on a node that has the `ssd` label.
3. In the nodes list, select the node to which you want to apply labels.
4. In the details pane, select the edit node icon in the upper-right corner to edit the node.
![](../../images/add-labels-to-cluster-nodes-3.png)
![](../../images/v32-edit-node.png)
5. In the **Edit Node** page, scroll down to the **Labels** section.
6. Select **Add Label**.
@ -115,13 +115,13 @@ click **Done**.
6. Navigate to the **Nodes** page, and click the node that has the
`disk` label. In the details pane, click the **Inspect Resource**
dropdown and select **Containers**.
drop-down menu and select **Containers**.
![](../../images/use-constraints-in-stack-deployment-2.png)
Dismiss the filter and navigate to the **Nodes** page. Click a node that
doesn't have the `disk` label. In the details pane, click the
**Inspect Resource** dropdown and select **Containers**. There are no
**Inspect Resource** drop-down menu and select **Containers**. There are no
WordPress containers scheduled on the node. Dismiss the filter.
## Add a constraint to a service by using the UCP web UI


@ -15,31 +15,23 @@ UCP 3.0 used its own role-based access control (RBAC) for Kubernetes clusters. N
Kubernetes RBAC is turned on by default for Kubernetes clusters when customers upgrade to UCP 3.1. See [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the v1.11 documentation for more information about Kubernetes role-based access control.
Starting with UCP 3.1, Kubernetes & Swarm roles have separate views. You can view all the roles for a particular cluster under **Access Control** then **Roles**. Select Kubernetes or Swarm to view the specific roles for each.
Starting with UCP 3.1, Kubernetes and Swarm roles have separate views. You can view all the roles for a particular cluster under **Access Control** then **Roles**. Select Kubernetes or Swarm to view the specific roles for each.
## Creating roles
You create Kubernetes roles either through the CLI using `kubectl` or through the UCP web interface.
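For the CLI path, a minimal `kubectl` sketch might look like the following (the role name, verbs, and namespace are illustrative):

```bash
# Create a namespaced role that can read pods
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods \
  --namespace=dev
```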
To create a Kuberenetes role in the UCP web interface:
1. Go to the UCP web interface.
2. Navigate to the **Access Control**.
3. In the lefthand menu, select **Roles**.
![Kubernetes Grants in UCP](/ee/ucp/images/kube-rbac-roles.png)
4. Select the **Kubernetes** tab at the top of the window.
5. Select **Create** to create a Kubernetes role object in the following dialog:
To create a Kubernetes role in the UCP web interface:
1. From the UCP UI, select **Access Control**.
2. From the left navigation menu, select **Roles**.
![Kubernetes Grants in UCP](/ee/ucp/images/v32roles.png)
3. Select the **Kubernetes** tab at the top of the window.
4. Select **Create** to create a Kubernetes role object in the following dialog:
![Kubernetes Role Creation in UCP](/ee/ucp/images/kube-role-create.png)
6. Select a namespace from the **Namespace** dropdown list. Selecting a specific namespace creates a role for use in that namespace, but selecting all namespaces creates a `ClusterRole` where you can create rules for cluster-scoped Kubernetes resources as well as namespaced resources.
7. Provide the YAML for the role, either by entering it in the **Object YAML** editor or select **Click to upload a .yml file** to choose and upload a .yml file instead.
8. When you have finished specifying the YAML, Select **Create** to complete role creation.
5. Select a namespace from the **Namespace** drop-down list. Selecting a specific namespace creates a role for use in that namespace, but selecting all namespaces creates a `ClusterRole` where you can create rules for cluster-scoped Kubernetes resources as well as namespaced resources.
6. Provide the YAML for the role, either by entering it in the **Object YAML** editor or by selecting **Click to upload a .yml file** to choose and upload a .yml file instead.
7. When you have finished specifying the YAML, select **Create** to complete role creation.
## Creating role grants
@ -48,31 +40,22 @@ Kubernetes provides 2 types of role grants:
- `ClusterRoleBinding` which applies to all namespaces
- `RoleBinding` which applies to a specific namespace
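On the CLI, these map to `kubectl create clusterrolebinding` and `kubectl create rolebinding`. A hedged sketch of the namespaced form, with illustrative names:

```bash
# Bind an existing role to a user within a single namespace
kubectl create rolebinding read-pods \
  --role=pod-reader \
  --user=jane \
  --namespace=dev
```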
To create a grant for a Kuberenetes role in the UCP web interface:
1. Go to the UCP web UI.
2. Navigate to the **Access Control**.
3. In the lefthand menu, select **Grants**.
![Kubernetes Grants in UCP](/ee/ucp/images/kube-rbac-grants.png)
4. Select the **Kubernetes** tab at the top of the window. All grants to Kubernetes roles can be viewed in the Kubernetes tab.
5. Select **Create New Grant** to start the Create Role Binding wizard and create a new grant for a given user, team or service.
To create a grant for a Kubernetes role in the UCP web interface:
1. From the UCP UI, select **Access Control**.
2. From the left navigation menu, select **Grants**.
![Kubernetes Grants in UCP](/ee/ucp/images/v32grants.png)
3. Select the **Kubernetes** tab at the top of the window. All grants to Kubernetes roles can be viewed in the Kubernetes tab.
4. Select **Create New Grant** to start the Create Role Binding wizard and create a new grant for a given user, team, or service account.
![Kubernetes Create Role Binding in UCP](../../images/kube-grant-wizard.png)
6. Select the subject type. Your choices are:
5. Select the subject type. Your choices are:
- **All Users**
- **Organizations**
- **Service account**
7. To create a user role binding, select a username from the **Users** dropdown list then select **Next**.
8. Select a resource set for the subject. The **default** namespace is automatically selected. To use a different namespace, select the **Select Namespace** button next to the desired namespace. For `Cluster Role Binding`, slide the **Apply Role Binding to all namespaces** selector to the right.
6. To create a user role binding, select a username from the **Users** drop-down list then select **Next**.
7. Select a resource set for the subject. The **default** namespace is automatically selected. To use a different namespace, select the **Select Namespace** button next to the desired namespace. For `Cluster Role Binding`, slide the **Apply Role Binding to all namespaces** selector to the right.
![Kubernetes Create User Role Binding in UCP](/ee/ucp/images/kube-grant-rolebinding.png)
9. Select **Next** to continue.
10. Select the **Cluster Role** from the dropdown list. If you create a `ClusterRoleBinding` (by selecting **Apply Role Binding to all namespaces**) then you may only select ClusterRoles. If you select a specific namespace, you can choose any role from that namespace or any ClusterRole.
8. Select **Next** to continue.
9. Select the **Cluster Role** from the drop-down list. If you create a `ClusterRoleBinding` (by selecting **Apply Role Binding to all namespaces**) then you may only select ClusterRoles. If you select a specific namespace, you can choose any role from that namespace or any ClusterRole.
![Kubernetes Select Cluster Role in UCP](/ee/ucp/images/kube-grant-roleselect.png)
11. Select **Create** to complete creating the grant.
10. Select **Create** to complete creating the grant.


@ -12,10 +12,10 @@ individual users, administrators or software components that have affected the
system. They are focused on external user/agent actions and security rather than
understanding state or events of the system itself.
Audit Logs capture all HTTP actions (GET, PUT, POST, PATCH, DELETE) to all UCP
Audit logs capture all HTTP actions (GET, PUT, POST, PATCH, DELETE) to all UCP
API, Swarm API and Kubernetes API endpoints that are invoked (except for the
ignored list) and sent to Docker Engine via stdout. Creating audit logs is a UCP
component that integrates with Swarm, K8s, and UCP APIs.
component that integrates with Swarm, Kubernetes, and UCP APIs.
## Logging levels
@ -37,6 +37,8 @@ logging levels are provided:
- **Request**: includes all fields from the Metadata level as well as the
request payload.
> Note
>
> Once UCP audit logging has been enabled, audit logs can be found within the
> container logs of the `ucp-controller` container on each UCP manager node.
> Please ensure you have a
@ -48,10 +50,10 @@ request payload.
You can use audit logs to help with the following use cases:
- **Historical Troubleshooting** - Audit logs are helpful in determining a
sequence of past events that explain why an issue occured.
- **Historical troubleshooting** - Audit logs are helpful in determining a
sequence of past events that explain why an issue occurred.
- **Security Analysis and Auditing** - Security is one of the primary uses for
- **Security analysis and auditing** - Security is one of the primary uses for
audit logs. A full record of all user interactions with the container
infrastructure gives your security team full visibility to questionable or
attempted unauthorized accesses.
@ -63,27 +65,24 @@ generate chargeback information.
created by the event, alerting features can be built on top of event tools that
generate alerts for ops teams (PagerDuty, OpsGenie, Slack, or custom solutions).
## Enabling UCP Audit Logging
## Enabling UCP audit logging
UCP audit logging can be enabled via the UCP web user interface, the UCP API, or
via the UCP configuration file.
### Enabling UCP Audit Logging via UI
### Enabling UCP audit logging using the web UI
1) Log in to the **UCP** Web User Interface
2) Navigate to **Admin Settings**
3) Select **Audit Logs**
4) In the **Configure Audit Log Level** section, select the relevant logging
1. Log in to the **UCP** Web User Interface
2. Navigate to **Admin Settings**
3. Select **Audit Logs**
4. In the **Configure Audit Log Level** section, select the relevant logging
level.
![Enabling Audit Logging in UCP](../../images/auditlogging.png){: .with-border}
5) Click **Save**
5. Click **Save**
### Enabling UCP Audit Logging via API
### Enabling UCP audit logging using the API
1. Download the UCP client bundle. See [Download client bundle from the command line](https://success.docker.com/article/download-client-bundle-from-the-cli).
@ -110,11 +109,10 @@ level.
curl --cert ${DOCKER_CERT_PATH}/cert.pem --key ${DOCKER_CERT_PATH}/key.pem --cacert ${DOCKER_CERT_PATH}/ca.pem -k -H "Content-Type: application/json" -X PUT --data $(cat auditlog.json) https://ucp-domain/api/ucp/config/logging
```
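The `auditlog.json` payload referenced above is a small JSON document. A sketch, assuming the `auditLevel` field accepted by the UCP logging endpoint:

```bash
# Create the payload the curl command above sends; supported levels are
# "", "metadata", and "request"
cat > auditlog.json <<'EOF'
{
    "auditLevel": "metadata"
}
EOF
```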
### Enabling UCP Audit Logging via Config File
### Enabling UCP audit logging using the configuration file
Enabling UCP audit logging via the UCP Configuration file can be done before
or after a UCP installation. Following the UCP Configuration file documentation
[here](./ucp-configuration-file/).
Enabling UCP audit logging via the UCP configuration file can be done before
or after a UCP installation. Refer to the [UCP configuration file](./ucp-configuration-file/) topic for more information.
The section of the UCP configuration file that controls UCP audit logging is:
@ -126,20 +124,23 @@ The section of the UCP configuration file that controls UCP auditing logging is:
The supported variables for `level` are `""`, `"metadata"` or `"request"`.
> Important: The `support_dump_include_audit_logs` flag specifies whether user identification information from the ucp-controller container logs is included in the support dump. To prevent this information from being sent with the support dump, make sure that `support_dump_include_audit_logs` is set to `false`. When disabled, the support dump collection tool filters out any lines from the `ucp-controller` container logs that contain the substring `auditID`.
> Note
>
> The `support_dump_include_audit_logs` flag specifies whether user identification information from the ucp-controller container logs is included in the support dump. To prevent this information from being sent with the support dump, make sure that `support_dump_include_audit_logs` is set to `false`. When disabled, the support dump collection tool filters out any lines from the `ucp-controller` container logs that contain the substring `auditID`.
{: .important}
## Accessing Audit Logs
## Accessing audit logs
The audit logs are exposed today through the `ucp-controller` logs. You can
access these logs locally through the Docker cli or through an external
access these logs locally through the Docker CLI or through an external
container logging solution, such as [ELK](https://success.docker.com/article/elasticsearch-logstash-kibana-logging).
### Accessing Audit Logs via the Docker Cli
### Accessing audit logs using the Docker CLI
1) Source a UCP Client Bundle
To access audit logs using the Docker CLI:
2) Run `docker logs` to obtain audit logs. In the following example,
1. Source a UCP Client Bundle
2. Run `docker logs` to obtain audit logs. In the following example,
we are tailing the command to show the last log entry.
```
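# With a client bundle sourced, tail the last entry from the controller logs.
# The exact container reference may differ (for example, <node-name>/ucp-controller).
docker logs ucp-controller --tail 1
```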
@ -210,7 +211,7 @@ Information for the following API endpoints is redacted from the audit logs for
- `/swarm/join` (POST)
- `/swarm/update` (POST)
- `/auth/login` (POST)
- Kube secrete create/update endpoints
- Kubernetes secret create/update endpoints
## Where to go next


@ -26,14 +26,14 @@ both on the same node. Although you can choose to mix orchestrator types on the
same node, this isn't recommended for production deployments because of the
likelihood of resource contention.
To change a node's orchestrator type from the **Edit node** page:
To change a node's orchestrator type from the Edit Node page:
1. Log in to the Docker Enterprise web UI with an administrator account.
2. Navigate to the **Nodes** page, and click the node that you want to assign
to a different orchestrator.
3. In the details pane, click **Configure** and select **Details** to open
the **Edit node** page.
4. In the **Orchestrator properties** section, click the orchestrator type
the Edit Node page.
4. In the **Orchestrator Properties** section, click the orchestrator type
for the node.
5. Click **Save** to assign the node to the selected orchestrator.
@ -75,7 +75,7 @@ you'll be in the same situation as if you were running in `Mixed` node.
> committed to another workload that was scheduled by the other orchestrator.
> When this happens, the node could run out of memory or other resources.
>
> For this reason, we recommend against mixing orchestrators on a production
> For this reason, we recommend not mixing orchestrators on a production
> node.
{: .warning}
@ -88,10 +88,10 @@ To set the orchestrator for new nodes:
1. Log in to the Docker Enterprise web UI with an administrator account.
2. Open the **Admin Settings** page, and in the left pane, click **Scheduler**.
3. Under **Set orchestrator type for new nodes** click **Swarm** or **Kubernetes**.
3. Under **Set Orchestrator Type for New Nodes**, click **Swarm** or **Kubernetes**.
4. Click **Save**.
![](../../images/join-nodes-to-cluster-1.png){: .with-border}
![](../../images/v32scheduler.png){: .with-border}
From now on, when you join a node to the cluster, new workloads on the node
are scheduled by the specified orchestrator type. Existing nodes in the cluster


@ -18,19 +18,15 @@ Calico / Azure integration.
## Docker UCP Networking
Docker UCP configures the Azure IPAM module for Kubernetes to allocate IP
addresses for Kubernetes pods. The Azure IPAM module requires each Azure virtual
machine which is part of the Kubernetes cluster to be configured with a pool of IP
addresses for Kubernetes pods. The Azure IPAM module requires each Azure VM which is part of the Kubernetes cluster to be configured with a pool of IP
addresses.
There are two options for provisoning IPs for the Kubernetes cluster on Azure:
There are two options for provisioning IPs for the Kubernetes cluster on Azure:
- _An automated mechanism provided by UCP which allows for IP pool configuration and maintenance
for standalone Azure virtual machines._ This service runs within the
`calico-node` daemonset and provisions 128 IP addresses for each
node by default. For information on customizing this value, see [Adjust the IP count value](#adjust-the-ip-count-value).
- _Manual provision of additional IP address for each Azure virtual machine._ This
could be done through the Azure Portal, the Azure CLI `$ az network nic ip-config create`,
or an ARM template. You can find an example of an ARM template
- **An automated mechanism provided by UCP which allows for IP pool configuration and maintenance for standalone Azure virtual machines (VMs).** This service runs within the
`calico-node` daemonset and provisions 128 IP addresses for each node by default. For information on customizing this value, see [Adjust the IP count value](#adjust-the-ip-count-value).
- **Manual provisioning of additional IP addresses for each Azure VM.** This
could be done through the Azure Portal, the Azure CLI `$ az network nic ip-config create`, or an ARM template. You can find an example of an ARM template
[here](#manually-provision-ip-address-pools-as-part-of-an-azure-virtual-machine-scale-set).
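For the manual route, a hypothetical Azure CLI invocation might look like the following (the resource group, NIC, and configuration names are placeholders):

```bash
# Add one more IP configuration to an existing VM NIC
az network nic ip-config create \
  --resource-group myResourceGroup \
  --nic-name myVMNic \
  --name ipconfig2
```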
## Azure Prerequisites
@ -86,7 +82,7 @@ objects are being deployed.
For UCP to integrate with Microsoft Azure, all Linux UCP Manager and Linux UCP
Worker nodes in your cluster need an identical Azure configuration file,
`azure.json`. Place this file within `/etc/kubernetes` on each host. Since the
configution file is owned by `root`, set its permissions to `0644` to ensure
configuration file is owned by `root`, set its permissions to `0644` to ensure
the container user has read access.
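As a sketch, placing the file on a node might look like this (run on each Linux manager and worker node):

```bash
# Copy the Azure configuration file into place with the expected permissions
sudo mkdir -p /etc/kubernetes
sudo cp azure.json /etc/kubernetes/azure.json
sudo chmod 0644 /etc/kubernetes/azure.json
```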
The following is an example template for `azure.json`. Replace `***` with real values, and leave the other
@ -112,11 +108,11 @@ There are some optional parameters for Azure deployments:
- `primaryAvailabilitySetName` - The Worker Nodes availability set.
- `vnetResourceGroup` - The Virtual Network Resource group, if your Azure Network objects live in a
seperate resource group.
separate resource group.
- `routeTableName` - If you have defined multiple Route tables within
an Azure subnet.
See the [Kubernetes Azure Cloud Provider Config](https://github.com/kubernetes/cloud-provider-azure/blob/master/docs/cloud-provider-config.md) for more details on this configuration file.
See the [Kubernetes Azure Cloud Provider Configuration](https://github.com/kubernetes/cloud-provider-azure/blob/master/docs/cloud-provider-config.md) for more details on this configuration file.
## Guidelines for IPAM Configuration
@ -127,37 +123,35 @@ See the [Kubernetes Azure Cloud Provider Config](https://github.com/kubernetes/c
> installation process.
The subnet and the virtual network associated with the primary interface of the
Azure virtual machines need to be configured with a large enough address
Azure VMs need to be configured with a large enough address
prefix/range. The number of required IP addresses depends on the workload and
the number of nodes in the cluster.
For example, in a cluster of 256 nodes, make sure that the address space of the subnet and the
virtual network can allocate at least 128 * 256 IP addresses, in order to run a maximum of 128 pods
concurrently on a node. This would be ***in addition to*** initial IP allocations to virtual machine
NICs (network interfaces) during Azure resource creation.
concurrently on a node. This would be ***in addition to*** initial IP allocations to VM
network interface cards (NICs) during Azure resource creation.
Accounting for IP addresses that are allocated to NICs during virtual machine bring-up, set
the address space of the subnet and virtual network to `10.0.0.0/16`. This
ensures that the network can dynamically allocate at least 32768 addresses,
plus a buffer for initial allocations for primary IP addresses.
Accounting for IP addresses that are allocated to NICs during VM bring-up, set the address space of the subnet and virtual network to `10.0.0.0/16`. This
ensures that the network can dynamically allocate at least 32768 addresses, plus a buffer for initial allocations for primary IP addresses.
> Azure IPAM, UCP, and Kubernetes
> Note
>
> The Azure IPAM module queries an Azure virtual machine's metadata to obtain
> a list of IP addresses which are assigned to the virtual machine's NICs. The
> The Azure IPAM module queries an Azure VM's metadata to obtain
> a list of IP addresses which are assigned to the VM's NICs. The
> IPAM module allocates these IP addresses to Kubernetes pods. You configure the
> IP addresses as `ipConfigurations` in the NICs associated with a virtual machine
> IP addresses as `ipConfigurations` in the NICs associated with a VM
> or scale set member, so that Azure IPAM can provide them to Kubernetes when
> requested.
{: .important}
## Manually provision IP address pools as part of an Azure virtual machine scale set
## Manually provision IP address pools as part of an Azure VM scale set
Configure IP Pools for each member of the virtual machine scale set during provisioning by
Configure IP Pools for each member of the VM scale set during provisioning by
associating multiple `ipConfigurations` with the scale set's
`networkInterfaceConfigurations`. Here's an example `networkProfile`
`networkInterfaceConfigurations`. The following is an example `networkProfile`
configuration for an ARM template that configures pools of 32 IP addresses
for each virtual machine in the virtual machine scale set.
for each VM in the VM scale set.
```json
"networkProfile": {
@ -217,26 +211,25 @@ for each virtual machine in the virtual machine scale set.
### Adjust the IP Count Value
During a UCP installation, a user can alter the number of Azure IP addresses
UCP will automatically provision for pods. By default UCP will provision 128
addresses, from the same Azure Subnet as the hosts, for each Virtual Machine in
the cluster. However if you have manually attached additional IP addresses
to the Virtual Machines (via an ARM Template, Azure CLI or Azure Portal) or you
UCP will automatically provision for pods. By default, UCP will provision 128
addresses, from the same Azure Subnet as the hosts, for each VM in the cluster. However, if you have manually attached additional IP addresses
to the VMs (via an ARM Template, Azure CLI or Azure Portal) or you
are deploying into a small Azure subnet (less than /16), an `--azure-ip-count`
flag can be used at install time.
> Note: Do not set the `--azure-ip-count` variable to a value of less than 6 if
> you have not manually provisioned additional IP addresses for each Virtual
> Machine. The UCP installation will need at least 6 IP addresses to allocate
> to the core UCP components that run as Kubernetes pods. That is in addition
> to the Virtual Machine's private IP address.
> Note
>
> Do not set the `--azure-ip-count` variable to a value of less than 6 if
> you have not manually provisioned additional IP addresses for each VM. The UCP installation will need at least 6 IP addresses to allocate
> to the core UCP components that run as Kubernetes pods. This is in addition
> to the VM's private IP address.
Below are some example scenarios which require the `--azure-ip-count` variable
to be defined.
**Scenario 1 - Manually Provisioned Addresses**
If you have manually provisioned additional IP addresses for each Virtual
Machine, and want to disable UCP from dynamically provisioning more IP
If you have manually provisioned additional IP addresses for each VM, and want to prevent UCP from dynamically provisioning more IP
addresses for you, then you would pass `--azure-ip-count 0` into the UCP
installation command.
@ -246,16 +239,16 @@ If you want to reduce the number of IP addresses dynamically allocated from 128
addresses to a custom value due to:
- Primarily using the Swarm Orchestrator
- Deploying UCP on a small Azure subnet (for example /24)
- Deploying UCP on a small Azure subnet (for example, /24)
- Planning to run a small number of Kubernetes pods on each node.
For example if you wanted to provision 16 addresses per virtual machine, then
For example, if you wanted to provision 16 addresses per VM, then
you would pass `--azure-ip-count 16` into the UCP installation command.
If you need to adjust this value post-installation, see
[instructions](https://docs.docker.com/ee/ucp/admin/configure/ucp-configuration-file/) on how to download the UCP
configuration file, change the value, and update the configuration via the API.
If you reduce the value post-installation, existing virtual machines will not
If you reduce the value post-installation, existing VMs will not
be reconciled, and you will have to manually edit the IP count in Azure.
### Install UCP
@ -264,13 +257,13 @@ Run the following command to install UCP on a manager node. The `--pod-cidr`
option maps to the IP address range that you have configured for the Azure
subnet, and the `--host-address` maps to the private IP address of the master
node. Finally, if you want to adjust the number of IP addresses provisioned to
each virtual machine pass `--azure-ip-count`.
each VM, pass `--azure-ip-count`.
> **Note**
> Note
>
> The `pod-cidr` range must match the Azure Virtual Network's Subnet
> attached to the hosts. For example, if the Azure Virtual Network had the range
> `172.0.0.0/16` with Virtual Machines provisioned on an Azure Subnet of
> `172.0.0.0/16` with VMs provisioned on an Azure Subnet of
> `172.0.1.0/24`, then the Pod CIDR should also be `172.0.1.0/24`.
```bash
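# A hedged sketch of the install command this section describes; the image
# tag, addresses, and IP count below are placeholders, not values from this page
docker container run --rm -it \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.2.0 install \
  --host-address <node-private-ip> \
  --pod-cidr <azure-subnet-range> \
  --azure-ip-count <ip-count> \
  --cloud-provider Azure \
  --interactive
```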


@ -16,12 +16,12 @@ of the [requirements UCP needs to run](system-requirements.md).
Also, you need to ensure that all nodes, physical and virtual, are running
the same version of Docker Enterprise.
> Cloud Providers
> Note
>
> If you are installing on a public cloud platform, there is cloud specific UCP
> If you are installing UCP on a public cloud platform, refer to the cloud-specific UCP
> installation documentation. For [Microsoft
> Azure](./cloudproviders/install-on-azure/) this is **mandatory**, for
> [AWS](./cloudproviders/install-on-aws/) this is optional.
> Azure](./cloudproviders/install-on-azure/), this is **mandatory**. For
> [AWS](./cloudproviders/install-on-aws/), this is optional.
{: .important}
## Step 2: Install Docker Enterprise on all nodes
@ -86,12 +86,12 @@ To install UCP:
with SELinux enabled, check the [reference
documentation](/reference/ucp/3.2/cli/install.md).
> Custom Container Networking Interface (CNI) plugins
> Note
>
> UCP will install [Project Calico](https://docs.projectcalico.org/v3.7/introduction/)
> for container-to-container communication for Kubernetes. A platform operator may
> choose to install an alternative CNI plugin, such as Weave or Flannel. Please see
>[Install an unmanaged CNI plugin](/ee/ucp/kubernetes/install-cni-plugin/).
> [Install an unmanaged CNI plugin](/ee/ucp/kubernetes/install-cni-plugin/) for more information.
{: .important}
## Step 5: License your installation
@ -125,7 +125,7 @@ To join manager nodes to the swarm,
1. In the UCP web UI, navigate to the **Nodes** page, and click the
**Add Node** button to add a new node.
![](../../images/nodes-page-ucp.png){: .with-border}
![](../../images/v32-add-node.png){: .with-border}
2. In the **Add Node** page, check **Add node as a manager** to turn this node
into a manager and replicate UCP for high-availability.
@ -143,12 +143,11 @@ To join manager nodes to the swarm,
contact UCP. The joining node should be able to contact itself at this
address. The format is `interface:port` or `ip:port`.
Click the copy icon ![](../../images/copy-swarm-token.png) to copy the
`docker swarm join` command that nodes use to join the swarm.
5. Click the copy icon to copy the `docker swarm join` command that nodes use to join the swarm.
![](../../images/add-node-ucp.png){: .with-border}
5. For each manager node that you want to join to the swarm, log in using
6. For each manager node that you want to join to the swarm, log in using
ssh and run the join command that you copied. After the join command
completes, the node appears on the **Nodes** page in the UCP web UI.
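The copied command is a standard swarm join. Illustratively (the token and address are placeholders supplied by the UCP UI):

```bash
# Run on each node being joined to the cluster
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377
```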


@ -1,6 +1,6 @@
---
title: Uninstall UCP
description: Learn how to uninstall a Docker Universal Control Plane swarm.
description: Learn how to uninstall a Docker Universal Control Plane.
keywords: UCP, uninstall, install, Docker EE
---
@ -10,19 +10,14 @@ Docker UCP is designed to scale as your applications grow in size and usage.
You can [add and remove nodes](../configure/scale-your-cluster.md) from the
cluster to make it scale to your needs.
You can also uninstall Docker Universal Control Plane from your cluster. In this
case the UCP services are stopped and removed, but your Docker Engines will
continue running in swarm mode. You applications will continue running normally.
You can also uninstall UCP from your cluster. In this case, the UCP services are stopped and removed, but your Docker Engines will continue running in swarm mode. Your applications will continue running normally.
If you wish to remove a single node from the UCP cluster, you should instead
[remove that node from the cluster](../configure/scale-your-cluster.md).
After you uninstall UCP from the cluster, you'll no longer be able to enforce
role-based access control to the cluster, or have a centralized way to monitor
and manage the cluster.
After uninstalling UCP from the cluster, you will no longer be able to join new
nodes using `docker swarm join`, unless you reinstall UCP.
role-based access control (RBAC) to the cluster, or have a centralized way to monitor
and manage the cluster. After uninstalling UCP from the cluster, you will no longer be able to join new nodes using `docker swarm join`, unless you reinstall UCP.
To uninstall UCP, log in to a manager node using ssh, and run the following
command:
@ -37,12 +32,15 @@ docker container run --rm -it \
This runs the uninstall command in interactive mode, so that you are prompted
for any necessary configuration values.
> **Important**: If the `uninstall-ucp` command fails, you can run the following commands to manually uninstall UCP:
If the `uninstall-ucp` command fails, you can run the following commands to manually uninstall UCP:
```bash
# Run the following command on one manager node to remove remaining UCP services
docker service rm $(docker service ls -f name=ucp- -q)
# Run the following command on each manager node to remove remaining UCP containers
docker container rm -f $(docker container ps -a -f name=ucp- -f name=k8s_ -q)
# Run the following command on each manager node to remove remaining UCP volumes
docker volume rm $(docker volume ls -f name=ucp -q)
```
@ -51,8 +49,7 @@ The UCP configuration is kept in case you want to reinstall UCP with the same
configuration. If you want to also delete the configuration, run the uninstall
command with the `--purge-config` option.
[Check the reference
documentation](/reference/ucp/3.0/cli/index.md) to learn the options available.
Refer to the [reference documentation](/reference/ucp/3.0/cli/index.md) to learn the options available.
Once the uninstall command finishes, UCP is completely removed from all the
nodes in the cluster. You don't need to run the command again from other nodes.


@ -24,51 +24,69 @@ A common workflow for creating grants has four steps:
- Group cluster **resources** into Swarm collections or Kubernetes namespaces.
- Create **grants** by combining subject + role + resource set.
## Kubernetes grants
## Creating grants
To create a grant:
1. Log in to the UCP web UI.
2. Click **Access Control**.
3. Click **Grants**.
4. In the Grants window, select **Kubernetes** or **Swarm**.
### Kubernetes grants
With Kubernetes orchestration, a grant is made up of *subject*, *role*, and
*namespace*.
> Note
>
> This section assumes that you have created objects for the grant: subject, role,
> namespace.
{: .important}
To create a Kubernetes grant (role binding) in UCP:
1. Click **Grants** under **Access Control**.
2. Click **Create Role Binding**.
3. Click **Namespaces** under **Kubernetes**.
4. Find the desired namespace and click **Select Namespace**.
5. On the **Roles** tab, select a role.
6. On the **Subjects** tab, select a user, team, organization, or service
account to authorize.
1. Click **Create Role Binding**.
2. Under Subject, select **Users**, **Organizations**, or **Service Account**.
- For Users, select the user from the pull-down menu (these should have already been created as objects).
- For Organizations, select the Organization and Team (optional) from the pull-down menu.
- For Service Account, select the Namespace and Service Account from the pull-down menu.
3. Click **Next** to save your selections.
4. Under Resource Set, toggle the **Apply Role Binding to all namespaces (Cluster Role Binding)** switch.
5. Click **Next**.
6. Under Role, select a cluster role.
7. Click **Create**.
## Swarm grants
### Swarm grants
With Swarm orchestration, a grant is made up of *subject*, *role*, and
*collection*.
> Note
>
> This section assumes that you have created objects to grant: teams/users,
> roles (built-in or custom), and a collection.
![](../images/ucp-grant-model-0.svg){: .with-border}
![](../images/ucp-grant-model.svg){: .with-border}
To create a grant in UCP:
To create a Swarm grant in UCP:
1. Click **Grants** under **Access Control**.
2. Click **Swarm**
3. Click **Create Grant**.
4. In the **Select Subject Type** section, select **Users** or **Organizations**.
5. Click **View Children** until you get to the desired collection and **Select**.
6. On the **Roles** tab, select a role.
7. On the **Subjects** tab, select a user, team, or organization to authorize.
1. Click **Create Grant**.
2. Under Subject, select **Users** or **Organizations**.
- For Users, select a user from the pull-down menu.
- For Organizations, select the Organization and Team (optional) from the pull-down menu.
3. Click **Next**.
4. Under Resource Set, click **View Children** until you get to the desired collection.
5. Click **Select Collection**.
6. Click **Next**.
7. Under Role, select a role from the pull-down menu.
8. Click **Create**.
> Note
>
> By default, all new users are placed in the `docker-datacenter` organization.
> To apply permissions to all Docker EE users, create a grant with the
> `docker-datacenter` org as a subject.
> To apply permissions to all Docker Enterprise users, create a grant with the
> `docker-datacenter` organization as a subject.
{: .important}
## Where to go next

Binary image files added (not shown): ee/ucp/images/v32grants.png (287 KiB), ee/ucp/images/v32nodes.png (242 KiB), ee/ucp/images/v32roles.png (242 KiB), ee/ucp/images/v32users.png (144 KiB), and four images whose paths are not shown (170 KiB, 243 KiB, 350 KiB, and 187 KiB).


@ -14,37 +14,35 @@ solution from Docker. You install it on-premises or in your virtual private
cloud, and it helps you manage your Docker cluster and applications through a
single interface.
![](images/overview-1.png){: .with-border}
![](images/v32dashboard.png){: .with-border}
## Centralized cluster management
With Docker, you can join up to thousands of physical or virtual machines
together to create a container cluster that allows you to deploy your
applications at scale. Docker Universal Control Plane extends the
functionality provided by Docker to make it easier to manage your cluster
from a centralized place.
applications at scale. UCP extends the functionality provided by Docker to make it easier to manage your cluster from a centralized place.
You can manage and monitor your container cluster using a graphical UI.
![](images/overview-2.png){: .with-border}
![](images/v32nodes.png){: .with-border}
## Deploy, manage, and monitor
With Docker UCP, you can manage from a centralized place all of the computing
With UCP, you can manage from a centralized place all of the computing
resources you have available, like nodes, volumes, and networks.
You can also deploy and monitor your applications and services.
## Built-in security and access control
Docker UCP has its own built-in authentication mechanism and integrates with
UCP has its own built-in authentication mechanism and integrates with
LDAP services. It also has role-based access control (RBAC), so that you can
control who can access and make changes to your cluster and applications.
[Learn about role-based access control](authorization/index.md).
![](images/overview-3.png){: .with-border}
![](images/v32users.png){: .with-border}
Docker UCP integrates with Docker Trusted Registry so that you can keep the
UCP integrates with Docker Trusted Registry (DTR) so that you can keep the
Docker images you use for your applications behind your firewall, where they
are safe and can't be tampered with.
@ -64,7 +62,7 @@ cluster that's managed by UCP:
docker info
```
This command produces the output that you expect from the Docker EE Engine:
This command produces the output that you expect from Docker Enterprise:
```bash
Containers: 38
@ -85,4 +83,4 @@ Managers: 1
## Where to go next
- [Install UCP](admin/install/index.md)
- [Docker EE Platform 2.0 architecture](/ee/docker-ee-architecture.md)
- [Docker Enterprise architecture](/ee/docker-ee-architecture.md)


@ -1,6 +1,6 @@
---
title: Layer 7 routing overview
description: Learn how to route layer 7 traffic to your Swarm services
description: Learn how to route Layer 7 traffic to your Swarm services
keywords: routing, UCP, interlock, load balancing
---
@ -8,15 +8,18 @@ keywords: routing, UCP, interlock, load balancing
Application-layer (Layer 7) routing is the application routing and load balancing (ingress routing) system included with Docker Enterprise for Swarm orchestration. Interlock architecture takes advantage of the underlying Swarm components to provide scalable Layer 7 routing and Layer 4 VIP mode functionality.
> Note
>
> The HTTP routing mesh functionality was redesigned in UCP 3.0 for greater security and flexibility. The functionality was also renamed to “Layer 7 routing” to make it easier for new users to get started.
Interlock is specific to the Swarm orchestrator. If you're trying to route
traffic to your Kubernetes applications, check
[layer 7 routing with Kubernetes.](../kubernetes/layer-7-routing.md)
traffic to your Kubernetes applications, refer to [Layer 7 routing with Kubernetes](../kubernetes/layer-7-routing.md) for more information.
Interlock uses the Docker Remote API to automatically configure extensions such as NGINX or HAProxy for application traffic. Interlock is designed for:
- Full integration with Docker (Swarm, Services, Secrets, Configs)
- Enhanced configuration (context roots, TLS, zero downtime deploy, rollback)
- Support for external load balancers (nginx, haproxy, F5, etc) via extensions
- Support for external load balancers (NGINX, HAProxy, F5, etc.) via extensions
- Least privilege for extensions (no Docker API access)
Docker Engine running in swarm mode has a routing mesh, which makes it easy
@ -30,22 +33,22 @@ mesh. Even though the service is running on a single node, users can access
WordPress using the domain name or IP of any of the nodes that are part of
the swarm.
UCP extends this one step further with layer 7 layer routing (also known as
application layer 7), allowing users to access Docker services using domain names
UCP extends this one step further with Layer 7 routing (also known as
application layer routing), allowing users to access Docker services using domain names
instead of IP addresses. This functionality is made available through the Interlock component.
![layer 7 routing](../images/interlock-overview-2.svg)
Using Interlock in the previous example, users can access the WordPress service using
`http://wordpress.example.org`. Interlock takes care of routing traffic to
the right place.
the correct place.
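A minimal sketch of that setup (the network name and hostname are illustrative); Interlock discovers the service through its `com.docker.lb.*` labels:

```bash
# Create an overlay network and label the service so that Interlock
# routes requests for wordpress.example.org to the service's port 80.
docker network create -d overlay demo

docker service create \
  --name wordpress \
  --network demo \
  --label com.docker.lb.hosts=wordpress.example.org \
  --label com.docker.lb.network=demo \
  --label com.docker.lb.port=80 \
  wordpress:latest
```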
## Terminology
- Cluster: A group of compute resources running Docker
- Swarm: A Docker cluster running in Swarm mode
- Upstream: A backend container that serves an application
- Proxy Service: A service that provides load balancing and proxying (such as Nginx)
- Proxy Service: A service that provides load balancing and proxying (such as NGINX)
- Extension Service: A helper service that configures the proxy service
- Service Cluster: An Interlock extension+proxy service combination
- gRPC: A high-performance RPC framework
@ -53,27 +56,26 @@ the right place.
![Interlock Design](../images/interlock-design.png)
## Interlock services
Interlock has
three primary services:
Interlock has three primary services:
* **Interlock**: This is the central piece of the layer 7 routing solution.
* **Interlock**: This is the central piece of the Layer 7 routing solution.
The core service is responsible for interacting with the Docker Remote API and building
an upstream configuration for the extensions. It uses the Docker API to monitor events, and manages the extension and
proxy services. This configuration is served over a gRPC API that the
extensions are configured to access.
* **Interlock-extension**: This is a helper service that queries the Interlock gRPC API for the
upstream configuration. The extension service uses this to configure
the proxy service. For proxy services that use files such as Nginx or HAProxy, the
the proxy service. For proxy services that use files such as NGINX or HAProxy, the
extension service generates the file and sends it to Interlock using the gRPC API. Interlock
then updates the corresponding Docker Config object for the proxy service.
* **Interlock-proxy**: This is a proxy/load-balancing service that handles requests for the upstream application services. These
are configured using the data created by the corresponding extension service. By default, this service is a containerized
NGINX deployment.
Interlock manages both extension and proxy service updates for both configuration changes
and application service deployments. No intervention from the operator is required.
The following image shows the default Interlock configuration, once you enable layer 7
The following image shows the default Interlock configuration, once you enable Layer 7
routing in UCP:
![](../images/interlock-architecture-1.svg)
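Once Layer 7 routing is enabled, these components appear as regular swarm services. A quick check from a manager node (the service names shown are the UCP defaults):

```bash
# List the Interlock core, extension, and proxy services
docker service ls --filter name=ucp-interlock
```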
@ -89,7 +91,7 @@ components run on manager nodes.
Layer 7 routing in UCP supports:
* **High availability**: All the components used for layer 7 routing leverage
* **High availability**: All the components used for Layer 7 routing leverage
Docker swarm for high availability, and handle failures gracefully.
* **Automatic configuration**: Interlock uses the Docker API for configuration. You do not have to manually
update or restart anything to make services available. UCP monitors your services and automatically
@ -99,20 +101,20 @@ operator to individually customize and scale the proxy layer to handle user requ
* **TLS**: You can leverage Docker secrets to securely manage TLS certificates
and keys for your services. Both TLS termination and TCP passthrough are supported; see the sketch after this list.
* **Context-based routing**: Interlock supports advanced application request routing by context or path.
* **Host mode networking**: By default, layer 7 routing leverages the Docker Swarm
* **Host mode networking**: By default, Layer 7 routing leverages the Docker Swarm
routing mesh, but Interlock supports running proxy and application services in "host" mode networking, allowing
you to bypass the routing mesh completely. This is beneficial if you want
maximum performance for your applications.
* **Security**: The layer 7 routing components that are exposed to the outside
world run on worker nodes. Even if they are compromised, your cluster aren't.
* **Security**: The Layer 7 routing components that are exposed to the outside
world run on worker nodes. Even if they are compromised, your cluster is not affected.
* **SSL**: Interlock leverages Docker Secrets to securely store and use SSL certificates for services. Both
SSL termination and TCP passthrough are supported.
* **Blue-Green and Canary Service Deployment**: Interlock supports blue-green service deployment, allowing an operator to deploy a new application while the current version is still serving traffic. Once traffic to the new application is verified, the operator
can scale the older version to zero. If there is a problem, the operation is easily reversible.
* **Service Cluster Support**: Interlock supports multiple extension+proxy combinations, allowing operators to partition load
balancing resources for uses such as region-based or organization-based load balancing.
* **Least Privilege**: Interlock supports (and recommends) being deployed where the load balancing
proxies do not need to be colocated with a Swarm manager. This makes the
deployment more secure by not exposing the Docker API access to the extension or proxy services.
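As a sketch of the TLS and SSL support mentioned above (the hostname, file names, and demo image are illustrative), certificates travel as Docker secrets that the service references through labels:

```bash
# Store the certificate and key as Docker secrets
docker secret create demo.example.org.cert cert.pem
docker secret create demo.example.org.key key.pem

# Reference the secrets by name; Interlock attaches them to the
# proxy, which terminates TLS for demo.example.org.
docker service create \
  --name demo \
  --network demo \
  --label com.docker.lb.hosts=demo.example.org \
  --label com.docker.lb.port=8080 \
  --label com.docker.lb.ssl_cert=demo.example.org.cert \
  --label com.docker.lb.ssl_key=demo.example.org.key \
  ehazlett/docker-demo
```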
## Next steps

View File

@ -1,7 +1,6 @@
---
title: Implement application redirects
description: Learn how to implement redirects using swarm services and the
layer 7 routing solution for UCP.
description: Learn how to implement redirects using swarm services and the Layer 7 routing solution for UCP.
keywords: routing, proxy, redirects, interlock
---
@ -9,6 +8,10 @@ keywords: routing, proxy, redirects, interlock
The following example publishes a service and configures a redirect from `old.local` to `new.local`.
> Note
>
> There is currently a limitation where redirects do not work if a service is configured for TLS passthrough in Interlock proxy.
First, create an overlay network so that service traffic is isolated and secure:
```bash
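# Sketch of the elided command; the network name "demo" is illustrative
docker network create -d overlay demo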

View File

@ -8,15 +8,15 @@ redirect_from:
>{% include enterprise_label_shortform.md %}
Platform operators can provide persistent storage for workloads running on
Docker Enterprise and Microsoft Azure by using Azure Files. You can either
pre-provision Azure Files Shares to be consumed by
Kubernetes Pods, or you can use the Azure Kubernetes integration to dynamically
provision Azure Files Shares on demand.
## Prerequisites
This guide assumes you have already provisioned a UCP environment on
Microsoft Azure. The cluster must be provisioned after meeting all
prerequisites listed in [Install UCP on
Azure](/ee/ucp/admin/install/install-on-azure.md).
@ -31,8 +31,8 @@ Access](/ee/ucp/user-access/cli/).
You can use existing Azure Files Shares or manually provision new ones to
provide persistent storage for Kubernetes Pods. Azure Files Shares can be
manually provisioned in the Azure Portal using ARM Templates or using the Azure
CLI. The following example uses the Azure CLI to manually provision
Azure Files Shares.
### Creating an Azure Storage Account
@ -42,7 +42,7 @@ a Storage Account, you can skip to [Creating an Azure Files
Share](#creating-an-azure-file-share).
> **Note**: the Azure Kubernetes Driver does not support Azure Storage Accounts
> created using Azure Premium Storage.
```bash
$ REGION=ukwest
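$ RG=myresourcegroup
$ SA=mystorageaccount

# Sketch: create a general-purpose account with standard storage
# (Premium SKUs are not supported by the Azure Kubernetes driver).
$ az group create --name $RG --location $REGION
$ az storage account create \
    --name $SA \
    --resource-group $RG \
    --location $REGION \
    --sku Standard_LRS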
@ -83,7 +83,7 @@ $ az storage share create \
After a File Share has been created, you must load the Azure Storage
Account Access key as a Kubernetes Secret into UCP. This provides access to
the file share when Kubernetes attempts to mount the share into a pod. This key
can be found in the Azure Portal or retrieved with the Azure CLI, as shown in the following example:
```bash
$ SA=mystorageaccount
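# Sketch: read the first account key, then load it into UCP as a
# Kubernetes Secret so Pods can mount the share.
$ KEY=$(az storage account keys list \
    --account-name $SA \
    --query "[0].value" --output tsv)

$ kubectl create secret generic azure-secret \
    --from-literal=azurestorageaccountname=$SA \
    --from-literal=azurestorageaccountkey=$KEY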
@ -142,7 +142,7 @@ Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/).
> Today, only the Standard Storage Class is supported when using the Azure
> Kubernetes Plugin. File shares using the Premium Storage Class will fail to
> mount.
```bash
$ cat <<EOF | kubectl create -f -
@ -158,6 +158,8 @@ mountOptions:
- gid=1000
parameters:
skuName: Standard_LRS
storageAccount: <existingstorageaccount> # Optional
location: <existingstorageaccountlocation> # Optional
EOF
```
@ -173,14 +175,13 @@ azurefile kubernetes.io/azure-file 1m
After you create a Storage Class, you can use Kubernetes
objects to dynamically provision Azure Files Shares. This is done using
Kubernetes [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#introduction).
Kubernetes uses an existing Azure Storage Account if one exists inside of the
Azure Resource Group. If an Azure Storage Account does not exist,
Kubernetes creates one.
The following example uses the standard storage class and creates a 5 GB Azure
File Share. Alter these values to fit your use case.
```bash
$ cat <<EOF | kubectl create -f -
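# Sketch of the elided claim: a 5 GB request against the "standard"
# storage class created above (kubectl reads this from the heredoc).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard
  resources:
    requests:
      storage: 5Gi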
@ -198,7 +199,7 @@ spec:
EOF
```
At this point, you should see a newly created Persistent Volume Claim and Persistent Volume:
```bash
$ kubectl get pvc
@ -240,6 +241,50 @@ spec:
EOF
```
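The bound volume can then be consumed by mounting the claim into a Pod. A minimal sketch, assuming the `azure-file-pvc` claim above (the Pod name and image are illustrative):

```bash
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: azure-file-demo
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: azure-file-pvc
EOF
```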
### Troubleshooting
When creating Persistent Volume Claims, the volume may constantly stay in a
`Pending` state.
```
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
azure-file-pvc Pending standard 32s
```
If that is the case, the `persistent-volume-binder` service account does not
have the relevant Kubernetes RBAC permissions. The provisioner needs these
permissions to create the Kubernetes secret that stores the Azure Files Storage Account key.
```
$ kubectl describe pvc azure-file-pvc
...
Warning ProvisioningFailed 7s (x3 over 37s) persistentvolume-controller
Failed to provision volume with StorageClass "standard": Couldn't create secret
secrets is forbidden: User "system:serviceaccount:kube-system:persistent-volume-binder"
cannot create resource "secrets" in API group "" in the namespace "default": access denied
```
To grant the `persistent-volume-binder` service account the relevant RBAC
permissions, create the following ClusterRoleBinding.
```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    subjectName: kube-system-persistent-volume-binder
  name: kube-system-persistent-volume-binder:cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: persistent-volume-binder
  namespace: kube-system
```
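Apply the binding, after which the pending claim should provision (the manifest file name is illustrative):

```bash
$ kubectl apply -f persistent-volume-binder-rbac.yaml
$ kubectl get pvc azure-file-pvc
```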
```
## Where to go next
- [Deploy an Ingress Controller on

View File

@ -434,9 +434,9 @@ leading to a gap in permissions checks. (ENGORC-2781)
| Component | Version |
| ----------- | ----------- |
| UCP | 3.1.12 |
| Kubernetes | 1.14.3 |
| Calico | 3.5.7 |
| Interlock | 2.4.0 |
| Kubernetes | 1.11.10 |
| Calico | 3.8.2 |
| Interlock | 3.0.0 |
| Interlock NGINX proxy | 1.14.2 |
## 3.1.11

View File

@ -1,33 +1,18 @@
---
title: Docker Engine release notes
description: Learn about the new features, bug fixes, and breaking changes for Docker Engine - Community and Enterprise
keywords: docker, docker engine, ee, ce, whats new, release notes
description: Learn about the new features, bug fixes, and breaking changes for Docker Engine - Community
keywords: docker, docker engine, ce, whats new, release notes
toc_min: 1
toc_max: 2
skip_read_time: true
redirect_from:
- /ee/engine/release-notes/
- /release-notes/docker-ce/
---
>{% include enterprise_label_shortform.md %}
This document describes the latest changes, additions, known issues, and fixes
for Docker Engine - Enterprise.
Docker Engine - Enterprise builds upon the corresponding Docker Engine -
Community that it references. Docker Engine - Enterprise includes enterprise
features as well as back-ported fixes (security-related and priority defects)
from the open source. It also incorporates defect fixes for environments in
which new features cannot be adopted as quickly for consistency and
compatibility reasons.
> **Note:**
> New in 18.09 is an aligned release model for Docker Engine - Community and
> Docker Engine - Enterprise. The new versioning scheme is YY.MM.x where x is an
> incrementing patch version. The enterprise engine is a superset of the
> community engine. They will ship concurrently with the same x patch version
> based on the same code base.
for Docker Engine - Community.
> **Note:**
> The client and container runtime are now in separate packages from the daemon

View File

@ -1,28 +1,33 @@
---
description: Home page for Get Docker
keywords: Docker, documentation, manual
keywords: Docker, download, documentation, manual
landing: true
title: Get Docker
---
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker's methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
You can download and install Docker on multiple platforms. Refer to the following section and choose the best installation path for you.
<div class="component-container">
<!--start row-->
<div class="row">
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="docker-for-mac/"> <img src="../images/apple_48.svg" alt="Docker Desktop for Mac"> </a>
<a href="docker-for-mac/install/"> <img src="../images/apple_48.svg" alt="Docker Desktop for Mac"> </a>
</div>
<h3 id="docker-for-mac"><a href="docker-for-mac/">Docker Desktop for Mac</a></h3>
<h3 id="docker-for-mac"><a href="docker-for-mac/install/">Docker Desktop for Mac</a></h3>
<p>A native application using the macOS sandbox security model which delivers all Docker tools to your Mac.</p>
</div>
</div>
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="docker-for-windows/"> <img src="../images/windows_48.svg" alt="Docker Desktop for Windows"> </a>
<a href="docker-for-windows/install/"> <img src="../images/windows_48.svg" alt="Docker Desktop for Windows"> </a>
</div>
<h3 id="docker-for-windows"><a href="docker-for-windows/">Docker Desktop for Windows</a></h3>
<h3 id="docker-for-windows/install/"><a href="docker-for-windows/install/">Docker Desktop for Windows</a></h3>
<p>A native Windows application which delivers all Docker tools to your Windows computer.</p>
</div>
</div>

View File

@ -57,7 +57,7 @@ redirect_from:
{% include_relative nav.html selected="1" %}
Welcome! We are excited that you want to learn Docker. The _Docker Get Started Tutorial_
Welcome! We are excited that you want to learn Docker. The _Docker Community QuickStart_
teaches you how to:
1. Set up your Docker environment (on this page)

View File

@ -9,9 +9,6 @@ skip_read_time: true
---
{% assign page.title = site.name %}
<div class="row">
<div markdown="1" class="col-xs-12 col-sm-12 col-md-12 col-lg-6 block">
## Get started with Docker
Try our multi-part walkthrough that covers writing your first app,
@ -20,23 +17,41 @@ production servers in the cloud. Total reading time is less than an hour.
[Get started with Docker](/get-started/){: class="button outline-btn"}
</div>
<div markdown="1" class="col-xs-12 col-sm-12 col-md-12 col-lg-6 block">
## Try Docker Enterprise
Run your solution in production with Docker Enterprise to get a
management dashboard, security scanning, LDAP integration, content signing,
multi-cloud support, and more. Click below to test-drive a running instance of
Docker Enterprise without installing anything.
[Try Docker Enterprise](https://trial.docker.com){: class="button outline-btn" onclick="ga('send', 'event', 'EE Trial Referral', 'Front Page', 'Click');"}
</div>
</div>
## Docker products
<div class="component-container">
<!--start row-->
<div class="row">
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="docker-for-mac/install/"> <img src="../images/apple_48.svg" alt="Docker Desktop for Mac"> </a>
</div>
<h3 id="docker-for-mac"><a href="docker-for-mac/install/">Docker Desktop for Mac</a></h3>
<p>A native application using the macOS sandbox security model which delivers all Docker tools to your Mac.</p>
</div>
</div>
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="docker-for-windows/install/"> <img src="../images/windows_48.svg" alt="Docker Desktop for Windows"> </a>
</div>
<h3 id="docker-for-windows"><a href="docker-for-windows/install/">Docker Desktop for Windows</a></h3>
<p>A native Windows application which delivers all Docker tools to your Windows computer.</p>
</div>
</div>
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="install/linux/ubuntu/"> <img src="../images/linux_48.svg" alt="Docker for Linux"> </a>
</div>
<h3 id="docker-for-linux"><a href="install/linux/ubuntu/">Docker for Linux</a></h3>
<p>Install Docker on a computer which already has a Linux distribution installed.</p>
</div>
</div>
</div>
</div>
<div class="row">
<div markdown="1" class="col-xs-12 col-sm-12 col-md-12 col-lg-6 block">
@ -58,47 +73,9 @@ channel for more predictability.
Designed for enterprise development and IT teams who build, ship, and run
business critical applications in production at scale. Integrated, certified,
and supported to provide enterprises with the most secure container platform in
the industry to modernize all applications. Docker Enterprise comes with enterprise
[add-ons](#docker-ee-add-ons) like Universal Control Plane (UCP) for managing and
orchestrating the container runtime, and Docker Trusted Registry (DTR) for storing and
securing images in an enterprise grade registry.
the industry to modernize all applications. Docker Enterprise comes with Universal Control Plane (UCP) for managing and orchestrating the container runtime, and Docker Trusted Registry (DTR) for storing and securing images in an enterprise grade registry.
[Learn more about Docker Enterprise products](/ee/supported-platforms/){: class="button outline-btn"}
[Learn more about Docker Enterprise](/ee/){: class="button outline-btn"}
</div>
</div><!-- end row -->
## Run Docker anywhere
<div class="component-container">
<!--start row-->
<div class="row">
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="docker-for-mac/"> <img src="../images/apple_48.svg" alt="Docker Desktop for Mac"> </a>
</div>
<h3 id="docker-for-mac"><a href="docker-for-mac/">Docker Desktop for Mac</a></h3>
<p>A native application using the macOS sandbox security model which delivers all Docker tools to your Mac.</p>
</div>
</div>
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="docker-for-windows/"> <img src="../images/windows_48.svg" alt="Docker Desktop for Windows"> </a>
</div>
<h3 id="docker-for-windows"><a href="docker-for-windows/">Docker Desktop for Windows</a></h3>
<p>A native Windows application which delivers all Docker tools to your Windows computer.</p>
</div>
</div>
<div class="col-sm-12 col-md-12 col-lg-4 block">
<div class="component">
<div class="component-icon">
<a href="install/linux/ubuntu/"> <img src="../images/linux_48.svg" alt="Docker for Linux"> </a>
</div>
<h3 id="docker-for-linux"><a href="install/linux/ubuntu/">Docker for Linux</a></h3>
<p>Install Docker on a computer which already has a Linux distribution installed.</p>
</div>
</div>
</div>
</div>

View File

@ -1,5 +1,5 @@
---
title: About Docker Engine - Community
title: Docker Engine overview
description: Lists the installation methods
keywords: docker, installation, install, Docker Engine - Community, Docker Engine - Enterprise, docker editions, stable, edge
redirect_from:
@ -17,129 +17,36 @@ redirect_from:
- /engine/installation/
- /en/latest/installation/
- /linux/
toc_max: 2
---
Docker Engine - Community is ideal for developers and small
teams looking to get started with Docker and experimenting with container-based
apps. Docker Engine - Community has three types of update channels, **stable**, **test**, and **nightly**:
Docker Engine is an open source containerization technology for building and
containerizing your applications. Docker Engine acts as a client-server
application with:
* A server with a long-running daemon process [`dockerd`](/engine/reference/commandline/dockerd/).
* APIs which specify interfaces that programs can use to talk to and
instruct the Docker daemon.
* A command line interface (CLI) client [`docker`](/engine/reference/commandline/cli/).
The CLI uses Docker APIs to control or interact with the Docker daemon
through scripting or direct CLI commands. Many other Docker applications use the
underlying API and CLI. The daemon creates and manages Docker objects, such as
images, containers, networks, and volumes.
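For instance, the same daemon answers both the CLI client and direct API calls; a quick sketch on a default Linux installation:

```bash
# Query the daemon's version through the CLI client...
docker version

# ...and through the Engine API on the local Unix socket.
curl --unix-socket /var/run/docker.sock http://localhost/version
```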
Docker Engine has three types of update channels, **stable**, **test**, and **nightly**:
* **Stable** gives you the latest releases for general availability.
* **Test** gives you pre-releases that are ready for testing before general availability.
* **Nightly** gives you the latest builds of work in progress for the next major release.
## Releases
For more information, see [Release channels](#release-channels).
For the Docker Engine - Community engine, the open
repositories [Docker Engine](https://github.com/docker/engine) and
[Docker Client](https://github.com/docker/cli) apply.
## Supported platforms
Releases of Docker Engine and Docker Client for general availability
are versioned using dotted triples. The components of this triple
are `YY.mm.<patch>` where the `YY.mm` component is referred to as the
year-month release. The version numbering format is chosen to illustrate the
release cadence; it does not guarantee SemVer, but instead indicates the desired
date for general availability. The version number may have additional information, such as
beta and release candidate qualifications. Such releases are considered
"pre-releases".
The cadence of the year-month releases is every 6 months starting with
the `18.09` release. The patch releases for a year-month release take
place as needed to address bug fixes during its support cycle.
Docker Engine - Community binaries for a release are available on [download.docker.com](https://download.docker.com/)
as packages for the supported operating systems. Docker Engine - Enterprise binaries are
available on the [Docker Hub](https://hub.docker.com/) for the supported operating systems. The
release channels are available for each of the year-month releases and
allow users to "pin" on a year-month release of choice. The release
channel also receives patch releases when they become available.
### Nightly builds
Nightly builds are created once per day from the master branch. The version
number for nightly builds takes the format:
0.0.0-YYYYmmddHHMMSS-abcdefabcdef
where the time is the commit time in UTC and the final suffix is the prefix
of the commit hash, for example `0.0.0-20180720214833-f61e0f7`.
These builds allow for testing from the latest code on the master branch. No
qualifications or guarantees are made for the nightly builds.
The release channel for these builds is called `nightly`.
### Pre-releases
In preparation for a new year-month release, a branch is created from
the master branch with format `YY.mm` when the milestones desired by
Docker for the release have reached feature completeness. Pre-releases
such as betas and release candidates are conducted from their respective release
branches. Patch releases and the corresponding pre-releases are performed
from within the corresponding release branch.
While pre-releases are done to assist in the stabilization process, no
guarantees are provided.
Binaries built for pre-releases are available in the test channel for
the targeted year-month release using the naming format `test-YY.mm`,
for example `test-18.09`.
### General availability
Year-month releases are made from a release branch diverged from the master
branch. The branch is created with format `<year>.<month>`, for example
`18.09`. The year-month name indicates the earliest possible calendar
month to expect the release to be generally available. All further patch
releases are performed from that branch. For example, once `v18.09.0` is
released, all subsequent patch releases are built from the `18.09` branch.
Binaries built from these releases are available in the stable channel
`stable-YY.mm`, for example `stable-18.09`, as well as the corresponding
test channel.
### Relationship between Docker Engine - Community and Docker Engine - Enterprise code
For a given year-month release, Docker releases both Docker Engine - Community and Docker Engine - Enterprise variants concurrently. Docker Engine - Enterprise is a superset of the code delivered in Docker Engine - Community. Docker maintains publicly visible repositories for the Docker Engine - Community code
as well as private repositories for the Docker Engine - Enterprise code. Automation (a bot) is used to keep the branches between Docker Engine - Community and Docker Engine - Enterprise in sync so that as features
and fixes are merged on the various branches in the Docker Engine - Community repositories (upstream), the corresponding Docker Engine - Enterprise repositories and branches are kept
in sync (downstream). While Docker and its partners make every effort
to minimize merge conflicts between Docker Engine - Community and Docker Engine - Enterprise, occasionally they will happen, and Docker will work hard to resolve them in a timely fashion.
## Next release
The activity for upcoming year-month releases is tracked in the milestones
of the repository.
## Support
Docker Engine - Community releases of a year-month branch are supported with patches
as needed for 7 months after the first year-month general availability
release. Docker Engine - Enterprise releases are supported for 24 months after the first
year-month general availability release.
This means bug reports and backports to release branches are assessed
until the end-of-life date.
After the year-month branch has reached end-of-life, the branch may be
deleted from the repository.
### Reporting security issues
The Docker maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
Please DO NOT file a public issue; instead send your report privately
to security@docker.com.
Security reports are greatly appreciated, and Docker will publicly thank you
for it. Docker also likes to send gifts — if you're into swag, make sure to
let us know. Docker currently does not offer a paid security bounty program
but is not ruling it out in the future.
### Supported platforms
Docker Engine - Community is available on multiple platforms. Use the following tables
to choose the best installation path for you.
Docker Engine is available on a variety of Linux platforms, [Mac](/docker-for-mac/install/)
and [Windows](/docker-for-windows/install/) through Docker Desktop, Windows
Server, and as a static binary installation. Find your preferred operating
system below.
#### Desktop
@ -162,6 +69,63 @@ to choose the best installation path for you.
| [Fedora]({{ install-prefix-ce }}/fedora/) | [{{ green-check }}]({{ install-prefix-ce }}/fedora/) | | [{{ green-check }}]({{ install-prefix-ce }}/fedora/) | | |
| [Ubuntu]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) | [{{ green-check }}]({{ install-prefix-ce }}/ubuntu/) |
## Release channels
### Stable
Year-month releases are made from a release branch diverged from the master
branch. The branch is created with format `<year>.<month>`, for example
`18.09`. The year-month name indicates the earliest possible calendar
month to expect the release to be generally available. All further patch
releases are performed from that branch. For example, once `v18.09.0` is
released, all subsequent patch releases are built from the `18.09` branch.
### Test
In preparation for a new year-month release, a branch is created from
the master branch with format `YY.mm` when the milestones desired by
Docker for the release have reached feature completeness. Pre-releases
such as betas and release candidates are conducted from their respective release
branches. Patch releases and the corresponding pre-releases are performed
from within the corresponding release branch.
> **Note:**
> While pre-releases are done to assist in the stabilization process, no
> guarantees are provided.
Binaries built for pre-releases are available in the test channel for
the targeted year-month release using the naming format `test-YY.mm`,
for example `test-18.09`.
### Nightly
Nightly builds give you the latest builds of work in progress for the next major
release. They are created once per day from the master branch with the version
format:
0.0.0-YYYYmmddHHMMSS-abcdefabcdef
where the time is the commit time in UTC and the final suffix is the prefix
of the commit hash, for example `0.0.0-20180720214833-f61e0f7`.
These builds allow for testing from the latest code on the master branch.
> **Note:**
> No qualifications or guarantees are made for the nightly builds.
The release channel for these builds is called `nightly`.
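On package-based installations, the channel is part of the repository definition. A sketch for Ubuntu (replace `test` with `stable` or `nightly` as needed):

```bash
# Subscribe the host to the "test" channel of the Docker repository
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   test"
```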
## Support
Docker Engine releases of a year-month branch are supported with patches as needed for 7 months after the first year-month general availability
release.
This means bug reports and backports to release branches are assessed
until the end-of-life date.
After the year-month branch has reached end-of-life, the branch may be
deleted from the repository.
### Backporting
Backports to the Docker products are prioritized by the Docker company. A
@ -176,16 +140,22 @@ or by adding a comment to the PR.
Patch releases are always backward compatible with their year-month version.
## Not covered
### Licensing
As a general rule, anything not mentioned in this document may change in any release.
Docker is licensed under the Apache License, Version 2.0. See
[LICENSE](https://github.com/moby/moby/blob/master/LICENSE) for the full
license text.
## Exceptions
## Reporting security issues
Exceptions are made in the interest of __security patches__. If a break
in release procedure or product functionality is required, it will
be communicated clearly, and the solution will be considered against
total impact.
The Docker maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
Please DO NOT file a public issue; instead send your report privately
to security@docker.com.
Security reports are greatly appreciated, and Docker will publicly thank you
for it.
## Get started

View File

@ -34,7 +34,7 @@ Docker Engine - Community is supported on `x86_64` (or `amd64`), `armhf`, and `a
### Uninstall old versions
Older versions of Docker were called `docker`, `docker.io `, or `docker-engine`.
Older versions of Docker were called `docker`, `docker.io`, or `docker-engine`.
If these are installed, uninstall them:
```bash
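# Sketch of the elided command; these are the usual package names
$ sudo apt-get remove docker docker-engine docker.io containerd runc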

View File

@ -25,8 +25,8 @@ and distributions for different Docker editions, see
To install Docker, you need the 64-bit version of one of these Fedora versions:
- 28
- 29
- 30
- 31
### Uninstall old versions
@ -169,13 +169,20 @@ from the repository.
Docker is installed but not started. The `docker` group is created, but no users are added to the group.
3. Start Docker.
3. Cgroups Exception
For Fedora 31, you'll have to enable the [backwards compatibility for Cgroups](https://fedoraproject.org/wiki/Common_F31_bugs#Other_software_issues).
```bash
$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
```
4. Start Docker.
```bash
$ sudo systemctl start docker
```
4. Verify that Docker Engine - Community is installed correctly by running the `hello-world`
5. Verify that Docker Engine - Community is installed correctly by running the `hello-world`
image.
```bash
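# Downloads a test image and runs a container that prints a message
$ sudo docker run hello-world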

View File

@ -42,7 +42,7 @@ Docker Engine - Community is supported on `x86_64` (or `amd64`), `armhf`, `arm64
### Uninstall old versions
Older versions of Docker were called `docker`, `docker.io `, or `docker-engine`.
Older versions of Docker were called `docker`, `docker.io`, or `docker-engine`.
If these are installed, uninstall them:
```bash
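# Sketch of the elided command; these are the usual package names
$ sudo apt-get remove docker docker-engine docker.io containerd runc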

manuals/index.md Normal file
View File

@ -0,0 +1,7 @@
---
title: Product Manuals
description: Learn about Docker Engine - Community
keywords: Docker Engine - Community, Docker Community
---
This is the landing page for the Docker Community product manuals.

View File

@ -21,18 +21,15 @@ various APIs, CLIs, and file formats.
| [Docker CLI](/engine/reference/commandline/cli/) | The main CLI for Docker, includes all `docker` commands |
| [Compose CLI](/compose/reference/overview/) | The CLI for Docker Compose, which allows you to build and run multi-container applications |
| [Daemon CLI (dockerd)](/engine/reference/commandline/dockerd/) | Persistent process that manages containers |
| [DTR CLI](/reference/dtr/{{ site.dtr_version }}/cli/index.md) | Deploy and manage Docker Trusted Registry |
| [UCP CLI](/reference/ucp/{{ site.ucp_version }}/cli/index.md) | Deploy and manage Universal Control Plane |
## Application programming interfaces (APIs)
| API | Description |
|:------------------------------------------------------|:---------------------------------------------------------------------------------------|
| [Engine API](/engine/api/) | The main API for Docker, provides programmatic access to a daemon |
| [DTR API](/reference/dtr/{{ site.dtr_version }}/api/) | Provides programmatic access to a Docker Trusted Registry deployment |
| [Registry API](/registry/spec/api/) | Facilitates distribution of images to the engine |
| [Template API](app-template/api-reference) | Allows users to create new Docker applications by using a library of templates. |
| [UCP API](/reference/ucp/{{ site.ucp_version }}/api/) | Provides programmatic access to a Universal Control Plane deployment |
## Drivers and specifications
@ -40,10 +37,4 @@ various APIs, CLIs, and file formats.
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------|
| [Image specification](/registry/spec/manifest-v2-2/) | Describes the various components of a Docker image |
| [Registry token authentication](/registry/spec/auth/) | Outlines the Docker registry authentication scheme |
| [Registry storage drivers](/registry/storage-drivers/) | Enables support for given cloud providers when storing images with Registry |
## Compliance control reference
| Reference | Description |
|:---------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|
| [NIST 800-53 control reference](/compliance/reference/800-53/) | All of the NIST 800-53 Rev. 4 controls applicable to Docker Enterprise Edition can be referenced in this section. |
| [Registry storage drivers](/registry/storage-drivers/) | Enables support for given cloud providers when storing images with Registry |

View File

@ -1,9 +0,0 @@
---
description: We've sent you a welcome email with links to previous newsletters.
keywords: Docker, documentation, manual, guide, reference, api
title: Thank you for subscribing to Docker weekly
skip_read_time: true
---
We've sent you a welcome email with links to previous newsletters.
Check your inbox to confirm you received it.