Merge pull request #8245 from docker/master

Sync published with master
This commit is contained in:
Maria Bermudez 2019-02-13 18:13:11 -08:00 committed by GitHub
commit c7e3b881de
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
11 changed files with 10024 additions and 28 deletions


@@ -323,9 +323,9 @@ Supported documentation includes the current version plus the previous five versions.
If you are using a version of the documentation that is no longer supported, which means that the version number is not listed in the site dropdown list, you can still access that documentation in the following ways:
- By entering your version number and selecting it from the branch selection list for this repo
- By directly accessing the GitHub URL for your version. For example, https://github.com/docker/docker.github.io/tree/v1.9 for `v1.9`
- By running a container of the specific [tag for your documentation version](https://cloud.docker.com/u/docs/repository/docker/docs/docker.github.io/general#read-these-docs-offline)
  in Docker Hub. For example, run the following to access `v1.9`:
```bash


@@ -40,6 +40,10 @@ Docker UCP requires each node on the cluster to have a static IP address.
Before installing UCP, ensure your network and nodes are configured to support
this.

## Avoid IP range conflicts

The `service-cluster-ip-range` Kubernetes API Server flag is currently set to `10.96.0.0/16` and cannot be changed.

## Time synchronization

In distributed systems like Docker UCP, time synchronization is critical


@@ -6,29 +6,51 @@ redirect_from:
title: Leverage multi-CPU architecture support
notoc: true
---

Docker images can support multiple architectures, which means that a single
image may contain variants for different architectures, and sometimes for different
operating systems, such as Windows.

When running an image with multi-architecture support, `docker` will
automatically select an image variant which matches your OS and architecture.

Most of the official images on Docker Hub provide a [variety of architectures](https://github.com/docker-library/official-images#architectures-other-than-amd64).
For example, the `busybox` image supports `amd64`, `arm32v5`, `arm32v6`,
`arm32v7`, `arm64v8`, `i386`, `ppc64le`, and `s390x`. When running this image
on an `x86_64` / `amd64` machine, the `x86_64` variant will be pulled and run,
which can be seen from the output of the `uname -a` command that's run inside
the container:

```bash
$ docker run busybox uname -a
Linux 82ef1a0c07a2 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 x86_64 GNU/Linux
```

**Docker Desktop for Mac** provides `binfmt_misc` multi-architecture support,
which means you can run containers for different Linux architectures
such as `arm`, `mips`, `ppc64le`, and even `s390x`.

This does not require any special configuration in the container itself as it uses
<a href="http://wiki.qemu.org/" target="_blank">qemu-static</a> from the **Docker for
Mac VM**. Because of this, you can run an ARM container, like the `arm32v7` or `ppc64le`
variants of the busybox image:

### arm32v7 variant

```bash
$ docker run arm32v7/busybox uname -a
Linux 9e3873123d09 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 armv7l GNU/Linux
```

### ppc64le variant

```bash
$ docker run ppc64le/busybox uname -a
Linux 57a073cc4f10 4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018 ppc64le GNU/Linux
```

Notice that this time, the `uname -a` output shows `armv7l` and
`ppc64le` respectively.

Multi-architecture support makes it easy to build <a href="https://blog.docker.com/2017/11/multi-arch-all-the-things/" target="_blank">multi-architecture Docker images</a> or experiment with ARM images and binaries from your Mac.
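The variant selection described above, matching the platform entries of a manifest list against the client's OS and architecture, can be sketched as a small helper. This is a hypothetical illustration, not Docker's actual implementation; the `busybox_manifests` data and shortened digests are made up, though the `os`/`architecture` fields mirror the platform structure of a real manifest list.

```python
# Sketch of how a client might pick an image variant from a manifest list.
# Hypothetical helper for illustration -- not Docker's actual code.

def select_variant(manifests, os_name, arch):
    """Return the digest of the manifest whose platform matches os_name/arch."""
    for m in manifests:
        platform = m["platform"]
        if platform["os"] == os_name and platform["architecture"] == arch:
            return m["digest"]
    raise LookupError(f"no {os_name}/{arch} variant available")

# Platform entries shaped like a manifest list's (digests are placeholders).
busybox_manifests = [
    {"digest": "sha256:aaa", "platform": {"os": "linux", "architecture": "amd64"}},
    {"digest": "sha256:bbb", "platform": {"os": "linux", "architecture": "arm64v8"}},
    {"digest": "sha256:ccc", "platform": {"os": "linux", "architecture": "ppc64le"}},
]

print(select_variant(busybox_manifests, "linux", "amd64"))  # sha256:aaa
```

On an `amd64` Mac the `amd64` entry wins by default, which is why `docker run busybox` above produced `x86_64` output, while explicitly naming `arm32v7/busybox` or `ppc64le/busybox` bypasses this selection.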


@@ -112,6 +112,8 @@ Configures audit logging options for UCP components.
Specifies scheduling options and the default orchestrator for new nodes.

> **Note**: If you run the `kubectl` command, such as `kubectl describe nodes`, to view scheduling rules on Kubernetes nodes, it does not reflect what is configured in UCP Admin settings. UCP uses taints to control container scheduling on nodes, which is unrelated to kubectl's `Unschedulable` boolean flag.

| Parameter | Required | Description |
|:------------------------------|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------|
| `enable_admin_ucp_scheduling` | no | Set to `true` to allow admins to schedule containers on manager nodes. The default is `false`. |
@@ -181,7 +183,7 @@ components. Assigning these values overrides the settings in a container's
| `metrics_retention_time` | no | Adjusts the metrics retention time. |
| `metrics_scrape_interval` | no | Sets the interval for how frequently managers gather metrics from nodes in the cluster. |
| `metrics_disk_usage_interval` | no | Sets the interval for how frequently storage metrics are gathered. This operation can be expensive when large volumes are present. |
| `rethinkdb_cache_size` | no | Sets the size of the cache used by UCP's RethinkDB servers. The default is 1GB, but leaving this field empty or specifying `auto` instructs RethinkDB to determine a cache size automatically. |
| `cloud_provider` | no | Sets the cloud provider for the Kubernetes cluster. |
| `pod_cidr` | yes | Sets the subnet pool from which the IP for the Pod should be allocated from the CNI ipam plugin. The default is `192.168.0.0/16`. |
| `calico_mtu` | no | Sets the MTU (maximum transmission unit) size for the Calico plugin. |
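These parameters live in the UCP configuration file, which is TOML. As a rough sketch only, setting a few of them might look like the fragment below; the section names (`scheduling_configuration`, `cluster_config`) and exact layout are assumptions here, so treat the config file exported from your own cluster as the authoritative reference.

```toml
# Hypothetical fragment of a UCP config file (TOML); layout is an assumption.
[scheduling_configuration]
  enable_admin_ucp_scheduling = false

[cluster_config]
  metrics_scrape_interval = "1m"
  rethinkdb_cache_size = "auto"      # or e.g. "1GB", the default
  pod_cidr = "192.168.0.0/16"
  calico_mtu = "1480"
```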


@@ -8,9 +8,9 @@ Docker UCP supports Network File System (NFS) persistent volumes for
Kubernetes. To enable this feature on a UCP cluster, you need to set up
an NFS storage volume provisioner.

> ### Kubernetes storage drivers
>
> NFS is one of the Kubernetes storage drivers that UCP supports. See [Kubernetes Volume Drivers](https://success.docker.com/article/compatibility-matrix#kubernetesvolumedrivers) in the Compatibility Matrix for the full list.
{: .important}

## Enable NFS volume provisioning
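For context, an NFS-backed volume is described to Kubernetes with the standard `PersistentVolume` object. A minimal sketch follows; the server address, export path, and capacity are placeholders, not values from this guide.

```yaml
# Minimal NFS PersistentVolume sketch; server, path, and size are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany        # NFS allows many nodes to mount read-write
  nfs:
    server: 10.0.0.10      # your NFS server
    path: /exports/data    # exported directory
```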


@@ -42,6 +42,8 @@ this.
## Avoid IP range conflicts

The `service-cluster-ip-range` Kubernetes API Server flag is currently set to `10.96.0.0/16` and cannot be changed.

Swarm uses a default address pool of `10.0.0.0/16` for its overlay networks. If this conflicts with your current network implementation, please use a custom IP address pool. To specify a custom IP address pool, use the `--default-address-pool` command line option during [Swarm initialization](../../../../engine/swarm/swarm-mode.md).

> **Note**: Currently, the UCP installation process does not support this flag. To deploy with a custom IP pool, Swarm must first be installed using this flag and UCP must be installed on top of it.
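Whether a planned subnet collides with either of these fixed ranges can be checked ahead of time with Python's standard `ipaddress` module; the candidate CIDRs below are examples only.

```python
# Check a candidate subnet against the fixed UCP/Swarm default ranges.
import ipaddress

UCP_SERVICE_RANGE = ipaddress.ip_network("10.96.0.0/16")   # service-cluster-ip-range
SWARM_DEFAULT_POOL = ipaddress.ip_network("10.0.0.0/16")   # overlay default pool

def conflicts(cidr: str) -> bool:
    """Return True if cidr overlaps either default range."""
    net = ipaddress.ip_network(cidr)
    return net.overlaps(UCP_SERVICE_RANGE) or net.overlaps(SWARM_DEFAULT_POOL)

print(conflicts("10.96.10.0/24"))   # True: inside the service range
print(conflicts("172.20.0.0/16"))   # False: usable as a custom pool
```

A network that conflicts is a candidate for the `--default-address-pool` option described above.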


@@ -40,7 +40,14 @@ can be nested inside one another, to create hierarchies.
You can nest collections inside one another. If a user is granted permissions
for one collection, they'll have permissions for its child collections,
pretty much like a directory structure. As of UCP `3.1`, the ability to create a nested
collection more than 2 layers deep within the root `/Swarm/` collection has been deprecated.

The following image provides two examples of nested collections with the recommended maximum
of two nesting layers. The first example illustrates an environment-oriented collection, and the second
example illustrates an application-oriented collection.

![](../images/nested-collection.png){: .with-border}
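The two-layer limit amounts to a simple path-depth rule on collection names. The validator below is a hypothetical sketch to make the rule concrete; UCP enforces this server-side, and `nesting_ok` is not a real UCP helper.

```python
# Hypothetical check for the two-layer nesting limit under /Swarm/.
# UCP enforces this server-side; this sketch only illustrates the rule.

def nesting_ok(collection_path: str, max_layers: int = 2) -> bool:
    """Return True if the collection sits at most max_layers below /Swarm/."""
    prefix = "/Swarm/"
    if not collection_path.startswith(prefix):
        return False
    layers = [p for p in collection_path[len(prefix):].split("/") if p]
    return len(layers) <= max_layers

print(nesting_ok("/Swarm/production/app/"))       # True: two layers
print(nesting_ok("/Swarm/production/app/web/"))   # False: three layers deep
```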
For a child collection, or for a user who belongs to more than one team, the
system concatenates permissions from multiple roles into an "effective role" for



@@ -32,8 +32,8 @@ Instances must have the following [AWS Identity and Access Management](https://d
### Infrastructure Configuration

- Apply the roles and policies to Kubernetes masters and workers as indicated in the above chart.
- Set the hostname of the EC2 instances to the private DNS hostname of the instance. See [DNS Hostnames](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-hostnames) and [To change the system hostname without a public DNS name](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-hostname.html#set-hostname-system) for more details.
- Label the EC2 instances with the key `KubernetesCluster` and assign the same value across all nodes, for example, `UCPKubernetesCluster`.

### Cluster Configuration


@@ -192,7 +192,9 @@ There are several backward-incompatible changes in the Kubernetes API that may a
The following features are deprecated in UCP 3.1.

* Collections
  * The ability to create a nested collection more than 2 layers deep within the root `/Swarm/` collection is now deprecated and will not be included in future versions of the product. However, current nested collections with more than 2 layers are still retained.
  * Docker recommends a maximum of two layers when creating collections within UCP under the shared cluster collection designated as `/Swarm/`. For example, if a production collection called `/Swarm/production` is created under the shared cluster collection, `/Swarm/`, then only one level of nesting should be created: `/Swarm/production/app/`. See [Nested Collections](/ee/ucp/authorization/group-resources/#nested-collections) for more details.
* Kubernetes
  * **PersistentVolumeLabel** admission controller is deprecated in Kubernetes 1.11. This functionality will be migrated to [Cloud Controller Manager](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/).

File diff suppressed because it is too large