Add new separate S3 guide with S3 compatible servers

Signed-off-by: Tony Holdstock-Brown <tony@docker.com>
Tony Holdstock-Brown 2017-02-21 09:17:19 -08:00 committed by Joao Fernandes
parent 994abe8ace
commit ab84302012
4 changed files with 152 additions and 78 deletions


@ -1316,6 +1316,8 @@ manuals:
section:
- path: /datacenter/dtr/2.2/guides/admin/configure/external-storage/
title: Overview
- path: /datacenter/dtr/2.2/guides/admin/configure/external-storage/s3/
title: S3
- path: /datacenter/dtr/2.2/guides/admin/configure/external-storage/nfs/
title: NFS
- path: /datacenter/dtr/2.2/guides/admin/configure/set-up-high-availability/


@ -6,12 +6,19 @@ title: Configure DTR image storage
---
After installing Docker Trusted Registry, one of your first tasks is to
designate and configure the Trusted Registry storage backend. This document
provides the following:
* Information describing your storage backend options.
* Configuration steps using either the Trusted Registry UI or a YAML file.
The default storage backend, `filesystem`, stores and serves images from the
*local* filesystem. In an HA setup this fails, as each node can only access its
own files.

DTR allows you to configure your image storage via distributed stores, such as
Amazon S3, NFS, or Google Cloud Storage. The flexibility to configure a
different storage backend allows you to:
* Scale your Trusted Registry
* Leverage storage redundancy
@ -19,8 +26,7 @@ While there is a default storage backend, `filesystem`, the Trusted Registry off
* Take advantage of other features that are critical to your organization
At first, you might have explored Docker Trusted Registry and Docker Engine by
installing them on your system in order to familiarize yourself with them.
However, for various reasons such as deployment purposes or continuous
integration, it makes sense to think about your organization's long-term needs
when selecting a storage backend. The Trusted Registry natively supports TLS and
@ -28,94 +34,44 @@ basic authentication.
## Understand the Trusted Registry storage backend
Your Trusted Registry data (images and so on) is stored using the configured
**storage driver** within DTR's settings. This defaults to the local
filesystem, which uses your OS's POSIX operations to store and serve images.
Additionally, the Trusted Registry supports these cloud-based storage drivers:
* Amazon Simple Storage Service **S3** (and S3-compatible servers)
* OpenStack **Swift**
* Microsoft **Azure** Blob Storage
* **Google Cloud** Storage
### Filesystem
The `filesystem` driver operates on the host's local filesystem. In HA
environments this needs to be shared via NFS, otherwise each node in your setup
can only see its own local data. For more information on configuring NFS,
[see the NFS docs](/datacenter/dtr/2.2/guides/admin/configure/external-storage/nfs/).
By default, Docker creates a volume named `dtr-registry-${replica-id}` which is
used to host your data. You can supply a different volume name or directory
when installing or reconfiguring DTR to change where it stores your data
locally.

When using your local filesystem (or NFS) to serve images, ensure there is
enough available space, otherwise pushes will begin to fail.

You can see the total space used locally by running `du -hs "path-to-volume"`.
The path to the Docker volume can be found by running `docker volume ls` to list
volumes and `docker volume inspect dtr-registry-$replicaId` to show the path.
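Checking available space amounts to walking the volume's directory tree and summing file sizes, a rough Python equivalent of `du -s`. This is only a sketch; the demo uses a throwaway temporary directory rather than a real DTR volume path:

```python
import os
import tempfile

def dir_size_bytes(path):
    """Sum the sizes of all regular files under `path` (rough `du -s` equivalent)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

# Demo on a temporary directory instead of a real DTR volume.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "blob"), "wb") as f:
        f.write(b"\0" * 1024)
    print(dir_size_bytes(d))  # 1024
```

In practice you would pass the mountpoint reported by `docker volume inspect` instead of a temporary directory.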
### Amazon S3
DTR supports AWS S3 plus other file servers that are S3-compatible, such as
Minio. For more information on configuring S3 or a compatible backend, see the
[S3 configuration guide](/datacenter/dtr/2.2/guides/admin/configure/external-storage/s3/).
### OpenStack Swift


@ -0,0 +1,116 @@
---
description: S3 storage configuration for Docker Trusted Registry
keywords: docker, dtr, storage driver, s3 storage, s3 compatible
title: Configuring S3 storage within DTR
---
DTR supports AWS S3 to store your images, plus other file servers that have an
S3-compatible API, such as Minio. S3-compatible blobstores generally use the
same terminology, though setup may differ slightly.
### About S3
S3 stores data as objects within “buckets” where you read, write, and delete
objects in that bucket. All read and write operations will be sent to S3 (or
your S3-compatible server), ensuring availability and durability of your images.
### Configuring S3 itself
This section deals with creating and configuring bucket policies within AWS; if
you're using an S3-compatible server you may skip this section.
Prior to configuring DTR you need to create a "bucket" within S3. Buckets are
uniquely named containers in which S3 stores files.

You must:

1. Create a bucket within S3, choosing a region closest to your DTR deployment.
2. Note the bucket and region name for configuring DTR.
You then need to configure authorization for your bucket. You can choose to use
an [access and secret key combination](http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html)
for the entirety of DTR (simple, but potentially less secure) or configure an
[IAM policy for the bucket and DTR](http://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html)
(more complex to configure but also more secure, as access is restricted to only
the bucket).

If using an access key and secret key, copy these and begin configuring your
storage settings within DTR.
**Creating an IAM policy for your bucket**
You can set a policy through your AWS console to manage permissions for DTR.
For more information about setting IAM policies using the command line or the
console, review the AWS [Overview of IAM Policies](http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html)
article or visit the console Policies page.

The following example describes the minimal set of permissions that allows
Trusted Registry users to access, push, pull, and delete images.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::<INSERT YOUR BUCKET HERE>"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::<INSERT YOUR BUCKET HERE>/*"
    }
  ]
}
```
To set a policy through the AWS command line, save the policy into a file,
for example `TrustedRegistryUserPerms.json`, and pass it to the
`put-user-policy` AWS command:
```
$ aws iam put-user-policy --user-name MyUser --policy-name TrustedRegistryUserPerms --policy-document file://C:\Temp\TrustedRegistryUserPerms.json
```
You can also save this policy using the AWS console online.
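Before uploading the policy, it can be useful to template in your bucket name and confirm the result is valid JSON. A minimal sketch in Python; the bucket name `my-dtr-bucket` and the `render_policy` helper are illustrative, not part of DTR or the AWS tooling:

```python
import json

# Same policy shape as above; <INSERT YOUR BUCKET HERE> is substituted below.
POLICY_TEMPLATE = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:ListAllMyBuckets",
     "Resource": "arn:aws:s3:::*"},
    {"Effect": "Allow",
     "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
     "Resource": "arn:aws:s3:::<INSERT YOUR BUCKET HERE>"},
    {"Effect": "Allow",
     "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
     "Resource": "arn:aws:s3:::<INSERT YOUR BUCKET HERE>/*"}
  ]
}
"""

def render_policy(bucket):
    """Substitute the bucket name and parse, so malformed JSON fails fast."""
    rendered = POLICY_TEMPLATE.replace("<INSERT YOUR BUCKET HERE>", bucket)
    return json.loads(rendered)

policy = render_policy("my-dtr-bucket")
print(policy["Statement"][1]["Resource"])  # arn:aws:s3:::my-dtr-bucket
```

The resulting file can then be passed to `aws iam put-user-policy` as shown above.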
### Configuring your storage settings
To configure your storage settings you must be a DTR administrator, and you must
have already created a bucket within S3, as described in the previous section.
1. Navigate to the storage settings tab, within "Settings".
2. Choose "S3" from the storage list (even when using an S3-compatible backend):
![](../../../images/s3-1.png){: .with-border}
3. Fill out the form using your bucket name, region, and optionally access keys.
   If you're using an IAM policy, the secret key and access key can be
   left blank. When using an S3-compatible server, these are likely your
   username and password.
4. If using an S3-compatible server, change the region endpoint to the URL of
   your server.
5. Within "Advanced Settings" you can choose to use V4 auth and HTTPS. By
   default HTTPS is on and V4 auth is off.
When you hit "Save", DTR validates that it can read and write using the new
settings, and saves them once validation succeeds.
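That validation can be thought of as a write/read/delete round-trip against the bucket. A minimal sketch of the idea; the `InMemoryS3` client and `validate_storage` helper are illustrative stand-ins, not DTR's actual code or the AWS SDK:

```python
class InMemoryS3:
    """Toy stand-in for an S3 client, backed by a dict."""
    def __init__(self):
        self.objects = {}
    def put_object(self, key, data):
        self.objects[key] = data
    def get_object(self, key):
        return self.objects[key]
    def delete_object(self, key):
        del self.objects[key]

def validate_storage(client, key="dtr-health-check"):
    """Return True if a test object can be written, read back, and deleted."""
    payload = b"dtr-storage-check"
    try:
        client.put_object(key, payload)
        ok = client.get_object(key) == payload
        client.delete_object(key)
        return ok
    except Exception:
        return False

print(validate_storage(InMemoryS3()))  # True
```

With a misconfigured bucket or bad credentials, the first write fails and validation returns False, which is why DTR refuses to save unusable settings.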

Binary file not shown.
