Remove IBM from cloud editions

This commit is contained in:
Joao Fernandes 2018-04-23 13:14:16 -07:00 committed by Joao Fernandes
parent 7d776869ef
commit 62989c4e57
21 changed files with 0 additions and 2386 deletions

View File

@ -65,44 +65,6 @@ guides:
- path: /edge/engine/reference/commandline/docker/
title: Docker Edge CLI reference
nosync: true
- sectiontitle: Docker for IBM Cloud (Beta)
section:
- path: /docker-for-ibm-cloud/why/
title: Why Docker for IBM Cloud?
- path: /docker-for-ibm-cloud/quickstart/
title: Quick start
- path: /docker-for-ibm-cloud/
title: "Setup & prerequisites"
- path: /docker-for-ibm-cloud/administering-swarms/
title: Administer clusters
- path: /docker-for-ibm-cloud/scaling/
title: Scale clusters
- path: /docker-for-ibm-cloud/deploy/
title: Deploy apps
- path: /docker-for-ibm-cloud/binding-services/
title: Bind IBM Cloud services
- sectiontitle: Use registry images in clusters
section:
- path: /docker-for-ibm-cloud/registry/
title: Registry overview
- path: /docker-for-ibm-cloud/dtr-ibm-cos/
title: Set up DTR to use IBM Cloud Object Storage
- path: /docker-for-ibm-cloud/ibm-registry/
title: Use images in IBM Cloud Container Registry
- path: /docker-for-ibm-cloud/persistent-data-volumes/
title: Save persistent data in storage volumes
- path: /docker-for-ibm-cloud/load-balancer/
title: Use a load balancer
- path: /docker-for-ibm-cloud/logging/
title: Send logging and metric data to IBM Cloud
- path: /docker-for-ibm-cloud/cli-ref/
title: CLI reference for bx d4ic commands
- path: /docker-for-ibm-cloud/faqs/
title: FAQs
- path: /docker-for-ibm-cloud/opensource/
title: Open source licensing
- path: /docker-for-ibm-cloud/release-notes/
title: Release notes
- sectiontitle: Docker for AWS
section:
- path: /docker-for-aws/why/

View File

@ -1,358 +0,0 @@
---
description: Administer swarm clusters with Docker EE for IBM Cloud
keywords: ibm, ibm cloud, logging, iaas, tutorial
title: Administer swarm clusters with Docker EE for IBM Cloud
---
Docker Enterprise Edition for IBM Cloud (Beta) comes with a variety of integrations that simplify the swarm administration process.
Use the Docker EE for IBM Cloud CLI plug-in (`bx d4ic`) to provision swarm mode clusters and resources. Manage your cluster with the `bx d4ic` plug-in and the Docker EE Universal Control Plane (UCP) web UI.
## Create swarms
Create a Docker EE swarm cluster in IBM Cloud.
> Your beta license allows you to provision up to 20 nodes
>
> During the beta, your cluster can have a maximum of 20 nodes, up to 14 of which can be worker nodes. If you need more nodes than this, work with your Docker representative to acquire an additional Docker EE license.
Before you begin:
* [Complete the setup requirements](/docker-for-ibm-cloud/index.md#prerequisites).
* Make sure that you have the appropriate [IBM Cloud infrastructure permissions](faqs.md#what-ibm-cloud-infrastructure-permissions-do-i-need).
* Log in to [IBM Cloud infrastructure](https://control.softlayer.com/), select your user profile, and under the **API Access Information** section retrieve your **API Username** and **Authentication Key**.
* [Add your SSH key to IBM Cloud infrastructure](https://knowledgelayer.softlayer.com/procedure/add-ssh-key), note its label, and locate the file path of the private SSH key on your machine.
* Retrieve the Docker EE installation URL that you received in your beta welcome email.
To create a Docker EE for IBM Cloud cluster from the CLI:
1. Log in to the IBM Cloud CLI. If you have a federated ID, use the `--sso` option.
```bash
$ bx login [--sso]
```
2. Target the IBM Cloud org and space.
```bash
$ bx target --cf
```
3. Review the `bx d4ic create` command parameters. You must provide parameters that are marked `Required`. Optional parameters are set to the default.
| Parameter | Description | Default Value | Required? |
| ---- | ----------- | ------------- | --- |
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
| `--ssh-label`, `--label` | Your IBM Cloud infrastructure SSH key label for the manager node. To create a key, [log in to IBM Cloud infrastructure](https://control.softlayer.com/) and select **Devices > Manage > SSH Keys > Add**. Copy the key label and insert it here. | | Required |
| `--ssh-key` | The path to the SSH key on your local client that matches the SSH key label in your IBM Cloud infrastructure account. | | Required |
| `--swarm-name`, `--name` | The name for your swarm and prefix for the names of each node. | | Required |
| `--docker-ee-url` | The Docker EE installation URL associated with your subscription. [Email IBM](mailto:sealbou@us.ibm.com) to get a trial subscription during the beta. | | Required |
| `--manager` | Deploy 1, 3, or 5 manager nodes. | 3 | Optional |
| `--workers`, `-w` | Deploy a minimum of 1 and maximum of 10 worker nodes. | 3 | Optional |
| `--datacenter`, `-d` | The location (data center) that you deploy the cluster to. Available locations are dal12, dal13, fra02, hkg02, lon04, par01, syd01, syd04, tor01, wdc06, wdc07. | wdc07 | Optional |
| `--verbose`, `-v` | Enable verbose mode | | Optional |
| `--hardware` | If "dedicated" then the nodes are created on hosts with compute instances in the same account. | Shared | Optional |
| `--manager-machine-type` | The machine type of the manager nodes: u1c.1x2, u1c.2x4, b1c.4x16, b1c.16x64, b1c.32x128, or b1c.56x242. More powerful machine types cost more, but deliver better performance. For example, u1c.2x4 is 2 cores and 4 GB memory, and b1c.56x242 is 56 cores and 242 GB memory. | b1c.4x16 | Optional |
| `--worker-machine-type` | The machine type of the worker nodes: u1c.1x2, u1c.2x4, b1c.4x16, b1c.16x64, b1c.32x128, or b1c.56x242. More powerful machine types cost more, but deliver better performance. For example, u1c.2x4 is 2 cores and 4 GB memory, and b1c.56x242 is 56 cores and 242 GB memory. | u1c.1x2 | Optional |
| `--disable-dtr-storage` | By default, the `bx d4ic create` command orders an IBM Cloud Swift API Object Storage account and creates a container named `dtr-container`. If you want to prevent this, include the `--disable-dtr-storage` option. Then, [set up IBM Cloud Object Storage](dtr-ibm-cos.md) yourself so that DTR works with your cluster. | Enabled by default. | Optional |
4. Create the cluster. Use the `--swarm-name` flag to name your cluster, and fill in the credentials, SSH, and Docker EE installation URL variables with the information that you retrieved before you began.
```bash
$ bx d4ic create --swarm-name my_swarm \
--sl-user user.name.1234567 \
--sl-api-key api-key \
--ssh-label my_ssh_label \
--ssh-key filepath_to_my_ssh_key \
--docker-ee-url my_docker-ee-url
```
> Set environment variables
>
> You can set your infrastructure API credentials and Docker EE installation URL as environment variables so that you do not need to include them as options when using `bx d4ic` commands. For example:
>
> export SOFTLAYER_USERNAME=user.name.1234567
>
> export SOFTLAYER_API_KEY=api-key
>
> export D4IC_DOCKER_EE_URL=my_docker-ee-url
5. Note the cluster **Name**, **ID**, and **UCP Password**.
> Swarm provisioning
>
> Your cluster is provisioned in two stages, and takes a few minutes to provision. Don't try to modify your cluster just yet!
> First, the manager node is deployed. Then, the additional infrastructure resources are deployed, including the worker nodes, DTR nodes, load balancers, subnet, and NFS volume.
* **Provisioning Stage 1**: Check the status of the manager node:
{% raw %}
```bash
$ docker logs cluster-name_ID
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path:
Outputs:
manager_public_ip = 169.##.###.##
swarm_d4ic_id = ID
swarm_name = cluster-name
ucp_password = UCP-password
```
{% endraw %}
* **Provisioning Stage 2**: Check the status of the cluster infrastructure:
{% raw %}
```bash
$ bx d4ic show --swarm-name cluster-name --sl-user user.name.1234567 --sl-api-key api_key
Getting swarm information...
Infrastructure Details
Swarm
ID ID
Name cluster-name
Created By user.name.1234567
Nodes
ID Name Public IP Private IP CPU Memory Datacenter Infrakit Group
46506407 cluster-name-mgr1 169.##.###.## 10.###.###.## 2 4096 wdc07 managers
...
Load Balancers
ID Name Address Type
ID-string cluster-name-mgr cluster-name-mgr-1234567-wdc07.lb.bluemix.net mgr
...
Subnets
ID Gateway Datacenter
ID-number 10.###.###.## wdc07
NFS Volumes
ID ID-number
Mount Address fsf-wdc0701b-fz.adn.networklayer.com:/ID_number/data01
Datacenter wdc07
Capacity 20
Type ENDURANCE_FILE_STORAGE
Tier Level 10_IOPS_PER_GB
OK
```
{% endraw %}
After creating the cluster, [log in to Docker UCP and download the Docker UCP client certificate bundle](#use-the-universal-control-plane).
## Use the Universal Control Plane
Docker EE for IBM Cloud uses [Docker Universal Control Plane (UCP)](/datacenter/ucp/2.2/guides/) to provide integrated container management and security, from development to production.
### Access UCP
Before you begin, [create a cluster](#create-swarms). If you have the cluster **Name**, **ID**, and **UCP Password**, proceed to Step 2.
1. Retrieve your UCP password by using the cluster **Name** and **ID** that you made when you [created the cluster](#create-swarms).
```bash
$ docker logs cluster-name_ID
...
ucp_password = UCP-password
...
```
If you need to get the **Name** and **ID** of your cluster, run `bx d4ic list --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY`.
{:.tip}
2. Retrieve the **UCP URL** address.
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
3. Copy the **UCP URL** for your cluster from the `bx d4ic list` command output, and navigate to it in your browser.
4. Log in to UCP. Your credentials are `admin` and the UCP password from the `docker logs` command, or the credentials that your admin created for you.
### Download client certificates
[Download the client certificate bundle](/datacenter/ucp/2.2/guides/user/access-ucp/cli-based-access/#download-client-certificates) to create objects and deploy services from a local Docker client.
1. [Access UCP](#access-ucp).
2. Under your user name (for example, **admin**), click **My Profile**.
3. Click **Client Bundles** > **New Client Bundle**. A zip file is generated.
4. In the GUI, you are now shown a label and public key. You can edit the label by clicking the pencil icon and giving it a name, such as _d4ic-ucp_.
5. In a terminal, navigate to the download location and unzip the client bundle.
```bash
$ cd Downloads && unzip ucp-bundle-admin.zip
```
> Keep your client bundle handy
>
> Move the directory that contains the certificates and environment files to a safe and accessible location on your machine. It contains secret information. You'll use it a lot!
6. From the client bundle directory, update your `DOCKER_HOST` and `DOCKER_CERT_PATH` environment variables by loading the `env.sh` script contents into your environment.
```bash
$ source env.sh
```
> Set your environment to use Docker EE for IBM Cloud
>
> Repeat this to set your environment variables each time you enter a new terminal session, or after you unset your variables, to connect to the Docker EE for IBM Cloud swarm.
7. Verify that your certificates are being sent to Docker Engine. The command returns information on your swarm.
```bash
$ docker info
```
## View swarm resources
### Cluster-level resources
To review resources used within a particular Docker EE cluster, use the CLI or UCP.
**CLI**: The `bx d4ic` CLI lists, modifies, and automates cluster infrastructure, and provides the URLs to access UCP, DTR, or exposed Docker services.
* To review a list of your clusters and their UCP URLs: `bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key`.
* To review details about the cluster, such as the IP address of manager nodes or the status of the cluster load balancers: `bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key`.
**UCP**: The Docker EE Universal Control Plane provides a web UI to manage swarm users and deployed applications. You can view swarm-related stacks, services, containers, images, nodes, networks, volumes, and secrets.
### Account-level resources
For an account-level view of services and infrastructure that can be used in your swarm, log in to your [IBM Cloud](https://console.bluemix.net/) account.
* The IBM Cloud dashboard provides information on connected IBM Cloud services in the account, such as Watson and Internet of Things.
* The IBM Cloud infrastructure portal shows account infrastructure resources such as virtual devices, storage, and networking.
### Other resources
To gather logging and metric data from your swarm, first [enable logging for the cluster](logging.md), and then access the data in your IBM Cloud organization and space.
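For example, logging and monitoring can be switched on with the `bx d4ic logmet` command described in the [CLI reference](cli-ref.md#bx-d4ic-logmet). The values below are placeholders:
```bash
$ bx d4ic logmet --swarm-name my_swarm \
  --cert-path filepath/to/certificate/repo \
  --sl-user user.name.1234567 \
  --sl-api-key api_key \
  --enable
```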
## UCP and CLIs
Docker EE for IBM Cloud employs a flexible architecture and integration with IBM Cloud that you can use to leverage IBM Cloud resources and customize your swarm environment. Docker EE UCP exposes the standard Docker API, which includes certain functions that you should instead perform by using Docker EE for IBM Cloud capabilities.
> Self-healing capabilities so you don't need to modify cluster infrastructure.
>
> Docker EE for IBM Cloud uses the InfraKit toolkit to support self-healing infrastructure. After you create the swarm, the cluster maintains that specified number of nodes. If a manager node fails, you do not need to promote a worker node to manager; the swarm self-recovers the manager node.
>
> Do not use UCP to modify a cluster's underlying infrastructure, such as adding or promoting worker nodes to managers.
The table outlines when to use UCP and when to use the `bx d4ic` CLI for various types of tasks.
| Task type | UCP or `bx d4ic` CLI | Description |
| --- | --- | --- |
| Swarm nodes | CLI | [Create](#create-swarms), update, modify, and [delete](#delete-swarms) swarm nodes. |
| Certificates | UCP and CLI | From UCP, [download client bundles](#download-client-certificates) after the swarm is created in the CLI, and every time certificates are changed. From the CLI, run the script from the client bundle downloaded from UCP. |
| Labels | UCP | [Add and modify labels](/engine/userguide/labels-custom-metadata/). If the swarm nodes are removed or modified (such as during a rolling update), the labels must be re-created. |
| Access | UCP | Control access and grant permissions by users, roles, and teams. |
| Secrets | UCP and CLI | In UCP, [manage](/datacenter/ucp/2.2/guides/user/secrets/) and [grant access](/datacenter/ucp/2.2/guides/user/secrets/grant-revoke-access/) to secrets for general usage. For IBM Cloud services, use the CLI `bx d4ic key-create` [command](cli-ref.md#bx-d4ic-key-create) to create secrets. |
| Docker services | UCP and CLI | In UCP, view and manage connected services. From the CLI, [bind IBM Cloud services](binding-services.md). |
| Apps | UCP and CLI | In UCP, view and manage connected apps. From the CLI, [deploy apps](deploy.md) and run containers. |
| Registry | UCP and CLI | For UCP, use [Docker Trusted Registry](/datacenter/dtr/2.4/guides/) installed on the manager node. From the CLI, [run Docker and IBM Cloud Container Registry commands](registry.md). |
| Networking | UCP and CLI | Each cluster has three load balancers that you can use to access and expose various services. You cannot configure these load balancers. In UCP, view networks and related resources. Do not deploy services on the same port that the HTTP Routing Mesh uses. From the CLI, [retrieve load balancer URLs and expose services](load-balancer.md). |
| Data Storage Volumes | CLI | From the CLI, [create and connect file storage volumes](persistent-data-volumes.md) to your swarm. Do NOT use UCP to create IBM Cloud infrastructure data storage volumes. |
| Logging | UCP and CLI | From UCP, you can send logs to a remote syslog server. From the CLI, [enable logging and monitoring](logging.md) to IBM Cloud and access the data by using the Grafana and Kibana GUIs. |
## Grant user access
For IBM Cloud account access management, consult the [IBM Cloud Identity and Access Management documentation](https://console.bluemix.net/docs/iam/quickstart.html#getstarted).
For Docker EE cluster access management, use the [UCP Access Control documentation](/datacenter/ucp/2.2/guides/access-control/).
## Control public network access to swarms
By default, Docker EE for IBM Cloud uses [IBM Cloud Security Groups](https://console.bluemix.net/docs/infrastructure/security-groups/sg_index.html#getting-started-with-security-groups) to control access to your clusters by setting rules for incoming and outgoing traffic. You need to have [permissions in your IBM Cloud infrastructure account](faqs.md#what-ibm-cloud-infrastructure-permissions-do-i-need) to manage security groups so that you can provision clusters with security groups. There are two security groups: one for the DTR and worker nodes, and one for the manager nodes.
The security group for DTR and worker nodes blocks all traffic on the public network. DTR and service load balancers communicate with these nodes through the cluster's private network. For public access to services that are running on worker nodes, [use the service load balancer](load-balancer.md#service-load-balancer).
The security group for the manager nodes allows public network traffic to the manager nodes only through port 56422. To access the cluster's manager nodes:
1. Log in to the IBM Cloud CLI. If you have a federated ID, use the `--sso` option.
```bash
$ bx login [--sso]
```
2. Target the IBM Cloud org and space:
```bash
$ bx target --cf
```
3. Retrieve your swarm name:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
4. Get the **Public IP** of the manager you want to access, such as `my_swarm-mgr1`:
```bash
$ bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key
```
5. Access the manager:
```bash
$ ssh -A docker@managerIP -p 56422
```
The first time that you access a manager node, you might be asked to confirm the connection and add the public IP and port to the list of known hosts.
## Delete swarms
Before you begin:
* Log in to [IBM Cloud infrastructure](https://control.softlayer.com/), select your user profile, and under the **API Access Information** section retrieve your **API Username** and **Authentication Key**.
* Retrieve the label of your IBM Cloud infrastructure SSH key, and locate the file path of the private SSH key on your machine.
To delete a swarm:
1. Log in to the IBM Cloud CLI. If you have a federated ID, use the `--sso` option.
```bash
$ bx login [--sso]
```
2. Target the IBM Cloud org and space:
```bash
$ bx target --cf
```
3. Delete the swarm:
```bash
$ bx d4ic delete (--swarm-name my_swarm | --id swarm_ID )\
--sl-user user.name.1234567 \
--sl-api-key api_key \
--ssh-key filepath_to_my_ssh_key \
[--force]
```
4. Restore the default Docker client settings by running the commands shown in the CLI:
```none
unset DOCKER_HOST
unset DOCKER_TLS_VERIFY
unset DOCKER_CERT_PATH
```
> Cloud Object Storage is not deleted by default
>
> When you create a swarm, by default you create an IBM Cloud Swift API Object Storage account and a container named `dtr-container` so that DTR can securely store images outside the swarm. When you delete the swarm, the Cloud Object Storage account is not deleted and remains in your IBM Cloud infrastructure account.
>
> If you want to delete it, see [Delete IBM Cloud Object Storage](dtr-ibm-cos.md#delete-ibm-cloud-object-storage). If you keep the account, you can link it to future swarms by using the `--disable-dtr-storage` parameter when you [create the new swarm](#create-swarms) and then by [configuring IBM Cloud Object Storage Regional Swift API for DTR](dtr-ibm-cos.md#configure-ibm-cloud-object-storage-regional-swift-api-for-dtr).

View File

@ -1,180 +0,0 @@
---
description: Bind IBM Cloud services to swarms
keywords: ibm, ibm cloud, services, watson, AI, IoT, iaas, tutorial
title: Bind IBM Cloud services to swarms
---
With Docker EE for IBM Cloud, you can easily bind services to your cluster to enhance your apps with Watson, AI, Internet of Things, and other services available in the IBM Cloud catalog.
## Bind IBM Cloud services
Before you begin:
* Ensure that you have [set up your IBM Cloud account](/docker-for-ibm-cloud/index.md).
* [Install the IBM Cloud CLI and plug-ins](/docker-for-ibm-cloud/index.md#install-the-clis).
* [Create a cluster](administering-swarms.md).
* Get the name of the cluster to which you want to bind the service by running `bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key`.
* Identify an existing or create a new IBM Cloud service. To list existing services, run `bx service list`.
* Identify an existing registry namespace or create a registry namespace. [IBM Cloud Container Registry example](https://console.bluemix.net/docs/services/Registry/registry_setup_cli_namespace.html#registry_namespace_add).
* Review the following example files, which are customized for an IBM Watson Conversation service, to get an idea of how you might develop your own service files. **Note**: The steps also include examples for another type of service, to give you ideas for other ways you might build your files.
* Example [Dockerfile](https://github.com/docker/docker.github.io/tree/master/docker-for-ibm-cloud/scripts/Dockerfile)
* Example [docker-service.yaml file](https://github.com/docker/docker.github.io/tree/master/docker-for-ibm-cloud/scripts/docker-stack.yaml)
There are three main steps in binding IBM Cloud services to your Docker EE for IBM Cloud cluster:
1. Create a Docker secret.
2. Build a Docker image that uses the IBM Cloud service.
3. Create a Docker service.
### Step 1: Create a Docker secret
1. Log in to IBM Cloud. If you have a federated account, use the `--sso` option.
```bash
$ bx login [--sso]
```
2. Target the org and space that has the service:
```bash
$ bx target --cf
```
3. Create the Docker secret for the service. The `--swarm-name` flag specifies the cluster that you're binding the service to. The `--service-name` flag must match the name of your IBM Cloud service. The `--service-key` flag is used to create the Docker service YAML file. The `--cert-path` flag is the file path to your cluster's UCP client bundle certificates. Include your IBM Cloud infrastructure credentials if you have not set the environment variables.
```bash
$ bx d4ic key-create --swarm-name my_swarm \
--service-name my_ibm_service \
--service-key my_secret \
--cert-path filepath/to/certificate/repo \
--sl-user user.name.1234567 \
--sl-api-key api_key
```
4. Verify the secret is created:
```bash
$ docker secret ls
```
5. Update your service code to use the secret that you created. For example:
{% raw %}
```none
...
// WatsonSecret holds Watson VR service keys
type WatsonSecret struct {
    URL    string `json:"url"`
    Note   string `json:"note"`
    APIKey string `json:"api_key"`
}
...
var watsonSecretName = "watson-secret"
var watsonSecrets WatsonSecret
...
watsonSecretFile, err := ioutil.ReadFile("/run/secrets/" + watsonSecretName)
if err != nil {
    fmt.Println(err.Error())
    os.Exit(1)
}
json.Unmarshal(watsonSecretFile, &watsonSecrets)
fmt.Println("Watson URL: ", watsonSecrets.URL)
...
msgQ.Add("api_key", watsonSecrets.APIKey)
...
```
{% endraw %}
### Step 2: Build a Docker image
1. Log in to the registry that you are using to store the image.
2. Create a Dockerfile following [Dockerfile best practices](/engine/userguide/eng-image/dockerfile_best-practices/).
> Docker images
>
> If you are unfamiliar with Docker images, try the [Getting Started](/get-started/).
**Example** snippet for a Dockerfile that uses the _mmssearch_ service.
{% raw %}
```none
FROM golang:latest
WORKDIR /go/src/mmssearch
COPY . /go/src/mmssearch
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 go/src/mmssearch/main .
CMD ["./main"]
LABEL version=demo-3
```
{% endraw %}
3. Navigate to the directory of the Dockerfile, and build the image. Don't forget the period in the `docker build` command.
```bash
$ cd directory/path && docker build -t my_image_name .
```
4. Test the image locally before pushing to your registry.
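For example, you might run the image locally and check that it responds before pushing (this assumes the _mmssearch_ example, which listens on port 8080; adjust the port and any required secrets for your own app):
```bash
$ docker run --rm -p 8080:8080 my_image_name
```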
5. Tag the image:
```bash
$ docker tag my_image_name registry-path/namespace/image:tag
```
6. Push the image to your registry:
```bash
$ docker push registry-path/namespace/image:tag
```
### Step 3: Create a Docker service
1. Develop a `docker-service.yaml` file using the [compose file reference](/compose/compose-file/).
* Save the file in an easily accessible directory, such as the one that has the Dockerfile that you used in the previous step.
* For the `image` field, use the same `registry-path/namespace/image:tag` path that you created in the previous step.
* For the service `environment` field, use a service environment, such as a workspace ID, from the IBM Cloud service that you made before you began.
* **Example** snippet for a `docker-service.yaml` that uses _mmssearch_ with a Watson secret.
{% raw %}
```none
mmssearch:
  image: mmssearch:latest
  build: .
  ports:
    - "8080:8080"
  secrets:
    - source: watson-secret
      target: watson-secret
secrets:
  watson-secret:
    external: true
```
{% endraw %}
2. Connect to your cluster by setting the environment variables from the [client certificate bundle that you downloaded](administering-swarms.md#download-client-certificates).
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
3. Navigate to the directory of the `docker-service.yaml` file.
4. Deploy the service:
```bash
$ docker stack deploy my_service_name \
--with-registry-auth \
--compose-file docker-service.yaml
```
5. Verify that the service has been deployed to your cluster:
```bash
$ docker service ls
```

View File

@ -1,159 +0,0 @@
---
description: CLI reference for Docker for IBM Cloud
keywords: ibm, ibm cloud, cli, iaas, reference
title: CLI reference for Docker EE for IBM Cloud
---
With the Docker EE for IBM Cloud (beta) plug-in for the IBM Cloud CLI, you can manage your Docker swarms alongside other IBM Cloud operations.
## Docker for IBM Cloud plug-in commands
Refer to these commands to manage your Docker EE for IBM Cloud clusters.
* To view a list of commands, run the `bx d4ic help` command.
* For help with a specific command, run `bx d4ic help [command_name]`.
* To view the version of your Docker for IBM Cloud plug-in, run the `bx d4ic version` command.
| Commands | | |
|---|---|---|
| [bx d4ic create](#bx-d4ic-create) | [bx d4ic delete](#bx-d4ic-delete) | [bx d4ic key-create](#bx-d4ic-key-create) |
| [bx d4ic list](#bx-d4ic-list) | [bx d4ic logmet](#bx-d4ic-logmet) | [bx d4ic show](#bx-d4ic-show) |
## bx d4ic create
Create a Docker EE swarm cluster.
### Usage
```bash
$ bx d4ic create --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY --ssh-label SSH_KEY_LABEL --ssh-key SSH_KEY_PATH --docker-ee-url DOCKER_EE_URL --swarm-name SWARM_NAME [--datacenter DATACENTER] [--workers NUMBER] [--managers NUMBER] [--hardware SHARED|DEDICATED] [--manager-machine-type MANAGER_MACHINE_TYPE] [--worker-machine-type WORKER_MACHINE_TYPE] [--disable-dtr-storage] [-v] [--version VERSION]
```
### Options
| Name, shorthand | Description | Default | Required? |
|---|---|---|---|
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
| `--ssh-label`, `--label` | Your IBM Cloud infrastructure SSH key label for the manager node. To create a key, [log in to IBM Cloud infrastructure](https://control.softlayer.com/) and select **Devices > Manage > SSH Keys > Add**. Copy the key label and insert it here. | | Required |
| `--ssh-key` | The path to the SSH key on your local client that matches the SSH key label in your IBM Cloud infrastructure account. | | Required |
| `--swarm-name`, `--name` | The name for your swarm and prefix for the names of each node. | | Required |
| `--docker-ee-url` | The Docker EE installation URL associated with your subscription. [Email IBM](mailto:sealbou@us.ibm.com) to get a trial subscription during the beta. | | Required |
| `--manager` | Deploy 1, 3, or 5 manager nodes. | 3 | Optional |
| `--workers`, `-w` | Deploy a minimum of 1 and maximum of 10 worker nodes. | 3 | Optional |
| `--datacenter`, `-d` | The location (data center) that you deploy the cluster to. Available locations are dal12, dal13, fra02, hkg02, lon04, par01, syd01, syd04, tor01, wdc06, wdc07. | wdc07 | Optional |
| `--verbose`, `-v` | Enable verbose mode | | Optional |
| `--hardware` | If "dedicated" then the nodes are created on hosts with compute instances in the same account. | Shared | Optional |
| `--manager-machine-type` | The machine type of the manager nodes: u1c.1x2, u1c.2x4, b1c.4x16, b1c.16x64, b1c.32x128, or b1c.56x242. Higher machine types cost more, but deliver better performance: for example, u1c.2x4 is 2 cores and 4 GB memory, and b1c.56x242 is 56 cores and 242 GB memory. | b1c.4x16 | Optional |
| `--worker-machine-type` | The machine type of the worker nodes: u1c.1x2, u1c.2x4, b1c.4x16, b1c.16x64, b1c.32x128, or b1c.56x242. Higher machine types cost more, but deliver better performance: for example, u1c.2x4 is 2 cores and 4 GB memory, and b1c.56x242 is 56 cores and 242 GB memory. | u1c.1x2 | Optional |
| `--disable-dtr-storage` | By default, the `bx d4ic create` command orders an IBM Cloud Swift API Object Storage account and creates a container named `dtr-container`. If you want to prevent this, include the `--disable-dtr-storage` option. Then, [set up IBM Cloud Object Storage](dtr-ibm-cos.md) yourself so that DTR works with your cluster. | Enabled by default. | Optional |
| `--version` | The Docker EE version of the created cluster. For the beta, only the default version is available. | Default version | Optional |
## bx d4ic delete
Delete a Docker EE swarm cluster.
### Usage
```bash
$ bx d4ic delete (--swarm-name SWARM_NAME | --id ID) --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY --ssh-label SSH_KEY_LABEL --ssh-key SSH_KEY_PATH [--insecure] [--force]
```
### Options
| Name, shorthand | Description | Default | Required? |
|---|---|---|---|
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
| `--ssh-label`, `--label` | Your IBM Cloud infrastructure SSH key label for the manager node. To create a key, [log in to IBM Cloud infrastructure](https://control.softlayer.com/) and select **Devices > Manage > SSH Keys > Add**. Copy the key label and insert it here. | | Required |
| `--ssh-key` | The path to the SSH key on your local client that matches the SSH key label in your IBM Cloud infrastructure account. | | Required |
| `--swarm-name`, `--name` | The name of your cluster. If the name is not provided, you must provide the ID. | | Required |
| `--id` | The ID of your cluster. If the ID is not provided, you must provide the name. | | Required |
| `--verbose`, `-v`| Enable verbose mode | | Optional |
| `--insecure` | Do not verify the identity of the remote host and accept any host key. This is not recommended. | | Optional |
| `--force`, `-f` | Force deletion without confirmation. | | Optional |
## bx d4ic key-create
Create a key for a service instance. Before you can create a key, create an IBM Cloud service.
### Usage
```bash
$ bx d4ic key-create (--swarm-name SWARM_NAME | --id ID) --cert-path CERT_PATH --service-name SERVICE_NAME --service-key SERVICE_KEY --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY
```
### Options
| Name, shorthand | Description | Default | Required? |
|---|---|---|---|
| `--cert-path`, `--cp` | The directory containing the [Docker UCP client certificate bundle](administering-swarms.md#download-client-certificates). | | Required |
| `--swarm-name`, `--name` | The name of your cluster. If the name is not provided, you must provide the ID. | | Required |
| `--id` | The ID of your cluster. If the ID is not provided, you must provide the name. | | Required |
| `--service-name`, `--name` | Name of an IBM Cloud service. | | Required |
| `--service-key`, `--key` | Key of an IBM Cloud service. | | Required |
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
## bx d4ic list
List the clusters in your Docker EE for IBM Cloud account.
### Usage
```bash
$ bx d4ic list --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY [--json]
```
### Options
| Name, shorthand | Description | Default | Required? |
|---|---|---|---|
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
| `--json` | Prints the output as JSON. | | Optional |
## bx d4ic logmet
Enable or disable transmission of container log and metric data to IBM Cloud [Log Analysis](https://console.bluemix.net/docs/services/CloudLogAnalysis/log_analysis_ov.html#log_analysis_ov) and [Monitoring](https://console.bluemix.net/docs/services/cloud-monitoring/monitoring_ov.html#monitoring_ov) services.
### Usage
```bash
$ bx d4ic logmet (--swarm-name SWARM_NAME | --id ID) --cert-path CERT_PATH --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY [--enable | --disable]
```
### Options
| Name, shorthand | Description | Default | Required? |
|---|---|---|---|
| `--swarm-name`, `--name` | The name of your cluster. If the name is not provided, you must provide the ID. | | Required |
| `--id` | The ID of your cluster. If the ID is not provided, you must provide the name. | | Required |
| `--cert-path`, `--cp` | The directory containing the [Docker UCP client certificate bundle](administering-swarms.md#download-client-certificates). | | Required |
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
| `--enable` | Send log and metric data to the IBM Cloud [Log Analysis](https://console.bluemix.net/docs/services/CloudLogAnalysis/log_analysis_ov.html#log_analysis_ov) and [Monitoring](https://console.bluemix.net/docs/services/cloud-monitoring/monitoring_ov.html#monitoring_ov) services in the org and space that you're currently logged in to. You must include either `--enable` or `--disable` in the command. | | Optional |
| `--disable` | Disable sending log activity to IBM Cloud Log Analysis and Monitoring services. You must include either `--enable` or `--disable` in the command. | | Optional |
## bx d4ic show
Show information about the IBM Cloud infrastructure components, such as load balancer URLs, of a specific cluster.
### Usage
```bash
$ bx d4ic show (--swarm-name SWARM_NAME | --id ID) --sl-user SOFTLAYER_USERNAME --sl-api-key SOFTLAYER_API_KEY [--json]
```
### Options
| Name, shorthand | Description | Default | Required? |
|---|---|---|---|
| `--sl-user`, `-u` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **API Username** under the API Access Information section. | | Required |
| `--sl-api-key`, `-k` | [Log in to IBM Cloud infrastructure](https://control.softlayer.com/), select your profile, and locate your **Authentication Key** under the API Access Information section. | | Required |
| `--id` | The ID of the cluster. You must provide either the ID or the swarm name. | | Required |
| `--swarm-name`, `--name` | The name of your cluster. You must provide either the name or the ID.| | Required |
| `--json` | Prints the output as JSON. | | Optional |

View File

@ -1,99 +0,0 @@
---
description: Deploy Apps on Docker EE for IBM Cloud
keywords: ibm cloud, ibm, iaas, deploy
title: Deploy your app on Docker EE for IBM Cloud
---
## Deploy apps
Before you begin:
* Ensure that you [completed the account prerequisites](/docker-for-ibm-cloud/index.md).
* [Create a swarm](administering-swarms.md#create-swarms).
* [Set the environment variables to your swarm](administering-swarms.md#download-client-certificates).
* Review [best practices for creating a Dockerfile](/engine/userguide/eng-image/dockerfile_best-practices/) for your app's image.
* Review [Docker Compose file reference](/compose/compose-file/) for creating a YAML to define services, networks, or volumes that your app uses.
Steps:
* To deploy services and applications, see the [UCP documentation](/datacenter/ucp/2.2/guides/user/services/deploy-a-service/).
* To deploy IBM Cloud services such as Watson, see [Binding IBM Cloud services to swarms](binding-services.md).
## Run apps
After deploying an app to the cluster, you can create containers and services by using Docker commands such as `docker run`.
You can run websites too. Ports exposed with `-p` are automatically exposed
through the platform load balancer. For example:
```bash
$ docker service create \
  --name nginx \
  -p 80:80 \
  nginx
```
Learn more about [load balancing in Docker EE for IBM Cloud](load-balancer.md).
### Execute Docker commands in all swarm nodes
You might need to execute a Docker command on all the nodes across the swarm, such as when installing a volume plug-in. Use the `swarm-exec` tool:
```bash
$ swarm-exec {Docker command}
```
The following example installs a test plug-in in all the nodes in the swarm:
```bash
$ swarm-exec docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin
```
The `swarm-exec` tool internally uses a Docker global-mode service that runs a task on each node in the cluster; the task in turn executes the `docker` command. The global-mode service also guarantees that when a new node is added to the cluster, or during upgrades, a new task is executed on that node, so the `docker` command runs there automatically.
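As a rough illustration of that mechanism (this is not the actual `swarm-exec` implementation, just a sketch), a global-mode service that mounts the Docker socket can run a one-off `docker` command on every node:
```bash
# Run one task per node; each task talks to its local Engine through the mounted socket.
$ docker service create \
  --mode global \
  --restart-condition none \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  --name run-everywhere \
  docker:latest \
  docker plugin install --grant-all-permissions mavenugo/test-docker-netplugin
```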
### Distributed Application Bundles
To deploy complex multi-container apps, you can use [distributed application
bundles](/compose/bundles.md). You can either run `docker deploy` to deploy a
bundle on your machine over an SSH tunnel, or copy the bundle (for example using
`scp`) to a manager node, SSH into the manager, and then run `docker deploy` (if
you have multiple managers, ensure that your session is on one that
has the bundle file).
> SSH into manager
>
> Remember, the port to access your manager is 56422. For example: `ssh -A docker@managerIP -p 56422`.
A good sample app to test application bundles is the [Docker voting
app](https://github.com/docker/example-voting-app).
By default, apps deployed with bundles do not have ports publicly exposed.
When you change port mappings for services, Docker automatically updates the
underlying platform load balancer:
```bash
$ docker service update --publish-add 80:80 my_service
```
> Publishing services on ports
>
> Your cluster's service load balancer can expose up to 10 ports. [Learn more](load-balancer.md#service-load-balancer).
### Images in private repos
To create swarm services using images in private repos, first make sure that you're
authenticated and have access to the private repo, and then create the service with
the `--with-registry-auth` flag. The following example assumes you're using Docker
Hub:
```bash
$ docker login
...
$ docker service create --with-registry-auth user/private-repo
...
```
The swarm caches and uses the cached registry credentials when creating containers for the service.
See [Using images with Docker for IBM Cloud](registry.md) for more information about using IBM Cloud Container Registry and Docker Trusted Registry.

View File

@ -1,249 +0,0 @@
---
description: Set up IBM Cloud Object Storage for Docker Trusted Registry
keywords: ibm, ibm cloud, registry, dtr, iaas, tutorial
title: Set up IBM Cloud Object Storage for Docker Trusted Registry
---
# Use DTR with Docker EE for IBM Cloud
When you use [Docker Trusted Registry (DTR)](/datacenter/dtr/2.4/guides/) with Docker EE for IBM Cloud, DTR stores images on [IBM Cloud Object Storage (COS)](https://ibm-public-cos.github.io/crs-docs/).
> COS for DTR enabled by default
>
> When you [create a cluster](administering-swarms.md#create-swarms), Docker EE for IBM Cloud orders an IBM Cloud Swift API Object Storage account and creates a container named `dtr-container`.
>
> If you used the `--disable-dtr-storage` parameter to prevent the `dtr-container` from being created, then follow the steps on this page to configure DTR to store images using IBM COS. You can order a new COS account, or use an existing one.
>
> Whether you use the default Swift or set up S3, the COS persists after you delete your cluster. To permanently remove the COS, see [Delete IBM Cloud Object Storage](#delete-ibm-cloud-object-storage).
With IBM Cloud Object Storage S3, your DTR files are stored in "buckets", and users have permissions to read, write, and delete files from those buckets. When you integrate DTR with IBM Cloud Object Storage, DTR sends all read and write operations to the COS bucket so that the images are persisted there.
## Configure DTR to use IBM Cloud Object Storage
To use IBM Cloud Object Storage, make sure that you have [an IBM Cloud Pay As You Go or Subscription account](https://console.bluemix.net/docs/pricing/index.html#accounts) that can provision infrastructure resources. You might need to [upgrade or link your account](https://console.bluemix.net/docs/pricing/index.html#accounts).
If you already have an IBM Cloud Object Storage account that you want to use for your swarm, you can skip the create instructions and configure your [S3](#configure-the-s3-bucket-access-and-permissions-in-dtr) or [Swift](#configure-the-regional-swift-api-container-access-in-dtr) Cloud Object Storage accounts for DTR.
## Create an IBM Cloud Object Storage account instance
1. Log in to your [IBM Cloud infrastructure](https://control.softlayer.com/) account.
2. Click **Storage** > **Object Storage** > **Order Object Storage**.
3. Select the storage type, **Cloud Object Storage - S3 API** or **Cloud Object Storage - Standard Regional Swift API**, then click **Continue**.
4. Select the **Master Service Agreement** acknowledge check box, then click **Place Order**.
5. The new object storage account instance is provisioned shortly, and appears in the table. Click the **Short Description** field to identify what the instance is for, such as **my-d4ic-dtr-cos**.
The next steps depend on which type of COS you created:
* [Configure S3 API](#configure-ibm-cloud-object-storage-s3-for-dtr)
* [Configure Regional Swift API](#configure-ibm-cloud-object-storage-regional-swift-api-for-dtr)
## Configure IBM Cloud Object Storage S3 for DTR
Create and configure your COS S3 bucket to use with DTR.
### Create an IBM Cloud Object Storage bucket
Before you begin, [create an IBM Cloud Object Storage account instance](#create-an-ibm-cloud-object-storage-account-instance).
1. From your [IBM Cloud infrastructure](https://control.softlayer.com/) **Storage** > **Object Storage** page, select the storage account instance that you made previously.
2. From **Manage Buckets**, click the **Add Bucket** icon.
3. Configure your bucket.
* For **Resiliency/Location**, select the region that you want to use, such as **Region - us south**. For higher resiliency, you can select a **Cross Region** option.
* For **Storage Class**, keep the default of **Standard**.
* For the **Bucket Name**, give your bucket a name, such as **dtr**. The name must be unique within your IBM Cloud infrastructure COS account; [learn more about naming requirements](https://ibm-public-cos.github.io/crs-docs/storing-and-retrieving-objects#using-buckets).
4. Click **Add**.
Next, [configure the S3 bucket access and permissions in DTR](#configure-the-s3-bucket-access-and-permissions-in-dtr).
### Configure the S3 bucket access and permissions in DTR
Once you've created a bucket, you can configure DTR to use it. You use the information from your IBM Cloud Object Storage instance to configure DTR external storage.
Before you begin:
* Retrieve your [IBM Cloud infrastructure user name and API key](https://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key).
* [Create a cluster](administering-swarms.md#create-swarms) and [set up UCP](administering-swarms.md#use-the-universal-control-plane).
* Get your UCP password by running `$ docker logs mycluster_ID`.
Steps:
1. From your [IBM Cloud infrastructure](https://control.softlayer.com/) **Storage** > **Object Storage** page, select the storage account instance that you made previously.
2. Select **Access & Permissions**. Keep this page handy, as you use its information to fill in the DTR external storage required fields later in these steps.
3. Retrieve your cluster's DTR URL. If you don't remember your cluster's name, you can use the `bx d4ic list` command:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
$ bx d4ic show --swarm-name mycluster --sl-user user.name.1234567 --sl-api-key api_key
...
Load Balancers
ID Name Address Type
...
ID mycluster-dtr mycluster-dtr-1234567-wdc07.lb.bluemix.net dtr
```
4. Log in to DTR with your credentials of `admin` and the UCP password that you previously retrieved, or the credentials that your admin assigned you.
```none
https://mycluster-dtr-1234567-wdc07.lb.bluemix.net
```
5. From the DTR GUI, navigate to **Settings** > **Storage**.
6. Choose **Cloud** storage type, and then select the **Amazon S3** cloud storage provider.
7. Fill in the **S3 Settings** form with the information from your IBM Cloud Object Storage **dtr** bucket. The following table describes the required fields and what information to include:
| Field | Description of what to include |
| --- | --- |
| Root directory | The path in the bucket where images are stored. You can leave this blank. |
| AWS Region name | The AWS region does not affect IBM COS, so you can fill it in with any region, such as us-east-2. |
| S3 bucket name | The name of the IBM COS bucket that you previously made, **dtr**. |
| AWS access key | From **Access & Permissions** in your IBM COS bucket, expand **Access Keys** and include your **Access Key ID**. |
| AWS secret key | From **Access & Permissions** in your IBM COS bucket, expand **Access Keys** and include your **Secret Access Key**. |
| Region endpoint | From **Access & Permissions** in your IBM COS bucket, expand **Region Endpoints** and include the **Public** endpoint for the region that you created your COS bucket in, such as **us-south**: `s3.us-south.objectstorage.softlayer.net`. |
8. From the DTR GUI, click **Save**.
## Configure IBM Cloud Object Storage Regional Swift API for DTR
Create and configure your COS Regional Swift API container to use with DTR.
### Create an IBM Cloud Object Storage Regional Swift API container
Before you begin, [create an IBM Cloud Object Storage account instance](#create-an-ibm-cloud-object-storage-account-instance). You also can reuse a COS Regional Swift API account that you made with a previously deleted swarm. When you [create a new swarm](administering-swarms.md#create-swarms), include the `--disable-dtr-storage` parameter and then follow these steps to reuse the COS account.
1. From your [IBM Cloud infrastructure](https://control.softlayer.com/) **Storage** > **Object Storage** page, select the storage account instance that you made previously.
2. Select a data center location within the region that you created your swarm in. For example, by default swarms are created in `wdc07`, so select **Washington 1, wdc - wdc01**.
| Docker for IBM Cloud location | Cloud Object Storage location |
| --- | --- |
| Dallas `dal12`, `dal13` | Dallas 5 `dal - dal05` |
| Frankfurt `fra02` | Frankfurt 2 `fra - fra02` |
| Hong Kong `hkg02` | Hong Kong 2 `hkg - hkg02` |
| London `lon04` | London 2 `lon - lon02` |
| Paris `par01` | Paris 1 `par - par01` |
| Sydney `syd01`, `syd04` | Sydney 1 `syd - syd01` |
| Toronto `tor01` | Toronto 1 `tor - tor01` |
| Washington DC `wdc06`, `wdc07`| Washington 1 `wdc - wdc01` |
3. Click **Add Container**.
4. Name the container `dtr-container`.
5. Click the **View Credentials** link.
6. Copy the following information:
* **Authentication Endpoint**: Public URL. For example, `https://wdc.objectstorage.softlayer.net/auth/v1.0/`.
* **Username**: The user name for the COS account. For example, `IBMOS1234567-01:user.name.1234567`.
* **API Key (Password)**: An alphanumeric password for the COS account.
Next, [configure the Regional Swift API container access and permissions in DTR](#configure-the-regional-swift-api-container-access-in-dtr).
### Configure the Regional Swift API container access in DTR
Once you've created a COS Swift API container, you can configure DTR to use it. You use the information from your IBM Cloud Object Storage instance to configure DTR external storage.
Before you begin:
* Retrieve your [IBM Cloud infrastructure user name and API key](https://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key).
* [Create a cluster](administering-swarms.md#create-swarms) and [set up UCP](administering-swarms.md#use-the-universal-control-plane).
* Get your UCP password by running `$ docker logs mycluster_ID`.
Steps:
1. Retrieve the container name, public authentication endpoint, user name, and API key that you copied from the previous step.
2. Retrieve your cluster's DTR URL. If you don't remember your cluster's name, you can use the `bx d4ic list` command:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
$ bx d4ic show --swarm-name mycluster --sl-user user.name.1234567 --sl-api-key api_key
...
Load Balancers
ID Name Address Type
...
ID mycluster-dtr mycluster-dtr-1234567-wdc07.lb.bluemix.net dtr
```
3. Log in to DTR with your credentials of `admin` and the UCP password that you previously retrieved, or the credentials that your admin assigned you.
```none
https://mycluster-dtr-1234567-wdc07.lb.bluemix.net
```
4. From the DTR GUI, navigate to **Settings** > **Storage**.
5. Choose **Cloud** storage type, and then select the **OpenStack Swift** cloud storage provider.
6. Fill in the **Swift Settings** form with the information from your IBM Cloud Object Storage **dtr-container** container:
* **Authorization URL**: Fill in the public authentication endpoint that you previously retrieved. For example, `https://wdc.objectstorage.softlayer.net/auth/v1.0/`.
* **Username**: Fill in the COS account user name that you previously retrieved. For example, `IBMOS1234567-01:user.name.1234567`.
* **Password**: Fill in the COS account API Key (Password) that you previously retrieved.
* **Container**: Fill in the container's name that you previously made, such as `dtr-container`.
7. From the DTR GUI, click **Save**.
## Configure your client
Docker EE for IBM Cloud uses a TLS certificate in its storage backend, so you must configure your Docker Engine to trust DTR.
Before you begin:
* Retrieve your [IBM Cloud infrastructure user name and API key](https://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key).
* [Create a cluster](administering-swarms.md#create-swarms) and [set up UCP](administering-swarms.md#use-the-universal-control-plane).
* Get your UCP password by running `$ docker logs mycluster_ID`.
* [Configure DTR to use IBM Cloud Object Storage](#configure-dtr-to-use-ibm-cloud-object-storage).
1. List your swarm cluster's name and then retrieve its DTR URL:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
$ bx d4ic show --swarm-name mycluster --sl-user user.name.1234567 --sl-api-key api_key
...
Load Balancers
ID Name Address Type
...
ID mycluster-dtr mycluster-dtr-1234567-wdc07.lb.bluemix.net dtr
```
2. Navigate to the DTR page for certificate authority by appending to the DTR URL: `/ca`. Log in with your credentials of `admin` and the UCP password that you previously retrieved, or the credentials that your admin assigned you.
```none
https://mycluster-dtr-1234567-wdc07.lb.bluemix.net/ca
```
3. Follow the OS-specific instructions to [configure your host](/datacenter/dtr/2.4/guides/user/access-dtr/) to use the DTR CA.
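As a sketch, on an Ubuntu host the procedure generally looks like the following (the DTR URL is the placeholder from the previous steps; certificate paths and service commands vary by OS, so follow the linked instructions for your platform):
```bash
# Download the DTR CA certificate; -k is needed because the CA is not yet trusted
$ curl -k https://mycluster-dtr-1234567-wdc07.lb.bluemix.net/ca -o dtr.crt
# Add it to the system trust store and restart Docker so the Engine picks it up
$ sudo cp dtr.crt /usr/local/share/ca-certificates/dtr.crt
$ sudo update-ca-certificates
$ sudo systemctl restart docker
```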
## Verify that DTR is running
Verify that DTR is set up properly by [pulling and pushing an image](/datacenter/dtr/2.4/guides/user/manage-images/pull-and-push-images/).
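A minimal check might look like the following, assuming the `admin` account and a hypothetical `admin/hello-world` repository in your DTR:
```bash
$ docker login mycluster-dtr-1234567-wdc07.lb.bluemix.net
$ docker pull hello-world:latest
$ docker tag hello-world:latest mycluster-dtr-1234567-wdc07.lb.bluemix.net/admin/hello-world:latest
$ docker push mycluster-dtr-1234567-wdc07.lb.bluemix.net/admin/hello-world:latest
```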
## Delete IBM Cloud Object Storage
1. Log in to your [IBM Cloud infrastructure account](https://control.softlayer.com/).
2. Select **Storage** > **Object**.
3. For the object storage **Account Name** that you want to delete, click the cancel icon.
> Find your object storage account name
>
> You can see the name of the swarm associated with the object storage account in the description field.
> Alternatively, you can find the object storage account name for your swarm by running `bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key`.
4. Choose when you want to cancel the object storage account and click **Continue**.
5. Check the acknowledgement and click **Cancel Object Storage Account**.

View File

@ -1,176 +0,0 @@
---
description: Frequently asked questions
keywords: ibm faqs
title: Docker for IBM Cloud frequently asked questions (FAQs)
---
## How do I sign up?
Docker EE for IBM Cloud is an unmanaged, native Docker environment within IBM Cloud that runs Docker Enterprise Edition software. Docker EE for IBM Cloud is available on **December 20th 2017 as a closed Beta**.
[Request access to the beta](mailto:sealbou@us.ibm.com). Once you do, we'll be in touch shortly!
## What IBM Cloud infrastructure permissions do I need?
To provision the resources that make up a Docker swarm, the account administrator needs to enable certain permissions for users in the [IBM Cloud infrastructure customer portal](https://control.softlayer.com/).
You can navigate to user permissions by going to **Account > Users > User name > Permissions**.
Make sure that you enable the permissions in the following table.
* The **View Only** user role does not have any of these enabled by default.
* The **Basic User** role has some of these enabled by default.
* The **Super User** role has most of these enabled by default.
> Save your setting changes!
>
> Don't forget to click **Set Permissions** as you go through the tabs of each permission set so that you don't lose your settings.
<table summary="The minimum user permissions that are required to provision and manage a Docker EE swarm mode cluster for IBM Cloud.">
<caption>Table 1. The minimum user permissions that are required to provision and manage a Docker EE swarm mode cluster for IBM Cloud.
</caption>
<thead>
<th colspan="1">Permissions set</th>
<th colspan="1">Description</th>
<th colspan="1">Required permissions</th>
</thead>
<tbody>
<tr>
<td>Devices</td>
<td>Connect to and configure your VSI, load balancers, and firewalls.</td>
<td>
<ul>
<li>View hardware detail</li>
<li>View virtual server details</li>
<li>Hardware firewall</li>
<li>Software firewall manage</li>
<li>Manage load balancers</li>
<li>Manage device monitoring</li>
<li>Reboot server and view IPMI system information</li>
<li>Issue OS Reloads and initial rescue kernel<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Manage port control</li>
</ul>
</td>
</tr>
<tr>
<td>Network</td>
<td>Provision, connect, and expose IP addresses.</td>
<td>
<ul>
<li>Add compute with public network port<a href="#edge-footnote2"><sup>2</sup></a></li>
<li>View bandwidth statistics</li>
<li>Add IP addresses</li>
<li>Manage email delivery service</li>
<li>Manage network VLAN spanning<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Manage security groups<a href="#edge-footnote2"><sup>2</sup></a></li>
</ul></td>
</tr>
<tr>
<td>Services</td>
<td>Provision and manage services such as CDN, DNS records, SSH keys, NFS storage volumes.</td>
<td>
<ul>
<li>View CDN bandwidth statistics</li>
<li>Vulnerability scanning</li>
<li>Manage CDN account<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Manage CDN file transfers<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>View licenses</li>
<li>Manage DNS, reverse DNS, and WHOIS</li>
<li>Antivirus/spyware</li>
<li>Host IDS</li>
<li>Manage SSH keys<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Manage storage<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>View Certificates (SSL)<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Manage Certificates (SSL)<a href="#edge-footnote1"><sup>1</sup></a></li>
</ul>
</td>
</tr>
<tr>
<td>Account</td>
<td>General settings to provision or remove services and instances.</td>
<td>
<ul>
<li>View account summary</li>
<li>Manage notification subscribers</li>
<li>Add/upgrade cloud instances<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Cancel server<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Cancel services<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Add server<a href="#edge-footnote1"><sup>1</sup></a></li>
<li>Add/upgrade services<a href="#edge-footnote1"><sup>1</sup></a></li>
</ul>
</td>
</tr></tbody></table>
`1`: A **Basic User** needs these permissions added to the account.
{: id="edge-footnote1" }
`2`: Both **Basic** and **Super** users need these permissions added to the account.
{: id="edge-footnote2" }
## Which IBM Cloud regions and locations (data centers) does this work with?
Docker EE for IBM Cloud is available in the following IBM Cloud regions and locations (data centers).
| Region | Region Prefix | Cities | Available locations |
| --- | --- | --- | --- |
| Frankfurt region | `eu-de`| Frankfurt, Paris | `fra02`, `par01` |
| United Kingdom | `eu-gb` | London | `lon04` |
| Sydney | `au-syd` | Hong Kong, Sydney | `hkg02`, `syd01`, `syd04` |
| US South | `ng` | Dallas, Toronto, Washington DC | `dal12`, `dal13`, `tor01`, `wdc06`, `wdc07`|
> Default location
>
> By default, clusters are created in US South, `wdc07`.
## Where are my container logs and metrics?
You must enable logging. See [Enabling logging and metric data for your swarm](logging.html) for more information.
## Why don't `bx d4ic` commands work?
The Docker EE for IBM Cloud CLI plug-in simplifies your interaction with IBM Cloud infrastructure resources. As such, many `bx d4ic` commands require you to provide your infrastructure account user name and API key credentials as options on each command (`--sl-user <user.name.1234567> --sl-api-key <api-key>`).
Instead of including these in each command, you can [set your environment variables](/docker-for-ibm-cloud/index.md#set-infrastructure-environment-variables).
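For example, using the same placeholder credentials as the rest of these docs:

```bash
export SOFTLAYER_USERNAME=user.name.1234567
export SOFTLAYER_API_KEY=api_key
```

After the variables are set, you can omit the `--sl-user` and `--sl-api-key` options from `bx d4ic` commands.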
## Why can't I target an organization or space in IBM Cloud?
Before you can target an organization or space in IBM Cloud, the account owner or administrator must set up the organization or space. See [Creating organizations and spaces](https://console.bluemix.net/docs/admin/orgs_spaces.html#orgsspacesusers) for more information.
## Can I manually change the load balancer configuration?
No. If you make any manual changes to the load balancer, they are removed the next time that the load balancer is updated or swarm changes are made. This is because the swarm service configuration is the source of record for service ports. If you add listeners to the load balancer manually, they could conflict with what is in the cluster and cause issues.
## How do I run administrative commands?
SSH into a manager node. Manager nodes are accessed on port 56422.
**Tip**: Because this port differs from the default (port 22), you can add an alias to your `.profile` to make the SSH process simpler:
```none
alias ssh-docker='function __t() { ssh-keygen -R [$1]:56422 > /dev/null 2>&1; ssh -A -p 56422 -o StrictHostKeyChecking=no docker@$1; unset -f __t; }; __t'
```
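With the alias defined, connecting to a manager becomes a single command. For example, replacing the placeholder with your manager node's public IP address:

```bash
$ ssh-docker manager_public_IP
```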
## Can I run my containers in Kubernetes clusters?
Docker EE for IBM Cloud supports swarm clusters. If you are looking for the IBM Cloud Container Service for Kubernetes clusters, see the [IBM Cloud documentation](https://console.bluemix.net/docs/containers/container_index.html).
## Are there any known issues?
Yes. News, updates, and known issues are recorded by version on the [Release notes](release-notes.md) page.
## Where do I report problems or bugs?
Contact us through email at docker-for-ibmcloud-beta@docker.com.
If your stack is misbehaving, run the following diagnostic tool from one of the managers to collect your docker logs and send them to Docker:
```bash
$ docker-diagnose
OK hostname=manager1
OK hostname=worker1
OK hostname=worker2
Done requesting diagnostics.
Your diagnostics session ID is 1234567890-xxxxxxxxxxxxxx
Please provide this session ID to the maintainer debugging your issue.
```
> **Note**: Your output may be slightly different from the above, depending on your swarm configuration.

View File

@ -1,108 +0,0 @@
---
description: Use Docker images stored in IBM Cloud Container Registry
keywords: ibm, ibm cloud, registry, iaas, tutorial
title: Use images stored in IBM Cloud Container Registry
---
# Use IBM Cloud Container Registry to securely store your Docker images
[IBM Cloud Container Registry](https://www.ibm.com/cloud/container-registry) works with Docker EE for IBM Cloud to provide a secure image registry to use when creating swarm services and containers.
## Install the CLI and set up a namespace
Follow the [Getting Started with IBM Cloud Container Registry](https://console.bluemix.net/docs/services/Registry/index.html) instructions to install the registry CLI and set up a namespace.
## Log in to Docker with private registry credentials
### Log in with IBM Cloud Container Registry
You can place the credentials of your IBM Cloud account into Docker by running the registry login command:
```bash
$ bx cr login
```
The credentials set by `bx cr login` periodically expire, so you might need to run the command again when working with Docker, or use a token instead.
### Log in with IBM Cloud Container Registry tokens
To prevent repeatedly logging in with `bx cr login`, you can create a non-expiring registry token with read-write permissions to use with Docker. Each token that you create is unique to the registry region. Repeat the steps for each registry region that you want to use with Docker.
1. [Create a registry token](https://console.bluemix.net/docs/services/Registry/registry_tokens.html#registry_tokens_create).
2. Instead of using the `bx cr login` command, log in to Docker with the registry token. Target the region for which you are using the registry, such as `ng` for US South:
```bash
$ docker login -u token -p my_registry_token registry.ng.bluemix.net
```
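Once logged in, you can tag and push a local image to your namespace in that region. For example, a sketch with placeholder namespace and image names:

```bash
$ docker tag my_image:tag registry.ng.bluemix.net/my_namespace/my_image:tag
$ docker push registry.ng.bluemix.net/my_namespace/my_image:tag
```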
### Log in with other private registries
You can also log in to Docker with other private registries. View that registry's documentation for the appropriate authentication methods. To use Docker Trusted Registry, [configure external IBM Cloud Object Storage](dtr-ibm-cos.md).
### Log in before running certain Docker commands
Log in to Docker (whether by the registry log-in, token, or other method) before running the following Docker commands:
- `docker pull` to download an image from IBM Cloud Container Registry.
- `docker push` to upload an image to IBM Cloud Container Registry.
- `docker service create` to create a service that uses an image that is stored in IBM Cloud Container Registry.
- Any Docker command that has the `--with-registry-auth` parameter.
## Create a container using an IBM Cloud Container Registry image
You can create a container using a registry image. You might want to run the image locally to test it before [creating a swarm service](#create-a-swarm-service-using-an-ibm-cloud-container-registry-image) based on the image.
Before you begin:
- [Install the registry CLI and set up a namespace](#install-the-cli-and-set-up-a-namespace).
- [Add an image in your registry namespace](https://console.bluemix.net/docs/services/Registry/registry_images_.html#registry_images_) to use to create the swarm service.
- [Log in to Docker](#log-in-to-docker-with-private-registry-credentials) with the appropriate registry credentials.
To create a local container that uses an IBM Cloud Container Registry image:
1. Get the name and tag of the image you want to use to create the service:
```bash
$ bx cr images
```
2. Run the image locally:
```bash
$ docker run --name my_container my_image:tag
```
**Tip**: If you no longer need the container, use the `docker kill` [command](/engine/reference/commandline/kill/) to stop it, and then remove it with `docker rm`.
## Create a swarm service using an IBM Cloud Container Registry image
You can create a service that schedules tasks to spawn containers that are based on an image in your IBM Cloud Container Registry.
Before you begin:
- Install the Docker for IBM Cloud CLI.
- [Install the registry CLI and set up a namespace](#install-the-cli-and-set-up-a-namespace).
- [Add an image in your registry namespace](https://console.bluemix.net/docs/services/Registry/registry_images_.html#registry_images_) to use to create the service.
- [Log in to Docker](#log-in-to-docker-with-private-registry-credentials) with the appropriate registry credentials.
- [Create a Docker swarm](/engine/swarm/swarm-mode/#create-a-swarm).
To create a Docker swarm service that uses an IBM Cloud Container Registry image:
1. Get the name and tag of the image you want to use to create the service:
```bash
$ bx cr images
```
2. Connect to your Docker for IBM Cloud swarm. Navigate to the directory where you [downloaded the UCP credentials](administering-swarms.md#download-client-certificates) and run the script. For example:
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
3. Send the registry authentication details when creating the Docker service for the image `<your-image:tag>`:
```bash
$ docker service create --name my_service --with-registry-auth my_image:tag
```
4. Verify that your service was created:
```bash
$ docker service ls --filter name=my_service
```
> More about creating Docker services
>
> You can also use the `docker service create` [command options](/engine/reference/commandline/service_create/) to set additional features such as replicas, global mode, or secrets. Visit the [Docker swarm services](/engine/swarm/services/) documentation to learn more.
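For instance, a sketch that combines registry authentication with a couple of these options (the namespace and image name are placeholders):

```bash
$ docker service create --name my_service \
  --with-registry-auth \
  --replicas 2 \
  --publish 8080:80 \
  registry.ng.bluemix.net/my_namespace/my_image:tag
```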

Binary file not shown.


View File

@ -1,118 +0,0 @@
---
description: Setup & Prerequisites
keywords: ibm cloud, ibm, iaas, tutorial
title: Docker EE for IBM Cloud setup & prerequisites
redirect_from:
---
## Docker Enterprise Edition (EE) for IBM Cloud
Docker EE for IBM Cloud is an unmanaged, native Docker environment within IBM Cloud that runs Docker Enterprise Edition software. Docker EE for IBM Cloud is available on **December 20th 2017 as a closed Beta**.
[Email IBM to request access to the closed beta](mailto:sealbou@us.ibm.com). In the welcome email you receive, you are given the Docker EE installation URL that you use for the beta.
Looking for the IBM Cloud Container Service? See the [IBM Cloud documentation](https://console.bluemix.net/docs/containers/container_index.html).
## Prerequisites
To create a swarm cluster in IBM Cloud, you must have certain accounts, credentials, and environments set up.
### Accounts
If you do not have an IBM Cloud account, [register for a Pay As You Go IBM Cloud account](https://console.bluemix.net/registration/).
If you already have an IBM Cloud account, make sure that you can provision infrastructure resources. You might need to [upgrade or link your account](https://console.bluemix.net/docs/account/index.html#accounts).
For a full list of infrastructure permissions, see [What IBM Cloud infrastructure permissions do I need?](faqs.md#what-ibm-cloud-infrastructure-permissions-do-i-need). In general, you need the ability
to provision the following types of resources:
* File and block storage.
* Load balancers.
* SSH keys.
* Subnet IPs.
* Virtual server devices.
* VLANs.
### Credentials
[Add your SSH key to IBM Cloud infrastructure](https://knowledgelayer.softlayer.com/procedure/add-ssh-key), label it, and note the label for use when [administering swarms](administering-swarms.md).
Log in to [IBM Cloud infrastructure](https://control.softlayer.com/), select your user profile, and under the **API Access Information** section retrieve your **API Username** and **Authentication Key**.
### Environment
If you have not already, [create an organization and space](https://console.bluemix.net/docs/admin/orgs_spaces.html#orgsspacesusers) in IBM Cloud. You must be the account owner or administrator to complete this step.
## Install the CLIs
To use Docker EE for IBM Cloud, you need the following CLIs:
* IBM Cloud CLI.
* Docker for IBM Cloud plug-in.
* Optional: IBM Cloud Container Registry plug-in.
Steps:
1. Install the [IBM Cloud CLI](https://console.bluemix.net/docs/cli/reference/bluemix_cli/get_started.html#getting-started).
2. Log in to the IBM Cloud CLI. Enter your credentials when prompted. If you have a federated ID, use the `--sso` option.
```bash
$ bx login [--sso]
```
3. Install the Docker EE for IBM Cloud plug-in. The prefix for running commands is `bx d4ic`.
```bash
$ bx plugin install docker-for-ibm-cloud -r Bluemix
```
4. Optional: To manage a private IBM Cloud Container Registry, install the plug-in. The prefix for running commands is `bx cr`.
```bash
$ bx plugin install container-registry -r Bluemix
```
5. Verify that the plug-ins have been installed properly:
```bash
$ bx plugin list
```
## Set infrastructure environment variables
The Docker EE for IBM Cloud CLI plug-in simplifies your interaction with IBM Cloud infrastructure resources. As such, many `bx d4ic` commands require you to provide your infrastructure account user name and API key credentials.
Instead of including these in each command, you can set your environment variables.
Steps:
1. [Log in to IBM Cloud infrastructure user profile](https://control.bluemix.net/account/user/profile).
2. Under the **API Access Information** section, locate your **API Username** and **Authentication Key**.
3. Retrieve your Docker EE installation URL. For beta, you received this in your welcome email.
4. From the CLI, set the environment variables with your infrastructure credentials and your Docker EE installation URL:
```none
export SOFTLAYER_USERNAME=user.name.1234567
export SOFTLAYER_API_KEY=my_authentication_key
export D4IC_DOCKER_EE_URL=my_docker-ee-url
```
5. Verify that your environment variables were set.
```bash
$ env | grep SOFTLAYER && env | grep D4IC_DOCKER_EE_URL
SOFTLAYER_API_KEY=my_authentication_key
SOFTLAYER_USERNAME=user.name.1234567
D4IC_DOCKER_EE_URL=my_docker-ee-url
```
## What's next?
* [Create a swarm](administering-swarms.md#create-swarms).
* [Access UCP](administering-swarms.md#access-ucp) and the [download client certificate bundle](administering-swarms.md#download-client-certificates).
* [Learn when to use UCP and the CLIs](administering-swarms.md#ucp-and-clis).
* [Configure DTR to use IBM Cloud Object Storage](dtr-ibm-cos.md).

View File

@ -1,214 +0,0 @@
---
description: Load Balancer
keywords: IBM Cloud load balancer
title: Load balance Docker EE for IBM Cloud clusters
---
Docker Enterprise Edition (EE) for IBM Cloud deploys three load balancers to each cluster so that you can:
* [Access Docker EE Universal Control Plane (UCP) and the cluster manager node](#manager-load-balancer).
* [Access Docker Trusted Registry (DTR)](#dtr-load-balancer).
* [Expose services created in the cluster](#service-load-balancer).
The load balancers are preconfigured for you. Do not change the configurations.
## Manager load balancer
The manager load balancer is preconfigured to connect your local Docker client, your cluster, `bx d4ic` commands, and Docker EE UCP.
Ports:
* UCP is listening on load balancer port 443.
* Agent traffic on the manager nodes is on port 56443.
Use the manager load balancer URL to access Docker EE [UCP](/datacenter/ucp/2.2/guides/).
1. Get your cluster's load balancer URL for UCP:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
2. In your browser, navigate to the URL and log in.
**Tip**: Your user name is `admin` or the user name that your admin created for you. You got the password when you [created the cluster](administering-swarms.md#create-swarms) or when your admin created your credentials.
## DTR load balancer
The DTR load balancer is used to access DTR and run registry commands such as `docker push` or `docker pull`.
The DTR load balancer exposes the following ports:
* 443 for HTTPS
* 80 for HTTP
Use the load balancer to access [DTR](/datacenter/dtr/2.4/guides/).
1. Get the name of the cluster for which you want to access DTR:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
2. Get the DTR URL for the cluster:
```bash
$ bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key
```
3. In your browser, navigate to the URL and log in. UCP and DTR share the same login.
**Tip**: Your user name is `admin` or the user name that your admin created for you. You got the password when you [created the cluster](administering-swarms.md#create-swarms) or when your admin created your credentials.
## Service load balancer
When you create a service, any ports that are opened with `--publish` or `-p` are automatically published through the load balancer.
> Reserved ports
>
> Several ports are reserved and cannot be used to expose services:
> * 56501 for the service load balancer management.
> * 443 for the UCP web UI.
> * 56443 for the Agent.
For example:
```bash
$ docker service create --name nginx -p 80:80 nginx
```
This opens up port 80 on the load balancer, and directs any traffic
on that port to your service.
> Note: 10 ports on the service load balancer
>
> Each cluster's service load balancer can have 10 ports opened. If you create new services or update a service to publish it on a new port after all 10 ports are already in use, the new ports are not added and the service cannot be accessed through the load balancer.
> If you need more than 10 ports, you can explore alternative solutions such as [UCP domain names](/datacenter/ucp/2.2/guides/admin/configure/use-domain-names-to-access-services/) or [Træfik](https://github.com/containous/traefik). You can also [create another cluster](administering-swarms.md#create-swarms).
To learn more about general swarm networking, see the [Docker container networking](/engine/userguide/networking/) and [Manage swarm service networks](/engine/swarm/networking/) guides.
### Access a service with the service load balancer
Get a publicly accessible HTTP URL for your app by publishing a Docker service on an unused port. For secure HTTPS URLs, see [Services with SSL certificates](#services-with-ssl-certificates).
1. Connect to your Docker EE for IBM Cloud swarm. Navigate to the directory where you [downloaded the UCP credentials](administering-swarms.md#download-client-certificates) and run the script. For example:
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
2. Create the service that you want to expose by using the `docker service create` [command](/engine/reference/commandline/service_create/). For example:
```bash
$ docker service create --name nginx-test \
--publish 8080:80 \
--replicas 3 \
nginx
```
3. List the name of the cluster such as `mycluster`, and then use it to show the service (**svc**) load balancer URL:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
$ bx d4ic show --swarm-name mycluster --sl-user user.name.1234567 --sl-api-key api_key
```
4. To access the service that you exposed on a port, use the service (**svc**) load balancer URL that you retrieved. The load balancer might need a few minutes to update. For example:
```bash
$ curl mycluster-svc-1234567-wdc07.lb.bluemix.net:8080/
...
<title>Welcome to nginx!</title>
...
```
### Services with SSL certificates
You can publicly expose your app securely with an HTTPS URL. Use [IBM Cloud infrastructure SSL Certificates](https://knowledgelayer.softlayer.com/topic/ssl-certificates) to authenticate and encrypt online transactions that are transmitted through your cluster's load balancer.
When you create a certificate for your domain, specify the **Common Name**. When you create the Docker service, include the certificate common name to use the certificate for SSL termination for your service.
Learn more about the [labels for SSL termination and health check paths](#labels-for-ssl-termination-and-health-check-paths), then follow along with an [example command to expose a service on HTTPS](#example-command-for-https).
#### Labels for SSL termination and health check paths
When you create the Docker service to expose your app with an HTTPS URL, you need to specify two labels that:
* Specify your SSL certificate's **Common Name**.
* Set the health check path.
**Start a service that uses SSL termination**:
Start a service that listens on ports that you specify. When you create the service, add the `com.ibm.d4ic.lb.cert=certificate-common-name` label so that the service load balancer provides SSL termination on those ports by using your SSL certificate's common name.
In the label, you must append `@HTTPS:port` to list the ports that you want to publish.
For example:
```bash
$ docker service create --name name \
...
--label com.ibm.d4ic.lb.cert=certificate-common-name@HTTPS:444
...
```
To specify other or multiple ports, append them as follows:
* Links HTTPS to port 444: `--label com.ibm.d4ic.lb.cert=certificate-common-name@HTTPS:444`
* Links HTTPS to ports 444 and 8080: `--label com.ibm.d4ic.lb.cert=certificate-common-name@HTTPS:444,HTTPS:8080`
**Set a health check path when using SSL termination**:
By default, the service load balancer sets a health check path to `/`. If the service cannot respond with a 200 message to a `GET` request on the `/` path, then you must include a health monitor path label when you create the service. For example:
```bash
--label com.ibm.d4ic.healthcheck.path=/demo/hello@444
```
When the route is published, the health check is set to the path that you specify in the label. Choose a path that can respond with a 200 message to a `GET` request.
#### Example command for HTTPS
The following `docker service create` command expands on the example from the [previous section](#access-a-service-with-the-service-load-balancer) to create a demo service that is published on a different port than the default and includes a health check path.
Before you begin:
1. Log in to [IBM Cloud infrastructure](https://control.softlayer.com/).
2. [Add or import an SSL certificate](https://knowledgelayer.softlayer.com/topic/ssl-certificates) to use. In your infrastructure account, you can access the page from **Security** > **SSL** > **Certificates**.
3. Note the certificate **Common Name**.
Steps:
1. Connect to your Docker EE for IBM Cloud swarm. Navigate to the directory where you [downloaded the UCP credentials](administering-swarms.md#download-client-certificates) and run the script. For example:
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
2. Create the service that you want to expose by using the `docker service create` [command](/engine/reference/commandline/service_create/). For example:
```bash
$ docker service create --name nginx-test \
--publish 444:80 \
--replicas 3 \
--label com.ibm.d4ic.lb.cert=certificate-common-name@HTTPS:444 \
--label com.ibm.d4ic.healthcheck.path=/@444 \
nginx
```
3. List the name of the cluster such as `mycluster`, and then use it to show the service (**svc**) load balancer URL:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
$ bx d4ic show --swarm-name mycluster --sl-user user.name.1234567 --sl-api-key api_key
```
4. To access the service that you exposed on a port, use the service (**svc**) load balancer URL that you retrieved. The load balancer might need a few minutes to update. For example:
```bash
$ curl --cacert path/to/certificate.pem https://mycluster-svc-1234567-wdc07.lb.bluemix.net:444
...
<title>Welcome to nginx!</title>
...
```

View File

@ -1,76 +0,0 @@
---
description: Logging for Docker for IBM Cloud
keywords: ibm, ibm cloud, logging, iaas, tutorial
title: Send logging and metric cluster data to IBM Cloud
---
You can enable Docker Enterprise Edition for IBM Cloud to send logging and metric data about the nodes and containers in your Docker EE cluster to the IBM Cloud [Log Analysis](https://console.bluemix.net/docs/services/CloudLogAnalysis/log_analysis_ov.html#log_analysis_ov) and [Monitoring](https://console.bluemix.net/docs/services/cloud-monitoring/monitoring_ov.html#monitoring_ov) services.
> Logging on services other than IBM Cloud
>
> If you want to configure logging and metrics for Docker EE to a remote logging service that is not IBM Cloud, see [Configure UCP logging](/datacenter/ucp/2.0/guides/configuration/configure-logs/).
## Enable logging and metrics
By default, logging and metrics are disabled. After you enable logging and metrics, containers are deployed to your cluster and begin to transmit data to IBM Cloud [Log Analysis](https://console.bluemix.net/docs/services/CloudLogAnalysis/log_analysis_ov.html#log_analysis_ov) and [Monitoring](https://console.bluemix.net/docs/services/cloud-monitoring/monitoring_ov.html#monitoring_ov) services.
Before you begin, make sure that you [installed the IBM Cloud CLI and Docker for IBM Cloud plug-in](/docker-for-ibm-cloud/index.md).
To enable logging and metrics:
1. Log in to IBM Cloud. If you have a federated ID, use the `--sso` option.
```bash
$ bx login [--sso]
```
2. After logging in to IBM Cloud, target the organization and space to which you want to send the logging and metric data:
```bash
$ bx target --cf
```
3. Connect to your swarm by setting the environment variables from the [client certificate bundle that you downloaded](administering-swarms.md#download-client-certificates). For example:
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
4. Get the name of your cluster. If you did not [set your environment variables](/docker-for-ibm-cloud/index.md#set-infrastructure-environment-variables), include your IBM Cloud infrastructure credentials.
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
5. Enable logging and metrics. Replace the _my_swarm_ variable with the name of your cluster and include the path to the Docker EE client certificate bundle. Include your IBM Cloud infrastructure credentials if you have not set the environment variables.
```bash
$ bx d4ic logmet --swarm-name my_swarm \
--cert-path filepath/to/certificate/repo \
--sl-user user.name.1234567 \
--sl-api-key api_key \
--enable
```
## Disable logging and metrics
You might want to disable logging and metrics for reasons such as sending data to a different server [specified in Docker Enterprise Edition UCP](/datacenter/ucp/2.0/guides/configuration/configure-logs/). After disabling, data is no longer transmitted to IBM Cloud Log Analysis and Monitoring services.
To disable logging and metrics, get the name of the swarm and run the disable command:
```bash
$ bx d4ic logmet --swarm-name my_swarm \
--cert-path filepath/to/certificate/repo \
--sl-user user.name.1234567 \
--sl-api-key api_key \
--disable
```
## Review logging and metrics
Use the following links to access Kibana and Grafana for data transmitted to IBM Cloud. Select the IBM Cloud organization and space that your cluster is in to view its information.
View the [IBM Cloud Log Analysis](https://console.bluemix.net/docs/services/CloudLogAnalysis/log_analysis_ov.html#log_analysis_ov) and [IBM Cloud Monitoring](https://console.bluemix.net/docs/services/cloud-monitoring/monitoring_ov.html#monitoring_ov) documentation to learn more.
| Region | Logging | Metrics|
| --- | --- | --- |
| US South | [https://logging.ng.bluemix.net/](https://logging.ng.bluemix.net/) | [https://metrics.ng.bluemix.net/](https://metrics.ng.bluemix.net/) |
| United Kingdom | [https://logmet.eu-gb.bluemix.net/](https://logmet.eu-gb.bluemix.net/)| [https://metrics.eu-gb.bluemix.net/](https://metrics.eu-gb.bluemix.net/) |

View File

@ -1,7 +0,0 @@
---
description: Docker's use of Open Source
keywords: docker, opensource
title: Open source components and licensing
---
Docker EE for IBM Cloud is built using open source software.

View File

@ -1,216 +0,0 @@
---
description: Persistent data volumes
keywords: ibm persistent data volumes
title: Save persistent data in file storage volumes
---
Docker EE for IBM Cloud comes with the `d4ic-volume` plug-in preinstalled. With this plug-in, you can set up your cluster to save persistent data in your IBM Cloud infrastructure account's file storage volumes. Learn how to set up data volumes, create swarm services that use volumes, and clean up volumes.
## Set up file storage for persistent data volumes
With Docker EE for IBM Cloud, you can create new or use existing IBM Cloud infrastructure file storage to save persistent data in your cluster.
> Volumes
>
> Volumes are scoped to the cluster, not to the IBM Cloud account. Follow the steps in this document to set up storage volumes for your cluster. Do not use other methods such as UCP.
### Create file storage
Create an IBM Cloud infrastructure file storage volume from Docker EE for IBM Cloud. The volume might take a few minutes to provision. It has default settings of `Endurance` storage type, `2` IOPS, and `20` GB capacity.
If you want to change the default settings for storage type, IOPS, or capacity, [review file storage provisioning](https://console.bluemix.net/docs/infrastructure/FileStorage/index.html#provisioning) information. For Docker EE for IBM Cloud, the minimum IOPS per GB is 2.
1. Connect to your cluster manager node:
1. Get your cluster name by running `bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key`.
2. Get your manager public IP by running `bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key`.
3. Connect to the manager by running `ssh -A docker@managerIP -p 56422`.
2. Create the volume:
```bash
$ docker volume create my_volume \
--opt request=provision \
--driver d4ic-volume
```
3. **Optional**: If you want to change the default settings of the volume, specify the options:
```bash
$ docker volume create my_volume \
--opt request=provision \
--opt type=Performance \
--opt iops=100 \
--opt capacity=40 \
--opt billingType=monthly \
--driver d4ic-volume
```
> File storage options
>
> If the options specified cannot be provisioned in IBM Cloud infrastructure file storage, the volume is not created. Change the values to ones within [the provisioning scope](https://console.bluemix.net/docs/infrastructure/FileStorage/index.html#provisioning) and try again.
4. Verify that the volume is created by inspecting it:
```bash
$ docker volume ls
DRIVER VOLUME NAME
d4ic-volume:latest my_volume
$ docker volume inspect my_volume
```
Example output:
{% raw %}
```bash
[
{
"Driver": "d4ic-volume:latest",
"Labels": {},
"Mountpoint": "my_file_storage_volume_mount_point",
"Name": "my_volume",
"Options": {
"request": "provision"
},
"Scope": "global",
"Status": {
"Settings": {
"Capacity": 20,
"Datacenter": "wdc07",
"ID": 12345678,
"Iops": "",
"Mountpoint": "my_file_storage_volume_mount_point",
"Notes": "docker_volume_name:my_volume;docker_swarm_id:my_swarmID",
"Status": "PROVISION_COMPLETED",
"StorageType": "ENDURANCE_FILE_STORAGE"
}
}
}
]
```
{% endraw %}
> File storage provisioning
>
> The `docker volume create` request might take some time to provision the file storage volume in IBM Cloud infrastructure. If the request fails but a new file storage was created in your infrastructure account, a connection error might have disrupted provisioning. Follow the instructions for [using existing IBM Cloud file storage](#use-existing-ibm-cloud-file-storage) in your swarm volume.
Now [create a swarm service](#create-swarm-services-with-persistent-data) to use your persistent data volume.
### Use existing IBM Cloud file storage
Use an existing IBM Cloud infrastructure file storage volume with Docker EE for IBM Cloud. After you configure the volume to be used with Docker EE for IBM Cloud, you can [create swarm services](#create-swarm-services-with-persistent-data) that use the volume and [clean up](#clean-up-volumes-in-your-swarm) the volume.
1. Connect to your Docker EE for IBM Cloud cluster that you want to mount the volume to. Navigate to the directory where you [downloaded the UCP credentials](administering-swarms.md#download-client-certificates) and run the script. For example:
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
2. Retrieve the cluster ID:
{% raw %}
```bash
$ docker info --format={{.Swarm.Cluster.ID}}
```
{% endraw %}
3. From your browser, log in to your [IBM Cloud infrastructure account](https://control.softlayer.com/) and access the file storage volume that you want to use.
4. Under notes, add the `docker_volume_name` field to the first line. Add a unique volume name and swarm ID.
```bash
docker_volume_name:my_volume;docker_swarm_id:my_swarmID
```
> Volume names
>
> Don't use the same Docker volume name for multiple file storage volumes within the same cluster!
5. **Optional**: If you have other notes in the file storage volume, add a semicolon after the Docker volume name. Make sure that the Docker volume name is on the first line. Example:
```bash
docker_volume_name:my_volume;docker_swarm_id:my_swarmID;
other_field:other_notes
```
Now [create a swarm service](#create-swarm-services-with-persistent-data) to use your persistent data volume.
## Create swarm services with persistent data
Before you begin creating services or running tasks for Docker EE for IBM Cloud swarms with persistent data, [set up file storage volumes](#set-up-file-storage-for-persistent-data-volumes). Volumes are shared across all instances of the service.
### Create a service with persistent data
You can create a service to schedule tasks across the worker nodes in your swarm.
Before you begin:
- Connect to the cluster manager node.
1. Get your cluster name by running `bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key`.
2. Get your manager public IP by running `bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key`.
3. Connect to the manager by running `ssh -A docker@managerIP -p 56422`.
- [Set up file storage for persistent data](#set-up-file-storage-for-persistent-data-volumes).
Create a service that specifies the volume you want to use. The example creates _my_service_ that schedules a task to spawn swarm containers based on the Alpine image, creates 3 replicas, mounts _my_volume_, and sets the volume destination (dst) path within each container to _/dst/directory_.
```bash
$ docker service create --name my_service \
--mount type=volume,source=my_volume,dst=/dst/directory,volume-driver=d4ic-volume \
--replicas=3 \
alpine ping 8.8.8.8
```
> Do not provision a volume when you create a Docker service
>
> When you use the `docker service create` command, do not specify the `volume-opt=request=provision` option. Instead, use the `docker volume create` [command](#create-file-storage) to provision new file storage volumes. You run this command only on one manager node for the swarm, and only once per shared file storage volume.
### Run a task with persistent data
You can run a task on a single Docker node that connects to your IBM Cloud infrastructure file storage volume. If you want to connect the volume to multiple containers in your Docker swarm, [create a service](#create-a-service-with-persistent-data) instead.
Before you begin:
- Connect to a node.
- [Set up file storage for persistent data](#set-up-file-storage-for-persistent-data-volumes).
Create a task that specifies the volume you want to use. The example creates a task that spawns a container based on the Busybox image, mounts _my_volume_, and sets the volume path within the container to _/dst/directory_.
```bash
$ docker run -it --volume my_volume:/dst/directory busybox sh
```
## Clean up volumes in your swarm
You can remove services with persistent data, delete volumes, or disconnect an IBM Cloud infrastructure file storage volume.
### Remove services with persistent data
You can remove the persistent data volume service from the Docker swarm. Your IBM Cloud infrastructure file storage volume still exists, and can be mounted to other swarms. If you want to use the service without persistent data, remove the service and create it again without mounting the volume.
Example command:
```bash
$ docker service rm my_service
```
### Delete volumes
You can permanently delete your IBM Cloud infrastructure file storage volume. Any data that is stored on the volume is lost when you delete it. Before deleting a volume, ensure that no service is using the volume for persistent data. You can check your services by using the `docker service inspect` [command](/engine/reference/commandline/service_inspect/).
Example command:
```bash
$ docker volume rm my_volume
```
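For example, before removing a volume you might confirm that a given service no longer mounts it by checking its mounts (the service name is a placeholder):

{% raw %}
```bash
$ docker service inspect my_service \
    --format '{{ json .Spec.TaskTemplate.ContainerSpec.Mounts }}'
```
{% endraw %}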
### Disconnect volumes
You can disconnect a particular IBM Cloud infrastructure file storage volume.
1. Log in to your IBM Cloud infrastructure account and access the file storage volume that you want to disconnect.
2. Under notes, delete the entry `docker_volume_name:my_volume;docker_swarm_id:my_swarmID`.

View File

@ -1,161 +0,0 @@
---
description: Docker EE for IBM Cloud (Beta) Quick Start
keywords: ibm, ibm cloud, quickstart, iaas, tutorial
title: Docker EE for IBM Cloud (Beta) Quick Start
---
# Docker Enterprise Edition for IBM Cloud (Beta) Quick Start
Are you ready to orchestrate Docker Enterprise Edition swarm clusters that are enhanced with the full suite of secure IBM Cloud platform, infrastructure, and Watson services? Great! Let's get you started.
To request access to the closed beta, [contact IBM](mailto:sealbou@us.ibm.com).
![Getting started with Docker for IBM Cloud in 4 easy steps](img/quickstart.png)
## Step 1: Get all your accounts in order
1. Set up your IBM Cloud account:
* [Register for a Pay As You Go IBM Cloud account](https://console.bluemix.net/registration/).
* If you already have an IBM Cloud account, make sure that you can provision infrastructure resources. You might need to [upgrade or link your account](https://console.bluemix.net/docs/account/index.html#accounts).
2. Get your IBM Cloud infrastructure credentials:
* [Add your SSH key to IBM Cloud infrastructure](https://knowledgelayer.softlayer.com/procedure/add-ssh-key), label it, and note the label.
* Get your [account API credentials](https://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key).
3. If you have not already, [create an organization and space](https://console.bluemix.net/docs/admin/orgs_spaces.html#orgsspacesusers) to use when using IBM Cloud services. You must be the account owner or administrator to complete this step.
4. Get the Docker EE URL associated with your subscription. [Email IBM](mailto:sealbou@us.ibm.com) to get a trial subscription during the beta.
5. Set your environment variables to use your IBM Cloud infrastructure credentials and your Docker EE installation URL. For example:
```none
export SOFTLAYER_USERNAME=user.name.1234567
export SOFTLAYER_API_KEY=api-key
export D4IC_DOCKER_EE_URL=my_docker-ee-url
```
Now let's download some Docker for IBM Cloud tools.
## Step 2: Install the CLIs
1. Install the [IBM Cloud CLI](https://console.bluemix.net/docs/cli/reference/bluemix_cli/get_started.html#getting-started).
2. Install the Docker for IBM Cloud plug-in. The prefix for running commands is `bx d4ic`.
```bash
$ bx plugin install docker-for-ibm-cloud -r Bluemix
```
3. Optional: To manage a private IBM Cloud Container Registry, install the plug-in. The prefix for running commands is `bx cr`.
```bash
$ bx plugin install container-registry -r Bluemix
```
4. Verify that the plug-ins have been installed properly:
```bash
$ bx plugin list
```
Now we're ready to get to the fun stuff: making a cluster!
## Step 3: Create clusters
Create a Docker EE swarm cluster in IBM Cloud. For beta, your cluster can have a maximum of 20 nodes, up to 14 of which can be worker nodes. If you need more nodes than this, work with your Docker representative to acquire an additional Docker EE license.
1. Log in to the IBM Cloud CLI. If you have a federated ID, use the `--sso` option.
```bash
$ bx login [--sso]
```
2. Target the IBM Cloud org and space:
```bash
$ bx target --cf
```
3. Create the cluster. Use the `--swarm-name` flag to name your cluster, and fill in the credentials, SSH, and Docker EE installation URL variables with the information that you previously retrieved.
```bash
$ bx d4ic create --swarm-name my_swarm \
--sl-user user.name.1234567 \
--sl-api-key api_key \
--ssh-label my_ssh_label \
--ssh-key filepath_to_my_ssh_key \
--docker-ee-url my_docker-ee-url
```
> Customize your cluster
>
> You can customize your cluster by
> [using other flags and options](cli-ref.md#bx-d4ic-create) with
> `bx d4ic help create`, but this example uses a basic swarm.
4. Note the cluster **Name**, **ID**, and **UCP Password**.
Congrats! Your Docker EE for IBM Cloud cluster is provisioning. First, the manager node is deployed. Then, the rest of the infrastructure resources are deployed, including the worker nodes, DTR nodes, load balancers, subnet, and NFS volume.
* To check manager node status: `docker logs cluster-name_ID`.
* To check infrastructure resources: `bx d4ic show --swarm-name cluster-name --sl-user user.name.1234567 --sl-api-key api_key`.
## Step 4: Use UCP
Check it out: Docker for IBM Cloud uses [Docker Universal Control Plane (UCP)](/datacenter/ucp/2.2/guides/) to help you manage your cluster through a simple web UI!
### Step 4a: Access UCP
Before you begin, get your cluster **Name**, **ID**, and **UCP Password** that you previously noted.
1. Retrieve your cluster's **UCP URL**:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
2. In your browser, navigate to the **UCP URL**.
3. Log in to UCP. Your credentials are `admin` and the UCP password from the `bx d4ic create` command output, or the credentials that your admin created for you.
We're almost done! We just need to download the UCP certificate bundle so that you can create and deploy services from your local Docker client to the cluster.
### Step 4b: Download client certificates
1. From the UCP GUI under your user name (for example, **admin**), click **My Profile**.
2. Click **Client Bundles** > **New Client Bundle**. A zip file is generated.
3. In the GUI, you see a label and public key. To edit the label, click the pencil icon and give it a name, such as _d4ic-ucp_.
4. In a terminal, navigate and unzip the client bundle:
```bash
$ cd Downloads && unzip ucp-bundle-admin.zip
```
> Keep your client bundle handy
>
> Move the certificate environment variable directory to a safe and
> accessible location on your machine. It gets used a lot.
5. From the client bundle directory, update your `DOCKER_HOST` and `DOCKER_CERT_PATH` environment variables by loading the `env.sh` script contents into your environment:
```bash
$ source env.sh
```
> Set your environment to use Docker EE for IBM Cloud
>
> Repeat this to set your environment variables each time you enter a new terminal session, or after you unset your variables, to connect to the Docker EE for IBM Cloud swarm.
That's it! Your Docker EE for IBM Cloud cluster is provisioned, connected to UCP, and ready to go.
What's next, you ask? Why not try to:
* [Learn when to use UCP and the CLIs](administering-swarms.md#ucp-and-clis).
* [Deploy an app](deploy.md).
* [Scale your swarm cluster](scaling.md).
* [Set up DTR to use IBM Cloud Object Storage](dtr-ibm-cos.md).

View File

@ -1,28 +0,0 @@
---
description: Set up a registry to use images in your clusters
keywords: ibm, ibm cloud, registry, dtr, iaas, tutorial
title: Registry overview
---
Learn about image registries, installing the registry CLI and setting up a namespace, logging in with private registry credentials, and creating a swarm service with a registry image.
## Image registries
Images are typically stored in a registry that can either be accessible by the public (public registry) or set up with limited access for a small group of users (private registry). Public registries, such as Docker Hub, can be used to get started with Docker and to create your first containerized app in a swarm.
When it comes to enterprise applications, use a private registry, like Docker Trusted Registry or IBM Cloud Container Registry, to protect your images from being used and changed by unauthorized users. Private registries must be set up by the registry admin to ensure that the credentials to access the private registry are available to the swarm users.
To deploy a container image in Docker swarm mode, you create a service that uses the image.
## Docker Trusted Registry
[Docker Trusted Registry (DTR)](/datacenter/dtr/2.4/guides/) is a private registry that runs on a Docker EE cluster. Once deployed, you can use the DTR GUI or Docker CLI to manage your Docker images. Set up DTR to save images on external storage. Use DTR when you need a secure image registry that's integrated with Docker EE UCP.
DTR uses IBM Cloud Object Storage to securely store your images externally. By default, an IBM Cloud Object Storage Swift account and `dtr-container` are created during [cluster provisioning](administering-swarms.md#create-swarms). If you prevented the `dtr-container` from being created by using the `--disable-dtr-storage` parameter, then [configure DTR to use IBM Cloud Object Storage S3](dtr-ibm-cos.md).
## IBM Cloud Container Registry
[IBM Cloud Container Registry](https://console.bluemix.net/docs/services/Registry/registry_overview.html#registry_overview) is a global, multi-tenant private image registry that you can use to safely store and share Docker images with other users in your IBM Cloud account.
Images in the registry are automatically scanned by Vulnerability Adviser so that you build and scale your containers using secure images. Take control of your image usage and billing by setting quota limits to manage storage and pull traffic. Use IBM Cloud Container Registry when you need to share images across clusters and with users who do not have access to your swarm.
Learn how to [use IBM Cloud Container Registry with Docker EE for IBM Cloud](ibm-registry.md).

View File

@ -1,52 +0,0 @@
---
description: Release notes for Docker EE for IBM Cloud. Learn more about the changes introduced in the latest versions.
keywords: ibm cloud, ibm, iaas, release notes
title: Docker EE for IBM Cloud (beta) release notes
---
Here you can learn about new features, bug fixes, breaking changes, and known issues for the latest Docker Enterprise Edition (EE) for IBM Cloud (beta) version.
## Version 1.0.2 (closed beta)
(26 January 2018)
Start using version 1.0.2 of Docker EE for IBM Cloud today!
1. Update the CLI plug-in:
```bash
$ bx plugin update docker-for-ibm-cloud -r Bluemix
```
2. [Deploy a new cluster](administering-swarms.md#create-swarms).
The second release of the closed beta includes the following enhancements:
* [IBM Cloud Security Groups](https://console.bluemix.net/docs/infrastructure/security-groups/sg_overview.html#about-security-groups) securely control access to cluster traffic.
* Users get their UCP password in the output of the [cluster create process](administering-swarms.md#create-swarms).
* By default, IBM Cloud Swift API Object Storage is used for the DTR container instead of a local storage volume to improve high availability.
* The [previous known issue](#service-load-balancer-has-the-configuration-of-an-older-service) about the service load balancer having the configuration of an older service is resolved.
## Version 1.0.1 (closed beta)
(20 December 2017)
Docker Enterprise Edition for IBM Cloud is a Container-as-a-Service platform that helps you modernize and extend your applications. It provides an unmanaged, native Docker environment running within IBM Cloud, giving you the ability to enhance your apps with services from the IBM Cloud catalog such as Watson, Internet of Things, Data, Logging, Monitoring, and many more.
The beta is based on the latest Docker EE version 17.06. You receive a 90-day Docker EE license for 20 Linux x86-64 nodes that you use when creating a cluster with the IBM Cloud `bx d4ic` CLI plug-in.
[Sign up for the closed beta](mailto:sealbou@us.ibm.com). Then use the [quick start](quickstart.md) to spin up your first Docker EE for IBM Cloud swarm.
### Known issues
#### Service load balancer has the configuration of an older service
**What it is**: You update (`docker service update`) or create a new service after removing an old one (`docker service rm` then immediately `docker service create`) with a change to [the certificate or health check path](load-balancer.md#labels-for-ssl-termination-and-health-check-paths) values. The service load balancer still has the configuration of the older service.
**Why it's happening**: The InfraKit ingress controller does not check if the certificate or health check path has changed between the service load balancer configuration and the Docker swarm listener configuration. This results in the old configuration not getting updated to the new values.
**What to do about it**: Avoid updating a service with the `docker service update` command.
First remove the old service with `docker service rm`, and then wait for a time so that the service load balancer configuration updates. Then, create the new service with `docker service create`.
If you must use `docker service update`, remove the configuration for the port of the service load balancer in the [IBM Cloud infrastructure web UI](https://control.softlayer.com/). The InfraKit ingress controller then recreates the configuration with the new values.

View File

@ -1,51 +0,0 @@
---
description: Scale your swarm stack
keywords: ibm cloud, ibm, iaas, tutorial
title: Modify your Docker EE for IBM Cloud swarm infrastructure
---
## Scale workers
Before you begin:
* [Create a cluster](administering-swarms.md#create-swarms).
* Retrieve your IBM Cloud infrastructure [API username and key](https://knowledgelayer.softlayer.com/procedure/retrieve-your-api-key).
Steps:
1. Connect to your Docker EE for IBM Cloud swarm. Navigate to the directory where you [downloaded the UCP credentials](administering-swarms.md#download-client-certificates) and run the script. For example:
```bash
$ cd filepath/to/certificate/repo && source env.sh
```
2. Note the name of the cluster that you want to scale:
```bash
$ bx d4ic list --sl-user user.name.1234567 --sl-api-key api_key
```
3. Note the manager leader node:
```bash
$ docker node ls
```
4. Get the public IP address of the leader node, replacing _my_swarm_ with the swarm you want to scale:
```bash
$ bx d4ic show --swarm-name my_swarm --sl-user user.name.1234567 --sl-api-key api_key
```
5. Connect to the leader node using the _leaderIP_ you previously retrieved:
```bash
$ ssh docker@leaderIP -p 56422
```
6. Use InfraKit to modify the number of swarm mode cluster resources. For example, the following commands set the target number of worker nodes in the cluster to 8. You can use the same commands to reduce the number of worker node instances.
```bash
$ /var/ibm/d4ic/infrakit.sh local stack/vars change -c cluster_swarm_worker_size=8
$ /var/ibm/d4ic/infrakit.sh local stack/groups commit-group file:////infrakit_files/defn-wkr-group.json
```
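After the commit, InfraKit adjusts the worker group toward the new target size. You can verify the result from the manager node, for example:

```bash
$ docker node ls
```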

View File

@ -1,30 +0,0 @@
#################################################
# Watson Conversation Demo (Containerized)
#################################################
# Example Dockerfile for an IBM Watson Conversation service deployed to a Docker for IBM Cloud swarm.
# Standard Ubuntu Container
FROM ubuntu:16.04
# Updates and Package Installs
RUN apt-get update
RUN apt-get install --no-install-recommends -y git vim net-tools ca-certificates curl bzip2 jq
# Install Node
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install --no-install-recommends -y nodejs
# Install and configure
RUN git clone https://github.com/watson-developer-cloud/conversation-simple.git
RUN cd conversation-simple && npm install
RUN echo "export CONVERSATION_USERNAME=\$(cat /run/secrets/watson_conversation | jq \".username\" | sed 's/\"//g')" > /conversation-simple/run.sh && \
echo "export CONVERSATION_PASSWORD=\$(cat /run/secrets/watson_conversation | jq \".password\" | sed 's/\"//g')" >> /conversation-simple/run.sh && \
echo "npm start" >> /conversation-simple/run.sh && \
chmod +x /conversation-simple/run.sh
# Runtime
EXPOSE 3000
WORKDIR /conversation-simple
CMD /conversation-simple/run.sh
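# Build and push sketch (hypothetical): assuming the IBM Cloud Container Registry
# namespace "mynamespace" used in the example compose file, the image might be
# built and pushed with:
#   docker build -t registry.ng.bluemix.net/mynamespace/watson_conversation:latest .
#   bx cr login
#   docker push registry.ng.bluemix.net/mynamespace/watson_conversation:latest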

View File

@ -1,28 +0,0 @@
# Example YAML file for an IBM Watson Conversation service deployed to a Docker for IBM Cloud swarm.
version: "3.4"
services:
# The name of your service, in this case "chat"
chat:
# This example uses IBM Cloud Container Registry, with the namespace and image:tag matching the ones used when building the Docker image.
image: registry.ng.bluemix.net/mynamespace/watson_conversation:latest
ports:
- 3000:3000
deploy:
replicas: 2
update_config:
parallelism: 1
restart_policy:
condition: on-failure
# Not all IBM Cloud services use workspace IDs.
# To retrieve the Watson Conversation service workspace ID, take the following steps:
# Log in to IBM Cloud, select the Watson Conversation service, click "Launch tool"
# Click the "Actions" expand button in the corner of your service tile, click "View Details", click the "Workspace ID" copy icon
environment:
- WORKSPACE_ID=11aa1111-a1aa-1111-aa11-11aa1a1a11aa
# secret name matches the `--service-key <secret-name>` flag of the `bx d4ic key-create` command.
secrets:
- watson_conversation
secrets:
# secret name matches the `--service-key <secret-name>` flag of the `bx d4ic key-create` command.
watson_conversation:
external: true
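# Deployment sketch (hypothetical): assuming this file is saved as docker-compose.yml
# and the watson_conversation secret already exists in the swarm, the stack could be
# deployed from a UCP client bundle session with:
#   docker stack deploy --with-registry-auth -c docker-compose.yml chat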

View File

@ -1,38 +0,0 @@
---
description: Why Docker EE for IBM Cloud?
keywords: ibm cloud, ibm, iaas, why
title: Why Docker EE for IBM Cloud?
---
Docker Enterprise Edition for IBM Cloud (Beta) was created and is being actively developed to ensure that Docker users can enjoy a fantastic out-of-the-box experience on IBM Cloud for enterprise-grade workloads. It is now available as a beta.
As an informed user, you might be curious to know what this project offers you for running your development, staging, or production workloads.
## Native to Docker
Docker EE for IBM Cloud provides a Docker-native solution that you can use to avoid operational complexity and unneeded additional APIs layered on top of the Docker stack.
Docker EE for IBM Cloud allows you to interact with Docker directly (including native Docker orchestration), instead of distracting you with the need to navigate extra layers on top of Docker. You can focus instead on the thing that matters most: running your workloads. This helps you and your team to deliver more value to the business faster, to speak one common "language", and to have fewer details to keep in your head at once.
The skills that you and your team have already learned, and continue to learn, using Docker on the desktop or elsewhere automatically carry over to using Docker EE for IBM Cloud. The added consistency across clouds also helps to ensure that a migration or multi-cloud strategy is easier to accomplish in the future, if desired.
## Skip the boilerplate and maintenance work
Docker EE for IBM Cloud bootstraps all of the recommended infrastructure to start using Docker on IBM Cloud automatically. You don't need to worry about rolling your own instances, security groups, or load balancers when using Docker EE for IBM Cloud.
Likewise, setting up and using Docker swarm mode functionality for container orchestration is managed across the cluster's lifecycle when you use Docker EE for IBM Cloud. Docker has already coordinated the various bits of automation you would otherwise need to glue together on your own to bootstrap Docker swarm mode on the platforms. When the cluster is finished booting, you can jump right in and start running `docker service` commands to schedule tasks for your worker nodes.
## Self-cleaning and self-healing
Even the most conscientious admin can be caught off guard by issues such as excessive logging or the Linux kernel unexpectedly ending memory-hungry processes. In Docker EE for IBM Cloud, your cluster is resilient to a variety of such issues by default.
You can enable or disable logging for swarms, so chatty logs don't use up all of your disk space. Likewise, the "system prune" option allows you to ensure unused Docker resources such as old images are cleaned up automatically.
The lifecycle of nodes is managed using InfraKit, so that if a node enters an unhealthy state for unforeseen reasons, the node is removed from service and replaced automatically. Container tasks that were running on the unhealthy node are rescheduled.
You can breathe easier as these self-cleaning and self-healing properties reduce the risk of downtime.
## Logging native to the platforms
Centralized logging is a critical component of many modern infrastructure stacks. To have these logs indexed and searchable proves invaluable for debugging application and system issues as they come up. With Docker EE for IBM Cloud, you can enable seamless logging to your IBM Cloud account.
## Try it today
Ready to get started? [Try Docker for IBM Cloud today](https://www.ibm.com/us-en/marketplace/docker-for-ibm-cloud).
We'd be happy to hear your feedback via e-mail at docker-for-ibmcloud-beta@docker.com.