From 0ce917794ed8e76090f665fefb5a97a0d799d63b Mon Sep 17 00:00:00 2001
From: ruffsl
Date: Mon, 20 Jul 2015 18:20:12 -0700
Subject: [PATCH 1/4] Adding Deployment suggestions to ROS docs

---
 ros/content.md | 128 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/ros/content.md b/ros/content.md
index 0c2d1f436..5773b6c2b 100644
--- a/ros/content.md
+++ b/ros/content.md
@@ -19,6 +19,134 @@ You can then build and run the Docker image:
     docker build -t my-ros-app .
     docker run -it --rm --name my-running-app my-ros-app
 
+## Deployment use cases
+
+This dockerized image of ROS is intended to provide a simplified and consistent platform to build and deploy distributed robotic applications. Built from the [official Ubuntu image](https://registry.hub.docker.com/_/ubuntu/) and ROS's official Debian packages, it includes recent supported releases for quick access and download. This provides roboticists in research and industry with an easy way to develop, reuse and ship software for autonomous actions and task planning, control dynamics, localization and mapping, swarm behavior, as well as general system integration.
+
+Developing such complex systems with cutting-edge implementations of newly published algorithms remains challenging, as repeatability and reproducibility of robotic software can fall by the wayside in the race to innovate. With the added difficulty of coding, tuning and deploying multiple software components that span many engineering disciplines, a more collaborative approach becomes attractive. However, the technical difficulty of sharing and maintaining a collection of software across multiple robots and platforms has long exceeded the time and effort that many smaller labs and businesses can afford.
+
+With the advancements and standardization of software containers, roboticists are primed to acquire a host of improved developer tooling for building and shipping software. To help alleviate the growing pains and technical challenges of adopting new practices, we have focused on providing an official resource for using ROS with these new technologies.
+
+## Deployment suggestions
+
+The available tags include supported distros along with a hierarchy of tags based on the most common meta-package dependencies, designed to have a small footprint and simple configuration:
+- `ros-core`: barebones ROS install
+- `ros-base`: basic tools and libraries (also tagged with the distro name, with the LTS version as `latest`)
+- `robot`: basic install for robots
+- `perception`: basic install for perception tasks
+
+The rest of the common meta-packages such as `desktop` and `desktop-full` are hosted on automatic build repos under OSRF's Docker Hub oginsanal profile [here](https://registry.hub.docker.com/u/osrf/ros). These meta-packages include graphical dependencies and pull in a host of other large packages such as X11, X server, etc. So in the interest of keeping the official images lean and secure, the desktop packages are just hosted under OSRF's profile.
+
+### Volumes
+
+ROS uses the `~/.ros/` directory for storing logs and debugging info. If you wish to persist these files beyond the lifecycle of the containers which produced them, the `~/.ros/` folder can be mounted to an external volume on the host, or a derived image can specify volumes to be managed by the Docker engine. By default, the container runs as the `root` user, so `/root/.ros/` would be the full path to these files.
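+
+For the latter case, a derived image could declare the volume itself so that the Docker engine manages the storage. A minimal sketch (the base image tag here is only illustrative):
+
+    FROM ros:indigo-ros-base
+    # persist ROS logs and debugging info beyond the container's lifecycle
+    VOLUME /root/.ros/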
+
+For example, if you wish to use your own `.ros` folder that already resides in your local home directory, under the username `ubuntu`, you can simply launch the container with an additional volume argument:
+
+    docker run -v "/home/ubuntu/.ros/:/root/.ros/" ros
+
+### Devices
+
+Some applications may require device access for acquiring images from connected cameras, control input from human interface devices, or GPUs for hardware acceleration. This can be done using the [`--device`](https://docs.docker.com/reference/run/) run argument to mount the device inside the container, giving processes inside the container hardware access.
+
+### Networks
+
+The ROS runtime "graph" is a peer-to-peer network of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure. ROS implements several different styles of communication, including synchronous RPC-style communication over services, asynchronous streaming of data over topics, and storage of data on a Parameter Server. To abide by the best practice of [one process per container](https://docs.docker.com/articles/dockerfile_best-practices/), Docker networks can be used to string together several running ROS processes. For further details, see the [ROS NetworkSetup](http://wiki.ros.org/ROS/NetworkSetup) wik artical, or the deployment example below.
+
+## Deployment example
+
+If we want all of our ROS nodes to easily talk to each other, we can use a virtual network to connect the separate containers. In this short example, we'll create a virtual network, spin up a new container running `roscore` advertised as the `master` service on the new network, then spawn message publisher and subscriber processes as services on the same network.
+
+### Build image
+
+> Build a ROS image that includes the ROS tutorials using this `Dockerfile`:
+
+    FROM ros:indigo-ros-base
+    # install ros tutorials packages
+    RUN apt-get update && apt-get install -y \
+        ros-indigo-ros-tutorials \
+        ros-indigo-common-tutorials \
+        && rm -rf /var/lib/apt/lists/
+
+> Then to build the image from within the same directory:
+
+    docker build --tag ros:ros-tutorials .
+
+#### Create network
+
+> To create a new network `foo`, we use the network command:
+
+    docker network create foo
+
+> Now that we have a network, we can create services. Services advertise their location on the network, making it easy to resolve the location/address of the service-specific container. We'll use this to make sure our ROS nodes can find and connect to our ROS `master`.
+
+#### Run services
+
+> To create a container for the ROS master and advertise its service:
+
+    docker run -it --rm \
+        --publish-service=master.foo \
+        --name master \
+        ros:ros-tutorials \
+        roscore
+
+> Now you can see that master is running and is ready to manage our other ROS nodes. To add our `talker` node, we'll need to point the relevant environment variable to the master service:
+
+    docker run -it --rm \
+        --publish-service=talker.foo \
+        --env ROS_HOSTNAME=talker \
+        --env ROS_MASTER_URI=http://master:11311 \
+        --name talker \
+        ros:ros-tutorials \
+        rosrun roscpp_tutorials talker
+
+> Then in another terminal, run the `listener` node similarly:
+
+    docker run -it --rm \
+        --publish-service=listener.foo \
+        --env ROS_HOSTNAME=listener \
+        --env ROS_MASTER_URI=http://master:11311 \
+        --name listener \
+        ros:ros-tutorials \
+        rosrun roscpp_tutorials listener
+
+> Alright! You should see `listener` is now echoing each message the `talker` is broadcasting.
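+
+> If you would rather not dedicate a terminal to each node, the same services could instead be started in the background by swapping `-it --rm` for `-d` (detached); a minimal sketch for the `talker`, with every other flag exactly as above:
+
+    docker run -d \
+        --publish-service=talker.foo \
+        --env ROS_HOSTNAME=talker \
+        --env ROS_MASTER_URI=http://master:11311 \
+        --name talker \
+        ros:ros-tutorials \
+        rosrun roscpp_tutorials talker
+
+> Note that without `--rm`, stopped containers are not cleaned up automatically, so the `docker rm` step in the teardown below still applies.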
+
+> You can then list the services and see something like this:
+
+    $ docker service ls
+    SERVICE ID          NAME                NETWORK             CONTAINER
+    67ce73355e67        listener            foo                 a62019123321
+    917ee622d295        master              foo                 f6ab9155fdbe
+    7f5a4748fb8d        talker              foo                 e0da2ee7570a
+
+> And for the containers:
+
+    $ docker ps
+    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
+    a62019123321        ros:ros-tutorials   "/ros_entrypoint.sh      About a minute ago   Up About a minute   11311/tcp           listener
+    e0da2ee7570a        ros:ros-tutorials   "/ros_entrypoint.sh      About a minute ago   Up About a minute   11311/tcp           talker
+    f6ab9155fdbe        ros:ros-tutorials   "/ros_entrypoint.sh      About a minute ago   Up About a minute   11311/tcp           master
+
+#### Introspection
+
+> OK, now that we see the two nodes are communicating, let's get inside one of the containers and do some introspection on what exactly the topics are:
+
+    docker exec -it master bash
+    source /ros_entrypoint.sh
+
+> If we then use `rostopic` to list the published message topics, we should see something like this:
+
+    $ rostopic list
+    /chatter
+    /rosout
+    /rosout_agg
+
+#### Tear down
+
+> To tear down the structure we've made, we just need to stop the containers and the services. We can stop and remove the containers using `Ctrl+C` in the terminals where we launched them, or by using the stop command with the names we gave them:
+
+    docker stop master talker listener
+    docker rm master talker listener
+
 # More Resources
 
 [ROS.org](http://www.ros.org/): Main ROS website

From 5d8cc1d3c852e6210e737787bbc2651035f27526 Mon Sep 17 00:00:00 2001
From: Ruffin
Date: Fri, 31 Jul 2015 19:07:33 -0700
Subject: [PATCH 2/4] Adding note for experimental Docker for future networking features.

---
 ros/content.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ros/content.md b/ros/content.md
index 5773b6c2b..3f50350f5 100644
--- a/ros/content.md
+++ b/ros/content.md
@@ -55,6 +55,7 @@ The ROS runtime "graph" is a peer-to-peer network of processes (potentially dist
 
 ## Deployment example
 
+**NOTE:** This requires the experimental version of Docker for future networking features.
 If we want all of our ROS nodes to easily talk to each other, we can use a virtual network to connect the separate containers. In this short example, we'll create a virtual network, spin up a new container running `roscore` advertised as the `master` service on the new network, then spawn message publisher and subscriber processes as services on the same network.
 
 ### Build image

From 9a94522bea2e9e10eacdf8c34ddbef82fa62393a Mon Sep 17 00:00:00 2001
From: Ruffin
Date: Mon, 3 Aug 2015 14:48:58 -0700
Subject: [PATCH 3/4] Fixing typos

---
 ros/content.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ros/content.md b/ros/content.md
index 3f50350f5..6009b32a6 100644
--- a/ros/content.md
+++ b/ros/content.md
@@ -35,7 +35,7 @@ The available tags include supported distros along with a hierarchy of tags based
 - `robot`: basic install for robots
 - `perception`: basic install for perception tasks
 
-The rest of the common meta-packages such as `desktop` and `desktop-full` are hosted on automatic build repos under OSRF's Docker Hub oginsanal profile [here](https://registry.hub.docker.com/u/osrf/ros). These meta-packages include graphical dependencies and pull in a host of other large packages such as X11, X server, etc. So in the interest of keeping the official images lean and secure, the desktop packages are just hosted under OSRF's profile.
+The rest of the common meta-packages such as `desktop` and `desktop-full` are hosted on automatic build repos under OSRF's Docker Hub profile [here](https://registry.hub.docker.com/u/osrf/ros). These meta-packages include graphical dependencies and pull in a host of other large packages such as X11, X server, etc. So in the interest of keeping the official images lean and secure, the desktop packages are just hosted under OSRF's profile.
 
 ### Volumes
 
@@ -51,7 +51,7 @@ Some applications may require device access for acquiring images from connected
 
 ### Networks
 
-The ROS runtime "graph" is a peer-to-peer network of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure. ROS implements several different styles of communication, including synchronous RPC-style communication over services, asynchronous streaming of data over topics, and storage of data on a Parameter Server. To abide by the best practice of [one process per container](https://docs.docker.com/articles/dockerfile_best-practices/), Docker networks can be used to string together several running ROS processes. For further details, see the [ROS NetworkSetup](http://wiki.ros.org/ROS/NetworkSetup) wik artical, or the deployment example below.
+The ROS runtime "graph" is a peer-to-peer network of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure. ROS implements several different styles of communication, including synchronous RPC-style communication over services, asynchronous streaming of data over topics, and storage of data on a Parameter Server. To abide by the best practice of [one process per container](https://docs.docker.com/articles/dockerfile_best-practices/), Docker networks can be used to string together several running ROS processes. For further details, see the [ROS NetworkSetup](http://wiki.ros.org/ROS/NetworkSetup) wiki article, or the deployment example below.
 
 ## Deployment example
 

From 8b51441072460242d5a46229c91721af7b68475f Mon Sep 17 00:00:00 2001
From: Ruffin
Date: Tue, 11 Aug 2015 16:25:54 -0700
Subject: [PATCH 4/4] Tweaking version note

---
 ros/content.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ros/content.md b/ros/content.md
index 6009b32a6..a4b02ba84 100644
--- a/ros/content.md
+++ b/ros/content.md
@@ -55,7 +55,7 @@ The ROS runtime "graph" is a peer-to-peer network of processes (potentially dist
 
 ## Deployment example
 
-**NOTE:** This requires the experimental version of Docker for future networking features.
+**NOTE:** This requires at least version 1.8 of Docker for networking features.
 If we want all of our ROS nodes to easily talk to each other, we can use a virtual network to connect the separate containers. In this short example, we'll create a virtual network, spin up a new container running `roscore` advertised as the `master` service on the new network, then spawn message publisher and subscriber processes as services on the same network.
 
 ### Build image