library/ros
Maintained by:
the Open Source Robotics Foundation
Where to get help:
the Docker Community Slack, Server Fault, Unix & Linux, or Stack Overflow
Dockerfile links:
humble-ros-core, humble-ros-core-jammy
humble-ros-base, humble-ros-base-jammy, humble
humble-perception, humble-perception-jammy
jazzy-ros-core, jazzy-ros-core-noble
jazzy-ros-base, jazzy-ros-base-noble, jazzy, latest
jazzy-perception, jazzy-perception-noble
kilted-ros-core, kilted-ros-core-noble
kilted-ros-base, kilted-ros-base-noble, kilted
kilted-perception, kilted-perception-noble
rolling-ros-core, rolling-ros-core-noble
rolling-ros-base, rolling-ros-base-noble, rolling
rolling-perception, rolling-perception-noble
Where to file issues:
[***]
Supported architectures: (more info)
amd64, arm64v8
Published image artifact details:
repo-info repo's repos/ros/ directory (history)
(image metadata, transfer size, etc)
Image updates:
official-images repo's library/ros label
official-images repo's library/ros file (history)
Source of this description:
docs repo's ros/ directory (history)
The Robot Operating System (ROS) is a set of software libraries and tools that help you build robot applications. From drivers to state-of-the-art algorithms, and with powerful developer tools, ROS has what you need for your next robotics project. And it's all open source.
***.org/wiki/Robot_Operating_System
```dockerfile
  for type in "${!types[@]}"; do
    rosdep install -y \
      --from-paths \
        ros2/demos/demo_nodes_cpp \
        ros2/demos/demo_nodes_py \
      --ignore-src \
      --reinstall \
      --simulate \
      ${types[$type]} \
      | grep 'apt-get install' \
      | awk '{gsub(/'\''/,"",$4); print $4}' \
      | sort -u > /tmp/${type}_debs.txt
  done
EOF

# multi-stage for building
FROM $FROM_IMAGE AS builder
ARG OVERLAY_WS

# install build dependencies
COPY --from=cacher /tmp/build_debs.txt /tmp/build_debs.txt
RUN --mount=type=cache,target=/etc/apt/apt.conf.d,from=cacher,source=/etc/apt/apt.conf.d \
    --mount=type=cache,target=/var/lib/apt/lists,from=cacher,source=/var/lib/apt/lists \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    < /tmp/build_debs.txt xargs apt-get install -y

# build overlay source
WORKDIR $OVERLAY_WS
COPY --from=cacher $OVERLAY_WS/src ./src
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
      --packages-select \
        demo_nodes_cpp \
        demo_nodes_py \
      --mixin release

# multi-stage for running
FROM $FROM_IMAGE-ros-core AS runner
ARG OVERLAY_WS

# install exec dependencies
COPY --from=cacher /tmp/exec_debs.txt /tmp/exec_debs.txt
RUN --mount=type=cache,target=/etc/apt/apt.conf.d,from=cacher,source=/etc/apt/apt.conf.d \
    --mount=type=cache,target=/var/lib/apt/lists,from=cacher,source=/var/lib/apt/lists \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    < /tmp/exec_debs.txt xargs apt-get install -y

# setup overlay install
ENV OVERLAY_WS=$OVERLAY_WS
COPY --from=builder $OVERLAY_WS/install $OVERLAY_WS/install
RUN sed --in-place --expression \
      '$isource "$OVERLAY_WS/install/setup.bash"' \
      /ros_entrypoint.sh

# run launch file
CMD ["ros2", "launch", "demo_nodes_cpp", "talker_listener_launch.py"]
```
The example above consists of three sequential stages. The cacher stage first updates the apt lists and ROS index, uses vcstool to clone a demo repo into the workspace source directory, and derives build and runtime dependency sets using rosdep. The builder stage installs the derived build dependencies, sources the ROS install underlay, and compiles the source in release mode using colcon. Finally, the runner stage installs only runtime dependencies, copies the compiled workspace artifacts, and sets up the environment to launch the demo. Note that the example includes several subtle optimizations:
- `--mount` to cache temp data without bloating layers
- `ros-core` for the smallest runtime image

For comparison, the resulting runner image is similar in size to the earlier aptgetter example. This allows you to develop and distribute custom ROS packages without significantly increasing image size compared to pre-built Debian installations:
```console
$ docker image ls my/ros --format "table {{.Tag}}\t{{.Size}}"
TAG         SIZE
aptgetter   504MB
runner      510MB
builder     941MB

$ docker image ls ros --format "table {{.Tag}}\t{{.Size}}"
TAG                SIZE
rolling-ros-core   489MB
rolling            876MB
```
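The dependency-derivation trick in the cacher stage above scrapes package names from simulated rosdep installs. The text-processing step can be exercised on its own; here is a minimal sketch with fabricated `rosdep install --simulate` output (the package names are made up for illustration):

```shell
# Simulated `rosdep install --simulate` output (hypothetical package names);
# the real pipeline feeds rosdep's output through this same grep/awk/sort chain.
printf "%s\n" \
  "  apt-get install -y 'libfoo-dev'" \
  "  apt-get install -y 'python3-bar'" \
  "  apt-get install -y 'libfoo-dev'" \
  | grep 'apt-get install' \
  | awk '{gsub(/'\''/,"",$4); print $4}' \
  | sort -u
# prints each unique package name once: libfoo-dev, python3-bar
```

The `awk` call strips the single quotes from the fourth field (the package name), and `sort -u` deduplicates, yielding one clean package list per dependency type.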
For more advanced examples, such as daisy-chaining multiple overlay workspaces to improve caching of Docker image build layers, using tools such as ccache to accelerate compilation with colcon, or using BuildKit to save build time and bandwidth even when dependencies change, the project Dockerfiles in the Navigation2 repo are excellent resources.
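As a hedged sketch of one such technique (not taken from the Navigation2 Dockerfiles): BuildKit cache mounts can persist a ccache directory across rebuilds, assuming ccache is installed in the base image and the workspace path shown here:

```dockerfile
# Sketch only: reuse compiler output across image rebuilds via a BuildKit cache mount.
# Assumes ccache is installed and the overlay workspace lives at /opt/overlay_ws.
WORKDIR /opt/overlay_ws
RUN --mount=type=cache,target=/root/.ccache \
    . /opt/ros/$ROS_DISTRO/setup.sh && \
    colcon build \
      --cmake-args \
        -DCMAKE_C_COMPILER_LAUNCHER=ccache \
        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
```

The cache mount survives between builds even when earlier layers are invalidated, so unchanged translation units compile near-instantly on the next build.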
This dockerized image of ROS is intended to provide a simplified and consistent platform to build and deploy distributed robotic applications. Built from the official Ubuntu image and ROS's official Debian packages, it includes recent supported releases for quick access and download. This provides roboticists in research and industry with an easy way to develop, reuse and ship software for autonomous actions and task planning, control dynamics, localization and mapping, swarm behavior, as well as general system integration.
Developing such complex systems with cutting-edge implementations of newly published algorithms remains challenging, as repeatability and reproducibility of robotic software can fall by the wayside in the race to innovate. With the added difficulty of coding, tuning, and deploying multiple software components that span many engineering disciplines, a more collaborative approach becomes attractive. However, the technical difficulties of sharing and maintaining a collection of software across multiple robots and platforms have long demanded more time and effort than many smaller labs and businesses could afford.
With the advancements and standardization of software containers, roboticists are primed to acquire a host of improved developer tooling for building and shipping software. To help alleviate the growing pains and technical challenges of adopting new practices, we have focused on providing an official resource for using ROS with these new technologies.
For a complete listing of supported architectures and base images for each ROS Distribution Release, please read the official REP on target platforms here.
The available tags include supported distros along with a hierarchy of tags based on the most common meta-package dependencies, designed to have a small footprint and simple configuration:
- ros-core: minimal ROS install
- ros-base: basic tools and libraries (also tagged with the distro name, with the LTS version as latest)

In the interest of keeping the ros-core tag minimal in image size, developer tools such as rosdep, colcon and vcstool are not shipped in ros-core, but in ros-base instead.
The rest of the common meta-packages, such as desktop, are hosted on repos under OSRF's Docker Hub profile here. These meta-packages include graphical dependencies and pull in a host of other large packages, such as X11 and the X server. So, in the interest of keeping the official images lean and secure, the desktop packages are hosted only under OSRF's profile.
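For instance, a desktop variant can be pulled from the OSRF profile rather than the official library (the tag below is an example; check the OSRF profile for the tags currently published):

```console
$ docker pull osrf/ros:jazzy-desktop
```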
ROS uses the ~/.ros/ directory for storing logs and debugging info. If you wish to persist these files beyond the lifecycle of the containers that produced them, the ~/.ros/ folder can be mounted to an external volume on the host, or a derived image can specify volumes to be managed by the Docker engine. By default, the container runs as the root user, so /root/.ros/ would be the full path to these files.
For example, if you wish to use your own .ros folder that already resides in your local home directory, for a username of ubuntu, simply launch the container with an additional volume argument:
```console
$ docker run -v "/home/ubuntu/.ros/:/root/.ros/" ros
```
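Alternatively, a minimal sketch of the derived-image approach, letting the Docker engine manage an anonymous volume for the logs (the base tag here is just an example):

```dockerfile
FROM ros:jazzy
# Ask the Docker engine to manage a volume for ROS logs and debug info
VOLUME /root/.ros
```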
Some applications may require device access for acquiring images from connected cameras, reading control input from human interface devices, or using GPUs for hardware acceleration. This can be done using the --device run argument to mount the device inside the container, giving processes inside hardware access.
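For example, to expose a camera device to the container (the device path is an assumption; yours may differ):

```console
$ docker run --device=/dev/video0 ros
```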
ROS allows for peer-to-peer networking of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure. ROS implements several different styles of communication, including synchronous RPC-style communication over services, asynchronous streaming of typed data over topics, combinations of the prior two via request/reply and status/feedback over actions, and run-time settings via configuration over parameters. To abide by the best practice of one process per container, Docker networks can be used to string together several running ROS processes. For further details, see the Deployment example further below.
Alternatively, more permissive network settings can be used to share all host network interfaces with the container, such as the host network driver, simplifying connectivity with external network participants. Be aware, however, that this removes the networking namespace separation between containers, and can affect the ability of DDS participants to communicate between containers, as documented here.
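For example, host networking can be combined with a DDS domain ID to partition discovery between groups of participants (the domain number below is arbitrary):

```console
$ docker run --network host -e ROS_DOMAIN_ID=5 ros ros2 topic list
```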
In this example we'll demonstrate using docker compose to spawn a pair of message publisher and subscriber nodes in separate containers, connected through a shared software-defined network.
Create the directory ~/ros_demos and add the first Dockerfile example from above. In the same directory, also create the file compose.yaml with the following, which runs a C++ publisher with a Python subscriber:
```yaml
services:
  talker:
    build: ./
    command: ros2 run demo_nodes_cpp talker
  listener:
    build: ./
    environment:
      - "PYTHONUNBUFFERED=1"
    command: ros2 run demo_nodes_py listener
```
Use docker compose inside the same directory to launch our ROS nodes. Because the containers are created from the same Docker Compose project, they will coexist on the shared project network:
```console
$ docker compose up -d
```
Notice that a new network named ros_demos_default has been created, as can be shown further with:
```console
$ docker network inspect ros_demos_default
```
We can monitor the logged output of each container, such as that of the listener node, like so:
```console
$ docker compose logs listener
```
Finally, we can stop and remove all the relevant containers using docker compose from the same directory:
```console
$ docker compose stop
$ docker compose rm
```
Note: the auto-generated network, ros_demos_default, will persist until you explicitly remove it using docker compose down.
Docs: ROS Developer Documentation
Q&A: Ask questions. Get answers
Forums: Hear the latest discussions
Packages: Discover indexed packages
OSRF: Open Source Robotics Foundation
View package index for license information on software contained in this image.
As with all Docker images, these likely also contain other software which may be under other licenses (such as Bash, etc from the base distribution, along with any direct or indirect dependencies of the primary software being contained).
Some additional license information which was able to be auto-detected might be found in the repo-info repository's ros/ directory.
As for any pre-built image usage, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses for all software contained within.