MoveIt Pro ML on Jetson Devices
The following Dockerfile changes will not work unless you have set up GPU Acceleration and GPU Inference.
The moveit_pro_ml package enables the use of machine learning models in MoveIt Pro Behaviors. These ML Behaviors run in a reasonable amount of time on higher-end CPUs, but will not be performant on Jetson devices unless the GPU is utilized.
Enabling a Jetson GPU with MoveIt Pro
For NVIDIA Tegra release 36 (JetPack 6; run cat /etc/nv_tegra_release to see your release), the following dependencies are tested to work. For other releases of Tegra, you may have to find which versions of cudnn, cudnn-dev, onnxruntime, and l4t you need, and modify the URLs below accordingly. See this ONNX Runtime page for information on the different versions of JetPack.
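For example (the fields after the release number vary by device and JetPack version):

cat /etc/nv_tegra_release
# The first line starts with something like "# R36 (release), REVISION: ...";
# the R36 is the Tegra release number referenced above.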
Add the following lines to your user_ws Dockerfile in the base-gpu stage (you may have to first apt install wget):
RUN wget -O onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl https://nvidia.box.com/shared/static/6l0u97rj80ifwkk8rqbzj1try89fk26z.whl -q --show-progress --progress=dot:giga && \
    pip install onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl && \
    rm onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl
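If you want to catch a broken wheel install early, you can optionally add an import check right after the lines above (a sketch; it assumes python3 is already available in the base-gpu stage):

# Optional: fail the build early if onnxruntime did not install cleanly
RUN python3 -c "import onnxruntime; print('onnxruntime', onnxruntime.__version__)"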
Additionally, you must add the NVIDIA L4T apt sources file. On a standard Jetson install this file is located at /etc/apt/sources.list.d/nvidia-l4t-apt-source.list, but it is also reproduced below:
# SPDX-FileCopyrightText: Copyright (c) 2019-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
deb [trusted=yes] https://repo.download.nvidia.com/jetson/common r36.4 main
deb [trusted=yes] https://repo.download.nvidia.com/jetson/t234 r36.4 main
deb [trusted=yes] https://repo.download.nvidia.com/jetson/ffmpeg r36.4 main
Also in the base-gpu stage, copy this file into your Docker container and install the required NVIDIA dependencies:
# jetson repo source (COPY resolves the source path relative to your Docker
# build context, so place the file in your workspace first)
COPY nvidia-l4t-apt-source.list /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
# apt update to add jetson repos, then install NVIDIA dependencies
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -q -y --no-install-recommends \
    libcudnn9-cuda-12 \
    libcudnn9-dev-cuda-12 \
    libcublas-12-6 \
    cuda-cudart-12-6 \
    libcurand-12-6 \
    libcufft-12-6 \
    libnvinfer10 \
    libnvinfer-plugin10 \
    libnvonnxparsers10 \
    libtree
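As an optional sanity check after the install, you can confirm the dynamic linker can resolve the new libraries (a sketch; grep exits nonzero, and therefore fails the build, if none are found):

# Optional: verify the CUDA/cuDNN/TensorRT libraries are visible to the loader
RUN ldconfig -p | grep -E 'libcudnn|libnvinfer|libcublas'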
Lastly, ensure the user inside the Docker container is added to the video and dialout groups. Add this right after the creation of the user (the lines with useradd and groupadd) in the base-gpu stage.
# Add user to dialout group to enable communication with serial USB devices (gripper, FTS, ...)
# Add user to video group to enable communication with cameras
RUN usermod -aG dialout,video ${USERNAME}
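You can verify the membership from a shell inside the built container (group changes apply to new sessions):

id -nG   # should list "dialout" and "video" among the user's groups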
Then rebuild MoveIt Pro with moveit_pro build, and your Jetson GPU should be fully enabled for GPU Inference.
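To confirm GPU inference is actually available at runtime, you can query ONNX Runtime for its execution providers from a shell inside the rebuilt container:

python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
# Expect 'CUDAExecutionProvider' (and possibly 'TensorrtExecutionProvider')
# in addition to 'CPUExecutionProvider'.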