Configure NVIDIA Jetson
PickNik currently supports running MoveIt Pro on the NVIDIA AGX Orin with JetPack 6.2 or higher. Support for other NVIDIA Jetson platforms is provided on a best-effort basis.
Configuring JetPack for GPU Acceleration
The following instructions assume that you already have Docker installed and that your system has internet access.
GPU acceleration for MoveIt Pro requires installing the nvidia-container-toolkit. On NVIDIA Jetson platforms with JetPack, this package is provided in an APT repository which is already configured on the system. To install and configure the package, open a terminal and run:
sudo apt update && sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
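After restarting Docker, you can confirm that the nvidia runtime was registered; nvidia-ctk writes it to /etc/docker/daemon.json. A quick check (the exact output varies with your Docker version) is:
docker info | grep -i runtimes
# Expect "nvidia" to appear among the listed runtimes, e.g.:
# Runtimes: io.containerd.runc.v2 nvidia runc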
To verify that the toolkit is installed properly, first run nvidia-smi directly on the AGX Orin (host), which should produce output similar to:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0                Driver Version: 540.4.0      CUDA Version: 12.6     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                   N/A | N/A              N/A |                  N/A |
| N/A  N/A    N/A              N/A /  N/A |        Not Supported |      N/A         N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
Then, run the following example container:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
Observe that the output matches that of running nvidia-smi directly on the host. If Docker instead reports an error such as could not select device driver "" with capabilities: [[gpu]], the NVIDIA runtime is not configured correctly.
Configuring MoveIt Pro for GPU Inference
In MoveIt Pro 8.8 and above, the following instructions are not required: a JetPack-specific Docker image is automatically selected by moveit_pro run based on your environment.
The moveit_pro_ml package enables the use of machine learning models in MoveIt Pro Behaviors. These ML Behaviors run in a reasonable amount of time on higher-end CPUs, but they will not be performant on Jetson devices unless the GPU is utilized.
Enabling a Jetson GPU with MoveIt Pro
For NVIDIA Linux4Tegra (L4T) release 36 (JetPack 6.x; run cat /etc/nv_tegra_release to see your release), the following dependencies are tested to work.
For other releases of Linux4Tegra, you may have to find which versions of cudnn, cudnn-dev, and onnxruntime you need and modify the URLs below accordingly.
See this ONNX Runtime page for information on the different versions of JetPack.
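For reference, checking the release on the host looks like this (the revision and other fields will vary by system; specifics elided here):
cat /etc/nv_tegra_release
# Example output on a JetPack 6.x system:
# # R36 (release), REVISION: 4.0, GCID: ..., BOARD: generic, EABI: aarch64, DATE: ...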
Add the following lines to your user_ws Dockerfile in the base-gpu stage (you may first have to apt install wget):
# Install the GPU-enabled ONNX Runtime wheel for Python 3.10 on L4T r36
RUN wget -O onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl https://nvidia.box.com/shared/static/6l0u97rj80ifwkk8rqbzj1try89fk26z.whl -q --show-progress --progress=dot:giga && \
    pip install onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl && \
    rm onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl
# Pin and enable NVIDIA's Jetson OTA APT repository
ADD https://repo.download.nvidia.com/jetson/nvidia-repo-pin-600 /etc/apt/preferences.d/nvidia-repo-pin-600
RUN curl -fsSL --ssl-reqd https://repo.download.nvidia.com/jetson/jetson-ota-public.asc | gpg --dearmor -o /etc/apt/keyrings/jetson-ota-public && \
    echo "deb [signed-by=/etc/apt/keyrings/jetson-ota-public] https://repo.download.nvidia.com/jetson/common r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
# Install the CUDA, cuDNN, and TensorRT runtime libraries that ONNX Runtime depends on
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install --no-install-recommends -q -y \
        cuda-cudart-12-6 \
        libcublas-12-6 \
        libcudnn9-cuda-12 \
        libcudnn9-dev-cuda-12 \
        libcufft-12-6 \
        libcurand-12-6 \
        libnvinfer-plugin10 \
        libnvinfer10 \
        libnvonnxparsers10 \
        libtree && \
    ldconfig
# Make the CUDA installation discoverable at build time and runtime
ENV CUDA_HOME="/usr/local/cuda"
ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"
RUN echo "/usr/lib/aarch64-linux-gnu/tegra" >> /etc/ld.so.conf.d/nvidia-tegra.conf && \
echo "/usr/lib/aarch64-linux-gnu/tegra-egl" >> /etc/ld.so.conf.d/nvidia-tegra.conf
# Replace the Mesa EGL vendor configuration with NVIDIA's
RUN rm /usr/share/glvnd/egl_vendor.d/50_mesa.json
RUN mkdir -p /usr/share/glvnd/egl_vendor.d/ && echo '\
{\
    "file_format_version" : "1.0.0",\
    "ICD" : {\
        "library_path" : "libEGL_nvidia.so.0"\
    }\
}' > /usr/share/glvnd/egl_vendor.d/10_nvidia.json
# Register NVIDIA's Wayland EGL external platform
RUN mkdir -p /usr/share/egl/egl_external_platform.d/ && echo '\
{\
    "file_format_version" : "1.0.0",\
    "ICD" : {\
        "library_path" : "libnvidia-egl-wayland.so.1"\
    }\
}' > /usr/share/egl/egl_external_platform.d/nvidia_wayland.json
RUN ldconfig
# Expose all GPUs and driver capabilities to the container runtime
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=all
Lastly, ensure the user inside the Docker container is added to the video and dialout groups.
Add the following right after the user is created (the lines with useradd and groupadd) in the base-gpu stage:
# Add user to dialout group to enable communication with serial USB devices (gripper, FTS, ...)
# Add user to video group to enable communication with cameras
RUN usermod -aG dialout,video ${USERNAME}
Then rebuild MoveIt Pro with moveit_pro build; your Jetson GPU should now be fully enabled for GPU inference.
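To spot-check the result, you can run a quick ONNX Runtime probe from a shell inside the rebuilt container (a sketch, assuming python3 and the wheel installed above). The CUDA provider should appear in the list:
python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
# Expect something like: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
# If only 'CPUExecutionProvider' is listed, the GPU libraries are not being found.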
NVIDIA AGX Orin Onboard Storage Limitations
The NVIDIA AGX Orin is provided with 64 GB of eMMC 5.1 onboard storage, which is capable of a theoretical maximum linear transfer speed of 100 MB/s. The NVIDIA AGX Orin Developer Kit also provides a PCIe x4 Gen4 M.2 expansion slot capable of transfer speeds many times that of the onboard eMMC. For this reason, PickNik recommends installing a compatible M.2 NVMe storage device and configuring Docker to use it for image storage using the instructions below.
Choosing a Drive
For the NVIDIA AGX Orin Developer Kit, NVIDIA recommends an NVMe storage device which has the following specifications:
- M.2 key M
- 2280 form factor
- PCIe Gen4 (Gen3 supported at Gen3 speeds; Gen5 supported at Gen4 speeds)
- At least 1TB capacity
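Once a drive is installed, formatted, and mounted, Docker can be pointed at it for image storage. As a minimal sketch (assuming the drive is mounted at /mnt/nvme, a path chosen here for illustration), add a data-root entry to /etc/docker/daemon.json alongside the nvidia runtime entry created earlier, then restart Docker:
# Confirm the NVMe device is visible
lsblk -d -o NAME,SIZE,MODEL
# Create the storage directory and point Docker's data-root at it
sudo mkdir -p /mnt/nvme/docker
# In /etc/docker/daemon.json, add:  "data-root": "/mnt/nvme/docker"
sudo systemctl restart docker
Note that existing images are not migrated automatically; re-pull or rebuild them after making this change.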