
Configure NVIDIA Jetson

note

PickNik currently supports running MoveIt Pro on the NVIDIA AGX Orin with JetPack 6.2 or higher. Support for other NVIDIA Jetson platforms is provided on a best-effort basis.

Configuring JetPack for GPU Acceleration

info

The following instructions assume that you already have Docker installed and that your system has internet access.

GPU acceleration for MoveIt Pro requires installing the nvidia-container-toolkit. On NVIDIA Jetson platforms with JetPack, this package is provided in an APT repository which is already configured on the system. To install and configure the package, open a terminal and run:

sudo apt update && sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

To verify that the toolkit is installed properly, run nvidia-smi directly on the AGX Orin (the host), which should produce output similar to:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0                Driver Version: 540.4.0      CUDA Version: 12.6     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf           Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                   N/A | N/A            N/A   |                  N/A |
| N/A  N/A   N/A               N/A /  N/A |        Not Supported |      N/A         N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

Then, run the following example container:

docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

The output should match the output of running nvidia-smi directly on the host.
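
If you do not want to pass --runtime=nvidia on every invocation, the NVIDIA Container Toolkit can also register its runtime as the Docker default. This is optional; a minimal sketch using the same nvidia-ctk tool installed above:

sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
sudo systemctl restart docker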

Configuring MoveIt Pro for GPU Inference

note

On MoveIt Pro 8.8 and newer, the following instructions are not required: moveit_pro run automatically selects a JetPack-specific Docker image based on your environment.

The moveit_pro_ml package enables the use of machine learning models in MoveIt Pro Behaviors. These ML Behaviors can run on higher-end CPUs in a reasonable amount of time, but they will not be performant on Jetson devices unless the GPU is utilized.

Enabling a Jetson GPU with MoveIt Pro

For NVIDIA Linux4Tegra (L4T) release 36 (JetPack 6.x; run cat /etc/nv_tegra_release to see your release), the following dependencies are tested to work. For other releases of Linux4Tegra, you may have to find which versions of cudnn, cudnn-dev, and onnxruntime you need and modify the URLs below accordingly. See this ONNX Runtime page for information on the different versions of JetPack.
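
For example, to check your release on the host (the output below is illustrative; your revision may differ):

cat /etc/nv_tegra_release
# R36 (release), REVISION: 4.3, ...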

Add the following lines to your user_ws Dockerfile in the base-gpu stage (you may first have to apt install wget):

RUN wget -O onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl https://nvidia.box.com/shared/static/6l0u97rj80ifwkk8rqbzj1try89fk26z.whl -q --show-progress --progress=dot:giga && \
    pip install onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl && \
    rm onnxruntime_gpu-1.19.0-cp310-cp310-linux_aarch64.whl

ADD https://repo.download.nvidia.com/jetson/nvidia-repo-pin-600 /etc/apt/preferences.d/nvidia-repo-pin-600
RUN curl -fsSL --ssl-reqd https://repo.download.nvidia.com/jetson/jetson-ota-public.asc | gpg --dearmor -o /etc/apt/keyrings/jetson-ota-public && \
    echo "deb [signed-by=/etc/apt/keyrings/jetson-ota-public] https://repo.download.nvidia.com/jetson/common r36.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install --no-install-recommends -q -y \
        cuda-cudart-12-6 \
        libcublas-12-6 \
        libcudnn9-cuda-12 \
        libcudnn9-dev-cuda-12 \
        libcufft-12-6 \
        libcurand-12-6 \
        libnvinfer-plugin10 \
        libnvinfer10 \
        libnvonnxparsers10 \
        libtree && \
    ldconfig

ENV CUDA_HOME="/usr/local/cuda"
ENV PATH="/usr/local/cuda/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"

RUN echo "/usr/lib/aarch64-linux-gnu/tegra" >> /etc/ld.so.conf.d/nvidia-tegra.conf && \
    echo "/usr/lib/aarch64-linux-gnu/tegra-egl" >> /etc/ld.so.conf.d/nvidia-tegra.conf

RUN rm /usr/share/glvnd/egl_vendor.d/50_mesa.json
RUN mkdir -p /usr/share/glvnd/egl_vendor.d/ && echo '\
{\
    "file_format_version" : "1.0.0",\
    "ICD" : {\
        "library_path" : "libEGL_nvidia.so.0"\
    }\
}' > /usr/share/glvnd/egl_vendor.d/10_nvidia.json

RUN mkdir -p /usr/share/egl/egl_external_platform.d/ && echo '\
{\
    "file_format_version" : "1.0.0",\
    "ICD" : {\
        "library_path" : "libnvidia-egl-wayland.so.1"\
    }\
}' > /usr/share/egl/egl_external_platform.d/nvidia_wayland.json

RUN ldconfig

ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all

Lastly, ensure that the user inside the Docker container is added to the video and dialout groups. Place the following lines right after the creation of the user (the lines with useradd and groupadd) in the base-gpu stage:

# Add user to dialout group to enable communication with serial USB devices (gripper, FTS, ...)
# Add user to video group to enable communication with cameras
RUN usermod -aG dialout,video ${USERNAME}

Then rebuild MoveIt Pro with moveit_pro build; your Jetson GPU should now be fully enabled for GPU inference.
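
To verify that GPU inference is available, you can check which execution providers ONNX Runtime reports from inside the rebuilt container. A minimal sanity check, assuming the onnxruntime_gpu wheel installed above is on the container's Python path:

python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"

A GPU-enabled build should list CUDAExecutionProvider (and, depending on the wheel, TensorrtExecutionProvider) in addition to CPUExecutionProvider.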

NVIDIA AGX Orin Onboard Storage Limitations

The NVIDIA AGX Orin ships with 64GB of eMMC 5.1 onboard storage, which is capable of a theoretical maximum linear transfer speed of 100MB/s. The NVIDIA AGX Orin Developer Kit also provides a PCIe x4 Gen4 M.2 expansion slot capable of transfer speeds many times that of the onboard eMMC. For this reason, PickNik recommends installing a compatible M.2 NVMe storage device and configuring Docker to use it for image storage, following the instructions below.

Choosing a Drive

For the NVIDIA AGX Orin Developer Kit, NVIDIA recommends an NVMe storage device which has the following specifications:

  • M.2 key M
  • 2280 form factor
  • PCIe Gen4 (Gen3 supported with Gen3 speeds / Gen5 supported at Gen4 speeds)
  • At least 1TB capacity

Installing the NVMe Drive

warning

In addition to the instructions below, follow all best-practices for safe electronics handling such as anti-static procedures.

  1. Power off the NVIDIA AGX Orin Developer Kit and disconnect power and data cables.
  2. Flip the Developer Kit upside-down.
  3. Identify connector J1 using the Carrier Board Component Identification Diagram.
  4. Remove the mounting screw in the stand-off directly across from connector J1.
  5. Fully insert the M.2 NVMe drive edge connector into the carrier board connector J1 at a 45 degree angle.
  6. Gently push the drive down toward the carrier board until the semi-circular cutout in the end of the drive aligns with the stand-off from which you removed the screw in step 4.
  7. While holding the drive down, re-install the screw into the stand-off to affix the drive to the carrier board. Take care not to over-tighten this screw.
  8. Flip the Developer Kit right-side-up and reconnect power and data cables.

Formatting and Mounting

  1. Power on the NVIDIA AGX Orin Developer Kit.
  2. Log in using your username and password.
  3. Open a terminal window (keyboard shortcut: CTRL+ALT+T).
  4. In the terminal window type lsblk and hit enter. If your new drive was detected correctly, there should be an entry at the bottom of the output similar to nvme0n1 259:0 0 931.5G 0 disk. If your device is named differently, write down the name and use it in the following steps.
  5. Type sudo fdisk /dev/nvme0n1, hit enter, enter your password, then hit enter again.
    1. Type n and hit enter to create a new partition.
    2. Hit enter to make the new partition a "primary" partition for this drive.
    3. Hit enter to make this the first partition on this drive.
    4. Hit enter to use the default value for the first sector.
    5. Hit enter to use the default value for the last sector.
    6. Type w and hit enter to write the changes to disk. fdisk writes the partition table and exits automatically, so there is no need to quit the tool manually.
  6. Type lsblk and hit enter. Confirm that the drive nvme0n1 now has a new partition below it named similarly to nvme0n1p1.
  7. Type sudo mkfs.ext4 /dev/nvme0n1p1 which will format the new partition as ext4 (common Linux extended filesystem format).
  8. Type sudo mkdir /mnt/nvme which will create a new folder that we will use as the mount point for the new partition.
  9. Type lsblk -f which will print the partition information but also include the UUID field which we will use in the following steps. Highlight the UUID of the new partition with your mouse and copy it to your clipboard (right-click -> Copy).
  10. Edit the /etc/fstab file with a terminal editor like vim or nano. Add a new line to the end of this file with the format UUID=<partition_uuid> /mnt/nvme ext4 defaults 0 2, replacing <partition_uuid> with the UUID that you copied in step 9 (right-click -> Paste). An example entry is shown after this list.
  11. Save and exit the editor.
  12. Type sudo reboot and hit enter to reboot the NVIDIA AGX Orin.
  13. Once the Orin has rebooted and you have logged in again, you can verify that the new partition is mounted using the command ls /mnt/nvme in a terminal to see the contents of the new partition.
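
As an example of the entry added in step 10 (the UUID below is illustrative; use the one from your own lsblk -f output):

UUID=3f2a9c1e-0b7d-4c5a-8e2f-1a6b9d4c7e01 /mnt/nvme ext4 defaults 0 2

To sanity-check the entry before rebooting, findmnt can validate the fstab syntax and mount -a will mount all fstab entries immediately:

sudo findmnt --verify
sudo mount -a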

Migrating Docker Storage Location

note

The following instructions assume that you already have Docker installed and that you carefully followed the instructions above to install and configure an NVMe drive.

  1. Log in using your username and password.
  2. Open a terminal window (keyboard shortcut: CTRL+ALT+T).
  3. Stop the running Docker services by typing sudo systemctl stop docker.socket docker.service containerd, hitting enter, entering your password, and hitting enter again.
  4. Type sudo mkdir /mnt/nvme/docker and hit enter to create a new folder on the NVMe drive which is mounted at the location /mnt/nvme.
  5. Edit or create the file /etc/docker/daemon.json using a terminal editor like vim or nano (a complete example is shown after this list).
    1. If the file is empty, add the following text: { "data-root": "/mnt/nvme/docker" }
    2. If the file already contains some data, insert the text "data-root": "/mnt/nvme/docker", (note the trailing comma) on a new line following the first {.
  6. Save and exit the editor.
  7. Type sudo rsync -avxP /var/lib/docker/ /mnt/nvme/docker/ and hit enter to migrate the existing Docker images to the new storage location. This may take several minutes.
  8. Type sudo systemctl start docker to start the Docker services again.
  9. Verify that the changes have taken effect by typing docker info -f '{{ .DockerRootDir}}' and hitting enter. The output should be the path to the new storage location. You can also type docker image ls to see all images which were migrated.
  10. To reclaim space on the eMMC storage after verifying that the changes were successful, type sudo rm -rf /var/lib/docker/ and hit enter.
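
As an illustration of step 5, a complete /etc/docker/daemon.json that combines the new storage location with the NVIDIA runtime configured earlier might look like the following. Treat this as a sketch and merge it with whatever your file already contains; the runtimes entry is only present if you ran nvidia-ctk runtime configure:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "data-root": "/mnt/nvme/docker"
}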

Other Performance Considerations

NVIDIA Jetson platforms are designed to be configurable for different power and performance needs. By default, the NVIDIA AGX Orin comes configured with "medium" performance and power settings which can be changed with the nvpmodel utility. For more details on this utility, see the NVIDIA Jetson Linux Developer Guide.
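
To see which mode is currently active before changing anything, you can query nvpmodel:

sudo nvpmodel -q
# prints the active mode, e.g. "NV Power Mode: MAXN"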

MAXN Power Mode

If your environment supports higher power draw and heat output from the NVIDIA AGX Orin, you can use the command below to set the power mode to "MAXN" (or "unlimited") for better compute performance:

sudo nvpmodel -m 0

For more information on available modes, power consumption, and clock frequencies, refer to the NVIDIA Jetson Linux Developer Guide.

Jetson Clocks

Even when using "MAXN" power mode, the clock frequencies configured for each chip or accelerator are defined as ranges. When running real-time applications on these platforms, the delays introduced when scaling clock frequencies can cause "hitches" or momentary latencies. To alleviate this, NVIDIA provides another utility called jetson_clocks, which pins each chip's or accelerator's frequency to the maximum of its range. However, the configuration changes that jetson_clocks applies do not persist across reboots. To apply them on each boot, you can use a systemd service. Create the file /etc/systemd/system/jetson-clocks.service using sudo and a terminal editor such as vim or nano, and populate it with the following text:

[Unit]
Description=Jetson Clocks Service
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/jetson_clocks
RemainAfterExit=true

[Install]
WantedBy=multi-user.target

Then, to enable the service to start on boot, run sudo systemctl enable jetson-clocks.service. To start the service immediately, run sudo systemctl start jetson-clocks.service.
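
To confirm that the pinned clock frequencies took effect, jetson_clocks can print the current clock configuration, and systemctl can report the service state:

sudo jetson_clocks --show
systemctl status jetson-clocks.service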