Using NVIDIA GPUs with Docker and MoveIt Pro
As of MoveIt Pro 7.4.0, the installer scripts handle most of the following steps automatically, but this guide is useful if you are not using the MoveIt Pro CLI or want more advanced configuration.
MoveIt Pro has two different integrations of GPU support:
- GPU Acceleration, where MoveIt Pro will utilize GPU resources when rendering simulators and cameras.
- GPU Inference, where MoveIt Pro will utilize GPU resources for Machine Learning models.
GPU Acceleration can be used without GPU Inference, but GPU Inference requires GPU Acceleration to be enabled.
The workflow for enabling both is below.
Please read and follow ALL the steps below carefully.
Dependencies
NVIDIA Drivers
Please ensure that you have NVIDIA drivers installed for your system:
nvidia-smi
You should get something similar to the following output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.10    Driver Version: 535.86.10    CUDA Version: 12.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   34C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
If the command isn't found, please follow the official NVIDIA guide for your GPU type.
For real-time NVIDIA driver support, please follow this guide.
NVIDIA Container Toolkit
Next, install the nvidia-container-toolkit by following this guide.
Please ensure that you follow the steps under "Configuring Docker" to restart the Docker daemon.
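After that step, /etc/docker/daemon.json should contain an NVIDIA runtime entry. As a rough reference (your file may contain additional settings beyond what is shown here), it typically looks like this:
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}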
To verify that the toolkit is installed properly, please run the following sample container:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
Observe that the output matches the nvidia-smi output from earlier.
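If the sample container fails to start, one quick diagnostic (a suggested check, not part of the official guide) is to confirm that Docker registered the NVIDIA runtime:
docker info | grep -i runtimes
The output should list nvidia among the available runtimes.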
GPU Acceleration
These steps are required for GPU Inference.
To enable GPU Acceleration, add the following lines to your workspace docker-compose.yaml, an example of which can be found here:
agent_bridge:
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu, compute, utility, graphics, video]
drivers:
  runtime: nvidia
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: all
            capabilities: [gpu, compute, utility, graphics, video]
This allows the containers to access your host's GPU hardware.
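Optionally, the NVIDIA runtime also honors per-service environment variables that control which devices and driver capabilities are exposed. This is not part of the default MoveIt Pro configuration; a minimal sketch for one service:
agent_bridge:
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,video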
To ensure this is working properly, start MoveIt Pro:
moveit_pro run -v
And in a separate terminal:
moveit_pro shell
nvidia-smi
You should see the same output from earlier in the container shell.
GPU Inference
The GPU Inference packages are large, so the first build with GPU Inference enabled will take a while. Grab some coffee!
To enable GPU Inference, follow the reference in the example workspace Dockerfile and add the following lines above and inside the correct user-overlay section:
# NOTE: This line already exists in your Dockerfile.
FROM base AS user-overlay

########################
# ENABLE GPU INFERENCE #
########################
# Cache apt downloads across builds, add NVIDIA's CUDA apt repository,
# then install the CUDA, cuDNN, and TensorRT runtime libraries.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install wget -y -q --no-install-recommends && \
    wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb && \
    dpkg -i cuda-keyring_1.1-1_all.deb && \
    apt-get update && \
    apt-get install -q -y \
        libcudnn9-cuda-12 \
        libcudnn9-dev-cuda-12 \
        libcublas-12-6 \
        cuda-cudart-12-6 \
        libcurand-12-6 \
        libcufft-12-6 \
        libnvinfer10 \
        libnvinfer-plugin10 \
        libnvonnxparsers10 \
        libtree

# Make the CUDA and ONNX Runtime libraries discoverable at runtime.
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.10/dist-packages/onnxruntime/capi:/usr/lib/x86_64-linux-gnu:/usr/local/cuda-12.6/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
Rebuild your whole image:
moveit_pro build
You can now use MoveIt Pro ML Inference.
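To sanity-check GPU inference from inside the container, one quick diagnostic (a suggested check, not an official MoveIt Pro tool; it assumes the ONNX Runtime Python package referenced by the LD_LIBRARY_PATH above is installed) is to list the execution providers ONNX Runtime can see:
moveit_pro shell
python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
If the GPU libraries are in place, the printed list should include CUDAExecutionProvider rather than only CPUExecutionProvider.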
GPU Inference requires more configuration for the NVIDIA Jetson, found here.
To test this functionality, you can utilize the core behavior "GetMasks2DFromPointQuery" in an objective. Segmentation with GPU inference should be significantly faster than without, and you should notice higher resource usage in nvidia-smi.
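To watch GPU usage while the objective runs, you can poll nvidia-smi from a host terminal, for example:
watch -n 1 nvidia-smi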