Configure NVIDIA GPUs
In MoveIt Pro 7.4.0 and above, the installer scripts perform most of the following steps automatically, but this guide is useful if you are not using the MoveIt Pro CLI or want to do more advanced configuration.
MoveIt Pro has two different integrations of GPU support:
- GPU Acceleration, where MoveIt Pro uses GPU resources to render simulators and cameras.
- GPU Inference, where MoveIt Pro uses GPU resources to run machine learning models.
GPU Acceleration can be used without GPU Inference, but GPU Inference requires GPU Acceleration to be enabled.
The workflow for enabling both is below. Please read and follow all of the steps carefully.
The instructions below do not apply to NVIDIA Jetson hardware. For instructions specific to that platform, see Configuring NVIDIA Jetson.
Dependencies
NVIDIA Drivers
Please ensure that you have NVIDIA drivers installed for your system:
nvidia-smi
You should get something similar to the following output:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.195.03             Driver Version: 570.195.03     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4060 ...    Off |   00000000:01:00.0  On |                  N/A |
| N/A   40C    P8              3W /  115W |     806MiB /  8188MiB  |      30%     Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
If the command isn't found, you likely do not have the NVIDIA driver installed. To install the NVIDIA driver on Ubuntu, run the following:
sudo add-apt-repository multiverse
sudo apt update
sudo apt install ubuntu-drivers-common ubuntu-restricted-extras
sudo apt update
sudo apt install nvidia-driver-570
sudo reboot
For Debian or other Debian/Ubuntu derivatives, please follow this guide and install nvidia-driver-570.
MoveIt Pro GPU Inference is currently only compatible with NVIDIA driver versions >= 560. See the CUDA Toolkit Compatibility Matrix for more details. "Proprietary" versions of the driver are preferred (as opposed to "Open" or "Open Kernel" versions) but some hardware may require the "Open" version.
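A quick way to check this requirement is to query the driver version and compare its major component against 560. The query flags are standard nvidia-smi options; the comparison helper is an illustrative sketch, not part of MoveIt Pro:

```shell
# Returns success if the major component of version string $1 is >= $2.
driver_at_least() {
  local major="${1%%.*}"
  [ "${major:-0}" -ge "$2" ] 2>/dev/null
}

if command -v nvidia-smi >/dev/null 2>&1; then
  version="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)"
  if driver_at_least "${version}" 560; then
    echo "Driver ${version} supports GPU Inference"
  else
    echo "Driver ${version} is too old for GPU Inference (need >= 560)"
  fi
else
  echo "nvidia-smi not found; install the NVIDIA driver first"
fi
```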
For real-time NVIDIA driver support, please follow this guide.
NVIDIA Container Toolkit
Next, install the nvidia-container-toolkit following this guide.
Please ensure that you follow the steps under Configuring Docker to configure and restart the Docker daemon.
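For reference, the Docker configuration step in that guide amounts to the following two commands, taken from the NVIDIA Container Toolkit documentation; verify them against the guide for your setup:

```shell
# Configure the Docker daemon to use the NVIDIA runtime, then restart it.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```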
To verify that the toolkit is installed properly, please run the following sample container:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
The output should match that of the nvidia-smi command above.
GPU Acceleration and GPU Inference
If both the NVIDIA driver and nvidia-container-toolkit are installed, any moveit_pro command will automatically detect the existence of the GPU and utilize it for acceleration and inference.
When using the MoveIt Pro CLI, this is done by automatically including an additional Docker Compose file installed on your system at /opt/moveit_pro/nvidia-compose.yaml.
If you are running raw docker compose commands instead of the moveit_pro CLI, you must explicitly include /opt/moveit_pro/nvidia-compose.yaml yourself, as described in Not using the MoveIt Pro CLI.
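When invoking Docker Compose directly, the NVIDIA override file is included alongside your own compose file. The docker-compose.yaml name below is a placeholder for your project's compose file:

```shell
# Placeholder example: layer the NVIDIA override on top of your compose file.
docker compose \
  -f docker-compose.yaml \
  -f /opt/moveit_pro/nvidia-compose.yaml \
  up
```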
You can verify this by confirming that the base Docker image used by MoveIt Pro contains the terms cuda and cudnn or by running moveit_pro shell and then nvidia-smi inside the container.
To test GPU inference functionality, you can run the core Behavior "GetMasks2DFromPointQuery" in an Objective.
Segmentation with GPU inference should be significantly faster than on CPU, and you should see higher resource usage in nvidia-smi.
Additionally, if inference falls back to CPU, you will receive a notice in the web UI.
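One way to confirm that inference is actually exercising the GPU is to poll utilization while the Objective runs. This is a generic monitoring sketch, not a MoveIt Pro feature; the parsing helper assumes nvidia-smi's CSV output format (samples like "30 %"):

```shell
# Returns success if a utilization sample such as "30 %" exceeds threshold $2.
util_above() {
  local pct="${1%%\%*}"            # drop the trailing "%"
  pct="$(echo "${pct}" | tr -d ' ')"
  [ "${pct:-0}" -gt "$2" ] 2>/dev/null
}

# Sample GPU utilization once per second while your Objective runs.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader -l 1 |
  while read -r sample; do
    if util_above "${sample}" 20; then
      echo "GPU busy: ${sample}"
    fi
  done
fi
```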