3. Motion Planning
🕒 Duration: ~2 hours
💪 Level: Intermediate
Tutorial Overview
This tutorial introduces the powerful motion planning capabilities of MoveIt Pro, combining navigation, perception, and whole-body planning into a unified workflow. Set in the `hangar_sim` environment, you'll learn how to navigate your mobile robot, execute trajectories, perform joint-space and Cartesian-space motion planning, generate automated coverage paths, and leverage the MoveIt Task Constructor (MTC) to handle sophisticated manipulation tasks. We'll finish with a complex example of an ML-based pick-and-place task combining all the building blocks learned so far. Whether you're planning point-to-point movements or scanning surfaces in a warehouse, this tutorial gives you hands-on experience with advanced robot Behaviors that blend mobility, manipulation, and perception.
Pre-reqs
You should have already installed MoveIt Pro. We will assume you have already completed Tutorial 2 and know how to build Behavior Trees.
Types of Motion Control in MoveIt Pro
Let's start with a summary of the types of motion control available in MoveIt Pro and the corresponding Behaviors and Subtrees that can be used to achieve them. Don't worry too much about the details of each type right now; we will cover them in more detail later in this tutorial.
| Motion Control Type | Collision Aware? | Behavior | Input Type |
| --- | --- | --- | --- |
| Inverse Kinematics (IK) Solver | Yes | `ComputeInverseKinematics` | `std::vector<geometry_msgs::PoseStamped>` |
| Joint Interpolation | Yes** | `GeneratePointToPointTrajectory` | `moveit_studio_agent_msgs::RobotJointState` |
| Global Freespace Planning | Yes | `Move to Joint State` | `moveit_studio_agent_msgs::RobotJointState` |
| Global Freespace Planning | Yes | `Move to Pose` | `geometry_msgs::PoseStamped` |
| Global Freespace Planning | Yes | `Move to Waypoint` | `std::string` (MoveIt Pro saved waypoint name) |
| Cartesian Planning | Yes** | `PlanCartesianPath` | `std::vector<geometry_msgs::PoseStamped>` |
| Coverage Path Generation | No | `GenerateCoveragePath` | `geometry_msgs::PoseStamped` |
| Joint Trajectory Controller | No | `ExecuteFollowJointTrajectory` | `trajectory_msgs::JointTrajectory` |
| Joint Trajectory w/ Admittance Controller | No | `ExecuteTrajectoryWithAdmittance` | `trajectory_msgs::JointTrajectory` |
| Cartesian Velocity-Force Control | No | `PublishVelocityForceCommand` | `moveit_pro_controllers_msgs::VelocityForceCommand` |
| Protected Cartesian Velocity-Force Control | Yes | `PoseJog` | `moveit_pro_controllers_msgs::VelocityForceCommand` |

** Collision checking possible with a separate `ValidateTrajectory` Behavior.
Start MoveIt Pro
Launch the application using:

```
moveit_pro run -c hangar_sim
```

The `hangar_sim` robot config launches a virtual environment representing an airplane hangar with a mobile manipulator. The robot has a mobile base, a robotic arm, and integrated sensors.
Concepts
This tutorial requires understanding a few key concepts that will be used throughout the exercises:
Joint State
Represents the values of the robot's actuators, including their positions, velocities, and efforts. It is used to describe the robot's configuration in joint space. In MoveIt Pro, joint states are represented by the `moveit_studio_agent_msgs::RobotJointState` message type, which can contain multiple joint states for different robot components (e.g., arm, base).
Pre-defined waypoints in MoveIt Pro can be loaded into a `RobotJointState` type by using the `RetrieveWaypoint` Behavior.
Poses
A pose is a representation of an object's position and orientation in space. It is typically represented as a `geometry_msgs::PoseStamped` message, which includes a position (x, y, z) and an orientation (quaternion). Poses are frequently used to define the target positions for the robot's end effector during motion planning, or the location of a robot link in the environment.
A `geometry_msgs::PoseStamped` message contains the pose of a 'child' frame expressed in the 'parent' frame's coordinates.
Poses can be created using the `CreateStampedPose` Behavior, which allows you to specify the position and orientation of the pose in a specific reference frame. They can be visualized with the `VisualizePose` Behavior.
Here's an example:
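For reference, the underlying message can also be built directly in ROS 2 Python. This is a minimal sketch; the frame name and values are illustrative:

```python
from geometry_msgs.msg import PoseStamped

# A pose 0.5 m in front of the 'world' frame origin, with identity orientation.
pose = PoseStamped()
pose.header.frame_id = "world"  # the 'parent' reference frame
pose.pose.position.x = 0.5
pose.pose.orientation.w = 1.0   # identity quaternion: no rotation of the 'child' frame
```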
Cartesian paths
A 'path' in Cartesian space is just a sequence of poses.
We typically create the poses and then add them to a vector of poses, using the `AddPoseStampedToVector` Behavior.
For longer sequences of poses, they can be stored in a YAML file and loaded using the `LoadPoseStampedVectorFromYaml` Behavior.
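In message terms, such a path is simply a vector of `PoseStamped` (a Python list in rclpy). Here is a hedged sketch of building a short straight-line path; the frame name and spacing are chosen arbitrarily for illustration:

```python
from geometry_msgs.msg import PoseStamped

def straight_line_path(frame_id: str, length_m: float, num_points: int) -> list:
    """Build poses spaced evenly along the +Z axis of frame_id (num_points >= 2)."""
    path = []
    for i in range(num_points):
        pose = PoseStamped()
        pose.header.frame_id = frame_id
        pose.pose.position.z = length_m * i / (num_points - 1)
        pose.pose.orientation.w = 1.0  # keep the tool orientation fixed
        path.append(pose)
    return path

# Five poses spanning 0.2 m along the grasp_link Z axis.
path = straight_line_path("grasp_link", length_m=0.2, num_points=5)
```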
Joint-space trajectories
A joint-space trajectory is a sequence of joint states that defines how the robot's actuators should move over time. It is typically represented as a `trajectory_msgs::JointTrajectory` message, which includes the joint names and the positions, velocities, and accelerations for each point in the trajectory.
Motion planners typically take as input Cartesian-space or joint-space targets, and output joint-space trajectories, which can be executed by the robot's controllers. Trajectories are generated at a given sampling rate, which is the frequency at which the controller expects to receive new trajectory points. Points in the trajectory (also called setpoints) are evenly spaced in time at the specified sampling rate.
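To make the sampling idea concrete, here is a hedged sketch of a small `JointTrajectory` built in ROS 2 Python, with three setpoints spaced evenly at a 10 Hz sampling rate (the joint names are hypothetical):

```python
from builtin_interfaces.msg import Duration
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

SAMPLE_PERIOD_S = 0.1  # a 10 Hz sampling rate

traj = JointTrajectory()
traj.joint_names = ["shoulder_joint", "elbow_joint"]  # hypothetical joint names

# Three setpoints, evenly spaced in time at the sampling rate.
for i, positions in enumerate([[0.0, 0.0], [0.1, -0.05], [0.2, -0.1]]):
    point = JointTrajectoryPoint()
    point.positions = positions
    t = i * SAMPLE_PERIOD_S
    point.time_from_start = Duration(sec=int(t), nanosec=int(round((t - int(t)) * 1e9)))
    traj.points.append(point)
```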
Planning Scene
The planning scene is a representation of the robot's environment, including the robot itself, obstacles, and other objects.
It is used by motion planners to determine valid paths and trajectories for the robot.
In MoveIt Pro, the planning scene is represented by the `moveit_msgs::PlanningScene` message type.
Obstacles can be added manually from the UI (keep-out zones), in Behavior Trees via planning scene manipulation Behaviors, or automatically from perception data by configuring the perception pipeline.
Planning vs. Control
In MoveIt Pro, there is a clear distinction between planning and control:
- Planning refers to the process of computing a valid path or trajectory for the robot to follow, given its current state and the desired target. This is typically done using motion planners, which take into account the planning scene and the robot's kinematics.
- Control refers to the process of executing the planned trajectory on the robot's hardware.
The output of the planning process is typically a joint-space trajectory, which is the input to the control component. Collision checking is normally done during the planning phase, to ensure that the planned trajectory does not collide with any obstacles in the planning scene, since controllers typically do not perform collision checking.
Creating and executing joint-space trajectories
To illustrate some of the concepts above, we will start with a simple Objective that moves the robot arm to a predefined joint-space waypoint by creating a direct point-to-point joint-space trajectory, without any motion planning involved.
For this, we will use the `RetrieveWaypoint` Behavior and the `Interpolate to Joint State` Subtree.
- Create a new Objective called "Creating and executing joint-space trajectory".
- Add these two nodes:
- Run this Objective. It will retrieve the waypoint called `Arm Forward` and move the robot arm to that configuration.
You can also find this example pre-built in the `Point-to-Point Trajectory` example Objective in `hangar_sim`.
- Take your time to explore the contents of the `Interpolate to Joint State` Subtree.
- Notice the main Behaviors involved:
  - `GetCurrentPlanningScene` retrieves the current planning scene, which is used later to extract the current robot joint state with `GetRobotJointState`.
  - `GeneratePointToPointTrajectory` generates a joint-space trajectory from the 'start' joint state to the 'target' joint state.
  - `ValidateTrajectory` checks that the generated trajectory is valid, i.e., it does not collide with any obstacles in the planning scene.
  - `ExecuteFollowJointTrajectory` executes the generated trajectory on the robot's hardware.
This Objective won't perform any motion planning around obstacles; it will move the robot arm directly to the specified waypoint. This is useful for simple tasks where you know the exact joint positions you want to move to, and the direct path is feasible.
See how we still check for collisions with the `ValidateTrajectory` Behavior: the Objective will detect if the trajectory is going to collide, but it will not try to find a different path if it does.
Explore the input ports of the `GeneratePointToPointTrajectory` Behavior and their built-in documentation to understand how you can set the speed of the trajectory (relative to the robot's maximum speed), the trajectory sampling rate, etc.
Planning trajectories around obstacles
In this section we will learn how to plan trajectories around obstacles, both for base navigation and for arm manipulation.
MoveIt Pro supports different types of motion planning, including:
- Joint-space planning: Planning the arm (and optionally the base) to move to a specific joint configuration, avoiding obstacles.
- Cartesian planning: Planning the arm (and optionally the base) to move along a path in Cartesian space.
- Task planning: Planning complex tasks that involve multiple steps, such as pick-and-place operations, using the MoveIt Task Constructor (MTC).
- Mobile base navigation: Planning the mobile base to move to a specific location in the environment, avoiding obstacles, while the arm is static.
In the following sections, we will explore how to use these different types of motion planning.
Joint-space path planning for arm(s)
In this section we will learn how to plan joint-space paths for the robot arm, using the `PlanToJointGoal` Behavior, which can plan a path to a specific joint configuration while avoiding obstacles in the planning scene.
Take a look at the existing Objective called `Move to Arm Upright`, or create it yourself using the `Move to Waypoint` Subtree, to look like this:
The `Move to Waypoint` Subtree is a combination of `RetrieveWaypoint` and another Subtree, `Move to Joint State`.
The `Move to Joint State` Subtree is a core Subtree that can be used to move the robot arm to a specific joint configuration, using motion planning around obstacles.
Take your time to explore the contents of the `Move to Joint State` Subtree and notice the main Behaviors involved:
- `SwitchController`: Switches the robot controller to the `joint_trajectory_controller`, which is used to execute joint-space trajectories.
- `PlanToJointGoal`: Plans a joint-space trajectory to the target joint configuration, avoiding obstacles in the planning scene.
- `ExecuteFollowJointTrajectory`: Executes the planned joint-space trajectory on the robot's hardware.
Unlike the `Interpolate to Joint State` example described previously, this Objective will perform motion planning around obstacles, which includes collision checking.
Therefore it is not necessary to validate the trajectory again with the `ValidateTrajectory` Behavior, as that is already done as part of the planning process.
That's why the trajectory can be executed directly with the `ExecuteFollowJointTrajectory` Behavior.
Please take some time to explore the input ports of the `PlanToJointGoal` Behavior and their documentation, more specifically:
- `joint_goal`: The target joint configuration to plan to.
- `keep_orientation` and other related parameters: These control how the planner handles the end effector orientation during the planning process.
- `velocity_scale_factor` and `acceleration_scale_factor`: These control the speed and acceleration of the planned trajectory, relative to the robot's maximum speed and acceleration.
- `seed` and `max_iterations`: These control the random seed and the maximum number of iterations for the planner, which can be useful for exploring different planning solutions or dedicating more time to solving the planning problem.
Moving in joint-space to a 3D pose
A common use case is to move the robot end effector to a specific 3D pose in space while avoiding obstacles.
This requires computing the Inverse Kinematics (IK) for the target pose, to determine the joint configuration that achieves that pose, and then moving to that joint configuration.
The `ComputeInverseKinematics` Behavior can be used to compute the IK for a given pose, and the `PlanToJointGoal` Behavior can then be used to plan a trajectory to that joint configuration.
Take a look at the `Compute IK, Plan and Move` example in `hangar_sim`, or try to create your own.
These are the main Behaviors involved:
`ComputeInverseKinematics` takes a vector of target poses instead of a single pose, because it can compute IK for multiple links simultaneously (e.g., multi-arm systems).
Therefore you'll need to put the target pose in a vector, using `AddPoseStampedToVector`.
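In code terms, wrapping the single target is trivial. A sketch in ROS 2 Python, with hypothetical names and values:

```python
from geometry_msgs.msg import PoseStamped

target = PoseStamped()
target.header.frame_id = "world"
target.pose.position.z = 0.5
target.pose.orientation.w = 1.0

# ComputeInverseKinematics expects a vector of poses, one per link to solve for;
# for a single-arm robot that is simply a one-element vector.
target_poses = [target]
```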
Cartesian-space path planning
Cartesian planning creates a trajectory where the robot’s end effector follows a precise series of poses in space, often along a straight line or curve. This is useful for painting, welding, spraying, drawing, or scanning—where the tool must stay in contact with or aligned relative to a surface.
Common characteristics of Cartesian-space planning:
- Path is defined in Cartesian (X, Y, Z + orientation) space.
- Planner ensures smooth transitions using waypoints.
- A drawback is that it is sensitive to collisions and IK feasibility.
This is in contrast to freespace planning, which plans a trajectory from a start to goal configuration without specific constraints on the path itself. Freespace planning solves for a valid, collision-free path in joint space, but it may produce curved, non-intuitive motions.
MoveIt Pro supports both approaches and allows blending them using composite Behaviors, depending on your task’s complexity.
Cartesian plan to a single pose
Let's start learning about Cartesian planning by moving to a single 3D pose. Later we will plan through a series of points.
We previously introduced the `Compute IK, Plan and Move` example as a way to move to a target 3D pose.
This example is similar, but it plans and moves in Cartesian space instead of joint space.
The robot end effector will be constrained to move in a straight line to the target pose.
Let's create a simple application that moves the robot forward 2 meters in a straight line. Create a new Objective called `Move Forward 2m` that consists of three Behaviors:
- Behavior: `CreateStampedPose`
  - orientation_xyzw = `0;0;0;1`
  - positions_xyz = `0;0;2`
  - reference_frame = `grasp_link`
- Behavior: `VisualizePose`
- Subtree: `Move to a StampedPose`
Your Behavior Tree should look like this:
Run this new Objective and you should be prompted with the new trajectory. Approve it and watch the mobile manipulator move forward.
You can clear markers between steps to make visualizing things easier.
This pattern of creating a pose, visualizing it, and moving towards it is a common pattern in debugging robot Behaviors in MoveIt Pro.
Cartesian plan to multiple points
Remember that a path is a vector of poses, and you can create a path by adding multiple poses to a vector. Then you can plan a trajectory that moves the robot end-effector through those poses, connecting them with straight lines.
This can be done with the `PlanCartesianPath` Behavior. We also used this Behavior in the last section, but it was hidden inside the `Move to a StampedPose` Subtree.
It has many parameters that can be tuned; take some time to explore them and their documentation. One of the main input parameters is the blending radius, which smooths out the motion between straight-line segments. The Cartesian-space density is the distance between waypoints at which the Inverse Kinematics are computed (defaults should work well for most applications):
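To build intuition for the density parameter, here is a rough plain-Python sketch of resampling a straight segment at a fixed Cartesian spacing. This is purely illustrative, not MoveIt Pro's actual implementation:

```python
import math

def densify(p0, p1, density_m):
    """Resample the straight segment p0 -> p1 so that consecutive points
    are at most density_m apart; IK would then be solved at each point."""
    n = max(1, math.ceil(math.dist(p0, p1) / density_m))
    return [tuple(a + (b - a) * i / n for a, b in zip(p0, p1)) for i in range(n + 1)]

# A 0.2 m segment sampled every 0.05 m yields 5 points.
points = densify((0.0, 0.0, 0.0), (0.0, 0.0, 0.2), 0.05)
```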
Create a new Objective called `Cartesian plan through poses`.
Next add the following Behaviors:
- Behavior: `CreateStampedPose`
  - orientation_xyzw = `0;0;0;1`
  - positions_xyz = `0;0;0.2`
  - reference_frame = `grasp_link`
- Behavior: `AddPoseStampedToVector`
  - input = `{stamped_pose}`
- Behavior: `VisualizePose`
- Behavior: `BreakpointSubscriber`
- Behavior: `PlanCartesianPath`
- Behavior: `ExecuteFollowJointTrajectory`
Your Behavior Tree should look like this:
Before running, we recommend you get the robot into a good position by running the Objective `Move to Arm Upright`, which is a shortcut to move to a waypoint.
Now run `Cartesian plan through poses` and you should see it pause with the visualized pose:
Press the Resume button at the top right to continue its operation.
Cartesian plan with more points: drawing a square
To see a more complex Cartesian example without having to tediously build it yourself, let's draw a square.
- First, run `Move to Look at Plane`.
- Then run `Cartesian Plan Simple Square`.
The robot should produce this result:
The key difference between this example and the previous one is that the Cartesian planner constrains both the end effector position and orientation, by setting the parameter `position_only: false`.
Feel free to explore the Behavior Tree to better understand how this was accomplished.
Drawing Multi-Pose Paths
You can load more complex geometries from a file, rather than having to hard-code everything with Behavior Tree nodes. In this example we load a path from a file called `picknik.yaml`.
We also use the ROS transform library (TF) to publish a static frame using `PublishStaticFrame` in the `local` frame. The full Behavior Tree looks like this:
Run the example Objective `Cartesian Draw Geometry From File` and you should see:
Coverage Path Planning
Coverage paths are used in a variety of applications including painting, spraying, sanding, smoothing, or inspecting an area. In this section we will explore how to auto-generate coverage paths for a simple rectangle. More complex coverage paths over contoured surfaces are also possible.
Coverage Path Conventions
You can specify the area to cover by providing:
- An origin (the bottom-right corner).
- The width.
- The height.
The following diagram explains this convention:
A lawn-mower path is automatically generated, with a given distance between strides.
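To make the convention concrete, here is an illustrative plain-Python sketch of such a lawn-mower pattern (the actual Behavior also handles orientation and reference frames for you):

```python
def lawn_mower_path(width: float, height: float, stride: float) -> list:
    """Generate (x, y) waypoints covering a width x height rectangle,
    alternating sweep direction every stride, starting at the origin corner."""
    points = []
    y = 0.0
    left_to_right = True
    while y <= height:
        xs = (0.0, width) if left_to_right else (width, 0.0)
        points.extend((x, y) for x in xs)
        left_to_right = not left_to_right
        y += stride
    return points

# e.g., a 2.0 m x 1.0 m area with 0.25 m between strides
waypoints = lawn_mower_path(2.0, 1.0, 0.25)
```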
In MoveIt Pro there is a Behavior called `GenerateCoveragePath` that is used to create these paths:
Run the existing Objective for this, called `Generate Grid Pattern on Airplane`. You should see it do this:
Seems like a small path, right? Let’s modify this Objective by editing the Behavior Tree.
Find the Behavior called `GenerateCoveragePath` and open the parameters side panel. Change the `width` to `2.0`. You should now see a much more impressive path:
Feel free to adjust any other settings you would like as well.
Scan & Plan Workflow
In this final section we bring together all the pieces by combining perception with the Cartesian planning we’ve covered above. In this example the robot scans the side of the airplane before generating the motion plan.
Run the Objective `Find and Spray Plane` to see it in action. You should see:
Why so far away from the plane's surface? In the examples in this `hangar_sim` world we have added a large amount of collision padding around the plane to prevent the robot from getting stuck under it during mobile base navigation. Unfortunately, this has also prevented us from touching the surface during our manipulation tasks. Future work is to improve the simulation environment and examples to support both scenarios.
The key difference in this example is that we use a pre-built Subtree called `Get Point Cloud Center Pose` that returns the center pose of the point cloud on the airplane's side body:
Using that center point, we generate the Cartesian plan. Let's modify `Find and Spray Plane` to use a denser scanning pattern. Edit the Objective and again find the `GenerateCoveragePath` Behavior. Open the parameter side panel and change the `stride_distance` parameter to `0.1`.
Run the Objective and you might notice the motion planner takes longer to compute the new path. You should then see the following path generated:
Task planning (MTC)
MoveIt Pro provides advanced capabilities for planning complex tasks through the use of the MoveIt Task Constructor (MTC). MTC enables you to break down complex planning tasks into multiple interdependent steps for use by motion planners. It can be used to plan a solution to multi-stage problems.
Benefits of MoveIt Pro MTC Behaviors
MoveIt Pro provides a library of Behaviors to make using MTC easier. They provide a simplified way to create, plan, and execute MTC tasks. You can:
- Use Behaviors to set up and extend the task with common building blocks.
- Choose from a variety of building blocks.
- Reuse components of existing tasks far more easily than writing low-level C++.
Building an MTC Objective
For a simple example of an MTC Objective, find and view the `Move to Pose` Objective, where you will see several MTC Behaviors. The `Move to Pose` Objective moves the end effector to the target input pose. We use this Objective in the iMarker teleoperation mode, with the target pose being provided by the user in the UI.
You will notice this is yet another way to move the robot end effector to a target pose, similar to the `Compute IK, Plan and Move` and `Move Forward 2m` examples we saw earlier.
In this case, the planning problem is set up as a more complex search task, where multiple IK solutions will be attempted to find a valid solution that avoids obstacles in the planning scene.
Edit the Objective and try searching for "MTC" to see all the MTC Behaviors available in MoveIt Pro.
There are many built-in MTC Behaviors, and you will typically use a common structure to build an MTC Objective, similar to what you see in `Move to Pose`:
- `InitializeMTCTask` creates a task object and initializes the common global properties, like trajectory execution info. The task object is then stored on the blackboard to be modified by the following Behaviors.
- `SetupMTCCurrentState` takes the created task and sets up a generator stage corresponding to the current state as the task's start state.
- `SetupMTCxxx`: here you can chain a number of MTC Behaviors and stages together to create the desired behavior of your custom Objective. `Move to Pose` uses the `SetupMTCPlanToPose` Behavior, which contains several MTC stages.
- `PlanMTCTask` calls the plan() function of the given MTC task and stores the solution on the blackboard.
- `ExecuteMTCTask` reads an MTC solution from the blackboard and executes it.
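Put together, the skeleton of such an Objective looks roughly like the following Behavior Tree XML. This is a hedged sketch: the port names and exact attributes are assumptions, so check them against the actual `Move to Pose` Objective file:

```xml
<Control ID="Sequence" name="root">
  <!-- Create the MTC task object and store it on the blackboard. -->
  <Action ID="InitializeMTCTask" task="{mtc_task}" />
  <!-- Use the robot's current state as the task's start state. -->
  <Action ID="SetupMTCCurrentState" task="{mtc_task}" />
  <!-- Add the stages that plan to the target pose (the SetupMTCxxx step). -->
  <Action ID="SetupMTCPlanToPose" task="{mtc_task}" target_pose="{target_pose}" />
  <!-- Solve the task, then execute the stored solution. -->
  <Action ID="PlanMTCTask" task="{mtc_task}" solution="{mtc_solution}" />
  <Action ID="ExecuteMTCTask" solution="{mtc_solution}" />
</Control>
```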
Overall MTC is a complex topic that we do not cover in-depth in these getting started tutorials. For a hands-on example, see the Task Constructor how-to guide.
Task Constructor Debugging
In this section we will explore how to introspect failed MTC tasks.
We first need to trigger an MTC failure - to do this we will use teleoperation. Click on the Teleoperation button at the top right, select the Interactive Marker mode, and move the virtual marker arrows so that the robot end effector will collide with the floor, similar to what is shown in the picture:
You should see a popup message saying that `PlanMTCTask` has failed with error code "PLANNING_FAILED".
To debug this further, open the MoveIt Pro-configured RViz:

```
moveit_pro rviz
```
By clicking on the pose IK stage, we can view all of the IK solutions that were attempted. Reading the comment field, we can see that, as expected, they all failed due to collision with the floor, because we placed the end effector target in a position with no feasible solutions. By clicking on the comment field, each of these solutions can be viewed in the RViz visualization window.
When to use MTC vs. other planning Behaviors?
MTC is a powerful tool for complex planning tasks, but it may not always be the best choice. Here are some guidelines:
- Use MTC when you have a complex task that requires multiple steps, such as pick-and-place operations, where you need to plan a sequence of actions that depend on each other.
- Use other Behaviors when you have a simple task that can be solved with a single motion planner, such as moving the robot end effector to a specific pose or joint configuration, or when you need to perform a single Cartesian path planning task.
Mobile Base Navigation
In this section, we explore basic navigation of the mobile robot using ROS’s Nav2 stack. Nav2 provides autonomous navigation capabilities using maps and localization.
Navigate to a Location in the Hangar
- Start the Objective `Navigate to Clicked Point`.
- In the Visualization pane, click anywhere in the hangar to set a navigation goal.
- Approve the green check mark to start movement.
- Repeat this process to move the robot to the yellow box zone for our next exercises.
Occupancy Grid
MoveIt Pro is able to visualize the occupancy grid, global costmap, and local costmap that is published by Nav2 in the UI.
The occupancy grid is the map that Nav2 uses internally to represent a top-down view of the environment the robot is navigating in. The occupancy grid is typically generated offline with some type of mapping or SLAM tool. Each grid cell encodes whether that cell is free, occupied, or unknown.
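Under the hood this map is a `nav_msgs/OccupancyGrid` message, in which each cell holds -1 (unknown) or an occupancy value from 0 (free) to 100 (occupied). A small sketch of classifying one cell (the threshold of 50 is a common convention, not a Nav2 requirement):

```python
from nav_msgs.msg import OccupancyGrid

def classify_cell(grid: OccupancyGrid, x: int, y: int) -> str:
    """Classify one cell of the row-major occupancy grid data array."""
    value = grid.data[y * grid.info.width + x]
    if value < 0:
        return "unknown"  # -1: never observed
    return "occupied" if value >= 50 else "free"
```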
Use the Navigation toggle button (bottom-right of Visualization pane) to show and hide the Occupancy Grid, to get a better understanding of what that is.
Global Costmap
The global costmap is an internal representation of the environment where now there are costs associated with each grid cell.
These costs typically reflect safety margins around obstacles (known as the “inflation layer”) as well as additional pertinent information. The global costmap is used by a global planner to generate an end-to-end path from the robot’s current position to a goal.
Use the Navigation toggle button (bottom-right of Visualization pane) to show and hide the Global Costmap, to get a better understanding of what that is.
Local Costmap
The local costmap provides a small representation of the environment that is immediately around the robot.
Like the global costmap, the local costmap has costs associated with each grid cell. The local costmap is intended to capture dynamic changes in the environment at a much higher frequency. The local costmap is used by the local controller to generate velocity commands for the robot base so that it can follow the end-to-end path generated by the global planner while still avoiding obstacles near the robot. To learn more, see Nav2’s Environmental Representation page.
Use the Navigation toggle button (bottom-right of Visualization pane) to show and hide the Local Costmap, to get a better understanding of what that is.
The parameters for the local and global costmaps are defined under the `global_costmap` and `local_costmap` namespaces in the filesystem under `~/moveit_pro/moveit_pro_example_ws/src/hangar_sim/params/nav2_params.yaml`. Try adjusting the `inflation_radius` parameter in the file and re-launching MoveIt Pro to see the effect on both costmaps.
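For orientation, `inflation_radius` lives inside the costmap's inflation layer configuration. A typical Nav2 params file looks roughly like this; the layer names and values here are illustrative, so check the actual `nav2_params.yaml`:

```yaml
global_costmap:
  global_costmap:
    ros__parameters:
      plugins: ["static_layer", "obstacle_layer", "inflation_layer"]
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        inflation_radius: 0.55  # meters; larger values keep paths farther from obstacles
```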
Navigation Planners and Controllers
There are some navigation Behaviors which require providing the name of a planner or controller as an input. In Nav2, the planner is used to compute a path that satisfies some objective function; generally this means computing a path to a goal. The planner uses the global costmap to generate the path.
In Nav2, a controller is used to generate the command velocity for the robot so that it can follow a global path. The controller uses the local costmap to generate commands that avoid high-cost areas in the local costmap while still trying its best to follow the path.
ML-Based Pick and Place
Next we will explore an example Objective that involves using machine learning to detect scattered boxes, and leverage our mobile robot to move the boxes to a predefined zone in a warehouse.
Move Boxes to Loading Zone
As a final example using the Behaviors we have learned, we will create an Objective that uses machine learning to detect boxes in the environment and move them to a loading zone.
This Objective requires a fairly powerful computer to run, due to its use of several ML models. It is ongoing work to improve performance of these ML models, but you may not be able to run this example without NVIDIA hardware. If you can’t run this successfully that’s ok, but take a look at the Objectives to see how things are put together.
Panning around the Visualization UI can be tricky: on your mouse, try holding down the middle button and the right button at the same time to pan around and change your frame for zooming.
Run the Objective `ML Move Boxes to Loading Zone`. Watch the robot scan a region, detect boxes, and place them in a zone.
The Objective is designed to operate in an unstructured environment. It relies on wrist camera images to extract scene information and generate a motion plan. First, the robot navigates to the general region where boxes are expected to be. Then, the wrist camera images are fed through a vision language model to extract segmentation masks for all visible boxes. The exact prompt used for segmentation can be tuned for better performance. Given the masks, motion planning is initiated for each possible box to be grasped. Once the first viable motion and grasp is found, the robot will navigate to that box and pick it up. After picking up the box, the robot will motion plan to the designated loading zone and drop the box off.
Planning the motion to pick up a box is an interesting example of MTC usage, where the task is to find a valid pose on the box at which to apply suction, and then plan a collision-free trajectory to that pose. 'Vacuum' pose candidates are computed via the `GenerateVacuumGraspPoses` Behavior. The MTC task then evaluates all of those poses and chooses one that is valid and optimizes the overall path length.
It is worth noting the `SetupMTCBatchPoseIK` Behavior, which is the MTC equivalent of the `ComputeInverseKinematics` Behavior we saw earlier. It allows us to compute multiple IK solutions and feed them as candidates to the MTC task.
This specific MTC task, reading it from the bottom up, does the following:
- Computes IK candidates for the vacuum poses on the box, thus feeding the pipeline with multiple target joint-space configurations.
- Checks that from the target pose there is a straight-line Cartesian path to move vertically, i.e. an 'approach' motion.
- Checks that the beginning of the 'approach' motion can connect, in joint-space, to the current robot joint state.
- Disables collisions between the robot vacuum gripper and the 'octomap', which is a sensor-based 3D representation of the environment. This tells the planner that collisions with the boxes are allowed, since we are trying to pick them up.
Summary
In this tutorial, we explored the core concepts and practical workflows for motion planning in MoveIt Pro. We covered the differences between joint-space and Cartesian-space planning, how to create and execute trajectories, and how to plan around obstacles using both mobile base navigation and arm manipulation. We introduced advanced planning techniques such as coverage path planning, scan-and-plan workflows, and the MoveIt Task Constructor (MTC) for multi-stage tasks. Finally, we demonstrated the integration of machine learning for perception-driven pick-and-place operations, showcasing how MoveIt Pro unifies navigation, perception, and manipulation in complex robotic applications.
🎉 Congratulations!