Version: 7

2. Perception & Advanced Motion Planning

Tutorial Overview

This tutorial teaches you how to use MoveIt Pro for tasks involving perception and advanced motion planning.

The tutorial provides step-by-step instructions with examples for creating objectives to perform tasks such as stacking blocks, picking and placing objects, and using different perception methods. It also guides you through debugging tools and techniques.

This tutorial will teach you how to:

  1. Find objects using AprilTags
  2. Create Subtrees
  3. Use MoveIt Task Constructor (MTC) behaviors to pick objects
  4. Add a keep-out zone to your planning scene
  5. Add a breakpoint to your objective
  6. Register an object in a point cloud
  7. Segment an object in a point cloud

Start MoveIt Pro

You should have already installed MoveIt Pro. Launch the application using:

moveit_pro run -c lab_sim

You should now be ready to run the following training examples.

Stacking Blocks with AprilTags and Subtrees

In this exercise, you will build a new objective that will detect and stack the blocks in the scene.

The objective will use AprilTags, a type of Augmented Reality (AR) tag used in robotics, to determine the pose of a cube on the table. The objective will then pick up the cube and place it in a precise position.

At a high level, this is the overall flow of the objective:

  1. Initialize robot and scene
  2. Get object pose using AprilTag-based computer vision
  3. Pick from pose
  4. Place object

You’ll create four subtrees for this objective based on the flow above. Subtrees are behavior trees that can be instantiated inside of other behavior trees. They’re useful for abstracting away complexity, minimizing node duplication within behavior trees, and re-using common trees in different objectives.
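If it helps to see the control flow in code, here is a minimal sketch of how a Sequence and a subtree behave. This is plain Python for illustration only, not the MoveIt Pro API; the node names mirror the subtrees described above.

```python
# Minimal behavior-tree sketch (illustration only, not the MoveIt Pro API).
# A Sequence ticks its children in order and fails fast; a subtree is just
# a named tree reused as a single node inside a larger tree.

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Behavior:
    def __init__(self, name, action):
        self.name = name
        self.action = action  # callable returning SUCCESS or FAILURE

    def tick(self):
        return self.action()

class Sequence:
    def __init__(self, name, children):
        self.name = name
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE  # abort the sequence on the first failure
        return SUCCESS

# "Initialize Robot and Scene" modeled as a subtree: a Sequence that a
# parent tree can include as if it were one node.
init_subtree = Sequence("Initialize Robot and Scene", [
    Behavior("Open Gripper", lambda: SUCCESS),
    Behavior("Look at Table", lambda: SUCCESS),
    Behavior("Clear Snapshot", lambda: SUCCESS),
    Behavior("Take wrist camera snapshot", lambda: SUCCESS),
])

stack_blocks = Sequence("Stack Blocks", [init_subtree])
print(stack_blocks.tick())  # SUCCESS
```

Because the subtree presents itself as a single node, the parent tree stays small even as each step grows more complex internally.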

Build - “Initialize Robot and Scene” Subtree

First, create a new objective called Stack Blocks. If you’re unsure how to create a new objective, please refer to the Basic Usage Tutorial.

New objective

The first subtree in this application consists of four steps:

  1. Open Gripper
  2. Look at Table
  3. Clear Snapshot
  4. Take wrist camera snapshot

To start creating the subtree, remove the AlwaysSuccess behavior from your new empty objective and drag a second Sequence behavior into your new objective:

Sequence

Next, choose Create Subtree from the icons above the sequence.

Create Subtree

Name the subtree Initialize Robot and Scene. Note the Subtree-only Objective is checked by default. This means that this objective can only be run as a subtree within another objective, not as a stand-alone objective.

Subtree-only Objective

Your Stack Blocks objective should now look like this:

Stack Blocks

Next, edit the subtree using the edit icon.

Edit

Add the following four subtrees to your subtree as pictured:

Subtrees

When finished, run the Stack Blocks objective. It should be under the Uncategorized section. You should see the robot move to the Look at Table waypoint and then take a snapshot, adding a point cloud to the Visualization pane.

Build - “Get Object Pose” Subtree

The next step in the Stack Blocks objective is to get the pose of a block by using its AprilTag to locate it.

Return to the Build tab, and edit the Stack Blocks objective.

Add a new Sequence to the objective, with the following behaviors and ports. The ports listed below are the most relevant to making this objective work, so they are included even when a value is unchanged from its default. To learn more about how each behavior works, see the description within the behavior.

  1. LoadObjectiveParameters - loads parameters from a YAML file
    a. config_file_name: apriltag_detection_config.yaml
    b. parameters: {parameters}
  2. GetCameraInfo - gets the camera information from a ROS topic
    a. topic_name: /wrist_camera/camera_info
    b. message_out: {camera_info}
  3. GetImage
    a. camera: /wrist_camera/color
  4. DetectAprilTags
    a. detections: {detections}
    b. parameters: {parameters}
    c. camera_info: {camera_info}
  5. GetDetectionPose
    a. detections: {detections}
    b. target_id: -1
    c. detection_pose: {detection_pose}
  6. TransformPoseFrame
    a. input_pose: {detection_pose}
    b. target_frame_id: world
    c. output_pose: {output_pose}
  7. VisualizePose
    a. pose: {output_pose}
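The final step of this sequence converts the detection pose from the camera frame into the world frame. Conceptually, a behavior like TransformPoseFrame applies a homogeneous transform, which can be sketched in a few lines of numpy (the camera pose below is a made-up example, not a value from lab_sim):

```python
import numpy as np

def transform_pose(T_world_camera, pose_camera):
    """Re-express a pose (4x4 homogeneous matrix) given in the camera frame
    in the world frame: T_world_object = T_world_camera @ T_camera_object."""
    return T_world_camera @ pose_camera

# Hypothetical values: camera 0.5 m above the world origin, looking straight down
# (a 180-degree rotation about x flips the optical axis to point at the table).
T_world_camera = np.array([
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.5],
    [0.0,  0.0,  0.0, 1.0],
])

# Detected tag pose: 0.2 m in front of the camera along its optical (+z) axis.
pose_camera = np.eye(4)
pose_camera[2, 3] = 0.2

pose_world = transform_pose(T_world_camera, pose_camera)
print(pose_world[:3, 3])  # tag position expressed in the world frame
```

With these made-up numbers, a tag 0.2 m in front of a downward-looking camera at height 0.5 m ends up 0.3 m above the world origin.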

Your complete subtree should look like this:

Complete subtree

Now run your Stack Blocks objective, and you should see a 3-axis colored pose marker appear on the detected block:

Stack Blocks

Edit the Stack Blocks objective again, and convert your new sequence to a subtree called Get Object Pose. In the previous example you created a subtree first and then populated it with behaviors, but that isn’t required: you can convert any Sequence node into a subtree at any time, which makes it easy to reuse logic you have already built.

Convert

Once the subtree is created, edit it, select the root node called Get Object Pose, and add an output port named output_pose that exposes the detected pose, mapping it onto the parent tree’s blackboard as {object_pose}.

Get Object Pose

The Stack Blocks objective should now look like this.

Stack Blocks

Resetting the Simulation

In the next section, you’ll be moving the blocks around on the table. If the blocks end up in a state you don’t want them in, you have two options for resetting the scene: restart MoveIt Pro, or run the MuJoCo viewer and reset the scene from there.

info

Performance Note
Running the MuJoCo viewer can impact system performance, and may not be feasible for lower-powered systems.

To enable the MuJoCo viewer, exit MoveIt Pro using CTRL-C, then follow this MuJoCo configuration guide under the section Optional Params -> MuJoCo Viewer. For the lab_sim configuration, the ros2_control tag can be found in the lab_sim/description/picknik_ur.xacro file.

Re-launch MoveIt Pro and the MuJoCo viewer should launch beside MoveIt Pro.

Within the viewer, you can move objects manually by double-clicking the object you want to move, and then using the following:

  • Lift and move: CTRL+Right Mouse
  • Drag horizontally: CTRL+SHIFT+Right Mouse

You can also reset the simulation using the Reset button on the bottom left menu in the viewer.

Reset

Build - Pick from Pose

The next subtree you will build is going to pick the block specified in the {object_pose} blackboard variable.

This step requires using MoveIt Task Constructor (MTC) behaviors.

What is MoveIt Task Constructor (MTC)?

The MoveIt Task Constructor (MTC) framework enables you to break down complex planning tasks into multiple interdependent steps for use by motion planners.

For example, to pick an object, a robot must plan for multiple dependent goals in an exact order:

  1. Open the gripper
  2. Move to a pre-grasp position
  3. Approach the object
  4. Close the gripper
  5. Lift the object

Given an object’s pose, MTC plans a solution to this problem and executes the plan.
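The steps above can be sketched as a chain of dependent planning stages, where each stage plans from the end state of the previous one, and a single failed stage fails the whole task. This toy Python sketch illustrates only the idea; MTC’s real API is C++, and its stages plan full robot trajectories:

```python
# Simplified sketch of staged planning in the spirit of MTC (not the real API):
# each stage plans from the end state of the previous stage, and the task
# only succeeds if every stage, in order, produces a sub-plan.

def plan_task(start_state, stages):
    """stages: list of (name, planner) where planner(state) returns
    (new_state, trajectory) on success or None on planning failure."""
    state, solution = start_state, []
    for name, planner in stages:
        result = planner(state)
        if result is None:
            return None  # one failed stage fails the whole task
        state, trajectory = result
        solution.append((name, trajectory))
    return solution

# Toy 1-D "planners": the state is a gripper height, trajectories are strings.
stages = [
    ("open gripper",      lambda s: (s, "open")),
    ("move to pre-grasp", lambda s: (0.10, "descend to 0.10")),
    ("approach object",   lambda s: (0.02, "descend to 0.02")),
    ("close gripper",     lambda s: (s, "close")),
    ("lift object",       lambda s: (0.20, "ascend to 0.20")),
]

solution = plan_task(0.30, stages)
print([name for name, _ in solution])
```

The value of this decomposition is that a failure is localized to a named stage, which is exactly what the MTC debugger shows you later in this tutorial.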

Benefits of MoveIt Pro Behaviors

MoveIt Pro provides a library of behaviors to make using MTC easier. They provide a simplified way to create, plan, and execute MTC tasks. You can:

  • Use Behaviors to set up and extend the task with common building blocks
  • Choose from a variety of building blocks
  • Reuse components of existing tasks far more easily than by writing low-level C++

Relevant MTC behaviors

There are many built-in MTC behaviors; you will use the following to build your picking objective:

  • InitializeMTCTask creates a task object and initializes the common global properties like trajectory execution info. The task object is then stored on the blackboard to be modified by following behaviors.
  • SetupMTCCurrentState takes the created task and sets up a generator stage corresponding to the current state as the task's start state.
  • SetupMTCPickObject is our custom Behavior which adds the stages required to describe the pick-planning problem to the task.
  • PlanMTCTask calls the plan() function of the given MTC task and stores the solution to the blackboard.
  • ExecuteMTCTask reads an MTC solution from the blackboard and executes it.

Create a new subtree within the Stack Blocks objective called Pick from Pose.

Within that subtree, add the following behaviors and ports:

  1. LoadObjectiveParameters
    a. config_file_name: pick_object_config.yaml
    b. parameters: {parameters}
  2. InitializeMTCTask
    a. task: {mtc_task}
    b. task_id: pick_object
  3. SetupMTCCurrentState
    a. task: {mtc_task}
  4. SetupMTCPickObject
    a. grasp_pose: {grasp_pose}
    b. task: {mtc_task}
    c. parameters: {parameters}
  5. PlanMTCTask
    a. task: {mtc_task}
    b. solution: {mtc_solution}
  6. ExecuteMTCTask
    a. solution: {mtc_solution}

Add a port to the root node of the Pick from Pose subtree.

Name the port grasp_pose, and set the default value to the blackboard variable output by the Get Object Pose subtree you created earlier: {object_pose}.

Your subtree should now look like this:

Subtree

Select Done to finish editing the subtree.

Place the object

Finish building your Stack Blocks objective by adding these existing subtrees:

  1. Look at Table
  2. Place Object
  3. Open Gripper

Place Object is an example subtree that places an object at a pre-defined waypoint. Adding the Look at Table subtree before placing the object is needed so that the planner approaches the placement point from above. This is what makes the robot stack the blocks!

Your finished Stack Blocks objective should look like this:

Stack Blocks

Run the objective and you should see the robot pick up a block, and move to the Look at Table waypoint, then plan a placement trajectory and ask for approval:

Look at Table

User Approval

The approval step is not required in real-world applications, but MoveIt Pro provides behaviors for user interaction and approval in applications that are mission-critical or where a human in the loop is desired. In this case, your Place Object subtree includes this capability through the Wait for Trajectory Approval if User Available subtree, which checks whether a UI is attached and, if so, asks the user to approve the trajectory.

Wait for Trajectory Approval if User Available

Optionally, you could also add a Fallback behavior as a parent of your Place Object subtree, and if the user rejects the plan, you can fall back to the Request Teleoperation subtree, as shown in the Basic Usage Tutorial.

Once you approve the trajectory, the robot should stack the blocks like this!

Blocks

info

You won’t see the blocks being stacked in the “Visualization” pane; stacking is only visible in the simulated camera feeds, for example under “/scene_camera/color” and “/wrist_camera/color”.

Congratulations, now that you’ve stacked blocks, you can continue to the next section to learn how to move around obstacles and debug planning failures.

Add a Keep-out Zone

Keep-out zones are areas the planners must avoid when moving a robot. In MoveIt Pro, you can add keep-out zones to specify what areas to avoid.
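Conceptually, a keep-out zone is just a box that the collision checker treats as an obstacle. As a rough illustration (not how MoveIt Pro checks collisions internally, which uses full geometric collision checking against the robot model), here is the kind of containment test involved:

```python
# Conceptual sketch: a keep-out zone as an axis-aligned box, and the kind of
# containment check a collision checker performs for a single point.

def in_keep_out(point, center, size):
    """True if a 3-D point lies inside a cube with the given side length
    centered at `center` (all values in meters)."""
    half = size / 2.0
    return all(abs(p - c) <= half for p, c in zip(point, center))

zone_center = (1.0, 1.0, 1.0)  # the tutorial's example position: 1 m in x, y, z
print(in_keep_out((1.2, 0.9, 1.1), zone_center, 1.0))  # True: inside the 1 m cube
print(in_keep_out((2.0, 1.0, 1.0), zone_center, 1.0))  # False: outside
```

A planner rejects any trajectory whose states would put part of the robot inside such a region, which is exactly the planning failure you will provoke below.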

On the left side of the Visualization pane, you’ll see a cube symbol.

Cube symbol

Click on the cube to add a keep-out zone.

Keep-out zone

Add a keep-out zone at Position 1, 1, 1, that is, offset 1 m along each of x, y, and z from the world frame. You’ll see a red cube appear above the workspace. Click on the cube to drag it using an interactive marker.

Position 1, 1, 1

Drag it into the workspace. To remove the interactive marker, click anywhere outside the cube in the visualization pane.

Interactive marker

Put a keep-out zone to the robot’s right side, then run Teleoperate, and choose the Workspace Right waypoint.

Workspace Right

The robot should move around the keep-out zone!

Return the robot to the initial position by selecting the Look at Table waypoint.

Click on the cube and delete it. Now add a new keep-out zone, but set the size to 1 m per side.

Create keepout zone

Visualization

Run Teleoperate again, and choose the Workspace Right waypoint again. You should see a message that the motion planning failed.

Failed

This is expected: you added a keep-out zone and then tried to move the robot into it. In other words, the keep-out zone worked!

Next, try to use the Interactive Marker Teleoperation to move the robot arm into the red cube area. You should see a message that PlanMTCTask has failed.

Failed PlanMTCTask

To debug this further, zoom into the Behavior Tree and find the failing node. It will be highlighted in red.

Behavior Tree

Click on the bug symbol to open the MTC debugger and see what failed.

Bug symbol

Debugger

Highlight the failing stage to see a comment that explains why it failed. In this case, it says “eef in collision”, which means the robot’s end effector is in collision with the keep-out zone.

Choose Return to Objective in the top right corner to close the MTC Debugger view.

Add a Breakpoint

Another way to debug what is occurring in an objective is by inserting a breakpoint, using a Breakpoint Subscriber.

Breakpoint Subscriber

Edit your Stack Blocks objective, and add a Breakpoint Subscriber in the middle.

Stack Blocks

Run the objective. You should see the robot pick up the cube, then the objective will pause at the breakpoint until the Resume button in the top right corner is pressed.

Resume

At this point, you can move the visualization to get a better look at the scene, and if there were a real failure, you could determine the root cause before resuming the objective.

Press Resume to finish running the objective.

More Perception Objectives

In the Stack Blocks objective, you used AprilTags to locate the blocks on the table so that you could stack them. There are other perception capabilities within MoveIt Pro, for example, Point Cloud Registration and Point Cloud Segmentation. In this section, you’ll run some objectives that demonstrate those capabilities and learn how they work.

Point Cloud Registration

Point cloud registration is the process of localizing an object within a point cloud, given a CAD mesh file as input. In robotics it is used to locate a part within a workspace, as an input to manipulation workflows such as polishing and grinding parts.

Registering Point Clouds in MoveIt Pro

Typically, point cloud registration starts with an initial guess pose, which might come from an ML perception model or from where an object should be by the design of the robot workspace. This initial guess pose should be close to the object being registered, but it does not need to be exact. The registration process then finds the exact pose using one of several algorithms, such as Iterative Closest Point (ICP).

In MoveIt Pro, the RegisterPointClouds behavior performs this matching given three inputs: 1) an initial guess point cloud, 2) the maximum ICP correspondence distance, and 3) a maximum number of iterations. The output, called the “registered pose”, is the pose relative to the initial guess point cloud. The following screenshot shows an example usage of this behavior:
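To see what ICP is doing under the hood, here is a toy numpy implementation of the loop: match each source point to its nearest target point (subject to a maximum correspondence distance), fit a rigid transform with the Kabsch algorithm, and repeat. This is a simplified sketch, not the implementation behind RegisterPointClouds:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch algorithm on already-paired points)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, max_iterations=20, max_correspondence_distance=0.5):
    """Toy ICP: iteratively match each source point to its nearest target
    point, then re-fit a rigid transform. Returns the aligned source points."""
    src = source.copy()
    for _ in range(max_iterations):
        # Brute-force nearest-neighbor correspondences.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        keep = d[np.arange(len(src)), nn] <= max_correspondence_distance
        if keep.sum() < 3:
            break  # too few correspondences to fit a rigid transform
        R, t = best_rigid_transform(src[keep], target[nn[keep]])
        src = src @ R.T + t
    return src

# A cube-corner "point cloud" and an initial guess offset a few centimeters away.
target = np.array([[x, y, z] for x in (0, .1) for y in (0, .1) for z in (0, .1)], float)
guess = target + np.array([0.03, -0.02, 0.01])
registered = icp(guess, target)
print(np.abs(registered - target).max())  # residual after registration
```

Real implementations add outlier rejection and convergence criteria, but the two tuning knobs exposed by RegisterPointClouds, correspondence distance and iteration count, appear here in the same roles.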

RegisterPointClouds

Try it yourself

Select the Build tab. In the Application - Advanced Examples section, select the Register CAD Part objective to begin editing.

Register CAD Part

You’ll see the following overall flow:

  1. Move the camera on the end effector to look at the area of interest
  2. Create an initial guess pose (CreateStampedPose)
  3. Load a mesh point cloud at guess pose (visualized as the red point cloud)
  4. Get the camera point cloud
  5. Register (using ICP) the initial guess point cloud to the actual camera point cloud (visualized as the green point cloud)

Register using ICP

Next, run the objective, and you should see two point clouds appear, first a red one above the table (the initial guess), then a green one that matches the closest cube to the initial guess.

Objective

Now edit the objective, and modify the guess pose by changing the x, y, and z values in the CreateStampedPose behavior to (0.2, 0.75, 0.6).

Run the objective again and see how the new guess will register a different cube.

As an additional hands-on exercise, you can replace the Get Object Pose subtree in the Stack Blocks objective with this Register CAD Part objective.

Point Cloud Segmentation

Point cloud segmentation is the process of grouping individual points in a point cloud based on shared characteristics or belonging to the same object. The Segment Point Cloud from Clicked Point objective demonstrates how to segment an object from a point cloud. It uses the GetMasks2DFromPointQuery behavior, which calls a machine learning model, the Segment Anything Model (SAM), to segment the object.

The objective:

  • Prompts the user to click an object in the color wrist camera image
  • Creates a 2D mask of the object using the Segment Anything Model (SAM)
    • The 2D mask is the set of (x, y) pixel locations belonging to the object in the color image
  • Converts the 2D mask to a 3D mask, mapping the object into the point cloud
  • Applies the 3D mask to the point cloud, removing everything except the chosen object
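The mask-to-point-cloud steps can be sketched with an organized point cloud, where each pixel of the camera image corresponds to one 3-D point. This is an illustrative numpy sketch, not the actual behavior implementation (which also handles invalid depth readings):

```python
import numpy as np

def crop_point_cloud(points, mask):
    """points: (H, W, 3) organized point cloud, one 3-D point per pixel.
    mask: (H, W) boolean 2-D segmentation mask (e.g. from SAM).
    Returns only the 3-D points whose pixel lies inside the mask."""
    return points[mask]

# Tiny 4x4 "organized cloud": z encodes which pixel each point came from.
h, w = 4, 4
points = np.zeros((h, w, 3))
points[..., 2] = np.arange(h * w).reshape(h, w)

mask = np.zeros((h, w), dtype=bool)
mask[1:3, 1:3] = True  # pretend SAM segmented the 2x2 center region

segmented = crop_point_cloud(points, mask)
print(segmented[:, 2])  # z values of the 4 masked pixels
```

Because the cloud is organized (one point per pixel), applying the 2D mask directly selects the 3-D points of the chosen object, which is what the 3D mask represents.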

3D mask

In the Application - Advanced Examples section, locate and run the Segment Point Cloud from Clicked Point objective. Click on an object in the camera pane and it will segment out the point cloud for that object.

Segment Point Cloud from Clicked Point

Select the burner on the right side and you will see the visualization pane update with only that object in the point cloud.

Object

You can use the Clear Snapshot objective to clear the snapshot, or run the objective again and select another object to segment.

For another hands-on exercise, you can use GetGraspableObjectsFromMasks3D to convert the 3D mask to a graspable object, then ExtractGraspableObjectPose to get a pose that can be used with your existing Pick from Pose subtree.

Summary

In this tutorial, you learned how to:

  1. Find objects using AprilTags
  2. Create Subtrees
  3. Use MoveIt Task Constructor (MTC) behaviors to pick objects
  4. Add a keep-out zone to your planning scene
  5. Add a breakpoint to your objective
  6. Register an object in a point cloud
  7. Segment an object in a point cloud

Congratulations, you’re now ready to move to the next tutorial!