Version: 7

1. Basic Usage

🕒 Duration: 1-2 hours

Tutorial Overview​

Welcome to your first hands-on experience with MoveIt Pro! This tutorial is designed to introduce you to the core interface and functionality of the platform, including how to run and modify Objectives, use teleoperation tools, and create your own robot applications using behavior trees. You'll learn how to interact with simulated camera feeds, build pick-and-place workflows, set up keep-out zones, and integrate human-in-the-loop recovery steps. Whether you're new to robotics or transitioning from ROS-based development, this tutorial lays the foundation for building powerful, perception-driven robotic applications.

info

An Objective is essentially an application in MoveIt Pro, though it can also be a lower-level function (a subtree) that is combined into a higher-level Objective. Objectives are created using behavior trees, which are similar to state machines. Behavior trees are composed of behaviors, which are the low-level nodes or skills. An example of a behavior is opening a gripper or moving to a pose. To learn more about behaviors, see the Behaviors Concepts Page.
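
To make this concrete, here is a minimal, illustrative sketch of an Objective as a behavior tree. MoveIt Pro saves Objectives to disk as behavior tree XML files (you will list them in a terminal later in this tutorial); the behavior names, ports, and waypoint below are simplified placeholders rather than the exact schema.

<!-- Illustrative sketch only: a tiny Objective that opens the gripper, then moves to a named waypoint -->
<root BTCPP_format="4" main_tree_to_execute="Simple Example">
  <BehaviorTree ID="Simple Example" _description="Open the gripper, then move to a placeholder waypoint">
    <Control ID="Sequence">
      <Action ID="OpenGripper"/>
      <Action ID="MoveToWaypoint" waypoint_name="Home"/>
    </Control>
  </BehaviorTree>
</root>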

Pre-reqs​

We assume you have already installed MoveIt Pro and have some familiarity with robotic arms.

Start MoveIt Pro​

Launch the application if you haven't already, using:

moveit_pro run -c lab_sim

Run View​

MoveIt Pro has two view tabs: a Build view, used when creating or editing objectives, and a Run view, for running objectives. Click on the Run tab.

blank

The Run view is very customizable; the image below provides a high-level overview of its functionality:

Favorites

Panes Overview​

Your configuration of panes may vary depending on previously saved settings, and you can always change the number of panes shown by clicking on the Pane Selection menu in the top right. The contents of each pane can be changed using the drop-down list in its top left corner.

Use the drop-down list to change the bottom right pane to /wrist_camera/color. This will change the camera view to the camera mounted on the robot’s wrist.

blank

info

More information about each pane:

Visualization Pane​

Displays a rendering of what the robot understands of the world, similar to the common RViz visualizer in ROS. In MoveIt Pro this understanding of the world is stored in a Planning Scene. The Planning Scene is a combination of the robot’s URDF model, point clouds, octomaps, meshes, and geometric primitives. These data sources are used to avoid collisions between the robot and the world.

For lab_sim, you should see a robot arm on a rail. You shouldn't see any other objects from the simulated scene yet, since the robot has not 'perceived' them. The view of the scene can be adjusted by clicking within the pane and dragging the mouse: the left mouse button rotates the scene, and the right mouse button drags it.

Camera Feeds​

The /scene_camera/color, /scene_camera/depth, /wrist_camera/color, and /wrist_camera/depth panes show the camera feeds from the underlying simulator. If you were connected to hardware, these cameras would show the real world rather than a simulation. By default, the camera panes are set to the scene cameras, which provide simulated third-person color and depth views of the robot.

Behavior Tree Pane​

This pane shows the most recently run objective, if any. While an objective is running, this pane highlights which behavior is currently executing, which is useful for debugging and introspection.

Blackboard Pane​

This pane shows the variables stored on the behavior tree blackboard. These parameters are the key data passed between behaviors.

Setting Favorites​

The Favorites toolbar at the top of the user interface can be customized so that our most commonly used applications are quickly accessible. To add/remove objectives to/from the toolbar, click on the three dots next to an objective in the objectives sidebar to bring up a drop-down menu, and then star/unstar the objective.

Load

Scan the environment into the planning scene

You might notice our virtual world in the Visualization pane is a bit empty - let’s scan in the entire environment by running the _Scan Scene Objective. We can find it in three different ways:

  • In the favorites toolbar
  • By scrolling through the objectives sidebar categories
  • Or by typing in the name of the objective in the sidebar search area

After running _Scan Scene you should see:

blank

Running an example lab application​

To run an objective in the Run view, choose Move Beakers to Burners.

We should see the robot arm pick up each of the three flasks on the left side of the workspace and move them to the burner on the right. When finished, the objective status should update to Objective complete!

Objective complete!

Now run the Push Button objective and the robot will press on the controls of a simulated burner to heat up the beakers 🧪. This objective uses admittance/force control to push the button with an exact force. That’s science! ⚗️

tip

In this beginner tutorial we are showing you how to do everything from the UI. However, MoveIt Pro ships with a powerful API that allows you to start, stop, and monitor objectives in a headless mode, without the UI.

Modifying the Planning Scene​

We can clear the scanned-in environment at any time by running the Clear Snapshot objective - try that now. You should see the Visualization pane clear out.

Next you can take a snapshot from a different camera - the camera on the robot’s wrist. Run the Take wrist camera snapshot objective and you should see a much smaller area appear as a point cloud.

Restore the full scene by running _Scan Scene a second time.

Stop looping objectives​

Some applications will run forever until you tell them to stop. The 3 Waypoints Pick and Place objective is a simple example of this - run it now. It will pick and place a small cube using hard-coded waypoints, looping forever.

Use the Stop Motion button on the top right to stop the loop once you've seen it run through completely. You'll see the objective status change from Executing to Objective canceled.

Executing

To reset the robot back to its original pose, run the Look at Table objective. This objective is an example of a very simple behavior tree that commands the robot to a named waypoint. We can also move to waypoints using the Teleoperate functionality that is covered later in this tutorial.

Running an example ML objective​

MoveIt Pro offers a powerful set of tools for using machine learning models within your robot application, while still providing safety and reliability guardrails. One example that ships out of the box is ML-based image segmentation. Given a text prompt like “hotdog”, the model will search for all instances of hotdogs within the camera field of view and return their locations for use in manipulation.

warning

Not all computers are powerful enough to run ML models, especially if they do not have a dedicated GPU.

Let’s run the Segment Image from Text Prompt objective. The default value of the prompt string is an object. The output of this objective is a new camera topic /masks_visualization that should automatically be added to our view panes, with all the detected objects highlighted in different colors.

blank

Modify the example ML objective​

Now we are going to edit the Segment Image from Text Prompt objective by selecting the Edit button on the top right of the menu bar.

Edit Button

We should see the behavior tree ready to be edited:

Segment Image From Text Objective

Click on the Segment Image from No Negative Text Prompt Subtree so that a sidebar opens on the right side. In this sidebar you can easily set and edit parameters.

Segment Image Subtree

Scroll down in the sidebar until you see the prompts variable, towards the bottom.

Prompts input

Change the value from an object to bottle. This will prompt the ML model to find only bottles in the camera image.

Now choose the Run button from the top right of the menu bar to run the objective again. You'll see the /masks_visualization camera image update to show that only the bottles are segmented.

blank

Try changing the value of prompts to flask and you should see that the flasks are now segmented. For fun, try random inputs like “dog” and see what happens. 🥴

View Parameters on the Blackboard​

Now that you are editing parameters within behavior trees, we should mention that MoveIt Pro uses a blackboard to store and share data between behaviors and behavior trees. To view the state of the blackboard, choose Blackboard from the drop-down menu in any view pane. Then expand the arrow buttons for each subtree and you should see a list of the variables currently on the blackboard.
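
If you later open a behavior tree XML file, blackboard variables show up as port values wrapped in curly braces: one behavior writes the variable and another reads it. The snippet below is only a rough sketch to illustrate the idea; the behavior and port names are invented for this example and are not the exact ones used by the shipped objective.

<!-- Hypothetical sketch: the first behavior writes {camera_image} to the blackboard,
     the second reads {camera_image} and writes {segment_masks} -->
<Control ID="Sequence">
  <Action ID="GetCameraImage" topic_name="/wrist_camera/color" image="{camera_image}"/>
  <Action ID="SegmentImage" image="{camera_image}" prompts="bottle" masks="{segment_masks}"/>
</Control>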

info

You'll learn more about the concept of subtrees in Tutorial 2.

Blackboard

Find the variable named prompts; its value should be whatever you most recently set it to. This tool is useful for debugging complex data flows within a behavior tree.

Teleoperate the Robot​

MoveIt Pro provides four types of manual control for when a robot needs to be set up, diagnosed, or recovered.

To dive in, click on the Teleoperate button on the top right of the toolbar.

In the top left of the menu bar you should see four available Teleoperation modes - click through them to explore how the user interface changes for each mode.

  • Waypoints
  • IMarker (Interactive Marker)
  • Pose Jog
  • Joints Jog

To interact with the IMarker, you will need the "Visualization" view selected.

Waypoints

Waypoint Control​

Waypoints are saved joint states that can be re-used later in objectives or standalone. The top toolbar also provides some favorite waypoints for quick access during teleoperation. Try running a few waypoints to get a feel for it.

Waypoints

The Waypoints sidebar to the left contains the full list of options, including the ability to save, edit, and favorite waypoints.

More

Interactive Marker Control​

The Interactive Marker (IMarker) teleoperation mode allows users to move the robot's end effector in 6 degrees of freedom using arrows for translation and discs for rotation. Try dragging the arrows and rotating the discs to get an idea of how the interactive marker can be moved.

As the interactive marker is moved, a motion will be automatically planned that moves the end effector from the current pose to a goal pose. A preview of the trajectory will be visualized, and if it looks safe and desirable, you can approve the trajectory by clicking the green check button.

blank

If no preview is shown, it means that there is no valid inverse kinematics solution for the desired pose. We may have dragged the interactive marker beyond the robot’s physical reach. If the marker is in an undesirable state, the Reset Marker button will reset the marker back to the current end effector pose.

warning

Some elements of the simulation scene (e.g. the bench) may not be known to the robot for planning purposes, and therefore you can command the robot to collide with those elements, unless they are added to the planning scene. Check out our how-to guide on creating keepout zones and saving/loading a Planning Scene for more information.

Cartesian Pose Jog Control​

The Pose Jog mode enables the user to translate or rotate the end effector along different planes or axes, and to open or close the gripper. To use this mode most effectively, we recommend you switch your largest view pane to /wrist_camera/color, since the controls are mapped to the coordinate frame of the wrist camera. Try out these controls now to get a feel for them.

Pose Jog

Gripper Control in Pose Jog​

On the bottom left of the Visualization pane are buttons that can be used to open and close the gripper.

Control

Pose Jog Settings​

On the toolbar you’ll see a settings icon, which allows you to change the jog speed and turn off collision checking if needed. It also has a setting for changing planning groups, which is an advanced feature you can ignore for now. Try adjusting these controls to see how they affect the behavior.

Joints Jog

note

There can be situations where the robot collides with an object during an objective and cannot be teleoperated because the start of the trajectory is in collision. In that case, Jog Collision Checking can be turned off so that the robot can be teleoperated. It’s recommended to keep collision checking on unless you have moved the robot into a collision and need full manual control to get it back into a safe state. Once the robot is back in a collision-free state, turn collision checking back on for safety.

note

The Jog Collision Checking and Jog Speed parameters are only used when jogging a joint via the +/- buttons in the Joints Jog view or when using the endpoint jog buttons around the Visualization pane in the Pose Jog view. This is because those two methods use MoveIt Servo (and the respective servo parameters in the robot configuration package), whereas the other modes (such as the slider in Joints Jog view and Interactive Marker in IMarker view) use a regular motion planner.

To make it easy to control our jogging during this tutorial, change the Jog Speed to 30% using the slider.

Joint Control​

The Joints Jog mode can be used to perform low-level individual joint control. Switch to this mode and you should see a sidebar appear that offers several control options:

  • The +/- buttons allow you to jog the arm slowly. The speed scale can help adjust the precision.
  • The slider bar allows you to automatically move to a setpoint.
  • The text input box allows you to type an exact value, in degrees or radians, that you want the joint to reach.

Joint sidebar

In addition, you can see the joint limits on the sliders, switch between radians and degrees with the toggle, and copy out all the joint names and values (in radians/meters) using the copy button.

Gripper Control in Joint Jog​

In the secondary navigation area at the top, there are the "Open Gripper" and "Close Gripper" buttons for controlling the end effector.

Gripper Control

note

Teleoperating the gripper to joint values other than "Open" or "Close" is not currently supported.

Create a Waypoint​

Now that you know how to use various teleoperation modes, you can create a new waypoint! We’re going to create a waypoint that grasps the cube on the right side of the table.

Use the various teleop modes to drive the robot arm to a grasp position that envelops the cube. You will want to open the gripper first, and we recommend using Pose Jog as the easiest mode for driving the arm around the cube. Remember to set the largest camera view to /wrist_camera/color.

Now create a waypoint from your robot pose by switching to the Waypoints mode, then pressing the +Waypoint button.

blank

Name your new waypoint Pick right cube.

tip

You can adjust which parts of the robot to save as a waypoint, by clicking the Change Planning Group button when adding a new waypoint. However, this is an advanced feature you do not need to worry about for now.

Click Create to finish creating your new waypoint. We’ll use this later when we create a new objective.

When you're ready, click either the Stop Motion button at the top right, or the Stop and Exit button on the left to exit teleoperation mode.

Building your own objective​

Create New Objective​

Now you're ready to build your own custom objective! We’re going to build a simple pick and place application using the waypoint you created earlier called Pick right cube, as well as some existing waypoints that are provided out of the box.

To begin, select the Build tab on the top menu bar.

In the Build tab, select +Objective. This opens the New Objective dialog.

Enter My Pick and Place as the name.

For the category, create a new one using your company or organization's name. This can be your first custom category.

You can also provide an optional description, which we recommend. Try A simple example using hardcoded waypoints.

New Objective

tip

Leave the Subtree-only objective checkbox unchecked. It specifies whether an Objective can be run on its own. If an Objective is marked for use as a subtree only, it is not directly runnable, but can instead be used inside another Objective in the same way we use behaviors.

Click the Create button. You should see a new, mostly empty objective.

Adding Behaviors​

You should see three behaviors pre-populated in your new objective.

info

In the behavior tree concept, a behavior, also known as a node in other domains, is the fundamental building block of a behavior tree. Each behavior represents a single unit of a robot skill or control logic. It always returns a status (SUCCESS, FAILURE, or RUNNING) when ticked, i.e., executed. Behaviors can perform actions, evaluate conditions, manage flow between other nodes, or modify other behaviors. They fall into categories such as Action, Condition, Control, Decorator, and Subtree nodes. By combining and organizing behaviors hierarchically, developers can create complex and reusable robot applications in a modular and maintainable way.

All new objectives are created with a simple valid behavior tree using the AlwaysSuccess behavior.

AlwaysSuccess
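
Under the hood, this starter objective is saved as a small XML file. The sketch below is an approximation of what it contains, assuming the BehaviorTree-style XML format that MoveIt Pro objectives use; the exact attributes the editor writes may differ slightly.

<!-- Approximate sketch of the newly created objective file -->
<root BTCPP_format="4" main_tree_to_execute="My Pick and Place">
  <BehaviorTree ID="My Pick and Place" _description="A simple example using hardcoded waypoints">
    <Control ID="Sequence">
      <!-- AlwaysSuccess is a placeholder behavior that simply returns SUCCESS -->
      <Action ID="AlwaysSuccess"/>
    </Control>
  </BehaviorTree>
</root>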

Delete the AlwaysSuccess behavior by either clicking on it and selecting the popup delete icon, or by pressing the Delete button on your keyboard.

tip

The MoveIt Pro behavior tree editor includes an Undo/Redo button that can protect you from accidental deletions or other mistakes. You can find it in the top left of the editor screen. Try using this feature to undo and redo the deletion.

Now we are going to add our first behavior.

info

MoveIt Pro ships with 200+ behaviors for all domains of robotics: motion planning, machine learning, inverse kinematics, Cartesian planning, real-time control, grasping, task planning, human-in-the-loop teleop, and more. You are also encouraged to build your own custom behaviors and plugins, which allow you to incorporate other ROS packages or third-party capabilities into MoveIt Pro for unlimited potential.

In MoveIt Pro, there are two main ways to add behaviors:

  1. Scroll through the full library of behaviors on the left sidebar, expanding categories as needed
  2. Use the search bar if you already know the behavior's name

Either way, once you find the behavior you must drag it onto the behavior tree editor and have it “snap” to a Sequence or other node.

Using your preferred approach, add the Move To Waypoint behavior to our new tree. Click on the behavior and scroll to the bottom of the sidebar list to find the waypoint_name port dropdown selector. Use the dropdown list to set the waypoint name to Look at Table.

Look at Table

tip

Each behavior in an objective has a name attribute that can be useful for identification. Let's set this behavior’s name to Look at Table.
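
In the underlying XML, the name attribute and the waypoint_name port you just set would appear roughly as follows. This is an approximation; the editor may write additional ports with default values.

<!-- Approximate: a Move To Waypoint behavior named and pointed at the Look at Table waypoint -->
<Action ID="MoveToWaypoint" name="Look at Table" waypoint_name="Look at Table"/>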

Next, add a Clear Snapshot behavior, then add a Take Wrist Camera Snapshot behavior to the tree. This will add a fresh point cloud to the visualization pane when you run the objective.

Clear Snapshot

As a sanity check, let’s run the objective while it is still under construction. You’ll see the Visualization pane update to show the depth camera view of the objects on the table.

Visualization

Now click the Edit button shortcut in the top right to jump back into editing this objective.

We are going to add a few more behaviors to complete the pick and place objective. Below is a complete list of steps, including the ones we’ve already added above.

Full Behavior Sequence​

Full sequence (a rough XML sketch of the finished tree follows this list):

  • Move To Waypoint: Look at Table
  • Clear Snapshot
  • Take Wrist Camera Snapshot
  • Move To Waypoint: Pick right cube
    • This is the waypoint you added earlier during the Teleop tutorial
  • Close Gripper
  • Move To Waypoint: Above Place Cube
    • This waypoint is used as a mid-point between pick and place
  • Move To Waypoint: Place Cube
  • Open Gripper
  • Move To Waypoint: Above Place Cube
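
For reference, the finished sequence corresponds roughly to the XML below. This is a sketch, not the exact file the editor produces: behavior IDs, port names, and whether a given step is an Action or a reusable SubTree may differ in your workspace.

<!-- Approximate sketch of the completed My Pick and Place sequence -->
<Control ID="Sequence">
  <Action ID="MoveToWaypoint" waypoint_name="Look at Table"/>
  <Action ID="ClearSnapshot"/>
  <Action ID="TakeWristCameraSnapshot"/>
  <Action ID="MoveToWaypoint" waypoint_name="Pick right cube"/>
  <Action ID="CloseGripper"/>
  <Action ID="MoveToWaypoint" waypoint_name="Above Place Cube"/>
  <Action ID="MoveToWaypoint" waypoint_name="Place Cube"/>
  <Action ID="OpenGripper"/>
  <Action ID="MoveToWaypoint" waypoint_name="Above Place Cube"/>
</Control>
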
tip

To save time, you can use the blue Duplicate icon on the top right side of any behavior to make a copy of it.

blank

tip

Take advantage of the build navigation buttons located in the lower left of the window. To keep your tree clean and organized, try the Auto Layout button!

blank

Your completed objective should look like this:

Completed

Run the objective to make sure it works.

We should see the robot pick up the right cube and place it in a different location on the table! We realize this is a very basic example, but this is our intro tutorial. MoveIt Pro supports much more advanced applications involving computer vision, reasoning, and machine learning.

tip

You can also edit objectives using your favorite IDE / code editor, as all behavior trees are saved to file in plain text XML format.

The behavior tree you just created can be found on your filesystem; run the following command in a terminal to see all of them:

ls -la ~/moveit_pro/moveit_pro_example_ws/src/lab_sim/objectives

Great job on creating your first objective! We will come back to this in a minute, but first we'll explain the concept of keep-out zones.

Adding Keep-out Zones​

In MoveIt Pro, keep-out zones are areas the motion planners must avoid, and are essentially easy-to-configure collision objects.

info

Collision Object: In motion planning, a collision object represents a physical item in the environment that the robot must avoid during motion. These objects are typically defined by their shape, size, and position in the planning scene and can include things like tables, walls, tools, or even other robots. Collision objects are used by the motion planner to ensure that generated paths are free of collisions, allowing the robot to move safely and efficiently through its workspace.

To add a keep-out zone, find the upper left side of the Visualization pane and you should see a cube symbol.

Cube symbol

Click on the cube to open the pop-up modal.

Keep-out zone

Adjust the Size parameter to 0.3m.

Click the Create button and you’ll see a red cube appear to the side of the robot.

Click on the cube and an interactive marker should appear.

Position 1, 1, 1

For this next step, try to follow the instructions exactly. In the image above, grab the red arrow on the right side and drag it until it's centered in front of the robot.

To hide the interactive marker, click anywhere outside the cube in the visualization pane. Here's the resulting keep-out zone location, from a different angle:

Interactive marker

Next we'll run the My Pick and Place objective again, and we should observe it fail due to this new collision object.

Failed place

Notice the red spheres that highlight where the collision would have occurred, had the motion plan been executed. In the next section we'll explain how to add teleop recovery to our custom-built objective to overcome this issue.

Modify your objective to add teleop recovery​

In some industries and applications, such as unstructured environments, a robot may encounter an unexpected condition where it is appropriate to call in a human operator to help recover the robot, “human in the loop” style. This is not true for all industries and applications, of course.

A unique feature of MoveIt Pro is that it allows user interventions, approvals, and feedback to be seamlessly integrated within the behavior tree. We can add a special behavior for teleop recovery before proceeding with the rest of the objective. To do this, we're going to modify our objective to add a Fallback behavior, which then switches into Teleoperation mode for the user to move the robot.

Adding a Fallback Node​

The location where our objective is currently failing is the second to last Move to Waypoint behavior in the tree. You can see this node highlighted red after running the My Pick and Place objective with the keep-out zone in the way (from the previous section).

FailingPlace

To overcome this, we will add a Fallback behavior to the objective to allow the application to recover from this failure.

  • Switch into edit mode for our previous My Pick and Place objective.
  • Add the Fallback behavior, dragging it into the editor, connecting it to the Sequence node, and placing it above the Open Gripper behavior.

Fallback

info

Fallback behaviors allow you to execute a different set of behaviors when you encounter a failure. These behaviors are called recovery behaviors.

  • Delete the line connecting the second-to-last Move to Waypoint behavior to the parent Sequence node. You can do this by clicking on the line and then pressing the Delete key.
  • Drag a line from the orphaned Move to Waypoint behavior to the new Fallback.

FailingPlace

At this point your behavior tree might look pretty messy. Click the Auto Layout button in the bottom left of your behavior tree editor.

FailingPlace

Adding Request Teleoperation​

We're going to use Request Teleoperation as our recovery behavior.

  • Find and add the Request Teleoperation behavior to the behavior tree, adding it below the Move to Waypoint behavior as a separate branch.

Fallback layout

  • Modify the Request Teleoperation behavior parameters:
    • Set the enable_user_interaction value to true.
    • Set the user_interaction_prompt text to say Choose a different place location.

Request Teleoperation
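
Structurally, the modified branch of the tree now looks roughly like the sketch below. The behavior IDs are approximations of what the editor generates; the two ports shown are the ones you just set in the sidebar.

<!-- Approximate sketch of the Fallback branch with teleop recovery -->
<Control ID="Fallback">
  <!-- First, try the normal place motion -->
  <Action ID="MoveToWaypoint" waypoint_name="Place Cube"/>
  <!-- If it fails (e.g. blocked by the keep-out zone), fall back to human-in-the-loop recovery -->
  <Action ID="RequestTeleoperation" enable_user_interaction="true" user_interaction_prompt="Choose a different place location"/>
</Control>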

  • Now run the My Pick and Place objective again.

This time, after the cube is picked up, the objective does not just stop at the failure. Instead, a Teleoperation menu appears and you can manually drive the robot, using various teleop modes like IMarker, to a different location to drop off the cube.

warning

If you do not have the Visualization view visible, you will not see the teleoperation prompt.

blank

Teleop the robot to a new location to drop off the cube, and click the Success button to continue operation of the still-running objective. It should then open the gripper and move to its home position automatically.

Summary​

By completing this tutorial, you’ve built a strong foundation in using MoveIt Pro—from running objectives and teleoperating the robot, to creating your own pick-and-place application with safety zones and recovery behaviors. You explored how to configure the user interface, use behavior trees for task logic, and integrate ML-based perception. With these essential skills, you're now ready to dive into more advanced capabilities like AprilTag-based vision, motion planning, and debugging tools in the next tutorial.

🎉 Congratulations, we're now ready to move to the next tutorial!