1. Basic Usage
Tutorial Overview
Welcome to the first MoveIt Pro tutorial! We will teach you how to run example objectives, as well as create your own.
An objective is essentially an application in MoveIt Pro, though it can also be a lower-level function (a subtree) that is combined into a higher-level objective. Objectives are created using behavior trees, which are similar to state machines. Behavior trees are composed of behaviors: the low-level nodes, or skills, such as opening a gripper or moving to a pose. To learn more about behaviors, see the Behaviors Concepts Page.
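Under the hood, objectives are saved as behavior tree XML (MoveIt Pro uses the BehaviorTree.CPP format). Here is a minimal sketch of how a subtree composes into a higher-level objective; the behavior and waypoint names are illustrative, not taken from a shipped configuration:

```xml
<root BTCPP_format="4" main_tree_to_execute="HigherLevelObjective">
  <!-- A lower-level function (subtree) that can also run on its own -->
  <BehaviorTree ID="ReleaseAndRetreat">
    <Sequence>
      <OpenGripper/>
      <MoveToWaypoint waypoint_name="Home"/>
    </Sequence>
  </BehaviorTree>
  <!-- A higher-level objective that combines the subtree with other behaviors -->
  <BehaviorTree ID="HigherLevelObjective">
    <Sequence>
      <MoveToWaypoint waypoint_name="Pick"/>
      <CloseGripper/>
      <SubTree ID="ReleaseAndRetreat"/>
    </Sequence>
  </BehaviorTree>
</root>
```

You do not need to hand-write this XML; the Build view editor covered later in this tutorial generates it for you.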
Start MoveIt Pro
We assume you have already installed MoveIt Pro. Launch the application if you haven't already, using:
moveit_pro run -c lab_sim
Run View
MoveIt Pro has two view tabs: a Build view, used when creating or editing objectives, and a Run view, used for running objectives. Click on the Run tab.
The Run view is very customizable; the image below provides a high-level overview of its functionality:
Panes Overview
Your configuration of panes may vary depending on previously saved settings, and you can always change the number of windows available by clicking on the Pane Selection menu in the top right. The contents of each pane can be changed using the drop-down lists in the top left corner of each pane.
Use the drop-down list to change the bottom right pane to `/wrist_camera/color`. This will change the camera view to the camera mounted on the robot’s wrist.
More information about each pane:
Visualization Pane
Displays a rendering of what the robot understands of the world, similar to the common RViz visualizer in ROS. In MoveIt Pro, this understanding of the world is stored in a Planning Scene. The Planning Scene is a combination of our robot’s URDF model, point clouds, octomaps, meshes, and geometric primitives. These data sources are used to avoid collisions between the robot and the world.
For lab_sim, you should see a robot arm on a rail. We shouldn't see any other objects from the simulated scene, since the robot has not 'perceived' them yet. The view of the scene can be adjusted by clicking within the pane and dragging the mouse around. The left mouse button rotates the scene, and the right mouse button drags the scene.
Camera Feeds
The `/scene_camera/color`, `/scene_camera/depth`, `/wrist_camera/color`, and `/wrist_camera/depth` panes show the camera feeds from the underlying simulator. If you were connected to hardware, these cameras would show the real world rather than a simulation. By default, the panes are set to the scene cameras, which are simulated third-person views of the robot in color and depth, respectively.
Behavior Tree Pane
This pane shows the most recently run objective, if any. While an objective is running, this pane highlights which behavior is currently executing, which is useful for debugging and introspection.
Blackboard Pane
This pane shows the variables being passed around on the behavior tree blackboard. These variables are the key data shared between behaviors.
Setting Favorites
The Favorites toolbar at the top of the user interface can be customized so that your most commonly used objectives are quickly accessible. To add or remove objectives from the toolbar, click on the three dots next to an objective in the objectives sidebar to bring up a drop-down menu, and then star or unstar the objective.
Now that you understand the Run view, we’re ready to start running objectives! ⭐
Scan the environment into the planning scene
First, our virtual world in the Visualization pane is a bit empty - let’s scan in the entire environment by running the `_Scan Scene` objective. We can find it in three different ways:
- In the favorites toolbar
- By scrolling through the objectives sidebar categories
- Or by typing in the name of the objective in the sidebar search area
After running `_Scan Scene`, you should see:
Running an example lab application
To run an objective in the Run view, choose `Move Beakers to Burners`. We should see the robot arm pick up each of the three flasks on the left side of the workspace and move them to the burner on the right. When finished, the objective status should update to Objective complete!
Now run the `Push Button` objective and the robot will press the controls of a simulated burner to heat up the beakers 🧪. This objective uses admittance/force control to push the button with a precise force. That’s science! ⚗️
Modifying the Planning Scene
We can clear the scanned-in environment at any time by running the `Clear Snapshot` objective - try that now. We should see the Visualization pane cleared out.
Next, you can take a snapshot from a different camera - the camera on the robot’s wrist. Run the `Take wrist camera snapshot` objective and you should see a much smaller area show up as a point cloud.
Restore the full scene by running `_Scan Scene` a second time.
Stop looping objectives
Some applications will run forever until you tell them to stop. The `3 Waypoints Pick and Place` objective is a simple example of this - run it now. It will pick up a small cube using hard-coded waypoints, forever. Use the Stop Motion button on the top right to stop the loop once you've seen it run completely. We'll see the objective status change from Executing to Objective canceled.
To reset the robot to its original pose, run the `Look at Table` objective. This objective is an example of a very simple behavior tree that simply commands the robot to a named waypoint. We can also move to waypoints using the Teleoperate functionality that is covered later in this tutorial.
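As a rough sketch, such an objective can be as small as one behavior inside a sequence (assuming a behavior ID matching the `Move To Waypoint` behavior used later in this tutorial; the shipped objective's exact XML may differ):

```xml
<root BTCPP_format="4" main_tree_to_execute="LookAtTable">
  <BehaviorTree ID="LookAtTable">
    <Sequence>
      <!-- Moves the arm to a previously saved joint-state waypoint -->
      <MoveToWaypoint waypoint_name="Look at Table"/>
    </Sequence>
  </BehaviorTree>
</root>
```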
Running an example ML objective
MoveIt Pro offers a powerful set of tools for using machine learning models within your robot application, while still providing safety and reliability guardrails. One example that ships out of the box is ML-based image segmentation. Given a text prompt like “hotdog”, the model will search for all instances of hotdogs within the camera field of view and return their locations for use in manipulation.
Let’s run the Segment Image from Text Prompt objective. The default value of the prompt string is `an object`. The output of this objective is a new camera topic, `/masks_visualization`, that should automatically be added to our view panes, with all the detected objects highlighted in different colors.
There is a bug in 7.5.1 that requires you to run this objective twice in order to correctly view the `/masks_visualization` images. This should be fixed in 7.6.
Modify the example ML objective
Now we are going to edit the Segment Image from Text Prompt objective by selecting the Edit button on the top right of the menu bar.
We should see the behavior tree ready to be edited:
Click on the `Segment Image from No Negative Text Prompt` Subtree so that a sidebar opens on the right side. In this sidebar you can easily set and edit parameters.
Scroll down in the sidebar until you see the `prompts` variable, towards the bottom. Change the value from `an object` to `bottle`. This will prompt the ML model to find only bottles in the camera image.
Now choose the Run button from the top right of the menu bar to run the objective again. We'll see the `/masks_visualization` camera image update to show that only the bottles are segmented.
Try changing the value of `prompts` to `flask` and you should see that the flasks are now segmented. For fun, try random inputs like “dog” and see what happens. 🥴
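For the curious: in the saved behavior tree XML, the prompt is just an input port on the subtree node. A hedged sketch (the subtree ID here is assumed from its display name above, and the subtree may take additional ports):

```xml
<!-- Sketch only: "prompts" is the port edited in the sidebar above -->
<SubTree ID="SegmentImageFromNoNegativeTextPrompt" prompts="flask"/>
```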
View Parameters on the Blackboard
Now that you are editing parameters within behavior trees, we should mention that MoveIt Pro uses a blackboard to store and share data between behaviors and behavior trees. To view the state of the blackboard, choose `Blackboard` from the drop-down menu in any view pane. Then expand the arrow buttons to see the variables within each subtree, and you should see a list of variables that are currently on the blackboard.
Find the variable named `prompts`; its value should be whatever you most recently set it to. This tool is useful for debugging complex data flows within a behavior tree.
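In the underlying XML, blackboard variables are referenced with curly braces on a behavior's ports: one behavior writes a value and another reads it. A minimal sketch with illustrative behavior and port names:

```xml
<Sequence>
  <!-- Writes its output to the blackboard variable "masks" -->
  <SegmentImage prompts="flask" masks="{masks}"/>
  <!-- Reads the same blackboard variable as an input -->
  <VisualizeMasks masks="{masks}"/>
</Sequence>
```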
Teleoperate the Robot
MoveIt Pro provides four types of manual control for when a robot needs to be set up, diagnosed, or recovered.
To dive in, click on the Teleoperate button on the top right of the toolbar.
In the top left of the menu bar you should see four available Teleoperation modes - click through them to explore how the user interface changes for each mode.
- Waypoints
- IMarker (Interactive Marker)
- Pose Jog
- Joints Jog
Waypoint Control
Waypoints are saved joint states that can be re-used later in objectives or standalone. The top toolbar also provides some favorite waypoints for quick access during teleoperation. Try running a few waypoints to get a feel for it.
The Waypoints sidebar to the left contains the full list of options, including the ability to save, edit, and favorite waypoints.
Interactive Marker Control
The Interactive Marker (IMarker) teleoperation mode allows users to move the robot's end effector in 6 degrees of freedom, using arrows for translation and discs for rotation. Try dragging the arrows and rotating the discs to get an idea of how the interactive marker can be moved.
As the interactive marker is moved, a motion will be automatically planned that moves the end effector from the current pose to a goal pose. A preview of the trajectory will be visualized, and if it looks safe and desirable, you can approve the trajectory by clicking the green check button.
If no preview is shown, it means that there is no valid inverse kinematics solution for the desired pose. We may have dragged the interactive marker beyond the robot’s physical reach. If the marker is in an undesirable state, the Reset Marker button will reset the marker back to the current end effector pose.
Some elements of the simulation scene (e.g. the bench) may not be known to the robot for planning purposes, and therefore you can command the robot to collide with those elements, unless they are added to the planning scene. Check out our how-to guide on creating keepout zones and saving/loading a Planning Scene for more information.
Cartesian Pose Jog Control
The Pose Jog mode enables the user to translate or rotate the end effector along different planes or axes, and open or close the gripper. To use this mode most effectively, we recommend you switch your largest view pane to `/wrist_camera/color`, since the controls are mapped to the coordinate frame of the wrist camera. Try out these controls now to get a feel for them.
Gripper Control in Pose Jog
Note - on the bottom left of the Visualization pane are buttons that can be used to open and close the gripper.
Pose Jog Settings
On the toolbar you’ll see a settings icon, which allows you to change the jog speed and turn off collision checking if needed. It also has a setting for changing planning groups, which is an advanced feature you can ignore for now. Try adjusting these controls to see how they affect the behavior.
There can be situations where the robot collides with an object during an objective and can no longer be teleoperated because the beginning of the trajectory is in collision. In that case, Jog Collision Checking can be turned off so that the robot can be teleoperated. It’s recommended to keep collision checking on unless you have moved the robot into a collision and need full manual control to get it back into a safe state. After the robot is back in a collision-free state, turn collision checking back on for safety reasons.
The Jog Collision Checking and Jog Speed parameters are only used when jogging a joint via the +/- buttons in the Joints Jog view or when using the endpoint jog buttons around the Visualization pane in the Pose Jog view. This is because those two methods use MoveIt Servo (and the respective servo parameters in the robot configuration package), whereas the other modes (such as the slider in Joints Jog view and Interactive Marker in IMarker view) use a regular motion planner.
To make it easy to control our jogging during this tutorial, change the Jog Speed to 30% using the slider.
Joint Control
The Joints Jog mode can be used to perform low-level individual joint control. Switch to this mode and you should see a sidebar appear that offers several control options:
- The +/- buttons allow you to jog the arm slowly. The speed scale can help adjust the precision.
- The slider bar allows you to automatically move to a setpoint.
- The text input box allows you to type an exact degree or radian value you want the robot to achieve.
In addition, you can copy the joint values, see the joint limits, and switch between radians and degrees.
Gripper Control in Joint Jog
In the secondary navigation area at the top, there are the "Open Gripper" and "Close Gripper" buttons for controlling the end effector.
Teleoperating the gripper to joint values other than "Open" or "Close" is not currently supported.
Create a Waypoint
Now that you know how to use various teleoperation modes, you can create a new waypoint! We’re going to create a waypoint that grasps the cube on the right side of the table.
Use the various teleop modes to drive the robot arm to a grasp position that envelopes the cube. We will want to first open the gripper, and we recommend using Pose Jog as the easiest mode for driving the arm around the cube. Remember to set the largest camera view to `/wrist_camera/color`.
Now create a waypoint from your robot pose by switching to the Waypoints mode, then pressing the +Waypoint button.
Name your new waypoint `Pick right cube`.
You can adjust which parts of the robot to save as a waypoint by clicking the Change Planning Group button when adding a new waypoint. However, this is an advanced feature you do not need to worry about for now.
Click Create to finish creating your new waypoint. We’ll use this later when we create a new objective.
When you're ready, click either the Stop Motion button at the top right, or the Stop and Exit button on the left to exit teleoperation mode.
Building a new objective
Now that you have run objectives, know how to teleoperate, and know how to save waypoints, you're ready to begin building a simple objective yourself. We’re going to build a simple pick and place application using the waypoint you created earlier called `Pick right cube`, as well as some existing waypoints that ship out of the box.
To begin, select the Build tab on the top menu bar.
In the Build tab, select +Objective. This opens the New Objective dialog.
Enter `My Pick and Place` as the name, then click Create.
Now you should have a new, empty objective. By default, every new objective begins with the `AlwaysSuccess` behavior so that it is considered a valid behavior tree.
Delete the `AlwaysSuccess` behavior by either clicking on it and selecting the popup delete icon, or by pressing the Delete key on your keyboard.
Our Behavior Tree editor includes an Undo/Redo button that can protect you from accidental deletions or other mistakes.
Now we are going to add our first behavior. There are two main ways to add behaviors:
- Scroll through the full library of behaviors on the left sidebar, expanding categories as needed
- Use the search bar to find the behavior from memory
Either way, once you find the behavior you must drag it onto the behavior tree editor and have it “snap” to a Sequence or other node.
Using your preferred approach, add the `Move To Waypoint` behavior to our new tree. Click on the behavior and scroll to the bottom of the sidebar list to find the `waypoint_name` port dropdown selector. Use the dropdown list to set the waypoint name to `Look at Table`.
If you want the `Look at Table` label to be easily visible in the tree, you can also set the behavior’s `name` attribute to `Look at Table`.
Next, add a `Clear Snapshot` behavior, then add a `Take Wrist Camera Snapshot` behavior to the tree. This will add a fresh point cloud to the Visualization pane when you run the objective.
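At this point, the saved objective corresponds roughly to the following XML (a sketch, assuming behavior IDs that match the editor's display names; the actual generated XML may differ):

```xml
<root BTCPP_format="4" main_tree_to_execute="MyPickAndPlace">
  <BehaviorTree ID="MyPickAndPlace">
    <Sequence>
      <!-- The name attribute is the optional label set above -->
      <MoveToWaypoint name="Look at Table" waypoint_name="Look at Table"/>
      <ClearSnapshot/>
      <TakeWristCameraSnapshot/>
    </Sequence>
  </BehaviorTree>
</root>
```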
As a sanity check, let’s run our objective while it's still under construction; you’ll see the Visualization pane update to show the depth camera view of the objects on the table.
Now click the Edit button shortcut in the top right to jump back into editing this objective.
We are going to add a few more behaviors to complete the pick and place objective. Below is a complete list of steps, including the ones we’ve already added above (a hedged XML sketch of the finished tree follows the list).
"My Pick and Place" Objective
Full sequence:
1. Move To Waypoint: Look at Table
2. Clear Snapshot
3. Take Wrist Camera Snapshot
4. Move To Waypoint: Pick right cube
   - This is the waypoint you added earlier during the Teleop tutorial
5. Close Gripper
6. Move To Waypoint: Above Place Cube
   - This waypoint is used as a mid-point between pick and place
7. Move To Waypoint: Place Cube
8. Open Gripper
9. Move To Waypoint: Above Place Cube
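Saved to disk, the finished tree would look roughly like this (a sketch; the behavior IDs are assumed from the editor names, and the generated XML may differ):

```xml
<root BTCPP_format="4" main_tree_to_execute="MyPickAndPlace">
  <BehaviorTree ID="MyPickAndPlace">
    <Sequence>
      <MoveToWaypoint waypoint_name="Look at Table"/>
      <ClearSnapshot/>
      <TakeWristCameraSnapshot/>
      <MoveToWaypoint waypoint_name="Pick right cube"/>
      <CloseGripper/>
      <MoveToWaypoint waypoint_name="Above Place Cube"/>
      <MoveToWaypoint waypoint_name="Place Cube"/>
      <OpenGripper/>
      <MoveToWaypoint waypoint_name="Above Place Cube"/>
    </Sequence>
  </BehaviorTree>
</root>
```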
To save time, you can use the blue Duplicate icon on the top right side of any behavior to make a copy of it.
Your completed objective should look like this:
Run the objective to make sure it works.
We should see the robot pick up the right cube and place it in a different location on the table! We realize this is a very basic example, but this is our intro tutorial. MoveIt Pro can handle much more advanced applications involving computer vision, reasoning, and machine learning.
Modify the objective to add teleop recovery
In some industries and applications, such as unstructured environments, a robot may encounter a condition that causes it to fail. In these situations, it may be appropriate to call in a human operator to help recover the robot - “human in the loop” style. This is not appropriate for all industries and applications, of course.
A unique feature of MoveIt Pro is that it allows user interventions, approvals, and feedback to be seamlessly integrated within the behavior tree. We can add a special behavior for teleop recovery before proceeding with the rest of the objective. To illustrate how to do this, we're going to modify our objective to add a `Fallback` behavior, which then switches into Teleoperation mode for the user to move the robot.
Steps to Modify
Switch into edit mode for our previous `My Pick and Place` objective.
Next, add a new behavior called `AlwaysFailure`, dragging it into the objective. Connect the input of the `AlwaysFailure` behavior to the bottom of the sequence:
Before continuing, let’s see what this rather arbitrary change does. Run the objective and you should see the objective run and then fail at the end with the status changing to Objective failed and an error message pop-up. This is expected.
Next, we're going to add a `Fallback` behavior to the objective to allow the application to recover from this failure. `Fallback` behaviors allow you to execute a different set of behaviors when you encounter a failure. These behaviors are called recovery behaviors.
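Conceptually, a Fallback node tries its children in order and only moves on to the next child when the previous one fails. A minimal sketch of the pattern (both child behavior names here are hypothetical):

```xml
<Fallback>
  <!-- Tried first; if it succeeds, the Fallback succeeds and stops here -->
  <TryPrimaryAction/>
  <!-- Runs only when the child above fails: the recovery behavior -->
  <RecoverFromFailure/>
</Fallback>
```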
We're going to use the `Request Teleoperation` objective as a recovery behavior. Go back into edit mode for your objective and add the `Fallback` behavior, dragging it to the editor and adding it to the bottom of the sequence.
Click on the existing connection (line) between the `Sequence` node and the `AlwaysFailure` node, then use the Delete key to remove it.
Next, connect the `Fallback` between those nodes:
Next, find the `Request Teleoperation` behavior and add it below the `AlwaysFailure` as a separate branch.
Within the `Request Teleoperation` behavior parameters, set the `enable_user_interaction` value to `true`.
Also set the `user_interaction_prompt` text to say “Place the cube”.
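The modified end of the tree would then look roughly like this in XML (a sketch; the behavior ID for `Request Teleoperation` is assumed from its display name, and it may expose additional ports):

```xml
<Sequence>
  <!-- ...the pick and place behaviors from earlier... -->
  <Fallback>
    <!-- Stands in for the failing step -->
    <AlwaysFailure/>
    <!-- Recovery: hand control to a human operator -->
    <RequestTeleoperation enable_user_interaction="true"
                          user_interaction_prompt="Place the cube"/>
  </Fallback>
</Sequence>
```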
Now run the objective again. This time, after the cube is picked up, the objective does not just stop. Instead, a Teleoperation menu appears and you can manually drive the robot to a drop-off location for the cube.
Drive the robot to a new location to drop off the cube, and click the Open Gripper button to release it. When you are finished, use the Success button to end the objective.
Summary
We've now completed this tutorial. You learned how to:
- Run built-in objectives
- Teleoperate a robot
- Create a waypoint
- Create a new objective
- Modify an objective
- Add a recovery behavior
🎉 Congratulations, we're now ready to move to the next tutorial!