
3D Object Registration and Scanning

For this example, we will use a simulated Universal Robots UR20 to perform object registration and scanning tasks.

Launch MoveIt Pro

We assume you have already installed MoveIt Pro to the default install location. Launch the application using:

moveit_pro run -c grinding_sim

Point Cloud Registration to a 3D Mesh

MoveIt Pro can load a ground truth model as a point cloud from either an .stl mesh file or a .pcd point cloud file. We can then use the Iterative Closest Point (ICP) algorithm to register a live point cloud from the camera to this ground truth model. The output of registration is the estimated 3D pose of the object with respect to a given coordinate frame.

The workflow for doing this is:

  1. Move to a predefined waypoint that provides a good approximate view of the object.
  2. Load a point cloud representing the ground truth model using the LoadPointCloudFromFile Behavior.
  3. Capture point cloud data from a sensor and transform it from the sensor frame to the world frame using the TransformPointCloudFrame Behavior.
  4. Register the live point cloud to the ground truth point cloud using the RegisterPointClouds Behavior. Note that ICP requires an initial pose estimate, which can be created with the CreatePoseStamped Behavior (see the sketch after this list).
  5. Visualize the results by transforming the point cloud and sending it to the UI.
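
For intuition, here is a minimal Python sketch of the registration step using the open-source Open3D library. This is not the MoveIt Pro implementation: the file names, sampling density, and correspondence distance below are illustrative assumptions.

import numpy as np
import open3d as o3d

# Ground truth model: sample the .stl mesh into a point cloud.
mesh = o3d.io.read_triangle_mesh("machined_part.stl")  # placeholder path
model_cloud = mesh.sample_points_uniformly(number_of_points=20000)

# Live sensor cloud, assumed already transformed into the world frame (step 3).
live_cloud = o3d.io.read_point_cloud("live_scan.pcd")  # placeholder path

# Rough initial pose estimate (step 4's note); identity is rarely good enough
# in practice, so seed it with the approximate model placement.
init_guess = np.eye(4)

# ICP aligns the model cloud to the live cloud, so the resulting transform is
# the estimated pose of the model's origin in the world frame.
result = o3d.pipelines.registration.registration_icp(
    model_cloud,
    live_cloud,
    max_correspondence_distance=0.02,  # meters; tune to your sensor noise
    init=init_guess,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("Estimated object pose:\n", result.transformation)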

To run this example, execute the Register Machined Part Annotated Objective. Object registration gives us an estimate of the transform from the origin of the registered model to the world frame. This transformation, denoted in the Objective as {registered_pose}, will be used in the next step. Each stage of registration is illustrated in the following images.

  1. Provide an initial guess for the model position. (Figure: world to initial guess)
  2. Registration outputs the transform from the initial position to the model. (Figure: registration correction)
  3. The model's position in the world frame is then calculated. (Figure: registration correction)
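
In other words, the final pose is simply the composition of the initial guess with the ICP correction. The short numpy sketch below shows the arithmetic; all transform values are made up for illustration, and rotations are left as identity for brevity.

import numpy as np

def make_tf(translation):
    """Build a 4x4 homogeneous transform with identity rotation."""
    tf = np.eye(4)
    tf[:3, 3] = translation
    return tf

# Stage 1: initial guess of the model pose in the world frame.
world_T_guess = make_tf([0.5, 0.0, 0.2])

# Stage 2: ICP correction, from the initial guess to the actual model.
guess_T_model = make_tf([0.01, -0.02, 0.0])

# Stage 3: compose the two to get the model pose in the world frame;
# this is what the Objective stores as {registered_pose}.
world_T_model = world_T_guess @ guess_T_model
print(world_T_model)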

Executing a Trajectory Relative to the Registered Object

Now that we have registered our object, we have an updated estimate for its pose in the world. Again, this corresponds to the {registered_pose} blackboard variable.

With this, we can plan a Cartesian path on the object surface for grinding. Importantly, these poses need to be expressed with respect to the registered object transform. This ensures that the grinding path is robust to different placements of the object, so long as the object can be successfully localized.
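
As a concrete illustration of this frame convention, the sketch below maps a waypoint authored in the object's own frame into the world frame using the registered pose. Both matrices are placeholders.

import numpy as np

# Registered object pose in the world frame (the {registered_pose} value),
# shown here as an identity placeholder.
world_T_object = np.eye(4)

# A grinding waypoint authored relative to the object's own frame.
object_T_waypoint = np.eye(4)
object_T_waypoint[:3, 3] = [0.10, 0.05, 0.00]  # illustrative surface offset

# Because waypoints are defined relative to the object, re-registering after
# the part moves updates the world-frame path with no re-teaching.
world_T_waypoint = world_T_object @ object_T_waypoint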

For our grinding task, we will add the following to our Objective:

  1. Load a list of poses from a file using the LoadPoseStampedVectorFromYaml Behavior (see the sketch after this list).
  2. Loop through each of these poses with the ForEachPoseStamped Behavior, visualizing each one with the VisualizePose Behavior.
  3. Use the PlanCartesianPath Behavior to plan a path through all of the points.
  4. Execute the planned motion with the ExecuteFollowJointTrajectory Behavior.
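
To make the pose-loading step concrete, here is a small self-contained Python sketch. The YAML schema shown is an assumption for illustration and may not match the exact format that LoadPoseStampedVectorFromYaml expects.

import yaml

# Hypothetical pose list; consult the Behavior's documentation for the
# real schema. Poses are authored relative to the registered part frame.
GRIND_PATH_YAML = """
poses:
  - frame_id: registered_part
    position: {x: 0.10, y: 0.05, z: 0.00}
    orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}
  - frame_id: registered_part
    position: {x: 0.12, y: 0.05, z: 0.00}
    orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}
"""

data = yaml.safe_load(GRIND_PATH_YAML)

# Mirror ForEachPoseStamped (step 2): visit each pose in order. A Cartesian
# planner would then interpolate through these waypoints (step 3).
for i, pose in enumerate(data["poses"]):
    print(i, pose["frame_id"], pose["position"], pose["orientation"])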

To run this example, execute the Grind Machined Part Objective.

(Figure: grinding objective)

Next Steps

In this tutorial, you have seen some example Objectives for 3D object registration and planning. You can augment the capabilities of your robot in several ways, including:

  • Trying this approach with a different 3D object.
  • Incorporating fallback logic when registration fails, such as retrying with a new initial guess.
  • Using geometric processing Behaviors to fit a plane to the ground surface and remove it before registration.
  • Using machine learning-based segmentation models to isolate the object of interest in the point cloud before registration.