Plotting with robots: Accurate robot path creation
Using robots for gluing is commonplace in automated production lines. But how can pinpoint glue application accuracy be maintained when target objects are easily deformed? The answer lies in rigorous object measurement and derivation of traversable paths from point cloud data.
News | 4 min read | 2025-10-12
Robots require spatial awareness to operate autonomously, especially in unstructured environments. Currently, ABB possesses well-established hardware and methods, such as 3DQI, that are effective for reconstructing 3-D characteristics of parts. These tools are commonly used in the automotive industry to measure large-scale objects, such as car bodies. However, in the electronics industry, 3-D measurement and handling of smaller components are more challenging. Moreover, this sector has lacked a solution for high-precision gluing applications involving easily deformable parts.
Point cloud reconstruction and path extraction
For high-precision gluing applications on easily deformable parts, ABB has turned to two techniques: robot-based point cloud reconstruction and path extraction »01. Point cloud reconstruction involves building a digital data set that provides a precise 3-D representation of an object. In the work described here, a robot uses auxiliary vision equipment and markers to gather the required data. From this data, a point cloud map (ie, the digital 3-D representation) is extracted and a gluing path calculated. The techniques discussed in this article focus on achieving a point cloud reconstruction and path accuracy that meets industry standards.
Remaining within computational time constraints is paramount: as accuracy increases, so does data granularity and, with it, computational overhead. Long processing times are intolerable in a manufacturing environment, so innovative approaches to speed up overall algorithm execution were developed in the current project.
Principle of point cloud reconstruction
In principle, a robot-based point cloud reconstruction system incorporates a robot, a 3-D sensor, a 2-D camera and multiple markers. The 2-D camera and 3-D sensor are mechanically connected to the robot’s flange plate. The marker points are randomly distributed around the location where the object will be.
The robot base, robot end, 3-D sensor, 2-D camera, markers and object each have their own coordinate systems. Initial point cloud measurements of the object at different shooting poses are obtained by the 3-D sensor and are recorded in the sensor coordinates.
These coordinates must be converted into a coordinate system suitable for further point cloud registration processing. The robot coordinates, for example, could serve as a frame of reference for this task, but they are not accurate enough for this particular application. Therefore, the marker system was introduced: it provides a frame of reference that can be measured by a 2-D camera and delivers better accuracy than the robot coordinates.
01 Overview of point cloud reconstruction and path extraction.
02 Robot-based point cloud reconstruction has two phases: calibration and measurement.
In other words, the procedure for converting point cloud data to marker coordinates is as follows: the “object-to-3-D-sensor” step provides the point cloud in sensor coordinates; the “sensor-to-camera” and “camera-to-marker” transformations follow. Ultimately, the point cloud is expressed in marker coordinates. The need for accuracy underlines the importance of precision in these transformations, as well as rigorous optical and positional calibration of the data capture elements.
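The chaining of coordinate transformations described above can be sketched with homogeneous 4×4 matrices. This is a minimal illustration only: the matrices below are stand-ins for calibrated values, and real calibrations would include rotations, not just translations.

```python
# Sketch of the coordinate chain: a point measured in 3-D-sensor
# coordinates is carried into marker coordinates via the sensor-to-camera
# and camera-to-marker transforms. All numbers are illustrative only.

def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return [sum(t[i][k] * v[k] for k in range(4)) for i in range(3)]

def translation(tx, ty, tz):
    """Identity rotation with a translation, standing in for calibration."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

T_cam_sensor = translation(0.10, 0.0, 0.0)   # sensor -> camera (calibration)
T_marker_cam = translation(0.0, 0.25, 0.0)   # camera -> marker (PnP)

# Chain the transforms: point in sensor coordinates -> marker coordinates.
T_marker_sensor = matmul(T_marker_cam, T_cam_sensor)
p_marker = apply(T_marker_sensor, (0.0, 0.0, 0.5))
print(p_marker)  # -> [0.1, 0.25, 0.5]
```

Any calibration error in either transform propagates directly into the merged point cloud, which is why the article stresses the precision of each link in the chain.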
How to transform between coordinate systems
The setup adjustments needed for the system under discussion are common to many robot vision systems and are, therefore, well-tried and tested. The marker-to-camera transformation is determined by the so-called perspective-n-point (PnP) algorithm. This method is commonly used to deduce a camera’s pose with respect to an object. Images of the markers are taken by the camera mounted on the robot’s end effector in different poses – ie, various camera positions and orientations »02. Two key issues affecting this transformation’s accuracy are the calibration of camera intrinsic parameters and ellipse detection (see below).
The camera-to-sensor transformation is determined by a procedure that is similar to the so-called eye-in-hand calibration. This calibration is used to relate what the camera sees to what the 3-D sensor perceives. Thus, the relative position and orientation of the camera with respect to the 3-D sensor can be determined »02.
Calibrating the camera’s intrinsic parameters
As in other robot vision setups, calibrating the camera involves capturing images of a known calibration pattern and performing a series of transformations and complex, proprietary algebraic manipulations. Distortion parameters are considered in an iterative re-estimation of values to minimize errors.
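The iterative re-estimation idea can be shown in miniature with lens undistortion: given the common radial model x_d = x·(1 + k1·r² + k2·r⁴), the undistorted point is recovered by fixed-point iteration, dividing out the distortion factor evaluated at the current estimate. The coefficients below are hypothetical, not calibrated values from the system in the article.

```python
# Toy radial-distortion model and its iterative inversion.
K1, K2 = -0.12, 0.03  # illustrative radial distortion coefficients

def distort(x, y):
    """Apply the radial model to a normalized image point."""
    r2 = x * x + y * y
    f = 1.0 + K1 * r2 + K2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, iterations=20):
    """Recover the undistorted point by fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        f = 1.0 + K1 * r2 + K2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

xd, yd = distort(0.3, 0.4)
x, y = undistort(xd, yd)
print(round(x, 6), round(y, 6))  # converges back to (0.3, 0.4)
```

For small distortion the iteration is a contraction, so a handful of passes already reaches machine precision; production calibrations estimate the coefficients themselves in a similar minimize-and-re-estimate loop.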
Ellipse detection and relationship calculation
The markers are all circles, not necessarily of the same size. When they are presented to the camera, the circles appear as ellipses. The ellipse centers given by ellipse detection are crucial because they improve the measurement accuracy of the marker system in the 2-D camera coordinate system (the marker-to-camera transformation).
The most critical factor affecting the accuracy of ellipse detection is the edge detection process. The method of accurate subpixel edge location based on the partial area effect has been used here as it can obtain the most precise edge features »03.
03 Multiple ellipses captured in the image are used to estimate the camera’s position relative to the markers. The red points are the centers of the ellipses.
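The center-extraction step can be sketched as a direct conic fit: sample edge points of an ellipse, fit Ax² + Bxy + Cy² + Dx + Ey + F = 0 by least squares, and recover the center analytically from the coefficients. This is a toy stand-in for the subpixel pipeline described above; the ellipse parameters below are made up.

```python
# Fit a conic to synthetic "detected edge" points and recover the center.
import math
import numpy as np

# Ellipse centered at (4.0, -1.5), semi-axes 3.0 and 1.2, rotated 0.4 rad.
cx, cy, a, b, phi = 4.0, -1.5, 3.0, 1.2, 0.4
t = np.linspace(0.0, 2.0 * math.pi, 60, endpoint=False)
x = cx + a * np.cos(t) * math.cos(phi) - b * np.sin(t) * math.sin(phi)
y = cy + a * np.cos(t) * math.sin(phi) + b * np.sin(t) * math.cos(phi)

# Least-squares conic fit: the null vector of the design matrix.
D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
_, _, vt = np.linalg.svd(D)
A, B, C, Dc, E, F = vt[-1]

# Ellipse center from the conic coefficients.
den = 4.0 * A * C - B * B
x0 = (B * E - 2.0 * C * Dc) / den
y0 = (B * Dc - 2.0 * A * E) / den
print(round(x0, 3), round(y0, 3))  # -> 4.0 -1.5
```

With noisy edges, the quality of this fit depends directly on the edge localization, which is why the subpixel edge method »03 matters so much for the final accuracy.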
Path extraction based on non-rigid registration
Generally, rigid registration assumes that the object maintains the same shape and size during the transformation process, taking rotation and translation into account. Non-rigid registration (NRR) is suitable for deformation cases and involves aligning two or more point cloud datasets that may differ due to deformation. An example procedure for this could be:
- Perform an NRR, such as coherent point drift (CPD), between the point cloud from the CAD model and the point cloud of the workpiece. The NRR is applied to cross-sections, of which there will be hundreds along the path.
- Mark the CAD path from the point cloud extracted from the CAD model or the golden part of the workpiece (ie, a “most-representative” part used as an example in the design or commissioning of the automation process).
- Transform the workpiece path from the CAD path using the non-rigid transformation parameters »04.
04 Path extraction and robot 3-D point cloud reconstruction: Two sets of point cloud data are aligned using rigid registration, then NRR is applied for each cross-section along the path. Here, “x” indicates positional data.
This work implemented CPD for NRR as it is robust, insensitive to noise and uses a global optimization strategy to find the optimal deformation field, thereby ensuring the consistency of the overall registration.
However, the CPD method has issues with high memory consumption and slow speed when dealing with large-scale point clouds. ABB improved the process by slicing the point cloud and registering these regions separately and in parallel.
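The slicing strategy can be sketched as follows: cut the point cloud into cross-sections along the path axis and register each slice in parallel. The `register_slice` function here is a deliberate placeholder (it just returns the slice centroid), standing in for the per-slice CPD step of the real pipeline; the slice width and points are illustrative.

```python
# Minimal sketch of slicing a point cloud and processing slices in parallel.
from concurrent.futures import ThreadPoolExecutor

def slice_cloud(points, axis=0, width=1.0):
    """Group points into cross-sections of the given width along one axis."""
    slices = {}
    for p in points:
        slices.setdefault(int(p[axis] // width), []).append(p)
    return [slices[k] for k in sorted(slices)]

def register_slice(pts):
    # Placeholder for the per-slice non-rigid registration (e.g. CPD):
    # returns the slice centroid as a stand-in result.
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

points = [(0.2, 1.0, 0.0), (0.8, 3.0, 0.0), (1.3, 5.0, 0.0), (1.7, 7.0, 0.0)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(register_slice, slice_cloud(points)))
print(results)  # one registration result per cross-section
```

Because each slice is far smaller than the full cloud, the per-slice registrations cut both memory use and wall-clock time, which is the bottleneck the CPD modification targets.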
Verify and clarify
To verify the proposed method, a test system was set up comprising an IRB 120 robot with a repeatable positioning accuracy of ±0.01 mm, a third-party, high-accuracy 3-D scanner, a 2-D camera with an 8 mm lens, a semicircular light source and several 3M reflective markers »05. The pose relationship between the camera and the fiducial markers was predetermined with the 3-D scanner. The calibrations of the different elements followed the same procedures as described earlier.
As alluded to above, to achieve high-precision point cloud reconstruction, the point cloud captured by the 3-D sensor must be accurately transformed to the marker point coordinate system.
05 Verification system setup.
As before, the camera was positioned at various poses within a predefined range to capture the image of markers from different perspectives. The captured images were then processed using an ellipse detection algorithm to identify the markers within the image.
This step is critical as accurate marker identification is crucial for determining the relationship between the marker point and camera coordinate systems. Finally, the PnP method was used to solve for the camera’s pose relative to the marker coordinate system using geometric constraints and known marker positions. The method iteratively optimizes the pose estimation until a satisfactory solution is achieved.
Subsequently, a measurement phase collects data on the object of interest and the point cloud is reconstructed, based on the measured data and the results of calibration. Many samples were measured to assess the accuracy of the system’s reconstruction capabilities. One example is shown in »06.
To ensure the accuracy of the ground truth for the tested object, a third-party organization was brought on board to conduct precise measurements using a scanner with an accuracy one order of magnitude higher than the test unit. The results showed that the global point cloud reconstruction accuracy reaches a remarkable 0.015 mm, highlighting the efficacy of the reconstruction method »06. For other typical objects, the region of interest in the reconstruction can be characterized with an overall precision of 0.025 mm. In the worst case, areas with large camera inclination angles may introduce significant position errors, reducing the point cloud accuracy in these areas to 0.04 mm; this is still acceptable when compared with industry standards and is unlikely to impact field operations significantly.
06 Reconstructed point cloud and error comparison (marker-based, 20 poses). The color scale shows the error in mm.
06a Reconstructed point cloud (20 poses).
06b The ground truth from a high-precision scanner.
06c The 3-D comparison between 06a and 06b.
By processing the point cloud in regional sections to reduce registration times, a path extraction accuracy of 0.14 mm was achieved. This level of precision is essential for maintaining quality standards and minimizing defects in manufacturing processes.
A path to the future
The innovation of the approach described here lies in the precalibration, which determines the relationship of the 2-D camera’s measurement poses to the marker system; the robot then moves to these same poses to measure workpieces, and the measurements are merged to obtain the point cloud of the entire piece. Since the robot must revisit exactly the same poses in the measurement phase as in precalibration, it needs precise positional repeatability, which the ABB robot possesses. A further innovative step is the non-rigid registration of each cross-section along the path in parallel, rather than of the entire point cloud, to reduce processing time.
This approach achieves an overall reconstruction accuracy of 0.025 mm, a commendable figure. The system’s cycle time is also noteworthy: by parallelizing measurement and computation, it is under 40 s for the target object, which exceeds end-customer expectations. Speed and accuracy make the system a valuable asset in competitive and time-sensitive manufacturing settings. By advancing in these areas, ABB can significantly improve the system’s overall performance and flexibility, rendering it an even more indispensable tool across many manufacturing applications.