In modern intelligent manufacturing, the precision of an industrial robot is paramount, with the pose repeatability of its end effector being a critical metric for assessing its capability to perform high-precision tasks. This capability determines the consistency with which the robot’s tool or gripper returns to a commanded position and orientation in space. Accurate and efficient measurement of this repeatability is essential for quality assurance, calibration, and performance validation of robotic systems deployed in assembly, machining, and inspection applications. This article presents a comprehensive method for the in-situ measurement of end effector pose repeatability, detailing the underlying theoretical model, the core measurement algorithms, and the experimental validation of the system’s performance.
The pose of a rigid body in three-dimensional space is defined by its position (three translational degrees of freedom) and its orientation (three rotational degrees of freedom). Quantifying the repeatability error involves commanding the robot to move to a specific pose multiple times and measuring the statistical dispersion of the actual achieved poses around their mean. Traditional methods, such as using laser trackers or coordinate measuring machines (CMMs), can be accurate but are often offline, time-consuming, costly, and not suited for rapid, multi-point assessment within a large workspace. The method proposed here addresses these limitations by employing an active optical target and computer vision, enabling fast, non-contact, and simultaneous measurement of all six degrees of freedom.
The core of the measurement principle is based on a Direction Cosine Model. An optical pose benchmark, rigidly attached to the robot’s end effector, emits three mutually perpendicular fan-shaped laser beams, each projecting a cross-line pattern onto a dedicated detection screen. These three beams define the axes (X, Y, Z) of a coordinate frame attached to the end effector. A camera-based sensor module captures the image of each cross-line. The pose repeatability of the end effector is thus physically manifested as changes in the position and orientation of these cross-line images on their respective screens. The mathematical relationship between the spatial pose change of the optical target and the observed 2D image changes is derived using direction cosines and Euler angle rotations.

Let the initial pose of the optical target (and thus the end effector) be defined by a coordinate frame $O_a$-$XYZ$. After a repeatability test cycle, the frame moves to a new pose. This transformation can be decomposed into a rotation followed by a translation. The rotation from the initial orientation to an intermediate orientation $A'$ is described by Euler angles ($\Delta\alpha$, $\Delta\theta$, $\Delta\gamma$) around the X, Y, and Z axes respectively, resulting in a composite rotation matrix $\mathbf{R}$:
$$ \mathbf{R} = R(Z, \Delta\gamma) \cdot R(Y, \Delta\theta) \cdot R(X, \Delta\alpha) $$
where the individual rotation matrices are:
$$ R(X, \Delta\alpha) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\Delta\alpha & -\sin\Delta\alpha \\ 0 & \sin\Delta\alpha & \cos\Delta\alpha \end{bmatrix}, $$
$$ R(Y, \Delta\theta) = \begin{bmatrix} \cos\Delta\theta & 0 & \sin\Delta\theta \\ 0 & 1 & 0 \\ -\sin\Delta\theta & 0 & \cos\Delta\theta \end{bmatrix}, $$
$$ R(Z, \Delta\gamma) = \begin{bmatrix} \cos\Delta\gamma & -\sin\Delta\gamma & 0 \\ \sin\Delta\gamma & \cos\Delta\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix}. $$
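As a quick sanity check, the composition above can be implemented directly; the sketch below builds $\mathbf{R}$ from the three elementary rotations (angles in radians):

```python
import numpy as np

def rotation_matrix(d_alpha, d_theta, d_gamma):
    """Compose R = R(Z, dgamma) @ R(Y, dtheta) @ R(X, dalpha), angles in radians."""
    ca, sa = np.cos(d_alpha), np.sin(d_alpha)
    ct, st = np.cos(d_theta), np.sin(d_theta)
    cg, sg = np.cos(d_gamma), np.sin(d_gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[ct, 0, st], [0, 1, 0], [-st, 0, ct]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx
```

The returned matrix is orthonormal by construction, and its entries $r_{ij}$ are exactly the direction cosines between the new and old axes.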
The complete rotation matrix $\mathbf{R}$ has elements $r_{ij}$ representing the direction cosines between the new and old coordinate axes. The translational displacement components ($\Delta x$, $\Delta y$, $\Delta z$) of the end effector are then calculated from the measured shifts of the cross-line center points on the detection planes, scaled by these direction cosines and the known geometry (lever arms $l_x$, $l_y$, $l_z$) between the planes. The general form for one displacement component is:
$$ \Delta x = (k_x^1 - k_x^0) + l_x \cdot \frac{r_{23}}{r_{33}} $$
Similar equations govern $\Delta y$ and $\Delta z$. The orientation changes $\Delta\alpha$, $\Delta\theta$, $\Delta\gamma$ are directly obtained by measuring the rotation angles of the cross-lines in their respective image planes.
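Assuming $k_x^0$ and $k_x^1$ denote the cross-line center coordinate on the X detection plane before and after the move, and $l_x$ the corresponding lever arm, the displacement equation reduces to a one-liner (the variable names here are illustrative, not from the original text):

```python
import numpy as np

def delta_x(k_x0, k_x1, l_x, R):
    """Translational X component: the measured cross-center shift plus the
    rotation-induced offset over the lever arm l_x.
    R[1, 2] and R[2, 2] are r_23 and r_33 in the 1-indexed notation."""
    return (k_x1 - k_x0) + l_x * (R[1, 2] / R[2, 2])
```

With no rotation (R the identity), the term $r_{23}/r_{33}$ vanishes and $\Delta x$ is simply the observed image shift.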
The critical step in realizing this measurement is the precise extraction of the cross-line center position and its orientation angle from the camera images. We developed a novel two-stage algorithmic approach for this purpose. The first stage involves robust image preprocessing: binarization, morphological filtering to remove noise, and contour extraction using a RANSAC-style line fitting algorithm to separate the horizontal and vertical laser stripes. The second and most crucial stage is centerline extraction using the Improved Gaussian Curve Fitting combined with Improved B-Spline Curve Fitting (IGCF-IBSCF) algorithm.
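The RANSAC-style line fitting used in the first stage to separate the two laser stripes can be sketched as follows; this is a generic minimal RANSAC line fit (iteration count and inlier tolerance are placeholder values, not the paper's):

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=1.5, rng=None):
    """Fit a 2D line to noisy stripe pixels by RANSAC.

    Returns (point_on_line, unit_direction, inlier_mask); inlier_tol is the
    maximum perpendicular distance (pixels) for a point to count as an inlier.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pts = np.asarray(points, dtype=float)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        normal = np.array([-d[1], d[0]])           # unit normal of candidate line
        dist = np.abs((pts - pts[i]) @ normal)     # perpendicular distances
        mask = dist < inlier_tol
        if mask.sum() > best_mask.sum():
            best_mask = mask
    inliers = pts[best_mask]
    centroid = inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(inliers - centroid)   # refine direction by PCA
    return centroid, vt[0], best_mask
```

Running the fit twice, removing the first line's inliers before the second pass, separates the horizontal and vertical stripes even in the presence of specular outliers.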
The intensity profile of a laser stripe in an image approximates a Gaussian distribution. The IGCF step fits a 1D Gaussian function to the intensity values across the stripe’s width at numerous sample points along its length. The center point for each cross-section is taken as the peak of the fitted Gaussian. To counteract errors from pixel saturation (“blooming”) near the peak, the algorithm intelligently excludes saturated pixel clusters and adjusts the sampling width dynamically, ensuring 5-9 valid data points per fit. The Gaussian function is linearized by taking logarithms:
$$ f(x) = A e^{-\frac{(x-x_0)^2}{2\sigma^2}} \rightarrow \ln(f(x)) = a_0 + a_1 x + a_2 x^2 $$
where $a_2 = -1/(2\sigma^2)$ and $a_1 = x_0 / \sigma^2$. Solving for $a_1$ and $a_2$ via least squares gives the sub-pixel center location $x_0 = -a_1/(2a_2)$.
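The linearized fit is a plain quadratic least-squares problem. The sketch below implements it with saturation exclusion; the dynamic adjustment of the sampling width described above is omitted for brevity:

```python
import numpy as np

def gaussian_center(x, intensity, saturation=255):
    """Sub-pixel stripe center via the linearized Gaussian fit.

    Fits ln(I) = a0 + a1*x + a2*x^2 by least squares over unsaturated,
    nonzero pixels and returns x0 = -a1 / (2 * a2)."""
    x = np.asarray(x, float)
    I = np.asarray(intensity, float)
    valid = (I > 0) & (I < saturation)        # drop zeros and "blooming" pixels
    x, I = x[valid], I[valid]
    a2, a1, _ = np.polyfit(x, np.log(I), 2)   # coefficients, highest power first
    return -a1 / (2 * a2)
```

On a noise-free Gaussian profile the fit is exact, since the log-intensities are exactly quadratic in $x$.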
The collection of these center points along the stripe forms a discrete, noisy path. The IBSCF step then fits a smooth B-spline curve through all these points. Unlike standard B-splines that do not pass through the control points, the improved algorithm enforces the curve to interpolate the actual extracted center points (now treated as “knots”). This is achieved by constructing a special control polygon where the data points lie at specific fractions along the polygon edges, guaranteeing the final spline’s passage through them while maintaining the desirable properties of B-splines (local control, continuity). The resulting centerline provides a highly accurate and continuous representation of the laser stripe, from which the line’s angle and its intersection point (the cross center) are reliably computed.
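The special control-polygon construction of the IBSCF step is not fully specified here, but its defining property, a B-spline that interpolates the extracted center points, can be illustrated with scipy's standard interpolating B-spline construction as a stand-in:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def centerline_spline(centers):
    """Interpolating cubic B-spline through extracted stripe centers.

    `centers` is an (N, 2) array of (u, v) sub-pixel center points ordered
    along the stripe; the returned BSpline maps u -> v and passes through
    every input point. scipy's interpolating construction stands in for the
    paper's IBSCF scheme, whose control-polygon details differ."""
    centers = np.asarray(centers, float)
    u, v = centers[:, 0], centers[:, 1]
    return make_interp_spline(u, v, k=3)
```

Intersecting the two fitted centerlines then yields the cross center, and their tangent directions give the stripe angles.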
To achieve high measurement accuracy across a large robot workspace, systematic errors in the vision system must be characterized and compensated. We propose a sophisticated calibration and real-time error compensation method based on Bisquare Curve Fitting, Improved Bayesian Particle Swarm Optimization, and a Nonlinear Echo State Network (BCF-IBPSO-NESN). This method models the complex, nonlinear relationship between the raw image sensor readings and the actual pose error.
The process begins by collecting calibration data. A high-precision CMM or linear stage provides reference displacements, while a laser interferometer gives ground truth. Similarly, a high-accuracy rotary stage provides reference angles. The vision system’s raw measurements (cross-center positions and angles) are recorded alongside these reference values at multiple points in the workspace to generate a dataset of measurement errors. The BCF algorithm performs an initial robust fit to this data, minimizing the influence of outliers. The IBPSO algorithm optimizes the parameters of this model. The standard PSO is enhanced with an adaptive nonlinear inertia weight and learning factors:
$$ \omega = \frac{\omega_{\text{standard}}}{1 + e^{\,\text{objvalue}_{\text{zbest}} / \text{target}_{\text{mape}}}}, \quad c_1 = c_2 = \frac{2 + \omega}{2} $$
Furthermore, a Bayesian preprocessing step classifies particle vectors, guiding the swarm towards faster and more stable convergence to the global optimum.
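A minimal sketch of the PSO loop with the adaptive inertia weight above is given below; the Bayesian particle-classification step is omitted, and all parameter values and names are illustrative assumptions:

```python
import numpy as np

def ibpso_minimize(f, bounds, n_particles=20, n_iter=100,
                   omega_std=0.7, target_mape=1.0, seed=0):
    """PSO with the adaptive weights described in the text:
    omega = omega_std / (1 + exp(f(gbest) / target_mape)),
    c1 = c2 = (2 + omega) / 2.  Bayesian preprocessing is not reproduced."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        omega = omega_std / (1 + np.exp(f(g) / target_mape))
        c = (2 + omega) / 2
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = omega * v + c * r1 * (pbest - x) + c * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

Note how the weight shrinks while the global best is still poor (large objective value), damping the swarm, and relaxes as the objective approaches the target error.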
Finally, the NESN—a type of recurrent neural network with a dynamic “reservoir”—learns the residual, highly nonlinear mapping that the BCF model might have missed. The NESN enhances nonlinear processing capacity with fewer neurons compared to a standard ESN. The combined BCF-IBPSO-NESN model outputs a precise compensation value for any given raw sensor measurement, which is then subtracted from the initial reading to yield the corrected pose. The overall compensation process for a measured position $S$ is:
$$ S_{\text{corrected}} = S_{\text{measured}} - \Delta S_{\text{BCF-IBPSO-NESN}} $$
To validate the complete measurement system, comprehensive experiments were conducted. The system’s displacement and angular measurement capabilities were first calibrated. After compensation using the BCF-IBPSO-NESN method, the system demonstrated high accuracy, as summarized below:
| Measurement Type | Compensated Accuracy |
|---|---|
| Linear Displacement (X, Y, Z) | ±1.5 μm |
| Angular Rotation (around X, Y, Z) | ±2 arcseconds |
Subsequently, the system was used to evaluate the pose repeatability of an industrial robot (ABB IRB 2600). The optical target was mounted on the robot’s end effector. The robot was programmed to repeatedly move to five different test poses within its workspace for 30 cycles. At each pose, the system measured the end effector’s position and orientation. The pose repeatability was calculated as the dispersion (e.g., ±3σ) of these measurements. The performance of the proposed IGCF-IBSCF center extraction method was compared against traditional methods like standard Gaussian fitting and grayscale centroid. Furthermore, the BCF-IBPSO-NESN compensation was compared to simpler methods like cubic spline and least-squares fitting. The results for pose repeatability precision (RP) at one test point are illustrative:
| Method (Center Extraction / Compensation) | $l_{RP}$ (mm) | $a_{RP}$ (arcsec) | $b_{RP}$ (arcsec) | $c_{RP}$ (arcsec) |
|---|---|---|---|---|
| IGCF-IBSCF / BCF-IBPSO-NESN (Proposed) | 0.0013 | 0.00013 | 0.00035 | 0.0005 |
| Standard Gaussian / BCF-IBPSO-NESN | 0.0049 | 0.00372 | 0.00745 | 0.00247 |
| Grayscale Centroid / BCF-IBPSO-NESN | 0.0085 | 0.00846 | 0.00891 | 0.00978 |
| IGCF-IBSCF / Cubic Spline | 0.0062 | 0.00874 | 0.00884 | 0.00634 |
| IGCF-IBSCF / Least Squares | 0.0145 | 0.01474 | 0.01053 | 0.00978 |
The data clearly shows that the proposed combination of algorithms delivers superior performance. The IGCF-IBSCF method significantly reduces errors inherent in centerline extraction, while the BCF-IBPSO-NESN method provides a far more accurate error compensation model compared to traditional fitting techniques. This synergy enables the measurement system to reliably detect the very fine pose variations associated with high-precision robot repeatability.
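The dispersion statistic used above can be computed, for example, in the common ISO 9283 form $RP_l = \bar{l} + 3 S_l$, where $l_j$ is the distance of each measured point from the cluster mean (the exact convention used in the experiments may differ):

```python
import numpy as np

def position_repeatability(positions):
    """ISO 9283-style positional repeatability RP_l = l_bar + 3 * S_l.

    `positions` is an (N, 3) array of measured end-effector positions for
    N repeated approaches to the same commanded pose."""
    P = np.asarray(positions, float)
    centroid = P.mean(axis=0)
    l = np.linalg.norm(P - centroid, axis=1)   # radial deviations from the mean
    l_bar = l.mean()
    s = np.sqrt(((l - l_bar) ** 2).sum() / (len(l) - 1))
    return l_bar + 3 * s
```

The angular repeatabilities are computed analogously from the per-axis orientation measurements.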
In conclusion, this work presents a viable and effective solution for the online measurement of industrial robot end effector pose repeatability. By transforming the abstract concept of spatial pose into measurable optical projections and developing advanced algorithms for image processing and error compensation, the system achieves micron-level displacement accuracy and arcsecond-level angular accuracy. The method is non-contact, allows for multi-point measurement within a large workspace, and is suitable for integration into quality control or calibration procedures for robotic cells. The demonstrated performance indicates its strong potential for ensuring the precision and reliability of robots in advanced manufacturing applications.
