UWB-Based Clustered Relative Positioning for Robot Swarms

Swarm robotics has emerged as a critical area of robotics, enabling collaborative tasks in diverse environments. Relative positioning among robots is fundamental to coordinated movement and efficient operation. Traditional methods often rely on pre-deployed base stations or anchors, which lack flexibility and increase costs. Moreover, in large-scale deployments, the failure of a single robot can disrupt the entire swarm's localization. To address these challenges, we propose a novel clustered relative positioning method based on Ultra-Wideband (UWB) technology. This approach eliminates the dependency on fixed infrastructure by organizing robots into hierarchical clusters, enhancing robustness and scalability. By integrating inertial measurement unit (IMU) data and Kalman filtering, we compensate for errors in both line-of-sight (LOS) and non-line-of-sight (NLOS) environments, ensuring accurate positioning. The resulting system not only reduces costs but also adapts to dynamic scenarios, making it well suited to applications such as logistics, surveillance, and exploration.

Relative positioning in robot swarms is often constrained by the environment and by reliance on fixed infrastructure. In our method, we introduce a clustered architecture in which the swarm is divided into multiple levels, such as primary, secondary, and tertiary clusters. Each cluster has a leader robot (cluster head) and follower robots (cluster members). The cluster head is equipped with three UWB nodes (a primary, a secondary, and a directional node), while cluster members may carry fewer nodes depending on their level in the hierarchy. This design minimizes the impact of individual robot failures on the entire swarm, a significant improvement in reliability. For instance, if a cluster head malfunctions, only its immediate followers are affected, preserving the localization integrity of the other clusters. Relative positioning is achieved through UWB ranging using the Double-Sided Two-Way Ranging (DS-TWR) algorithm, which measures inter-node distances without external base stations. This clustered approach shows how robot swarms can handle complex, large-scale deployments efficiently.
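
For concreteness, the sketch below shows the standard asymmetric DS-TWR time-of-flight computation, assuming the round-trip and reply intervals have already been extracted from the transceiver timestamps; the function and variable names are illustrative rather than taken from our implementation.

```python
# Minimal DS-TWR sketch: estimate the distance between two UWB nodes from the
# two round-trip and two reply intervals (all in seconds).
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric double-sided two-way ranging distance estimate."""
    # Standard DS-TWR combination of the four timing intervals
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2
    )
    return SPEED_OF_LIGHT * tof
```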

To compute the relative position of a cluster member with respect to its cluster head, we model the problem using nonlinear equations based on UWB distance measurements. Let the cluster head’s UWB nodes have coordinates $P_{i1} = (x_{P_{i1}}, y_{P_{i1}})$, $P_{i2} = (x_{P_{i2}}, y_{P_{i2}})$, and $P_{i3} = (x_{P_{i3}}, y_{P_{i3}})$. The cluster member’s primary UWB node is at $P_{ij} = (x_{P_{ij}}, y_{P_{ij}})$. The measured distances are $d_{ij1}$, $d_{ij2}$, and $d_{ij3}$, corresponding to the distances from the cluster member to each of the cluster head’s nodes. The system of equations is:

$$f_1(x_{P_{ij}}, y_{P_{ij}}) = \sqrt{(x_{P_{ij}} - x_{P_{i1}})^2 + (y_{P_{ij}} - y_{P_{i1}})^2} - d_{ij1} = 0$$

$$f_2(x_{P_{ij}}, y_{P_{ij}}) = \sqrt{(x_{P_{ij}} - x_{P_{i2}})^2 + (y_{P_{ij}} - y_{P_{i2}})^2} - d_{ij2} = 0$$

$$f_3(x_{P_{ij}}, y_{P_{ij}}) = \sqrt{(x_{P_{ij}} - x_{P_{i3}})^2 + (y_{P_{ij}} - y_{P_{i3}})^2} - d_{ij3} = 0$$

We solve this nonlinear system using the Gauss-Newton method, an iterative optimization technique. The residual vector is defined as $F(x, y) = [f_1(x, y), f_2(x, y), f_3(x, y)]^T$, and the Jacobian matrix $J$ is computed as:

$$J(x, y) = \begin{bmatrix}
\frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\
\frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \\
\frac{\partial f_3}{\partial x} & \frac{\partial f_3}{\partial y}
\end{bmatrix} = \begin{bmatrix}
\frac{x - x_{P_{i1}}}{s_1} & \frac{y - y_{P_{i1}}}{s_1} \\
\frac{x - x_{P_{i2}}}{s_2} & \frac{y - y_{P_{i2}}}{s_2} \\
\frac{x - x_{P_{i3}}}{s_3} & \frac{y - y_{P_{i3}}}{s_3}
\end{bmatrix}$$

where $s_k = \sqrt{(x - x_{P_{ik}})^2 + (y - y_{P_{ik}})^2}$ for $k = 1, 2, 3$. The update step in each iteration is $\Delta z = -(J^T J)^{-1} J^T F(z_k)$, with $z = [x, y]^T$. For better accuracy, we extend this to the Levenberg-Marquardt algorithm, which incorporates a damping factor $\mu$ to handle ill-conditioned problems. The update becomes $\Delta z = -(J^T J + \mu I)^{-1} J^T F(z_k)$. This method ensures robust convergence even with noisy measurements, which are common in UWB ranging.
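
A minimal NumPy sketch of this solver is given below; the fixed damping factor, iteration limit, and convergence tolerance are illustrative assumptions rather than the exact schedule used in our system. Setting `mu=0` recovers the plain Gauss-Newton step.

```python
import numpy as np

def solve_member_position(anchors, distances, z0, mu=1e-3, max_iter=50, tol=1e-6):
    """Levenberg-Marquardt estimate of the member position (x, y).

    anchors   : (3, 2) array of cluster-head node coordinates P_i1..P_i3
    distances : (3,) array of measured ranges d_ij1..d_ij3
    z0        : initial guess [x, y]
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        diff = z - anchors                 # offsets to each cluster-head node
        s = np.linalg.norm(diff, axis=1)   # predicted ranges s_k
        F = s - distances                  # residuals f_1..f_3
        J = diff / s[:, None]              # Jacobian rows: (x - x_k)/s_k, (y - y_k)/s_k
        # Damped normal equations: (J^T J + mu I) dz = -J^T F
        dz = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ F)
        z = z + dz
        if np.linalg.norm(dz) < tol:
            break
    return z
```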

Once the cluster member's position has been solved, its position relative to the cluster head is obtained using a rotation matrix based on the cluster head's orientation $\theta$. The transformation is:

$$\begin{bmatrix}
x_{P_{ij}}^r \\
y_{P_{ij}}^r
\end{bmatrix} = \begin{bmatrix}
\cos \theta & \sin \theta \\
-\sin \theta & \cos \theta
\end{bmatrix} \begin{bmatrix}
x_{P_{ij}} - x_{P_{i1}} \\
y_{P_{ij}} - y_{P_{i1}}
\end{bmatrix}$$

where $(x_{P_{ij}}^r, y_{P_{ij}}^r)$ is the relative position expressed in the cluster head's body frame. This step is crucial for maintaining formation and coordination.
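
A small illustration of this transformation, assuming NumPy and that the cluster head's primary node $P_{i1}$ serves as the origin of its body frame:

```python
import numpy as np

def to_relative_frame(p_member, p_head_primary, theta):
    """Rotate a solved member position into the cluster head's body frame."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c,  s],
                  [-s, c]])  # rotation by the cluster head's orientation theta
    return R @ (np.asarray(p_member, dtype=float) - np.asarray(p_head_primary, dtype=float))
```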

For relative heading calculation, we use dual UWB nodes on both the cluster head and the cluster member. The cluster member's heading $\theta_{ij}$ is computed from its two node positions as:

$$\theta_{ij} = \arctan \left( \frac{y_{P_{ij1}} - y_{P_{ij2}}}{x_{P_{ij1}} - x_{P_{ij2}}} \right)$$

where $P_{ij1}$ and $P_{ij2}$ are the primary and secondary UWB nodes of the cluster member. This allows real-time heading estimation, which is essential for dynamic swarm tasks.
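
In practice a quadrant-aware arctangent avoids ambiguity when the node-pair x-difference is small or negative. The sketch below uses `atan2` for this; subtracting the cluster head's own heading to obtain a relative heading, and the wrapping to $[-\pi, \pi)$, are illustrative assumptions.

```python
import numpy as np

def node_pair_heading(p_primary, p_secondary):
    """Heading of the axis pointing from the secondary to the primary UWB node."""
    return np.arctan2(p_primary[1] - p_secondary[1],
                      p_primary[0] - p_secondary[0])

def relative_heading(member_p1, member_p2, head_p1, head_p2):
    """Member heading minus head heading, wrapped to [-pi, pi)."""
    dtheta = node_pair_heading(member_p1, member_p2) - node_pair_heading(head_p1, head_p2)
    return (dtheta + np.pi) % (2 * np.pi) - np.pi
```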

However, UWB measurements are prone to errors, especially in challenging environments. In LOS conditions, errors arise from sensor noise and systematic biases. We model the measurement as:

$$\begin{bmatrix}
\hat{x} \\
\hat{y} \\
\hat{\theta}
\end{bmatrix} = (e_c + e_z) K_e + \begin{bmatrix}
x \\
y \\
\theta
\end{bmatrix}$$

where $e_c$ is the systematic error, $e_z$ is random noise, and $K_e$ is a scaling vector. To correct $e_c$, we perform linear fitting on calibration data. For example, using UWB modules like DWM1000, we collect distance measurements and actual values, then fit a line $y = k_1 x + k_2$ using least squares minimization:

$$\min_{k} \sum_{i=1}^{n} (z_i - f(q_i, k))^2$$

where $z_i$ is the measured value, $q_i$ is the actual value, and $f(q_i, k) = k_1 q_i + k_2$. This linear correction removes the systematic component of the ranging error.
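
A minimal sketch of this calibration, assuming paired arrays of actual and measured distances collected offline from a DWM1000 module; the function names are illustrative.

```python
import numpy as np

def fit_systematic_bias(actual, measured):
    """Least-squares fit of measured = k1 * actual + k2, i.e. the model f(q, k)."""
    k1, k2 = np.polyfit(actual, measured, deg=1)
    return k1, k2

def correct_range(d_measured, k1, k2):
    """Invert the fitted model to strip the systematic bias from a new range."""
    return (d_measured - k2) / k1
```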

For random noise $e_z$, we integrate IMU data with Kalman filtering. The state vector includes position and velocity, and the prediction step uses IMU-derived displacements. The state prediction is:

$$\hat{z}_{k+1} = \bar{z}_k + z_k^e$$

where $\bar{z}_k$ is the previous state estimate and $z_k^e$ is the IMU-estimated displacement. The UWB distance prediction is:

$$\hat{d}_{k+1} = \sqrt{\bar{d}_k^2 + (x_{P_{ij}}^e - x_{P_i}^e)^2 + (y_{P_{ij}}^e - y_{P_i}^e)^2 + \sigma_k}$$

with $\sigma_k = 2(x_{P_{ij}} - x_{P_i})(x_{P_{ij}}^e - x_{P_i}^e) + 2(y_{P_{ij}} - y_{P_i})(y_{P_{ij}}^e - y_{P_i}^e)$, the cross term that arises when the IMU-estimated displacements are added to the previous inter-node offsets and the result is squared. The Kalman filter equations are:

$$\text{Prediction: } \hat{P}_{k+1} = \hat{P}_k + Q$$

$$\text{Kalman Gain: } K_{k+1} = \frac{\hat{P}_{k+1}}{\hat{P}_{k+1} + R}$$

$$\text{Update: } \hat{d}_{k+1} \leftarrow \hat{d}_{k+1} + K_{k+1} (d_{k+1} - \hat{d}_{k+1})$$

$$\text{Covariance Update: } \hat{P}_{k+1} \leftarrow (1 - K_{k+1}) \hat{P}_{k+1}$$

Here, $d_{k+1}$ is the UWB-measured distance, and $Q$ and $R$ are the process and measurement noise covariances, set to 0.4 and 0.2, respectively, based on empirical data. This fusion improves ranging accuracy in practice.
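
A scalar sketch of this filter is shown below, assuming the IMU-based range prediction $\hat{d}_{k+1}$ has already been computed from the displacement terms above; the values Q = 0.4 and R = 0.2 follow the empirical settings quoted in the text.

```python
class RangeKalmanFilter:
    """Scalar Kalman filter fusing IMU-predicted and UWB-measured inter-node ranges."""

    def __init__(self, q=0.4, r=0.2, p0=1.0):
        self.p = p0   # estimate covariance
        self.q = q    # process noise covariance Q
        self.r = r    # measurement noise covariance R

    def step(self, d_predicted, d_measured):
        """One predict/update cycle on the inter-node range."""
        p_pred = self.p + self.q                                   # covariance prediction
        k = p_pred / (p_pred + self.r)                             # Kalman gain
        d_updated = d_predicted + k * (d_measured - d_predicted)   # measurement update
        self.p = (1.0 - k) * p_pred                                # covariance update
        return d_updated
```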

In NLOS environments, errors are more severe due to signal obstruction. We identify NLOS conditions using signal power characteristics. Let $P_{\text{total}}$ be the total received power and $P_{\text{first}}$ be the first path power. The decision rules are:

  • If $P_{\text{total}} - P_{\text{first}} > L_1$ and $\Delta d_{ij} > D_E$, then NLOS.
  • If $P_{\text{total}} - P_{\text{first}} \leq L_2$ and $\Delta d_{ij} \leq D_E$, then LOS.

where $\Delta d_{ij} = d_{ij}^k - d_{ij}^{k-1}$, $D_E = \sigma + \Delta s$, $\sigma$ is the LOS standard deviation, and $\Delta s$ is the node displacement. Thresholds $L_1$ and $L_2$ are set to 10 dB and 6 dB, respectively. For NLOS compensation, we modify the Kalman gain by introducing a factor $\tau$:

$$K_{k+1} = \tau \frac{\hat{P}_{k+1}}{\hat{P}_{k+1} + R}$$

This adjustment mitigates the impact of NLOS errors, ensuring reliable performance when the line of sight is obstructed.
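
A sketch of the identification rule and the attenuated gain, assuming the power readings (in dB) come from the UWB transceiver's diagnostic registers; treating samples that satisfy neither rule as "uncertain" is an assumption on our part.

```python
def classify_channel(p_total_db, p_first_db, delta_d, d_e, l1=10.0, l2=6.0):
    """Classify a ranging sample as NLOS, LOS, or uncertain from power and range jumps."""
    power_gap = p_total_db - p_first_db
    if power_gap > l1 and delta_d > d_e:
        return "NLOS"
    if power_gap <= l2 and delta_d <= d_e:
        return "LOS"
    return "uncertain"

def nlos_kalman_gain(p_pred, r, tau):
    """Kalman gain attenuated by the factor tau for NLOS-flagged samples."""
    return tau * p_pred / (p_pred + r)
```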

To validate our method, we conducted simulations and experiments. In simulation, we compared our clustered approach with base station-dependent UWB positioning. The robot swarm moved along a curved path with obstacles, and we measured relative positioning accuracy. The results showed that our method maintained formation integrity, whereas base station-dependent methods suffered from NLOS errors. The table below summarizes UWB ranging errors under different conditions:

| Method | LOS Error (m) | NLOS Error (m) |
| --- | --- | --- |
| Base Station-Dependent | 0.14 ± 0.08 | 0.80 ± 0.13 |
| Inter-Node Ranging | 0.13 ± 0.08 | 0.15 ± 0.06 |
| With Error Compensation | 0.10 ± 0.08 | 0.12 ± 0.08 |

Our error compensation reduced NLOS ranging errors by 57%, highlighting its efficacy.

In experiments, we used two mobile robots: one as a cluster head with three UWB nodes and a LiDAR, and another as a cluster member with one UWB node and an onboard computer. The environment was a 5 m × 5 m area with four obstacles. The cluster head moved at 0.1 m/s, and the cluster member followed at a relative position of (0, 1) meters. We evaluated relative positioning accuracy over time. The table below compares different methods:

| Positioning Method | Relative Position Error (m) |
| --- | --- |
| Base Station-Dependent | 0.57 ± 0.23 |
| UWB + Odometry | 0.43 ± 0.18 |
| UWB + IMU + Graph Optimization | 0.32 ± 0.08 |
| Our Clustered Method | 0.26 ± 0.11 |

Our approach reduced errors by 54% compared to base station-dependent methods and by 39% compared to UWB-odometry fusion, demonstrating the benefit of the clustered design.

The cumulative error analysis over time for our clustered method showed stable performance, with errors remaining within 0.26 ± 0.11 meters. This consistency is vital for long-duration missions such as autonomous logistics or search-and-rescue operations. The integration of clustering with UWB and IMU data fusion provides a scalable solution that adapts to a range of scenarios, from indoor warehouses to outdoor terrain.

In conclusion, our UWB-based clustered relative positioning method represents a significant advancement for robot swarms. By eliminating dependency on base stations and incorporating hierarchical clusters, we enhance robustness and reduce costs. The error compensation techniques for LOS and NLOS environments ensure high accuracy, making the system suitable for dynamic, large-scale swarms. Future work will explore multi-sensor fusion with machine learning to further improve performance in complex environments. This approach paves the way for more autonomous and collaborative robot systems, driving innovation in the field.
