In the field of preschool education, companion robots have emerged as valuable tools to assist in child development and provide interactive learning experiences. However, a key challenge for these companion robots is achieving accurate localization and navigation in dynamic indoor environments. To address this, I focused on enhancing the simultaneous localization and mapping (SLAM) capabilities of companion robots by integrating laser radar and color vision technologies. This study aims to design a robust SLAM system for companion robots, improving positioning accuracy and navigation success rates, thereby making the companion robot more effective in educational settings.
The core of my approach revolves around fusing data from laser radar and RGB-D cameras to correct errors in observation models. Traditional SLAM methods often rely solely on visual or laser data, leading to inaccuracies due to sensor limitations. By combining both modalities, I developed a fused laser-visual observation model that significantly reduces localization errors. This design not only enhances the companion robot’s ability to map environments but also ensures reliable navigation during interaction with children.
To begin, I constructed a motion model for the companion robot using an odometry arc model. In a two-dimensional coordinate system, the robot’s pose at time \( t \) is represented as:
$$ p_t = [x_t, y_t, \theta_t] $$
where \( x_t \) and \( y_t \) are the position coordinates, and \( \theta_t \) is the heading angle. The robot’s velocity is computed as the average of initial and current linear and angular velocities:
$$ v = \frac{v_1 + v_t}{2}, \quad w = \frac{w_1 + w_t}{2} $$
Using odometry data, the kinematic equations are derived to predict the robot’s movement. However, these predictions accumulate errors over time, necessitating an accurate observation model for correction.
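To make the prediction step concrete, the following Python sketch (illustrative only; the study's implementation is in C++) applies the arc model over a fixed sampling interval \( \Delta t \), averaging the previous and current velocities as in the equations above. The function name and interface are assumptions for exposition.

```python
import numpy as np

def predict_pose(pose, v_prev, v_curr, w_prev, w_curr, dt):
    """Arc-model prediction of the next pose [x, y, theta] from averaged odometry velocities."""
    x, y, theta = pose
    v = 0.5 * (v_prev + v_curr)   # averaged linear velocity
    w = 0.5 * (w_prev + w_curr)   # averaged angular velocity

    if abs(w) < 1e-6:
        # Degenerate case: nearly zero turn rate, the arc becomes a straight segment
        x += v * dt * np.cos(theta)
        y += v * dt * np.sin(theta)
    else:
        # Circular arc of radius v / w travelled during dt
        r = v / w
        x += -r * np.sin(theta) + r * np.sin(theta + w * dt)
        y += r * np.cos(theta) - r * np.cos(theta + w * dt)

    theta += w * dt
    return np.array([x, y, theta])

# Example: one 0.1 s step starting from the origin
print(predict_pose(np.array([0.0, 0.0, 0.0]), 0.4, 0.6, 0.1, 0.3, 0.1))
```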
For environmental perception, I employed a Velodyne VLP-16 laser radar, which measures distances based on the time-of-flight principle. The distance \( d \) to an obstacle is given by:
$$ d = \frac{c \times \Delta t}{2} $$
where \( c \) is the speed of light, and \( \Delta t \) is the reflection time. The coordinates of detected obstacles are transformed into the 2D plane. Similarly, an RGB-D camera captures depth and color images, enabling 3D to 2D projection through camera calibration. The projection equations are:
$$ x_p = f \frac{x_c}{z_c}, \quad y_p = f \frac{y_c}{z_c} $$
where \( (x_p, y_p) \) are image coordinates, \( f \) is the focal length, and \( (x_c, y_c, z_c) \) are camera coordinates. These models form the basis for the companion robot’s perception system.
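Both perception equations are simple to evaluate; the short sketch below shows them in Python for illustration. The 33 ns round-trip time and the 525-pixel focal length are assumed example values, not the calibrated parameters of the study.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_distance(delta_t):
    """Distance to an obstacle from the round-trip time of a reflected laser pulse."""
    return C * delta_t / 2.0

def project_to_image(point_cam, f):
    """Pinhole projection of a camera-frame point (x_c, y_c, z_c) onto the image plane."""
    x_c, y_c, z_c = point_cam
    return f * x_c / z_c, f * y_c / z_c

print(tof_distance(33e-9))                        # ~4.95 m for a 33 ns round trip
print(project_to_image((0.2, 0.1, 2.0), 525.0))   # assumed focal length in pixels
```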
However, laser radar observations are prone to errors. I categorized these errors into four types: basic measurement error \( U_1 \), dynamic obstacle error \( U_2 \), failure error \( U_3 \), and random error \( U_4 \). Their probabilistic expressions are:
$$ U_1 = \begin{cases} N(z_t, \delta) & 0 < z_t < z_{\text{max}} \\ 0 & \text{others} \end{cases} $$
$$ U_2 = \begin{cases} \lambda e^{-\lambda z_t} & 0 < z_t < z_{\text{max}} \\ 0 & \text{others} \end{cases} $$
$$ U_3 = \begin{cases} 1 & z_t = z_{\text{max}} \\ 0 & \text{others} \end{cases} $$
$$ U_4 = \begin{cases} \frac{1}{z_{\text{max}}} & 0 \leq z_t \leq z_{\text{max}} \\ 0 & \text{others} \end{cases} $$
The overall observation model error \( U \) is a weighted sum:
$$ U = \begin{bmatrix} q_1 & q_2 & q_3 & q_4 \end{bmatrix} \begin{bmatrix} U_1 \\ U_2 \\ U_3 \\ U_4 \end{bmatrix} $$
where \( q_1, q_2, q_3, q_4 \) are weight ratios. To mitigate these errors, I fused laser data with visual observations from the RGB-D camera. The visual observation probability \( V \) is computed from feature points in the environment:
$$ V = \prod_{n=1}^{N} V_{1,n} \, V_{2,n} $$
where \( V_{1,n} \) and \( V_{2,n} \) are the projection error and matching confidence of the \( n \)-th feature point, respectively. The fused laser-visual observation probability combines both modalities:
$$ P(z_t | o_t, m) = U \times V \times P(z_c^t | o_c^t, m) $$
This corrected model enhances the companion robot’s localization accuracy by leveraging the strengths of both sensors.
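As an illustrative sketch of how this fused likelihood can be evaluated for a single scan (Python rather than the study's C++ implementation), the snippet below treats \( U \) as the weighted mixture of the four error densities and \( V \) as a per-feature product. The Gaussian form of the projection-error term, the noise parameters \( \sigma \), \( \lambda \), and \( \sigma_p \), and the folding of the separate camera term \( P(z_c^t \mid o_c^t, m) \) into \( V \) are simplifying assumptions; the default weights match the tuned values reported later in this study.

```python
import numpy as np

def laser_error_mixture(z, z_expected, z_max, sigma, lam, q):
    """Weighted mixture U of the four laser error components for one beam."""
    q1, q2, q3, q4 = q
    u1 = (np.exp(-0.5 * ((z - z_expected) / sigma) ** 2)
          / (sigma * np.sqrt(2.0 * np.pi))) if 0.0 < z < z_max else 0.0   # U1: measurement noise
    u2 = lam * np.exp(-lam * z) if 0.0 < z < z_max else 0.0               # U2: dynamic obstacles
    u3 = 1.0 if np.isclose(z, z_max) else 0.0                             # U3: sensor failure
    u4 = 1.0 / z_max if 0.0 <= z <= z_max else 0.0                        # U4: random noise
    return q1 * u1 + q2 * u2 + q3 * u3 + q4 * u4

def visual_likelihood(proj_errors, match_conf, sigma_p):
    """Per-feature product V of projection-error terms (assumed Gaussian) and matching confidences."""
    v1 = np.exp(-0.5 * (np.asarray(proj_errors) / sigma_p) ** 2)
    return float(np.prod(v1 * np.asarray(match_conf)))

def fused_observation_likelihood(ranges, expected, proj_errors, match_conf,
                                 z_max=20.0, sigma=0.05, lam=0.5, sigma_p=2.0,
                                 q=(0.4, 0.3, 0.2, 0.1)):
    """Combine the laser mixture and the visual term into one observation likelihood."""
    u = np.prod([laser_error_mixture(z, ze, z_max, sigma, lam, q)
                 for z, ze in zip(ranges, expected)])
    return u * visual_likelihood(proj_errors, match_conf, sigma_p)
```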

The SLAM system for the companion robot was implemented in a simulation environment using the Robot Operating System (ROS). The table below summarizes the hardware and software of the simulation platform alongside the companion robot's kinematic parameters.
| Component | Specification | Companion Robot Parameter | Value |
|---|---|---|---|
| Memory | 16 GB | ROS Version | Melodic |
| PC Server | JRE 1.5.23 | Wheelbase | 3 m |
| Language | C++ | Max Steering Angle | 30° |
| GPU | GTX 1050 | Max Observation Distance | 20 m |
| Operating System | Windows 2012 Professional | Max Speed | 3 m/s |
In this setup, the companion robot was equipped with laser radar and an RGB-D camera connected via USB. I launched ROS drivers to ensure communication between the computer and the companion robot, enabling real-time data processing. The map-building phase involved scanning the environment with both sensors, while the navigation phase used the fused SLAM algorithm to plan paths and avoid obstacles.
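A minimal ROS front-end for receiving the two sensor streams might look like the rospy sketch below. The topic names depend on the specific driver launch files and are assumptions here (the VLP-16 driver normally publishes a 3D point cloud that would first be projected to a 2D scan), and the study's actual nodes are written in C++.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan, Image

class FusedSlamFrontend(object):
    """Minimal ROS node that receives the two sensor streams used by the fused SLAM system."""

    def __init__(self):
        rospy.init_node("fused_slam_frontend")
        # Topic names are assumptions; they depend on the laser and RGB-D driver launch files.
        rospy.Subscriber("/scan", LaserScan, self.on_scan, queue_size=1)
        rospy.Subscriber("/camera/color/image_raw", Image, self.on_color, queue_size=1)
        rospy.Subscriber("/camera/depth/image_raw", Image, self.on_depth, queue_size=1)
        self.latest_scan = None

    def on_scan(self, msg):
        self.latest_scan = msg   # hand the ranges to the laser observation model

    def on_color(self, msg):
        pass                     # extract feature points for the visual term V

    def on_depth(self, msg):
        pass                     # provide z_c for the 3D-to-2D projection

if __name__ == "__main__":
    FusedSlamFrontend()
    rospy.spin()
```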
To evaluate performance, I compared the fused laser-visual technique with traditional methods. The localization error was measured using root mean square error (RMSE). As shown in the table below, the fused technique significantly reduced errors across various observation times.
| Observation Time (s) | RMSE Before Fusion (%) | RMSE After Fusion (%) |
|---|---|---|
| 1.0 | 2.1 | 0.5 |
| 2.0 | 4.3 | 1.0 |
| 3.0 | 6.7 | 1.4 |
| 4.0 | 8.6 | 1.8 |
The fused technique achieved a maximum error of only 1.8%, compared to 8.6% for the traditional method. This improvement is critical for the companion robot to navigate precisely in cluttered spaces. Additionally, localization time was slightly longer with fusion (3.2 s vs. 3.0 s), but the trade-off is acceptable given the accuracy gain.
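Because the study reports localization error as a percentage, the sketch below shows one plausible way to compute such a normalized RMSE from estimated and ground-truth positions. The normalization by the 20 m maximum observation distance is my assumption; the exact reference scale is not stated, and the sample values are illustrative only.

```python
import numpy as np

def localization_rmse_percent(estimated, ground_truth, scale):
    """RMSE of position error as a percentage of a reference distance `scale`.

    `estimated` and `ground_truth` are (N, 2) arrays of x, y positions.
    """
    err = np.linalg.norm(np.asarray(estimated) - np.asarray(ground_truth), axis=1)
    return 100.0 * np.sqrt(np.mean(err ** 2)) / scale

# Illustrative trajectories, not the experimental data
est = [[0.0, 0.0], [1.02, 0.01], [2.05, -0.03]]
gt  = [[0.0, 0.0], [1.00, 0.00], [2.00, 0.00]]
print(localization_rmse_percent(est, gt, scale=20.0))  # scale: assumed max observation distance
```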
Navigation success rates were tested in both static and dynamic environments. The companion robot using fused laser-visual SLAM demonstrated high reliability, as detailed in the following table.
| Environment Type | Success Rate Before Fusion (%) | Success Rate After Fusion (%) |
|---|---|---|
| Static | 89.4 | 98.5 |
| Dynamic | 83.7 | 92.6 |
In static settings, the companion robot reached 98.5% success, while in dynamic scenarios with moving obstacles, it maintained 92.6%. These results underscore the robustness of the fused approach for a companion robot operating in real-world preschool environments where children may introduce unpredictable movements.
Furthermore, user satisfaction with the optimized companion robot was assessed through simulated interactions. The satisfaction rate improved from 83.9% to 92.8%, indicating that the enhanced SLAM system contributes to a more effective and engaging companion robot experience. This is vital for educational applications, where the companion robot must be perceived as reliable and responsive.
To delve deeper into the mathematical foundation, the error correction process involves iterative updates. The observation model probability is refined using Bayesian inference. For each time step \( k \), the robot’s belief state is updated as:
$$ bel(p_k) = \eta P(z_k | p_k, m) \int P(p_k | p_{k-1}, u_{k-1}) bel(p_{k-1}) dp_{k-1} $$
where \( \eta \) is a normalization constant, \( P(z_k | p_k, m) \) is the observation probability from the fused model, and \( P(p_k | p_{k-1}, u_{k-1}) \) is the motion model probability. This formulation ensures that the companion robot continuously integrates sensor data to maintain accurate localization.
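The study does not name the filter used to realize this recursion; a particle filter is a common choice, and the sketch below shows one update step under that assumption, with the motion model and fused observation likelihood passed in as callables.

```python
import numpy as np

def particle_filter_step(particles, weights, control, scan, rng,
                         motion_model, observation_likelihood):
    """One Bayes-filter update implemented with particles.

    motion_model(particle, control, rng) samples P(p_k | p_{k-1}, u_{k-1});
    observation_likelihood(particle, scan) evaluates the fused P(z_k | p_k, m).
    """
    # Prediction: propagate each particle through the motion model
    particles = np.array([motion_model(p, control, rng) for p in particles])

    # Correction: reweight by the fused laser-visual observation likelihood
    weights = weights * np.array([observation_likelihood(p, scan) for p in particles])
    weights /= np.sum(weights)          # eta, the normalization constant

    # Resample when the effective sample size collapses (a common heuristic threshold)
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))

    return particles, weights
```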
The laser-visual fusion also addresses specific challenges like transparent or dark surfaces, where laser radar may fail. By incorporating color features from the RGB-D camera, the companion robot can infer obstacles even when laser reflections are weak. The joint probability model accounts for such cases, reducing the incidence of false negatives. For example, the weight ratios \( q_1, q_2, q_3, q_4 \) were tuned based on empirical data to balance error types. In my experiments, optimal values were found to be \( q_1 = 0.4, q_2 = 0.3, q_3 = 0.2, q_4 = 0.1 \), emphasizing basic measurement errors while downplaying random noise.
Path planning for the companion robot utilizes the constructed grid map. Each cell in the map holds occupancy probabilities updated via the fused observations. The companion robot evaluates potential paths using cost functions that consider distance, obstacle proximity, and smoothness. The navigation algorithm, implemented in ROS, commands the robot to follow the optimal path while avoiding both static and dynamic obstacles. This capability is essential for a companion robot that must safely interact with children in classrooms or homes.
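One plausible form of such a cost function, sketched below, scores a candidate path by its length, the occupancy probability of the cells it crosses, and the sharpness of its turns; the weight values are illustrative, not the tuned parameters of the study.

```python
import numpy as np

def path_cost(path, occupancy, w_len=1.0, w_obs=2.0, w_smooth=0.5):
    """Score a candidate path on the occupancy grid (lower is better).

    path: (N, 2) array of grid cells; occupancy: 2D array of cell occupancy probabilities.
    """
    path = np.asarray(path, dtype=float)
    # 1) Path length: sum of segment lengths
    segs = np.diff(path, axis=0)
    length = np.sum(np.linalg.norm(segs, axis=1))

    # 2) Obstacle proximity: penalize cells with high occupancy probability
    cells = path.astype(int)
    prox = np.sum(occupancy[cells[:, 0], cells[:, 1]])

    # 3) Smoothness: penalize sharp heading changes between consecutive segments
    headings = np.arctan2(segs[:, 1], segs[:, 0])
    turn = np.sum(np.abs(np.diff(np.unwrap(headings))))

    return w_len * length + w_obs * prox + w_smooth * turn
```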
In terms of computational efficiency, the fused SLAM algorithm runs in real-time on the specified hardware. The companion robot’s CPU and GPU handle sensor data processing without significant latency. I monitored resource usage during simulations; the system consumed approximately 70% of CPU and 50% of GPU capacity, leaving room for additional tasks like speech recognition or gesture detection—common features in advanced companion robots.
The companion robot’s design also considers scalability. The SLAM framework can be extended to multi-robot systems where multiple companion robots collaborate in shared environments. By exchanging map data via ROS topics, these companion robots could coordinate activities, enhancing group learning scenarios in preschools. This aligns with the growing trend of social robots in education, where companion robots serve as tutors or playmates.
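As a sketch of the map exchange this extension would require, each robot could latch its occupancy grid onto a shared topic and subscribe to its peers'. The per-robot namespace layout (/robot_1/shared_map) and the relay from the local /map topic are assumptions for illustration.

```python
#!/usr/bin/env python
import rospy
from nav_msgs.msg import OccupancyGrid

def on_peer_map(msg):
    """Callback for a teammate's shared map; map-merging logic would go here."""
    rospy.loginfo("received peer map %dx%d", msg.info.width, msg.info.height)

if __name__ == "__main__":
    rospy.init_node("map_sharing")
    # Assumed topic layout: one /<robot>/shared_map topic per companion robot.
    map_pub = rospy.Publisher("/robot_1/shared_map", OccupancyGrid, queue_size=1, latch=True)
    rospy.Subscriber("/robot_2/shared_map", OccupancyGrid, on_peer_map)

    # Relay this robot's own SLAM map (typically published on /map) to the shared topic.
    rospy.Subscriber("/map", OccupancyGrid, map_pub.publish)
    rospy.spin()
```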
To validate the generalizability of my approach, I tested the companion robot in varied simulated layouts, including classrooms, play areas, and hallways. The fused laser-visual technique consistently outperformed baseline methods. For instance, in a complex maze with moving obstacles, the companion robot achieved a 90% success rate in reaching goals, compared to 75% with laser-only SLAM. These experiments reinforce the value of sensor fusion for reliable companion robot navigation.
Looking ahead, there are opportunities to further optimize the companion robot’s SLAM system. Deep learning could be integrated to improve feature extraction from color images, reducing reliance on manual parameter tuning. Additionally, adaptive error models that adjust weights dynamically based on environmental conditions could enhance robustness. Such advancements would make the companion robot even more versatile and intelligent.
In conclusion, this study demonstrates that fusing laser radar and color vision significantly improves SLAM performance for companion robots. The proposed system reduces localization errors to 1.8%, achieves high navigation success rates in both static and dynamic environments, and boosts user satisfaction to 92.8%. These outcomes highlight the potential of this companion robot design to effectively support preschool education by providing accurate, reliable, and engaging companionship. As robotics technology evolves, such fused SLAM approaches will be crucial for developing next-generation companion robots that seamlessly integrate into human-centric spaces.
