In modern infrastructure, cable tunnels play a critical role in power distribution, yet they pose significant risks to human inspectors due to enclosed spaces, high humidity, poor ventilation, heat emitted by equipment, and the presence of combustible or toxic gases. Traditional manual inspections not only endanger personnel but also suffer from inefficiency and inconsistent reliability. To address these challenges, we have developed a solution that uses quadruped robots, commonly referred to as robot dogs, for autonomous inspection tasks. Our approach integrates advanced sensing, navigation, and Internet of Things (IoT) technologies to create a robust system that reduces human intervention and enhances safety. This article details the design, implementation, and experimental validation of our quadruped robot system in cable tunnel environments, emphasizing key functionalities such as map building, autonomous navigation, environmental monitoring, defect detection, and real-time data communication. By combining LiDAR, inertial measurement units (IMUs), thermal imaging cameras, and gas sensors, our robot dog demonstrates strong adaptability on unstructured terrain, making it a promising tool for industrial applications.
The core of our system is a custom-built quadruped robot equipped with high-performance brushless motors, planetary reduction gearboxes, and dual encoders for precise force control. This enables stable locomotion on varied surfaces such as grass, sand, gravel, slopes, and stairs. For perception and data acquisition, the robot dog carries a dual-spectrum thermal imaging pan-tilt camera, temperature and humidity sensors, an IMU, a LiDAR, and a Wi-Fi module. These components work in synergy to perform complex tasks in cable tunnels, where variable lighting and structurally uniform passages present unique challenges. Our implementation focuses on achieving full autonomy through algorithms for mapping, navigation, and data handling, while the backend control center leverages the open-source ThingsBoard platform for device management, data visualization, and remote control. Throughout this article, we explore the technical details of the system, supported by mathematical formulations, tables summarizing key parameters, and empirical results from field tests.

Map building is a foundational step for autonomous navigation in cable tunnels. We employ the Fast Direct LiDAR-Inertial Odometry (FAST-LIO2) algorithm, which offers rapid, robust, and accurate mapping by tightly coupling 3D LiDAR data with IMU measurements through an iterated extended Kalman filter (IEKF). Rather than extracting features from point clouds, the method registers raw points directly to the map, preserving subtle environmental detail for improved matching precision. Map data is maintained dynamically in an incremental k-d tree (ikd-Tree), which supports point insertion, deletion, and rebalancing, thereby accelerating point cloud searches compared with traditional k-d trees or octrees. The mapping process proceeds as follows: the LiDAR accumulates point clouds over 10-100 ms intervals to form a frame, which is aligned with the existing map via the IEKF for state estimation. The ikd-Tree organizes the global points and discards distant historical points that fall outside the LiDAR's field of view, keeping updates real-time. Mathematically, state estimation minimizes the residuals between the new scan and map points, and the optimized pose is used to insert downsampled points into the global map: $$ r = \min_{T} \sum_{i=1}^{n} \| T \cdot p_i - q_i \|^2 $$ where \( p_i \) is a point in the new scan, \( q_i \) is the corresponding map point, and \( T \) is the scan-to-map transformation estimated by the IEKF. This approach enables our quadruped robot to generate detailed 3D point cloud maps of tunnel environments, facilitating reliable navigation.
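To make the registration step concrete, the following Python sketch computes this scan-to-map residual for a candidate pose. It is a simplified illustration, not the FAST-LIO2 implementation: a SciPy `cKDTree` stands in for the ikd-Tree, correspondences are plain nearest neighbors, and the real algorithm additionally fits local planes and iterates the IEKF state update.

```python
# Simplified scan-to-map residual (sketch only). A SciPy cKDTree stands
# in for FAST-LIO2's incremental ikd-Tree.
import numpy as np
from scipy.spatial import cKDTree

def scan_to_map_residual(scan_points, map_points, T):
    """Sum of squared distances between transformed scan points and
    their nearest map neighbors, for a 4x4 homogeneous transform T."""
    tree = cKDTree(map_points)                        # map organized for NN search
    homo = np.hstack([scan_points, np.ones((len(scan_points), 1))])
    transformed = (T @ homo.T).T[:, :3]               # candidate pose applied to scan
    dists, _ = tree.query(transformed)                # nearest map point per scan point
    return np.sum(dists ** 2)                         # residual r minimized by the IEKF
```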
For autonomous navigation and inspection point photography, we fuse camera and LiDAR data to overcome the limitations of purely LiDAR-based or purely visual methods in tunnel settings. Cable tunnels often have uniform structures and variable lighting, which can degrade LiDAR-based pose estimation or visual feature recognition. By dynamically adjusting the weights of a unified loss function, our system prioritizes visual cues in well-lit areas (e.g., following a red pipeline guide line) and LiDAR measurements in dim regions. The navigation loss is defined as: $$ L = \alpha L_{\text{visual}} + \beta L_{\text{LiDAR}} $$ where \( \alpha \) and \( \beta \) are adaptive weights driven by environmental cues. Inspection points are set at high-voltage pipeline joints, identified by their raised (protrusion) features. Upon detecting a point, the robot dog halts, captures three images at 30 cm intervals with the pan-tilt camera, and uploads them to the control center. This process limits cumulative error by correcting navigation drift through visual recognition. The table below summarizes the key navigation parameters and their roles in the fusion algorithm; a weighting sketch follows the table.
| Parameter | Description | Role in Fusion |
|---|---|---|
| Visual Weight (\( \alpha \)) | Emphasis on camera data for line following | Increased in bright areas |
| LiDAR Weight (\( \beta \)) | Emphasis on laser data for obstacle avoidance | Increased in low-light conditions |
| Residual Threshold | Error tolerance for pose estimation | Determines convergence in IEKF |
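As an illustration of the adaptive weighting, the sketch below derives \( \alpha \) and \( \beta \) from mean image brightness. The sigmoid mapping, the brightness threshold, and the constraint \( \alpha + \beta = 1 \) are assumptions made for this example; the deployed system may use different cues and tuning.

```python
# Illustrative adaptive weighting; threshold and sharpness are
# hypothetical tuning parameters, not values from the deployed system.
import math

def fusion_weights(mean_brightness, bright_thresh=100.0, sharpness=0.05):
    """Map mean image brightness (0-255) to visual/LiDAR weights:
    alpha rises in well-lit areas, beta dominates in dim regions."""
    alpha = 1.0 / (1.0 + math.exp(-sharpness * (mean_brightness - bright_thresh)))
    beta = 1.0 - alpha                         # assume weights sum to one
    return alpha, beta

def navigation_loss(l_visual, l_lidar, mean_brightness):
    alpha, beta = fusion_weights(mean_brightness)
    return alpha * l_visual + beta * l_lidar   # L = alpha*L_visual + beta*L_LiDAR
```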
Environmental monitoring and data reporting are critical for assessing tunnel conditions. Our quadruped robot continuously collects temperature, humidity, and gas concentration data from its onboard sensors. To cope with unstable network connectivity in tunnels, we implement a robust upload strategy backed by a local SQLite database for offline storage. When online, data is transmitted in real time to the control center over MQTT (version 5.0); when offline, data is cached locally and synchronized upon reconnection. This store-and-forward design prevents data loss and improves the reliability of the inspection process. The data flow can be modeled as: $$ D_{\text{upload}} = \begin{cases} D_{\text{real-time}} & \text{if online} \\ D_{\text{cache}} & \text{if offline} \end{cases} $$ where \( D_{\text{real-time}} \) is sent immediately and \( D_{\text{cache}} \) is stored and transmitted later. Inspection point images and historical videos are uploaded with the same strategy, using a MinIO-based distributed file system for large files. The table below lists the sensor types and data handled by the robot dog; a store-and-forward sketch follows the table.
| Sensor Type | Data Collected | Upload Frequency |
|---|---|---|
| Thermal Imaging Camera | Heat distribution images | Per inspection point |
| Temperature/Humidity Sensor | Ambient conditions | Continuous (1 Hz) |
| Gas Sensor | Combustible/toxic gas levels | Continuous (1 Hz) |
| IMU | Orientation and acceleration | Integrated with LiDAR |
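A minimal sketch of this store-and-forward strategy, assuming the paho-mqtt 1.x client API; the broker address, topic, and table schema are placeholders, not the deployed configuration.

```python
# Store-and-forward telemetry upload: publish when online, cache to
# SQLite when offline, replay the cache on reconnection.
import json, sqlite3, time
import paho.mqtt.client as mqtt

db = sqlite3.connect("telemetry_cache.db")
db.execute("CREATE TABLE IF NOT EXISTS cache (ts REAL, payload TEXT)")

client = mqtt.Client(protocol=mqtt.MQTTv5)     # MQTT 5.0 session
client.connect("control-center.local", 1883)   # placeholder broker address
client.loop_start()

def report(reading: dict):
    payload = json.dumps(reading)
    if client.is_connected():
        client.publish("tunnel/robot1/telemetry", payload, qos=1)  # D_real-time
    else:
        db.execute("INSERT INTO cache VALUES (?, ?)", (time.time(), payload))
        db.commit()                             # D_cache: persist until reconnection

def flush_cache():
    """Replay cached rows once connectivity returns, then clear them."""
    if not client.is_connected():
        return
    for ts, payload in db.execute("SELECT ts, payload FROM cache"):
        client.publish("tunnel/robot1/telemetry", payload, qos=1)
    db.execute("DELETE FROM cache")
    db.commit()
```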
The control center, built on the open-source ThingsBoard platform, serves as the hub for remote management, data visualization, and alerting. It supports multiple users through a B/S architecture, allowing inspectors to monitor real-time video streams, review historical data, and control the quadruped robot via web interfaces. Key functionalities include device management using token-based authentication, telemetry data storage in PostgreSQL, and defect identification through AI-based image analysis. For real-time video streaming, we deploy RTSPtoWeb technology on the robot dog, which converts RTSP feeds into web-compatible formats using JSMpeg for low-latency playback. This enables multiple users to view live footage simultaneously within the same subnet. However, cross-subnet access remains a challenge for future improvement. The video streaming process involves the robot dog pushing RTSP streams, which are transcoded and distributed via WebSockets, achieving delays of around 50 ms. The equation for stream latency is: $$ L_{\text{stream}} = T_{\text{encode}} + T_{\text{transmit}} + T_{\text{decode}} $$ where each component is optimized for minimal lag.
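On the telemetry side, ThingsBoard's device MQTT API takes the device access token as the MQTT username and accepts JSON on the standard `v1/devices/me/telemetry` topic. A minimal sketch, with placeholder host and token:

```python
# Push one telemetry sample to ThingsBoard via its device MQTT API.
import json
import paho.mqtt.client as mqtt

ACCESS_TOKEN = "ROBOT_DOG_TOKEN"            # per-device token from ThingsBoard
client = mqtt.Client()
client.username_pw_set(ACCESS_TOKEN)        # token-based authentication
client.connect("thingsboard.local", 1883)   # control-center host (placeholder)
client.loop_start()

telemetry = {"temperature": 24.6, "humidity": 71.2, "gas_ppm": 3.1}
info = client.publish("v1/devices/me/telemetry", json.dumps(telemetry), qos=1)
info.wait_for_publish()                     # ensure delivery before disconnect
client.disconnect()
```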
For handling large files such as inspection videos, we developed a minio-upload-srv service with resumable upload capabilities. The service uses MD5 checksums to track file chunks, allowing interrupted uploads to resume from the last completed chunk. The process involves: (1) computing the file MD5, (2) querying for an existing upload, (3) initializing a new task if none exists, (4) uploading chunks to pre-signed URLs, and (5) merging the chunks upon completion. This keeps data transfer efficient even over unreliable networks. Defect recognition is performed by deep learning models that analyze uploaded inspection point images for typical cable flaws such as peeling or leakage. When a defect is detected, the system generates an annotated composite of the original, recognized, and marked images and raises an alarm for inspector review. Defect identification uses a convolutional neural network (CNN) trained with the cross-entropy loss: $$ L_{\text{defect}} = -\sum_{c} y_c \log(\hat{y}_c) $$ where \( y_c \) is the true label and \( \hat{y}_c \) the predicted probability for class \( c \).
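The client-side flow of steps (1)-(5) can be sketched as below. The endpoint paths and response fields (`/tasks`, `uploaded_chunks`, `presigned_urls`) are hypothetical illustrations of the minio-upload-srv interface, not its actual API.

```python
# Resumable chunked upload: compute MD5, skip chunks the server already
# has, PUT the rest to pre-signed URLs, then request a merge.
import hashlib, requests

CHUNK = 5 * 1024 * 1024          # 5 MiB chunks (illustrative size)
SRV = "http://upload-srv.local"  # hypothetical service base URL

def file_md5(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(CHUNK), b""):
            h.update(block)                                  # step (1)
    return h.hexdigest()

def resumable_upload(path):
    md5 = file_md5(path)
    task = requests.post(f"{SRV}/tasks", json={"md5": md5}).json()  # steps (2)+(3)
    done = set(task["uploaded_chunks"])                      # chunks already stored
    with open(path, "rb") as f:
        for idx, url in enumerate(task["presigned_urls"]):   # step (4)
            data = f.read(CHUNK)                             # read keeps offset in sync
            if idx in done:
                continue                                     # resume: skip finished chunks
            requests.put(url, data=data)
    requests.post(f"{SRV}/tasks/{md5}/merge")                # step (5)
```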
Field experiments were conducted in an operational cable tunnel to validate the system's performance. The quadruped robot autonomously navigated over 200 meters, from the tunnel entrance to a maintenance platform, while constructing precise 3D maps with the FAST-LIO2 algorithm. During inspections, the robot dog accurately identified inspection points, captured images, collected environmental data, and uploaded the information to the control center. Real-time monitoring through the web interface allowed inspectors to view live video alongside telemetry such as temperature and gas levels. The tests demonstrated that the quadruped robot can partially replace human inspectors, reducing risk and improving efficiency. Benefits included stronger safety management, earlier defect detection, and cost savings over the tunnel's lifecycle. The table below summarizes key performance metrics from the experiments.
| Metric | Value | Description |
|---|---|---|
| Navigation Accuracy | < 10 cm error | Deviation from planned path |
| Data Upload Success Rate | 99.5% | Percentage of successful transmissions |
| Defect Detection Accuracy | 95% | Based on AI model validation |
| Battery Life | 2-3 hours | Duration per inspection cycle |
In conclusion, our quadruped robot system represents a significant advancement in cable tunnel inspection, combining robust mechanical design with intelligent algorithms and IoT integration. The robot dog’s ability to adapt to challenging environments, perform autonomous tasks, and communicate seamlessly with a control center highlights its potential for broader applications in urban security, disaster response, and smart facility management. However, limitations such as subnet restrictions for real-time video and the need for multi-robot coordination warrant further research. Future work will focus on optimizing network protocols for cross-subnet video streaming and enhancing the quadruped robot’s capabilities for collaborative operations. Through continuous improvement, we believe that quadruped robots will play an increasingly vital role in automating hazardous industrial inspections, ultimately safeguarding human workers and boosting operational reliability.
