The increasing frequency of natural and man-made disasters, such as earthquakes, mine collapses, and fires, underscores the critical need for efficient search and rescue operations. The most crucial task post-disaster is the rapid location and extraction of survivors. Research indicates that the survival rate of individuals trapped under rubble decreases significantly after the first 72 hours. The disaster environment is often complex and hazardous, characterized by unstable structures, confined spaces, toxic atmospheres, and extreme temperatures, posing severe risks to human rescuers and search dogs. This creates a pressing demand for technological aids. Small, agile robotic systems can navigate these treacherous, narrow environments, utilizing an array of sensors to gather environmental data and locate signs of life, thereby improving rescue efficiency and reducing first-responder casualties. This research focuses on the design and development of a control system for an all-terrain hexapod bionic robot, intended for exploration and victim localization in disaster scenarios like structural collapses. The system aims to be economical, robust, and highly functional, merging autonomous capabilities with manual remote operation to create a versatile and practical robotic platform for complex terrain.
Introduction and Conceptual Framework
The field of legged robotics draws significant inspiration from nature, where hexapodal insects like cockroaches and stick insects demonstrate remarkable stability, agility, and adaptability across uneven ground. This biomimetic approach forms the foundation of our design. A hexapod bionic robot, with its inherent static stability (maintaining balance with at least three legs on the ground) and redundant degrees of freedom, is ideally suited for all-terrain navigation. The core challenge lies in synthesizing a robust mechanical design with an intelligent, multi-modal control system that can process sensory data and execute coordinated locomotor actions. Our design philosophy centers on creating a modular system comprising a lightweight mechanical platform, a distributed sensor suite for environmental perception, and a hierarchical control architecture for decision-making and actuation. The term bionic robot is central here, as it encapsulates our goal of emulating biological principles of locomotion and sensing within an engineered system.
Mechanical Structure and Locomotion Gait
The physical platform for our control system is a custom-designed hexapod bionic robot. Each of the six legs possesses three degrees of freedom (3-DOF), actuated by servo motors, providing the necessary dexterity for complex limb trajectories. This configuration allows the robot to adjust its body posture, step over obstacles, and navigate inclines. The leg structure is inspired by insect morphology, typically divided into coxa, femur, and tibia segments. The body and leg components are fabricated using advanced composite materials and 3D-printed parts, achieving an optimal balance between strength, durability, and low weight—a critical factor for energy efficiency and payload capacity.
The walking pattern, or gait, is fundamental to stable locomotion. For a hexapod bionic robot, the tripod gait is one of the most efficient and stable. In this gait, the six legs are divided into two alternating tripods: legs 1, 3, 5 and legs 2, 4, 6 move in synchrony. While one tripod is in the swing phase (moving forward), the other is in the stance phase (supporting the body and propelling it forward). The duty factor $\beta$ (the fraction of a cycle a leg is in stance phase) for a stable tripod gait is 0.5. The forward velocity $v$ can be approximated by:
$$ v = \frac{R}{T_{cycle}} $$
where $R$ is the stride length and $T_{cycle}$ is the duration of one complete gait cycle. The coordination of 18 servos to produce this and other gaits (e.g., wave gait for more careful movement) is managed by the motion control subsystem.
| Gait Type | Swing Phase Legs | Duty Factor ($\beta$) | Stability | Speed | Use Case |
|---|---|---|---|---|---|
| Tripod Gait | (1,3,5) / (2,4,6) | 0.5 | Statically Stable | High | Fast traversal on flat/uneven terrain |
| Tetrapod Gait | 4 legs support, 2 swing | ~0.67 | Very Stable | Medium | Rough, slippery, or inclined terrain |
| Wave Gait | Only one leg swings at a time | >0.8 | Maximally Stable | Very Low | Precise positioning, fragile environments |
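As a minimal sketch of how these parameters drive leg coordination, the scheduler below derives each leg's stance/swing state from a shared gait clock, the duty factor $\beta$, and per-leg phase offsets. The structure and function names are illustrative, not taken from the robot's firmware:

```cpp
#include <cmath>

// Gait parameters: a duty factor and a phase offset per leg (legs 0..5).
struct GaitParams {
    double beta;        // duty factor: fraction of the cycle spent in stance
    double offset[6];   // per-leg phase offset in [0, 1)
};

// Tripod gait: legs 0,2,4 (i.e. 1,3,5) share phase 0; legs 1,3,5 (i.e.
// 2,4,6) are shifted by half a cycle, giving the alternating tripods.
inline GaitParams tripodGait() {
    return GaitParams{0.5, {0.0, 0.5, 0.0, 0.5, 0.0, 0.5}};
}

// A leg is in stance during the first beta of its (phase-shifted) cycle
// and in swing for the remainder. t is time in seconds, T_cycle the gait
// cycle duration.
inline bool inStance(const GaitParams& g, int leg, double t, double T_cycle) {
    double phase = std::fmod(t / T_cycle + g.offset[leg], 1.0);
    return phase < g.beta;
}
```

With $\beta = 0.5$ and these offsets, exactly three legs support the body at any instant, which is the static-stability property the tripod gait relies on.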
Hardware Selection and Sensor Suite
The efficacy of the bionic robot is largely determined by its sensors and actuators. Careful selection was made to ensure reliability, environmental resistance, and performance in harsh conditions.
Actuator Selection: Servo Motors
Servo motors are the “muscles” of the bionic robot. We selected 18 DS3230 digital servos for their waterproof and dustproof design, metal gears for high torque, and compact size. Each servo provides precise angular control via Pulse Width Modulation (PWM) signals. The torque $\tau$ required for a joint is a function of the load and limb geometry. For a generic leg segment, the required torque at the joint can be estimated by a simplified static model considering the weight of distal segments and any external force $F_{ext}$:
$$ \tau \approx m \cdot g \cdot l \cdot \cos(\theta) + F_{ext} \cdot L $$
where $m$ is the mass of the distal links, $g$ is gravity, $l$ is the distance to the center of mass, $\theta$ is the joint angle, and $L$ is the moment arm for the external force.
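The static model above can be sketched in code as follows. The helper names, the example numbers, and the conversion to the kg-cm units quoted in servo datasheets are illustrative assumptions, not measured values from the robot:

```cpp
#include <cmath>

// Static torque estimate for one joint, per the simplified model:
// tau = m*g*l*cos(theta) + F_ext*L.
// m: mass of the distal links (kg), l: distance to their centre of mass (m),
// theta: joint angle (rad), F_ext: external force (N), L: its moment arm (m).
// Returns torque in N*m.
inline double jointTorque(double m, double l, double theta,
                          double F_ext, double L) {
    const double g = 9.81;  // gravitational acceleration, m/s^2
    return m * g * l * std::cos(theta) + F_ext * L;
}

// Convert N*m to the kg-cm units used in servo spec sheets
// (1 kg-cm = 0.0981 N*m), so a result can be checked against a stall
// torque rating such as 25 kg-cm.
inline double toKgCm(double torque_Nm) { return torque_Nm / 0.0981; }
```

A worked check of this kind, leg by leg in the worst-case posture ($\theta = 0$, gravity fully against the joint), is how one confirms the selected servos carry an adequate torque margin.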
| Parameter | Specification |
|---|---|
| Operating Voltage | 6.0V – 8.4V |
| Stall Torque (at 7.4V) | 25 kg-cm |
| Speed (at 7.4V) | 0.16 sec/60° |
| Weight | 62g |
| Interface | PWM (Standard 3-pin) |
| Protection | Waterproof & Dustproof |
Perception Sensor Suite
The robot is equipped with a multi-sensor perception suite to form a comprehensive understanding of its environment, a key feature for an autonomous bionic robot.
- OpenMV Cam (Visual Perception): This acts as the primary “eye.” It is a programmable microcontroller with a camera, capable of on-board computer vision tasks like color tracking, blob detection, line following, and simple object recognition using pre-trained Haar cascades or custom neural networks (when optimized). Its output drives autonomous behaviors and provides a video stream for remote operation.
- LiDAR (A0602) for Mapping: A 2D laser rangefinder is used for simultaneous localization and mapping (SLAM) in a planar slice. It provides accurate distance measurements over a 360° field of view, allowing the robot to build an occupancy grid map of its surroundings. This map is crucial for global path planning and obstacle avoidance beyond the immediate field of view of other sensors.
- Ultrasonic Sensor (HY-SRF05) for Proximity: Used for medium-range obstacle detection directly in the robot’s path. It works on the time-of-flight principle. The distance $d$ to an object is calculated as:
$$ d = \frac{v_{sound} \cdot \Delta t}{2} $$
where $v_{sound}$ is the speed of sound (~343 m/s at 20°C) and $\Delta t$ is the time between pulse emission and echo reception.
- Infrared (IR) Sensors for Cliff Detection: Mounted on the underside of the body or feet, these sensors detect the presence of a solid surface. A lack of reflected IR signal indicates a cliff or hole, triggering an avoidance routine to prevent falls.
- Infrared Thermopile (GY-MCU90615) for Temperature Sensing: This non-contact sensor measures ambient temperature and can detect heat signatures. It is vital for assessing environmental hazards (fire) and potentially locating warm-bodied survivors.
| Sensor Module | Primary Function | Range/Accuracy | Data Type |
|---|---|---|---|
| OpenMV Cam | Visual feedback, object/color recognition, video streaming | Field of View ~70° | Image frame, coordinates, metadata |
| 2D LiDAR (A0602) | Environment mapping, long-range obstacle detection | ~12m range, ±30mm accuracy | Point cloud (distance, angle) |
| Ultrasonic (HY-SRF05) | Short-to-medium range frontal obstacle avoidance | 2cm – 450cm, ±3mm | Distance (scalar) |
| Infrared Proximity | Cliff/fall prevention, ground contact detection | 2cm – 30cm (adjustable) | Binary/Digital (Obstacle/No Obstacle) |
| IR Thermopile | Ambient & spot temperature measurement | -40°C to 115°C, ±0.5°C | Temperature (scalar) |
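The time-of-flight relation for the ultrasonic sensor reduces to a small conversion function. The sketch below assumes the sensor reports the echo delay as a pulse width in microseconds and uses a standard temperature-compensated speed-of-sound approximation ($v \approx 331.3 + 0.606\,T$ m/s); pin-level triggering and echo timing are omitted:

```cpp
// Convert an ultrasonic echo pulse width (microseconds) to distance (cm),
// per d = v_sound * dt / 2. The division by 2 accounts for the sound
// travelling out to the obstacle and back.
inline double echoToDistanceCm(double echo_us, double temp_c = 20.0) {
    double v_sound = 331.3 + 0.606 * temp_c;  // m/s, temperature-compensated
    double dt = echo_us * 1e-6;               // pulse width in seconds
    return 100.0 * v_sound * dt / 2.0;        // one-way distance in cm
}
```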
System Architecture and Control Design
The control system for this hexapod bionic robot follows a hierarchical and modular architecture. The flow of information mimics a biological nervous system: sensors (sensory organs) gather data, which is processed by the main controller (brain), which then sends high-level commands to a dedicated motion controller (spinal cord / central pattern generator), which finally generates the precise timing signals for the servo actuators (muscles).

1. Central Processing Unit: Arduino Mega 2560
This microcontroller serves as the primary brain of the bionic robot. Its key roles include:
- Aggregating and fusing data from the ultrasonic, infrared, and thermopile sensors.
- Executing high-level reactive decision logic, for example: if (ultrasonic_distance < 25cm) { stop(); scan_with_servo(); turn_toward_clearest_path(); }.
- Translating decisions into gait commands for the dedicated motion control board.
- Managing the Bluetooth and WiFi links for teleoperation and telemetry.
2. Motion Control Subsystem: Lobot Motion Control Board
This dedicated controller acts as the low-level gait generator for the bionic robot. It stores pre-programmed “action groups”—sequences of synchronized servo positions that constitute fundamental motions like “step forward with tripod gait,” “turn on the spot,” “stand up,” or “wave a leg.” The Arduino sends simple ASCII commands (e.g., “PLAY_GROUP 1”) to this board, which then handles the complex, timing-critical task of interpolating servo angles and outputting 18 coordinated PWM signals. This offloads computational burden from the main CPU and ensures smooth, jitter-free motion.
3. Vision and Teleoperation Subsystem: OpenMV with WiFi
The OpenMV module operates semi-independently, processing visual data on its own core. Its functionalities are twofold:
- Autonomous perception: on-board color tracking, blob detection, and line following, whose results (e.g., target coordinates) are passed to the Arduino to drive vision-guided behaviors.
- Teleoperation support: streaming live video over the WiFi link so a remote operator can see through the robot's "eye."
4. Remote Human-Robot Interaction: Bluetooth & WiFi
Dual wireless channels provide flexible control:
- Bluetooth: a low-latency, short-range link used for direct manual control commands.
- WiFi: a higher-bandwidth link carrying the live video stream and sensor telemetry, enabling supervisory control based on what the robot sees.
This hybrid approach allows the operator to switch between direct manual control and supervisory control based on live video and sensor data.
Kinematic Modeling and Gait Planning
To achieve precise foot placement and body movement, we employ kinematic models. The forward kinematics for a single leg defines the position of the foot (end-effector) $P_{foot} = (x, y, z)$ relative to the body based on the three joint angles $(\theta_1, \theta_2, \theta_3)$.
Using the Denavit-Hartenberg (D-H) convention, the transformation from the coxa frame to the foot frame is:
$$ T_{3}^{0} = A_{1}^{0}(\theta_1) \cdot A_{2}^{1}(\theta_2) \cdot A_{3}^{2}(\theta_3) $$
where each $A_{i}^{i-1}$ is a homogeneous transformation matrix. For a simplified model with link lengths $L_1$ (coxa), $L_2$ (femur), and $L_3$ (tibia), the foot position can be calculated as:
$$
\begin{aligned}
x &= L_1 + L_2 \cos(\theta_2) + L_3 \cos(\theta_2 + \theta_3) \\
y &= 0 \quad \text{(for leg in nominal plane)} \\
z &= -L_2 \sin(\theta_2) - L_3 \sin(\theta_2 + \theta_3)
\end{aligned}
$$
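A direct transcription of these forward-kinematics equations, generalized by the coxa yaw angle $\theta_1$, might look like the sketch below (angles in radians; units and sign conventions follow the equations above):

```cpp
#include <cmath>

// Foot position in the leg's coordinate frame.
struct FootPos { double x, y, z; };

// Forward kinematics for one 3-DOF leg. L1 (coxa), L2 (femur), L3 (tibia)
// are link lengths; t1 rotates the whole leg about the vertical coxa axis,
// generalising the in-plane equations (y = 0) to any yaw.
inline FootPos forwardKinematics(double L1, double L2, double L3,
                                 double t1, double t2, double t3) {
    // Radial reach in the leg's vertical plane.
    double r = L1 + L2 * std::cos(t2) + L3 * std::cos(t2 + t3);
    return FootPos{
        r * std::cos(t1),                             // x
        r * std::sin(t1),                             // y
        -L2 * std::sin(t2) - L3 * std::sin(t2 + t3)   // z (down is negative femur/tibia dip)
    };
}
```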
Inverse kinematics is used for gait planning: given a desired foot trajectory $(x_d, y_d, z_d)$ in the leg’s coordinate frame, we solve for the required joint angles. For our 3-DOF leg in a primarily 2D plane of motion (sagittal plane for lifting/swinging), the solution is:
$$
\begin{aligned}
\theta_1 &= \arctan2(y_d,\, x_d) \\
r &= \sqrt{x_d^2 + y_d^2} - L_1 \\
D &= \frac{r^2 + z_d^2 - L_2^2 - L_3^2}{2 L_2 L_3} \\
\theta_3 &= \arctan2\left(\pm\sqrt{1 - D^2},\, D\right) \\
\theta_2 &= \arctan2(-z_d,\, r) - \arctan2\left(L_3 \sin(\theta_3),\, L_2 + L_3 \cos(\theta_3)\right)
\end{aligned}
$$
Here $r$ is the horizontal reach of the foot beyond the coxa link, and the sign chosen for $\theta_3$ selects the knee-up or knee-down configuration.
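A sketch of standard 3-DOF leg inverse kinematics, consistent with the forward-kinematics sign convention given earlier, selecting the knee-down branch of $\theta_3$. It assumes the target lies within the leg's reachable workspace ($|D| \le 1$); a production version would clamp or reject unreachable targets:

```cpp
#include <cmath>

struct JointAngles { double t1, t2, t3; };

// Inverse kinematics for one 3-DOF leg: desired foot position (xd, yd, zd)
// in the leg frame -> joint angles in radians. Knee-down branch (t3 <= 0).
inline JointAngles inverseKinematics(double L1, double L2, double L3,
                                     double xd, double yd, double zd) {
    double t1 = std::atan2(yd, xd);
    double r  = std::sqrt(xd * xd + yd * yd) - L1;  // reach beyond the coxa
    double D  = (r * r + zd * zd - L2 * L2 - L3 * L3) / (2.0 * L2 * L3);
    double t3 = std::atan2(-std::sqrt(1.0 - D * D), D);  // knee-down branch
    double t2 = std::atan2(-zd, r)
              - std::atan2(L3 * std::sin(t3), L2 + L3 * std::cos(t3));
    return JointAngles{t1, t2, t3};
}
```

A quick sanity check is a round trip: feed the returned angles back through the forward kinematics and confirm the original foot position is recovered.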
These calculations are performed offline to generate the keyframes for action groups stored on the Lobot board. The swing trajectory for a foot is often defined by a parametric curve (like a cycloid or a simple parabola) to ensure smooth lifting and planting.
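As one concrete example of such a parametric swing curve, the sketch below combines linear advance along the stride with a parabolic lift profile of clearance $h$; the interface is an illustrative assumption, not the keyframe format of the motion board:

```cpp
// A point on the swing trajectory: x along the direction of travel,
// z the lift height above the ground plane.
struct SwingPoint { double x, z; };

// Parabolic swing curve: the foot moves linearly from x0 to x1 while
// 4*h*s*(1-s) lifts it to clearance h at mid-swing and plants it again.
// s in [0, 1] is the normalised swing phase.
inline SwingPoint swingTrajectory(double x0, double x1, double h, double s) {
    return SwingPoint{
        x0 + (x1 - x0) * s,          // linear advance along the stride
        4.0 * h * s * (1.0 - s)      // parabola: 0 at s=0 and s=1, h at s=0.5
    };
}
```

Sampling this curve at a fixed number of phases and running each sample through the inverse kinematics yields the joint-angle keyframes of an action group.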
Integrated System Workflow and Autonomous Functions
The integrated operation of this bionic robot can be described through its primary autonomous functions:
1. Autonomous Exploration with Mapping: The robot is placed in an unknown area. The LiDAR begins scanning, and the Arduino (or a connected companion computer) runs a basic SLAM algorithm (e.g., based on iterative closest point or grid mapping) to build a 2D occupancy map. The robot uses a simple frontier-based exploration algorithm, planning paths to the nearest unexplored boundary on its map while using the ultrasonic and IR sensors for real-time local obstacle and cliff avoidance.
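The frontier-detection step at the heart of this exploration strategy can be sketched as a scan over the occupancy grid: a frontier cell is a free cell with at least one unknown 4-connected neighbour. The cell encoding and grid layout here are illustrative assumptions, not the robot's actual map format:

```cpp
#include <vector>
#include <utility>

// Occupancy-grid cell states, in the usual convention.
enum Cell { UNKNOWN, FREE, OCCUPIED };

// Returns the (row, col) of every frontier cell: a FREE cell adjacent
// (4-connected) to at least one UNKNOWN cell. The exploration planner
// would then drive toward the nearest frontier.
std::vector<std::pair<int, int>> findFrontiers(
        const std::vector<std::vector<Cell>>& grid) {
    std::vector<std::pair<int, int>> frontiers;
    int rows = (int)grid.size();
    int cols = rows ? (int)grid[0].size() : 0;
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            if (grid[r][c] != FREE) continue;
            for (int k = 0; k < 4; ++k) {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                    grid[nr][nc] == UNKNOWN) {
                    frontiers.push_back({r, c});
                    break;  // one unknown neighbour is enough
                }
            }
        }
    return frontiers;
}
```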
2. Vision-Guided Navigation: The OpenMV cam is configured to detect a specific color (e.g., red tape marking a safe path) or to follow a line. The visual processing outputs a deviation error. This error is fed into a Proportional-Integral-Derivative (PID) controller implemented on the Arduino:
$$ u(t) = K_p e(t) + K_i \int_{0}^{t} e(\tau) d\tau + K_d \frac{de(t)}{dt} $$
where $u(t)$ is the turn rate command, $e(t)$ is the visual deviation error, and $K_p$, $K_i$, $K_d$ are tuning gains. The output $u(t)$ modulates the basic forward gait to keep the robot centered on the path.
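A discrete-time implementation of this PID law, as it might run in the main control loop, is sketched below; the anti-windup clamp on the integral term is an added practical detail, not part of the formula:

```cpp
// Discrete PID controller: e is the visual deviation error from the
// camera, dt the loop period in seconds, and the return value the
// turn-rate command u(t). Gains are tuned empirically.
struct Pid {
    double kp, ki, kd;
    double integral, prev_error;

    Pid(double p, double i, double d)
        : kp(p), ki(i), kd(d), integral(0.0), prev_error(0.0) {}

    double update(double e, double dt) {
        integral += e * dt;
        // Simple anti-windup: bound the accumulated integral.
        if (integral > 100.0) integral = 100.0;
        if (integral < -100.0) integral = -100.0;
        double derivative = (e - prev_error) / dt;
        prev_error = e;
        return kp * e + ki * integral + kd * derivative;
    }
};
```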
3. Multi-Sensor Collision and Fall Prevention: This is a reactive safety layer running continuously at a high frequency in the main control loop. The logic can be summarized as:
Loop:
Read ultrasonic_distance (U_d)
Read IR_left, IR_right, IR_center (binary)
Read IMU_data (pitch, roll) // Assumed from an Inertial Measurement Unit
If (U_d < SAFE_DISTANCE): Trigger "Obstacle Avoidance Maneuver"
Else If (any IR sensor reads FALSE, i.e. no surface detected): Trigger "Back Up & Re-orient"
Else If (abs(pitch) > 45° || abs(roll) > 45°): Trigger "Fall Recovery Procedure"
Else: Continue current action.
This multi-layered approach ensures the bionic robot can handle a variety of terrain hazards.
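The reactive logic above can be condensed into a pure decision function, which keeps it testable independently of sensor I/O. The threshold constants mirror the pseudocode, with SAFE_DISTANCE assumed to be the 25 cm used in the obstacle-avoidance example; the actual behaviours (maneuvers, recovery) live elsewhere:

```cpp
#include <cmath>

// Behaviours the safety layer can trigger, in priority order.
enum Action { CONTINUE, AVOID_OBSTACLE, BACK_UP_REORIENT, FALL_RECOVERY };

const double SAFE_DISTANCE_CM = 25.0;  // assumed, per the avoidance example
const double TILT_LIMIT_DEG = 45.0;

// One pass of the reactive safety layer: sensor readings in, action out.
// IR flags are true when a surface is detected beneath that sensor.
Action safetyCheck(double ultrasonic_cm,
                   bool ir_left, bool ir_center, bool ir_right,
                   double pitch_deg, double roll_deg) {
    if (ultrasonic_cm < SAFE_DISTANCE_CM) return AVOID_OBSTACLE;
    if (!ir_left || !ir_center || !ir_right) return BACK_UP_REORIENT;  // cliff
    if (std::fabs(pitch_deg) > TILT_LIMIT_DEG ||
        std::fabs(roll_deg) > TILT_LIMIT_DEG) return FALL_RECOVERY;
    return CONTINUE;
}
```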
Conclusion and Significance
This research presents a comprehensive design for the control system of an all-terrain hexapod bionic robot. By integrating biomimetic mechanical design with a layered architecture of perception, decision-making, and actuation, we have developed a platform capable of operating in complex, unstructured environments. The use of a dedicated motion controller ensures reliable and smooth locomotion, while the fusion of LiDAR, vision, and proximity sensors provides robust environmental awareness for both autonomous behaviors and human teleoperation. The dual wireless control channels (Bluetooth and WiFi) offer flexible and intuitive human-robot interaction. The emphasis on lightweight construction and waterproof components enhances the robot’s practicality for real-world search and rescue applications. Future work will focus on implementing more advanced SLAM and path planning algorithms, improving energy efficiency, and integrating more sophisticated manipulators or tools for interaction with the environment. The developed hexapod bionic robot demonstrates significant potential as a valuable tool for emergency response, hazardous environment inspection, and exploration, ultimately aiming to reduce risk to human life in disaster scenarios.
