AR-Enabled Motion Monitoring and Interaction System for the China Robot

In recent years, mobile robots have gained widespread application across various fields such as industrial manufacturing, warehousing and logistics, medical services, and specialized operations due to their capabilities in autonomous navigation, environmental perception, and dynamic decision-making. However, ensuring that these robots can accurately, swiftly, and safely reach target positions to perform predetermined actions and interact with dynamic environments remains a challenging task. Efficient and convenient motion analysis, covering aspects like speed, acceleration, workspace, and stability, plays a crucial role in helping robots achieve these goals. Traditional motion analysis methods primarily focus on performance optimization and verification during the design phase, often relying on kinematic modeling and gait planning. While these approaches provide valuable insights, they lack real-time tracking and analysis capabilities in actual working scenarios, and the presented robot states are disconnected from the physical environment, hindering real-time interaction with the physical robot.

Augmented reality (AR) technology offers a promising solution to these limitations by enabling the overlay of virtual information onto the real world. This facilitates intuitive and interactive motion analysis. Previous research has explored AR-based visualization platforms for robots, such as wheeled manipulators or indoor multi-robot systems, often developed on head-mounted devices. However, for legged mobile robots with complex mechanical structures and diverse working environments, convenient, real-time, and multi-dimensional motion analysis remains an area requiring further investigation. A key challenge lies in achieving dynamic tracking and registration for objects with internal relative motion, like legged robots, without excessive computational load. Existing methods based on sensors, markers, deep learning, or natural features each have drawbacks, such as poor scene adaptability, intrusive markers, or high computational demands.

To address these challenges, we propose an AR-enabled motion monitoring and interaction system for mobile robots. This system aims to provide comprehensive, real-time motion analysis directly within the robot’s working environment. Our main contributions include: 1) A dynamic tracking and registration method that combines visual natural features and data communication, enabling lightweight tracking of objects with internal relative motion. 2) A multi-faceted motion analysis strategy based on a virtual-physical synchronization model. 3) The integration of these methods to achieve real-time state monitoring, fault prediction, environmental adaptability analysis, human-robot collaboration safety assurance, and motion preview and teaching for mobile robots.

The system framework is designed to leverage the virtual-physical fusion characteristics of AR technology. It combines a dynamic tracking registration technique with a human-robot interaction platform, integrating functions such as state detection and fault prediction, environmental adaptability analysis, workspace safety analysis, and motion navigation. By establishing a precise mapping between virtual models and the physical environment using visual algorithms and data communication, the system synchronizes the robot's real-time motion information and workspace data with its physical body. This synchronization enables motion analysis that is seamlessly integrated with the mobile robot.

For robot motion state monitoring, the system provides operators with real-time motion previews and state feedback. Before task execution, the system displays the workspace and a preview model. During operation, it visually presents information such as foot-end acceleration, stability margin, and operational range through curves, numerical values, and graphical annotations. This enhances the operator’s understanding of the robot’s motion state, stability, and human-robot collaboration safety, allowing for preemptive judgment of potential issues and environmental adaptability.

In terms of human-robot interaction, the system collects data from operators and the environment using cameras and ultrasonic sensors, performing safety analysis and decision-making for collaborative tasks. It provides operators with a fused view of the virtual robot and the actual environment, supporting interaction in which the virtual model stands in for the real robot. In high-risk environments, the system controls the virtual model to simulate motion within the real environment, enabling motion teaching and safe, reliable path planning.

Each function of the China robot system is implemented with a strategy tailored to its needs. For state monitoring and fault prediction, the system acquires key parameters such as foot-end coordinates, velocity, and acceleration in real time via data communication. These are updated and displayed as characters, curves, and synchronized models, allowing operators to monitor the robot's motion state. By observing changes in the periodicity, continuity, and smoothness of the curves, potential mechanical faults can be identified early. The relationship between joint angles and foot-end motion parameters is derived from the robot's leg structure, treated as a planar two-link mechanism. The foot-end position coordinates are given by:

$$ x_p = L_1 \cos(q_1) + L_2 \cos(q_1 + q_2) $$
$$ y_p = L_1 \sin(q_1) + L_2 \sin(q_1 + q_2) $$

where \( L_1 \) and \( L_2 \) are the lengths of the thigh and shank links, and \( q_1 \), \( q_2 \) are the hip and knee joint angles, respectively. The velocity Jacobian matrix is:

$$ J = \begin{bmatrix} -L_1 \sin(q_1) - L_2 \sin(q_1 + q_2) & -L_2 \sin(q_1 + q_2) \\ L_1 \cos(q_1) + L_2 \cos(q_1 + q_2) & L_2 \cos(q_1 + q_2) \end{bmatrix} $$

The foot-end acceleration vector is obtained by differentiating the velocity relation \( \dot{p} = J \dot{q} \) with respect to time:

$$ \begin{bmatrix} a_x \\ a_y \end{bmatrix} = J \begin{bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \end{bmatrix} + \dot{J} \begin{bmatrix} \dot{q}_1 \\ \dot{q}_2 \end{bmatrix} $$

Abnormalities in the real-time foot-end acceleration curve, such as discontinuities, can indicate potential mechanical faults, prompting timely inspection and maintenance.
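As a concrete illustration, the following Python sketch evaluates the foot-end position, Jacobian, and acceleration from the formulas above. The link lengths and the sampled joint trajectory values are hypothetical example numbers, not parameters of the actual robot, and the time derivative of the Jacobian is approximated numerically rather than derived symbolically.

```python
# Illustrative sketch (not the authors' code): foot-end kinematics of the
# planar two-link leg model described above.
import numpy as np

L1, L2 = 0.20, 0.20  # assumed thigh/shank link lengths in metres

def foot_position(q1, q2):
    """Forward kinematics: foot-end coordinates (x_p, y_p)."""
    return np.array([
        L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
        L1 * np.sin(q1) + L2 * np.sin(q1 + q2),
    ])

def jacobian(q1, q2):
    """Velocity Jacobian J(q) of the two-link leg."""
    return np.array([
        [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
        [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
    ])

def foot_acceleration(q, dq, ddq, h=1e-5):
    """a = J(q) @ ddq + dJ/dt @ dq, with dJ/dt approximated along the
    joint trajectory by a finite difference."""
    J = jacobian(*q)
    Jdot = (jacobian(*(q + dq * h)) - J) / h
    return J @ ddq + Jdot @ dq

# Example: one sampled instant of a smooth joint trajectory
q   = np.array([0.6, -1.1])   # joint angles (rad)
dq  = np.array([0.8,  0.5])   # joint velocities (rad/s)
ddq = np.array([0.1, -0.2])   # joint accelerations (rad/s^2)
print(foot_position(*q), foot_acceleration(q, dq, ddq))
```

Plotting `foot_acceleration` over a gait cycle would reproduce the kind of curve whose discontinuities the system flags as potential faults.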

For environmental adaptability analysis, stability is a fundamental aspect assessed using the Zero Moment Point (ZMP) theory. The ZMP position is used in conjunction with the support polygon formed by the robot's feet to quantify stability. The stability margin \( S_m \) is defined as the minimum distance from the ZMP to the edges of the support polygon. When the ZMP lies inside the support polygon, the system is stable (\( S_m > 0 \)); if it falls outside, the system is unstable (\( S_m < 0 \)). A larger \( S_m \) indicates greater stability. The system visualizes stability through annotations including a center-of-mass projection model, a foot-end support surface model, and a real-time stability margin value. This allows operators to preemptively judge the robot's stability in dynamic, unstructured environments. For instance, on sloped surfaces, the stability margin decreases as the incline increases, and the system can predict the risk of tipping.
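A minimal sketch of this margin computation, assuming a convex support polygon given as counter-clockwise vertices: the signed distance to each edge is positive on the inner side, so their minimum matches the \( S_m \) convention above. The foot positions and ZMP in the example are hypothetical values.

```python
# Signed stability margin S_m: min distance from the ZMP to the support
# polygon edges, positive inside, negative outside.
import numpy as np

def stability_margin(zmp, polygon):
    """zmp: (x, y); polygon: (N, 2) CCW vertices of the support polygon."""
    p = np.asarray(zmp, dtype=float)
    verts = np.asarray(polygon, dtype=float)
    dists = []
    for a, b in zip(verts, np.roll(verts, -1, axis=0)):
        edge = b - a
        normal = np.array([-edge[1], edge[0]])  # inward normal for CCW order
        normal /= np.linalg.norm(normal)
        dists.append(normal @ (p - a))          # > 0 on the inner side
    return min(dists)  # S_m > 0: stable, S_m < 0: unstable

# Example: three-legged support phase (hypothetical foot positions, metres)
triangle = [(0.00, 0.00), (0.40, 0.05), (0.15, 0.35)]
print(stability_margin((0.18, 0.12), triangle))  # positive -> stable
```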

Table 1 summarizes the stability analysis results for the China robot executing a Walk gait on slopes of different angles, showing the stability margin for both three-legged and four-legged support phases and the relationship between the center-of-mass projection and the support polygon.

Table 1: Slope Stability Analysis for China Robot

| Slope Angle | Stability Margin (3-leg) | Stability Margin (4-leg) | Projection Relation | Stability Judgment |
|-------------|--------------------------|--------------------------|---------------------|--------------------|
| 20°         | 27.2 mm                  | 31.9 mm                  | Intersecting        | Stable             |
| 25°         | 17.1 mm                  | 21.8 mm                  | Intersecting        | Stable             |
| 30°         | 7.3 mm                   | 12.0 mm                  | Intersecting        | Stable             |
| 35°         | -2.0 mm                  | 2.7 mm                   | Tangent             | Marginally unstable |
| 40°         | -10.7 mm                 | -6.0 mm                  | Separated           | Unstable           |

Work safety analysis focuses on defining the robot’s safe operational range and implementing collision avoidance strategies. The system visualizes safety boundaries using concentric circles projected onto the ground around the robot’s geometric center. These circles represent different danger levels based on the distance to obstacles:

$$ R_d = \frac{L_{rob}}{2} $$
$$ R_w = v_{max} \cdot t_{re} + R_d $$
$$ R_s = \sqrt{2} \cdot R_w $$

where \( R_d \), \( R_w \), and \( R_s \) are the radii of the danger, warning, and safety zones, respectively; \( L_{rob} \) is the robot's body length, \( v_{max} \) its maximum velocity, and \( t_{re} \) the response time of the braking system. A human-robot distance of less than \( R_d \) is considered dangerous proximity. The warning radius adds the distance the robot can cover at maximum speed during the braking response time. An additional margin accounting for the robot's width sets the safety radius to \( \sqrt{2} \) times the warning radius. Safety warnings (e.g., visual, auditory) and a "distance-velocity" control strategy are employed to mitigate risks: the robot's velocity \( v_{rob} \) is adjusted based on the human-robot distance \( L_{HRI} \):

$$ v_{rob} = \begin{cases}
0.1 \cdot v_{max} & \text{if } L_{HRI} < R_d \\
0.4 \cdot v_{max} & \text{if } R_d \leq L_{HRI} < R_w \\
v_{max} & \text{if } L_{HRI} \geq R_w
\end{cases} $$

This ensures reduced speed in closer proximities, lowering the impact force in case of a potential collision.
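The zone radii and the piecewise velocity law translate directly into code. The following sketch uses hypothetical values for the body length, maximum velocity, and response time; the actual parameters are not given in the text.

```python
# Illustrative sketch of the zone radii and the "distance-velocity"
# control strategy described above.
import math

L_ROB = 0.80   # assumed robot body length (m)
V_MAX = 1.2    # assumed maximum velocity (m/s)
T_RE  = 0.5    # assumed braking-system response time (s)

R_D = L_ROB / 2              # danger radius
R_W = V_MAX * T_RE + R_D     # warning radius
R_S = math.sqrt(2) * R_W     # safety radius (margin for robot width)

def commanded_velocity(l_hri):
    """Scale the robot's velocity by the human-robot distance L_HRI."""
    if l_hri < R_D:
        return 0.1 * V_MAX   # dangerous proximity: near-stop
    if l_hri < R_W:
        return 0.4 * V_MAX   # warning zone: reduced speed
    return V_MAX             # outside the warning zone: full speed

for d in (0.3, 0.8, 2.0):
    print(f"L_HRI = {d:.1f} m -> v_rob = {commanded_velocity(d):.2f} m/s")
```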

For motion preview and teaching, the system allows operators to control a virtual robot model to simulate movement paths within the real environment before the physical robot executes them. This is particularly useful in narrow or cluttered spaces. The joint angle and displacement data from the preview are recorded as nodes, forming a teaching path. This path is serialized and transmitted to the physical China robot via data communication, enabling it to follow the demonstrated actions and distances safely. This approach provides a safe way to plan and verify paths, reducing the risk of collisions in complex environments.
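One plausible way to record and transmit such a teaching path is sketched below. The node fields, JSON serialization, and TCP transport are our assumptions for illustration; the system's actual message format and communication channel are not specified in the text.

```python
# Hypothetical sketch: recording preview nodes and sending them to the
# physical robot as a serialized teaching path.
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class PathNode:
    joint_angles: list[float]  # joint angles at this node (rad)
    displacement: list[float]  # torso displacement since last node (m)

def send_teaching_path(nodes, host="192.168.1.10", port=9000):
    """Serialize the recorded preview path and transmit it to the robot.
    Host and port are placeholder values."""
    payload = json.dumps([asdict(n) for n in nodes]).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# Example: a two-node path recorded from the virtual preview
path = [
    PathNode(joint_angles=[0.6, -1.1, 0.6, -1.1], displacement=[0.0, 0.0]),
    PathNode(joint_angles=[0.7, -1.0, 0.5, -1.2], displacement=[0.1, 0.0]),
]
# send_teaching_path(path)  # would transmit the path to the physical robot
```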

The core of achieving virtual-physical fusion for motion analysis is the dynamic tracking and registration method. We propose a hybrid approach combining natural-feature-based vision and data communication. In the offline stage, a 3D model of the China robot is created, and the mechanical structure is analyzed to identify parent-child relationships. The torso, a relatively stable component, is designated the parent structure, while the legs, which move more, are child structures connected via rotational joints. A lightweight image template library is built by extracting local features from the representative structure (the torso). During online operation, visual matching algorithms establish a mapping between the representative structure and the real scene, obtaining the initial pose for registration and achieving virtual-physical pose synchronization. Specifically, for low-texture objects like the China robot, a multi-level detection and matching algorithm combining Line-Mod and ORB is used: ORB first pre-filters viewpoints to narrow the search range, Line-Mod performs coarse matching, and ORB then refines the match, with the pose computed using the PnP algorithm. This enables efficient tracking registration at medium to long distances, even with partial occlusion. Concurrently, data communication transmits the robot's joint motion parameters to the articulated virtual model, achieving virtual-physical joint motion synchronization. Together, these yield dynamic tracking registration for the legged mobile China robot, overlaying a semi-transparent, motion-synchronized 3D model onto the physical robot to realize virtual-physical fusion.
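To make the online coarse-to-fine step concrete, here is a minimal sketch using OpenCV. It is not the authors' implementation: the Line-Mod coarse stage is only indicated by a comment (its Python bindings vary across OpenCV builds), and `register_torso`, `tmpl_pts3d` (the 3D coordinates of the template keypoints prepared offline), and the camera intrinsics `K` are hypothetical names.

```python
# Sketch of ORB refinement plus PnP pose estimation for the torso
# (the "representative structure"); coarse Line-Mod matching is assumed
# to have already selected the candidate template.
import numpy as np
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def register_torso(frame_gray, tmpl_gray, tmpl_pts3d, K):
    """Match the torso template to the frame and solve for its pose."""
    # 1) Coarse stage (Line-Mod in the paper) would pick the candidate
    #    viewpoint/template here; tmpl_gray is assumed to be that candidate.
    kp_t, des_t = orb.detectAndCompute(tmpl_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)
    if len(matches) < 6:
        return None  # too few correspondences to estimate a pose
    # 2) Fine stage: 2D-3D correspondences -> pose via PnP with RANSAC
    obj = np.float32([tmpl_pts3d[m.queryIdx] for m in matches])
    img = np.float32([kp_f[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None
```

The returned rotation and translation give the initial registration pose of the torso; the legs are then posed from the joint angles received over the data link.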

To validate the effectiveness of our registration method, we compared it with the native Line-Mod algorithm for static registration of the China robot in a standing posture. Tests were conducted from frontal and side views, with and without occlusion. The results demonstrated that our method achieved superior virtual-physical alignment and maintained stable registration even under occluded conditions where Line-Mod failed. For dynamic registration, we successfully tracked the China robot in a Trot gait cycle in real-time. The average response time for tracking registration was measured for both static standing and Pace gait states, as shown in Table 2.

Table 2: Average Tracking Registration Response Time for China Robot

| Robot State     | Coarse Matching (ms) | Fine Positioning (ms) | Total Time (ms) | Frame Rate (FPS) |
|-----------------|----------------------|-----------------------|-----------------|------------------|
| Static Standing | 33.80                | 21.50                 | 55.30           | 18.08            |
| Pace Gait       | 34.35                | 24.10                 | 58.45           | 17.11            |

The total processing time per frame was about 55.30 ms for the static pose and 58.45 ms for the dynamic gait, corresponding to frame rates of 18.08 FPS and 17.11 FPS, respectively. This exceeds the 15 FPS real-time requirement for tracking registration, ensuring that the motion analysis models rendered in sync with the virtual robot also satisfy real-time performance demands.

In conclusion, this AR-enabled motion monitoring and interaction system developed for the China robot provides a comprehensive solution for real-time motion analysis in practical working scenarios. The proposed dynamic tracking registration method successfully achieves virtual-physical fusion for robots with internal relative motion. Real-time motion curve analysis of foot-end acceleration facilitates state monitoring and fault prediction. Stability visualization based on ZMP theory aids in environmental adaptability assessment. Workspace analysis with visualized safety zones and distance-velocity control ensures human-robot collaboration safety. Motion preview and teaching via virtual models enable safe navigation through complex environments. This China robot system significantly enhances operational safety and reliability. Future work will focus on extending the system’s application to domains like industrial automation and medical assistance, integrating advanced perception technologies such as deep learning and multi-sensor fusion to further improve the intelligence and robustness of the China robot platform.
