In recent years, the development of humanoid robots has garnered significant attention due to their potential to perform complex, human-like tasks in dynamic environments. A key challenge in this domain is enabling humanoid robots to execute highly dynamic motions, such as aerial jumping, with the fluidity and stability observed in humans. This article presents a comprehensive experimental design for humanoid robot aerial jumping, incorporating motion capture technology to bridge the gap between human and robot movement. The approach involves capturing human motion data, establishing mapping relationships between human and robot joint trajectories, developing optimized jumping models, and implementing compliant landing strategies. By covering theoretical modeling, algorithm design, and experimental verification, this work aims to provide a robust framework for enhancing the performance of humanoid robots in demanding scenarios. The integration of motion capture not only facilitates realistic motion imitation but also enables the optimization of energy efficiency and stability, both critical for the advancement of humanoid robot applications.
The core of this experiment lies in leveraging human motion data to guide the control of a humanoid robot. Human motion capture systems, such as optical tracking, allow for precise recording of joint positions, velocities, and accelerations during activities like jumping. This data is then processed into a scaled kinematic model that accounts for the structural differences between humans and humanoid robots. For instance, the humanoid robot used in this study has specific dimensions and degrees of freedom that require tailored mapping functions. The mapping relationship between human joint positions and those of the humanoid robot is defined using scaling factors based on limb lengths and joint configurations. Let the human hip joint position be denoted as $[x_{hip}, y_{hip}, z_{hip}]^T$ and the robot hip joint position as $[x'_{hip}, y'_{hip}, z'_{hip}]^T$. The transformation can be expressed as:
$$ \begin{bmatrix} x'_{hip} \\ y'_{hip} \\ z'_{hip} \end{bmatrix} = \begin{bmatrix} \frac{R_1}{H_1} & 0 & 0 \\ 0 & \frac{R_2}{H_2} & 0 \\ 0 & 0 & \frac{R_3}{H_3} \end{bmatrix} \begin{bmatrix} x_{hip} \\ y_{hip} \\ z_{hip} \end{bmatrix} $$
where $R_1$, $R_2$, $R_3$ are characteristic robot dimensions (e.g., the knee-to-ankle length, the spacing between hip joint centers, and the ankle-to-foot height), and $H_1$, $H_2$, $H_3$ are the corresponding human measurements. This scaling ensures that the trajectories generated for the humanoid robot retain the essential characteristics of human motion while respecting the robot’s physical constraints. The overall process involves data acquisition, preprocessing, trajectory generation, and optimization, with a focus on achieving stable and efficient jumps for the humanoid robot.
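As a concrete illustration, the scaling above can be applied as a diagonal matrix multiplication. The following sketch assumes the segment measurements are available as three-element tuples; the numeric values are placeholders, not measurements from the study.

```python
import numpy as np

def map_hip_position(human_hip, robot_dims, human_dims):
    """Scale a captured human hip position onto the robot using the
    diagonal matrix diag(R1/H1, R2/H2, R3/H3) described above."""
    scale = np.diag(np.asarray(robot_dims, float) / np.asarray(human_dims, float))
    return scale @ np.asarray(human_hip, float)

# Illustrative values only (not from the study), in metres:
robot_dims = (0.25, 0.20, 0.10)   # R1, R2, R3
human_dims = (0.43, 0.32, 0.08)   # H1, H2, H3
robot_hip = map_hip_position([0.05, 0.12, 0.95], robot_dims, human_dims)
```

The same scaling would be applied to every captured keyframe before the robot joint angles are computed.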

To implement the aerial jumping experiment, a detailed hardware setup is essential. The humanoid robot in this study is designed with 10 degrees of freedom, distributed across the hip, knee, and ankle joints, mimicking human lower-limb kinematics. The robot stands 90 cm tall and weighs 15 kg, with thigh and shank lengths of 250 mm each, and an ankle height of 100 mm. The foot dimensions are 160 mm by 100 mm, providing a stable base for locomotion. Actuation is achieved through AK-series brushless DC motors, combined with synchronous belt drives and harmonic gear reducers, ensuring precise control and high torque output. Sensors, including six-axis inertial measurement units, are integrated to monitor the robot’s state in real-time, facilitating feedback control during dynamic motions like jumping. The motion capture system employs infrared optical cameras and reflective markers placed on key human body points, such as the head, torso, waist, legs, and arms, to capture movement at 240 frames per second with sub-millimeter accuracy. This setup allows for the extraction of critical motion parameters, including center of mass trajectories, jump height, takeoff velocity, and movement cycles, which are then processed in MATLAB for analysis and optimization.
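For reference in the sketches that follow, the platform constants stated above can be collected into a small Python structure; the names are introduced here only for convenience, with units converted to SI.

```python
# Specification of the humanoid platform and capture setup described above.
ROBOT_SPEC = {
    "dof": 10,                       # lower-limb degrees of freedom (hip, knee, ankle)
    "height_m": 0.90,
    "mass_kg": 15.0,
    "thigh_length_m": 0.250,
    "shank_length_m": 0.250,
    "ankle_height_m": 0.100,
    "foot_size_m": (0.160, 0.100),   # length x width
}
MOCAP_RATE_HZ = 240                  # optical capture frame rate
```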
The experimental design for humanoid robot aerial jumping is divided into several phases: data capture, trajectory mapping, jump planning, and landing control. In the data capture phase, human subjects perform jumping motions while the motion capture system records the positions of 37 markers. This data is preprocessed to filter noise and extract keyframes, which represent significant postures during the jump, such as the squat, takeoff, aerial, and landing phases. For the humanoid robot, these keyframes are mapped using the scaling relationship described earlier, resulting in joint trajectories that approximate human motion. The jump planning phase involves parameterizing the jump into distinct intervals: squatting (from upright to lowest position), takeoff (from lowest position to liftoff), aerial (from liftoff to ground contact), and landing (from contact to stable upright). Each phase is governed by constraints on the hip joint’s position, velocity, and acceleration. For example, during the squat phase, the hip position $Z_{hip}(t)$, velocity $\dot{Z}_{hip}(t)$, and acceleration $\ddot{Z}_{hip}(t)$ must satisfy boundary conditions at times $t_0$ (start), $t_1$ (lowest point), and $t_2$ (liftoff):
$$ Z_{hip}(t) = \begin{cases} h_{t0}, & t = t_0 \\ h_{t1}, & t = t_1 \\ h_{t2}, & t = t_2 \end{cases} $$
$$ \dot{Z}_{hip}(t) = \begin{cases} 0, & t = t_0 \\ 0, & t = t_1 \\ v, & t = t_2 \end{cases} $$
$$ \ddot{Z}_{hip}(t) = \begin{cases} 0, & t = t_0 \\ 0, & t = t_1 \\ -g, & t = t_2 \end{cases} $$
where $h_{t0}$, $h_{t1}$, $h_{t2}$ are the hip heights at respective times, $v$ is the takeoff velocity, and $g$ is gravitational acceleration. These constraints ensure that the humanoid robot generates sufficient impulse for takeoff while maintaining stability. The trajectories are further optimized using intelligent algorithms to enhance performance metrics such as stability and energy efficiency.
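The article does not specify which interpolant realizes these boundary conditions; one common choice is a pair of quintic polynomials, one per sub-phase, since each segment carries six boundary conditions (position, velocity, and acceleration at both ends). Below is a minimal sketch with illustrative, non-measured boundary values.

```python
import numpy as np

def quintic_coeffs(t_a, t_b, p_a, v_a, a_a, p_b, v_b, a_b):
    """Coefficients c0..c5 of z(t) = sum(c_k * t**k) matching position,
    velocity, and acceleration at the segment boundaries t_a and t_b."""
    def rows(t):
        return np.array([
            [1, t, t**2, t**3,   t**4,     t**5],     # position
            [0, 1, 2*t,  3*t**2, 4*t**3,   5*t**4],   # velocity
            [0, 0, 2,    6*t,    12*t**2,  20*t**3],  # acceleration
        ])
    A = np.vstack([rows(t_a), rows(t_b)])
    b = np.array([p_a, v_a, a_a, p_b, v_b, a_b])
    return np.linalg.solve(A, b)

# Illustrative boundary values (metres, seconds); g is gravitational acceleration.
g = 9.81
h_t0, h_t1, h_t2 = 0.55, 0.27, 0.50
t0, t1, t2, v = 0.0, 0.6, 0.8, 2.2

c_squat   = quintic_coeffs(t0, t1, h_t0, 0.0, 0.0, h_t1, 0.0, 0.0)  # upright -> lowest
c_takeoff = quintic_coeffs(t1, t2, h_t1, 0.0, 0.0, h_t2, v, -g)     # lowest -> liftoff

def z_hip(t):
    """Hip height across the squat and takeoff sub-phases."""
    c = c_squat if t <= t1 else c_takeoff
    return np.polyval(c[::-1], t)   # polyval expects highest-order coefficient first
```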
Optimization plays a crucial role in refining the jump trajectories for the humanoid robot. Multi-objective optimization algorithms, including Particle Swarm Optimization (PSO), Multi-Objective PSO (MOPSO), and the Non-dominated Sorting Genetic Algorithm (NSGA), are employed to minimize energy consumption and maximize stability, quantified by the Zero Moment Point (ZMP) criterion. The ZMP is a key indicator of dynamic stability for humanoid robots: it is the point on the ground at which the horizontal components of the net moment of the inertial and gravitational forces vanish. The stability margin is defined as the distance between the ZMP and the center of the support polygon. The objective function $J$ combines stability and energy terms:
$$ J = \alpha J_{ZMP} + (1 - \alpha) J_P $$
where $\alpha$ is a weighting coefficient between 0 and 1, $J_{ZMP}$ is the ZMP stability function, and $J_P$ is the energy efficiency function. The ZMP stability function is given by:
$$ J_{ZMP} = (x_{zmp} - x_c)^2 + (y_{zmp} - y_c)^2 $$
with $(x_c, y_c)$ as the support foot center coordinates and $(x_{zmp}, y_{zmp})$ as the ZMP coordinates. The energy efficiency function accounts for the power consumption across all joints:
$$ J_P = \frac{1}{M} \sum_{j=1}^{M} \sum_{i=1}^{N} \tau_i(j)\, \dot{\theta}_i(j) $$
where $\tau_i(j)$ and $\dot{\theta}_i(j)$ are the torque and angular velocity of joint $i$ at time sample $j$, $N$ is the number of joints, and $M$ is the number of time samples over the jump. Optimization yields parameter sets that balance these objectives, as shown in the following table comparing pre- and post-optimization values for key jump parameters:
| Parameter | Pre-Optimization | PSO | MOPSO | NSGA |
|---|---|---|---|---|
| $h_{t1}$ (mm) | 270 | 262 | 256 | 250 |
| $T_1$ (s) | 0.58 | 0.56 | 0.60 | 0.61 |
| $T_2$ (s) | 0.26 | 0.15 | 0.17 | 0.20 |
| $v$ (m/s) | 2.23 | 1.92 | 1.93 | 2.23 |
This table illustrates how optimization adjusts parameters such as the hip height at the bottom of the squat ($h_{t1}$), the squat duration ($T_1$), the takeoff duration ($T_2$), and the takeoff velocity ($v$) to improve the humanoid robot’s jump performance. For instance, NSGA lowers $h_{t1}$ (a deeper squat) and lengthens both durations, yielding better energy efficiency and stability than the other methods.
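To make the objective concrete, the following sketch evaluates $J$ for a candidate trajectory. The weighting $\alpha = 0.5$, the averaging of $J_{ZMP}$ over the trajectory, and the array layout are illustrative assumptions rather than choices stated in the article.

```python
import numpy as np

def zmp_cost(zmp_traj, support_center):
    """J_ZMP: mean squared distance between the ZMP and the support-foot centre.
    zmp_traj is an (M, 2) array of ZMP samples, support_center is (x_c, y_c)."""
    diff = np.asarray(zmp_traj) - np.asarray(support_center)
    return np.mean(np.sum(diff**2, axis=1))

def power_cost(torques, joint_velocities):
    """J_P: mean mechanical power; both inputs are (M, N) arrays of
    M time samples by N joints."""
    return np.mean(np.sum(torques * joint_velocities, axis=1))

def jump_cost(zmp_traj, support_center, torques, joint_velocities, alpha=0.5):
    """Weighted objective J = alpha * J_ZMP + (1 - alpha) * J_P."""
    return (alpha * zmp_cost(zmp_traj, support_center)
            + (1.0 - alpha) * power_cost(torques, joint_velocities))
```

An outer optimizer (PSO, MOPSO, or NSGA) would sample candidate jump parameters such as $h_{t1}$, $T_1$, $T_2$, and $v$, simulate the resulting trajectory, and feed the recorded ZMP, torque, and velocity histories into `jump_cost`.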
Landing control is another critical aspect of the humanoid robot aerial jumping experiment. To mitigate impact forces during landing, a variable stiffness strategy based on impedance control is implemented. The robot’s leg is modeled as an equivalent impedance system, with the thigh and shank represented as rigid links connected by joints. The overall leg stiffness $k$ is derived from the series combination of the link stiffness $k_1$ and the knee joint stiffness $k_2$. For a leg with thigh and shank lengths $l_1$ and $l_2$, and included knee angle $\beta$ (the angle between thigh and shank), the equivalent leg length $L$ is:
$$ L = (l_1 + l_2) \sin(\beta / 2) $$
Assuming $l_1 = l_2$, the variation in leg length $\Delta L$ due to small changes in $\Delta l_1$ and $\Delta \beta$ is:
$$ \Delta L = 2 \Delta l_1 \sin(\beta / 2) + l_1 \Delta \beta \cos(\beta / 2) $$
The equivalent stiffness $k$ is then calculated as:
$$ k = \frac{k_1 k_2}{2 k_2 \sin^2(\beta / 2) + k_1 l_1^2 \cos^2(\beta / 2)} $$
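The following is a direct transcription of this stiffness expression; the numerical stiffness values are placeholders, since the article does not list the identified link and joint stiffnesses.

```python
import math

def equivalent_leg_stiffness(k1, k2, l1, beta):
    """Equivalent linear leg stiffness [N/m] from the series combination of the
    link stiffness k1 [N/m] and the knee torsional stiffness k2 [N*m/rad], for a
    leg with equal thigh/shank length l1 [m] and included knee angle beta [rad]."""
    s = math.sin(beta / 2.0)
    c = math.cos(beta / 2.0)
    return (k1 * k2) / (2.0 * k2 * s**2 + k1 * l1**2 * c**2)

# Placeholder values; l1 = 0.25 m matches the thigh/shank length of the platform.
k_leg = equivalent_leg_stiffness(k1=4.0e4, k2=300.0, l1=0.25, beta=math.radians(120))
```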
The dynamics of the landing phase are governed by the equation:
$$ \ddot{L} = \frac{F - k(L_0 - L) - D(\dot{L}_0 - \dot{L})}{m} $$
where $L_0$ is the initial leg length, $D$ is the damping coefficient, $m$ is the mass of the leg, and $F$ is the ground reaction force. By integrating $\ddot{L}$ twice to obtain the leg-length trajectory, the hip position corrections are computed to achieve a compliant landing for the humanoid robot. This approach reduces the risk of tipping and damage, ensuring the humanoid robot can repeatedly perform jumps without instability.
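A minimal sketch of how this landing model might be integrated numerically, assuming a measured ground-reaction-force profile, a fixed time step, and $\dot{L}_0 = 0$; none of these choices are specified in the article.

```python
import numpy as np

def compliant_leg_length(force_profile, L0, k, D, m, dt=1.0 / 240.0):
    """Integrate L_ddot = (F - k*(L0 - L) - D*(L0_dot - L_dot)) / m with
    semi-implicit Euler to obtain the compliant leg-length trajectory.
    force_profile holds one ground-reaction-force sample per time step."""
    L, L_dot, L0_dot = L0, 0.0, 0.0
    trajectory = []
    for F in force_profile:
        L_ddot = (F - k * (L0 - L) - D * (L0_dot - L_dot)) / m
        L_dot += L_ddot * dt
        L += L_dot * dt
        trajectory.append(L)
    return np.asarray(trajectory)

# The hip-height correction at each step is the deviation from the nominal leg
# length, delta_z = L - L0 (sign convention assumed for illustration).
```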
Experimental verification involves simulating the optimized trajectories in a PyBullet environment, where a virtual model of the humanoid robot is controlled based on inverse kinematics calculations. The joint angles derived from the mapped trajectories are sent to the robot’s controllers, and the motion is executed in real-time. The results demonstrate that the humanoid robot successfully replicates human-like jumping, with smooth transitions between phases. The ZMP trajectories during jumps remain within the support polygon, indicating dynamic stability. The following table compares the performance of different optimization algorithms in terms of ZMP stability root mean square error and energy efficiency:
| Algorithm | ZMP Stability RMSE | Energy Efficiency (W) |
|---|---|---|
| NSGA | 0.88 | 9.51 |
| MOPSO | 0.99 | 10.20 |
| PSO | 0.85 | 10.90 |
NSGA shows superior performance, with a 12.5% improvement in ZMP stability and a 7.2% gain in energy efficiency over MOPSO, highlighting its effectiveness for humanoid robot applications. Additionally, the joint trajectories of the humanoid robot closely match those of the human, confirming the fidelity of the motion capture-based mapping. For example, the hip, knee, and ankle angle profiles exhibit similar patterns to human data, with minor adjustments for robot-specific constraints.
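As a rough illustration of the simulation step described above, the sketch below replays a mapped joint-angle trajectory on a PyBullet model using position control; the URDF file name, base height, and joint ordering are placeholders rather than the study’s actual assets.

```python
import numpy as np
import pybullet as p
import pybullet_data

URDF_PATH = "humanoid_lower_body.urdf"   # hypothetical model file
DT = 1.0 / 240.0                         # matches the 240 Hz capture rate

p.connect(p.DIRECT)                      # use p.GUI for visual inspection
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.setTimeStep(DT)
plane = p.loadURDF("plane.urdf")
robot = p.loadURDF(URDF_PATH, basePosition=[0, 0, 0.6])

# Command only the movable joints (skip fixed joints in the URDF).
joint_ids = [j for j in range(p.getNumJoints(robot))
             if p.getJointInfo(robot, j)[2] != p.JOINT_FIXED]

def play_trajectory(q_traj):
    """q_traj: (T, len(joint_ids)) array of mapped joint angles for one jump;
    each row is sent to the position controllers and the physics is stepped."""
    for q in q_traj:
        p.setJointMotorControlArray(robot, joint_ids, p.POSITION_CONTROL,
                                    targetPositions=list(q))
        p.stepSimulation()
```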
The integration of motion capture technology with advanced control strategies enables the humanoid robot to achieve complex motions like aerial jumping with high similarity to human movement. This experimental design not only advances the capabilities of humanoid robots but also serves as an educational tool for understanding robotics principles. By engaging in full-cycle development—from data acquisition to real-world implementation—students and researchers can gain insights into the challenges and solutions in humanoid robot control. Future work may explore more dynamic motions, such as running or acrobatics, and incorporate machine learning for adaptive control. Overall, this approach underscores the potential of humanoid robots to operate in human-centric environments, pushing the boundaries of robotics innovation.
In conclusion, the humanoid robot aerial jumping experiment demonstrates a systematic method for translating human motion into robot-executable trajectories. Through meticulous modeling, optimization, and control, the humanoid robot achieves stable and efficient jumps, validated by performance metrics. The use of motion capture ensures that the movements are natural and human-like, while optimization algorithms enhance robustness and sustainability. This framework can be extended to other dynamic tasks, contributing to the broader goal of developing autonomous humanoid robots for real-world applications. As humanoid robots continue to evolve, experiments like this will play a pivotal role in bridging the gap between human and machine capabilities, fostering innovation in robotics and beyond.