In recent years, the integration of AI robots into non-medical domains such as industrial automation, service sectors, and personal assistance has accelerated significantly. These AI robots are designed to operate in shared spaces with humans, placing increased emphasis on human-robot interaction (HRI) safety. Ensuring the safety of these interactions is paramount, as failures can result in physical harm, reduced trust, and hindered adoption. This review examines the current state of HRI safety evaluation technologies for non-medical AI robots, focusing on collaborative and wearable robots, and explores future trends. The evaluation encompasses physical, informational, and other safety aspects, with emphasis on standardized parameters, testing methodologies, and validation platforms. By analyzing existing standards, instruments, and research gaps, this article aims to provide a comprehensive overview and highlight critical areas for advancement in HRI safety evaluation.
The safety of AI robots in HRI contexts is influenced by multiple factors, including contact mechanics, environmental dynamics, and user behavior. For instance, collaborative AI robots often operate in close proximity to humans, requiring robust safety mechanisms to prevent injuries during accidental contact. Similarly, wearable AI robots, which are physically coupled with users, must ensure comfort and safety over prolonged use. The complexity of these interactions necessitates a multidisciplinary approach, combining robotics, biomechanics, and artificial intelligence. This review discusses the key parameters for evaluating HRI safety, the progress in international standards and testing technologies, and the challenges faced in current practices. It also proposes future research directions to enhance the safety and reliability of AI robots in diverse applications.

To systematically evaluate HRI safety, it is essential to define the parameters and modalities involved. For collaborative AI robots, safety evaluation parameters can be categorized into physical interaction safety, information interaction safety, and other interaction safety aspects. Physical interaction safety includes metrics such as collision force, pressure distribution, and contact duration, which are critical for assessing potential injuries. Information interaction safety involves data security, communication reliability, and human intention recognition, ensuring that AI robots respond appropriately to user commands. Other interaction safety aspects may include environmental adaptability and task-specific risks. These parameters are often derived from international standards like ISO/TS 15066, which provides guidelines for collaborative robot safety. However, the rapid evolution of AI robot technologies demands continuous updates to these standards to address emerging challenges, such as multi-modal interactions and complex scenarios.
For wearable AI robots, the evaluation parameters differ due to the intimate coupling with the human body. Key aspects include general interaction parameters (e.g., mechanical and electrical limits), bio-mechanical constraints (e.g., force distribution and comfort), and human-robot coupling parameters (e.g., joint angle matching and stability). These parameters ensure that wearable AI robots do not cause discomfort, fatigue, or injuries during use. For example, excessive pressure on the skin or misalignment of joints can lead to long-term health issues. Therefore, comprehensive testing must simulate real-world conditions, such as dynamic movements and varying loads, to validate safety. The following table summarizes the primary safety evaluation parameters for collaborative and wearable AI robots, highlighting their importance in HRI safety assessment.
| Robot Type | Safety Category | Parameters | Description |
|---|---|---|---|
| Collaborative AI Robots | Physical Interaction | Collision force, pressure, contact area | Measures forces and pressures during human-robot contact to prevent injuries. |
| Collaborative AI Robots | Information Interaction | Data integrity, response time, error rate | Ensures secure and reliable communication between humans and AI robots. |
| Collaborative AI Robots | Other Interaction | Environmental factors, task complexity | Accounts for external conditions and application-specific risks. |
| Wearable AI Robots | General Interaction | Mechanical limits, stop functions | Defines operational boundaries to avoid overexertion. |
| Wearable AI Robots | Bio-mechanical Constraints | Force distribution, comfort, fatigue | Evaluates physical impact on the human body during wear. |
| Wearable AI Robots | Human-Robot Coupling | Joint alignment, stability, control accuracy | Assesses the synergy between robot and human movements. |
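To illustrate how such parameters could be operationalized in an evaluation pipeline, the following sketch encodes per-body-region contact limits as a simple checkable data structure. The class, function names, and numeric limits are illustrative assumptions, not values drawn from the table above or from ISO/TS 15066.

```python
# Minimal sketch: encoding HRI safety limits as a checkable data structure.
# The numeric limits below are placeholders, not values from ISO/TS 15066.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContactLimit:
    body_region: str          # e.g. "forehead", "abdomen"
    max_force_N: float        # allowable transient contact force
    max_pressure_kPa: float   # allowable peak pressure

def check_contact(limit: ContactLimit, force_N: float, pressure_kPa: float) -> bool:
    """Return True if a measured contact stays within the configured limits."""
    return force_N <= limit.max_force_N and pressure_kPa <= limit.max_pressure_kPa

# Example usage with hypothetical limit values for illustration only.
abdomen = ContactLimit("abdomen", max_force_N=110.0, max_pressure_kPa=140.0)
print(check_contact(abdomen, force_N=95.0, pressure_kPa=120.0))   # True: within limits
print(check_contact(abdomen, force_N=150.0, pressure_kPa=120.0))  # False: force exceeds limit
```

A structure of this kind makes it straightforward to attach measured test results to the corresponding standardized parameter and report pass/fail outcomes per body region.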
The interaction modalities between humans and AI robots play a crucial role in safety evaluation. Based on contact dynamics, HRI can be classified into several modalities, including quasi-static contact, transient contact, dynamic continuous contact, teaching-by-demonstration contact, and indirect contact. Quasi-static contact occurs when a body part is trapped between robot components, leading to sustained pressure, while transient contact involves brief impacts from which the human can recoil. Dynamic continuous contact refers to prolonged interaction during cooperative tasks, such as assembly or material handling, and requires real-time force monitoring. Teaching-by-demonstration contact involves humans guiding AI robots through physical interaction, necessitating sensitive force feedback. Indirect contact occurs through tools or interfaces, adding further complexity to safety assessment. Each modality presents unique risks; for example, quasi-static contact may cause crushing injuries, whereas transient contact can lead to fractures or abrasions. Understanding these modalities helps in designing appropriate safety measures and testing protocols for AI robots.
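As an illustration of how these modalities might be distinguished automatically in a test log, the sketch below applies simple rules to coarse contact observations. The 0.5 s duration threshold and the input flags are assumptions chosen for the example, not standardized criteria.

```python
# Illustrative sketch only: a rule-based classifier for the contact modalities
# described above. The 0.5 s threshold and the "clamped" flag are assumptions
# for demonstration, not criteria taken from any standard.
def classify_contact(duration_s: float, clamped: bool, human_guided: bool,
                     via_tool: bool) -> str:
    """Map coarse contact observations to one of the HRI contact modalities."""
    if via_tool:
        return "indirect contact"
    if human_guided:
        return "teaching-by-demonstration contact"
    if clamped:
        return "quasi-static contact"          # body part trapped, sustained pressure
    if duration_s < 0.5:
        return "transient contact"             # brief impact, human can recoil
    return "dynamic continuous contact"        # prolonged cooperative interaction

print(classify_contact(duration_s=0.1, clamped=False, human_guided=False, via_tool=False))
```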
To quantify the safety limits in HRI, biomechanical models are employed. These models estimate the maximum allowable forces and pressures based on human pain thresholds and injury criteria. For instance, the energy transfer during a collision can be modeled using the equation: $$E = \frac{1}{2} m v^2$$ where \(E\) is the kinetic energy, \(m\) is the effective mass of the AI robot, and \(v\) is the relative velocity. By limiting \(v\), the energy transferred to the human body can be kept below harmful levels. Similarly, for pressure, the allowable value \(P_{\text{max}}\) can be derived from empirical studies on different body regions. For example, the forehead can withstand higher pressures than the abdomen. Standards like ISO/TS 15066 provide detailed tables of these values, but they often lack coverage for multi-directional or dynamic interactions. The following formula generalizes the force limit for transient contact: $$F_{\text{allow}} = k \cdot A \cdot P_{\text{thresh}}$$ where \(k\) is a safety factor, \(A\) is the contact area, and \(P_{\text{thresh}}\) is the threshold pressure for pain onset. Such models are essential for setting safety standards in AI robot design.
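A minimal worked example of these two limits is given below. The numeric inputs are hypothetical and serve only to show how the formulas are applied; they are not values prescribed by ISO/TS 15066.

```python
# Minimal worked sketch of the limits above, using hypothetical example inputs.
import math

def allowable_speed(max_energy_J: float, effective_mass_kg: float) -> float:
    """Invert E = 0.5 * m * v^2 to obtain the maximum relative speed."""
    return math.sqrt(2.0 * max_energy_J / effective_mass_kg)

def allowable_force(safety_factor_k: float, contact_area_m2: float,
                    pain_threshold_Pa: float) -> float:
    """F_allow = k * A * P_thresh for transient contact."""
    return safety_factor_k * contact_area_m2 * pain_threshold_Pa

# Example: a 2 J energy limit against a 4 kg effective mass gives 1.0 m/s.
print(round(allowable_speed(max_energy_J=2.0, effective_mass_kg=4.0), 2))
# Example: k = 1.0, 1 cm^2 contact area, 200 kPa pain threshold gives 20 N.
print(allowable_force(safety_factor_k=1.0, contact_area_m2=1e-4, pain_threshold_Pa=2.0e5))
```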
In terms of international standards and testing technologies, significant progress has been made, but gaps remain. For collaborative AI robots, standards like ISO 10218 and ISO/TS 15066 outline safety requirements for industrial environments, including force and pressure limits during collisions. However, these standards primarily address unidirectional contacts and static scenarios, failing to encompass the multi-modal interactions common in modern AI robots. Testing instruments, such as those developed by the German company Pilz and the American firm Tekscan, offer high-precision force and pressure measurement systems; Pilz's system, for example, can measure collision forces against pain-onset thresholds with errors below 3% in specific force ranges. For wearable AI robots, standards such as ISO 18646-4 and ISO 13482 focus on mechanical safety and performance but lack comprehensive guidelines for continuous HRI safety. Instruments like pressure mapping systems from Novel GmbH enable detailed analysis of interaction forces, yet they are often limited to laboratory settings. The table below compares the current status of standards and testing technologies for collaborative and wearable AI robots, highlighting areas where advancements are needed.
| Aspect | Collaborative AI Robots | Wearable AI Robots |
|---|---|---|
| Key Standards | ISO 10218, ISO/TS 15066 | ISO 18646-4, ISO 13482 |
| Testing Parameters | Collision force, pressure, speed | Joint angles, assistive torque, comfort |
| Available Instruments | Unidirectional force sensors, pressure mats | Motion capture systems, EMG sensors |
| Error Margins | 3-5% in force measurement | 5-10% in motion tracking |
| Gaps | Lack of multi-directional and dynamic testing | Insufficient real-world scenario simulation |
Validation platforms for HRI safety have evolved to simulate real-world conditions. Internationally, facilities like the NIST Robotic Test Facility in the USA and RoboTest in Germany offer advanced testing environments for AI robots, incorporating sensor arrays and human analogs to measure interaction forces and displacements. These platforms enable rigorous assessment of safety protocols, but their methods are often proprietary, creating barriers to widespread adoption. In China, centers such as the National Robot Testing and Evaluation Center have developed comprehensive testing grounds for reliability and performance, including complex terrain navigation and obstacle avoidance for AI robots. However, these platforms primarily reference existing standards like GB/T 36008, which do not fully address the nuances of HRI in dynamic environments. For instance, high-altitude or high-voltage scenarios are not adequately covered, limiting the applicability of current evaluations. The integration of AI-driven monitoring systems could enhance these platforms by providing real-time risk assessment and adaptive testing scenarios for AI robots.
Despite these advancements, several challenges persist in HRI safety evaluation for AI robots. First, the influence mechanisms of interaction safety are not fully understood, particularly for multi-modal contacts involving variable speeds, loads, and body regions. Second, testing methods and standard systems are incomplete, with many existing protocols focusing on isolated parameters rather than integrated assessments. For example, collaborative AI robots lack standards for information interaction safety, such as cybersecurity vulnerabilities in control systems. Third, testing instruments and calibration methods are underdeveloped, especially for wearable AI robots where flexible and multi-axis force sensors are needed. Current devices, like strain gauges or pressure pads, often suffer from low accuracy in dynamic conditions or cannot measure heterogeneous surface contacts. Additionally, the absence of standardized calibration procedures leads to inconsistent results across different testing facilities. These issues are compounded by the rapid pace of innovation in AI robotics, which outstrips the development of corresponding safety frameworks.
Looking ahead, future research trends in HRI safety evaluation for AI robots should focus on several key areas. First, there is a need to develop comprehensive models that account for the “human-robot-environment-task-process” integration. This involves creating dynamic risk assessment frameworks that adapt to changing conditions, such as varying user movements or environmental hazards. For example, machine learning algorithms could be employed to predict unsafe interactions based on real-time sensor data from AI robots. Second, intelligent testing and grading systems should be established to evaluate AI-specific capabilities, such as emotion recognition, decision-making, and adaptive learning in HRI contexts. These systems could use simulations and virtual environments to test AI robots under extreme scenarios without physical risks. Third, enhancing the reliability of AI robots through prolonged durability testing and fault tolerance analysis is crucial. This includes assessing mechanical wear, software stability, and environmental resilience over extended periods.
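As a conceptual illustration of such dynamic risk assessment, the sketch below combines proximity, speed, and force cues into a single risk score that can trigger a protective slowdown. The features, weights, and threshold are assumptions chosen for the example; in practice they would be learned from labeled interaction data rather than hand-tuned.

```python
# Illustrative sketch of a real-time risk score computed from streaming sensor
# data. Features, weights, and threshold are assumptions for demonstration; a
# learned model would replace these hand-tuned rules in practice.
def risk_score(separation_m: float, relative_speed_mps: float,
               contact_force_N: float) -> float:
    """Combine proximity, speed, and force cues into a 0-1 risk score."""
    proximity_risk = max(0.0, 1.0 - separation_m / 1.0)   # risk rises within 1 m
    speed_risk = min(1.0, relative_speed_mps / 2.0)        # saturates at 2 m/s
    force_risk = min(1.0, contact_force_N / 150.0)         # saturates at 150 N
    return max(proximity_risk * speed_risk, force_risk)

def should_slow_down(score: float, threshold: float = 0.6) -> bool:
    """Trigger a protective slowdown when the risk score crosses a threshold."""
    return score >= threshold

s = risk_score(separation_m=0.5, relative_speed_mps=1.0, contact_force_N=0.0)
print(round(s, 2), should_slow_down(s))   # 0.25 False with these example inputs
```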
Another promising direction is the advancement of testing instruments for multi-modal HRI safety. Researchers should invest in developing multi-directional force and pressure sensors capable of capturing complex interaction dynamics in AI robots. For instance, flexible tactile sensors mimicking human skin could provide high-resolution data on contact forces and distributions. Calibration techniques using reference standards and compensation algorithms, such as those based on deep reinforcement learning, can improve measurement accuracy. The equation for sensor error compensation might be: $$\Delta F = f(\theta, t, E)$$ where \(\Delta F\) is the force error, \(\theta\) represents environmental factors, \(t\) is time, and \(E\) denotes sensor characteristics. By minimizing \(\Delta F\), testing instruments can deliver more reliable safety assessments for AI robots. Furthermore, building validation platforms that replicate complex operational scenarios, such as collaborative manufacturing or outdoor assistance, will enable more realistic evaluations. These platforms should incorporate modular designs for easy reconfiguration and distributed sensor networks for comprehensive data collection.
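As a simpler stand-in for the learning-based compensation described above, the following sketch fits a linear approximation of the force error \(\Delta F = f(\theta, t, E)\) to synthetic calibration data and subtracts the predicted error from raw readings. The variable names, synthetic data, and coefficients are assumptions for illustration only.

```python
# Minimal sketch of data-driven error compensation Delta_F = f(theta, t, E),
# approximated here by a linear model fitted with least squares. The synthetic
# calibration data and coefficients are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set: ambient temperature (deg C), operating time (h),
# and a sensor drift coefficient, with the observed force error in newtons.
theta = rng.uniform(15, 35, 200)      # environmental factor (temperature)
t = rng.uniform(0, 8, 200)            # operating time
e = rng.uniform(0.9, 1.1, 200)        # sensor characteristic
delta_f = 0.05 * theta + 0.3 * t + 2.0 * (e - 1.0) + rng.normal(0, 0.05, 200)

# Fit Delta_F ~ w0 + w1*theta + w2*t + w3*e by ordinary least squares.
X = np.column_stack([np.ones_like(theta), theta, t, e])
w, *_ = np.linalg.lstsq(X, delta_f, rcond=None)

def compensate(raw_force: float, theta_now: float, t_now: float, e_now: float) -> float:
    """Subtract the predicted force error from a raw sensor reading."""
    predicted_error = w @ np.array([1.0, theta_now, t_now, e_now])
    return raw_force - predicted_error

print(round(compensate(raw_force=50.0, theta_now=25.0, t_now=4.0, e_now=1.0), 2))
```

More expressive models, including the deep reinforcement learning approaches mentioned above, would follow the same pattern of predicting and removing the systematic error before safety limits are applied.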
In conclusion, the safety evaluation of HRI for non-medical AI robots is a critical enabler for their widespread adoption in industry and daily life. While current standards and technologies provide a foundation, significant gaps remain in understanding interaction mechanisms, developing comprehensive testing methods, and creating accurate instruments. Future efforts should prioritize integrated models, intelligent testing systems, and advanced validation platforms to address these challenges. By doing so, we can ensure that AI robots operate safely and effectively alongside humans, fostering trust and unlocking their full potential. As AI robotics continues to evolve, ongoing collaboration among researchers, industry stakeholders, and standardization bodies will be essential to keep pace with innovation and safeguard human well-being.