In the context of rapidly advancing intelligent manufacturing, intelligent robots are increasingly widespread in modern industrial production. Traditional industrial robots often rely on pre-programmed, fixed action sequences and lack the ability to perceive environmental changes, which limits their adaptability, operational flexibility, and workspace range. In contrast, autonomous mobile intelligent robots equipped with visual feedback can perceive the surrounding environment in real time through multi-source sensors, perform path planning, and move autonomously, significantly expanding the operational space and enabling precise grasping of targets at different locations. This is crucial for raising the intelligence level of robotic systems, broadening their application scenarios, and improving production efficiency. In this work, we design an intelligent robot system that integrates omnidirectional movement, obstacle avoidance, and grasping functions, leveraging visual feedback for autonomous operation.
Intelligent mobile robots are primarily categorized into wheeled, legged, tracked, and hybrid types based on their locomotion methods. Among these, wheeled mobile intelligent robots have garnered widespread attention due to their simple structure, low energy consumption, and high motion efficiency. However, most wheeled robots offer limited flexibility and are subject to non-holonomic motion constraints, which restricts their obstacle avoidance in complex environments and their ability to handle unexpected situations. To address the low degree of automation and the limited efficiency of manual operation in existing hardness-testing processes, we developed an intelligent robot system based on visual feedback. This intelligent robot has overall dimensions of 250 mm × 200 mm × 400 mm and weighs approximately 3 kg. The system integrates machine vision technology and a mobile manipulator, enabling fully automated operations for workpieces through autonomous movement, visual recognition, and automatic grasping. The intelligent robot can autonomously navigate to the workpiece placement area (grasping zone), achieve real-time obstacle avoidance via sensors during movement, identify and grasp the workpiece upon reaching the target area, and accurately place it at the designated testing location (placement zone) of a LECO hardness tester.

The design of this intelligent robot focuses on creating a lightweight and versatile platform. The mechanical arm structure is optimized for compactness, utilizing 3D-printed components to reduce weight while maintaining robustness. The mobile platform employs a four-Mecanum wheel configuration, enabling omnidirectional movement without the need for additional steering mechanisms. This design allows the intelligent robot to maneuver in tight spaces and adapt to dynamic environments. The integration of visual feedback ensures precise control, with the system capable of processing real-time image data to adjust movements and grasping actions. Throughout this paper, we emphasize the capabilities of this intelligent robot, highlighting its potential for various indoor applications.
Mechanical Arm Design for the Intelligent Robot
Based on the dimensions of target workpieces, we designed a lightweight mechanical arm structure. This mechanical arm has five degrees of freedom: three lifting joints, one base rotation joint, and one end-effector rotation joint, meeting posture-adjustment requirements during grasping and placement. The overall structure consists of upper, mid-upper, mid-lower, and lower brackets, a flexible end-effector (flexible gripper), a support platform, an industrial camera mount, and six servo motors. The flexible gripper is made of rubber resin, allowing for interchangeable gripper jaws to accommodate workpieces of different shapes and ensuring stable, reliable grasping operations.
We applied inverse kinematics analysis using a geometric method to solve the mechanical arm’s motion. Taking motion in the grasping plane as an example, we illustrate the geometric analysis below. Given the target coordinates of the end-effector \((X_t, Y_t)\) and the lengths of the mechanical arm links \(L_1\), \(L_2\), and \(L_3\), the joint angles \(\alpha\), \(\beta\), and \(\gamma\) can be derived through geometric relationships. The equations are as follows:
$$ X_t = L_1 \cos \alpha + L_2 \cos(\alpha + \beta) + L_3 \cos(\alpha + \beta + \gamma) $$
$$ Y_t = L_1 \sin \alpha + L_2 \sin(\alpha + \beta) + L_3 \sin(\alpha + \beta + \gamma) $$
Let \(\delta = \alpha + \beta + \gamma\). Since \(\delta\) is specified by the desired end-effector orientation, moving the \(L_3\) terms to the left-hand side of both equations and summing their squares eliminates the dependence on \(\alpha\), yielding:
$$ (X_t - L_3 \cos \delta)^2 + (Y_t - L_3 \sin \delta)^2 = L_1^2 + L_2^2 + 2 L_1 L_2 \cos \beta $$
Rearranging for \(\cos \beta\), we obtain:
$$ \cos \beta = \frac{(X_t - L_3 \cos \delta)^2 + (Y_t - L_3 \sin \delta)^2 - L_1^2 - L_2^2}{2 L_1 L_2} $$
Thus, \(\beta\) can be calculated as:
$$ \beta = \arccos\left( \frac{(X_t - L_3 \cos \delta)^2 + (Y_t - L_3 \sin \delta)^2 - L_1^2 - L_2^2}{2 L_1 L_2} \right) $$
Substituting \(\beta\) back, we solve for \(\alpha\) and \(\gamma\) sequentially. This geometric approach ensures precise positioning of the intelligent robot’s mechanical arm. The parameters of the mechanical arm are summarized in Table 1.
| Component | Description | Specification |
|---|---|---|
| Degrees of Freedom | Total DOF | 5 (3 lift, 1 rotation, 1 end-effector) |
| Servo Motors | Quantity and Type | 6 (3 single-axis, 3 dual-axis) |
| Link Lengths | L1, L2, L3 | 150 mm, 120 mm, 100 mm (example values) |
| End-Effector | Material | Rubber resin, interchangeable jaws |
| Workspace Diameter | Fixed space grasping | Approximately 450 mm |
This design enables the intelligent robot to perform grasping in both fixed and non-fixed spaces, enhancing its versatility. The mechanical arm subsystem is integrated with the mobile platform, forming a cohesive intelligent robot system capable of autonomous tasks.
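The geometric solution derived above can be sketched in Python. This is a minimal illustration, not the deployed code: the link lengths default to the example values in Table 1 (converted to metres), the function name is ours, and the elbow-down branch of the \(\arccos\) solution is chosen arbitrarily.

```python
import math

def solve_planar_ik(xt, yt, delta, l1=0.150, l2=0.120, l3=0.100):
    """Geometric inverse kinematics for the 3-link planar arm.

    (xt, yt): target end-effector position in metres; delta is the
    specified end-effector orientation alpha + beta + gamma (radians).
    Returns (alpha, beta, gamma) in radians, elbow-down solution.
    """
    # Wrist position: subtract the last link from the target.
    wx = xt - l3 * math.cos(delta)
    wy = yt - l3 * math.sin(delta)

    # cos(beta) from the law of cosines, as in the equation for beta above.
    c_beta = (wx**2 + wy**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if not -1.0 <= c_beta <= 1.0:
        raise ValueError("target out of reach")
    beta = math.acos(c_beta)

    # alpha from the wrist direction minus the elbow-offset angle.
    alpha = math.atan2(wy, wx) - math.atan2(l2 * math.sin(beta),
                                            l1 + l2 * math.cos(beta))
    gamma = delta - alpha - beta
    return alpha, beta, gamma
```

A quick sanity check is to run the forward equations for known joint angles and confirm the solver recovers them.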
Visual Feedback System in the Intelligent Robot
The machine vision system comprises image acquisition, image analysis, and communication/execution modules. Image acquisition and execution rely on hardware, while image analysis runs vision algorithms on the processor to perform image recognition and processing. In this intelligent robot, machine vision provides feedback control for the mechanical arm. The grasping program logic contains two key decision nodes at which the visual system analyzes environmental information and operation results in real time; the analysis results serve as feedback signals that guide mechanical arm adjustments.
Specifically, an industrial camera captures images of the target area and transmits them to the processor. The system invokes Python’s OpenCV library for image preprocessing, including binarization and Gaussian blur, to reduce noise and improve quality. Subsequently, it extracts the pixel coordinates of the circular outer frame center of the workpiece, converts them to the mechanical arm workspace coordinates via inverse kinematics algorithms, and achieves precise control to complete grasping. The process is summarized in Table 2.
| Step | Action | Tool/Algorithm |
|---|---|---|
| 1 | Image Acquisition | Industrial camera |
| 2 | Preprocessing | OpenCV (binarization, Gaussian blur) |
| 3 | Feature Extraction | Circle detection, center point extraction |
| 4 | Coordinate Conversion | Inverse kinematics transformation |
| 5 | Control Execution | Servo motor PWM signals |
The visual feedback system enhances the accuracy of the intelligent robot, allowing it to adapt to variations in workpiece position and orientation. This capability is critical for applications requiring high precision, such as hardness testing automation.
Mobile Obstacle Avoidance Platform for the Intelligent Robot
The mobile platform of this intelligent robot is wheeled, equipped with four Mecanum wheels. These wheels enable omnidirectional movement, including X/Y translation and rotation around a central vertical axis, directly driven by motors without additional steering or braking mechanisms. This allows the intelligent robot to operate flexibly in narrow and complex environments, meeting mobility requirements for intelligent grasping. The autonomous obstacle avoidance function is realized using ultrasonic time-of-flight sensors, which offer high detection accuracy, large measurement range, and fast response. The principle involves estimating obstacle distance based on the time-of-flight of sound waves. The system then controls the four motors via Python programs to execute avoidance maneuvers. The core control logic is as follows.
Let \(d\) be the measured distance to an obstacle, and \(d_{\text{threshold}}\) be the safety threshold (e.g., 350 mm). If \(d \leq d_{\text{threshold}}\), the intelligent robot stops, moves laterally by a distance \(s\) (e.g., 200 mm, 1.5 times its width), and then returns to the original path. This behavior ensures robust navigation. The motion kinematics for Mecanum wheels can be described using the following equations for velocity vectors:
$$ \begin{bmatrix} v_x \\ v_y \\ \omega \end{bmatrix} = R \cdot \begin{bmatrix} \omega_1 \\ \omega_2 \\ \omega_3 \\ \omega_4 \end{bmatrix} $$
Where \(v_x\) and \(v_y\) are linear velocities, \(\omega\) is angular velocity, \(R\) is the wheel configuration matrix, and \(\omega_i\) are wheel angular velocities. For a four-Mecanum wheel platform with wheel radius \(r\) and chassis parameters, the matrix is derived from wheel orientations. This enables the intelligent robot to achieve smooth omnidirectional motion. The platform specifications are listed in Table 3.
| Parameter | Value | Description |
|---|---|---|
| Wheel Type | Mecanum wheel | Four wheels, omnidirectional |
| Motor Type | JGB37-520 DC brushless | Reduction ratio 19:1, torque 2.2 N·m |
| Speed Range | 5–16 m/min | Continuously adjustable |
| Obstacle Sensor | Ultrasonic ranging | Range up to 500 mm, accuracy ±5 mm |
| Control Chip | TB6612FNG | Motor driver with PWM control |
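The Mecanum kinematics above can be written out explicitly. The sketch below assumes one common X-roller arrangement and wheel ordering (front-left, front-right, rear-left, rear-right); `lx` and `ly` are the half-distances from the chassis centre to the wheel axes, and the signs depend on the roller-orientation convention, so these matrices are illustrative rather than the exact ones used on the robot.

```python
import numpy as np

def mecanum_forward_matrix(r, lx, ly):
    """Forward kinematic matrix R such that the body twist
    [vx, vy, omega] = R @ [w1, w2, w3, w4] (wheel radius r)."""
    k = 1.0 / (lx + ly)
    return (r / 4.0) * np.array([
        [1.0,  1.0,  1.0,  1.0],   # vx: all wheels contribute equally
        [-1.0, 1.0,  1.0, -1.0],   # vy: diagonal pairs oppose
        [-k,   k,   -k,    k],     # omega: scaled by chassis geometry
    ])

def wheel_speeds(vx, vy, omega, r, lx, ly):
    """Inverse mapping: wheel angular velocities for a desired twist."""
    s = (lx + ly) * omega
    return (1.0 / r) * np.array([
        vx - vy - s,   # front-left
        vx + vy + s,   # front-right
        vx + vy - s,   # rear-left
        vx - vy + s,   # rear-right
    ])
```

Composing the two mappings recovers the commanded twist, which is a useful consistency check when deriving the signs for a particular chassis.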
This mobile platform forms the base of the intelligent robot, providing the mobility needed for autonomous operations in indoor environments.
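The threshold-and-sidestep behaviour described above can be expressed as a small planner. In this sketch the motor commands are abstracted into named steps, the function names are ours, and the forward segment that clears the obstacle before realigning is an assumption not spelled out in the text.

```python
def ultrasonic_distance_mm(echo_time_s, speed_of_sound=343.0):
    """Distance from time-of-flight: sound travels out and back,
    so halve the round-trip time. Returns millimetres."""
    return echo_time_s * speed_of_sound / 2.0 * 1000.0

def avoidance_plan(d_mm, d_threshold=350.0, sidestep_mm=200.0,
                   clearance_mm=350.0):
    """Return the manoeuvre sequence for a measured distance d_mm.

    Mirrors the behaviour described above: within the threshold the
    robot stops, sidesteps left, drives past the obstacle (assumed
    clearance), realigns right, then resumes the original path.
    """
    if d_mm > d_threshold:
        return [("forward", None)]
    return [("stop", 0.0),
            ("strafe_left", sidestep_mm),
            ("forward", clearance_mm),      # clear the obstacle (assumed)
            ("strafe_right", sidestep_mm),
            ("forward", None)]
```

In the actual system, each named step would be translated into the four motor commands via the Mecanum inverse kinematics.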
Hardware Design of the Intelligent Robot
The hardware design centers on a compact and efficient control system. We selected a Raspberry Pi 4B development board as the industrial computer module for the intelligent robot. This board runs a Linux system, features a Broadcom BCM2711 main chip with processing capability comparable to entry-level x86 PCs, and meets computational demands for vision processing, path planning, and multi-sensor fusion. Its rich interfaces accommodate motors, servos, and various sensors, facilitating system integration and debugging. The hardware architecture is summarized in Table 4.
| Module | Component | Function |
|---|---|---|
| Control Unit | Raspberry Pi 4B | Main processor, running Linux and Python scripts |
| Motor Drivers | TB6612FNG chips | Control DC motors for Mecanum wheels |
| Servo Motors | 6 servos (10 kg·cm and 15 kg·cm) | Mechanical arm actuation |
| Sensors | Ultrasonic sensors, industrial camera | Obstacle detection and visual feedback |
| Power Supply | Li-ion battery pack | 12 V, 3000 mAh, for mobility and actuation |
For the drive system, considering the intelligent robot’s total weight of approximately 3 kg and the power losses inherent to Mecanum wheels, we used JGB37-520 DC brushless geared motors (reduction ratio 19:1, rated speed 407 rpm, torque 2.2 N·m). These motors are compact with stable output, suitable for the mobile platform. The drive circuit is built around TB6612FNG chips, managed by the industrial computer: pin-level combinations set each motor’s direction, while PWM signals set its speed. Combined with encoder pulse counting on the MCU and a PID algorithm, closed-loop speed control is achieved, ensuring motion precision and stability.
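The closed-loop speed control can be sketched as a PID routine with simple anti-windup. The class name, gains, and output range below are hypothetical placeholders, not the values tuned on the robot.

```python
class SpeedPID:
    """Minimal PID loop for wheel-speed control (illustrative only).
    Encoder feedback gives the measured rpm; output is a PWM duty 0-100%."""

    def __init__(self, kp, ki, kd, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint_rpm, measured_rpm, dt):
        err = setpoint_rpm - measured_rpm
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        # Conditional integration: only accumulate when the tentative
        # output is unsaturated, which prevents integral windup.
        tentative = (self.kp * err + self.ki * (self.integral + err * dt)
                     + self.kd * deriv)
        if self.out_min < tentative < self.out_max:
            self.integral += err * dt
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(self.out_max, max(self.out_min, out))  # clamp to PWM range
```

Each control tick, the MCU's encoder count over the sampling interval is converted to rpm and fed in as `measured_rpm`; the returned duty cycle is written to the TB6612FNG's PWM input.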
The mechanical arm employs lightweight servos: single-axis and dual-axis servos, all controlled via PWM signals with precision up to 0.1°, meeting the mechanical arm’s motion control needs. This hardware configuration ensures that the intelligent robot operates reliably while maintaining low cost and modularity.
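The angle-to-PWM mapping for such servos can be illustrated as follows. The 50 Hz period and the 0.5–2.5 ms pulse range over 0–180° are a common hobby-servo convention and are assumptions here, not measured values for these particular servos.

```python
def servo_duty_cycle(angle_deg, freq_hz=50.0,
                     min_pulse_ms=0.5, max_pulse_ms=2.5,
                     max_angle=180.0):
    """Map a joint angle to a PWM duty cycle (%) for a hobby-style servo.

    The pulse width is interpolated linearly between min_pulse_ms and
    max_pulse_ms over the servo's angular range, then expressed as a
    fraction of the PWM period (20 ms at 50 Hz).
    """
    if not 0.0 <= angle_deg <= max_angle:
        raise ValueError("angle out of range")
    span = max_pulse_ms - min_pulse_ms
    pulse_ms = min_pulse_ms + span * angle_deg / max_angle
    period_ms = 1000.0 / freq_hz
    return 100.0 * pulse_ms / period_ms
```

Under this convention a 0.1° step corresponds to roughly a 1.1 µs change in pulse width, so the PWM timer must resolve at least that finely to exploit the servos' stated precision.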
Experimental Testing of the Intelligent Robot
We conducted experiments to evaluate the performance of the intelligent robot in autonomous obstacle avoidance and grasping tasks. The mechanical arm subsystems were manufactured using 3D printing for lightweight components. After assembly and calibration, the mechanical arm can grasp in a fixed space of about 450 mm diameter and perform non-fixed space grasping while mobile. The servos are assigned as follows: Servo 1 controls base rotation, Servos 2-4 drive joint movements, Servo 5 adjusts end-effector posture, and Servo 6 controls the flexible end-effector opening/closing. Servos 1, 5, and 6 are 10 kg·cm single-axis, while Servos 2-4 are 15 kg·cm dual-axis. The mobile platform supports 4-speed adjustments, with operational speeds continuously adjustable from 5 to 16 m/min.
In the experiments, the intelligent robot started 550 mm from an obstacle (an 80 mm diameter cylinder), with the obstacle 350 mm from the workpiece area. The intelligent robot moves straight by default, stops when the distance to the obstacle is ≤ 350 mm, moves left approximately 200 mm (1.5 times its width), then right the same distance to realign, and finally proceeds straight to the target area to complete avoidance. Upon reaching the target, the visual system identifies the workpiece and extracts its position. Through inverse kinematics algorithms, the control system computes the spatial position and joint angles for the end-effector, achieving accurate grasping and placement.
Experimental results indicate that detection errors during obstacle avoidance may cause the mobile platform to not precisely reach stopping positions. Additionally, lighting conditions and environmental noise can affect visual recognition. We performed 50 complete experiments, with 46 successes, yielding a success rate of 92%. This demonstrates that the intelligent robot system exhibits high stability and operational accuracy. The data are summarized in Table 5.
| Experiment Phase | Success Rate | Key Observations |
|---|---|---|
| Obstacle Avoidance | 94% (47/50) | Minor deviations in positioning due to sensor noise |
| Workpiece Recognition | 94% (47/50) | Lighting variations affected detection in 3 cases |
| Grasping and Placement | 92% (46/50) | Failures due to mechanical arm calibration errors |
| Overall Task Completion | 92% (46/50) | System robust in controlled environments |
These results validate the effectiveness of the intelligent robot design. The integration of visual feedback and omnidirectional mobility enables the intelligent robot to perform complex tasks autonomously.
Conclusion and Future Work
We designed and implemented an intelligent robot system integrating mobile obstacle avoidance and mechanical arm grasping. The mobile platform uses a four-Mecanum-wheel structure for omnidirectional movement, combined with ultrasonic distance sensors for autonomous obstacle avoidance. The mechanical arm subsystem achieves stable grasping of workpieces in fixed and non-fixed spaces through a multi-degree-of-freedom servo configuration and inverse kinematics algorithms. Experimental results show that this intelligent robot can effectively complete obstacle avoidance and target grasping tasks in set environments, with high success rates and good system stability and accuracy. However, lighting changes and obstacle avoidance precision can still cause some operational errors. Future research will focus on improving mobile platform positioning accuracy, optimizing visual recognition algorithms, and enabling autonomous operations in more complex environments to enhance the system’s practicality and robustness. This work contributes to the development of low-cost, multifunctional indoor intelligent robot platforms, paving the way for broader applications in intelligent manufacturing and beyond.
The intelligent robot presented here showcases how visual feedback can be leveraged to create adaptive systems. As intelligent robots become more prevalent, designs like this will play a key role in automating tedious or hazardous tasks. We believe that continued refinement of such intelligent robots will drive innovation in various fields, from manufacturing to logistics.
