In recent years, mobile robots for indoor environments, such as delivery, disinfection, and inspection robots, have developed rapidly. Unlike structured, static indoor settings, however, there is growing demand for exploring unknown territory and performing hazardous rescue missions, which pushes mobile robots into unstructured field environments. This expansion places increasingly high demands on a robot's traversability and autonomy. Without artificial markers, an intelligent mobile robot must be capable of autonomous navigation and positioning, autonomous planning and control, obstacle identification and avoidance, and dynamic path planning in field environments. Two or more such robots can also operate in formation as a robot team. A tracked chassis adapts well to typical off-road field terrain, offering strong wading, obstacle-crossing, and slope-climbing capability. Most current field robots rely on pre-laid tracks or manual remote control; field scenes are complex and vast, making track laying labor-intensive, while remote control often fails to achieve the desired outcome. There is therefore a need for new, unassisted intelligent robots. Although significant progress has been made in field robotics, mature intelligent robots that do not depend on auxiliary markers, possess strong field traversability, and exhibit high mobility remain scarce. To address this, we have designed a field autonomous intelligent mobile robot. It employs a tracked chassis for enhanced off-road capability; combines GNSS, inertial navigation, and odometry for autonomous navigation and positioning; plans trajectories dynamically and autonomously in field environments; and uses ultrasonic sensors to identify and avoid large obstacles.
Additionally, it utilizes a star-shaped self-organizing network to support multi-robot collaboration in the same area. This intelligent robot represents a significant advancement in field robotics, with potential applications in search and rescue, patrol, and traffic management.
The field autonomous intelligent mobile robot consists of several key systems: the tracked chassis system, navigation system, control system, and payload module. The robot’s structure is designed to withstand harsh field conditions while maintaining high performance. The chassis control system includes a power module, motor drive unit, central control unit, and wireless communication unit. The navigation perception system comprises a GNSS receiver, three-axis gyroscope, three-axis accelerometer, and ultrasonic sensors. To improve local positioning accuracy, RTK (Real-Time Kinematic) differential technology is employed for centimeter-level absolute positioning. The combination of the three-axis gyroscope, three-axis accelerometer, and GNSS forms an integrated navigation unit that provides the robot’s position and attitude information. Ultrasonic sensors are used for close-range obstacle detection and avoidance within 2 meters. The electronic control system adopts a modular design, consisting of a power supply unit, central control unit, wireless communication unit, data acquisition and storage unit, and servo drive control unit. Operators set target points remotely via a terminal without specifying path trajectories. The robot receives target position commands through a wireless data transmission module, which are then filtered and sent to the central control unit. The central control unit dynamically plans the driving path based on the current robot state from the navigation system and the target coordinates, issuing control commands for both tracks within a finite time window. These commands are sent to the motor drive units, which perform closed-loop control of the left and right track motors. At each time step, the path is replanned according to navigation results and obstacle detection. The payload module can include targets or other user-required equipment mounted on the robot.
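The receive-target, plan, command, replan-each-step loop described above can be illustrated with a short Python sketch. It drives a unicycle-model robot toward a remotely set target with a simple proportional steering law; the gains, tolerances, and motion model are illustrative stand-ins for the actual planner and motor drive units, not the robot's real control code.

```python
import math

def drive_to_target(pose, target, goal_tol=0.5, dt=0.1, v_max=1.5, k_turn=1.5):
    """Minimal sketch of the per-step replanning loop.

    pose = (x, y, theta); target = (x, y). At each sampling period the
    heading error to the target is re-evaluated (dynamic replanning) and
    turned into track commands via a proportional steering law.
    Returns the final pose and whether the target was reached.
    """
    x, y, th = pose
    for _ in range(10000):                          # finite time window
        dx, dy = target[0] - x, target[1] - y
        if math.hypot(dx, dy) < goal_tol:
            return (x, y, th), True                 # target reached
        # wrap heading error into [-pi, pi)
        err = (math.atan2(dy, dx) - th + math.pi) % (2 * math.pi) - math.pi
        v = v_max * max(0.0, math.cos(err))         # slow down when misaligned
        w = k_turn * err                            # proportional heading control
        x += v * dt * math.cos(th)
        y += v * dt * math.sin(th)
        th += w * dt
    return (x, y, th), False
```

A higher-fidelity version would replace the steering law with the trajectory planner and pass `(v, w)` to the closed-loop motor drives.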

To quantify the performance of this intelligent robot, we analyze its key indicators through mathematical modeling. The total resistance \(F_A\) during tracked vehicle operation includes several components, expressed as:
$$ F_A = F_t + F_p + F_u + F_n $$
Here, \(F_t\) is the soil resistance, \(F_p\) is the slope resistance, \(F_u\) is the turning resistance, and \(F_n\) is the internal resistance of the walking mechanism. The soil resistance \(F_t\) arises from the compaction of soil by the tracks and is given by:
$$ F_t = G \omega_t \cos \alpha $$
where \(G\) is the total gravity of the machine, \(\alpha\) is the slope angle, and \(\omega_t\) is the operational specific resistance coefficient, which is higher on wet field grounds (e.g., 0.15) and lower on hard surfaces. The slope resistance \(F_p\) is due to the gravitational component on slopes:
$$ F_p = G \sin \alpha $$
The turning resistance \(F_u\) is complex, involving longitudinal and lateral forces during turning, but primarily consists of friction between the track plates and ground:
$$ F_u = \frac{1}{4} \beta \mu G \frac{L}{B} $$
where \(\beta\) is an additional resistance coefficient for soil scraping by the track plate edges (taken as 1.15), \(L\) is the track grounding length, \(B\) is the track gauge, and \(\mu\) is the turning resistance coefficient, estimated as:
$$ \mu = \frac{\mu_{\text{max}}}{0.85 + 0.15\, r/B} $$
Here, \(r\) is the turning radius of the track, and \(\mu_{\text{max}}\) is the coefficient for friction resistance under vertical load during braking, typically ranging from 0.4 to 0.7. The internal resistance \(F_n\) of the walking mechanism involves friction among track parts, drive wheels, guide wheels, and support wheels, and can be approximated by introducing a walking mechanism efficiency \(\eta\):
$$ F_n = \frac{1 - \eta}{\eta} F_A $$
with \(\eta\) empirically ranging from 0.7 to 0.8. Since \(F_n\) is expressed in terms of \(F_A\), the total resistance can be solved explicitly as \(F_A = \frac{\eta}{2\eta - 1}(F_t + F_p + F_u)\). The traction force \(F\) of the tracked chassis must exceed the total resistance:
$$ F \geq F_A $$
Additionally, the traction force must be less than the adhesion force between the tracks and ground to prevent slippage:
$$ F \leq G \varphi \cos \alpha $$
where \(\varphi\) is the adhesion coefficient, typically between 0.3 and 0.5. Based on this, the total power \(P\) of the drive motors can be calculated as:
$$ P = \frac{F_A v}{3600 \eta_1 \eta_2} $$
where \(v\) is the travel speed in km/h, \(\eta_1\) is the efficiency of the track walking device, and \(\eta_2\) is the transmission efficiency, ranging from 0.4 to 0.75; with \(F_A\) in newtons, this yields \(P\) in kilowatts. Substituting relevant parameters, the main specifications of this intelligent robot are summarized in Table 1.
| Indicator | Parameter | Indicator | Parameter |
|---|---|---|---|
| Dimensions | 960 mm × 680 mm × 322 mm | Operating Speed | 0–10 km/h |
| Chassis Weight | 70 kg | Ground Clearance | 85 mm |
| Rated Power | 765 W × 2 | Design Load Capacity | 80 kg |
| Motor Type | 48 V DC Brushless Motor | Rated Torque | 37.5 Nm |
| Reduction Ratio | 1:7 | Track Width | 150 mm |
| Max Obstacle Clearance | 150 mm | Max Climbing Slope | >30° |
| Max Span | >400 mm | Max Load Capacity | 100 kg |
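The resistance and power model above can be condensed into a small calculator. The sketch below uses illustrative parameter values (weight, coefficients, track geometry) rather than the actual chassis specification, and solves the implicit definition of \(F_n\) for the total resistance.

```python
import math

def total_resistance(G, alpha, omega_t=0.15, beta=1.15, mu=0.5,
                     L=0.5, B=0.4, eta=0.75):
    """Total running resistance F_A [N] of a tracked chassis.

    Illustrative parameters: G total weight [N], alpha slope angle [rad],
    omega_t soil-resistance coefficient, beta edge-scraping factor,
    mu turning-resistance coefficient, L track grounding length [m],
    B track gauge [m], eta walking-mechanism efficiency.
    """
    F_t = G * omega_t * math.cos(alpha)      # soil compaction resistance
    F_p = G * math.sin(alpha)                # slope (grade) resistance
    F_u = 0.25 * beta * mu * G * L / B       # turning resistance
    # F_n = (1 - eta)/eta * F_A  =>  F_A = eta/(2*eta - 1) * (F_t + F_p + F_u)
    return eta / (2 * eta - 1) * (F_t + F_p + F_u)

def motor_power_kw(F_A, v_kmh, eta1=0.9, eta2=0.6):
    """Total drive power P [kW] at speed v [km/h], with F_A in newtons."""
    return F_A * v_kmh / (3600 * eta1 * eta2)
```

For example, a 150 kg machine (about 1500 N) on level ground with the default coefficients needs roughly 2 kW at 5 km/h, consistent with the rated power in Table 1.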
The control system design is crucial for the autonomy of this intelligent robot. The robot uses a tracked chassis with differential steering via left and right drive wheels, classifying it as a differential drive mobile robot. The kinematic model is expressed as:
$$ \begin{bmatrix} \dot{x}(t) \\ \dot{y}(t) \\ \dot{\theta}(t) \end{bmatrix} = \begin{bmatrix} v(t) \cos \theta(t) \\ v(t) \sin \theta(t) \\ \omega(t) \end{bmatrix} $$
where \(v\) is the robot’s center velocity, \(\omega\) is the turning angular velocity, \((x, y)\) are the coordinates of the center point, and \(\theta\) is the heading angle. The state vector is \(\mathbf{q} = [x, y, \theta]^T\), and the control vector is \(\mathbf{u} = [v, \omega]^T\). For trajectory updates, we assume a system sampling period of 100 ms, with constant velocity within each period. If the vehicle rotates by an angle \(\beta\) around a center \(O\) in one period, the updated coordinates in the robot’s body frame \((x_0, y_0)\) can be derived. Let \(s\) be the arc length of the trajectory in one period, approximated as a straight line: \(s = v \Delta t\), where \(\Delta t\) is the sampling period. The central angle is \(\beta = s / R\), with \(R\) as the instantaneous turning radius. From geometric relations, we have:
$$ x_0 = R(1 - \cos \beta) $$
$$ y_0 = R \sin \beta $$
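The per-period pose update implied by these relations can be sketched as follows; when the angular velocity is near zero the arc degenerates to a straight segment, which the code treats as a special case.

```python
import math

def update_pose(x, y, theta, v, omega, dt=0.1):
    """One sampling-period pose update for a differential-drive robot.

    Uses the exact arc model when omega is non-negligible
    (R = v/omega, beta = omega*dt); otherwise a straight-line step.
    Body frame: forward displacement R*sin(beta), lateral R*(1 - cos(beta)).
    """
    if abs(omega) > 1e-6:
        R = v / omega                    # instantaneous turning radius
        beta = omega * dt                # central angle swept this period
        fwd = R * math.sin(beta)         # forward displacement in body frame
        lat = R * (1.0 - math.cos(beta)) # lateral displacement (left for omega > 0)
        # rotate the body-frame displacement into the world frame
        x += fwd * math.cos(theta) - lat * math.sin(theta)
        y += fwd * math.sin(theta) + lat * math.cos(theta)
        theta += beta
    else:
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
    return x, y, theta
```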
Path planning and trajectory tracking control are essential for this intelligent robot in markerless field environments. For typical trajectories in cruising states, we use descriptive function methods. Operators can select desired motion trajectories from a dropdown menu, with modifiable parameters for different trajectory functions. Currently supported typical paths include circular, elliptical, Lissajous, and figure-eight trajectories, each described by a set of characteristic parameters. Based on whether the desired trajectory is time-dependent, trajectory tracking control is divided into path following and trajectory tracking modes. Path following refers to the robot starting from a point and eventually following a geometric path in motion space at a given speed, independent of time. Trajectory tracking involves following a time-dependent geometric path with given speeds (linear and angular) as functions of time. This intelligent robot employs both modes to adapt to dynamic field conditions.
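As an example of the descriptive-function approach, the parametric generators below produce reference points for a circular and a figure-eight (1:2 Lissajous) trajectory; the amplitudes and angular rates are placeholder values of the kind an operator might set from the dropdown menu.

```python
import math

def circle(t, R=4.0, omega=0.1, cx=0.0, cy=0.0):
    """Point on a circular reference trajectory of radius R about (cx, cy)."""
    return cx + R * math.cos(omega * t), cy + R * math.sin(omega * t)

def figure_eight(t, A=5.0, B=3.0, omega=0.1):
    """Point on a figure-eight (1:2 Lissajous) reference trajectory.

    A, B are the half-axes [m]; the y component runs at twice the base
    angular rate, producing the crossing at the origin.
    """
    x = A * math.sin(omega * t)
    y = B * math.sin(2 * omega * t)
    return x, y
```

Sampling these functions at the controller rate yields a time-dependent reference for trajectory tracking; dropping the time parameterization and keeping only the geometric locus gives the path-following variant.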
Autonomous obstacle avoidance is a key feature of this intelligent robot. Ultrasonic sensors have a conical perception range in three-dimensional space. When an ultrasonic sensor detects an obstacle ahead, the distance can be calculated from the time of flight, but the exact position on an arc is ambiguous. Therefore, multiple ultrasonic sensors are installed at different positions and orientations on the robot. By intersecting arc segments, the precise obstacle location is determined. In our design, five ultrasonic sensors are placed: one at the front center (#3), and others at 60° left front (#1), 30° left front (#2), 30° right front (#4), and 60° right front (#5). To avoid interference, they are powered sequentially for measurement. When sensors #1 or #2 detect obstacles, the robot turns right for avoidance; when #4 or #5 detect obstacles, it turns left; and when #3 detects an obstacle, either direction is feasible. Upon detection, the robot automatically adjusts its reference trajectory using an artificial potential field method, guiding it along the obstacle edge toward the previous velocity vector direction. Once the robot aligns with a point on the desired trajectory, it switches back to reference trajectory tracking, enabling seamless obstacle avoidance for this intelligent robot.
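The turn-direction logic for this five-sensor layout can be sketched as a small decision function. The 2 m threshold matches the detection range stated earlier; the tie-break for the center sensor (#3) is an arbitrary choice, since the text allows either direction.

```python
def avoidance_turn(readings, threshold=2.0):
    """Choose a turn direction from the five ultrasonic range readings.

    readings: dict sensor_id -> distance [m]; ids 1 and 2 are left-front,
    3 is front-center, 4 and 5 are right-front (layout from the text).
    Returns 'right', 'left', or None when no obstacle is within range.
    """
    near = {i for i, d in readings.items() if d < threshold}
    if near & {1, 2}:
        return "right"       # obstacle on the left front -> turn right
    if near & {4, 5}:
        return "left"        # obstacle on the right front -> turn left
    if 3 in near:
        return "right"       # front-center: either direction is feasible
    return None
```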
For multi-robot collaboration, formation control allows follower robots to maintain specific distances and relative angles with a leader robot. The formation controller inputs are the follower’s velocity vector \(\mathbf{u}_f = [v_f, \omega_f]^T\) and the leader’s velocity vector \(\mathbf{u}_l = [v_l, \omega_l]^T\). The output \(\mathbf{z} = [\rho, \alpha, \phi]^T\) is computed from the geometric relationship between leader and follower:
$$ \begin{aligned} \rho &= \sqrt{(x_l - x_f)^2 + (y_l - y_f)^2} \\ \alpha &= \operatorname{atan2}(y_l - y_f,\, x_l - x_f) - \theta_f \\ \phi &= \alpha + \theta_f - \theta_l + \pi \end{aligned} $$
Here, \(\rho\) is the distance between leader and follower, \(\alpha\) is the angle between the follower’s heading and the line connecting them (positive counterclockwise), and \(\phi\) is the angle between the leader’s heading and the connecting line (positive counterclockwise). The dynamic differential equations for this formation system are:
$$ \begin{aligned} \dot{\rho} &= -v_f \cos \alpha - v_l \cos \phi \\ \dot{\alpha} &= \frac{v_f \sin \alpha - v_l \sin \phi}{\rho} - \omega_f \\ \dot{\phi} &= \frac{v_f \sin \alpha - v_l \sin \phi}{\rho} + \omega_f - \omega_l \end{aligned} $$
Selecting \(\rho\) and \(\alpha\) as outputs, with output vector \(\mathbf{z} = [\rho, \alpha]^T\), and letting \(\rho_d\) and \(\alpha_d\) be the desired distance and relative angle, we aim for a linearized system:
$$ \begin{aligned} \dot{\tilde{\rho}} &= -k_1 \tilde{\rho} \\ \dot{\tilde{\alpha}} &= -k_2 \tilde{\alpha} \end{aligned} $$
where \(\tilde{\rho} = \rho – \rho_d\), \(\tilde{\alpha} = \alpha – \alpha_d\), and \(k_1, k_2 > 0\) ensure error convergence to zero. Using input-output feedback linearization, the follower’s control input is derived as:
$$ \begin{aligned} v_f &= \frac{k_1 \tilde{\rho} - v_l \cos \phi}{\cos \alpha} \\ \omega_f &= \frac{v_f \sin \alpha - v_l \sin \phi}{\rho} + k_2 \tilde{\alpha} \end{aligned} $$
This enables stable formation control for multiple intelligent robots operating as a team.
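A minimal implementation of this feedback-linearizing control law might look as follows. The gains are illustrative, and the sign of the \(k_2\) term is chosen so that \(\dot{\tilde{\alpha}} = -k_2 \tilde{\alpha}\) holds under the relative dynamics above; singular configurations (\(\cos\alpha = 0\) or \(\rho = 0\)) would need explicit handling in practice.

```python
import math

def formation_control(rho, alpha, phi, v_l, rho_d, alpha_d, k1=1.0, k2=1.0):
    """Follower inputs (v_f, omega_f) via input-output feedback linearization.

    rho, alpha, phi: measured relative state; v_l: leader speed;
    rho_d, alpha_d: desired distance and relative angle; k1, k2: gains.
    Assumes cos(alpha) != 0 and rho > 0.
    """
    rho_t = rho - rho_d
    alpha_t = alpha - alpha_d
    v_f = (k1 * rho_t - v_l * math.cos(phi)) / math.cos(alpha)
    omega_f = (v_f * math.sin(alpha) - v_l * math.sin(phi)) / rho + k2 * alpha_t
    return v_f, omega_f
```

At the equilibrium where the follower sits directly behind the leader at the desired distance (\(\alpha = 0\), \(\phi = \pi\)), the law reduces to `v_f = v_l` and `omega_f = 0`, i.e. the follower simply matches the leader's motion.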
Extensive testing has been conducted to validate the performance of this intelligent robot. In field trials, the robot demonstrated a maximum straight-line speed of 10.8 km/h, a minimum turning radius of 0.1 m, and a maximum climbing slope of 30°. Path tracking errors were within 0.5 m, and the GNSS-based integrated navigation achieved positioning accuracy better than 5 cm and orientation accuracy better than 0.2°, meeting the target specifications. The robot successfully navigated flat ground, slopes, and uneven surfaces while following predefined trajectories such as circles and figure-eights. In autonomous obstacle avoidance tests, it dynamically scanned for obstacles, performed local path planning with obstacle inflation and fitting, and reliably avoided collisions in multiple scenarios. Formation tests showed that multiple robots could maintain desired formations under leader-follower control, adapting collaboratively to field conditions. These results underscore the robustness and versatility of this intelligent robot in unstructured environments.
The design of this field autonomous intelligent mobile robot integrates advanced technologies in navigation, control, and perception to address the challenges of unstructured field environments. By leveraging a tracked chassis, multi-sensor fusion, and intelligent algorithms, the robot achieves high autonomy, adaptability, and collaboration capabilities. Potential applications include field search and rescue, military reconnaissance, environmental monitoring, and infrastructure inspection. Future work may focus on enhancing AI-based decision-making, integrating more sensors like LiDAR or cameras for richer perception, and improving energy efficiency for extended missions. This intelligent robot represents a significant step toward deployable autonomous systems for field operations, with broad prospects for societal benefit.
To further elaborate on the technical aspects, consider the navigation system. The combination of GNSS, inertial measurement units (IMUs), and odometry is fused through an extended Kalman filter (EKF) for state estimation. The state vector includes position, velocity, attitude, and sensor biases. The prediction step propagates the state using IMU data, while the update step incorporates GNSS measurements and wheel encoder data. The EKF equations are as follows. Prediction:
$$ \begin{aligned} \hat{\mathbf{x}}_{k|k-1} &= f(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_k) \\ \mathbf{P}_{k|k-1} &= \mathbf{F}_k \mathbf{P}_{k-1|k-1} \mathbf{F}_k^T + \mathbf{Q}_k \end{aligned} $$
Update:
$$ \begin{aligned} \mathbf{K}_k &= \mathbf{P}_{k|k-1} \mathbf{H}_k^T (\mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^T + \mathbf{R}_k)^{-1} \\ \hat{\mathbf{x}}_{k|k} &= \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k (\mathbf{z}_k - h(\hat{\mathbf{x}}_{k|k-1})) \\ \mathbf{P}_{k|k} &= (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \mathbf{P}_{k|k-1} \end{aligned} $$
Here, \(\mathbf{x}\) is the state vector, \(\mathbf{u}\) is the control input, \(f\) is the nonlinear state transition function, \(\mathbf{F}\) is the Jacobian of \(f\), \(\mathbf{Q}\) is the process noise covariance, \(\mathbf{z}\) is the measurement vector, \(h\) is the measurement function, \(\mathbf{H}\) is its Jacobian, \(\mathbf{R}\) is the measurement noise covariance, and \(\mathbf{K}\) is the Kalman gain. This fusion ensures robust positioning for the intelligent robot even in GNSS-denied areas.
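The predict/update cycle can be written generically in a few lines of NumPy. The functions below take the models \(f, h\) and Jacobians \(\mathbf{F}, \mathbf{H}\) as arguments; this is a textbook sketch of the filter equations, not the robot's actual estimator.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """EKF prediction: propagate the state through f, covariance through F."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """EKF update: fuse measurement z with the predicted state."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - h(x))              # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P
```

As a toy usage example, a constant-velocity model with a position-only measurement (standing in for a GNSS fix) already shows the covariance shrinking after each update.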
For path planning, the intelligent robot employs both global and local planners. The global planner runs A* or Dijkstra's algorithm on a cost map generated from prior knowledge or satellite imagery, while the local planner uses the dynamic window approach (DWA) or timed elastic bands (TEB) for real-time obstacle avoidance. The DWA optimizes velocity commands \((v, \omega)\) within the feasible dynamic window by evaluating candidate trajectories on criteria such as alignment with the goal, clearance from obstacles, and speed. The objective function is:
$$ G(v, \omega) = \alpha \cdot \text{heading}(v, \omega) + \beta \cdot \text{dist}(v, \omega) + \gamma \cdot \text{velocity}(v, \omega) $$
where \(\alpha, \beta, \gamma\) are weighting factors (not to be confused with the slope angle and resistance coefficients above), heading measures progress toward the goal, dist measures clearance from obstacles, and velocity rewards faster motion. This allows the intelligent robot to navigate complex fields safely.
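A sketch of evaluating this objective for one candidate \((v, \omega)\) is shown below. The forward-simulation horizon, weights, and saturation limits are illustrative assumptions; a full DWA would score a whole grid of candidates and pick the maximum.

```python
import math

def dwa_score(v, omega, pose, goal, obstacles,
              w_head=0.8, w_dist=0.2, w_vel=0.1,
              dt=0.1, horizon=1.0, v_max=2.0, clear_max=2.0):
    """Weighted DWA objective G(v, omega) for one velocity candidate."""
    x, y, th = pose
    for _ in range(int(round(horizon / dt))):   # forward-simulate the candidate
        x += v * dt * math.cos(th)
        y += v * dt * math.sin(th)
        th += omega * dt
    # heading term: 1.0 when the final pose points at the goal, 0.0 when opposite
    err = abs(math.atan2(goal[1] - y, goal[0] - x) - th) % (2 * math.pi)
    heading = 1.0 - min(err, 2 * math.pi - err) / math.pi
    # clearance term: distance to the nearest obstacle, saturated at clear_max
    d = min((math.hypot(ox - x, oy - y) for ox, oy in obstacles),
            default=clear_max)
    dist = min(d, clear_max) / clear_max
    return w_head * heading + w_dist * dist + w_vel * v / v_max
```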
Energy management is critical for field operations. The intelligent robot’s power system includes lithium-ion batteries with battery management systems (BMS) for monitoring and protection. The energy consumption model considers motor power, electronics, and sensors. The average power draw \(P_{\text{avg}}\) can be estimated as:
$$ P_{\text{avg}} = P_{\text{motors}} + P_{\text{electronics}} + P_{\text{sensors}} $$
where \(P_{\text{motors}}\) depends on terrain resistance and speed, as derived earlier. For a mission duration \(T\) in hours, the required battery capacity \(C\) in watt-hours is:
$$ C = P_{\text{avg}} \cdot T $$
Assuming battery voltage \(V\), the ampere-hour rating is \(C / V\). With typical values, this intelligent robot can operate for several hours on a single charge, extendable via solar panels or swappable batteries.
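The capacity sizing can be condensed into a one-line helper. The 48 V bus matches the motor specification in Table 1, while the power figures and the 20 % safety margin in the example are assumptions for illustration.

```python
def battery_capacity_ah(p_motors, p_electronics, p_sensors,
                        hours, voltage=48.0, margin=1.2):
    """Required battery capacity [Ah] for a mission of the given duration.

    Sums the average power draws [W], multiplies by the mission time in
    hours to get watt-hours, applies a safety margin, and divides by the
    bus voltage.
    """
    p_avg = p_motors + p_electronics + p_sensors    # P_avg [W]
    watt_hours = p_avg * hours * margin             # C [Wh], with margin
    return watt_hours / voltage
```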
Communication is facilitated through a star-shaped self-organizing network using IEEE 802.11 (Wi-Fi) or long-range radios like LoRa. Each intelligent robot acts as a node, with one as a coordinator for data aggregation and relay to a base station. The network protocol includes time-division multiple access (TDMA) for collision avoidance and adaptive data rates for reliability. This supports real-time telemetry, video streaming, and command exchange for multi-robot systems.
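A fixed-slot TDMA assignment of the kind described can be sketched as follows; the slot length and node identifiers are illustrative, and a real coordinator would also handle joining, leaving, and adaptive data rates.

```python
def tdma_schedule(node_ids, slot_ms=50):
    """Assign each node one transmit slot per frame (coordinator's view).

    Returns (schedule, frame_ms), where schedule maps node id to its
    half-open [start, end) slot in milliseconds within the frame.
    """
    schedule = {nid: (i * slot_ms, (i + 1) * slot_ms)
                for i, nid in enumerate(node_ids)}
    return schedule, len(node_ids) * slot_ms

def slot_owner(schedule, t_ms, frame_ms):
    """Which node may transmit at time t_ms (taken modulo the frame)."""
    t = t_ms % frame_ms
    for nid, (start, end) in schedule.items():
        if start <= t < end:
            return nid
    return None
```

Because each node transmits only in its own slot, collisions are avoided by construction, at the cost of a fixed per-frame latency proportional to the number of nodes.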
In summary, this field autonomous intelligent mobile robot embodies cutting-edge robotics technology. Its design emphasizes autonomy, adaptability, and collaboration, making it suitable for diverse field applications. Through continuous innovation, such intelligent robots will play pivotal roles in overcoming the challenges of unstructured environments, paving the way for smarter and safer autonomous systems.
