In the context of global agricultural modernization, traditional manual harvesting methods are increasingly inadequate to meet the demands for high efficiency and precision. Daylily, an important economic crop, poses significant harvesting challenges because of its concentrated picking window: buds must be picked within 1–2 hours before flowering to maintain quality and yield. The labor-intensive nature, low efficiency, and high costs of manual harvesting have driven the exploration of automated solutions. As a researcher in agricultural robotics, I have focused on developing an intelligent robot system that integrates vision localization and navigation to address these issues. This paper presents my work on designing, implementing, and validating an intelligent trajectory planning system for daylily harvesting robots, leveraging advanced perception, decision-making, and control technologies to enhance operational performance in complex field environments.
The core of my research revolves around the development of an intelligent robot capable of autonomous navigation and precise harvesting. The challenges in daylily harvesting include the random distribution of flower buds, dense plant entanglement, and drastic natural lighting variations, which complicate target recognition and localization. Previous studies have made progress in visual detection and path planning, but gaps remain in system robustness and real-time adaptability under extreme conditions. My approach builds upon existing techniques by integrating multi-sensor fusion, deep learning-based recognition, and dynamic trajectory optimization to create a more reliable and efficient intelligent robot. This system aims to achieve high positioning accuracy, improved harvesting success rates, and enhanced operational efficiency, ultimately contributing to the advancement of smart agriculture.
To provide a foundation, I reviewed the current state of technology in agricultural robotics. Existing research on daylily harvesting robots has explored various visual localization and navigation methods. For instance, some studies have utilized RGB color spaces combined with deep convolutional neural networks for target region identification, while others have employed binocular vision systems with calibration techniques like Zhang’s method and Bouguet’s algorithm to enhance spatial positioning accuracy. Depth cameras, such as the Intel RealSense D435, have been integrated with improved detection networks like Faster R-CNN to achieve high detection precision in complex backgrounds. Additionally, path planning algorithms, including modified P-RRT* methods with KNN rapid search and adaptive step sizes, have been proposed to optimize harvesting paths in obstacle-rich environments. These advancements form a technological framework centered on visual perception, spatial localization, and intelligent path planning. However, challenges persist in handling severe lighting variations, plant occlusion, and real-time response, necessitating further innovation. My work seeks to address these limitations by developing a more adaptive and robust intelligent robot system.
In designing the intelligent robot platform, I selected a 4HF-2 crawler-based daylily auxiliary harvester as the hardware foundation. The system comprises five main modules: the mobility system, vision system, upper computer, lower computer, and harvesting mechanism, all interconnected via controllers for coordinated operation. The mobility system features a crawler chassis equipped with navigation cameras for precise movement control. The vision system incorporates an RGB-D camera for capturing texture and depth information, complemented by a navigation camera and an inertial measurement unit (IMU) to provide attitude and acceleration data. The upper computer handles harvesting decisions and path planning, while the lower computer uses drive modules, encoder arrays, and electromagnetic valves to control the executive mechanisms. The harvesting mechanism employs a two-finger flexible gripper paired with a pneumatic cylinder to perform efficient flower bud picking. This integrated design ensures that the intelligent robot can operate autonomously in field conditions, with all modules working in synergy to achieve high-performance harvesting.

The visual perception system is a critical component of the intelligent robot, enabling accurate environment understanding. To enhance perception stability, I developed a multi-source information fusion approach using an RGB-D camera, navigation camera, and IMU. The RGB-D camera captures crop texture and depth data, the navigation camera aids in path recognition and area localization, and the IMU provides real-time attitude angles and linear acceleration. These sensors are synchronized temporally and spatially aligned using camera extrinsic matrices, with data fusion performed via an Extended Kalman Filter (EKF) to maintain consistent environmental understanding. The state estimation model for fusion is represented as follows:
$$X_p = A_p X_{p-1} + B_p U_p + V_p$$
where \(X_p\) is the predicted system state vector at time \(p\), \(A_p\) is the state transition matrix, \(B_p\) is the control input matrix, \(U_p\) is the input vector, and \(V_p\) is the process noise vector. This fusion method significantly improves target detection rates under varying lighting conditions, with experiments showing an 18.6% increase and positioning errors stabilized within ±5 mm. Such precision supports the intelligent robot’s trajectory planning and control tasks effectively.
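The prediction step above can be sketched in a few lines of NumPy. This is a minimal illustration of the state-propagation equation only (the EKF update step, and the specific state layout and noise covariances used on the robot, are not specified in this paper); the constant-velocity model and all numerical values below are illustrative assumptions.

```python
import numpy as np

def ekf_predict(x_prev, P_prev, A, B, u, Q):
    """One EKF prediction step, mirroring X_p = A_p X_{p-1} + B_p U_p + V_p.

    x_prev : previous state estimate X_{p-1}
    P_prev : previous state covariance
    A, B   : state transition and control input matrices (A_p, B_p)
    u      : control input vector U_p
    Q      : covariance of the process noise V_p
    """
    x_pred = A @ x_prev + B @ u        # propagate the state estimate
    P_pred = A @ P_prev @ A.T + Q      # propagate the uncertainty
    return x_pred, P_pred

# Toy 2-state example (position, velocity) with a constant-velocity model.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])   # acceleration input enters both states
x, P = ekf_predict(np.array([0.0, 1.0]), np.eye(2),
                   A, B, np.array([0.0]), 0.01 * np.eye(2))
```

In the full filter, this prediction would be followed by a measurement update that blends in the RGB-D, navigation-camera, and IMU observations according to their respective noise models.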
For target crop recognition and three-dimensional localization, I implemented a method based on deep convolutional neural networks (CNNs), tailored to the slender, soft, and densely distributed characteristics of daylily flower buds. In the recognition phase, I used an improved YOLOv5 network with a lightweight attention mechanism to boost accuracy for small-sized targets. After extracting target regions, I applied stereo vision to compute disparity and combined it with depth reconstruction algorithms for spatial coordinate solving. The three-dimensional localization relies on binocular vision principles, with the depth calculation formula given by:
$$D_q = \frac{F_q \times C_q}{d_q}$$
where \(D_q\) is the depth value of the target object, \(F_q\) is the camera focal length, \(C_q\) is the baseline distance between the cameras, and \(d_q\) is the disparity value corresponding to the pixel. By mapping pixel positions with the intrinsic matrix, I derived the target’s world coordinates, achieving high-precision 3D reconstruction. Testing under various conditions—sunny, cloudy, and backlit—resulted in a daylily recognition accuracy of 91.7% and 3D positioning errors below ±4.5 mm. This reliable localization provides essential data for the intelligent robot’s end-effector during precise harvesting operations.
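The depth formula and the intrinsic-matrix mapping can be sketched as follows. The helper names and the focal length, baseline, and principal-point values are illustrative assumptions, not the calibrated parameters of the actual camera rig.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Binocular depth: D_q = (F_q * C_q) / d_q.

    focal_px     : focal length F_q, in pixels
    baseline_mm  : camera baseline C_q, in mm
    disparity_px : disparity d_q for the pixel, in pixels (must be > 0)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at the given depth through the intrinsics."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: 640 px focal length, 55 mm baseline, 32 px disparity.
d = depth_from_disparity(640.0, 55.0, 32.0)
pt = pixel_to_camera(400, 300, d, 640.0, 640.0, 320.0, 240.0)
```

A further rigid-body transform (the camera extrinsics) would then take the camera-frame point into the world frame used by the end-effector.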
Navigation and path planning are pivotal for the intelligent robot’s autonomous operation in field environments. I employed an environment-aware improved A* algorithm for path planning, incorporating a Manhattan distance-based heuristic to evaluate navigation costs efficiently. The cost function is defined as:
$$F(n) = \begin{cases} (k-2) \times g(n) + h(n) & \text{for } t=1 \\ g(n) + h(n) & \text{for } t=0 \end{cases}$$
where \(F(n)\) is the comprehensive evaluation cost of node \(n\), \(g(n)\) is the cumulative cost from the start node to the current node, \(h(n)\) is the estimated Manhattan distance from the current node to the goal node, and \(k\) is a dynamic adjustment coefficient. The Manhattan distance is computed as the sum of absolute distances along the X and Y axes, reflecting navigation costs in structured row-crop environments. The path planning process involves creating open and closed lists, iteratively expanding nodes based on optimal costs, and backtracking upon reaching the goal. This approach ensures continuous, smooth, and obstacle-avoiding paths, enhancing the intelligent robot’s ability to navigate complex fields. To further optimize performance, I integrated dynamic weight adjustments that adapt to obstacle density and path feasibility changes, enabling rapid local path replanning when needed.
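The open/closed-list procedure above can be sketched on a 4-connected occupancy grid. How the flag \(t\) is set is my reading of the cost function: I treat \(t=1\) as membership in a caller-supplied set of obstacle-dense cells, where the \((k-2)\) weight is applied to \(g(n)\); the grid encoding and unit step costs are likewise illustrative assumptions.

```python
import heapq

def manhattan(a, b):
    # Sum of absolute distances along the X and Y axes.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal, k=3.0, dense=frozenset()):
    """A* over a 4-connected grid. grid: 0 = free, 1 = obstacle.

    For nodes in `dense` (t = 1) the cost is (k-2)*g(n) + h(n);
    otherwise (t = 0) it is the standard g(n) + h(n).
    """
    rows, cols = len(grid), len(grid[0])
    open_heap = [(manhattan(start, goal), start)]   # the open list
    g = {start: 0.0}
    parent = {start: None}
    closed = set()                                   # the closed list
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                 # backtrack to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in closed):
                ng = g[node] + 1.0
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    parent[nxt] = node
                    w = (k - 2.0) if nxt in dense else 1.0
                    heapq.heappush(open_heap, (w * ng + manhattan(nxt, goal), nxt))
    return None   # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Dynamic weight adjustment then amounts to re-running this search with an updated `dense` set (or `k`) whenever obstacle density or path feasibility changes, which is what enables rapid local replanning.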
To ensure the intelligent robot’s adaptability and stability in complex field conditions, I implemented several optimizations across the perception, decision-making, and control layers. For visual perception, I used adaptive thresholding image segmentation and multi-scale feature extraction networks to mitigate the impact of severe natural lighting variations, maintaining target detection accuracy above 89% under strong-light, shadow, and low-light interference. In path planning, the dynamic weight adjustment mechanism allows real-time tuning of heuristic function proportions, reducing overall path tracking errors to within ±8 mm. For mobility, the crawler chassis is equipped with an adaptive traction control module that uses IMU attitude information to correct trajectories, limiting operational deviations to less than 2% on muddy or sloped terrain. In the execution system, a flexible force control strategy for the end-effector adjusts gripping force based on real-time feedback, preventing bud damage and increasing harvesting success rates to 93.7%. These enhancements collectively bolster the intelligent robot’s robustness, enabling reliable performance across diverse environmental challenges.
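The flexible force-control idea can be illustrated with a simple proportional update loop. This is a minimal sketch only: the gain, the force limit, and the crude "plant" model that converts commanded to measured force are all hypothetical stand-ins, not the robot's calibrated control law.

```python
def grip_force_step(force_measured, force_target, force_cmd, kp=0.4, f_max=2.0):
    """One proportional force-control update for the two-finger gripper.

    Moves the commanded force toward the target based on the sensed
    gripping force, clamped to [0, f_max] newtons so that delicate
    buds are never crushed. kp and f_max are illustrative values.
    """
    error = force_target - force_measured
    force_cmd = force_cmd + kp * error
    return max(0.0, min(force_cmd, f_max))

# Converge toward a 1.2 N target; the factor 0.9 is a toy model of
# how much of the commanded force the fingertip sensor actually sees.
cmd = 0.0
for _ in range(10):
    measured = 0.9 * cmd
    cmd = grip_force_step(measured, 1.2, cmd)
```

The clamp plays the safety role described above: even with a badly tuned gain, the commanded force can never exceed the damage threshold.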
To validate the system’s effectiveness, I conducted a series of experiments under typical field conditions using the 4HF-2 crawler-based harvester platform. The tests focused on positioning accuracy, harvesting success rate, trajectory planning efficiency, and environmental robustness. For positioning and harvesting performance, I used an Intel RealSense D435 RGB-D camera for real-time data acquisition, with deep neural networks for target recognition and 3D localization. Positioning error was measured against manually identified picking points, and harvesting success was defined as the complete removal of a flower bud in a single grip. The results from multiple trials are summarized in Table 1.
| Sample Number | Positioning Error (mm) | Harvesting Success Rate (%) |
|---|---|---|
| 1 | 4.2 | 94.5 |
| 2 | 4.8 | 93.2 |
| 3 | 5.0 | 92.6 |
| 4 | 4.6 | 94.1 |
| 5 | 5.1 | 91.7 |
As shown in Table 1, the intelligent robot achieved positioning errors within ±5 mm across all tests, with an average harvesting success rate of 93.22%. This demonstrates that the visual perception and trajectory control systems effectively support high-precision localization and efficient harvesting, highlighting the practical potential of the intelligent robot in agricultural applications.
To evaluate the impact of trajectory planning on operational efficiency, I compared autonomous navigation using the improved A* algorithm with traditional manual planning methods. Both approaches were tested in a standard daylily plot of approximately 100 m², with metrics including average trajectory deviation, harvesting quantity per unit time, and total operation time. The comparative results are presented in Table 2.
| Operation Method | Average Trajectory Deviation (mm) | Harvesting Rate (plants/h) | Operation Time (min) |
|---|---|---|---|
| Manual Planning | 12.7 | 96 | 63 |
| Autonomous Trajectory Planning | 5.6 | 134 | 45 |
Table 2 reveals that autonomous trajectory planning reduced average trajectory deviation by 55.9%, increased harvesting rate by 39.6%, and shortened operation time by 28.6% compared to manual planning. These improvements underscore the efficacy of intelligent trajectory planning in enhancing overall operational efficiency and precision, while minimizing energy consumption and operational redundancy, thereby ensuring stable system performance during prolonged use.
To assess the intelligent robot’s robustness under varying environmental conditions, I performed tests in different weather scenarios: sunny, cloudy, and light rain. For each condition, 30 operations were conducted, with navigation accuracy and harvesting success rate recorded. Navigation accuracy refers to the system’s ability to correctly follow planned paths, while harvesting success rate measures the proportion of successfully picked buds. The results are detailed in Table 3.
| Environmental Condition | Navigation Accuracy (%) | Harvesting Success Rate (%) | Average Path Deviation (mm) |
|---|---|---|---|
| Sunny | 96.7 | 92.8 | 6.2 |
| Cloudy | 95.1 | 91.5 | 6.7 |
| Light Rain | 91.3 | 88.4 | 8.5 |
From Table 3, the intelligent robot maintained stable performance in sunny and cloudy conditions, with navigation accuracy above 95% and harvesting success rates over 90%. Even in light rain, it achieved a harvesting success rate of 88.4% and navigation accuracy of 91.3%, with average path deviation controlled within ±8.5 mm. These results confirm the system’s strong robustness and adaptability to variable environmental factors, ensuring reliable continuous operation in real-world field settings.
The experimental outcomes validate the effectiveness of my intelligent robot system in addressing the key challenges of daylily harvesting. The integration of multi-sensor fusion, deep learning-based recognition, and dynamic path planning has yielded significant improvements in positioning accuracy, harvesting success rate, and operational efficiency compared to conventional methods. Formulas such as the EKF state estimation and the binocular depth calculation lend mathematical rigor to the approach, while Tables 1, 2, and 3 offer clear quantitative evidence of the performance gains. The intelligent robot’s ability to operate under conditions ranging from ideal weather to adverse scenarios demonstrates its practical applicability in agriculture. However, I acknowledge limitations, such as performance fluctuations under extreme lighting variations or high-density obstacles, where path replanning speed and end-effector fine-tuning response could be enhanced. Future work will focus on refining environmental modeling accuracy, optimizing real-time path reconstruction algorithms, and improving the flexible control strategy to develop even more adaptive and stable intelligent robot systems. The continuous evolution of such technologies holds promise for broader adoption in smart farming, ultimately contributing to sustainable agricultural practices.
In conclusion, my research on the intelligent trajectory planning system for daylily harvesting robots represents a step forward in agricultural automation. By fusing vision localization and navigation, I have created an intelligent robot capable of high-precision perception, autonomous navigation, and efficient harvesting in complex environments. The system’s design, incorporating hardware integration, advanced algorithms, and rigorous testing, has proven its superiority over traditional methods in terms of accuracy, success rate, and efficiency. The concept of the intelligent robot is central to this work, embodying the fusion of perception, decision-making, and action that defines modern robotics. As agriculture continues to evolve towards greater automation, intelligent robots like the one developed here will play a crucial role in enhancing productivity, reducing labor costs, and ensuring crop quality. I am confident that further innovations in this field will unlock new possibilities for intelligent harvesting solutions across various crops and settings, paving the way for a more efficient and sustainable future in farming.
