Design and Application of an Intelligent Robot Technology Simulation Platform in Higher Education

In the current higher education system, engineering experiment teaching plays a crucial role in cultivating students’ innovative thinking and practical abilities. In robot technology education, experiment teaching not only deepens students’ understanding of the technology but also stimulates their innovative potential. However, many educational institutions face challenges when advancing robot technology experiment teaching, such as rigid teaching models, insufficiently versatile experiment platforms, outdated content, and limited experimental conditions. Among the key technologies involved, path planning, which ensures safe and efficient robot movement, is of paramount importance. The Robot Operating System (ROS) provides a flexible software platform for robotics, offering a modular and extensible development environment. To address these issues, universities should introduce ROS together with simulators such as Gazebo. Students can use ROS to quickly build robot control systems and debug them in virtual environments, reducing costs and risks. Gazebo offers a highly realistic simulation environment in which students can practice robot technology tasks such as navigation and object recognition, thereby deepening their comprehension and application of theoretical knowledge. These simulation tools provide safe, low-cost experimental settings, free from concerns about hardware limitations or expense.

To overcome the constraints of experimental conditions, improve teaching effectiveness, reduce students’ programming workload, and enhance teacher-student interaction, we designed and developed an intelligent robot technology simulation platform based on ROS. This platform aims to enable students to efficiently complete ROS-related simulation experiments, such as voice-controlled car model movement, Simultaneous Localization and Mapping (SLAM) navigation, and real-time path image transmission, with minimal programming effort. Through optimized human-machine interaction interface design, we constructed a realistic campus simulation environment and built robot kinematics models. This platform not only addresses issues like slow content updates and experimental limitations but also significantly improves students’ theoretical knowledge and application skills through comparative studies of path planning algorithms. Our experimental design and overall architecture aim to provide an innovative teaching solution in the field of robot technology education, promoting comprehensive student development.

In the design of the ROS-based intelligent robot technology simulation platform, the overall architecture serves as the core foundation for platform operation. It not only defines the division of labor and collaboration among the functional modules but also ensures the organic integration of the simulation, control, and human-machine interaction systems. The overall architecture of our proposed platform is illustrated in a conceptual diagram. The teaching experiment service layer plays a key role in campus robot simulation experiments: it takes the physics computations and sensor interface data provided by Gazebo and encapsulates them into nodes for publishing and data communication. The teacher-student interaction layer and the student experiment operation layer are integrated into the QT interaction interface. Teachers can use this interface to add and manage robot teaching experiment tasks, achieving hierarchical robot practice teaching while facilitating engineering practice feedback and management. Teachers and students can independently build simulation environments and use the built-in path planning algorithm library for autonomous learning and experiment completion. The platform’s operation not only improves teaching efficiency but also optimizes students’ learning outcomes.

Modern teaching-oriented human-machine interaction systems impose new requirements on experiment teaching interfaces: visual interface views, rich interactive experiences for teachers and students, and the control performance needed for intelligent robots. Designing a human-machine interaction interface that is simple to operate, user-friendly, and equipped with extensible programming interfaces is therefore crucial for implementing virtual simulation experiments of campus patrol robots on this platform. Through visual interface views, students can intuitively observe the robot’s operating status and control processes, deepening their understanding of the teaching content. Rich interactive operations enable students to participate actively in experiment teaching, increasing their interest and enthusiasm for exploration. Tailoring the interface to intelligent robot control performance requirements and providing extensible programming interfaces encourage students to explore and practice robot technology further, enhancing their ability to apply knowledge in related fields.

Human-Machine Interaction Interface Design Based on QT

QT is a cross-platform C++ application framework commonly used for developing graphical and non-graphical programs, with advantages such as cross-platform compatibility, ease of extension, and portability. We therefore built the experiment teaching interface of the intelligent robot technology simulation platform on the QT framework for teacher and student use. To improve teacher-student interaction, teacher management efficiency, and students’ theoretical learning and practical skills, the QT human-machine interaction interface comprises two views: a login interface and an experiment main interface. Correctly setting ROS_MASTER_URI and ROS_IP in the login interface is key to ensuring that ROS nodes can locate the ROS Master and communicate with other nodes in the simulation environment. After obtaining the simulation robot’s IP, teachers and students enter their identity information to log into the human-machine interaction interface, which also provides teaching feedback, experiment algorithms, and pre-configured robot node interfaces.
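As a minimal sketch of what this login step configures, the snippet below sets the two environment variables before starting a ROS node; the IP addresses and node name are illustrative assumptions, not values from the platform itself.

```python
import os
import rospy

# Hypothetical addresses: the ROS Master runs on the simulation host,
# while ROS_IP advertises this machine's reachable address to other nodes.
os.environ["ROS_MASTER_URI"] = "http://192.168.1.10:11311"
os.environ["ROS_IP"] = "192.168.1.20"

# rospy reads both variables when the node starts; a wrong ROS_MASTER_URI
# prevents registration, and a wrong ROS_IP breaks peer-to-peer connections.
rospy.init_node("login_check")
rospy.loginfo("Registered with master at %s", os.environ["ROS_MASTER_URI"])
```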

After successful login, the main interface of the intelligent robot technology simulation algorithm learning and practice module is displayed. This interface offers general robot motion controllers and velocity feedback modules, visual plugin switches, robot voice control module invocation switches, robot node log output windows, task submission, and experiment feedback entries, among other sections assisting teachers and students in completing virtual simulation experiment teaching. During teaching, to address issues such as cumbersome plugin invocation processes and inconvenient algorithm parameter tuning in traditional ROS robot experiments, the main interface integrates various ROS robot plugins, including 3D visual navigation views and camera image acquisition windows, facilitating students’ preview of robot motion status information and sensor data. Moreover, by applying this interface, the robot is no longer limited to keyboard and mouse motion control but can also use intelligent voice motion control, enhancing the robot’s comprehensive capabilities and intelligence level. Simultaneously, the interface has encapsulated the algorithms involved in experiments into an algorithm library for students to directly invoke during learning, and teachers can guide students’ experiment learning as needed.
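To make the motion controller concrete, here is a hedged sketch of the kind of velocity-command publisher such a control panel typically wraps; the /cmd_vel topic name and the speed values are common ROS conventions assumed for illustration, not the platform’s documented interface.

```python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("qt_motion_controller")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)  # assumed topic name

def drive(linear_x, angular_z):
    """Publish one velocity command, as a controller button press might."""
    msg = Twist()
    msg.linear.x = linear_x    # forward speed in m/s
    msg.angular.z = angular_z  # turn rate in rad/s
    pub.publish(msg)

rate = rospy.Rate(10)          # a steady 10 Hz command stream
while not rospy.is_shutdown():
    drive(0.2, 0.0)            # creep forward at 0.2 m/s
    rate.sleep()
```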

Application of Human-Machine Interaction Interface in Experiment Teaching

In the process of robot virtual simulation experiments, the QT-based human-machine interaction experiment interface runs as the main thread. To help students better complete simulation applications such as voice recognition, image recognition, and path planning for virtual robots, this experiment adopts a hierarchical teaching model: first, theoretical explanation and assessment to ensure students grasp basic concepts; second, teaching demonstrations and guidance to enable students to understand and master skills more deeply through practical operations; finally, students apply the learned knowledge through autonomous practice to consolidate and expand their abilities. The complete hierarchical teaching process is depicted in a flowchart. To successfully complete the experiment, students are required to learn simulation modeling theory, robot kinematics fundamentals, path planning algorithms, and other experiment principles as specified. After passing basic theoretical tests, students can proceed to the next step of practical operation to improve proficiency and reduce error rates. Teachers upload experiment tasks and demonstration videos in advance on the teacher side, and students, with the help of demonstrations, more intuitively understand operations and deepen their understanding of related theories. Finally, with the existing experiment theoretical foundation and operation standards, students sequentially complete primary, intermediate practical operations, and comprehensive campus patrol robot virtual simulation experiments, submitting experiment results and reports.

This hierarchical teaching model helps improve students’ learning efficiency and depth, cultivating their ability to master robot algorithms and their applications, laying a good foundation for their future development. The integration of robot technology in education through such interfaces demonstrates the potential for scalable and interactive learning environments.

Robot Virtual Activity Space Construction

Gazebo is a free, open-source 3D dynamics simulator that can simulate the operation of intelligent devices such as robots, manipulators, and drones. Gazebo has supported ROS since 2011 and is deeply integrated with it. Using ROS and Gazebo, complete robot simulations can be conducted, including sensor data acquisition, control algorithm testing, and performance analysis, significantly reducing the time and cost of robot technology development. We built the experiment environment on Gazebo. In the Gazebo simulation environment, users can freely manipulate virtual robots and test different algorithms, control strategies, and sensor configurations. Gazebo also supports custom simulation environments, allowing users to create their own scenes, and its visualization capabilities let users observe the robot’s operating status in real time and perform visual analysis.

To enhance students’ sense of belonging to the campus and their interest in experiments, we built a robot simulation environment highly consistent with real campus scenes. Constructing highly realistic simulation environments is an essential exercise for engineering students learning intelligent robot technology and the foundation for the subsequent robot path planning virtual simulation experiments. The virtual campus based on Gazebo is shown in a conceptual diagram. We used parts of Beijing Information Science and Technology University as the background to build a simulation environment with campus characteristics on the Gazebo physics simulation platform. In this environment, necessary reference coefficients and multiple feature points are set, while friction coefficients, collision attributes, and other noise sources are added to obstacles to approximate real environments. The main scenes used in experiment teaching include the school library, student cafeteria, student dormitories, and parking lot. The multi-scene campus environment enriches the experiments and provides control groups for comparison.

Virtual Robot Design

We designed a simulation robot with campus characteristics, named BISTUBOT, depicted in a diagram. The robot model was designed in Blender and exported to the Gazebo environment as the main body for student simulation experiments. The campus robot consists of a control system, an execution mechanism, and a sensing system. The control system executes tasks, processes information, and outputs control signals. The execution mechanism converts these control commands into physical motion. The sensing system comprises multiple sensors inside and outside the robot, providing real-time perception of the surroundings and status information, and thus a reliable, up-to-date data source for robot control.

Robot Kinematics Model

BISTUBOT adopts a forward kinematics model, calculating the velocity of the robot’s geometric center point from the speeds of the left and right differential drive wheels. As shown in a diagram, the linear velocities of the left and right drive wheels point along the x-axis, perpendicular to the rotation radius. The instantaneous center of rotation (ICR) must therefore lie on the line connecting points L and R, with its exact position determined by the speeds of the two drive wheels. From the equation $v = \omega \cdot r$, when $\omega$ is fixed, $v$ is proportional to $r$; the common angular velocity therefore relates the speeds of points L, R, and CENTER as follows:

$$ \omega = \frac{v_c}{r_c} = \frac{v_r}{r_c + d_{wb}/2} = \frac{v_l}{r_c - d_{wb}/2} \tag{1} $$

where $d_{wb}$ denotes the distance between the left and right drive wheels, $r_c$ the turning radius of the center point CENTER, and $[v_c, \omega]^T$ the velocity of the center point CENTER. Rearranging formula (1), the angular velocity $\omega$ can be expressed as:

$$ \omega = \frac{v_r - v_l}{d_{wb}} \tag{2} $$

The sign of the angular velocity depends on the sign of $v_r - v_l$. Additionally, rearranging formula (1), the linear velocity $v_c$ of point CENTER follows from the left and right drive wheel speeds $[v_l, v_r]$:

$$ v_c = \frac{v_l + v_r}{2} \tag{3} $$

Further, the turning radius $r_c$ of point CENTER can be expressed by combining formulas (1) to (3):

$$ r_c = \frac{v_c}{\omega} = \frac{(v_l + v_r) \cdot d_{wb}}{2(v_r - v_l)} \tag{4} $$

The forward kinematics model calculates the velocity of the geometric center point CENTER based on the speeds of the left and right drive wheels. It can be expressed by combining formulas (2) and (3):

$$ \begin{bmatrix} v_c \\ \omega \end{bmatrix} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{d_{wb}} & \frac{1}{d_{wb}} \end{bmatrix} \begin{bmatrix} v_l \\ v_r \end{bmatrix} \tag{5} $$

This formula shows that when $v_c = 0$ and $\omega \neq 0$ (i.e., $v_l = -v_r \neq 0$), the robot rotates in place. By combining the designed virtual sensors with the virtual structure framework, we constructed a complete virtual simulation robot. This robot not only meets the framework structure and data source requirements of teaching robots but also closely mimics the working mode of real physical robots, achieving the purpose of robot virtualization. It provides a safe, controllable environment for teaching experiments, allowing students to conduct experiments and operations without physical robots. By simulating the working mode of real robots, students can more deeply understand robot working principles and control methods; the platform thus not only meets teaching needs but also gives students an effective venue to practice and explore robot technology.
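A brief worked sketch of formulas (2) to (5) in Python follows; the wheel separation and wheel speeds are illustrative values, not BISTUBOT’s actual parameters.

```python
# Forward kinematics of the differential-drive model (formula 5).
d_wb = 0.40  # assumed wheel separation in meters

def forward_kinematics(v_l, v_r):
    """Map wheel speeds [v_l, v_r] to the center-point velocity [v_c, omega]."""
    v_c = (v_l + v_r) / 2.0      # formula (3)
    omega = (v_r - v_l) / d_wb   # formula (2)
    return v_c, omega

# Example: right wheel faster -> counter-clockwise turn.
v_c, omega = forward_kinematics(0.2, 0.4)
r_c = v_c / omega                # turning radius, formula (4)
print(v_c, omega, r_c)           # 0.3 m/s, 0.5 rad/s, 0.6 m
```

Setting $v_l = -v_r$ in this sketch yields $v_c = 0$ with $\omega \neq 0$, reproducing the in-place rotation case noted above.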

Voice and Vision Module Design

Integrating a voice module into the robot significantly enhances the practicality and interactivity of experiment teaching, enabling students to interact with the robot directly through natural language and thereby deepening their understanding of speech recognition and synthesis technologies in intelligent systems. The experiment teaching interface imports the iFlytek voice module, which provides automatic speech recognition (ASR) and text-to-speech (TTS) functions and is integrated with ROS through APIs provided by iFlytek. When students use this module, they issue instructions by voice; the module converts the speech into text and parses it into executable control commands for the robot. The system can then feed execution results back to students through text-to-speech, intuitively displaying the entire process from input to machine execution. This approach enhances engagement in robot technology learning.

Integrating a machine vision module into the robot significantly enhances its environmental perception, enabling advanced functions such as target recognition and visual SLAM through visual information processing. These capabilities are crucial for robot applications in simulation environments and provide students with richer, more realistic learning experiences. The machine vision module of the BISTUBOT simulation robot is illustrated in a diagram. The visual applications mainly comprise ROS image processing packages, depth-image processing tools, and target recognition libraries built on OpenCV or machine learning frameworks. By receiving and processing image data from depth cameras, students can implement automatic target recognition and tracking and directly observe the robot’s responses and path selections through real-time views. These operations let students practice and understand how visual data is transformed into concrete robot actions, underscoring the importance of vision in autonomous systems.
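As a minimal sketch of this pipeline, the node below subscribes to a camera topic, converts the ROS image to an OpenCV array via cv_bridge, and runs a simple edge detector as a stand-in for target recognition; the /camera/rgb/image_raw topic name is an assumption based on common camera drivers.

```python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS image message into an OpenCV BGR array.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # placeholder for a real detector
    rospy.loginfo_throttle(1.0, "edge pixels: %d", int((edges > 0).sum()))

rospy.init_node("vision_demo")
rospy.Subscriber("/camera/rgb/image_raw", Image, on_image)  # assumed topic
rospy.spin()
```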

Cruise Robot Virtual Simulation Experiment and Teaching Application

Path planning is an important component of robot navigation and autonomous movement, and the foundation for robot cruising and path tracking. The move_base framework, the most widely used part of the ROS Navigation stack, loads a map to obtain the positions of the start and target points and plans navigation routes by combining robot localization with sensor data. To further mobilize student participation and enthusiasm and improve the quality of teaching interaction, the experiment platform integrates a voice module through which students can control the robot’s motion. While traveling to the target location along the global plan, new dynamic obstacles may appear, so a local path planner must incrementally perceive the environment and re-plan. The navigation framework of the BISTUBOT simulation robot is depicted in a conceptual diagram.
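For orientation, here is a hedged sketch of how a client node hands a goal to move_base through its standard actionlib interface; the goal coordinates are illustrative, and the "map" frame is the usual convention rather than a platform-specific detail.

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("send_nav_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()                   # block until move_base is up

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # 2 m ahead in map coordinates
goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis

client.send_goal(goal)                     # global + local planners take over
client.wait_for_result()
rospy.loginfo("Navigation finished with state %d", client.get_state())
```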

Experimental Principle Analysis

1. Dijkstra Path Planning Algorithm: The global path planner in the move_base navigation framework uses the Dijkstra algorithm. Dijkstra is a classic breadth-first state-space search: starting from the start point, it searches the free space layer by layer until it reaches the target location, yielding the optimal path. Assume a weighted directed (or undirected) graph $G = (V, E, W)$, where each edge $e_{i,j} = \{v_i, v_j\}$ carries a non-negative real weight $w_{i,j}$ representing the distance from node $v_i$ to node $v_j$. The algorithm’s task is to find the shortest path from the source to all nodes in $V$. Let $A$ be the adjacency matrix of the road network graph, $S$ the set of nodes whose shortest paths have been determined, $D$ the shortest-distance vector with $D[i]$ recording the shortest known distance from the source to $v_i$, $P$ the path node vector, $s_t$ the source node, and $e$ the target node.

$$ A = [a_{ij}], \quad \text{where } a_{ij} = \begin{cases} w_{ij} & \text{if } v_i \text{ and } v_j \text{ are adjacent} \\ \infty & \text{otherwise} \end{cases}, \quad \text{and } a_{ii} = 0 $$

Search steps: First, initialize $S$ and $D$: let $S = \{s_t\}$ and $D[i] = a_{s_t i}$ for $i = 1, 2, \dots, n$. Then select the node $v_j \notin S$ with $D[j] = \min_{v_i \notin S} D[i]$ and set $S = S \cup \{v_j\}$. Next, using $v_j$ as the pivot, relax the entries of the shortest-distance vector. If:

$$ D[j] + a_{jk} < D[k] $$

Then change to:

$$ D[k] = D[j] + a_{jk} $$

Then update the path node vector $P$ for this path task. Finally, repeat the above steps until all nodes have been processed. This algorithm is fundamental to efficient robot navigation.
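The following compact Python sketch implements the procedure above with a priority queue, consistent with the data structure listed in the comparison table later in this section; the adjacency-list representation is illustrative.

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source in a weighted graph.
    adj maps each node to a list of (neighbor, weight) pairs."""
    dist = {source: 0.0}    # the D vector: best known distance to each node
    prev = {}               # the P vector: predecessor on the shortest path
    visited = set()         # the S set: nodes whose distance is final
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)   # node v_j with D[j] = min over V \ S
        if u in visited:
            continue
        visited.add(u)
        for v, w in adj.get(u, []):
            nd = d + w               # D[j] + a_jk
            if nd < dist.get(v, float("inf")):  # the relaxation test above
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Example on a tiny road graph:
adj = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
print(dijkstra(adj, "a"))  # c is reached via b at total cost 3.0
```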

2. A* Path Planning Algorithm: The A* algorithm is a commonly used path planning and graph traversal algorithm, and can be seen as an extension of Dijkstra. Because a heuristic function guides the search, A* typically performs better, expanding in eight search directions at each point. The algorithm evaluates nodes with a cost function $f(N)$, where $N$ is the current node with coordinates $(x_N, y_N)$ and $E$ is the endpoint with coordinates $(x_E, y_E)$. Starting from the start point and expanding toward the endpoint, the cost of each node is computed as:

$$ f(N) = g(N) + h(N) $$

where $g(N)$ is the cost function, the actual distance from the start point to the current node, and $h(N)$ is the heuristic function, the estimated distance from the current node to the endpoint. A* maintains two sets: a closed set holding nodes already expanded and an open set holding candidate nodes awaiting expansion. The start point is first placed in the open set; the node with the smallest $f(N)$ in the open set is then moved to the closed set and its neighboring nodes are added to the open set. These steps repeat until the endpoint is moved into the closed set, at which point the algorithm terminates. Path planning with the A* algorithm in a random obstacle environment is shown in a diagram. The heuristic makes the search markedly more efficient in practice.
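Below is a minimal grid-based A* sketch in Python using a Euclidean heuristic and the eight search directions mentioned above; the grid encoding (0 = free, 1 = obstacle) is an assumption for illustration.

```python
import heapq, math

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle), 8-connected."""
    def h(n):  # heuristic h(N): straight-line distance to the endpoint
        return math.hypot(goal[0] - n[0], goal[1] - n[1])

    open_heap = [(h(start), 0.0, start, None)]  # (f, g, node, parent)
    closed = {}                                 # node -> parent, once expanded
    while open_heap:
        f, g, node, parent = heapq.heappop(open_heap)  # smallest f(N)
        if node in closed:
            continue
        closed[node] = parent                   # move into the closed set
        if node == goal:                        # endpoint expanded: done
            path = [node]
            while closed[path[-1]] is not None:
                path.append(closed[path[-1]])
            return path[::-1]
        x, y = node
        for dx in (-1, 0, 1):                   # eight search directions
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                        and grid[nx][ny] == 0 and (nx, ny) not in closed):
                    step = math.hypot(dx, dy)   # increment to g(N)
                    heapq.heappush(open_heap, (g + step + h((nx, ny)),
                                               g + step, (nx, ny), node))
    return None  # no path exists
```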

3. RRT Path Planning Algorithm: RRT (Rapidly-exploring Random Tree) is a sampling-based path planning algorithm that rapidly searches non-convex, high-dimensional spaces by randomly constructing a space-filling tree. It grows a tree rooted at the start point until the tree reaches the target point, ensuring path feasibility through boundary and collision checks. The algorithm proceeds as follows (a demonstration is shown in a diagram): first, initialize the random tree with the start position as the root node, generating an empty tree. Then execute the sampling function to obtain a random point $Q_{\text{rand}}$ in the search space, and traverse the tree to find the node $Q_{\text{nearest}}$ closest to $Q_{\text{rand}}$. Extend from $Q_{\text{nearest}}$ toward $Q_{\text{rand}}$ by a specified step length to obtain the extension point $Q_{\text{new}}$, and perform collision detection on $Q_{\text{new}}$ to ensure it does not hit obstacles. Finally, if $Q_{\text{new}}$ passes the collision check, make $Q_{\text{nearest}}$ the parent of $Q_{\text{new}}$, connect the two points, and determine whether $Q_{\text{new}}$ has reached the target area; if not, continue sampling. RRT is particularly useful for handling complex and unknown environments.
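The sketch below follows those steps directly in Python; the step length, goal radius, sampling bounds, and the is_free collision-check callback are illustrative assumptions.

```python
import random, math

def rrt(start, goal, is_free, step=0.5, goal_radius=0.5, max_iter=5000,
        bounds=((0.0, 10.0), (0.0, 10.0))):
    """Basic RRT in a 2D space; is_free(p) is the collision check."""
    nodes = [start]        # tree vertices, root = start point
    parent = {0: None}     # vertex index -> parent index
    for _ in range(max_iter):
        # Sample a random point Q_rand in the search space.
        q_rand = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Find Q_nearest, the tree node closest to Q_rand.
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0:
            continue
        # Extend from Q_nearest toward Q_rand by the step length -> Q_new.
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if not is_free(q_new):          # collision detection on Q_new
            continue
        parent[len(nodes)] = i_near     # Q_nearest becomes Q_new's parent
        nodes.append(q_new)
        if math.dist(q_new, goal) <= goal_radius:  # reached the target area
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # sampling budget exhausted

# Example in an obstacle-free square:
print(rrt((0.0, 0.0), (9.0, 9.0), is_free=lambda p: True))
```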

In experiment teaching, the algorithms involved, including SLAM, have been encapsulated into corresponding function packages to form an algorithm library added to the experiment teaching interface, facilitating students’ autonomous completion of experiments. The table below compares the three path planning algorithms. Through the explanation of these three algorithms, students gain a preliminary understanding of ROS robot navigation while independently considering which algorithm to apply in their robot designs. Learning and comparing the algorithm theories strengthens students’ ability to integrate professional knowledge, broaden interdisciplinary knowledge, apply multi-disciplinary knowledge comprehensively, and innovate independently in programming.

Comparison of Path Planning Algorithms in Robot Technology

| Algorithm | Applicable Scenario | Search Strategy | Data Structure |
| --- | --- | --- | --- |
| Dijkstra | Static environment | Breadth-first search | Priority queue |
| A* | Static environment | Heuristic search | Priority queue |
| RRT | Dynamic environment | Random sampling search | Tree |

4. BISTUBOT Offline Voice Service: In the ROS framework, achieving intelligent voice control of the robot requires integrating a speech recognition service into the execution environment. We run iFlytek’s offline voice service as an independent ROS node that monitors environmental audio input and converts speech into text. The converted text is published to a ROS topic for use by other robot nodes. The robot’s text-subscription node subscribes to the topic carrying control-instruction text, receives and parses the text, and converts it into control commands the robot can understand and execute. These commands are then passed through ROS’s communication mechanism to the robot control node, which directly drives the motion hardware (such as drive motors), so the robot moves or performs other tasks according to the user’s voice instructions. This setup allows students to interact with the robot through simple voice commands, making control both intuitive and convenient. The BISTUBOT offline voice service workflow is shown in a flowchart, highlighting how user-friendly robot interfaces have advanced.
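A hedged sketch of the text-subscription node described above follows; the /voice_text and /cmd_vel topic names, the command phrases, and the speed values are illustrative assumptions, since the actual iFlytek integration is not shown here.

```python
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

# Map recognized phrases to (linear.x, angular.z) commands; hypothetical values.
COMMANDS = {
    "forward":    ( 0.2,  0.0),
    "backward":   (-0.2,  0.0),
    "turn left":  ( 0.0,  0.5),
    "turn right": ( 0.0, -0.5),
    "stop":       ( 0.0,  0.0),
}

def on_text(msg):
    """Parse ASR text published by the voice node into a velocity command."""
    action = COMMANDS.get(msg.data.strip().lower())
    if action is None:
        rospy.logwarn("unrecognized instruction: %s", msg.data)
        return
    cmd = Twist()
    cmd.linear.x, cmd.angular.z = action
    cmd_pub.publish(cmd)

rospy.init_node("voice_command_parser")
cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)   # assumed topic
rospy.Subscriber("/voice_text", String, on_text)              # assumed topic
rospy.spin()
```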

Experiment Process Overview

In the complex environment of the simulated campus, we use the RRT algorithm as an example to guide students through the design of the campus cruise robot virtual simulation experiment. With the help of the experiment guidance and the algorithm library provided by the teaching interface, students can complete their experiment designs more efficiently. The experiment requires students to complete six steps: ① open the QT human-machine interaction interface; ② start the simulation environment; ③ run the algorithm function packages required for the experiment; ④ enter the practice module main interface, complete the cruise-area mapping and path planning operations separately, and view the experiment process; ⑤ process and analyze the experiment data; ⑥ submit experiment results and reports. After students submit their reports, teachers grade the experiment results on three aspects: path effectiveness, safety, and time consumption. If the cruise of the specified area is completed, this module is finished; otherwise, the student must repeat it. This structured approach ensures a comprehensive understanding of the underlying principles.

Experiment Content

We design and implement campus cruise robot experiments to verify the important role of the innovative teaching platform in simulation teaching. The preceding analysis of experiment principles and overview of the experiment process help students carry out the design of the campus cruise robot virtual simulation experiment. The robot simulation experiments include SLAM environment mapping, navigation-point path planning, target-point setting, and intelligent voice control, together achieving the campus cruise robot’s path planning and navigation tasks.

The first phase of the experiment focuses on using the robot’s onboard sensors to construct environment maps in real time with Cartographer in the Gazebo simulation environment. Students learn how to configure and optimize Cartographer’s parameters to improve the accuracy and efficiency of map construction. This phase requires students not only to understand SLAM technology and the principles of the Cartographer algorithm but also to solve concrete problems encountered in real-time environment recognition and map building.

After successfully constructing the environment map, students can combine intelligent voice control instructions to drive the robot’s path planning and navigation, no longer needing to run motion-control and navigation-control nodes separately. Because the experiment interface integrates the offline voice module, the experiment’s capacity for parallel tasks increases. Students send voice instructions such as “forward,” “backward,” “turn left,” and “turn right” through the QT-based teaching interface to control the virtual robot’s movement in the Gazebo environment. Building on the path planning experiments, students then learn to compute the optimal path with the most suitable algorithm given the robot’s current and target positions, adjust the robot’s movement strategy through voice instructions, and independently edit and set navigation target points to execute by voice, adapting to environmental changes and potential obstacles.

Experiment Results

The simulation experiments not only demonstrate to students the complete process of autonomous robot navigation but also improve their grasp of the comprehensive application of robot technology. The experiment design encourages active participation, deepening students’ mastery of intelligent robot technology through hands-on operation and promoting their practical abilities and innovative thinking. Using the robot controller and the Cartographer function package in the algorithm library, students control BISTUBOT to map the cruise route within the campus, covering the dormitory area, library, and school name stone. Based on the known map, the robot uses the RRT algorithm to plan a path connecting the start and end points for round-trip cruising. On the open-source ROS platform with the QT teaching interface provided by this research, students sequentially complete environment mapping, route setting and remote control, navigation-point selection and parameter setting, and RRT path planning with voice control. After path correction and test verification during cruising, the robot successfully completes round-trip cruising between campus target points and demonstrates obstacle avoidance capability, verifying the feasibility of designing campus cruise robot virtual simulation experiments with the teaching interface in practice. These results underscore the practical benefits of integrating robot technology into educational settings.

Teaching Effectiveness Evaluation

The “Robot Operating System” course at the School of Automation covers basic ROS concepts, system architecture, coordinate transformation, common components and tools, modeling and simulation, machine vision, SLAM and autonomous navigation, manipulator control, and robot platforms. The ROS-based intelligent robot technology simulation platform provides a simulation scene highly consistent with the real one, unifying the objects students operate on in online and offline experiments. The simulation platform and the physical platform achieve the same training effect, with online simulation and offline practice mutually reinforcing each other and enabling a smooth, safe transition between teaching activities. Virtual simulation experiment projects developed from real physical scenes also increase students’ interest in training and their sense of belonging to the campus, improve the effectiveness of virtual experiments, and thereby raise the completion rate of intelligent robot operation and programming experiments. Through preliminary virtual simulation experiments, students can repeatedly practice key technical points in the simulation environment to achieve the best results, which greatly reduces equipment damage caused by unfamiliar operation, lowers training costs, and decreases equipment maintenance frequency.

During the teaching process, we applied the platform to the experiment teaching of the “Intelligent Science and Technology” major for the 2021 and 2022 cohorts. Students sequentially completed basic theoretical learning, experiment demonstration learning, and simulation experiment practice on the platform, finally finishing the cruise robot virtual simulation experiment design and submitting experiment reports. The course team comprehensively evaluated the platform’s application effect based on theoretical knowledge exams and experiment reports. Analyzing the post-course assessment results of the “Robot Operating System” course for the two cohorts, we found that the 2022 cohort, which used the platform, significantly outperformed the 2021 cohort, which did not, in theoretical scores, experiment scores, and project design capability. In the experiment operation assessment in particular, the 2022 cohort demonstrated higher operating efficiency and better handling of complex problems, with an excellent (grade A+B) ratio of 42.87%, an increase of about 15% over the 2021 cohort’s excellent ratio, fully indicating that the platform effectively enhances students’ practical skills. The improvement in comprehensive experiment design scores likewise reflects a marked enhancement of students’ innovative design abilities. This evaluation confirms the positive impact of integrating robot technology into education.

Conclusion

The open-source nature of ROS allows students to customize experiment content based on their interests and needs. Combined with the teaching interface, virtual simulation experiment design significantly reduces the difficulty of building virtual experiment environments by subscribing to and publishing through encapsulated ROS nodes, so students can quickly get hands-on and rapidly understand the experiment architecture. This autonomous learning model stimulates students’ innovative potential and improves the quality of experiment teaching. The simulation experiments and teaching applications show that the platform is not only fully functional but also highly scalable and portable, combining the flexibility of a scripting language with the convenience of a graphical interface, and closely equivalent in substance to real experiment scenes, laying a solid foundation for students’ academic research and engineering practice in robotics. The continued evolution of robot technology in education promises to foster a new generation of innovators and practitioners in this dynamic field.
