Advancements in Robot Technology through Grey Wolf Optimized Visual Servo Cooperative Control

In recent years, robot technology has revolutionized automation by enabling precise and adaptive operations in dynamic environments. However, achieving both rapid and accurate target positioning at the robot end-effector remains a significant challenge, particularly in uncalibrated visual servoing systems. Traditional approaches often struggle with balancing dynamic performance and steady-state accuracy, leading to inefficiencies in real-world applications. To address this, we propose a novel cooperative control strategy that integrates grey wolf optimizer (GWO)-enhanced visual servoing with proportional-integral (PI)-regulated position servoing. This method leverages the strengths of both control paradigms to optimize robot technology for high-speed and high-precision tasks. By employing GWO to refine Kalman filtering parameters, we improve the online estimation of the image Jacobian matrix, thereby enhancing visual servo accuracy. Simultaneously, the PI-based position servo ensures fast dynamic responses. A Gaussian function-based cooperative mechanism seamlessly transitions between these controllers, ensuring robust performance across varying operational conditions. This paper details the theoretical foundation, controller design, stability analysis, and experimental validation of our approach, demonstrating its superiority in advancing robot technology.

Robot technology relies heavily on dynamic modeling to ensure precise control. The dynamics of an n-degree-of-freedom robot can be described by the following equation, which accounts for inertial, Coriolis, gravitational, and frictional forces:

$$ D(q) \ddot{q} + C(q, \dot{q}) \dot{q} + G(q) = \tau - \tau_f $$

where \( q \), \( \dot{q} \), and \( \ddot{q} \in \mathbb{R}^n \) represent the joint angle position, velocity, and acceleration vectors, respectively. \( D(q) \in \mathbb{R}^{n \times n} \) is the symmetric positive definite inertia matrix, \( C(q, \dot{q}) \in \mathbb{R}^{n \times n} \) encapsulates Coriolis and centrifugal effects, \( G(q) \in \mathbb{R}^n \) is the gravitational torque vector, \( \tau \in \mathbb{R}^n \) denotes the input torque vector, and \( \tau_f \) represents friction-induced torques, modeled as \( \tau_f = R_f \dot{q} \) with \( R_f \in \mathbb{R}^{n \times n} \) being a positive definite diagonal matrix of friction coefficients. Integrating this with permanent magnet DC motor dynamics, which drive the robot joints, the overall system model becomes:

$$ \begin{aligned}
L_a \frac{di_a}{dt} &= -R_a i_a - C_T \Phi \omega + u_a \\
\tilde{D}(q) \ddot{q} + \tilde{C}(q, \dot{q}) \dot{q} + \tilde{G}(q) &= \tau_m - \tau_f
\end{aligned} $$

Here, \( \eta \) is the gear ratio, \( \omega = \eta \dot{q} \) is the motor angular velocity, \( \tilde{D}(q) = \eta D(q) + \eta^{-1} J_m \), \( \tilde{C}(q, \dot{q}) = \eta C(q, \dot{q}) + \eta^{-1} R_m \), and \( \tilde{G}(q) = \eta G(q) \), with \( J_m \) and \( R_m \) representing the motor inertia and friction matrices, respectively. The visual servoing aspect of robot technology involves mapping image features to robot motion. The image Jacobian matrix \( J_s \) relates changes in image features \( s \in \mathbb{R}^m \) to the end-effector velocity in Cartesian space \( \dot{p} \in \mathbb{R}^n \):

$$ \dot{s} = J_s \dot{p} $$

Combining this with the robot Jacobian \( J_r \), which links joint velocities to Cartesian velocities (\( \dot{p} = J_r \dot{q} \)), the overall relationship is:

$$ \dot{s} = J_s J_r \dot{q} = J \dot{q} $$

where \( J = J_s J_r \in \mathbb{R}^{m \times n} \) is the combined Jacobian matrix. This formulation is crucial for uncalibrated visual servoing in robot technology, as it avoids the need for explicit camera parameter calibration.
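As a concrete illustration, the short Python sketch below composes the two Jacobians and propagates a joint velocity to image-feature velocities. The matrices here are random placeholders; in a real system \( J_r \) comes from the kinematic model and \( J_s \) must be estimated online, as described next.

```python
import numpy as np

# Illustrative composition of the Jacobians: s_dot = J_s @ J_r @ q_dot.
rng = np.random.default_rng(seed=0)

m, n = 8, 6                      # 4 point features -> 8 image coordinates; 6 joints
J_s = rng.normal(size=(m, n))    # image Jacobian (feature vs. Cartesian velocity), placeholder
J_r = rng.normal(size=(n, n))    # robot Jacobian (Cartesian vs. joint velocity), placeholder
q_dot = rng.normal(size=n)       # joint velocity vector

J = J_s @ J_r                    # combined Jacobian, shape (m, n)
s_dot = J @ q_dot                # predicted image-feature velocity
print(s_dot.shape)               # (8,)
```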

The core of our proposed method lies in the GWO-optimized visual servo control, which significantly enhances the accuracy of image Jacobian estimation. Traditional Kalman filtering for Jacobian estimation often suffers from parameter sensitivity, leading to suboptimal performance in dynamic environments. We address this by employing the grey wolf optimizer, a metaheuristic algorithm inspired by wolf pack hierarchies, to tune the process noise covariance matrix \( Q \) and measurement noise covariance matrix \( R \) in the Kalman filter. The state-space model for the “eye-in-hand” robot system is:

$$ \begin{aligned}
x(k+1) &= x(k) + w(k) \\
y(k) &= H(k) x(k) + v(k)
\end{aligned} $$

where \( x(k) \) is the state vector formed by stacking the entries of the estimated image Jacobian, \( y(k) \) is the measured image feature, and \( w(k) \) and \( v(k) \) are the process and measurement noises with covariances \( Q \) and \( R \), respectively. The GWO algorithm iteratively updates wolf positions (candidate solutions) to minimize the fitness function, defined as the M-norm of the image feature error:

$$ f = \| s - s_d \|_M $$

Here, \( s \) and \( s_d \) are the current and desired image features. Each candidate solution is updated toward the three best wolves in the pack, the alpha, beta, and delta wolves, according to:

$$ \begin{aligned}
X_1 &= X_\alpha - A_1 | C_1 X_\alpha - X | \\
X_2 &= X_\beta - A_2 | C_2 X_\beta - X | \\
X_3 &= X_\delta - A_3 | C_3 X_\delta - X | \\
X(t+1) &= \frac{X_1 + X_2 + X_3}{3}
\end{aligned} $$

with parameters \( A = 2a \cdot r_1 - a \) and \( C = 2 \cdot r_2 \), where \( a \) decreases linearly from 2 to 0 over the iterations and \( r_1, r_2 \) are random vectors drawn uniformly from \([0, 1]\). This optimization ensures that \( Q \) and \( R \) are dynamically adjusted, improving the Kalman filter’s convergence and accuracy. The visual servo controller then computes the joint velocity command as:

$$ \dot{q}_s = -J_r^{-1} \hat{J}_s^+ e_s $$

where \( \hat{J}_s^+ \) is the pseudo-inverse of the estimated Jacobian, and \( e_s = s - s_d \) is the image feature error. The control output is integrated to obtain the joint angle command \( q_s = \int \dot{q}_s \, dt \). Stability follows from the Lyapunov function \( V_s = \frac{1}{2} e_s^T e_s \), whose derivative \( \dot{V}_s = -e_s^T J_s \hat{J}_s^+ e_s \leq 0 \) ensures asymptotic convergence as long as the estimate keeps \( J_s \hat{J}_s^+ \) positive definite.
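The following Python sketch outlines the three building blocks just described: a Kalman update for the stacked Jacobian state under the random-walk model above, one GWO iteration, and the visual servo velocity law. The Kronecker-product construction of the measurement matrix \( H(k) \) and the `fitness` callable (which would score a candidate \( (Q, R) \) pair by the resulting M-norm feature error over a short servo episode) are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def kalman_update(x, P, dp, ds, Q, R):
    """One Kalman step for the stacked Jacobian state x = vec(J_s).

    Random-walk model: x(k+1) = x(k) + w(k); measurement ds = H x + v, where
    H is block-diagonal in dp^T so that H @ x == reshape(x, (m, n)) @ dp.
    """
    m, n = ds.size, dp.size
    H = np.kron(np.eye(m), dp.reshape(1, n))     # measurement matrix, shape (m, m*n)
    P_pred = P + Q                               # predict (state prediction is x itself)
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (ds - H @ x)                 # correct with the feature innovation
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

def gwo_step(wolves, fitness, a, rng):
    """One GWO iteration: every wolf moves toward the alpha, beta, delta leaders."""
    scores = np.array([fitness(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(scores)[:3]]
    new_wolves = np.empty_like(wolves)
    for i, X in enumerate(wolves):
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            moves.append(leader - A * np.abs(C * leader - X))   # X_i = X_l - A|C X_l - X|
        new_wolves[i] = sum(moves) / 3.0                        # X(t+1) = (X1 + X2 + X3) / 3
    return new_wolves

def visual_servo_rate(J_s_hat, J_r, e_s):
    """Visual servo joint-velocity command: q_dot_s = -J_r^{-1} J_s_hat^+ e_s."""
    return -np.linalg.inv(J_r) @ np.linalg.pinv(J_s_hat) @ e_s
```

A full run would apply `gwo_step` repeatedly to the population with \( a = 2(1 - t/T) \) decreasing linearly, matching the configuration used in the simulations below.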

In parallel, the PI-regulated position servo controller enhances the dynamic performance of robot technology. It operates based on the end-effector pose error \( e_p = X – X_d \), where \( X \) and \( X_d \) are the current and desired poses in Cartesian space. The control law is designed as:

$$ \dot{p} = -J_p^{-1} (K_P e_p + K_I \int e_p \, dt) $$

Here, \( J_p \) is the transformation matrix defined by the Euler angles, and \( K_P > 0 \), \( K_I > 0 \) are tunable gain matrices. The joint angle command \( q_p \) is then obtained by applying the robot’s inverse kinematics to the commanded pose. The Lyapunov function \( V_p = \frac{1}{2} e_p^T e_p \) yields \( \dot{V}_p = -e_p^T (K_P e_p + K_I \int e_p \, dt) \leq 0 \), confirming stability. This controller drives the end-effector rapidly into the target workspace, addressing the slow convergence of pure visual servoing.
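A minimal discrete-time sketch of this PI law follows, assuming a square, invertible \( J_p \), forward-Euler integration of the error, and placeholder gain matrices.

```python
import numpy as np

class PIPoseServo:
    """PI pose servo: p_dot = -J_p^{-1} (K_P e_p + K_I * integral of e_p)."""

    def __init__(self, K_P, K_I, dt):
        self.K_P, self.K_I, self.dt = K_P, K_I, dt
        self.integral = None                            # running integral of the pose error

    def rate_command(self, X, X_d, J_p):
        e_p = X - X_d                                   # pose error in Cartesian space
        if self.integral is None:
            self.integral = np.zeros_like(e_p)
        self.integral = self.integral + e_p * self.dt   # forward-Euler integration
        return -np.linalg.inv(J_p) @ (self.K_P @ e_p + self.K_I @ self.integral)

# Usage sketch with placeholder gains for a 6-D pose:
servo = PIPoseServo(K_P=2.0 * np.eye(6), K_I=0.1 * np.eye(6), dt=0.01)
```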

The cooperative control strategy harmonizes the visual and position servos using a Gaussian function based on the image feature error. The cooperative function \( f_c \) is defined as:

$$ f_c = e^{-\left( \frac{\| e_s \|_M}{\sigma} \right)^2} $$

where \( \sigma \) is a scaling parameter. The overall control command is then:

$$ q_d = f_c q_s + (1 - f_c) q_p $$

This formulation ensures that when the image error is large (\( f_c \approx 0 \)), position servo dominates for fast dynamics, and when the error is small (\( f_c \approx 1 \)), visual servo takes over for precise positioning. The Lyapunov function for the cooperative system, \( V_c = V_s + V_p \), has a derivative \( \dot{V}_c \leq 0 \), proving asymptotic stability per LaSalle’s invariance principle. This approach exemplifies how adaptive control in robot technology can achieve both speed and accuracy.
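Because the blend is a single weighted sum, the cooperative layer adds almost no computational cost. The sketch below mirrors the two equations above, falling back to the Euclidean norm when no weighting matrix \( M \) is supplied; as a sanity check, with \( \sigma = 10 \) pixels an error norm of 30 pixels gives \( f_c = e^{-9} \approx 10^{-4} \), so the position servo dominates far from the target.

```python
import numpy as np

def cooperative_command(q_s, q_p, e_s, sigma, M=None):
    """Blend the two joint-angle commands with the Gaussian weight f_c."""
    # M-norm of the feature error; falls back to the Euclidean norm without M.
    err = np.sqrt(e_s @ M @ e_s) if M is not None else np.linalg.norm(e_s)
    f_c = np.exp(-(err / sigma) ** 2)          # f_c -> 1 as the image error vanishes
    return f_c * q_s + (1.0 - f_c) * q_p       # q_d = f_c q_s + (1 - f_c) q_p
```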

To validate our method, we conducted extensive simulations in a MATLAB environment emulating an “eye-in-hand” robot configuration. The target object was a square whose four corners served as image features. The desired and initial image features were set as:

$$ s_d = \begin{bmatrix} 612 & 412 & 412 & 612 \\ 412 & 412 & 612 & 612 \end{bmatrix}, \quad s = \begin{bmatrix} 660 & 524 & 671 & 807 \\ 513 & 660 & 795 & 648 \end{bmatrix} $$

Kalman filter parameters were initialized with \( P(0) = 1000 I_{48} \), and GWO was configured with a population size of 50 and 100 iterations. The optimized \( Q \) and \( R \) matrices significantly improved Jacobian estimation. The table below compares the performance of our method against traditional visual servoing and position servoing in terms of image feature error and convergence time:

Control Method                 | Average Image Error (pixels) | Convergence Time (s) | Steady-State Accuracy
Traditional Visual Servo       | 12.5                         | 8.2                  | High
Position Servo                 | 5.3                          | 3.1                  | Low
Proposed Cooperative Control   | 1.5                          | 4.0                  | High

The results demonstrate that our cooperative control reduces image error to 1.5 pixels with a convergence time of 4.0 seconds, outperforming individual methods. Furthermore, the joint angle errors and end-effector velocities were analyzed. The joint angle error \( \tilde{q} = q – q_d \) converged rapidly under cooperative control, as shown in the following equation derived from the computed torque controller:

$$ \tau_m = -\tilde{D} [K_2 K_1 \tilde{q} + (K_2 + K_1) \dot{\tilde{q}}] + \tilde{C} \dot{q} + \tilde{G} + \tau_f $$

with \( K_1 > 0 \) and \( K_2 > 0 \). The current controller ensured tracking of the desired armature current \( i_{ad} = \frac{\tau_m}{C_T \Phi} \) using:

$$ u_a = R_a i_a + C_T \Phi \omega - K_i L_a e_i $$

where \( e_i = i_a - i_{ad} \) and \( K_i > 0 \). The Lyapunov analysis for the overall joint servo subsystem, with \( V = V_2 + V_d \), confirmed stability with \( \dot{V} \leq 0 \).
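For reference, the two joint-level laws reduce to the short sketch below; the model matrices, motor constants, and gains are placeholders to be taken from the identified system.

```python
import numpy as np

def computed_torque(D_t, C_t, G_t, tau_f, q_err, dq_err, dq, K1, K2):
    """Joint torque: tau_m = -D~[K2 K1 q~ + (K2 + K1) q~_dot] + C~ q_dot + G~ + tau_f."""
    return -D_t @ (K2 @ K1 @ q_err + (K2 + K1) @ dq_err) + C_t @ dq + G_t + tau_f

def armature_voltage(R_a, L_a, C_T_phi, omega, i_a, i_ad, K_i):
    """Current loop: u_a = R_a i_a + C_T*Phi*omega - K_i L_a e_i,
    driving the armature current toward i_ad = tau_m / (C_T*Phi)."""
    e_i = i_a - i_ad                               # armature current tracking error
    return R_a * i_a + C_T_phi * omega - K_i * L_a * e_i
```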

Experimental validation on a six-degree-of-freedom robotic platform further confirmed the efficacy of our approach in real-world robot technology applications. The desired image features were set to the coordinates (420, 400), (695, 400), (695, 640), and (420, 640). Using the GWO-optimized parameters (\( Q = 0.0001 I_{48} \), \( R = 0.5 I_8 \)), the image feature errors in the u and v directions converged to 1.03 and 1.87 pixels, respectively. The table below summarizes the experimental results across the different control strategies:

Metric                       | Visual Servo Only | Position Servo Only | Cooperative Control
u-direction Error (pixels)   | 3.2               | 6.5                 | 1.03
v-direction Error (pixels)   | 4.1               | 7.8                 | 1.87
Settling Time (s)            | 10.5              | 2.8                 | 4.5

These findings highlight that cooperative control achieves a balance between rapid response and precise positioning, key for advanced robot technology. The integration of GWO into Kalman filtering reduces estimation uncertainties, while the PI regulator mitigates slow dynamics. Future work will explore real-time adaptation of the cooperative function parameters and application to multi-robot systems.

In conclusion, our research contributes to robot technology by introducing a cooperative control framework that synergizes GWO-optimized visual servoing and PI-based position servoing. This method addresses the limitations of individual controllers, offering enhanced performance in dynamic and steady-state operations. The theoretical stability guarantees and empirical validations underscore its potential for industrial automation, where robot technology demands both speed and accuracy. As robot technology continues to evolve, such intelligent control strategies will play a pivotal role in enabling robust and efficient robotic systems.
