Embodied Morphological Perception for Flexible Robots with Interactive Constraints in Luminal Intervention

Luminal diseases are among the most prevalent human ailments, and flexible surgical robots, owing to their inherent compliance, offer a promising means of navigating tortuous, narrow luminal pathways. However, these embodied robot systems often face interactive constraints, such as blockages during intervention, which can lead to tissue damage if excessive force is applied. To enhance the safety and intelligence of these procedures, we propose an embodied morphological perception method based on Fiber Bragg Grating (FBG) sensors. This approach enables the embodied robot to progress from ontology perception to cognition by acquiring, processing, interpreting, and acting on morphological information. The core of the method comprises shape reconstruction, signal filtering, blockage detection, and intervention strategies, all tailored to an embodied robot operating under interactive constraints in luminal environments.

The foundation of our embodied robot perception lies in shape reconstruction using multi-core FBG fibers. FBG sensors are ideal for embodied robot applications due to their small size, high sensing frequency, biocompatibility, and resistance to interference. These sensors operate on the principle that the Bragg wavelength shift correlates with strain and temperature changes. For an embodied robot, this allows distributed sensing along its body, providing real-time morphological feedback. The shape reconstruction is based on a discrete arc assumption, where the fiber is divided into segments, each modeled as a constant curvature arc. The relationship between strain and curvature for a multi-core FBG fiber with four cores—one central and three peripheral cores spaced 120 degrees apart—is given by:

$$ \begin{cases} \varepsilon_1 = \kappa r \cos\theta + \varepsilon_4 \\ \varepsilon_2 = \kappa r \cos(\theta - 2\pi/3) + \varepsilon_4 \\ \varepsilon_3 = \kappa r \cos(\theta + 2\pi/3) + \varepsilon_4 \end{cases} $$

Here, $\varepsilon_1$, $\varepsilon_2$, $\varepsilon_3$ represent the strains in the peripheral cores, $\varepsilon_4$ is the axial strain, $\kappa$ is the curvature, $\theta$ is the bending direction, and $r$ is the radial distance from the center. The Bragg wavelength shift $\Delta \lambda$ is related to strain by $\Delta \lambda / \lambda = k_\varepsilon \varepsilon + k_T \Delta T$, where $k_\varepsilon$ and $k_T$ are strain and temperature coefficients, respectively. By solving these equations, we derive the curvature and direction for each FBG node:

$$ \kappa = \sqrt{ \left( \frac{2\Delta w_1 - \Delta w_2 - \Delta w_3}{3k_\varepsilon r} \right)^2 + \left( \frac{\Delta w_2 - \Delta w_3}{\sqrt{3}k_\varepsilon r} \right)^2 } $$

$$ \theta = \arctan \left( \sqrt{3} \cdot \frac{\Delta w_2 - \Delta w_3}{2\Delta w_1 - \Delta w_2 - \Delta w_3} \right) $$

where $\Delta w = \Delta \lambda / \lambda$. This formulation allows the embodied robot to reconstruct its shape by discretizing the fiber into $N$ segments, each with length $l = L/N$, where $L$ is the total fiber length. The transformation matrix for each segment, when $\kappa \neq 0$, is computed as:

$$ ^b_eT_k = T_{\text{rot}}(z, \theta_k) \times T_{\text{trans}}(z, \sin(l\kappa_k)/\kappa_k) \times T_{\text{trans}}(x, (1 - \cos(l\kappa_k))/\kappa_k) \times T_{\text{rot}}(y, l\kappa_k) \times T_{\text{rot}}(z, -\theta_k) $$

And the position vector $p_k$ for the end of each segment relative to its base is:

$$ p_k = \begin{bmatrix} (1 - \cos(l\kappa_k)) \cos(\theta_k)/\kappa_k \\ (1 - \cos(l\kappa_k)) \sin(\theta_k)/\kappa_k \\ \sin(l\kappa_k)/\kappa_k \end{bmatrix} $$

For straight segments where $\kappa = 0$, the transformation simplifies to a translation along the z-axis. This discrete model enables efficient shape reconstruction for the embodied robot, facilitating real-time perception of its morphological state during luminal intervention.
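To make the pipeline concrete, here is a minimal sketch in Python (NumPy) of the node-wise curvature computation and transform chaining described above. The function and variable names are ours; `arctan2` replaces `arctan` to resolve the quadrant ambiguity, and the two translations in the segment transform are merged into a single in-plane displacement, which is algebraically equivalent.

```python
import numpy as np

def node_curvature(dw, k_eps, r):
    """Curvature and bending direction of one FBG node from the normalized
    wavelength shifts (dw1, dw2, dw3) of the three peripheral cores."""
    dw1, dw2, dw3 = dw
    c = (2 * dw1 - dw2 - dw3) / (3 * k_eps * r)   # kappa * cos(theta)
    s = (dw2 - dw3) / (np.sqrt(3) * k_eps * r)    # kappa * sin(theta)
    return np.hypot(c, s), np.arctan2(s, c)       # arctan2 resolves the quadrant

def segment_transform(kappa, theta, l):
    """Homogeneous transform across one constant-curvature segment of length l."""
    if abs(kappa) < 1e-9:            # straight segment: pure z translation
        T = np.eye(4)
        T[2, 3] = l
        return T
    Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0, 0],
                             [np.sin(a),  np.cos(a), 0, 0],
                             [0, 0, 1, 0],
                             [0, 0, 0, 1]])
    Ry = lambda a: np.array([[ np.cos(a), 0, np.sin(a), 0],
                             [0, 1, 0, 0],
                             [-np.sin(a), 0, np.cos(a), 0],
                             [0, 0, 0, 1]])
    Tp = np.eye(4)                   # in-plane arc displacement (x and z merged)
    Tp[0, 3] = (1 - np.cos(l * kappa)) / kappa
    Tp[2, 3] = np.sin(l * kappa) / kappa
    return Rz(theta) @ Tp @ Ry(l * kappa) @ Rz(-theta)

def reconstruct_shape(dws, k_eps, r, l):
    """Chain the segment transforms; returns the 3-D position of every node."""
    T, points = np.eye(4), [np.zeros(3)]
    for dw in dws:                   # one (dw1, dw2, dw3) triple per node
        kappa, theta = node_curvature(dw, k_eps, r)
        T = T @ segment_transform(kappa, theta, l)
        points.append(T[:3, 3].copy())
    return np.array(points)
```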

However, FBG signals are prone to noise and instability, which can accumulate errors in shape estimation. To address this, we propose an Improved Moving Average Filter (IMAF) that smooths the curvature and direction signals by progressively incorporating the influence of neighboring arc states. For an embodied robot with $M$ FBG nodes, the state at node $i$ is defined as $s_i = [\kappa_i, \theta_i]^T$, and the smoothed state $s_i^{\text{IMAF}} = [\kappa_i^{\text{IMAF}}, \theta_i^{\text{IMAF}}]^T$ is obtained using a sliding window of width $W = 2V + 1$. The filtering process involves multiple convergence steps:

$$ \begin{cases} a_j^{m+1} = \alpha a_j^m + \frac{1 - \alpha}{2} (a_{j-1}^m + a_{j+1}^m) \\ a_j^1 = s_j \end{cases} $$

where $\alpha$ is an influence factor, and $a_j^m$ represents the intermediate state during the $m$-th convergence step. The final smoothed state accounts for endpoints:

$$ s_i^{\text{IMAF}} = \begin{cases} \beta s_i + (1 - \beta) s_{i+1}, & i = 1 \\ a_i^i, & 1 < i \leq V \\ a_{V+1}^i, & V < i \leq M - V \\ a_{M-i+1}^i, & M - V < i < M \\ \beta s_i + (1 - \beta) s_{i-1}, & i = M \end{cases} $$

Here, $\beta$ is a smoothing factor for the endpoints. This IMAF method enhances signal stability by reducing noise and spikes, which is critical for the embodied robot to achieve accurate morphological perception. Comparative analyses against the Moving Average Filter (MAF) and the Median Filter (MF) demonstrate the superiority of IMAF in minimizing the root mean square error (RMSE) of both curvature and direction estimates, as summarized in the following table:

| Method | Curvature RMSE (×10⁻³ mm⁻¹) | Direction RMSE (×10⁻² rad) |
|--------|------------------------------|-----------------------------|
| MAF    | 10.3 ± 0.1                   | 9.1 ± 0.84                  |
| MF     | 11.7 ± 0.14                  | 11 ± 1.2                    |
| IMAF   | 7.1 ± 0.05                   | 5.9 ± 0.34                  |
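A compact sketch of the IMAF follows. The bookkeeping of the convergence superscripts above admits more than one reading; this version assumes each convergence step consumes one sample at both ends of the window, so a window of half-width $v$ collapses to its centre value after $v$ steps, with the half-width shrinking near the boundaries and the $\beta$-blending applied at the two endpoints.

```python
import numpy as np

def imaf(s, V=8, alpha=0.9, beta=0.08):
    """Improved Moving Average Filter over per-node states s (an M x 2 array
    whose columns are curvature and direction).  Each convergence step
    replaces every interior sample by a weighted average with its two
    neighbours, shrinking the window until only the centre value remains."""
    s = np.asarray(s, dtype=float)
    M, out = len(s), s.copy()
    for i in range(M):
        v = min(V, i, M - 1 - i)          # half-width shrinks near the ends
        if v == 0:                        # endpoint blending (i = 1 and i = M)
            nbr = s[i + 1] if i == 0 else s[i - 1]
            out[i] = beta * s[i] + (1 - beta) * nbr
            continue
        a = s[i - v:i + v + 1]            # window of width 2v + 1
        for _ in range(v):                # each pass drops one sample per end
            a = alpha * a[1:-1] + 0.5 * (1 - alpha) * (a[:-2] + a[2:])
        out[i] = a[0]                     # converged centre value
    return out
```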

The embodied robot leverages this filtered shape data to understand interactive states through a spatiotemporal blockage detection method. During intervention, the embodied robot moves forward, and the positional deviation $e_k(t)$ at point $p_k$ between time $t$ and $t – \Delta t$ is computed as:

$$ e_k(t) = \| p_k(t) – p_k(t – \Delta t) \| $$

A blockage is detected at position $p_o$ where the deviation falls below a threshold $e_0$, indicating constrained movement:

$$ o = \min \{\, k \mid e_k(t) < e_0 \,\} $$

This allows the embodied robot to identify blocked segments and the proximal blockage point $p_o$, enabling it to respond adaptively to environmental constraints.
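The detection rule itself is only a few lines. A sketch, assuming the node positions come from the shape reconstruction above and that $e_0$ is chosen relative to the commanded advance (the experiments below use $e_0 = 0.92 v_f \Delta t$):

```python
import numpy as np

def detect_blockage(points_t, points_prev, e0):
    """Spatio-temporal blockage check.  points_t and points_prev hold the
    reconstructed node positions (N x 3) at times t and t - dt; returns the
    index o of the most proximal blocked node, or None if motion is free."""
    e = np.linalg.norm(points_t - points_prev, axis=1)   # deviations e_k(t)
    blocked = np.flatnonzero(e < e0)                     # nodes that barely moved
    return int(blocked[0]) if blocked.size else None
```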

Based on this understanding, the embodied robot employs an intermittent intervention strategy inspired by human-like operation. When a blockage is detected at time $t_o$, the embodied robot reverses at a velocity $v_b(t)$; otherwise, it advances at a constant velocity $v_f(t)$. The intervention velocity $v(t)$ is defined as:

$$ v(t) = \begin{cases} v_f(t), & t \neq t_o \\ v_b(t), & t = t_o \end{cases} $$

To ensure compliant intervention, we introduce the Instantaneous Global Inconsistency Index (IGII), derived from the Global Inconsistency Index (GII), which quantifies the isotropy of force transmission along the embodied robot. The IGII, denoted as $C_{\text{IGII}}(t)$, is given by:

$$ C_{\text{IGII}}(t) = \frac{ \underline{\sigma}(J(t)) }{ \overline{\sigma}(J(t)) } = f(\kappa_1(t), \ldots, \kappa_N(t)) $$

where $\underline{\sigma}$ and $\overline{\sigma}$ are the minimum and maximum singular values of the Jacobian matrix $J$, and $f$ is a function of the curvatures. For simplicity, we model $f$ as:

$$ f = \prod_{k=1}^o \cos(l \kappa_k(t)) $$

This implies that smaller curvatures lead to higher IGII values, indicating smoother intervention. The reverse velocity $v_b$ is then computed based on the rate of change of position with respect to curvature and IGII:

$$ v_b = \sum_{k=1}^o \frac{d \| p_k \|}{dt} = k_{\text{IGII}} \sum_{k=1}^o \left( \frac{d \| p_k \|}{d \kappa_k} \frac{d \kappa_k}{d f} \right) $$

where $k_{\text{IGII}}$ is a compliance factor. This strategy allows the embodied robot to adjust its motion dynamically, reducing interaction forces and minimizing tissue damage during luminal intervention.
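Under the simplified model $f = \prod_k \cos(l\kappa_k)$, the chain rule in the $v_b$ expression can be made explicit. The algebra in the sketch below is our own expansion, not stated in the text: from the position vector above, the chord length of one arc is $\|p_k\| = 2\sin(l\kappa_k/2)/\kappa_k$, and $d\kappa_k/df$ follows from inverting $df/d\kappa_k = -l\tan(l\kappa_k)\,f$, assuming $\kappa_k \neq 0$ and $\cos(l\kappa_k) \neq 0$ on the blocked segment.

```python
import numpy as np

def igii(kappas, l):
    """Simplified IGII model f = prod_k cos(l * kappa_k) over the blocked
    segment; close to 1 when the robot body is nearly straight."""
    return float(np.prod(np.cos(l * np.asarray(kappas))))

def chord_grad(kappa, l):
    """d||p_k|| / d kappa_k for a constant-curvature arc, from
    ||p_k|| = 2 sin(l * kappa / 2) / kappa."""
    return l * np.cos(l * kappa / 2) / kappa - 2 * np.sin(l * kappa / 2) / kappa**2

def reverse_velocity(kappas, l, k_igii):
    """Compliant reverse velocity v_b: chains d||p_k||/dkappa_k with
    dkappa_k/df = 1 / (-l * tan(l * kappa_k) * f).  The result is negative,
    i.e. directed backwards; kappas are assumed nonzero on the blocked part."""
    f = igii(kappas, l)
    return k_igii * sum(chord_grad(k, l) / (-l * np.tan(l * k) * f)
                        for k in kappas)

def intervention_velocity(blocked, kappas, l, v_f, k_igii):
    """Intermittent strategy: advance at v_f while free, back off when blocked."""
    return reverse_velocity(kappas, l, k_igii) if blocked else v_f
```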

To validate our embodied morphological perception method, we conducted simulation studies. The simulation involved modeling an FBG fiber with $M = 100$ nodes spaced 10 mm apart, subjected to two distinct shapes: a spiral and an elliptical curve. Strain data were generated with added noise $n \sim \mathcal{N}(0, Q)$, where $Q = 10^{-3} I_{M \times M}$. The IMAF parameters were optimized through tests, resulting in $V = 8$, $\alpha = 0.9$, and $\beta = 0.08$. The results demonstrated that IMAF significantly reduced RMSE compared to unfiltered data and other filters, as shown in the following table for two shape scenarios:

| Scenario | Method | Curvature RMSE (×10⁻³ mm⁻¹) | Direction RMSE (×10⁻² rad) | Endpoint Error (mm) |
|----------|--------|------------------------------|-----------------------------|----------------------|
| Spiral shape | Unfiltered | 21 ± 0.44 | 18 ± 3.1 | 3.7 |
| Spiral shape | IMAF | 7.1 ± 0.05 | 5.9 ± 0.34 | 2.1 |
| Elliptical shape | Unfiltered | 20 ± 0.40 | 3.2 ± 0.10 | 3.9 |
| Elliptical shape | IMAF | 7.2 ± 0.05 | 2.4 ± 0.06 | 2.5 |

These simulations confirm that IMAF enhances shape sensing accuracy for the embodied robot, reducing endpoint errors to as little as 2.1 mm, about 0.2% of the simulated fiber's total length.
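The simulation loop can be reproduced in outline. The sketch below reuses `imaf` from earlier and, for brevity, injects Gaussian noise directly on the per-node states rather than on the raw core strains as in the study; the ground-truth curvature profile and noise level are illustrative choices, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
M, V, alpha, beta = 100, 8, 0.9, 0.08   # study's node count and IMAF settings

# Illustrative ground truth: a smoothly varying, spiral-like state profile
kappa_true = 2e-3 * (1 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, M)))  # 1/mm
theta_true = np.linspace(0, np.pi, M)                                 # rad
s_true = np.column_stack([kappa_true, theta_true])

s_noisy = s_true + rng.normal(0.0, 2e-4, size=s_true.shape)  # additive noise
s_filt = imaf(s_noisy, V=V, alpha=alpha, beta=beta)          # sketch above

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2, axis=0))
print("unfiltered RMSE (kappa, theta):", rmse(s_noisy, s_true))
print("IMAF RMSE       (kappa, theta):", rmse(s_filt, s_true))
```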

We further validated the method through physical experiments on a flexible robot platform. The PTFE robot was integrated with an FBG fiber of length $L = 170$ mm carrying 18 sensing nodes, protected by a NiTi tube, and the system included a force sensor to measure intervention forces. Experiments involved template-based shape reconstruction and blockage detection in curved channels of widths $H_1 = 20$ mm and $H_2 = 10$ mm. The embodied robot was commanded to advance at $v_f = 1.5$ mm/s, and blockage was identified using a threshold $e_0 = 0.92 v_f \Delta t$. The results showed that the intermittent, IGII-based intervention strategy reduced average intervention forces compared with constant-velocity intervention, as summarized below:

| Scenario | Intervention Method | Average Force (N) | Maximum Force (N) | Intervention Time (s) |
|----------|---------------------|-------------------|-------------------|------------------------|
| Width H₁ = 20 mm | Constant velocity | 0.088 | 0.18 | 47.7 |
| Width H₁ = 20 mm | Intermittent (IGII-based) | 0.076 | 0.12 | 76.8 |
| Width H₂ = 10 mm | Constant velocity | 0.060 | 0.10 | 25.8 |
| Width H₂ = 10 mm | Intermittent (IGII-based) | 0.034 | 0.06 | 52.8 |

The lower average and peak forces under intermittent intervention demonstrate the effectiveness of the embodied robot’s decision-making in ensuring compliant luminal access, at the cost of roughly doubled intervention time, a trade-off that favors safety. The blockage detection method successfully identified obstruction points, such as those at lengths $L_{o1} = 94.1$ mm and $L_{o2} = 115.1$ mm in the two scenarios, enabling proactive responses.

In conclusion, our embodied morphological perception method for flexible robots addresses key challenges in luminal intervention by integrating FBG-based shape reconstruction, advanced signal filtering, spatiotemporal blockage detection, and human-inspired intervention strategies. The IMAF method proves superior in handling signal noise, while the IGII-based approach facilitates safe and compliant robot motion. Future work will focus on extending this framework to actively bending embodied robots and exploring adaptive threshold settings for blockage detection in dynamic luminal environments. This advancement in embodied robot perception paves the way for more intelligent and minimally invasive surgical procedures.
