The design of effective intervention tools for children with Autism Spectrum Disorder (ASD) represents a significant challenge and opportunity within the fields of assistive technology and industrial design. My research focuses on leveraging the documented visual-processing strengths of children with ASD to inform the aesthetic and formal design of companion robots. These robots are increasingly deployed in therapeutic and educational settings to improve social communication, joint attention, and emotional recognition skills. The core premise of my work is that a companion robot’s visual form is not merely a stylistic choice but a critical functional component that can either facilitate or hinder engagement. If a companion robot’s form aligns with the child’s inherent visual preferences, it can reduce anxiety, increase spontaneous interaction, and thereby enhance the efficacy of the therapeutic protocols it delivers. This article details the experimental methodology, based on eye-tracking technology, that I employed to empirically identify these preferences, and translates the findings into actionable design principles for future companion robot development.
Children with ASD often exhibit atypical visual attention patterns, including a predisposition for local over global processing and potential avoidance of complex social stimuli like human faces. However, this same profile can include strengths in processing systematic, predictable, and geometric visual information. Prior studies have indicated that interventions utilizing visual supports, such as Picture Exchange Communication Systems (PECS), are highly effective. Similarly, research in human-robot interaction (HRI) for ASD has shown that robots, being simpler and more predictable than humans, can serve as excellent social mediators. The logical progression, which my research undertakes, is to optimize the robot’s physical form—the companion robot’s chassis and appearance—to capitalize on these visual strengths from the very first moment of encounter.

To systematically investigate this, I designed a controlled eye-tracking study comparing the visual behavior of children with ASD to that of Typically Developing (TD) children when presented with various companion robot models. The primary objective was to move beyond anecdotal evidence and obtain quantitative data on where and for how long children look at different robot forms. This data allows us to infer preference, cognitive load, and potential points of visual aversion. The overarching goal is to create a data-driven foundation for designing companion robots that are intrinsically more appealing and less intimidating to their target users, thereby maximizing their potential as assistive tools.
Experimental Methodology: An Eye-Tracking Approach
The experiment was structured to isolate the variable of form factor from other influences like color, brand, or known functionality. A carefully curated set of companion robot stimuli was presented to two matched groups of children while their gaze was recorded.
1. Participants
Two groups of children, aged 3–6 years, participated. The experimental group consisted of 15 children diagnosed with ASD (8 male, 7 female). The control group consisted of 15 age- and gender-matched TD children. All parents or guardians provided informed consent prior to participation.
2. Stimulus Preparation
An initial market survey identified 104 distinct companion robot models. Through iterative filtering based on sales volume, distinctiveness, and formal characteristics, this set was refined. With input from experienced designers, six final models were selected as archetypal representatives of three major form categories:
| Sample Code | Form Category | Key Morphological Features |
|---|---|---|
| Y1 & Y2 | Basic Geometric | Primary shapes (spheres, cubes, cylinders) with rounded transitions; minimal facial features; cohesive, simple silhouette. |
| Y3 & Y4 | Biomorphic | Explicit imitation of human or animal forms (e.g., humanoid robot, dog robot); defined limbs, head, and often detailed facial features. |
| Y5 & Y6 | Irregular/Complex | Angular, fragmented, or highly intricate forms; often modular or transformer-like; lacks a clear, simple geometric or organic reference. |
To control for confounding variables, all images were converted to grayscale and normalized for luminance using histogram matching. The six images (Y1–Y6) were then arranged in a Latin square design to counterbalance presentation-order effects across participants. Areas of Interest (AOIs) were defined as the entire silhouette of each companion robot.
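The luminance-normalization step can be sketched as classic histogram matching. The NumPy version below is an illustrative implementation, not the exact pipeline used in the study (production code would more likely use a library routine such as scikit-image’s `exposure.match_histograms`); the function name is my own.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap the gray levels of `source` so its histogram matches `reference`."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical cumulative distribution functions, normalized to [0, 1].
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source gray level, find the reference level with the nearest CDF value.
    mapped = np.interp(src_cdf, ref_cdf, ref_values.astype(float))
    # np.unique returns sorted values, so searchsorted finds each pixel's index exactly.
    return mapped[np.searchsorted(src_values, source)]
```

Applying this with one stimulus image as the reference gives all six images approximately the same gray-level distribution, and hence the same mean luminance.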
3. Apparatus and Procedure
The study utilized a wearable, glasses-type eye-tracker (aSee Glasses). Stimuli were displayed on a high-resolution tablet. Each child participated individually in a quiet room. After a standard 3-point calibration, the child was instructed to simply look at the screen as the six companion robot images were displayed sequentially. A fixation was defined as a gaze maintained for >100ms. The entire session was supervised, and participants received a small reward afterward.
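The fixation criterion above (gaze maintained for more than 100 ms) can be illustrated with a minimal dispersion-threshold (I-DT style) detector. This is a sketch under stated assumptions, not the aSee Glasses’ proprietary algorithm; the dispersion threshold and function names are hypothetical, while the 100 ms minimum duration comes from the procedure described here.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.100):
    """Group gaze samples (t, x, y) into fixations.

    A fixation is a maximal run of samples whose spread stays below
    `max_dispersion` (pixels) and whose duration is at least `min_duration` (s).
    Returns a list of (start_t, end_t, centroid_x, centroid_y).
    """
    fixations = []
    i = 0
    while i < len(samples):
        # Grow the window until adding one more sample exceeds the dispersion limit.
        j = i + 1
        while j < len(samples):
            window = samples[i:j + 1]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        window = samples[i:j]
        if window[-1][0] - window[0][0] >= min_duration:
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j  # continue after the accepted fixation
        else:
            i += 1  # too short: slide the window forward
    return fixations
```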
4. Data Metrics and Analysis
Key eye-tracking metrics were extracted for each AOI (companion robot) for each participant:
- First Fixation Duration (FFD): Duration of the very first fixation within an AOI. Indicates initial capture of attention.
- Fixation Count (FC): Total number of fixations within an AOI. Suggests level of interest and information processing.
- Total Fixation Duration (FD): Sum of all fixation durations within an AOI. Reflects overall engagement time.
- Percentage of Fixation Duration (PFD): $$PFD_i = \frac{FD_i}{\sum_{j=1}^{6} FD_j} \times 100\%$$ where \( i \) represents a specific companion robot sample. This normalizes engagement across the stimulus set.
- Heatmaps & Scanpaths: Visual aggregations showing spatial density of gaze (heatmaps) and the temporal sequence of fixations (scanpaths).
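The four scalar metrics follow directly from per-AOI lists of fixation durations. A minimal sketch, assuming fixations have already been assigned to AOIs and are stored in temporal order (the function name is my own); the PFD line implements the normalization formula above.

```python
def aoi_metrics(fixation_durations: dict) -> dict:
    """Compute FFD, FC, FD, and PFD per AOI.

    `fixation_durations` maps an AOI label (e.g. "Y1") to the time-ordered
    list of fixation durations (seconds) recorded inside that AOI.
    """
    total = sum(sum(durs) for durs in fixation_durations.values())
    metrics = {}
    for aoi, durs in fixation_durations.items():
        fd = sum(durs)
        metrics[aoi] = {
            "FFD": durs[0] if durs else 0.0,          # first fixation duration
            "FC": len(durs),                          # fixation count
            "FD": fd,                                 # total fixation duration
            "PFD": 100.0 * fd / total if total else 0.0,  # share of total gaze time
        }
    return metrics
```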
Data were analyzed using repeated-measures ANOVA to examine effects of Group (ASD vs. TD) and Robot Form, and their interaction.
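The design crosses a between-subjects factor (Group) with a within-subjects factor (Robot Form); a full mixed ANOVA would typically be run in a statistics package (e.g. `statsmodels` or `pingouin`). As an illustration of the within-subjects component only, here is a self-contained one-way repeated-measures ANOVA in NumPy, which partitions subject variance out of the error term.

```python
import numpy as np

def rm_anova(data: np.ndarray):
    """One-way repeated-measures ANOVA on a (n_subjects, k_conditions) matrix."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subject variability
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj                    # residual error
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_err / df_err)
    return f_stat, df_cond, df_err
```

In the study, `data` would hold one row per child and one column per robot form (or per form category), separately for the ASD and TD groups.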
Results: Divergent Visual Pathways
The quantitative analysis revealed striking and statistically significant differences in how the two groups visually processed the companion robot forms.
Quantitative Eye-Tracking Metrics
The table below summarizes the key results (mean ± standard deviation). Statistical analysis confirmed significant main effects for Group and Robot Form and, critically, a significant Group × Form interaction for all metrics (p < .05).
| Group | Metric | Y1 (Geom) | Y2 (Geom) | Y3 (Bio) | Y4 (Bio) | Y5 (Irreg) | Y6 (Irreg) |
|---|---|---|---|---|---|---|---|
| ASD | FFD (s) | 0.26 ± 0.25 | 0.46 ± 0.31 | 0.17 ± 0.15 | 0.17 ± 0.22 | 0.13 ± 0.12 | 0.15 ± 0.14 |
| ASD | FC (count) | 5.27 ± 1.79 | 7.32 ± 2.79 | 4.40 ± 1.57 | 4.03 ± 1.32 | 2.93 ± 1.24 | 3.51 ± 1.53 |
| ASD | FD (s) | 1.05 ± 0.59 | 1.69 ± 0.64 | 0.74 ± 0.41 | 0.62 ± 0.40 | 0.39 ± 0.27 | 0.51 ± 0.34 |
| ASD | PFD (%) | 18.1 | 29.5 | 12.7 | 10.6 | 6.8 | 8.8 |
| TD | FFD (s) | 0.15 ± 0.14 | 0.17 ± 0.18 | 0.50 ± 0.32 | 0.34 ± 0.26 | 0.18 ± 0.14 | 0.16 ± 0.17 |
| TD | FC (count) | 3.97 ± 1.55 | 5.23 ± 1.76 | 8.52 ± 2.76 | 7.28 ± 2.33 | 4.22 ± 2.88 | 4.18 ± 3.82 |
| TD | FD (s) | 0.56 ± 0.34 | 0.82 ± 0.48 | 1.85 ± 0.61 | 1.36 ± 0.52 | 0.59 ± 0.54 | 0.65 ± 0.66 |
| TD | PFD (%) | 9.8 | 14.0 | 32.1 | 23.5 | 9.6 | 10.4 |
The data tell a clear story: children with ASD showed a pronounced visual preference for the basic geometric companion robots (Y1, Y2), as evidenced by their highest FFD, FC, FD, and PFD values. In contrast, TD children were most visually engaged by the biomorphic companion robots (Y3, Y4). Both groups allocated the least visual attention to the irregularly shaped companion robots (Y5, Y6). A formal “Preference Score” \( S_p \) can be derived for each form category \( c \) within a group by averaging the PFD values for the two samples in that category:
$$ S_p(c) = \frac{PFD_{c1} + PFD_{c2}}{2} $$
For the ASD group, this yields:
$$ S_p(\text{Geometric}) = \frac{18.1 + 29.5}{2} = 23.8,\quad S_p(\text{Biomorphic}) = \frac{12.7 + 10.6}{2} = 11.65,\quad S_p(\text{Irregular}) = \frac{6.8 + 8.8}{2} = 7.8 $$
The hierarchy \( S_p(\text{Geometric}) > S_p(\text{Biomorphic}) > S_p(\text{Irregular}) \) quantitatively confirms the visual preference for geometric forms in ASD.
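The same computation can be reproduced directly from the PFD rows of the results table. A minimal sketch (variable names are my own):

```python
# PFD values (%) for the ASD group, taken from the results table.
PFD_ASD = {"Y1": 18.1, "Y2": 29.5, "Y3": 12.7, "Y4": 10.6, "Y5": 6.8, "Y6": 8.8}

# Each form category is represented by two stimulus samples.
CATEGORIES = {
    "Geometric": ("Y1", "Y2"),
    "Biomorphic": ("Y3", "Y4"),
    "Irregular": ("Y5", "Y6"),
}

def preference_scores(pfd: dict) -> dict:
    """S_p(c) = mean PFD of the two samples in category c."""
    return {c: (pfd[a] + pfd[b]) / 2 for c, (a, b) in CATEGORIES.items()}
```

Running `preference_scores(PFD_ASD)` recovers the hierarchy Geometric > Biomorphic > Irregular reported above; feeding in the TD group’s PFD row instead yields the biomorphic-first ordering.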
Qualitative Visualizations: Heatmaps and Scanpaths
The heatmaps and scanpaths revealed clear qualitative differences in gaze behavior. For the ASD group, heatmaps on the geometric companion robots showed denser, more concentrated areas of attention, particularly on the central torso region. Scanpaths were often localized, jumping between a few key points on the robot’s body. When viewing the humanoid companion robot (Y3), their gaze conspicuously avoided the facial region, especially the eyes.
In stark contrast, TD children’s heatmaps on the biomorphic companion robots were more uniformly distributed across the entire form, including the face. Their scanpaths were systematic and continuous, tracing the overall contour of the companion robot, indicative of holistic processing.
This divergence underscores a fundamental difference in visual strategy. The ASD group’s approach aligns with a detail-oriented, part-based processing style, where the simpler, more predictable geometry of a basic companion robot is less cognitively demanding and potentially less socially threatening. The avoidance of realistic eyes on the biomorphic companion robot supports the theory of face aversion in ASD, which can extend to highly realistic artificial agents.
Design Principles for ASD-Focused Companion Robots
Based on the empirical evidence from the eye-tracking study, I propose the following formal design principles for companion robots intended for children with ASD. These principles aim to translate visual preference into tangible design features that can enhance initial engagement and sustained interaction.
| Design Principle | Rationale (From Eye-Tracking Data) | Concrete Design Application |
|---|---|---|
| 1. Prioritize Basic Geometric Forms | Highest Fixation Duration and Count for geometric samples (Y1, Y2). Simple shapes are likely easier to process visually and predict. | Use primary volumes: spheres, cubes, cylinders, and their smooth hybrids (e.g., spheroids, rounded cubes). Avoid complex, fragmented, or overly angular silhouettes. The core body of the companion robot should have a clear, simple, and cohesive geometry. |
| 2. Emphasize Torso/Central Body Design | Heatmaps showed concentrated attention on the torso of geometric robots. This area is a primary focal point for ASD children. | Design the torso to be the most expressive and visually stable part. Use it for displaying clear, unambiguous cues (e.g., color-changing panels for emotion, simple light patterns for direction). Minimize decorative clutter on limbs, keeping the central mass as the primary communication canvas for the companion robot. |
| 3. Abstract and Simplify Facial Features, Especially Eyes | Gaze avoidance of detailed eyes on the humanoid robot (Y3). Realistic eyes may trigger social anxiety or aversion. | Design eyes using abstract, non-threatening geometric forms. Use simple circles, ovals, or light-emitting diodes (LEDs) arranged in a neutral, friendly pattern. Avoid high-resolution screens displaying realistic human eyes. The expressiveness of the companion robot’s “face” should come from gross movement (tilting) and light, not microscopic muscle simulation. |
| 4. Minimize Visual Complexity and Irregularity | Lowest Fixation Duration for irregular samples (Y5, Y6) across both groups. Complex forms increase cognitive load. | Maintain a clean, uncluttered exterior. If the companion robot has moving parts or modules, ensure that in its default, “resting” state, it presents a simple and predictable shape. Avoid transformer-like aesthetics with many sharp lines and overlapping parts. |
| 5. Ensure Predictable Motion Silhouettes | Related to local processing; predictable changes are key. While not directly tested here, it follows from the preference for predictable forms. | Design limb movements and gestures to be slow, deliberate, and to maintain a clear geometric relationship. The motion path of the companion robot’s arms should trace simple arcs, not complex, erratic patterns. The moving form should remain interpretable as a set of basic shapes in relation to each other. |
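Principle 5 can be made concrete with a short sketch: generating evenly spaced waypoints along a circular arc, so that an arm joint traces a slow, predictable path rather than an erratic one. This is purely illustrative; the function and parameters are hypothetical, not part of the study.

```python
import math

def arc_waypoints(center, radius, start_deg, end_deg, steps=20):
    """Evenly spaced (x, y) waypoints along a circular arc.

    Constant angular increments give uniform, easily anticipated motion;
    more `steps` means slower, smoother movement at a fixed playback rate.
    """
    pts = []
    for i in range(steps + 1):
        theta = math.radians(start_deg + (end_deg - start_deg) * i / steps)
        pts.append((center[0] + radius * math.cos(theta),
                    center[1] + radius * math.sin(theta)))
    return pts
```

For example, `arc_waypoints((0, 0), 1.0, 0, 90)` sweeps a quarter circle: every waypoint stays on the same radius, so the moving limb remains interpretable as one simple shape rotating about a fixed point.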
The application of these principles aims to create a companion robot that feels safe, comprehensible, and engaging for a child with ASD. By reducing unpredictable visual noise and avoiding anxiety-provoking features (like hyper-realistic eyes), the design lowers the barrier to interaction. A geometrically clear companion robot acts as a stable, reliable platform upon which social and therapeutic scripts can be built. Its form communicates predictability before it even moves or speaks, aligning with the child’s cognitive style to foster trust and attention.
Conclusion and Future Directions
This research demonstrates that visual design is a paramount, yet often overlooked, factor in the development of effective companion robots for therapeutic applications with ASD children. By employing eye-tracking methodology, I have moved from general assumptions to specific, data-driven insights. The clear visual preference exhibited by children with ASD for companion robots with basic geometric forms, simplified faces, and an emphasis on the torso provides a robust foundation for future design work.
The proposed design principles offer a concrete starting point for engineers and designers. However, this work opens several avenues for further investigation. Future research should explore the interplay between form and other sensory modalities—how does the shape of a companion robot influence the perception of its voice or the intention behind its movements? Longitudinal studies are needed to determine if a preferred initial design leads to better long-term therapeutic outcomes, such as improved skill generalization. Furthermore, personalization should be considered; while a geometric form is broadly preferred, can adaptive features on the companion robot allow for slight customization to match an individual child’s evolving preferences?
In conclusion, the path toward more effective assistive robotics for ASD involves a deep synergy between technology, therapy, and design. By intentionally crafting the companion robot’s form to align with the unique visual-cognitive profile of ASD, we can create tools that are not just functionally capable but are intrinsically welcoming. This approach maximizes the potential for engagement, turning the companion robot from a piece of technology into a true social catalyst and companion in the child’s developmental journey.
