In recent years, the intelligent companion robot has become a significant product category within the smart home ecosystem, particularly for child development and education. Parents, especially those of younger generations, are increasingly receptive to integrating such intelligent devices into their children’s daily routines for educational support and emotional interaction. The success of a children’s companion robot hinges not only on its interactive functionality but also, profoundly, on its physical form: the styling directly influences a child’s initial perception, emotional response, and willingness to engage. Accurately capturing users’ emotional needs and preferences to guide styling design is therefore paramount for creating successful, differentiated products in a market often plagued by homogeneity.
Product perceptual imagery serves as a critical bridge between user psychology and physical design. It represents the comprehensive intuitive and cognitive responses evoked in users when they encounter a product, shaped by its form, color, texture, and other attributes. Effectively translating these subjective emotional responses into objective, actionable design parameters is a central challenge. Extenics, a formalized methodology for solving contradictory problems, offers a powerful framework: by modeling things, affairs, and relations with formalized basic-elements, it supports the systematic analysis, transformation, and expansion of design elements, facilitating innovative solutions. This work integrates Extenics theory with product perceptual imagery to establish a methodological system for the modeling image design of children’s companion robots. The goal is to construct an extension reasoning process that correlates emotional imagery with specific modeling elements, thereby providing a data-driven foundation for design innovation that avoids homogeneity and meets precise emotional targets.
Extension Reasoning Methodology for Product Imagery Design
The core of this approach lies in establishing a formal, extensible correlation between the emotional imagery basic-element (representing user feelings) and the modeling element basic-element (representing physical product features). The overall process can be summarized in the following workflow.

1. Extension Analysis of Emotional Imagery Design Elements
The fundamental unit in Extenics is the basic-element, a triple comprising an object (O), a characteristic (C), and the value (V) of that characteristic with respect to the object. Basic-elements formally describe matter-elements, affair-elements, and relation-elements.
The basic-element model is expressed as:
$$B = (o, c, v) =
\begin{bmatrix}
o & c_1 & v_1\\
& c_2 & v_2\\
& \vdots & \vdots\\
& c_n & v_n
\end{bmatrix}$$
In the context of designing a companion robot, we define two key basic-elements: the Emotional Imagery Basic-element ($M_q$) and the Modeling Element Basic-element ($M_z$). They are correlated, denoted as $M_q \sim M_z$.
$$M_q = (o_q, c_q, v_q) = [\text{Companion Robot, Emotional Characteristic, Evaluative Value}]$$
$$M_z = (o_z, c_z, v_z) = [\text{Companion Robot, Modeling Characteristic, Feature Value}]$$
Divergence Analysis: This is a core extension method for exploring multiple potential paths from a single starting point. Given a basic-element $B=(O, C, V)$, divergence analysis yields a divergence tree. For instance, diverging on the object yields:
$$\{(O_i, C, V), i = 1, 2, \cdots, n\}$$
Similarly, we can diverge on characteristics or values. This principle is applied to systematically expand the modeling features of the companion robot from geometric perspectives (point, line, surface, volume).
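As a concrete illustration, the basic-element triple and object-side divergence can be sketched in code. This is a minimal sketch, not part of the original method; the class and function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BasicElement:
    """A basic-element B = (O, C, V): object, characteristic, value."""
    obj: str
    characteristic: str
    value: str

def diverge_on_object(b: BasicElement, objects: list[str]) -> list[BasicElement]:
    """Divergence on the object: {(O_i, C, V)} keeps C and V fixed."""
    return [BasicElement(o, b.characteristic, b.value) for o in objects]

# One branch of a divergence tree rooted at a modeling feature
seed = BasicElement("Companion Robot", "Head Form", "Spherical")
tree = diverge_on_object(seed, ["Companion Robot", "Educational Robot", "Toy Robot"])
```

Diverging on characteristics or values follows the same pattern, holding the other two components fixed.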
Conjugate Analysis: Every object possesses both a real part (its physical, material aspects) and an imaginary part (its non-material, functional, or emotional aspects). For the companion robot, the imaginary part is the user’s emotional perception (e.g., cute, safe), and the real part is its physical form. Conjugate analysis examines the relationship between these two parts, establishing that certain characteristics of the imaginary part correlate with characteristics of the real part. This correlation underpins the entire premise of linking emotion to form.
2. Establishing the Extension Model for Emotional Imagery
Matter-Element Model: Based on the basic-element formula, we construct specific matter-element models for the companion robot’s imagery and form.
The emotional imagery matter-element model is:
$$M_q = \begin{bmatrix}
o_q & \text{Lightweight Feeling} & v_1\\
& \text{Delicate Feeling} & v_2\\
& \vdots & \vdots\\
& \text{Stable Feeling} & v_n
\end{bmatrix}$$
The modeling element matter-element model is:
$$M_z = \begin{bmatrix}
o_z & \text{Head Form} & v_1\\
& \text{Body Form} & v_2\\
& \text{Screen Shape} & v_3\\
& \text{Button Position} & v_4
\end{bmatrix}$$
Affair-Element Model: This model describes interactions and operations. The affair of a user operating a companion robot can be modeled as:
$$M_a = \begin{bmatrix}
o_a & \text{Operating Subject} & v_1\\
& \text{Receiving Subject} & v_2\\
& \vdots & \vdots\\
& \text{Aesthetic Judgment} & v_n
\end{bmatrix}$$
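The matter-element and affair-element models above can be represented as simple mappings from characteristics to values. The characteristic names follow the models above; the container layout and the placeholder values are assumptions of this sketch.

```python
# Emotional imagery matter-element M_q: characteristics -> evaluative values
# (None marks a value to be filled in by user evaluation)
M_q = {
    "object": "Companion Robot",
    "characteristics": {
        "Lightweight Feeling": None,
        "Delicate Feeling": None,
        "Stable Feeling": None,
    },
}

# Modeling element matter-element M_z: characteristics -> feature values
M_z = {
    "object": "Companion Robot",
    "characteristics": {
        "Head Form": None,
        "Body Form": None,
        "Screen Shape": None,
        "Button Position": None,
    },
}

# The correlation M_q ~ M_z is what the regression step later quantifies
correlated = (M_q["object"] == M_z["object"])
```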
3. Expansion Analysis of Modeling Elements for Emotional Imagery
Applying divergence analysis to the geometric features of the companion robot yields a comprehensive set of modeling element categories. Buttons are treated as ‘points’, screen outlines as ‘lines/surfaces’, and the head and body as ‘volumes’. The expanded categories are summarized below.
| Modeling Element | Categories |
|---|---|
| Head Form (A) | Spherical (A1), Hemispherical (A2), Cuboid (A3), Bionic (A4), Irregular (A5) |
| Body Form (B) | Cylindrical (B1), Spherical (B2), Bionic (B3), Irregular (B4) |
| Screen Shape (C) | Rectangular (C1), Circular (C2), Irregular (C3), Trapezoidal (C4), No Screen (C5) |
| Button Position (D) | Top of Head (D1), Front of Body (D2), Back of Body (D3), No Buttons (D4) |
Based on Extenics, the extension model relating emotional imagery to modeling design elements can be formally expressed. If $C_q$ denotes the set of emotional imagery terms and $C_z$ the set of modeling elements, the divergence relation is:
$$C = \{\text{Emotional Imagery, Modeling Design Elements}\}$$
$$\begin{array}{ll}
\vdash & C_q = \{\text{Delicate, Agile, } \cdots \}\\
& C_z = \{\text{Head Form, Body Form, } \cdots \}
\end{array}$$
$$\begin{array}{ll}
\vdash & C_{\text{Head Form}} = \{\text{Spherical, Hemispherical, } \cdots \}\\
& C_{\text{Body Form}} = \{\text{Cylindrical, Spherical, } \cdots \}\\
& C_{\text{Screen Shape}} = \{\text{Has Screen, No Screen}\}\\
& C_{\text{Button Position}} = \{\text{Has Buttons, No Buttons}\}
\end{array}$$
$$C_{\text{Screen Shape}} = \{\text{Has Screen, No Screen}\} \vdash C_{\text{Has Screen}} = \{\text{Rectangular, Circular, } \cdots \}$$
$$C_{\text{Button Position}} = \{\text{Has Buttons, No Buttons}\} \vdash C_{\text{Has Buttons}} = \{\text{Top of Head, Front of Body, } \cdots \}$$
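The divergence hierarchy of Table 1 can be encoded as a nested mapping, from which the full candidate design space follows by taking the Cartesian product of the four category sets. A sketch (the label strings are shorthand for the Table 1 categories):

```python
from itertools import product

elements = {
    "Head Form":       ["A1 Spherical", "A2 Hemispherical", "A3 Cuboid", "A4 Bionic", "A5 Irregular"],
    "Body Form":       ["B1 Cylindrical", "B2 Spherical", "B3 Bionic", "B4 Irregular"],
    "Screen Shape":    ["C1 Rectangular", "C2 Circular", "C3 Irregular", "C4 Trapezoidal", "C5 No Screen"],
    "Button Position": ["D1 Top of Head", "D2 Front of Body", "D3 Back of Body", "D4 No Buttons"],
}

# Every combination of one category per modeling element: 5 * 4 * 5 * 4 = 400
design_space = list(product(*elements.values()))
```

Enumerating the 400 combinations makes explicit how large the search space is that the correlation models later help to rank.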
4. Establishing the Correlation Criterion for Product Emotional Imagery
The final step is to quantify the relationship. Representative emotional vocabulary is collected and screened through user surveys. Representative samples of companion robots are selected and their造型要素 are encoded using Quantification Theory Type I (values of 1 for presence, 0 for absence of a category). Users then evaluate these samples against the emotional vocabulary. A multiple linear regression analysis is performed where the independent variables ($X_i$) are the encoded modeling elements, and the dependent variable ($Y$) is the average emotional evaluation score.
The general form of the correlation model (regression equation) is:
$$Y(K) = \sum_{i=1}^{m} \alpha_i X_i$$
Where $Y$ is the score for a specific emotional word pair, $X_i$ represents the dummy variables for each modeling element category (e.g., A1, A2, …, D4), and $\alpha_i$ are the standardized partial regression coefficients indicating the influence weight and direction (positive or negative) of each category on the emotional score.
More specifically, for the companion robot with elements defined in Table 1, the model expands to:
$$Y(K) = \alpha_{A1}A1 + \alpha_{A2}A2 + \alpha_{A3}A3 + \alpha_{A4}A4 + \alpha_{A5}A5 \;(\text{Head Form})$$
$$+ \alpha_{B1}B1 + \alpha_{B2}B2 + \alpha_{B3}B3 + \alpha_{B4}B4 \;(\text{Body Form})$$
$$+ \alpha_{C1}C1 + \alpha_{C2}C2 + \alpha_{C3}C3 + \alpha_{C4}C4 + \alpha_{C5}C5 \;(\text{Screen Shape})$$
$$+ \alpha_{D1}D1 + \alpha_{D2}D2 + \alpha_{D3}D3 + \alpha_{D4}D4 \;(\text{Button Position})$$
5. Extension Superiority Evaluation for Emotional Imagery
To evaluate and compare design proposals, an extension superiority evaluation method is employed. Secondary evaluation indicators ($MI_i$) are established based on the target emotional imagery (e.g., cute, soft). Each indicator is assigned a weight $\alpha_i$ (where $0 \le \alpha_i \le 1$ and $\sum \alpha_i = 1$). Users evaluate a design scheme $K_i$ against these indicators on a scale (e.g., -2 to 2). The superiority value $C(K_i)$ is calculated as the weighted sum of the average evaluation scores $k_i$ for each indicator:
$$C(K_i) = \sum_{i=1}^{n} \alpha_i \cdot k_i$$
A higher $C(K_i)$ indicates that the design scheme better matches the target emotional imagery.
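A minimal sketch of the superiority computation, checking the weight constraints before taking the weighted sum (the example weights and scores are placeholders, not data from the case study):

```python
def superiority(weights: list[float], mean_scores: list[float]) -> float:
    """C(K_i) = sum_i alpha_i * k_i, with 0 <= alpha_i <= 1 and sum(alpha_i) = 1."""
    assert len(weights) == len(mean_scores)
    assert all(0.0 <= w <= 1.0 for w in weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * k for w, k in zip(weights, mean_scores))

# e.g. two indicators weighted 60/40, with mean scores 1.0 and 0.5
c = superiority([0.6, 0.4], [1.0, 0.5])  # ≈ 0.8
```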
Case Study: Modeling Image Design for a Children’s Companion Robot
1. Collection and Screening of Representative Emotional Imagery Vocabulary
An initial pool of 130 affective words related to products and companion robots was gathered. After refinement using the KJ method and expert review, 20 pairs of bipolar adjectives were selected. A survey using a five-level Semantic Differential scale (1=Unimportant to 5=Very Important) was conducted to identify the most relevant terms for a companion robot. The mean scores were calculated, as shown below.
| Emotional Word | Total Score | Mean | Std. Deviation |
|---|---|---|---|
| Safe | 518 | 3.611 | 1.156 |
| Easy-to-Control | 515 | 3.601 | 1.089 |
| Cute | 514 | 3.594 | 1.118 |
| Soft | 513 | 3.587 | 1.029 |
| Intelligent | 509 | 3.556 | 1.188 |
| Eco-friendly | 508 | 3.552 | 1.161 |
| Sturdy | 506 | 3.534 | 1.141 |
| Round | 505 | 3.531 | 1.105 |
| Friendly | 505 | 3.531 | 1.112 |
| Lightweight | 498 | 3.482 | 1.080 |
The four word pairs with the highest mean scores (“Safe-Dangerous”, “Easy-to-Control-Difficult-to-Control”, “Cute-Ugly”, and “Soft-Hard”) were selected as the representative emotional imagery vocabulary for guiding the design of the children’s companion robot.
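The screening step amounts to sorting the word pairs by mean importance and keeping the top four, which can be checked against Table 2 (mean values copied from the table above):

```python
# Mean importance scores from the semantic-differential survey (Table 2)
means = {
    "Safe": 3.611, "Easy-to-Control": 3.601, "Cute": 3.594, "Soft": 3.587,
    "Intelligent": 3.556, "Eco-friendly": 3.552, "Sturdy": 3.534,
    "Round": 3.531, "Friendly": 3.531, "Lightweight": 3.482,
}

# Keep the four highest-rated terms
top4 = sorted(means, key=means.get, reverse=True)[:4]
# -> ['Safe', 'Easy-to-Control', 'Cute', 'Soft']
```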
2. Screening of Typical Samples and Encoding
A total of 140 images of children’s companion robots were collected from the market; after initial screening, 48 samples remained. Multidimensional scaling (MDS) and cluster analysis (combining hierarchical clustering and K-means), based on perceptual similarity judgments from 40 participants (postgraduates and preschool teachers), yielded 14 typical samples with distinct features.
The modeling elements of these 14 companion robot samples were analyzed and encoded according to the categories in Table 1. The presence of a category is marked ‘1’, its absence ‘0’.
| Sample | A1 | A2 | A3 | A4 | A5 | B1 | B2 | B3 | B4 | C1 | C2 | C3 | C4 | C5 | D1 | D2 | D3 | D4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
| 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| 4 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 5 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 7 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 8 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 9 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| 10 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 11 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 12 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| 13 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 14 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
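Because each sample takes exactly one category per modeling element, every row of Table 3 must contain exactly one ‘1’ in each of the four dummy-variable groups. A small consistency check (rows for samples 1 and 2 copied from the table):

```python
GROUP_SIZES = [5, 4, 5, 4]  # sizes of the A, B, C, D dummy-variable groups

def valid_encoding(row: list[int]) -> bool:
    """Each group of dummy variables must sum to exactly 1 (one-hot per element)."""
    assert len(row) == sum(GROUP_SIZES)
    start, ok = 0, True
    for size in GROUP_SIZES:
        ok = ok and sum(row[start:start + size]) == 1
        start += size
    return ok

# Table 3, samples 1 and 2 (A | B | C | D groups)
sample1 = [0,1,0,0,0, 1,0,0,0, 0,1,0,0,0, 0,0,1,0]
sample2 = [0,1,0,0,0, 0,0,0,1, 1,0,0,0,0, 1,0,0,0]
```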
3. Characterization of the Correlation Criterion between Emotional Imagery and Modeling Elements
The 14 typical samples were evaluated by 112 target users (including young adults and parents) on a five-point scale for the four selected emotional word pairs. The average scores for each sample are listed below.
| Sample ID | Safe-Dangerous | Soft-Hard | Easy-to-Control-Difficult | Cute-Ugly |
|---|---|---|---|---|
| 1 | 1.00 | 0.42 | 0.79 | 0.58 |
| 2 | 1.37 | 0.95 | 0.95 | 1.58 |
| 3 | 0.53 | 0.53 | 0.11 | 0.05 |
| 4 | 1.16 | 0.89 | 1.00 | 0.74 |
| 5 | 0.58 | -0.79 | 0.00 | -0.11 |
| 6 | 1.26 | 1.42 | 0.95 | 1.16 |
| 7 | 1.05 | 0.58 | 0.68 | 0.37 |
| 8 | 0.68 | 0.63 | 0.26 | 0.32 |
| 9 | 0.68 | 0.47 | 0.68 | 0.26 |
| 10 | 0.00 | -0.37 | -0.16 | -0.42 |
| 11 | 0.63 | 0.42 | 0.53 | 0.32 |
| 12 | 1.11 | 1.21 | 0.74 | 0.89 |
| 13 | 0.95 | 0.53 | 0.53 | -0.11 |
| 14 | 1.11 | 1.05 | 0.79 | 1.00 |
Multiple linear regression was performed using the coding data (Table 3) as independent variables and the emotional scores (Table 4) as dependent variables. The resulting standardized partial regression coefficients for each modeling element category under each emotional dimension are shown below. The absolute value of a coefficient indicates its degree of influence, and its sign indicates association with the positive or negative pole of the emotional word pair.
| Modeling Element | Category | Safe-Dangerous | Soft-Hard | Easy-to-Control-Difficult | Cute-Ugly |
|---|---|---|---|---|---|
| Head Form (A) | A1 (Spherical) | 0.310 | 0.422 | 0.302 | 0.357 |
| | A2 (Hemispherical) | 0.332 | 0.099 | 0.415 | 0.207 |
| | A3 (Cuboid) | -0.446 | -0.562 | -0.512 | -0.310 |
| | A4 (Bionic) | 0.047 | 0.192 | -0.074 | -0.036 |
| | A5 (Irregular) | -0.214 | -0.012 | -0.104 | -0.140 |
| Body Form (B) | B1 (Cylindrical) | 0.148 | 0.057 | 0.149 | -0.108 |
| | B2 (Spherical) | -0.185 | -0.073 | -0.024 | -0.080 |
| | B3 (Bionic) | 0.008 | 0.246 | 0.127 | 0.113 |
| | B4 (Irregular) | -0.057 | -0.314 | -0.303 | 0.032 |
| Screen Shape (C) | C1 (Rectangular) | -0.091 | -0.376 | -0.227 | 0.015 |
| | C2 (Circular) | 0.200 | 0.233 | 0.281 | 0.216 |
| | C3 (Irregular) | -0.263 | -0.018 | -0.349 | -0.220 |
| | C4 (Trapezoidal) | -0.145 | -0.048 | 0.092 | -0.111 |
| | C5 (No Screen) | 0.193 | 0.318 | 0.139 | 0.046 |
| Button Position (D) | D1 (Top of Head) | -0.150 | -0.062 | -0.082 | -0.257 |
| | D2 (Front of Body) | 0.185 | 0.104 | 0.290 | 0.341 |
| | D3 (Back of Body) | 0.106 | -0.073 | 0.178 | 0.055 |
| | D4 (No Buttons) | -0.052 | 0.220 | -0.155 | -0.003 |
| R² (Goodness of Fit) | | 0.753 | 0.682 | 0.655 | 0.720 |
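The fitting procedure itself is ordinary least squares on dummy-coded predictors. Because each group of dummies sums to one, a full set of categories is collinear; one common remedy, used here as an illustrative assumption rather than the paper’s exact computation, is to drop a reference category per group. A tiny synthetic example with two retained dummies:

```python
import numpy as np

# Columns: intercept, x1 = spherical head (vs reference), x2 = rectangular screen (vs reference)
X = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [1, 1, 1],
], dtype=float)

# Scores generated noise-free from Y = 0.2 + 0.5*x1 - 0.3*x2
y = np.array([0.7, -0.1, 0.2, 0.4])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef ≈ [0.2, 0.5, -0.3]: intercept, head effect, screen effect
```

With real data the coefficients are not recovered exactly, and the R² values in Table 5 quantify how much of the score variance each model explains.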
The regression results reveal the varying influence weights of the modeling elements under different emotional dimensions for the companion robot:
- For “Safe-Dangerous”: Influence order: Head Form > Screen Shape > Button Position > Body Form.
- For “Soft-Hard”: Influence order: Head Form > Screen Shape > Body Form > Button Position.
- For “Easy-to-Control-Difficult”: Influence order: Head Form > Screen Shape > Body Form > Button Position.
- For “Cute-Ugly”: Influence order: Head Form > Button Position > Screen Shape > Body Form.
The head form consistently emerges as the most influential factor across all emotional dimensions for this children’s companion robot.
By examining the coefficients with large absolute values, we can derive the design element categories most strongly associated with each emotional pole.
| Emotional Pole (Positive) | Corresponding Design Element Categories | Emotional Pole (Negative) | Corresponding Design Element Categories |
|---|---|---|---|
| Safe | A2 (Hemispherical Head), B1 (Cylindrical Body), C2 (Circular Screen), D2 (Front Button) | Dangerous | A3 (Cuboid Head), B2 (Spherical Body), C3 (Irregular Screen), D1 (Top Button) |
| Soft | A1 (Spherical Head), B3 (Bionic Body), C5 (No Screen), D4 (No Buttons) | Hard | A3 (Cuboid Head), B4 (Irregular Body), C1 (Rectangular Screen), D3 (Back Button) |
| Easy-to-Control | A2 (Hemispherical Head), B1 (Cylindrical Body), C2 (Circular Screen), D2 (Front Button) | Difficult-to-Control | A3 (Cuboid Head), B4 (Irregular Body), C3 (Irregular Screen), D4 (No Buttons) |
| Cute | A1 (Spherical Head), B3 (Bionic Body), C2 (Circular Screen), D2 (Front Button) | Ugly | A3 (Cuboid Head), B1 (Cylindrical Body), C3 (Irregular Screen), D1 (Top Button) |
Substituting the specific coefficients from Table 5 into the general correlation model yields the precise correlation models between emotional imagery and modeling elements for the children’s companion robot. For example, the model for “Cute-Ugly” is:
$$Y(\text{Cute-Ugly}) = 0.357A1 + 0.207A2 - 0.310A3 - 0.036A4 - 0.140A5$$
$$- 0.108B1 - 0.080B2 + 0.113B3 + 0.032B4$$
$$+ 0.015C1 + 0.216C2 - 0.220C3 - 0.111C4 + 0.046C5$$
$$- 0.257D1 + 0.341D2 + 0.055D3 - 0.003D4$$
Similar equations are constructed for the other three emotional pairs.
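With the coefficients in hand, any candidate combination can be scored by summing the coefficients of its active categories. Using the “Cute-Ugly” coefficients from Table 5, the combination A1 + B3 + C2 + D2 scores 0.357 + 0.113 + 0.216 + 0.341 = 1.027, which is the maximum attainable under this model since each term is the largest in its group:

```python
# Standardized coefficients for the "Cute-Ugly" model (Table 5)
cute_ugly = {
    "A1": 0.357, "A2": 0.207, "A3": -0.310, "A4": -0.036, "A5": -0.140,
    "B1": -0.108, "B2": -0.080, "B3": 0.113, "B4": 0.032,
    "C1": 0.015, "C2": 0.216, "C3": -0.220, "C4": -0.111, "C5": 0.046,
    "D1": -0.257, "D2": 0.341, "D3": 0.055, "D4": -0.003,
}

def score(combo: tuple[str, ...], coefs: dict[str, float]) -> float:
    """Evaluate Y for a combination of one active category per element."""
    return sum(coefs[c] for c in combo)

best = score(("A1", "B3", "C2", "D2"), cute_ugly)   # ≈ 1.027
worst = score(("A3", "B1", "C3", "D1"), cute_ugly)  # most "Ugly"-leaning combination
```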
4. Model Verification
To verify the predictive validity of the established correlation models, five additional companion robot samples (not among the original 14) were selected. Their modeling elements were encoded, and users provided emotional evaluation scores. These actual scores were compared against the scores predicted by substituting the encodings into the four regression models. A paired-sample t-test showed no significant difference (Sig. > 0.05 for all four word pairs) between actual and predicted values, confirming the models’ reliability and feasibility for guiding the styling image design of companion robots.
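The paired-sample t statistic used in this verification step is straightforward to compute from the per-sample differences. A sketch using only the standard library; the five score pairs below are hypothetical placeholders, since the validation samples’ raw data are not reproduced here:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(actual: list[float], predicted: list[float]) -> tuple[float, int]:
    """Return (t statistic, degrees of freedom) for a paired-sample t-test."""
    diffs = [a - p for a, p in zip(actual, predicted)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical actual vs model-predicted scores for five validation samples
t_stat, df = paired_t([1.00, 0.80, 0.60, 0.90, 0.70],
                      [0.90, 0.85, 0.65, 0.88, 0.72])
# A small |t| relative to the critical value at df = 4 means no significant difference
```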
5. Design Practice and Evaluation
Guided by the research findings, a new design for a children’s companion robot was initiated. The target emotional imagery was set as a blend of “Cute” (weight: 60%) and “Soft” (weight: 40%). Consulting the correlation models (Table 5 and the derived equations) and the correspondence table (Table 6), the optimal combination of modeling elements was determined: spherical head (A1), bionic body (B3), circular screen (C2), and buttons on the front of the body (D2). A color scheme of sky blue (main) and white (accent) was chosen to convey safety, openness, and tranquility, suitable for a companion to a growing child.
The final design scheme was rendered. To evaluate its success, it was presented alongside the 14 original samples to 248 users in a survey using a five-point Likert scale for the four emotional word pairs. The superiority value $C(K_i)$ for each sample was calculated using the weighted-sum formula. The results are shown below.
| Sample / Design | Superiority Value (C) |
|---|---|
| Sample 2 | 1.328 |
| New Design (X) | 1.326 |
| Sample 6 | 1.264 |
| Sample 14 | 1.020 |
| Sample 12 | 1.018 |
| Sample 4 | 0.800 |
| Sample 1 | 0.516 |
| Sample 7 | 0.454 |
| Sample 8 | 0.444 |
| Sample 11 | 0.360 |
| Sample 9 | 0.344 |
| Sample 3 | 0.242 |
| Sample 13 | 0.146 |
| Sample 5 | -0.382 |
| Sample 10 | -0.400 |
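The tabulated values are consistent with applying the weighted-sum formula directly to the Table 4 means: with the target weights of 0.6 for “Cute-Ugly” and 0.4 for “Soft-Hard”, each sample’s superiority value equals $0.6 \cdot k_{\text{cute}} + 0.4 \cdot k_{\text{soft}}$. A spot-check with scores copied from Table 4:

```python
CUTE_WEIGHT, SOFT_WEIGHT = 0.6, 0.4

def superiority(cute: float, soft: float) -> float:
    """C(K_i) for the target imagery blended 60% Cute, 40% Soft."""
    return CUTE_WEIGHT * cute + SOFT_WEIGHT * soft

# sample id -> (Soft-Hard mean, Cute-Ugly mean) from Table 4
table4 = {2: (0.95, 1.58), 6: (1.42, 1.16), 10: (-0.37, -0.42)}
values = {sid: superiority(cute, soft) for sid, (soft, cute) in table4.items()}
# Sample 2 -> 1.328, Sample 6 -> 1.264, Sample 10 -> -0.400, matching the table above
```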
The new design scheme achieved a superiority value of 1.326, second only to Sample 2 (1.328) among all fifteen evaluated designs, demonstrating its successful embodiment of the target “Cute” and “Soft” emotional imagery for a children’s companion robot and thereby validating the proposed extension-based methodology.
Conclusion
This research presents a systematic methodology integrating Extenics with product perceptual imagery for the modeling design of children’s companion robots. The formalized extension reasoning process, encompassing basic-element modeling, divergence analysis, and conjugate analysis, provides a structured framework that bridges the gap between subjective user emotion and objective product form. By constructing quantifiable correlation models between specific emotional imagery vocabulary and detailed modeling element categories, the method moves beyond intuitive design towards a data-informed approach. The case study confirms that the head form is the predominant factor influencing emotional perceptions of a companion robot, and it demonstrates how target emotions can be translated into a specific combination of design features, resulting in a verified, high-performing design. This Extenics-based approach offers a powerful tool against product homogeneity, enabling designers to innovate with clarity and precision based on a deep understanding of users’ emotional needs. Future work could incorporate additional design dimensions, such as color, material, and interactive dynamics, into the extension model, further enhancing its comprehensiveness and accuracy for the complex design of intelligent companion robots.
