Abstract
This study analyzes the emotional understanding characteristics and persona diversity of Large
Language Models (LLMs) using a fuzzy-based evaluation framework. Using 4,227 experimental data
points from 36 LLM types across 4 personas and 3 literary texts, we reveal: (1) PCA identifies three
components explaining 95.5% cumulative variance with significant inter-persona differences across
all emotion dimensions (Interest: F=9.51, p<0.001; Surprise: F=19.95, p<0.001; Sadness: F=2.92,
p=0.033; Anger: F=3.22, p=0.022); (2) The poet persona (P3, temperature=0.9) shows significantly
higher emotional sensitivity than the robot persona (P4, temperature=0.1) with Cohen's d=0.18-0.32
(p<0.001), demonstrating synergistic effects between temperature parameters and persona cognitive
characteristics (r=0.97, p=0.031); (3) t-SNE clustering identifies five distinct model groups—
dialogue-optimized models (Claude, GPT-4o series), reasoning-specialized models (o1, DeepSeek-R1
series), and multilingual models—with consistency scores ranging 0.746-0.886; (4) Text genre
significantly influences emotion correlations (allegorical: r=-0.19; narrative: r=-0.70; poetic: r=-
0.66, all p<0.001), reflecting the emotional tension structure of literary genres. These findings provide
empirical evidence for LLM selection in emotion-sensitive applications: high-consistency dialogue
models for emotional support systems, reasoning-specialized models for logical analysis, and
high-sensitivity multilingual models for creative assistance.
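As a rough illustration of the statistics summarized above, the sketch below shows how the per-persona one-way ANOVA (F, p), the P3 versus P4 Cohen's d, and the three-component PCA cumulative variance could be computed with scipy and scikit-learn. The data are synthetic stand-ins, and the column names, persona labels, and DataFrame layout are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of the reported statistics (ANOVA, Cohen's d, PCA).
# Synthetic data; column names and persona labels are assumptions.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for the 4,227 experimental records (4 personas x emotion ratings)
df = pd.DataFrame({
    "persona": rng.choice(["P1", "P2", "P3", "P4"], size=4227),
    "interest": rng.normal(0.5, 0.15, 4227),
    "surprise": rng.normal(0.4, 0.15, 4227),
    "sadness": rng.normal(0.3, 0.15, 4227),
    "anger": rng.normal(0.2, 0.15, 4227),
})
emotions = ["interest", "surprise", "sadness", "anger"]

# One-way ANOVA across personas for each emotion dimension (F, p values)
for emo in emotions:
    groups = [g[emo].to_numpy() for _, g in df.groupby("persona")]
    f_val, p_val = stats.f_oneway(*groups)
    print(f"{emo}: F={f_val:.2f}, p={p_val:.3f}")

# Cohen's d between the poet (P3) and robot (P4) personas
def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

p3, p4 = df[df.persona == "P3"], df[df.persona == "P4"]
for emo in emotions:
    print(f"{emo}: d={cohens_d(p3[emo].to_numpy(), p4[emo].to_numpy()):.2f}")

# Cumulative explained variance of the first three principal components;
# this is the quantity reported as 95.5% in the abstract (the synthetic
# data here will not reproduce that value).
pca = PCA(n_components=3).fit(df[emotions])
print("cumulative variance:", pca.explained_variance_ratio_.cumsum())
```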