2025, Vol. 40, No. 5, p. B-P11_1-12
Attention mechanisms have played a crucial role in the success of Transformer models, as seen in platforms like ChatGPT. However, because they compute attention weights from relationships between only one or two object types, they fail to effectively capture multi-object relationships in real-world scenarios, resulting in low prediction accuracy. In particular, they cannot calculate attention weights among diverse object types, such as the ‘comments,’ ‘replies,’ and ‘subjects’ that naturally constitute conversations on platforms like Reddit or X, even though these relationships are observed simultaneously in real-world contexts. To overcome this limitation, we introduce the Tensorized Attention Model (TAM), which leverages Tucker decomposition to calculate attention weights across multiple object types and seamlessly integrates them into Transformer models. Evaluations show that TAM significantly outperforms existing encoder methods, and its integration into the LoRA adapter for Llama2 improves fine-tuning accuracy.
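The abstract does not spell out how Tucker decomposition yields attention weights over three object types, so the following is a minimal illustrative sketch, not the paper's actual TAM implementation. It assumes a trilinear scoring scheme in which a learned core tensor couples low-rank projections of three object sets (here hypothetically named comments, replies, and subjects); the class name, rank, and normalization choice are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class TuckerAttentionSketch(nn.Module):
    """Hypothetical sketch of Tucker-style attention among three object
    types. Illustrates the general idea only; the paper's TAM may differ."""

    def __init__(self, d_model: int, rank: int = 16):
        super().__init__()
        # Factor matrices: project each object type into the Tucker rank space.
        self.proj_comment = nn.Linear(d_model, rank, bias=False)
        self.proj_reply = nn.Linear(d_model, rank, bias=False)
        self.proj_subject = nn.Linear(d_model, rank, bias=False)
        # Core tensor G coupling the three projected spaces.
        self.core = nn.Parameter(torch.randn(rank, rank, rank) / rank)

    def forward(self, comments, replies, subjects):
        # comments: (Nc, d), replies: (Nr, d), subjects: (Ns, d)
        c = self.proj_comment(comments)  # (Nc, rank)
        r = self.proj_reply(replies)     # (Nr, rank)
        s = self.proj_subject(subjects)  # (Ns, rank)
        # Trilinear scores: A[i,j,k] = sum_{p,q,r} G[p,q,r] c[i,p] r[j,q] s[k,r]
        scores = torch.einsum('pqr,ip,jq,kr->ijk', self.core, c, r, s)
        # Normalize jointly over the (reply, subject) axes for each comment
        # (one of several plausible normalization choices).
        weights = torch.softmax(scores.flatten(1), dim=-1).view_as(scores)
        # Attend over reply+subject pair values to update comment states.
        pair_values = replies.unsqueeze(1) + subjects.unsqueeze(0)  # (Nr, Ns, d)
        out = torch.einsum('ijk,jkd->id', weights, pair_values)     # (Nc, d)
        return out, weights


# Example usage with random embeddings for the three object types.
attn = TuckerAttentionSketch(d_model=64, rank=8)
out, w = attn(torch.randn(5, 64), torch.randn(7, 64), torch.randn(3, 64))
print(out.shape, w.shape)  # torch.Size([5, 64]) torch.Size([5, 7, 3])
```

The core tensor keeps the parameter count at rank³ rather than d³, which is the usual motivation for Tucker-style factorization; how such a module is wired into a Transformer block or a LoRA adapter is specific to the paper and not reproduced here.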