2025, Vol. 32, No. 2, pp. 598-632
In the context of Real World Understanding (RWU) for vision and language (V&L) models, accurately aligning language with the corresponding visual scene is critical. Since current models typically assume language inputs to be plain text, RWU faces potential issues with structural ambiguity, where a single sentence can have multiple meanings depending on its phrase structure (e.g., "She saw the man with the telescope," where the prepositional phrase may attach to either the verb or the noun). This paper proposes using linguistic formalisms as input, which enrich the language information and address the issue of structural ambiguity. We focus on the Contrastive Language-Image Pre-training (CLIP) model, a prominent V&L model, and on RWU image-discrimination tasks. Our experiments test various approaches to incorporating a formalism into the CLIP model, varying both the type of formalism and how it is processed. We aim to determine how effective formalisms are at discriminating ambiguous images and to identify which formalism works best. Additionally, we employ a gradient-based method to gain insights into how the formalism is interpreted within the model's architecture.
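The image-discrimination setting described above rests on CLIP's contrastive scoring: image and text are embedded into a shared space, and candidate captions are ranked by cosine similarity to the image embedding. The following is a minimal NumPy sketch of that scoring step only, with random stand-in embeddings; the function name, embedding dimension, and temperature value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def clip_style_scores(image_emb, text_embs, temperature=0.07):
    """Rank candidate captions for one image by cosine similarity,
    turned into a probability distribution, in the style of CLIP's
    contrastive matching. All inputs here are stand-in embeddings."""
    # L2-normalize so the dot product equals cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Temperature-scaled logits, then a numerically stable softmax.
    logits = txt @ img / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Illustrative usage with random embeddings (dimension 512 is assumed).
rng = np.random.default_rng(0)
image = rng.normal(size=512)
captions = rng.normal(size=(3, 512))  # e.g., one caption per parse of an ambiguous sentence
probs = clip_style_scores(image, captions)
```

In the paper's setting, each candidate text could correspond to one reading of a structurally ambiguous sentence, so the highest-probability caption indicates which parse the model aligns with the image.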