Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Learning to Compose Distributed Representations of Relational Patterns
Sho Takase, Naoaki Okazaki, Kentaro Inui

2017 Volume 32 Issue 4 Pages D-G96_1-11

Abstract

Learning distributed representations for relation instances is a central technique in downstream NLP applications. In particular, semantic modeling of relations and their textual realizations (relational patterns) is important because a relation (e.g., causality) can be mentioned in various expressions (e.g., “X cause Y”, “X lead to Y”, “Y is associated with X”). Nevertheless, previous studies have paid little attention to explicitly evaluating the semantic modeling of relational patterns. To address this, this study constructs a new dataset that provides multiple similarity ratings for every pair of relational patterns in the existing dataset [Zeichner 12]. Following the annotation guideline of [Mitchell 10], the new dataset achieves high inter-annotator agreement. We also present Gated Additive Composition (GAC), an enhancement of additive composition with a gating mechanism for composing distributed representations of relational patterns. In addition, we conduct a comparative study of different encoders, including additive composition, RNN, LSTM, GRU, and GAC, on the constructed dataset. Moreover, we apply the distributed representations of relational patterns to a relation classification task in order to examine the usefulness of the dataset and the representations in a different application. Experiments show that the new dataset not only enables detailed analyses of the different encoders but also provides a gauge for predicting the success of distributed representations of relational patterns in the relation classification task.
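To make the idea of gated additive composition concrete, below is a minimal NumPy sketch of the general mechanism: instead of simply summing the word vectors of a relational pattern (plain additive composition), an element-wise gate decides how much each word contributes to the running pattern vector. The gate parameterization here (`W_i`, `W_h`, `b`, conditioning on the current word and the running state) is an illustrative assumption for exposition; the paper defines GAC precisely.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_additive_composition(word_vecs, W_i, W_h, b):
    """Compose a sequence of word vectors into one pattern vector.

    Plain additive composition would be h = sum(word_vecs); here an
    element-wise gate g in (0, 1) scales each word's contribution.
    NOTE: this gate form is an assumption, not the paper's exact GAC.
    """
    d = word_vecs[0].shape[0]
    h = np.zeros(d)                         # running composed representation
    for x in word_vecs:
        g = sigmoid(W_i @ x + W_h @ h + b)  # element-wise gate
        h = h + g * x                       # gated additive update
    return h

# Toy usage: compose a three-word relational pattern such as "lead to Y".
rng = np.random.default_rng(0)
d = 4
W_i = rng.normal(size=(d, d))
W_h = rng.normal(size=(d, d))
b = np.zeros(d)
pattern = [rng.normal(size=d) for _ in range(3)]
print(gated_additive_composition(pattern, W_i, W_h, b))
```

In an evaluation like the one described above, vectors composed this way for two relational patterns would be compared (e.g., by cosine similarity) against the human similarity ratings in the dataset.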

© The Japanese Society for Artificial Intelligence 2017