In this study, we propose a method for extracting named entities and relations from unstructured text based on table representations. To extract named entities, the proposed method computes representations for entity mentions and long-range dependencies using contextualized representations, without hand-crafted features or complex neural network architectures. To extract relations, it applies a tensor dot product to predict all relation labels simultaneously, without considering dependencies among relation labels. These design choices significantly simplify the proposed model and the associated algorithm for extracting named entities and relations. Despite this simplicity, experimental results demonstrate that the proposed approach outperformed state-of-the-art methods on multiple datasets. Compared with existing table-filling approaches, the proposed method achieved high performance solely by predicting relation labels independently. In addition, we found that incorporating dependencies among relation labels into the system yielded little performance gain, indicating the effectiveness and sufficiency of the tensor dot-product mechanism for relation extraction in the proposed architecture. We also performed experimental analyses to explore the benefits that joint training with named entity recognition brings to relation extraction in our design. We concluded that joint training with named entity recognition assists relation extraction by improving the span-level representations of entities.
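To make the tensor dot-product scoring concrete, the following is a minimal sketch of how every cell of the relation table can be scored in a single operation and each label predicted independently. It is an illustration under assumed shapes and names (`TensorDotRelationScorer`, `head_proj`, `tail_proj`, `label_tensor`, and all dimensions are hypothetical), not the authors' implementation.

```python
import torch

class TensorDotRelationScorer(torch.nn.Module):
    """Illustrative tensor dot-product scorer for table-filling relation extraction."""

    def __init__(self, d_model: int, d_pair: int, n_labels: int):
        super().__init__()
        # Separate projections for the "head" and "tail" roles of each token/span.
        self.head_proj = torch.nn.Linear(d_model, d_pair)
        self.tail_proj = torch.nn.Linear(d_model, d_pair)
        # One weight matrix per relation label, stacked into a single tensor.
        self.label_tensor = torch.nn.Parameter(
            torch.randn(n_labels, d_pair, d_pair) * 0.02
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: contextualized representations, shape (seq_len, d_model)
        head = self.head_proj(h)   # (seq_len, d_pair)
        tail = self.tail_proj(h)   # (seq_len, d_pair)
        # Tensor dot product: a score for every (head, tail, label) cell of the table.
        scores = torch.einsum("id,ldk,jk->ijl", head, self.label_tensor, tail)
        return scores              # (seq_len, seq_len, n_labels)

# Each cell is scored independently, so labels can be predicted with a per-label
# sigmoid, without modeling dependencies among relation labels.
scorer = TensorDotRelationScorer(d_model=768, d_pair=128, n_labels=7)
h = torch.randn(12, 768)                 # e.g., encoder outputs for a 12-token sentence
probs = torch.sigmoid(scorer(h))         # (12, 12, 7) independent label probabilities
```

Because all pairwise scores are produced by one einsum-style contraction, no sequential decoding over the table is needed; this is the sense in which the abstract describes predicting all relation labels simultaneously.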