Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 3A1-GS-6-03

Time-aware Language Model using Multi-task Learning
*Hikari FUNABIKI, Lis Kanashiro PEREIRA, Mayuko KIMURA, Masayuki ASAHARA, Ayako OCHI, Fei CHENG, Ichiro KOBAYASHI

Abstract

Temporal event understanding is helpful in many downstream natural language processing tasks. Understanding time requires commonsense knowledge of the various temporal aspects of events, such as duration and temporal order. However, expressions that directly convey such temporal knowledge are often omitted from sentences. Our goal is therefore to construct a general-purpose language model for temporal commonsense understanding in Japanese. In this study, we conducted multi-task learning on several temporal tasks. In particular, we used the English temporal commonsense dataset MC-TACO translated into Japanese, in addition to other temporal classification tasks covering tense, time span, temporal order, and factuality. We employed both a multilingual language model and a Japanese language model as the text encoder. Our experimental results showed that the choice of tasks for multi-task training, as well as the choice of language model, plays an important role in improving the overall performance on the tasks.

© 2023 The Japanese Society for Artificial Intelligence