Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
37th (2023)
Session ID : 1U3-IS-2a-01

Semi-Autoregressive Transformer for Sign Language Production
*Ehssan WAHBI, Masayasu ATSUMI
Abstract

Sign language production (SLP) aims to generate sign language frame sequences from the corresponding spoken language text sentences. Existing approaches to SLP either rely on autoregressive models, which generate the target sign frames sequentially and suffer from error accumulation and high inference latency, or on non-autoregressive models, which accelerate the process by producing all frames in parallel at the cost of generation quality. To optimize the trade-off between speed and quality, we propose a semi-autoregressive model for sign language production (named SATSLP), which maintains the autoregressive property on a global scale but generates sign pose frames in parallel on a local scale, thus combining the best of both methods. Furthermore, we reproduced the back-translation transformer model, which encodes the spatio-temporal skeletal graph structure and translates it back to text for evaluation. Results on the PHOENIX14T dataset show that SATSLP outperformed the baseline autoregressive model in terms of both speed and quality.
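The decoding scheme described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' SATSLP implementation: `step_fn`, `decode_semi_autoregressive`, and the toy decoder are assumed names introduced here. The key idea is that each autoregressive step emits a group of K frames in parallel, conditioned on all frames generated so far, so the number of sequential steps drops from N to ceil(N/K).

```python
# Sketch of semi-autoregressive decoding (illustrative only, not the
# authors' code): each outer step is autoregressive over *groups*,
# while the K frames inside a group are produced in one parallel call.

def decode_semi_autoregressive(step_fn, num_frames, group_size):
    """Generate `num_frames` pose frames in groups of `group_size`.

    `step_fn(history)` stands in for a transformer decoder call: given
    the frames generated so far (the autoregressive context), it returns
    `group_size` new frames at once.
    """
    frames = []
    while len(frames) < num_frames:
        group = step_fn(frames)  # one parallel decode of a whole group
        frames.extend(group[: num_frames - len(frames)])  # trim overshoot
    return frames

# Toy stand-in for the decoder: each "frame" is just its index,
# so the output is easy to inspect.
def toy_step(history, group_size=4):
    start = len(history)
    return [start + i for i in range(group_size)]

poses = decode_semi_autoregressive(
    lambda h: toy_step(h, group_size=4), num_frames=10, group_size=4
)
```

With `group_size=1` this reduces to ordinary autoregressive decoding (10 sequential steps for 10 frames); with `group_size=4` only 3 sequential steps are needed, which is the latency saving the abstract refers to.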

© 2023 The Japanese Society for Artificial Intelligence