Proceedings of the Annual Conference of JSAI
Online ISSN: 2758-7347
37th (2023)
Session ID: 1U3-IS-2a-01

Semi-Autoregressive Transformer for Sign Language Production
*Ehssan WAHBI, Masayasu ATSUMI

Abstract

Sign language production (SLP) aims to generate sign language frame sequences from corresponding spoken language text sentences. Existing approaches to SLP rely either on autoregressive models, which generate the target sign frames sequentially and suffer from error accumulation and high inference latency, or on non-autoregressive models, which accelerate generation by producing all frames in parallel at the cost of output quality. To optimize the trade-off between speed and quality, we propose a semi-autoregressive model for sign language production (named SATSLP), which maintains the autoregressive property at the global scale but generates sign pose frames in parallel at the local scale, thus combining the strengths of both methods. Furthermore, we reproduced the back-translation transformer model, in which a spatio-temporal graphical skeletal structure is encoded and translated back to text for evaluation. Results on the PHOENIX14T dataset show that SATSLP outperformed the baseline autoregressive model in both speed and quality.
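The decoding scheme described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of semi-autoregressive generation, not the authors' SATSLP implementation: the decoder is called once per group (autoregressive across groups, conditioned on all previously generated frames) and emits `group_size` pose frames in parallel within each call, so a sequence of N frames needs only ceil(N / group_size) decoder calls instead of N.

```python
def semi_autoregressive_decode(step_fn, num_frames, group_size, init_frame):
    """Generate `num_frames` frames in groups of `group_size`.

    step_fn(history) must return `group_size` new frames produced by one
    parallel decoder call conditioned on the previously generated frames.
    """
    frames = [init_frame]  # seed frame, analogous to a <BOS> pose
    while len(frames) - 1 < num_frames:
        group = step_fn(frames)  # one call emits a whole group in parallel
        # Keep only as many frames as still needed (last group may overshoot).
        frames.extend(group[: num_frames - (len(frames) - 1)])
    return frames[1:]

# Toy stand-in for the transformer decoder: each new frame is derived from
# the last generated frame (a real model would predict joint coordinates).
def toy_step(history, group_size=4):
    last = history[-1]
    return [last + i + 1 for i in range(group_size)]

poses = semi_autoregressive_decode(toy_step, num_frames=10,
                                   group_size=4, init_frame=0)
```

With `group_size=1` this degenerates to ordinary autoregressive decoding, and with `group_size=num_frames` to fully non-autoregressive decoding, which is the trade-off the abstract describes.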

© 2023 The Japanese Society for Artificial Intelligence