2025 Volume 16 Issue 1 Pages 132-146
This study investigates the encoder-decoder model and its application to generative artificial intelligence (AI) capable of learning on edge devices. Current generative AI is built mainly on a machine learning model called the Transformer, whose core consists of the earlier encoder-decoder architecture combined with the attention mechanism. Focusing on the encoder-decoder model, we therefore implement and evaluate a sequence-transformation model, Sequence to Sequence (Seq2seq), toward generative AI that can be trained on edge devices. We evaluate the model's performance on an arithmetic task, which requires learning a common representation shared by the input and output sequences. The implementation and evaluation demonstrate that the model can perform sequence-transformation tasks. Throughout the study, we show the prospect of generative AI that can run on edge devices.
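To make the evaluation setting concrete, the sketch below generates character-level examples for an arithmetic (addition) sequence-transformation task of the kind described in the abstract. This is a minimal illustration only; the exact digit range, formatting, and vocabulary handling used in the study are assumptions.

```python
import random

def make_addition_example(max_digits=3, rng=random):
    # Sample two operands and render the problem as character sequences.
    # A Seq2seq model must map the source string (e.g. "12+345") to the
    # target string (e.g. "357"), learning a shared internal representation.
    a = rng.randrange(10 ** max_digits)
    b = rng.randrange(10 ** max_digits)
    src = f"{a}+{b}"
    tgt = str(a + b)
    return src, tgt

def build_vocab(pairs):
    # Character-level vocabulary shared by the encoder input and
    # decoder output, as both sides use the same symbol set here.
    chars = sorted({c for s, t in pairs for c in s + t})
    return {c: i for i, c in enumerate(chars)}

pairs = [make_addition_example() for _ in range(1000)]
vocab = build_vocab(pairs)
print(pairs[0], len(vocab))
```

An encoder would consume the indexed source characters into a fixed-size context vector, and a decoder would emit the target characters from it; the task is small enough to train on resource-constrained edge hardware.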