International Journal of Activity and Behavior Computing
Online ISSN: 2759-2871
Development of an EEG-Based Silent Speech Recognition Model on the Native Arabic Silent Speech Dataset Using Light BERT Architecture
Masaki Shuzo, Reiya Hiramoto, Ryoma Ishigaki, Shingo Ando, Motoki Sakai

2025, Volume 2025, Issue 2, Pages 1-16

Abstract
As part of our participation in the Silent Speech Decoding Challenge (SSDC), this study investigated the application of a lightweight BERT-based architecture for EEG-based silent speech recognition. We pre-trained a foundation model using publicly available EEG data and fine-tuned it on the SSDC dataset. The model was evaluated on six silent speech commands: “right,” “left,” “up,” “down,” “select,” and “cancel.” The average accuracy and F1 score across all eight subjects were 0.165 and 0.137, respectively. Subject 5 achieved the highest discrimination performance, with an accuracy of 0.239 and an F1 score of 0.223. However, the overall classification performance remained below 25%. The confusion matrix analysis revealed frequent misclassifications across multiple classes, highlighting the challenges of EEG-based silent speech recognition. Accuracy varied across subjects, with the highest exceeding 20% and the lowest below 10%. These findings indicate that while pre-training captured meaningful EEG signal representations, the classification accuracy after fine-tuning was limited, underscoring the difficulty of silent speech recognition using EEG. Despite these challenges, our approach provides insights into EEG-based classification and demonstrates the potential of BERT-based architectures for future research in this domain.
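As a rough illustration of the approach summarized above, and not the authors' published implementation, the sketch below shows a lightweight BERT-style Transformer encoder fine-tuned as a six-class EEG silent-speech classifier and evaluated with accuracy, macro F1, and a confusion matrix. The channel count, window length, layer sizes, and class names are illustrative assumptions; the example uses PyTorch with scikit-learn metrics.

```python
# Minimal sketch (assumptions, not the paper's exact model): a lightweight
# BERT-style Transformer encoder with a classification head for six silent
# speech commands, evaluated with accuracy, macro F1, and a confusion matrix.
import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

N_CHANNELS, N_SAMPLES, N_CLASSES = 14, 256, 6  # assumed EEG shape and label set


class LightEEGBert(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Project each EEG time step (all channels at once) into the model dimension.
        self.embed = nn.Linear(N_CHANNELS, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        self.pos = nn.Parameter(torch.zeros(1, N_SAMPLES + 1, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, N_CLASSES)  # fine-tuning classification head

    def forward(self, x):                      # x: (batch, channels, samples)
        x = self.embed(x.transpose(1, 2))      # -> (batch, samples, d_model)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])              # classify from the [CLS] position


# Toy evaluation on random data, mirroring the reported metrics.
model = LightEEGBert().eval()
x = torch.randn(32, N_CHANNELS, N_SAMPLES)
y_true = torch.randint(0, N_CLASSES, (32,))
with torch.no_grad():
    y_pred = model(x).argmax(dim=1)
print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred))
```

Classifying from a prepended [CLS]-like token mirrors the BERT convention of pooling sequence information into a single position; a pre-trained encoder could be loaded into this backbone before fine-tuning the classification head.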
© 2025 Author

This article is licensed under a Creative Commons [Attribution 4.0 International] License.
https://creativecommons.org/licenses/by/4.0/deed.ja