International Journal of Activity and Behavior Computing
Online ISSN : 2759-2871
Native Arabic EEG-based Silent Speech Decoding Using Deep Learning Techniques
Taveena Lotey, Salini Yadav, Partha Pratim Roy

2025, Volume 2025, Issue 2, Pages 1-19

Abstract
Silent Speech Recognition (SSR) using electroencephalography (EEG) is an emerging area in brain-computer interface (BCI) research, enabling communication without vocal articulation. However, EEG-based SSR remains challenging due to low signal-to-noise ratio, inter-subject variability, and limited training data. This study explores machine learning (ML) and deep learning (DL) models for EEG-based silent speech decoding, utilizing the Native Arabic Silent Speech Dataset, which consists of EEG recordings from ten participants performing six distinct silent speech commands. A comprehensive preprocessing pipeline, including epoching, baseline correction, Independent Component Analysis (ICA) for artifact removal, and bandpass filtering, is applied to enhance signal quality. We evaluate both traditional ML classifiers (Support Vector Machines, Random Forests, and K-Nearest Neighbors) and DL models such as ShallowNet, EEGNet, Long Short-Term Memory (LSTM), and EEG-Conformer to assess their effectiveness in silent speech decoding. Our best-performing model, LSTM, achieved accuracies of 19.57% and 22.79% under cross-subject and subject-wise evaluation, respectively. The study highlights the challenges in generalizing EEG-based SSR models and the need for improved domain adaptation techniques for better classification performance. This research is part of the Silent Speech Decoding Challenge (SSDC) of the Activity and Behavior Computing (ABC) conference.
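The epoching, baseline-correction, and bandpass-filtering steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the sampling rate, channel count, epoch length, and 1-40 Hz band are assumed values for demonstration, and the ICA artifact-removal step is omitted here for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Assumed parameters -- the paper's actual recording setup may differ.
FS = 250            # sampling rate in Hz (assumption)
N_CHANNELS = 14     # number of EEG channels (assumption)
EPOCH_SEC = 2.0     # epoch length per silent-speech trial (assumption)
BASELINE_SEC = 0.2  # pre-stimulus baseline window (assumption)

def bandpass(data, low=1.0, high=40.0, fs=FS, order=4):
    """Zero-phase Butterworth bandpass filter along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def epoch_and_baseline(raw, onsets, fs=FS):
    """Cut fixed-length epochs at each stimulus onset and subtract the
    mean of the pre-stimulus baseline from every channel."""
    n_base = int(BASELINE_SEC * fs)
    n_len = int(EPOCH_SEC * fs)
    epochs = []
    for t in onsets:
        seg = raw[:, t - n_base : t + n_len]
        # Baseline correction: remove each channel's pre-stimulus mean.
        seg = seg - seg[:, :n_base].mean(axis=1, keepdims=True)
        epochs.append(seg[:, n_base:])
    return np.stack(epochs)  # shape: (n_epochs, n_channels, n_times)

# Demo on synthetic continuous EEG (10 s of noise) with two trial onsets.
rng = np.random.default_rng(0)
raw = rng.standard_normal((N_CHANNELS, FS * 10))
filtered = bandpass(raw)
epochs = epoch_and_baseline(filtered, onsets=[FS * 2, FS * 5])
print(epochs.shape)  # (2, 14, 500)
```

In practice a toolbox such as MNE-Python provides these operations (plus ICA) directly; the sketch above only makes the order of operations concrete: filter the continuous signal first, then epoch and baseline-correct each trial.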
© 2025 Author

This article is licensed under a Creative Commons [Attribution 4.0 International] License.
https://creativecommons.org/licenses/by/4.0/deed.ja