Paper ID: e24.128
The analysis of articulatory movements, particularly tongue movements, is a key challenge in speech research. However, traditional methods for extracting tongue contours often struggle with poor image quality and noise interference, complicating tongue motion analysis. This study proposes a novel approach that uses DeepLabCut (DLC), a deep-learning-based markerless tracking tool, to automatically extract the contours of the tongue and other articulatory organs from ultrasound and real-time magnetic resonance imaging (rtMRI) data. Our experiments showed that DLC does not rely on image edges or contrast, making it robust to noise and enabling effective automatic contour extraction. This paper describes the method and evaluates the accuracy of contour extraction for the tongue and other articulatory organs. By leveraging advanced deep-learning techniques, we aim to deepen the understanding of articulatory movements and improve speech analysis tools, ultimately contributing to better outcomes in speech therapy and pronunciation training.
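As context for the pipeline the abstract describes, a tracker such as DLC outputs a sparse set of labeled keypoints per frame rather than a dense contour. The sketch below (not part of the paper; all names and parameters are illustrative assumptions) shows one common way such keypoints could be resampled into an evenly spaced contour by arc-length parameterization:

```python
import numpy as np

def keypoints_to_contour(points, n_samples=100):
    """Resample sparse tracked keypoints into a dense, evenly spaced contour.

    `points` is an (N, 2) array of (x, y) tongue-surface keypoints, e.g. the
    per-frame output of a DLC-style tracker (shape is an assumption here).
    Low-confidence points are assumed to be filtered out beforehand.
    """
    pts = np.asarray(points, dtype=float)
    # Order points along the tongue from root to tip by x-coordinate.
    pts = pts[np.argsort(pts[:, 0])]
    # Parameterize by cumulative arc length so samples are evenly spaced
    # along the curve rather than along the x-axis.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t_new = np.linspace(0.0, t[-1], n_samples)
    x = np.interp(t_new, t, pts[:, 0])
    y = np.interp(t_new, t, pts[:, 1])
    return np.stack([x, y], axis=1)

# Example: five hypothetical keypoints along a tongue-like arch.
kp = [(10, 50), (30, 35), (50, 30), (70, 34), (90, 48)]
contour = keypoints_to_contour(kp, n_samples=50)
```

Linear interpolation is used here for simplicity; a spline fit would give a smoother contour at the cost of possible overshoot near noisy keypoints.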