The Proceedings of the Conference on Information, Intelligence and Precision Equipment (IIP)
Online ISSN : 2424-3140
Session ID: IIP-E4-1

Construction of a Timbre Estimation AI Focusing on Tonotopy in Compound Tones Using fMRI
*楠元 惇ノ介, 芝田 京子, 佐藤 公信

Abstract

The purpose of this study is to establish a decoding technique that estimates the sounds heard by humans from fMRI images using deep learning. The sounds we hear usually have a distinctive timbre. Timbre is determined by the combination of sound pressure levels of the overtones in a compound tone, i.e., by the frequency spectrum. Previous studies have suggested that tonotopy, a specific pattern of activation in the auditory cortex, is influenced by the frequency spectrum. In this report, we focus on tonotopy and estimate timbre from fMRI images using deep learning. Four timbres were prepared, and binary classifiers were trained for each pair of timbres using sounds at four pitches, yielding six classifiers. When these classifiers were used for estimation, the maximum estimation rate for untrained data was 67.22%, and the average estimation rate over the four timbres was 45.31%, far exceeding the chance level of 25.00% for four-class classification. These results suggest that the proposed estimation method is useful. However, issues remain for untrained data that includes untrained pitches.
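The abstract describes a pairwise (one-vs-one) scheme: with four timbre classes, a binary classifier is trained for every pair, giving C(4,2) = 6 classifiers whose votes are combined into a four-class decision with a chance level of 25%. The sketch below illustrates only that combination scheme; the network architecture, fMRI preprocessing, and voxel selection used in the paper are not given in the abstract, so the placeholder MLPClassifier, the constant N_VOXELS, and the synthetic data here are assumptions, not the authors' method.

```python
# Minimal sketch of a pairwise (one-vs-one) timbre classification scheme:
# four classes -> C(4,2) = 6 binary classifiers, combined by majority vote.
# The actual model and fMRI features from the paper are unknown; the MLP and
# voxel count below are placeholders for illustration only.
from itertools import combinations
import numpy as np
from sklearn.neural_network import MLPClassifier

N_CLASSES = 4      # four timbres
N_VOXELS = 500     # hypothetical number of auditory-cortex voxels per scan

def train_pairwise(X, y):
    """Train one binary classifier per pair of timbre classes (6 in total)."""
    classifiers = {}
    for a, b in combinations(range(N_CLASSES), 2):
        mask = np.isin(y, [a, b])
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
        clf.fit(X[mask], y[mask])
        classifiers[(a, b)] = clf
    return classifiers

def predict_by_voting(classifiers, X):
    """Each binary classifier votes; the timbre with the most votes wins."""
    votes = np.zeros((X.shape[0], N_CLASSES), dtype=int)
    for (a, b), clf in classifiers.items():
        for i, p in enumerate(clf.predict(X)):
            votes[i, p] += 1
    return votes.argmax(axis=1)

if __name__ == "__main__":
    # Synthetic stand-in data: real inputs would be fMRI activation patterns.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, N_VOXELS))
    y = rng.integers(0, N_CLASSES, size=200)
    clfs = train_pairwise(X[:160], y[:160])
    acc = (predict_by_voting(clfs, X[160:]) == y[160:]).mean()
    print(f"voting accuracy on held-out scans: {acc:.2%}  (chance = 25.00%)")
```

Majority voting is only one way to fuse the six pairwise decisions; the paper may combine its classifiers differently, but the chance level of 25.00% for the final four-class decision is the same either way.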

© 2025 The Japan Society of Mechanical Engineers