2023 Volume 27 Issue 6 Pages 207-211
Timbre conversion of musical instrument sounds using deep neural networks (DNNs) has been studied extensively, and interest in more advanced techniques continues to grow. We propose a novel timbre-conversion algorithm based on a variational autoencoder. Because the mel-frequency cepstrum coefficient (MFCC) discards fine spectral detail, such a system must be able to predict the amplitude spectrogram from the MFCC. This research aims to build a DNN-based decoder that takes the MFCC and the time-frame-wise total amplitude as inputs and predicts the amplitude spectrogram. Experiments on a musical instrument sound dataset show that a decoder incorporating bidirectional long short-term memory (BiLSTM) yields accurate predictions of amplitude spectrograms.
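To illustrate the decoder described above, the following is a minimal NumPy sketch of a BiLSTM-based mapping from per-frame MFCCs plus a total-amplitude feature to a non-negative amplitude spectrogram. All layer sizes, the softplus output nonlinearity, and the random initialization are illustrative assumptions, not the paper's actual architecture or trained weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x, W, U, b, h0, c0):
    """Run a single-direction LSTM over x (T, d_in); return hidden states (T, d_h)."""
    T, d_h = x.shape[0], h0.shape[0]
    h, c = h0, c0
    out = np.zeros((T, d_h))
    for t in range(T):
        z = W @ x[t] + U @ h + b                  # stacked gates: [i, f, g, o]
        i = sigmoid(z[:d_h])                      # input gate
        f = sigmoid(z[d_h:2 * d_h])               # forget gate
        g = np.tanh(z[2 * d_h:3 * d_h])           # candidate cell state
        o = sigmoid(z[3 * d_h:])                  # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        out[t] = h
    return out

def bilstm_decoder(mfcc, amp, params):
    """Predict a (T, n_bins) amplitude spectrogram from MFCC (T, n_mfcc)
    and frame-wise total amplitude (T, 1): BiLSTM -> linear -> softplus."""
    x = np.concatenate([mfcc, amp], axis=1)            # frame-wise input features
    h_f = lstm_forward(x, *params["fwd"])              # forward pass over time
    h_b = lstm_forward(x[::-1], *params["bwd"])[::-1]  # backward pass, re-reversed
    h = np.concatenate([h_f, h_b], axis=1)             # (T, 2 * d_h)
    logits = h @ params["W_out"] + params["b_out"]
    return np.log1p(np.exp(logits))                    # softplus keeps amplitudes >= 0

def init_params(d_in, d_h, n_bins, rng):
    """Random (untrained) parameters; in practice these are learned by regression."""
    def direction():
        return (rng.standard_normal((4 * d_h, d_in)) * 0.1,
                rng.standard_normal((4 * d_h, d_h)) * 0.1,
                np.zeros(4 * d_h), np.zeros(d_h), np.zeros(d_h))
    return {"fwd": direction(), "bwd": direction(),
            "W_out": rng.standard_normal((2 * d_h, n_bins)) * 0.1,
            "b_out": np.zeros(n_bins)}

rng = np.random.default_rng(0)
T, n_mfcc, d_h, n_bins = 20, 13, 8, 65                 # toy sizes, chosen for the demo
params = init_params(n_mfcc + 1, d_h, n_bins, rng)
spec = bilstm_decoder(rng.standard_normal((T, n_mfcc)),
                      rng.standard_normal((T, 1)), params)
print(spec.shape)  # (20, 65): one amplitude spectrum per time frame
```

The bidirectional pass lets each predicted frame condition on both past and future context, which is the motivation for using BiLSTM rather than a unidirectional LSTM in this spectrogram-prediction task.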