2020, Vol. 41, No. 4, pp. 675-685
We present an auditory scaling method for generating reverberant sounds that better match the auditory impression expected from the space shown in a 2D image. Because the conventional method uses a linear scale for the regression parameters of the reverberation characteristics, its correspondence with human perceptual scales has not been considered. We incorporate concepts from psychoacoustics, including the sound-masking effect, equal-loudness curves, and subjectively equal reverberation time, into the reverberation parameters to improve regression performance in real environments. Estimation errors with our scaling method were significantly lower than those of previously reported methods. The proposed reverb synthesis method was then evaluated in listening tests on several scenes to demonstrate its benefits. Our method can reproduce plausible reverberant sounds from 2D images and can be used in mixed- and augmented-reality applications.
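To illustrate the kind of perceptual rescaling the abstract describes, the sketch below shows two standard psychoacoustic transforms that could replace linear parameter scales: regressing on the logarithm of reverberation time rather than raw seconds, and weighting spectral energy by the IEC 61672 A-weighting curve as a stand-in for equal-loudness contours. The function names and the choice of a log-base-2 scale are illustrative assumptions, not the paper's actual formulation.

```python
import math

def perceptual_rt_scale(rt60_seconds):
    """Map reverberation time (RT60, in seconds) onto a logarithmic scale.

    Hypothetical illustration: perceived reverberance grows roughly with
    the logarithm of RT60, so regressing on log2(RT60) instead of raw
    seconds is one way to align a regression target with a human
    perceptual scale (the base-2 choice here is arbitrary).
    """
    return math.log2(rt60_seconds)

def a_weighting_db(f):
    """IEC 61672 A-weighting gain in dB at frequency f (Hz).

    A-weighting is a standard approximation of the inverse
    equal-loudness contour at moderate listening levels; by
    construction it is ~0 dB at 1 kHz.
    """
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.0
```

For example, doubling RT60 adds a constant step on the log scale regardless of the starting value, which matches the intuition that the jump from 0.5 s to 1 s of reverberation is perceptually comparable to the jump from 1 s to 2 s.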