Host: The Japan Society of Mechanical Engineers
Name: [in Japanese]
Date: June 08, 2016 - June 11, 2016
This paper addresses a Deep Neural Network (DNN) for Sound Source Identification (SSI) of acoustic signals recorded with a microphone array embedded in an Unmanned Aerial Vehicle (UAV), aiming at quick and wide-area detection of people's voices in a disaster situation. It is well known that training an SSI-DNN requires a huge dataset to achieve good performance, but building such a dataset is often unrealistic owing to the cost of human annotation. Therefore, we propose training a Partially Shared Deep Neural Network (PS-DNN) with noise-suppressed acoustic signals, which can be obtained automatically, in addition to labeled data annotated by humans. This yields more accurate SSI when the training dataset is insufficient.
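The following PyTorch sketch illustrates one way such a partially shared network could be structured: shared layers feed two task-specific branches, one trained on human-annotated identification labels and one regressing toward automatically obtained noise-suppressed signals. All names, layer sizes, feature dimensions, and the two-class (voice / non-voice) setup are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
# Minimal PS-DNN sketch (assumed architecture, not the paper's exact model).
import torch
import torch.nn as nn

class PSDNN(nn.Module):
    def __init__(self, n_features=256, n_shared=512, n_private=256, n_classes=2):
        super().__init__()
        # Layers shared by both tasks: learn a common representation
        # from noisy UAV-recorded acoustic features.
        self.shared = nn.Sequential(
            nn.Linear(n_features, n_shared), nn.ReLU(),
            nn.Linear(n_shared, n_shared), nn.ReLU(),
        )
        # Branch 1: sound source identification, supervised with
        # human-annotated labels (e.g., voice vs. non-voice).
        self.identification = nn.Sequential(
            nn.Linear(n_shared, n_private), nn.ReLU(),
            nn.Linear(n_private, n_classes),
        )
        # Branch 2: noise suppression, regressing toward automatically
        # obtained noise-suppressed features (no human labels needed).
        self.enhancement = nn.Sequential(
            nn.Linear(n_shared, n_private), nn.ReLU(),
            nn.Linear(n_private, n_features),
        )

    def forward(self, x):
        h = self.shared(x)
        return self.identification(h), self.enhancement(h)


# Joint training step: labeled frames drive the classification loss,
# unlabeled frames paired with noise-suppressed targets drive the
# regression loss; both losses update the shared layers.
model = PSDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce_loss, mse_loss = nn.CrossEntropyLoss(), nn.MSELoss()

noisy_labeled = torch.randn(8, 256)      # labeled noisy features (dummy data)
labels = torch.randint(0, 2, (8,))       # human-annotated class labels
noisy_unlabeled = torch.randn(32, 256)   # unlabeled noisy features
suppressed = torch.randn(32, 256)        # automatically noise-suppressed targets

logits, _ = model(noisy_labeled)
_, enhanced = model(noisy_unlabeled)
loss = ce_loss(logits, labels) + mse_loss(enhanced, suppressed)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The design intent this sketch tries to capture is that the shared layers receive gradients from both the small labeled set and the larger automatically generated set, which is how the PS-DNN can compensate for the shortage of annotated training data.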