IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Speech and Image Processing, Recognition>
Robust Extraction of Desired Speaker's Utterance in Overlapped Speech
Haoze Lu, Yuma Akaiwa, Yasuo Horiuchi, Shingo Kuroiwa

2015 Volume 135 Issue 8 Pages 1009-1016

Abstract

In this paper, we propose a speaker indexing method that uses a speaker verification technique to extract the utterances of one desired speaker from conversational speech. To address the overlapped speech problem, we construct overlapped speech models from the observed conversational speech itself: a model of overlapped speech of the target speaker and a cohort speaker, and a model of overlapped speech of two cohort speakers. To evaluate the proposed method, we created simulated conversational speech in which up to 50% of the segments are overlapped. The equal error rate (EER) was reduced by up to 43.7% compared with conventional methods that use either a target speaker model alone, or a target model together with an overlapped speech model trained on a large speaker-independent speech database.
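
The abstract describes verification-style scoring of conversational segments against a target speaker model and overlapped speech models built from the conversation itself. The paper's implementation details are not given here, so the following is a minimal sketch assuming GMM-based models and synthetic feature vectors standing in for real acoustic features (e.g. MFCCs); the model names, feature dimensionality, and zero decision threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fit_gmm(features, n_components=8):
    """Fit a diagonal-covariance GMM to a (frames x dims) feature matrix."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag",
                           random_state=0).fit(features)

# Synthetic stand-ins for real feature sets (hypothetical; a real system
# would use acoustic features extracted from the conversation itself).
dim = 12
target_feats     = rng.normal(0.0, 1.0, size=(500, dim))  # target speaker
overlap_tc_feats = rng.normal(0.5, 1.2, size=(500, dim))  # target + cohort overlap
overlap_cc_feats = rng.normal(1.0, 1.2, size=(500, dim))  # cohort + cohort overlap

gmm_target     = fit_gmm(target_feats)
gmm_overlap_tc = fit_gmm(overlap_tc_feats)
gmm_overlap_cc = fit_gmm(overlap_cc_feats)

def segment_score(segment_feats):
    """Average-frame log-likelihood ratio of the target model against the
    best competing overlapped/cohort model."""
    ll_target = gmm_target.score_samples(segment_feats).mean()
    ll_rival  = max(gmm_overlap_tc.score_samples(segment_feats).mean(),
                    gmm_overlap_cc.score_samples(segment_feats).mean())
    return ll_target - ll_rival

# A segment is attributed to the target speaker when the score exceeds a
# threshold tuned on development data (e.g. at the EER operating point).
test_segment = rng.normal(0.1, 1.0, size=(200, dim))
print("accept" if segment_score(test_segment) > 0.0 else "reject")
```

In this sketch, the overlapped models compete with the target model, so a segment dominated by overlapped or cohort speech scores low even if it partly resembles the target speaker, which is the intuition behind building the competing models from the observed conversation rather than from an external database.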

© 2015 by the Institute of Electrical Engineers of Japan