Abstract
The purpose of our study is to evaluate the quality of synthesized speech and noise by measuring the mismatch negativity (MMN) component of event-related potentials (ERPs). The proposed synthesis superposes six channels, each obtained by sinusoidal interpolation between the extrema of a band-pass-filtered waveform; each passband is one octave wide, and together the passbands span 80 to 5,120 Hz. Subjects were presented with a repetitive standard stimulus, the male syllable /ki/, which was randomly replaced, with a probability of 14.3%, by its synthesized counterpart. No MMN responses were observed in this task, suggesting that the extremal data play a key role in synthesizing a phonetically equivalent acoustic structure. When discontinuous or distorted interpolation was applied to the syllable and to the mechanical noise of a door slamming, and these served as deviant stimuli, only the degraded syllable elicited an MMN. We interpret this result as evidence of the superiority of speech perception over noise perception.
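The abstract describes the synthesis only at a high level. The sketch below is one possible reading of the procedure, not the authors' implementation: it assumes fourth-order Butterworth octave-band filters, local extrema detected per band, and half-cosine segments between consecutive extrema; the function names, filter order, and sampling-rate handling are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, argrelextrema


def octave_bands(low=80.0, n_bands=6):
    """(low, high) edges of n_bands one-octave passbands: 80-160, ..., 2560-5120 Hz."""
    edges = low * 2.0 ** np.arange(n_bands + 1)
    return list(zip(edges[:-1], edges[1:]))


def sinusoidal_interp(x):
    """Rebuild one band by half-cosine (sinusoidal) interpolation between
    consecutive local extrema of the band-limited waveform (assumed reading)."""
    peaks = argrelextrema(x, np.greater)[0]
    troughs = argrelextrema(x, np.less)[0]
    ext = np.unique(np.concatenate(([0], peaks, troughs, [len(x) - 1])))
    y = np.zeros_like(x)
    for i0, i1 in zip(ext[:-1], ext[1:]):
        t = np.arange(i0, i1 + 1) - i0
        v0, v1 = x[i0], x[i1]
        # half-cosine segment running from v0 at sample i0 to v1 at sample i1
        y[i0:i1 + 1] = (v0 + v1) / 2 + (v0 - v1) / 2 * np.cos(np.pi * t / (i1 - i0))
    return y


def synthesize(signal, fs):
    """Six-channel synthesis: octave-band filtering, extremum-based
    sinusoidal interpolation per channel, then superposition of the channels."""
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in octave_bands():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal.astype(float))
        out += sinusoidal_interp(band)
    return out
```

As a usage note, `synthesize` expects a sampling rate above 10,240 Hz (twice the highest passband edge), e.g. 16 kHz speech recordings; a "discontinuous or distorted" variant, as used for the degraded deviants, could be obtained by perturbing or dropping extrema before interpolation.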