Abstract
Japan's population is aging rapidly. Concurrently, the number of patients with speech disorders increases every year, and the incidence rises with age. People with speech disorders have difficulty communicating in daily conversation. They can often communicate using speech substitutes, but these typically do not provide a sufficient frequency range to be understood in conversation. We have therefore proposed a speech support system based on body-conducted speech recognition. The system retrieves speech from body-conducted sound via a transfer function, using recognition to select a sub-word sequence and its duration. In this study, we demonstrate the effectiveness of producing clear body-conducted speech using linear predictive coefficients instead of a transfer function. Instead of dividing body-conducted speech into syllables heuristically, as in past studies, we apply continuous sub-word recognition to segment it automatically. To confirm the improvement in the generated speech, a jury test and an articulatory feature analysis were employed.
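To illustrate the kind of LPC-based processing referred to above, the following is a minimal sketch, not the authors' actual procedure: it estimates linear predictive coefficients per frame with the Levinson-Durbin recursion and re-synthesizes a body-conducted frame from its residual using the spectral envelope of a clean reference frame. The frame length, LPC order, and cross-synthesis strategy are illustrative assumptions not specified in the abstract.

```python
# Minimal LPC cross-synthesis sketch (assumed parameters; not the paper's exact method).
import numpy as np
from scipy.signal import lfilter

def lpc(frame, order):
    """Estimate LPC coefficients of one frame via the autocorrelation method
    and the Levinson-Durbin recursion.
    Returns (a, gain) with A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    r[0] += 1e-9  # small floor to avoid division by zero on silent frames
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a, np.sqrt(max(err, 1e-12))

def enhance_frame(body_frame, clean_frame, order=16):
    """Keep the excitation (LPC residual) of the body-conducted frame,
    but impose the spectral envelope estimated from a clean reference frame."""
    a_body, _ = lpc(body_frame, order)
    a_clean, g_clean = lpc(clean_frame, order)
    residual = lfilter(a_body, [1.0], body_frame)   # inverse-filter body-conducted speech
    return lfilter([g_clean], a_clean, residual)    # re-synthesize with the clean envelope

if __name__ == "__main__":
    sr = 16000
    t = np.arange(0, 0.032, 1.0 / sr)
    clean = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2400 * t)
    body = np.sin(2 * np.pi * 200 * t)              # high frequencies lost in body conduction
    window = np.hanning(len(body))
    out = enhance_frame(body * window, clean * window)
    print(out.shape)
```

In the proposed system the clean envelope would come from the sub-word sequence selected by continuous recognition rather than from a parallel clean recording; that selection step is not shown here.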