Abstract
In a previous report, we proposed a speech production substitute using a pen tablet and a wearable PC. Evaluation tests showed that when users traced the tablet with a pen, listeners perceived some consonants in the continuous speech sounds even though those consonants were not phonetically present. Furthermore, it was quite easy to produce non-verbal expression by varying the movement velocity and the touching rhythm. Our speech production method is an effective communication tool for people with aphasia or with articulation difficulties, enabling them to communicate with their families as well as during speech rehabilitation. In this paper, two pitch control methods were added to the prototype so that users can produce richer emotional expression. One is an intonation production method that uses a touch panel with a pressure sensor; the other is a song production method that uses a MIDI (Musical Instrument Digital Interface) keyboard. After a short training period, users could produce emotional expressions such as laughter and surprise. Furthermore, a well-known Japanese song could be produced using the pen tablet together with the MIDI keyboard. The evaluation test indicates that our new models, which add intonation and melody, are effective for communication with family members and during speech rehabilitation.