Participants in conversation take speaking turns regularly and smoothly, although who speaks, and when, is not decided in advance. These phenomena have attracted great attention in various research areas. As for processing models of turn-taking, however, only the code model, based on turn-taking signals, has been proposed so far, and several researchers have pointed out that it has serious problems. In this paper, we propose an alternative model, the
autonomous model. The basic assumptions of this model are: (1) the speaker and the hearer share a cognitive environment, including the speaker's speech, on the basis of which the hearer takes her own speech actions, and (2) unlike in the code model, the hearer need not decode the speaker's intent behind his speech actions. We predict the distributions of smooth and non-smooth transitions between speakers under the autonomous model and under the code model, compare these predictions with the distributions observed in real spoken dialogues, and show that the autonomous model accounts for the fundamental characteristics of turn-taking phenomena more precisely than the code model. We further discuss how the autonomous model accounts for the predominant occurrence of smooth transitions.