Gaze alternation by infants, the behavior of alternately gazing at a caregiver and at a particular object, has been shown to be related to the development of intentional agency. Intentional agency is defined as acting with a desired goal and a means of attaining it. In this paper, we adopt the theoretical hypothesis that infants understand others' intentions on the basis of intentional agency, and we consider how to construct a computational model of intentional agency. We designed a model of an infant agent that acquires gaze alternation through interaction with caregivers, based on a reflex behavior and an emotional behavior. First, the agent learns visual orientation, the reflex behavior of gazing at a target in the center of the visual field. Building on visual orientation, the agent then learns to gaze in the same direction as the caregiver's focus. This learning is implemented with an association module connected in series with the visual orientation module. In the model, the agent associates the caregiver's focus with an object and orients its eyes to gaze at that object. This behavior uses visual orientation as a means to attain the agent's goal of gazing at the same object, and the internal states composed of goals and means are regarded as intentional agency. Second, we add two emotional states, ease and anxiety, to relate an emotional behavior to the serial architecture that acquires intentional agency. The agent looks back at the caregiver when it enters the anxiety state, and this emotional behavior provides the infant agent with opportunities to interact with caregivers. Finally, we discuss how intentional agency can function as a basis for understanding others' intentions. Through this discussion, we propose that a nested structure of intentional agency between self and other is a primitive mechanism underlying the understanding of others' intentions and shared intentionality.
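The serial architecture described above can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the class names, the dictionary-based association rule, and the anxiety threshold are all assumptions introduced for clarity.

```python
class VisualOrientationModule:
    """Reflex behavior: orient gaze toward a target in the visual field."""

    def orient(self, target_direction):
        # Reflexively gaze at whatever direction is given.
        return target_direction


class AssociationModule:
    """Learns associations between the caregiver's focus and object locations.
    (A lookup table stands in for whatever learning rule the model uses.)"""

    def __init__(self):
        self.assoc = {}  # caregiver gaze direction -> object direction

    def learn(self, caregiver_gaze, object_direction):
        self.assoc[caregiver_gaze] = object_direction

    def predict(self, caregiver_gaze):
        return self.assoc.get(caregiver_gaze)


class InfantAgent:
    """Serial architecture: the association module feeds the visual
    orientation reflex. The goal (gaze at the caregiver's object) plus the
    means (visual orientation) constitute intentional agency; the anxiety
    state triggers looking back at the caregiver."""

    def __init__(self, anxiety_threshold=0.7):
        self.orienting = VisualOrientationModule()
        self.association = AssociationModule()
        self.anxiety = 0.0  # 0.0 = ease; rises toward 1.0 = anxiety
        self.anxiety_threshold = anxiety_threshold

    def step(self, caregiver_gaze):
        # Emotional behavior: when anxious, look back at the caregiver,
        # which creates an opportunity for interaction and restores ease.
        if self.anxiety > self.anxiety_threshold:
            self.anxiety = 0.0
            return "caregiver"
        # Intentional agency: the goal is the object associated with the
        # caregiver's focus; the means is the visual orientation reflex.
        goal = self.association.predict(caregiver_gaze)
        if goal is not None:
            return self.orienting.orient(goal)
        # Before any association is learned, fall back to the bare reflex.
        return self.orienting.orient(caregiver_gaze)
```

Alternating between the anxiety-driven look-back and the goal-directed gaze reproduces, in toy form, the gaze alternation the model is meant to acquire.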
We constructed a science class in which Japanese university freshmen experienced scientific activities. The class topic was the psychology of discovery: students discovered laws and regularities in phenomena through experiments and constructed explanations of why the phenomena appeared. In general, students face many difficulties in constructing explanations in student-centered learning. First, we identified the factors that hinder students' construction of explanations through a review of preceding studies and our own empirical investigation. We then designed a class in which students collaboratively construct explanations using the jigsaw method to overcome these difficulties. As a result, almost all students successfully constructed the explanation. In the students' collaborative activities, we observed students monitoring other members' explanations and referring to knowledge that other members had acquired. These collaborative activities helped the students overcome the difficulties, leading to successful construction of explanations. In conclusion, we propose the following design principle for collaborative learning: construct a jigsaw group in which the knowledge needed for the explanation is distributed among the group members according to an understanding of the structure of the explanation based on task analysis, and have the students collaboratively engage in constructing explanations.
Feature integration theory and parallel processing models have described how and when multiple features of objects are processed and integrated. However, these studies dealt only with the integration of physical features such as the color or orientation of bars, whereas we can instantaneously identify not only physical information but also cognitive information. Here, we studied the integration of a physical feature and a cognitive feature using event-related potentials. Colored numerals were presented serially on a monitor, and participants had to determine their response to each stimulus according to its color (red or black; Color condition), its numerical attribute (odd or even; Number condition), or the combination of these features (Conjunction condition). N100 and N200 were chosen as indices reflecting selective attention, and P300 was employed to assess stimulus evaluation time and cognitive effort. Results showed that the amplitudes of N100 and N200 in the Conjunction condition did not differ from those in the Number condition but were larger than those in the Color condition. In addition, response time and P300 latency in the Conjunction condition did not differ from those in the Color condition but were shorter than those in the Number condition. These findings indicate that the physical feature is processed automatically even when it is not relevant to the task; in other words, the physical feature and the cognitive feature are processed in parallel.
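The three response rules can be stated compactly in code. This is an illustrative sketch only: the abstract does not specify which feature values were targets, so the choice of "red" and "odd" as targets is an assumption, and a toy mapping of course cannot capture the ERP measures themselves.

```python
def target_response(color, numeral, condition):
    """Return True if the stimulus calls for the target response under the
    given task condition. Target values ("red", odd) are assumed for
    illustration; the original study did not specify them here."""
    if condition == "color":
        # Color condition: respond on the physical feature alone.
        return color == "red"
    if condition == "number":
        # Number condition: respond on the cognitive feature (parity).
        return numeral % 2 == 1
    if condition == "conjunction":
        # Conjunction condition: both features must be integrated.
        return color == "red" and numeral % 2 == 1
    raise ValueError(f"unknown condition: {condition}")
```

The Conjunction rule is the logical conjunction of the other two, which is exactly why the ERP comparison across the three conditions speaks to whether the two features are processed serially or in parallel.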
In this experiment, one-digit arithmetic problems followed by a masking sound were presented auditorily from a computer. Three types of calculation tasks (addition tasks, multiplication tasks, and kuku tasks, i.e., recitation of the Japanese multiplication table) were tested under two conditions that varied the position of the masked number in a given formula. For example, when the left-hand side of a formula consisted of the numbers 6 and 7, the addition task was presented as “6 + X (masking sound) = 13” (roku, tasu, X, wa, juusan) or as “X + 7 = 13”, the multiplication task was presented as “6 × X = 42” (roku, kakeru, X, wa, yonjuuni) or as “X × 7 = 42”, and the kuku task was presented as “roku, X, shijuuni” or as “X, shichi, shijuuni”. For each stimulus, each of the 20 participants (10 men and 10 women) was required to respond by answering the missing number. The results revealed that participants answered faster in the kuku tasks than in the addition tasks. This indicates the possibility that calculation by kuku is mostly executed through a process similar to the playback of verbal memory stored as linguistic representations, and that while solving kuku tasks, quantitative representations of numbers scarcely come to the surface. Thus, Japanese adults answered faster in the kuku tasks than in the addition tasks.
Many linguistic studies have revealed that sign language is a fully grammaticized language, not a pantomimic communication system. This paper reviews investigations of sign language acquisition and examines how sign language studies contribute to explaining the human capacity for language and the limits of language learning. First, the human capacity for language creation is discussed through a review of research on deaf people with no or inconsistent language input. Second, the paper discusses whether late language acquisition affects the ability to produce and comprehend a range of syntactic and morphological structures, by assessing the sign language abilities of Deaf people from various language environments. The results showed that sign language outcomes were sensitive not to the timing of exposure to sign language per se but to the timing of first language acquisition. Finally, the sign language acquisition process in Deaf children is compared to the spoken language acquisition process in hearing children. Language modality plays a very minor role in how children acquire language, because the developmental course of language is very similar in the two modalities. These findings teach us about the nature of human language and cognition.