The present study examined whether second language (L2) visual word recognition is affected by the first language (L1) word-processing pattern. Participants were divided into two groups according to their L1: Japanese native speakers and Chinese native speakers. Both groups performed a primed lexical-decision task with English words. The stimulus onset asynchrony (SOA) between prime and target stimuli differed between Experiment 1 (200 ms) and Experiment 2 (800 ms). In Experiment 1, the Japanese group showed a strong facilitation effect with phonological primes, and the Chinese group showed a strong facilitation effect with orthographic primes, on L2 English word recognition. A similar pattern of results emerged in Experiment 2. These results suggest that the L1 word-processing pattern may be automatically activated during L2 English word recognition; in other words, the cognitive strategies of the L1 constrain the shaping of the L2 word-processing pattern.
In this article, we propose a mathematical model for analyzing the distribution of reaction times and for estimating which type of visual search process is observed, such as the memory-driven or memory-free types described by Horowitz and Wolfe (2003). The model is a convolution of the Weibull and Gaussian distributions (the Weibull-Gaussian model). We conducted visual search experiments and fitted the model to the obtained data. Model comparison revealed that the Weibull-Gaussian model was more appropriate than the ex-Gaussian model. Applying the model to the data revealed that the parameter indicating the type of process changed within a session. These results indicate that visual search processes are not fixed to a single type.
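The convolution model above can be sketched numerically. The following is a minimal illustration, assuming synthetic reaction-time data and maximum-likelihood fitting of the Weibull-Gaussian density by direct numerical integration; the parameter values, sample size, and integration grid are arbitrary choices for illustration, not those of the original study.

```python
import numpy as np
from scipy import stats, optimize
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
# synthetic reaction times (ms): Weibull "search" stage plus Gaussian "residual" stage
rt = stats.weibull_min.rvs(2.0, scale=300.0, size=200, random_state=rng) \
    + rng.normal(400.0, 40.0, 200)

def neg_loglik(params, t):
    shape, scale, mu, sigma = params
    if min(shape, scale, sigma) <= 0:
        return np.inf
    # Weibull-Gaussian density: f(t) = integral of weibull(s) * normal(t - s) ds,
    # approximated on a fixed grid over the Weibull component s
    s = np.linspace(1e-3, t.max(), 600)
    w = stats.weibull_min.pdf(s, shape, scale=scale)
    dens = trapezoid(w * stats.norm.pdf(t[:, None] - s, mu, sigma), s, axis=1)
    return -np.sum(np.log(dens + 1e-300))

start = np.array([1.5, 250.0, 350.0, 50.0])
fit = optimize.minimize(neg_loglik, start, args=(rt,),
                        method="Nelder-Mead", options={"maxiter": 3000})
shape_hat, scale_hat, mu_hat, sigma_hat = fit.x
```

In the same framework, the ex-Gaussian competitor would simply replace the Weibull component with an exponential one, and the two fits could be compared by information criteria.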
People are likely to form false memories of witnessing an event when the event has been both visually imagined and actually heard. The present study examined whether people incorrectly claimed to have witnessed events presented during a memorizing period. Fifteen of the thirty participants identified the source of each event they recalled immediately after the memorizing session ended; the other fifteen were tested one week later. The results indicated that the participants tested one week later more frequently claimed to have witnessed events that they had visually imagined and actually heard than events that they had only heard, only imagined witnessing, only imagined witnessing twice, or imagined as having both witnessed and heard. These findings suggest that, owing to confusion of information from various sources, an event that was actually heard can produce a false visual memory over time.
This study examined the applicability of probabilistic modeling and spatial statistics to the analysis of eye movement data. We analyzed the data from Kamiya, Sakagami, Masuda, & Omori (2006), which showed that eye movement speed (EMS) can be controlled by operant conditioning. In Examination 1, the model best fitting the EMS distributions was selected on the basis of Akaike's information criterion, the Bayesian information criterion, and residual analysis; fixation criterion speeds for each participant and condition were then calculated from the model. In Examination 2, the distributions of fixation points defined by this criterion were analyzed with spatial statistics. The results showed the following: (1) a model consisting of one exponential distribution (fixation) and two normal distributions (saccades) best described the EMS distributions; (2) the fixation criterion speeds were approximately 16 deg/s; (3) there was no adequate duration for defining fixations; and (4) fixation points were more clustered under the high-EMS condition. These results demonstrate the merit of the two methods.
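The mixture model and information-criterion comparison of Examination 1 can be sketched as follows. The sample and all component parameters below are synthetic and purely illustrative; in practice, candidate models with different component counts would be fitted and their AIC/BIC values compared.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
# synthetic eye movement speeds (deg/s): an exponential "fixation" component
# plus two normal "saccade" components
speed = np.concatenate([rng.exponential(8.0, 600),
                        rng.normal(80.0, 15.0, 250),
                        rng.normal(200.0, 30.0, 150)])

def neg_loglik(p, x):
    w1, w2, lam, m1, s1, m2, s2 = p
    w3 = 1.0 - w1 - w2                      # third mixing weight is implied
    if min(w1, w2, w3, lam, s1, s2) <= 0:
        return np.inf
    dens = (w1 * stats.expon.pdf(x, scale=lam)
            + w2 * stats.norm.pdf(x, m1, s1)
            + w3 * stats.norm.pdf(x, m2, s2))
    return -np.sum(np.log(dens + 1e-300))

start = [0.5, 0.3, 10.0, 70.0, 20.0, 180.0, 40.0]
fit = optimize.minimize(neg_loglik, start, args=(speed,),
                        method="Nelder-Mead",
                        options={"maxiter": 8000, "maxfev": 8000})
k = 7                                       # free parameters of this model
aic = 2 * k + 2 * fit.fun
bic = k * np.log(speed.size) + 2 * fit.fun
```

A fixation criterion speed could then be read off from where the fitted exponential (fixation) and normal (saccade) components intersect.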
Using photographic images containing two persons, this study investigated the effects of lens focal length (F) on the perception of the absolute distance to a person (A_p) and the relative distance between two persons (R_p). The absolute image size of each person changed in proportion to F and in inverse proportion to the actual distance, whereas the relative sizes of the two persons did not change with F when the actual distances were the same. In the experiment, participants estimated either A_p or R_p under each combination of actual distance and F. The results indicated that A_p was strongly affected by F, whereas R_p was scarcely influenced by F. These findings suggest that a powerful determinant of A_p is absolute size, whereas a powerful determinant of R_p is relative size. Therefore, there may be a difference in how information is processed for the perception of absolute and relative distances in photographic images.
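The geometric relation underlying this design, that absolute image size is proportional to F and inversely proportional to distance while the size ratio of two persons is independent of F, follows from a simple pinhole projection. The focal lengths, heights, and distances below are illustrative numbers, not the study's stimuli.

```python
# pinhole projection: image size is proportional to focal length / distance
def image_size(focal_mm, object_height_m, distance_m):
    return focal_mm * object_height_m / distance_m   # image height (arbitrary units)

for focal in (35, 50, 100):             # illustrative focal lengths (mm)
    near = image_size(focal, 1.7, 3.0)  # person at 3 m
    far = image_size(focal, 1.7, 6.0)   # person at 6 m
    ratio = near / far                  # relative size of the two persons
    print(f"F={focal}mm: near={near:.1f}, far={far:.1f}, ratio={ratio:.2f}")
# absolute sizes scale with F, but the near/far ratio stays 2.0 for every F
```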
Sensory physiologists and psychologists have recognized the importance of attention for human performance for more than 100 years. Since the 1970s, controlled and extensive experiments have examined the effects of selective attention to a location in space or to an object. In addition to behavioral studies, cognitive neuroscientists have investigated the neural bases of attention. In this paper, I briefly review some classical attention paradigms, recent advances in the theory of attention, and new insights from psychophysics and cognitive neuroscience. The focus is on the mechanisms of attention, that is, how attention improves human performance. Situations in which the perception of objects is unchanged but performance may differ because of different decision structures are distinguished from those in which attention changes the perceptual processes themselves. The perceptual template model is introduced as a theoretical framework for analyzing mechanisms of attention. I also present empirical evidence from psychophysics, neurophysiology, and brain imaging for two attention mechanisms: stimulus enhancement and external noise exclusion.
The motion of an object can be estimated from its shape and/or bodily action even when the object remains stationary on the retina. Moreover, such an object-based motion signal induces an appearance of motion in the background image. These phenomena show that our perceptual system produces dynamic mental representations independent of retinal motion signals. Here I review studies on object-centered motion perception and discuss the underlying mechanisms and their relevance to real-world situations.
We examined how observers' active control of a stimulus change affects the visual processing of motion. We compared the flash-lag effects for motion (Experiment 1) and for luminance (Experiment 2) under several conditions that differed in the degree of the observers' active control of the stimulus change. The flash-lag effect was salient when the observers passively viewed an automatic change in the stimulus. However, when the observers actively controlled the stimulus change with a computer mouse, the flash-lag effect was significantly reduced. In Experiment 3, we examined how observers' active mouse control of the stimulus movement affects reaction times to a shape change in the moving stimulus and in the flash. The results showed that active control reduced the reaction time for both the moving stimulus and the flash. In Experiment 4, we found that a significant reduction of the flash-lag effect was restricted to the condition in which the direction of the stimulus motion was consistent with the direction of the active mouse movement. These results suggest that learning a specific directional relationship between hand movements and stimulus movements is essential for the reduction of the flash-lag effect under active observation.
We can infer the functional specificity of a particular brain area beyond the limit of the spatial resolution of functional magnetic resonance imaging (fMRI) by using the method of fMRI adaptation (fMRI-A). Here, I summarise the basics of fMRI-A and report our results on motion perception. In one study, we revealed that first-order (luminance-modulation) and second-order (contrast-modulation) motions are processed separately in the human MT complex (hMT+), by showing separate fMRI adaptation effects for them. In another study, we showed specificity to the global flow patterns of expansion and rotation in hMT/MST. fMRI-A has some problems, some of which are common to adaptation experiments or fMRI experiments in general. Nonetheless, fMRI-A is certainly a powerful method and will further our understanding of brain functions.
Neuroimaging techniques have made it possible to "view" brain processes that were previously invisible. Such revolutionary methods have generated much enthusiasm within the neurosciences as well as among the general public, but some scholars remain skeptical about the actual progress derived from such research. Among the criticisms is that the localizing power of brain scans adds little knowledge about how the mind works. However, it can be argued that neuroimaging studies can be theoretically grounded and contribute strongly to our understanding of the mind. Indeed, our understanding of particularly difficult issues, such as human consciousness, may greatly benefit from the ability to observe neural activity as it happens inside our heads. For example, private experiences such as "seeing" mental images or synaesthetic colors can be shown to correlate with neural activity in the brain's sensory areas that support visual perception. Not only does this evidence confirm the sensory aspect of these conscious events, but it also has the power to resolve long-standing theoretical issues about their nature.
Functional magnetic resonance imaging (fMRI) has been used to investigate cortical activation for many kinds of cognitive processing. In this paper, one of the latest effective connectivity analyses, dynamic causal modelling (DCM), is introduced. Effective connectivity is defined as the influence that one brain region exerts over another. DCM aims to estimate and make inferences about the causal influences, or coupling, among brain regions and about how this coupling is changed by experimental manipulation at the neuronal level. By introducing some studies that used DCM, the potential of effective connectivity analysis to unite cognitive information-processing models with their neural bases is discussed.
One of the central issues in brain science is how percepts are formed in the brain. Dissociations between physical stimulation and subjective experience provide an opportunity to clarify the brain activity related to the formation of percepts. This article introduces findings from neuroimaging studies using auditory illusions. Prolonged listening to a repeated word without pauses produces illusory transitions of the unchanging word, which are called verbal transformations. The number of verbal transformations is positively correlated with activity in the left inferior frontal cortex and negatively correlated with activity in the anterior cingulate cortex. When an unchanging triplet-tone sequence is presented, the percept frequently switches between one coherent stream and two distinct streams. Activity in the thalamus or the auditory cortex occurs earlier during perceptual switching in auditory streaming. These findings suggest that the short-term plasticity leading to the formation of auditory percepts is implemented in feedback and feed-forward loops between two brain regions.
The authors have been conducting research on understanding human tactile sensation while simultaneously developing tactile sensors and tactile displays. We hope this approach is useful for the development of the field of tactile research. In this article, some of the authors' results are presented.
Tactile sensation usually provides reliable perception of an object's presence and its surface geometry. However, human tactile perception is not necessarily a faithful reproduction of the physical properties of an object in the real world. Such a phenomenon is called a tactile illusion, and, like visual illusions, it offers many insights into the characteristics and mechanisms of human tactile perception. Research on tactile illusions has accumulated gradually in psychology from the era of Aristotle to the present, and has been actively reported by engineers in recent years, because the study of tactile illusions also helps engineers develop tactile displays, apparatuses for (re)producing tactile perception. This article mainly reviews the Fishbone Tactile Illusion as a case study of tactile illusion research in engineering, then discusses the mechanisms of human tactile perception based on findings from the illusion, as well as applications of those findings.
Stochastic resonance (SR) is one of the basic principles by which living things adapt to complex environments containing various disturbances. The Hodgkin-Huxley model, which emulates the activity of living neurons, has been found to exhibit SR. The system of two differential equations governing the Hodgkin-Huxley model is similar to the differential equation system of the Belousov-Zhabotinsky (B-Z) reaction, one of the phenomena studied in complex systems science. Although noise is inevitably introduced in tactile sensing through contact with an object and the sensor's movement over it, human beings can evaluate surface unevenness of several micrometers by means of SR. If an SR mechanism is incorporated into a tactile sensing system, the system can measure an object's surface with high precision not only in a controlled environment such as a precision measurement room but also in ordinary living environments. In our laboratory, we intend to apply insights obtained from complex systems such as SR and the B-Z reaction to robotic systems such as tactile sensing systems and tactile displays. Some recent results, presented at IIP 2008, are described.
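The core idea of SR, that an intermediate level of noise lets an otherwise subthreshold signal cross a detection threshold, can be sketched with a toy threshold detector. The signal amplitude, threshold, and noise levels below are arbitrary illustrative choices, not values from any tactile sensing system.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 20.0, 8000)
signal = 0.9 * np.sin(2 * np.pi * 0.5 * t)   # subthreshold signal (peak 0.9)
threshold = 1.0                               # detector fires only above 1.0

def detector_correlation(noise_sd):
    # add noise, threshold, and correlate the firing pattern with the
    # signal's positive half-cycles as a crude measure of transmission
    fired = (signal + rng.normal(0.0, noise_sd, t.size)) > threshold
    if fired.std() == 0:                      # detector never fires: no information
        return 0.0
    return np.corrcoef(fired, signal > 0)[0, 1]

quality = {sd: detector_correlation(sd) for sd in (0.01, 0.3, 3.0)}
# too little noise: no crossings; too much: random crossings;
# an intermediate level transmits the signal best (stochastic resonance)
```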
The visibility of rivalrous flashes presented to the two eyes can be modulated by presenting a stimulus prior to the flashes (visibility modulation, VM). The present study investigated VM with chromatic gratings of high luminance contrast and found that VM occurred in a color- and orientation-specific manner. Taken together with our previous finding that VM can occur in a color-based manner with an isoluminant chromatic grating, we suggest that the binocular integration of color and pattern varies depending on stimulus parameters such as luminance contrast.
The Rotating Snakes illusion is a motion illusion in which a static figure appears to rotate. Murakami et al. (2006) proposed a model in which the biphasic shape of the temporal impulse response (TIR) function induces this illusion. We tested this hypothesis by manipulating retinal illuminance, because it is well known that the shape of the TIR function changes from biphasic to monophasic with decreasing retinal illuminance. The results indicated that the illusion strength indeed decreased with decreasing retinal illuminance. To obtain a better view of the change in temporal response properties, the TIR functions were estimated by the double-pulse method (Burr & Morrone, 1993) under several illuminance conditions. We confirmed that the biphasic character of the TIR gradually changed to monophasic with decreasing retinal illuminance. We argue that the biphasic characteristic of the TIR function contributes to the illusion strength.
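A biphasic TIR of the kind discussed above is commonly modeled as a difference of gamma-shaped pulses (an excitatory pulse minus a slower inhibitory one). The following sketch, with arbitrary time constants not taken from the study, shows how weakening the inhibitory term, as happens at low illuminance, turns a biphasic response into a monophasic one.

```python
import numpy as np

t = np.linspace(0.0, 0.3, 1000)    # time (s)

def gamma_pulse(t, tau, n=9):
    # impulse response of an n-stage low-pass filter (gamma-shaped pulse)
    return (t / tau) ** (n - 1) * np.exp(-t / tau)

def temporal_impulse_response(t, inhibition):
    # excitatory pulse minus a slower inhibitory pulse;
    # inhibition = 0 yields a purely monophasic response
    r = gamma_pulse(t, 0.010) - inhibition * gamma_pulse(t, 0.015)
    return r / np.abs(r).max()     # normalize peak magnitude to 1

biphasic = temporal_impulse_response(t, 0.9)    # high illuminance: negative lobe
monophasic = temporal_impulse_response(t, 0.0)  # low illuminance: no negative lobe
```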
We investigated whether the physical offset of an invisible stimulus could result in the recovery of visual awareness suppressed by motion-induced blindness, in which a target stimulus surrounded by motion distractors often subjectively disappears. Participants were presented with two objects (a target object embedded in an outlined object) that were close to each other. When the two objects disappeared simultaneously, the physical offset of only the outlined object led to the recovery of visual awareness of the target object. Here, we report the basic effect, its spatiotemporal properties, and its relationship with the offset detection of the invisible object.
During the identification of a target embedded in a rapid serial visual presentation sequence, observers frequently miss the target when it appears early in the sequence compared with when it appears later. This phenomenon, called attentional awakening, reflects a gradual modulation of temporal attention over a rapid sequence. In this study, we inserted a temporal gap (0, 100, 500, or 1,000 ms) into the sequence and measured the critical conditions that destroyed the modulated state of temporal attention. The results showed that correct identification of the target was impaired immediately after the 1,000-ms gap (Experiment 1). However, this impairment was eliminated by superimposing a continuous sequence of random dots so that the sequences temporally separated by the gap were encapsulated as a single event (Experiment 2). These findings suggest that temporally modulated attention can be maintained within a single event.
This study investigated the relationship between word and sentence-context processing in the early stage of sentence reading. We examined the influence of sentence context on the perception of contextually anomalous words in a Japanese text. A target word (a contextually anomalous two-kanji compound word or its contextually consistent counterpart) in a short text was briefly presented, followed by a four-alternative forced-choice (4AFC) recognition task. The recognition list consisted of four two-kanji compound words: the contextually anomalous target word, the consistent target word, an anomalous new word, and a consistent new word. The results showed that participants recognized anomalous words less frequently than their consistent counterparts and were more likely to select contextually consistent words. The context effect was obtained even when the participants had information only about the content words in a sentence. These results demonstrate that processing sentence context has priority over processing each constituent word, and that the semantic information of content words plays a primary role in the early stage of Japanese sentence reading.
It is well established that English garden-path sentences take longer to read than normal sentences (the garden-path phenomenon). In contrast, no clear evidence for this phenomenon has been reported for Japanese garden-path sentences. In this study, we examined reading performance while varying the presentation speed and estimated the temporal cost of Japanese garden-path sentences. The results showed that Japanese garden-path sentences require longer processing time than non-garden-path sentences; the estimated cost is about 50-100 ms/phrase. Thus, examination with an experimenter-paced method revealed the garden-path phenomenon in Japanese.
We examined whether simultaneous illusory thermal sensations can influence the perception of hardness. By applying thermal stimuli to the index and ring fingers, we elicited a referred thermal sensation on the middle finger, to which only hardness stimuli were applied, without affecting the thermal properties of the peripheral receptors. Seven participants performed a two-alternative forced-choice task on the perceived hardness of rubber pieces with various hardness values. The results show that the participants judged samples to be harder under the cold condition. This phenomenon implies that the illusory thermal experience induces a tactile illusion in which warm (cold) material is perceived as soft (hard).
Recent studies on visual short-term memory have demonstrated that our visual system can retain only a few objects. However, it remains unclear whether this capacity limitation arises because we forget visual representations during a delay interval or because visual representations are formed for only a fraction of the visual objects from the beginning. The present study manipulated the delay interval to test these two possibilities. In the experiment, a sample display of six oriented bars was presented, followed by a 0-s or 1-s blank interval and then by a test display with a cue. The participants reported the orientation of the sample stimulus at the cued location. The results showed that the measured visual capacity was about 3 items under both the 0-s and 1-s conditions. The measured capacity varied substantially across individuals; however, an individual's capacity under the 1-s condition was highly correlated with that under the 0-s condition. These results suggest that the capacity limit originates in encoding rather than in retention or retrieval processes.
We investigated whether fMRI-derived cortical activity patterns induced by optic flow can predict the direction of heading. Activity from hMT+, an area related to motion perception, yielded good predictions. Given that heading perception remains accurate during eye movements, we also measured cortical activity elicited by optic flow during eye movements. The decoder was trained on activity patterns obtained without eye movements and tested on those obtained with actual eye movements, and vice versa. hMT+ exhibited a less robust, though still relatively the best, decoding performance. These results suggest that hMT+ is involved in heading perception and codes head-centric motion by compensating for extra-retinal information. More generally, our study shows that decoding techniques can serve as effective tools for identifying the functions associated with cortical activity.
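The cross-condition decoding logic (train on patterns recorded without eye movements, test on patterns recorded with them) can be sketched with synthetic data and a simple nearest-centroid decoder. The voxel count, noise level, mean shift, and direction-selective layout below are hypothetical stand-ins, not the study's actual data or classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_patterns(n_trials, offset):
    # synthetic "voxel" patterns: 8 heading directions over 48 voxels,
    # each direction activating its own set of 6 direction-selective voxels
    X, y = [], []
    for d in range(8):
        proto = np.zeros(48)
        proto[d * 6:(d + 1) * 6] = 1.0
        X.append(proto + offset + rng.normal(0.0, 0.5, (n_trials, 48)))
        y.append(np.full(n_trials, d))
    return np.vstack(X), np.concatenate(y)

X_fix, y_fix = make_patterns(20, offset=0.0)  # condition without eye movements
X_eye, y_eye = make_patterns(20, offset=0.2)  # with eye movements (mean shift)

# nearest-centroid decoder trained on the fixation condition...
centroids = np.array([X_fix[y_fix == d].mean(axis=0) for d in range(8)])
# ...and tested on the eye-movement condition (cross-condition generalization)
pred = np.argmin(((X_eye[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == y_eye).mean()             # chance level is 1/8
```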
Two experiments investigated the effects of target connectedness on multiple object tracking (MOT) by chimpanzees. The chimpanzees were required to track 2 or 4 targets among 8 discs moving around the display, with every 2 discs connected by a line. The first experiment compared tracking performance under a paired-targets condition, in which the targets were 1 or 2 pairs of connected discs, and an unpaired-targets condition, in which each target was selected from a different pair. The chimpanzees showed better tracking performance under the paired-targets condition. The second experiment introduced an unconnected-line condition, in which the line did not reach the paired discs. The chimpanzees' tracking performance did not differ between the connected-line and unconnected-line conditions. These results are not consistent with previous reports from human participants. The perceptual characteristics of object merging and/or perceptual grouping in object-based attention may differ between chimpanzees and humans.
We used the preferential looking technique to examine whether 3-4-month-old and 5-6-month-old infants perceive the illusory transparent surface induced by a newly developed mixed-polarity Kanizsa configuration. Our results suggest that both age groups discriminate the transparent Kanizsa configuration from its rotated counterpart and from the non-transparent Kanizsa configuration, and that they prefer the former over the latter two configurations. Our stimuli and experimental manipulations exclude the possibility that these responses were based on the geometrical properties of the figures or on local contrast differences between them. The present findings suggest that both 3-4-month-olds and 5-6-month-olds can organize the fragmented elements of our transparent Kanizsa configuration and perceive the illusory transparent surface.
This study investigated the relation between automatically synchronized stepping and interactive situations. Eight pairs (sixteen persons) participated in the experiment. In the face condition, the two persons faced each other; in the back condition, one person faced the other's back. In the first half of each trial, a curtain between the persons was closed so that they could not see each other in either condition. In the second half, the curtain was opened: in the face condition the persons could see each other, whereas in the back condition only one person could see the other's back. Participants were instructed only to continue stepping during a trial. We calculated the phase difference between their stepping cycles. In the face condition, the phase difference for most pairs periodically increased and decreased while the curtain was closed, but was generally constant and close to zero while the curtain was open. In the back condition, however, the phase difference continued to increase and decrease periodically even while the curtain was open, and was not stable around zero. These results reveal that synchronized stepping occurs automatically only in the face-to-face situation, which may be necessary for social interaction between persons.
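A phase difference between two stepping cycles, as analyzed above, is typically computed from the instantaneous phases of the two movement signals. A minimal sketch with synthetic 1-Hz stepping signals and the Hilbert transform follows; the sampling rate, lag, and noise level are arbitrary illustrative choices, and the original study's exact analysis pipeline may differ.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(4)
t = np.linspace(0.0, 30.0, 3000)                 # 30 s sampled at 100 Hz
step_a = np.sin(2 * np.pi * 1.0 * t)             # person A, ~1 Hz stepping
step_b = np.sin(2 * np.pi * 1.0 * t - 0.4) \
    + 0.05 * rng.standard_normal(t.size)         # person B lags by 0.4 rad, noisy

# instantaneous phase of each stepping signal via the analytic signal
phase_a = np.angle(hilbert(step_a))
phase_b = np.angle(hilbert(step_b))
dphi = np.angle(np.exp(1j * (phase_a - phase_b)))  # wrapped to (-pi, pi]

mean_dphi = np.angle(np.exp(1j * dphi).mean())     # circular mean phase difference
```

A stable `dphi` near zero would indicate synchronized stepping, whereas a `dphi` that drifts periodically would indicate the unsynchronized pattern seen in the back condition.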