We developed a driving simulator that provides visual (luminance change) and/or auditory (sound pressure change) assistance information to enhance the perception of a car's lateral position in real time. We aimed to test the effects of multimodal assistance information on driving performance. Continuously changing auditory assistance improved driving accuracy without increasing drivers' workload, as did the combination of continuously changing auditory assistance and discretely changing visual assistance (i.e., audiovisual multimodal assistance). Since steering is mainly a visual-motor task, auditory assistance information should not disturb drivers' steering behavior and could therefore contribute to steering. These findings suggest that auditory assistance information is useful for improving drivers' performance without imposing additional load.
The present study examined a model of the role of cognitive functions related to attention and decision making in merging manoeuvres during simulated highway driving. To test this model, we screened participants who scored high or low on two sets of tasks that were assumed to represent attentional function and decisiveness. We then conducted a driving simulation experiment in which attentional function and decisiveness were between-subject factors and task difficulty was varied as a within-subject factor. The results indicated that attentional function was the primary determinant of the swiftness and success of merging. The role of decisiveness was confined primarily to the activation of the turn signal. Thus, the present experiment suggests that attention is the cognitive function that determines performance during merging behaviour in the setting under examination. Exploratory multivariate analyses corroborated the behavioural measures obtained from the simulation. The present results provide information about the cognitive functions underlying merging behaviour that may be useful to the automobile industry.
Information derived from facial and vocal nonverbal expressions plays an important role in social communication in the real and virtual worlds. In the present study, we investigated cultural differences between Japanese and Dutch participants in the multisensory perception of emotion. We used a face and voice that expressed incongruent emotions as stimuli and conducted two experiments. We presented either the face or the voice in Experiment 1, and both the face and voice in Experiment 2. We found that both visual and auditory information were important for Japanese participants judging in-group stimuli, while visual information was more important for the other combinations of participants and stimuli. Additionally, we showed that the in-group advantage provided by auditory information was larger in Japanese than in Dutch participants. Our findings indicate that audio-visual integration of affective information is modulated by the perceiver's cultural background, and that there are cultural differences between in-group and out-group stimuli.
In visual searches, target prevalence (the ratio of target-present to target-absent trials) affects miss rates. We are more likely to miss targets in situations in which targets rarely appear, such as during detection of dangerous materials by airport security or routine medical screening for breast and cervical cancer. This tendency is termed the "prevalence effect." Here, we review recent advances in visual search studies pertaining to this effect. First, we describe how target prevalence affects error rates, reaction time (RT), and criterion and sensitivity in target detection. Second, we discuss the mechanisms of the prevalence effect, especially response and motor errors, observers' conservative criterion shifts, and shortening of search termination times (RT for target-absent trials). Third, we discuss whether or not expectations regarding target prevalence influence search termination times and miss rates. Finally, based on previous studies, we discuss how to prevent the prevalence effect in visual search.
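The criterion and sensitivity measures referred to above come from signal detection theory. As a minimal illustration (the hit and false-alarm rates below are hypothetical, not data from any reviewed study), a conservative criterion shift at low prevalence raises the miss rate while sensitivity (d') stays roughly constant:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Compute signal-detection sensitivity (d') and criterion (c)."""
    z = NormalDist().inv_cdf          # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical rates: lowering prevalence leaves d' similar but shifts
# the criterion in the conservative (positive) direction, producing
# more misses and fewer false alarms.
high_prev = sdt_measures(0.95, 0.10)  # d' ≈ 2.93, c ≈ -0.18 (slightly liberal)
low_prev = sdt_measures(0.75, 0.02)   # d' ≈ 2.73, c ≈ +0.69 (conservative)
```

The criterion c is the quantity that shifts under low prevalence in the accounts discussed above; motor-error accounts, by contrast, predict errors without a change in c.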
The present study investigated whether the modulation of the compatibility effect depending on conflict frequency in a given block (the proportion of compatible trials) generalizes from one kind of conflict to another. To test this, we conducted four experiments using a combined-conflict paradigm involving the Simon and Spatial Stroop tasks. We manipulated the conflict frequency (high/low) in one of the two tasks, while the conflict frequency in the other task was 50%. We focused on the block-wise conflict adaptation associated with conflict frequency in both tasks. The results of the four experiments showed that the block-wise conflict adaptation effect appeared only for the task in which conflict frequency was manipulated. These results suggest that the block-wise adaptation associated with conflict frequency does not generalize from the Simon task to the Spatial Stroop task or vice versa.
Previous vection (illusory self-motion perception) studies can be explained by one simple rule: vection strength is determined by the plausibility of a stimulus as a representation of the world. When the world appears to be moving, we must make sense of the situation, but we refuse to accept the solution that the world is moving. Instead, we favor the solution that we ourselves are moving, because we know that our bodies are movable, whereas the world is static. I propose the hypothesis that vection will be stronger when a perceived visual stimulus leads us to infer a more plausible representation of the world.
Throughout the history of vection studies, researchers have employed abstract visual patterns without particular meaning (e.g., random-dot or striped patterns) as visual inducers and have investigated the visual factors that affect the occurrence and strength of vection. These studies have contributed to our knowledge of the perceptual mechanisms underlying self-motion perception. Recently, technological advances in image-processing equipment have made it possible to conduct vection experiments using real-world images (motion images of real environments shot by a moving camera) as visual stimuli. In this article, the results of two vection experiments using real-world stimuli are reported in order to discuss factors, such as the meanings or impressions of visual images, that can be assessed only with real-world stimuli and may affect perceived self-motion.
We investigated how auditory space is represented during linear self-motion. Blindfolded participants were passively transported forward/backward by a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the participant. A short noise burst was presented when the participant's coronal plane reached the location of one of the speakers (null point). In Experiment 1, the participants indicated whether the sound was presented forward or backward relative to their coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward acceleration. In Experiment 2, the participants indicated, by pointing with a rod, the perceived location of a sound presented at various distances from the null point. All the sounds actually located in the direction of movement were perceived as being closer to the null point. These findings suggest that representations of auditory space are compressed during forward acceleration.
The present paper describes auditory systems for effective listening that are common from humans to bats. First, noise-vocoded speech sounds and auditory induction are explained in order to consider real-time modification of neuronal signal processing. Second, to discuss localized functions in the brain, the biosonar system of bats, along with its pitfalls and problems, is introduced as an example. Finally, for a better understanding of effective and precise listening, the use of stabilized carrier waves in the bat biosonar system and strategies for sophisticated signal detection in jamming situations are discussed after introducing a small wireless telemetry microphone system (Telemike).
In my laboratory, our main research themes are the following: reading the brain, examining the brain, and modulating the brain. This work has been made possible by recent advances in non-invasive measurements of human brain function. Our ultimate purpose is to visualize human brain functions in order to clarify the pathophysiological mechanisms of neuropsychiatric disorders. Established techniques such as electroencephalography (EEG), electromyography (EMG), and event-related potentials (ERPs), as well as newly developed techniques including magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS), are now available. These techniques allow us to identify when, where, and how the brain works. Here, I present recent findings on human neurophysiology. It is my hope that such approaches provide a bridge between medicine and psychology.
Neuroscientists have studied the visual system by accumulating knowledge about the structural and functional properties of single neurons. However, recent studies performing multi-unit recordings have revealed that a significant fraction of neurons exhibit temporally correlated activities. It has been suggested that correlated activities among neurons may have a functional role in visual information processing, though such activities may merely be a byproduct. The present article reviews recent advances in retinal neuroscience using multi-unit recordings, molecular technologies, and pharmacological manipulations. The analysis of visually guided behaviors revealed the functional significance of neural codes suggested by electrophysiological data. These approaches also enabled us to clarify the neural mechanisms of the detection of image motion, which is fundamental to vision.
Maturation and integration of neurons and glial cells in the hippocampus are considered essential for regulating endocrine, affective, and cognitive functions, and the disruption of these processes may cause mental illness. Previously, we reported that mice heterozygous for a null mutation in α-CaMKII, which plays a key role in synaptic plasticity, show abnormal behaviors related to psychiatric disorders such as schizophrenia and bipolar disorder. In these mutant mice, almost all neurons in the dentate gyrus exhibit pseudo-immature properties, which we referred to as the "immature dentate gyrus (iDG)." To date, the iDG phenotype has been found in mutant strains, including Schnurri-2 knockout, SNAP-25 mutant, and forebrain-specific calcineurin knockout mice, which show similar behavioral phenotypes. In addition, both chronic fluoxetine treatment and pilocarpine-induced seizures can reverse the maturation state of mature neurons, resulting in the iDG phenotype in wild-type mice. A similar iDG-like phenomenon has been observed in the brains of patients with schizophrenia/bipolar disorder. Based on these findings, we propose that the iDG is a potential new endophenotype of neuropsychiatric disorders. This review summarizes the behavioral abnormalities, the iDG phenotype, and their implications for the pathophysiology of neuropsychiatric disorders.
This paper overviews recent attempts to control motion and emotion via haptic interfaces. Tactile stimulation can induce motion: examples include the hanger reflex, in which applied pressure induces involuntary head rotation; the induction of forearm motion by tangential deformation of the skin; and the control of walking by pulling the earlobe. On the other hand, presenting a false physiological response through the tactile sense can modulate emotion, for example, facilitating affection by presenting a false heartbeat and intensifying surprise by artificially controlling piloerection on the forearm. We also consider how these phenomena generalize, in view of the bidirectionality of our information-processing system.
Humans can easily discriminate between randomly spaced and regularly spaced visual patterns. Herein we demonstrate that observers can adapt to the randomness of two-dimensional visual patterns. Adaptation to patterns with varying levels of physical randomness caused an aftereffect in which the perceived randomness decreased (increased) following adaptation to a pattern with high (low) physical randomness (Experiment 1). Adaptation to 22.5°-rotated patterns did not cause an aftereffect in the non-rotated test patterns (Experiment 2). Additionally, contrast polarity did not affect the aftereffect (Experiment 3). These results indicate that the perception of visual pattern randomness is governed by second-order orientation processing, which is insensitive to luminance contrast.
It is difficult to identify a target in the peripheral visual field when it is flanked by close distractors. In the present study, we investigated this "crowding" effect on the ability to discriminate the direction of point-light biological motion. Three point-light walkers were presented horizontally in the periphery, and the central figure was the target. When the inter-walker distance was small, discriminating the direction of the target became difficult. Moreover, the reported direction of the target reflected a pooling of the three walkers' directions. These results indicate that crowding occurs for biological motion perception. However, when scrambled point-light walkers flanked the target as distractors, crowding was not seen. This result suggests that crowding in biological motion perception occurs at a high level of motion perception.
Our previous studies demonstrated that spatial factors, such as finger position (hand posture) and the distance from the stimulated finger to the head, modulate tactile roughness perception. Herein we investigate whether spatial factors influence the sensitivity or response bias of tactile roughness discrimination using signal detection techniques. Participants were asked to discriminate the roughness of two-sided stimuli held between two fingers in a variety of postures. When the stimuli were presented close to the head, the discrimination sensitivity for the finger located on the back surface of the stimuli was lower than that for the finger on the front surface. Furthermore, judgments by the finger on the back side were biased in the same direction as the roughness of the front surface, regardless of the distance of the hand from the head. These results indicate that spatial factors differently modulate the sensitivity and the response bias of roughness discrimination.
We examined the effect of temporal change in numerical magnitude on time perception. In two experiments, we measured perceived duration of numerical sequences, in which the numerical magnitude of digits (Experiment 1) or the number of dots (Experiment 2) increased or decreased over time. The presentation durations of the numerical sequences were sub-second and supra-second. Experiments 1 and 2 showed that the perceived duration of the decreasing sequence was longer than that of the increasing sequence for any presentation duration except the sub-second duration of Experiment 1. These results suggest that temporal change in numerical magnitude influences time perception, but it takes a certain length of presentation duration for the effect to occur when the magnitude is represented by symbolic numerosities such as digits.
Although visual motion processing is believed to be enhanced by attention and inhibited by inattention, this study finds behavioral evidence that voluntary diversion of attention improves, not impairs, detection of 'global motion,' which is defined as the spatial integration of local motion vectors. Specifically, human observers were asked to detect coherent motion in an optic flow with or without concurrently performing a letter identification task. The threshold signal-to-noise ratio necessary to detect global motion was lower when the observers had to identify a letter in the alphabet and detect motion than when they only had to detect motion. This paradoxical improvement was not found for the absolute sensitivity to local motion. The results support the notion that inattention can facilitate spatial integration of local motion signals and promote global motion perception.
In humans, visual processes use retinotopic coordinates. However, not all processes are strictly retinotopic. We used a variant of the Ternus-Pikler stimulus to examine whether pigeons, visually dominant vertebrates, share the same retinotopic and non-retinotopic processing as humans. Six humans and four pigeons were required to discriminate the rotational direction of target disks surrounded by other nonrotating disks. When all disks flickered in synchrony and moved in tandem back and forth, target localization was hampered in humans but unaffected in pigeons. These results suggest that visual processes in humans can be accomplished using non-retinotopic coordinates, whereas those in pigeons are strictly retinotopic.
Auditory stimuli often improve visual detection and identification performance, although they do not directly convey information related to visual stimuli. This facilitation effect is assumed to be related to attentional mechanisms, which have hemispheric asymmetry. In this study, we examined the relationship between the facilitation effect of audio-visual integration and hemispheric asymmetry in attentional mechanisms using a dual-stream RSVP task. Auditory stimuli affected the left hemisphere when two targets had to be identified, but affected the right hemisphere when two targets had to be localized. Because the auditory facilitation effect corresponds to hemispheric specialization in perceptual processing, we propose that salient stimulus-driven attention has greater effects on the dominant hemisphere to promote stable and efficient processing.
In humans, a number of studies have found that the single nucleotide polymorphism (SNP) rs1044396 on the CHRNA4 gene influences individual differences in attention. However, its association with other cognitive functions remains to be clarified. In the present study, we explored the effects of genetic variations in CHRNA4 on rapid scene categorization in 100 healthy human participants. The rapid scene categorization task required participants to judge whether the category of a scene image (Natural or Man-Made) was compatible with a cue word presented at the response phase. The target-mask stimulus onset asynchrony (SOA) ranged from 13 ms to 93 ms. Heterozygotes and homozygotes for the CHRNA4 T allele responded more accurately at long SOAs (67 ms and 93 ms) compared with non-T allele carriers, but only for the Natural scene category. Our findings provide evidence that genetic variations in CHRNA4 can moderately contribute to natural scene categorization performance.
In cluttered environments, attentive searching is necessary to find targets defined by a conjunction of multiple features (e.g., color and shape). Thus it has classically been assumed that the allocation of attention causes binding. In contrast, here we show a phenomenon in which the causality between attention and binding is reversed: the visual processing of feature binding affects spatial shifts of attention. Subjects searched for targets with a combination of particular colors and orientations (e.g., red vertical bars) that were embedded in peripheral areas of a search array. When a central area consisted of bars with regular color-orientation pairings, these regular pairings automatically changed perceptual bindings in the peripheral area, resulting in target omission. These results indicate that the visual system performs a rapid extrapolation of binding information spreading from central to peripheral areas, which throws off attention. Our results thus provide evidence supporting the hypothesis that binding affects attention.
In this tutorial article, we introduce Psychophysics Toolbox Version 3 (PTB-3). PTB-3 is a free set of functions developed for vision researchers that is capable of handling typical cognitive psychology experiments. PTB-3 is designed to be used with MATLAB or Octave on Mac OS X, Windows, and Linux systems. As such, PTB-3 would be beneficial for many psychologists as a tool to run their experiments. We demonstrate the easy-to-use features of PTB-3, providing basic information about PTB-3 and MATLAB. Our example consists of the three main components of a computer program for running a cognition experiment: preparing variables, running trials, and saving data. All source code is available from the first author's website.
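The three-component structure described in this tutorial (preparing variables, running trials, saving data) is general across experiment software. A minimal sketch of that skeleton, written here in Python rather than the MATLAB used by PTB-3, with a hypothetical `present_and_collect` placeholder standing in for actual stimulus/response calls:

```python
import csv
import random

def present_and_collect(condition):
    # Stub: a real experiment would draw a stimulus to the screen
    # and wait for a keypress (in PTB-3, via Screen and KbCheck).
    return "dummy_response"

def run_experiment(n_trials=10, out_path="results.csv"):
    # 1. Prepare variables: condition list and randomized trial order.
    conditions = ["congruent", "incongruent"]
    trials = [random.choice(conditions) for _ in range(n_trials)]
    results = []

    # 2. Run trials: present a stimulus and collect a response per trial.
    for i, cond in enumerate(trials):
        response = present_and_collect(cond)
        results.append({"trial": i + 1, "condition": cond,
                        "response": response})

    # 3. Save data: one row per trial.
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["trial", "condition", "response"])
        writer.writeheader()
        writer.writerows(results)
    return results
```

The function names and file layout here are illustrative only; the tutorial's actual MATLAB code should be obtained from the first author's website.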
In this paper, we report on the research project "Face perception and recognition: Multidisciplinary approaches to understanding face processing mechanisms," a Grant-in-Aid for Scientific Research on Innovative Areas supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. This research project started in 2008 and finished at the end of March 2013. The project involved thirty-nine researchers whose fields are truly multidisciplinary.