We used a visual search paradigm to compare the preferential detection of angry and happy faces, employing both schematic and real faces. In Experiment 1, we studied visual search with schematic faces to compare the detection of angry and happy faces. In Experiments 2 and 3, we assessed the influence of homogeneous (Experiment 2) and heterogeneous (Experiment 3) distractors on the detection of real angry and happy faces with no exposed teeth. As homogeneous and heterogeneous distractors, we used facial expressions of the same person as the target or of different persons, respectively. Throughout Experiments 1–3, angry faces among happy face distractors were detected faster than vice versa. When neutral faces were used as distractors, the three experiments yielded different results: anger detection was superior in Experiment 1, no detection preference was found for real angry or happy faces in Experiment 2, and a happiness detection preference was observed in Experiment 3. These results suggest that the anger detection superiority observed with real faces can be explained by the efficient rejection of happy face distractors.
The oblique effect refers to the phenomenon whereby visual performance is poorer for oblique orientations than for horizontal and vertical orientations. We compared the oblique effect among young adults, typically developing children (5- to 14-year-olds), and children with reading and writing difficulties (6- to 15-year-olds). All three groups performed a line-orientation matching task and a line-copying task. When the stimuli were oblique lines close to either the horizontal or the vertical orientation, the oblique effect frequently occurred in all three groups. When the standard stimuli had orientations of 22.5°, 67.5°, 112.5°, and 157.5°, matching performance correlated with age in typically developing children but not in those with reading and writing difficulties. These results suggest that the perception of oblique lines develops gradually until the early teen years, and that this development is slower in children with reading and writing difficulties than in typically developing children. Orientation errors were larger in the line-copying task than in the line-orientation matching task in all three groups when the oblique lines were close to either horizontal or vertical. This is probably because the line-copying task requires finer finger control than the line-orientation matching task.
People are generally unable to determine the fixation point correctly from the gaze direction when facing another person. This study investigated this tendency from the viewpoint of the anisotropy of space. Experiment 1 showed that, compared with physical space, the gaze perceptual space was 1.227 times the size of physical space laterally and 0.516 times sagittally, suggesting that the gaze perceptual space has an orthotropic property. Furthermore, this space was anisotropic in the oblique directions. In Experiment 2, a space of the same size as the physical space in Experiment 1 was constructed from verbal cues of distance and angle. Compared with physical space, the space constructed from the verbal cues was 0.866 times the size of physical space laterally and 0.783 times sagittally. These results show that the gaze perceptual space differs from the space constructed from verbal cues in its degree of anisotropy. They also suggest that gaze direction was not judged on the basis of quantitative verbal representations of distance and angle.
The first animation movie apparatus, the phenakistoscope, which displayed apparent motion, was invented and named by J. Plateau in 1832. While this apparatus has been treated merely as an amusing toy in the history of psychology, it was, in fact, the first device that brought about a modern perceptual experience through purely bodily (brain) processes. This article describes the innovation process of the phenakistoscope, which was developed through scientific research based on the concept of stroboscopic vision, to reveal its historical distinction as the first scientific apparatus that not only demonstrated apparent motion but also introduced the modern experience of “perception” itself as a scientific objective. This paper will also discuss the relations between perception and media technology, including virtual reality, and the special role and potential of science outreach workshops in promoting the science of perception.
This article examines two types of nonarbitrary correspondences between apparently disconnected perceptual-cognitive entities—synesthesia and sound symbolism—the latter of which is a kind of crossmodal correspondence. First, the article describes the basic characteristics of these phenomena along with the similarities and differences between them. While both synesthesia and sound symbolism are intriguing mappings between different domains that are consistent over time, they differ in their prevalence (rare vs. frequent), their idiosyncrasy (idiosyncratic vs. widely shared across individuals), and whether the concurrent is consciously experienced (conscious vs. not necessarily conscious). Although the similarities are often emphasized, the differences should never be neglected in the exploration of the underlying mechanisms of these phenomena. This article further discusses recent advances in these research fields, especially those showing a close relationship between language development and each of the two phenomena. Studies on synesthesia and sound symbolism should provide new insights into the cognitive processing of language.
Synchrony perception across multiple types of sensory information is one of the factors affecting the judgment of whether signals received by receptors of different sensory modalities belong to the same event. In this article, we introduce the simultaneity judgment (SJ) task for cross-modal combinations (especially olfactory-gustatory): (1) the development of a measurement system for the SJ task using olfactory and gustatory stimuli, (2) SJ tasks for three cross-modal combinations (olfactory-visual, visual-gustatory, and olfactory-gustatory), and (3) SJ tasks for olfactory-gustatory combinations under match/mismatch conditions. Taste and odor are important components of flavor. The results of our studies suggest that, from the perspective of flavor perception, the match or mismatch between taste and odor, oral referral, and the temporal resolution of synchrony perception are interrelated.
Aromas of fresh foods induce attractive feelings, whereas odors of rotten foods induce aversive feelings. The sense of olfaction thus has adaptive physiological and psychological effects. Recent studies have revealed how the brain distinguishes and recognizes a wide variety of odorous molecules. However, it is still unclear how attractive and aversive feelings are created by the brain when we smell odors. In this paper, we introduce recent studies of the olfactory tubercle, an area that belongs to both the olfactory cortex and the ventral striatum, and discuss its roles in attractive and aversive behavioral responses to odor cues in mice.
People choose foods for both biological and cultural reasons. Drawing on my experience of studying Korean society from the standpoint of cultural anthropology, I introduce studies on the cultural aspects of food selection and consider the question "Why do they eat it?" that is directed at Koreans, taking the dog meat diet as an example.
Judgments of facial impressions such as attractiveness, trustworthiness, and dominance affect wide-ranging social outcomes. Research on the perceptual mechanisms behind such impressions is therefore central to the understanding of social cognition. This article provides a brief overview of two disparate approaches to understanding how people form impressions from faces: computational modeling and experimental psychological approaches. The computational modeling approach allows for identifying the links between facial features and impressions in a data-driven manner, and thus for manipulating a face along an arbitrary impression dimension. The experimental psychological approach contributes to an understanding of how our cognitive systems perceive facial impressions by adopting perceptual and cognitive psychological experimental paradigms. In future studies, increased theoretical and methodological cross-talk between the computational modeling and experimental psychological approaches would benefit progress in both fields.
The McGurk effect is a well-known illusion that demonstrates the influence of visual speech on voice perception (McGurk & MacDonald, 1976). The effect is robust, but it also varies across populations. In this paper, I review the results of previous studies on two aspects of this variability: cultural differences between Japanese and English speakers (e.g., Sekiyama & Tohkura, 1991; Sekiyama, 1994), and the difference between individuals with Autism Spectrum Disorder (ASD) and neurotypical individuals (e.g., de Gelder, Vroomen, & van der Heide, 1991). Differential gaze patterns toward a speaker's face seem to account for the cultural differences in the McGurk effect, but not for the difference between neurotypical and ASD individuals. Although further investigation is required to clarify why the McGurk effect is weaker in the ASD population, I argue that atypical multisensory integration in ASD is currently the most promising account.
The direction of another’s gaze provides a strong cue to their intentions and future actions. The perception of gaze is a remarkably plastic process: adaptation to a particular direction of gaze over a matter of seconds or minutes can cause marked aftereffects in the perceived direction of others’ gaze. Computational modelling of data from behavioural studies of gaze adaptation allows us to make inferences about the functional principles that govern the neural encoding of gaze direction. This in turn provides a foundation for testing computational theories of neuropsychiatric conditions in which gaze processing is compromised, such as autism.
Quantifying the size and timing of an event-related potential (ERP) component is an essential part of ERP studies that focus on human information processing in the brain. The present paper overviews currently available methods for quantifying (1) the size of an ERP component (i.e., peak amplitude, local peak amplitude, mean amplitude, and signed area amplitude), (2) the midpoint of an ERP component (i.e., peak latency, local peak latency, and 50% fractional area latency), and (3) the onset of an ERP component (i.e., onset latency, 25% fractional area latency, and 50% fractional peak latency). An important point is that each quantification method has both advantages and disadvantages, and the best method varies with the nature of the data set to be analyzed, such as the morphology of the ERP component of interest, the shape of the waveforms in which the component is embedded, and the level of noise in those waveforms. Researchers therefore always need to select the best method with careful consideration of the compatibility between the quantification method and the data set.
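As a concrete illustration of two of the measures named above, the following Python sketch computes the mean amplitude and the 50% fractional area latency of a simulated ERP-like waveform. The function names, the Gaussian test component, and the 300–500 ms measurement window are hypothetical choices for this example, not specifications from the paper.

```python
import numpy as np

def mean_amplitude(waveform, times, window):
    """Mean amplitude: the average voltage within a measurement window."""
    mask = (times >= window[0]) & (times <= window[1])
    return waveform[mask].mean()

def fractional_area_latency(waveform, times, window, fraction=0.5):
    """Fractional area latency: the time point that divides the rectified
    area under the waveform (within the window) at the given fraction."""
    mask = (times >= window[0]) & (times <= window[1])
    w = np.abs(waveform[mask])          # rectify so negative components also work
    t = times[mask]
    cum = np.cumsum(w)                  # running area, sample by sample
    target = fraction * cum[-1]
    idx = np.searchsorted(cum, target)  # first sample reaching the target area
    return t[idx]

# Simulated P3-like component: a Gaussian peaking at 400 ms, sampled at 1000 Hz.
times = np.arange(0, 800)                                 # time in ms
waveform = 5.0 * np.exp(-((times - 400) / 60.0) ** 2)     # amplitude in microvolts

amp = mean_amplitude(waveform, times, (300, 500))
lat = fractional_area_latency(waveform, times, (300, 500))
print(amp, lat)  # for this symmetric component, the latency falls at 400 ms
```

Mean amplitude and fractional area latency are often preferred over simple peak measures for noisy data, because averaging and area integration are less sensitive to single-sample noise spikes than a point-wise maximum.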