Previous studies have shown that the time interval marked by the onsets of two successive pure-tone bursts is perceived to be longer when the second sound marker is lengthened. The present study examined whether this phenomenon appears in a more natural setting in which the time interval is marked by instrumental sounds with complex temporal and spectral structures. Real piano sounds and synthesized sounds that simulated either just the temporal structure of the piano sound or both its harmonic and temporal structures were used as sound markers. Lengthening the second marker increased the perceived duration of the interval, as in previous studies, but only in limited cases, and the effect did not occur in an experiment in which only the synthesized piano sounds were used. Thus, the effect of marker duration was weaker with this new set of sounds. Characteristics of piano sounds that were not captured in the synthesized sounds seem to have played an important role in duration perception.
The peak-interval procedure is an effective tool for measuring interval-timing behavior in rats. However, its stability across sessions has not been previously examined. Therefore, the purpose of this study was to examine behavioral stability across sessions when rats were trained using the peak-interval procedure. To this end, we ran rats on a peak-interval 30-s schedule for 100 sessions. Responses during 90-s probe trials were classified into 3-s bins for each rat and averaged over blocks of five sessions in order to examine stability between sessions. For each five-session block, we computed the relative response rate, fitted a Gaussian curve to it, and derived the goodness of fit (R²), the peak time of the curve, and its kurtosis. These indices were stable across blocks, with Cronbach's coefficients of α ≥ 0.86 (R² = 0.98, peak time = 0.93, kurtosis = 0.86). This indicates that the peak-interval procedure is a highly reliable tool for measuring interval-timing behavior in rats.
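The peak-time estimation step described above can be illustrated with a minimal sketch in Python (NumPy). Note that the function name, bin layout, and idealized response curve below are assumptions chosen for the example, not the study's actual data or analysis code; the sketch exploits the fact that a Gaussian is a parabola in log space.

```python
import numpy as np

def gaussian_peak_time(bin_centers, rel_rate):
    """Estimate the peak time of a Gaussian-shaped response curve.

    Fits a parabola to the log response rates (a Gaussian becomes a
    parabola after taking logs) and returns the vertex, i.e. the peak time.
    """
    mask = rel_rate > 0  # log is undefined at zero rates
    c2, c1, _ = np.polyfit(bin_centers[mask], np.log(rel_rate[mask]), 2)
    return -c1 / (2.0 * c2)

# Hypothetical probe-trial data: 90-s trials in 3-s bins, true peak at 30 s.
bins = np.arange(1.5, 90.0, 3.0)                      # bin centers
rate = np.exp(-(bins - 30.0) ** 2 / (2 * 8.0 ** 2))   # idealized Gaussian curve
print(round(gaussian_peak_time(bins, rate), 1))       # -> 30.0
```

With real, noisy response rates one would typically use a nonlinear least-squares fit instead, but the log-parabola shortcut keeps the idea of "peak time as the vertex of the fitted curve" visible in a few lines.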
The present study examined the effect of performance feedback on interval timing. Time production tasks with a start–stop procedure were performed in two experiments, each consisting of three phases. Experiment 1 investigated whether the feedback effect persisted after feedback was withdrawn (learning effect). Participants had to produce 10-second intervals in all three phases, but received feedback only in the second phase. Participants were randomly assigned to one of four feedback groups (accurate, 80%-feedback, 125%-feedback, no-feedback). Produced intervals shifted according to the feedback in the second and third phases, demonstrating a learning effect on interval timing. Experiment 2 examined whether the feedback effect generalized to a longer target duration in the third phase. The feedback effect was also observed when participants had to produce 30-second intervals in the third phase, suggesting that the feedback effect generalizes to different intervals. Within the framework of an internal clock model, these results are interpreted as indicating that feedback affects not only the specific temporal representation but also other representations in reference memory.
It has been reported that the perceived duration of high-frequency stimuli is shorter than that of low-frequency stimuli, even when the physical durations are equal. We call this phenomenon the repetition effect on time perception. This study examined the effect using intra-modal and cross-modal tasks with the magnitude estimation method. In the intra-modal tasks (Exp. 1), simple visual stimuli with two different shapes were used as standard and comparison stimuli, and the repetition effect was observed even when the standard and comparison stimuli were interchanged. In the cross-modal tasks (Exp. 2), no repetition effect was observed when visual and auditory stimuli were used as the standard and comparison stimuli, respectively, but the effect was clearly observed when the auditory and visual stimuli were exchanged. These results support the auditory dominance hypothesis in temporal information processing. The repetition effect was explained by the inhibition hypothesis.
Human time perception is known to be distorted by adaptation to visual motion stimuli. Many studies have investigated this distortion and its underlying mechanisms, but their results have been inconsistent. This may be because they used different adaptation durations, as adaptation duration is known to affect the level of visual processing involved. Previous studies have elucidated how time perception is distorted after adaptation lasting a few seconds or much longer, but have not investigated the distortion after shorter durations. We therefore investigated how time is distorted, and how time perception mechanisms are affected, under short adaptation durations (e.g., <2 s). We found that a short adaptation duration produced a characteristic time distortion. In addition to orientation selectivity, we confirmed a transient effect on time perception distortion. These results suggest that visual processing concerning orientation selectivity contributes to time perception, and that transient components of the adaptation effect are useful for investigating the mechanisms of adaptation-based distortion of perceived time.
Recent studies have proposed that there exist memory components specific to each sensory modality for time perception. Moreover, several studies have suggested that memory for duration is more efficient and robust for visual than for auditory stimuli, whereas a majority of studies have reported auditory dominance over vision in temporal perception. The present study, using a time reproduction task with auditory, visual, and audio-visual stimuli, tested these memory components by manipulating the retention delay between the end of the target presentation and the beginning of reproduction. If vision dominates sensory-specific memory for duration, performance with visual stimuli should be more accurate and stable under longer delays than performance in the other modality conditions. Results showed that reproduced durations were longer and more unstable under longer delays than under shorter delays in all modalities. Moreover, we found that auditory stimuli were reproduced longer and more stably than visual stimuli. These findings support neither the existence of modality-specific memory components nor visual dominance.
Throughout the history of research on animal learning, it has been widely acknowledged that the temporal relationship between events exerts a critical influence on the acquisition of a conditioned response. Until recently, however, no explicit and systematic studies had investigated how animals learn the temporal relationship itself. In this article, we first review some basic functions of temporal information in classical conditioning. We then focus on one influential learning theory, the temporal coding hypothesis, which posits that animals automatically encode temporal relationships between events and express learned behaviour through the integration of multiple temporal relationships acquired across contexts. After reviewing basic results supporting the temporal coding hypothesis, we present an alternative explanation of some temporal coding-like phenomena on the basis of the AESOP model, combined with the potentially different contributions of motivational and sensory US representations in higher-order conditioning. In a second article (Fujimaki, Shimbo, Matsui, Shi, & Kosaki, 2015), we discuss interval timing in operant conditioning and the neural substrates of timing behaviour.
In the previous article (Kosaki, Shi, Matsui, Shimbo, & Fujimaki, 2015), we reviewed studies of how animals represent temporal information in classical conditioning. In this article, we first review various issues pertaining to interval timing in operant conditioning, with the main goal of providing a concise summary of procedural and theoretical developments in interval timing research. In the remainder of the article, we review recent empirical findings and theories about the neural substrates underlying timing mechanisms in both classical and operant conditioning, and discuss how the hippocampus and striatum might contribute to different aspects of temporal information processing during conditioning. We then argue that the potentially different timing mechanisms implemented by the hippocampus and the striatum, as evidenced by the recent discovery of time cells in the hippocampus and by consistent findings on the involvement of the striatum in interval timing, might each constitute part of the functionally dissociable multiple memory systems described elsewhere in the literature, particularly in the context of spatial learning and the organisation of voluntary behaviour.
Human observers can rapidly extract summary information about the features of many items. Regarding passive perception, many studies have explored such abilities, including the formation of statistical summary representations across many different feature dimensions in various sensory modalities. In real life, however, human operators actively interact with the environment and deal with specific objects or systems. It is conceivable that operators can obtain useful information about the condition of an operated system by accumulating information over time about the relationship between their operations and the system's responses. In particular, the variance of operation-response noise is important information because it can indicate abnormal and possibly high-risk conditions of the system. In this study, we conducted an experiment on discriminating the variance of operation-response noise to investigate the information-processing mechanisms underlying the perception of such noise (Experiment 1). We also examined the effect of active observation on variance discrimination (Experiment 2) by having observers passively view the same visual motion stimuli as in Experiment 1. The results suggested that, compared with passive observation, active observation facilitated the perception of the variance of visual motion. In addition, the just-noticeable difference first fell and then rose as pedestal variance increased, producing a ‘dipper' function in both conditions.
Our visual worlds are composed of many objects. Despite the fruitful accumulation of object recognition studies, it is still unclear how we see entire visual scenes. We conducted three experiments to verify the application of Edelman's (1999) object recognition model to scene recognition. Experiment 1 showed participants' superior performance in recognizing target scenes from novel views located near experienced views. This was due to the combination of multiple views, not to comparing the novel view with the single most similar experienced view (the view combination effect). Experiment 2 indicated that these processes are driven by the "summation" of multiple stored previous experiences, not by comparison to a prototype. Experiment 3 tested whether these scene recognition processes would apply in another context, using a spatial bisection task. Performance did not differ as a function of prior view experience. These results emphasize the difference between recognition and the perceptual processes involved in bisection tasks. Thus, our results suggest that the psychological processing of visual scenes includes various phases and may be subject to task-dependent effects.
One of the key issues in current cognitive neuroscience is how language acquisition and brain development co-occur in early childhood. A systematic observation of functional brain development in both the first language (L1) and a second/foreign language (FL) is therefore crucial. Using ERPs and NIRS in a cohort study of Japanese school-aged children, we have clarified the process of word learning. The results include similarities in brain responses between L1 and FL, the important role of the right hemisphere in the initial stage of learning, and cortical sex differences that emerged with proficiency. We also point out the need to investigate genetic factors together with language and brain measures in order to elucidate more fine-grained characteristics of language learning in each individual.
The Noh Mask Test measures social skill ability. The Noh mask image stimuli of this test were compared with the human face photographs of Matsumoto & Ekman and with face drawings. A multidimensional scaling analysis of healthy participants' comparative judgments of these three stimulus sets revealed a two-dimensional solution involving pleasure and arousal. The face drawings were located along the X and Y axes. Whereas the human face stimuli lacked the first quadrant, the Noh mask images formed a smooth oval covering all four quadrants. Because it represents subtle expressions, the Noh Mask Test was considered to capture the continuity of feelings. The test was compared with the Rorschach Test and the MMPI, and test–retest reliability was also examined. Outpatients with schizophrenia in day care took the Noh Mask Test and were followed up four years later. The Noh Mask scores of patients who were employed four years later were more similar to those of healthy employees than to those of unemployed patients. These findings indicate that the Noh Mask Test can be used to examine social skills.
In this review, we examined: (1) why people use faces as emoticons, (2) the functions of emoticons compared with emoji, (3) when we need “face-like expressions”, and (4) the meaning of “face-like expressions” in communication. The review implied that “face-like expressions” are easy to understand as figures and have stronger effects on emotion than sentences do. Owing to these features, emoticons became the most important way to decorate e-mail visually, and an effective expression that defined the mood of communication and conveyed politeness. On the other hand, in unaddressed expression, such as on Twitter, emoticons did not serve politeness functions; rather, they expressed the sender's emotion more directly, in addition to their decorative role.
In interpersonal communication, people exchange information, feelings, and intentions not only through verbal messages but also through non-verbal messages such as facial expressions, gestures, posture, and tone of voice. In particular, facial expressions are considered to play an important role among non-verbal media. Focusing on the importance of facial expressions in face-to-face communication, social communication robots that can display facial expressions have been developed. This paper first outlines the android robot SAYA, which can express human-like facial expressions. Then, a face robot that resembles a real female and is able to mimic her facial expressions is introduced. Finally, a receptionist system and an educational system are introduced as examples of practical applications of SAYA.
If the minds of hospitalized children could be evaluated more quantitatively and easily, the children could receive medical treatment and care better suited to their states of mind. In the process of developing tools that focus on children's minds under the concept of Child Life Design, we realized that we should create a Kansei (affective) evaluation tool. In this paper, I describe how I arrived at this idea, what specific elements should be incorporated into the evaluation tool, and what effects the tool has had.
As opportunities to use computers in everyday life increase rapidly, more natural relationships between computers and human beings are needed in place of the traditional ones. Under these circumstances, it is difficult to develop new products with conventional thinking, and manufacturing innovation through new thinking has become necessary. One way to create new ideas that secure the quality of the relationship between a product and its users is to involve many participants in manufacturing and to integrate knowledge across different fields. To realize such integrated manufacturing, we aim to construct a cognitive sharing model in which participants share each other's perceptions and take advantage of one another's expertise through that shared recognition. In this paper, we propose a process implemented as the project-based educational activity, the “KADEN Project”.
This study concerns the development of picture spaces created by projection, focusing on the relationships among person, picture, and space. I found that pictures can function as a spatial material, and created environments that invite people to take action. In practice, we have created such spaces in a pediatric hospital, in a school for children with disabilities, and at a tea party. Focusing on illusion as a feature of the image (“Phantom”), our study is also a proposal for evolving the functionality of pictures.
Adaptive visual processing requires us to quickly detect the presence of humans in a visual scene. Indeed, faces and human body forms have been shown to be particularly effective at capturing our attention, which suggests the key roles these two cues play in how we detect others. Functional neuroimaging studies have revealed three face-selective and three body-selective regions in the human cortex. Each of these body-selective areas is located in close proximity to one of the three face-selective areas, and the corresponding areas function similarly. Yet, until quite recently, few studies had addressed how we detect faces or human presence against natural backgrounds. This paper reviews studies on the earlier stages of the visual processing of faces and human presence, and includes proposals for further research.
In the field of visual information processing of facial expressions, it has been claimed that angry faces are processed faster than other expressions, a phenomenon referred to as the anger superiority effect. Although quite a few studies have yielded contradictory findings about the high-priority processing of angry faces, the effect remains widely accepted by researchers because of its consistency with evolutionary notions such as adaptation and survival. In the present study, the more efficient detection of angry faces revealed by visual search experiments is reexamined, especially in terms of low-level visual features and search asymmetry.
Most research on facial expressions has relied on photographic images, that is, “static” information. However, affective facial expressions are dynamic in nature. In this article, the author proposes a morphological approach for elucidating the complex spatiotemporal information embedded in dynamic facial expressions. To extract the components of dynamic facial expressions, a combination of the methodologies of geometric morphometrics and a multiway decomposition method known as PARAFAC was applied to sequential, three-dimensional landmark coordinate values obtained using a facial motion capture system. The results suggest that dynamic facial expressions consist of plural components that differ in spatiotemporal characteristics (i.e., long- and short-term facial deformations).
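A standard first step in the geometric-morphometrics pipeline is Procrustes superimposition, which removes translation, scale, and rotation from landmark configurations before any decomposition. The following NumPy sketch shows the classic SVD-based solution; the function name and the synthetic landmark data are assumptions for illustration, not the author's actual pipeline.

```python
import numpy as np

def procrustes_align(X, Y):
    """Superimpose landmark set Y onto X, removing translation, scale,
    and rotation (standard orthogonal Procrustes solution via SVD).
    X, Y: (n_landmarks, 3) coordinate arrays."""
    Xc = X - X.mean(axis=0)            # center both configurations
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)       # normalize to unit centroid size
    Yc = Yc / np.linalg.norm(Yc)
    U, s, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                         # rotation minimizing ||Yc @ R - Xc||
    return s.sum() * (Yc @ R)

# Synthetic check: Y is a rotated, scaled, and translated copy of X.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))       # 10 hypothetical 3-D landmarks
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Y = 2.0 * X @ Q + np.array([1.0, -2.0, 3.0])
Xn = (X - X.mean(axis=0)) / np.linalg.norm(X - X.mean(axis=0))
print(np.allclose(procrustes_align(X, Y), Xn))  # -> True
```

After superimposition, the aligned coordinates from all time frames and expressions can be stacked into a three-way array (time × landmark coordinate × expression) for a PARAFAC-style decomposition.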
This note is a short report on the activity of the Young Researchers Committee of the Japanese Psychonomic Society (JPS), launched in late 2013. As the highlight of the committee's activity in 2014, we held a satellite oral session at the 33rd annual meeting of the JPS. Eight finalists selected from 28 entries presented their studies, and the finalist who received the most votes was named Young Psychonomic Scientist of the Year 2014. The committee will continue these activities in order to promote research and education in psychonomic science.
The Young Researchers Committee of the Japanese Psychonomic Society conducted a survey of postdoctorates in the field of psychonomic science. The results showed some discrepancies from national surveys: postdoctorates in psychonomic science tend to hold research positions, especially in academia, at higher rates.
In science education, it is crucial to create teaching materials that are directly linked to students' individual experiences. As part of “The Japanese Psychonomic Society Committee for Developing Teaching Materials for High School Students,” we used computer graphics technology to develop a visualization system called “Face Homunculus Viewer” (FHV), which provides an opportunity for students to gain a deeper understanding of the relationship between brains and minds. Additionally, we conducted a workshop on human touch perception using FHV. In the workshop, we defined the two-point discrimination threshold of touch as “tactile acuity,” and students measured their tactile acuity with methods used in psychological experiments. Our contribution is the development of an interactive system for visualizing differences in tactile acuity between bodily sites within each individual, as well as differences between individuals. We believe that this study indicates a new direction for science education in which computer graphics can be applied.
Functional magnetic resonance imaging (fMRI) is a method for studying the neural correlates of cognition, including perception, memory, learning, thinking, and emotion, by measuring hemodynamic responses using the principle of nuclear magnetic resonance. By estimating changes in the balance between oxy- and deoxy-hemoglobin in the blood, BOLD signals can be calculated to obtain brain images that indirectly indicate local activation induced by the current task. It should be noted that such activation images are not direct evidence of neuronal activity. We also briefly discuss imaging parameters, image analyses using SPM, and constraints on the participant.