The influence of a "miniature effect" applied to motion images on visually induced self-motion perception (vection) was analyzed in a psychophysical experiment in which 11 observers participated. The original motion image was recorded by a camera mounted on the front of a moving train. As miniature transformations, saturation enhancement, blurring, or both were applied to the original movie. The experiment revealed that the miniaturized movies could induce self-motion perception as strong as that of the original version, although the perceived naturalness of the images under the miniature conditions was significantly reduced. The results suggest that the "naturalness hypothesis," which assumes that more natural images induce stronger vection, has not yet been fully verified.
The relationship between two constraints related to the perception of three-dimensional shape from shading was examined using stimuli with temporal modulation of the shading gradient (circular patches that appear either concave or convex). The temporal frequency was manipulated between 0.5 and 8 Hz. In the experiments, observers reported their perception by choosing among three alternatives: (1) surface deformation, (2) movement of the light source, or (3) flickering. Observers perceived either (1) or (2) when the frequency was below 4 Hz, and (3) when the frequency was above 4 Hz. The balance between the two perceptual modes in the lower frequency range varied when the stimulus configuration (i.e., the number of patches and the direction and homogeneity of the shading gradient) was manipulated. Nevertheless, all results can be accounted for by assuming the existence of the two constraints, together with an additional assumption about homogeneity.
A Brain-Machine Interface (BMI) is a technology for interacting with computers by means of brain activity alone. The electroencephalogram (EEG) is used in many cases, and conventional BMIs have been operated by individual subjects using averaged brain signals. Improving the information transfer rate and demonstrating new application concepts are both needed. In this paper, aiming to realize new BMI applications with an improved information transfer rate, we focus on population EEG. P300 responses were measured simultaneously from three subjects using a visual oddball paradigm, and the detection accuracy of the population P300 was subsequently studied with nine subjects. Machine-learning classification showed that the accuracy with a population of subjects was remarkably higher than that with an individual subject. This technique might in the future be applied to studies in social psychology, neuromarketing in economics, life-logging, CSCW in information systems engineering, entertainment, and so on.
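Much of the benefit of pooling EEG across subjects comes from noise averaging. A minimal illustrative sketch on synthetic data (the epoch length, P300 amplitude, and noise level below are hypothetical, not the paper's recordings) shows how grand-averaging epochs across nine subjects attenuates uncorrelated noise roughly by the square root of the number of subjects:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_samples = 9, 200
t = np.linspace(0.0, 0.8, n_samples)  # 0-800 ms epoch

# Hypothetical epochs: a P300-like bump at ~300 ms
# plus independent per-subject noise
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = p300 + rng.normal(0.0, 5.0, (n_subjects, n_samples))

# Grand-averaging across subjects attenuates uncorrelated noise
# roughly by sqrt(n_subjects)
grand_avg = epochs.mean(axis=0)
residual_single = np.std(epochs[0] - p300)   # noise in one subject's epoch
residual_pop = np.std(grand_avg - p300)      # noise in the population average
print(residual_single, residual_pop)
```

With nine subjects, the residual noise of the grand average is roughly a third of a single subject's, which is consistent with the reported accuracy gain for population P300 detection.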
In recent years, human-pet interaction has been an emerging research topic aimed at realizing a better relationship between humans and their pets. Among pets, dogs have long been best friends of human beings, and technology is needed to understand what dogs are thinking. In this paper, we focus on the Event-Related Potential (ERP) of dogs. In the first experiment, we used suddenly presented auditory stimuli to confirm that an ERP can be recorded from the scalp of a dog. In the following experiment, we performed similar measurements using a variety of voices, including that of the dog's owner. As a result, we identified a P300-like potential 200-400 ms after auditory stimulation. Furthermore, the effect of the owner's voice was confirmed to be significantly different from that of the other stimuli.
In this study, we investigated the effect on sound localization of self-motion perception induced by visual stimuli (vection) and/or vestibular information. To induce vection, we presented random dots moving laterally on a wide screen. To present vestibular stimuli, we used a three-degree-of-freedom (3-DOF) motion platform that tilts to the right or left. Sound stimuli were presented from behind the screen while the observer perceived self-motion induced by the visual stimuli and/or the platform. The observer's task was to point to the position of the sound image on the screen. The results showed that the perceived sound position shifted in the direction opposite to the perceived self-motion induced by the visual information, regardless of the direction of the vestibular information. Moreover, this tendency was observed on only one side of the median sagittal plane: the side toward which the visual stimulus moved. These results indicate that auditory spatial perception is altered by self-motion arising from the coexistence of visually induced self-motion and vestibular information.
Recently, several VR systems that use wind to enhance users' sensation of presence have been built. In typical systems, simple fans are used as wind sources, and these sources are arranged rather sparsely. However, it is not known whether wind direction is presented precisely enough with such a configuration. To find guidelines for the optimal arrangement of wind sources, we examined the properties of wind direction perception at the head. Using a single fan, we measured the just-noticeable difference (JND) in perceived wind direction in front of the face with the method of constant stimuli. Among the various parameters that might affect wind direction perception, we focused on the effects of the stimulated part of the head and of wind velocity. The results imply that the alignment of the user's head is important and that increased wind velocity might improve the perception of wind direction. We also discuss important points in designing a fan-based wind display.
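For reference, a JND from a constant-stimuli procedure is typically read off the psychometric function. A minimal sketch with made-up response proportions (the offsets and probabilities below are hypothetical, not the measured data) estimates it as half the interval between the 25% and 75% points:

```python
import numpy as np

# Hypothetical constant-stimuli data: wind-direction offsets (deg) and
# the proportion of "wind came from the right" responses at each offset
offsets = np.array([-20.0, -10.0, -5.0, 0.0, 5.0, 10.0, 20.0])
p_right = np.array([0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95])

# JND as half the distance between the 25% and 75% points of the
# psychometric function, read off by linear interpolation
x25 = np.interp(0.25, p_right, offsets)
x75 = np.interp(0.75, p_right, offsets)
jnd = (x75 - x25) / 2.0
print(jnd)
```

In practice a cumulative-Gaussian or logistic fit would replace the linear interpolation, but the JND definition is the same.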
This study investigated whether CNV-like variations are elicited in the electroencephalogram (EEG) when people recall continuous human motion. It is known that continuous human motion contains certain significant postures, and that the segments delimited by these postures attract human attention. Since this situation is similar to that of event-related brain potential (ERP) experiments, we expected that brain-wave variations like the Contingent Negative Variation (CNV) would also be elicited during motion recall. In this study, we found that negative variations tend to precede segment boundaries defined by manually selected significant postures.
We measured the steady-state visual evoked potential (SSVEP) using contrast-modulated flickers of photographic scenes, for application to augmented reality (AR) systems. Ten participants observed contrast-modulated flickers produced by alternating an original photographic image with a contrast-degraded version (0, 20, 40, 60, or 80% of the original contrast) at a rate of 5, 7.5, or 10 Hz. Using short-time FFT (SFFT) analysis with a 1-s window, we found that an SSVEP was observed even with the 80%-contrast flickers. The strength of the SSVEP depended on contrast and stimulus frequency: it was stronger at higher contrast and lower frequency. We then investigated the effects of eccentricity and applied linear discriminant analysis (LDA) to test the feasibility of an SSVEP-based brain-machine interface (BMI) using the contrast-modulated flickers. Gaze could be estimated from the 20%-contrast flickers using LDA on the SSVEP at the stimulus frequency, and from the 80%-contrast flickers using LDA on 20-dimensional EEG data (1-20 Hz). These results suggest that contrast-modulated flickers of photographic scenes can elicit SSVEPs and could be applied to AR-based BMIs in the future.
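The core of such an analysis is extracting spectral amplitude at the flicker frequency from a short EEG window. A minimal sketch on synthetic data (the sampling rate, signal amplitude, and noise level below are hypothetical) shows the stimulus-frequency bin standing out in a 1-s FFT window, as in the SFFT analysis described above:

```python
import numpy as np

fs = 250                 # hypothetical sampling rate (Hz)
f_stim = 10.0            # flicker frequency (Hz), one of the rates used
t = np.arange(fs) / fs   # one 1-s analysis window

# Synthetic EEG: an SSVEP component at the flicker frequency plus noise
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, fs)

# Amplitude spectrum of the 1-s window (1 Hz frequency resolution)
spectrum = np.abs(np.fft.rfft(eeg)) / (fs / 2)
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The bin at the stimulus frequency dominates the spectrum
peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)
```

Amplitudes at the stimulus frequencies, collected per trial this way, would form the feature vectors fed to the LDA classifier.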
As a new haptic device for daily use, we propose a pillow-shaped haptic device using a pneumatic actuator: the Air-pillow telephone. The Air-pillow telephone allows an intimate couple separated by distance to share each other's sense of touch and presence as if they were together. We have developed a prototype system with an air bag driven by a piston-cylinder mechanism. The user's head motion affects the air pressure of the air bag, and this pressure signal is transmitted bilaterally in real time via the Internet to actuate the partner's air pillow. This paper describes the concept and applications of the Air-pillow telephone, as well as the implementation of the prototype system and experimental results in a commercial Internet environment.
In recent years, there has been a growing need for accurate, interactive surgical training systems with realistic sensation. However, such simulations require a large amount of computation. Our goal is to construct an interactive training system that allows doctors to practice ablation operations. In this paper, we propose an efficient deformation and ablation simulation using online re-meshing and edge-based nonlinear finite element models. In the simulation, the meshes of elastic objects are refined and cut adaptively based on the von Mises stress. The von Mises stress can be computed efficiently owing to the structure of the rectangular tetrahedral mesh, in which each element has three orthogonal axes or planes. In experiments, the effectiveness of our approach was confirmed by evaluating the accuracy of the deformation and the von Mises stress as well as the computation time.
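For reference, the von Mises stress that drives the adaptive refinement and cutting is the standard scalar invariant of the deviatoric stress tensor. A minimal standalone sketch (a direct tensor computation, not the paper's optimized edge-based scheme):

```python
import numpy as np

def von_mises(sigma):
    """Von Mises equivalent stress from a 3x3 Cauchy stress tensor."""
    dev = sigma - (np.trace(sigma) / 3.0) * np.eye(3)  # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

# Sanity check: under uniaxial tension the von Mises stress
# equals the applied normal stress
uniaxial = np.diag([100.0, 0.0, 0.0])
print(von_mises(uniaxial))
```

Comparing this scalar per element against a threshold gives a simple criterion for deciding where to refine or cut the mesh.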
In this paper, we verify the hypothesis that "using the features of Augmented Reality (AR) has positive effects on users' ability to memorize information." This hypothesis is based on the following two features. One is a feature of AR: "AR can provide information associated with specific locations in the real world." The other is a feature of human memory: "humans can easily memorize information if it is associated with specific locations." To verify this hypothesis, we conducted three user studies. As a result, significant differences were found between the situation in which information is associated with the location of the target object in the real world and that in which information is associated with an unrelated location.
The senses of presence and verisimilitude are affected differently by temporal asynchrony between the audio and visual components of audio-visual content. To investigate whether this result holds more generally, we conducted an experiment using a clip of Western orchestral music. The results revealed that the sense of verisimilitude is more sensitive to audio-visual synchrony than to display size, whereas the sense of presence is more sensitive to spatial size than to temporal properties. These findings correspond well with those of a previous study, which indicated that the sense of verisimilitude is distinguishable from the sense of presence. Furthermore, we discuss important considerations related to the measurement of Kansei information such as the senses of presence and verisimilitude.