This paper describes our studies on the perception of virtual object locations. We explore factors related to depth perception, especially the interplay of image fuzziness and binocular disparity, which we measured using a three-dimensional display. Image fuzziness is ordinarily seen as an aerial-perspective cue for distant objects in two-dimensional displays. We found that it can also serve as a source of depth representation in three-dimensional display space. We also investigated the effect of fuzziness on a manipulation task in a virtual environment. This paper presents the results of a series of experiments.
We studied spatial cognition in the human postural control system by comparing 3-D and 2-D images. We measured the body sway induced in subjects by repeatedly reversed rotation of 3-D and 2-D images. With the repeatedly rotated images, the sagittal body sway contained a component corresponding to the motion of the visual stimuli. Comparing 3-D with 2-D images, body sway was induced more strongly by the motion of wide-field 3-D images than by 2-D images.
Perceived surface slant produced by size disparities was measured with a tactile matching method. Horizontal or vertical size disparity was introduced into the whole, or a part, of a 60°-wide random-dot stereoscopic display. For the 60° display, the slant produced by vertical size disparity was opposite to that produced by horizontal size disparity. For a smaller display presented with a zero-disparity surround, the slant produced by vertical size disparity decreased, whereas that produced by horizontal disparity increased. The results suggest that vertical size disparity is extracted globally for the perception of surface slant.
We studied changes in convergence eye movements using a 3D display in normal young subjects. 3D images were presented on a head-mounted display with two LCDs. Eye movements were measured with the magnetic search-coil method, using eye coils embedded in contact lenses. Subjects performed 120 trials of a disparity task in about 25 minutes. Their ability to converge on a target at four different positions in depth was compared before and after the task. Motor learning or fatigue appeared to be related to the changes in ocular convergence function.
Image conversion from 2D into 3D images by the "Modified Time Difference" (MTD) method is proposed. The method converts ordinary 2D images into binocular-parallax 3D images according to the detected movement of objects in the images, so automatic, real-time conversion can be realized. We implemented the MTD method in a single LSI, which made the 2D/3D conversion board very compact.
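To illustrate the time-difference idea, the sketch below is our own simplified reconstruction, not the LSI implementation: `estimate_shift`, `mtd_pair`, and all parameters are hypothetical. It detects the dominant horizontal motion between frames, then presents the current frame to one eye and a delayed frame to the other, choosing the eye from the motion direction so the time difference appears as binocular parallax.

```python
import numpy as np

def estimate_shift(prev, curr, max_shift=8):
    """Estimate the dominant horizontal motion (in pixels) between two
    grayscale frames by exhaustive 1-D matching: find the shift of the
    previous frame that best aligns it with the current one."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(prev, s, axis=1)  # shift prev rightward by s
        err = np.mean((shifted.astype(float) - curr.astype(float)) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best  # positive = rightward motion

def mtd_pair(frames, t, max_delay=3):
    """Return a (left, right) image pair for frame t. The delay is kept
    small for fast motion (illustrative choice) so the parallax produced
    by the time difference stays roughly constant."""
    shift = estimate_shift(frames[t - 1], frames[t])
    delay = min(max_delay, max(1, abs(shift) // 2))
    delayed, current = frames[max(0, t - delay)], frames[t]
    if shift >= 0:          # rightward motion: delayed frame to the left eye
        return delayed, current
    return current, delayed  # leftward motion: delayed frame to the right eye
```

With no motion the two eyes receive nearly identical frames and the scene stays flat; the depth impression arises only for moving objects, which matches the motion-dependence described above.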
To develop a human interface for virtual environments, we have constructed a Space Interface Device for Artificial Reality (SPIDAR), which allows us to manipulate virtual objects directly, just as in real space. SPIDAR both measures the movement of the user's fingertip and provides force display. Since proper force feedback depends on accurate position measurement, in this paper we analyze the possible causes of position measurement error and propose an algorithm that corrects the error and improves position measurement precision.
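SPIDAR recovers the fingertip position from the lengths of strings anchored at known points on a frame. The sketch below (a minimal reconstruction, not the paper's algorithm; `fingertip_position` and the anchor layout are hypothetical) solves the resulting trilateration problem by Gauss-Newton least squares, which also suggests where length-measurement errors propagate into position error.

```python
import numpy as np

def fingertip_position(anchors, lengths, iters=50):
    """Estimate the fingertip position p from measured string lengths by
    minimizing sum_i (|p - a_i| - l_i)^2 with Gauss-Newton iterations.

    anchors: (n, 3) array of string attachment points on the frame
    lengths: (n,) array of measured string lengths
    """
    p = anchors.mean(axis=0)                 # start near the frame center
    for _ in range(iters):
        d = p - anchors                      # (n, 3) vectors anchor -> p
        r = np.linalg.norm(d, axis=1)        # current distances to anchors
        J = d / r[:, None]                   # Jacobian of distances w.r.t. p
        res = r - lengths                    # length residuals
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        p = p - step
        if np.linalg.norm(step) < 1e-9:      # converged
            break
    return p
```

With four non-coplanar anchors (e.g. alternating corners of a cubic frame) the problem is overdetermined in three unknowns, so small, independent length errors are averaged out rather than passed straight through to the position estimate.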
The effect of adaptation at the level of relative motion on the motion aftereffect (MAE) was examined. In Experiment 1, the MAE induced by surrounding motion was tested by measuring MAE duration. In Experiment 2, the MAE was measured with a nulling technique, with the adapting and test stimuli presented in the same area. In both experiments, effects of adaptation at the relative-motion level were clearly found. These results indicate the existence of adaptation in relative-motion-detecting mechanisms and its crucial role in the MAE, supporting the idea that the MAE simultaneously reflects adaptation at both the local and the relative motion processing levels.