There are several ways for human operators and machines to communicate. Most conventional tele-existence systems use either (1) body and limb movements or (2) special input devices such as a keyboard and mouse as the human interface. Instead of these methods, we have focused on biological signals as a means of operating machines by "thinking". In this study we take EOG (Electro-OculoGraphy) as an example and use it to manipulate a robot eye. The experimental results demonstrate the feasibility of using biological signals in tele-existence systems, and future applications are discussed.
We have developed a head-mounted display (HMD) equipped with infrared LEDs and a CCD camera for each eye. The purpose of the HMD is to analyze visual functions, for instance eye movement, eyeblinks, or pupil diameter, while the wearer watches visual images, and to serve as a new device for human-computer interaction employing such visual functions. In this report, first, the specification of the HMD and the real-time processing of the eye-image signal derived from the CCD camera are introduced. Then, applications of the HMD are described.
We describe a method to detect hand position, posture, and finger bending using multiple camera images. Stable detection of position and posture can be performed by using skeleton images. We confirmed their stability through experiments. This system can be used as a user-interface device in a virtual environment, replacing glove-type devices and overcoming most disadvantages of contact-type devices. Future work includes development of a parameter-adjustment mechanism for hand movement and verification of its usability as a gesture-based man-machine interface system.
Recently, the quality of 3-D virtual spaces has become an important issue in virtual reality research. To generate more photorealistic 3-D virtual spaces, methods based on two-dimensional photo images should also be considered in addition to approaches based on typical three-dimensional models. In this paper, two approaches based on 2-D photo images are discussed in which 3-D spaces are represented by manipulating 2-D photo images without any detailed 3-D information.
We have developed a new scene-synthesizing method which makes new images by assembling parts of source images. Those parts are chosen by their rays and target points in the field of view. Our method needs neither 3-D reconstruction of the scene nor correspondence among source images. Since a fine image at any viewpoint and in any viewing direction can be synthesized, we can easily construct virtual environments and perform powerful walkthrough operations in them.
Virtual reality technology has been recognized as very effective for manipulating a remote robot arm placed in a hazardous environment. It is, however, very difficult to construct a virtual space automatically, even with state-of-the-art ranging technology. This paper discusses a method of applying motion stereo to the interactive construction of a virtual environment using a CCD camera mounted on a robot arm.
The ratio of the apparent size of a virtual object to that of an actual object was measured. A concrete and an abstract object were presented in a virtual or a real space constructed using a rectangular parallelepiped with a viewing window. Virtual objects were modeled and rendered using a computer and presented on a CRT screen. Five subjects who could easily view a 3D image participated in the experiment. The distance from the eyes to the object was varied among 50, 75, 100, 130, and 175 cm. The results showed that the apparent size of virtual objects was expressed by an exponential function of distance, and that the difference in the apparent size ratio between concrete and abstract objects was small.
The periodical inspection of nuclear power plants is indispensable to their operation. It requires, however, a large workforce with a high degree of technical skill in assembling and disassembling various sorts of machines in a hazardous environment. This paper describes the whole structure of this training system, shows how a Petri net can be used to control the behavior of the objects in the virtual environment, and explains the method for automatically identifying what should be executed next whenever the trainee wants to know.
In developing a machine-maintenance training system based on virtual reality technology, a Petri net is a useful tool to express the state of objects in the virtual environment. However, the workload of constructing a huge Petri net is too burdensome without an appropriate supporting system. In this study, we have developed a support system for constructing Petri nets using Tcl/Tk. This system is based on a GUI (graphical user interface), and experiments have shown that the support system can greatly reduce the workload of Petri net construction.
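The role a Petri net plays in tracking object states in such a training system can be sketched as follows. This is a minimal illustration only: the place and transition names, and the sample maintenance constraint, are hypothetical and not taken from the actual system.

```python
# Minimal Petri-net sketch: places hold tokens, and a transition may
# fire only when all of its input places hold enough tokens.
class PetriNet:
    def __init__(self, marking):
        # marking: place name -> token count
        self.marking = dict(marking)
        self.transitions = {}  # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Hypothetical example: a cover must be removed before a bolt
# can be loosened, as in a disassembly training scenario.
net = PetriNet({"cover_on": 1, "bolt_tight": 1})
net.add_transition("remove_cover", {"cover_on": 1}, {"cover_off": 1})
net.add_transition("loosen_bolt", {"cover_off": 1, "bolt_tight": 1},
                   {"cover_off": 1, "bolt_loose": 1})

net.fire("remove_cover")
net.fire("loosen_bolt")
```

Under this representation, "what should be executed next" is simply the set of currently enabled transitions, which is one way such a system could answer the trainee's query.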
The Hyper Hospital is a medical care system constructed in a distributed manner on an electronic information network, using virtual reality as the principal human interface. In the present study, we report an attempt to extend our Hyper Hospital system to the Internet and the intranet. We used the WWW (World Wide Web) as the interface to the Internet and implemented our Hyper Hospital system on a WWW server. The results showed that the WWW version of our Hyper Hospital system can provide an end-user-reconfigurable virtual world to users of the Internet worldwide. For the preliminary implementation of our Hyper Hospital system on the intranet, we adapted a premature-infant incubator as a model of a hospital information system. It was shown that this model can be used as a scaled-down model of the intra-hospital network.
In recent years, the elderly population has been increasing year by year. This paper describes a system that simulates the visual functions of elderly people and their aging, in order to develop in-car display systems for drivers. The proposed method is implemented with image processing techniques and is intended to visualize three vision properties: spatial response, spectral transmittance of the crystalline lens, and accommodation. The system can be used as a tool for young designers to experience the visual performance of old people. In addition, we propose a new method to improve display visibility, intended to compensate for the deterioration of vision properties with aging.
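The image-processing idea behind such a simulator can be sketched in a few lines. The paper's actual filters are not reproduced here; as stand-in assumptions, a box blur approximates reduced spatial response and blue attenuation approximates the yellowing of the crystalline lens (accommodation is omitted).

```python
# Hedged sketch of an aged-vision filter: a box blur stands in for the
# reduced spatial (contrast) response, and blue attenuation stands in
# for the reduced spectral transmittance of a yellowed crystalline lens.
def simulate_aged_vision(image, blur_radius=1, blue_factor=0.6):
    """image: 2-D list of (r, g, b) tuples with values in 0..255."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Box blur: average all pixels within the radius.
            acc, n = [0, 0, 0], 0
            for dy in range(-blur_radius, blur_radius + 1):
                for dx in range(-blur_radius, blur_radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        for c in range(3):
                            acc[c] += image[yy][xx][c]
                        n += 1
            r, g, b = (v / n for v in acc)
            # Attenuate blue to mimic lens yellowing.
            row.append((round(r), round(g), round(b * blue_factor)))
        out.append(row)
    return out
```

A compensation method such as the one proposed would then work in the opposite direction, e.g. boosting contrast and blue content of the displayed image before the eye's degraded optics filter it.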
To construct virtual reality systems with multimodal sensory interaction, it is important to clarify the function of sensory integration in humans. It has been indicated that active perception affects the function of sensory integration. In this paper, the effects of active perception on sensory integration are shown, in comparison with the effects of active learning and passive learning.
The present paper discusses the possible application of virtual reality (VR) to human stress reduction and relaxation. Autonomic responses and subjective assessment were used to evaluate the effect of the VR experience. The VR system used in the experiment consisted of two SGI ONYX workstations, a 100-inch projector, and a mouse. Driving and flying simulation software were introduced to relax subjects and reduce pain induced by thermal stimuli as an external stressor. Except for one subject who experienced severe VR sickness, heart-rate reduction, respiration acceleration, pain-threshold elevation, and subjective positive emotion were observed during VR. It is suggested that VR can be applied to stress reduction if it is designed not to cause VR sickness.
The perceived depth, from the subject's position, of actual or virtual objects presented in a virtual or a real space was measured. The objects presented were chosen from abstract forms, such as a sphere and a rectangle, and well-known concrete goods, such as a tennis ball and a postcard. The virtual objects were modeled and displayed on a CRT screen so that the subjects could view them as 3D objects through liquid-crystal shutter glasses. Fifteen subjects expressed the perceived distance to objects verbally, and seven responded by indicating the object position with their index finger. Every actual object was perceived to exist at a distance proportional to the real one, but the distance was always underestimated. Nearer virtual objects were perceived as more distant than they actually were, and farther ones as nearer. These characteristics were explained by a two-channel model with an accommodation sensory channel and a convergence sensory channel, under the hypothesis that if the values obtained from the two channels differ, depth perception is compensated using both values.
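The compensation hypothesis of the two-channel model can be illustrated with a toy calculation. The equal weighting used below is an illustrative assumption, not the paper's fitted model; with a stereo CRT display, accommodation stays locked at the screen distance while convergence follows the virtual object, so the two channel estimates differ.

```python
def perceived_depth(d_accommodation, d_convergence, w=0.5):
    # Toy two-channel compensation: when the accommodation and
    # convergence estimates disagree, perceived depth is taken as a
    # weighted combination of both (w = 0.5 is an assumed weight).
    return w * d_accommodation + (1 - w) * d_convergence

# Hypothetical stereo CRT at 100 cm: accommodation is pinned to the
# screen, convergence follows the virtual object.
near = perceived_depth(100, 50)    # object converged at 50 cm
far = perceived_depth(100, 175)    # object converged at 175 cm
```

Note that the toy model reproduces the qualitative finding: the near object (50 cm) is perceived as farther than it is, and the far object (175 cm) as nearer, because both estimates are pulled toward the screen distance.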
We have developed a platform for training systems using virtual reality technologies. As part of our research, we have begun preliminary experiments toward the future development of a diagnostic evaluation system for human brain functions. This paper reports the results of our psychophysical experiments on the psychological effects exerted on the observer by a wide-field visual display.
It is important to achieve a better sense of presence in virtual reality technology. It is therefore necessary to discuss the effect of the virtual force provided by a force display on the human mental model. This paper proposes the concept of a force mental model and describes the model in a semantic space made up of force-sense factors. Using the semantic differential technique, it is possible to analyze the effect of virtual force from a psychological point of view.
Higher-dimensional space enhances the intellectual activity of human beings; 3D graphics contains much more information than 2D graphics. We propose a visual and haptic representation of five-dimensional space. Our 5D space is generated by scanning a 3D cube. Since the user's hand can essentially move only in 3D space, we use rotational motion of the hand to scan the 3D cube within the 5D cube. The 3D cube is a cutting volume of the 5D cube, and it moves with rotational motion around the roll and pitch axes of the user's hand. A force display presents a potential field that indicates the axis of rotation, and this force feedback lets the user easily separate rotational motion from translational motion. The usability of the 5D cube is examined through recognition performance tests.
The purpose of this paper is to establish the relationship between the pin-matrix density of a tactile display and the recognition performance of displayed three-dimensional shapes. Three kinds of pin-matrix tactile displays that generate 3D shapes were used for the experiment, with pin pitches of 5 mm, 3 mm, and 2 mm. As we assumed that surfaces, edges, and vertices are the primitive information of 3D shapes, the tested shapes were classified into these three categories. As performance data, recognition time and classified error counts were measured. The results show that the relationship between pin-matrix density and recognition performance depends strongly on the primitives of the 3D shapes.