-
Article type: Cover
Pages
Cover1-
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Index
Pages
Toc1-
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Masaharu Inoue, Shoichi Hasegawa, Seahak Kim, Makoto Sato
Article type: Article
Session ID: HIR2001-54/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A wire-driven force display produces a resultant force from the tensions of its wires, so the region in which the optimal force can be displayed is small compared to the movable area. This is because the displayable force region is limited by the cone computed from the wire directions. In the region where the exact force cannot be displayed, a representative force close to the exact force is displayed instead. In this case, the tensions become discontinuous and vibration often occurs. In this paper, we propose a tension control method that displays a stable force on a wire-driven force display. The optimal force computed by the proposed method efficiently resolves the discontinuity problem and eliminates the vibration.
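The tension computation sketched in the abstract can be viewed as a constrained least-squares problem: find nonnegative wire tensions whose resultant best matches the desired force. The following is a minimal illustration, not the paper's implementation; the minimum-tension floor and the projected-gradient solver are assumptions chosen to show how keeping wires taut avoids tension discontinuities.

```python
import numpy as np

def wire_tensions(directions, desired_force, t_min=0.1, iters=5000, lr=0.1):
    """Find wire tensions >= t_min whose resultant best matches desired_force.

    directions: (dim, n) array of unit vectors from the grip toward each wire.
    A small minimum tension t_min keeps every wire taut; when the desired
    force leaves the displayable cone, the solver returns the closest
    representable force instead of a discontinuous tension pattern.
    """
    A = np.asarray(directions, dtype=float)
    f = np.asarray(desired_force, dtype=float)
    t = np.full(A.shape[1], t_min)
    for _ in range(iters):
        grad = A.T @ (A @ t - f)              # gradient of 0.5*||A t - f||^2
        t = np.maximum(t - lr * grad, t_min)  # projected gradient step
    return t, A @ t                           # tensions and displayed force
```

Because the tensions vary continuously with the desired force, the vibration caused by abrupt tension changes at the boundary of the displayable region is avoided.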
-
Shunsuke KAIDA, Kinya FUJITA
Article type: Article
Session ID: HIR2001-55/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Working efficiency was analyzed in the direct manipulation of virtual objects with a wearable fingertip force display system. Two kinds of virtual objects were used, allowing deformations of 5 mm or 20 mm. The elastic coefficient of the object was varied from 0 to 1600 N/m. A grasp-and-transport task was performed by five volunteers at sample intervals of 30 ms and 60 ms. The following trends were observed: 1) working efficiency was better with larger elastic coefficients; 2) task difficulty was governed by the allowed deformation rather than the allowed force; 3) working efficiency was better at the higher sample rate. It was concluded that working efficiency at relatively low sample rates can be improved by selecting an adequate elastic coefficient.
-
Yoshihisa Yamaguchi, Yoshifumi Kitamura, Fumio Kishino
Article type: Article
Session ID: HIR2001-56/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper discusses the effectiveness of the internal model theory using a technique of virtual tools. Novel tools in a virtual environment are developed based on virtual chopsticks that match a user's operational image. The processes by which users learn to use each tool are observed and investigated. The experimental results show that the process of internal model acquisition differed among users and task variations.
-
Kunihiro Nishimura, Makoto Kano, Hiroyuki Aburatani, Koichi Hirota, Mi ...
Article type: Article
Session ID: HIR2001-57/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The analysis of gene expression data is expected to clarify the functions of genes in genome science. Since gene expression data are one parameter of the state of a cell, genome science researchers hope to be able to analyze and grasp the whole of the gene expression data. We visualize the multi-dimensional gene expression data in order to understand them and deepen insight into them. We construct a virtual environment for analyzing the gene expression data interactively and showing the process of analysis using an immersive projection display (CABIN).
-
Seiji Sasakido, Yoshifumi Kitamura, Fumio Kishino
Article type: Article
Session ID: HIR2001-58/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The virtual baseball system provides users with an immersive environment in which they can interact with several players, such as the pitcher, batter, and fielders, through their body motions. In this paper, a virtual batting simulation system is presented. The user can hit the pitched ball with a bat. The system simulates several kinds of physical laws, such as hydrodynamics and collisions among the ball, the bat, and the ground, and generates the motion of the ball in real time according to the user's interaction. The practicality of the system is evaluated through subjective and quantitative validation experiments.
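The real-time ball motion described in the abstract can be illustrated with a simple time-stepped integration. The sketch below is a rough stand-in rather than the authors' simulator: the drag and Magnus coefficients are illustrative assumptions, and collisions are omitted.

```python
import numpy as np

G = np.array([0.0, -9.8, 0.0])  # gravity (m/s^2)

def step_ball(pos, vel, spin, dt=0.01, k_drag=8e-4, k_magnus=5e-5, mass=0.145):
    """One semi-implicit Euler step of a pitched ball under gravity,
    quadratic air drag, and a Magnus (spin) lift force."""
    speed = np.linalg.norm(vel)
    f = mass * G
    f = f - k_drag * speed * vel            # quadratic air drag
    f = f + k_magnus * np.cross(spin, vel)  # Magnus force from spin
    vel = vel + (f / mass) * dt
    pos = pos + vel * dt
    return pos, vel

def simulate(pos, vel, spin, t_end=0.5, dt=0.01):
    """Integrate the ball state from release until t_end."""
    pos, vel = np.array(pos, float), np.array(vel, float)
    spin = np.array(spin, float)
    for _ in range(int(t_end / dt)):
        pos, vel = step_ball(pos, vel, spin, dt)
    return pos, vel
```

With backspin the ball drops less than without, which is the kind of trajectory difference such a simulation lets the batter perceive in real time.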
-
Keisuke Endou, Hirotake Ishii, Hidekazu Yoshikawa
Article type: Article
Session ID: HIR2001-59/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The goal of this study is to develop a virtual environment for collaborative work in which a human-shaped virtual agent teaches a trainee complicated tasks such as machine maintenance work and plant operation. Developing such a virtual environment requires several elemental technologies, and in this study a new method has been proposed for modeling the virtual agent's behavior using a Petri net. Moreover, a design support system for constructing the Petri net through a graphical user interface has been developed. As a result, it was confirmed that the design support system makes it efficient to model and specify the virtual agent's behavior.
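The Petri-net behavior model mentioned in the abstract can be pictured with a minimal place/transition net. The sketch below is a generic illustration; the place and transition names are hypothetical and not taken from the paper's model.

```python
class PetriNet:
    """Minimal place/transition net for scripting an agent's behavior.

    marking: dict mapping each place to its token count.
    transitions: dict mapping a transition name to a pair
    (input places, output places). A transition is enabled when every
    input place holds at least one token; firing it moves one token
    from each input place to each output place.
    """
    def __init__(self, marking, transitions):
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

For a maintenance-training scenario, one might model the agent as moving a token through places such as "idle", "at_machine", and "done" via transitions like "walk_to_machine" and "explain_repair"; the enabling rule then guarantees that the agent's teaching steps occur in a valid order.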
-
Kazuhiko Hirose, Toshio Yamada, Tetsurou Ogi, Koichi Hirota, Michitaka ...
Article type: Article
Session ID: HIR2001-60/NIM2001-6
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Communication in immersive projection displays connected to each other by a broadband network is attracting a great deal of attention. A video avatar is one such communication method with a highly lifelike image. In an immersive projection display such as CABIN, it is presented as a proxy of another user, and its image is based on live video of that user. However, a problem is that the video avatar system restricts the user's range of motion because of the camera's narrow field of view. This paper describes a method of making a video avatar that can be observed from various directions. In this study, we developed a way to expand the field of view by camera tracking and to extract avatar images from video against changing background elements.
-
Takashi Kobayashi, Yoshihiro Osaka, Michiya Yamamoto, Hirotake Ishii, ...
Article type: Article
Session ID: HIR2001-61/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
As a base system for a distributed virtual environment (DVE), a system has been proposed by which a user can freely build various virtual worlds by utilizing the information resources on a network, as well as participate in the DVE. This paper describes a method for supporting the generation of user interfaces for such a DVE, by which users can smoothly interact with various kinds of virtual worlds. By designing a DVE system and constructing a prototype environment, it was ascertained that anyone can create the interface as intended.
-
Takashi Nakagawa, Yoshitaka Zaitsu, Masahiko Inami, Naoki Kawakami, Ya ...
Article type: Article
Session ID: HIR2001-62/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A telexistence system consists of a camera system and a display system. Previously, these two systems could not be separated, because there was only one display system suitable for each camera system. One display system could not access many camera systems, so many display systems corresponding to the camera systems were needed. We therefore propose a method to combine any camera system with any display system. In this paper, we describe the method and report a telexistence system constructed based on it.
-
Syohei Naka, Hirokazu Kato, Keihachiro Tachibana
Article type: Article
Session ID: HIR2001-63/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper we describe an augmented reality system called 'Augmented Mirror', which overlays virtual objects on a mirror image. A video monitor and a camera are used as a mirror metaphor: images captured by a small camera on the monitor are displayed on it after being flipped horizontally. Magnetic 6-DOF sensors are also used to obtain 3D information about the user's head and hand, so that virtual objects can be displayed on them. Based on this idea, we built a prototype system that enables a user to put virtual glasses or virtual hats on his or her face or head.
-
Yoshitaka Zaitsu, Takashi Nakagawa, Naoki Kawakami, Masahiko Inami, Ya ...
Article type: Article
Session ID: HIR2001-64/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The torso-shaped robot camera is a new robot camera for the next generation of telexistence. It can follow a sitting operator's head motion in real time, giving the operator the same field of view as the robot camera. In this paper, we report the development and verification of this robot camera.
-
Hiroshi Yamaguchi, Koichi Hirota, Michitaka Hirose
Article type: Article
Session ID: HIR2001-65/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
By using photographs, we can construct a photo-realistic virtual world. To construct a virtual world from photographs, camera conditions such as the rotation, the position, and the lens's focal length are necessary; however, most existing photographs do not come with such parameters. This paper proposes a method to calculate the camera parameters from the photographs themselves. We can then construct a photo-realistic virtual world by placing the photographs in three-dimensional space according to the calculated camera conditions. This paper also introduces a method to achieve a wide field of view by combining several photographs taken from the same eye position. Finally, we propose a virtual world for traveling along the time axis by integrating past and present photographs taken at the same position.
-
Atsushi Hiyama, Ryoko Ueoka, Koichi Hirota, Michitaka Hirose
Article type: Article
Session ID: HIR2001-66/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
This paper describes the development of a hands-free input interface for wearable computing. The device is designed to operate a computer by facial actions, which are detected by piezo films attached to the surface of the skin. To evaluate the input interface, we measured the user's behavior and mental state with a wearable measuring device: behavior was observed with a CCD camera and a pedometer, and mental state was measured by the RRV (variance of the R-R intervals of the ECG). We evaluated the efficiency of the developed interface by comparing it with a cellular phone that can connect to the Internet, and showed that the input interface using facial actions can combine computer activity with daily activities.
-
Naoaki Yamamoto, Noriyuki Kitajima, Takashi Takeda
Article type: Article
Session ID: HIR2001-67/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Visual landmarks seem to be important for wayfinding in an unfamiliar town. Here we studied the role of an auditory signal as a landmark using a virtual reality facility called an IPT (immersive projection technology display). One of twelve loudspeakers mounted on the IPT presented white noise, and the active speaker was switched serially so as to simulate a sound coming from a constant position in a virtual maze. The subjects' tasks were to search for five symbols in the maze and to draw the symbols on a map after searching. Wayfinding performance tended to be better in the sound condition than in the no-sound condition; however, the difference was not statistically significant.
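The serial speaker switching described here amounts to picking, at each moment, the speaker whose azimuth is closest to the direction of the virtual sound source from the listener. A minimal sketch follows; the evenly spaced ring layout and the azimuth convention are assumptions for illustration.

```python
import math

def nearest_speaker(listener_xy, source_xy, n_speakers=12):
    """Index of the ring-mounted speaker closest in azimuth to the
    virtual source, so that switching speakers simulates a sound from
    a fixed position. Speaker i is assumed to sit at azimuth
    i * (360 / n_speakers) degrees around the listener."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    spacing = 360.0 / n_speakers
    return round(azimuth / spacing) % n_speakers
```

As the subject moves through the maze, re-evaluating this index each frame keeps the noise anchored to a constant world position, which is what lets it serve as an auditory landmark.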
-
Kota Miyagawa, Satoshi Beppu, Yasuhiro Tanaka, Tatsuo Takada, Nobuyuki ...
Article type: Article
Session ID: HIR2001-68/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Currently, 3D binocular stereo vision systems are being applied in various fields ranging from medical devices (endoscopic surgery) to construction machinery. However, most systems do not take vergence adjustment into consideration; consequently, the directions of the cameras have been fixed, and operators often complain that using the system tires their eyes. To study the relationship between vergence adjustment and eye fatigue, we developed a new device that enables vergence to be adjusted during operation, and conducted an experiment on individual differences in the effect of vergence adjustment. In this report, we introduce the experimental results measured using the newly developed system.
-
Shogo Fukushima, Kenshi Suzuki, Shuji Murakami, Ryoji Nakajima
Article type: Article
Session ID: HIR2001-69/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Lightweight glasses for measuring pupil and eye movement, equipped with an image display function, are introduced. Real-time analysis of both eyes' behavior is achieved with the developed image processing unit. The measured eye behavior includes the pupillary response and eye movement under a customized visual environment generated by the embedded displays. The developed glasses are so light that they can be used both for laboratory experiments and in practical situations such as clinical use. The system can easily be applied to medical checkups of the sense of balance, such as the nystagmus test or the smooth pursuit eye movement test, without setting up a display screen to present visual stimuli.
-
Kikuo Asai, Noritaka Osawa, Yuji Y. Sugimoto, Yoshiaki Tanaka
Article type: Article
Session ID: HIR2001-70/NIM2001-7
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have developed a virtual-space interface using gestures in an immersive projection display. The 3D posture of the arms and the height of the body are tracked by image processing without any device attached to the body. This enhances the freedom of movement within the display and avoids losing immersion during object manipulation. The images are taken with two monochrome cameras that have sufficient sensitivity even in the dark projection display. After tracking the edges of the arms and the top of the head, the user's viewpoint is obtained by calculating the movements of the posture. To evaluate the interface, we conducted an experiment in which four subjects performed walk-through trials in 3D space, comparing the gesture interface with a joystick. The results show that the gesture interface is not only faster than the joystick for movement control in the 3D virtual space, but also better in terms of perspective perception and immersion. The developed system is applicable as a non-contact user interface for moving around in virtual space.
-
Article type: Appendix
Pages
App1-
Published: June 11, 2001
Released on J-STAGE: June 23, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS