-
Article type: Cover
2008 Volume 13 Issue 2 Pages Cover1-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
Article type: Table of Contents
2008 Volume 13 Issue 2 Pages Toc1-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
Article type: Table of Contents
2008 Volume 13 Issue 2 Pages Toc2-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
竹村 治雄, 加藤 博一
Article type: Article
2008 Volume 13 Issue 2 Pages 123-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
Article type: Appendix
2008 Volume 13 Issue 2 Pages 124-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
石黒 祥生, 比嘉 恭太, 木村 朝子, 柴田 史久, 田村 秀行
Article type: Article
2008 Volume 13 Issue 2 Pages 125-128
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
This research studied the perceptual effects on mixed reality (MR) space of newly introduced audio information in an audio-visual MR space. First, an experiment was carried out on whether audio information could aid depth perception in MR space, comparing conditions before and after the introduction of the audio information. The results showed that audio information does aid depth perception. We also found that when audio attributes such as volume were changed, observers felt strong discomfort at changes in the delay time, which in turn affected their impression of the sound volume. In conclusion, real-scale audio information can create a highly realistic and immersive sensation in a miniature world in MR space.
-
家崎 明子, 杣田 明弘, 木村 朝子, 柴田 史久, 田村 秀行
Article type: Article
2008 Volume 13 Issue 2 Pages 129-139
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
This paper describes the influence of visual stimulation on the tactual sense in a mixed reality (MR) environment, i.e. how the tactual impression of a real object is affected by seeing a superimposed image taken from a different kind of material. If the behavior and extent of such an influence, a sort of illusion, are well investigated, objects made from a limited range of materials can be made to be perceived differently. This would be of great use in the field of digital engineering. We therefore performed various experiments systematically and obtained interesting and promising results: (1) the feeling of texture can be represented by MR visual stimulation if the roughness of the object's surface is visually and tactually similar, and (2) even when the surface roughness is visually and tactually similar, the feeling of texture cannot be represented if the hardness felt when gripping the object differs.
-
村瀬 香緒里, 小木 哲朗, 齋藤 康太, 小山 尚英
Article type: Article
2008 Volume 13 Issue 2 Pages 141-150
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
This paper proposes an immersive augmented reality display system, named "AR View", that generates a high-presence augmented reality environment using immersive projection technology. In this system, the stereoscopic image of virtual objects, projected onto a floor screen by stereo projectors, is combined optically with real objects placed in front of or behind a highly transparent mirror film, using a mirror placed at an angle of 45 degrees to the floor. In order to create a correct occlusion effect over a large area of the augmented reality environment, light projectors with an occlusion-shadow function, rather than standard light bulbs, were used to illuminate the surfaces of the real objects. AR View was applied to various applications, such as high-presence communication using a video avatar over a broadband network connection.
-
穴吹 まほろ, 石井 裕
Article type: Article
2008 Volume 13 Issue 2 Pages 151-160
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
We introduce AR-Jig, a new handheld tangible user interface for 3D digital modeling in augmented reality space. AR-Jig has a pin array that displays a 2D physical curve coincident with a contour of a digitally displayed 3D form. It supports physical interaction with a portion of a 3D digital representation, allowing 3D forms to be directly touched and modified. This project leaves the majority of the data in the digital domain but gives physicality to any portion of the larger digital dataset via a handheld tool. Through informal evaluations, we demonstrate that AR-Jig would be useful in design domains where manual modeling skills are critical.
-
薄 充孝, 中川 知香, 佐藤 智和, 横矢 直和
Article type: Article
2008 Volume 13 Issue 2 Pages 161-170
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
In this paper, we propose a novel method that estimates camera position and posture from a single image using a feature landmark database. Several camera parameter estimation methods have been proposed that use a pre-constructed database to estimate camera parameters. Most of these methods achieve fast and highly accurate estimation by limiting the search range in the database, under the assumption that the camera motion between successive image frames is small. However, if the input is a single image, these approaches do not work, because there is no good initial parameter with which to limit the search range. In this research, we gradually limit the search range in the landmark database by using GPS position, SIFT distance, and the consistency of camera position and posture. The validity of the proposed method has been shown through experiments in an outdoor environment.
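The two-stage narrowing of the search range described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: the landmark record layout, radius threshold, and ratio value are assumptions; the ratio test is Lowe's standard nearest-neighbor criterion for SIFT matching.

```python
import math

def filter_by_gps(landmarks, gps_pos, radius_m):
    """Stage 1: keep landmarks whose stored 2D position lies within radius_m of the GPS fix."""
    return [lm for lm in landmarks
            if math.hypot(lm["pos"][0] - gps_pos[0],
                          lm["pos"][1] - gps_pos[1]) <= radius_m]

def match_ratio_test(query_desc, landmarks, ratio=0.8):
    """Stage 2: return the landmark whose descriptor is nearest to query_desc,
    but only if it beats the second nearest by Lowe's ratio test."""
    def l2(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    scored = sorted(((l2(query_desc, lm["desc"]), lm) for lm in landmarks),
                    key=lambda t: t[0])
    if not scored:
        return None
    if len(scored) == 1 or scored[0][0] < ratio * scored[1][0]:
        return scored[0][1]
    return None
```

The surviving 2D-3D matches would then be fed to a pose solver, with the consistency check rejecting matches that disagree with the estimated position and posture.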
-
牧田 孝嗣, 神原 誠之, 横矢 直和
Article type: Article
2008 Volume 13 Issue 2 Pages 171-181
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
This paper describes a wearable annotation overlay system that can correctly annotate moving users of wearable computers. To provide users with the newest annotation information, a network-shared database system for wearable AR systems has been proposed. With such a database, a wearable annotation overlay system that can dynamically annotate users of wearable computers has been investigated. In conventional systems, since moving users' positions are transmitted to wearable AR systems via a shared database server, it is difficult to overlay annotations at the correct position, mainly due to the low update frequency and the delay of client-server communication. In this paper, we propose a new method for wearable AR systems that obtains moving users' positions via a hybrid peer-to-peer (P2P) network. The proposed method can overlay annotations on moving users accurately enough to show the relationships between users and annotations.
-
小竹 大輔, 佐藤 清秀, 内山 晋二, 山本 裕之
Article type: Article
2008 Volume 13 Issue 2 Pages 183-193
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
We propose a hybrid camera pose estimation method using an inclination sensor and line segments. While our method does not require any prior information on the camera pose or on the correspondences between 2D and 3D line segments, it can calculate the camera pose quickly, using a two-step algorithm that first calculates the azimuth and then the position under the inclination constraint obtained from the sensor. These features meet the initialization requirements of edge-based registration used in mixed reality (MR) systems. This paper describes the details of the method and shows its effectiveness with experiments in which the method is used in an actual MR application.
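The key idea of the first step, estimating azimuth under an inclination constraint, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes the inclination (roll and pitch) has already been removed using the sensor reading, so the only remaining unknown is a single rotation about the vertical axis, which a simple 1D search can recover from one known line direction.

```python
import math

def rot_z(a):
    """Rotation matrix about the vertical (z) axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    """Apply a 3x3 rotation matrix R to a 3-vector v."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def estimate_azimuth(world_dir, observed_dir, steps=3600):
    """1D search over azimuth in [0, 2*pi): pick the angle that best aligns
    the rotated world-frame line direction with the observed direction
    (inclination assumed already compensated via the sensor)."""
    best_a, best_dot = 0.0, -2.0
    for k in range(steps):
        a = 2.0 * math.pi * k / steps
        v = apply(rot_z(a), world_dir)
        d = sum(x * y for x, y in zip(v, observed_dir))
        if d > best_dot:
            best_dot, best_a = d, a
    return best_a
```

With azimuth fixed, the full rotation is known, and position can then be solved as a separate, smaller problem, which is what makes the two-step decomposition fast.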
-
遠藤 隆明, 佐藤 清秀, 内山 晋二, 山本 裕之
Article type: Article
2008 Volume 13 Issue 2 Pages 195-205
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
Stable estimation of the camera pose is one of the most important keys to putting a mixed reality system into practical use. In this paper, we carefully analyze the multiple factors that destabilize camera pose estimation, then comprehensively discuss and develop a strategy for stabilizing it. Following this strategy, we propose two methods for stable camera pose estimation: a reference constraint-based method and an error distribution-based method. In addition to theoretical discussions, we demonstrate the effectiveness of our methods through experiments comparing them with conventional methods.
-
川本 海, 蔵田 武志, 酒田 信親, 大隈 隆史, 葛岡 英明
Article type: Article
2008 Volume 13 Issue 2 Pages 207-215
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
This paper describes a novel method for measuring the position and orientation of physical tags on a large tabletop display. This method employs a set of photo sensors and accelerometers embedded in a tag to observe fiducial marker patterns shown on the display and to predict the incoming position/orientation of the tag. In this paper, we especially propose a new fiducial marker pattern that is more robust in terms of ambient light, the unevenness of display luminance, and the measurement range than the one previously proposed. The new pattern also makes the physical tags smaller and less obtrusive. We finally show the results of several preliminary experiments that we conducted for each sensor device.
-
野嶋 琢也, 梶本 裕之
Article type: Article
2008 Volume 13 Issue 2 Pages 217-225
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
The head-up display (HUD) is becoming increasingly common in the aerospace field because it has many benefits, such as enabling operations in poor visibility and improving flight safety. The HUD is a kind of augmented reality display that enables a pilot to observe the scene outside the cockpit while simultaneously viewing an artificial image of flight information. However, the HUD is too expensive and heavy for light airplanes. In this paper, we propose a new method to compose a simple HUD using Retro-reflective Projection Technology and a propeller. We also describe the developed system and preliminary experimental results.
-
比嘉 恭太, 西浦 敬信, 木村 朝子, 柴田 史久, 田村 秀行
Article type: Article
2008 Volume 13 Issue 2 Pages 227-237
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
In virtual reality, there have been many implementations using both audio and visual senses. However, mixed reality (MR), which merges the real and virtual worlds in real time, has thus far dealt with these implementations using only the visual sense. In this study, we developed an MR system that merged the real and virtual worlds in both the audio and visual senses, and in which the geometric consistencies of the audio and visual senses were coordinated. We also tried two approaches for merging real and virtual worlds in the audio sense, one using open-air headphones, and the other using closed-air headphones. The former corresponded to an optical see-through method and the latter, in the case of the visual sense, to a video see-through method.
-
吉野 祥之, 苗村 健
Article type: Article
2008 Volume 13 Issue 2 Pages 239-246
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
We have developed a sound localization system that presents sound images only to targeted users by using ultrasonic waves. We call this u-soul (Ultrasonic/Ubiquitous-SOUnd Localization interface). It provides the interaural differences in volume and arrival time needed to realize sound localization. For this purpose, we introduce a headphone with ultrasonic microphones that capture sounds modulated onto an ultrasonic carrier. In this paper, we describe the system design, an algorithm for modulation, and some applications.
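The two interaural cues the abstract mentions can be sketched numerically. This is an illustrative model only, not u-soul's algorithm: it assumes a spherical head of radius 8.75 cm and a far-field source, uses Woodworth's classic approximation for the arrival-time difference, and a toy equal-power panning law for the level difference.

```python
import math

HEAD_RADIUS = 0.0875     # meters, assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_rad):
    """Interaural time difference (Woodworth's spherical-head approximation).
    azimuth 0 = straight ahead; positive = source to the right."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def ild_gains(azimuth_rad):
    """Toy interaural level difference: equal-power gains (left, right)."""
    pan = (azimuth_rad / (math.pi / 2) + 1.0) / 2.0  # map [-pi/2, pi/2] -> [0, 1]
    pan = min(max(pan, 0.0), 1.0)
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)
```

A system like the one described would apply a delay of `itd_seconds` and the two gains to the left and right channels before modulating them onto the ultrasonic carrier.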
-
大槻 麻衣, 木村 朝子, 西浦 敬信, 柴田 史久, 田村 秀行
Article type: Article
2008 Volume 13 Issue 2 Pages 247-255
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
In mixed reality (MR) space, users can see their own hands and the interaction device they are holding. In this study, we propose a novel interaction method for MR space that exploits this characteristic. This multi-modal interface was developed based on the traditional pointing-device paradigm, allowing direct manipulation of a 2D plane or 3D space; its input uses sound events in the real world, and its feedback is visual (and in some cases tactile) in the MR space. More specifically, we attached a small linear microphone array to a head mounted display (HMD), used the direction and position of the sound source as input to the MR space, and extended it from 2D to 3D pointing. This paper describes several implementations using our method, such as menu selection and an MR attraction.
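The core of estimating a sound source's direction with a small linear microphone array can be sketched as follows. This is a hypothetical minimal example, not the authors' implementation: it assumes a far-field source and just two microphones a known distance apart, recovering the angle of arrival from the time-difference of arrival (TDOA).

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def doa_from_tdoa(tdoa_s, mic_spacing_m):
    """Direction of arrival (radians from broadside) from a two-microphone
    time-difference of arrival, under a far-field plane-wave assumption."""
    x = SPEED_OF_SOUND * tdoa_s / mic_spacing_m
    x = min(max(x, -1.0), 1.0)  # clamp against measurement noise
    return math.asin(x)
```

A real array would estimate the TDOA by cross-correlating the microphone signals; with more than two microphones, source position (not just direction) can be triangulated, which is what enables the extension from 2D to 3D pointing.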
-
中里 祐介, 神原 誠之, 横矢 直和
Article type: Article
2008 Volume 13 Issue 2 Pages 257-266
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
A wearable augmented reality (AR) system has received a great deal of attention as a new method for displaying location-based information in the real world. In wearable AR, the user's position and orientation must be measured precisely in order to merge the real and virtual worlds. This paper proposes a user localization system for wearable AR in indoor environments. To realize such a system, it must be possible to prepare the environment for localization easily, without producing undesirable visual effects. In the proposed system, wallpapers containing printed invisible markers are pasted on ceilings or walls. To prepare an environment for localization, the system includes a tool that calibrates the alignment of the markers from photos taken with a digital still camera. The user's position and orientation are then estimated by recognizing the markers using an infrared camera with infrared LEDs.
-
元川 洋一, 斎藤 英雄
Article type: Article
2008 Volume 13 Issue 2 Pages 267-277
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
Playing the guitar is very difficult for beginners because of the complex and unfamiliar hand positions that are required. We propose a method to track guitar movement using the structure of the guitar itself, in order to realize a support system for guitar playing based on augmented reality. The system can track the guitar even when part of it is occluded, and the visual-aid information can be projected at the proper position with high accuracy, because our method utilizes a wide range of features obtained from the guitar. Furthermore, no visual marker needs to be attached to the guitar, so there is nothing to obstruct playing, and the preparation required beforehand is very simple.
-
高田 大輔, 小川 剛史, 清川 清, 竹村 治雄
Article type: Article
2008 Volume 13 Issue 2 Pages 279-287
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
Filtering annotations is very important in networked wearable AR systems so that the server can efficiently deliver the information each user needs to his/her wearable computer. In this paper, we propose a hierarchical data structure associated with the real environment, a dynamic priority control technique for filtering annotations, and divided/weighted transfer of annotations. Our dynamic priority control technique gives higher priority to more necessary annotations according to the user's position and the angular velocity of the viewing direction. We also show the results of simulation experiments evaluating the performance of the proposed technique.
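A priority function of the kind described, driven by the user's position and the angular velocity of the viewing direction, might look like the following. The functional form and weights here are assumptions for illustration, not the paper's formula: priority rises for nearby annotations and for annotations close to the viewing direction, and fast head rotation widens the directional term, since the view will soon change anyway.

```python
import math

def annotation_priority(dist_m, bearing_rad, angular_vel_rad_s,
                        w_dist=1.0, w_dir=1.0):
    """Higher value = deliver sooner.

    dist_m            -- distance from the user to the annotation
    bearing_rad       -- angle between viewing direction and annotation direction
    angular_vel_rad_s -- angular velocity of the user's viewing direction
    """
    # closeness term: decays with distance to the annotation
    closeness = 1.0 / (1.0 + w_dist * dist_m)
    # directional term: decays with bearing, flattened when the head turns fast
    spread = 1.0 + angular_vel_rad_s
    directional = math.exp(-w_dir * (bearing_rad ** 2) / spread)
    return closeness * directional
```

The server would sort pending annotations by this score and spend its transfer budget on the top of the list, which is the essence of priority-based filtering.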
-
石井 裕剛, 中井 俊憲, 卞 志強, 下田 宏, 泉 正憲, 森下 喜嗣
Article type: Article
2008 Volume 13 Issue 2 Pages 289-300
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
A support method for the decommissioning work of nuclear power plants using augmented reality was proposed, and a prototype system was developed. The proposed method was then evaluated using the prototype system. Three workers used the prototype system following a scenario for dismantling an ion tower, after which questionnaire and interview investigations were conducted. The results showed that the proposed method, which records the cutting parts on the CAD data using a stylus pen, seems easier for the workers than the legacy recording method using paper documents, but a new tablet PC with a larger screen and lighter weight is necessary in order to apply the proposed method to real decommissioning work.
-
向井 信彦, 豊嶋 匡樹, 小杉 信
Article type: Article
2008 Volume 13 Issue 2 Pages 301-307
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
Computer graphics (CG) has enabled us to represent sand deformation using particle-based methods. Meanwhile, there is a practical problem that sand is blown by wind and accumulates on roads next to the seashore, and it takes a great deal of time and work to determine where and how many sand-proof fences should be installed. This paper therefore describes a method for representing a sandy beach with a sand-proof fence by CG. By considering the slope of the sand dunes, the boundary conditions near the sand-proof fences, and changes of wind direction, we have succeeded in representing a sandy beach that is very similar to the real one.
-
Article type: Appendix
2008 Volume 13 Issue 2 Pages 309-311
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
Article type: Appendix
2008 Volume 13 Issue 2 Pages App1-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access
-
Article type: Cover
2008 Volume 13 Issue 2 Pages Cover2-
Published: 2008/06/30
Released online: 2017/02/01
Journal Free Access