-
Article type: Cover
2005 Volume 10 Issue 3 Pages Cover1-
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
-
Article type: Table of Contents
2005 Volume 10 Issue 3 Pages Toc1-
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
-
Article type: Bibliography
2005 Volume 10 Issue 3 Pages Misc1-
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
-
横矢 直和, 竹村 治雄
Article type: Article
2005 Volume 10 Issue 3 Pages 271-
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
-
一刈 良介, 木村 朝子, 柴田 史久, 田村 秀行
Article type: Article
2005 Volume 10 Issue 3 Pages 273-280
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
Compared with visual effects in movies or full-CG animation, mixed-reality (MR) content production, in which CG images are overlaid on a real-world scene, demands stricter real-time rendering and consistency with the real environment. Natural phenomena are sometimes too complicated to describe in an MR environment by physical modeling. We focus on "falling natural objects" such as cherry blossoms, autumn leaves, and snow as stage effects, and propose an effective method that approximates the movement of these fluttering objects by a normal random walk. Simulations confirmed that the proposed method also adapts to changes in wind and to different CPU loads when rendering objects of various polygon counts.
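The abstract gives no implementation details; the following is a minimal sketch, assuming a per-frame update in which a normal random walk in the horizontal plane models the flutter and a constant descent plus a wind term models the drift (all parameter names and values here are illustrative, not taken from the paper):

    import numpy as np

    def step_falling_objects(pos, dt, fall_speed=0.4, sigma=0.02, wind=(0.0, 0.0)):
        """Advance falling objects (petals, leaves, snowflakes) by one frame.

        pos        : (N, 3) array of positions (x, y, z), with y pointing up
        fall_speed : constant downward speed in m/s (illustrative value)
        sigma      : standard deviation of the per-frame flutter in metres
        wind       : mean horizontal drift (x, z) in m/s, standing in for wind change
        """
        n = len(pos)
        flutter = np.random.normal(0.0, sigma, size=(n, 2))  # normal random walk
        pos[:, 0] += wind[0] * dt + flutter[:, 0]
        pos[:, 2] += wind[1] * dt + flutter[:, 1]
        pos[:, 1] -= fall_speed * dt                         # steady descent
        pos[pos[:, 1] < 0.0, 1] += 5.0  # recycle to a hypothetical 5 m emitter height
        return pos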
-
高橋 和彦, 柴崎 隆男, 赤塚 祐一郎
Article type: Article
2005 Volume 10 Issue 3 Pages 281-284
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
This paper describes a new AR system that is easy to carry and works stand-alone. We developed an OST (optical see-through) viewer that computes its position and posture while displaying 3D CG at 10 fps. We also list the calibration problems involved in this OST approach. Our two-step calibration makes it possible to use the viewer without complicated recalibration when it is re-installed.
-
大江 統子, 佐藤 智和, 横矢 直和
Article type: Article
2005 Volume 10 Issue 3 Pages 285-294
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
In wearable augmented reality, the absolute position and posture of the user's camera are needed to overlay CG on real images. Two kinds of approaches have been investigated: sensor-based and vision-based methods. However, both kinds of conventional methods are hard to use in large, changing environments; the usable environment is limited by the choice of sensors in sensor-based methods, and many artificial markers must be placed throughout a large environment in vision-based methods to prevent accumulated errors. This paper proposes a novel vision-based camera position and posture estimation method that requires neither sensors nor artificial markers. In the proposed method, a feature landmark database is constructed in advance from a large number of natural feature points in omni-directional image sequences. By detecting these feature landmarks in images captured by the wearable camera, the camera position and posture are then estimated in every frame without accumulated errors.
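The abstract does not state how the final pose computation is carried out; as an illustrative stand-in (not the authors' estimator), the last step of matching database landmarks to a frame can be expressed as a robust PnP solve over the 2D-3D correspondences:

    import numpy as np
    import cv2  # OpenCV, used here only as an example solver

    def estimate_camera_pose(landmark_xyz, image_uv, camera_matrix):
        """Absolute camera pose from matched landmark correspondences.

        landmark_xyz : (N, 3) positions of feature landmarks taken from the database
        image_uv     : (N, 2) their detected positions in the current camera frame
        """
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            landmark_xyz.astype(np.float32), image_uv.astype(np.float32),
            camera_matrix, None)          # no lens distortion assumed
        return (rvec, tvec) if ok else (None, None)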
-
中里 祐介, 神原 誠之, 横矢 直和
Article type: Article
2005 Volume 10 Issue 3 Pages 295-304
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
To realize an augmented reality (AR) system on a wearable computer, the exact position and orientation of the user are required. We propose a localization method based on an IR camera and invisible visual markers consisting of translucent retro-reflectors. To stably extract the marker regions from the captured images, the camera captures the reflection of IR LEDs that are flashed on and off continuously. In experiments, we first give a quantitative evaluation in computer simulation to decide the marker alignment in real environments. We then carry out localization experiments in real environments in which markers are installed according to the alignment decided by simulation. Finally, we develop a wearable augmented reality system using the proposed localization method.
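The abstract does not give the extraction step explicitly; a minimal sketch, assuming the IR LEDs are toggled between consecutive frames so that the retro-reflective markers are bright only in the lit frame, is to difference the two frames and threshold (the threshold value is a placeholder):

    import numpy as np

    def extract_marker_regions(frame_led_on, frame_led_off, threshold=40):
        """Segment retro-reflective marker pixels from a pair of IR camera frames.

        frame_led_on, frame_led_off : 2D uint8 grayscale images captured with the
        IR LEDs flashed on and off, respectively.
        """
        # Retro-reflectors return the LED light, so they are bright only when lit;
        # ambient IR sources appear in both frames and largely cancel out here.
        diff = frame_led_on.astype(np.int16) - frame_led_off.astype(np.int16)
        return diff > threshold   # boolean mask of candidate marker pixels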
-
天目 隆平, 神原 誠之, 横矢 直和
Article type: Article
2005 Volume 10 Issue 3 Pages 305-311
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
Augmented reality on wearable computers makes it possible to overlay annotations on the real world based on the user's current position and orientation. However, it is difficult for the user to grasp the links between annotations and real objects intuitively when the scene is complicated or many annotations are overlaid at once. This paper describes a view management method that emphasizes the real objects the user gazes at, together with their annotations, using 3D models of the scene. The proposed method effectively highlights the gazed objects. In addition, when a gazed object is occluded by other real objects, it is completed in the overlaid image using a rendering generated from the 3D models.
-
竹村 雅幸, 大田 友一
Article type: Article
2005 Volume 10 Issue 3 Pages 313-321
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
We propose a new scheme to restore gaze awareness among multiple users in a shared mixed-reality space. Gaze awareness in a mixed-reality space is lost as a side effect of wearing head-mounted displays (HMDs). Our system synthesizes facial images in real time with arbitrary poses and eye expressions using several photographs and a 3D model of the user. The facial image is overlaid so as to diminish the HMD in the partner's view and recover gaze awareness. Subjective evaluations of the proposed scheme are conducted by examining the perception of gaze direction between two users of the developed system.
-
柴田 史久, 木村 朝子, 橋本 崇, 古野 光紀, 平岡 貴志, 田村 秀行
Article type: Article
2005 Volume 10 Issue 3 Pages 323-331
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
We aim to build a general framework for mixed reality (MR) systems in which MR functions can be installed on various kinds of mobile computers, such as cellular phones, personal digital assistants (PDAs), and wearable computers. Our goal is not only to develop an application-independent mobile software platform but also to establish a general framework that absorbs the differences in type and performance among mobile computers. We developed a system architecture in which MR functions are distributed between a server and clients, and designed a content description language called SKiT-XML, which describes the MR content exchanged between the server and the clients. We implemented a cabling support system for OA floors as a sample application and verified the effectiveness of the framework.
-
酒田 信親, 蔵田 武志, 興梠 正克, 葛岡 英明, ビリングハースト マーク
Article type: Article
2005 Volume 10 Issue 3 Pages 333-342
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
The Wearable Active Camera/Laser (WACL) is a new collaboration tool that allows remote experts not only to set their viewpoint into the workplace of the fieldworker wearing the WACL independently, but also to point at real objects directly with the laser spot. In this paper, we introduce the WACL and its functions, such as camera/laser head stabilization, and describe a user study that examines the advantages and limitations of the WACL interface in remote collaboration, comparing it with a headset interface based on a head-mounted display and a head-mounted camera. The results show that the WACL is more comfortable to wear, is more eye-friendly, and causes less fatigue to the wearer, although there is no significant difference in task completion time. Communication analysis shows that experts talked more to workers wearing the WACL when detailed instructions were needed, and more to workers wearing the headset when view changes were required. Finally, we discuss how the WACL can be improved to redress these communication asymmetries by enhancing its visual assistance.
-
浦谷 謙吾, 町田 貴史, 清川 清, 竹村 治雄
Article type: Article
2005 Volume 10 Issue 3 Pages 343-352
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
It is often difficult to perceive the depth of virtual objects in an augmented reality (AR) environment due to poor depth cues and visual congestion. Existing visualization techniques that address the depth ambiguity problem are intended primarily for normal objects having 3D geometry. In this paper, we discuss depth visualization techniques for virtual annotations. Virtual annotations are used to display key information in AR applications such as navigation and maintenance systems. We begin by describing the depth ambiguity problem with regard to virtual annotations. Next, a number of possible solutions are discussed by introducing a metaphor of monocular vision. Finally, we describe three experiments that reveal the effectiveness and characteristics of the three visualization techniques implemented in the prototype system.
-
立野 圭祐, 竹村 雅幸, 大田 友一
Article type: Article
2005 Volume 10 Issue 3 Pages 353-361
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
The concept of "enhanced eyes" to restore gaze awareness in a collaborative mixed-reality space is proposed. Three "enhanced eyes" schemes are described: controlling the highlight in the eyes to aid awareness of eye contact, deforming the eyelids to enhance eye motion, and adjusting the rotation angle of the eyeballs to improve perception of gaze direction. The effectiveness of the schemes has been confirmed by subjective evaluations. The "enhanced eyes" emulate the natural appearance of the face and are effective not only to indicate gaze direction but also to create the feeling of gaze.
-
嵯峨 智, 川上 直樹, 舘 暲
Article type: Article
2005 Volume 10 Issue 3 Pages 363-369
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
We propose a haptic teaching system that uses a haptic device to teach handwork skills. Specifically, we chose tasks performed with pencil-like devices and examined how to teach an expert's handwork skills. In the proposed teaching method, the haptic device produces a force opposite in direction to the force to be learned; the operator tries to cancel this force and, as a result, the necessary force is "proactively" generated. Our hypothesis is that this proactiveness is essential for haptic teaching. We built a prototype system and compared our method with existing teaching methods.
-
林 建一, 加藤 博一, 西田 正吾
Article type: Article
2005 Volume 10 Issue 3 Pages 371-380
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
In augmented reality, occlusion between virtual and real objects has to be handled correctly so that users see a natural scene. To solve this problem, the depth of the real world from the user's viewpoint is used. However, if the system is used in an environment containing moving or deformable objects, the depth must be measured in real time. In this paper, we propose a contour-based real-time stereo matching method that acquires an accurate depth for the boundaries of real objects.
-
坂内 祐一, 宮狭 和大, 鈴木 雄士, 重野 寛, 岡田 謙一
Article type: Article
2005 Volume 10 Issue 3 Pages 381-389
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
We propose a remote collaboration system in which both the worker and the expert wear video see-through HMDs. In this system, the expert can interact with mixed reality objects in the worker's space while sharing stereo images from the worker's viewpoint. While the system has the merit of the WYSIWIS (what you see is what I see) property, it has the drawback that the expert's field of view is completely controlled by the worker. We conducted an experimental evaluation to investigate the effects of these features and conclude that the expert can assist the worker's task using this system.
-
佐藤 清秀, 内山 晋二, 山本 裕之
Article type: Article
2005 Volume 10 Issue 3 Pages 391-400
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
This paper describes a new robust registration framework, named "UG+B", for augmented and mixed reality (AR/MR) applications. Our method measures the pose of a head-mounted display (HMD) using not only an image captured by a user's view camera mounted on the HMD, but also images captured by bird's-eye view cameras, each of which observes the HMD from a third-person viewpoint, as well as the orientation measured by a sensor mounted on the HMD. The markers placed in the scene and those on the HMD are tracked in the user's view and the bird's-eye view images, respectively, and the pose of the HMD is estimated so as to minimize the sum of the re-projection errors over all markers. Furthermore, the inclination angles measured by the orientation sensor are used as known values to constrain the pose, so the remaining parameters, i.e., the position and azimuth angle, are derived from the image information. In addition to theoretical discussions, this paper demonstrates the effectiveness of our method through experiments comparing it with conventional vision-based registration methods.
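As a rough sketch of the kind of optimization described (the actual parameterization, camera models, and marker sets are not given in the abstract), the inclination angles can be held fixed at the sensor values while only the position and azimuth are adjusted to minimize the summed reprojection error:

    import numpy as np
    from scipy.optimize import least_squares

    def reprojection_residuals(params, roll, pitch, observations, project):
        """Residuals for pose refinement with roll/pitch fixed by the sensor.

        params       : [x, y, z, azimuth] -- the remaining unknowns
        observations : list of (marker_xyz, measured_uv) pairs collected from the
                       user's view camera and the bird's-eye view cameras
        project      : caller-supplied function mapping (pose, marker_xyz) -> uv
        """
        pose = (params[:3], roll, pitch, params[3])
        res = []
        for marker_xyz, measured_uv in observations:
            res.extend(project(pose, marker_xyz) - measured_uv)
        return res

    # Hypothetical usage, starting from an initial guess x0 of position and azimuth:
    # solution = least_squares(reprojection_residuals, x0,
    #                          args=(roll, pitch, observations, project))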
-
小竹 大輔, 内山 晋二, 山本 裕之
Article type: Article
2005 Volume 10 Issue 3 Pages 401-410
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
This paper describes a calibration method for the markers used for registration in MR applications. In vision-based registration methods using multiple markers, the geometric information of the marker arrangement, such as the marker positions and orientations, must be known in advance. We propose a hybrid method that combines conventional bundle adjustment with constraints on the marker arrangement obtained a priori. After presenting the basic framework that realizes this concept, we describe specific solutions for some typical constraints and show experiments that prove the effectiveness of our method.
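The abstract does not spell out how the a-priori constraints enter the bundle adjustment; one simple way (a sketch, not necessarily the paper's formulation) is to append weighted penalty residuals, for example forcing a set of markers known to lie on a common plane to stay on it:

    import numpy as np

    def coplanarity_residuals(marker_xyz, plane_normal, plane_offset, weight=100.0):
        """Penalty residuals keeping markers on a known plane n.x + d = 0.

        marker_xyz : (N, 3) current estimates of the marker positions.
        Appending these to the reprojection residuals of a standard bundle
        adjustment imposes the arrangement constraint as a soft penalty.
        """
        n = plane_normal / np.linalg.norm(plane_normal)
        return weight * (marker_xyz @ n + plane_offset)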
-
青木 洋一, 花谷 佐和子, 堀井 千夏, 佐藤 宏介
Article type: Article
2005 Volume 10 Issue 3 Pages 411-419
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
This paper presents a new method for virtually reproducing color properties using an optical see-through display equipped with a special half-mirror whose transmittance is variable. The model adjusts the mixture ratio between the real and virtual scenes by estimating which color should be displayed and what the most effective transmittance of the variable filter is. Controlling the transmittance in this way expands the range and quality of color reproduction in mixed reality. We explain the mathematical model of the color restoration and show experimental results in which the color appearance of a real degraded painting, a Japanese Ukiyo-e, is changed for virtual heritage.
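The paper's exact model is not reproduced in the abstract; as an illustrative sketch, assume the perceived color is the additive mixture t * C_real + C_display with filter transmittance t, and choose the largest t for which the required display color stays within gamut:

    import numpy as np

    def restore_color(c_real, c_target):
        """Pick transmittance t and display color so t * c_real + c_display ~= c_target.

        c_real, c_target : linear RGB triples in [0, 1] for one surface point.
        A larger t passes more of the real scene, so we take the largest t that
        still allows a non-negative (displayable) c_display in every channel.
        """
        c_real = np.asarray(c_real, dtype=float)
        c_target = np.asarray(c_target, dtype=float)
        ratios = np.where(c_real > 0, c_target / np.maximum(c_real, 1e-9), 1.0)
        t = float(min(1.0, ratios.min()))
        c_display = np.clip(c_target - t * c_real, 0.0, 1.0)
        return t, c_display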
-
舟橋 健司, フェルズ シドニー
Article type: Article
2005 Volume 10 Issue 3 Pages 421-428
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
In this paper, we describe a method to express water waves and splashes for virtual swimming in the artwork Swimming Across the Pacific. Our water simulation is based on a method in which the water surface is modeled as a thin film. For plane waves on the sea surface, we use recurrence relations to solve the partial differential equation for the 2D wave, as is standard practice; the sea surface is modeled as a mesh. We approximate the swimmer's volume with bounding boxes to improve performance and determine when they intersect the water grid as they move; waves are then generated and propagated. If the wave height exceeds a pre-defined threshold and/or the swimming avatar's motion interferes with the waves, particles are created in the air to simulate splashing.
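As a compact sketch of the standard recurrence for the 2D wave equation on a height-field mesh (the splash particles and the bounding-box interaction are omitted; the grid spacing, time step, and damping values are illustrative):

    import numpy as np

    def step_wave(h_prev, h_curr, c=1.0, dx=0.1, dt=0.02, damping=0.996):
        """One explicit finite-difference step of the 2D wave equation
        h_tt = c^2 * (h_xx + h_yy) on a periodic height-field mesh.
        Returns the surface heights at the next time step."""
        lap = (np.roll(h_curr, 1, 0) + np.roll(h_curr, -1, 0) +
               np.roll(h_curr, 1, 1) + np.roll(h_curr, -1, 1) - 4.0 * h_curr) / dx**2
        return damping * (2.0 * h_curr - h_prev + (c * dt) ** 2 * lap)

    # A swimmer's bounding box can inject a disturbance by depressing h_curr over
    # the intersected grid cells before calling step_wave for the next frame.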
-
大石 岳史, 増田 智仁, 倉爪 亮, 池内 克史
Article type: Article
2005 Volume 10 Issue 3 Pages 429-436
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
We digitally restored the original Great Buddha and the main hall of Todaiji temple. Todaiji temple has been destroyed by natural and man-made disasters and rebuilt a number of times; as a result, the shapes of the current Great Buddha and main hall differ slightly from their original states. We reconstructed 3D models of the original Great Buddha and main hall. The 3D model of the original Great Buddha was reconstructed by morphing the 3D model of the current Great Buddha, which was obtained by laser scanning and Modeling-from-Reality techniques. The 3D model of the original main hall was reconstructed by assembling partial 3D models of the main hall of Toshodaiji temple. We combined these models and visualized the original state of Todaiji temple with computer graphics techniques.
-
渡辺 啓蔵
Article type: Article
2005 Volume 10 Issue 3 Pages 437-442
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
An original method for visualizing 4D hyperobjects is presented. The color of a point on the 2D projection of a 4D hyperobject is determined by the geometric information integrated along the intersection between the hypersurfaces of the hyperobject and a "planar ray" from the observer's eye; a set of these planar rays covers the whole hypersurface area in 4D space. The geometric information is read out with light that is pseudo-reflected not by the hypersurface itself but by the displacement of its normal, which allows the observer to perceive hypersurface boundaries and curvature on the hyperobject. The method can visualize both hyperobjects made of flat hypersurfaces, such as 4D polytopes, and those made of curved hypersurfaces, such as the 4D hypersphere.
-
友國 誠至, 平井 慎一
Article type: Article
2005 Volume 10 Issue 3 Pages 443-452
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
Deformable soft objects such as food and living tissue show both elastic and viscoplastic properties and are referred to as rheological objects. Physical models of such virtual objects have been developed, but CPU power is insufficient to compute large virtual rheological objects in real time. This paper describes the real-time computation of the deformation of virtual rheological objects on an FPGA (Field Programmable Gate Array), an LSI whose logic circuit is rewritable. FPGAs allow the deformation of virtual rheological objects to be computed in parallel. We designed a logic circuit that computes the deformation on an FPGA and realized a system that computes it 8.26 times as fast as a PC with a 1.7 GHz CPU. Our estimates show that an FPGA is capable of computing the deformation 75.2 times faster than a PC.
-
Article type: Appendix
2005 Volume 10 Issue 3 Pages 453-455
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
-
Article type: Appendix
2005 Volume 10 Issue 3 Pages App1-
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS
-
Article type: Cover
2005 Volume 10 Issue 3 Pages Cover2-
Published: 2005/09/30
Released: 2017/02/01
JOURNAL FREE ACCESS