-
Seiichiro Hangai
2004 Volume 58 Issue 6 Pages
750-752
Published: June 01, 2004
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Shigeru Akamatsu
2004 Volume 58 Issue 6 Pages
753-755
Published: June 01, 2004
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Toshio Nakamura
2004 Volume 58 Issue 6 Pages
756-758
Published: June 01, 2004
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Kazuhiko Sumi
2004 Volume 58 Issue 6 Pages
759-762
Published: June 01, 2004
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Yoichi Seto
2004 Volume 58 Issue 6 Pages
763-766
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Satoshi Miyaji, Glenn Russell, Edward Miller, Yasuhiro Takishima
2004 Volume 58 Issue 6 Pages
767-772
Published: June 01, 2004
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Ichiro Kuriki
2004 Volume 58 Issue 6 Pages
773-778
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Norio Ohga
2004 Volume 58 Issue 6 Pages
779-782
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Kimihiro Tomiyama, Miwa Katayama, Yuichi Iwadate, Hiroyuki Imaizumi
2004 Volume 58 Issue 6 Pages
797-806
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
We are researching a modeling algorithm for the moving human body to generate a fine dynamic 3D object. In this paper, we propose a modeling method that uses multiple viewpoints. The method uses an approximate shape obtained by the volume intersection method under restrictive conditions set by stereo matching.
In addition, to map the image texture onto the surface of the dynamic 3D objects, we propose a vertex texture mapping method based on ray-casting. Our modeling experiments were performed using 19 FireWire cameras.
We confirmed that the proposed methods are effective for generating fine dynamic 3D objects. The generated dynamic 3D object can be displayed as an "animation" by using a VRML browser.
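The volume intersection step described above can be sketched in miniature: a volume element survives only if it falls inside every camera's silhouette cone. The toy below carves a 2D grid from two orthogonal 1D silhouettes; the function name and the orthographic setup are illustrative assumptions, and the paper's method additionally constrains the hull with stereo matching.

```python
import numpy as np

def volume_intersection(sil_x, sil_y):
    """Carve a 2D 'volume' from two orthogonal 1D silhouettes.

    Cell (i, j) survives only if the silhouette seen along x contains
    row i AND the silhouette seen along y contains column j -- the
    intersection of the back-projected silhouette cones.
    """
    return np.logical_and.outer(sil_x, sil_y)

# Two binary silhouettes observed from orthogonal directions.
sil_x = np.array([True, True, False])
sil_y = np.array([False, True, True])
hull = volume_intersection(sil_x, sil_y)  # 3x3 boolean occupancy grid
```

With real cameras the same test is run per voxel against each calibrated silhouette image instead of per grid cell.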
-
Hideaki Takada, Shiro Suyama, Munekazu Date, Kaori Hiruma, Kenji Nakaz ...
2004 Volume 58 Issue 6 Pages
807-810
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
We have developed a compact 3-D display using a depth-fused 3-D (DFD) visual illusion in which two overlapped images with edges displayed at different depths can be perceived as a 3-D image. To make the display compact, we utilize the apparent luminance summation at two transparent LCDs instead of direct luminance summation. Moving color 3-D images can thus be shown on a compact display. Our compact DFD display is promising for various applications, such as cellular phones, PDAs, and PC monitors.
-
Tohru Tamura
2004 Volume 58 Issue 6 Pages
811-816
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
Recently, a new three-dimensional display method has been proposed that enables an observer to perceive an apparent 3D image in continuous depth when 2D images are positioned at different depths and their luminances are divided between them according to the 3D image depth. This method, called Depth Fusion Display (DFD), might be based on different perceptual cues from conventional stereoscopic image systems. Investigating the 3D perception cues of this new method is necessary before applying it to a 3D display system. A depth perception model has been proposed in which the edge position of the fused image of the left and right eyes is important for perceiving a 3D image. However, this model has not been fully verified. In this study, two experiments were conducted to investigate the 3D perception cues of DFD. One was a monocular-view experiment, and the other was an experiment in which only the luminance at the edges of the 2D images was changed. The results showed that the edge positions of the fused image are important cues; the edge position cue model was therefore found to be correct.
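The luminance division that drives the DFD percept can be illustrated with a minimal sketch. The linear split of total luminance between the front and rear planes below is an illustrative assumption, not the paper's exact perceptual mapping:

```python
def dfd_luminance_split(depth, l_total=1.0):
    """Divide total luminance between the front and rear 2D planes.

    depth in [0, 1]: 0 places the perceived image at the front plane,
    1 at the rear plane. The linear division is an assumption for
    illustration; the true perceptual mapping need not be linear.
    """
    l_front = (1.0 - depth) * l_total
    l_rear = depth * l_total
    return l_front, l_rear

# An image meant to appear a quarter of the way back gets 75% of the
# luminance on the front plane and 25% on the rear plane.
front, rear = dfd_luminance_split(0.25)
```

The total luminance stays constant, so only the ratio between the two planes, and hence the fused edge position, changes with depth.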
-
Kenji Tanaka, Junya Hayashi, Ichiro Kawabuchi, Masahiko Inami, Susumu ...
2004 Volume 58 Issue 6 Pages
819-826
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
TWISTER (Telexistence Wide-angle Immersive STEReoscope) is a panoramic autostereo display designed for face-to-face telecommunications. This device works as a cylindrical display by rotating multiple display units around an observer and by presenting time-varying patterns. Panoramic autostereo vision is achieved by using a “rotating parallax barrier” method, the effectiveness of which was already reported. To date, we have succeeded in presenting panoramic stereo motion pictures using the third prototype TWISTER III. Its design, implementation, and evaluation are reported, and past and present problems are enumerated. The discussion includes psychophysical experiments conducted to evaluate the display performance. Issues on constructing panoramic stereo images are also described.
-
Hiroyuki Shimizu, Noriyuki Ono, Kiyoharu Aizawa
2004 Volume 58 Issue 6 Pages
827-834
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
Construction of large-scale virtual environments is needed for various applications such as city planning and virtual tourism, among others. We have proposed a new framework that uses laser altimetry data and laser range data. In this paper, we describe parts of this framework, namely, constructing rough 3D large-area models and integrating precise models into them.
-
Yusuke Oike, Makoto Ikeda, Kunihiro Asada
2004 Volume 58 Issue 6 Pages
835-841
Published: June 01, 2004
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
In this paper, we present a smart image sensor with high-speed and low-intensity ID beacon detection for an augmented reality (AR) system. AR systems are designed to provide an enhanced view of the real world with meaningful information on a computer. Our target AR system uses an optical device with an ID beacon such as a blinking LED. Our sensor architecture simultaneously achieves analog readout for 2-D image capture and high-speed digital readout for ID beacon detection. The pixel circuit has a logarithmic-response photodetector and an adaptive-modulation amplifier to detect the low-intensity ID beacon over a wide range of background illumination. We developed and successfully tested a 128x128 smart sensor. It achieves 20 bytes/ID-frame with a 40 kHz carrier at 30 fps, and robust ID-beacon detection at less than -10.0 dB signal-to-background ratio (SBR) with more than 40 dB background illumination.
-
Keita Ushida, Hiroshi Harashima, Jun Ishikawa
2004 Volume 58 Issue 6 Pages
842-845
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
The authors focus on building easy-to-use "interaction platforms". To realize this, the authors have been developing "i-ball 2", which is equipped with a special display system that shows images as if they were floating in a crystal ball, together with various input interfaces. In this paper, the hardware of i-ball 2 and its applications are reported.
-
Nagul Cooharojananone, Kiyoharu Aizawa
2004 Volume 58 Issue 6 Pages
846-849
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
We propose a home video summarization represented by a group of representative frames (R-frames) from each shot. For each shot, the number of R-frames is calculated from the shot duration and motion activity. When the events in the shots are not related to each other (one tape contains many events), it is desirable to retrieve more information from more shots. Our algorithm allows users to select the number of shots appearing in the summarized video, which gives them an optional way to understand the original sequence. In our experiments, we summarized videos into a variable number of shots appearing in the summary. We also evaluated the summaries through users' subjective evaluations.
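The per-shot R-frame allocation can be sketched as follows. The linear weighting of duration and motion activity, and all parameter values, are illustrative assumptions rather than the authors' exact formula:

```python
import math

def num_r_frames(duration_s, motion_activity, base_rate=0.2, weight=2.0):
    """Number of representative frames for one shot.

    Grows with shot duration (seconds) and with normalized motion
    activity in [0, 1]; every shot keeps at least one R-frame.
    base_rate and weight are hypothetical tuning parameters.
    """
    raw = duration_s * base_rate * (1.0 + weight * motion_activity)
    return max(1, math.ceil(raw))

# A calm 10 s shot gets fewer R-frames than a high-motion one.
calm = num_r_frames(10.0, 0.0)  # 2
busy = num_r_frames(10.0, 1.0)  # 6
```

Any monotone function of duration and activity would serve; the point is that busy shots contribute more R-frames to the summary.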
-
Atsushi Matsumura, Sei Naito, Ryoichi Kawada, Atsushi Koike, Shuichi M ...
2004 Volume 58 Issue 6 Pages
850-853
Published: June 01, 2004
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
Reference images with their depth maps are often used as materials for free viewpoint images. However, occlusion can degrade the quality of the output images. To solve this problem, we developed an effective synthesizing method using multi-layered background buffers that increases the quality of free viewpoint images. Since the buffers are created and updated dynamically without additional information such as separate background images, the method is applicable to conventional materials. The effectiveness of the proposed method was shown by several simulation experiments.
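The background-buffer idea can be sketched in 1D: pixels are shifted by disparity, nearer pixels win conflicts, pixels that are covered or fail the depth test are banked in a background buffer, and disocclusion holes in the synthesized view are filled from that buffer. This is a simplified single-layer illustration under assumed integer disparities, not the paper's full multi-layered method:

```python
def synthesize_view(colors, disp, bg):
    """Warp a 1D scanline to a new viewpoint with a background buffer.

    colors: pixel values; disp: integer disparity per pixel (larger =
    nearer to the camera); bg: background buffer carried over from
    earlier views. Returns the synthesized scanline and the updated
    buffer. No separate background images are needed: the buffer is
    updated dynamically from pixels that lose depth tests.
    """
    n = len(colors)
    out = [None] * n
    depth = [-1] * n        # disparity of the pixel currently in each slot
    for i, (c, d) in enumerate(zip(colors, disp)):
        j = i + d           # target position in the new view
        if not (0 <= j < n):
            continue
        if d > depth[j]:
            if depth[j] >= 0:
                bg[j] = out[j]   # covered pixel becomes background
            out[j], depth[j] = c, d
        else:
            bg[j] = c            # occluded pixel goes into the buffer
    for j in range(n):
        if out[j] is None:
            out[j] = bg[j]       # fill disocclusion holes from buffer
    return out, bg

# Pixel 'B' moves two slots right and occludes 'D'; the hole it leaves
# behind is filled from the background buffer.
out, bg = synthesize_view(['A', 'B', 'C', 'D'], [0, 2, 0, 0], ['x'] * 4)
```

The multi-layered version keeps several such buffers ordered by depth, so deeper background layers remain available when more than one layer is uncovered.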