-
Kiyoharu Aizawa
2003 Volume 57 Issue 10 Pages 1228-1229
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Tatsuaki Hashimoto, Takashi Kubota
2003 Volume 57 Issue 10 Pages 1230-1234
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Takao Akatsuka, Tadanori Fukami, Akira Uchida, Jin Wu, Tohoru Takeda
2003 Volume 57 Issue 10 Pages 1235-1240
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Toshiharu Mukai, Masahiro Kawakita
2003 Volume 57 Issue 10 Pages 1241-1245
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Masaaki Fukumoto, Yasuyuki Sumi, Kenji Mase
2003 Volume 57 Issue 10 Pages 1246-1250
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Tools and Tips about LaTeX for Windows
Takeo Azuma
2003 Volume 57 Issue 10 Pages 1251-1256
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Private Animation Studio
Sonoko Kifune
2003 Volume 57 Issue 10 Pages 1257-1264
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Nobuhide Nakamura
2003 Volume 57 Issue 10 Pages 1265-1269
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Kohei Inoue, Kiichi Urahama
2003 Volume 57 Issue 10 Pages 1279-1283
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
Stippling is a simple non-photorealistic rendering (NPR) method. NPR stippling images are monochromatic binary images made up of black dots distributed over a white background. In this paper, we extend this composition and present methods for producing monochromatic multi-valued stippling and color stippling images. Our method is based on the scheme of point redistribution according to probability distributions presented by Secord et al. We extend their scheme and apply it to stippling with multi-valued dots over multi-valued backgrounds and to color images. Additionally, we develop a technique for improving the rendering of detail by varying the dot sizes according to the gradient of the tonal intensities of the input image.
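The tone-driven point-redistribution idea described above can be sketched roughly as follows. This is a minimal illustration of ours, not the authors' implementation: the function name, the sampling via `numpy.random.Generator.choice`, and the size-scaling constants are illustrative choices.

```python
import numpy as np

def stipple(gray, n_dots=5000, rng=None):
    # gray: 2D array in [0, 1], where 0 is black.  Dots are drawn from a
    # probability distribution proportional to darkness (the tone-driven
    # redistribution idea); dot sizes then grow with the local gradient.
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = gray.shape
    darkness = (1.0 - gray).ravel() + 1e-12      # avoid an all-zero distribution
    p = darkness / darkness.sum()
    idx = rng.choice(h * w, size=n_dots, p=p)    # sample flat pixel indices
    ys, xs = np.divmod(idx, w)
    gy, gx = np.gradient(gray)                   # tonal-intensity gradient
    grad = np.hypot(gx, gy)
    sizes = 1.0 + 2.0 * grad[ys, xs] / (grad.max() + 1e-9)
    return xs, ys, sizes
```

Darker regions receive proportionally more dots, and dots near strong tonal edges are enlarged, loosely mirroring the detail-preservation step the abstract mentions.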
-
Yuji Teshima, Koji Nishio, Ken-ichi Kobori
2003 Volume 57 Issue 10 Pages 1284-1291
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
Similarity retrieval for three-dimensional models has been in the spotlight because of the growth of digital archives. We propose a similarity estimation method that uses mathematical morphology. It consists of a feature extraction process and a similarity estimation process. First, the pose of a shape is determined from the eigenvectors of its covariance matrix, and a feature quantity corresponding to the projections and hollows of the shape is extracted by repeatedly applying mathematical morphology. Because the direction of the shape is occasionally reversed when the eigenvectors of the covariance matrix are used, we apply the standard deviation to the feature quantity. Second, the degree of similarity between this feature quantity and those stored in advance is calculated. Experimental results showed that our method is more effective than conventional ones.
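A rough sketch of the repeated-opening feature idea, assuming a 2D binary shape and a 4-neighbour structuring element; the PCA pose normalization and the standard-deviation step from the abstract are omitted, and the similarity score is our own simple choice.

```python
import numpy as np

def erode(img):
    # 4-neighbour binary erosion implemented with array shifts (no SciPy)
    p = np.pad(img, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(img):
    p = np.pad(img, 1)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def morphology_feature(shape, steps=3):
    # Each opening residue counts the pixels removed at one scale, which is
    # one way to capture "projections and hollows" of the shape.
    feats, cur = [], shape.copy()
    for _ in range(steps):
        opened = dilate(erode(cur))
        feats.append(int(cur.sum()) - int(opened.sum()))
        cur = opened
    return np.array(feats, dtype=float) / max(int(shape.sum()), 1)

def similarity(f1, f2):
    # Higher is more similar; 1.0 means identical feature vectors.
    return 1.0 / (1.0 + np.linalg.norm(f1 - f2))
```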
-
Keita Takahashi, Takeshi Naemura, Hiroshi Harashima
2003 Volume 57 Issue 10 Pages 1292-1299
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
This paper focuses on the sampling problem in light field rendering (LFR), which is a fundamental approach to image-based rendering. The quality of LFR depends on the light-ray database generated from pre-acquired images, since image synthesis is the process of gathering appropriate light-ray data from the database. Interpolation of light-ray data is effective for improving the quality; it is based on the assumption that the objects in a scene lie on a plane called a “focal plane”. Depending on the depth of the focal plane (the distance between the cameras and the focal plane), a focus-like effect appears in the synthesized images. In this paper, we formulate the depth of field in LFR to derive the range of depth within which scene objects can be rendered in focus. Our theory is based on the plenoptic sampling theory and subsumes some other related work. The proposed concept is applicable to intuitive measurement of synthesis quality, configuration of sampling conditions, and evaluation of spatial coding methods.
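The depth-of-field idea can be illustrated with the common plenoptic-sampling criterion that the disparity between adjacent cameras, measured relative to the focal plane, stay below one pixel. The formula and parameter names below are our paraphrase of that standard criterion, not the paper's exact formulation.

```python
def depth_of_field(f, baseline, z_focal, max_disp=1.0):
    # In-focus depth range: depths z for which
    #   disparity(z) = f * baseline * |1/z - 1/z_focal| <= max_disp
    # f: focal length in pixels, baseline: camera spacing, z_focal: focal-plane depth.
    delta = max_disp / (f * baseline)
    z_near = 1.0 / (1.0 / z_focal + delta)
    inv_far = 1.0 / z_focal - delta
    z_far = float("inf") if inv_far <= 0.0 else 1.0 / inv_far
    return z_near, z_far
```

The in-focus range always brackets the focal plane and widens as the cameras are packed more densely (smaller `baseline`).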
-
Bunyou Okumura, Takashi Machida, Naokazu Yokoya
2003 Volume 57 Issue 10 Pages 1300-1306
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A technique is described for rendering, in real time, reflections on surfaces with non-uniform roughness, which is difficult to do with conventional methods. It combines off-line processing with hardware rendering. First, a multiscale environment map of the rough surfaces is created. Next, the pixels that would significantly affect the rendering results in an omnidirectional image are filtered out. Finally, the reflections on the surfaces are rendered in real time using the map to reflect the effects of the roughness. Experiments demonstrated that the proposed method can render surfaces with non-uniform roughness sufficiently accurately in real time.
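A toy version of a multiscale environment map indexed by roughness: each level is a progressively blurred copy of the map, and rougher surface points look up blurrier levels. The box-blur pyramid and the wrap-around borders are simplifications of ours, not the paper's off-line filtering.

```python
import numpy as np

def prefilter_levels(env, n_levels=4):
    # Build a multiscale environment map: each level is a progressively
    # blurred copy (5-point box blur with wrap-around borders, for brevity).
    levels = [env]
    for _ in range(n_levels - 1):
        e = levels[-1]
        levels.append((e + np.roll(e, 1, 0) + np.roll(e, -1, 0)
                         + np.roll(e, 1, 1) + np.roll(e, -1, 1)) / 5.0)
    return levels

def lookup(levels, u, v, roughness):
    # Rougher surface points index blurrier levels of the map.
    lvl = min(int(roughness * (len(levels) - 1) + 0.5), len(levels) - 1)
    return levels[lvl][v, u]
```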
-
Yoshihiro Yasumuro, Yu Ishikawa, Masataka Imura, Koichi Minami, Yoshit ...
2003 Volume 57 Issue 10 Pages 1307-1313
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
For presenting virtual objects in a real space, mixed reality (MR) techniques are effective for simulating an overview of a scene. In interior design, for instance, the positions and colors of the items need to be determined under various layout plans. We propose a new framework for interactively arranging and rendering virtual objects in real space by manipulating physical 3D markers. A 3D marker consists of 2D codes and a spherical mirror, so both geometric and photometric conditions can be acquired instantaneously with a single camera. Experimental results showed that the proposed approach provides the appropriate geometry and local lighting conditions for presenting each virtual object with a simple system. The system allows the user to experiment with arranging virtual furniture and to verify that the created scene is visually consistent with the physical space.
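One way a spherical mirror yields photometric conditions is to take the brightest pixel on the mirror's image and reflect the viewing ray about the sphere normal there, giving a dominant light direction. The orthographic-camera assumption and the brightest-pixel heuristic below are ours, not the paper's calibration procedure.

```python
import numpy as np

def light_direction_from_sphere(img, cx, cy, r):
    # Brightest pixel on the mirror sphere -> dominant light direction:
    # reflect the viewing ray about the sphere normal at that pixel.
    # (cx, cy, r): sphere center and radius in image coordinates.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    on_sphere = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
    vals = np.where(on_sphere, img, -np.inf)
    y, x = np.unravel_index(np.argmax(vals), img.shape)
    nx, ny = (x - cx) / r, (y - cy) / r
    nz = np.sqrt(max(1.0 - nx * nx - ny * ny, 0.0))
    n = np.array([nx, ny, nz])
    v = np.array([0.0, 0.0, 1.0])       # viewing direction, toward the camera
    return 2.0 * n.dot(v) * n - v       # mirror-reflection formula
```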
-
Keiko Yoshimoto, Masataka Imura, Ichiroh Kanaya, Yoshihiro Yasumuro, Y ...
2003 Volume 57 Issue 10 Pages 1314-1319
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
This paper proposes a method for synthesizing a lighting environment based on atmospheric conditions in order to create highly realistic outdoor scenes. The optical atmospheric model takes into account the effects of Rayleigh and Mie scattering as part of the physical model. Clouds, which enhance the realism of rendered images, are extracted and composited onto the spherical light image. A CIE XYZV color model is used as a visual model to create scotopic vision. A comparison of calculated and measured spherical light images showed that the proposed method can reproduce a proper lighting environment under any atmospheric conditions.
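The Rayleigh/Mie distinction the abstract mentions can be illustrated with a one-line extinction model: Rayleigh scattering falls off as 1/λ⁴ while Mie scattering is roughly wavelength-independent. The scale constants here are purely illustrative, not the paper's calibrated coefficients.

```python
import numpy as np

def sky_transmittance(wavelength_nm, path_len, beta_mie=0.01):
    # Rayleigh scattering ~ 1/lambda^4 (blue light scatters most);
    # Mie scattering is treated as wavelength-independent.
    beta_rayleigh = 1.0e9 / wavelength_nm ** 4   # illustrative scale only
    return np.exp(-(beta_rayleigh + beta_mie) * path_len)
```

Because blue light is attenuated more along a long path, direct sunlight reddens near the horizon while the scattered sky stays blue, which is the qualitative behaviour such a model reproduces.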
-
Yoshifumi Kitamura, Takashige Konishi, Sumihiko Yamamoto, Fumio Kishin ...
2003 Volume 57 Issue 10 Pages 1320-1327
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
We propose a stereoscopic display system for multiple users. It allows three or more people to simultaneously observe individual stereoscopic image pairs from their own viewpoints. The system tracks the positions of the users' heads and generates distortion-free images for each user. It consists of a normal display and a display mask with a hole in the center; the mask is placed at a suitable distance above the display surface. By controlling the position of the image drawing area for each user according to that user's viewpoint, each user can observe, with shutter glasses, the stereoscopic image pairs shown in an individual area of the display. No user can see the image drawing areas of the other users, because those areas are occluded by the display mask. Accordingly, the display system can simultaneously provide intelligible 3D stereoscopic images to three or more moving observers without flickering or distortion.
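The placement of each user's drawing area follows from similar triangles through the mask hole: the area visible to an eye lies where the line from the eye through the hole meets the display. A 1D sketch with our own coordinate convention (display at z = 0, mask plane at z = mask_z):

```python
def drawing_area_center(eye_x, eye_z, hole_x, mask_z):
    # Intersect the line from the eye (eye_x, eye_z) through the mask hole
    # (hole_x, mask_z) with the display plane z = 0 (similar triangles).
    t = eye_z / (eye_z - mask_z)
    return eye_x + t * (hole_x - eye_x)
```

As the tracked head moves, the drawing area shifts in the opposite direction behind the hole, which is why each user sees only their own area while the mask occludes the others'.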
-
Junji Sone, Hitoshi Ikoma, Yoshimasa Tokuyama, Hiroshi Toriya, Hiroa ...
2003 Volume 57 Issue 10 Pages 1328-1333
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
-
Atsushi Iiduka, Satoshi Tanaka, Hiroshi Yoshikawa
2003 Volume 57 Issue 10 Pages 1334-1336
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Makoto Hasegawa, Kenji Mikawa
2003 Volume 57 Issue 10 Pages 1337-1341
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Makoto Kimura, Hideo Saito
2003 Volume 57 Issue 10 Pages 1342-1346
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Removal of View-Disturbing Waterdrops in Images by Using a Pan-Tilt Camera System
Atsushi Yamashita, Toru Kaneko, Masaki Nishimura, Kenjiro T. Miura
2003 Volume 57 Issue 10 Pages 1347-1353
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
We propose a method for removing view-disturbing waterdrops from images by processing images taken with a pan-tilt camera system. Images taken on rainy days are often hard to make out because of waterdrops adhering to the surface of the protecting lens. In our method, a first image of a distant view is taken, and a second image is then taken from a different angle. The second image is transformed by a projective transformation and compared with the first to identify the regions containing waterdrops; which image portion actually has a waterdrop is decided by considering the distance between the two candidate waterdrop regions. Finally, the waterdrop regions are eliminated by merging the two images. Experimental results showed the effectiveness of this method.
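The detect-and-merge step can be sketched as follows, assuming the second image has already been aligned by the projective transformation; the region-distance disambiguation the abstract describes is skipped, and the threshold is an illustrative choice of ours.

```python
import numpy as np

def remove_drops(img_a, img_b_aligned, thresh=0.2):
    # After alignment, the scene matches in both images while an adherent
    # drop stays put on the lens, so large differences mark candidate
    # waterdrop regions.
    diff = np.abs(img_a - img_b_aligned)
    drops = diff > thresh
    merged = img_a.copy()
    merged[drops] = img_b_aligned[drops]   # fill drop pixels from the other view
    return merged, drops
```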
-
Kiyohiko Abe, Mikio Ohuchi, Shoichi Ohi, Minoru Ohyama
2003 Volume 57 Issue 10 Pages 1354-1360
Published: 2003
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
Many communication-aid systems have been developed for severely physically handicapped people, such as ALS patients. The eye-gaze input system is being studied as an interface for a communication aid that can operate a computer with eye movement alone. Conventional eye-gaze input systems, however, can harm users because they detect the eye-gaze by shining infrared rays onto the eyes; prolonged irradiation could potentially cause eye damage such as glassblower's cataract, retinitis solaris, or dry eyes. We propose an eye-gaze input system that uses a personal computer and a home video camera to detect the eye-gaze under natural light. The system detects the eye-gaze by image analysis based on the limbus tracking method and requires no special image processing units or sensors. The system is safe, compact, and easy to use.
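A 1D caricature of limbus tracking: locate the dark iris region against the white sclera in one image row across the eye and measure its offset from the row's centre. The threshold and normalization are our own choices, not the paper's detection pipeline.

```python
import numpy as np

def limbus_gaze(eye_row, iris_thresh=0.3):
    # eye_row: 1D intensity profile across the eye, in [0, 1].
    # Returns horizontal gaze in [-1, 1] (negative = left), or None if
    # no dark iris/limbus region is found.
    dark = np.nonzero(eye_row < iris_thresh)[0]
    if dark.size == 0:
        return None
    center = (eye_row.size - 1) / 2.0
    return (dark.mean() - center) / center
```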
-
Hiroshi Watabe, Seiji Hotta, Sueharu Miyahara
2003 Volume 57 Issue 10 Pages 1361-1366
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Mitsuru Ohkura, Hiroyuki Morita, Takashi Hamamoto, Hideyuki Shimada, Y ...
2003 Volume 57 Issue 10 Pages 1367-1372
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Nobuhiko Mukai, Yoshiaki Sakaguchi, Hisashi Shigeoka, Makoto Kosugi
2003 Volume 57 Issue 10 Pages 1373-1375
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Takahiko Fukinuki
2003 Volume 57 Issue 10 Pages 1377-1379
Published: 2003
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS