The Journal of the Society for Art and Science
Online ISSN : 1347-2267
ISSN-L : 1347-2267
Volume 10, Issue 4
Papers for Special Issues "Entertainment Computing 2010"
  • Shunsuke Yoshimoto, Yuki Hamada, Takahiro Tokui, Tetsuya Suetake, Masat ...
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 204-214
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    In this study, we developed “Haptic Canvas”, which enables users to blend, draw, and feel haptic sensations. In particular, we used a haptic device that controls a dilatant fluid to present unique haptic sensations according to the viscosity and stiffness of the fluid. Users wear a haptic glove, which consists of filters for the starch particles and components for sucking or ejecting the fluid, and experience haptic interaction within a shallow pool filled with the dilatant fluid. Generating and blending haptic sensations is achieved by introducing haptic primary colors: the “stickiness”, “hardness”, and “roughness” sensations.
    Download PDF (4324K)
  • Masataka Imura, Yuki Uranishi, Sei Ikeda, Yoshitsugu Manabe, Osamu Osh ...
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 215-225
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    “Heijo-kyo Walk-through” is a system that enables users to stroll through a virtual “Heijo-kyo”, an old capital of Japan, reproduced by computer graphics. The system has a distributed structure suitable for exhibition in various environments. A broad area of Heijo-kyo is procedurally modeled based on a small set of arrangement rules. Immersive displays are used to visualize the old scenery of Heijo-kyo, and users can walk virtually through the scenery by body motion such as stepping. To detect stepping motion, we use a single camera that shoots the heels from behind and analyze their up-and-down movement and grounding location.
    Download PDF (15001K)
Papers
  • Kenji Matsuo, Masafumi Hagiwara
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 226-233
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS

    This paper proposes an entertainment aquarium system using Augmented Reality (AR). In the proposed system, 3-dimensional CG fish are superimposed on the scene captured by a Web camera and shown on the PC screen. Fish are selected according to the color information of the background image taken by the Web camera, giving the user the sense of watching real fish move. Three kinds of elements for entertainment are introduced: interactivity, communication, and unexpectedness. As for interactivity, the objects displayed in real time respond to the user's operations through AR. As for communication, users can cooperate to raise fish and can communicate with other users via Twitter. Finally, as for unexpectedness, a rainbow appears according to sounds the user utters. We carried out evaluation experiments and obtained good results for the three elements compared with existing similar application software.

    Download PDF (1364K)
  • F. J. Menendez, O. Halabi, T. Fujimoto, N. Chiba
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 234-240
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    In this paper we present a novel way to combine a laser (vector) projector with a video (raster) projector, using the former to enhance or augment the 3D images projected by the latter. As we will show, two main problems arise from this setup. The first is finding a relation between the video projector space and the laser projector space, so that pixels can be transformed into laser coordinates. The second is hiding virtually occluded laser segments to create a seamless fusion of the two projected parts. In addition, we apply a calibration technique that compensates for visual deformations caused by projector placement and/or non-planar screens.
    Download PDF (2999K)
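The first problem above, relating the two projector spaces so that pixels map to laser coordinates, can be sketched as follows. A 3x3 planar homography is one common model for such a mapping; the paper's actual relation and the matrix values below are illustrative assumptions, not taken from the paper.

```python
# Mapping a video-projector pixel (u, v) into laser-projector coordinates.
# A planar homography is ONE common choice for relating two 2D projector
# spaces; this matrix is a made-up example, not a real calibration result.
H = [[1.02, 0.01, -3.0],
     [0.00, 0.98,  2.5],
     [1e-5, 0.00,  1.0]]

def pixel_to_laser(u, v):
    """Apply the homography in homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # homogeneous divide

corner = pixel_to_laser(0, 0)  # -> (-3.0, 2.5)
```

In practice such a matrix would be estimated from corresponding point pairs observed in both projector spaces during calibration.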
  • Naoyuki Furuta, Kazunori Mizuno, Sawako Kon, Seiichi Nishihara, Yukio ...
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 241-250
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    Virtual cities are widely used in various scenes, including digital cities on the Internet, 3D games, movies, and urban planning. We have developed a system that generates virtual cities whose cityscape varies according to changes in the local features of each urban block over time. In this paper, we describe a method that evolves these features using cellular automata. In our method, the features of each urban block, defined as one cell, are changed locally by interaction between adjacent cells; the features are also influenced by road properties. We demonstrate that our method can produce varied and realistic simulation patterns of urban block changes, and that the whole system can construct 3D scenes of time-varying virtual cities.
    Download PDF (5570K)
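The cellular-automaton update described in this abstract (each urban block as a cell whose features change through interaction with adjacent cells, plus an extra influence from roads) can be sketched minimally as below. The grid size, the single scalar feature, the neighbor-averaging rule, and the road bonus are all illustrative assumptions, not the authors' actual update rules.

```python
import random

# One feature value per urban block (cell); higher = more developed.
W, H = 8, 8
random.seed(0)
grid = [[random.random() for _ in range(W)] for _ in range(H)]
road = [[y == H // 2 for x in range(W)] for y in range(H)]  # one horizontal road

def step(grid):
    """One time period: each cell drifts toward its neighbors' mean,
    and road-adjacent blocks develop slightly faster."""
    new = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            nbrs = [grid[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < H and 0 <= x + dx < W]
            mean = sum(nbrs) / len(nbrs)
            v = 0.7 * grid[y][x] + 0.3 * mean   # local interaction
            if road[y][x]:                      # road property influence
                v = min(1.0, v + 0.05)
            new[y][x] = v
    return new

for t in range(10):  # simulate ten time periods
    grid = step(grid)
```

The resulting per-period grids would then drive the selection of 3D building models for each block when constructing the time-varying city scene.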
  • Takashi Matsuo, Koji Mikami, Taichi Watanabe, Kunio Kondo
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 251-262
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    In 2D animation and comics, the thickness of object outlines and contour lines is often varied to emphasize or exaggerate the shape of an object. Adding a 2D touch to 3DCG animation using toon rendering is common, but existing techniques cannot achieve varying line thickness. This paper proposes a method for recreating the effect of varying outline and contour line thickness based on object shape in real-time 3D environments such as video games. Our method exaggerates the thickness of lines where the target polygon is curved. Furthermore, by using a backface polygon of the target polygon, we are able to draw a continuous, closed outline of the object with varying line thickness.
    Download PDF (2913K)
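The core idea of a backface-polygon (inverted-hull) outline whose thickness grows with curvature can be sketched as below. The curvature proxy (angle between adjacent face normals) and the base/gain constants are illustrative assumptions, not the paper's exact formulation.

```python
import math

def curvature_from_normals(n1, n2):
    """Rough curvature proxy: angle between the normals of two faces
    sharing the vertex (0 = flat, larger = more curved)."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.acos(dot)

def offset_outline_vertex(pos, normal, curvature, base=0.02, gain=0.08):
    """Displace a vertex of the backface outline shell along its normal;
    more curved regions get a thicker (exaggerated) outline."""
    t = base + gain * curvature  # per-vertex line thickness
    return tuple(p + t * n for p, n in zip(pos, normal))

# Flat region: adjacent faces share a normal -> thin outline.
flat = offset_outline_vertex((1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                             curvature_from_normals((1, 0, 0), (1, 0, 0)))
# Curved region: normals differ by 90 degrees -> exaggerated outline.
curved = offset_outline_vertex((1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                               curvature_from_normals((1, 0, 0), (0, 1, 0)))
```

Rendering this displaced shell with front-face culling leaves only the expanded backfaces visible around the silhouette, which is what produces the closed outline.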
  • Fumio TAKENAKA, Tadahiro FUJIMOTO, Osama HALABI, Norishige CHIBA
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 263-275
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    Supplementary material

    This paper proposes a method, based on a volume intersection approach that utilizes projection mappings, to reconstruct and render the 3D shape of an object in real time from images captured by multiple video cameras. The proposed method imitates a voxel-based volume intersection method on a pseudo-voxel space, constructed by placing three orthogonal sets of parallel, equally spaced planes in 3D space. In each frame, the frame image of each camera is first processed to generate a projection texture with different alpha values for foreground (object) pixels and background pixels. Then, the projection texture of every camera is projected onto the parallel planes using the projection mapping functions of a graphics library. Finally, the visual hull of the object is efficiently reconstructed as the intersection of all cameras' projection regions in 3D space using the alpha operations of the graphics library, and is rendered using color values. The proposed method is more efficient than a conventional voxel-based volume intersection method and is able to generate virtual viewpoint images of a moving object in real time.

    Download PDF (3268K)
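The conventional voxel-based volume intersection that this paper accelerates can be sketched in its plain CPU form: a voxel belongs to the visual hull only if it projects onto a foreground (silhouette) pixel in every camera. The two axis-aligned orthographic cameras and the square silhouettes below are illustrative assumptions; the paper instead projects silhouette textures onto three sets of parallel planes and intersects them with GPU alpha operations.

```python
# Voxel-based visual hull: a voxel survives only if it lands inside the
# silhouette of EVERY camera. Cameras and silhouettes here are made-up
# orthographic stand-ins for illustration.
N = 16  # voxels per axis

def silhouette_xy(x, y):   # camera looking along the z axis
    return 4 <= x < 12 and 4 <= y < 12

def silhouette_xz(x, z):   # camera looking along the y axis
    return 4 <= x < 12 and 4 <= z < 12

hull = [(x, y, z)
        for x in range(N) for y in range(N) for z in range(N)
        if silhouette_xy(x, y) and silhouette_xz(x, z)]
# Intersecting the two silhouette prisms leaves an 8x8x8 block of voxels.
```

The cost of this triple loop is what motivates replacing it with per-plane projection mapping and alpha tests, which the GPU evaluates for all "pseudo-voxels" of a plane at once.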
  • Yuuki OGAWA, Katsutsugu MATSUYAMA, Tadahiro FUJIMOTO, Norishige CHIBA
    Article type: research-article
    2011, Volume 10, Issue 4, Pages 276-284
    Published: December 15, 2011
    Released on J-STAGE: March 31, 2023
    JOURNAL FREE ACCESS
    Supplementary material
    In this paper, we propose a technique for animating flickering flames by applying Beier et al.'s morphing method to a single still image of a candle flame. In this morphing method, pairs of control lines arranged on a source image and a destination image define the deformation of the source image. Assuming that the fluctuation of wind can be expressed as 1/f^β noise, we apply this noise to the control lines to give them natural fluctuation. Our method can produce an animation not only of a single candle flame flickering in a light breeze but also of a scene including many candles.
    Download PDF (2441K)
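A 1/f^β fluctuation like the one driving the control lines can be synthesized, for example, as a sum of sinusoids with random phases whose amplitudes fall off as f^(-β/2), so that power falls off as f^(-β). The frequency count, β value, and the control-line endpoint below are illustrative assumptions, not the paper's parameters.

```python
import math
import random

def one_over_f_noise(beta, n_samples, n_freqs=64, seed=1):
    """Sample a 1/f^beta fluctuation over one period by summing sinusoids
    with random phases; amplitude ~ f^(-beta/2) gives power ~ f^(-beta)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_freqs)]
    out = []
    for i in range(n_samples):
        t = i / n_samples
        out.append(sum((f ** (-beta / 2.0)) *
                       math.sin(2.0 * math.pi * f * t + phases[f - 1])
                       for f in range(1, n_freqs + 1)))
    return out

# Perturb one endpoint of a control line with pink-like noise (beta = 1).
noise = one_over_f_noise(beta=1.0, n_samples=100)
endpoint_y = [120.0 + 5.0 * v for v in noise]  # illustrative base position
```

Each animation frame would then warp the still flame image with the perturbed control lines, so the deformation inherits the natural 1/f^β character of the wind.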