Proceedings of the Annual Conference of the Institute of Image Electronics Engineers of Japan
Online ISSN : 2436-4398
Print ISSN : 2436-4371
Proceedings of Visual Computing / Graphics and CAD Joint Symposium 2007
Date: June 23-24, 2007 Location: Osaka Institute of Technology
Saturday, June 23
9:05-10:25 Chair: Koji KOYAMADA, Kyoto University
  • Ai GOMI, Takayuki ITOH, Jia LI
    Session ID: 07-1
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    The recent revolution in digital camera technology has resulted in much larger collections of images. Browsing techniques for images have thus become increasingly important for overviewing and retrieving images in sizable collections. This paper proposes CAT (Clustered Album Thumbnail), a technique for browsing large image collections, and its interface for controlling the level of detail. The system clusters images according to their keywords and pixel values, and selects head images for each cluster. It then visualizes the clusters by applying a hierarchical data visualization technique, which represents the hierarchical clusters as nested rectangular regions. In conjunction with the zooming operation, it selectively displays head images while zooming out, or individual images while zooming in. We argue that such an operation makes it easy for users to explore and search for specific images in huge image collections, because users are familiar with the graphical user interfaces (GUIs) of file systems. The hierarchical organization of images in CAT parallels the organization of files in a GUI and also supports exploration in a top-down manner. We provide several examples and present thorough evaluation methods for the techniques developed.
  • Hiroko NAKAMURA MIYAMURA, Yuji SHINANO, Takafumi SAITO, Ryuhei MIYASHI ...
    Session ID: 07-2
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    We propose an adaptive visualization technique for a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links that represent parent-child relationships. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data, because many primitives overlap within a limited region. In this context, we propose an adaptive visualization technique for hierarchical datasets that selects an appropriate graph style based on the density of the nodes. In addition, we demonstrate the effectiveness of the proposed method by applying it to perfect binary trees and large branch-and-bound trees.
  • Hiroyuki WADA, Masato OGATA
    Session ID: 07-3
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    To achieve a deep sense of immersion, or the feeling that one is actually present in the experience, with a training or research simulator, high resolution and a wide field of view are indispensable. Multi-projector technology has been under consideration in recent years; it allows the generation of wide-field-of-view, high-resolution images in a cost-effective manner. Conventional methods, which usually involve lenses for optical correction of the distortion, have been widely used. However, they have problems such as limited screen size, complicated adjustment of projectors, and luminance compensation. To solve these issues, we developed an automatic calibration method, the "virtual camera method", which yields high-precision calibration regardless of the camera position based on computer vision. In this paper, we propose "multiple-shot calibration", a calibration method for applying the "virtual camera method" to a wide-field-of-view screen.
10:35-11:30 Chair: Shigeru OWADA, Sony Computer Science Laboratories, Inc.
  • Norimasa YOSHIDA, Takafumi SAITO, Tomoyuki HIRAIWA
    Session ID: 07-4
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose Quasi-Aesthetic Curves and a method for interactively controlling typical class A Bezier curves. Quasi-Aesthetic Curves are curves in the form of rational cubic Bezier curves with approximately linear Logarithmic Curvature Histograms. To verify the monotonicity of curvature of Quasi-Aesthetic Curves, we derive the curvature monotonicity condition of rational cubic Bezier curves. Typical class A Bezier curves are polynomial Bezier curves of monotone curvature. Since typical class A Bezier curves lack interactive control, we develop a method for interactively controlling the curves by specifying two endpoints and their tangents. We show that as the degree of a class A Bezier curve converges to infinity, the curve converges to a logarithmic spiral, which is included in Aesthetic Curves as α=1. Finally, we summarize interactive control methods for curves of monotone curvature and discuss future research directions.
  • Kenshi TAKAYAMA, Takeo IGARASHI, Ryo HARAGUCHI, Kazuo NAKAZAWA
    Session ID: 07-5
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This article proposes a sketch-based interface for modeling the muscle fiber orientation of a 3D virtual heart model. Our target is electrophysiological simulation of the heart, and fiber orientation is one of the key elements for obtaining reliable simulation results. We designed the interface and algorithm based on the observation that fiber orientation is always parallel to the surface of the heart. The user specifies the fiber orientation by drawing a freeform stroke on the object surface. The system first builds a vector field on the surface by applying Laplacian smoothing to the mesh vertices and then builds a volumetric vector field by applying Laplacian smoothing to the voxels. We demonstrate the usefulness of the proposed method through a user study with a cardiologist.
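    The surface smoothing step can be sketched as follows. This is a minimal, hypothetical reimplementation in Python, assuming the mesh is given as a vertex adjacency list and the user's stroke supplies fixed directions at a few constrained vertices; it is not the authors' code, and the volumetric (voxel) pass would follow the same pattern on a voxel neighborhood.

```python
import numpy as np

def smooth_orientation_field(adjacency, constraints, n_vertices, iters=200):
    """Jacobi-style Laplacian smoothing of a per-vertex direction field.

    adjacency   -- dict: vertex index -> list of neighbor indices
    constraints -- dict: vertex index -> fixed 3D direction (from user strokes)
    Returns an (n_vertices, 3) array of unit direction vectors.
    """
    field = np.zeros((n_vertices, 3))
    for v, d in constraints.items():
        field[v] = d / np.linalg.norm(d)

    for _ in range(iters):
        new_field = field.copy()
        for v in range(n_vertices):
            if v in constraints:
                continue  # keep user-specified directions fixed
            avg = field[adjacency[v]].mean(axis=0)  # average of neighbor directions
            norm = np.linalg.norm(avg)
            new_field[v] = avg / norm if norm > 1e-8 else field[v]
        field = new_field
    return field
```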
15:00-16:45 Chair: Tsuneya KURIHARA, Hitachi, Ltd.
  • Kei TATENO, Tsuyoshi KITAZUME, Wei XIN, Kunio KONDO, Toshihiro KOMMA
    Session ID: 07-6
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Several studies have generated cartoon-like motion from motion-capture (Mocap) data to improve the efficiency of animation production with Mocap systems. The purpose of these studies is to add time-varying exaggerated expressions to the captured motion. However, not only the exaggerated expression but also the pose, which represents the entire character at a specific moment, carries many symbolic meanings. Animators correct such poses along a line that runs through the whole pose to make them more attractive; in our research we call this line the "line of action". We propose a method for exaggerating the whole motion by correcting each pose along a line of action that is generated from the Mocap data, using keyframes extracted from the Mocap data.
  • Tomohiko MUKAI, Shigeru KURIYAMA
    Session ID: 07-7
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    We propose an example-based method of motion retiming based on a temporal feature analysis of the cooperative movement of two joints. Our method first computes a two-dimensional map of given motion samples based on the dissimilarity of their temporal features. The system then identifies two joints whose movements differ essentially among pre-categorized motion classes. The cooperative timing of joint movement is edited by cloning the temporal features of motion samples according to the user-specified location on the map.
  • Ryota KAIHARA, Hiroshi YASUDA, Suguru SAITO, Masayuki NAKAJIMA
    Session ID: 07-8
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    We propose a novel approach to making visual outlines of motion capture clips that is suitable for checking multiple clips at the same time, unfolding the motions into two-dimensional stripes of keyframes.
  • Akira YOSHIDA, Reiji TSURUNO
    Session ID: 07-9
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a paper-mosaic-like animation rendering method. Both the object models and the rendering process are 3D, and each image is rendered as a hand-torn paper mosaic. Animation that maintains frame-to-frame coherence is generated by moving or transforming paper pieces in 2D.
16:55-18:15 Chair: Masanori KAKIMOTO, SGI Japan, Ltd.
  • Yonghao YUE, Kei IWASAKI, Yoshinori DOBASHI, Tomoyuki NISHITA
    Session ID: 07-10
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Photo-realistic rendering that takes global illumination into account is one of the most important research subjects in computer graphics. In this paper, we propose a fast rendering method aimed at lighting design. Our purpose is to render high-quality images at interactive frame rates even when the viewpoint or light sources are moved or material characteristics are changed, under the assumption that the objects in the input scenes are fixed. We take into account materials with diffuse or low-frequency glossy BRDFs (Bidirectional Reflectance Distribution Functions). In recent years, photon mapping combined with final gathering has often been used to achieve high-quality rendering. The combined method has two major drawbacks in computational speed and therefore does not achieve rendering at interactive frame rates: ray tracing is needed when the viewpoint or light sources are moved, and the radiance estimation technique used in photon mapping takes a lot of computational time. To address these drawbacks, we propose reusing light paths for the former and a new estimation technique called Hierarchical Histogram Estimation for the latter.
  • Masato WATANABE, Suguru SAITO, Masayuki NAKAJIMA
    Session ID: 07-11
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In distributed rendering based on spatial subdivision, antialiasing causes image artifacts. In this paper, we introduce a method to solve this problem and discuss the theoretical maximum error.
  • Junya HIRAMATSU, Masashi BABA, Masayuki MUKUNOKI, Naoki ASADA
    Session ID: 07-12
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Based on the Lambertian model, we have proposed a new diffuse reflection model called the extended Lambertian model. In this paper, our model is compared with the conventional one by evaluating the reproducibility of real object reflection. Experiments were performed to estimate the parameters of each model from a sequence of images taken by varying the light condition. The results have shown the effectiveness of our model in the reproducibility of the color and brightness of wood grain.
Sunday, June 24
9:00-10:20 Chair: Kenichi ARAKAWA, NTT
  • Kazuyo KOJIMA, Shigeo TAKAHASHI, Masato OKADA
    Session ID: 07-13
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    A photomosaic is an artistic representation of an image, which is achieved by partitioning the reference image into a rectangular grid of sections and replacing each section with a small photograph simulating its local color distribution. However, the grid type of partitioning usually reduces the visual quality of photomosaics because it does not account for the arrangement of the underlying features in the input image. In this paper, we enhance the photomosaics by fully respecting the associated perceptual saliency of the image. This is accomplished by adaptively placing seamlessly connected quadrilaterals so that they can be aligned with both the global color gradation and local image edges, together with the sophisticated assignment of small photographs based on the metric of image saliency. This simple modification allows us to significantly improve the decorative coloring of the final photomosaics while maximally preserving the original quality of the selected small photographs.
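    For reference, the baseline grid-based assignment that the paper improves upon can be sketched as below; the adaptive quadrilateral placement and the saliency metric that constitute the actual contribution are not reproduced here.

```python
import numpy as np

def grid_photomosaic(reference, tiles, tile_size):
    """Baseline grid photomosaic: replace each cell of `reference` with the
    tile whose mean color is closest to the cell's mean color.

    reference -- (H, W, 3) float array
    tiles     -- list of (tile_size, tile_size, 3) float arrays
    """
    h, w, _ = reference.shape
    tile_means = np.array([t.reshape(-1, 3).mean(axis=0) for t in tiles])
    out = np.zeros_like(reference)
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            cell = reference[y:y + tile_size, x:x + tile_size]
            mean = cell.reshape(-1, 3).mean(axis=0)
            best = np.argmin(((tile_means - mean) ** 2).sum(axis=1))
            out[y:y + tile_size, x:x + tile_size] = tiles[best]
    return out
```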
  • Takashi YONEYAMA, Kunio KONDO, Masaki FUJIHATA
    Session ID: 07-14
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we present methods of parametric transformation that use visual-object models with parameters of form vision, spatial vision, and color vision, which are derived from a model of the visual perceptual process based on a visual feature analysis of paintings, centering on modern abstract paintings. Our purpose is to introduce parametric transformation methods that use three-dimensional polygonal mesh models and to construct a painterly image generation system that implements interactive parametric transformation.
  • Takuya SAITO, Yosuke BANDO, Tomoyuki NISHITA
    Session ID: 07-15
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    We present an image composition method that seamlessly matches the color of a piece of a source image to a target image region that is partially occluded by foreground objects. Previous methods assume that the target image region has small color variation, which makes it difficult to paste source images so that they overlap foreground objects in the target image, as this induces color bleeding from the foreground objects. To overcome this problem, we propose performing color matching only from the background by excluding the foreground objects. We show how to compose objects from source images both behind and in front of the objects in a target image, and we demonstrate that visually pleasing, seamless composition can be achieved.
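    The key idea of excluding foreground pixels from the color matching can be illustrated with a gradient-domain (Poisson-style) sketch. The Jacobi solver and the function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def composite_excluding_foreground(source, target, paste_mask, fg_mask, iters=500):
    """Gradient-domain composition where boundary colors are taken only from
    background pixels of `target`; foreground pixels never contribute.

    source, target -- (H, W, 3) float images of the same size
    paste_mask     -- bool (H, W), True where the source patch is pasted
    fg_mask        -- bool (H, W), True on foreground objects to exclude
    """
    result = target.copy()
    result[paste_mask] = source[paste_mask]
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        acc = np.zeros_like(result)
        cnt = np.zeros(result.shape[:2])
        for dy, dx in offsets:
            nbr = np.roll(result, (dy, dx), axis=(0, 1))      # neighbor estimates f(q)
            nsrc = np.roll(source, (dy, dx), axis=(0, 1))     # neighbor source values g(q)
            nbr_fg = np.roll(fg_mask, (dy, dx), axis=(0, 1))
            valid = paste_mask & ~nbr_fg                      # skip foreground neighbors
            grad = source - nsrc                              # source gradient g(p) - g(q)
            acc[valid] += nbr[valid] + grad[valid]
            cnt[valid] += 1
        upd = cnt > 0
        result[upd] = acc[upd] / cnt[upd][:, None]            # Jacobi update inside the paste region
    return np.clip(result, 0, 1)
```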
10:30-11:50 Chair: Yoshiyuki KOKOJIMA, Toshiba Corp.
  • Akihiro YAMAMOTO, Ryutarou OHBUCHI, Jun KOBAYASHI, Toshiya SHIMIZU
    Session ID: 07-16
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Shape similarity judgments between a pair of 3D models are often influenced by their semantics in addition to their shapes. In this paper, we present a method to improve shape-based 3D model retrieval performance by learning multiple semantic categories off-line from a small set of training examples. Learning multiple semantic categories at a time from a small number of labeled training samples whose features are high-dimensional has been quite difficult. In our proposed method, we apply unsupervised learning to partially purify the set of features so that their saliency improves. Then, a supervised learning algorithm captures the set of semantic categories from the partially purified feature set. Our experimental evaluation showed that the retrieval performance of the proposed method is significantly higher than that of both supervised-only and unsupervised-only learning methods.
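    The two-stage pipeline described above can be sketched with common stand-ins: PCA for the unsupervised "purification" stage and a linear SVM for the supervised stage. These specific algorithms are assumptions for illustration only; the paper's actual learning methods may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_category_classifier(features, labels, n_components=32):
    """Unsupervised reduction of high-dimensional shape features, followed by
    a supervised classifier that learns the semantic categories.
    PCA and a linear SVM are illustrative stand-ins, not the authors' algorithms."""
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(features)             # unsupervised stage
    clf = SVC(kernel="linear").fit(reduced, labels)   # supervised stage
    return pca, clf

def predict_category(pca, clf, query_feature):
    """Classify a single query feature vector."""
    return clf.predict(pca.transform(query_feature.reshape(1, -1)))[0]
```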
  • Yoshihiro KANAMORI, Shigeo TAKAHASHI, Tomoyuki NISHITA
    Session ID: 07-17
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recent detail-preserving deformation techniques have exhibited advantages in intuitive shape design, such as deformation of articulated figures. Representative surface-based methods, however, suffer from high computational cost and lose interactivity, especially when handling large-scale objects. This paper presents a fast deformation method for such large-scale objects that preserves their shape details. The present method extends the 2D deformation technique based on moving least squares in order to derive the optimal transformation for each vertex. Although it allows fast deformation, the original 2D technique cannot provide local control of the shape deformation due to the global support of the control graph. To address this issue, the present method introduces explicit associations between each vertex and a control sub-graph. This improvement also localizes the computation and accelerates the deformation. Moreover, by exploiting the parallel nature of the computation, we demonstrate high performance for deforming complex models on the GPU.
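    For reference, the 2D moving-least-squares deformation that the paper builds on can be sketched as follows (affine variant, in the spirit of Schaefer et al.); the paper's contribution, i.e. the per-vertex control sub-graph association and the GPU parallelization, is not shown.

```python
import numpy as np

def mls_affine_deform(vertices, p, q, alpha=2.0, eps=1e-8):
    """Deform 2D `vertices` with moving least squares, given control points
    `p` (rest positions) and `q` (displaced positions), both (m, 2) arrays.
    Returns the deformed (n, 2) vertex positions.
    """
    out = np.empty_like(vertices, dtype=float)
    for k, v in enumerate(vertices):
        w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + eps)   # distance weights
        p_star = (w[:, None] * p).sum(axis=0) / w.sum()
        q_star = (w[:, None] * q).sum(axis=0) / w.sum()
        p_hat, q_hat = p - p_star, q - q_star
        # Weighted least-squares affine matrix M minimizing sum_i w_i |p_hat_i M - q_hat_i|^2
        A = (w[:, None, None] * p_hat[:, :, None] * p_hat[:, None, :]).sum(axis=0)
        B = (w[:, None, None] * p_hat[:, :, None] * q_hat[:, None, :]).sum(axis=0)
        M = np.linalg.solve(A, B)
        out[k] = (v - p_star) @ M + q_star
    return out
```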
  • Toshiya SHIMIZU, Jun KOBAYASHI, Akihiro YAMAMOTO, Ryutarou OHBUCHI
    Session ID: 07-18
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a method to improve 3D model retrieval performance by using localized shape features. In the proposed approach, a 3D model to be compared is partitioned into six sub-parts simply by three orthogonal planes. The planes are derived from the principal axes of inertia of the model. The overall distance from one model to another is computed as the sum of distances between the features of the corresponding six sub-parts. Experimental evaluation showed that this simple approach to localizing shape features improved the retrieval performance of all eight shape features we tried.
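    A minimal sketch of the partitioning and the summed per-part distance, assuming the model is given as a sampled point set and that the shape feature and distance functions are supplied by the caller (both placeholders, not the eight features evaluated in the paper):

```python
import numpy as np

def six_subparts(points):
    """Split a 3D point set into six sub-parts using the three principal
    planes through the centroid: for each principal axis, take the points on
    the positive side and on the negative side (two parts per axis)."""
    centered = points - points.mean(axis=0)
    # Principal axes of inertia approximated by the covariance eigenvectors (via SVD).
    _, _, axes = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ axes.T                  # coordinates along the principal axes
    parts = []
    for a in range(3):
        parts.append(points[proj[:, a] >= 0])
        parts.append(points[proj[:, a] < 0])
    return parts

def overall_distance(parts_a, parts_b, feature, dist):
    """Sum of per-part feature distances, as described in the abstract.
    `feature` maps a point set to a descriptor; `dist` compares descriptors.
    Both are placeholders for whatever shape feature is being localized."""
    return sum(dist(feature(pa), feature(pb)) for pa, pb in zip(parts_a, parts_b))
```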
15:05-16:00 Chair: Tomoyuki NISHITA, The University of Tokyo
  • Yuki MORI, Takeo IGARASHI
    Session ID: 07-i
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    We introduce Plushie, an interactive system that allows nonprofessional users to design their own original plush toys. To design a plush toy, one needs to construct an appropriate two-dimensional (2D) pattern. However, it is difficult for non-professional users to design a 2D pattern appropriately. Some recent systems automatically generate a 2D pattern for a given three-dimensional (3D) model, but constructing a 3D model is itself a challenge. Furthermore, an arbitrary 3D model cannot necessarily be realized as a real plush toy, and the final sewn result can be very different from the original 3D model. We avoid this mismatch by constructing appropriate 2D patterns and applying simple physical simulation to them on the fly during 3D modeling. In this way, the model on the screen is always a good approximation of the final sewn result, which makes the design process much more efficient. We use a sketching interface for 3D modeling and also provide various editing operations tailored for plush toy design. Internally, the system constructs a 2D cloth pattern in such a way that the simulation result matches the user's input stroke. We successfully demonstrated that nonprofessional users could design plush toys or balloons easily using Plushie.
  • Hideki Todo, Ken-ichi Anjyo, William BAXTER, Takeo IGARASHI
    Session ID: 07-ii
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recent progress in non-photorealistic rendering (NPR) has led to many stylized shading techniques that efficiently convey visual information about the objects depicted. Another crucial goal of NPR is to give artists simple and direct ways to express the abstract ideas born of their imaginations. In particular, the ability to add intentional, but often unrealistic, shading effects is indispensable for many applications. We propose a set of simple stylized shading algorithms that allow the user to freely add localized light and shade to a model in a manner that is consistent with conventional lighting techniques. The algorithms provide an intuitive, direct manipulation method based on a paint-brush metaphor to control and edit the light and shade locally as desired. Our prototype system demonstrates that our method enhances both the applicability and quality of conventional stylized shading for interactive applications as well as offline animation.
16:10-17:30 Chair: Kazufumi KANEDA, Hiroshima University
  • Yuki SHIMADA, Mikio SHINYA, Michio SHIRAISHI, Takahiro HARADA
    Session ID: 07-19
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a fast cloud rendering method for dynamic scenes, where cloud shapes and lighting environments change dynamically. Although the Harris method [Harris and Lastra 2001] has been widely used for static cloud rendering, it can be fatally slow for real-time applications when, for example, light directions change. By introducing a 3D attenuation buffer and re-arranging the algorithm, we improved the rendering speed of dynamic clouds by a factor of 10-100. The image quality is also improved due to a finer representation of the light distribution.
  • Yoshinori DOBASHI, Tsuyoshi YAMAMOTO, Tomoyuki NISHITA
    Session ID: 07-20
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In computer graphics, the simulation of natural phenomena is one of the most important research topics. In this paper, we focus on the rendering of dynamic clouds. To create realistic images of clouds, the multiple scattering of light inside clouds has to be taken into account. However, this increases the computational cost significantly. To address this problem, we propose an example-based method for the efficient computation of multiple scattering. First, in a preprocessing step, a set of example clouds is generated and stored as a database after computing the multiple scattering. Then, in the rendering process, the intensities of clouds with an arbitrary density distribution are calculated efficiently by referring to the precomputed database. The important feature of our method is the ability to calculate the intensity very quickly even if the density distribution of the clouds, the viewpoint, and the direction of the sunlight all change. The time-consuming database construction process does not need to be repeated for these changes. We demonstrate that our method is tens of times faster than previous methods.
  • Mikio SHINYA, Michio SHIRAISHI, Yoshinori DOBASHI, Kei IWASAKI, Tomoyu ...
    Session ID: 07-21
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a new method for visual simulation of multiple scattering phenomena. Quasi-analytic solutions are derived for layered uniform materials, and efficient rendering methods are developed by coupling these solutions with the ray-marching method.
17:40-18:35 Chair: Kenichi ANJO, OLM Digital, Inc.
  • Megumi NAKAO, Toshitaka KAWAMOTO, Kotaro MINATO
    Session ID: 07-22
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes finite element modeling methods for real-time cutting simulation in order to establish a VR environment where users can practice cutting procedures. Compared to related studies, we describe both the geometry and the physics of soft tissue incision without subdividing model elements. This approach does not change the number of vertices, which avoids an increase in computation time and allows fast updates of the stiffness matrix. Experiments on a general-purpose PC confirmed that the methods can simulate valid incision shapes in real time.
  • - A Real-Time Deformation of Human Organs for Surgical Simulators -
    Takaaki KIKUKAWA, Manabu NAGASAKA, Masato OGATA
    Session ID: 07-23
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In recent years, various attempts have been made on system architectures that speed up simultaneous visualization and computation in order to achieve real-time physics simulations. We have been developing a high-speed numerical computation platform for real-time physics simulations, which will be applied to practical surgical simulators. In this paper, we report techniques for deforming human organs with a GPU (Graphics Processing Unit) and their performance evaluation. The experimental results show that the GPU implementation is 10 times faster than the CPU when the problem size exceeds 50,000 dimensions. In addition, we have confirmed that the method, applied in the ongoing prototype surgical simulators, can simulate deformation of a kidney model (22,324 finite elements, 4,771 nodes) in real time.
12:00-14:00 Poster Session Chair: Tatsuo YOTSUKURA, Advanced Telecommunications Research Institute International
  • Kunio Osada, Tomohisa Banno, Ryutarou Ohbuchi
    Session ID: 07-24
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a view-based 3D shape comparison method that employs local visual features computed at selected "salient" points. After normalizing for similarity transformation, the method renders images from six canonical viewpoints. Each image is processed using the Scale Invariant Feature Transform (SIFT) algorithm developed by Lowe to detect a set of salient points and to compute a feature at each salient point. Our method computes the dissimilarity between a pair of 3D models by using these salient points and their respective local features. It constrains salient-point correspondence pairs by geometric proximity and culls correspondence pairs to avoid false matches. Experimental evaluation showed that the method achieved retrieval performance on a par with the best finisher among the SHREC 2006 3D model retrieval contest entrants.
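    Assuming the six canonical views have already been rendered as 8-bit grayscale images, the per-view matching step could look roughly like the sketch below, which uses OpenCV's SIFT implementation and a simplified proximity cull; the paper's exact correspondence constraints and culling rules are not reproduced.

```python
import cv2
import numpy as np

def view_dissimilarity(img_a, img_b, max_offset=0.2, ratio=0.8):
    """Compare two rendered views (8-bit grayscale) with SIFT local features.
    Matches are culled by Lowe's ratio test and by a simple geometric-proximity
    constraint (matched keypoints must lie at roughly the same normalized
    image position).  Returns a dissimilarity score (lower = more similar)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None or len(des_b) < 2:
        return float("inf")
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    ha, wa = img_a.shape[:2]
    hb, wb = img_b.shape[:2]
    good = 0
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance > ratio * n.distance:
            continue                                  # ambiguous match
        pa = np.array(kp_a[m.queryIdx].pt) / (wa, ha)
        pb = np.array(kp_b[m.trainIdx].pt) / (wb, hb)
        if np.linalg.norm(pa - pb) < max_offset:      # proximity constraint
            good += 1
    return 1.0 / (1 + good)
```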
  • Keiko NISHIYAMA, Takayuki ITOH
    Session ID: 07-25
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    The 3D structure of a protein is deeply involved in the expression of its function; proteins are important components of living organisms. The molecular surface of a protein has a very complex shape and contains many projections and hollows (hereafter called bumpy shapes). It is well known that the function of a protein appears strongly in these bumpy parts. We propose a technique to extract local bumpy shapes from a molecular surface geometry database (eF-site) and to classify them based on their geometry. Moreover, we propose an interface for effectively visualizing the results of the extraction and classification of the local bumpy shapes. The technique assumes that the molecular surface is approximated as a triangular mesh. It first extracts groups of triangles forming local bumpy parts. It then calculates feature values and forms a histogram for each bumpy part that denotes the distribution of distances of points on the bumpy part around its axis. It finally clusters the bumpy parts according to the histograms. We use the hierarchical data visualization technique "Heiankyo View" as a visual interface to explore the clustering results. Our technique is a fundamental step toward analyzing the partial similarity of molecular surfaces among proteins.
  • Kazuya MORIWAKI, Xiaoyang MAO
    Session ID: 07-26
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
  • Ken-ichi WAKISAKA, Tomohiko MUKAI, Shigeru KURIYAMA
    Session ID: 07-27
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Traditional classification and search methods cannot retrieve motion data that have the same meaning but different appearances. For example, an over-hand throw and an under-hand throw belong to the same category, but an over-hand throw cannot be retrieved using an under-hand throw as the search key. We propose a method of adding semantics to motion data using logical expressions and inductive logic programming. First, spatio-temporal features are extracted from pre-categorized motion data (the training data). Next, categorization rules common to the motion data are derived from these features using inductive logic programming. Then, semantic information is added to unknown motion data that belong to a specific category. Finally, we implement a search system based on the semantic similarity of motions. The system can retrieve motion data that have similar meaning but different appearances; however, it cannot retrieve motions whose poses are very different from the training data.
  • Hideki KINOSHITA, Hidetoshi ANDO, Ryutaro OHBUCHI
    Session ID: 07-28
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Some of the most successful shape-similarity comparison methods for retrieving 3D models are appearance-based. Despite their high retrieval performance, these methods require a substantial amount of computation for generating, storing, and then comparing features. This paper proposes a GPU-based algorithm for shape-similarity-based retrieval of 3D models. Based on a multiple-view 2D image comparison method, our algorithm performs both feature computation and inter-feature dissimilarity computation on a GPU. Experimental evaluation showed a significant reduction in 3D model retrieval time over a conventional CPU-based method, while preserving equivalent retrieval performance and retrieval results.
  • Masashi SHIMIZU, Yuta TAKABAYASHI, Koji TORIYAMA, Hidetoshi ANDO
    Session ID: 07-29
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose two methods for real-time visualization of 3D numerical simulations on the GPU. The first method uses 3D particles: the particles are advected by the velocity field and deformed according to the velocity field at their positions. In the second, arrow polygons are placed at fixed positions and deformed according to the velocity field at those positions. The two proposed methods do not require processing by the CPU; instead, most of the processing is performed on the GPU. As a result, real-time visualization of 3D numerical simulations has been achieved.
  • Takahiro AMAYA, Makoto FUJISAWA, Kenjiro T. MIURA
    Session ID: 07-30
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a fast computational method of video stabilization using the Graphics Processing Unit (GPU) that removes unwanted vibrations from videos. Video stabilization is composed of global motion estimation, removal of the undesired motion, and mosaicking. When these are processed on the CPU, the computational cost of the global motion estimation is very high. We improve the speed of this computation with the GPU, which enables parallel processing. Our method obtains the result by forwarding each frame of the video to the GPU as texture data and drawing the calculation result to an offscreen buffer. Although the transfer speed from the GPU to the CPU is much slower than the other way around, the method only has to transfer one pixel of data from the GPU.
  • Takahiro HARADA, Seiichi KOSHIZUKA, Yoichiro KAWAGUCHI
    Session ID: 07-31
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Smoothed Particle Hydrodynamics (SPH) is a particle method that computes fluid motion by calculating particle interaction forces. When graphics processing units (GPUs) are used to accelerate SPH, it is difficult to search for neighboring particles. In this study, we developed a method to simulate SPH entirely on the GPU, in which neighboring particles are also searched for on the GPU. The computation time of the proposed method was measured and compared with that of the calculation on CPUs. The comparison showed that the proposed method can accelerate the computation drastically.
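    The neighbor search that the paper moves onto the GPU is commonly built on a uniform grid with cell size equal to the SPH support radius. A CPU reference sketch of that idea (not the authors' GPU implementation) is shown below.

```python
import numpy as np
from collections import defaultdict

def build_grid(positions, h):
    """Hash particles into uniform grid cells of edge length h (the SPH
    support radius), so a neighbor query only has to visit 27 cells."""
    grid = defaultdict(list)
    for i, p in enumerate(positions):
        grid[tuple((p // h).astype(int))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    """Return indices of particles within distance h of particle i."""
    ci = (positions[i] // h).astype(int)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((ci[0] + dx, ci[1] + dy, ci[2] + dz), []):
                    if j != i and np.linalg.norm(positions[j] - positions[i]) < h:
                        result.append(j)
    return result
```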
  • Eitaro IWABUCHI, Izulu WATANABE
    Session ID: 07-32
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    We present a new method for solving distributed rendering problems. Current methods construct a dedicated cluster; our new method uses Grid computing technology, which reduces the cost of constructing the system and of maintaining it.
  • Katsuyuki UEDA, Yasutaka OMOYA, Kei IWASAKI, Saeko TAKAGI, Fujiichi YO ...
    Session ID: 07-33
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a real-time rendering method for multilayer thin films. To render multilayer thin films, the products of the incident radiance, the composite reflectances, and the color matching functions are integrated. To calculate the integration, the incident radiance, the composite reflectances, and the color matching functions are stored at each discretized wavelength. Therefore, the required data size is quite large and the computational cost is quite expensive. To address this problem, our method approximates their spectral distributions using a small number of wavelets. This reduces the data size and makes it possible to render on GPUs. Our method can render an object consisting of ten thousand vertices at about 200 frames per second.
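    The wavelet approximation of a sampled spectrum can be illustrated with a Haar transform that keeps only the largest coefficients; the particular wavelet basis and coefficient count used in the paper are not specified here, so treat this as an assumption-laden sketch.

```python
import numpy as np

def haar_compress(spectrum, keep=8):
    """Approximate a sampled spectrum (power-of-two length) with a small number
    of Haar wavelet coefficients, keeping only the `keep` largest.
    Returns the sparse coefficient vector and the reconstruction."""
    coeffs = spectrum.astype(float)
    n = len(coeffs)
    while n > 1:                                   # forward Haar transform
        half = n // 2
        a = (coeffs[:n:2] + coeffs[1:n:2]) / np.sqrt(2)
        d = (coeffs[:n:2] - coeffs[1:n:2]) / np.sqrt(2)
        coeffs[:half], coeffs[half:n] = a, d
        n = half
    small = np.argsort(np.abs(coeffs))[:-keep]     # zero all but the `keep` largest
    coeffs[small] = 0.0

    recon = coeffs.copy()                          # inverse Haar transform
    n = 2
    while n <= len(recon):
        half = n // 2
        a, d = recon[:half].copy(), recon[half:n].copy()
        recon[:n:2] = (a + d) / np.sqrt(2)
        recon[1:n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return coeffs, recon
```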
  • Takashi IMAGIRE, Henry JOHAN, Tomoyuki NISHITA
    Session ID: 07-34
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Rendering scenes with light scattering effects such as shafts of light is a challenging task in real-time applications such as games. In this paper, we propose a ray-casting-based method that realizes real-time, anti-aliased rendering of scenes with light scattering effects. The proposed method also does not generate artifacts when the viewpoint is inside the volumetric object. Efficient rendering is realized through an implementation that exploits the capabilities of the GPU.
  • Yuki TAKEDA, Hiromi T. Tanaka
    Session ID: 07-35
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Noh is a traditional performing art with a history of 600 years, and Noh costumes are a precious cultural heritage. Currently, some research groups are working on digital archiving of cultural heritage in order to preserve it. We propose an efficient image-based method of cloth object modeling to create digital contents of Noh costumes. First, we separate embroidery regions and golden-yarn regions based on their color differences. In each region, we propose image-based anisotropic reflectance modeling with a normal distribution at each pixel. We generate a Bidirectional Texture Function (BTF) from the proposed model and render the Noh costume.
  • Makoto Fujisawa, Kenjiro T. Miura
    Session ID: 07-36
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a fast and efficient method for producing physically based animations of the ice melting phenomenon, including thermal radiation as well as thermal diffusion and convective thermal transfer. Our method adopts a simple color function called the VOF (Volume-of-Fluid) with advection to track the free surface, which enables straightforward simulation of the phase changes, such as ice melting. Although advection of functions that vary abruptly, such as the step function, causes numerical problems, we have solved these by the RCIP (Rational-Constrained Interpolation Profile) method. We present an improvement to control numerical diffusion and to render anti-aliased surfaces. The method also introduces a technique analogous to photon mapping for calculating thermal radiation. By the photon mapping method tuned for heat calculation, the thermal radiation phenomenon in a scene is solved efficiently by storing thermal energy in each photon. Here, we report the results of several ice melting simulations produced by our method.
  • Hiroshi WATABE, Yutaka OHTAKE, Takashi KANAI, Takashi MICHIKAWA, Kunio ...
    Session ID: 07-37
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a shape modeling and visualization method for time-series consecutive tomographic images using 4D implicit functions. The proposed method generates a 4D implicit surface model by applying implicit surfaces called SLIM surfaces to binarized consecutive tomographic images. The resulting 4D implicit function model allows the 3D shape at an arbitrary time to be calculated easily. We also demonstrate the utility of visualizing deforming shapes through experiments using our method.
  • Takashi MICHIKAWA, Ken'ichiro TSUJI, Hiromasa SUZUKI
    Session ID: 07-38
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a method for computing distance fields from large volumetric models. Conventional methods are limited by the amount of memory space, as all of the data must be allocated in RAM. We resolve this issue by using an out-of-core strategy. The proposed method first decomposes volumetric models into sub-block clusters and applies distance transforms to each cluster, while the other clusters can be kept in bulk storage. In addition, we apply inter-cluster propagation to remove inconsistencies in the distance fields. We also propose an ordering algorithm that reduces the number of distance transforms per cluster by using propagated distance values. Finally, this paper demonstrates the calculation of distance fields with over a billion cells in practical time.
  • Yasufumi TAKAMA, Hironori YAMASHITA, Hiromi TANAKA
    Session ID: 07-39
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Surgical training systems are needed for practicing surgical tasks. In such systems, the shapes and deformations of objects must be calculated quickly and correctly to obtain realistic simulations. Generally, triangles or tetrahedra are used as mesh elements to represent virtual objects. However, as the number of mesh elements increases, the calculation time also increases. Moreover, much time is needed if meshes are reassembled from the beginning whenever they must be updated according to deformations of the virtual objects and interactions with virtual surgical tools. In this paper, we propose an algorithm for dynamic tetrahedral adaptive mesh generation. It allows fast dynamic structural modification of tetrahedral adaptive meshes according to the stretch ratio of the edges of the tetrahedra.
  • Maiko YAMAZAWA, Takayuki ITOH, Fumiyoshi YAMASHITA
    Session ID: 07-40
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents level-of-detail control in "JunihitoeView", a hierarchical multivariate data visualization technique. The presented technique displays the variables of a leaf node themselves when the user zooms in on the visualization result. Conversely, it unifies the variables and displays typical values of a higher level of the hierarchy when the user zooms out. Moreover, the paper reports our user tests, conducted to demonstrate the validity of the proposed technique.
  • Yuta OGAWA, Issei FUJISHIRO, Yuriko TAKESHIMA
    Session ID: 07-41
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    When visualizing volume datasets, we frequently find that optical occlusion makes it hard to comprehend the spatial location (especially the depth) of target objects precisely. Haptization is a well-known technique to address this problem. A key to effective haptization is the judicious design of appropriate haptic transfer functions (HTFs) tailored specifically to a given dataset. Our primary focus in this study is on 3D diffusion tensors. Due to the rapid increase in the availability of related scanning devices, the analysis of diffusion tensors has recently attracted much attention. This paper is an initial report in which we present a novel 6DOF HTF that utilizes 3D forces and 3D torques to convey the core information of diffusion tensor values. The effectiveness of a pilot implementation of the 3D diffusion tensor haptization using the PHANToM Premium 3.0 is demonstrated through user evaluation experiments.
  • Kazuaki SUZUKI, Suguru SAITO, Youngha CHANG, Masayuki NAKAJIMA
    Session ID: 07-42
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    When we downscale pictures or graphs, it is necessary to preserve thin line structures. To preserve them, thin lines must be neither broken nor defocused. Nearest-neighbor interpolation breaks thin lines, while bicubic and bilinear interpolation defocus them. We propose a 1/2^n downscaling method that preserves thin line structures.
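    The abstract does not state the exact downscaling operator, so the sketch below is only one plausible reading: a 2x2 extreme-value pooling that cannot erase a one-pixel-wide line, applied repeatedly for 1/2^n scaling.

```python
import numpy as np

def downscale_half_keep_lines(img, line_is_dark=True):
    """Downscale a grayscale image by 1/2 while trying not to break thin
    lines: each output pixel takes the extreme value of its 2x2 block
    (minimum for dark lines on a light background, maximum otherwise),
    so a one-pixel-wide line always survives into the smaller image.
    Repeated application gives 1/2^n downscaling.

    NOTE: this pooling rule is an illustrative assumption, not necessarily
    the operator proposed in the paper.
    """
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)) if line_is_dark else blocks.max(axis=(1, 3))
```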
  • Naoto OKAICHI, Takashi IMAGIRE, Henry JOHAN, Tomoyuki NISHITA
    Session ID: 07-43
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In recent years, there has been much digital painting research in NPR that simulates existing painting techniques and pigments. In particular, simulations of painting tools are very important because they enable an intuitive painting experience for the user and generate rich painting effects. There are many methods for simulating painting with a brush, but not for other painting tools. As a result, there is still a limitation in the variety of painting effects that can be generated. In this paper, we propose a method to simulate painting with a painting knife, an important tool in oil painting. We model the painting knife, model the paint such that it is suitable for realizing the impasto style, and present a method for interactive oil painting simulation.
  • Linlin JING, Kohei INOUE, Kiichi URAHAMA
    Session ID: 07-44
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    An iterative algorithm for computing centroidal Apollonius tessellations of input images is presented and applied to non-photorealistic rendering for stippling and mosaic effects. Continuous line drawings are generated from the stippling by connecting its points on the basis of the traveling salesman problem. Moreover, maze-like images are generated from the line drawings.
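    A Lloyd-style iteration with an additively weighted (Apollonius-like) distance conveys the flavor of the tessellation step; the weighting rule and density handling below are illustrative assumptions, and the TSP-based line drawing and maze generation are not shown.

```python
import numpy as np

def centroidal_tessellation(density, n_sites=256, iters=30, rng=None):
    """Lloyd-style iteration toward a centroidal tessellation of an image
    (intended for small images; the pixel-site distance matrix is dense).

    density -- (H, W) array, larger values attract more sites (e.g. darkness).
    Sites carry an additive weight derived from local density, giving an
    Apollonius-like (additively weighted) assignment; this weighting rule is
    an illustrative assumption, not the paper's exact formulation.
    Returns the final (n_sites, 2) site positions as (x, y).
    """
    rng = np.random.default_rng(rng)
    h, w = density.shape
    sites = rng.uniform([0, 0], [w, h], size=(n_sites, 2))
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    rho = density.ravel().astype(float) + 1e-6

    for _ in range(iters):
        # Additive weight: sites in dense regions claim slightly larger cells
        # (the scale of the weight is arbitrary in this sketch).
        sy = np.clip(sites[:, 1].astype(int), 0, h - 1)
        sx = np.clip(sites[:, 0].astype(int), 0, w - 1)
        weights = 2.0 * density[sy, sx] / (density.max() + 1e-6)
        d = np.linalg.norm(pixels[:, None, :] - sites[None, :, :], axis=2) - weights[None, :]
        labels = np.argmin(d, axis=1)             # assign each pixel to a site
        for k in range(n_sites):                  # move each site to the
            mask = labels == k                    # density-weighted centroid of its cell
            if mask.any():
                wgt = rho[mask]
                sites[k] = (pixels[mask] * wgt[:, None]).sum(axis=0) / wgt.sum()
    return sites
```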
  • Kenichi YOSHIDA, Shigeo TAKAHASHI, Issei FUJISHIRO
    Session ID: 07-45
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    When 3D objects are rendered on a display using computer graphics techniques, perspective projection, which is based on a pinhole camera model, is usually used. On the other hand, many illustrations and artistic paintings are projections that are partially distorted and exaggerated, influenced by human perception. Such projections are called nonperspective projections, and they have been studied as a model that seamlessly combines projections from multiple viewpoints into one. However, previous work provides no mechanism for controlling unnatural distortions in a projection by using the compositional arrangement of objects. Therefore, in this paper, we present a novel method that automatically controls a nonperspective projection so as to avoid unnatural distortions as a whole by using the compositional arrangement of 3D scene objects. First, psychological experiments are conducted in order to clarify the relationship between the tolerance of distortions in a nonperspective projection and the compositional arrangement of local linear perspectives. In addition, we present a method for editing a nonperspective projection while keeping distortions from becoming unnatural, by using the results of the experiments as constraints on the editing of the nonperspective projection.
  • Teppei MIYAKE, Shigeru KURIYAMA
    Session ID: 07-46
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    A pictorial image is automatically generated from a natural image by the strokes of an imaginary brush. We intentionally select each stroke's direction from the discrete angles of 0, 45, 90, and 135 degrees according to the corresponding bit pattern of the embedded data, and the brush's color is computed by averaging the colors in the drawing area. Next, we divide the pictorial image into N x M blocks, where each block can store 2 bits, and add positive noise in the blue channel to the border of the stroke and negative noise to its internal region. The border can be efficiently detected in the blue color space, and the data are extracted by detecting the edges of the strokes using a Laplacian-of-Gaussian filter, zero crossing, and the Hough transform. We capture the image with a digital camera while changing its resolution. We successfully embedded and extracted 150 bytes of data in a 100 x 148 mm image. However, our method requires high-resolution images to extract the data with high accuracy.
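    The 2-bits-per-stroke coding can be illustrated with a pair of hypothetical helpers that map byte data to the four stroke angles and back; the bit ordering and block traversal are assumptions.

```python
# Hypothetical helpers illustrating the 2-bit-per-stroke angle coding
# described in the abstract (names and the bit order are assumptions).
ANGLES = {0b00: 0, 0b01: 45, 0b10: 90, 0b11: 135}
BITS = {angle: bits for bits, angle in ANGLES.items()}

def bytes_to_stroke_angles(data: bytes):
    """Turn a byte string into a list of stroke directions, 2 bits per stroke,
    most significant bit pair first."""
    angles = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            angles.append(ANGLES[(byte >> shift) & 0b11])
    return angles

def stroke_angles_to_bytes(angles):
    """Inverse mapping: recover the embedded bytes from detected stroke angles
    (e.g. estimated with an LoG edge detector and a Hough transform)."""
    out = bytearray()
    for i in range(0, len(angles) - len(angles) % 4, 4):
        byte = 0
        for a in angles[i:i + 4]:
            byte = (byte << 2) | BITS[a]
        out.append(byte)
    return bytes(out)
```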
  • Yasushi ISHIBASHI, Hiroyuki KUBO, Hiroaki YANAGISAWA, Akinobu MAEJIMA, ...
    Session ID: 07-47
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we describe a method that can be used to synthesize individual facial expressions based on a facial muscle model. To represent individual facial expressions, we first allocate 17 major facial muscles to the face model (conventionally 44 are used) and mount an actuator that can control jaw rotation on the face model. We then automatically estimate the optimal facial muscle contraction parameters and the jaw rotation angle parameter to synthesize a hand-generated "key shape with the target expression" (hereinafter referred to as the "target shape"). To improve the approximation of a target shape that is difficult to synthesize with 17 facial muscles, we attach a few new facial muscles to the face model successively. Using our approach, it is also possible to transfer one character's expression to another character by cloning the original's facial muscle placement and contraction parameters.
  • Masaki HIRAGA, Hidetoshi ANDO
    Session ID: 07-48
    Published: 2007
    Released on J-STAGE: February 03, 2009
    CONFERENCE PROCEEDINGS FREE ACCESS
    Expressing complex models requires a large amount of data, as in voxel representations. With a mass of data it is possible to express an object in detail, but it is sometimes hard to manipulate the object in real time. On the other hand, several techniques, including parallax mapping and relief mapping, have been used to express the detail of a model with few polygons. These techniques make it possible to produce a stereoscopic appearance with only a quad polygon and some textures, but they are typically used only for rendering the scene. In this paper, we present a modeling method that uses the same data structure as relief mapping and runs in real time on GPUs.