The Journal of The Institute of Image Information and Television Engineers
Online ISSN : 1881-6908
Print ISSN : 1342-6907
ISSN-L : 1342-6907
Volume 60, Issue 10
Focus
Lecture
Invited Paper
Special Edition
Image Information Technology for Safety and Security
Topics
Technical Guide
How Does Computer-Vision Imitate Human-Vision with the Latest Techniques?: Practical Approaches to Avoid Difficulties
TV Cafe
Epoch-making Technologies
Keywords you should know
Visiting a Video Production Site
News
Papers · Short Papers
Special Section: Computer Graphics
Papers
  • Masahiko Saeki, Masataka Imura, Yoshihiro Yasumuro, Yoshitsugu Manabe, ...
    2006 Volume 60 Issue 10 Pages 1593-1598
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    Colors in nature can be classified into two categories: colors that originate in pigments (coloring matter) and structural colors, which are caused by optical path differences at the level of microscopic structures. Structural colors change according to viewing angle and lighting conditions. This paper proposes a generic method for rendering structural colors in real time. By using optical path differences as a common parameter, the method treats various types of microscopic structures in a unified way and represents them as textures that store optical path differences, which speeds up rendering. The method can be applied to objects of arbitrary shape and enables rendering under arbitrary viewing angles and lighting conditions in real time.
    Download PDF (2262K)
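A minimal sketch of how an optical path difference (OPD) can be mapped to an angle-dependent RGB color, assuming simple two-beam thin-film interference; this is only an illustration of the idea of treating the OPD as the common parameter, not the authors' GPU implementation, and the representative wavelengths and film parameters are assumptions.

```python
import numpy as np

# Representative wavelengths (nm) for the R, G and B channels -- an assumption.
WAVELENGTHS_NM = np.array([610.0, 550.0, 465.0])

def structural_color(thickness_nm, refractive_index, view_angle_rad):
    """Return an approximate RGB reflectance for a thin film seen at the given angle."""
    # Refraction angle inside the film (Snell's law).
    sin_t = np.sin(view_angle_rad) / refractive_index
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    # Optical path difference between the two interfering reflections.
    opd = 2.0 * refractive_index * thickness_nm * cos_t
    # Two-beam interference: intensity varies as cos^2(pi * OPD / lambda).
    return np.cos(np.pi * opd / WAVELENGTHS_NM) ** 2

# The color shifts with viewing angle, which is the hallmark of structural color.
print(structural_color(300.0, 1.35, np.radians(0.0)))
print(structural_color(300.0, 1.35, np.radians(60.0)))
```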
  • Tomoya Ishikawa, Kazumasa Yamazawa, Naokazu Yokoya
    2006 Volume 60 Issue 10 Pages 1599-1610
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    The advent of high-speed networks and high-performance PCs has prompted research into networked telepresence, which allows a user to see a virtualized real scene in a remote place. View-dependent representation, which provides a user with arbitrary views using an HMD or an immersive display, is especially effective in creating a rich sense of telepresence. The goal of this study is novel view telepresence that enables a user to control the viewpoint and view direction in a virtualized real dynamic environment. We describe a method of novel view generation that applies image-based rendering techniques to multiple omni-directional images captured from different positions, and we evaluate the image quality in a simulated environment. We also describe our prototype system and an experiment with novel view telepresence that used the system in a real environment. The prototype constructs a virtualized environment from live video, synthesizes a view based on the user's viewpoint and view direction as measured by a magnetic sensor attached to an HMD, and presents the generated view on the HMD. The system can generate the user's view in real time because corresponding points are specified and camera parameters are estimated in advance.
    Download PDF (4800K)
  • Keita Takahashi, Takeshi Naemura
    2006 Volume 60 Issue 10 Pages 1611-1622
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    The aim of our study is to synthesize free-viewpoint images without offline processing, using input from multiple cameras configured in a 2D array. A set of depth layers is used to represent the target scene, so we need to know to which layer each pixel of the synthesized image belongs. Acquiring this information amounts to estimating a pixel-by-pixel depth map that depends on the viewpoint from which the image is synthesized. In this paper, we describe an efficient method for estimating this depth map on the fly by checking the color consistency of the input images. We then describe an efficient implementation that uses graphics-hardware acceleration and show that we can generate high-quality images at interactive frame rates. A quantitative evaluation supporting the effectiveness of our method is also presented.
    Download PDF (4381K)
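A minimal CPU sketch of the color-consistency test described in the abstract above: each pixel of the virtual view is projected into every input camera at each candidate depth layer, and the layer with the smallest color variance across cameras is selected. This is not the authors' GPU implementation, and the project() helper is hypothetical; a real system would use calibrated camera matrices.

```python
import numpy as np

def estimate_depth_map(virtual_pixels, depth_layers, cameras, project):
    """virtual_pixels: (N, 2) pixel coordinates of the view to synthesize.
    cameras: list of input images (H, W, 3) as float arrays.
    project(pixel, depth, cam_index) -> (u, v) in that camera (assumed given)."""
    depth_map = np.zeros(len(virtual_pixels))
    for i, pixel in enumerate(virtual_pixels):
        best_var, best_depth = np.inf, depth_layers[0]
        for depth in depth_layers:
            samples = []
            for c, image in enumerate(cameras):
                u, v = project(pixel, depth, c)
                if 0 <= v < image.shape[0] and 0 <= u < image.shape[1]:
                    samples.append(image[int(v), int(u)])
            if len(samples) < 2:
                continue
            # Color-consistency cost: variance of the sampled colors across cameras.
            var = np.var(np.asarray(samples), axis=0).sum()
            if var < best_var:
                best_var, best_depth = var, depth
        depth_map[i] = best_depth
    return depth_map
```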
  • Norihiro Nakamura, Yusuke Inoue, Koji Nishio, Ken-ichi Kobori
    2006 Volume 60 Issue 10 Pages 1623-1629
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    Boundary representation models and spatial partitioning models are widely used in CG. Boundary representation models require only a small amount of data, but manipulating shapes with them is complicated. Spatial partitioning models, on the other hand, have a simple data structure that makes shape manipulation easy, although increasing the precision of the shape requires ever more data. If the two representations can be converted into each other, the advantages of both can be exploited. To transform a boundary representation model into a voxel model, it is necessary to decide whether each voxel is inside or outside the shape, which is a costly process. To solve this problem, we propose a fast method that generates a voxel model from a boundary representation model using the GPU. We also describe a method for improving the accuracy of the voxel data.
    Download PDF (2684K)
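A minimal CPU sketch of the inside/outside test underlying the voxelization described above: a ray is cast from each voxel center and the parity of its intersections with the mesh triangles decides membership. The paper performs this on the GPU; this reference version also ignores degenerate cases such as rays grazing triangle edges.

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore intersection; returns True for a hit in front of the origin."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv > eps    # hit must lie in front of the ray origin

def voxel_is_inside(center, triangles, direction=np.array([1.0, 0.0, 0.0])):
    """Parity test: an odd number of crossings means the voxel center is inside."""
    hits = sum(ray_hits_triangle(center, direction, *tri) for tri in triangles)
    return hits % 2 == 1
```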
  • Yukihito Sakai, Shuji Hashimoto
    2006 Volume 60 Issue 10 Pages 1630-1647
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    We have developed an interactive visualization system that enables people to experience and intuitively understand 4-D space. We propose a framework in which people can visualize 4-D space using 5-D homogeneous processing. This representation uses 5 × 5 matrices to express various types of transformations and simplifies geometric operations without requiring any division operations, which improves the system's reliability. We also constructed an interactive environment that associates the user's 3-D position with a 4-D eye point lying on a 4-D spherical surface surrounding a 4-D solid. The system consists of a glasses-free 3-D display, a controller pad, and a personal computer. In this environment, we were able to smoothly observe a 4-D solid from an arbitrary 4-D eye point and direction.
    Download PDF (4561K)
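A minimal sketch of the 5-D homogeneous representation mentioned above: a 4-D point becomes the 5-vector (x, y, z, w, 1), and translations and rotations are both expressed as 5 × 5 matrices, so chained transformations reduce to matrix products without division. The specific matrix constructors here are illustrative assumptions.

```python
import numpy as np

def translation_4d(tx, ty, tz, tw):
    """5x5 homogeneous matrix translating a 4-D point."""
    m = np.eye(5)
    m[:4, 4] = [tx, ty, tz, tw]
    return m

def rotation_4d(axis_a, axis_b, theta):
    """Rotation in the plane spanned by two of the four axes (0=x, 1=y, 2=z, 3=w)."""
    m = np.eye(5)
    c, s = np.cos(theta), np.sin(theta)
    m[axis_a, axis_a] = c; m[axis_a, axis_b] = -s
    m[axis_b, axis_a] = s; m[axis_b, axis_b] = c
    return m

point = np.array([1.0, 0.0, 0.0, 0.0, 1.0])            # 4-D point in homogeneous form
transform = translation_4d(0, 0, 0, 2) @ rotation_4d(0, 3, np.pi / 2)
print(transform @ point)                               # rotate in the x-w plane, then translate along w
```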
Short Papers
  • Kohei Inoue, Kiichi Urahama
    2006 Volume 60 Issue 10 Pages 1648-1651
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    A simple technique is presented for generating pencil-drawing images that resemble hand-drawn sketches of 3D objects. We extract edge-enhancement components by dividing an input image by its blurred version instead of using conventional subtraction methods. Depth images are used to complete edge-enhancement components that cannot be extracted from shading images. Edges are enhanced by removing one side of the edge-enhancement shading. The resulting edge-enhanced shading images are transformed into stippling images using the point-redistribution method presented by Secord et al., and each stippling image is then anisotropically smoothed to produce a hatching image. We also applied the technique to moving images; one merit of our method is that it produces movies with very little flicker without any additional processing.
    Download PDF (1238K)
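A minimal sketch of the division-based edge enhancement mentioned above: dividing a shading image by its blurred version yields values near 1 in flat regions and deviations near edges. The blur radius and the one-sided suppression below are assumptions, not the authors' exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_enhancement_component(shading, sigma=3.0, eps=1e-6):
    """shading: 2-D float array in [0, 1]. Returns the ratio image."""
    blurred = gaussian_filter(shading, sigma)
    return shading / (blurred + eps)        # ~1 in flat areas, deviates across edges

def darken_one_side(ratio):
    """Keep only the dark side of each edge (ratio < 1), as a rough analogue of
    removing one side of the edge-enhancement shading."""
    return np.minimum(ratio, 1.0)
```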
  • Kohei Inoue, Kiichi Urahama
    2006 Volume 60 Issue 10 Pages 1652-1654
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    A non-photorealistic rendering (NPR) method for generating beadwork-like images is described. An input image is approximated by a distribution of disks of various sizes and colors. Each color disk is then replaced with a color sphere, a three-dimensional model of a bead, and the distribution of color spheres is visualized using the POV-Ray three-dimensional graphics tool.
    Download PDF (1088K)
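A minimal sketch of the final visualization step described above: each fitted color disk (center, radius, RGB) is written out as a sphere in a POV-Ray scene file. The disk-fitting stage itself is omitted, and the camera and lighting setup are assumptions.

```python
def write_povray_beads(disks, path="beads.pov"):
    """disks: iterable of (x, y, radius, (r, g, b)) with color components in [0, 1]."""
    with open(path, "w") as f:
        f.write("camera { location <0, 0, -200> look_at <0, 0, 0> }\n")
        f.write("light_source { <100, 100, -200> color rgb <1, 1, 1> }\n")
        for x, y, radius, (r, g, b) in disks:
            # One sphere (bead) per color disk.
            f.write(f"sphere {{ <{x}, {y}, 0>, {radius} "
                    f"pigment {{ color rgb <{r}, {g}, {b}> }} }}\n")

write_povray_beads([(0, 0, 5, (1.0, 0.2, 0.2)), (12, 3, 4, (0.2, 0.4, 1.0))])
```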
  • Kenji Kamimura, Norimichi Tsumura, Toshiya Nakaguchi, Takashi Sugaya, ...
    2006 Volume 60 Issue 10 Pages 1655-1657
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    The resolution of a texture image in computer graphics is determined by the camera system used to capture it; zooming in on an object beyond the optical resolution results in a loss of edges and details. We developed a “super resolution” method that uses a database of textures and texton substitution to create plausible high-frequency detail in enlarged texture images. Conventional example-based super-resolution techniques have high computational cost and require users to provide optimal parameters for each input image. Using textons in wavelet space and pixel-based (rather than patch-based) substitution reduces the computational cost.
    Download PDF (1304K)
  • Sihwan Kim, Jonghyun Ha, Yuichi Taguchi, Keita Takahashi, Takeshi Naem ...
    2006 Volume 60 Issue 10 Pages 1658-1663
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    We have been studying a real-time Image-Based Rendering (IBR) system that uses a single camera with thousands of lenslets to capture multi-view images of dynamic 3D scenes and to synthesize arbitrary views of those scenes in real time. In our previous papers, we formulated the upper limit of the spatial frequency captured by the optical system by analyzing the effects of blurring and sampling on the image sensor. In this paper, we define view parameters and formulate their relation to the acquisition parameters that determine the placement of the devices. Our results show that this theory is an effective way of designing acquisition parameters for our real-time IBR system.
    Download PDF (2133K)
Regular Section
Papers
  • Takuya Nagase, Shinji Ozawa
    2006 Volume 60 Issue 10 Pages 1664-1671
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    Scene analysis and indexing of video from soccer games have been the focus of much research, but play recognition has not been studied enough for data analysis. We propose a method for determining plays in soccer games using multiple-view images taken by fixed cameras. Our method consists of four components: ball tracking, event detection, tracking of the player controlling the ball, and play determination. The ball is tracked using template matching, and event frames are extracted from the ball's velocity. In each event frame, the player controlling the ball is detected and then tracked until the next event frame. The play is then determined from information about the ball, the players, and the features of the play. By combining information from every camera, the ball and players are tracked with high accuracy. Experimental results show that our method is effective in determining plays.
    Download PDF (3065K)
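A minimal sketch of the event-detection step described above: the tracked ball trajectory is differentiated, and frames where the velocity changes abruptly (kicks, traps, bounces) are flagged as event frames. The threshold and the exact event criterion are assumptions, not the paper's algorithm.

```python
import numpy as np

def detect_event_frames(ball_positions, accel_threshold=5.0):
    """ball_positions: (N, 2) ball coordinates per frame. Returns event frame indices."""
    velocity = np.diff(ball_positions, axis=0)                  # per-frame displacement
    accel = np.linalg.norm(np.diff(velocity, axis=0), axis=1)   # magnitude of velocity change
    return np.where(accel > accel_threshold)[0] + 1             # offset back to frame indices
```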
  • Koki Shibata
    2006 Volume 60 Issue 10 Pages 1672-1680
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    We propose a delay-profile estimation method for Digital Terrestrial Television Broadcasting (DTTB) using Orthogonal Frequency Division Multiplexing (OFDM). The delay profile is conventionally estimated by the auto-correlation method; however, the conventional method calculates pseudo delay paths in addition to the actual delay paths. In contrast, our method does not produce pseudo delay paths, because it obtains the phase of the transfer function from its magnitude by means of the Hilbert transform. Another feature of the method is its independence from transmission formats (mode, subcarrier modulation, etc.), since, like the conventional method, it requires only the magnitude spectrum.
    Download PDF (3176K)
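A minimal sketch of the idea described above: under a minimum-phase assumption, the phase of the channel transfer function is the negative Hilbert transform of its log-magnitude, so a delay profile can be recovered from the magnitude spectrum alone. This is only an illustrative reconstruction, not the paper's exact algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def delay_profile_from_magnitude(magnitude, eps=1e-12):
    """magnitude: sampled |H(f)| over the OFDM band. Returns |h(t)|, the delay profile."""
    log_mag = np.log(magnitude + eps)
    phase = -np.imag(hilbert(log_mag))          # minimum-phase relation via the Hilbert transform
    transfer = magnitude * np.exp(1j * phase)   # reconstructed complex transfer function
    return np.abs(np.fft.ifft(transfer))        # impulse-response magnitude = delay profile
```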
  • Tomohisa Akafuji, Akira Honda, Tetsuo Kawano, Seiji Hatta, Hiroshi Nak ...
    2006 Volume 60 Issue 10 Pages 1681-1688
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    The development of ultra-broadband IP networks has enabled video transmission over IP, providing a next-generation video transmission method for broadcasting stations. Standardized and stable IP technology enables functions such as multicast over a low-cost network system. Video transmission for live TV broadcasts requires high-quality, low-delay video. In this paper, we propose an IP video transmission method for uncompressed HDTV video that meets these high-quality and low-delay requirements. Our video transmission system was applied to TV broadcasts and operated stably for half a year. We describe the knowledge and experience gained through the development and use of these technologies for producing and broadcasting TV content.
    Download PDF (3395K)
  • Akihiro Nagase, Yoshiaki Okuno, Jun Someya, Hiroaki Sugiura
    2006 Volume 60 Issue 10 Pages 1689-1695
    Published: October 01, 2006
    Released on J-STAGE: March 07, 2008
    JOURNAL FREE ACCESS
    We developed a contour correction method that uses a scaling technique to improve the sharpness of images. The method detects contour areas in an image and steepens the intensity changes within them by locally scaling the image in each contour area, without causing unnatural overshoot or undershoot.
    Download PDF (2468K)
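A minimal 1-D illustration, under assumptions of my own rather than the authors' algorithm, of steepening a contour by local scaling: inside a detected contour interval, the output samples the input at positions pushed away from the interval center, so the transition becomes steeper while remaining bounded by the original values (hence no overshoot or undershoot).

```python
import numpy as np

def steepen_contour_1d(signal, start, end, gain=2.0):
    """Locally rescale the sampling positions of signal[start:end] to steepen the edge."""
    x = np.arange(len(signal), dtype=float)
    centre = 0.5 * (start + end)
    src = x.copy()
    inside = (x >= start) & (x <= end)
    # Sample farther from the centre (clipped to the interval), compressing the transition.
    src[inside] = np.clip(centre + (x[inside] - centre) * gain, start, end)
    return np.interp(src, x, signal)

# Example: a blurred step edge between samples 40 and 60 becomes steeper.
edge = 1.0 / (1.0 + np.exp(-(np.arange(100) - 50) / 5.0))
sharpened = steepen_contour_1d(edge, 40, 60, gain=2.0)
```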