The Journal of The Institute of Image Information and Television Engineers
Online ISSN : 1881-6908
Print ISSN : 1342-6907
ISSN-L : 1342-6907
Volume 63, Issue 6
Focus
Message from Honorary Member: For Members Carrying on the Next Generation
Special Edition
Progress on Technologies for Higher Image Quality and Image Quality Assessment of Cameras and Displays for Consumer Use
Technical Survey
Technical Guide
Embedded Technology for Image Processing Engineers
Keywords you should know
My Recommendations on Research and Development Tools
Fresh Eyes -Introduction of Video Research Laboratory-
News
  • Natsuki Takeda, Kiyoshi Kiyokawa, Haruo Takemura
    2009 Volume 63 Issue 6 Pages 794-800
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    In a virtual reality (VR) environment, a wide field-of-view (FOV) image contributes to improving the immersive experience and situation awareness. The typical FOV of a head mounted display (HMD) is around 20 to 40 degrees, and widening it has been a technological challenge for decades. We designed a new HMD that has a hyperbolic half-silvered mirror. This HMD can resolve the technological challenges and provide a wide FOV, a large observational pupil, and an optical see-through display. We made a prototype HMD and a VR walkthrough system that displays a pair of stereo images with a FOV of over 120 degrees. This paper reports our HMD's basic concept and the implementation of the prototype.
    Download PDF (6393K)
  • Takumi Yoshida, Kensei Jo, Kouta Minamizawa, Hideaki Nii, Naoki Kawak ...
    2009 Volume 63 Issue 6 Pages 801-809
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    A wide field of vision is important for driver safety and vehicle operability. However, the window area of vehicles is limited. Therefore, we developed head-tracked retro-reflective projection technology that displays vehicle blind-spot information. The system enables the driver to observe the surroundings in the blind spot as if seeing through the inner wall of the vehicle. The system uses a head-mounted projector and multiple cameras. We describe the system design, the implementation of a prototype, and performance evaluation experiments. Early experiments showed that the system effectively displayed blind-spot information corresponding to the operator's looking-around motion.
    Download PDF (10344K)
  • Mie Sato, Makiko Sembon, Naoki Hashimoto, Masao Kasuga
    2009 Volume 63 Issue 6 Pages 810-815
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    To utilize flat walls as screens for image projection, radiometric compensation of the projected images is necessary. This study focuses on the projector characteristics of luminance response and spatial distribution and on the luminance reflectance of walls. We developed a radiometric compensation algorithm based on human perceptual characteristics. This algorithm provides viewers with brighter projected images whose impression is close to that of the original images. In our experiment, we prepared a flat wall with patterns and examined whether the projected images, radiometrically compensated by our algorithm, looked strange to the viewers. We also considered the amount of clipping error that was acceptable to the viewers and the changes to the projected images that might have caused them to look strange.
    Download PDF (6442K)
  • Miguel Miranda Miranda, Kiyoshi Kiyokawa, Haruo Takemura
    2009 Volume 63 Issue 6 Pages 816-821
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Interaction in an immersive virtual environment is limited by imperfect depth cues, unstable hand placement in midair, and so on. In this study, we summarize the design and implementation of a magic lens interface within an immersive virtual environment using a handheld device such as a personal digital assistant (PDA) or an ultra mobile personal computer (UMPC). Our interface simplifies the selection and manipulation processes using image-based interaction techniques. An empirical study shows the effectiveness of the proposed interface for selecting 3D objects, especially when the target is small or in motion.
    Download PDF (8263K)
  • Motomasa Tomida, Kiyoshi Hoshino
    2009 Volume 63 Issue 6 Pages 822-828
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    In the authors' previous systems for estimating human hand pose, high-accuracy estimation was difficult unless the hand was held still. The inability of such systems to work with a moving hand reduces their convenience and applicability in different situations. Our system therefore uses a simpler algorithm based on image features of the input hand images. It can extract the hand area from images of the hand and arm at high speed and with high accuracy. The system is constructed to find and match similar hand images in a large database at high speed and with high accuracy, searching for similarities with the image features used. Experimental results using a high-speed camera and a notebook computer showed that the system can estimate 3D hand pose and wrist flexion and rotation movements with high accuracy. The average estimation error was −2.11 degrees, and the standard deviation of the estimation error was 14.14 degrees.
    Download PDF (1980K)
  • Makoto Okuda, Takao Tsuda, Kazutoshi Mutou, Hitoshi Yanagisawa, Seiki ...
    2009 Volume 63 Issue 6 Pages 829-836
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    We are developing an automatic system for shooting TV programs that uses multiple robot cameras. In this paper, we propose a method for determining the shots of robot cameras in discussion programs. We define shooting rules that automatically determine the robot cameras' shots based on an analysis of the shooting method of TV cameramen. Then, we present a method of automatically generating shooting rules that are appropriate to each program. We show through a simulation and program shooting experiments that, using our method, robot cameras can accurately shoot different styles of programs without burdening system operators.
    Download PDF (2299K)
  • Takeshi Yoshitome, Jiro Naganuma, Yoshiyuki Yashima
    2009 Volume 63 Issue 6 Pages 837-846
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    An MPEG-2 to H.264 transcoding method preserving DCT information is proposed. This method uses the encoding information from an MPEG-2 stream and keeps the DCT coefficients of the original MPEG-2 bitstream as much as possible. Experimental results show that the proposed method increased the PSNR by about 0.20-1.11 dB compared with the conventional method.
    Download PDF (12040K)
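The PSNR gains reported above follow the standard peak signal-to-noise ratio definition. As a reference (not part of the paper), a minimal computation for 8-bit images might look like this:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

A gain of "0.20-1.11 dB" thus means the transcoded frames deviate less, in mean-squared-error terms, from the reference than those produced by the conventional method.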
  • Kok-Meng Ong, Wataru Kameyama
    2009 Volume 63 Issue 6 Pages 847-856
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    This study addresses the challenge of analyzing affective video content. The affective content of a given video is defined as the intensity and type of emotion that arise in a viewer while watching that video. In this study, human emotion was monitored by capturing viewers' pupil sizes and gaze points while they were watching the video. From the measurement values, four features were extracted: cumulative pupil response (CPR), frequency component (FC), modified bivariate contour ellipse area (mBVCEA), and the Gini coefficient. Using principal component analysis, we found that two key features, the CPR and FC, account for the majority of the variance in the data. Using the key features, the affective content was identified and could be used to classify the video shots into their respective scenes. An average classification accuracy of 71.89% was achieved for three basic emotions, with an individual maximum classification accuracy of 89.06%. This study serves as a first step toward automating personalized video content analysis on the basis of human emotion.
    Download PDF (11190K)
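The key-feature selection described above rests on the explained-variance ratios of the principal components. This is not the authors' code; a minimal numpy sketch of that ratio computation (with synthetic data standing in for the pupil features named in the abstract) could be:

```python
import numpy as np

def explained_variance_ratio(features: np.ndarray) -> np.ndarray:
    """Fraction of total variance carried by each principal component.

    features: (n_samples, n_features) matrix, e.g. one row per video shot
    with columns for CPR, FC, mBVCEA, and the Gini coefficient.
    """
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # eigenvalues, descending
    return eigvals / eigvals.sum()
```

Features whose components dominate these ratios, as CPR and FC reportedly do, carry most of the information available for the subsequent scene classification.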
  • Natsumi Kusumoto, Shinsaku Hiura, Kosuke Sato
    2009 Volume 63 Issue 6 Pages 857-865
    Published: June 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Exaggerated defocus cannot be achieved with an ordinary compact digital camera because of its tiny sensor size, so it is hard to take pictures that draw the viewer's attention to the subject. Many methods are available for controlling the focus and defocus of previously taken pictures. However, most of these methods require custom-built equipment such as a camera array. Therefore, in this paper, we describe a method for creating images focused at any depth with an arbitrarily blurred background from a set of images taken by a handheld compact digital camera that is moved at random. Our method can produce various aesthetic blurs by changing the size, shape, or density of the blur kernel. In addition, we demonstrate the potential of our method through a subjective evaluation of blurred images created by our system.
    Download PDF (11246K)
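The "size, shape, or density of the blur kernel" mentioned above maps directly onto how a synthetic-defocus kernel is built and applied. As an illustrative sketch only (the paper's actual pipeline reconstructs depth from the image set first), a disk kernel and a naive convolution might look like:

```python
import numpy as np

def disk_kernel(radius: int) -> np.ndarray:
    """Disk-shaped blur kernel approximating out-of-focus bokeh.

    Varying the radius (size), the mask (shape), or the per-pixel
    weights (density) changes the look of the synthetic blur.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float64)
    return mask / mask.sum()  # normalize so brightness is preserved

def blur(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 2-D convolution with edge padding (clear, not fast)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.empty(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

Applying kernels of different radii to background and subject regions separately is what produces the depth-dependent, "arbitrarily blurred" look the abstract describes.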