The Journal of The Institute of Image Information and Television Engineers
Online ISSN : 1881-6908
Print ISSN : 1342-6907
ISSN-L : 1342-6907
Volume 61, Issue 12
Displaying 1-26 of 26 articles from this issue
  • Sang-hyun Kim, Takashi Shibata, Takashi Kawai, Kazuhiko Ukai
    2007 Volume 61 Issue 12 Pages 1742-1749
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    A field-sequential color projection system displays color images using a single panel and a color filter wheel. The wheel spins very rapidly and creates a sequence of red, green, blue, and white images that combine to create a single full-color image. Although the use of a single panel can reduce both projector size and costs, the field-sequential color projection mechanism causes the observer to perceive trichromatic separation during times of rapid eye movement. This phenomenon is called “color breakup”. We examine the characteristics of saccadic eye movement, which result from viewing images containing color breakup. We describe two experiments that were conducted to examine the effects of color breakup as manifested through the ergonomic indices of eye movement and the subjective symptoms of asthenopia.
    Download PDF (732K)
  • Hirotake Yamazoe, Akira Utsumi, Shinji Abe
    2007 Volume 61 Issue 12 Pages 1750-1755
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    We propose a real-time gaze estimation method based on facial-feature tracking using a single video camera. In our method, gaze directions are determined as 3D vectors connecting the eyeball and iris centers. Since the center of the eyeball cannot be directly observed from images, the geometrical relationship between the eyeball centers and the facial features and the radius of the eyeball (face model) are calculated in advance (calibration process). The 2D positions of the eyeball centers can then be estimated by using the face model and facial feature positions. Gaze direction can then be determined by tracking the facial features. In the calibration process, we employ an image sequence (more than three frames) in which a subject moves his/her head while keeping his/her gaze on the camera location. In such a situation, since the camera, iris centers, and eyeball centers lie in a straight line, the eyeball centers can be observed at the positions of the iris centers. Using data from these observations enables us to easily obtain the relations between the eyeball centers and facial features. Experimental results show that the gaze estimation accuracy of the proposed method is 4° horizontally and 7° vertically. (A minimal code sketch of this geometry follows this entry.)
    Download PDF (7172K)
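    The following is a minimal sketch, not the authors' implementation, of the geometric idea described in the abstract above: the gaze direction is taken as the 3D vector from an eyeball center, located from tracked facial features via a pre-calibrated face model, to the observed iris center. The function name, the fixed feature-to-eyeball offset, and all coordinates are illustrative assumptions.

    ```python
    import numpy as np

    def estimate_gaze(iris_center, facial_features, feature_to_eyeball_offset):
        """Return a unit 3D gaze vector from the eyeball center to the iris center.

        iris_center               : (3,) observed 3D iris-center position
        facial_features           : (N, 3) tracked 3D facial-feature positions
        feature_to_eyeball_offset : (3,) offset from the feature centroid to the
                                    eyeball center; a stand-in for the calibrated
                                    face model described in the abstract
        """
        eyeball_center = facial_features.mean(axis=0) + feature_to_eyeball_offset
        gaze = iris_center - eyeball_center
        return gaze / np.linalg.norm(gaze)

    # Illustrative call with made-up coordinates (millimeters, camera frame).
    features = np.array([[-30.0, 0.0, 600.0], [30.0, 0.0, 600.0], [0.0, -40.0, 590.0]])
    iris = np.array([18.0, 2.0, 588.0])
    offset = np.array([20.0, 0.0, -12.0])   # assumed calibration result
    print(estimate_gaze(iris, features, offset))
    ```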
  • Takeshi Fujimoto, Yutaka Ishibashi
    2007 Volume 61 Issue 12 Pages 1756-1765
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    We investigated the effect of inter-stream synchronization errors in haptic media, sound, and video communications on output quality by using a haptic media, sound, and video transfer system. The system transmits a sense of force, together with generated sound and video, while a human subject uses haptic interface devices to touch a real object. Using subjective assessment, we demonstrated that the media output quality is highest when video output is observed slightly earlier than haptic media is felt and sound is heard.
    Download PDF (9709K)
  • Kazutaka Suzuki, Haruyoshi Toyoda
    2007 Volume 61 Issue 12 Pages 1774-1778
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    High-speed, minute eye movements such as ocular microtremors (OMTs), flicks, and saccades contain important information about the health of the subject. Since OMTs have a frequency as high as 100 Hz and a minute angular displacement as low as 0.01 degree, measuring the speed and position of OMTs with adequate accuracy under non-contact conditions over the long term is difficult. To measure a living sample stably and reliably, we have developed a novel system for measuring eye movement by combining an intelligent vision system and the corneal reflex method. The intelligent vision system has both high-speed and real-time parallel image processing capabilities, which enable measurement at a high level of precision at 1 kHz. We carried out basic experiments using a simulated eye in conjunction with experiments on OMT and saccade measurements. The results of these experiments confirmed that such a system is sufficiently accurate.
    Download PDF (1366K)
  • Tomoya Kurokawa, Kiyoshi Nosu, Kiyoyuki Yamazaki
    2007 Volume 61 Issue 12 Pages 1779-1784
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    There has been much research on the estimation and characterization of human emotions using communication channels such as facial expressions. However, most of this research has focused on extracting facial features for specific emotions in specific situations because of the difficulty of general characterization. We have developed a system that can characterize the emotion of an e-Learning user by analyzing his/her facial expression and biometric signals. The criteria used to classify the eight emotions were based upon a time-sequential subjective evaluation of emotions as well as a time-sequential analysis of facial expressions and biometric signals. The average coincidence ratio between the emotions discriminated using these criteria of emotion diagnosis and the time-sequentially subjectively evaluated emotions for ten e-Learning examinees was 71%. When only the facial expressions were used, the coincidence ratio was 66%. This suggests that multi-modal emotion diagnosis is effective for estimating an e-Learning user's emotions. (A sketch of one possible fusion step follows this entry.)
    Download PDF (1886K)
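    The abstract above does not specify how the facial-expression and biometric cues are combined; the following is a hedged sketch of one generic late-fusion step, assuming each modality yields a per-emotion score vector. The label set and the weighting are illustrative assumptions, not the authors' criteria.

    ```python
    import numpy as np

    # Assumed eight-emotion label set; the abstract states only that eight
    # emotions are classified.
    EMOTIONS = ["joy", "surprise", "anger", "sadness",
                "fear", "disgust", "interest", "boredom"]

    def fuse_emotion_scores(face_scores, bio_scores, w_face=0.5):
        """Late fusion of per-emotion scores from two modalities.

        face_scores, bio_scores : (8,) per-emotion scores from the facial-
                                  expression and biometric-signal classifiers
        w_face                  : weight given to the facial-expression modality
        Returns the index of the fused emotion estimate for one time step.
        """
        fused = w_face * np.asarray(face_scores) + (1.0 - w_face) * np.asarray(bio_scores)
        return int(np.argmax(fused))

    # Illustrative example with made-up scores for a single time step.
    face = np.array([0.10, 0.50, 0.05, 0.05, 0.05, 0.05, 0.15, 0.05])
    bio  = np.array([0.05, 0.20, 0.05, 0.05, 0.05, 0.05, 0.50, 0.05])
    print(EMOTIONS[fuse_emotion_scores(face, bio)])
    ```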
  • Koichiro Ishikawa, Makoto Oka, Akito Sakurai
    2007 Volume 61 Issue 12 Pages 1785-1794
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    We propose an automatic and adaptive method to detect relatively important parts of multimedia content. The important parts should characterize the whole content and can also be regarded as its summary. By applying our method, users can capture important screenshots of the content and extract a set of important parts that contain characteristic scenes. To evaluate the importance of the various parts, our method uses the user's viewing history. Numerical experiments show the feasibility of the proposed method. (A sketch of one possible viewing-history score follows this entry.)
    Download PDF (11471K)
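    The abstract above says only that importance is evaluated from the user's viewing history. The following is a minimal sketch of one plausible scoring scheme, assuming fixed-length segments ranked by how often they were played; the segment length, interval format, and function name are illustrative assumptions rather than the authors' measure.

    ```python
    from collections import Counter

    def important_segments(viewing_sessions, segment_len=5.0, top_k=3):
        """Rank fixed-length content segments by how often they were played.

        viewing_sessions : list of (start_sec, end_sec) intervals actually played
        segment_len      : segment length in seconds (assumption)
        top_k            : number of segments to return as 'important parts'
        Returns the start times of the most frequently viewed segments.
        """
        counts = Counter()
        for start, end in viewing_sessions:
            for seg in range(int(start // segment_len), int(end // segment_len) + 1):
                counts[seg] += 1
        return [seg * segment_len for seg, _ in counts.most_common(top_k)]

    # Example: three playbacks by one user; the 30-50 s region is replayed.
    sessions = [(0, 60), (30, 45), (28, 50)]
    print(important_segments(sessions))
    ```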
  • Takeshi Dairiki, Toyohiko Hatada, Yasuhiro Takaki
    2007 Volume 61 Issue 12 Pages 1795-1802
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    The appearances of objects, such as glare and transparency, are the result of the reflection and refraction of rays. The high-density directional display, which was originally developed to realize a natural 3D display, precisely controls ray directions so that it can reproduce the appearances of objects. In this study, a high-density directional display that has a resolution of 640×400 and emits rays in 72 different horizontal directions with an angular pitch of 0.38° was constructed. Two 72-directional displays with a resolution of 320×400 were combined, each of which consisted of a high-resolution LCD panel (3,840×2,400) and a slanted lenticular sheet. The two images produced by the two displays were superimposed by a half mirror. A slit array was placed at the focal plane of the lenticular sheet to reduce horizontal image crosstalk. Subjective evaluation showed that the developed display reproduced object appearances and a sense of presence better than conventional 2D displays.
    Download PDF (8661K)
  • Kazuya Ueki, Tetsunori Kobayashi
    2007 Volume 61 Issue 12 Pages 1803-1809
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    To reduce the rate of error in gender classification, we propose the use of an integration framework that uses conventional facial images along with neck images. First, images are separated into facial and neck regions, and features are extracted from monochrome, color, and edge images of both regions. Second, we use Support Vector Machines (SVMs) to classify the gender from each individual feature. Finally, we reclassify the gender by considering the six distances from the optimal separating hyperplanes as a 6-dimensional vector. Experimental results show a 28.4% relative reduction in error over the performance baseline of the monochrome facial image approach, which until now had been considered to have the most accurate performance. (A sketch of this two-stage scheme follows this entry.)
    Download PDF (9114K)
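    The two-stage classification described in the abstract above can be sketched as follows, assuming scikit-learn SVMs and random stand-in data for the six feature types (monochrome, color, and edge features of the face and neck regions). The abstract does not state which classifier performs the final reclassification; a second SVM is an assumption here.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Random stand-ins for the six feature types; real features would be
    # extracted from monochrome, color, and edge images of the face and neck.
    n_samples, n_dims = 200, 20
    features = [rng.normal(size=(n_samples, n_dims)) for _ in range(6)]
    labels = rng.integers(0, 2, size=n_samples)        # 0 = female, 1 = male

    # Stage 1: one SVM per feature type.
    stage1 = [SVC(kernel="rbf").fit(X, labels) for X in features]

    # Stage 2: the six signed distances from the separating hyperplanes form a
    # 6-dimensional vector, which is classified again (here by a linear SVM).
    distances = np.column_stack([clf.decision_function(X)
                                 for clf, X in zip(stage1, features)])
    stage2 = SVC(kernel="linear").fit(distances, labels)

    # A new sample is classified through the same two-stage pipeline.
    new_feats = [rng.normal(size=(1, n_dims)) for _ in range(6)]
    new_dist = np.column_stack([clf.decision_function(X)
                                for clf, X in zip(stage1, new_feats)])
    print("predicted gender label:", stage2.predict(new_dist)[0])
    ```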
  • Hisayuki Taruki, Akira Ohno, Fumie Ono, Takayuki Hamamoto, Tomoshi Sas ...
    2007 Volume 61 Issue 12 Pages 1810-1817
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    Object detection that suppresses the effect of changes in ambient light is performed by using the difference between images captured with a modulated light turned on and off. In actual environments, changes in ambient light degrade conventional object detection. To overcome the effects of changes in ambient light and to achieve wide-dynamic-range imaging, a new way of detecting objects is introduced. We also describe a pixel circuit specifically designed for use with this method and present the results of evaluations of the fabricated prototype chip. (A sketch of the on/off difference idea follows this entry.)
    Download PDF (8466K)
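    The on/off difference idea in the abstract above can be illustrated with a short sketch: subtracting a frame captured with the modulated light off from one captured with it on cancels the slowly varying ambient component, leaving mainly the modulated illumination reflected by nearby objects. The threshold and the synthetic frames are assumptions for illustration; the paper implements this at the pixel-circuit level rather than in software.

    ```python
    import numpy as np

    def detect_object(frame_light_on, frame_light_off, threshold=10):
        """Return a boolean object mask from a modulated-light frame pair.

        Ambient illumination appears in both frames and cancels in the
        difference; pixels lit by the modulated light remain above threshold.
        """
        diff = frame_light_on.astype(np.int16) - frame_light_off.astype(np.int16)
        return diff > threshold

    # Synthetic example: uniform ambient light plus an object patch that
    # reflects the modulated source in the "on" frame only.
    off = np.full((8, 8), 100, dtype=np.uint8)
    on = off.copy()
    on[2:5, 2:5] += 40
    print(detect_object(on, off).astype(int))
    ```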
  • Takahiro Ogawa, Daisuke Sakuma, Shin ichi Shiraishi, Miki Haseyama
    2007 Volume 61 Issue 12 Pages 1818-1827
    Published: December 01, 2007
    Released on J-STAGE: January 29, 2010
    JOURNAL FREE ACCESS
    We propose a system that provides information about emergency rescue procedures to mobile phone users. To enable users to easily understand the procedures, avatars demonstrate how they are performed. Since the avatars are efficiently constructed on a subset of scalable vector graphics (SVG), they can be quickly transmitted to the mobile phone with low computational loads. In addition to the avatars, complementary voice and text data that explain the procedures are also transmitted via the synchronized multimedia integration language (SMIL, pronounced “smile”) format. Moreover, to verify the performance and effectiveness of the proposed system, we implemented a dedicated SMIL player suitable for use with mobile phones.
    Download PDF (12105K)