The Journal of The Institute of Image Information and Television Engineers
Online ISSN : 1881-6908
Print ISSN : 1342-6907
ISSN-L : 1342-6907
Volume 63, Issue 5
Focus
Message from an Honorary Member: For the Members Who Will Carry the Next Generation
Special Issue
The Latest Trend in IPTV
1. Overview and Current Status of IPTV
2. The Latest IPTV Standardization Activities in ITU-T
3. IPTV Services
4. Activities for Enhancing IPTV
5. Current Status Around IPTV
Technical Guide
Embedded Technology for Image Processing Engineers
Keywords you should know
My Recommendations on Research and Development Tools
Fresh Eyes -Introduction of Video Research Laboratory-
Report
  • Toshiharu Wada, Masanobu Takahashi, Keiichiro Kagawa, Jun Ohta
    2009 Volume 63 Issue 5 Pages 657-664
    Published: May 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    We propose a method for realizing mouse functions with a laser pointer. A function that automatically moves the mouse cursor to the laser spot on a screen was realized. Restrictions on the illumination environment were overcome by using a frequency-demodulation image sensor: laser spots were successfully detected under ambient illuminance as high as several thousand lx. Methods for detecting spots and transmitting information, such as a mouse click, without using a synchronous signal were proposed and then verified by experiments. Such asynchronous operation makes a signal line to the laser pointer unnecessary, and a specific laser pointer can be identified among multiple laser pointers. The proposed system provides smoother operability in presentations as well as in other human-computer interaction applications. A brief illustrative sketch of frequency-demodulation spot detection follows this entry.
    Download PDF (1467K)
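The lock-in style spot detection described above can be illustrated with a short sketch. The code below is a minimal, hypothetical software emulation of frequency demodulation, not the authors' sensor or system: the frame rate, modulation frequency, and function names are assumptions, and the real device performs this demodulation per pixel in hardware.

```python
# Hypothetical sketch of lock-in (frequency-demodulation) laser-spot detection.
# FRAME_RATE, MOD_FREQ, and detect_spot are illustrative assumptions.
import numpy as np

FRAME_RATE = 1000.0   # frames per second (assumed)
MOD_FREQ = 100.0      # laser intensity modulation frequency in Hz (assumed)

def detect_spot(frames):
    """Locate a laser spot modulated at MOD_FREQ in a stack of frames.

    frames: ndarray of shape (T, H, W), grayscale intensities.
    Returns (row, col) of the pixel with the strongest component at MOD_FREQ.
    """
    t = np.arange(frames.shape[0]) / FRAME_RATE
    ref_sin = np.sin(2 * np.pi * MOD_FREQ * t)
    ref_cos = np.cos(2 * np.pi * MOD_FREQ * t)
    # Per-pixel lock-in: project each pixel's temporal signal onto the reference tones.
    i_comp = np.tensordot(ref_sin, frames, axes=(0, 0))
    q_comp = np.tensordot(ref_cos, frames, axes=(0, 0))
    amplitude = np.hypot(i_comp, q_comp)   # constant ambient light cancels out
    return np.unravel_index(np.argmax(amplitude), amplitude.shape)

# Minimal self-test with a synthetic spot under strong constant illumination.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, H, W = 60, 120, 160                                  # 60 frames = 6 full modulation periods
    frames = 200.0 + rng.normal(0, 2, (T, H, W))            # bright ambient background
    t = np.arange(T) / FRAME_RATE
    frames[:, 40, 90] += 30 * (1 + np.sin(2 * np.pi * MOD_FREQ * t))  # modulated spot
    print(detect_spot(frames))                              # expected: (40, 90)
```

The point of the demodulation is that light which is constant over the integration window contributes almost nothing to the amplitude image, which is why the spot remains detectable under bright illumination.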
  • Hiroyo Ishikawa, Hideo Saito
    2009 Volume 63 Issue 5 Pages 665-672
    Published: May 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    This research proposes a method of representing objects on a 3D display device that uses a pulse laser to generate plasma luminous bodies at arbitrary positions in midair. Using an xyz-scanner, the device controls the positions of the plasma luminous bodies, which appear and disappear one by one at 1 kHz, so the device can display an object in midair in a single stroke. We propose methods of representing primitive objects (polygons, polyhedrons, and curved-surface objects) that take hardware limitations and human visual characteristics into account, and we evaluate the rendered objects in experiments. As a result, polygons can be represented visually and effectively, and the scanner burden is reduced, by smoothing accelerations at corners and increasing the plasma density; polyhedron faces are best drawn one by one; and curved-surface objects drawn with a spiral can be perceived stably. A brief illustrative sketch of such one-stroke scan paths follows this entry.
    Download PDF (9048K)
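Two of the ideas mentioned above, smoothing accelerations at polygon corners and drawing a curved surface as a single spiral, can be sketched as scan-path generators. This is a minimal illustration, not the authors' device driver; the point spacing, smoothing window, and spiral parameters are assumptions.

```python
# Hypothetical one-stroke scan paths for a point-scanned midair display.
import numpy as np

def circular_smooth(path, window=9):
    """Circular moving average along the stroke; rounds corners so the scanner's
    acceleration stays bounded, and keeps a closed stroke closed."""
    offsets = np.arange(-(window // 2), window // 2 + 1)
    idx = (np.arange(len(path))[:, None] + offsets) % len(path)
    return path[idx].mean(axis=1)

def polygon_path(vertices, spacing=0.01, window=9):
    """Sample a closed polygon at roughly uniform spacing, then smooth it."""
    verts = np.asarray(vertices, dtype=float)
    pts = []
    for a, b in zip(verts, np.roll(verts, -1, axis=0)):
        n = max(int(np.linalg.norm(b - a) / spacing), 1)
        pts.append(a + (b - a) * np.linspace(0, 1, n, endpoint=False)[:, None])
    return circular_smooth(np.concatenate(pts), window)

def sphere_spiral(radius=1.0, turns=20, points=2000):
    """Cover a sphere with a single pole-to-pole spiral stroke."""
    t = np.linspace(0, 1, points)
    theta, phi = np.pi * t, 2 * np.pi * turns * t
    return np.stack([radius * np.sin(theta) * np.cos(phi),
                     radius * np.sin(theta) * np.sin(phi),
                     radius * np.cos(theta)], axis=1)

# Example: a smoothed triangle outline (N, 2) and a spherical spiral (N, 3).
tri = polygon_path([[0, 0], [1, 0], [0.5, 0.8]])
ball = sphere_spiral()
```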
  • Kentaro Doba, Hiroshi Masuda
    2009 Volume 63 Issue 5 Pages 673-678
    Published: May 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Interactive mesh editing techniques are commonly used to create a new mesh model by deforming existing mesh models. In surface-based deformation, geometric shapes are encoded using differential equations, and they are deformed so that the equations and other constraints are satisfied in a least-squares sense. Although constraints are typically approximated as linear equations, constraints that preserve volume require nonlinear equations, which are time-consuming to solve. In our method, we enclose a mesh model with multiple overlapping lattices so that a subset of vertices is shared by two or more lattices, and the vertex coordinates in the equations are replaced by the coordinates of the lattices. Vertices shared by multiple lattices are used to propagate deformation between disconnected lattices. Our method makes it possible to solve the nonlinear equations efficiently and to deform mesh models interactively while preserving their volume. A brief illustrative sketch of the vertex-to-lattice substitution follows this entry.
    Download PDF (7169K)
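The substitution of vertex coordinates by lattice coordinates can be illustrated in a simplified setting. The sketch below assumes a single regular lattice rather than the authors' overlapping multi-lattice scheme: each mesh vertex is expressed as a trilinear combination of the eight control points of its lattice cell, so equations written in vertex coordinates become equations in the much smaller set of lattice coordinates. All names and parameters are illustrative.

```python
# Minimal sketch: trilinear weights tying mesh vertices to lattice nodes.
import numpy as np

def trilinear_weights(points, origin, cell_size, dims):
    """Return (indices, weights): for each point, the 8 flattened lattice-node
    indices of its cell and trilinear weights such that
    point ~= sum_k weights[k] * lattice[indices[k]].

    points: (N, 3) vertex positions; origin: lattice corner; cell_size: scalar;
    dims: (nx, ny, nz) number of lattice nodes per axis.
    """
    local = (np.asarray(points, dtype=float) - origin) / cell_size
    cell = np.clip(np.floor(local).astype(int), 0, np.asarray(dims) - 2)
    frac = local - cell
    corners = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    # Flattened indices of the 8 corner nodes of each vertex's cell.
    idx = np.ravel_multi_index(tuple((cell[:, None, :] + corners).transpose(2, 0, 1)), dims)
    # Weight for each corner is the product of frac or (1 - frac) per axis.
    w = np.prod(np.where(corners, frac[:, None, :], 1 - frac[:, None, :]), axis=2)
    return idx, w
```

Stacking these weights row by row gives a sparse matrix B with V ≈ B L, so constraints originally written on the vertices V, including nonlinear ones, can be rewritten in terms of the much smaller set of lattice unknowns L.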
  • Koichiro Honda, Hiroshi Masuda
    2009 Volume 63 Issue 5 Pages 679-684
    Published: May 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Interactive mesh deformation is a powerful tool for creating new mesh models from existing ones. When a mesh model is deformed, it is very important to preserve visually distinct features. However, existing deformation methods, which typically maintain discrete mean curvature, fail to preserve such features, because mean curvature is calculated using only one- or two-ring vertices, whereas geometric features can span mesh regions of various scales. In our method, the shapes of features are constrained based on visual saliency, which is calculated as a weighted sum of mean curvature values at various resolutions. We introduce a new constraint that controls the stiffness of a feature region according to its visual saliency. Such constraints are effective for interactive deformation that adequately preserves visually distinct features. We show that our method can interactively deform mesh models while preserving their visual saliency. A brief illustrative sketch of the multi-scale saliency computation follows this entry.
    Download PDF (7166K)
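A weighted sum of mean curvature at several resolutions can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: per-vertex mean curvature and a vertex adjacency list are given, repeated one-ring averaging stands in for the paper's multi-resolution filtering, and the scale weights are arbitrary.

```python
# Hypothetical multi-scale saliency from per-vertex mean curvature.
import numpy as np

def smooth_once(values, neighbors):
    """One pass of averaging each vertex with its one-ring neighbors.

    neighbors: list where neighbors[i] is an index array of vertex i's neighbors.
    """
    return np.array([(values[i] + values[nbrs].sum()) / (1 + len(nbrs))
                     for i, nbrs in enumerate(neighbors)])

def visual_saliency(mean_curvature, neighbors,
                    scales=(1, 2, 4), weights=(0.5, 0.3, 0.2)):
    """Weighted sum of |mean curvature| smoothed at several neighborhood scales
    (scale = number of one-ring averaging passes; weights are illustrative)."""
    smoothed = np.abs(np.asarray(mean_curvature, dtype=float))
    saliency = np.zeros_like(smoothed)
    passes_done = 0
    for n_passes, w in zip(scales, weights):
        while passes_done < n_passes:        # reuse previous passes for coarser scales
            smoothed = smooth_once(smoothed, neighbors)
            passes_done += 1
        saliency += w * smoothed
    return saliency
```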
  • Satoshi Handa, Yoshinobu Ebisawa
    2009 Volume 63 Issue 5 Pages 685-691
    Published: May 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Physically handicapped people, such as amyotrophic lateral sclerosis (ALS) patients who can move only their eyes, have difficulty conveying their intentions to the people around them. As an intention transmission device that a user can operate with eye movements, we experimentally produced a head-mounted display (HMD) with an eye-gaze detection function. Conventional HMDs of this type are large, and the eye-gaze calibration procedure must be repeated if the user moves relative to the HMD. To solve these problems, we improved the eye-gaze detection methodology and the optical systems for detecting eye gaze and presenting the displayed image, as described in this paper. The experimental results show that the developed HMD maintains precise eye-gaze detection even if the relative position between the finder and the eye changes. A brief background sketch of pupil-glint gaze mapping follows this entry.
    Download PDF (1515K)
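For background, the classical pupil-corneal-reflection approach maps the vector from the pupil center to a corneal glint onto display coordinates; because that vector changes far less than the raw pupil position when the camera shifts slightly relative to the eye, it tolerates small relative movement. The sketch below shows such a mapping fitted by least squares. It is generic background only, not the improved method of this paper, and the calibration scheme, quadratic form, and function names are assumptions.

```python
# Hypothetical pupil-glint gaze mapping fitted from calibration data.
import numpy as np

def _design_matrix(vecs):
    """Quadratic polynomial terms of pupil-glint vectors, shape (N, 6)."""
    x, y = np.asarray(vecs, dtype=float).T
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)

def fit_gaze_map(pupil_glint_vecs, screen_points):
    """Least-squares fit from pupil-glint vectors (N, 2) to known calibration
    targets on the display (N, 2); returns coefficients of shape (6, 2)."""
    A = _design_matrix(pupil_glint_vecs)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_points, dtype=float), rcond=None)
    return coeffs

def estimate_gaze(coeffs, pupil_glint_vec):
    """Map one pupil-glint vector to display coordinates."""
    return _design_matrix([pupil_glint_vec])[0] @ coeffs
```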