ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Volume 22, Issue 39
Displaying 1-11 of 11 articles from this issue
  • Article type: Cover
    Pages Cover1-
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (17K)
  • Article type: Index
    Pages Toc1-
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (52K)
  • Ichiro MATSUDA, Susumu ITOH
    Article type: Article
    Pages 1-6
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper proposes a new region-oriented coding scheme for still images. The scheme first segments an image using Voronoi diagrams. The luminance components in each Voronoi region are then encoded with an adaptive transform method. We utilize the Gram-Schmidt orthogonalization method to produce basis functions for arbitrarily shaped regions. Since the properties of the basis functions depend on the order of the input vectors in the orthogonalization, we change the order adaptively to improve coding performance. Simulation results indicate that the proposed coding scheme achieves higher coding performance than the KLT-based scheme we reported formerly, with much less computation.
    Download PDF (1078K)
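The adaptive-ordering idea above rests on the Gram-Schmidt procedure; below is a minimal sketch of the orthogonalization step only (the region shapes and the paper's ordering heuristic are not reproduced, and the sample vectors are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a sequence of vectors (rows) by Gram-Schmidt.

    The resulting basis depends on the input order, which is the
    degree of freedom the paper adapts for coding gain.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= np.dot(w, b) * b      # remove the component along b
        norm = np.linalg.norm(w)
        if norm > 1e-10:               # skip linearly dependent inputs
            basis.append(w / norm)
    return np.array(basis)

# Illustrative input vectors; a different row order yields a different basis.
vecs = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
B = gram_schmidt(vecs)
```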
  • Akira Utsumi, Jun Ohya
    Article type: Article
    Pages 7-12
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We propose a hand shape recognition system that uses multiple-viewpoint images. Most conventional hand shape recognition systems do not address the self-occlusion problem caused by hand rotation, so the user must pay attention to the direction of his or her hand to avoid it. We employ multiple-viewpoint images to estimate the pose of a human hand. After pose estimation, a "best view" for hand shape recognition is selected based on the estimation result. Hand shape recognition is performed with a shape representation based on the P-type Fourier descriptor, which is invariant to image scaling and translation. Based on this shape recognition, we developed a system in which a user can create virtual graphical scenes interactively. In the system, a user can change the virtual objects' positions, sizes, colors, etc. with hand gestures. The system can be used as a user interface device, replacing glove-type devices and overcoming most of the disadvantages of contact-type devices.
    Download PDF (874K)
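The scale and translation invariance claimed for the descriptor can be illustrated with a generic complex Fourier descriptor; note this is the standard construction, not the P-type variant the paper uses, which parameterizes the contour differently:

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Scale- and translation-invariant descriptor of a closed contour.

    `contour` is an (N, 2) array of boundary points, treated as a
    complex signal x + i*y.
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    c = np.fft.fft(z)
    c[0] = 0.0                 # drop DC term -> translation invariance
    mag = np.abs(c)
    mag /= mag[1]              # normalize by first harmonic -> scale invariance
    return mag[1:n_coeffs + 1]
```

Translating or uniformly scaling the contour leaves the descriptor unchanged, which is why the user's hand distance from the camera does not matter.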
  • Kuniteru SAKAKIBARA, Takahiro WATANABE, Masahiko YACHIDA
    Article type: Article
    Pages 13-18
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper describes a real-time gesture recognition method for interactive systems. Our method can recognize gestures robustly in a variety of real, complex environments. For robust gesture recognition, our method performs interactive model generation before the recognition processing: an individual gesture model is constructed through interaction between the user and the computer. The individual gesture model is based on the template matching technique and consists of a set of color template images representing a set of specific poses of a human body part in a gesture. Using our method, we realized a real-time interactive system, the Gesture Game System, which can control the characters of a video game by gestures in real time and demonstrates the usefulness of our method.
    Download PDF (970K)
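The pose-template matching can be sketched as normalized cross-correlation over a grayscale image; the paper matches sets of color templates, so this single-channel version (with hypothetical array names) shows only the scoring step:

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of `template` at every valid
    position in `image`; a score near 1.0 marks a pose match."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tnorm
            out[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return out
```

The per-pose templates would each be scored this way, and the best-scoring template identifies the current pose in the gesture sequence.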
  • Hiroaki BESSHO, Takahiro WATANABE, Satoshi KIMURA, Masahiko YACHIDA
    Article type: Article
    Pages 19-24
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper describes a new method of facial expression recognition for man-machine interfaces. Our method can not only recognize several kinds of facial expressions but also estimate their degree, and the recognition results can be used in many interface applications. Our method is based on the idea that facial expression recognition can be achieved by extracting the variation from an expressionless face, considering the face area as a whole pattern. Using an elastic net model, a variation of facial expression is represented as the motion vectors of the net deformed over a facial edge image. Then, applying the K-L expansion, the change of facial expression represented by the motion vectors of the nodes is mapped into a low-dimensional eigenspace, the Emotion Space, and estimation is achieved by projecting input images onto the Emotion Space. In this paper we construct three kinds of expression models, happiness, anger, and surprise, and evaluate experimental results. Using our method, we realized an interactive system, the Facial Expression Video Controller, which can control video playback with the recognition results and demonstrates the usefulness of our method.
    Download PDF (1194K)
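The K-L expansion step, mapping motion-vector patterns into a low-dimensional eigenspace, is ordinary principal component analysis; a minimal sketch follows (the training data, dimensionality, and function names are illustrative; only the term "Emotion Space" comes from the abstract):

```python
import numpy as np

def build_eigenspace(patterns, k=3):
    """K-L expansion (PCA): return the mean pattern and the top-k
    principal directions of the training motion-vector patterns."""
    X = np.asarray(patterns, dtype=float)
    mean = X.mean(axis=0)
    # Right singular vectors of the centered data are the
    # eigenvectors of the covariance matrix.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, basis):
    """Map one motion-vector pattern into the low-dimensional
    'Emotion Space'."""
    return basis @ (np.asarray(x, dtype=float) - mean)
```

An input image's motion vectors are projected the same way, and its position in the Emotion Space gives both the expression class and its degree.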
  • Akira WADA, shin-ichi MURAKAMI
    Article type: Article
    Pages 25-30
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a method for detecting and recognizing human movements from a series of image data. We propose a method to transform the apparent image size of the human body into its real size in order to recognize the object's position and locus of motion in real space.
    Download PDF (904K)
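The apparent-to-real size transform is, in the simplest pinhole-camera reading, a similar-triangles relation. The abstract does not give the paper's calibration model, so this is only the textbook version with hypothetical parameter names:

```python
def real_size_from_image(pixel_size, distance_m, focal_length_px):
    """Pinhole-camera relation: an object of real size S at distance d
    projects to S * f / d pixels, so S = pixels * d / f.

    pixel_size      -- apparent size in the image, in pixels
    distance_m      -- object distance from the camera, in meters
    focal_length_px -- focal length expressed in pixel units
    """
    return pixel_size * distance_m / focal_length_px
```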
  • Jun Shimamura, Yoshifumi Kitamura, Fumio Kishino, Haruo Takemura, Naok ...
    Article type: Article
    Pages 31-36
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The recent progress in graphics computation has created opportunities to construct various virtual environments, such as urban or natural scenes. Furthermore, such environments can be enriched with photographs of the real world used as texture images. In walking through a virtual environment, it is important to represent motion parallax that follows the user's viewpoint as it moves. This paper describes a method to create realistic distant scenery in a virtual environment using panoramic real images taken by an omnidirectional stereo imaging sensor.
    Download PDF (1179K)
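Rendering distant scenery from a cylindrical panorama reduces, at its simplest, to looking up the pixel column for each viewing azimuth; here is a sketch of that lookup only (the stereo imaging and motion-parallax handling of the paper are not reproduced):

```python
import math
import numpy as np

def panorama_column(pano, azimuth_rad):
    """Return the pixel column of a 360-degree cylindrical panorama
    (an H x W or H x W x C array) seen at the given viewing azimuth."""
    width = pano.shape[1]
    frac = (azimuth_rad % (2 * math.pi)) / (2 * math.pi)
    return pano[:, int(frac * width) % width]
```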
  • Miyuki MUKASA, Yoshito Mekada, Hiroshi Hasegawa, Masao Kasuga, Shuichi ...
    Article type: Article
    Pages 37-42
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Computers that speak with human emotion are expected to enable comfortable speech interaction. In this paper, we try to ascertain the physical features of emotion contained in our conversations. We employ eight categories of human emotion and collect speech signals for each emotion. These speech signals are compared with emotionless speech signals through perceptual experiments and analysis of their physical features. According to these results, we find that the pitch frequency becomes high and the power becomes weak in speech signals with "surprise". Speech signals with "joy" have the same pitch characteristic and the opposite power characteristic. In speech signals with "sadness", there is a certain oscillation in both the peak frequency of the spectra and the pitch.
    Download PDF (1053K)
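The pitch and power features discussed above can be measured with a simple autocorrelation pitch estimator and an RMS power. The abstract does not specify the paper's actual analysis method, so this is only a common baseline:

```python
import numpy as np

def pitch_and_power(signal, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate pitch (Hz) as the autocorrelation peak within the
    speech pitch range, and power as the RMS of the signal."""
    x = signal - signal.mean()
    power = np.sqrt(np.mean(x ** 2))
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    lo = int(sample_rate / fmax)    # shortest plausible pitch period
    hi = int(sample_rate / fmin)    # longest plausible pitch period
    lag = lo + np.argmax(ac[lo:hi])
    return sample_rate / lag, power
```

The emotional contrasts reported above would then show up as shifts of the first return value (pitch) and the second (power) relative to emotionless speech.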
  • Norikazu Akutsu, Hiroshi Hasegawa, Masao Kasuga, Shuichi Matsumoto, At ...
    Article type: Article
    Pages 43-48
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, sound localization in the horizontal plane was simulated with approximated head-related transfer functions (HRTFs). The approximated HRTFs were obtained with the following filters: (1) FIR filters, (2) third- to seventh-order IIR filters, and (3) third-order IIR filters. We performed localization tests with five subjects by headphone reproduction of auditory stimuli generated with the HRTFs. The results show that sound localization in the horizontal plane simulated with the IIR-filter HRTFs of (2) or (3) achieves the same accuracy as that simulated with the FIR-filter HRTFs of (1).
    Download PDF (534K)
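The contrast between the two filter forms can be sketched as follows: the FIR HRTF is applied by convolution with the head-related impulse response, while a low-order IIR approximation replaces that long convolution with a short recursion. All coefficients below are hypothetical, not measured HRTFs, and the paper's FIR-to-IIR fitting step is not reproduced:

```python
import numpy as np

def fir_render(x, hrir):
    """Apply an FIR HRTF: convolve the mono signal with the
    head-related impulse response for one ear."""
    return np.convolve(x, hrir)

def iir_filter(b, a, x):
    """Direct-form IIR filter:
    y[n] = (sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]) / a[0].
    A third-order approximation needs only a handful of such
    coefficients per sample instead of a full-length convolution."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y
```

Rendering one left-ear and one right-ear channel this way and playing them over headphones is the reproduction setup the localization tests rely on.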
  • Article type: Appendix
    Pages App1-
    Published: July 28, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (73K)