ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Vol. 23, No. 62
Displaying 1-17 of 17 articles from this issue
  • Article type: Cover
    Pages Cover1-
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (15K)
  • Article type: Index
    Pages Toc1-
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (73K)
  • Kenichi Kanazawa, Hiroshi Hasegawa, Masao Kasuga, Shuichi Matsumoto, A ...
    Article type: Article
    Session ID: MIP99-67
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a method using IIR filters to realize auditory sound localization simply. First, the head-related transfer functions (HRTFs) of a dummy head are approximated using IIR filters. The approximated HRTFs were obtained with the following filters: (a) FIR filters, (b) fourth- to seventh-order IIR filters, and (c) third-order IIR filters. Next, sound localization experiments using the IIR filters were carried out. The results of the localization tests show that sound localization on the horizontal plane simulated by the HRTFs with the IIR filters (b) or (c) achieves the same accuracy as that simulated by the HRTFs with the FIR filters (a).
    Download PDF (548K)
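The computational advantage of the low-order IIR approximation described above can be sketched as a plain difference equation; the coefficients below are illustrative placeholders, not the paper's fitted HRTF coefficients:

```python
def iir_filter(b, a, x):
    """Apply an IIR filter: y[n] = (sum_k b[k]x[n-k] - sum_{k>=1} a[k]y[n-k]) / a[0].
    A third-order filter needs only a handful of multiplies per sample,
    versus hundreds for a long FIR HRTF convolution."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc / a[0])
    return y

# Illustrative first-order example: impulse response of H(z) = 1 / (1 - 0.5 z^-1)
impulse = [1.0] + [0.0] * 7
resp = iir_filter([1.0], [1.0, -0.5], impulse)   # 1, 0.5, 0.25, ...
```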
  • Kazuhiro TAKAHASHI, Hiroshi HASEGAWA, Miyoshi AYAMA, Masao KASUGA
    Article type: Article
    Session ID: MIP99-68
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Audio-visual interaction is recognized to be of considerable importance in creating realistic audio-visual spaces. We investigated the effect of a visual image on real sound localization and on binaural sound localization reproduced by two loudspeakers. To reproduce the binaural sound image, the transfer function from each loudspeaker to each ear of a subject seated at the center of a semicircular screen, facing 0 deg, was measured.
    Download PDF (563K)
  • Osamu Takeuchi, Madoka Hasegawa, Shigeo Kato
    Article type: Article
    Session ID: MIP99-69
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A resolution conversion method becomes necessary when images are exchanged between binary devices of different resolutions. However, because dithered images contain more high-frequency patterns than facsimile images, many resolution conversion methods cannot achieve satisfactory image quality when applied to dithered images. In this paper, we propose a shrinking method for images dithered with a Bayer mask. To compensate for the luminance difference between the original dithered image and the converted image, our method introduces a technique similar to Δ-Σ modulation. Simulation results show that our method achieves higher image quality than PRES or a conversion method based on the Hadamard transform.
    Download PDF (1187K)
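The luminance-compensation idea, carrying the quantization error forward in the manner of Δ-Σ modulation, can be sketched as follows; this is a minimal illustration of the principle, not the paper's exact algorithm:

```python
def shrink_dithered(img, block=2):
    """Shrink a binary (0/1) dithered image by `block` in each direction,
    carrying the quantization error forward (delta-sigma style) so that
    the average luminance of the original is preserved."""
    h, w = len(img), len(img[0])
    out = []
    err = 0.0  # running luminance error carried between output pixels
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            mean = sum(img[y][x] for y in range(by, by + block)
                                 for x in range(bx, bx + block)) / (block * block)
            v = mean + err
            bit = 1 if v >= 0.5 else 0
            err = v - bit          # residual pushed to the next pixel
            row.append(bit)
        out.append(row)
    return out

# A 4x4 checkerboard is 50% black; the 2x2 result keeps that density.
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]
small = shrink_dithered(checker)
```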
  • Takeharu Adachi, Madoka Hasegawa, Shigeo Kato
    Article type: Article
    Session ID: MIP99-70
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Recently, multimedia data such as still images, motion pictures and audio signals have come into wide use. However, they can be duplicated easily and repeatedly without degradation, which causes many copyright problems. To solve this problem, copyright information such as a signature or an ID is embedded into an image. When an image is used by someone without authorization, the copyright can be protected by extracting the embedded data from the image and verifying it. Such methods are called watermarking methods. In this paper, we propose a watermarking method that uses the Discrete Cosine Transform (DCT) as the orthogonal transform. The proposed method embeds data while considering both the quality of the watermarked image and the robustness to attack; in particular, the proposed scheme is highly robust to JPEG compression. Finally, we show simulation results for the proposed scheme.
    Download PDF (1036K)
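The abstract does not spell out the embedding rule; one common way to hide a bit in a DCT coefficient so that it survives requantization (such as JPEG's) is quantization-index modulation, sketched here in 1-D:

```python
import math

def dct(x):
    # DCT-II of a real sequence
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct(X):
    # inverse of the DCT-II above
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                                       for k in range(1, N))
            for n in range(N)]

def embed_bit(coeffs, idx, bit, step=8.0):
    # QIM: force the chosen coefficient to an even/odd multiple of `step`
    q = round(coeffs[idx] / step)
    if q % 2 != bit:
        q += 1
    coeffs[idx] = q * step

def extract_bit(coeffs, idx, step=8.0):
    return int(round(coeffs[idx] / step)) % 2

# embed a bit in a mid-frequency coefficient and recover it after the
# inverse/forward transform round trip
signal = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0]
X = dct(signal)
embed_bit(X, 3, 1)
recovered = extract_bit(dct(idct(X)), 3)
```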
  • Osamu SUGIMOTO, Ryoichi KAWADA, Shuichi MATSUMOTO
    Article type: Article
    Session ID: MIP99-71
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    SNR is often used for objective picture quality assessment of video coding. However, SNR cannot be calculated in an absolute evaluation, which assesses picture quality from the decoded pictures alone, because both the original and decoded pictures are required. In this report, the authors propose an absolute objective picture quality assessment method for video coding using steganography. The proposed method embeds watermark bits into the quantized values of DCT coefficients. The watermark bits represent DCT coefficients of the original pictures and are compared with those of the decoded pictures to estimate the SNR of the decoded pictures. Computer simulations demonstrate the effectiveness of the proposed method and show that it achieves good SNR estimates.
    Download PDF (796K)
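Once the original DCT coefficients have been recovered from the watermark bits, the SNR estimate itself is a standard computation; a minimal sketch, with short placeholder sequences standing in for the coefficient sets:

```python
import math

def snr_db(original, decoded):
    """SNR in dB: ratio of original signal power to the power of the
    difference between original and decoded values."""
    signal_power = sum(o * o for o in original)
    noise_power = sum((o - d) ** 2 for o, d in zip(original, decoded))
    return 10.0 * math.log10(signal_power / noise_power)

orig = [1.0, 2.0, 3.0, 4.0]
dec = [1.1, 2.1, 3.1, 4.1]   # decoded values with a small coding error
snr = snr_db(orig, dec)       # 10*log10(30 / 0.04) ≈ 28.75 dB
```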
  • Eiji Sawamura, Takao Monma, Tetsuo Mitsuhashi, Terumasa Ehara, Katsuhi ...
    Article type: Article
    Session ID: MIP99-72
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    The Telecommunications Advancement Organization (TAO) of Japan has initiated a project in which electronically available text of TV news programs is automatically summarized and synchronized with the speech and video, then superimposed on the original programs for the benefit of hearing-impaired people in Japan. The project started in 1996 and will end in 2001. This report describes recent results of our research on closed-caption display methods, in particular a preference assessment of closed-caption display methods and a production system for generating the various closed captions used in testing.
    Download PDF (1238K)
  • Aki Hashizume, Hirotada Ueda, Kazuhito Yaegashi, Mitsue Ito, Tsuyoshi ...
    Article type: Article
    Session ID: MIP99-73
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We are developing a TV program production system based on the script description language TVML (TV program Making Language). TVML is designed to describe a whole TV program in a natural-language-like script. We are also developing a TVML editor: using it, a user need not learn the TVML grammar and can easily generate a TVML script through the GUI. To improve the usability of the TVML editor's GUI, we have newly developed a Director Metaphor. Through this metaphor, the user can give directions to CG actors or props by direct manipulation with simple mouse operations. In this paper, we describe the concept of our Director Metaphor and the results obtained with our prototype.
    Download PDF (1769K)
  • Tomohiro ITO, Yoshito MEKADA, Hiroshi HASEGAWA, Masao KASUGA, Kazuo KA ...
    Article type: Article
    Session ID: MIP99-74
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a care system that can protect the privacy of handicapped people and lighten the burden on their helpers. We describe a prototype of the care system under particular conditions. The performance of the noise canceler, which plays an important role in the system, was evaluated. From the results, we conclude that the system is effective when the correlation between the noise and the voice is low.
    Download PDF (862K)
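The abstract does not identify the canceler structure; a common choice is an LMS adaptive filter, which subtracts from the primary signal whatever is linearly predictable from a noise reference. A sketch under that assumption:

```python
import math

def lms_cancel(primary, reference, n_taps=4, mu=0.01):
    """LMS adaptive noise canceler: the output is the primary input minus
    the adaptive filter's estimate of the noise, driven by `reference`."""
    w = [0.0] * n_taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = primary[n] - y_hat          # canceler output (error signal)
        out.append(e)
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]
    return out

# Sinusoidal noise picked up at half amplitude: the residual should
# shrink toward zero once the filter converges.
noise = [math.sin(0.3 * n) for n in range(2000)]
primary = [0.5 * v for v in noise]
residual = lms_cancel(primary, noise)
```

This also illustrates the abstract's conclusion: the canceler works because the reference is highly correlated with the picked-up noise but not with the voice.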
  • [in Japanese], [in Japanese], [in Japanese], [in Japanese]
    Article type: Article
    Session ID: MIP99-75
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper deals with an assistive system that provides the information deaf workers need to carry out their jobs in the workplace through two language media: Japanese (text) and signed language (motion pictures). The system also provides graphics such as figures, tables and still pictures to aid comprehension. We first introduce the basic concept of the system, including how the bilingual information interacts in comprehension. We then show how to generate signed-language sentences using an editing approach. Finally, we present preliminary experimental results on the interaction between the two language media.
    Download PDF (1097K)
  • Kazuo Kamata, Yoshihito Nishibori, Tetsuya Nishimoto
    Article type: Article
    Session ID: MIP99-76
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper addresses computerized teaching/learning materials for hearing learners of signed language. We first present the purpose, functions, and structure of the system, and then give an overview of a computerized system that extends the functions of our previous one. One of the main functions of the current system is to provide useful information on the appropriate Japanese translation according to the sense of a sign word. We present several sign words to explain the correspondences between Japanese Sign Language (JSL) words/sentences and their Japanese translations.
    Download PDF (914K)
  • Yasuhiro TANAKA, Yoshito MEKADA, Masao KASUGA, Hiroharu YOSHIKAWA
    Article type: Article
    Session ID: MIP99-77
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In recent years, the need for facial identification systems has grown. We previously developed a feature that captures personal facial characteristics by using the normal vectors of planes approximating small facial areas. In this paper, we improve this feature by extracting the divisions that are effective for personal identification. The results show that the accuracy of personal identification improved with the refined feature.
    Download PDF (926K)
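The feature described above, normal vectors of planes approximating small areas, can be sketched for a single patch; here mean finite-difference gradients stand in for whatever fitting procedure the paper actually uses, and the patch values are illustrative:

```python
def patch_normal(patch):
    """Unit normal of the plane approximating a small height/intensity
    patch, estimated from the mean finite-difference gradients in x and y."""
    h, w = len(patch), len(patch[0])
    gx = sum(patch[y][x + 1] - patch[y][x]
             for y in range(h) for x in range(w - 1)) / (h * (w - 1))
    gy = sum(patch[y + 1][x] - patch[y][x]
             for y in range(h - 1) for x in range(w)) / ((h - 1) * w)
    nx, ny, nz = -gx, -gy, 1.0          # normal of z = gx*x + gy*y + c
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)

# a patch tilted along x: the normal leans away from +x
normal = patch_normal([[0.0, 1.0, 2.0]] * 3)
```

Collecting such normals over a grid of patches gives a vector that varies with the local shape of the face, which is the kind of per-region feature the abstract refers to.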
  • Daisuke KODAMA, Yoshito MEKADA, Miyoshi AYAMA, Masao KASUGA
    Article type: Article
    Session ID: MIP99-78
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    It is important to study drivers' sensory functions, such as eye movements and the area within which an object can be perceived while driving, because such knowledge is useful for designing safer cars, constructing roads, and placing the speedometer in a car. In this paper, we experimentally investigate the functional visual field during driving. We examine how the functional visual field changes when the subject is given additional auditory information, visual information, or both. In addition, we define a quantity of image variation using optical flow, and we seek the relation between this feature and the measured functional visual field.
    Download PDF (1119K)
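A quantity of image variation based on optical flow could, for example, be the mean flow magnitude over the frame. The abstract does not name the flow algorithm; a single-window Lucas-Kanade estimate, one common choice, looks like this:

```python
def lucas_kanade(prev, curr):
    """Single-window Lucas-Kanade optical flow: returns the (u, v)
    translation that best explains curr from prev, in the least-squares
    sense, from spatial and temporal image gradients."""
    h, w = len(prev), len(prev[0])
    a11 = a12 = a22 = b1 = b2 = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            ix = prev[y][x + 1] - prev[y][x]   # spatial gradient in x
            iy = prev[y + 1][x] - prev[y][x]   # spatial gradient in y
            it = curr[y][x] - prev[y][x]       # temporal difference
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 -= ix * it;  b2 -= iy * it
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return (0.0, 0.0)   # untextured window: flow is unobservable
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# A textured ramp translated by one pixel in x: the estimate is (1, 0).
prev = [[float(x * y) for x in range(3)] for y in range(3)]
curr = [[float((x - 1) * y) for x in range(3)] for y in range(3)]
u, v = lucas_kanade(prev, curr)
```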
  • Daisuke SASAKI, Yoshito MEKADA, Masao KASUGA, Nobuo UEDA
    Article type: Article
    Session ID: MIP99-79
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A face conveys a great deal of information, for example, personal information and information related to communication. Many studies have been carried out that extract human facial parts and estimate emotions. We studied the analysis of facial expressions using feature points of typical facial parts: the eyes, eyebrows, and mouth. We measured the movement of facial parts during changes of facial expression ("happiness", "anger", "surprise" and "sadness"), using a "neutral" expression for comparison. As a result, we obtained several characteristics of these facial expressions, and based on them we constructed facial descriptions of these emotions.
    Download PDF (602K)
  • Yuichi Iwadate
    Article type: Article
    Session ID: MIP99-80
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Owing to the rapid progress of computer technology in recent years, anyone can easily handle multimedia content such as video and audio. Multimedia systems are expected to enable more expressive communication, through which we can convey our emotions and intentions effectively. From this viewpoint, we are researching Image Expression, which expresses things such as emotions and sensations that cannot be expressed in words. In this paper, recent progress in Image Expression research at ATR-MIC is reported.
    Download PDF (1598K)
  • Article type: Appendix
    Pages App1-
    Published: October 21, 1999
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (86K)