ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Volume 25, Issue 35
Showing 1-15 articles out of 15 articles from the selected issue
  • Type: Cover
    Pages Cover1-
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (14K)
  • Type: Index
    Pages Toc1-
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (58K)
  • Takao MONMA, Eiji SAWAMURA, Terumasa EHARA, Katsuhiko SHIRAI
    Type: Article
    Session ID: AIT2001-62
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Today, expanding closed captioning of TV programs for hearing-impaired people is a social mission. We integrated automation technologies for closed captioning, summarization, and synchronization, and developed an efficient, rapid closed-caption production system. In this paper, we report on a technique for automatic line feeding and page feeding applied to the manuscript or to automatically summarized text, as an elemental technique for realizing an automatic production system for TV closed captions. (A simplified line-wrapping sketch follows this entry.)
    Download PDF (519K)
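A minimal illustration of the kind of line-feeding and page-feeding rule this abstract describes. The greedy word-wrapping strategy, the 15-character line limit, and the 2-line page limit are assumptions for illustration, not the authors' algorithm.

```python
# Hypothetical sketch of automatic line/page feeding for closed captions.
# The character and line limits below are assumed values, not from the paper.

def wrap_caption(words, max_chars=15, lines_per_page=2):
    """Greedily break a list of words into caption lines, then group into pages."""
    lines, current = [], ""
    for word in words:
        candidate = (current + " " + word).strip()
        if len(candidate) <= max_chars:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    # Group the wrapped lines into caption pages.
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

if __name__ == "__main__":
    text = "captions must be broken into short readable lines for viewers"
    for page in wrap_caption(text.split()):
        print(page)
```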
  • Mamoru DOUKE, Masaki HAYASHI
    Type: Article
    Session ID: AIT2001-63
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We have been studying various computer-based TV program production methods since we proposed TVML (TV program Making Language), and automatic TV program production is one of the research topics related to TVML. Using this technique, a user can obtain an automatically produced TV program by entering only the minimum amount of necessary information into a computer. In this paper, we propose a new concept named the "program block", applied to an automatic TV program production algorithm based on TVML. Program blocks represent small segments corresponding to the sections of a TV program, and a user can produce a TV program by arranging some of the program blocks in the desired sequence on a computer. We explain program blocks and introduce a simple experiment based on this method (a minimal arrangement sketch follows this entry).
    Download PDF (752K)
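A minimal sketch of arranging program blocks into a sequence, assuming a hypothetical block structure and a made-up script output format; this is not TVML syntax or the authors' implementation.

```python
# Hypothetical "program block" arrangement. Block names, fields, and the
# rendered output format are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ProgramBlock:
    name: str           # e.g. "opening", "main topic"
    dialogue: list      # lines the CG presenter speaks in this segment
    duration_sec: int   # nominal length of the segment

def assemble_program(blocks):
    """Concatenate blocks in the user's chosen order into a simple script."""
    script = []
    for block in blocks:
        script.append(f"# --- {block.name} ({block.duration_sec}s) ---")
        script.extend(f'character: talk "{line}"' for line in block.dialogue)
    return "\n".join(script)

if __name__ == "__main__":
    opening = ProgramBlock("opening", ["Good evening, here is today's program."], 10)
    topic   = ProgramBlock("main topic", ["Our first story is about CG presenters."], 60)
    closing = ProgramBlock("closing", ["That is all for today. Good night."], 10)
    print(assemble_program([opening, topic, closing]))
```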
  • Masanobu Hirata
    Type: Article
    Session ID: AIT2001-64
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Broadcasting stations today use a great deal of CGI (computer-generated imagery), ranging from still images to 3D virtual sets. We use CGI for the following reasons: 1. it is a rational way of producing images, 2. it links easily with data, and 3. it enables new forms of expression. We report on the present situation and the possibilities of CGI, with examples from "ABC 50th Anniversary Forum" and "Election 2000", broadcast last year.
    Download PDF (1491K)
  • Shigeru Aoki
    Type: Article
    Session ID: AIT2001-65
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    A high-definition editing system has been in operation at Universal Studios Japan since March 2001. This report introduces the system and presents future plans and ideas for post-production facilities.
    Download PDF (439K)
  • Atsushi KASAO
    Type: Article
    Session ID: AIT2001-66
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Jaggy-free image enlargement by the Synergistic Image Creator (SIC) is presented. A small modification of SIC's program lets you produce images in your own style. This is SIC's most important characteristic, because we believe that an essential interest of creating CG images lies in devising the image-creation algorithm itself.
    Download PDF (1963K)
  • Kazuya KIMITA, Yooko YAMAKATA, Koh KAKUSHO, Michihiko MINOH
    Type: Article
    Session ID: AIT2001-67
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose using sample images to communicate visual imagery about the shapes, colors, and layouts of objects to others. If we employ a single sample image, or images that resemble one another, those images would represent only a part of the visual imagery. We therefore employ "representative images", sample images with sufficient variety to express the full range of the visual imagery. To obtain the representative images, we introduce an interactive process in which the user selects images corresponding to his or her visual imagery from among images synthesized by combining image regions in a database, based on a model of visual imagery obtained from each sample image (a skeleton of this loop follows this entry).
    Download PDF (703K)
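One way to picture the interactive selection loop outlined in this abstract. The region database, the random combination rule, and the stand-in "user" are all placeholders; the authors' visual-imagery model is not reproduced here.

```python
# Hypothetical skeleton of the interactive selection loop; not the authors' method.
import random

def synthesize(regions, n_candidates=6, parts_per_image=3):
    """Combine randomly chosen image regions into candidate composite images."""
    return [random.sample(regions, parts_per_image) for _ in range(n_candidates)]

def interactive_selection(regions, rounds=3, user_picks=None):
    """Repeatedly present candidates and keep those the user says match the imagery."""
    representatives = []
    for _ in range(rounds):
        candidates = synthesize(regions)
        # user_picks stands in for a real user; the default just keeps the first candidate.
        chosen = (user_picks or (lambda cs: cs[:1]))(candidates)
        representatives.extend(chosen)
    return representatives

if __name__ == "__main__":
    region_db = ["sky", "sea", "mountain", "building", "tree", "road", "person"]
    for image in interactive_selection(region_db):
        print(image)
```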
  • Ryotaro Suzuki, Michihiko Minoh, Makoto Tadenuma
    Type: Article
    Session ID: AIT2001-68
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we discuss methods for extracting rhythmic information from movie components and for synchronizing those components based on that information. Generally speaking, there are two kinds of rhythm: fixed rhythm and free rhythm. In addition, another type of rhythm with a certain fluctuation in its frequency can be introduced in between. We developed two synchronization methods corresponding to these rhythm types: frequency synchronization and counterpoint synchronization. Frequency synchronization, developed for fixed and fluctuating rhythm, fits the frequencies and phases of the components using the least-squares method (a rough fitting sketch follows this entry). Counterpoint synchronization, developed for free rhythm, matches the temporal key points of the components as closely as possible. We conducted a synchronization recognition assessment test to evaluate these methods, and the results show that the difference between the methods suits the difference in the rhythmic features.
    Download PDF (1090K)
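A rough illustration of the least-squares frequency and phase fitting mentioned for the frequency-synchronization method. The linear beat model and the example data are assumptions, not the authors' formulation.

```python
# Hypothetical least-squares fit of a fixed rhythm: given observed beat times,
# estimate the period (1/frequency) and phase offset so another component can
# be stretched and shifted to match. Not the paper's exact model.
import numpy as np

def fit_rhythm(beat_times):
    """Fit beat_times[k] ~= period * k + phase by linear least squares."""
    k = np.arange(len(beat_times))
    A = np.column_stack([k, np.ones_like(k, dtype=float)])
    (period, phase), *_ = np.linalg.lstsq(A, np.asarray(beat_times, float), rcond=None)
    return period, phase

if __name__ == "__main__":
    # Beats roughly every 0.5 s, starting near 0.1 s, with small timing noise.
    observed = [0.11, 0.62, 1.09, 1.61, 2.12, 2.58]
    period, phase = fit_rhythm(observed)
    print(f"period={period:.3f}s  frequency={1/period:.2f}Hz  phase={phase:.3f}s")
```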
  • Kazunobu Azuma, Hirohiko Mori, Makoto Kosugi
    Type: Article
    Session ID: AIT2001-69
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose an approach for extracting the rules of melody that allow human beings to recognize a collection of sounds as music. In this approach, sounds created using random numbers and rules are evaluated by human listeners, and the results of this evaluation are evaluated again to extract the rules. With this approach, we were able to discover rules of music that could not be found through past approaches (the music-studies approach and the cognitive-science approach). The rules used in this research give constraints, for example, on the pitch and length of sounds, as well as constraints between groups of sounds.
    Download PDF (787K)
  • Tsuyoshi Adachi, Haruhiro Katayose
    Type: Article
    Session ID: AIT2001-70
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Most rhythm games use the gap between a prescribed time and the user's button push to evaluate the user's sense of rhythm. This study aims to develop an evaluation model of a player's rhythmic sense for gestures that are input as continuous quantitative data. The model evaluates both the precision and the expressive variety of gestures, and is implemented as a rhythm game on Pure Data, a visual programming environment. The proposed system can evaluate the rhythm of dance to music more naturally than other existing dance games. (A toy timing-gap scorer follows this entry.)
    Download PDF (740K)
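A toy version of the timing-gap scoring that conventional button-press rhythm games use, as contrasted with the paper's model for continuous gesture data. The thresholds and grade names are invented for illustration.

```python
# Hypothetical timing-gap evaluation: compare each input time with the nearest
# prescribed beat. Thresholds and grade labels are assumptions.

def grade_hits(prescribed, inputs, good=0.05, ok=0.12):
    """Return a grade for each input based on its gap to the closest beat (seconds)."""
    grades = []
    for t in inputs:
        gap = min(abs(t - b) for b in prescribed)
        if gap <= good:
            grades.append("good")
        elif gap <= ok:
            grades.append("ok")
        else:
            grades.append("miss")
    return grades

if __name__ == "__main__":
    beats = [0.5, 1.0, 1.5, 2.0]
    presses = [0.52, 1.09, 1.67, 1.98]
    print(grade_hits(beats, presses))  # ['good', 'ok', 'miss', 'good']
```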
  • Mamiko SAKATA, Mariko SHIBA, Makoto TADENUMA
    Type: Article
    Session ID: AIT2001-71
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper reports on the difference between evaluations of real movies and of their reproductions in 3-D animation. The results show that, although the same kinds of emotions can be reproduced, the strength of the emotions appears to weaken when they are reproduced in 3-D animation, and it is difficult to reproduce the subjective features of body movements.
    Download PDF (1106K)
  • Masayuki INOUE, Yuichi IWADATE, Ryotaro SUZUKI, Mariko SHIBA, Makoto T ...
    Type: Article
    Session ID: AIT2001-72
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Kansei feature extraction from body movements has been investigated as part of the development of image-expression techniques using multimedia. We chose dance as the subject of our research because dance performers can present fundamental and typical motion patterns efficiently. Three psychological factors were extracted through an SD (semantic differential) test and principal component analysis, and, by applying multiple regression analysis, it is shown that human body movement can be expressed as linear combinations of physical parameters measured from dance scenes (a compact sketch of this pipeline follows this entry).
    Download PDF (906K)
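A compact sketch of the analysis pipeline named in this abstract: principal component analysis on SD ratings, then multiple regression of a factor score against physical motion parameters. All data below are random stand-ins, and the parameter names are assumptions, not the study's measurements.

```python
# Hypothetical pipeline: PCA on semantic-differential ratings of dance scenes,
# then multiple linear regression of a principal component score on physical
# motion parameters. Random placeholder data throughout.
import numpy as np

rng = np.random.default_rng(0)
n_scenes = 30
sd_ratings = rng.normal(size=(n_scenes, 10))        # 10 SD adjective-pair scales
physical   = rng.normal(size=(n_scenes, 4))         # e.g. speed, acceleration, spread, height

# --- principal component analysis on the SD ratings ---
centered = sd_ratings - sd_ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:3].T                         # first three psychological factors

# --- multiple regression: factor-1 score as a linear combination of physical params ---
X = np.column_stack([physical, np.ones(n_scenes)])   # add intercept term
coef, *_ = np.linalg.lstsq(X, scores[:, 0], rcond=None)
print("regression coefficients (speed, accel, spread, height, intercept):", coef)
```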
  • Ikuo Harada, Makoto Tadenuma, Takahiro Nakai, Ryotaro Suzuki, Naoto Hi ...
    Type: Article
    Session ID: AIT2001-73
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Human activity that expresses emotion is significantly important in non-verbal communication, and human body motion, such as a dance performance, is one of the key factors in expressing such emotion. In this paper, an effective method for extracting emotional information in real time from dance images is described. A novel heuristic function is also proposed to evaluate the degree of concert between two dancers. As a typical application, a dance system called MIDAS, which interactively combines dancers' images with video, sound, and CG characters, is introduced. MIDAS can visualize how well the dancers are in concert with each other and encourage their expression.
    Download PDF (1009K)
  • Type: Appendix
    Pages App1-
    Published: May 25, 2001
    Released: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (82K)