The Journal of The Institute of Image Information and Television Engineers
Online ISSN : 1881-6908
Print ISSN : 1342-6907
ISSN-L : 1342-6907
Volume 63, Issue 2
Displaying 1-19 of 19 articles from this issue
Focus
Message from Honorary Member: For Members Carrying on the Next Generation
Special Edition
New Technology for Production of Digital Contents
Topics
Technical Survey
Technical Guide
Embedded Technology for Image Processing Engineers
Keywords you should know
My Recommendations on Research and Development Tools
Fresh Eyes-Introduction of Video Research Laboratory-
News
  • Yudai Shinoki, Hironobu Fujiyoshi
    2009Volume 63Issue 2 Pages 209-215
    Published: February 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Personal video sharing services such as YouTube have become popular because videos can easily be recorded in high definition (HD) with a personal camcorder. However, it is difficult to broadcast an HD video via the Internet because of the large amount of data involved. We describe a method for generating videos with virtual camerawork based on object-tracking technology. Once the user specifies the positions of the region of interest (ROI) on keyframes, our method generates virtual camerawork between two consecutive keyframes based on the results of bi-directional tracking. We evaluated our method with subjective experiments that demonstrate its effectiveness.
    Download PDF (7347K)
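The virtual-camerawork idea in the abstract above can be sketched as interpolating crop rectangles between user-specified keyframe ROIs. This is a minimal illustration under assumptions, not the authors' tracking-based method: the linear easing and the function names are inventions for clarity, and the real system refines the in-between ROIs with bi-directional object tracking.

```python
def interpolate_roi(roi_a, roi_b, t):
    """Linearly blend two ROI rectangles (x, y, w, h) at parameter t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(roi_a, roi_b))

def virtual_camerawork(roi_start, roi_end, n_frames):
    """Generate a sequence of crop rectangles between two keyframe ROIs.

    Each rectangle defines a virtual-camera crop for one output frame.
    """
    return [interpolate_roi(roi_start, roi_end, i / (n_frames - 1))
            for i in range(n_frames)]

# Example: pan the virtual camera from the top-left to an offset position.
frames = virtual_camerawork((0, 0, 100, 100), (50, 50, 100, 100), 3)
```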
  • Kenji Hirose, Koji Nishio, Ken-ichi Kobori
    2009Volume 63Issue 2 Pages 216-221
    Published: February 01, 2009
    Released on J-STAGE: May 01, 2010
    JOURNAL FREE ACCESS
    Automatic Generation of Character Behavior by Giving Motion Data to Objects
    Recently, virtual-space design with high-quality three-dimensional CG has become possible thanks to rapid improvements in computer performance. To generate a character's behavior, motion data must be applied to the CG character after being created with a motion-capture device or by hand. However, a new series of motion data must be created whenever the scene changes, and this complicated work raises production costs for creators. We have therefore developed a technique for automatically generating character motion data by attaching motion information to the objects that compose a virtual-space scene. Each object thus includes the motion data that cause characters to act.
    Download PDF (1263K)