ITE Technical Report
Online ISSN : 2424-1970
Print ISSN : 1342-6893
ISSN-L : 1342-6893
Volume 22, Issue 2
Displaying 1-9 of 9 articles from this issue
  • Article type: Cover
    Pages Cover1-
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (17K)
  • Article type: Index
    Pages Toc1-
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (49K)
  • SeoWon Han, JoonWhaon LEE, Masayuki NAKAJIMA
    Article type: Article
    Pages 1-6
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Current virtual studios use the chroma-key method, in which an image is captured and the blue portion of that image is replaced by a graphic image or a real image. The replacement image must be changed according to the camera motion. This paper proposes a novel method for extracting camera parameters by recognizing pentagonal patterns painted on the blue screen. The parameters concerned are the position, direction, and focal length of the camera in the virtual studio. First, the pentagonal patterns are extracted through image processing, and the matched points of two projected patterns are found using invariant features of the pentagon. Then the projective transformation between the two projected images and the camera parameters are calculated from the matched points (a minimal sketch of this homography-estimation step follows this entry). Simulation results indicate that the camera parameters are calculated more easily than with conventional methods.
    Download PDF (522K)
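The abstract above computes the projective transformation between two projected pattern images from matched points. Below is a minimal sketch of that step using the standard direct linear transform (DLT), one common way to estimate such a transformation; the function name, the synthetic pentagon coordinates, and H_true are illustrative and not taken from the paper.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 projective transformation (homography) mapping
    src_pts to dst_pts via the direct linear transform (DLT).
    Both inputs are (N, 2) arrays with N >= 4 matched points."""
    assert len(src_pts) >= 4 and len(src_pts) == len(dst_pts)
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two rows of the DLT system A h = 0.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so the bottom-right entry is 1

# Example: five matched corners of a pentagonal pattern (synthetic values).
src = np.array([[0, 0], [2, 0], [2.6, 1.9], [1, 3], [-0.6, 1.9]])
H_true = np.array([[1.1, 0.1, 5.0], [0.0, 0.9, 3.0], [1e-3, 2e-3, 1.0]])
dst_h = (H_true @ np.c_[src, np.ones(len(src))].T).T
dst = dst_h[:, :2] / dst_h[:, 2:]
print(np.round(estimate_homography(src, dst), 3))  # recovers H_true
```

With the homography known, camera position, direction, and focal length can then be decomposed from it; that decomposition is not shown here.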
  • Michitoshi Ishiwaka, Seiki Inoue
    Article type: Article
    Pages 7-12
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    For virtual studio systems and media-art productions, it is both difficult and important to design effective interactions between the performer who appears in the studio and the virtual studio equipment, and between the audience and the artistic work. The difficulty stems from the lack of fine-grained components and of a framework for interaction design. In this article, a layered authoring model is proposed to help system designers and artists construct such systems more easily and flexibly from fine-grained media components. Several experimental interactive virtual environments based on the model are also presented.
    Download PDF (791K)
  • Cassio M. Yorozuya, Yu Liu, Masayuki Nakajima
    Article type: Article
    Pages 13-18
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper presents a new prediction method, the region-based prediction (RBP) model, in which the context used for prediction consists of regions instead of individual pixels. A useful property of RBP is that it can partition a cartoon image into two distinct types of regions: one containing full-color backgrounds, the other containing boundaries, edges, and homochromatic areas. With the development of computer techniques, synthetic images created with computer graphics (CG) have become increasingly common, so it is imperative to compress such images, for example CG-generated cartoon animation, efficiently for storage of finite capacity and transmission over narrow bandwidth. This paper uses the RBP property to partition cartoon images into the two region types and then applies a lossy compression method to the full-color regions and a lossless compression method to the homochromatic and boundary regions. Two partitioning criteria are described: a constant criterion and a variable criterion. The latter, expressed as a linear function, gives a different classification threshold depending on the content of the image of interest (an illustrative partitioning sketch follows this entry). We carried out experiments applying the method to a cartoon animation sequence. Compared with the MPEG-1 compression standard, our method gives superior results in both compression ratio and complexity.
    Download PDF (683K)
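The abstract above splits a cartoon frame into full-color regions (coded lossily) and homochromatic/boundary regions (coded losslessly), using either a constant or a variable, linearly defined threshold. The sketch below illustrates only that two-way, block-wise split under assumed definitions: the per-block variance statistic and the linear threshold t0 + a * mean are stand-ins, not the paper's actual RBP criteria.

```python
import numpy as np

def partition_blocks(img, block=8, mode="variable", t0=40.0, a=0.1):
    """Toy partition of a grayscale image into 'full-color' (label 1) and
    'homochromatic/boundary' (label 0) blocks, in the spirit of the paper's
    two-region split.  The variance statistic and the linear variable
    threshold t0 + a * mean are illustrative assumptions only."""
    h, w = img.shape
    labels = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            tile = img[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float)
            thresh = t0 if mode == "constant" else t0 + a * tile.mean()
            labels[by, bx] = 1 if tile.var() > thresh else 0
    return labels

# Usage sketch: flat left half -> lossless path, busy right half -> lossy path.
frame = np.zeros((64, 64), dtype=np.uint8)
frame[:, 32:] = (np.random.rand(64, 32) * 255).astype(np.uint8)
print(partition_blocks(frame, mode="variable"))
```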
  • Naoki ONO, Satoshi ISHIBASHI, Naoki KOBAYASHI
    Article type: Article
    Pages 19-24
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    In this paper, we propose a lossless motion-video coding method that uses motion compensation. For real-time coding and transmission, we developed a lossless video codec based on the proposed method and confirmed that it functions correctly in an experiment involving coding and transmission over an actual ATM network. Furthermore, we propose a lossless coding method with bit-plane expansion under a constant bitrate. Two approaches, a closed bit-plane approach and a merged bit-plane approach, are considered, and their compression-efficiency characteristics are estimated (a bit-plane decomposition sketch follows this entry). Simulation results show that the proposed method is suitable for lossless video coding with bitrate control.
    Download PDF (667K)
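The abstract above codes motion-compensated residuals losslessly and manages the rate by expanding them into bit-planes. The sketch below shows only the generic bit-plane decomposition and reassembly of a residual frame; the paper's closed and merged bit-plane structures and its ATM transmission layer are not modelled, and all names are illustrative.

```python
import numpy as np

def to_bit_planes(residual, nbits=9):
    """Split a motion-compensated residual frame into bit-planes (MSB first).
    The residual of 8-bit video lies in [-255, 255]; shifting by 255 maps it
    onto [0, 510], which fits in 9 bits.  Keeping all planes is lossless;
    dropping low-order planes trades fidelity for rate."""
    shifted = residual.astype(np.int32) + 255
    return [((shifted >> b) & 1).astype(np.uint8) for b in range(nbits - 1, -1, -1)]

def from_bit_planes(planes, nbits=9):
    """Reassemble a residual from its bit-planes (MSB first)."""
    shifted = np.zeros(planes[0].shape, dtype=np.int32)
    for b, plane in zip(range(nbits - 1, -1, -1), planes):
        shifted |= plane.astype(np.int32) << b
    return shifted - 255

cur = np.random.randint(0, 256, (4, 4))
pred = np.random.randint(0, 256, (4, 4))   # stand-in for the MC prediction
planes = to_bit_planes(cur - pred)
assert np.array_equal(from_bit_planes(planes), cur - pred)   # lossless round trip
```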
  • Yoshiaki Shishikui, Yutaka Kaneko, Shinichi Sakaida, Wantao Zheng, Yuj ...
    Article type: Article
    Pages 25-30
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    This paper describes the concept of object-based coding for future broadcasting environments. Digital broadcasting uses the MPEG-2 coding scheme, which can be regarded as picture-based coding. An object-based coding scheme is a potential candidate for future broadcasting, both for studio and distribution use, as it offers higher compression and more flexible content handling. The paper also describes key technologies that we have been developing for object-based coding, e.g., image analysis, object extraction, and coding of objects.
    Download PDF (774K)
  • S.H. Joo, H. Kikuchi, S. Sasaki, J. Shin
    Article type: Article
    Pages 31-36
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    We introduce a new zerotree scheme that effectively exploits the inter-scale self-similarities found in the octave decomposition produced by a wavelet transform. A zerotree is useful for coding wavelet coefficients efficiently, and its effectiveness was demonstrated by Shapiro's EZW. In that coding scheme, wavelet coefficients are symbolized and then entropy-coded for further compression. The entropy per symbol is determined from the produced symbols, and the final coded size is calculated by multiplying the entropy by the total number of symbols (a short worked example of this accounting follows this entry). In this paper, we analyze the symbols produced by EZW and discuss the entropy per symbol. Since the entropy depends on the produced symbols, we modify the symbol-output procedure accordingly. First, we extend the parent-child relation used in EZW to raise the probability that a significant parent has significant children. The proposed relation is extended flexibly, based on the observation that a significant coefficient is very likely to have significant coefficients in its neighborhood. This extension is reasonable because an image is decomposed by convolution with a wavelet filter, so neighboring coefficients are not independent of each other.
    Download PDF (559K)
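The abstract above estimates the final coded size as the entropy per symbol multiplied by the total number of symbols. The short example below works through exactly that accounting on a hypothetical stream of EZW-style symbols; the symbol labels and counts are made up for illustration.

```python
import math
from collections import Counter

def estimated_coded_size_bits(symbols):
    """Estimate coded size as (entropy per symbol) * (number of symbols).
    'symbols' is any sequence of zerotree symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy * n

# Hypothetical EZW-style stream: significant positive/negative, zerotree root,
# and isolated zero symbols.
stream = ["ZTR"] * 60 + ["POS"] * 20 + ["NEG"] * 12 + ["IZ"] * 8
print(round(estimated_coded_size_bits(stream), 1), "bits for", len(stream), "symbols")
```

Skewing the symbol distribution toward zerotree roots, as the proposed parent-child extension aims to do, lowers the entropy per symbol and hence the estimated coded size.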
  • Article type: Appendix
    Pages App1-
    Published: January 15, 1998
    Released on J-STAGE: June 23, 2017
    CONFERENCE PROCEEDINGS FREE ACCESS
    Download PDF (77K)