The Journal of the Institute of Television Engineers of Japan
Online ISSN : 1884-9652
Print ISSN : 0386-6831
ISSN-L : 0386-6831
Volume 50, Issue 10
Displaying 1-30 of 30 articles from this issue
  • Kazuhiko Goto
    1996 Volume 50 Issue 10 Pages 1422-1429
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
  • New Development of Medical Image Processing in Multimedia Environment
    Junichiro Toriwaki
    1996 Volume 50 Issue 10 Pages 1431-1434
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (722K)
  • Shigeki Yokoi
    1996 Volume 50 Issue 10 Pages 1435-1438
    Published: October 20, 1996
    Released on J-STAGE: August 17, 2011
    JOURNAL FREE ACCESS
    Download PDF (561K)
  • Akira Iwata
    1996 Volume 50 Issue 10 Pages 1439-1443
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (903K)
  • Masatoshi Tsuzaka
    1996 Volume 50 Issue 10 Pages 1444-1448
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (638K)
  • Naoki Suzuki, Akihiro Takatsu
    1996 Volume 50 Issue 10 Pages 1449-1451
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (1211K)
  • Jun-ichi Hasegawa
    1996 Volume 50 Issue 10 Pages 1452-1454
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (865K)
  • Artificial Life (I) Fundamental Theory and Frameworks
    Yasuo Yonezawa
    1996 Volume 50 Issue 10 Pages 1455-1462
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (1226K)
  • Sumio Jijima
    1996 Volume 50 Issue 10 Pages 1463-1465
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (823K)
  • Hiroshi Nagahashi, Kenji Komine, Takanori Nagae, Takeshi Agui
    1996 Volume 50 Issue 10 Pages 1467-1473
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Plant modeling is indispensable for natural scene description. However, compared with procedural modeling based on fractal theory, plant modeling based on computer vision or other shape acquisition techniques is not popular, because both the shape and the topology of plants are too complicated to be acquired from natural multiple views. In this paper, we introduce a method for acquiring the shape of a potted plant as a volume, segmenting it, and then modeling the parts with several geometric primitives using some knowledge about the parts, i.e., the leaves, stems, and pot. We discuss the capabilities of computer vision as a modeling tool for computer graphics by investigating the entire process from image acquisition to rendering.
    Download PDF (1950K)
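    The abstract does not spell out how the plant volume is acquired from multiple views; one common way to obtain such a volume from calibrated views is silhouette carving (visual hull). The sketch below assumes binary silhouettes and camera projection matrices are already available; the function name and the carving scheme are our illustration, not necessarily the authors' method.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_points):
    """Keep the candidate 3D points whose projection falls inside every
    binary silhouette (a generic visual-hull sketch, not the paper's method).
    `projections` are 3x4 camera matrices, assumed given."""
    grid_points = np.asarray(grid_points, float)
    keep = np.ones(len(grid_points), dtype=bool)
    pts_h = np.hstack([grid_points, np.ones((len(grid_points), 1))])  # homogeneous
    for sil, P in zip(silhouettes, projections):
        uvw = pts_h @ P.T                                   # project into this view
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (0 <= u) & (u < sil.shape[1]) & (0 <= v) & (v < sil.shape[0])
        keep &= inside                                      # outside the image: discard
        keep[inside] &= sil[v[inside], u[inside]] > 0       # outside the silhouette: discard
    return grid_points[keep]
```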
  • Yumiko Tatsuno, Satoshi Suzuki, Naokazu Yokoya, Hidehiko Iwasa, Haruo ...
    1996 Volume 50 Issue 10 Pages 1474-1481
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    In most conventional approaches to the synthesis of human facial expressions, facial images are generated by manually moving feature points on a face based on the concept of FACS (Facial Action Coding System), primarily with 3D models such as a wireframe model. This paper describes a synthesis-by-analysis approach using range images for producing 3D images of the human face with primary expressions. First, view-independent representations of the 3D locations of facial feature points are obtained using an object-centered coordinate system defined on the face. Then we quantify the feature point locations for the neutral expression and six primary expressions. Finally, applying an image warping technique to both registered range and surface texture images, we generate 3D facial expression images from a neutral expression image and the motion vectors of the facial feature points.
    Download PDF (2042K)
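    The final step above, warping a registered image by the motion vectors of facial feature points, can be illustrated with a dense backward warp interpolated from the sparse vectors. The function below is a minimal grayscale sketch; the interpolation scheme and all names are our assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import map_coordinates

def warp_by_feature_motion(image, feat_pts, motion_vecs):
    """Warp a grayscale image by sparse feature-point motion vectors:
    interpolate the sparse displacements into a dense field, then
    resample the image by backward mapping."""
    feat_pts = np.asarray(feat_pts, float)      # (n, 2) feature points as (y, x)
    motion_vecs = np.asarray(motion_vecs, float)  # (n, 2) displacements as (dy, dx)
    h, w = image.shape
    gy, gx = np.mgrid[0:h, 0:w]
    dy = griddata(feat_pts, motion_vecs[:, 0], (gy, gx), method='linear', fill_value=0.0)
    dx = griddata(feat_pts, motion_vecs[:, 1], (gy, gx), method='linear', fill_value=0.0)
    coords = np.array([gy - dy, gx - dx])       # backward mapping coordinates
    return map_coordinates(image.astype(float), coords, order=1)
```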
  • Mikio Yoshida, Yuri A. Tijerino, Tsutomu Miyasato, Fumio Kishino
    1996 Volume 50 Issue 10 Pages 1482-1488
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    This paper proposes an intuitive interface system for designing 3D shapes in the physical space between a computer and its user. The system allows a 3D shape to be generated from hand gestures combined with verbal expressions that attempt to describe the shape. Current 3D modeling tools do not serve shape design well because they rely primarily on two-dimensional pointing devices that are not intuitive to use. If we use a virtual environment as an arena for exploring ideas about 3D shape design, we can describe 3D objects intuitively. Our approach is to convert gesture trajectory point positions into a generalized super-quadric function representation, and to use verbal expressions to constrain the vibration modes of that representation. We implemented such an interface system, which integrates hand gestures and verbal expressions to represent a 3D shape.
    Download PDF (1555K)
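    The generalized super-quadric is the shape representation onto which the gesture trajectory is mapped. For reference, the sketch below samples points on a plain (non-deformed) superquadric surface from its size and squareness parameters; the parameter names and defaults are illustrative only, not the authors' formulation.

```python
import numpy as np

def superquadric(a=(1.0, 1.0, 1.0), eps=(1.0, 1.0), n=40):
    """Sample an n-by-n grid of points on a superquadric surface with
    semi-axes `a` and squareness exponents `eps` (standard parametrization)."""
    def spow(base, p):                     # signed power keeps the sign of the base
        return np.sign(base) * np.abs(base) ** p
    eta = np.linspace(-np.pi / 2, np.pi / 2, n)[:, None]   # latitude
    omg = np.linspace(-np.pi, np.pi, n)[None, :]           # longitude
    e1, e2 = eps
    x = a[0] * spow(np.cos(eta), e1) * spow(np.cos(omg), e2)
    y = a[1] * spow(np.cos(eta), e1) * spow(np.sin(omg), e2)
    z = a[2] * spow(np.sin(eta), e1) * np.ones_like(omg)
    return x, y, z
```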
  • Tomoaki Nakano, Kazunori Higuchi, Shin Yamamoto
    1996 Volume 50 Issue 10 Pages 1489-1495
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    In recent years, the use of in-car display systems such as car navigation systems and traffic information terminals has been increasing. Furthermore, the elderly population is growing rapidly. To improve the visibility of in-car display systems for drivers, it is essential that young designers be able to experience the visual performance of older people. This paper describes a system that simulates the visual functions of elderly people and their aging. The proposed system is composed of image input devices, a computer, and a display device. It is implemented with image processing techniques intended to visualize three properties of human vision: spatial response, the spectral transmittance of the crystalline lens, and accommodation. We conclude that this system plays an important role in understanding the virtual driving view of elderly people.
    Download PDF (1885K)
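    A rough idea of how the three vision properties could be visualized with image processing: a per-channel spectral gain stands in for the yellowed crystalline lens, and a Gaussian low-pass filter stands in for reduced spatial response and accommodation. The transmittance values and blur strength below are placeholders, not the paper's measured data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder per-channel attenuation mimicking a yellowed crystalline lens
# (blue light absorbed most); these are NOT the paper's measured values.
LENS_TRANSMITTANCE = np.array([0.95, 0.85, 0.60])   # R, G, B gains

def simulate_aged_view(rgb, blur_sigma=2.0):
    """Very rough sketch: spectral filtering plus low-pass filtering of an
    RGB image (uint8, shape (h, w, 3)) to suggest an aged visual impression."""
    out = rgb.astype(float) / 255.0
    out = out * LENS_TRANSMITTANCE                    # lens transmittance
    for c in range(3):                                # loss of spatial resolution
        out[..., c] = gaussian_filter(out[..., c], blur_sigma)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```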
  • Kazuyuki Ebihara, Jun Ohya, Fumio Kishino
    1996 Volume 50 Issue 10 Pages 1497-1506
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    A new method for the real-time detection of facial expressions from time-sequential images is proposed. Unlike the current implementation for virtual space teleconferencing, the proposed method does not require tape marks pasted on the face to detect expressions in real time. In the proposed method, four windows are applied to four areas of a facial image: the left and right eyes, the mouth, and the forehead. Each window is divided into blocks of 8 by 8 pixels. The discrete cosine transform (DCT) is applied to each block, and the feature vector of each window is obtained by summing the DCT energies in the horizontal, vertical, and diagonal directions. To convert the DCT features to virtual tape mark movements, we represent the displacement of a virtual tape mark by a polynomial of the DCT features for the three directions. We apply a genetic algorithm to training facial expression image sequences in order to find the optimal set of coefficients that minimizes the difference between the real and converted displacements of the virtual tape marks. Experimental results show the effectiveness of the proposed method.
    Download PDF (2263K)
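    The window feature described above (block DCT with energies summed along three directions) can be sketched as follows. The exact grouping of DCT coefficients into horizontal, vertical, and diagonal sets is our guess; the paper may group them differently.

```python
import numpy as np
from scipy.fft import dctn   # 2-D DCT-II

def window_feature(window):
    """Compute a 3-element feature (horizontal, vertical, diagonal DCT energy)
    for one facial window (2-D grayscale array), summed over 8x8 blocks."""
    h, w = window.shape
    feat = np.zeros(3)
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            c = dctn(window[y:y+8, x:x+8].astype(float), norm='ortho')
            e = c ** 2                       # per-coefficient DCT energy
            feat[0] += e[0, 1:].sum()        # horizontal detail (first row, DC excluded)
            feat[1] += e[1:, 0].sum()        # vertical detail (first column)
            feat[2] += np.trace(e[1:, 1:])   # diagonal detail (main diagonal)
    return feat
```

    The four window features (left eye, right eye, mouth, forehead) would then be concatenated before the polynomial mapping step.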
  • Yu Liu, Masayuki Nakajima
    1996 Volume 50 Issue 10 Pages 1507-1514
    Published: October 20, 1996
    Released on J-STAGE: August 17, 2011
    JOURNAL FREE ACCESS
    This paper presents a lossless hybrid coding scheme for compressing computer cartoon images, such as those used in TV cartoon programs and computer games. Such images are usually composed of simple multi-color characters/graphs and complicated full-color backgrounds. The hybrid coding scheme is based on the basic principle of Structure Runlength Coding (SRC) combined with predictive coding. It solves two main problems in the conventional methods: SRC cannot cope with complicated full-color regions, and predictive DPCM coding cannot efficiently compress simple multi-color regions. Through an evaluation of entropy, we show that our hybrid coding scheme is superior to the conventional methods at compressing these computer cartoon images. Compared with adaptive DPCM coding, our hybrid coding can decrease the entropy by about 1 bit per pixel on average. Compared with SRC, it can decrease the entropy by up to 5 bits per pixel in the case of complicated backgrounds. Experimental results show high compression ratios.
    Download PDF (1726K)
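    The entropy comparison above can be reproduced in spirit by measuring the first-order entropy of prediction residuals. The sketch below uses a plain left-neighbour DPCM predictor as a stand-in for the adaptive DPCM and omits the structure-runlength part entirely; both function names are ours.

```python
import numpy as np

def first_order_entropy(symbols):
    """Empirical entropy in bits/symbol of an integer-valued array."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dpcm_residual(img):
    """Left-neighbour prediction residual of a grayscale image; the first
    column is left as raw pixel values."""
    img = img.astype(np.int32)
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]
    return res
```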
  • Kazuhito Murakami, Hiroyasu Koshimizu
    1996 Volume 50 Issue 10 Pages 1515-1521
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Our research aims to clarify the mechanism of sensitivity in human vision by developing a computerized facial caricaturing system, and at the same time to examine the significance of facial caricatures as a nonverbal interface medium. To strengthen our computerized caricaturing system, called PICASSO, as a flexible caricaturing tool, it is essential that the system be able to add various kinds of facial expressions. In this paper, a dynamic mechanism for changing emotional facial expressions is realized using a set of different mean faces and a piecewise linear transform method. In many cases, the caricatures are evaluated favorably both by humans and by the system. Furthermore, the effectiveness of emotional expression in caricatures was confirmed in cognitive experiments in a networked 'talk' environment.
    Download PDF (1041K)
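    The mean-face mechanism at the heart of this kind of caricaturing amounts to exaggerating a face's deviation from a reference mean face; swapping in a different mean face per emotion then shifts the expression, as the abstract describes. The sketch below shows only that exaggeration step, with an illustrative gain value; it is not the PICASSO implementation itself.

```python
import numpy as np

def caricature_points(face_pts, mean_pts, gain=1.5):
    """Exaggerate a face by scaling its deviation from a mean face:
    gain > 1 exaggerates, gain = 1 reproduces the input, gain < 1 averages."""
    face_pts = np.asarray(face_pts, float)   # (n, 2) facial feature points
    mean_pts = np.asarray(mean_pts, float)   # (n, 2) corresponding mean-face points
    return mean_pts + gain * (face_pts - mean_pts)
```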
  • Masanobu Yamamoto, Soo Hyeon Seong
    1996 Volume 50 Issue 10 Pages 1522-1527
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    We propose a novel method of estimating the three-dimensional (3D) motion parameters of an object using two successive images from a single camera. The method first recovers the 3D shape of the object from shading in each image, then estimates the 3D motion parameters from the two successive 3D shapes, and finally determines the optical flow, whereas existing methods begin with optical flow estimation under the assumption of constant brightness during motion. Our method is therefore useful even when the brightness of a moving pixel does not remain constant.
    Download PDF (1099K)
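    Estimating 3D motion parameters between two recovered shapes is, in the rigid case, an absolute-orientation problem. The sketch below uses the standard Kabsch/Procrustes solution on corresponding 3D points as a generic stand-in; the paper's actual estimator may differ.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P_i + t
    for corresponding 3D point sets P, Q of shape (n, 3) (Kabsch algorithm)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```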
  • Suguru Saito, Masayuki Nakajima
    1996 Volume 50 Issue 10 Pages 1528-1535
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Recently, the main focus of computer graphics has been the creation of photo-realistic images. In this paper, we consider hand-painted images, with special emphasis on generating them from natural pictures taken by a camera. We discuss the position of our painting algorithm within computer graphics and then explain the local image analysis, which uses the Fourier transform and an oriented band-pass filter. We then describe two painting algorithms based on the data obtained from this analysis. One is essentially a blurring algorithm, in which each local region undergoes a blurring transform that depends on that region. In the other, the image is formed by painting curved lines produced by brush strokes. We show hand-painted images generated with these algorithms.
    Download PDF (2784K)
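    The local image analysis (Fourier transform plus an oriented band-pass filter) can be approximated with a small bank of Gabor filters whose strongest response picks a stroke orientation per pixel. The kernel parameters and the orientation-selection rule below are our choices, not the paper's.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
    """Oriented band-pass (Gabor) kernel at orientation theta (radians)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def dominant_orientation(gray, n_orient=8):
    """Per pixel, pick the orientation whose Gabor response magnitude is
    strongest; brush strokes could then follow that orientation."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = [np.abs(convolve(gray.astype(float), gabor_kernel(t))) for t in thetas]
    return np.argmax(np.stack(responses), axis=0) * (np.pi / n_orient)
```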
  • Kazunori Yoshida, Jun Ohya, Fumio Kishino, Hiromi Tanaka
    1996 Volume 50 Issue 10 Pages 1536-1542
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    We propose a shape reconstruction and color inference method for 3D objects based on planar symmetry using an active camera. The method efficiently creates 3D models of objects from binary silhouettes in several views acquired by the active camera, whose viewpoints are selected recursively based on the symmetry planes of the observed objects. The lighting condition is measured using color samples before the color of the observed objects is inferred. The original color at a point on the object surface is inferred from the observed RGB data of the color samples, the RGB data of the object observed by the CCD camera, and the surface normal at that point.
    Download PDF (6605K)
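    The color-inference step can be caricatured as a per-channel gain estimated from color samples of known reflectance; the real method also uses the surface normal, which this much-simplified sketch ignores. All names are ours.

```python
import numpy as np

def infer_surface_color(observed_rgb, sample_observed, sample_true):
    """Correct observed object colors with per-channel gains derived from
    color samples whose true reflectance is known (diagonal illuminant model)."""
    gain = np.asarray(sample_true, float) / np.asarray(sample_observed, float)
    return np.asarray(observed_rgb, float) * gain
```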
  • Takayuki Miyazawa, Yoshinori Mochizuki, Kunio Kondo, Hisashi Sato, Shi ...
    1996 Volume 50 Issue 10 Pages 1543-1548
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Technical illustrations often adopt exaggerated images rather than geometrically correct projections of models in order to emphasize the properties of shapes to customers. Besides changing colors and highlighting, geometrical sizes are varied to attract attention. This paper introduces a geometrical tool for modifying three-dimensional models in perspective space while keeping the model data unchanged. Parts of an image close to the eye are often exaggerated in normal photographic projection. Our projection method supports the work of technical illustrators.
    Download PDF (1602K)
  • Masaki Hayashi, Kazuo Fukui, Yasumasa Ito, Nobuyuki Yagi
    1996 Volume 50 Issue 10 Pages 1549-1557
    Published: October 20, 1996
    Released on J-STAGE: August 17, 2011
    JOURNAL FREE ACCESS
    We have developed several kinds of virtual studio systems that combine a number of different images, including CG pictures synchronized with camera work, to create visual effects which make it look as if someone in a CG image is actually being filmed. In a conventional virtual studio, however, virtual shooting is limited by the physical studio space when filming an actor in front of a chroma-key blue screen. We have designed and developed a new system capable of operating a camera over a very wide range that is physically impossible by any conventional means. This is made possible by combining a real-time CG image with an image created by geometrical transformation of the foreground picture shot by a motion-control camera. The system shoots the object by moving the real camera within the range where the motion-control camera can physically move; outside this range, it controls the image processor based on the principle of virtual shooting. In this way, the system achieves real-time camera work over an extremely wide range in a physically limited studio space. In this paper, we present the virtual shooting method and study its limitations when it is used to replace real shooting. We then describe the system based on the results of this study and introduce some examples applied to TV programs.
    Download PDF (2670K)
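    The geometrical transformation of the foreground picture can be illustrated by warping a frame through a 3x3 homography with inverse mapping. The sketch below handles a single grayscale frame; how the homography would be derived from the virtual camera parameters is outside its scope and the function name is ours.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_homography(image, H):
    """Backward-map a grayscale frame through the 3x3 homography H:
    for each output pixel, sample the input at H^-1 applied to it."""
    h, w = image.shape
    gy, gx = np.mgrid[0:h, 0:w]
    ones = np.ones(h * w)
    Hinv = np.linalg.inv(H)
    src = Hinv @ np.stack([gx.ravel().astype(float), gy.ravel().astype(float), ones])
    sx = (src[0] / src[2]).reshape(h, w)
    sy = (src[1] / src[2]).reshape(h, w)
    return map_coordinates(image.astype(float), [sy, sx], order=1)
```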
  • Keiji Okumiya, Masahide Kaneko, Hiroshi Harashima
    1996 Volume 50 Issue 10 Pages 1558-1566
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    This paper proposes the concept of hierarchical representation of scenes and of abstract index images from the viewpoint of the abstraction of the image contents. Abstract index images show the outline of the original sequence, and can be applied to facilitate tasks such as coding, editing, and summarizing the essence of image sequences. To generate abstract index images, we have developed a new method based on shape decomposition with primitive figures. The use of primitive figures enables a compact description that contains the general structural information. We also investigate applications which utilize abstract index images.
    Download PDF (2481K)
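    One of the simplest "primitive figure" descriptions a region can be reduced to is its moment-equivalent ellipse (centre, axes, orientation). The sketch below computes it from a binary mask; the paper's actual set of primitives and its fitting criterion may well be richer.

```python
import numpy as np

def region_to_ellipse(mask):
    """Approximate a binary region by its moment-equivalent ellipse:
    returns (centre, semi-axis lengths, orientation in radians)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centre = pts.mean(axis=0)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)        # ascending eigenvalues
    axes = 2.0 * np.sqrt(evals)               # semi-axes of the equivalent ellipse
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])   # major-axis direction
    return centre, axes, angle
```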
  • Mitsuru Kaneko, Masayuki Nakajima
    1996 Volume 50 Issue 10 Pages 1567-1576
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    We propose a production method for generating and controlling computer graphics animation using a giant virtual spreadsheet called the Animation Spreadsheet (AS). AS contains thousands of images as miniature image data which can be designed, generated, edited, and saved. AS has two axes: one based on time, the other on the layers of image components that make up a cell of a completed image. Users can manipulate images by previewing along either the time axis or the layer axis. AS can be used not only for computer-generated animation production, but also as a designing and editing tool for other types of video image production. We examined AS in a short animation production to compare it with an existing normal production method.
    Download PDF (2797K)
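    The two-axis structure of the Animation Spreadsheet, frames along one axis and image layers along the other, maps naturally onto a sparse table keyed by (frame, layer). The class below is a minimal sketch of that structure only; the class and method names are ours, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class AnimationSpreadsheet:
    """Minimal sketch: rows are frames (time axis), columns are image layers
    (layer axis); a completed frame is the composite of one row's cells."""
    frames: int
    layers: list
    cells: dict = field(default_factory=dict)   # (frame, layer) -> image reference

    def put(self, frame, layer, image):
        self.cells[(frame, layer)] = image

    def row(self, frame):
        """All layer cells making up one completed frame, in layer order."""
        return [self.cells.get((frame, layer)) for layer in self.layers]

    def column(self, layer):
        """One layer previewed along the time axis."""
        return [self.cells.get((frame, layer)) for frame in range(self.frames)]
```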
  • Mitsuru Kaneko, Masayuki Nakajima
    1996 Volume 50 Issue 10 Pages 1577-1584
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Computerization of cell-touch animation production started in the 1970s. Most computer systems used 2D computer graphics technology, and even now there are very few systems that use 3D computer graphics technology. The authors have previously proposed methods and algorithms for producing cell-touch animation images using 3D computer graphics technology, such as the generation of character lines and an animation spreadsheet. Here we propose a method to create freer camera motion for cell-touch animation production. We first construct a world in the computer in which elements such as the camera, lights, objects, and cell layers are deployed. 2D characters are mapped onto a transparent plate as a layer. Motion paths and camera directions are then set using spline curves, and the views can be previewed with wireframes and rendered images. Many camera motions that used to be impossible with conventional cell-animation or 2D-based animation production methods can be achieved with less effort and sometimes automatically.
    Download PDF (2060K)
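    Defining camera motion with spline curves through key positions can be sketched with a cubic spline over time; camera orientation, which the abstract also defines with splines, is omitted here for brevity, and the function name is ours.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def camera_path(key_times, key_positions, n_frames):
    """Interpolate camera key positions (K, 3) at key_times with a cubic
    spline and sample it at n_frames evenly spaced instants."""
    key_positions = np.asarray(key_positions, float)
    spline = CubicSpline(np.asarray(key_times, float), key_positions, axis=0)
    t = np.linspace(key_times[0], key_times[-1], n_frames)
    return spline(t)                            # (n_frames, 3) camera positions
```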
  • Yeun-Bae Kim, Masahiro Shibata, Terumasa Ehara, Kazumasa Enami
    1996 Volume 50 Issue 10 Pages 1585-1592
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    In this paper, we describe a natural language-based video editing system, called the video sequencer, which produces an actual sequence of video images described by a script written in natural language sentences by searching a video image database. The database is indexed according to semantic contents extracted from content annotations provided by directors. The sequencer converts the natural language queries in the script into image retrieval queries. The video sequencer increases the readability of the script for program production by using natural language processing. Moreover, physical descriptions of the video images, such as camera parameters, can be used in parallel with the natural language sentences to constrain the search conditions, so that more appropriate video sequences can be obtained.
    Download PDF (1547K)
  • Takahiro Fuchigami, Takashi Komatsu, Takahiro Saito
    1996 Volume 50 Issue 10 Pages 1593-1597
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
  • Yasushi Kanai, Keiichi Sato
    1996 Volume 50 Issue 10 Pages 1598-1602
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
    Download PDF (665K)
  • Goh Miyamoto, Ichiro Matsuda, Susumu Itoh
    1996 Volume 50 Issue 10 Pages 1603-1606
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
  • Tomohisa Hamano, Hiroshi Yoshikawa
    1996 Volume 50 Issue 10 Pages 1607-1611
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS
  • Junji Tamai, Hiroshi Yoshikawa
    1996 Volume 50 Issue 10 Pages 1612-1615
    Published: October 20, 1996
    Released on J-STAGE: March 14, 2011
    JOURNAL FREE ACCESS