-
Hiroshi Yasuda
1989 Volume 43 Issue 10 Pages
1011-1019
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
Takahiro Saito, Takashi Komatsu, Hiroshi Harashima
1989 Volume 43 Issue 10 Pages
1020-1027_1
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
In spatial image vector quantization, the vector dimension cannot be made high enough to remove intervector redundancy because of the large computational complexity and storage requirement, and consequently satisfactory compression cannot be achieved. To cope with this problem, we developed new schemes of spatial image vector quantization that exploit intervector redundancy for compression by employing a self-organizing list of codeword indices as an auxiliary data structure. We first propose two basic schemes: one that encodes codeword indices by using the list, and another that employs the list for the encoder's search as well as for index coding. We then enhance the basic schemes with two different techniques: one that adds new codewords to an initial codebook, and another that gradually builds up a high-dimensional codebook by concatenating low-dimensional codewords. The simulation results demonstrate that the proposed schemes successfully exploit intervector redundancy for compression of a sampled image.
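As an illustration of the index-coding idea, the following minimal Python sketch keeps a self-organizing, move-to-front list of codeword indices: recently used indices migrate to the front, so runs of similar blocks produce small list positions that a variable-length code can exploit. The move-to-front discipline and the example values are assumptions, not the authors' implementation.

```python
# Minimal sketch: coding VQ codeword indices through a self-organizing
# (move-to-front) list.  Recently used indices drift toward the front,
# so correlated neighbouring blocks yield small positions.
def mtf_encode(indices, codebook_size):
    lst = list(range(codebook_size))          # initial list order
    positions = []
    for idx in indices:
        pos = lst.index(idx)                  # position in the current list
        positions.append(pos)
        lst.pop(pos)                          # move-to-front update
        lst.insert(0, idx)
    return positions

def mtf_decode(positions, codebook_size):
    lst = list(range(codebook_size))
    indices = []
    for pos in positions:
        idx = lst.pop(pos)
        indices.append(idx)
        lst.insert(0, idx)
    return indices

# A run of similar blocks maps to small, highly skewed positions that an
# entropy coder can compress more tightly than the raw indices.
idx_stream = [5, 5, 7, 5, 7, 7, 2]
pos_stream = mtf_encode(idx_stream, codebook_size=256)
assert mtf_decode(pos_stream, codebook_size=256) == idx_stream
```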
-
Nobumoto Yamane, Yoshitaka Morikawa, Hiroshi Hamada, Akito Fukui
1989 Volume 43 Issue 10 Pages
1028-1036
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
This paper proposes a method of improving the performance of DCT zonal coding of images by means of the M-transform. In zonal coding, efficiency is degraded by the large distortion of the Max quantizer applied to the DCT coefficients, because they have broad-tailed distributions. Moreover, the quality of reconstructed images is deteriorated by block artifacts. In the proposed method, by applying the M-transform to each sample sequence of a DCT coefficient taken over different blocks, the distributions of the DCT coefficients are transformed to approximately Gaussian ones. Consequently, distortion can be reduced and the mismatch between the quantizer and the input signal distribution can be eliminated. At the same time, the block artifacts become inconspicuous because the quantization error is scrambled into almost random noise by the inverse M-transform. Simulation results for many test images show that the expected improvements are attained in both adaptive and non-adaptive coding. An evaluation of computational complexity also shows the usefulness of this method.
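The general structure can be sketched as follows, under assumptions: each co-located DCT coefficient is collected across blocks into a sequence, and an orthonormal "scrambling" transform is applied across blocks before quantization so that the inverse spreads quantization error as near-random noise. A random orthogonal matrix stands in here for the M-transform, whose actual construction is not reproduced.

```python
# Illustrative sketch (not the authors' M-transform): Gaussianizing the
# per-coefficient sequences of a block DCT with an orthonormal transform
# applied across blocks, then quantizing in the transformed domain.
import numpy as np
from scipy.fftpack import dct

def block_dct(img, b=8):
    h, w = img.shape
    blocks = img.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b, b)
    return dct(dct(blocks, axis=1, norm='ortho'), axis=2, norm='ortho')

coeffs = block_dct(np.random.rand(64, 64))     # (n_blocks, 8, 8)
seq = coeffs.reshape(len(coeffs), -1)          # rows: blocks, columns: coefficient index

# Stand-in for the M-transform: a fixed random orthogonal matrix acting across
# blocks.  Orthogonality guarantees an exact inverse and spreads quantization
# error back into the blocks as near-white noise.
rng = np.random.default_rng(0)
M, _ = np.linalg.qr(rng.standard_normal((len(coeffs), len(coeffs))))

gaussianized = M @ seq                         # transform each coefficient sequence
quantized = np.round(gaussianized * 4) / 4     # toy uniform quantizer
reconstructed_seq = M.T @ quantized            # inverse transform (M is orthogonal)
```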
-
Takahiro Saito, Yoichi Kishimoto, Takashi Komatsu, Hiroshi Harashima
1989 Volume 43 Issue 10 Pages
1037-1045
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
Normalized AC transform coefficients of picture signals are well modeled as having a spherically symmetric distribution. For such a distribution, quantizers whose codewords are arranged on the surfaces of concentric hyperspheres show excellent performance with low encoding complexity. Permutation codes, developed by Berger and others, have this property, require no multiplication for encoding, and can directly quantize a very high-dimensional input. Permutation codes, however, do not perform satisfactorily at high rates or low dimensions, and require infeasible operation precision for encoding a codeword's index. To cope with these problems, we developed new improved permutation codes, devised an algorithm for encoding a codeword's index that uses only integer operations with limited precision, and incorporated the improved permutation codes into discrete cosine transform image coding. The simulation results demonstrate that the improved permutation codes can efficiently quantize transform coefficients of high sequency.
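A minimal sketch of plain Variant-I permutation-code quantization (the improved codes and the integer index-encoding algorithm of the paper are not reproduced): every codeword is a permutation of one initial vector, so the nearest codeword is found simply by sorting, which is why no multiplications are needed. The levels and multiplicities below are illustrative assumptions.

```python
# Sketch of Variant-I permutation-code quantization: all codewords are
# permutations of a fixed initial vector, so nearest-codeword encoding is a
# sorting operation, whatever the vector dimension.
import numpy as np

levels = np.array([2.0, 0.5, 0.0, -0.5, -2.0])   # illustrative amplitudes mu_k
counts = np.array([2, 6, 16, 6, 2])              # multiplicities n_k (sum = dimension)

def permutation_quantize(x):
    # Assign the largest n_1 components the largest level, and so on.
    order = np.argsort(-x)                       # indices of x, largest to smallest
    sorted_levels = np.repeat(levels, counts)    # levels ordered largest to smallest
    y = np.empty_like(x)
    y[order] = sorted_levels
    return y

x = np.random.randn(counts.sum())                # 32-dimensional input vector
y = permutation_quantize(x)                      # nearest permutation codeword
```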
-
Atsushi Koike, Masahide Kaneko, Yoshinori Hatori
1989 Volume 43 Issue 10 Pages
1046-1055_1
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A hybrid coding method is one of the most promising approaches to realizing low bit rate transmission of motion pictures. However, the efficiency of orthogonal transforms for motion-compensated interframe prediction error signals (abbreviated as MC error signals) has not yet been well documented. In this paper, we investigate the characteristics of MC error signals and compare the efficiency of the KLT (Karhunen-Loeve Transform) with that of other transforms (DCT, SLT, HCT, WHT, HAT, and DST) from the viewpoints of entropy, mean square error, and decorrelation ratio. First, the efficiency of the above-mentioned transforms is compared for intraframe image signals. Second, we calculate the basis functions of the KLT for MC error signals and compare its efficiency with that of the other transforms. Experimental results show that the DCT closely approximates the efficiency of the KLT for MC error signals. Finally, we apply these orthogonal transforms to the hybrid coding method and show that the DCT gives the highest coding performance among them.
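A minimal sketch of this kind of comparison, under assumed block statistics (the stand-in data below is not the paper's MC error material): estimate the covariance of sample blocks, take its eigenvectors as the KLT basis, and compare the energy compaction of the KLT coefficients against the DCT's on the same data.

```python
# Sketch: derive a KLT basis from sample (MC error) blocks and compare its
# energy compaction with that of the DCT on the same data.
import numpy as np
from scipy.fftpack import dct

def klt_basis(vectors):
    cov = np.cov(vectors, rowvar=False)          # sample covariance of the blocks
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, ::-1]                      # eigenvectors, largest variance first

rng = np.random.default_rng(0)
# Stand-in data: correlated 1-D blocks playing the role of prediction-error signals.
blocks = np.cumsum(rng.standard_normal((10000, 8)), axis=1)

klt_coeffs = blocks @ klt_basis(blocks)
dct_coeffs = dct(blocks, axis=1, norm='ortho')

# Compaction surrogate: geometric mean of coefficient variances
# (smaller means better compaction for the same total variance).
def geo_mean_variance(c):
    return np.exp(np.mean(np.log(np.var(c, axis=0))))

print(geo_mean_variance(klt_coeffs), geo_mean_variance(dct_coeffs))
```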
-
Noboru Yamaguchi, Susumu Itoh, Yoshihiko Kihara, Toshio Utsunomiya
1989 Volume 43 Issue 10 Pages
1056-1064
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
This paper proposes a new coding scheme for progressive transmission of still images at a low bit rate. In the first stage, a reduced image composed of the averages of every 4×4 pels is encoded by adaptive DPCM with Max-type quantizers and variable-length codes. In the second stage, the reduced image is magnified by using an interpolation filter. The interpolation error signals, generated by subtracting the magnified image from an intermediate image composed of the averages of every 2×2 pels of the original image, are then encoded by adaptive Hadamard transform coding with linear quantizers and variable-length codes. In the last stage, almost the same process as in the second stage is repeated. To prevent a loss of coding efficiency, the mean squared error of each stage is reduced to less than D/16, D/4, and D, respectively, where D is a given constant. Computer simulation shows that this coding scheme performs comparably to KL transform coding.
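The staged structure can be sketched as a simple mean pyramid with interpolation-error residuals; the DPCM, Hadamard transform, and quantizer details are omitted, and the pixel-replication upsampling below is only an assumption standing in for the paper's interpolation filter.

```python
# Sketch of the progressive structure: a 4x4-mean image is sent first, then
# interpolation errors against the 2x2-mean image, then against the original.
import numpy as np

def block_mean(img, b):
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

def upsample2(img):
    # Pixel-replication upsampling; the paper uses an interpolation filter.
    return np.kron(img, np.ones((2, 2)))

original = np.random.rand(64, 64)
stage1 = block_mean(original, 4)                            # coarse image, sent first
stage2_err = block_mean(original, 2) - upsample2(stage1)    # second-stage residual
stage3_err = original - upsample2(block_mean(original, 2))  # final-stage residual
# Each residual would be Hadamard-transform coded; the receiver refines the
# picture stage by stage as the residuals arrive.
```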
-
Seiki Inoue
1989 Volume 43 Issue 10 Pages
1065-1071
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
We have proposed a coding algorithm for image databases, called two-level coding. It divides an image into index data (reduced-size image data) for quick reference and complementary data used to reproduce the full-size image. In this paper, a color photographic filing system using this technique is described. First, we examined the coding performance for practical photographic images and determined the threshold values of the method. Then, we developed an experimental system with a decoder. The following requirements were especially taken into account: fast transfer between storage devices and frame memories, effective management of variable-length coded data by tables, and quick hardware decoding using pipeline methods. As a result, four index images can be displayed in one second and a full-size image can be reproduced in 1-2 seconds.
-
Kazuto Kamikura, Hisashi Ibaraki, Hiroshi Watanabe
1989 Volume 43 Issue 10 Pages
1072-1078_1
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A moving picture coding scheme for digital storage media using periodic intraframe coding is proposed. Efficient storage of moving pictures and special functions, such as fast search and random access, can be realized by the proposed coding scheme under a fixed read-out bit rate of 1.1 Mb/s. The coding scheme is based on interframe coding, and the appropriate period for intraframe coding is determined. The bit assignment between intraframe and interframe coding is also considered. It is shown that the odd (or even) frame coding method with a frame interpolation technique gives better picture quality than the full frame coding method. Moreover, a progressive coding technique is introduced into the intraframe coding to realize the fast search function.
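As a small illustration of how a periodic intraframe structure supports random access and fast search (the period used below is illustrative, not the paper's chosen value): decoding can only start at an intraframe, so any seek snaps to the nearest preceding one, and a fast search can decode the intraframes alone.

```python
# Sketch: with an intraframe every N frames, random access to frame t starts
# decoding at the nearest preceding intraframe.
def access_point(t, intra_period):
    return (t // intra_period) * intra_period

# Fast search decodes intraframes only, skipping the interframe-coded ones.
def fast_search_frames(n_frames, intra_period):
    return list(range(0, n_frames, intra_period))

print(access_point(47, intra_period=15))   # -> 45
print(fast_search_frames(90, 15))          # -> [0, 15, 30, 45, 60, 75]
```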
-
Hidefumi Ohsawa, Shigeo Kato, Yasuhiko Yasuda
1989 Volume 43 Issue 10 Pages
1079-1086
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
This paper proposes a new, highly efficient coding algorithm for bi-level images that is based on hierarchical structuring with a projection method and employs Markov model coding using an arithmetic code. The algorithm allows image quality to be controlled at intermediate stages through variable parameters in the projection method. Furthermore, processing that prevents the disappearance of fine lines during image reduction is implemented, improving image quality. In addition, through simulation on the 8 CCITT test documents, we demonstrate that the use of the arithmetic code provides a 20 percent reduction in coding rate compared with MMR.
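The fine-line issue can be illustrated with two extreme 2:1 reductions of a bi-level image: strict-majority reduction erases one-pixel-wide lines, while OR reduction keeps them at the cost of thickening. The rules below are only illustrative assumptions; the paper's projection method, with its variable parameters, controls this trade-off.

```python
# Sketch: two extreme 2:1 reductions of a bi-level image (0 = white, 1 = black).
import numpy as np

def reduce_majority(img):
    h, w = img.shape
    counts = img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return (counts >= 3).astype(np.uint8)      # strict majority of each 2x2 block

def reduce_or(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3)).astype(np.uint8)

line = np.zeros((8, 8), dtype=np.uint8)
line[3, :] = 1                                 # a one-pixel-wide horizontal line
print(reduce_majority(line).sum())             # 0: the fine line disappears
print(reduce_or(line).sum())                   # 4: the fine line survives
```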
-
Masao Aizu, Mikio Takagi
1989 Volume 43 Issue 10 Pages
1087-1092
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A local area-adaptive data compression scheme for color printing images is presented. The compressed data can be used for layout processing of the printing image without reconstruction. The method proceeds as follows. The image is divided into small blocks. The Cyan, Magenta, Yellow, and Black components of the color printing image are regarded as a 4-dimensional vector, and vector quantization is applied to them local-area-adaptively. Three methods for deciding the number of quantization levels are proposed, and the representative vectors (codebook) are generated by the LBG algorithm for each block.
The presented method is applied to printing images. At a compression ratio of 1/9, an SNR of over 28 dB is achieved.
In this method, the code corresponding to any original pixel can be accessed, and its vector components can be reconstructed by referring only to the codebook of the block containing that pixel.
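A minimal sketch of the block-adaptive idea, with simplifications: the LBG splitting procedure is replaced by plain k-means style iterations, and the block size, codebook size, and quantization-level decision are assumptions rather than the paper's three proposed methods. Each CMYK pixel is a 4-dimensional vector, and a small codebook is trained per block, so a pixel can be reconstructed from its code plus that block's codebook alone.

```python
# Sketch: per-block vector quantization of CMYK pixels with a small
# LBG/k-means-style codebook, so any pixel can be decoded from its index and
# the codebook of its own block only.
import numpy as np

def lbg_codebook(vectors, size, iters=10):
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), size, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment, then centroid update.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(size):
            if np.any(labels == k):
                codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook

block = np.random.rand(16, 16, 4)                 # one block of CMYK pixels
vectors = block.reshape(-1, 4)
codebook = lbg_codebook(vectors, size=8)          # per-block codebook
codes = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
decoded = codebook[codes].reshape(block.shape)    # reconstruction from codes + codebook
```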
-
Tetsujiro Kondo, Yasuhiro Fujimori, Atsuo Yada, Kenji Takahashi
1989 Volume 43 Issue 10 Pages
1093-1099_1
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A new bit rate reduction method for consumer digital VTR use must meet special requirements such as fast motion playback, slow motion playback, and electronic editing, while remaining error free. We developed the ADRC (Adaptive Dynamic Range Coding) method with these requirements in mind. ADRC removes the redundancy within each pixel's level according to the dynamic range of the block. We consider it a good bit rate reduction scheme because it has little spatial error propagation. The bit assignment of ADRC is decided according to the dynamic range of the block, so it is easy to control the amount of information generated by the encoder. ADRC is a variable-length coding scheme with special playback capability. The efficiency of the ADRC technique has been confirmed at 25 Mb/s by computer simulation.
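A minimal sketch of the dynamic-range idea behind ADRC (the thresholds, block size, and bit-assignment rule below are assumptions, not the paper's 25 Mb/s design): each block is described by its minimum and dynamic range, the bits per pixel are chosen from the dynamic range, and every pixel is requantized relative to the block minimum, so an error stays confined to its own block.

```python
# Sketch of ADRC-style block coding: transmit (min, dynamic range, per-pixel
# codes); bits per pixel depend only on the block's dynamic range.
import numpy as np

def adrc_encode(block, thresholds=(8, 32, 96)):
    mn, mx = int(block.min()), int(block.max())
    dr = mx - mn
    bits = 1 + sum(dr > t for t in thresholds)        # 1..4 bits/pixel (assumed rule)
    q = np.floor((block - mn) / (dr + 1) * (1 << bits)).astype(np.uint8)
    return mn, dr, bits, q

def adrc_decode(mn, dr, bits, q):
    return mn + (q + 0.5) * (dr + 1) / (1 << bits)     # mid-point reconstruction

block = np.random.randint(0, 256, (4, 4))
mn, dr, bits, q = adrc_encode(block)
rec = adrc_decode(mn, dr, bits, q)
print(bits, np.abs(rec - block).max())
```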
-
Saburo Tazaki, Satoshi Kaji, Yoshio Yamada, Hisashi Osawa
1989 Volume 43 Issue 10 Pages
1100-1105
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A ternary recording code can store more information than a binary one within the same frequency bandwidth. Jacoby proposed ternary recording codes based on the two-level magnetic recording technique; we call this class of codes the “Jacoby-type ternary recording code”. This class has an advantage over multi-level recording codes in that it tolerates considerable level perturbation in the recording system.
This paper presents a method for systematically constructing a Jacoby-type ternary recording code by applying a finite state automaton model under (d, k; c) constraints. As an example of this method, a new variable-length, DC-free Jacoby-type ternary recording code is developed.
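The sketch below checks a candidate ternary sequence against run-length and DC-free constraints as they are commonly written: at least d and at most k zeros between nonzero symbols, and a running digital sum bounded by c. This interpretation of the constraints and the bounds used are assumptions; the automaton-based code construction itself is not reproduced.

```python
# Sketch: verify that a ternary sequence (symbols -1, 0, +1) satisfies
# (d, k) run-length constraints and a bounded running digital sum (DC-free).
def satisfies_constraints(seq, d, k, c):
    run = k                    # allow the sequence to start with a nonzero symbol
    rds = 0                    # running digital sum
    for s in seq:
        if s == 0:
            run += 1
            if run > k:        # too many zeros between transitions
                return False
        else:
            if run < d:        # transitions too close together
                return False
            run = 0
            rds += s
            if abs(rds) > c:   # charge bound violated -> not DC-free
                return False
    return True

print(satisfies_constraints([1, 0, 0, -1, 0, 0, 1, 0, 0, -1], d=2, k=4, c=1))  # True
```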
-
Mutsumi Ohta, Motoyosi Shibano, Takasi Shimizu, Hiroto Kunihiro, Takao ...
1989 Volume 43 Issue 10 Pages
1106-1111
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
A coding method is discussed for video signals on digital storage media, especially CD-ROM systems. Digital compression techniques are required to store video signals, since the maximum transfer rate of a CD-ROM interface is 150 kbyte/s. It is well known that the hybrid coding method is very effective at this data rate in transmission systems. In this paper, the effectiveness of this coding algorithm is discussed and confirmed for digital storage media as well, although the required functions differ somewhat from those of transmission systems. Using this algorithm, a prototype playback system is implemented. Long video sequences from CD-ROM are expected to be useful in a wide range of applications, including education, database services, and entertainment.
-
Satien Triamlumlerd, Masayuki Tanimoto
1989 Volume 43 Issue 10 Pages
1112-1118_1
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
Bandwidth compression is one of the key technologies in realizing a high quality video recording system. A wideband picture signal can be recorded by compressing its bandwidth before recording. This paper proposes a new video recording system using the TAT system, which compresses the bandwidth while keeping the high resolution of the picture. An experimental system was constructed and the feasibility of the new system was successfully demonstrated. A home-use VCR is used as the recording medium in the experimental system. It is modified to record the TAT signal, which is composed of a bandwidth-compressed analog TCI signal and a digital mode signal. The bandwidth-compressed TCI signal is recorded by the conventional FM system of the VCR, and the mode signal is recorded by QPSK in a separate lower frequency band originally used to record the chrominance signal in a conventional VCR. The regenerated picture is stable, and the resolution is improved 1.6 times over a conventional home-use VCR.
-
Hiroyuki Hamada, Toshikazu Ikenaga, Naoki Kawai, Osamu Yamazaki, Takeh ...
1989 Volume 43 Issue 10 Pages
1119-1128
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
In this paper, coding methods are described for Hi-Vision digital still picture broadcasting using ISDB (Integrated Services Digital Broadcasting). First, the conditions that the coding methods must satisfy are shown, and the characteristics and coding performance (SNR, results of quality assessments, and coding ratio) of two coding methods, subsampled DPCM and the adaptive interpolative quantizer, are presented. Furthermore, an adaptive method that controls the coding ratio according to the interval between picture changes is described. In addition, methods for reducing picture impairment caused by bit errors occurring in transmission paths are described, and bench tests showed that the system provides sufficient picture quality for broadcasting service even if the received C/N is degraded. Using these methods, clear Hi-Vision still pictures with high quality PCM stereophonic sound can be transmitted in one digital channel of 2.048 Mb/s.
-
Makoto Miyahara, Yasuhiro Yoshida
1989 Volume 43 Issue 10 Pages
1129-1136
Published: October 20, 1989
Released on J-STAGE: August 17, 2011
JOURNAL
FREE ACCESS
In studying new-generation color image coding, it is very effective 1) to code signals in the space of the inherent tri-attributes of human color perception, and 2) to relate a coding error to the perceptual degree of deterioration. For these purposes, we have adopted the Munsell Renotation System, in which color signals of the tri-attributes of human color perception (Hue, Value, and Chroma) and psychometric color differences are defined. In the Munsell Renotation System, however, the intertransformation between (RGB) data and the corresponding color data is very cumbersome, because it depends on a look-up table. This paper presents a new mathematical transformation method. The transformation is obtained by multiple regression analysis of 250 color samples, uniformly sampled from the whole color range that a conventional NTSC color TV camera can present. The new method transforms (RGB) data to Munsell Renotation System data far better than the conventional method based on the CIE (1976) L*a*b* space.
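A minimal sketch of the multiple-regression approach with made-up sample data: nonlinear features of (R, G, B) are regressed onto Munsell-style (H, V, C) coordinates by least squares. The feature set, the random sample values, and the handling of the coordinates (e.g. of the circular hue) are assumptions, not the paper's fitted model.

```python
# Sketch: fit a multiple regression from (R, G, B) to Munsell-style (H, V, C)
# coordinates by least squares over a set of paired colour samples.
import numpy as np

def features(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    # Simple nonlinear feature set (an assumption); the paper fits its own terms
    # over 250 samples spanning the NTSC camera gamut.
    return np.column_stack([np.ones(len(rgb)), r, g, b,
                            np.cbrt(r), np.cbrt(g), np.cbrt(b),
                            r * g, g * b, b * r])

rgb_samples = np.random.rand(250, 3)            # placeholder for measured samples
hvc_samples = np.random.rand(250, 3)            # placeholder for Munsell H, V, C

W, *_ = np.linalg.lstsq(features(rgb_samples), hvc_samples, rcond=None)

def rgb_to_hvc(rgb):
    return features(np.atleast_2d(rgb)) @ W     # regression-based transform

print(rgb_to_hvc(np.array([0.5, 0.3, 0.2])))
```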
-
Akira Yasuda, Nobumoto Yamane, Yoshitaka Morikawa, Hiroshi Hamada
1989 Volume 43 Issue 10 Pages
1137-1144_1
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
2D extrapolative prediction-discrete sine transform (EP-DST) coding works efficiently with a small block size of 4×4 pels. This paper discusses the hardware realization of the coding, especially the transformation, with a gate array. First, to reduce hardware, we introduce an integer approximation of the DST, named the IST. The IST is realized by an input-parallel, bit-serial structure with small-scale hardware. The 2D DST is composed by cascading two ISTs in a pipeline. Second, to save processing time, the computation wordlength for natural images is shortened. The designed 2D DST transformer consists of 2,700 gates and has been verified by a CAD tool to work on NTSC video signals in real time. Last, we roughly conclude that, when realized as a full-custom LSI, the scale and power dissipation of the EP-DST codec are less than 36% and 69%, respectively, of those of the DCT.
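A small sketch of what an integer-approximated 4-point DST can look like (the scaling and rounding below are generic assumptions, not the paper's IST, which was designed for a bit-serial gate-array implementation): the DST-I basis is scaled and rounded to integers, applied separably to 4×4 blocks, and the scaling is compensated once at the end.

```python
# Sketch: a 4-point DST basis rounded to integers and applied separably to a
# 4x4 block; rounding turns every product into integer arithmetic (or, in
# bit-serial hardware, shifts and adds), with a single final rescaling.
import numpy as np

N, SCALE = 4, 64
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
S = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * (j + 1) * (k + 1) / (N + 1))  # DST-I basis
A = np.round(SCALE * S).astype(np.int32)        # integer-approximated basis (IST-like)

def ist2d(block):
    # Separable 2-D transform with integer matrices, rescaled once at the end.
    return (A @ block @ A.T) / SCALE ** 2

block = np.arange(16).reshape(4, 4).astype(float)
print(np.max(np.abs(ist2d(block) - S @ block @ S.T)))   # small approximation error
```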
-
Transform Coding
Hideo Hashimoto
1989 Volume 43 Issue 10 Pages
1145-1152
Published: October 20, 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS
-
1989 Volume 43 Issue 10 Pages
e1
Published: 1989
Released on J-STAGE: March 14, 2011
JOURNAL
FREE ACCESS