This paper describes an object-oriented method for color image quantization. Instead of trying to minimize some cost function of colorimetric color errors, the method exploits physical models of scene-image relations. An image is first segmented into regions corresponding to objects in the physical world. Each object is then painted with a few levels of shades of the same chromaticity. This work is motivated by the observation that physical objects, not color patches, are the basic units for human visual perception. Chromatic variations within an object surface tend to be grossly discounted by our color vision, and therefore, it is not necessary to render those variations accurately. Furthermore, since the shading across an object surface spans a much smaller dynamic range than the whole scene, the limited number of color shades can be more effectively used within a segmented image region.
The two elements of the method are a physics-based image segmentation algorithm and a psychophysics-based coloring algorithm. Based on models of light reflection, the image segmentation algorithm partitions the input image into regions that roughly correspond to different physical objects. The coloring algorithm renders busy image regions with fewer luminance levels than uniform image areas, where shadings are subtle and errors are easily visible. A number of images rendered in 8 bits by the method are compared with those rendered by other methods, such as the median-cut, the mean-cut, and the variance-based algorithms. The object-oriented rendition method is shown to produce the fewest contouring artifacts.
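The per-object coloring idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes a precomputed integer region map from some segmentation step, uses the mean of R, G, B as a stand-in luminance measure, and, within each region, keeps a single mean chromaticity while quantizing luminance to a few evenly spaced shades. The function name and all parameters are hypothetical.

```python
import numpy as np

def quantize_region_shades(image, labels, levels=4):
    """Render each segmented region with one chromaticity and `levels` shades.

    image  : float RGB array in [0, 1], shape (H, W, 3)
    labels : integer region map, shape (H, W)
    (Illustrative sketch only; not the paper's coloring algorithm.)
    """
    out = np.zeros_like(image)
    # Stand-in luminance: mean of R, G, B per pixel.
    lum = image.mean(axis=2)
    for r in np.unique(labels):
        mask = labels == r
        region_lum = lum[mask]
        # Region chromaticity: mean RGB direction, normalized to unit luminance.
        mean_rgb = image[mask].mean(axis=0)
        chroma = mean_rgb / max(mean_rgb.mean(), 1e-8)
        # Quantize luminance to `levels` evenly spaced shades over the
        # region's own (small) dynamic range.
        lo, hi = region_lum.min(), region_lum.max()
        if hi > lo:
            idx = np.round((region_lum - lo) / (hi - lo) * (levels - 1))
            q = lo + idx / (levels - 1) * (hi - lo)
        else:
            q = region_lum
        # Repaint the region: quantized shade times fixed chromaticity.
        out[mask] = q[:, None] * chroma[None, :]
    return np.clip(out, 0.0, 1.0)
```

Because each region spans a narrower dynamic range than the whole scene, the few shade levels are spent only where that region needs them, which is the effect the abstract attributes to the method.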