Stippling is a simple non-photorealistic rendering (NPR) method. NPR stippling images are monochromatic binary images made up of black dots distributed over a white background. In this paper, we extend this composition and present methods for producing monochromatic multi-valued stippling and color stippling images. Our method is based on the scheme of point redistribution according to probability distributions presented by Secord et al. We extend their scheme and apply it to stippling with multi-valued dots over multi-valued backgrounds and to color images. Additionally, we develop a technique for improving the rendering of detail by varying the dot sizes according to the gradient of the tonal intensities of the input images.
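The core idea of distribution-based stippling can be sketched as rejection sampling: dot positions are drawn with probability proportional to local darkness, and dot radii are then modulated by the tonal gradient. This is a minimal illustrative sketch, not the authors' exact scheme; the function names and the gradient-to-radius mapping are assumptions.

```python
import numpy as np

def sample_stipple_points(intensity, n_dots, seed=0):
    # Rejection-sample dot positions with density proportional to darkness.
    # `intensity` is a 2D array in [0, 1] (0 = black, 1 = white).
    rng = np.random.default_rng(seed)
    density = 1.0 - intensity             # darker pixels attract more dots
    h, w = density.shape
    points = []
    while len(points) < n_dots:
        y, x = rng.integers(0, h), rng.integers(0, w)
        if rng.random() < density[y, x]:  # accept with probability = darkness
            points.append((y, x))
    return np.array(points)

def dot_radii(intensity, points, r_min=0.5, r_max=2.0):
    # Hypothetical mapping: smaller dots where the tonal gradient is steep,
    # so that fine detail is not swallowed by large dots.
    gy, gx = np.gradient(intensity)
    grad = np.hypot(gx, gy)
    g = grad[points[:, 0], points[:, 1]] / (grad.max() + 1e-9)
    return r_max - (r_max - r_min) * g

# Toy image: black left half, white right half.
img = np.ones((32, 32))
img[:, :16] = 0.0
pts = sample_stipple_points(img, 200)
radii = dot_radii(img, pts)
```

Because the right half of the toy image is pure white (density 0), every sampled dot lands in the black left half.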
Similarity retrieval for three-dimensional models has been in the spotlight because of the growth of digital archives. We propose a similarity estimation method that uses mathematical morphology. It consists of a feature extraction process and a similarity estimation process. First, the pose of a shape is determined from the eigenvectors of its covariance matrix, and a feature quantity corresponding to the projections and hollows of the shape is extracted by repeatedly applying mathematical morphology. Because the direction of the shape is occasionally reversed when the eigenvectors of the covariance matrix are used, we apply the standard deviation to the feature quantity. Second, the degree of similarity between this feature quantity and those stored in advance is calculated. Experimental results showed that our method is more effective than conventional ones.
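The pose-normalization step described above can be sketched as principal component analysis: the model is centered and rotated so its covariance eigenvectors align with the coordinate axes. A minimal sketch on a point cloud follows; the sign ambiguity of the eigenvectors (the "reversed direction" issue the abstract notes) is deliberately left unresolved here, and all names are illustrative.

```python
import numpy as np

def normalize_pose(points):
    # Align an (N, 3) point cloud by the eigenvectors of its covariance
    # matrix, principal axis first. Eigenvector signs remain ambiguous.
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # largest-variance axis first
    return centered @ eigvecs[:, order]

# A cloud stretched along a random direction ends up axis-aligned.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * [5.0, 1.0, 0.2]
rot = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random orthogonal matrix
aligned = normalize_pose(cloud @ rot.T)
```

After normalization the per-axis variances are sorted in decreasing order, regardless of the random rotation applied beforehand.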
This paper focuses on the sampling problem in light field rendering (LFR), which is a fundamental approach to image-based rendering. The quality of LFR depends on the light-ray database generated from pre-acquired images, since image synthesis is the process of gathering appropriate light-ray data from the database. Interpolating light-ray data is effective for improving the quality. The interpolation is based on the assumption that objects in the scene lie on a plane called the "focal plane". Depending on the depth of the focal plane (the distance between the cameras and the focal plane), a focus-like effect appears in the synthesized images. In this paper, we formulate the depth of field in LFR to determine the range of depths within which scene objects can be rendered in focus. Our theory is based on the plenoptic sampling theory and subsumes several related works. The proposed concept is applicable to intuitive measurement of synthesis quality, configuration of sampling conditions, and evaluation of spatial coding methods.
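The focus-like effect can be illustrated with standard plenoptic-sampling-style geometry, not the paper's exact formulation: when light rays are interpolated assuming a focal plane at depth z_f, an object at depth z produces a disparity between adjacent cameras, and it appears sharp only while that disparity stays below about one pixel. A hedged sketch, with all parameter names assumed:

```python
def light_field_disparity(f, cam_spacing, z_obj, z_focal):
    # Disparity (in image-plane units) between adjacent cameras for an
    # object at depth z_obj when interpolation assumes a focal plane at
    # z_focal. f is the focal length, cam_spacing the camera interval.
    # Objects on the focal plane (z_obj == z_focal) yield zero disparity.
    return f * cam_spacing * (1.0 / z_obj - 1.0 / z_focal)
```

The depth of field is then the interval of z_obj around z_focal where the absolute disparity remains below the pixel size.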
A technique is described for rendering, in real time, reflections on surfaces with non-uniform roughness, which is difficult to do with conventional methods. It combines off-line processing with hardware rendering. First, a multiscale environment map is created for the rough surfaces. Next, the pixels that would significantly affect the rendering results in an omnidirectional image are filtered out. Finally, the reflections on the surfaces are rendered in real time using the map to reflect the effects of the roughness. Experiments demonstrated that the proposed method can render surfaces with non-uniform roughness sufficiently accurately in real time.
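A multiscale environment map can be sketched as a pre-blurred pyramid, mipmap-style: each level is a 2x box-filtered copy of the previous one, and rougher surface regions read from blurrier levels at render time. This is an illustrative stand-in for the paper's off-line processing, with assumed function names and an assumed roughness-to-level mapping:

```python
import numpy as np

def env_pyramid(env, levels):
    # Build a multiscale environment map by repeated 2x box filtering.
    maps = [env]
    for _ in range(levels - 1):
        e = maps[-1]
        h, w = e.shape[0] // 2 * 2, e.shape[1] // 2 * 2
        e = e[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        maps.append(e)
    return maps

def level_for_roughness(roughness, levels):
    # Hypothetical lookup: rougher surfaces sample blurrier levels.
    return min(int(roughness * levels), levels - 1)

env = np.arange(64.0).reshape(8, 8)
pyr = env_pyramid(env, 3)
```

Box filtering preserves the mean radiance, so the coarsest level still integrates to the same average lighting as the full-resolution map.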
For presenting virtual objects in a real space, mixed reality (MR) techniques are very effective for simulating an overview of a scene. In interior design, for instance, the positions, arrangements, and colours of the items need to be determined based on various layout plans. We propose a new framework for interactively arranging and rendering virtual objects in real space by manipulating physical 3D markers. A 3D marker consists of 2D codes and a spherical mirror for instantaneously acquiring both the geometric and photometric conditions with a single camera. Experimental results showed that the proposed approach is effective for providing appropriate geometry and local lighting conditions for presenting each virtual object with a simple system. The proposed system allows the user to experiment with arranging virtual furniture and to verify that the created scene is visually consistent with the physical space.
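The photometric half of the marker relies on standard mirror-ball geometry: each pixel on the spherical mirror images the environment along the reflected viewing ray r = d - 2(d·n)n, so a single camera shot yields a set of environment directions around the marker. A minimal sketch of that reflection step (illustrative only, not the paper's calibration pipeline):

```python
import numpy as np

def mirror_ray(view_dir, normal):
    # Reflect a viewing ray off the spherical mirror surface:
    # r = d - 2 (d . n) n, with n the surface normal at the hit point.
    # Sampling these directions over the visible hemisphere of the ball
    # recovers a local lighting environment from one camera image.
    d = np.asarray(view_dir, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A ray looking straight at the top of the ball bounces straight back.
r = mirror_ray([0.0, 0.0, -1.0], [0.0, 0.0, 1.0])
```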
This paper proposes a method of synthesizing the lighting environment based on the atmospheric conditions for creating highly realistic outdoor scenes. The optical atmospheric model takes into account the effects of Rayleigh and Mie scattering as part of the physical model. Clouds, which enhance the realism of rendered images, are extracted and composed onto the spherical light image. A CIE XYZV color model is used as a visual model to create scotopic vision. A comparison of a calculated and a measured spherical light image showed that the proposed method can reproduce a proper lighting environment under any atmospheric conditions.
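The two scattering terms in such atmospheric models are commonly expressed through their phase functions: Rayleigh scattering has the closed form 3/(16π)·(1 + cos²θ), and Mie scattering is frequently approximated by the Henyey-Greenstein function. These are textbook formulas, sketched here as a hedged illustration; the abstract does not state which specific Mie approximation the authors use.

```python
import math

def rayleigh_phase(cos_theta):
    # Rayleigh phase function: 3/(16*pi) * (1 + cos^2(theta)),
    # normalized so it integrates to 1 over the sphere.
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_theta ** 2)

def henyey_greenstein(cos_theta, g=0.76):
    # Henyey-Greenstein phase function, a common stand-in for Mie
    # scattering; g controls forward (g > 0) vs. backward scattering.
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    )
```

At g = 0 the Henyey-Greenstein function degenerates to isotropic scattering, 1/(4π).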
We propose a stereoscopic display system for multiple users. It allows three or more people to simultaneously observe individual stereoscopic image pairs from their own viewpoints. The system tracks the positions of the users' heads and generates distortion-free images for each user. It consists of a normal display and a display mask with a hole in the center; the mask is placed at a suitable distance in front of the display surface. By controlling the position of the image drawing area for each user according to that user's viewpoint, each user can observe, with shutter glasses, the stereoscopic image pairs shown in an individual area of the display. At the same time, no user can see the image drawing areas of the other users because those areas are occluded by the display mask. Accordingly, the display system can simultaneously provide intelligible 3D stereoscopic images to three or more moving observers without flicker or distortion.
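Placing each user's drawing area reduces to a similar-triangles projection: the ray from the tracked eye position through the mask hole is extended until it hits the display plane, and the drawing area is centered there. A minimal sketch of that geometry, assuming planar coordinates with the display at z = 0 and all names illustrative:

```python
def display_point(eye_xy, eye_z, hole_xy, hole_z):
    # Intersect the ray from the eye (at height eye_z) through the mask
    # hole (at height hole_z) with the display plane z = 0. The drawing
    # area for this user is centered on the returned (x, y) point.
    t = eye_z / (eye_z - hole_z)   # ray parameter where z reaches 0
    return tuple(e + t * (h - e) for e, h in zip(eye_xy, hole_xy))

# Eye at (0, 0) twice as far from the display as the hole at (1, 0):
# the ray continues past the hole and lands at (2, 0) on the display.
p = display_point((0.0, 0.0), 2.0, (1.0, 0.0), 1.0)
```

Moving the eye moves the intersection point in the opposite direction, which is why a tracked viewpoint is needed to keep each area both visible to its owner and hidden from the others.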
We propose a method for removing view-disturbing waterdrops from images by processing images taken with a pan-tilt camera system. Images taken on rainy days are often hard to make out because of waterdrops adhering to the surface of the protecting lens. In our method, a first image of a distant view is taken, and then a second image is taken from a different angle. The second image is transformed with a projective transformation and compared with the first to identify regions containing waterdrops. We can distinguish which image portion contains waterdrops by considering the distance between two potential waterdrop regions. Finally, the waterdrop regions are eliminated by merging the two images. Experimental results showed the effectiveness of this method.
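The detection step rests on a simple observation: waterdrops adhere to the lens, so they stay fixed in image coordinates while the distant scene shifts between the two views; after warping the second image into the first with a homography, large per-pixel differences flag candidate drop regions. A hedged sketch of the warp and the comparison, with toy data in place of the paper's pan-tilt pipeline:

```python
import numpy as np

def warp_homography(points, H):
    # Apply a 3x3 homography H to (N, 2) pixel coordinates.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def waterdrop_mask(img_a, img_b_warped, thresh=0.2):
    # Mark pixels that differ between the two registered views;
    # scene content agrees after warping, adherent drops do not.
    return np.abs(img_a - img_b_warped) > thresh

# Toy example: identical registered views except a 3x3 "drop" patch.
a = np.zeros((10, 10))
b = a.copy()
a[4:7, 4:7] = 1.0
mask = waterdrop_mask(a, b)
```

With an identity homography the warp is a no-op, which makes the registration step easy to sanity-check.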
Many communication aid systems have been developed for severely physically handicapped people such as ALS patients. The eye-gaze input system is being studied as an interface for a communication aid that allows a computer to be operated by eye movement alone. Conventional eye-gaze input systems, however, can harm users because they irradiate the eyes with infrared rays to detect the eye-gaze. Prolonged irradiation could potentially cause eye damage such as glassblower's cataract, retinitis solaris, or dry eyes. We propose an eye-gaze input system that uses a personal computer and a home video camera to detect the eye-gaze under natural light. The system detects the eye-gaze by image analysis based on the limbus tracking method, and it requires no special image processing units or sensors. The system is safe, compact, and easy to use.
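Limbus tracking exploits the high-contrast boundary between the dark iris and the white sclera, which survives under natural light. A crude stand-in for the paper's image analysis is to threshold the eye region and take the centroid of the dark pixels as the iris position; the threshold value and function name below are assumptions for illustration only.

```python
import numpy as np

def iris_center(gray, thresh=0.3):
    # Estimate the iris center as the centroid of dark pixels in a
    # normalized grayscale eye region (0 = black, 1 = white). The
    # centroid shifts as the iris moves, giving a gaze estimate.
    ys, xs = np.nonzero(gray < thresh)
    if len(xs) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Toy eye region: white sclera with a dark 5x5 "iris" centered at (10, 10).
eye = np.ones((20, 20))
eye[8:13, 8:13] = 0.0
center = iris_center(eye)
```

A real system would additionally calibrate the mapping from centroid displacement to screen coordinates per user.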