This paper describes an interactive user-interface method for handling and modeling virtual objects based on a user's finger motions, together with a prototype system consisting of a binocular stereo display, a set of TV cameras, and two workstations. One workstation captures images of the user's fingers from the cameras and estimates the three-dimensional finger motions. Based on the detected finger instructions, the other workstation interactively controls the three-dimensional motions and the material and lighting properties of the objects' geometric models, and presents them to the user through the binocular display. Experimental results obtained with the system are also presented.
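One core step the vision workstation must perform is recovering a three-dimensional fingertip position from a pair of camera images. The abstract does not specify the estimation method; the sketch below assumes a rectified parallel-stereo rig (hypothetical focal length `f` in pixels and baseline `b` in mm), so depth follows directly from disparity.

```python
# Minimal stereo-triangulation sketch (assumed setup, not the paper's method):
# two rectified parallel cameras, focal length f (pixels), baseline b (mm).

def triangulate(xl, yl, xr, f=800.0, b=60.0):
    """Return (X, Y, Z) in mm from matched points (xl, yl) and (xr, yl)."""
    d = xl - xr                 # disparity in pixels (left minus right)
    if d <= 0:
        raise ValueError("point must lie in front of both cameras")
    Z = f * b / d               # depth from similar triangles
    return (xl * Z / f, yl * Z / f, Z)

# A fingertip seen at x=120 px in the left image and x=80 px in the right:
X, Y, Z = triangulate(120.0, 40.0, 80.0)
# Z = 800 * 60 / 40 = 1200 mm
```

In a full system, the matched fingertip coordinates would come from per-frame image processing, and the resulting 3-D trajectories would be sent to the second workstation for model control.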
Perceptual segmentation with illusory stratification was studied using silhouettes that admit competing perceptual segmentations. A condition for perceptual segmentation, and the boundary condition between competing segmentations in stratification, were clarified. For part of a figure to be segmented as the frontal part, the figure must contain edges that face each other or connect smoothly to each other. The relative length of the illusory contours perceived in competing segmentations determined which segment was perceived as the frontal part. The apparent length of an illusory contour was affected by the presence of parallel illusory contours and by the orientation of the parallel lines constituting the stimulus pattern. The two competing perceptual segmentations were in balance when the length of the parallel illusory contours was 4/5 of the length of the illusory contours perpendicular to them.
We propose a new volume rendering technique called cell projection, which combines forward and backward projection. It renders images of the same quality as ray casting. Non-empty cells are forward-projected onto the projection plane; from each pixel within the projected region of a cell, a ray is back-projected into the cell, and the resulting intersections are inserted, sorted in depth order, into an intersection list map. Using this intersection map instead of traditional ray casting, we can accelerate the rendering of translucent layered structures present in a huge volume dataset.
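The pipeline above can be sketched as follows. This is a toy illustration under assumed simplifications (orthographic view along +z, cells as axis-aligned boxes with a single color and opacity), not the paper's implementation: each non-empty cell is forward-projected to find the pixels it covers, each covered pixel back-projects a ray through the cell, the intersection depth is inserted in depth order into that pixel's list, and the lists are composited front to back.

```python
import bisect

WIDTH, HEIGHT = 4, 4

# Toy non-empty cells: (xmin, xmax, ymin, ymax, zmin, zmax, color, opacity)
cells = [
    (0, 2, 0, 2, 1.0, 2.0, 0.9, 0.5),   # nearer, bright, semi-transparent
    (1, 3, 1, 3, 3.0, 4.0, 0.2, 1.0),   # farther, dark, opaque
]

# One depth-sorted intersection list per pixel: entries are (depth, color, opacity)
intersection_map = [[[] for _ in range(WIDTH)] for _ in range(HEIGHT)]

for xmin, xmax, ymin, ymax, zmin, zmax, color, alpha in cells:
    # Forward projection: the cell covers this rectangle of pixels.
    for py in range(ymin, ymax):
        for px in range(xmin, xmax):
            # Back-projection: the ray through (px, py) enters the cell at zmin;
            # insort keeps each pixel's list sorted by depth.
            bisect.insort(intersection_map[py][px], (zmin, color, alpha))

def composite(pixel_list):
    """Front-to-back compositing over one pixel's sorted intersection list."""
    result, transparency = 0.0, 1.0
    for _, color, alpha in pixel_list:
        result += transparency * alpha * color
        transparency *= 1.0 - alpha
        if transparency < 1e-3:          # early termination behind opaque cells
            break
    return result

image = [[composite(intersection_map[y][x]) for x in range(WIDTH)]
         for y in range(HEIGHT)]
```

Because each pixel's intersections arrive already sorted by depth, translucent layers composite correctly without re-traversing the whole volume per ray, which is the source of the speedup claimed over traditional ray casting.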