We describe a method for building 3D models of a dynamic human body from multi-viewpoint images, together with a system for rendering the resulting dynamic 3D models in real time. The modeling method reconstructs an accurate and stable 3D shape of the human body using stereo matching supported by the regional surface features of the visual hull. To display the generated 3D model in real time from arbitrary viewpoints, we developed a real-time rendering system that uses a view-dependent polygon texture mapping method. Using the proposed modeling method and rendering system, we experimentally rendered 3D models of a Noh performer, and these experiments confirmed that the system can display 3D models in real time from arbitrary viewpoints. Finally, comparisons with the results of other methods demonstrated the effectiveness of the proposed modeling method and rendering system.
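The core of view-dependent texture mapping is choosing, for the current rendering viewpoint, how strongly to blend each source camera's texture. A minimal Python sketch of one common weighting scheme (a clamped-cosine similarity between viewing directions; the function name and normalisation are illustrative assumptions, not the authors' implementation):

```python
def camera_weights(view_dir, cam_dirs):
    """View-dependent blending weights: each camera's texture is weighted by
    how closely its (unit) viewing direction matches the current (unit)
    rendering direction, using a cosine clamped at zero, then normalised so
    the weights sum to 1.  Cameras facing away contribute nothing."""
    weights = []
    for d in cam_dirs:
        dot = sum(a * b for a, b in zip(view_dir, d))  # cosine of the angle
        weights.append(max(0.0, dot))                  # ignore back-facing cameras
    total = sum(weights) or 1.0                        # avoid division by zero
    return [w / total for w in weights]
```

With the rendering direction equal to one camera's direction and the others orthogonal or opposite, that camera receives all the weight, which is the behaviour view-dependent mapping relies on.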
Recently, simulating natural phenomena has become one of the most important research areas in computer graphics. We focus on the visual simulation of fire. First, we propose an interactive method for simulating the motion of fire that allows the user to control its shape and movement; this is achieved by combining cellular automata with particle systems. Second, we propose a fast rendering method, based on wavelets, for surrounding objects illuminated by the fire. The method takes into account not only direct but also indirect light, quickly computes the dynamic intensity, and renders shadows for objects illuminated by the fire. Using this method, realistic images of a scene containing fire can be generated in real time.
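The cellular-automaton/particle combination can be sketched minimally in Python: a grid rule makes heat rise and cool (the controllable bulk of the flame), while particles model individual sparks. The update rule, cooling constants, and class names here are illustrative assumptions, not the paper's actual method:

```python
import random

def step_fire(grid, cooling=3):
    """One cellular-automaton step: each cell becomes the average of the
    three cells below it minus a random cooling term, so heat rises and
    decays.  The bottom row acts as the heat source and is kept unchanged."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    new[h - 1] = grid[h - 1][:]                       # heat source row
    for y in range(h - 1):
        for x in range(w):
            below = grid[y + 1]
            avg = (below[x] + below[(x - 1) % w] + below[(x + 1) % w]) // 3
            new[y][x] = max(0, avg - random.randint(0, cooling))
    return new

class Spark:
    """Minimal particle: rises from a hot cell and dies after a fixed life."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vy = -1.0                                # moves upward
        self.life = 10

    def update(self):
        self.y += self.vy
        self.life -= 1
        return self.life > 0                          # False once expired
```

Seeding the bottom row with a high heat value and iterating `step_fire` produces the rising, decaying profile; user control comes from editing the source row or spawning `Spark` particles at chosen cells.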
Web3D techniques for embedding 3DCG in the content of HTML pages have made remarkable progress since VRML appeared in 1995. Many Web3D systems rely on the fixed shaders of the graphics API for real-time 3DCG rendering, which makes it difficult to simulate global illumination. We propose a solution to this global illumination problem that uses a Web3D environment map in our proprietary Web3D format, '3DX', together with the GPU shading language.
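The environment-map approach approximates global illumination by looking up incoming light along a reflected direction. A Python sketch of the two per-pixel steps a GPU shader would perform (the latitude-longitude parameterisation and function names are illustrative assumptions about one common mapping, not the '3DX' format's actual scheme):

```python
import math

def reflect(d, n):
    """Reflect incident direction d about unit surface normal n:
    r = d - 2 (d . n) n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def env_lookup(direction, env, width, height):
    """Latitude-longitude environment-map lookup: convert a unit direction
    to (u, v) texture coordinates and fetch the nearest texel.  This is the
    per-pixel work a GPU shading language performs for environment-mapped
    global illumination."""
    x, y, z = direction
    u = math.atan2(x, -z) / (2 * math.pi) + 0.5       # longitude -> [0, 1]
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi   # latitude  -> [0, 1]
    i = min(width - 1, int(u * width))
    j = min(height - 1, int(v * height))
    return env[j][i]
```

In a fixed-shader pipeline neither step can be customised per pixel, which is why a programmable GPU shading language is needed for this technique.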
Motion segmentation of a dynamic 3D mesh is presented. A dynamic 3D mesh is a sequence of 3D models made from a real-world dynamic object. Since the 3D models are generated independently of their neighboring frames, motion tracking by establishing vertex correspondences between models is very difficult. Therefore, a feature-vector-based analysis of the degree of motion is used. For this purpose, a modified shape distribution algorithm has been developed: representative points are generated by clustering vertices according to their spatial distribution, instead of sampling vertices randomly as in the original shape distribution algorithm. Motion segmentation is performed by searching for local minima in the degree of motion computed in the feature vector space. A simple verification process is also presented. Experiments using a dynamic 3D mesh of traditional dances demonstrated highly accurate motion segmentation, with precision and recall rates of 92% and 87%, respectively.
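The pipeline above — cluster vertices into representative points, build a shape-distribution feature per frame, then find local minima of the frame-to-frame motion degree — can be sketched in Python. The use of k-means, the sorted-pairwise-distance feature, and all parameter values are illustrative assumptions, not the paper's exact algorithm:

```python
import math
import random

def representative_points(vertices, k=4, iters=10):
    """Cluster 3D vertices with k-means and use the centroids as
    representative points (replacing the random vertex sampling of the
    original shape distribution algorithm)."""
    centers = random.sample(vertices, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vertices:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(v, centers[c])))
            clusters[nearest].append(v)
        centers = [tuple(sum(p[d] for p in c) / len(c) for d in range(3))
                   if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def feature_vector(reps):
    """Shape-distribution-style feature: sorted pairwise distances between
    representative points (invariant to translation and rotation, so no
    inter-frame vertex correspondence is needed)."""
    return sorted(math.dist(p, q)
                  for i, p in enumerate(reps) for q in reps[i + 1:])

def motion_boundaries(features):
    """Degree of motion = distance between consecutive frames' feature
    vectors; segment boundaries are the local minima of that degree."""
    motion = [math.dist(a, b) for a, b in zip(features, features[1:])]
    return [i for i in range(1, len(motion) - 1)
            if motion[i] < motion[i - 1] and motion[i] < motion[i + 1]]
```

The translation invariance of `feature_vector` is the key property: two independently reconstructed meshes of the same pose yield similar features even though their vertices do not correspond.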
Digital 3D models of historic buildings and cultural heritage objects are useful for preservation. Such models can not only be stored permanently but also supply clear guidelines for the restoration process. They also provide sufficient information about geometric characteristics to support inspecting and classifying objects. The Bayon temple, which consists of 52 towers, is one of the best-known buildings of the Angkor monument in Cambodia; it is famous for its towers bearing four faces at the four cardinal points. According to research by the Japanese government team for Safeguarding Angkor (JSA), the faces can be classified into three groups, Deva, Devata, and Asura, based on subjective criteria. We demonstrate a more objective way to classify the faces using 3D geometric models measured with a laser range sensor.
We developed a simple electroholography system consisting of a graphics processing unit (GPU), a technology that has been advancing rapidly in recent years, and a PC projector containing minute reflective liquid crystal display (LCD) panels. The structure of the GPU is well suited to calculating a computer-generated hologram (CGH); its calculation speed is approximately 500 times faster than that of a central processing unit (CPU) alone. We succeeded in reproducing a video of an object consisting of 256 points at about 20 frames/s, so reconstruction at video rate (real time) was almost completely achieved. Moreover, we used the PC projector, whose minute reflective LCD panels form part of the reconstruction optical system; three LCD panels for the RGB colors are built into the projector, allowing color holographic reconstruction to be performed easily. This system, consisting of a GPU and a projector, is simple to build and is therefore expected to promote research in this field.
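The structure that makes a GPU so effective here is that every hologram pixel is an independent sum over the object points. A plain-Python (CPU) sketch of a point-source CGH — the pixel pitch, wavelength, and phase model are illustrative assumptions, not the system's actual parameters:

```python
import math

def compute_cgh(points, width, height, pitch=1e-5, wavelength=633e-9):
    """Point-source computer-generated hologram: each hologram pixel sums
    the phase contribution cos(k * r) from every object point, where r is
    the pixel-to-point distance and k = 2*pi/wavelength.  Each pixel is
    independent of all others, so on a GPU one thread per pixel computes
    this loop in parallel -- the source of the large CPU-to-GPU speed-up."""
    k = 2 * math.pi / wavelength
    holo = [[0.0] * width for _ in range(height)]
    for yj in range(height):
        for xi in range(width):
            # pixel position on the hologram plane, centred at the origin
            px = (xi - (width - 1) / 2) * pitch
            py = (yj - (height - 1) / 2) * pitch
            s = 0.0
            for (ox, oy, oz) in points:
                r = math.sqrt((px - ox) ** 2 + (py - oy) ** 2 + oz ** 2)
                s += math.cos(k * r)
            holo[yj][xi] = s
    return holo
```

For a single on-axis point the fringe pattern is symmetric about the hologram centre, which is a quick sanity check on the geometry.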
Conventional grating images require high luminance in the reproduced image. In the color representation of conventional grating images, the red, green, and blue spectra are fixed. We propose a color representation method for grating images that uses the two spectra that appear brightest to the human visual system. We verified the method by theoretical analysis and experiments; the results show that an image reconstructed with our method is 1.2 times brighter than one using the conventional RGB method.
Computer user interfaces that utilize the user's gaze have been investigated recently. These methods may enable us to operate computers without a mouse and improve the accessibility of computers for users with hand disabilities. However, gaze-based methods also have a difficulty: eye movements often include involuntary ones, so users need to concentrate on controlling their eye movement. We propose a user-interface system that enables the user to control a computer without using any pointing device. We discuss a way of detecting three-dimensional eye locations using first Purkinje images, which are images reflected on the corneal surfaces of the eyes. Furthermore, we demonstrate a prototype system that utilizes this eye-location detection method and enables the user to operate commercial applications using only facial movements, without a mouse. Finally, we present our experimental results and evaluate the method.
In the field of restoring lost pixels in images, a method based on optical flow has been proposed. This method can accurately restore lost pixels when edge directions change or when multiple edges exist within the lost region. However, when the continuity of intensity between a lost pixel and its neighboring pixels is not satisfied, the method cannot restore the pixels accurately. To solve this problem, we propose a method based on the similarity between the local region around a lost pixel and its neighboring regions.
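A region-similarity approach of this kind can be sketched simply: compare the known pixels around the lost one against candidate patches in a search window, and copy the centre of the best match. The patch size, search radius, and SSD similarity measure below are illustrative assumptions, not the paper's actual formulation:

```python
def restore_pixel(img, x, y, patch=1, search=2):
    """Restore the lost pixel img[y][x] (marked as None) by finding, within
    a surrounding search window, the patch whose known pixels best match
    the local region around (x, y) under mean squared difference, then
    copying that patch's centre value.  Unknown pixels are skipped in the
    comparison, so intensity continuity at the lost pixel is not required."""
    h, w = len(img), len(img[0])
    offsets = [(dx, dy) for dy in range(-patch, patch + 1)
                        for dx in range(-patch, patch + 1) if (dx, dy) != (0, 0)]
    best_value, best_score = None, float('inf')
    for cy in range(max(patch, y - search), min(h - patch, y + search + 1)):
        for cx in range(max(patch, x - search), min(w - patch, x + search + 1)):
            if (cx, cy) == (x, y) or img[cy][cx] is None:
                continue                                # candidate centre must be known
            ssd = count = 0
            for dx, dy in offsets:
                a = img[y + dy][x + dx]                 # local region pixel
                b = img[cy + dy][cx + dx]               # candidate patch pixel
                if a is None or b is None:
                    continue                            # ignore unknown pixels
                ssd += (a - b) ** 2
                count += 1
            if count and ssd / count < best_score:
                best_score, best_value = ssd / count, img[cy][cx]
    return best_value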
A broadcasting system using a stratospheric platform, an air vehicle capable of remaining stationary at an altitude of 20 km, has been studied. Such a system has a wider line-of-sight area than a terrestrial system and a shorter propagation distance than a satellite system. However, such air vehicles have not yet been realized. We present an experiment in which a broadcasting signal was transmitted from a stationary airship at an altitude of 4 km. The service area was defined as the area where the angle of elevation to the airship exceeded 10 degrees. A helical antenna was developed to provide higher gain in the oblique direction so that power would be distributed uniformly. Received power was measured, and reception with off-the-shelf receivers was verified in two flight tests. Stable reception was confirmed at several receiving points in the service area, despite instability immediately below the airship.
The induction field in vision is the psychological potential field that affects the appearance of a figure. The field is typically measured using the upper limen of the method of limits, but the measurement results are unstable at low luminance. We investigated the effect of negative- and positive-type images on the field by using the method of limits and measuring both the upper and lower limens. We determined that negative- and positive-type images affect the field, and we confirmed the effect of the induction field in vision on the lower limen, which suggests that the method of limits is required to measure the induction field in vision.
Conventional methods of dividing peaks in histograms require the number of peaks to be known in advance. By using weighted sequential fuzzy cluster extraction, with its condensation redefined for dividing peaks in histograms, we can divide the peaks even when their number is unknown.
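The sequential-extraction idea — find the heaviest cluster, remove the histogram mass it explains, repeat until little mass remains — is what makes the peak count unnecessary. A Python sketch under stated assumptions (the Gaussian membership standing in for the redefined condensation, and the stopping ratio, are my illustrative choices, not the paper's definitions):

```python
import math

def extract_peaks(hist, sigma=2.0, stop_ratio=0.1):
    """Sequential fuzzy cluster extraction on a histogram: repeatedly take
    the heaviest remaining bin as a cluster centre, compute a fuzzy
    membership that decays with distance from that centre, and subtract the
    explained weight from each bin.  Extraction stops when the remaining
    mass falls below a fraction of the total, so the number of peaks does
    not need to be known in advance."""
    w = list(map(float, hist))
    total = sum(w)
    peaks = []
    while sum(w) > stop_ratio * total and len(peaks) < len(hist):
        c = max(range(len(w)), key=lambda i: w[i])     # heaviest remaining bin
        peaks.append(c)
        for i in range(len(w)):
            mu = math.exp(-((i - c) ** 2) / (2 * sigma ** 2))  # fuzzy membership
            w[i] -= mu * w[i]                          # remove explained weight
    return peaks
```

On a bimodal histogram the two true modes are extracted first and the residual mass then drops below the stopping threshold, so the method terminates with exactly the dominant peaks.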