Personal video sharing services such as YouTube have become popular because videos can easily be recorded in high definition (HD) using a personal camcorder. However, it is difficult to broadcast an HD video via the Internet due to the large amount of data involved. We describe a method for generating videos with virtual camerawork based on object tracking technology. Once the user specifies the positions of the region of interest (ROI) on keyframes, our method generates virtual camerawork between two consecutive keyframes based on the results of bi-directional tracking. We evaluated our method with subjective experiments that demonstrate its effectiveness.
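The abstract does not give the blending rule, but bi-directional tracking between two keyframes is commonly combined by weighting each track by its distance from the keyframe it started at. The following is a minimal sketch under that assumption; the function name, the (x, y, w, h) ROI representation, and the linear weighting are all illustrative, not the authors' actual method.

```python
def blend_bidirectional_tracks(fwd_track, bwd_track):
    """Blend forward- and backward-tracked ROI rectangles between two keyframes.

    fwd_track / bwd_track: lists of (x, y, w, h) ROI rectangles for the same
    frames, obtained by tracking forward from the first keyframe and backward
    from the second.  Frames closer to a keyframe trust the track that was
    initialized at that keyframe more (simple linear weighting).
    """
    n = len(fwd_track)
    blended = []
    for i in range(n):
        # a goes from 0.0 at the first keyframe to 1.0 at the second.
        a = i / (n - 1) if n > 1 else 0.0
        roi = tuple((1.0 - a) * f + a * b
                    for f, b in zip(fwd_track[i], bwd_track[i]))
        blended.append(roi)
    return blended
```

The blended ROI per frame can then drive a virtual pan/zoom crop of the HD source, so only the cropped region needs to be transmitted.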
Automatic generation of character behavior by giving motion data to objects
Recently, designing virtual spaces with high-quality three-dimensional CG has become possible due to rapid improvements in computer performance. To generate a character's behavior, motion data must be applied to a CG character after being created with a motion capture device or by hand. However, a new series of motion data must be made whenever the scene changes, and because this is complicated work for a creator, production costs have increased. We have therefore developed a technique for automatically generating character motion data by giving motion information to the objects that compose the scene of a virtual space. Thus, each object includes the motion data that cause the characters to act.
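One way to read "each object includes the motion data that cause the characters to act" is that a scene object stores motion clips keyed by action, and a character retrieves the clip from the object it interacts with rather than carrying its own per-scene animations. The sketch below illustrates that idea only; the class and method names, and the representation of a clip as a list of step labels, are assumptions, not the paper's implementation.

```python
class SceneObject:
    """A scene object that carries the motion data characters need to use it."""

    def __init__(self, name, motions):
        self.name = name
        # Map of action name -> motion clip (here just a list of step labels).
        self.motions = motions

    def motion_for(self, action):
        return self.motions.get(action)


class Character:
    """A character with no built-in clips; it acquires motion from objects."""

    def __init__(self, name):
        self.name = name
        self.performed = []

    def interact(self, obj, action):
        clip = obj.motion_for(action)
        if clip is None:
            raise ValueError(f"{obj.name} has no motion data for {action!r}")
        self.performed.append((obj.name, action))
        return clip
```

With this split, swapping in a new scene means authoring motion data once per object; every character placed in the scene can act on those objects without new per-character animation work.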
In order to study the application of event-related potentials (ERP) to picture quality evaluation, ERP was measured while subjects viewed sharp and blurred still pictures and subjectively evaluated them on a two-grade quality scale: “Good” and “Bad”. An appropriate task for the evaluation was also discussed. The results showed that large P300 amplitudes of similar magnitude appeared for both “Good” and “Bad” opinions, which indicates that a scale with two opposing poles is not suitable.