In 3D character animation, the frame rate is often reduced to mimic hand-drawn animation such as anime (Japanese animation). Hand-drawn animation omits motions and emphasizes speed to express movement with a small number of drawings, so sampling real motion at equal intervals does not produce the same effect as anime-like motion. This paper therefore proposes MoNACA (Motion transfer by Nakawari Adaptation for Cel-anime Articular trajectories), a system that converts motion capture data into anime-like motion. MoNACA determines nakawari (in-between) poses by refining the trajectory curves of the motion capture data and selecting a set of segmentation points, taking into account both the motion speed between each pair of consecutive keyposes and pose redundancy. Furthermore, it omits minute rotations to reproduce the flatness of anime and automatically locates the viewpoint that best enhances the visibility of the motion. Evaluation experiments on the transformed motions and the system's functionality empirically showed that the proposed system achieves more appealing motion transformations than existing methods.
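As a rough illustration of the keypose-selection idea, the sketch below picks segmentation points on a sampled joint trajectory where the speed changes sharply and prunes points that are redundant (too close to the previous keypose). The thresholds and criteria here are hypothetical stand-ins; the paper's actual nakawari-selection rules are more elaborate.

```python
import numpy as np

def select_keyposes(trajectory, dt, speed_thresh=0.5, redundancy_thresh=0.05):
    """Hypothetical speed/redundancy-based segmentation-point selection.

    `trajectory` is an (N, D) array of joint positions sampled every `dt`
    seconds. Returns the indices chosen as keyposes.
    """
    # Per-frame speed along the trajectory curve
    speeds = np.linalg.norm(np.diff(trajectory, axis=0), axis=1) / dt
    keys = [0]
    for i in range(1, len(trajectory) - 1):
        # Candidate keypose where the speed changes sharply
        if abs(speeds[i] - speeds[i - 1]) > speed_thresh:
            # Skip it if it is redundant, i.e. too close to the last keypose
            if np.linalg.norm(trajectory[i] - trajectory[keys[-1]]) > redundancy_thresh:
                keys.append(i)
    keys.append(len(trajectory) - 1)
    return keys
```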
Today, collision detection between rigid bodies in computer games often uses bounding volumes, which allow for fast decisions. However, because rigid bodies move in discrete time steps in a computer, they can tunnel through each other without a collision being detected when their velocities are very large. Various CCD (continuous collision detection) methods have been proposed to address this problem and are used in FPS (first-person shooter) games that handle fast-moving rigid bodies such as bullets; examples include the Sweep-based CCD and Speculative CCD methods used in Unity. However, Sweep-based CCD ignores angular motion and therefore does not support rotation, while Speculative CCD has low detection accuracy for rotational motion. To solve this problem, this study interpolates the motion trajectory using a fan shape, a shape that approximates the trajectory swept by a rotating rectangular body, to improve the accuracy of collision detection for rotating rigid bodies. This paper proposes a collision detection method between a fan shape and a primitive shape, presents verification results on whether interpolating the rotational trajectory with the fan shape improves detection accuracy, and reports the execution speed of the proposed method.
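To make the geometric step concrete, here is a minimal 2D sketch of one such primitive test, a fan shape (circular sector) against a circle; the sector stands in for the region swept by a rotating body. This is a standard sector-circle test under assumed conventions, not the paper's exact formulation.

```python
import math

def sector_circle_hit(apex, direction, radius, half_angle, center, r_circle):
    """True if a circular sector and a circle overlap (2D).

    `apex` is the sector's pivot, `direction` its bisector angle in
    radians (assumed in [-pi, pi]), `radius` its sweep radius, and
    `half_angle` half of its opening angle.
    """
    vx, vy = center[0] - apex[0], center[1] - apex[1]
    dist = math.hypot(vx, vy)
    if dist > radius + r_circle:          # too far from the pivot
        return False
    if dist <= r_circle:                  # circle swallows the pivot
        return True
    # Angle between the bisector and the vector to the circle center
    ang = abs(math.atan2(vy, vx) - direction)
    ang = min(ang, 2 * math.pi - ang)
    if ang <= half_angle:                 # center lies in the angular span
        return True
    # Otherwise the closest feature is one of the two straight edges
    def dist_to_edge(edge_angle):
        ex, ey = math.cos(edge_angle), math.sin(edge_angle)
        t = max(0.0, min(radius, vx * ex + vy * ey))  # clamped projection
        return math.hypot(vx - t * ex, vy - t * ey)
    return min(dist_to_edge(direction - half_angle),
               dist_to_edge(direction + half_angle)) <= r_circle
```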
Cartoons, animations, and video games contain visual effects known as energy waves, which represent a moving mass of energy accompanied by strong luminescence. The authors have previously defined analytically integrable energy distributions for primitive shapes and used numerical integration for energy distributions along quadratic curves and circular shapes. In this study, we propose a method to represent distributions along shapes such as trigonometric and parametric curves as a single function that can be rendered quickly, by establishing two correspondences between the parameter used in the line integral and the parameter of the explicit curve. We constructed an integral function for spiral curves and B-spline curves and evaluated it with the composite Simpson's rule to maintain real-time rendering speed.
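As an illustration of the numerical step, the sketch below evaluates a line integral of an energy density along an Archimedean spiral using the composite Simpson's rule; the spiral and the density function are assumed stand-ins for the paper's actual integral functions.

```python
import math

def composite_simpson(f, a, b, n=64):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def spiral_line_integral(density, turns=2.0, growth=0.1, n=64):
    """Line integral of `density` along the spiral c(t) = (g*t*cos t, g*t*sin t).

    The arc-length factor is |c'(t)| = g * sqrt(1 + t^2).
    """
    def integrand(t):
        speed = growth * math.sqrt(1.0 + t * t)
        return density(growth * t * math.cos(t), growth * t * math.sin(t)) * speed
    return composite_simpson(integrand, 0.0, 2 * math.pi * turns, n)

# Example: total energy of a Gaussian density along two spiral turns
energy = spiral_line_integral(lambda x, y: math.exp(-(x * x + y * y)))
```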
This paper presents a new visual phenomenon in which the shape of a whole animal is created by optical illusion when a physical 3D object representing the head half or the tail half of an animal is placed in contact with a mirror, and the resulting whole animal makes a U-turn in a second mirror perpendicular to the first. Using two such half-body objects, we can create a circular parade of four animals that uniformly face clockwise or counterclockwise. The mathematics and design principles of this phenomenon are shown with examples, and some extensions are also discussed.
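The U-turn effect rests on the standard fact that reflections in two perpendicular mirrors compose to a 180° rotation; for mirrors in the planes $x=0$ and $y=0$:

$$
\begin{pmatrix}-1&0&0\\0&1&0\\0&0&1\end{pmatrix}
\begin{pmatrix}1&0&0\\0&-1&0\\0&0&1\end{pmatrix}
=
\begin{pmatrix}-1&0&0\\0&-1&0\\0&0&1\end{pmatrix},
$$

a rotation by $\pi$ about the $z$-axis, so the doubly mirrored animal appears to have turned around and to continue the parade.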
In this paper, we propose a support system for generating animated picture-book storytelling videos. The proposed system consists of two parts, a story generation part and an animation generation part, and generates a picture-book animation interactively with the user. In the story generation part, the user inputs the title of the picture book and the number of pages, and the system automatically generates the sentences; the user then selects one of four voice types, and a narration reading the text is automatically generated. The animation generation part automatically generates a background image and the image objects that appear in the picture book based on the text input by the user; the user can easily create the animation by selecting the animation to be generated and clicking in the background image. With these two generators, users can create animated picture-book videos through simple operations, without having to prepare the text and pictures themselves. Evaluation experiments confirmed that the proposed system generates grammatically and contextually well-organized sentences typical of picture books, and that it also generates animations that are consistent with the content of the sentences and enjoyable for readers.
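A minimal orchestration sketch of the two-part pipeline, assuming hypothetical `generate_text`, `synthesize`, and `generate_image` callables in place of the paper's actual story, voice, and image generators:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Page:
    text: str
    narration: Optional[bytes] = None    # synthesized speech for this page
    background: Optional[object] = None  # generated background image
    objects: list = field(default_factory=list)  # image objects placed by clicks

def build_picture_book(title, num_pages, voice,
                       generate_text, synthesize, generate_image):
    """Run the story generation part, then prepare animation assets per page."""
    pages = []
    for i in range(num_pages):
        # Story generation: condition each page on the title and prior pages
        text = generate_text(title, page=i, previous=[p.text for p in pages])
        page = Page(text=text)
        page.narration = synthesize(text, voice=voice)          # one of four voices
        page.background = generate_image(text, kind="background")
        pages.append(page)
    return pages
```

The interactive placement step (selecting an animation and clicking a position in the background) would then attach entries to each page's `objects` list.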