-
Nao NOZAWA, Shuhei KODAMA, Tomoaki MORIYA, Tokiichiro TAKAHASHI
Session ID: 16-04-51
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A foot reflexology mat is a tool that stimulates the appropriate areas of the foot used in reflexology. However, such mats are designed to fit an average foot shape, and therefore do not fit people with large or flat feet. In this research, we propose a method for generating a foot reflexology mat that fits an individual's foot shape. We conducted interviews and confirmed the promotion of blood flow by thermography, showing that the proposed method solves the problems of the existing tool.
-
Hideo TAGASHIRA, Akihiro MATSUURA
Session ID: 16-04-52
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this note, we aim at controlling the angular velocity of an object rotating around its central axis with an ultrasonic motor, and report first experimental results on the relationship between the magnitude of the applied voltage and the angular velocity of a round-type ultrasonic motor.
-
-immersive documentary-
Yuichi ISHIKAWA, Shogo HOTTA, Yuta KURIMARU, Satoshi MARUYAMA, Reiji ...
Session ID: 16-04-53
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This work is a documentary-based interactive VR game with high-quality visual images. The game depicts events of the early 20th century, with the VR stage set in a train compartment and at its window. To achieve a stronger sense of immersion and interaction, we use the HTC Vive for object manipulation and Unreal Engine 4 for real-time rendering.
-
Yutaka YONEKURA, Ryuta MOTEGI, Yoshihisa KANEMATSU, Naoya TSURUTA, Koj ...
Session ID: 16-04-54
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a simulation system for character emotion expression that includes visual effects. We extracted the visual effects used for characters' emotional expression from existing animation works and classified them into facial effects and screen effects. Our system generates typical examples of emotional expression from combinations of facial parts, facial effects, and screen effects. User experiments show that users can design a desired expression by referring to the simulation results.
-
Erika SAKAI, Akinori ITO, Takayuki ITOH
Session ID: 16-04-55
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Animation and games are very popular in Japan. Many animation and game works have been released, and many characters have been produced. Selecting appropriate voice actors is an important factor in making characters impressive. Based on this idea, we are developing a prototype of a voice actor recommendation tool. We recorded voice performances and calculated vocal acoustic features. We then conducted a questionnaire on impressions of the recorded performances and derived the relationship between the acoustic features and the impression values. When the impression values of new characters are specified, we can estimate appropriate acoustic features and consequently generate a list of suitable voice actors for those characters based on their acoustic features. In addition, we gathered documents describing these characters on the Web and calculated their co-occurrence with a natural language processing tool. Observing the list of voice actors together with the co-occurrence-based character map allows us to discuss how to assign voice actors to the game characters.
-
Daichi NAGANO, Keishi TAKASHIMA, Hidekazu TSUJIAI
Session ID: 16-04-56
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
I attempted to capture the time axis of painting expression by photographing a Japanese painting in 100 steps of its creation and projecting the resulting animation onto the original work.
-
Natsumi Hirata, Yosuke Nakamura, Koji Mikami, Kunio Kondo
Session ID: 16-04-57
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this research, based on a survey of Boredom Avoidance Elements (BAE) in role-playing games, we developed two kinds of games: one including BAE and one without them. We divided the test players into two groups with different playing conditions and compared the questionnaire results for the two games. As a result, it became clear how each BAE counteracts reduced motivation. Moreover, we found that players' impressions differ depending on the playing conditions.
-
Kazuki Sato, Ryuta Motegi, Naoya Tsuruta, Koji Mikami, Kunio Kondo
Session ID: 16-04-58
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Super deformed (SD) is a character style in which characters have oversized heads and short limbs. Demand for SD characters is increasing as icons and stickers on communication tools become popular. In this paper, we propose a system to support the production of SD character drafts. We collected illustrations of existing SD characters and classified them by shape and pose. With our system, the user can generate a 3D super deformed model by choosing and adjusting body parts and poses.
-
Tomoko NAKAMOTO, Reiji TSURUNO
Session ID: 16-04-59
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The human eye does not move randomly. When we face each other, our gaze moves continuously among the right eye, the left eye, and the midface. We generated an eye movement model for a virtual character facing the user, based on statistical data of eye movement recorded while a person gazes at another person's face, together with the user's face and facial region positions captured by a web camera. Using our method, we were able to generate natural eye movement for a facing virtual character that takes the user's face position into account.
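A minimal sketch of such a statistical gaze model, assuming a simple Markov chain over the three reported fixation targets; the transition probabilities below are illustrative placeholders, not the statistics measured in the paper:

```python
import random

# Illustrative transition probabilities between gaze targets on a face;
# the statistics actually measured in the paper are not reproduced here.
TRANSITIONS = {
    "right_eye": {"right_eye": 0.5, "left_eye": 0.3, "midface": 0.2},
    "left_eye":  {"right_eye": 0.3, "left_eye": 0.5, "midface": 0.2},
    "midface":   {"right_eye": 0.35, "left_eye": 0.35, "midface": 0.3},
}

def next_gaze(current, rng):
    """Sample the next gaze target given the current one."""
    targets, weights = zip(*TRANSITIONS[current].items())
    return rng.choices(targets, weights=weights, k=1)[0]

def gaze_sequence(start, n, seed=0):
    """Generate a reproducible sequence of n gaze targets starting at `start`."""
    rng = random.Random(seed)
    seq = [start]
    while len(seq) < n:
        seq.append(next_gaze(seq[-1], rng))
    return seq
```

In a full system, the sampled target would be offset by the user's face position detected from the web camera before driving the character's eyes.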
-
Nozomi HIJIKATA, Shuto SHIKAMA, Issei FUJISHIRO
Session ID: 16-04-60
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, many live-action films have been created based on comics and animations. The best casting is a key to the success of these works. In this research, a system called CoCoA has been developed, which takes the original character's information as input and recommends a list of candidate actors/actresses through matching of facial features and personality. The feasibility of the system is discussed on the basis of user evaluations.
-
Yuhei ONO, Yosuke NAKAMURA, Taichi WATANABE, Koji MIKAMI
Session ID: 16-04-61
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In the production of 3DCG character motion, it is important to pose and move characters with awareness of the centroid. Animation that ignores the centroid results in physically unnatural movement, and the viewer senses the discord. To address this problem, the authors propose a system that visualizes the centroid of a character during motion creation. We develop methods and systems that support the creation of both physically correct motions and intentionally exaggerated motions with less discomfort.
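The centroid such a system visualizes can be computed as the mass-weighted average of body-part positions. A minimal sketch, with part names and masses invented for illustration (the abstract does not specify the mass model used):

```python
def character_centroid(parts):
    """Mass-weighted average of body-part centers.
    parts: list of (mass, (x, y, z)); returns the centroid as (x, y, z)."""
    total = sum(mass for mass, _ in parts)
    return tuple(
        sum(mass * pos[axis] for mass, pos in parts) / total
        for axis in range(3)
    )

# Illustrative figure with one arm extended to the right:
# the centroid shifts toward that side.
parts = [
    (30.0, (0.0, 1.0, 0.0)),   # torso
    (8.0,  (0.0, 1.7, 0.0)),   # head
    (5.0,  (0.5, 1.2, 0.0)),   # extended right arm
    (5.0,  (-0.2, 1.2, 0.0)),  # left arm
    (10.0, (0.1, 0.5, 0.0)),   # right leg
    (10.0, (-0.1, 0.5, 0.0)),  # left leg
]
cx, cy, cz = character_centroid(parts)
```

Rendering this point each frame lets an animator see immediately when a pose drifts away from a physically plausible balance.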
-
Naoki IMAMURA, Yosuke NAKAMURA, Taichi WATANABE, Koji MIKAMI
Session ID: 16-04-62
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, more and more people draw illustrations using computers. In some cases, a 3D model is used as a reference for accuracy when drawing illustrations. However, posing a 3D model of a human character is troublesome because it requires operations different from hand drawing. Therefore, we propose a system to support the creation of illustration guides, which allows a 3D model of a human character to be posed through hand-drawing-like operations.
-
-Japanese Sub-Culture Audio Contents for Women After the 2000s-
China MIYAZONO, Akinori Ito, Ken’ichiro Ito
Session ID: 16-04-64
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this research, we analyzed the structure of recent romance-situation audio dramas for women, using the content structure analysis methods developed around the Arts and Sciences Society and methods from the field of film music and sound research. Although concepts such as photogénie and the acousmatic are difficult to apply because there is no image, we found that ideas from content structure analysis, such as the diegetic/non-diegetic distinction and the adjustment of logical consistency across different spatiotemporal frames, could still be used to carry out the analysis.
-
Daisuke KANNO, Kiyotaka KATSUMATA, Koji MIKAMI, Kunio KONDO
Session ID: 16-04-65
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
To create a scenario, many story ideas are needed. However, ideas that depend only on the experience and sensitivity of the scenario writer eventually run out. In this study, we propose a narrative composition tool based on an analysis of movies, together with a story simulation method, to support the idea generation of scenario writers.
-
Chiaki ONO, Yoshihisa KANEMATSU, Ryuta MOTEGI, Naoya TSURUTA, Koji MIK ...
Session ID: 16-04-66
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
An ensemble cast is a method of dramatic production in which all members of the cast have equal amounts of screen time and importance. The structure of an ensemble-cast story is complicated because it aims to emphasize the interconnectivity of characters by unifying different parallel plot lines, and existing story support methods cannot be applied. In this paper, we propose a system for supporting scriptwriting with an ensemble cast. We analyzed existing movies with and without ensemble casts to clarify the differences between them. Our system shows scene compositions extracted from existing movies that contain a given number of sub-stories and a given number of scenes. User experiments show that our system is useful for scriptwriting with an ensemble cast.
-
Jun HASHIDUME, Yoshihisa KANEMATSU, Koji MIKAMI, Kunio KONDO
Session ID: 16-04-67
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this research, we support the placement of foreshadowing in story plots. To that end, we extracted elements of foreshadowing from existing works and examined their arrangement and trends. In addition, we developed a foreshadowing data library, "Scrapbook for foreshadowing production support," that enables authors to search based on the analysis results. Evaluation experiments using this system showed that subjects were able to produce more complex foreshadowing.
-
Anna YAMADA, Tsukasa KIKUCHI
Session ID: 16-04-68
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Masked Rider is the generic name of the masked heroes appearing in a tokusatsu (special effects) hero program that has run from 1971 through 2017. One of its attractions is the transformation scene. In this study, I focus on the camera work of the transformation scenes and analyze it. From the results, I extract the characteristics of each rider, their similarities and differences, and the differences between the Showa-era and Heisei-era riders.
-
Hikaru YAMAGUCHI, Ryuta MOTEGI, Naoya TSURUTA, Koji MIKAMI, Kunio KOND ...
Session ID: 16-04-69
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Robot animation is a major genre in Japan, and most works feature battle scenes. The battle scene is one of the most attractive parts of a robot animation. However, producing these scenes depends strongly on the skills of 3D motion designers. In this paper, we propose a system which applies common motions to a 3D robot model. We extracted eight motions that frequently appear in the battle scenes of Japanese robot animations. User experiments show that our system can shorten the time required for producing robot animation motion.
-
Shouhei Miyagi, Yoshihisa Kanematsu , Koji Mikami , Kunio Kondo
Session ID: 16-04-70
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this research, we develop a struggle scene design support system that accumulates the techniques and know-how of existing animation. The system is used to support scene design when the director produces a storyboard or animatic. For that purpose, we selected struggle scenes from existing animation and investigated, shot by shot, the behavior of the characters that compose each fighting scene. Based on the shot behavior patterns clarified by this survey, the system presents candidates for the next shot of the action, thereby supporting the design of struggle scenes. Evaluation experiments showed that the system's shot-candidate presentation function is effective in designing struggle scenes.
-
Ami HONDA, Megumi KANEKO, Meeko KUWAHARA
Session ID: 16-04-71
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Recently, many smartphone applications have been released and used as a means of easily creating stop motion animation. In this paper, we study functions suitable for creating stop motion animation with a smartphone.
-
Haruki Kakesu, Akinori Ito, Ken’ichiro Ito
Session ID: 16-04-72
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Sounds of the kind called "atmospheric sound" in the field of video sound effects are also used to express the feelings and circumstances of characters. These occupy a middle range that is neither music nor ordinary sound effects, yet there is no established concept for them in production technique, and no systematic investigation or classification of their production methods has been carried out. In this research, focusing mainly on Japanese SF animation, we surveyed 12 titles and 83 scenes, mainly from content since the 2000s. As a result, we found that, as a visual expression technique, atmospheric sound is deeply related to various editing techniques, and that, as a sound production technique, sounds rich in dissonance and non-integer harmonics were abundantly used. Furthermore, compared with overseas content, the usage rate of electronic sound was remarkably high.
-
-You can make your own rap with synthesized voice and CG animation by writing a script-
Masaki HAYASHI, Yoshiaki Shishikui, Steven Bachelder, Masayuki NAKAJIM ...
Session ID: 16-04-73
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have made an application for creating rap music with CG animation by writing a simple script. AquesTalk and TVML are used for voice synthesis and real-time CG generation, respectively. A user can easily enjoy making a rap music video by writing speech texts and character movements along with the music beat in the script.
-
Satoshi Furukawa, Yosuke Umemoto, Misato Kawatsu, Keisuke Sakata, Hina ...
Session ID: 16-04-74
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
After-school care programs serve to provide a safe playground for children. However, due to staff shortages, most of the children's activity time is occupied by watching television and reading books. Hence, we developed several multiplayer interactive games that encourage children to exercise while playing. This paper reports the details of the developed games and some results of our experiments at two children's centers attached to elementary schools in Kumamoto.
-
-A Puzzle Application for Color Distribution and Deployment-
Shogo Hashimoto, Koki Takeda, Masaki Yamamoto, Hyuga Watanabe, Yuriko ...
Session ID: 16-04-75
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a puzzle application, "Mondrian Puzzle." Prior to developing the application, we analyzed the segmentation ratios and color distribution of Mondrian's "Composition" series through observation and replication. Mondrian composed these works by segmenting the area with horizontal and vertical lines that do not intersect or intersect only once, and then arranged certain specific colors in the resulting rectangular areas. We apply these features to create an iOS application with Mondrian-style puzzle patterns.
-
Wataru YOKOTA , Yasunari OBUCHI
Session ID: 16-04-76
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The hammering test, which discriminates objects from their sound signals, is one method for determining information about parts that cannot be seen. In this paper, we deal with the sounds of falling coins as an example of such sound classification. In conventional coin discrimination studies, humans have analyzed a single acoustic feature, but this approach makes it difficult to analyze many features accurately; large numbers of features must be analyzed precisely. Combining machine learning with sound signal analysis makes it possible to analyze many feature quantities more accurately. We therefore analyzed, using machine learning, the sounds of common five-yen and ten-yen coins falling onto various materials.
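As an illustration of the general approach (not the authors' actual feature set or learner), the sketch below extracts two spectral features from a sound and classifies it with a nearest-centroid rule, using synthetic tones standing in for coin recordings:

```python
import numpy as np

def spectral_features(signal, sr=16000):
    """Two simple spectral features (centroid and 85% rolloff) -- a small
    stand-in for the larger feature set a real classifier would use."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    centroid = np.sum(freqs * mag) / np.sum(mag)
    cum = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cum, 0.85 * cum[-1])]
    return np.array([centroid, rolloff])

def train_centroids(samples):
    """samples: {label: [signal, ...]} -> {label: mean feature vector}."""
    return {label: np.mean([spectral_features(s) for s in sigs], axis=0)
            for label, sigs in samples.items()}

def classify(signal, centroids):
    """Nearest-centroid classification in feature space."""
    feats = spectral_features(signal)
    return min(centroids, key=lambda label: np.linalg.norm(feats - centroids[label]))

# Synthetic stand-ins: a lower ring for a 5-yen coin, higher for a 10-yen coin.
sr = 16000
t = np.arange(sr // 4) / sr
five = [np.sin(2 * np.pi * 2000 * t) * np.exp(-8 * t)]
ten = [np.sin(2 * np.pi * 5000 * t) * np.exp(-8 * t)]
centroids = train_centroids({"5yen": five, "10yen": ten})
```

A real pipeline would record many drops per coin and surface material and use a richer feature set with a trained classifier instead of this toy rule.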
-
Rumiko Fukada, Akinori Ito, Ken’ichiro Ito
Session ID: 16-04-77
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Producing a school musical drama film requires planning the construction of the music, the lyrics, and the appropriate cinematography. This research clarified the structural relationship between the music and the video, as groundwork for an overall analytical method suited to interpretation. We carried out an association analysis using parameter groups that normalize the structure of the music and lyrics, based on basic music theory, riff, and rhyme from pop music analysis, and of the cinematography, meaning the photography and editing operations of film language.
-
Tomoko YAMAGUCHI , Yasunari OBUCHI
Session ID: 16-04-78
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A questionnaire survey using the SD method was conducted to find suitable animal sounds for dinosaurs. First, five adjective pairs were found to represent the characteristics of dinosaurs. Next, we asked the users to describe a dinosaur using those adjective pairs and, at the same time, to choose an animal sound suitable for the dinosaur. Based on the analysis of those results, we found a way to calculate, from the adjective scores, the probability with which a specific animal sound is chosen for the dinosaur.
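The reported calculation, from adjective scores to the probability of each animal sound being chosen, can be sketched as a distance-based softmax; the adjective profiles below are invented placeholders, not the survey data:

```python
import math

# Hypothetical mean SD-method profiles (five adjective-pair scores on a
# 1-7 scale) for candidate animal sounds; values invented for illustration.
PROFILES = {
    "lion":      [6.2, 5.8, 3.1, 5.5, 4.9],
    "elephant":  [6.8, 4.2, 2.5, 6.1, 3.8],
    "crocodile": [5.1, 6.5, 4.4, 4.8, 5.6],
}

def choice_probabilities(dino_scores, profiles=PROFILES):
    """Softmax over negative Euclidean distance: the closer an animal's
    profile is to the dinosaur's adjective scores, the higher its probability."""
    weights = {animal: math.exp(-math.dist(dino_scores, profile))
               for animal, profile in profiles.items()}
    total = sum(weights.values())
    return {animal: w / total for animal, w in weights.items()}
```

The softmax form guarantees the outputs sum to one, so they can be read directly as choice probabilities.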
-
Risa Sakurai, Akinori Ito, Ken’ichiro Ito
Session ID: 16-04-79
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In the study of game sound, technological research centered on the EC field, cognitive-psychological research centered on the UI field, and music research centered on BGM have been conducted, but the concept of sound design methods from the standpoint of expression is still underdeveloped. In this research, focusing mainly on Nintendo's main titles, we investigated sound design methods accompanying interactive gimmicks and conducted a content analysis of the core techniques and concepts. As a result, it became clear that the main design concepts include "extended realism of real-world sports," "problem-discovery sequential directing," "experience-curve-improving directing," and the like.
-
Nozomi MIDORIKAWA, Toru KANO, Yuriko TAKESHIMA
Session ID: 16-04-80
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, along with the digitization of music, it has become possible to carry a large number of songs on portable music players or smartphones. However, as the number of songs one can possess has increased, it has become difficult to select a song that matches one's mood and preference. In such cases, "visualization," which expresses music information with images, is useful for grasping the content of music in a short time and selecting a song efficiently. In this study, we propose a visualization method based on morphological analysis of lyrics and on acoustic features, so that the atmosphere of the music can be understood intuitively. As a result, the impression of a piece of music can be grasped visually, making it easier to select a song from a large collection.
-
Taito OTANI , Yasunari OBUCHI
Session ID: 16-04-81
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a multimodal performance system that controls, in real time, the effect parameters used in playing through voice input, and that feeds back color information according to the control values. This research aims to extend musical expression in live performance using information processing technology. The system extracts acoustic features such as volume and pitch from the voice input to switch effects and control effect parameters. We propose a method of multimodal parameter control in which color is fed back according to the operation result.
-
Maya ATSUKI, Kaya OKADA, Takayuki ITOH, Takanori OGATA, Yuta HINOKUMA
Session ID: 16-04-82
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Analysis of walkers' behavior in shops, offices, and public spaces is important for marketing and urban development, and visualization is a useful tool for this analysis. It is preferable to develop client-server systems to visualize huge human-walk datasets, and to allow multiple experts to visualize such datasets collaboratively. Based on these requirements, we present a Web-based visualization system for walkers' behavior. The system aggregates the walkers' paths on a Web server and returns them to clients. The server part is implemented in Python and deployed as a Web API, while the client-side visualization module runs in Web browsers, as it is developed with JavaScript and D3.js.
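The server-side aggregation step might look like the following sketch, which bins walker path points into grid cells and flattens the counts into rows a D3.js client could render; the function names and grid representation are assumptions, not the system's actual API:

```python
from collections import Counter

def aggregate_paths(paths, cell=1.0):
    """Bin walker path points into square grid cells and count visits.
    paths: iterable of [(x, y), ...]; returns {(ix, iy): count}."""
    counts = Counter()
    for path in paths:
        for x, y in path:
            counts[(int(x // cell), int(y // cell))] += 1
    return dict(counts)

def to_rows(counts):
    """Flatten the counts into records a D3.js client could render."""
    return [{"ix": ix, "iy": iy, "count": c}
            for (ix, iy), c in sorted(counts.items())]
```

Aggregating on the server keeps the payload sent to each browser proportional to the grid size rather than to the raw number of path points.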
-
-Extension to Virtual Reality Space-
Shoko SAWADA, Takayuki ITOH, Takashi MISAKA, Shigeru Obayashi
Session ID: 16-04-83
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Generally speaking, fluid simulations are executed repeatedly under varying conditions, so a large number of results must be compared. To compare results computed under different conditions, it is effective to overlay the streamlines generated for each condition in a single 3D space. A streamline is a curve that shows the flow of wind. We automatically select and visualize the important streamlines suitable for comparison. In addition, the result can be observed in virtual reality space.
-
Saki YAMASHITA, Takayuki ITOH
Session ID: 16-04-84
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper presents a user interface to assist in the creation of digest videos, aiming to generate digests that highlight specific persons. The presented technique first applies face identification to extract the shots that capture the target person. It then calculates the similarity between arbitrary pairs consisting of a shot that captures the target person and another shot, and scores every shot that does not capture the person by the maximum such similarity. The user interface displays all shots that are moderately similar to the shots capturing the target person, and users can flexibly select the shots they want to include in the digest video.
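The scoring step can be sketched as follows, assuming each shot has been reduced to a feature vector (the abstract does not specify the representation): every non-target shot receives the maximum similarity to any target-person shot.

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def score_other_shots(target_shots, other_shots):
    """Score each non-target shot by its maximum similarity to any shot
    that captures the target person."""
    return [max(cosine(shot, t) for t in target_shots) for shot in other_shots]
```

Shots whose score falls in a middle band would then be surfaced in the interface as "moderately similar" candidates.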
-
Kohei TOBE, Mikio SHINYA, Michio SHIRAISHI
Session ID: 16-04-87
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Recently, stereoscopic games have been actively developed. In video games, smoke effects are important elements, and efficient stereoscopic display of smoke is desired. Although volume data can represent smoke well, it is generally too expensive for video games, so 2D textured polygons are commonly used instead. However, when displayed stereoscopically, a smoke-textured polygon is often perceived as a planar object, which harms realism. In this study, we propose a stereo display method that renders a smoke-textured polygon as realistically as volume rendering. The method estimates a simple 3D smoke distribution from the texture and applies simplified volume rendering. Experiments confirmed the efficiency of the method.
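A minimal sketch of the estimation idea, under the assumption (ours, not stated in the abstract) that texel opacity correlates with accumulated smoke density and can be mapped to a thickness symmetric about the billboard plane:

```python
def estimate_thickness(alpha, max_thickness=1.0):
    """Map texel opacity in [0, 1] to an estimated smoke thickness.
    The square root makes thin wisps thicker than a linear map would."""
    return [[max_thickness * (a ** 0.5) for a in row] for row in alpha]

def front_back_offsets(thickness):
    """Split each thickness into front/back depth offsets about the
    billboard plane, usable for per-texel stereo displacement."""
    return [[(t / 2.0, -t / 2.0) for t in row] for row in thickness]
```

Giving each texel a nonzero depth extent is what prevents the billboard from reading as a flat card in the stereo pair.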
-
Shiori YASUE, Takayuki ITOH, Naruhito TOYODA
Session ID: 16-04-89
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We have been studying the representation of the micro-geometry of human skin using computer graphics technologies. The technique calculates feature values, such as the radii of pores and the directionality of furrows, from real photographs. Other features are popular in the skin care industry, such as the average area of ridges: skin may be unhealthy if the average area values are large. This paper presents a technique to calculate the average area of ridges from real photographs and to reflect it in 3D geometry generation for computer graphics.
-
Karin SETOGUCHI, Tsukasa KIKUCHI
Session ID: 16-04-90
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Among works featuring animals made with 3DCG, few focus on reproducing the process of fur getting wet and then drying. In this study, we therefore reproduce the process of animal fur going from wet to dry. To do so, we first examined the structure of animal hair, the types of hair and how they grow, and the differences between human hair and animal fur. As an experiment, we observed the fur of an actual animal, dividing the observation into three parts. Simulations were then conducted based on the experimental results.
-
Yohei UEKI, Tsukasa KIKUCHI
Session ID: 16-04-91
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, the use of 3DCG in movies and animation has become common, and improvements in production technology are required. Phenomena related to fluids, such as smoke, flame, and water, are considered particularly difficult. This research focuses on the explosion effect, which is representative among effects and, because of its frequent use in live-action film and cel animation, demands diversity; in particular, we take up trail smoke, which is difficult to capture in live-action photography. By advecting particles along a pressure gradient to control the shape, and determining the pressure gradient from a drawn path so that the shape of the explosion can be controlled more directly, we simulate an explosion including trail smoke.
-
Yuta Sato, Tsukasa Kikuchi
Session ID: 16-04-92
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, 3DCG has been used in various forms in animation works. However, if it is incorporated into a work as is, it creates a sense of incongruity, so research on expressing 3DCG in a cel-anime style, called Cellook, is ongoing. In this research, we focus on the "excessive bleeding scenes" seen in many animation works and propose a method for generating Cellook bloodshed with 3DCG. We first investigate the properties of blood, calculate fluid motion with the FLIP method based on those properties, and simulate realistic blood flow. We then add elements to the generated simulation results to turn them into the Cellook style, realizing a Cellook blood flow expression.
-
Takuya Aoki, Tsukasa Kikuchi
Session ID: 16-04-93
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
ITE (The Institute of Image Information and Television Engineers) provides a word template file for the Technical Report of ITE.
-
Mizuki HAYASHI, Tsukasa KIKUCHI
Session ID: 16-04-94
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Washi (Japanese paper), which faces a shortage of successors to its handmade techniques and of raw materials, was registered as an intangible cultural heritage in 2014. It remains valuable, and it is not easy to preserve. Many studies have reproduced paper deformation, such as tearing, using CG simulation. However, few studies address the wrinkles caused by shrinkage during drying, and those studies pay little attention to the details of the specific paper being simulated. Therefore, in this study, we perform a visual simulation of the wrinkles caused by drying shrinkage that are peculiar to Japanese paper.
-
Yuki MIYAHARA, Masanori NAKAYAMA, Issei FUJISHIRO
Session ID: 16-04-95
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
With the development of computer graphics, we have come to feel as if CG characters were alive. This quality is called "animacy." Previous studies show that we perceive animacy in the actions of objects. To produce characters that appear alive, it is crucial to make their actions manifest animacy. Skilled animators can produce such characters, but it is difficult for people without such knowledge or skills. Therefore, we categorize the movements of creatures and propose a method to express the essence of animacy through the stretching of a mesh model. The actions are generated by autonomous calculation and user operations.
-
Momoko TSUCHIYA, Takayuki Itoh, Yoshihisa Nitta
Session ID: 16-04-96
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Light art represents various objects with light strokes drawn in the air. It takes about 10 to 30 seconds to create a single light art picture, and by combining such pictures we can create a light art movie. This is a time-consuming task because a movie requires a large number of frames. This paper presents an interactive light-art-like system. The system extracts the edges of the human body using depth information and displays them as light art in real time. As ongoing work, we are improving the realism of the generated light art by implementing continuous line generation and hand-drawing-mimicking deformation algorithms.
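The depth-based edge extraction can be sketched as simple gradient thresholding on the depth image; the threshold and neighborhood choice here are illustrative, not the system's actual parameters:

```python
def depth_edges(depth, threshold=0.1):
    """Mark pixels where the depth jump to a right/down neighbor exceeds
    the threshold. depth: 2D list of floats; returns a 2D 0/1 mask."""
    h, w = len(depth), len(depth[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(depth[y][x] - depth[ny][nx]) > threshold:
                    edges[y][x] = 1
                    edges[ny][nx] = 1
    return edges
```

Depth discontinuities isolate the body silhouette even when the color image is cluttered, which is why a depth sensor suits this application.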
-
Manami DOI, Tsukasa KIKUCHI
Session ID: 16-04-97
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Pictures drawn with computers, known as computer graphics, have become mainstream today. The new conception of form discovered by Picasso and Braque overturned the conception of form that had been handed down since the Renaissance. In this research, we recreate Picasso's Cubist point of view with a 3D model, clarifying it by placing cameras around the object while referring to actual Cubist paintings. By placing new objects for the extracted camera positions, we express Cubism in new CG from the viewpoint of Cubist painting.
View full abstract
-
Xiangping LIU, Kota TAYA, Hiroshi MORI, Fubito TOYAMA, Kenji SHOJI
Session ID: 16-04-98
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In a method that interactively creates papercutting patterns from hand-drawn line drawings, users must assign a binary (black or white) color to each region enclosed by the drawn lines. In general, the number of regions N enclosed by hand-drawn lines is large, so the number of possible color combinations grows to 2^N; assisting users in this binary color assignment would therefore greatly improve efficiency. In this research, we propose a method that uses a trained general object recognition neural network (VGG-16, a 16-layer CNN) to recommend a small number of candidates from the 2^N papercutting patterns, based on the network's output values for the specified category.
View full abstract
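The recommendation step can be sketched as enumerate-and-rank: generate the 2^N black/white assignments and keep the few that score highest under a classifier. Here `score_fn` is a stand-in for the VGG-16 category output, and the toy score below is purely illustrative; exhaustive enumeration is only feasible for small N.

```python
import itertools

def recommend_patterns(n_regions, score_fn, top_k=3):
    """Enumerate all 2**N black/white assignments over N regions and
    return the top_k highest-scoring ones.

    score_fn : callable mapping an assignment tuple to a confidence
               score for the target category (stand-in for a trained
               classifier such as VGG-16).
    """
    candidates = itertools.product((0, 1), repeat=n_regions)
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:top_k]

# Toy stand-in score: prefer patterns with regions 0 and 2 colored black,
# with a mild penalty for coloring many regions.
score = lambda a: a[0] + a[2] - 0.1 * sum(a)
best = recommend_patterns(4, score, top_k=2)
print(best)
```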
-
Shohei USAMI, Saori KURATA, Hiroshi MORI, Fubito TOYAMA, Kenji SHOJI
Session ID: 16-04-99
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a method for computer-assisted colorization of freehand line drawings with reference to a photograph taken with a tablet terminal. Our previous work provided a system for automatically coloring line drawings entered by tracing reference images. That system had a problem: the degrees of freedom of position and size of objects in the line drawing were insufficient, because object positions were fixed to those in the reference image. In this study, we address this problem by incorporating a registration process that linearly maps the reference image onto the drawing canvas at the beginning or in the middle of line drawing production. As a result, we expect that line drawings can be produced at the user's preferred position and size.
View full abstract
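A linear mapping of the reference image onto the canvas can be sketched as fitting an affine transform from a few point correspondences by least squares. This is a minimal sketch of that registration idea, not the paper's implementation; the point sets and function names are assumptions.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map sending src points onto dst points.

    src, dst : (N, 2) arrays of corresponding 2-D points (N >= 3).
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                     # (2, 3)

def apply_affine(A, pts):
    pts = np.asarray(pts, float)
    return (A @ np.hstack([pts, np.ones((len(pts), 1))]).T).T

# Reference-image corners mapped to a shifted, scaled canvas placement.
src = [[0, 0], [100, 0], [0, 100]]
dst = [[10, 20], [60, 20], [10, 70]]   # scale 0.5, translate (10, 20)
A = fit_affine(src, dst)
print(apply_affine(A, [[50, 50]]))     # -> [[35., 45.]]
```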
-
Takeshi Okuya, Shigekazu Sakai
Session ID: 16-04-100
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, cel animation has been reproduced with 3DCG. In the traditional production process based on drawing and compositing, each element has its own perspective; this is a characteristic of cel animation representation. The general 3DCG camera model, on the other hand, produces a uniform perspective, so this difference must be resolved. In this research, to reproduce the perspective of cel animation with 3DCG, we developed a method that moves the vanishing point and adjusts the perspective strength, using a multi-viewpoint matrix created for each rendered object.
View full abstract
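The per-object adjustment can be illustrated with a toy projection matrix whose vanishing point is offset on the image plane and whose foreshortening is scaled by a strength factor. This is an assumed analogue of the multi-viewpoint matrix, not the authors' actual construction; all parameters here are illustrative.

```python
import numpy as np

def perspective_with_vp(focal, vp_shift, strength=1.0):
    """Simple perspective matrix whose vanishing point along z is
    shifted by vp_shift = (sx, sy) on the image plane, with the
    foreshortening scaled by `strength` (toy per-object analogue)."""
    sx, sy = vp_shift
    return np.array([
        [focal, 0.0,   sx,       0.0],
        [0.0,   focal, sy,       0.0],
        [0.0,   0.0,   1.0,      0.0],
        [0.0,   0.0,   strength, 0.0],   # w = strength * z
    ])

def project(P, point3d):
    x, y, z = point3d
    h = P @ np.array([x, y, z, 1.0])
    return h[:2] / h[3]                  # perspective divide

P = perspective_with_vp(focal=1.0, vp_shift=(0.2, 0.0), strength=1.0)
# A point receding in z drifts toward the shifted vanishing point x = 0.2.
print(project(P, (1.0, 0.0, 1.0)))    # image x = 1.2
print(project(P, (1.0, 0.0, 10.0)))   # image x = 0.3, nearer the VP
```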
-
Kota Taya, Xiangping Liu, Hiroshi Mori, Fubito Toyama, Kenji Shoji
Session ID: 16-04-101
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In papercutting, a sketch is fixed on black paper and the unnecessary parts are clipped out. The sketch is drawn in white and black and serves as a guide showing where to cut. Beginners can easily make a papercutting because they only cut the points indicated by the sketch. The purpose of this study is to generate a papercutting sketch from a hand-drawn picture. In addition, this study reflects the user's individuality and originality through an interactive approach.
View full abstract
-
Taiki SATO, Youngha CHANG, Nobuhiko MUKAI
Session ID: 16-04-102
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we propose a method for generating a 3D model of a 2D Ukiyo-e print using multiple layers. Generating the 3D model requires image segmentation that divides the image into three layers: front, middle, and back. However, normal segmentation methods for natural images do not work on Ukiyo-e, because Ukiyo-e is drawn with a limited number of colors. There is also the problem that some objects in Ukiyo-e are drawn with strokes that are not closed, in which case traditional algorithms cannot determine the appropriate boundaries. We therefore propose a segmentation method that solves these problems and construct the 3D model of the 2D Ukiyo-e by placing the divided images onto multiple layers.
View full abstract
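Once the image is split into layers, a layered 2.5D effect can be sketched by shifting each layer with parallax (near layers move more under a camera offset) and compositing back to front. This is a minimal sketch of the layering idea only, under assumed per-layer depths; it is not the paper's reconstruction method.

```python
import numpy as np

def composite_layers(layers, depths, cam_offset):
    """Back-to-front compositing of 2-D layers with parallax.

    layers     : list of (H, W) arrays; 0 means transparent (illustrative)
    depths     : depth per layer (larger = farther), same order as layers
    cam_offset : horizontal camera shift in pixels at depth 1
    """
    order = np.argsort(depths)[::-1]                 # farthest first
    out = np.zeros_like(np.asarray(layers[0], float))
    for i in order:
        shift = int(round(cam_offset / depths[i]))   # near layers move more
        moved = np.roll(layers[i], shift, axis=1)
        out = np.where(moved > 0, moved, out)        # paint over
    return out

back = np.full((4, 8), 1.0)                  # backdrop fills the frame
front = np.zeros((4, 8))
front[:, 3] = 2.0                            # a vertical stroke
img = composite_layers([back, front], depths=[4.0, 1.0], cam_offset=2.0)
print(img[0])   # the stroke moves from column 3 to column 5
```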
-
Sayaka TAKEDA, Hideki TODO, Toru KANO, Yuriko TAKESHIMA
Session ID: 16-04-103
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Many scenery photos can be found on travel websites, but it is still difficult to obtain them for all four seasons. To generate a plausible picture of a different season, we propose a color transformation method that uses example photos. From example images available online, we extract representative colors for the target area and season. Our system can then replace some of the representative colors with different colors taken from a reference. Our experiments demonstrate how the proposed method can synthesize pictures of different seasons.
View full abstract
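Extracting representative colors and swapping some of them can be sketched with a tiny k-means over RGB pixels followed by a per-cluster palette replacement. This is a minimal sketch of the general idea, assuming k-means clustering; the paper's actual extraction method, the toy pixels, and the "autumn" color are illustrative.

```python
import numpy as np

def representative_colors(pixels, k=2, iters=20, seed=0):
    """Tiny k-means over RGB pixels -> k representative colors + labels."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, float)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recenter.
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

def swap_palette(pixels, labels, new_colors):
    """Replace each pixel by the new color of its cluster."""
    return np.asarray(new_colors, float)[labels]

# Toy image: two greenish "summer" pixels plus two sky pixels.
img = np.array([[60, 160, 60], [70, 170, 55],
                [120, 180, 230], [110, 175, 235]])
centers, labels = representative_colors(img, k=2)

new_colors = centers.copy()
new_colors[labels[0]] = [200, 120, 40]   # recolor the foliage cluster
recolored = swap_palette(img, labels, new_colors)
print(recolored)
```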
-
Yojiro KUWAHARA, Toru KANO, Yuriko TAKESHIMA
Session ID: 16-04-104
Published: 2017
Released on J-STAGE: July 01, 2020
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Many natural disasters such as earthquakes and tsunamis occur in Japan. Hazard maps that predict damaged areas are used as a countermeasure against such disasters. In recent years, hazard maps covering crime or traffic accidents have also come into use, but all of these factors cannot be seen on the same map. In this study, we propose a new hazard map that considers three factors: disasters, crime, and accidents. By visualizing multiple hazard factors on the same map, users could search for a place to live or choose a travel destination.
View full abstract