Reports of the Technical Conference of the Institute of Image Electronics Engineers of Japan
Online ISSN : 2758-9218
Print ISSN : 0285-3957
Reports of the 300th Technical Conference of the Institute of Image Electronics Engineers of Japan
Displaying 1-50 of 125 articles from this issue
  • -sTone-
    Yuki ISHII, Kouta KIKUCHI, Toshitaka AMAOKA
    Session ID: 21-04-001
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This study proposes “sTone,” a sound-interactive system that produces sound in the real world under electronic control. The instrument allows participants to freely choose the sounds they want to play by selecting from a variety of stones. To play the instrument, participants set their favorite stones on it and use a MIDI controller to drive the solenoids that strike them. By letting users select stones, “sTone” provides a new musical experience and an opportunity to pay attention to the environment around us. The purpose of “sTone” is not only to express individuality through the process of selecting the stones but also to express it through sound.
    Download PDF (1205K)
  • ― The Analogy of life in the XR Era ―
    Ken SONOBE, Ryu NAKAGAWA
    Session ID: 21-04-002
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    “Meta Flowers” is a multi-participant installation artwork using Cross Reality (XR). Wearing a HoloLens 2 and a glove-type tactile device consisting of linear resonant actuators (LRAs) and a Wi-Fi-equipped microcontroller, participants experience XR through the act of arranging virtual flowers (VFs) that have a ‘shadow,’ a ‘rigidbody,’ and ‘sound.’ A VIVE Tracker is placed on the tip of a metal rod inserted into a vase on the table, and a VF blooms there. The real shadows of the vases, rods, and artificial flowers and the virtual shadows of the VFs are projected onto the table by light and images from a ceiling projector, creating a seamless fusion. Participants can move the VFs and arrange them in the vases on the table. When participants touch a VF, its petals fall and participants receive tactile sensations through the haptic glove. In addition, when water is poured from a sensor-equipped jug into a vase holding a petalless metal rod, the VF blooms again. VFs play sounds while they are in bloom, and their pitch changes depending on their position.
    Download PDF (792K)
  • Yutaka AKEHATA, Youngha CHANG, Nobuhiko MUKAI
    Session ID: 21-04-003
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    We have developed a double-edged knife sharpening simulator that can deform both the kitchen knife and the grindstone by considering the frictional wear between them. The simulator feeds back to users the force caused by sharpening, and also displays the angle between the knife and the grindstone and the trajectory of the knife on the grindstone. The knife and the grindstone are modeled with meshes and are deformed by calculating the displacement of each vertex that constitutes the models. As a result of a user evaluation, the scores for knife deformation and force feedback were relatively high.
    Download PDF (707K)
  • Tomoya Hirabayashi, Rin Izumi, Kenta Uchida, Tomoaki Moriya, Tokiichir ...
    Session ID: 21-04-004
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    We propose the “Switchable Dual Dial method,” a flick input method for virtual spaces that uses two VR controllers. In the proposed method, the user flicks the touchpad of one VR controller with a finger of one hand to input a consonant, then flicks the touchpad of the other controller with a finger of the other hand to select a vowel, inputting one Japanese character. Since these two operations can be performed almost simultaneously, Japanese can be input accurately and at high speed. Through an experiment, we verified the method's potential as a new Japanese character input method.
    Download PDF (710K)
  • Hikaru MURAKAMI, Akinori ITO
    Session ID: 21-04-005
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, we created a prototype of a tool that plays sound effects in real time in response to the actions of a CG avatar. We analyzed three VTuber live streams focusing on the actions and confirmed seven types of repeated actions and characteristic actions. The analysis of one of the movements, a small wave of the left hand, showed that the z-axis rotation occurred with a fixed period, and the angular velocity changed periodically around 0. Based on the results, we created a prototype that detects the motion and plays a sound effect using VIVE Pro 2. Using the prototype, we confirmed that the sound effect was played back along with the movement. We also ported some of the functions to HoloLens 2 and discussed them.
    Download PDF (622K)
  • Shungo KIUCHI, Tomoya ITOH, Yuriko TAKESHIMA, Tsukasa KIKUCHI
    Session ID: 21-04-006
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    3DCG fluid simulation has become an indispensable element in video production, but controlling a fluid as desired requires a certain amount of knowledge and production time. We therefore thought it would help creators to propose a system that can control fluids even without deep knowledge or much production time. In this research, we focus on the firing scene of a tank, among the many phenomena where fluid simulation is used. First, we investigated the mechanism of the generated fluid and the differences in the fluid that appear between different tanks. Based on the results, we created a system that can control the fluid by manipulating parameters and also generate the accompanying elements. This should reduce the time required to produce a tank firing scene.
    Download PDF (400K)
  • Reo ANDO, Tomoya ITO, Yuriko TAKESHIMA, Tsukasa KIKUCHI
    Session ID: 21-04-007
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Interested in CG, which is used in many fields such as movies, games, and medical care, I decided to work on fluid simulation. The purpose of this study is to reproduce a large-scale fluid simulation such as a close-out-size wave.
    Download PDF (722K)
  • Mikihiro Kagoshima, Motonobu Kawashima, Daichi Hayakawa
    Session ID: 21-04-008
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This research aims to improve the depiction of viscous bodies in video game content. Simulations of various viscosities created in advance are depicted in real time using VAT (Vertex Animation Texture). An additional VAT simulation is spawned when a character steps onto ground to which a viscous material has been applied in the game engine. This depicts the characteristic behaviors of viscous material: “clinging” and “gradually collapsing.” Also, by spawning a suitable VAT when the character steps onto a material depicting a different viscous body, it is possible to convey different viscosities and provide interactivity.
    Download PDF (643K)
    Haruto SATO, Tomoya ITO, Yuriko TAKESHIMA, Tsukasa KIKUCHI
    Session ID: 21-04-009
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, with the development of computer graphics (CG) technology, there have been various studies in the field of CG simulation, and the demand for CG representations of food has been increasing. In this study, the entanglement of noodles and sauce in a pasta dish is represented by CG simulation. The interaction between a filamentous object and a non-Newtonian fluid, as seen with noodles and sauce, is considered one of the most difficult subjects due to their complex behavior. In this method, the noodles and sauce are simulated with Vellum and FLIP, respectively, and then combined by re-simulation.
    Download PDF (644K)
  • Satoshi KOIBUCHI, Akinori ITO
    Session ID: 21-04-010
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This study demonstrates a system for procedural synthesis of “buechu-buechu” sound effects in the production of video content using 3DCG. We focused on viscous liquids and developed a system for generating interactive “buechu-buechu” sounds that imitate Foley recordings. We conducted a Foley recording using a viscous liquid as a reference for sound synthesis. Based on the recordings, we created a synthesizer using CsoundUnity to imitate them, and by layering multiple spherical objects in Unity in a laminated arrangement, we were able to generate the sound of many fine bubbles bursting when a stick-shaped object made contact.
    Download PDF (983K)
  • Kazuya SAWADA, Kai LENZ, Masanori KAKIMOTO
    Session ID: 21-04-011
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Manga production is complicated and difficult. In particular, manga storyboard production is both highly important and difficult, and often causes works to go unfinished. We therefore thought that storyboard production could be facilitated by supporting the composition process, which precedes the storyboard process, with reference to the manga production process and the storyboard work of professional authors. Based on the method of previous research, we developed an information management system for manga storyboard production. The system can be expected to be effective as a manga storyboard composition support tool.
    Download PDF (687K)
  • Yuki ENDO, Kai LENZ, Masanori KAKIMOTO
    Session ID: 21-04-012
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In scenario production, it is necessary to grasp the information in the work so that the composition does not collapse or become inconsistent when information is added or corrected. The problem is that the amount of information, such as character relationships, becomes enormous as the scenario progresses, making it difficult to manage. To clarify information across scenarios, we developed a scenario composition support system focusing on the visualization of relationships using force-directed graphs. The relationships between characters before and after a scenario can be visualized and edited in a web browser.
    Download PDF (491K)
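The force-directed visualization described in the abstract above can be illustrated with a minimal sketch. This is not the authors' system: the character names, edge list, constants, and the simple Fruchterman–Reingold-style update below are all invented for illustration.

```python
import math
import random

# Hypothetical character-relation graph (names and edges are invented).
nodes = ["Hero", "Rival", "Mentor", "Friend"]
edges = [("Hero", "Rival"), ("Hero", "Mentor"), ("Hero", "Friend"), ("Rival", "Mentor")]

random.seed(0)
pos = {n: [random.random(), random.random()] for n in nodes}
K = 0.5  # ideal edge length

for _ in range(200):
    disp = {n: [0.0, 0.0] for n in nodes}
    # repulsion between every pair of nodes (~ K^2 / d)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-6
            f = K * K / d
            disp[a][0] += f * dx / d; disp[a][1] += f * dy / d
            disp[b][0] -= f * dx / d; disp[b][1] -= f * dy / d
    # attraction along edges (~ d^2 / K)
    for a, b in edges:
        dx = pos[a][0] - pos[b][0]
        dy = pos[a][1] - pos[b][1]
        d = math.hypot(dx, dy) or 1e-6
        f = d * d / K
        disp[a][0] -= f * dx / d; disp[a][1] -= f * dy / d
        disp[b][0] += f * dx / d; disp[b][1] += f * dy / d
    # take a small, clamped step so the layout stays bounded
    for n in nodes:
        pos[n][0] += 0.01 * max(-5.0, min(5.0, disp[n][0]))
        pos[n][1] += 0.01 * max(-5.0, min(5.0, disp[n][1]))
```

After iterating, connected characters settle roughly an ideal edge length apart, which is the property that makes such layouts readable as relationship maps.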
  • Keita Ushijima, Motonobu Kawashima, Daichi Hayakawa
    Session ID: 21-04-013
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, video production workflows have become more diverse, with a variety of digital imaging technologies being used. In this context, stop-motion animation is still mainly produced using traditional filming methods. In this study, we propose a virtual production method to improve the lighting process, one of the issues in stop-motion animation production.
    Download PDF (1518K)
  • Hiroyuki YOKOTA, Ryosuke FURUTA, Yukinobu TANIGUCHI, Ryota HINAMI, ...
    Session ID: 21-04-014
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In order to translate Japanese manga into another language, it is necessary to recognize the content and position of the text and replace it with the translated text. If there is no layer information, the text must be deleted from the original images before the translated text is placed. Since a volume of manga consists of a few hundred pages, manual inpainting of the illustrations is costly. For this reason, it is desirable to use image inpainting techniques, for example those based on generative adversarial networks. In this study, we focus on the fact that the way illustrations are drawn depends on the author, and propose an image inpainting model that considers each author’s characteristics. In addition, to handle cases where the representation of the illustrations needs to be changed, we propose a method for interactively inpainting and editing manga images using the user’s line drawings as a clue.
    Download PDF (623K)
  • Yuki Narayama, Kai Lenz, Masanori Kakimoto
    Session ID: 21-04-015
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Nowadays, tools and methods for more efficient production are being developed and researched daily. Animation production can be divided into a number of processes, and this research focuses on the color scheme process. We aim to visualize the color schemes of characters and backgrounds that appear simultaneously in animation, to support production and analysis. Our method generates a concurrent color scheme timeline from captured images of actual animations and visualizes eight representative colors of a scene extracted with the K-means method. In this study, we solved the problems of previous studies and actually used the concurrent color scheme timeline for analysis.
    Download PDF (471K)
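The K-means color extraction described in the abstract above can be sketched as follows. This is not the authors' implementation: the `kmeans_colors` helper and the synthetic pixel data are our own assumptions, and while the paper extracts eight representative colors per scene, this toy uses three.

```python
import random

# K-means palette extraction from a "frame" given as a list of RGB tuples.
def kmeans_colors(pixels, k, iters=20, seed=0):
    rng = random.Random(seed)
    # seed the centers with k distinct pixel colors
    centers = rng.sample(sorted(set(pixels)), k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center (squared RGB distance)
            i = min(range(k),
                    key=lambda c: sum((p[j] - centers[c][j]) ** 2 for j in range(3)))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old center if a cluster goes empty
                centers[c] = tuple(sum(m[j] for m in members) // len(members)
                                   for j in range(3))
    return centers

# a synthetic frame dominated by three colors
frame = [(250, 20, 20)] * 40 + [(20, 240, 30)] * 30 + [(25, 25, 230)] * 30
palette = kmeans_colors(frame, k=3)
```

Plotting each frame's palette side by side over time would give exactly the kind of concurrent color-scheme timeline the abstract describes.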
  • Miku TAKEMASA, Kohei TOKOI
    Session ID: 21-04-016
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The purpose of this research is to develop an application that uses Augmented Reality (AR) to model a virtual object jointly with a remote person. In this system, users model virtual clay: hand tracking is used to reshape the clay, which behaves according to physical laws, simultaneously and in real time with another terminal. By using smartphones, which nearly everyone owns today, we hope to reduce the installation cost and make the system useful for remote education and entertainment.
    Download PDF (784K)
  • Yui KOROKU, Issei FUJISHIRO
    Session ID: 21-04-017
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    When conventional 3D techniques are used to animate cel-look characters in 3DCG, a sense of incongruity may arise due to the dimensional difference from the 2D appearance. To address this problem, we propose a method based on cel anime techniques to generate behaviors with less 3D character from motion capture data. In cel anime, the inbetweening technique is used to omit motion and emphasize speed, resulting in a feel different from that of real motion. Our method focuses on the speed distribution of the motion capture data and selects the most effective poses, realizing the effect of inbetweening while omitting redundant motion by reducing the number of poses drawn, thus reproducing motions typical of cel anime.
    Download PDF (1146K)
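The speed-based pose selection in the abstract above can be approximated with a minimal sketch: keep frames where the motion speed hits a local minimum (held poses) plus the endpoints, and drop the in-between frames. The rule and the speed values below are our own stand-ins, not the paper's actual selection criterion.

```python
# Select keyframes at local minima of per-frame motion speed.
def select_keyframes(speeds):
    keys = [0, len(speeds) - 1]          # always keep the endpoints
    for i in range(1, len(speeds) - 1):
        if speeds[i] <= speeds[i - 1] and speeds[i] <= speeds[i + 1]:
            keys.append(i)               # a slow frame between faster ones: a held pose
    return sorted(set(keys))

# hypothetical per-frame speeds: two fast swings separated by a held pose
speeds = [0.1, 0.5, 1.2, 0.4, 0.2, 0.9, 1.5, 0.3, 0.1]
keyframes = select_keyframes(speeds)
```

Interpolating only between the surviving poses, rather than playing every captured frame, is what removes the "3D nature" and restores the snappy timing of cel animation.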
  • Toshiya YAMAGUCHI, Ryosuke FURUTA, Yukinobu TANIGUCHI, Ryota HINAMI, S ...
    Session ID: 21-04-018
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In manga, the story unfolds as the illustrated characters speak their lines expressed in text. Therefore, it is important for understanding manga to link each line to the character who speaks it. In this study, we propose a context-aware speaker estimation method for manga that estimates the speaker of a line by taking consecutive lines as input. The proposed method considers the conversation context and the manga-specific expression by using the prediction results obtained by the natural language processing model and the candidate speakers defined by the distance between the target lines and each character in the manga image.
    Download PDF (563K)
  • Ryosuke NISHIDA, Issei FUJISHIRO
    Session ID: 21-04-019
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    With recent progress in CG representation, the cost of terrain generation using 3D instances has increased, and it is desirable to reduce the complexity of the generation procedure. In addition, little is known about how to generate terrain from walking paths. In this study, we propose a method to automatically generate a natural-topography landscape by referring to a texture carrying terrain features and a map showing walking paths. With this method, various natural terrains along the paths can be expressed with only a few parameters, reducing the modeling cost for artists.
    Download PDF (374K)
  • Kaishu SHIRAKAWA, Hiroyuki KUBO
    Session ID: 21-04-020
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, a technique called inverse rendering has made it possible to obtain object shapes and reflection characteristics from objects in target images, and to generate models as realistic as the real objects. Our method uses this inverse rendering technique to estimate the surface roughness of an object. However, existing methods that use the entire target image for estimation have trouble obtaining accurate results because regions unsuitable for estimation are included. Therefore, before estimating the roughness of the object's surface, our method extracts regions of high confidence by introducing constraints that take into account two types of variance obtained by varying the input image, and estimates the roughness using only these regions as the target image.
    Download PDF (685K)
  • Yudai FUJINO, Mutsuo SANO
    Session ID: 21-04-021
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    As a result of the nationwide epidemic of the novel coronavirus from around March 2020, many people have been asked to refrain from nonessential outings and cannot travel. Even when infections subside, case numbers rise again each time a new variant appears, and the same restraint may be required whenever a new variant or another infectious disease emerges. In recent years, virtual reality (VR) technology has become common, and virtual spaces can easily be experienced not only with dedicated goggles but also with smartphones. In this research, we therefore consider the development of a system that lets anyone easily experience the feeling of traveling, by creating a 3D model from a smartphone video using Visual SLAM and building a VR space from it.
    Download PDF (460K)
  • Joe TAKAYAMA
    Session ID: 21-04-022
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This study aims to propose a design support system for deformable Gothic ornaments; this report covers the intermediate stage of the project. Specifically, we attempted to generate traceries (window-frame ornaments) seen in Gothic architecture using metaballs, a modeling technique in computer graphics (CG). In this study, the density value of each metaball is attenuated in certain regions; this technique allows typical Gothic motifs to be drawn easily with a single metaball. It is also possible to animate the gradual deformation of a motif by continuously changing the attenuation rate. Applying this technique, we developed an interactive editor for designing Gothic patterns.
    Download PDF (1597K)
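The attenuated-metaball mechanism in the abstract above can be sketched in a few lines. The falloff function, the "attenuate the lower half" rule, and all constants here are our own assumptions for illustration, not the authors' formulation.

```python
# Metaball density field with a per-region attenuation factor.
def density(x, y, balls, attenuation=1.0):
    total = 0.0
    for cx, cy, r in balls:
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        if d2 < r * r:
            t = 1.0 - d2 / (r * r)   # smooth falloff toward the ball's edge
            if y < cy:               # attenuate the lower half of each ball
                t *= attenuation
            total += t * t
    return total

balls = [(0.0, 0.0, 1.0), (0.8, 0.0, 1.0)]  # two overlapping metaballs
iso = 0.2                                    # iso-surface threshold

# the same x-offset above and below the centerline, lower half attenuated
inside_top = density(0.4, 0.2, balls) > iso
inside_bottom = density(0.4, -0.2, balls, attenuation=0.1) > iso
```

Sweeping `attenuation` from 1.0 toward 0.0 gradually carves away the attenuated region of the iso-surface, which is how a single ball can morph into a cusped Gothic motif and how the deformation can be animated.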
  • Sayaka OTSU, Tomoya ITO, Yuriko TAKESHIMA, Tsukasa KIKUCHI
    Session ID: 21-04-023
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Ambigrams are characters that can be read not only from the orientation in which they are given but also from other directions. At present, ambigrams are treated largely as a matter of design expertise, and creating them by hand remains the mainstream. Although there is a conventional method that converts characters using deep learning and creates ambigrams by paying attention to the structural characteristics of hiragana, it has been considered difficult to create ambigrams for complex characters such as kanji. In this research, we therefore propose and study a method that appropriately deforms characters using volume data and particles to generate kanji ambigrams that remain legible as characters.
    Download PDF (1069K)
  • Tomoya KOBAYASHI, Issei FUJISHIRO
    Session ID: 21-04-024
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In digital illustration, there exist painting styles that combine multiple visual effects specific to painting materials. While these styles offer a wide range of artistic expressions and make illustrations more fascinating, they require artistic expertise in terms of how to apply and harmonize the effects of each painting material. In this study, we define these painting styles as Mixed Painting Material Style, and present an NPR framework to retouch an input image to the style. The framework partitions the input image into multiple regions, and converts the regions with high saliency to oil painting and the regions with low saliency to watercolor. This article also includes a few application examples to verify the effect of the present NPR framework.
    Download PDF (2971K)
    Soma Ishimura, Kohei Tokoi
    Session ID: 21-04-025
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This research aims to develop a method for parametrically modeling the shape of an imaginary creature, a dragon. We assume that the generated dragons are always reptiles with wings and limbs. In this research, we collect existing dragon designs, extract common features among them, and determine the parameters that characterize the shape of the dragon from the extracted features. We also design a procedure to generate the desired dragon from the extracted parameters, and implement it as a dragon design tool.
    Download PDF (425K)
  • Takuto TSUKIYAMA, Mutsuo SANO
    Session ID: 21-04-026
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, with the development of machine learning, tools such as “SAI-chan” and “WAIFU LABS” that automatically generate high-quality 2D characters have been developed, making it easy for people who are not good at drawing to create 2D characters. However, only the face and upper body of a character are generated; the lower body usually is not. In addition, the generated character is a front-facing standing illustration, and the pose the user requests cannot be accurately reproduced. In this research, we propose a method to automatically generate 2D characters and poses from “atari,” the rough sketch that serves as the base for drawing illustrations.
    Download PDF (541K)
  • Delong OUYANG, Ryosuke FURUTA, Yugo SHIMIZU, Yukinobu TANIGUCHI, Ryota ...
    Session ID: 21-04-029
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this paper, we propose an interactive semi-automatic colorization system for manga that reduces the time and cost of the manual colorization process. In the proposed system, we first obtain a flat-colored image from scribbles input by the user. We then obtain a high-quality colorized result by feeding the flat-colored image to a colorization network. We conducted experiments comparing the colorization accuracy, working time, and subjective evaluations of our system with commercial software. The results show that the proposed system yields better colorized results and saves more time than existing systems.
    Download PDF (1189K)
  • Yugo SHIMIZU, Ryosuke FURUTA, Delong OUYANG, Yukinobu TANIGUCHI, Ryota ...
    Session ID: 21-04-030
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    To create color comics, a manual colorization process is required, which incurs high labor costs. To solve the problem, we previously proposed a semi-automatic colorization method that takes as input a pair of a screen-tone image and a flat-colored image. Although this approach can reduce the cost, it still requires manual preparation of flat-colored images. Thus, in this paper, we propose a method to automatically create screen-tone images and flat-colored images from color illustrations on the Internet. Experiments show that the colorization network can be adequately trained with the dataset automatically created by the proposed method, although the colorization accuracy is slightly lower than with manually prepared training data.
    Download PDF (904K)
  • Noriki AMANO
    Session ID: 21-04-032
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    We are exploring the unique usage and availability of stage lighting. In this study, we propose an art competition with stage lighting. Specifically, we are envisioning Figure Illuminating, a competition in which participants manipulate moving lights to draw figures and symbols with light, and manipulate moving lights in time with the music to create a space. The moving lights, which are often used for concerts, not only move up and down, left and right, and over a wide area, but also have multiple built-in light colors to create a vivid effect in the space. However, in order to realize the competition, there are issues that need to be considered, including the formulation of rules. In this study, we used the competition rules of sport kites (competition kites) as a reference, studied the reference model of the dimming table, implemented the prototype system, and conducted a demonstration performance. Thus, this study not only proposes a highly entertaining competition with stage lighting, but also a simple and intuitive user interface for lighting operation.
    Download PDF (829K)
  • Ayaka YOKOTA, Kouta KIKUCHI, Toshitaka AMAOKA
    Session ID: 21-04-033
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    People cannot keep blowing bubbles into water to their heart's content, because doing so is regarded as bad manners. “Bquarium Display” provides a place for this bubbling experience. Through this interactive art, we want participants to enjoy bubbling and, at the same time, to change the negative image of breathing out into water. Furthermore, when people exhale into the box, “Bquarium Display” measures the amount of breath over time, so users can enjoy the fun of experiencing an otherwise unrealizable situation. The work has two characteristics: first, “Bquarium Display” can save and replay bubbling; second, it has two displays.
    Download PDF (496K)
  • -Musical instrument played on the face-
    Masaki Nakai, Hidekazu Tsujiai
    Session ID: 21-04-034
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Some people want to play a musical instrument but cannot, even after practicing, and some have given up. I studied a system that lets such people enjoy a simple performance: using OpenCV in Python together with Pure Data, the system processes the camera image and plays a sound when the user nods in front of a web camera.
    Download PDF (283K)
  • [in Japanese], [in Japanese], [in Japanese], [in Japanese], [in Japane ...
    Session ID: 21-04-035
    Published: 2022
    Released on J-STAGE: March 22, 2017
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
  • -Paper doll animation-
    Ayana TAKASAKI, Hidekazu TSUJIAI
    Session ID: 21-04-036
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This is a story about a girl who jumps out of a painting and finds the hair ornament she was looking for. Nowadays, there are many beautiful and realistic animations produced by digital illustrations and computer graphics. In the midst of this trend, we will use the motif of paper dolls, with their awkward analog movements and limited orientation, to create and present animation with a unique atmosphere that is different from digital animation.
    Download PDF (1364K)
  • Hiraku Matsuda, Mutsuo Sano
    Session ID: 21-04-037
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, the tourism industry has been severely affected by the novel coronavirus. In order to prevent infection, it is necessary to maintain a certain distance from the target area. On the other hand, it is also necessary to provide content that attracts the interest of tourists. Taking these points into consideration, we create an application for tourists that can be experienced while keeping a distance, with infection prevention in mind. In this research, we convert the scenery in photographs into Edo-period-style images. In order to reproduce the scenery of the Edo period, we apply Ukiyo-e-style processing using Instance Segmentation and CycleGAN.
    Download PDF (1248K)
  • Miyuki MATSUDA, Kenichiro TANAKA, Takuya FUNATOMI, Yasuhiro MUKAIGAWA, ...
    Session ID: 21-04-038
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Thermal cameras make measurements using far-infrared light, which objects emit according to their temperature, and are used in a variety of situations such as security and infectious disease control. However, unlike ordinary cameras, the aperture of an ordinary thermal camera is often fixed. In this paper, we propose a method to generate a large-aperture synthetic aperture image from far-infrared ray-space information measured by a thermal camera, obtaining an image with a very shallow depth of field and removing unwanted obstructions that lie at a different depth from the subject. Furthermore, we propose an algorithm to compensate for the temperature distortion that occurs when generating synthetic aperture images, and verify its effectiveness.
    Download PDF (385K)
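The synthetic-aperture idea in the abstract above is essentially shift-and-add refocusing, which can be sketched in 1-D. This is a simplified illustration, not the paper's method: the paper works on 2-D far-infrared ray-space data and additionally compensates temperature distortion, while the views, offsets, and disparity below are invented.

```python
# Shift-and-add refocusing: shift each sub-aperture view by its camera
# offset times a depth-dependent slope, then average. A point at the chosen
# depth aligns across views and stays sharp; everything else blurs out.
def refocus(views, offsets, slope, width):
    out = [0.0] * width
    for view, u in zip(views, offsets):
        shift = round(u * slope)  # disparity compensation for the chosen depth
        for x in range(width):
            src = x - shift
            if 0 <= src < width:
                out[x] += view[src]
    return [v / len(views) for v in out]

width = 16
offsets = [-2, -1, 0, 1, 2]   # camera positions along the baseline
true_disparity = 3            # pixels of parallax per unit of camera offset

# synthetic 1-D views of a single bright (warm) point
views = []
for u in offsets:
    v = [0.0] * width
    v[8 - u * true_disparity] = 1.0  # the point drifts with camera offset
    views.append(v)

refocused = refocus(views, offsets, slope=true_disparity, width=width)
defocused = refocus(views, offsets, slope=0, width=width)
```

With `slope` matched to the point's disparity, the peak at index 8 is fully reconstructed; with the wrong slope, its energy is spread across the image, which is also why occluders at other depths fade away.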
  • Ryosuke ISHIKAWA, Kohjiro HASHIMOTO, Tadashi MIYOSAWA
    Session ID: 21-04-039
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this research, we propose a scene extraction method for table tennis. Table tennis players set up a fixed camera and record their own matches. The recorded video is used to review their play later, but it has been reported that actual play, such as rallies, accounts for only about 20% of match time, so reviewing the entire video is inefficient. We therefore propose a method to extract from the video only the scenes the user wants to review. Here, we assume that the features of the scenes to be extracted are body movements, and the scene extraction problem is solved with a matching method for skeletal time-series data based on a hidden Markov model.
    Download PDF (349K)
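    Matching a time series against a hidden Markov model is usually scored with the forward algorithm, which sums the probability of the observation sequence over all state paths; segments whose score exceeds a threshold are treated as matches. A minimal log-domain sketch with discrete emissions (the paper's system works on skeletal features; the symbol alphabet here is purely illustrative):

    ```python
    import numpy as np

    def log_likelihood(obs, pi, A, B):
        """Forward-algorithm log-likelihood of a sequence under a
        discrete HMM. obs: list of symbol indices; pi: initial state
        probabilities; A: state-transition matrix; B: emission matrix
        of shape (n_states, n_symbols)."""
        alpha = np.log(pi) + np.log(B[:, obs[0]])
        for o in obs[1:]:
            # alpha_t(j) = B[j, o] * sum_i alpha_{t-1}(i) * A[i, j]
            alpha = np.log(B[:, o]) + np.logaddexp.reduce(
                alpha[:, None] + np.log(A), axis=0)
        return np.logaddexp.reduce(alpha)
    ```

    In a real matcher, continuous skeletal features would be quantized into symbols (or the discrete emissions replaced by Gaussians) and the score computed over a sliding window of the match video.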
  • WEIBO ZHONG, Ryosuke FURUTA, Yukinobu TANIGUCHI, Ryota HINAMI, Shonosu ...
    Session ID: 21-04-040
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, we tackle the problem of detecting bounding boxes of handwritten text in manga images. Detecting the positions of handwritten text in manga can support various downstream tasks such as automatic translation and image completion. To this end, we build a new dataset with bounding-box annotations of handwritten text on the Manga109 dataset and apply an object detection method designed for natural images to our problem. In addition, to improve the detection accuracy of handwritten text, we propose a data augmentation method that increases the training data by synthesizing handwritten text, and a method that integrates detection results using images converted into line drawings.
    Download PDF (874K)
  • Shunta KOMATSU, Ryosuke FURUTA, Yukinobu TANIGUCHI, Ryota Hinami, Shon ...
    Session ID: 21-04-041
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Character recognition in manga is an important task for improving the usability of e-comics through recommendation and automatic translation systems. However, character identification in manga is difficult because drawing styles vary greatly from author to author and work to work. In this paper, we propose a clustering method for manga face images that is robust to unknown authors and works. In particular, we show that training the model with domain generalization and self-supervised learning enables effective feature extraction even for unknown styles. We also show that constraints based on manga-specific domain knowledge improve clustering accuracy. Experiments on a manga image dataset confirm that the proposed method improves clustering accuracy.
    Download PDF (2402K)
  • Wen Zheng, Shigekazu Sakai
    Session ID: 21-04-042
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Lighting is an important part of modern 3D animation production, involving elements such as light intensity, lighting direction, shade, and shadow. Among these factors, color deserves particular consideration. Existing research shows that color can influence people's emotions in both subjective and objective ways, a valuable potential for animation: it enables more profound emotional depiction in works as well as more precise emotional guidance for the audience. However, scientific research on this topic, especially on its application to 3D animation, is lacking. This study therefore aims to detect the influence of the color of lighting in 3D animation on the audience's emotions through a comparative psychological experiment. Each subject watches three short 3D animation videos that differ only in the color of the ambient lighting, and fills in a questionnaire about changes in mood after each viewing.
    Download PDF (1080K)
  • -Qualitative survey of learning methods and contents-
    Luna MATSUKAWA, Masanobu ENDOH
    Session ID: 21-04-043
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Serious games are developed for the purpose of learning and problem solving. In contrast, there are entertainment games that lead to unintentional learning. For the latter, we conducted a qualitative survey of what players learned and how they learned it. As a result, in addition to knowledge acquisition and motivation, which are the main outcomes of serious games, we also observed skill improvement and application to real life. We term such games “Awareness Games,” in contrast to serious games, and show that a feature unique to Awareness Games is that they serve as a “guideline for life.”
    Download PDF (382K)
  • Urara Horikawa, Tomoya Ito, Yuriko Takeshima, Tsukasa Kikuchi
    Session ID: 21-04-044
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The purpose of this study is to clarify the relationship between numerical features of music and color, and thereby to support music selection.
    Download PDF (871K)
  • Akiko Sato, Shoji Takeuchi
    Session ID: 21-04-045
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The purpose of this research is to determine the actual status of science publicity and outreach, and to explore issues that need improvement, by interviewing scientists about how they produce scientific content and what their views are. The scientists we interviewed felt that visual content is useful for disseminating scientific research, but that care must be taken when releasing information. The problem is that institutions have no system to support the creation of visual content; in many cases the scientists handle the task themselves, which is laborious, time-consuming, and in some cases financially difficult.
    Download PDF (1139K)
  • Rin NAGAO, Suguru SAITO
    Session ID: 21-04-046
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Based on data examples used in actual animation production, we investigated what kind of data is related to each work process in animation production and designed a database that stores the data used in production. The database is a relational database that records the relationship between each task and its data and how each piece of data is used in the animation; it also stores the revision history of the data. This database facilitates understanding production progress, handing data over to the next work process, and checking the revision history.
    Download PDF (1484K)
  • Terumi YAJIMA, Naoki HASHIMOTO
    Session ID: 21-04-047
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, it has become popular to represent life-size characters using a combination of a projector and a simple transparent screen, and this method is widely used in live character performances. However, since the projection surface is flat, the characters appear flat when viewed from an angle. In this paper, we propose a multi-viewpoint video presentation method for live character performances that reduces this apparent flatness by presenting appropriate images according to the position of the observer.
    Download PDF (587K)
  • Naoto YOSHIDA, Oky Dicky Ardiansyah Prima
    Session ID: 21-04-048
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    3D spherical displays have been attracting attention as devices for displaying multi-view 3D images. However, these displays render images according to a single viewpoint, so only one viewer can see the images accurately. This study extends the projection method of the spherical display to develop a display with a polyhedral structure, projecting images from multiple viewpoints with the normal direction of each face as the viewing direction.
    Download PDF (914K)
  • Takuji Kurosawa, Oky Dicky Ardiansyah Prima
    Session ID: 21-04-049
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    To create more natural-looking 3D images, perspective correction corresponding to the viewing direction must be applied. Tracking sensors are generally used to determine the user's viewing direction, but the complexity of the sensor devices has made practical use of displays equipped with such correction difficult. This study investigates the implementation of the VIVE Tracker, Azure Kinect, and vision cameras as tracking sensors for a 3D spherical display, and analyzes the differences in the appearance of the 3D images generated with each sensor.
    Download PDF (634K)
  • Atsuki Hirakawa, Masaki Komaba, Hina Miyata, Keishi Asai, Kei Kanari, ...
    Session ID: 21-04-050
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, digital archives that preserve and reproduce intangible cultural properties have been attracting attention. We believe that digital archiving of intangible cultural properties is essential in Japan, where the population is aging rapidly. In this study, we focused on the musical accompaniment (ohayashi) of the Kanuma Autumn Festival, which is designated as a UNESCO Intangible Cultural Heritage, and, with the cooperation of the members of the Kamifukatsu Ohayashi Preservation Society, we developed a system for viewing the ohayashi content from multiple viewpoints, including individual images and hand-held images.
    Download PDF (498K)
  • Xiaojie Zheng, Haoran Xie, Kazunori Miyata
    Session ID: 21-04-051
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    There are three important aspects of vocal training: breath, pitch, and rhythm. In this study, we propose a vocal practice support system that lets users work on vocal pitch and rhythm during practice so that they can enjoy practicing even when alone. In the proposed system, the user vocalizes while tapping the keyboard according to a predefined rhythm. We propose an effective visualization method for vocal pitch and rhythm and an evaluation method for singing. The evaluation results show the learning effect of the proposed system. We believe the proposed method is novel in that it allows the user to learn not only vocalization but also rhythm simultaneously.
    Download PDF (560K)
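    Pitch visualizations of the kind described above commonly map a detected fundamental frequency onto the equal-tempered MIDI note scale, so that sung pitch can be drawn against target notes on a piano-roll display. The standard mapping (A4 = 440 Hz = MIDI note 69; shown here as a generic illustration, not the paper's implementation) is:

    ```python
    import math

    def freq_to_midi(f_hz):
        """Convert a fundamental frequency in Hz to a (fractional) MIDI
        note number; each octave spans 12 notes, with A4 = 440 Hz = 69."""
        return 69 + 12 * math.log2(f_hz / 440.0)
    ```

    The fractional part of the result directly gives the deviation from the nearest semitone, which is convenient for giving the singer intonation feedback.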
  • -Through the creation of “Legs,” a multi-legged free-standing object-
    Koji IIGURA
    Session ID: 21-04-052
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The importance of STEAM education and maker education is increasing day by day, but delivering such education requires various facilities. This tends to lead to a situation where these programs can be offered only during periods when grants and competitive funding are available. To solve this problem, we present specific information, obtained through practice, on inexpensive equipment that can be used for education.
    Download PDF (1491K)
  • Rikuto ANDO, Mitsunori Makino
    Session ID: 21-04-053
    Published: 2022
    Released on J-STAGE: March 31, 2023
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    To support mathematics learning, we propose an AR system that displays information on a desk. The system uses machine learning to recognize fingers in video images and displays the formula corresponding to the problem being pointed at. This improves time efficiency by reducing the time spent searching for materials during learning. Measurements of operation time and problem-solving time in our experiment showed that the proposed system is effective in speeding up material search and reducing solving time.
    Download PDF (1768K)