Reports of the Technical Conference of the Institute of Image Electronics Engineers of Japan
Online ISSN : 2758-9218
Print ISSN : 0285-3957
Current issue
Displaying 1-50 of 90 articles from this issue
  • Noriki AMANO
    Session ID: 22-04-01
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This study aims to support theremin performance with XR (Cross Reality) and to extend the instrument's functions. Specifically, AR (Augmented Reality) is used to assist theremin performance, and MR (Mixed Reality) of sound is used to realize theremin chords virtually. Because the theremin is an electronic instrument played by holding one's hands in the air, its performance is greatly affected by subtle hand positions, so AR is effective for assisting performance. In addition, the virtual realization of chords by sound MR extends the theremin, which by itself can play only single notes, as a musical instrument.
    Download PDF (506K)
  • Iori Yamamoto, Kohjiro Hashimoto, Noritaka Uchibori, Yasuhito Nagatomo ...
    Session ID: 22-04-02
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This research is an effort toward regional revitalization around Suwa Taisha Kamisha. We developed a smartphone application using AR technology to spread understanding of the five-storied pagoda and the Gochi Nyorai statue, which are historical assets. Specifically, we made 3D models of the five-storied pagoda and the Gochi Nyorai statue and built an application that reconstructs them in 3D. Assuming different places of use, we built three functions: a location-based mode used at the site of the five-storied pagoda, a marker-based mode used at a museum, and a markerless mode used at home. In this paper, we report on the constructed application and its effect after practical use.
    Download PDF (692K)
  • Ikumi HISAMATSU, Yuji SAKAMOTO
    Session ID: 22-04-03
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Virtual reality (VR) has been attracting attention for its potential to give people a new purchasing experience, and in fact many forms of VR shopping content have appeared and are in operation. However, most previous studies on VR shopping have dealt with systems that imitate physical stores. We therefore focused on VR shopping that does not imitate a real store and evaluated the usability of the system and the user's purchasing experience. In this study, we created VR shopping content and conducted a subject experiment with a purchasing task to reproduce the purchasing experience.
    Download PDF (720K)
  • Tatsuya ONO, Tomokazu ISHIKAWA
    Session ID: 22-04-04
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The goal of this research is to use XR to provide reality-based training regardless of location and weather conditions. We implemented a system that reproduces in XR the initial velocity, rotation axis, and number of rotations of the ball obtained from image recognition. As verification experiments, penalty kicks (PK) were performed both in XR and in reality, and the system was evaluated by a questionnaire given to the subjects and by the difference between the actual landing point of the ball and its landing point in the virtual space.
    Download PDF (580K)
  • Takeru Yoshida, Tomokazu Ishikawa
    Session ID: 22-04-05
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, we investigated the potential of MR games using an OHAJIKI (marble-flicking) MR game. We created an OHAJIKI MR game playable with the HMD HoloLens 2 and conducted play experiments with it. A questionnaire survey evaluated “interestingness”, “operability”, “difficulty”, “replayability”, and “whether players would recommend it to friends”. Since the average scores for “interestingness”, “replayability”, and “would recommend to friends” were 7.5 or higher, we found that MR games can be used for entertainment. On the other hand, there were problems such as difficulty in wearing the HoloLens 2 properly and in pressing the virtual buttons reliably. We consider that solving these operability problems can be expected to further improve the fun.
    Download PDF (543K)
  • Misato Mochizuki, Yumemi Ueda, Kohjiro Hashimoto, Tadashi Miyosawa
    Session ID: 22-04-06
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The PLATEAU project, led by the Ministry of Land, Infrastructure, Transport and Tourism, is developing use-case models for 3D city models. In this study, as a use case, a 3D city model is used to predict road surface freezing. When the factors of road surface freezing are limited to the sunshine rate, frozen spots can be predicted by simulating the movement of the sun and calculating the sunshine rate for each road. In this paper, we build a prototype sunshine simulator and evaluate its effectiveness through experiments.
    Download PDF (348K)
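The freezing-prediction entry above reduces the problem to a sunshine rate per road. As a rough, hypothetical sketch of that idea (a crude solar path and a single box-shaped building as occluder; all names and parameters are my own, not from the paper), a sunshine rate can be estimated with shadow-ray tests over daytime samples:

```python
import numpy as np

def sun_direction(hour, max_elev_deg=30.0):
    """Very crude solar path: azimuth sweeps east -> south -> west over the day,
    elevation peaks at noon (hypothetical simplification, not the paper's model)."""
    t = (hour - 6.0) / 12.0                        # 0 at 06:00, 1 at 18:00
    elev = np.radians(max_elev_deg) * np.sin(np.pi * t)
    azim = np.pi * t                               # 0 = east, pi/2 = south, pi = west
    return np.array([np.cos(elev) * np.cos(azim),          # x: east
                     -np.cos(elev) * np.sin(azim) * 0.5,   # y: toward the south
                     np.sin(elev)])                         # z: up

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does a ray from `origin` along `direction` hit the axis-aligned box?"""
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (box_min - origin) / direction
        t2 = (box_max - origin) / direction
    tmin = np.max(np.minimum(t1, t2))
    tmax = np.min(np.maximum(t1, t2))
    return tmax >= max(tmin, 0.0)

def sunshine_rate(point, buildings, hours=np.arange(7.0, 17.0, 0.5)):
    """Fraction of daytime samples at which `point` is not shadowed by any building."""
    lit = 0
    for h in hours:
        d = sun_direction(h)
        if d[2] > 0 and not any(ray_hits_box(point, d, lo, hi) for lo, hi in buildings):
            lit += 1
    return lit / len(hours)

# Example: a road point just north of a 10 m tall building block.
building = (np.array([-5.0, 5.0, 0.0]), np.array([5.0, 10.0, 10.0]))
print(sunshine_rate(np.array([0.0, 12.0, 0.0]), [building]))
```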
  • -A report of experiments at the zoo’s event for children-
    Yuto UCHI, Sotaro KIDO, Yossi KUBOTA, Kanon NAKAHARADA, Naoto HARADA, ...
    Session ID: 22-04-07
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This study helps participants learn the characteristic behavior of a target animal using a proposed system in which they play the roles of demonstrator and observer. In the proposed system, participants support each other's learning: while the demonstrator mimics the animal's movements, tracked with a Kinect, correct or incorrect feedback is given to the observer. The results of a questionnaire survey of families with children showed highly positive feedback.
    Download PDF (717K)
  • Hiroaki KOMORI, Kazuya UEKI
    Session ID: 22-04-08
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Today there are many ways to deepen one's understanding of artworks; among them, interactive appreciation, in which artworks are unraveled through dialogue among viewers, is attracting attention. This method of appreciation does not necessarily require artistic knowledge, so even novice art appreciators can participate; on the other hand, it requires participation in workshops. In this study, we propose a GUI application that uses a unique appreciation process. Users actually view artworks with the application and form their own interpretations of them. Afterwards, we conducted a questionnaire-based evaluation of the viewing experience. The results confirmed that viewing with this system enables viewers to look at artworks carefully and leads to their own free interpretation of the works.
    Download PDF (379K)
  • Wang Wenzhen, Hiroyuki Kose, Takashi Mihara, Hapnes Toba, Sulaeman San ...
    Session ID: 22-04-09
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The purpose of this study is to develop educational content that links schools in remote locations by utilizing VR and 3D game-engine technology. Its educational effect is evaluated by comparing cases with and without the use of a VR head-mounted display (HMD).
    Download PDF (547K)
  • Ginnojo Muta, Taiga Matsunaga, Daichi Yanagihara, Kohjiro Hashimoto
    Session ID: 22-04-10
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This paper proposes a driving-scene detection method for car driving behavior data. With a supervised learning approach, a scene-detection model can be generated by collecting data of the scenes to be detected in advance and training on them. However, since it is difficult to collect such scene data from the huge amount of driving behavior data, we consider utilizing driving behavior data obtained from a driving simulator. Naturally, there is a discrepancy between the simulator's driving behavior data and actual driving behavior data, and scene detection is difficult if the simulator data are learned as they are. The purpose of this paper is to obtain the knowledge necessary for devising a method of applying simulator data.
    Download PDF (328K)
  • Masaru MIYAZAKI, Naotsuna FUJIMORI, Momoko MAEZAWA, Masaru TAKEUCHI, S ...
    Session ID: 22-04-11
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this paper, we introduce methods of converting musical expression into visual representations and research on producing content using these new representations. We expect to accomplish this by analyzing and extracting the relationship between music-theoretic features (volume, rhythm, pitch, reverberation, song development, etc.) and the emotional features (brightness, darkness, tension, relaxation, etc.) evoked by music, and then designing image output that corresponds to each of these parameters. By combining the results with the original content images, the information contained in the music can be conveyed in a way that is easy to understand and rich in emotion.
    Download PDF (1571K)
  • Tamaki AKAIKE, Michio SHIRAISHI
    Session ID: 22-04-12
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, we propose a method to visualize the many passes in a game for users who are not familiar with strategies. The method applies clustering to locations and players and displays the tendency of potentially dangerous passes by the opposing team.
    Download PDF (562K)
  • Xiaojun Wu, Ryuichi Tanida, Jun Shimamura
    Session ID: 22-04-13
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Nowadays, 3D point clouds of real buildings, or even an entire real town, can be obtained using LiDAR and similar devices. These point clouds can be used to construct virtual buildings, or even a whole virtual town, in cyberspace. However, it is hard to visualize such point clouds effectively because of their large data size. In this paper, we propose an interactive visualization method using an octree structure, with which users can switch resolutions and focus areas interactively. We also propose a high-speed path-detection algorithm that helps compose effective camera work for wandering through a virtual town.
    Download PDF (726K)
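As background for the octree-based viewer above, here is a minimal sketch of the usual level-of-detail idea: build an octree over the cloud and, for a given viewpoint, return one representative point for nodes that project small on screen and raw points otherwise. This is an illustration under my own assumptions, not the authors' implementation:

```python
import numpy as np

class OctreeNode:
    def __init__(self, points, center, half, depth, max_depth=8, leaf_size=32):
        self.center, self.half = center, half
        self.representative = points.mean(axis=0)          # one proxy point per node
        self.children = []
        if depth < max_depth and len(points) > leaf_size:
            for dx in (-1, 1):
                for dy in (-1, 1):
                    for dz in (-1, 1):
                        c = center + 0.5 * half * np.array([dx, dy, dz])
                        mask = np.all(np.abs(points - c) <= 0.5 * half + 1e-9, axis=1)
                        if mask.any():
                            self.children.append(
                                OctreeNode(points[mask], c, 0.5 * half, depth + 1,
                                           max_depth, leaf_size))
        else:
            self.points = points                            # leaf keeps raw points

    def collect(self, eye, pixel_size):
        """Return raw points near the viewer; far-away nodes collapse to one point."""
        if not self.children:                               # leaf: return everything
            return [self.points]
        dist = max(np.linalg.norm(self.center - eye), 1e-6)
        if (2.0 * self.half) / dist < pixel_size:           # node projects below threshold
            return [self.representative[None, :]]
        out = []
        for ch in self.children:
            out.extend(ch.collect(eye, pixel_size))
        return out

# Usage: coarser cloud when viewed from afar, finer when the camera is close.
pts = np.random.rand(20000, 3) * 100.0
root = OctreeNode(pts, center=np.full(3, 50.0), half=50.0, depth=0)
far = np.vstack(root.collect(eye=np.array([500.0, 0.0, 0.0]), pixel_size=0.05))
near = np.vstack(root.collect(eye=np.array([60.0, 50.0, 50.0]), pixel_size=0.05))
print(len(far), len(near))          # the far view should need far fewer points
```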
  • Karen TSURUME, Noriki AMANO
    Session ID: 22-04-14
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, we attempt a musical representation of impossible objects by auralization. An impossible object is a figure that can be perceived only visually, through optical illusions, and research on impossible objects can lead to artistic expressions based on such illusions. Considering musical expressions of impossible objects based on these illusion effects is an interesting research subject, because expressing figures that do not actually exist in the form of sound makes it possible to enjoy impossible objects through auditory perception. In this study, we use Minecraft to create impossible objects and music, and aim to make the illusory effect of the objects recognizable as music.
    Download PDF (467K)
  • Yumemi Ueda, Misato Mochizuki, Kohjiro Hashimoto, Tadashi Miyosawa
    Session ID: 22-04-15
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The PLATEAU project, led by the Ministry of Land, Infrastructure, Transport and Tourism, is developing use case models for 3D city models. In this research, as a use case model, a 3D city model is utilized in a system for early notification of blocked roads due to heavy rains or disasters. Since disaster information is often posted on Twitter in real time, we consider acquiring this information, adding it to the 3D city model, and making it available on the web page. In this paper, we construct a prototype system and evaluate its effectiveness through experiments.
    Download PDF (370K)
  • -Learning from filming and editing videos set on campus-
    Yuya DAN, Hideyasu TANOE
    Session ID: 22-04-16
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In the course “Multimedia Programming”, offered in the Information Course of the Faculty of Business Administration of Matsuyama University, classes are designed so that students can practically learn about multimedia expression and technology, with the goal of producing videos that convey the appeal of Matsuyama University. In each weekly 90-minute class during the term, students work on video-production assignments through group work after new concepts and technologies are introduced. This paper reports on this year's workshop, in which 134 students with various levels of proficiency in information technology (IT) were divided into 31 teams. The effectiveness of the education is verified through student reviews. As a result, the group work received favorable evaluations, and a certain number of students felt that their communication skills improved.
    Download PDF (602K)
  • Tweaking of control parameters
    Daisuke KODAMA, Issei FUJISHIRO
    Session ID: 22-04-17
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Hair animation in Japanese hand-drawn animation plays an important role in expressing dynamism and creating impressive scenes. However, this kind of motion requires many parameters to be set in 3DCG, which makes it difficult to achieve the ideal motion entirely by hand. We propose a support system for hand-drawn, anime-style hair animation. The system takes as input sketches illustrating how the hair should sway, together with control parameters for adjusting the motion, which improves efficiency while maintaining expressiveness.
    Download PDF (674K)
  • Kotojiro HANAZAWA, Motonobu KAWASHIMA, Daichi HAYAKAWA
    Session ID: 22-04-18
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In the creation of CG content, key poses are set for characters and motion is applied to them. Correctly expressing the center of gravity is considered important for character movement when creating natural animation, so creators are required to understand the correct center of gravity of their characters. This study proposes a method to improve the efficiency of animation production by supporting the understanding of the center of gravity in the characteristic body shape of two-heads-tall (chibi-proportioned) characters.
    Download PDF (529K)
  • Shuga OGASAWARA, Katsutsugu MATSUYAMA
    Session ID: 22-04-19
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    While LEGO bricks can be freely shaped depending on how they are combined, it is not easy to restore a completed work after disassembling it, especially if it was created on the spur of the moment. In this study, we attempt to support restoration by developing a tool with a function to save the assembly order using an RGB-D camera and a function to present the assembly procedure using 3DCG. The method recognizes the shape and position of the blocks placed one by one by the user and constructs the assembly sequence as a graph. The tool also renders 3DCG based on the constructed graph to provide feedback on the recognition and to present the assembly sequence of the work.
    Download PDF (518K)
  • Takayuki IDE, Toshiyuki IMAI, Kohei TOKOI
    Session ID: 22-04-20
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Node-based visual programming has become popular in recent years and is used in a wide range of applications, from video production to the development of embedded systems for automobiles. To improve the readability of node graphs, time and effort are required to devise the arrangement of nodes. In this study, we developed a node-graph drawing algorithm suitable for describing digital compositing processes in node-based visual programming. The drawing algorithm is mainly based on force-directed algorithms. A prototype system that imitates the node editor of the video-editing software “DaVinci Resolve” was created, and the effectiveness of the proposed method was examined using this system.
    Download PDF (594K)
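The drawing algorithm above is described as mainly force-directed; the sketch below shows the generic Fruchterman-Reingold-style iteration such methods start from (pairwise repulsion, attraction along edges, a cooling step size). It is only a baseline illustration, not the compositing-specific layout developed in the paper:

```python
import numpy as np

def force_directed_layout(n_nodes, edges, iters=300, area=1.0, seed=0):
    """Basic Fruchterman-Reingold iteration: k^2/d repulsion between all node
    pairs, d^2/k attraction along edges, and a step size that cools over time."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-0.5, 0.5, size=(n_nodes, 2))
    k = np.sqrt(area / n_nodes)                      # ideal edge length
    for it in range(iters):
        disp = np.zeros_like(pos)
        for i in range(n_nodes):                     # repulsion from every other node
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1) + 1e-9
            dist[i] = np.inf                         # ignore self-interaction
            disp[i] += (delta / dist[:, None] * (k * k / dist)[:, None]).sum(axis=0)
        for a, b in edges:                           # attraction along edges
            delta = pos[a] - pos[b]
            dist = np.linalg.norm(delta) + 1e-9
            f = delta / dist * (dist * dist / k)
            disp[a] -= f
            disp[b] += f
        temp = 0.1 * (1.0 - it / iters)              # cooling limits the step size
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, temp)
    return pos

# A small node graph resembling a compositing chain with a side branch.
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 3)]
print(force_directed_layout(5, edges))
```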
  • Miyuka MORI, Motonobu KAWASHIMA, Daichi HAYAKAWA
    Session ID: 22-04-21
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, CG works using toon rendering have become a common sight in video games and animation productions. Toon rendering is one of the methods used to reproduce the characteristics of hand-drawn animation in 3DCG. However, the geometric accuracy of the model, which is a feature of 3DCG, may lead to a sense of discomfort and a lack of dynamism compared with hand-drawn animation. Our method aims to express perspective exaggeration similar to that of hand-drawn animation by using the positional relationship between the model and the camera to change the strength of the perspective deformation, and by using real-time 3DCG to detect obstructions that hide the face and further adjust the exaggeration.
    Download PDF (378K)
  • Shota Kato, Motonobu Kawashima, Daichi Hayakawa
    Session ID: 22-04-22
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In CG-based video and game content, breaching animations of aquatic creatures jumping out of the water can be seen. To create such animations, it is necessary to investigate and understand the actual behavior of sharks and whales, so producing them takes a great deal of time. In this study, we investigate the characteristics of the breaching behavior of whales and, based on the results, propose a production method for breaching animation that can be used efficiently.
    Download PDF (767K)
  • Reo NANAMI, Motonobu KAWASHIMA, Daichi HAYAKAWA
    Session ID: 22-04-23
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Among the visual effects in video games, brushstroke effects often differ from actual brushstrokes; in this study, the brushstroke style we target is that of suiboku-ga (ink painting). In addition, when a video game based on a manga was compared with its source, the reproduction of the same scenes was insufficient. Therefore, we created a brushstroke effect that takes into account the characteristics of suiboku-ga in order to reproduce strokes actually drawn with a brush. To achieve this, we devised trail effects and materials to create a visual effect that accounts for the “blurring” and “smearing” characteristic of suiboku-ga.
    Download PDF (721K)
  • Taketo Katsuta, Motonobu Kawashima, Daichi Hayakawa
    Session ID: 22-04-24
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    We propose a method to reproduce muscle simulation in a real-time environment using vertex animation and physical simulation.
    Download PDF (631K)
  • Fuminori SHIBASAKI, Issei FUJISHIRO
    Session ID: 22-04-25
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Accomplished artists often include leading lines in their compositions, which utilize not only obvious lines but also rows of similar shapes, gradations, brush strokes, etc., to emphasize the main subject of their works, as well as to guide the viewer’s attention to the surrounding areas. Although saliency maps are generally used to locate attractive areas of still images, such scalar features do not express the trajectory of a viewer’s gaze. In this study, we propose a visual analysis method for leading lines based on subgraphs of Morse-Smale complexes extracted from saliency maps. Furthermore, we empirically prove the validity of our method in comparison to the eye tracking results of actual observers.
    Download PDF (2824K)
  • Hideki MITSUMINE, Toshie MISU, Kensuke HISATOMI
    Session ID: 22-04-26
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, volumetric capture technology has been actively developed and put to practical use for the purpose of content creation in xR and rich video expression. In order to achieve a high realism in the rendering of volumetric video and to composite it in a natural way consistent with real space during the AR experience, it is necessary to add material property information to the volumetric video that allows the lighting conditions to be changed. In this report, we propose a material properties measurement method that can be applied to volumetric capture. The proposed method is characterized by the use of polarized light, color three-phase photometric stereo, and a multi-dot DOE laser pattern projector to obtain reflection coefficients, normal information, and sub-surface scattering parameters from a few frames of captured video. This report shows experimental results of the proposed method and demonstrates the effectiveness of the principle.
    Download PDF (2525K)
  • Hiroki JO, Yoshihisa KANEMATSU, Kouji MIKAMI
    Session ID: 22-04-27
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In Japanese robot animation battle scenes, the atmosphere of robot battles, the situation, and the tempo of battles vary depending on the storyline, the theme of each battle, and the characteristics of each battle. In order to show these elements effectively, the way of connecting cuts has a great influence. In this study, 188 battle scenes were extracted from 22 existing robot animation works, and the common developments during the battle and the actions that the robots often perform were classified. Using the results of the analysis, we constructed a flowchart for structural analysis of the way cuts are connected.
    Download PDF (479K)
  • -Evaluation of Relationship Between Road Gradient and Braking Operation-
    Taiga Matsunaga, Ginnojo Muto, Daichi Yanagihara, Kohjiro Hashimoto
    Session ID: 22-04-28
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This paper examines a clustering method for driving behavior data. Here, it is assumed that the driving scene is defined by the relationship between the surrounding situation and the operation. Then, Coupled-GP-HSMM is evaluated as a clustering method for driving situations. In the experiment, road slope data is used as situation data, and brake data is used as operation data, and scenes with a strong relationship between the two are extracted as driving scenes. Then, its effectiveness is evaluated based on actual driving environment conditions.
    Download PDF (528K)
  • Kazuya Suzuki, Akihiro Chiba, Kazuya Ueki
    Session ID: 22-04-29
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, Vision and Language technologies have developed, and the field of video retrieval has been developing along with them. In this study, we examined how the accuracy of video retrieval with Vision and Language techniques changes when the input query sentences are changed. We conducted two types of validation: validation 1, “adding a phrase at the beginning of the query sentence”, and validation 2, “adding an important word from the query sentence at the beginning or end of the sentence”. As a result, validation 2 was effective; adding the important word at the end of the sentence improved accuracy in many patterns. The result of validation 1 varied depending on the phrase, and the phrases that improved accuracy were particularly effective for CLIP and SLIP. When two query sentences and a prompt-engineered sentence were used in the search, accuracy improved for more query sentences, although the number of cases where accuracy improved greatly decreased.
    Download PDF (326K)
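For context on the query-rephrasing experiments above, here is a minimal sketch of scoring a query sentence and two rephrased variants against video keyframes with a pre-trained CLIP model. It assumes the openai/CLIP package and PyTorch, and uses random images as stand-ins for extracted keyframes; it is not the authors' evaluation code:

```python
import numpy as np
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A base query and two rephrasings of the kind compared in the paper:
# a phrase prepended, and an important word repeated at the end.
queries = [
    "a man riding a bicycle on a bridge",
    "a photo of a man riding a bicycle on a bridge",
    "a man riding a bicycle on a bridge, bicycle",
]

# Random images standing in for keyframes extracted from the video collection.
frames = [Image.fromarray((np.random.rand(224, 224, 3) * 255).astype("uint8"))
          for _ in range(3)]

with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(queries).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    img_feat = model.encode_image(torch.stack([preprocess(f) for f in frames]).to(device))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    sims = text_feat @ img_feat.T        # rows: query variants, columns: keyframes

for q, row in zip(queries, sims.cpu().numpy()):
    print(q, "-> best frame:", int(row.argmax()), "score:", float(row.max()))
```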
  • Akihiro Chiba, Kazuya Suzuki, Kazuya Ueki
    Session ID: 22-04-30
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    We propose methods to improve the accuracy of video retrieval from query texts by using texts that describe the retrieval topic in detail as auxiliary texts for retrieval. To obtain text that is more complementary to the query text from the original detailed text, we tested four types of prompt engineering: combining with the query text, splitting, simplifying, and paraphrase adjustment. Retrieval experiments using pre-trained CLIP models showed that three of the methods (combining with the query sentence, simplification, and paraphrase adjustment) improved retrieval accuracy, and the combination of simplification and paraphrase adjustment improved accuracy the most.
    Download PDF (435K)
  • Takuma SUGIURA, Kazuya UEKI
    Session ID: 22-04-31
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, e-Sports, in which computer games are regarded as sports competitions, has become increasingly popular. This has increased the need to classify the mouse, which is an important device in e-Sports. In this study, we created a mouse classification system consisting of a mouse detection part and a mouse classification part and constructed a new dataset for mouse detection and a new dataset for mouse classification. In addition, we investigated learning methods for the mouse detection and mouse classification parts. As a result, we confirmed that the mouse detection part can detect mice with sufficient accuracy, and the mouse classification part has the highest accuracy of 72.8%.
    Download PDF (693K)
  • Keito NAKAYAMA, Tomokazu ISHIKAWA
    Session ID: 22-04-32
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, the similarity of subjectively similar songs was calculated by analyzing their constituent parts, with the aim of identifying the factors that contribute to their perceived similarity. The results showed that there was no consistent pattern in the parts that were considered most similar among subjectively similar songs. Therefore, it was concluded that it is more appropriate to perform similarity calculations based on the songs as a whole, rather than limiting the comparison to a limited set of parts.
    Download PDF (1973K)
  • Ryosuke HORIAI, Oky Dicky Ardiansyah PRIMA
    Session ID: 22-04-33
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The capability of deep learning to generate 3D representations of body joints from a single image has led to the development of 3D human pose estimation with a monocular camera. By representing the joints in the camera coordinate system, the direction, distance, and speed of a person's movement can be quantified, enabling applications in sports and rehabilitation. However, when the target person moves over a wide area, the camera also has to be moved, and the changes in camera position make the estimated direction, distance, and speed of movement incorrect. This study proposes a 3D human pose estimation method that is unaffected by the camera position and verifies its accuracy against motion capture.
    Download PDF (836K)
  • Toshitake SUZUKI, Tomokazu ISHIKAWA
    Session ID: 22-04-34
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    This study aims to recreate the animation of the breast using computer graphics. A mass-spring-damper system is applied to a 2D model of the breast, based on its internal structure. The movement of the breast is approximated by optimizing the spring constant (K) and damping constant (C) of the mass-spring-damper system using a genetic algorithm, to minimize the difference with real-world measurements.
    Download PDF (585K)
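A minimal sketch of the fitting idea described above: a single 1-D mass-spring-damper is simulated, and a small genetic algorithm searches for the spring constant K and damping constant C that minimize the squared error against a reference trajectory. The reference here is synthetic; the paper fits real measurements with a 2-D breast model:

```python
import numpy as np

def simulate(K, C, m=1.0, x0=1.0, dt=0.01, steps=300):
    """Semi-implicit Euler integration of m*x'' = -K*x - C*x' from displacement x0."""
    x, v, traj = x0, 0.0, []
    for _ in range(steps):
        a = (-K * x - C * v) / m
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Synthetic "measurement" produced with hidden parameters the GA should recover.
target = simulate(K=25.0, C=1.5)

def fitness(params):
    K, C = params
    return -np.sum((simulate(K, C) - target) ** 2)       # higher is better

rng = np.random.default_rng(0)
pop = rng.uniform([1.0, 0.1], [60.0, 5.0], size=(40, 2))  # individuals are (K, C) pairs
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the 10 best
    pa = parents[rng.integers(0, 10, size=30)]
    pb = parents[rng.integers(0, 10, size=30)]
    children = 0.5 * (pa + pb) + rng.normal(0, [1.0, 0.1], (30, 2))  # crossover + mutation
    children = np.clip(children, [1.0, 0.1], [60.0, 5.0])
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated K, C:", best)        # typically converges near (25.0, 1.5)
```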
  • Anna Goto, Takayuki Itoh
    Session ID: 22-04-35
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    A well-known chromatic problem is that humans may perceive the color of a foreground object differently before and after image synthesis. A small number of studies have addressed this problem from the viewpoint of color science. In this paper, we propose a color-correction technique that takes into account the color illusion caused by differences in brightness, saturation, and hue between the foreground and background when a foreground object is cut from an original image and merged with another background image in image composition. This technique is expected to reduce the color-change phenomenon when the foreground object is merged with another background image and to enable image compositing that looks more natural.
    Download PDF (352K)
  • Qing CHEN, Hironobu ABE
    Session ID: 22-04-36
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, an increasing number of AI-based services have been used to automatically generate live-action and animated face images. However, such services cannot automatically generate face images with facial expression differences for the generated images, which are necessary for game production, and require manual adjustment. In this research, we propose a method for automatically generating facial images with expression differences for facial images generated using StyleGAN2. We report on the system developed based on the proposed method, the effectiveness of the proposed method, and the evaluation of the system.
    Download PDF (678K)
  • Sakura SHINJI, Issei FUJISHIRO
    Session ID: 22-04-37
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Simulated experience and the reliving of real-life scenes are good examples of the use of three-dimensional virtual space. In particular, objects that are difficult to manipulate in the real world, such as cultural assets, are worth experiencing through a tangible 3D model. However, models produced by current 3D reconstruction techniques are static and cannot provide experiences involving dynamic interactions such as the manipulation of tools. In this study, we propose a method to reproduce a three-dimensional dynamic scene from a static point cloud of an object together with a sample video of the object's motion and a rotation axis specified by the user. We applied this method to point clouds created from mesh models and to a point cloud of an actual cultural asset, and were able to reproduce visually plausible articulated motions.
    Download PDF (434K)
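The core operation in the entry above, rotating the movable part of a static point cloud about a user-specified axis, can be sketched with Rodrigues' rotation formula. The toy "hinged lid" below is my own illustration; the actual method also uses a sample video of the motion:

```python
import numpy as np

def rotate_about_axis(points, axis_point, axis_dir, angle):
    """Rotate 3-D points by `angle` (radians) about the line through `axis_point`
    with direction `axis_dir`, using Rodrigues' rotation formula."""
    k = axis_dir / np.linalg.norm(axis_dir)
    p = points - axis_point                       # move the axis to the origin
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    rotated = (p * cos_a
               + np.cross(k, p) * sin_a
               + k * (p @ k)[:, None] * (1.0 - cos_a))
    return rotated + axis_point

# Toy scene: a "lid" of points hinged along the x-axis at height z = 1.
lid = np.column_stack([np.random.rand(500), np.random.rand(500), np.full(500, 1.0)])
hinge_point = np.array([0.0, 0.0, 1.0])
hinge_dir = np.array([1.0, 0.0, 0.0])

# Reproduce an opening motion as a sequence of frames.
frames = [rotate_about_axis(lid, hinge_point, hinge_dir, a)
          for a in np.linspace(0.0, np.pi / 2, 30)]
print(frames[-1][:3])     # lid points after opening 90 degrees
```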
  • Hikaru YAMAMOTO, Masaki ABE, Taichi WATANABE
    Session ID: 22-04-38
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Few of the CCD (Continuous Collision Detection) methods in today’s computer games deal with rotational motion. In addition, the Speculative CCD method of Unity, which is an example of a method that addresses rotational motion, has the problem of low accuracy in collision determination. To solve this problem, this study aimed to improve the accuracy of continuous collision determination for rotating rigid bodies by interpolating the rotational motion trajectory by a fan shape, which is an approximate shape of the rotational motion trajectory. In this paper, we define and propose an interference determination between a fan shape and several primitive shapes.
    Download PDF (356K)
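To illustrate the kind of test defined above, the sketch below checks interference between a 2-D fan (a circular sector approximating a rotational sweep, assumed convex) and a circle by finding the sector's closest point to the circle center. The parameterization and names are assumptions of mine; the paper covers several primitive shapes:

```python
import math

def closest_point_on_segment(p, a, b):
    """Closest point to p on the segment a-b (2-D)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    t = ((p[0] - a[0]) * abx + (p[1] - a[1]) * aby) / (abx * abx + aby * aby + 1e-12)
    t = max(0.0, min(1.0, t))
    return (a[0] + t * abx, a[1] + t * aby)

def sector_circle_overlap(apex, radius, mid_angle, half_angle, center, r_circle):
    """True if a convex circular sector (fan) and a circle interfere.
    The sector spans [mid_angle - half_angle, mid_angle + half_angle] around apex."""
    dx, dy = center[0] - apex[0], center[1] - apex[1]
    dist = math.hypot(dx, dy)
    ang = math.atan2(dy, dx)
    diff = (ang - mid_angle + math.pi) % (2.0 * math.pi) - math.pi  # wrap to [-pi, pi)
    if abs(diff) <= half_angle:
        # Closest point lies along the ray toward the circle center, clamped to the arc.
        closest = (apex[0] + dx / (dist + 1e-12) * min(dist, radius),
                   apex[1] + dy / (dist + 1e-12) * min(dist, radius))
    else:
        # Otherwise the closest point lies on one of the two straight edges of the fan.
        candidates = []
        for a in (mid_angle - half_angle, mid_angle + half_angle):
            tip = (apex[0] + radius * math.cos(a), apex[1] + radius * math.sin(a))
            candidates.append(closest_point_on_segment(center, apex, tip))
        closest = min(candidates,
                      key=lambda c: math.hypot(c[0] - center[0], c[1] - center[1]))
    return math.hypot(closest[0] - center[0], closest[1] - center[1]) <= r_circle

# A blade sweeping 60 degrees around the origin versus two circular obstacles.
print(sector_circle_overlap((0, 0), 2.0, math.radians(30), math.radians(30), (1.5, 0.5), 0.3))   # True
print(sector_circle_overlap((0, 0), 2.0, math.radians(30), math.radians(30), (-1.0, -1.0), 0.3))  # False
```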
  • Moeko ONO, Kazuya UEKI
    Session ID: 22-04-39
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    There are various types of character-based art, including handwritten brushstroke art and machine-generated ASCII art. Most mechanically generated character art leaves the text as it is. In this study, we propose a system that uses image processing to generate character art that reproduces a shape by distorting the text itself. We compared the character art generated with different numbers and types of characters and conducted a questionnaire survey as an evaluation. As a result, we confirmed that although the number of characters needs to be adjusted to improve the recognizability of the artwork, it is possible to generate character art in which both the shapes and the characters can be recognized to some extent.
    Download PDF (315K)
  • Misa SAKAMOTO, Kazuya UEKI
    Session ID: 22-04-40
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this study, we investigated and experimented with a method for converting illustrations drawn by users based on their own images into illustrations that can be used in presentation materials by correcting lines and coloring. We used Pix2Pix for image generation, and the free illustration site “Irastoya” (operated by Takashi Mifune) for training data. The results of the experiment were evaluated by a questionnaire survey of 17 male and female participants in their 20s.
    Download PDF (358K)
  • Hironobu Hosoya, Motonobu Kawashima, Daichi Hayakawa
    Session ID: 22-04-41
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, many characters in animated works and video game productions have a tuft of hair that springs up from their head, known as an ahoge. In animated works, the ahoge often moves in response to the characters' emotions and expresses them. In this study, we express emotions with an ahoge like those in animated works, in real time, assuming that the characters are playing video games.
    Download PDF (550K)
  • Tomoya SAKAMOTO, Motonobu KAWASHIMA, Daichi HAYAKAWA
    Session ID: 22-04-42
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, methods such as procedural modeling and generating models from satellite photographs have been used to create building models more efficiently. However, these methods do not take into account the tiled roofs that accompany the buildings, which makes them difficult to use in game content. In this study, we focus on the Irimoya roof, which is difficult to construct, and propose a method to procedurally construct an Irimoya roof suitable for game content by creating the decayed (“rotten”) appearance often seen in game content and converting the roof with that appearance into an optimized shape.
    Download PDF (727K)
  • Koyo NEMOTO, Motonobu KAWASHIMA, Daichi HAYAKAWA
    Session ID: 22-04-43
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In this research, we investigated and analyzed what gives clay animation its characteristic look by shooting stop-motion movies using actual clay. As a result, it turned out that this characteristic comes from the slight deformation of the subject that occurs during the animation work. We consider that reproducing this with CG technology can express the unique texture of clay-animation footage. We propose a technique that enables CG representations unique to clay animation by expressing the slight deformation of the subject in each frame using displacement maps.
    Download PDF (474K)
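The displacement-map idea reported above can be sketched as generating a smooth random displacement map per frame and offsetting mesh vertices along their normals by it, so every frame deforms slightly and differently. This is a simplified stand-in for the paper's workflow, with all names and parameters my own:

```python
import numpy as np

def clay_jitter_maps(n_frames, size=64, amplitude=0.02, smooth=4, seed=1):
    """Per-frame displacement maps: white noise blurred into low-frequency bumps,
    so each frame deforms the surface slightly and differently (clay-like wobble)."""
    rng = np.random.default_rng(seed)
    maps = []
    for _ in range(n_frames):
        noise = rng.normal(0.0, 1.0, (size, size))
        for _ in range(smooth):                      # cheap box blur for smoothness
            noise = (noise
                     + np.roll(noise, 1, 0) + np.roll(noise, -1, 0)
                     + np.roll(noise, 1, 1) + np.roll(noise, -1, 1)) / 5.0
        noise /= np.abs(noise).max() + 1e-9
        maps.append(amplitude * noise)
    return maps

def displace(vertices, normals, uvs, disp_map):
    """Offset each vertex along its normal by the map value sampled at its UV."""
    h, w = disp_map.shape
    u = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    v = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return vertices + normals * disp_map[v, u][:, None]

# Toy mesh data: a flat grid facing +z with planar UVs.
g = np.linspace(0.0, 1.0, 32)
uu, vv = np.meshgrid(g, g)
verts = np.column_stack([uu.ravel(), vv.ravel(), np.zeros(uu.size)])
norms = np.tile([0.0, 0.0, 1.0], (len(verts), 1))
uvs = verts[:, :2]

for i, dmap in enumerate(clay_jitter_maps(n_frames=3)):
    frame_verts = displace(verts, norms, uvs, dmap)  # slightly different every frame
    print("frame", i, "max offset:", np.abs(frame_verts[:, 2]).max())
```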
  • Koya SUDA, Motonobu KAWASHIMA, Daichi HAYAKAWA
    Session ID: 22-04-44
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, workflows utilizing photogrammetry have been used increasingly in the computer graphics industry. One de-lighting method used in this process removes lighting by inferring lighting information from the texture. While this method has advantages such as not requiring advance preparation, the inference can cause problems. This study therefore proposes a method that improves the de-lighting process while retaining the advantages of existing methods, by allowing designers to paint 3DCG objects in the lighting environment of the game, thereby solving the problems that arise in a de-lighting flow based on lighting estimation.
    Download PDF (849K)
  • Kexin DIAO, Masanobu ENDOH
    Session ID: 22-04-45
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    Players experience relationships with others through communication in games, which generates motivation and happiness. We developed a card game to explore how players' communication arises and changes. Analysis of test plays shows that communication occurred not only during the game but also before and after it, that communication increased when a player was at a disadvantage, and that players who wanted to win voluntarily increased their communication.
    Download PDF (489K)
  • Tsutomu Urai, Youngha Chang, Nobuhiko Mukai
    Session ID: 22-04-46
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    There are several studies that predict color appearance considering the influence of surrounding colors. To calculate the color-induction effect, it is considered important to select a color space suited to human perception; however, conventional computational models have not discussed color-space selection in detail. In this research, we investigated the perceptual uniformity of color spaces using several representative orthogonal opponent color spaces.
    Download PDF (434K)
  • Luwen SU, Misaki ABE, Taichi WATANABE
    Session ID: 22-04-47
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    We propose a one-way coupling framework between a water surface and a rigid body to simulate a phenomenon that does not exist in reality, in which an energy wave passes through the surface of the water and causes ripples. In this framework, the aim of this specific model, compared with general simulation methods, is to combine rendering techniques such as ray tracing and space clipping to decrease the computational load. We also offer user-defined parameters to model the interactions between the energy wave and the water surface.
    Download PDF (538K)
  • Yuusuke Kase, Hisakazu Hada
    Session ID: 22-04-48
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    The purpose of this study was to create a support tool that enables people with little confidence in their drawing ability to easily and beautifully create pancakes bearing character illustrations. First, a guide image was created by dividing the input image into five levels based on the brightness of its colors. The guide image was then displayed on the hot plate using a projector, and the user created the pancake by tracing the guide. As a result, it was possible to create a beautiful illustrated pancake in which the differences in color are expressed in five levels by the baked color of the pancake surface.
    Download PDF (691K)
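The guide-image step above, dividing the input illustration into five levels by brightness, can be sketched as follows (assuming Pillow and NumPy, with a synthetic gradient standing in for the illustration; projection onto the hot plate and the baking itself are outside the sketch):

```python
import numpy as np
from PIL import Image

def make_guide(img, levels=5):
    """Quantize an illustration into `levels` brightness bands; the guide is traced
    on the hot plate (darker bands are poured first so they bake longer)."""
    gray = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
    bands = np.minimum((gray * levels).astype(np.uint8), levels - 1)
    # Spread the band indices back over 0..255 so the projected guide is easy to see.
    return Image.fromarray((bands * (255 // (levels - 1))).astype(np.uint8))

# Stand-in for a character illustration: a simple radial gradient.
y, x = np.mgrid[0:256, 0:256]
gradient = np.clip(255 - 1.4 * np.hypot(x - 128, y - 128), 0, 255).astype(np.uint8)
guide = make_guide(Image.fromarray(gradient))
guide.save("guide_5_levels.png")   # this image would be projected onto the hot plate
```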
  • Yitong CHEN, Masaki ABE, Taichi WATANABE
    Session ID: 22-04-49
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In recent years, image processing and image generation using deep learning have developed further and been applied to various domains. For example, YOLO is good at image recognition and is often used for medical images, while GANs (Generative Adversarial Networks) can generate realistic images. We combine the two to achieve wall-texture generation.
    Download PDF (335K)
  • Xiang LI, Masaki ABE, Taichi WATANABE
    Session ID: 22-04-50
    Published: 2023
    Released on J-STAGE: January 31, 2024
    CONFERENCE PROCEEDINGS RESTRICTED ACCESS
    In FPS game matches, it is common for a cameraman to operate the in-game Observer system to provide a match screen to spectators. However, it is difficult for the human eye to keep up with the fast pace of the game and switch screens appropriately. This study aims to improve the spectators' viewing experience by proposing a method that detects a player before he or she takes a shooting action and automatically switches the camera.
    Download PDF (196K)