-
Amane Nakaike, Yuta Nakauchi, Asako Soga
Session ID: 23-04-001
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Costumes designated as cultural properties and owned by museums are usually archived as images. Converting such images to 3DCG allows them to be used more effectively as digital content. In this study, as an efficient way to create 3DCG from an archive of kosode and furisode costume images, we developed a 3D model template that can generate various 3D models of kosode and furisode simply by replacing texture images. This paper reports on the construction of the template and the 3D models of kosode and furisode that it generates.
-
Tomoya NAGAI, Kohe TOKOI
Session ID: 23-04-002
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we developed a method for generating lightweight, anime-style 3D models from point clouds acquired through photogrammetry, ensuring the models retain semantically significant shapes. Because photogrammetry often yields extensive point clouds, they require editing such as downsampling and noise reduction for practical use. Our method simplifies these point clouds and applies anime-style rendering, producing models that, while not replicating the detailed shapes of components, effectively reproduce the necessary forms through texture mapping, except when viewed from certain angles. The approach maintains semantically important shapes while keeping the models lightweight, making them well suited to VR and AR applications.
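The abstract above mentions downsampling extensive point clouds before further processing. As a rough illustration only (not the authors' implementation; the voxel size and data are our own placeholders), a common voxel-grid downsampling step looks like this:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Collapse all points that fall into the same voxel to their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                      # robust across NumPy versions
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, 3))
    for dim in range(3):                           # centroid per occupied voxel
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

# Two nearby points merge into one representative; a distant point survives.
pts = np.array([[0.01, 0.0, 0.0], [0.02, 0.0, 0.0], [5.0, 5.0, 5.0]])
print(voxel_downsample(pts, voxel=1.0))
```

Each occupied voxel contributes one representative point, which is what keeps the simplified model lightweight.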
-
Rikuta NOGUCHI, Tsukasa KIKUCHI
Session ID: 23-04-003
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper proposes a new method for generating the stone walls of Japanese castles, including the stacking method called "Sangi-Zumi". The stone walls of Japanese castles have unique, beautiful curves, and previous methods had difficulty generating accurate stone walls and the "Sangi-Zumi". In our method, we create generation rules based on "Goto-Ke-Bunsyo", an ancient document on stone-wall design. The curve of the wall's slope is subdivided into line segments, and each segment is extruded to generate the "Sangi-Zumi". The advantage of the method is that users can quickly generate a variety of CG stone-wall models from a few parameters.
-
Yuki AKABOSHI, Tsukasa KIKUCHI
Session ID: 23-04-004
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we propose a method to represent the backdraft observed in a compartment fire in 3DCG by employing the Euler equations, derived from the Navier-Stokes equations that govern fluid dynamics. When calculating the advection term, the BFECC method is applied instead of the commonly used semi-Lagrangian method, enabling a more realistic representation of backdrafts. The filmmaker can easily set the initial and calculation conditions, reducing the time and effort required to obtain a realistic backdraft. Compared with live-action filming, the method also offers lower production costs and allows post-production modifications and adjustments.
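The advection scheme named above can be sketched in one dimension. This is the generic textbook formulation of BFECC layered over first-order semi-Lagrangian advection, not the authors' solver; the grid, velocity, and time step are arbitrary:

```python
import numpy as np

def semi_lagrangian(phi, u, dt, dx, backward=False):
    """First-order semi-Lagrangian advection on a periodic 1D grid."""
    n = len(phi)
    x = np.arange(n) * dx
    sign = -1.0 if backward else 1.0
    src = (x - sign * u * dt) / dx            # departure points (grid units)
    i0 = np.floor(src).astype(int)
    frac = src - i0
    return (1 - frac) * phi[i0 % n] + frac * phi[(i0 + 1) % n]

def bfecc(phi, u, dt, dx):
    """Back and Forth Error Compensation and Correction."""
    fwd = semi_lagrangian(phi, u, dt, dx)
    back = semi_lagrangian(fwd, u, dt, dx, backward=True)
    # Compensate half the round-trip error, then advect the corrected field.
    return semi_lagrangian(phi + 0.5 * (phi - back), u, dt, dx)
```

Advecting forward, then backward, and correcting with half the round-trip error cancels much of the numerical diffusion of plain semi-Lagrangian advection, which is why it preserves sharper smoke features.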
-
Nobuhiro TAKASE, Tsukasa KIKUCHI
Session ID: 23-04-005
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper focuses on the representation of smoke, which is computationally expensive, and proposes a method for representing large-scale smoke scenes with a minimum of computation. Building on a previous study in which a creator placed multiple smoke cache data in 3D space to generate a large cumulus cloud, the region where cache data overlap is set as a new emitter and recalculated with the Euler equations, using the velocity field and density stored in the cache data. This enables smoke to be represented as mixing with other smoke while still using cache data. As a large-scale demonstration using cache data, we simulate an approaching sandstorm.
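The overlap region described above must be located before it can serve as a new emitter. As a minimal sketch under our own assumptions (cache volumes simplified to axis-aligned boxes; the paper's actual representation is not given here):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class AABB:
    lo: tuple          # (x, y, z) min corner of a cache volume
    hi: tuple          # (x, y, z) max corner

def overlap(a: AABB, b: AABB) -> Optional[AABB]:
    """Return the box where two cache volumes overlap, or None.

    In the scheme described above, such an overlap region would become
    a new emitter; here we only compute the region itself.
    """
    lo = tuple(max(p, q) for p, q in zip(a.lo, b.lo))
    hi = tuple(min(p, q) for p, q in zip(a.hi, b.hi))
    if any(l >= h for l, h in zip(lo, hi)):
        return None
    return AABB(lo, hi)
```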
-
Yuya MATSUMOTO, Issei FUJISHIRO
Session ID: 23-04-006
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
With advances in extended reality technologies, there is increasing demand for three-dimensional (3D) scene reconstruction from captured videos. With the advent of Neural Radiance Fields, research on 3D reconstruction has made rapid progress, to the point where 3D reconstruction can be achieved easily with only a smartphone. However, editing reconstructed 3D scenes or specific objects within them remains challenging and is still in its early stages. This study aims to semiautomatically edit 3D scenes reconstructed by Gaussian Splatting. We propose a method that estimates and extracts foreground objects automatically from user input and cuts them out of the scene. Applying the method to 3D scenes reconstructed from existing trained scenes and captured videos, we confirmed that it performs visually plausible cutouts of foreground objects.
-
Yuichi ITO
Session ID: 23-04-007
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
On television and other platforms, frame interpolation techniques are used to increase the frame rate of videos, making them appear smoother. This is especially effective for older videos or movies shot at 24 frames per second (fps). We discovered that applying this technology to very low frame rate video sources, around 1 fps, produces unique visual effects reminiscent of glitch art. Moreover, by carefully selecting the video or image sources to which frame interpolation is applied, the expression of the visuals can be controlled intentionally. This paper presents this novel glitch-art-like video expression method and considers its feasibility for other video creators.
-
Yufeng ZOU, Yasuo KUHARA
Session ID: 23-04-008
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Digital manga was born about 30 years ago. Today, manga is not only moving toward digitization but also integrating with other fields such as television, movies, and games, which has introduced more dynamic and interactive storytelling methods, such as interactivity and augmented reality. In this research, we use the Godot game engine to create manga works with game-like elements, allowing readers to develop the story in various ways.
-
Yoshiki SATO, Tsuyoshi TAKATANI
Session ID: 23-04-009
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Wood is an important material owing not only to its high functionality but also to its beauty. Replication of wood appearance has long been practiced, mainly targeting surface texture. There has, however, been less work on the anisotropic reflectance characteristics that give wood its glossiness. In this paper, we propose a method to fabricate anisotropic reflectance using a 2.5D printer. A 2.5D structure of stripes printed with clear ink generates reflectance that varies with the direction of light, with the printing parameters determined from a look-up table. An experiment with two kinds of wood demonstrates the feasibility of the proposed method.
-
-The Format of Technical Report (Subtitle)-
Funa Okuda, Tsukasa Kikuchi
Session ID: 23-04-010
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research proposes a method for the visual simulation of shaved ice, divided into two parts: the accumulation of shaved ice and the interference between shaved ice and syrup. The accumulation is simulated by randomly generating ice pieces with Perlin noise and plating them using Bullet physics calculations. For the interference between ice and syrup, both carry color information, and when syrup touches ice, the syrup's color is copied to the ice, reproducing the way syrup seeps in. In addition, by giving the ice volume and increasing its voxel size over time, we reproduced both the small, complex shapes of individual ice pieces and the way the pieces stick together into a single block as the ice melts.
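The syrup-to-ice color copy described in the abstract can be illustrated with a simple proximity rule. This is our own nearest-neighbour sketch on particle arrays, not the paper's implementation; the contact radius is a placeholder:

```python
import numpy as np

def transfer_color(ice_pos, ice_col, syr_pos, syr_col, radius):
    """Copy the colour of the nearest syrup particle onto every ice
    particle that a syrup particle touches (distance <= radius)."""
    for i, p in enumerate(ice_pos):
        d = np.linalg.norm(syr_pos - p, axis=1)   # distances to all syrup
        j = int(np.argmin(d))
        if d[j] <= radius:
            ice_col[i] = syr_col[j]               # "seepage": colour copied
    return ice_col
```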
-
-Fast Generation with Unity-
Motoya KAYAMORI, Takeshi YAMAGUCHI, Hiroshi YOSHIKAWA
Session ID: 23-04-011
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
A computer-generated hologram (CGH) is a hologram whose interference fringes are computed by a computer, and CG data must be converted into a point cloud for CGH computation. Our group previously proposed a point cloud generation method using photon maps (the PM method), which generates realistic point clouds from photons (light particles) but takes a long time. A faster method using Unity (U2Dat) was then proposed, enabling fast and realistic point cloud generation, but it could not represent the reflections and refractions that the PM method could. In this research, ray tracing is introduced into the Unity-based method, and points are moved by reflection and refraction to generate realistic point clouds quickly. The realism of the CG data is also improved by modifying the Unity rendering environment.
-
Nanami YASAKU, Masanobu ENDOH
Session ID: 23-04-012
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Digital games can use full-color images without limitations. The world of a game is created by the characters and items that appear in it, and color influences the perceived attributes of objects. In this study, we investigated the impressions colors make, based on the idea that color can assist in establishing a game's worldview. We found that primary colors emphasize function, warm colors are associated with temperature, and neutral colors are seen as formal. We conclude that color can be used to help create a worldview.
-
Naoki TANI
Session ID: 23-04-013
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, lookism (an ideology that values appearance alone and discriminates against people based on how they look) has become a familiar social problem, and it risks increasing the likelihood of developing body dysmorphic disorder (BDD) and other complications later in life. The most frequently cited background factor is that beauty and good looks strongly influence the formation of relationships in social groups (schools, workplaces, etc.): the more attractive a person appears to a third party, the more privileged he or she tends to be. However, modification of appearance through cosmetic surgery provides only temporary satisfaction; the patient begins to worry about other parts of the body, continuing the negative cycle and potentially increasing the risk of complications from BDD. Moreover, despite the abundance of content available through modern technology, there are few visual approaches for those who lack proper awareness of the illness and symptoms associated with BDD. As a new proposal, we therefore created videos aimed at correct disease awareness and education about BDD among young people, the core demographic of BDD patients, using content formats familiar in the modern age, with the aim of preventing and reducing the number of people trapped in excessive appearance-based thinking caused by lookism. Two types of promotional video were prepared, one live-action and one hand-drawn, and we investigate which is more advantageous as video content for today's young people. The broader aim is to make correct knowledge of the disease known not only to those who suffer from BDD but also to healthy people, contributing to prevention and to reducing the number of future patients, in line with Goal 10 of the SDGs, which seeks to reduce inequality within and among countries. As a first step, this paper investigates the relative merits of hand-drawn versus live-action video through an image survey using the semantic differential (SD) method, taking a modern approach to this long-standing and prevalent social problem. The results will not only raise awareness by identifying which types of video content are effective for the social problems of the younger generation, but also contribute to society by providing video content companies with applicable data to guide their production on sensitive topics.
-
-Sound-making System Triggered by Body Movements-
Kengo FUJITA, Kenichiro ITO
Session ID: 23-04-014
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Tone modulation on an electric guitar is achieved by operating switches on the guitar itself, using an amplifier, or chaining multiple guitar effects units (effectors). Each effector has its own modulation function, and the player adjusts its parameters with knobs and buttons to achieve the desired effect, then applies the effect to any part of a song while playing by switching the pedal on and off with a foot. However, foot-pedal operation limits the effects that can be obtained, and fine modulation requires advanced technique; for amateur players in particular, the operation itself is difficult. The purpose of this study is therefore to realize an intuitive, interactive tone-change system triggered by the guitarist's physical actions, whether conscious or unconscious. Although the prototype's musical expression has been evaluated to a certain degree, technical improvements to further increase the usefulness of the system remain future work.
-
Keisuke WATANABE, Hikaru YASUDA, Taisei INOUE, Norimasa YOSHIDA, Takaf ...
Session ID: 23-04-015
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we propose a drawing tool for 3D curves that takes curvature-monotonicity regions into account. For a selected control point, the tool visualizes the region within which the curvature varies monotonically, so users can easily maintain the monotonicity of the curvature. Examples of 3D curves created with the proposed tool are also shown.
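Curvature monotonicity of a parametric curve can be checked numerically. The following is our own illustration for cubic Bézier curves, using κ = |r′ × r″| / |r′|³ with finite differences; it is unrelated to the tool's actual region computation:

```python
import numpy as np

def cubic_bezier(P, t):
    """Evaluate a cubic Bezier with control points P (4 x 3) at parameters t."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def curvature(P, t):
    """kappa(t) = |r' x r''| / |r'|^3, via central finite differences."""
    h = 1e-4
    r0, r1, r2 = (cubic_bezier(P, t + d) for d in (-h, 0.0, h))
    d1 = (r2 - r0) / (2 * h)
    d2 = (r2 - 2 * r1 + r0) / h ** 2
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3

def is_curvature_monotone(P, samples=200, eps=1e-6):
    """True if sampled curvature is non-decreasing or non-increasing."""
    t = np.linspace(0.01, 0.99, samples)
    dk = np.diff(curvature(P, t))
    return bool(np.all(dk >= -eps) or np.all(dk <= eps))
```

A degenerate straight segment has constant (zero) curvature, while a symmetric arch peaks mid-curve and is therefore not monotone.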
-
Masataka Kondo, Tomokazu Ishikawa
Session ID: 23-04-016
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We conducted an experiment on using AR technology to reduce nervousness during interviews. Using an HMD, we implemented a system that overlays, on the interviewer's face, a texture that blocks his or her gaze as seen by the interviewee. We compared a texture switched on smile detection against a fixed texture. The results showed that hiding the interviewer's face reduced anxiety, but switching the texture by smile detection made no significant difference.
-
Daiki Fukushima, Tomokazu Ishikawa
Session ID: 23-04-017
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research proposes a method for reconstructing 3D models of animals with complex fur. Our method combines shape estimation using BANMo with fur parameter estimation using trained deep neural networks to achieve realistic, personalized pet models. Our results show that the method can generate natural-looking pet models.
-
Eimu SUGIYAMA, Tsukasa KIKUCHI
Session ID: 23-04-018
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, CG has been used increasingly in video production. Clouds are particularly necessary for outdoor scenery, but expressing them in CG requires a great deal of knowledge. In this study, we propose a method for intuitively generating clouds for people with no knowledge of computer graphics or clouds: the user sets only a few parameters to generate the clouds he or she wants, and the use of onomatopoeia as a parameter makes the settings intuitive. The method allows users to easily create complex cloud configurations. Future work includes adding more selectable cloud types and parameters, as well as the ability to specify colors.
-
Yoshifumi TADA, Tsukasa KIKUCHI
Session ID: 23-04-019
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, the number of anime productions using 3DCG has been increasing. Although 3DCG technology has developed over time, many problems remain in the 3D models and expressions used in anime, especially around the mouth: few established ways of expressing the mouth area exist, and no efficient production flow has been found. In this paper, we therefore propose and investigate a method of expressing the mouth area using CG.
-
Shengjie ZHANG, Masaki Abe, Taichi WATANABE
Session ID: 23-04-020
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we propose a method for automatically generating 3D maps suitable for applications
requiring small-scale and detailed environments, such as roguelike games, focusing on the selection
of points of interest (POI) that engage players from a level design perspective. Unlike traditional methods
that rely on manual labor, our approach uses procedural generation to efficiently create maps that offer a
unique experience each time.
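The procedural generation described above can be sketched at its simplest: carve non-overlapping rooms into a grid and place a point of interest in each. This is a generic illustration under our own assumptions (grid symbols, room sizes, and retry count are placeholders, not the authors' algorithm):

```python
import random

def generate_map(w: int, h: int, rooms: int, seed: int = 7):
    """Carve non-overlapping rectangular rooms into a wall grid ('#')
    and mark one cell per room as a point of interest ('!')."""
    rng = random.Random(seed)
    grid = [['#'] * w for _ in range(h)]
    placed = []
    for _ in range(200):                              # bounded retries
        if len(placed) == rooms:
            break
        rw, rh = rng.randint(3, 6), rng.randint(3, 6)
        x, y = rng.randint(1, w - rw - 1), rng.randint(1, h - rh - 1)
        if any(x < px + pw + 1 and px < x + rw + 1 and
               y < py + ph + 1 and py < y + rh + 1
               for px, py, pw, ph in placed):
            continue                                  # overlaps: try again
        for j in range(y, y + rh):
            for i in range(x, x + rw):
                grid[j][i] = '.'
        placed.append((x, y, rw, rh))
    for x, y, rw, rh in placed:                       # one POI per room
        grid[rng.randint(y, y + rh - 1)][rng.randint(x, x + rw - 1)] = '!'
    return grid
```

Varying the seed yields a different layout each run, which is the "unique experience each time" property the abstract refers to.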
-
Yuta Iwahara, Tsukasa Kikuchi
Session ID: 23-04-021
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper analyzes the influence of game package cover design on consumers' willingness to purchase games, based on their impressions of the visual information. The results showed that the characters on the cover of a game package have an initial effect on consumers' willingness to buy it.
-
Shouta YAMADA, Tsukasa KIKUCHI
Session ID: 23-04-022
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, as improved computer performance has increased the number of works created with 3DCG, more people are creating 3DCG works on their own. However, even though tools and software have made modeling easier, producing a model still takes substantial time and effort. Aiming to reduce the creator's time and labor, we therefore studied a method of supporting creation through procedural modeling, targeting stringed instruments of the lute family such as guitars and violins. Houdini was chosen as the development environment because its node-based workflow makes processing easy to visualize and models easy to re-edit.
-
Hiroki MITAMURA, Suguru MATSUYOSHI
Session ID: 23-04-023
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We target impolite comic skits on sexuality, appearance, and religion. We asked 10 subjects in the Kansai region and 20 in the Kanto region to watch them and answer questionnaires about which types of offensive humor are impolite. The experiments show that people in the Kansai region accept offensive humor more readily than those in the Kanto region, and that the former find bullying humor impolite while the latter find humor impolite when it prohibits the person concerned from doing something.
-
Shuichi FUJIWARA, Tsukasa KIKUCHI
Session ID: 23-04-024
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In response to the recent retro boom, we study the composition of product advertising poster designs from the 1970s and 1980s, taking into account the historical background of the time. We analyze the elements that evoke a sense of retro and attempt to extract those that play a role in the current fashion. The goal of this research is to provide an index for creating new retro-inspired designs.
-
Mitsuru WATANABE, Masaki ABE, Taichi WATANABE
Session ID: 23-04-025
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, a Turing pattern is used to mimic a falling droplet. In fluid dynamics, it is difficult to handle the splitting and merging of boundaries in multiphase flows on a Cartesian grid. We therefore apply a Turing pattern, which shares the Cartesian-grid discretization, to produce a falling-droplet model with a clear boundary between the liquid and gas regions. The method can maintain the droplet boundary without subdividing the grid.
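Turing patterns on a Cartesian grid are commonly produced with a reaction-diffusion system. As a stand-in for the authors' (unspecified) model, the classic Gray-Scott system illustrates the idea; all parameter values below are conventional demo values, not from the paper:

```python
import numpy as np

def gray_scott(n=64, steps=1000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    """Explicit-Euler Gray-Scott reaction-diffusion on a periodic n x n
    Cartesian grid; V develops a Turing pattern from a seeded blob."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    s = slice(n // 2 - 4, n // 2 + 4)
    U[s, s], V[s, s] = 0.5, 0.25                   # perturb the uniform state

    def lap(Z):                                    # 5-point periodic Laplacian
        return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
                + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

    for _ in range(steps):
        uvv = U * V * V
        U = U + Du * lap(U) - uvv + F * (1 - U)
        V = V + Dv * lap(V) + uvv - (F + k) * V
    return U, V
```

The pattern's sharp transition between high-V and low-V regions is the kind of clear two-phase boundary, maintained without grid subdivision, that the abstract exploits for the liquid-gas interface.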
-
Nagisa KATO
Session ID: 23-04-026
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Using rough sets, we propose motorcycle designs that match the image of works such as games and anime. We modeled a motorcycle divided into parts and combined the parts to create samples. The results of a questionnaire survey on the impressions the samples give were analyzed with rough sets, and based on that analysis we propose part combinations that produce a design matching a desired image. This research makes it possible to determine a motorcycle design that matches the image of the work one wants to create. It applies not only to works in which motorcycles are the main feature but also to works in which they appear only in the background or in a particular scene, where a motorcycle that fits the image helps create the work's atmosphere.
-
Yui HATTORI, Tsukasa KIKUCHI
Session ID: 23-04-027
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a method for creating aged textures for leather sofas using Substance 3D Painter. The goal was to create a texture for a leather sofa in new condition and to establish a way to express its deterioration by adding wrinkles, scratches, discoloration, and so on. Establishing this method makes it easy to reproduce each stage of a leather sofa's aging. Understanding how leather deteriorates over time and being able to reproduce it enables games and video works to be expressed more realistically. Additionally, since this research focuses on leather itself, it can be expected to apply not only to sofas but also to other leather products.
-
Chishun KANAYAMA
Session ID: 23-04-028
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a method for using a hand-held object as a handwriting input device by detecting the object with an RGB-D camera and estimating its position and pose. The target object is assumed to be a cylindrical writing instrument such as a pen or brush. The system detects a hand holding the object in images from the RGB-D camera, converts the surrounding area into a point cloud, and estimates position and pose from known information such as size and shape to measure the movement of the writing instrument.
-
Akihiro SUZUKI, Kenichiro ITO
Session ID: 23-04-029
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The objective of this research is to create “one-time music” using chance, through space and time interwoven
between people. It also aims to ask visitors about the difference between listening to “sound” and “music” using the system
created in this research. To achieve this goal, we built a music creation system using Unity and Arduino. The interface of the
system was designed to symbolize the sharing of space and time by visitors and was based on the image of a clock face. The
“movement” and “behavior” of the people in the “place” where the system was installed triggered the spinning of various
sounds. This is also an attempt to make “relationships between people” that we are not aware of in our daily lives apparent
through a series of sounds.
-
SVR-4 ver.2
Ayaka TAMURA, Masaya FURUKAWA, Sonam TSHEWANG, Ryu NAKAGAWA
Session ID: 23-04-030
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
"SVR-4 ver.2" is a new version of the Educational VR content "SVR-4" for rammed earth house builders in Bhutan.
The previous version, SVR-4, is a VR content that allows you to interactively observe the construction process of rammed earth
houses incorporating earthquake-resistant technology developed in an international joint project between Japan and Bhutan.
Currently, it is being used for the education of rammed earth house builders in Bhutan. In ver.2, we mainly improved the UI and
aimed to create an experience that allows users to understand the structure of the building more flexibly.
-
Koyuki Oshiro, Yuri Takiguchi, Yuki Hosoda, Mituki Yosida, Touma Nagas ...
Session ID: 23-04-031
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Creating mural art is often hindered by legal constraints and the risk of ink stains, so freely painting murals is not easy. To let more people enjoy mural art, we proposed a mural art system that leaves no stains on clothes or walls. We developed a program that renders ink strokes remotely from a brush-type device: the brush is equipped with an LED tape that emits red and blue light, TouchDesigner identifies the LED light and processes the image into a line to be drawn, and the resulting images are projected onto a wall with a projector to reproduce mural art. This system aims to increase public familiarity with and enjoyment of mural art.
-
Rin Okuyama, Hisakazu Hada
Session ID: 23-04-032
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research aims to simulate accumulating energy in the palm and firing it, as often depicted in manga and anime, using VR content with tactile stimulation. Focusing on the palm, the site where the energy appears and accumulates, we created a device and VR content that provide tactile stimulation. We then conducted experiments in which participants directly experienced this action, complemented by an evaluation questionnaire. The questionnaire results demonstrated the effectiveness of each stimulus presented by the device and validated our approach to simulating energy manipulation.
-
Ayato YAMADA, Junko SGIMORI, Toshiya ARAKAWA
Session ID: 23-04-033
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
While it is acknowledged that preparing for large-scale disasters requires community-based mutual aid efforts during normal times, modern society often exhibits weak connections among neighbors. Consequently, fostering mutual aid intentions during disasters requires strengthening relationships among residents in non-crisis periods. This study outlines the creation of a PC-based disaster prevention game titled "Fire Simulation: Cultivate Mutual Aid Skills" as a strategy to promote and enhance mutual aid skills, and explores the shifts in mutual aid awareness resulting from playing the game.
-
Noriki AMANO
Session ID: 23-04-034
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We are exploring the application areas and possibilities of live coding, which treats programming as a kind of performance art. As part of this, this study proposes live coding to control an acousmonium. The acousmonium is a performance tool equivalent to an instrument in acousmatic music, but it is a simple device that drives many loudspeakers with faders and is limited in what it can do; acousmatic music itself is a unique genre without a clear melody. In this research, we aim to control the acousmonium through musical live coding: replacing fader operations with programming, enabling more flexible and accurate control, and making the acousmonium function as a performance tool for melodic music beyond acousmatic music.
-
-A Co-Creative Space for Integrating Crafting Visual Spaces and Music Production-
Natsuki KAWAMOTO, Masamichi NISHIDA, Sasuke MIYATA, Aoi YAMADA, Ken IS ...
Session ID: 23-04-035
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Recent music composition tools make it easy to produce music even in a virtual space. However, existing virtual reality tools for music composition do not allow multiple users to make music together. This study attempts to solve that problem by integrating visual crafting and music production in the metaverse. The new metaverse, called MeloVerse, provides an enjoyable virtual space of music and visual art for music novices.
-
Naotsuna FUJIMORI, Ikuko SAWAYA, Momoko MAEZAWA, Kazuteru KOMINE, Keni ...
Session ID: 23-04-036
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
As part of our ongoing music visualization research, we have been studying designs for visually conveying the musical structure of a homophonic song consisting of a main melody and accompaniment. In this paper, we propose a method for generating images that depict the main melody and the accompaniment individually. To express the information contained in music visually and naturally, we drew on knowledge of cross-modal correspondence in the visualization design and produced visualization content by synthesizing it with a video of a pianist's performance.
-
YELIN HUANG, Ken’ichiro Ito
Session ID: 23-04-037
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, City Pop has garnered global attention for its distinctive musicality and powerful rhythms. The purpose of this study is to elucidate how City Pop diffuses across diverse cultures and how it affects music consumption behavior in the digital age. Focusing on China, the research analyzes the characteristics of the cultural diffusion of City Pop and explores its correlation with trends in digital music consumption. The study investigates the process by which music spreads across different cultures and how music consumption has evolved in the digital era, providing new insights into both.
View full abstract
-
Takehiro ISHINUKI, Issei FUJISHIRO
Session ID: 23-04-038
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Virtual live concerts, which have attracted much attention from potential audiences in recent years, tend to be less enthusiastic than real ones. In this study, we propose a method for synthesizing Non-Player Characters (NPCs) that can substitute for a real audience to achieve enthusiastic virtual live concerts. Taking psychological factors into account, we synthesize audience NPCs that cheer synchronously through psychological interaction with the surrounding audience, including the player.
View full abstract
-
Chika KADOKURA, Tsukasa KIKUCHI
Session ID: 23-04-039
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we investigate the factors that make the UI of smartphone e-commerce apps "easy to use" and propose guidelines for stress-free UI design in e-commerce apps. We created two types of UI and conducted usability tests. In the post-survey, we selected five evaluation axes and scored the survey results to determine which UI was superior. The scoring showed that UI (B) received a higher average score than UI (A) on four of the evaluation axes: likeability, usefulness, operability, and ease of viewing.
View full abstract
-
Kaho UKAI, Youngha CHANG, Nobuhiko MUKAI, Kojiro HIRANO, Kouzou MURAKA ...
Session ID: 23-04-040
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this research, a support system has been developed to diagnose whether an endoscopic image shows a "severely abnormal nasal cavity (Symptom+)" or a "not severely abnormal nasal cavity (Symptom-)". ResNet50 and VGG16 are employed as the deep learning models, and fine-tuning is performed on endoscopic images of the nasal cavity after pre-training on ImageNet. The average diagnostic accuracy under stratified 4-fold cross-validation was about 80%, while the recall for Symptom+ was about 60%. In the future, we plan to improve the recall for practical use.
View full abstract
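The evaluation protocol named in the abstract above — stratified 4-fold cross-validation with per-class recall for the Symptom+ label — can be sketched as follows. This is a minimal illustration only: the model training is stubbed out, and the function names and toy labels are invented for the sketch, not taken from the paper's system.

```python
# Hedged sketch: stratified k-fold splitting (class ratios preserved per
# fold) and per-class recall, the two evaluation ingredients the abstract
# describes. Real use would train ResNet50/VGG16 inside the loop.
from collections import defaultdict

def stratified_kfold(labels, k=4):
    """Yield (train_idx, test_idx) pairs with class ratios preserved per fold."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        # Deal each class's indices round-robin so every fold gets its share.
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test

def recall(y_true, y_pred, positive):
    """Recall for one class: true positives / actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    pos = sum(1 for t in y_true if t == positive)
    return tp / pos if pos else 0.0
```

Stratification matters here because Symptom+ cases are the minority class; a plain random split could leave a fold with almost no positive examples, making the reported recall unstable.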
-
Mana NISHITANI, Issei FUJISHIRO
Session ID: 23-04-041
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Animation of characters' clothing is in high demand in a variety of digital content. In particular, a cape flapping in the wind is often an important element of a dynamic scene. However, it is not easy to create 3DCG animations of capes exactly as artists intend. We therefore propose a system that creates an animation of a cape waving in the wind from a small number of parameters. The system uses Laguerre Voronoi decomposition to control the undulating surface mesh of the cape.
View full abstract
-
Weiyi XIA, Masahiro FUCHIUE, Suguru SAITO
Session ID: 23-04-042
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we present a web system that accumulates the data exchanged between Japanese animation production processes and facilitates trial and error during production. The system provides an interface resembling the timesheet used in the traditional production process, and offers a function for judging, through video comparison, whether changes in timing or replacements of pictures are improvements. This paper reports on the features of the system and the comments collected from animation production employees regarding it.
View full abstract
-
Rena TOMIZAWA, Tomokazu ISHIKAWA
Session ID: 23-04-043
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We investigated how changing the hitstop duration according to the gaze duration affects the user's response. A preliminary experiment on hitstop duration was conducted to define the boundary between comfort and discomfort. The experiment suggested that a hitstop duration matched to the gaze duration may improve GameFeel.
View full abstract
-
Shogo NISHIDA, Kei KANARI, Mie SATO
Session ID: 23-04-044
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In the field of education, teaching materials using virtual reality (VR) technology have been developed. Although there are studies on fatigue from video viewing in VR, the factors that cause fatigue from reading in VR have not been clarified. In this study, we focused on the emergence of fatigue to clarify the characteristics of reading in VR and investigated the factors that cause it. In the experiment, we asked subjects to read in a VR space, taking a break every time they felt fatigued. We analyzed the subjective evaluations of fatigue and the elapsed time between the start of reading and the first break. As a result, "eye fatigue" and "stiff shoulders" were the main fatigue factors, and the time until subjects felt fatigue differed depending on the factor.
View full abstract
-
Joichiro MURAOKA, Yusei MURAISHI, Kosei TOMIOKA, Mie SATO, Naoki HASHI ...
Session ID: 23-04-045
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The Proteus effect, in which the appearance of an avatar in virtual reality influences the user's behavior and self-expression, is well known. Previous research has suggested a relation between weight perception and avatar impression, but it has not been fully investigated whether the body and its motion are affected when a weight illusion is caused by changes in avatar impression. In this study, we examined the relations among the strength of the avatar's impression, the degree of weight illusion, the velocity of motion, and the force output by having subjects compare the weights of dumbbells while using avatars with different impressions. As a result, we confirmed that subjects tended to lift the dumbbell more quickly and to output more force in the biceps brachii when they felt that the avatar was stronger, under the weight illusion caused by the changed impression of their own body.
View full abstract
-
Joshua Tanaka, Hiroyuki Kose, Takashi Mihara, Hapnes Toba, Terutoshi T ...
Session ID: 23-04-046
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, content called interactive video has emerged, and AR technology has come into use in many fields. The purpose of this project is to present content that helps people understand and consider information in the real world more accurately by developing interactive video that uses AR technology (AR interactive video). In this research, AR interactive videos useful in the field of education were created and evaluated through a questionnaire survey.
View full abstract
-
Mizuki SONOBE, Naoki HASHIMOTO
Session ID: 23-04-047
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
When employing spatial coding methods for wide-area projection in indoor spaces, the challenge lies in the reduction of measurement accuracy due to reflected light. Indirect reflections from walls and floors, as well as specular reflections, decrease measurement accuracy, necessitating either restricting the projection areas or covering them with nonreflective materials. This study therefore aims to improve the accuracy of spatial coding over wide areas in indoor spaces by combining polarized light analysis using linear polarizers with the segmented projection of Gray Code patterns.
View full abstract
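The Gray Code patterns named in the abstract above follow a standard construction that can be sketched briefly. This is a generic illustration of binary-reflected Gray coding, not the paper's specific pattern set; the helper names are invented for the sketch.

```python
# Hedged sketch of Gray-code pattern indexing for structured light. Each
# projector column index is encoded as a bit sequence across successive
# stripe patterns; adjacent columns differ in exactly one bit, which
# limits decoding errors at stripe boundaries.
def gray_encode(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code back to the original integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def pattern_bits(column, num_patterns):
    """Bits (MSB first) that the given column shows across the patterns."""
    g = gray_encode(column)
    return [(g >> (num_patterns - 1 - i)) & 1 for i in range(num_patterns)]
```

The one-bit-difference property is why Gray codes, rather than plain binary stripes, are the usual choice: a camera pixel that straddles a stripe edge can be off by at most one column after decoding.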
-
Motohide AOKI, Naoki HASHIMOTO
Session ID: 23-04-048
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this paper, we present a system designed to propose optimal projector configurations for image projection in wide-area spaces. The system identifies effective arrangements by considering quality-related factors such as projection resolution, focus blur, and shadow areas, utilizing multi-objective optimization techniques. Additionally, the system determines the best placement by taking into account subjective impressions through VR-based simulations. The objective is to achieve high-quality projector placement while minimizing the installation effort.
View full abstract
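The quality-driven placement search described in the abstract above can be illustrated with a heavily simplified sketch. The paper uses multi-objective optimization, which in general yields a Pareto front of trade-offs; the weighted-sum scalarization, the weights, and the function names below are all invented simplifications for illustration.

```python
# Hedged sketch: scoring candidate projector placements on the three
# quality factors the abstract names (resolution, focus blur, shadow
# area), collapsed here into a single weighted score for simplicity.
def placement_score(resolution, blur, shadow, weights=(1.0, 1.0, 1.0)):
    """Higher is better: reward resolution, penalize blur and shadow."""
    wr, wb, ws = weights
    return wr * resolution - wb * blur - ws * shadow

def best_placement(candidates):
    """Pick the candidate (name, resolution, blur, shadow) with the top score."""
    return max(candidates, key=lambda c: placement_score(*c[1:]))
```

A real multi-objective optimizer would instead keep every non-dominated placement and let the VR-based subjective evaluation choose among them, rather than committing to fixed weights up front.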
-
Akihisa Okuzumi, Haruto Nakayama, Koki Nogami, Hisakazu Hada
Session ID: 23-04-049
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we investigate the perceptual and experiential shifts that occur when users switch between first-person and third-person perspectives, and the inherent differences between these two viewpoints. Our experiment involved using a camera positioned behind the user and a button to alternate between a first-person view in a standard virtual reality (VR) space, facilitated by a head-mounted display (HMD), and a third-person view utilizing the camera's feed. This investigation focused on how such perspective shifts affect user perception. The findings revealed that the first-person perspective was easier to use, while the third-person perspective often induced feelings of sickness. However, no significant difference was noted in the sense of immersion between the two viewpoints.
View full abstract
-
Shuntaro HISAMATSU, Hisakazu HADA
Session ID: 23-04-050
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This study presents a method of kana character input using a scroll wheel, aiming to achieve a small control unit, one-handed operation, and simplicity of operation all at once. The proposed method relies exclusively on the mouse's scroll wheel: rotating the wheel selects an input character, and pressing it confirms the selection.
View full abstract
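The rotate-to-select, press-to-confirm mechanism described in the abstract above can be sketched in a few lines. The class design and the kana subset are invented for the sketch and are not the paper's implementation.

```python
# Hedged sketch of scroll-wheel character input: rotation moves a wrapping
# cursor through a character list, pressing confirms the current character.
class WheelSelector:
    def __init__(self, characters):
        self.characters = list(characters)
        self.cursor = 0
        self.confirmed = []

    def rotate(self, steps):
        """Move the cursor by steps (positive or negative), wrapping around."""
        self.cursor = (self.cursor + steps) % len(self.characters)

    def press(self):
        """Confirm the character under the cursor and append it to the input."""
        ch = self.characters[self.cursor]
        self.confirmed.append(ch)
        return ch
```

Because selection and confirmation both map to wheel actions, the whole input loop needs only one finger on one control, which is the combination of compactness and one-handed operation the study targets.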