-
Honoka AKAMATSU, Hirokazu YASUHARA, Koji MIKAMI
Session ID: 23-04-051
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, many virtual reality (VR) applications have been developed. Among them, hand tracking now makes it possible to interact with virtual environments using one's own hands. However, no standard method of moving through VR with hand tracking has been established, and many studies have proposed various alternatives. We therefore focused on the smartphone pinch gesture as a candidate locomotion method, reasoning that a gesture resembling the familiar smartphone zoom operation would enable intuitive control. In this study, we implemented a pinch-based locomotion method in VR and compared it with other locomotion methods. Questionnaire responses rated pinch gestures as more natural and intuitive, although the measured data showed no significant difference.
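As a rough illustration of the pinch-to-locomotion idea, the distance between two fingertips can be mapped to a movement speed. The mapping, baseline distance, and gain below are hypothetical illustrations, not the authors' implementation:

```python
import math

def pinch_speed(thumb, index, baseline=0.05, gain=2.0):
    """Map the thumb-index fingertip distance to a forward movement speed,
    by analogy with a smartphone pinch-zoom gesture. Positions are 3-D
    points in metres; baseline and gain are illustrative assumptions."""
    d = math.dist(thumb, index)
    # Spreading the fingers beyond the baseline moves forward,
    # pinching below it moves backward.
    return gain * (d - baseline)

# Fingers 9 cm apart -> positive (forward) speed
print(round(pinch_speed((0, 0, 0), (0.09, 0, 0)), 3))
```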
-
Shion NAKAGAWA, Kai LENZ, Yoshihisa KANEMATSU, Koji MIKAMI, Suguru MAT ...
Session ID: 23-04-052
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
When multiple people create a scenario together, problems can arise from missing information or from each member interpreting the characters differently. To suppress such inconsistencies in character behavior over time, we created Time Series Character Transition Diagrams for Scenarios. Showing each character's purpose, motivation, and relationships as a time series during plot creation strengthens the team's shared understanding and reduces discrepancies in the characters' words and actions. An evaluation experiment confirmed that the diagrams improved understanding of the characters.
-
Keita KOJIMA, Yoshihisa KANEMATSU, Suguru MATSUYOSHI, Koji MIKAMI
Session ID: 23-04-053
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, olfactory displays that can switch between smells have attracted attention among VR technologies, and these are used for purposes such as odor reproduction and dramatic effects. In this study, we developed a device that provides olfactory stimuli in conjunction with VR contents, and developed a system that presents smells to players as information about the approach of enemies. Comparative experiments with sound, which is usually used for guidance, showed that players can be guided to some extent by smell guidance. We also confirmed that the system can increase the sense of tension in the same way as sound.
-
Kousei MOCHIZUKI, Yoshihisa KANEMATSU, Suguru MATSUYOSHI, Koji MIKAMI
Session ID: 23-04-054
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Character dialogue should reflect the character's personality and setting, but it is unclear whether it is communicated to the viewer as intended. In this study, we propose a method that uses sentiment analysis to visualize viewers' impressions of character dialogue as a bar graph with percentages. We created a tool to determine whether the proposed method properly conveys the writer's intended impression of the character. In our experiment, the tool increased writers' confidence in their own dialogue.
-
Keiya SHIMAKAGE, Hirokazu YASUHARA, Koji MIKAMI
Session ID: 23-04-055
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Interactive character content that uses text-generation AI recognizes the user's utterance and only then generates a response, resulting in unnatural pauses. One countermeasure is to insert a backchannel or filler immediately after the user finishes speaking, but heavy use of generic fillers conversely undermines the character's humanity. We therefore proposed a method that makes the filler conform to the content of the user's utterance, and experiments showed that it had a positive impact on the interaction experience.
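The content-matched filler idea can be sketched as a lookup keyed on the user's utterance. The keyword rules and filler phrases below are illustrative assumptions, not the authors' actual method:

```python
# Choose a filler (backchannel) that matches the user's utterance instead of
# a one-size-fits-all "uh-huh", bridging the pause while the full AI reply
# is generated. The table and rules are hypothetical stand-ins.
FILLERS = {
    "question": "Hmm, let me think...",
    "good_news": "Oh, that's great!",
    "bad_news": "Oh no, I'm sorry to hear that...",
}

def choose_filler(utterance: str) -> str:
    text = utterance.lower()
    if "?" in text or text.startswith(("why", "how", "what")):
        return FILLERS["question"]
    if any(w in text for w in ("passed", "won", "great")):
        return FILLERS["good_news"]
    if any(w in text for w in ("failed", "lost", "sick")):
        return FILLERS["bad_news"]
    return "I see..."  # generic fallback

print(choose_filler("How do I get to the station?"))
```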
-
Shinji SUTO, Hirokazu YASUHARA, Koji MIKAMI
Session ID: 23-04-056
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The world currently faces a serious waste problem, and Japan is approaching the limits of its waste landfills. Sustainable environmental protection requires reducing waste and recycling more aggressively. We therefore focused on plastic bottles, a frequent source of recyclable trash, developed a sorting device that makes separating caps from bottles enjoyable and doubles as a work of art, and verified its usefulness through experiments.
-
Yudai HONDA, Hikaru ODA, Kanon TASHIRO, Asuka CHIBA, Ken ISHIBASHI
Session ID: 23-04-057
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research aims to construct an interactive learning system that helps users understand chemical reactions using the robot toy toio together with visual effects. Users arrange toio blocks to build chemical reaction equations, and visual effects based on the reaction products are projected onto the table around the blocks. We hope to provide a novel learning environment that makes chemistry easy to understand and enjoyable, encouraging engagement with chemical reactions.
-
Hiromi KARASAWA, Yoshihisa KANEMATSU, Suguru MATSUYOSHI, Koji MIKAMI
Session ID: 23-04-058
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Recent 3D character models include detailed effects such as cloth simulation and fluttering hair. However, they lack movements such as the swaying of flesh and shape changes such as the deformation of arms and legs compressed by socks. In this study, we propose a modeling method that reproduces the differing softness of muscle, fat, and ligament, making it easier to reproduce the way clothing digs into and compresses flesh as drawn in illustrations. To confirm whether our model can reproduce this effect, we conducted image similarity measurements. The experimental results show that our model reproduces the dig-in of clothing at the thigh more easily than conventional methods.
-
-a Case of Pop Music-
Miki HASHIMOTO, Akinori ITO
Session ID: 23-04-059
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we name the perception of moving sound images "Acoustic Image Position Movement Perception" and aim to clarify the conditions under which a sound image is perceived as moving along a longitudinal angle. The experiment used a short song phrase arranged as a band ensemble: the guitar part served as the direct sound, and the same phrase with added delay served as the reverberant sound. These sources were placed in a spatial audio system and listeners' perceptions were studied. The results suggest that setting the D/R ratio to 0 dB can induce the perception of the sound image moving away from the listener, whereas a non-zero D/R ratio tends to induce perceived movement toward the listener. When the D/R ratio was set to -6 dB, the perceived movement could be induced in a direction other than the one from which the indirect sound originally arrived.
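The D/R (direct-to-reverberant) ratio manipulated in the experiment can be expressed in decibels from the levels of the two components; a minimal sketch, assuming RMS amplitudes:

```python
import math

def dr_ratio_db(direct_rms: float, reverb_rms: float) -> float:
    """Direct-to-reverberant (D/R) ratio in decibels from RMS amplitudes.
    0 dB means the direct and reverberant components are equally strong."""
    return 20.0 * math.log10(direct_rms / reverb_rms)

print(dr_ratio_db(1.0, 1.0))   # equal levels -> 0.0 dB
print(dr_ratio_db(0.5, 1.0))   # direct half the reverb level -> about -6 dB
```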
-
Shuhei Kizaki, Akinori Ito
Session ID: 23-04-060
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The purpose of this study was to explore auditory presentation of choices and interactions for measuring scene understanding in an audio drama. We developed an audio drama game in Unity as the stimulus. For comparison, it includes two story lines, one with and one without sounds prompting the player to make a decision, and we tested whether the choices made differed between the two. Producing an audio drama calls for richer, more amplified expression, and confirming the level of understanding will require deeper study of user interface sounds.
-
Issei WADA, Akihiro MATSUURA
Session ID: 23-04-061
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we propose a VR duel system that presents the sensation of one's gun being struck by a bullet, as depicted in Western movies. The system consists of a gun device, an air compressor, an HMD, and other components. The gun device presents the impact of a bullet using a solenoid, and also presents the sensation of being hit using compressed air delivered at the tip of the gun device.
-
Kaito SHIBATA, Akihiro MATSUURA
Session ID: 23-04-062
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, aiming at creating content to pull objects out of a viscous fluid, we conducted preliminary
experiments to determine the ease of pulling out the object depending on the shape of the object and the presence of vibration.
A type of clay called bentonite is used for the viscous fluid. The objects to be pulled out are four types of cylinders with
bottoms of a square, a circle, an uneven shape, and a thin disk. Furthermore, we also examined how a vibration device inside
each solid affects the ease of pulling out. We measured the maximum load when pulling solids out and compared the results.
-
Asahi Murakami, Yoshihisa Kanematsu, Suguru Matsuyoshi, Koji Mikami
Session ID: 23-04-063
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we propose pre-aim accuracy as a new index for determining player skill in tactical shooting games. To verify whether this index can actually discriminate skill, we created an FPS game that measures pre-aim accuracy and conducted an experiment. Twenty men and women in their 20s participated; based on questionnaire self-reports, they were assigned to skill-level groups, which were then compared. The results showed significant differences in overall in-game accuracy and in accuracy against close-range enemies.
-
Kai LENZ, Iori NAKAMOTO, Daiki YAMAZUMI
Session ID: 23-04-064
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In movies, anime, special-effects works, and other works featuring multiple characters, relationship visualization shows how the characters are related to each other. The entanglement diagram is obtained by counting the scenes in which characters meet or accompany one another and computing a layout with a dynamic model that attaches a spring between each pair of characters, weighted by that count. Some scenes, however, make it difficult to decide whether characters are truly accompanying each other. Among these, in this study we analyzed and summarized the cases in which characters transform or disguise themselves.
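The spring-attachment step of the dynamic model can be sketched with Hooke's law, with the spring stiffness scaled by the number of shared scenes; the constants below are illustrative assumptions, not the paper's parameters:

```python
def spring_force(pos_a, pos_b, shared_scenes, k=0.1, rest=1.0):
    """1-D Hooke's-law attraction between two characters: stiffness grows
    with the number of scenes they share, so frequently co-occurring
    characters are pulled closer together in the layout."""
    stiffness = k * shared_scenes
    displacement = (pos_b - pos_a) - rest
    return stiffness * displacement  # force on character A toward B

# Two characters 3 units apart who share 5 scenes are pulled together:
print(spring_force(0.0, 3.0, shared_scenes=5))
```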
-
Kai LENZ, Reina TAKEHARA, Hako TONE, Souken NAGASAWA
Session ID: 23-04-065
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In movies, anime, special-effects works, and other works featuring multiple characters, relationship visualization shows how the characters are related to each other. The entanglement diagram is obtained by counting the scenes in which characters meet or accompany one another and computing a layout with a dynamic model that attaches a spring between each pair of characters, weighted by that count. Some scenes, however, make it difficult to decide whether characters are truly accompanying each other. Among these, in this study we analyzed and summarized several types that are difficult to classify.
-
Kunio SAKAMOTO, Towa MAEDA
Session ID: 23-04-066
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Window signage is a cost-effective way to attract attention from the street. Suppose something is drawn on a glass window viewed from both sides: observers on one side see mirrored text and graphics, making the drawing hard to read. Anything drawn on a transparent material inevitably appears mirrored from one viewing direction. The authors have been researching a one-way observable window display system that makes images invisible from the back side. In this paper, we propose a window signage display that avoids the mirrored-image problem using the interference colors of transparent optical films.
-
Kosei ENOKI, Tomokazu ISHIKAWA
Session ID: 23-04-067
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The study explores communication and anxiety, particularly in the context of web conferencing amid the COVID-19 pandemic. It investigates the impact of real-time facial expression changes on communication dynamics. Using facial recognition technology, the system alters facial expressions, aiming to improve emotional states during interaction. The experiment involved subjects interacting while observing their modified facial expressions. Results indicate no significant increase in positive emotions, except for "confident/positive," but notable reductions in negative emotions. Thus, while facial deformation may not substantially boost positive emotions, it shows promise in alleviating negative ones, offering insights into enhancing online communication experiences.
-
Yohei Harada, Tomokazu Ishikawa
Session ID: 23-04-068
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This study investigated the influence of display size on memory tasks. Nowadays, various electronic devices with displays of different sizes are used for learning and work, and people view, memorize, and process the information shown on them. We noticed that our ability to remember seemed to vary from device to device and hypothesized that the difference was due to display size. We therefore placed five displays of different sizes at distances that unified the viewing angle and conducted comparative experiments, evaluated with questionnaires, using two memory tasks created with reference to memory sports.
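Unifying the viewing angle means placing each display at a distance proportional to its size; a small sketch of that geometry (the sizes and angle below are examples, not the study's conditions):

```python
import math

def viewing_distance(display_width_cm: float, angle_deg: float) -> float:
    """Distance at which a display of the given width subtends the given
    horizontal viewing angle: d = w / (2 * tan(angle / 2))."""
    return display_width_cm / (2.0 * math.tan(math.radians(angle_deg) / 2.0))

# A 30 cm and a 60 cm display at the same 30-degree viewing angle must
# sit at distances in a 1:2 ratio.
print(round(viewing_distance(30, 30), 1))
print(round(viewing_distance(60, 30), 1))
```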
-
Satoshi Shimomura, Makoto J. Hirayama
Session ID: 23-04-069
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
The information that college students had obtained separately, such as class timetables and bus schedules, was
combined into a single chatbot. This makes it easier and quicker to obtain the information needed for college life. The created
chatbot was used by students and a questionnaire was administered to confirm that it improved their college life.
-
Yoshihiro Suzuki, Tomokazu Ishikawa
Session ID: 23-04-070
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research aims to develop an application that automatically adds specific sound effects (SEs) to specific movements in a video. For example, we consider adding an SE that sounds like a bone cracking when the head is tilted. To determine how the head tilts, the coordinates of facial feature points are acquired for each frame, and the time at which to apply the SE is calculated from the angular velocity derived from those feature points. The results show that the proposed method causes no discomfort compared with the SE timing of currently available videos.
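The timing step can be sketched by estimating head roll from two eye landmarks and differencing the angle across frames; the landmark layout, frame rate, and trigger logic below are assumptions, not the paper's implementation:

```python
import math

def roll_angle(left_eye, right_eye):
    """Head roll angle in degrees from the line joining the two eyes,
    given 2-D landmark coordinates in pixels."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def angular_velocity(angles_deg, fps=30.0):
    """Per-frame angular velocity (deg/s) from a sequence of roll angles;
    a spike here marks the moment to apply the sound effect."""
    return [(b - a) * fps for a, b in zip(angles_deg, angles_deg[1:])]

# Eyes level in frame 1, right eye 10 px lower in frame 2 (100 px apart):
a0 = roll_angle((0, 0), (100, 0))
a1 = roll_angle((0, 0), (100, 10))
print(angular_velocity([a0, a1]))
```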
-
Shohsuke Iida, Tomokazu Ishikawa
Session ID: 23-04-071
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, a wide variety of VR content has been provided, and devices that engage the sense of touch are attracting attention. In this study, we investigated the sensory experience of "temperature," "smoothness," and "hardness" in virtual space. Unlike previous studies that provide haptic feedback with a glove-type device, we conducted a tactile experiment without any device, inducing a tactile illusion from visual information via the virtual hand illusion. The results show that visual information can create tactile illusions of "temperature" and "smoothness."
-
Kazunari MATSUMOTO, Tomokazu ISHIKAWA
Session ID: 23-04-072
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We propose a method for generating 3DCG city models without knowledge of CG modeling or image processing. The method automatically generates a 3DCG city model from an input map image and building models: the map image is converted to a contour image using Pix2Pix, and block positions are read by corner detection.
-
Ziyan LING, Takanori NAGAE
Session ID: 23-04-073
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
At the current stage, where GPU development is not yet fully mature, we investigate how to use ChatGPT in game projects so that players can experience new gameplay while avoiding frustration caused by ChatGPT response times.
-
Hajime Kaneda, Hidekazu Tsujiai
Session ID: 23-04-074
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This work is an animation piece that takes the flow of time as its theme. A participant can change the image and sound through the computer. A hypothetical CG virtual human appears in the opening scene and moves so as to influence the participant. The participant's operations on this character determine the outcome: the background image and the character change over time, aging or growing younger. We would like participants to feel that they are sensing and manipulating "time" itself.
-
Karen TSURUME, Noriki AMANO
Session ID: 23-04-075
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this research, we propose multi-colored switching animation using a stage light and realize a memory game based on it. Specifically, we use a stage light to illuminate illustrations drawn in multiple colors and switch the light colors to realize the animation. Furthermore, we propose a unique memory game that prepares multiple illustrations based on our method; the game is played by finding illustrations with the same movements. Our method extends the expressive power and interactivity of conventional switching animation and enables a novel and unique memory game.
-
-Sound installation listening to inaudible environmental sounds-
Katsuya Endoh
Session ID: 23-04-076
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
We created a sound installation work, "RadioScape", which allows visitors to listen through a radio to environmental sounds that are normally inaudible, expressing a non-anthropocentric soundscape. This paper describes the background of the work, related works, implementation methods, and future issues and developments.
Keywords: Soundscape, radio art, sound installation
-
Yudong ZHANG, Takanori NAGAE
Session ID: 23-04-077
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This work is a media art piece centered on stylized materials. With the advancement of AI painting technology, texture creation has seen a surge in possibilities and efficiency, and UV projection from the camera in 3D software opens up various methods for stylization. The process begins with modeling in Blender, where information such as depth and wireframes is extracted from objects in the scene. The ControlNet model for AI painting is then used to generate corresponding textures. Camera-projected UV mapping is carried out in Blender's Eevee renderer, followed by adjustments and hand-drawn details in Photoshop. The modified textures are connected as base colors to a cartoon shader, resulting in a celluloid (cel) style material.
-
-Proposal for information presentation method using strings and twists-
Kaito Shimizu, Toshitaka Amaoka
Session ID: 23-04-078
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Many display works have been created using real-world materials, presenting images by changing shading and the area visible to the viewer; some of these works use twisting movements of the materials. In this study, we propose a display that uses two bands and twisting by a gear mechanism. By twisting and crossing the two parallel rubber strings with gears, their front and back sides are reversed and the color of the back side is displayed. We propose a method of displaying information by using each pair of strings, reversed or parallel, as a single pixel.
-
Haruka KUMADA, Miu KANEMATSU, Kaito SHIMIZU, Kouta KIKUCHI, Toshitaka ...
Session ID: 23-04-079
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research combines sunlight and information technology to propose an information presentation method and to create an artwork. For the presentation method, we propose a display based on the luminance of reflected light, exploiting the property that luminance increases where light overlaps; we prototyped a display of 5 pixels with 4 shades of gray. For the artwork, we created a sunlight piece that expresses information with colors obtained by splitting sunlight into its spectrum, using the small robot "toio". By moving and tilting dichroic film under toio control, the viewer can appreciate the color, movement, and mixing of variously colored light rectangles generated from sunlight.
-
-Creating windows that incorporate natural fluctuations using dichroic panels -
Miu KANEMATSU, Toshitaka AMAOKA
Session ID: 23-04-080
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This piece extends the functionality of a window to bring the movement of wind into the interior space as light. An anemometer installed outside measures the wind speed, and panels resembling window glass tilt according to the measurement. Dichroic film applied to the panels colors the sunlight transmitted into the room. By rendering the movement of wind as indoor light, the work incorporates natural fluctuations into the interior environment.
-
Pu Zheng Wang, Takanori NAGAE
Session ID: 23-04-081
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This research refers to the twelve principles of animation set down by animators of the historical Disney studio, as detailed in 'The Illusion of Life'. Using motion capture apparatus that employs inertial sensors to record human movement, we acquire motion capture data and explore methods to refine it. Notably, some of Disney's twelve principles are beneficial for this refinement while others are not pertinent. The ultimate objective is to achieve more authentic CG animation through the judicious use of economical motion capture techniques.
-
Mai YAMAGUCHI, Haruka KURIBARA, Tetsuya MATUMURA
Session ID: 23-04-082
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, demand for low-latency transmission has been increasing in systems such as autonomous vehicles and telemedicine. In response, an ultra-low-latency video coding scheme that can be controlled on the order of μs has been proposed for edge image-sensing devices. This scheme suffers from low coding efficiency due to its line-by-line processing. A previous study proposed intra prediction using deep learning to address this problem, but failed to achieve sufficient improvement in coding efficiency. In this paper, we propose and verify a method using ConvLSTM1D to further improve the coding efficiency of deep-learning-based intra prediction.
-
Tomoyuki NISHITA, Yuna NAKAMURA
Session ID: 23-04-083
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
-
Yuma AOKI, Kohei TOKOI
Session ID: 23-04-084
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Physics-based fracture simulation can generate realistic fracture patterns, but it is difficult to use in applications that require fast processing, such as games. To address this, a method has been proposed that uses a conditional generative adversarial network (cGAN) to predict fracture shapes, thereby speeding up processing. However, previous research could not account for differences in physical quantities such as the velocity and mass of colliding objects. To solve this problem, this paper proposes inputting, as the condition, a force condition map generated with a spring-mass model.
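A force condition map of the kind described can be sketched in one dimension: an impact's momentum is applied at one node of a spring-connected chain and attenuates with distance. The decay rule and constants below are illustrative assumptions, not the paper's model:

```python
def force_map(impact_index, velocity, mass, n=5, k=1.0):
    """Sketch of a 1-D force condition map: an impact with momentum m*v
    is applied at one node, and the encoded force decays with distance
    from the impact point. This toy decay stands in for the paper's
    spring-mass propagation."""
    momentum = mass * velocity
    return [momentum * k / (1 + abs(i - impact_index)) for i in range(n)]

# A 2 kg object hitting node 2 at 3 m/s:
print(force_map(impact_index=2, velocity=3.0, mass=2.0))
```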
-
Norifumi KAWABATA
Session ID: 23-04-085
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
GPT-4, a multimodal large-scale language model, was released on March 14, 2023. GPT-4 is built on the Transformer, a machine learning architecture for natural language processing: a large neural network is trained with unsupervised learning and then refined with reinforcement learning from human feedback (RLHF). Although GPT-4 is an achievement of natural language processing (NLP) research, the underlying technology applies not only to text generation but also to image generation. However, the specifications of GPT-4 have not been made public, making it difficult to use for research purposes. In this study, we first generated an image database by adjusting parameters in Stable Diffusion, a deep learning model used for image generation from text and image input. We then carried out experiments to evaluate the image quality of the generated database and discussed quality assessment of the image generation model.
-
Haru SHINDO, Takuya FUNATOMI, Akinobu MAEJIMA, Yasuhiro MUKAIGAWA, Hir ...
Session ID: 23-04-086
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In recent years, there has been growing demand for real-time pose estimation in environments with intense disturbance light, such as effects and moving lights in stage productions. Thermal cameras using far-infrared imaging have therefore been employed to mitigate the impact of disturbance light and achieve robust pose estimation. However, the Non-Uniformity Correction (NUC) required for accurate temperature measurement in thermal cameras freezes image capture, preventing continuous real-time pose estimation. In this study, aiming at real-time pose estimation robust to disturbance light, we propose continuing pose estimation during NUC execution with an RGB camera, together with a method for predicting the bounding box of a person.
-
Haruki UENO, Junichi SUGITA
Session ID: 23-04-087
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
Auscultation is a basic technique in physical assessment, in which a stethoscope is used to listen to sounds inside the body to determine the patient's condition. Pulmonary auscultation is one of the tasks nurses perform most frequently, yet many nurses find it difficult. It requires distinguishing breath sounds from adventitious sounds, and since only breath sounds can be heard from healthy persons, mannequin-type auscultation simulators are used to practice listening to lung sounds of various pathological conditions. These simulators are expensive, however, so the number of mannequins and the places where they can be used are limited. This study aims to develop an auscultation simulator on a social virtual reality (social VR) platform. Social VR has the advantages of being usable wherever there is a PC connected to the network and of facilitating learning while people in remote locations communicate with each other.
-
Naoya INAGAKI, Takeshi MASUDA, Hiroaki SAWANO
Session ID: 23-04-088
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In analyzing video footage of ball sports, the analyst must watch the entire game footage and record the necessary information. In this study, we propose a method for automatically extracting scenes in which scores are displayed from broadcast ball-game video, focusing on the score displays common across broadcasts of various ball sports. The proposed method estimates the scoreboard area by frame difference, then extracts score-display scenes by analyzing the difference between the average image of the estimated area and individual frames. Furthermore, the score area is estimated by frame difference and background subtraction to predict when the score changes. In experiments, scene extraction produced zero false extractions and missed 4% of scenes, while score-change time estimation produced 2 false estimations and 1 missed estimation.
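The frame-difference step for locating the static scoreboard overlay can be sketched by flagging pixels that barely change across frames; the threshold and toy grayscale frames below are assumptions for illustration:

```python
def static_mask(frames, threshold=5):
    """Mark pixels whose frame-to-frame difference stays below a threshold.
    In a sports broadcast, these low-motion pixels tend to form the
    on-screen score display (a sketch of the frame-difference step)."""
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[True] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(cur[y][x] - prev[y][x]) >= threshold:
                    mask[y][x] = False
    return mask

# Two 2x3 frames: only the top-left pixel stays (nearly) constant.
f1 = [[200, 10, 20], [30, 40, 50]]
f2 = [[201, 90, 95], [99, 98, 97]]
print(static_mask([f1, f2]))
```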
-
Ryotaro ARAI, Tomokazu ISHIKAWA
Session ID: 23-04-089
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
This paper analyzes the effects of players' physiological responses and personality traits on shooting maneuvers in order to establish a training methodology for FPS games. Twelve subjects played two different games while heart rate variability, eye movement, hit rate, and mouse operation were measured. Personality traits were analyzed with a questionnaire and the 16 Personalities test. We found that an increased heart rate and wrist-axis mouse manipulation affected shooting accuracy, and that players with certain personality types showed higher ability.
-
Haruka IWAKAMI, Jun SHIBATA, Fuuya SEKIGUCHI, Hiroaki SAWANO
Session ID: 23-04-090
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
RESTRICTED ACCESS
In this study, we investigate a bidirectional interpretation method between Japanese and sign language. The proposed method translates input Japanese speech or text into sign-language CG video, and translates live sign-language video into Japanese text or speech. To generate sign-language CG from Japanese, morphological analysis is performed on the input text, and the sign-language animation corresponding to each morpheme is concatenated and played back. To generate Japanese from live sign-language video, the system performs skeletal detection with MediaPipe, extracts sign-language words from the detected time-series skeletal coordinates using machine learning, converts the extracted words into text sentences, and outputs audio using speech processing. Since speech processing has not yet been implemented, this paper reports the sign-language CG generation results for Japanese text input, together with the word-extraction accuracy and sentence-generation results for live video of signed sentences.
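The Japanese-to-sign-CG step (morphemes looked up in an animation dictionary and concatenated) can be sketched as follows; the clip table, clip names, and pre-tokenized input are hypothetical stand-ins, since the paper uses a real morphological analyzer:

```python
# Sketch of the Japanese-to-sign-CG step: each morpheme from morphological
# analysis is looked up in a sign-animation dictionary, and the matching
# clips are concatenated for playback. Clip names are hypothetical.
CLIPS = {"今日": "sign_today.anim", "天気": "sign_weather.anim",
         "良い": "sign_good.anim"}

def to_sign_sequence(morphemes):
    """Return the playback list, skipping morphemes with no clip
    (e.g. particles) as a simplifying assumption."""
    return [CLIPS[m] for m in morphemes if m in CLIPS]

print(to_sign_sequence(["今日", "は", "天気", "が", "良い"]))
```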
-
Haruka GOTO, Tomokazu ISHIKAWA
Session ID: 23-04-091
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
We propose a method for generating crowd simulations from video. The parameters necessary for crowd simulation, i.e., the number of people, their actions, and their speed, are extracted using image recognition. The number of people and their actions are estimated with a dedicated algorithm that remains accurate even in low-light conditions or when people overlap. Speed is estimated from the distance moved between frames. By providing these parameters to the crowd simulation, we were able to generate a simulation close to the actual video.
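The speed estimate described above can be sketched as follows; this is a minimal illustration under assumed inputs (per-frame pixel positions from a tracker, a known frame rate, and a pixel-to-metre scale), not the authors' implementation:

```python
def estimate_speed(track, fps, metres_per_pixel):
    """Estimate average speed (m/s) from per-frame (x, y) pixel
    positions, using the displacement between consecutive frames."""
    if len(track) < 2:
        return 0.0
    dist_px = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dist_px += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    elapsed_s = (len(track) - 1) / fps
    return dist_px * metres_per_pixel / elapsed_s

# A person tracked over 5 frames at 30 fps, moving 3 px per frame:
track = [(0, 0), (3, 0), (6, 0), (9, 0), (12, 0)]
print(estimate_speed(track, fps=30, metres_per_pixel=0.01))
```

The per-person speeds obtained this way can be fed directly to the simulation as agent parameters.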
-
Koki SHIMABUKURO, Tomokazu ISHIKAWA
Session ID: 23-04-092
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
We examine the usefulness of an AI-based technique for classifying and screening peripheral blood leukocytes in microscopic blood images of cats and dogs. MG-stained peripheral blood smears of cats and dogs were used as samples, and classification accuracy was evaluated by inference with a CNN model trained on leukocyte images. After transfer learning, the per-class accuracy was 0.10 for eosinophils, 0.80 for lymphocytes, 0.80 for monocytes, and 0.70 for neutrophils. However, more training images are needed before this technique can be used in the field.
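As an illustration of how such per-class figures are computed, the sketch below evaluates per-class accuracy (recall) from label/prediction pairs; the class counts and predictions here are invented for the example and merely constructed to reproduce the reported values:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Fraction of images of each true class that the model labels
    correctly (per-class recall)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {c: correct[c] / total[c] for c in total}

# Hypothetical predictions for four leukocyte classes:
y_true = (["Lymphocyte"] * 5 + ["Monocyte"] * 5
          + ["Neutrophil"] * 10 + ["Eosinophil"] * 10)
y_pred = (["Lymphocyte"] * 4 + ["Monocyte"] * 1      # 4/5 lymphocytes correct
          + ["Monocyte"] * 4 + ["Lymphocyte"] * 1    # 4/5 monocytes correct
          + ["Neutrophil"] * 7 + ["Monocyte"] * 3    # 7/10 neutrophils correct
          + ["Eosinophil"] * 1 + ["Neutrophil"] * 9) # 1/10 eosinophils correct
print(per_class_accuracy(y_true, y_pred))
```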
-
Yoshiaki OZAKI, Koichi YAMADA, Hidetaka MASUDA
Session ID: 23-04-093
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
In recent years, individuals increasingly use multiple mobile devices. However, current device functionality and applications lack sufficient support for using multiple devices in tandem: users often separate devices by purpose or role and operate them independently. We are developing a system to support smartphone tasks that require consulting related information. The system integrates multiple devices so that information related to the application context (scene) on the main device is automatically displayed on a secondary device. Our research addresses a limitation of previous systems, which could not present information related to a series of tasks spanning different scenes; we propose and implement an annotation list to solve this issue.
-
Yuzhou PEI, Yongha CHANG, Nobuhiko MUKAI
Session ID: 23-04-094
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
A 3D point cloud is generated from photos taken with an iPhone stereo camera, and a triangular mesh model is constructed from the point cloud using the k-nearest neighbor method after noise removal. Since this mesh model includes back faces and overlapping faces, the number of polygons in the 3D model can be reduced by removing back faces using the face normal vectors and then merging overlapping triangles into a single polygon. This paper describes how to reduce the number of polygons.
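The backface removal step can be sketched with plain vector math; the triangle representation, winding order, and view-direction convention below are assumptions for illustration, not the authors' code:

```python
def face_normal(v0, v1, v2):
    """Normal of a triangle via the cross product of two edge vectors."""
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def is_back_face(v0, v1, v2, view_dir):
    """A face is a back face when its normal points away from the
    viewer, i.e. its dot product with the viewing direction is
    non-negative (counter-clockwise winding assumed)."""
    n = face_normal(v0, v1, v2)
    return sum(n[i] * view_dir[i] for i in range(3)) >= 0.0

# A counter-clockwise triangle in the xy-plane has normal (0, 0, 1):
# viewed along -z it is a front face, viewed along +z a back face.
tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(is_back_face(*tri, (0, 0, -1)), is_back_face(*tri, (0, 0, 1)))
```

Faces flagged as back faces are discarded before the overlapping triangles are merged.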
-
Yuichi SAKANO, Hironobu ABE
Session ID: 23-04-095
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
In producing 2D animation from illustrations, the target illustration must be divided into parts by category, such as hair, face, and body. This partitioning is usually done with graphics software, but for complex animations the number of parts becomes large and the work time-consuming. In this paper, we propose a method that improves the segmentation accuracy of hair by automatically dividing it into parts with U-Net, a semantic segmentation model, combined with image-processing-based post-processing.
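One generic image-processing post-processing step for a segmentation mask (not necessarily the one used in this work) is to keep only the largest connected component of the predicted hair region, discarding small spurious blobs; a minimal sketch on a binary mask:

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected component of a binary mask
    (list of lists of 0/1), a common clean-up for segmentation output."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:  # breadth-first flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

# A 2x2 hair blob plus one stray false-positive pixel at (1, 3):
mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(largest_component(mask))
```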
-
Akito SAKAI, Wataru WAKITA
Session ID: 23-04-096
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
This research proposes a method for easy and unlimited movement in a VR space, in which the operator's foot and leg positions are estimated from the foot load acquired by a scale-like device and reflected in an avatar. This method is expected to enable walking in VR space while providing a sense of body ownership at a lower cost than conventional methods. As a first step, we used a simple device with eight load sensors to estimate the positions of the operator's feet and legs from the front-back and left-right differences in the load. One-legged standing and forward movement reproduced transitions similar to the actual movements, but rightward movement differed significantly, with transitions occurring instantaneously. In the future, we plan to improve the placement of the sensors along the sole and to control the amount of change in the load difference.
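The load-difference computation can be sketched as follows; the sensor layout, coordinate signs, and example values below are assumptions for illustration, not the authors' device:

```python
def load_differences(loads, positions):
    """Front-back and left-right load differences from per-sensor
    loads and sensor (x, y) positions; x > 0 is right, y > 0 is front
    (sign conventions assumed)."""
    front = sum(l for l, (x, y) in zip(loads, positions) if y > 0)
    back = sum(l for l, (x, y) in zip(loads, positions) if y < 0)
    right = sum(l for l, (x, y) in zip(loads, positions) if x > 0)
    left = sum(l for l, (x, y) in zip(loads, positions) if x < 0)
    return front - back, right - left

# Eight sensors, four front and four back; weight shifted forward:
positions = [(x, y) for y in (1, -1) for x in (-2, -1, 1, 2)]
print(load_differences([2, 2, 2, 2, 1, 1, 1, 1], positions))
# → (4, 0)
```

Thresholding these two differences over time would then drive the estimated foot and leg positions of the avatar.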
-
Shinnosuke NAGANO, Suguru MATSUYOSHI
Session ID: 23-04-097
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
For a character whose name has only a Hiragana representation, it is not easy to choose a Kanji representation appropriate to his or her traits and background. We propose a system that recommends Kanji representation candidates for a given Hiragana name and a given musical piece. We target 10 popular Japanese given names, such as Aoi, Sakura, and Haruto. Experiments show that the proposed system can recommend appropriate candidates for 76% of the combinations of input names and musical pieces.
-
Kazuki YAMADA, Suguru MATSUYOSHI
Session ID: 23-04-098
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
We target the Double Cross The 3rd Edition TRPG. We collected 28 scenarios of this TRPG system and analyzed their story structure, event types, the number of scenes in each story phase, and the battle parameters of boss enemies. We then developed a scenario template for beginners consisting of 13 spreadsheets with 146 blank cells. We asked 6 subjects to write scenarios with and without the proposed template. The experiment showed that our template could recommend proper text or values for more than 30 cells and reduce writing time.
-
Takato KONNO, Suguru MATSUYOSHI
Session ID: 23-04-099
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
A female character model kit is a plastic kit of a girl character or a humanized robot. We surveyed 196 model kits from 13 female character model series manufactured by 8 Japanese companies. Our findings are as follows: the 13 series have similar distributions of hair colors and of the colors of arm and face parts; the distributions of hair styles and eye colors vary across the 13 series; and the degree of freedom is inversely proportional to the number of world settings and character settings. We conclude that hair colors and arm colors should be chosen according to the traits of the target characters.
-
Hikaru NAGATOMI, Suguru MATSUYOSHI
Session ID: 23-04-100
Published: 2024
Released on J-STAGE: June 30, 2025
CONFERENCE PROCEEDINGS
We target Uma Musume Pretty Derby and Blue Archive, two popular social games in Japan. We asked annotators to label the degree of excitement of the episodes in all chapters of these games' stories. A majority of annotators labeled 37 of 89 episodes of Uma Musume and 125 of 291 episodes of Blue Archive as "most exciting." We found that these games place their most exciting elements at the middle and final points of their storylines, as TV dramas and movies usually do. We also found that episodes of the final chapter tend to be judged as "most exciting."