IIEEJ Transactions on Image Electronics and Visual Computing
Online ISSN : 2188-1901
Print ISSN : 2188-1898
ISSN-L : 2188-191X
Volume 12, Issue 1
Special Issue on Journal Track Papers in IEVC2024
Contributed Papers
  • Thalita Munique COSTA, Yoko USAMI, Mai IWAYA, Yuka TAKEZAWA, Yuika NAT ...
    2024 Volume 12 Issue 1 Pages 2-14
    Published: 2024
    Released on J-STAGE: April 10, 2025
    JOURNAL RESTRICTED ACCESS

    A common task in routine medical practice is the identification, classification, quantification, and analysis of white blood cells from peripheral blood, which is commonly done with the help of automatic counters. Some of the most popular machines have low accuracy and make relevant mistakes when classifying the cells. In this work, we propose and discuss the use of the deep learning architecture YOLOv7 to reclassify blood cell images segmented by the CellaVision™ DM96 machine into 11 classes, i.e., Band Neutrophil, Segmented Neutrophil, Basophil, Eosinophil, Erythroblast, Thrombocyte, Lymphocyte, Lymphocyte Variant, Metamyelocyte, Monocyte, and Myelocyte, using single and cascade classification methods. The classification made by the CellaVision™ DM96 achieved an accuracy of 76.20%, a precision of 80.93%, a recall of 92.87%, and an F1-score of 86.49%. The single classification method achieved a mean accuracy of 93.59%, a precision of 96.16%, a recall of 97.23%, and an F1-score of 96.69%. The cascade method achieved a mean accuracy of 93.85%, a mean precision of 96.81%, a recall of 97.23%, and an F1-score of 96.83% on the same evaluation database. Both methods proved effective at improving blood cell image classification performance, and the cascade method in particular reduced the rate of the most relevant mistakes.

    Download PDF (3702K)
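
    The single versus cascade distinction in the abstract above can be illustrated with a minimal sketch: a single-stage classifier scores all 11 cell classes at once, while a cascade first picks a coarse group and then refines within it. The grouping and the stand-in scoring function below are assumptions for illustration only; the paper's actual pipeline uses trained YOLOv7 models on CellaVision DM96 cell crops.

```python
# Minimal sketch contrasting single-stage and cascade classification over the
# 11 cell classes named in the abstract. The scoring function is a random
# stand-in for a trained model, and the coarse grouping is an assumption.
import numpy as np

CLASSES = [
    "Band Neutrophil", "Segmented Neutrophil", "Basophil", "Eosinophil",
    "Erythroblast", "Thrombocyte", "Lymphocyte", "Lymphocyte Variant",
    "Metamyelocyte", "Monocyte", "Myelocyte",
]

# Hypothetical coarse grouping for the cascade's first stage.
GROUPS = {
    "granulocytic": ["Band Neutrophil", "Segmented Neutrophil", "Metamyelocyte",
                     "Myelocyte", "Basophil", "Eosinophil"],
    "mononuclear": ["Lymphocyte", "Lymphocyte Variant", "Monocyte"],
    "other": ["Erythroblast", "Thrombocyte"],
}

def model_scores(image, labels, rng):
    """Stand-in for a trained classifier (YOLOv7 in the paper):
    returns a softmax score per label."""
    logits = rng.normal(size=len(labels))
    exp = np.exp(logits - logits.max())
    return dict(zip(labels, exp / exp.sum()))

def classify_single(image, rng):
    """Single stage: pick the best of all 11 classes at once."""
    scores = model_scores(image, CLASSES, rng)
    return max(scores, key=scores.get)

def classify_cascade(image, rng):
    """Cascade: pick a coarse group first, then refine within that group."""
    group_scores = model_scores(image, list(GROUPS), rng)
    group = max(group_scores, key=group_scores.get)
    fine_scores = model_scores(image, GROUPS[group], rng)
    return max(fine_scores, key=fine_scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_image = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder cell crop
    print("single :", classify_single(dummy_image, rng))
    print("cascade:", classify_cascade(dummy_image, rng))
```

    The appeal of a cascade in this setting is that confusions between coarse groups tend to be the costlier ones, so resolving the group first can reduce the most serious misclassifications even when overall accuracy changes little.
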
  • Mayu NAMAI, Issei FUJISHIRO
    2024 Volume 12 Issue 1 Pages 15-22
    Published: 2024
    Released on J-STAGE: April 10, 2025
    JOURNAL RESTRICTED ACCESS

    A variety of summarization techniques have recently been proposed to manage the growing volume of media data, but most are oriented toward homogeneous media conversion, resulting in a limited compression ratio. In this study, we focus on the creation of a vignette illustration that briefly represents the story of an animation or game and allows the viewer to understand its world view at a glance. If a video can be converted into a vignette illustration, it is expected to provide a much more highly compressed summary of the media information. This paper proposes VigNet, a system that semiautomatically converts an input video into vignette illustrations that reflect users' preferences.

    Download PDF (22025K)
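
    The abstract does not describe VigNet's internals, so the following is only a hypothetical sketch of the kind of front end a video-to-vignette system might include: sampling candidate keyframes and ranking them with a score that blends a crude visual-richness measure with a user-preference weight. The function names and scoring terms are invented for illustration and are not taken from the paper.

```python
# Hypothetical keyframe-selection front end for a video-to-vignette pipeline.
# Selected keyframes would subsequently be stylized and composed into a single
# vignette illustration; that part is not sketched here.
import cv2          # pip install opencv-python
import numpy as np

def sample_frames(video_path, step=30):
    """Grab every `step`-th frame from the video."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def rank_keyframes(frames, preference_weight=0.5, top_k=4):
    """Score frames by color variance (a crude 'visual richness' proxy) blended
    with mean brightness standing in for a user-preference term."""
    scores = []
    for f in frames:
        richness = f.astype(np.float32).std()
        brightness = f.astype(np.float32).mean()
        scores.append((1 - preference_weight) * richness
                      + preference_weight * brightness)
    order = np.argsort(scores)[::-1][:top_k]
    return [frames[i] for i in order]

# Usage (assumes a local file named 'input.mp4'):
#   keyframes = rank_keyframes(sample_frames("input.mp4"))
```
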
  • Hidenori ITAYA, Tsubasa HIRAKAWA, Takayoshi YAMASHITA, Hironobu FUJIYO ...
    2024 Volume 12 Issue 1 Pages 23-31
    Published: 2024
    Released on J-STAGE: April 10, 2025
    JOURNAL RESTRICTED ACCESS

    Multitask learning can be used to efficiently acquire common factors and useful features across several different tasks. It has been applied in various fields because it can improve the performance of a model by solving related tasks with a single model. One type of multitask learning uses auxiliary tasks, improving the performance of the target task by learning the auxiliary tasks simultaneously. In video game strategy tasks, UNsupervised REinforcement and Auxiliary Learning (UNREAL) has achieved high performance in a maze game by introducing an auxiliary task. However, in this method the auxiliary task must be appropriate for the target task, which is very difficult to determine in advance because the most effective auxiliary task changes dynamically with the learning status of the target task. Therefore, we propose an adaptive selection mechanism for auxiliary tasks, called auxiliary selection, based on deep reinforcement learning. We applied our method to UNREAL and experimentally confirmed its effectiveness on a variety of video games.

    Download PDF (2294K)
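
    One simple way to picture adaptive auxiliary selection is as a bandit that reinforces whichever auxiliary task coincides with improvements in main-task performance. The toy below follows that idea with UNREAL-style auxiliary-task names and a simulated benefit signal; it is an illustrative assumption, not the mechanism proposed in the paper.

```python
# Toy adaptive auxiliary-task selection: a softmax bandit whose reward is the
# change in (simulated) main-task performance after training with the chosen
# auxiliary task.
import numpy as np

AUX_TASKS = ["pixel_control", "reward_prediction", "value_replay"]  # UNREAL-style examples

def softmax(x, temp=1.0):
    z = np.exp((x - x.max()) / temp)
    return z / z.sum()

rng = np.random.default_rng(0)
preferences = np.zeros(len(AUX_TASKS))   # bandit preference per auxiliary task
lr = 0.1
main_score = 0.0

for step in range(200):
    probs = softmax(preferences)
    chosen = rng.choice(len(AUX_TASKS), p=probs)

    # Stand-in for one training phase: pretend each auxiliary task helps the
    # main task by a different (noisy) amount that drifts over time.
    true_benefit = [0.02, 0.05, 0.01][chosen] * (1 + 0.5 * np.sin(step / 50))
    delta = true_benefit + rng.normal(scale=0.02)
    main_score += delta

    # Policy-gradient-style preference update: reinforce auxiliary tasks whose
    # selection coincided with main-task improvement.
    grad = -probs
    grad[chosen] += 1.0
    preferences += lr * delta * grad

print("final selection probabilities:",
      dict(zip(AUX_TASKS, softmax(preferences).round(3))))
print("simulated main-task score:", round(main_score, 3))
```

    Because the simulated benefit drifts over time, the selection probabilities keep adapting rather than locking onto a single auxiliary task, which is the behavior an adaptive mechanism is meant to provide.
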
  • Kousuke KATAYAMA, Toru HIGAKI, Kazufumi KANEDA, Bisser RAYTCHEV, Watar ...
    2024 Volume 12 Issue 1 Pages 32-39
    Published: 2024
    Released on J-STAGE: April 10, 2025
    JOURNAL RESTRICTED ACCESS

    We developed a real-time, intuitive interaction and photorealistic illumination method for CT volume rendering. Our approach combines an autostereoscopic display and hand-sensor-based gesture control with a lightweight yet effective illumination model and a fast sampling algorithm that enable real-time rendering. As a result, our method can render volume data obtained from general CT examinations as a real-time 4K stereoscopic view, allowing intuitive comprehension of the 3D structure and providing the realism required not only for diagnostics but also for educational materials and forensic evidence.

    Download PDF (13426K)
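
    As background for the rendering pipeline described above, the following is a minimal CPU-only sketch of direct volume rendering: ray marching a synthetic volume with front-to-back alpha compositing, a simple opacity transfer function, and gradient-based diffuse shading. The paper's real-time 4K autostereoscopic renderer is far more elaborate; everything here is a simplified stand-in.

```python
# Direct volume rendering sketch: orthographic rays along z, front-to-back
# compositing, a linear opacity transfer function, and diffuse shading from
# the density gradient. Synthetic sphere volume, NumPy only.
import numpy as np

# Synthetic volume: a soft sphere of "density" values in [0, 1].
N = 64
z, y, x = np.mgrid[0:N, 0:N, 0:N]
r = np.sqrt((x - N/2)**2 + (y - N/2)**2 + (z - N/2)**2)
volume = np.clip(1.0 - r / (N/3), 0.0, 1.0)

# Gradient for shading (central differences).
gz, gy, gx = np.gradient(volume)
norm = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-6
light = np.array([0.577, 0.577, 0.577])            # directional light
diffuse = np.clip((gx*light[0] + gy*light[1] + gz*light[2]) / norm, 0.0, 1.0)

# Front-to-back compositing along z for every pixel (y, x).
color = np.zeros((N, N))
alpha = np.zeros((N, N))
for k in range(N):
    a = np.clip(volume[k] * 0.08, 0.0, 1.0)        # opacity transfer function
    c = diffuse[k]                                 # shaded sample (grayscale)
    color += (1.0 - alpha) * a * c
    alpha += (1.0 - alpha) * a
    if alpha.min() > 0.99:                         # early ray termination
        break

print("rendered image:", color.shape, "max intensity:", round(float(color.max()), 3))
```
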
  • Joichiro MURAOKA, Kosei TOMIOKA, Yusei MURAISHI, Naoki HASHIMOTO, Mie ...
    2024 Volume 12 Issue 1 Pages 40-47
    Published: 2024
    Released on J-STAGE: April 10, 2025
    JOURNAL RESTRICTED ACCESS

    The development of virtual reality technology has made it possible to easily change a person's appearance, and it has been suggested that changing the impression of one's appearance with a virtual reality avatar affects not only self-perception but also weight perception. We have previously studied the physical effects of changing the impression of one's own body during the weight illusion on the basis of electromyography. As a result, a significant correlation was obtained between the electromyogram and the degree of the weight illusion across subjects, although the correlation was not significant for individual subjects. In this study, to investigate how motion changes during the weight illusion, we analyze motion under a changed impression of one's own body through a dumbbell weight-comparison experiment. The results showed that subjects for whom the perceived strength of the avatar correlated with the degree of the weight illusion also showed a positive correlation between the perceived strength of the avatar and the velocity of their motion. These results suggest that the velocity of motion, the perceived strength of the avatar, and the degree of the weight illusion are related to each other.

    Download PDF (17033K)
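
    The analysis described above relates perceived avatar strength, motion velocity, and the degree of the weight illusion through within-subject and across-subject correlations. The sketch below reproduces only the shape of such an analysis on synthetic placeholder data; the variable names and effect sizes are illustrative assumptions, not the authors' measurements.

```python
# Correlation-analysis sketch on synthetic data: perceived avatar strength vs.
# lifting velocity vs. degree of weight illusion, compared within subjects and
# pooled across subjects.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_trials = 10, 20

records = []
for subj in range(n_subjects):
    strength_impression = rng.uniform(1, 7, n_trials)            # questionnaire score
    velocity = 0.2 + 0.05 * strength_impression + rng.normal(0, 0.05, n_trials)
    illusion = 0.5 * strength_impression + rng.normal(0, 1.0, n_trials)
    records.append((strength_impression, velocity, illusion))

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Within-subject correlations (often noisy with few trials) ...
within = [pearson(s, v) for s, v, _ in records]
print("mean within-subject r(strength, velocity):", round(float(np.mean(within)), 3))

# ... versus the pooled, across-subject correlations.
all_s = np.concatenate([s for s, _, _ in records])
all_v = np.concatenate([v for _, v, _ in records])
all_i = np.concatenate([i for _, _, i in records])
print("pooled r(strength, velocity):", round(pearson(all_s, all_v), 3))
print("pooled r(strength, illusion):", round(pearson(all_s, all_i), 3))
```
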
  • Shuto KINOSHITA, Yasushi YAMAZAKI
    2024 Volume 12 Issue 1 Pages 48-55
    Published: 2024
    Released on J-STAGE: April 10, 2025
    JOURNAL RESTRICTED ACCESS

    With the rapid spread of smartphones, user authentication on smartphones has become essential. However, conventional user authentication methods using PINs, passwords, pattern locks, and the like have the problem that users are not authenticated continuously after the first successful authentication, so there is a risk that an authenticated smartphone might be used improperly by unauthorized individuals. To address this problem, continuous authentication has been proposed, which verifies the user's identity without burdening the user by continuously acquiring biometric information from his/her daily smartphone usage. In particular, as smartphones are used for various purposes, ensuring the authenticity of the user during text input is crucial. Therefore, in this paper we focus on achieving continuous authentication on smartphones on the basis of flick input behavior, a typical text input method for many smartphone users. We aimed to extract user-specific features from Japanese free-text input entered through flick operations in daily usage and evaluated the effectiveness of continuous authentication based on these extracted features. The simulation results indicated that a certain level of authentication accuracy can potentially be maintained by appropriately selecting suitable features.

    Download PDF (1939K)
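
    The sketch below illustrates the general idea of continuous authentication from flick-input behavior, assuming a simple hand-crafted feature set (flick speed, touch duration, flick length), a per-user statistical template, and a fixed decision threshold. None of these choices are taken from the paper, which evaluates its own feature selection on real flick data.

```python
# Continuous-authentication sketch: enroll a per-user template from flick
# statistics, then score a window of new flicks against it with a z-score
# distance and a fixed threshold. All data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)

def make_flicks(speed_mean, duration_mean, n):
    """Synthetic flick events: [speed (px/ms), touch duration (ms), flick length (px)]."""
    speed = rng.normal(speed_mean, 0.2, n)
    duration = rng.normal(duration_mean, 15, n)
    length = speed * duration
    return np.stack([speed, duration, length], axis=1)

# Enrollment: template = per-feature mean and standard deviation.
enroll = make_flicks(speed_mean=1.5, duration_mean=120, n=200)
mu, sigma = enroll.mean(axis=0), enroll.std(axis=0) + 1e-6

def window_score(flicks):
    """Mean absolute z-score of a window of flicks relative to the template
    (lower means more similar to the enrolled user)."""
    return float(np.abs((flicks - mu) / sigma).mean())

THRESHOLD = 1.5  # assumed decision threshold

genuine = make_flicks(1.5, 120, 20)     # same behavioral statistics
impostor = make_flicks(2.2, 90, 20)     # different input behavior
for name, window in [("genuine", genuine), ("impostor", impostor)]:
    score = window_score(window)
    print(f"{name:8s} score: {score:.2f} ->", "accept" if score < THRESHOLD else "reject")
```

    In practice the threshold trades false rejections of the legitimate user against false acceptances of an impostor, and the choice of features determines how well the two score distributions separate.
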