Journal of the Japan Society for Precision Engineering
Online ISSN : 1882-675X
Print ISSN : 0912-0289
ISSN-L : 0912-0289
Volume 88, Issue 12
Special Issue : Advancement of Machine Tools Which Support Manufacturing
Selected Papers for Special Issue on Industrial Application of Image Processing
  • Osamu NASU, Daisuke SHIMBO, Junji SUKENO, Manabu HASHIMOTO
    2022 Volume 88 Issue 12 Pages 895-902
    Published: December 05, 2022
    Released on J-STAGE: December 05, 2022
    JOURNAL FREE ACCESS

    This paper introduces a fast image matching method using a coarse-to-fine search. For reliable and high-speed matching, it is important to choose an optimal sampling interval for the coarse search. In the proposed method, an efficient search is realized by making the coarse search interval non-uniform so that the performance of rotation matching becomes uniform. Through experiments using 4 types × 121 test images, it was confirmed that the proposed method achieves a speedup of up to 60% compared to the conventional method.

    Download PDF (5760K)
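The coarse-to-fine idea behind this paper can be sketched as follows. This is a minimal illustration, not the authors' method: the angular step is derived from the template radius so that the worst-case displacement of an edge pixel between neighboring rotation samples stays bounded, echoing the paper's goal of uniform rotation-matching performance (the paper's actual non-uniform interval design is not reproduced; `coarse_angles`, `coarse_to_fine`, and the `max_disp` bound are hypothetical names and parameters).

```python
import math

def coarse_angles(radius, max_disp=2.0):
    # Coarse rotation samples: choose the angular step from the template
    # radius so that the worst-case displacement of an edge pixel between
    # neighboring samples stays below max_disp pixels (illustrative rule).
    step = math.degrees(2.0 * math.asin(min(1.0, max_disp / (2.0 * radius))))
    n = max(1, math.ceil(360.0 / step))
    return [i * 360.0 / n for i in range(n)]

def coarse_to_fine(score, radius, fine_step=0.5):
    # Evaluate score(angle) on the coarse grid, then refine with a fine
    # uniform step inside the coarse cell around the best coarse angle.
    coarse = coarse_angles(radius)
    best = max(coarse, key=score)
    half = 360.0 / len(coarse) / 2.0
    candidates = [best]
    d = -half
    while d <= half:
        candidates.append(best + d)
        d += fine_step
    return max(candidates, key=score)
```

Larger templates get a finer coarse grid, so the number of score evaluations adapts to how sharply the match score varies with rotation; this is the speed/reliability trade-off the abstract describes.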
  • Shunsuke NAKATSUKA
    2022 Volume 88 Issue 12 Pages 903-909
    Published: December 05, 2022
    Released on J-STAGE: December 05, 2022
    JOURNAL FREE ACCESS

    In anomaly detection using deep learning, normal models built on pretrained CNNs using only normal data have become mainstream. This framework can use only normal data for training, and therefore discards valuable information even when abnormal data are available. In addition, PaDiM, one of the representative models in this framework, creates a normal model for each position and thus cannot consider the relationships between neighboring pixels. In this paper, we propose a method that generates a normal model by incorporating information from anomalous data as well as neighborhood information, and achieves an image-level AUROC of 0.984 on MVTec AD.

    Download PDF (2532K)
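For context, the per-position normal model that PaDiM-style methods build (the baseline this paper extends with anomalous data and neighborhood information) can be sketched as one Gaussian per spatial position, scored by Mahalanobis distance. The array shapes and the regularization constant below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def fit_normal_model(feats):
    # feats: (N, H, W, C) CNN features from normal images only.
    # Fit one Gaussian per spatial position (PaDiM-style baseline).
    mu = feats.mean(axis=0)                          # (H, W, C)
    n, h, w, c = feats.shape
    cov = np.empty((h, w, c, c))
    for i in range(h):
        for j in range(w):
            d = feats[:, i, j, :] - mu[i, j]
            cov[i, j] = d.T @ d / (n - 1) + 0.01 * np.eye(c)  # regularized
    return mu, np.linalg.inv(cov)

def anomaly_map(feat, mu, cov_inv):
    # Per-position Mahalanobis distance for one test image (H, W, C).
    d = feat - mu
    return np.sqrt(np.einsum('ijc,ijcd,ijd->ij', d, cov_inv, d))
```

Because each position is scored independently, relationships between neighboring pixels are invisible to this baseline, which is exactly the limitation the abstract points out.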
  • —Breaking away from Model-First Development to Once-for-All AI Model Development—
    Masakazu INOUE, Shogo AKIYAMA, Toshiaki OHGUSHI, Masao YAMANAKA
    2022 Volume 88 Issue 12 Pages 910-918
    Published: December 05, 2022
    Released on J-STAGE: December 05, 2022
    JOURNAL FREE ACCESS

    Once-for-All (OFA) is an AI model development method in which a Supernet, a redundant representation of a base AI model (Base Model), is trained only once to obtain models (Subnets) that suit various devices in terms of accuracy, processing speed, and number of parameters. In this paper, we address a road obstacle detection system consisting of multiple AI models and apply OFA to each of them. Finally, we succeed in obtaining Subnets that are optimal for the entire system by considering combinations of the obtained Subnets.

    Download PDF (4292K)
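The system-level Subnet selection the abstract describes can be pictured as a constrained combinatorial search over one candidate list per model. The candidate tuples, the latency budget, and the accuracy-sum objective below are hypothetical stand-ins for whatever metrics the authors actually optimize.

```python
from itertools import product

def select_subnets(candidates, latency_budget):
    # candidates: one list per model of (name, accuracy, latency) Subnets,
    # e.g. those obtained by training each model's Supernet with OFA.
    # Exhaustively pick one Subnet per model that maximizes summed accuracy
    # while total latency stays within the system budget.
    best, best_acc = None, float('-inf')
    for combo in product(*candidates):
        total_latency = sum(c[2] for c in combo)
        total_acc = sum(c[1] for c in combo)
        if total_latency <= latency_budget and total_acc > best_acc:
            best, best_acc = combo, total_acc
    return best
```

With a few models and a few Subnets each, brute force suffices; larger systems would need a smarter search. The point the abstract makes is that Subnets are chosen jointly for the whole system, not per model.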
  • Takuya GOTO, Hiroki NAKANO, Hajime ASAMA, Atsushi YAMASHITA
    2022 Volume 88 Issue 12 Pages 919-925
    Published: December 05, 2022
    Released on J-STAGE: December 05, 2022
    JOURNAL FREE ACCESS

    The predictive performance of segmentation depends highly on the quality of annotations. In the case of medical images, however, it is difficult to annotate the precise region of a disease, since lesions have a variety of shapes and ambiguous boundaries. Improving the segmentation performance of models trained with rough annotations would make it easier to create training data. This paper proposes a novel method to improve automatic tumor segmentation from computed tomography (CT) images using noisy annotations. The proposed method uses pre-training to detect rough tumor regions and fine-tuning to refine the roughly detected regions. To refine the roughly detected regions, an Active Contours Dice (ACD) loss is introduced in fine-tuning. In experiments using simulated rough annotations, the proposed method improves segmentation performance.

    Download PDF (2926K)
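The fine-tuning loss can be pictured as a Dice term plus an active-contour regularizer. The exact form of the authors' Active Contours Dice (ACD) loss is not given in the abstract, so the sketch below combines a standard soft Dice loss with a total-variation contour-length term; the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    # Standard soft Dice loss on probability maps in [0, 1].
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def contour_length(pred):
    # Total-variation approximation of the predicted contour length,
    # as used by active-contour-style segmentation losses.
    return (np.abs(np.diff(pred, axis=0)).sum()
            + np.abs(np.diff(pred, axis=1)).sum())

def acd_like_loss(pred, target, lam=0.01):
    # Dice term keeps overlap with the (noisy) annotation; the contour
    # term discourages ragged boundaries around roughly annotated regions.
    return soft_dice_loss(pred, target) + lam * contour_length(pred)
```

The intuition matches the abstract's two-stage scheme: pre-training gets the rough region right (Dice term), and fine-tuning tightens its boundary (contour term).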
  • Wataru GANAHA, Takuya KAMITANI, Masashi NISHIYAMA, Yoshio IWAI
    2022 Volume 88 Issue 12 Pages 926-936
    Published: December 05, 2022
    Released on J-STAGE: December 05, 2022
    JOURNAL FREE ACCESS

    We propose a method for temporally enhancing the high-frequency components of video sequences without artifacts in order to observe micromovements of the human body. Existing video motion magnification methods cause severe artifacts in the enhanced video sequence because the temporal micromovement and the spatial appearance of a subject are not stably separated from the input video sequence. When observers view an enhanced video sequence with severe artifacts, they cannot fully check the high-frequency components of body sway. Here, we assume that the temporal micromovement is the same for all pixels contained in the subject's head. Our method stably separates the video sequence into the temporal micromovement and the spatial appearance, and then amplifies the high-frequency components of the temporal micromovement. The experimental results show that our method enhanced video sequences of body sway without artifacts, and we confirmed that the high-frequency components were visible in the enhanced video sequences. As an application of our method, we also visualized the reasons for success or failure in baggage weight classification using video sequences of body sway.

    Download PDF (5210K)
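The temporal amplification step can be sketched in Eulerian style on a single shared trajectory, matching the paper's assumption that the micromovement is common to all head pixels: estimate the low-frequency component with a moving average and scale up the residual. The authors' stable appearance/micromovement separation is not reproduced here, and `alpha` and `win` are assumed parameters.

```python
import numpy as np

def magnify_high_freq(signal, alpha=10.0, win=9):
    # signal: 1-D temporal trajectory (e.g. the head micromovement assumed
    # common to all head pixels). Estimate the low-frequency part with a
    # moving average, then amplify the high-frequency residual by alpha.
    kernel = np.ones(win) / win
    low = np.convolve(signal, kernel, mode='same')
    high = signal - low
    return low + alpha * high
```

Because amplification acts on one shared trajectory rather than on each pixel independently, per-pixel noise cannot be blown up into spatial artifacts, which is the motivation the abstract gives for the shared-micromovement assumption.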