Journal of the Japan Society for Precision Engineering (精密工学会誌)
Online ISSN : 1882-675X
Print ISSN : 0912-0289
ISSN-L : 0912-0289
Vol. 88, No. 12
Displaying articles 1-25 of the 25 in the selected issue
Special Issue: Advancement of Machine Tools Supporting Manufacturing
Perspectives
Review Articles
The Path I Have Taken
Photo Feature and Interview: The Front Line of Precision Engineering
Introduction to Precision Engineering
Introduction of Institutes and Laboratories
 
Special Issue Papers: Practical Applications of Image Technology
  • 那須 督, 新保 大介, 助野 順司, 橋本 学
    2022, Vol. 88, No. 12, pp. 895-902
    Published: 2022/12/05
    Released online: 2022/12/05
    Journal, free access

    This paper introduces a fast image-matching method based on a coarse-to-fine search. For reliable and high-speed matching, it is important to choose the optimal interval for the coarse search. In the proposed method, an efficient search is realized by making the coarse-search interval non-uniform so that the performance of rotation matching becomes uniform. In experiments using 4 types × 121 test images, the proposed method was confirmed to achieve a speedup of up to 60% over the conventional method.
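The abstract does not give the interval-design details; as a rough sketch of the general coarse-to-fine idea (the function name, the toy score, and the example grid below are hypothetical and not taken from the paper), a coarse pass over non-uniform angle candidates can be refined by a fine pass around the best coarse hit:

```python
import numpy as np

def coarse_to_fine_search(score, coarse_angles, fine_step=0.25, fine_span=2.0):
    """Evaluate `score` at the coarse angle candidates, then refine with a
    fine uniform grid around the best coarse candidate."""
    best = coarse_angles[int(np.argmax([score(a) for a in coarse_angles]))]
    fine = np.arange(best - fine_span, best + fine_span + 1e-9, fine_step)
    return float(fine[int(np.argmax([score(a) for a in fine]))])

# Toy matching score with a peak at 33.3 degrees (stand-in for rotation matching).
score = lambda a: -abs(a - 33.3)
# Illustrative non-uniform coarse grid: dense in one range, sparse elsewhere
# (the paper's actual interval design is not reproduced here).
coarse = np.concatenate([np.arange(0.0, 10.0, 1.0), np.arange(10.0, 90.0, 5.0)])
angle = coarse_to_fine_search(score, coarse)
```

The coarse pass only needs to land within `fine_span` of the true peak, which is what lets the coarse interval grow wherever the score varies slowly.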

  • 中塚 俊介
    2022, Vol. 88, No. 12, pp. 903-909
    Published: 2022/12/05
    Released online: 2022/12/05
    Journal, free access

    In anomaly detection using deep learning, normal models built on pretrained CNNs from normal data only have become mainstream. This framework can use only normal data for training, and thus discards valuable information even when abnormal data are available. In addition, PaDiM, one of the representative models in this framework, builds a normal model for each position and therefore cannot consider the relationships between pixels. In this paper, we propose a method that builds the normal model using both the information in anomalous data and neighborhood information, achieving an image-level AUROC of 0.984 on MVTec AD.
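For context, a PaDiM-style baseline scores anomalies by Mahalanobis distance to a per-position Gaussian fitted on normal features; a minimal NumPy sketch of that baseline follows (the paper's extension using anomalous and neighborhood information is not reproduced, and all names are illustrative):

```python
import numpy as np

def fit_normal_model(feats, reg=0.01):
    """feats: (N, H, W, C) feature maps extracted from N normal images.
    Fit a per-position Gaussian (mean, regularized covariance)."""
    N, H, W, C = feats.shape
    mean = feats.mean(axis=0)                      # (H, W, C)
    cov = np.empty((H, W, C, C))
    for i in range(H):
        for j in range(W):
            x = feats[:, i, j, :] - mean[i, j]
            cov[i, j] = x.T @ x / (N - 1) + reg * np.eye(C)
    return mean, cov

def anomaly_map(feat, mean, cov):
    """Mahalanobis distance of one image's features to the normal model."""
    H, W, _ = feat.shape
    dist = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            d = feat[i, j] - mean[i, j]
            dist[i, j] = np.sqrt(d @ np.linalg.inv(cov[i, j]) @ d)
    return dist
```

Because each position has its own Gaussian, this baseline cannot relate a pixel to its neighbors, which is exactly the limitation the paper addresses.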

  • From One-off AI Model Development to One-Source, Many-Variants AI Model Development
    井上 昌和, 秋山 祥伍, 大串 俊明, 山中 正雄
    2022, Vol. 88, No. 12, pp. 910-918
    Published: 2022/12/05
    Released online: 2022/12/05
    Journal, free access

    Once-for-All (OFA) is an AI model development method in which a redundant representation (Supernet) of a base AI model (Base Model) is trained only once to obtain models (Subnets) suited to various devices in terms of accuracy, processing speed, and number of parameters. In this paper, we address a road-obstacle detection system consisting of multiple AI models and apply OFA to each of them. Finally, we succeed in obtaining the optimal Subnets for the entire system by considering combinations of the obtained Subnets.
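The system-level step can be sketched as a small combinatorial selection: given per-model Subnet candidates with (accuracy, latency) estimates, pick the combination maximizing summed accuracy under a total latency budget. This exhaustive sketch is an assumption for illustration; the paper's actual objective and constraints may differ:

```python
from itertools import product

def select_subnets(subnet_options, latency_budget):
    """subnet_options: one list per AI model in the pipeline, each holding
    (accuracy, latency) candidate Subnets. Exhaustively find the combination
    with the highest summed accuracy whose total latency fits the budget."""
    best_combo, best_acc = None, -1.0
    for combo in product(*subnet_options):
        acc = sum(a for a, _ in combo)
        lat = sum(l for _, l in combo)
        if lat <= latency_budget and acc > best_acc:
            best_combo, best_acc = combo, acc
    return best_combo, best_acc

options = [[(0.90, 10), (0.95, 20)],   # Subnet candidates for model 1
           [(0.80, 5), (0.85, 15)]]    # Subnet candidates for model 2
combo, acc = select_subnets(options, latency_budget=30)
```

Exhaustive search only scales to a handful of models and candidates; larger spaces would need a heuristic or evolutionary search, as is common in OFA deployments.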

  • 後藤 拓矢, 中野 宏毅, 淺間 一, 山下 淳
    2022, Vol. 88, No. 12, pp. 919-925
    Published: 2022/12/05
    Released online: 2022/12/05
    Journal, free access

    The predictive performance of segmentation depends strongly on the quality of annotations. For medical images, however, it is difficult to annotate the precise region of a disease, since lesions have a variety of shapes and ambiguous boundaries. Improving the segmentation performance of models trained with rough annotations would make training data easier to create. This paper proposes a novel method to improve automatic tumor segmentation from computed tomography (CT) images using noisy annotations. The proposed method uses pre-training to detect rough tumor regions and fine-tuning to refine the roughly detected regions; for this refinement, an Active Contours Dice (ACD) loss is introduced in fine-tuning. In experiments using simulated rough annotations, the proposed method improves segmentation performance.
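The exact form of the ACD loss is not given in the abstract; as a hedged illustration of the common pairing of a Dice term with an active-contour-style boundary-length regularizer (`acd_like_loss` and its weighting are assumptions, not the paper's definition):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted mask and a target mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def contour_length(pred):
    """Approximate boundary length of a soft mask via finite differences."""
    dy = np.abs(np.diff(pred, axis=0)).sum()
    dx = np.abs(np.diff(pred, axis=1)).sum()
    return dx + dy

def acd_like_loss(pred, target, lam=0.1):
    """Hypothetical combined loss: region overlap (Dice) plus a length
    penalty that discourages the ragged boundaries of rough annotations."""
    return dice_loss(pred, target) + lam * contour_length(pred) / pred.size
```

The length term pushes the fine-tuned boundary toward smooth contours, which is one plausible way a contour-based term can compensate for noisy annotation edges.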

  • 我那覇 航, 神谷 卓也, 西山 正志, 岩井 儀雄
    2022, Vol. 88, No. 12, pp. 926-936
    Published: 2022/12/05
    Released online: 2022/12/05
    Journal, free access

    We propose a method for temporally enhancing the high-frequency components of video sequences without artifacts, in order to observe micromovements of the human body. Existing video motion-magnification methods cause severe artifacts in the enhanced video sequence because the temporal micromovement and the spatial appearance of the subject are not stably separated from the input video sequence. When observers view an enhanced video sequence with severe artifacts, they cannot fully examine the high-frequency components of body sway. Here, we assume that the temporal micromovement is the same for all pixels within the subject's head. Our method stably separates the video sequence into the temporal micromovement and the spatial appearance, and then amplifies the high-frequency components of the micromovement. Experimental results show that our method enhanced body-sway video sequences without artifacts, and we confirmed that the high-frequency components were visible in the enhanced sequences. As an application, we also visualized the reasons for success or failure in baggage-weight classification using body-sway video sequences.
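Under the stated assumption that the micromovement is shared by all head pixels, the temporal part reduces to amplifying the high-frequency components of a single trajectory; a minimal FFT-based sketch of that idea (the function name and parameters are illustrative, not the paper's method):

```python
import numpy as np

def magnify_high_freq(signal, fs, cutoff_hz, alpha):
    """Amplify frequency components at or above cutoff_hz by (1 + alpha).
    signal: 1-D trajectory, e.g. the head position averaged over pixels;
    fs: sampling rate in Hz."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gain = np.where(freqs >= cutoff_hz, 1.0 + alpha, 1.0)
    return np.fft.irfft(spec * gain, n=len(signal))
```

Amplifying only the separated 1-D trajectory, rather than per-pixel intensities, is what avoids the spatial artifacts the abstract attributes to existing magnification methods.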
