医用画像情報学会雑誌
Online ISSN : 1880-4977
Print ISSN : 0910-1543
ISSN-L : 0910-1543
Volume 36, Issue 2
Showing 1-21 of 21 articles from the selected issue
Invited Review
  • -Basics-
    高橋 規之
    Article type: Invited Review
    2019 Volume 36 Issue 2 Pages 50-52
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    DIGITS is a free web application that can be used to simply and rapidly train deep neural networks for image classification, segmentation, and object detection. DIGITS provides a graphical interface to deep learning frameworks so that they need not be driven directly from the command line. It simplifies common deep learning procedures such as constructing datasets, configuring and training deep neural networks, and monitoring training performance in real time with visualizations. This paper presents a brief overview of how to use DIGITS to build an image dataset for classification, train a network model, and classify images, using chest X-ray images as the example.
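    The workflow above starts from a classification dataset laid out as one folder per class, which is the layout DIGITS reads in. A minimal standard-library sketch of preparing such a layout with a training/validation split (the paths, split ratio, and helper name are hypothetical, not taken from the article):

    ```python
    import os
    import random
    import shutil

    def split_dataset(src_dir, dst_dir, val_fraction=0.25, seed=0):
        """Copy a one-folder-per-class image tree into train/ and val/ subtrees.

        DIGITS-style classification datasets use one subdirectory per class;
        this helper partitions each class folder into a training and a
        validation portion.  All paths here are hypothetical examples.
        """
        rng = random.Random(seed)
        counts = {}
        for cls in sorted(os.listdir(src_dir)):
            files = sorted(os.listdir(os.path.join(src_dir, cls)))
            rng.shuffle(files)
            n_val = int(len(files) * val_fraction)
            for subset, names in (("val", files[:n_val]), ("train", files[n_val:])):
                out = os.path.join(dst_dir, subset, cls)
                os.makedirs(out, exist_ok=True)
                for name in names:
                    shutil.copy(os.path.join(src_dir, cls, name),
                                os.path.join(out, name))
            counts[cls] = (len(files) - n_val, n_val)
        return counts
    ```

    Either subtree can then be pointed at from the DIGITS dataset-creation screen.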

Invited Commentary Paper
Original Articles
  • 大島 あみ, 神谷 直希, 篠原 範充
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 59-63
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Breast cancer is the most common cancer among Japanese women. Mammography is used for population-based breast cancer screening, and mammary gland density is used for risk management. Four categories are defined for mammary gland density, and doctors and technologists classify it qualitatively by visual assessment; an objective estimation of mammary gland density is therefore required. In this study, we propose an automatic classification method for mammary gland density in mammograms using a deep convolutional neural network (DCNN). AlexNet is used as the DCNN, and five input image sets are prepared: the original image only, the edge image only, and combinations of the original and edge images, with the edge kernel size set to 3 or 5. Finally, the mammary gland density is output as one of the four categories as the predicted classification result. From population-based screening data, 1106 mediolateral oblique images of right and left breasts were used. As a result, the average concordance rate between the predicted classification and the doctors' evaluation reached 82.3% when only the original images were used.
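    The edge-image inputs described above can be produced with a small convolution kernel of size 3 or 5. The abstract does not specify the edge operator, so the Laplacian-style kernel below is an assumption; a pure-Python sketch:

    ```python
    def edge_filter(image, kernel_size=3):
        """Convolve a 2-D grayscale image (list of lists) with a
        Laplacian-style edge kernel of size 3 or 5.  The exact operator
        used in the study is not stated; this kernel is an illustrative
        assumption.  Border pixels are left at zero."""
        k = kernel_size
        if k not in (3, 5):
            raise ValueError("kernel_size must be 3 or 5")
        # Centre weight k*k-1, all neighbours -1: the response is exactly
        # zero on flat regions and nonzero across intensity edges.
        kernel = [[-1] * k for _ in range(k)]
        kernel[k // 2][k // 2] = k * k - 1
        h, w = len(image), len(image[0])
        r = k // 2
        out = [[0] * w for _ in range(h)]
        for y in range(r, h - r):
            for x in range(r, w - r):
                acc = 0
                for dy in range(-r, r + 1):
                    for dx in range(-r, r + 1):
                        acc += kernel[dy + r][dx + r] * image[y + dy][x + dx]
                out[y][x] = acc
        return out
    ```

    The filtered image can then be fed to the network alone or stacked with the original as a second channel.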

  • 松山 江里, 李 鎔範, 高橋 規之, 蔡 篤儀
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 64-71
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    In recent years, convolutional neural networks (CNNs) have been exploited in the medical imaging research field and have successfully shown their ability in image classification and detection. In this paper, we used a CNN combined with a wavelet transform approach to histologically classify a dataset of 548 lung CT images into 5 categories: lung adenocarcinoma, lung squamous cell carcinoma, metastatic lung cancer, potential lung cancer, and normal. The main difference between commonly used CNNs and the presented method is that we use redundant wavelet coefficients at level 1, rather than the original images, as inputs to the CNN. One major advantage of the proposed method is that there is no need to extract regions of interest from the images in advance; the wavelet coefficients of the entire image are used as inputs. We compare the classification performance of the proposed method to that of an existing CNN classifier and a CNN-based support vector machine classifier. The experimental results show that the proposed method achieves the highest overall accuracy, 91.7%, and demonstrate its potential for classifying lung diseases in CT images.
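    Redundant (undecimated) level-1 wavelet coefficients keep the original image size, which is why the subbands of the entire image can be stacked as CNN input channels without cropping. A pure-Python sketch using Haar filters and periodic boundaries (both are illustrative assumptions; the abstract does not state the wavelet used):

    ```python
    def swt2_haar(image):
        """One-level undecimated (redundant) 2-D Haar transform.

        Returns the four full-resolution subbands (LL, LH, HL, HH) that can
        be stacked as CNN input channels.  Haar filters and periodic
        boundary handling are illustrative assumptions."""
        lo, hi = (0.5, 0.5), (0.5, -0.5)

        def filt_rows(img, f):
            h, w = len(img), len(img[0])
            return [[f[0] * img[y][x] + f[1] * img[y][(x + 1) % w]
                     for x in range(w)] for y in range(h)]

        def filt_cols(img, f):
            h, w = len(img), len(img[0])
            return [[f[0] * img[y][x] + f[1] * img[(y + 1) % h][x]
                     for x in range(w)] for y in range(h)]

        # Filter along rows, then columns, with no downsampling anywhere.
        L, H = filt_rows(image, lo), filt_rows(image, hi)
        return (filt_cols(L, lo), filt_cols(L, hi),
                filt_cols(H, lo), filt_cols(H, hi))
    ```

    Because no decimation is performed, every subband has the same shape as the input image.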

  • 畠野 和裕, 村上 誠一, 植村 知規, 陸 慧敏, 金 亨燮, 青木 隆敏
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 72-76
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Osteoporosis is one of the main diseases of bone. Although diagnostic imaging for osteoporosis is effective, there are concerns about the increased burden on radiologists, uneven diagnostic results due to differences in experience, and undetected lesions. In this study, we therefore propose a diagnosis support method that classifies osteoporosis from phalangeal computed radiography (CR) images and presents the classification results to physicians. In the proposed method, we construct classifiers using convolutional neural networks (CNNs) to separate normal cases from abnormal (osteoporotic) cases. In our experiments, two kinds of CNN models were constructed using input images generated from 101 CR cases and evaluated with the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Finally, an AUC of 0.995 was obtained.
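    The AUC used for evaluation can be computed directly from classifier scores as the Mann-Whitney statistic: the probability that a randomly chosen abnormal case scores higher than a randomly chosen normal one. A minimal sketch (the function name and toy data are illustrative, not from the study):

    ```python
    def roc_auc(labels, scores):
        """Area under the ROC curve via the Mann-Whitney U statistic:
        the fraction of (positive, negative) pairs in which the positive
        case receives the higher score; ties count as one half."""
        pos = [s for l, s in zip(labels, scores) if l == 1]
        neg = [s for l, s in zip(labels, scores) if l == 0]
        wins = 0.0
        for p in pos:
            for n in neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(pos) * len(neg))
    ```

    An AUC of 0.995 means almost every abnormal case outranked every normal case under this pairwise view.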

  • 芳野 由利子, 陸 慧敏, 金 亨燮, 村上 誠一, 青木 隆敏, 木戸 尚治
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 77-82
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    A temporal subtraction image is obtained by subtracting a previous image, warped so that its structures match those of the current image, from the current image. The temporal subtraction technique removes normal structures and enhances interval changes, such as new lesions and changes in existing abnormalities, in a medical image. However, many artifacts remain on temporal subtraction images, and these can be detected as false positives. In this paper, we propose a 3D-CNN that classifies initial nodule candidates detected using the temporal subtraction technique. To compare architectures, we evaluated 7 models (3D-ShallowNet, 3D-AlexNet, 3D-VGG11, 3D-VGG13, 3D-ResNet8, 3D-ResNet20, and 3D-ResNet32) on 28 thoracic MDCT cases including 28 small lung nodules. 3D-AlexNet showed the highest performance.
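    The core of temporal subtraction is aligning the previous scan to the current one and then subtracting, so that stable anatomy cancels and interval changes remain. A deliberately simplified 1-D sketch using a rigid integer shift (the actual technique uses non-rigid 3-D warping; this is only an illustration of the principle):

    ```python
    def temporal_subtraction(current, previous, max_shift=3):
        """Subtract a previous 1-D intensity profile from the current one
        after aligning it by the integer shift minimising the mean squared
        difference.  Real temporal subtraction warps the previous image
        non-rigidly in 3-D; this rigid 1-D version is a sketch."""
        n = len(current)

        def msd(shift):
            pairs = [(current[i], previous[i - shift])
                     for i in range(n) if 0 <= i - shift < n]
            return sum((c - p) ** 2 for c, p in pairs) / len(pairs)

        best = min(range(-max_shift, max_shift + 1), key=msd)
        diff = [current[i] - previous[i - best] if 0 <= i - best < n else 0.0
                for i in range(n)]
        return diff, best
    ```

    In the difference profile, matched structures subtract to zero while a new "lesion" survives as a residual peak, which is exactly what the candidate detector looks for.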

  • 山田 朋奈, 李 鎔範, 長谷川 晃
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 83-87
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    The purpose of this paper is to develop a computerized method for classifying forearm X-ray images by side (right or left) and arm direction using a deep convolutional neural network (DCNN). 648 radiographs were obtained using X-ray lower-arm phantoms. These images were downsized to 213×256 pixels and used as training and test images for the DCNN. AlexNet and GoogLeNet were used as the DCNNs, and all radiographs were classified into eight categories. Classification accuracies were obtained by nine-fold cross-validation tests; the accuracies of AlexNet and GoogLeNet were 79.3% and 92.6%, respectively. GoogLeNet would thus be useful for classifying forearm radiographs automatically, and the proposed method may contribute to quality assurance for medical images.
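    Nine-fold cross-validation partitions the 648 radiographs into nine folds of 72 images, each fold serving once as the test set while the remaining 576 images train the network. A standard-library sketch of the fold generation (the helper name is illustrative):

    ```python
    import random

    def kfold_indices(n_samples, k=9, seed=0):
        """Shuffle sample indices and split them into k near-equal folds;
        return (train, test) index pairs, one per fold, so each sample is
        tested exactly once across the k runs."""
        idx = list(range(n_samples))
        random.Random(seed).shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        splits = []
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            splits.append((train, test))
        return splits
    ```

    The reported accuracy is then the average test accuracy over the nine runs.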

  • 腰高 美穂, 榎本 和馬, 寺本 篤司, 藤田 広志
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 88-92
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Breast density is important diagnostic information because it is related to the detection sensitivity of breast cancer and to cancer risk. However, since observers classify density subjectively, judgments vary with individual and experience differences. In this study, we developed a method for automated classification of mammograms using a deep convolutional neural network (DCNN). Ninety-three mammograms from cancer screening programs were included. A two-dimensional image was provided to the input layer of the DCNN, and four output units corresponding to four levels of breast density were obtained via two convolution, pooling, and fully connected layers. As input, high-pass filtered images were given to the input layer to emphasize the skin line and the mammary glands in the mammogram; furthermore, the mammogram background was trimmed for data augmentation. Evaluation on the 93 mammograms gave a correct classification rate of 86%. Moreover, when the preprocessing (high-pass filtering and trimming) was applied to the input image, classification performance improved compared with inputting the mammogram directly. These results indicate that DCNNs will be useful for breast density evaluation and risk assessment using mammograms.
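    The two preprocessing steps, high-pass filtering and background trimming, can be sketched in pure Python. The 3×3 local-mean high-pass and the threshold value below are assumptions, since the abstract does not give the filter parameters:

    ```python
    def high_pass(image):
        """Simple high-pass: subtract the 3x3 local mean from each interior
        pixel, emphasising the skin line and gland structure.  Border
        pixels are left at zero."""
        h, w = len(image), len(image[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                mean = sum(image[y + dy][x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
                out[y][x] = image[y][x] - mean
        return out

    def trim_background(image, threshold=0):
        """Crop a grayscale image (list of lists) to the bounding box of
        pixels brighter than `threshold`, discarding the empty background
        border.  The threshold value is an illustrative assumption."""
        rows = [y for y, row in enumerate(image) if any(v > threshold for v in row)]
        cols = [x for x in range(len(image[0]))
                if any(row[x] > threshold for row in image)]
        if not rows or not cols:
            return image
        return [row[cols[0]:cols[-1] + 1] for row in image[rows[0]:rows[-1] + 1]]
    ```

    Trimming keeps the breast region at a consistent scale before the images are resized for the network input.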

  • 吉岡 拓弥, 内山 良一
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 93-97
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    The treatment plan for a lung cancer patient is determined based on TNM classification; however, this plan is not necessarily based on prognosis. The ability to predict a patient's prognosis from image examinations would yield new information for formulating a treatment plan. The purpose of this study is to develop a method for prognostic prediction in lung cancer patients. The public database NSCLC-Radiomics was used. Sixty-seven patients classified as stage I were selected, and their pretreatment computed tomography (CT) images and survival times were obtained. First, we selected the slice containing the largest tumor area and manually segmented the tumor regions. We then computed 294 radiomic features, such as tumor size, shape, CT values, and texture. Four radiomic features were selected using the least absolute shrinkage and selection operator (Lasso). A Cox regression model and a random survival forest (RSF) with the four selected features were employed to estimate the survivor functions of the 67 patients, and time-dependent receiver operating characteristic (ROC) analysis was used to evaluate estimation accuracy. The average area under the curve (AUC) values of the Cox regression model and the RSF were 0.741 and 0.826, respectively; the RSF therefore had the higher prognostic accuracy. Our proposed method for the prognostic prediction of lung cancer patients can provide useful information in formulating treatment plans.
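    Lasso performs the feature selection above by driving the coefficients of uninformative features exactly to zero, leaving only a handful of radiomic features with nonzero weight. A minimal cyclic coordinate-descent sketch on toy data (not the study's implementation):

    ```python
    def lasso_cd(X, y, lam, n_iter=200):
        """Lasso via cyclic coordinate descent: minimises
        (1/2n)*||y - X b||^2 + lam*||b||_1.  Features irrelevant to y are
        shrunk exactly to zero, which is how a few radiomic features are
        selected out of many."""
        n, p = len(X), len(X[0])
        beta = [0.0] * p
        for _ in range(n_iter):
            for j in range(p):
                # Residual with feature j's contribution removed.
                r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                     for i in range(n)]
                rho = sum(X[i][j] * r[i] for i in range(n)) / n
                z = sum(X[i][j] ** 2 for i in range(n)) / n
                # Soft-thresholding operator.
                if rho > lam:
                    beta[j] = (rho - lam) / z
                elif rho < -lam:
                    beta[j] = (rho + lam) / z
                else:
                    beta[j] = 0.0
        return beta
    ```

    With the penalty set to zero the ordinary least-squares fit is recovered; as the penalty grows, coefficients drop to zero one by one until only the strongest predictors remain.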

Research Letter
  • 長谷川 晃, 野口 映花, 李 鎔範
    Article type: Research Paper
    2019 Volume 36 Issue 2 Pages 98-101
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Unsharpness is likely to occur with a high heart rate in angiography. In this study, U-Net was used to remove unsharpness and thereby improve the image quality of X-ray movies in cardiovascular imaging. Dynamic X-ray images including unsharpness were taken with a metronome moving at 100 and 200 beats/minute (bpm). Standard deviation (SD) and the modulation transfer function (MTF) were measured to evaluate the effect of artifact removal. As a result, the mean SDs of the original images and of the images processed by U-Net were 4.34 and 0.54, respectively; similarly, the mean cut-off frequencies of the MTF were 0.52 mm−1 and 4.6 mm−1, respectively. Since the SD was greatly reduced and the MTF greatly improved, U-Net should improve the image quality of cardiovascular dynamic X-ray images.
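    The two image-quality measures used above can be sketched in pure Python: SD over a region of interest, and the MTF as the normalised magnitude of the discrete Fourier transform of a line spread function. The DFT-of-LSF route is an illustrative assumption, since the letter does not detail its measurement method:

    ```python
    import cmath

    def roi_sd(values):
        """Standard deviation of pixel values in a region of interest,
        used as a simple noise/unsharpness measure."""
        m = sum(values) / len(values)
        return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

    def mtf_from_lsf(lsf):
        """MTF as the normalised magnitude of the DFT of a line spread
        function; a perfectly sharp (delta) LSF gives MTF = 1 at every
        frequency, while a blurred LSF rolls off."""
        n = len(lsf)
        mag = [abs(sum(lsf[x] * cmath.exp(-2j * cmath.pi * u * x / n)
                       for x in range(n))) for u in range(n)]
        return [m / mag[0] for m in mag]  # normalise so MTF(0) = 1
    ```

    The cut-off frequency can then be read off as the frequency at which the MTF curve falls below a chosen threshold.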

Corporate Review