Medical Imaging and Information Sciences
Online ISSN : 1880-4977
Print ISSN : 0910-1543
ISSN-L : 0910-1543
Volume 36, Issue 2
Displaying 1-21 of 21 articles from this issue
Invited Review Article
  • Kunihiko FUKUSHIMA
    2019 Volume 36, Issue 2, Pages 17-24
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Recently, deep convolutional neural networks (deep CNNs) have become very popular in the field of visual pattern recognition. The neocognitron, first proposed by Fukushima (1979), is a network of this category. Its architecture was suggested by neurophysiological findings on the visual systems of mammals. It is a hierarchical multi-layered network that acquires the ability to recognize visual patterns robustly through learning. Although the neocognitron has a long history, improvements to the network are still continuing. This paper discusses the recent neocognitron, focusing on differences from the conventional deep CNN.

    Download PDF (1843K)
  • Hiroshi FUJITA
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 25-29
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    A third artificial intelligence (AI) boom has arrived, and remarkable AI activity is now evident even in the medical diagnostic imaging area. In this review article, we focus on AI in the medical imaging area, in particular the recent development and practical application of "computer-aided diagnosis (CAD)" using deep learning technology (AI-CAD), and discuss future predictions at the end.

    Download PDF (2419K)
  • Shuji YAMAMOTO
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 30-38
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    This paper describes the effectiveness of using machine learning in clinical trials. Machine learning has been used in various applications in the field of radiology.
    In China, GPU cloud systems are widely utilized to gather big data of medical image information and to apply machine learning to healthcare screening.
    Artificial intelligence enables "predictive" analysis by learning from many years of time-series data and demonstrates its effectiveness and power.
    This paper also suggests future medical care based on distributed cooperative systems such as blockchain, and describes the direction and prospects for the application of medical image information.

    Download PDF (7413K)
Invited Review
  • -Customize-
    Akira HASEGAWA
    2019 Volume 36, Issue 2, Pages 39-43
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    This paper introduces methods of customizing a neural network using DIGITS, a deep learning development environment provided by NVIDIA. The paper covers how to manage the job data generated during learning, a bootstrap method for resampling on a learned model, and a learning method using a pretrained model. The main topic of this paper is how to customize an existing network model using Caffe.

    Download PDF (2711K)
Invited Review
  • : Introduction
    Noriyuki TAKAHASHI
    2019 Volume 36, Issue 2, Pages 50-52
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    DIGITS is a free web application that can be used to simply and rapidly train deep neural networks for image classification, segmentation, and object detection. DIGITS provides a graphical interface to deep learning frameworks without the need to deal with them directly on the command line. DIGITS simplifies deep learning procedures such as constructing datasets, setting up and training deep neural networks, and observing training performance in real time with visualization. This paper presents a brief overview of how to use DIGITS to construct a dataset of images for classification, train a network model, and classify images, using chest X-ray images as an example.

    Download PDF (2283K)
Invited Review Article
  • Akiyoshi HIZUKURI, Ryohei NAKAYAMA
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 53-58
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Due to deep learning techniques, automated pathological diagnosis is becoming a real possibility. This article gives a brief overview of research trends in pathological image analysis. We explain image analysis techniques that follow the traditional procedure based on hand-crafted features and classifiers, and then introduce analysis techniques based on deep learning. Finally, we describe fundamental technologies that will be important for automated pathological diagnosis in the future.

    Download PDF (1608K)
Original Article
  • Ami OSHIMA, Naoki KAMIYA, Norimitsu SHINOHARA
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 59-63
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Breast cancer is the most common cancer among Japanese women. Mammography is used for population-based screening of breast cancer, and mammary gland density is used for risk management. Four categories are defined for mammary gland density, and doctors and technicians classify it qualitatively by visual assessment; an objective estimation of mammary gland density is therefore required. In this study, we propose an automatic classification method for mammary gland density in mammograms using a deep convolutional neural network (DCNN). AlexNet is used as the DCNN, and five input image sets are prepared: the original images only, the edge images only, and combinations of the original and edge images, with the edge-image kernel size set to 3 or 5. Finally, the mammary gland density is output as one of the four categories as the predicted classification result. From population-based screening data, 1106 mediolateral oblique images of right and left breasts were used. As a result, the average concordance rate between the predicted classification and the doctors' evaluation reached 82.3% when only the original images were used.
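    The edge-image input sets described above can be produced with a small convolution kernel. The following is a minimal sketch; the center-surround (Laplacian-style) kernel and the channel-stacking scheme are illustrative assumptions, not the authors' exact preprocessing:

```python
import numpy as np

def edge_image(img: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Crude edge enhancement by convolving with a center-surround kernel.

    ksize=3 or 5 mirrors the two kernel sizes compared in the study,
    but the exact kernel used there is not specified here.
    """
    k = -np.ones((ksize, ksize))
    k[ksize // 2, ksize // 2] = ksize * ksize - 1  # center outweighs surround
    pad = ksize // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * k)
    return out

# A "combined" input set could stack original and edge images as channels:
mammo = np.random.rand(64, 64)  # stand-in for a mammogram patch
combined = np.stack([mammo, edge_image(mammo, 3)], axis=0)  # shape (2, 64, 64)
```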

    Download PDF (1934K)
  • Eri MATSUYAMA, Yongbum LEE, Noriyuki TAKAHASHI, Du-Yih TSAI
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 64-71
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    In recent years, convolutional neural networks (CNNs) have been exploited in the medical imaging research field and have successfully shown their ability in image classification and detection. In this paper we used a CNN combined with a wavelet transform approach for histologically classifying a dataset of 548 lung CT images into 5 categories: lung adenocarcinoma, lung squamous cell carcinoma, metastatic lung cancer, potential lung cancer, and normal. The main difference between commonly used CNNs and the presented method is that we use redundant wavelet coefficients at level 1 as inputs to the CNN instead of the original images. One of the major advantages of the proposed method is that there is no need to extract regions of interest from the images in advance; the wavelet coefficients of the entire image are used as inputs to the CNN. We compare the classification performance of the proposed method to that of an existing CNN classifier and a CNN-based support vector machine classifier. The experimental results show that the proposed method achieves the highest overall accuracy, 91.7%, and demonstrate its potential for use in the classification of lung diseases in CT images.
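    A redundant (undecimated) level-1 wavelet decomposition keeps the coefficient maps at full image size, so they can be stacked as CNN input channels. A minimal numpy sketch with a Haar filter follows; the filter choice and the 4-channel stacking are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def haar_swt_level1(img: np.ndarray):
    """Undecimated level-1 Haar transform: returns LL, LH, HL, HH maps,
    each the same size as the input (a redundant representation)."""
    right = np.roll(img, -1, axis=1)  # neighbor one pixel to the right
    down = np.roll(img, -1, axis=0)   # neighbor one pixel down
    diag = np.roll(right, -1, axis=0)
    ll = (img + right + down + diag) / 4.0  # approximation
    lh = (img + right - down - diag) / 4.0  # horizontal detail
    hl = (img - right + down - diag) / 4.0  # vertical detail
    hh = (img - right - down + diag) / 4.0  # diagonal detail
    return ll, lh, hl, hh

ct_slice = np.random.rand(128, 128)            # stand-in for a lung CT image
channels = np.stack(haar_swt_level1(ct_slice))  # shape (4, 128, 128), CNN input
```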

    Download PDF (3883K)
  • Kazuhiro HATANO, Seiichi MURAKAMI, Tomoki UEMURA, Huimin LU, Hyoungs ...
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 72-76
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Osteoporosis is known as one of the main diseases of bone. Although diagnostic imaging for osteoporosis is effective, there are concerns about the increased burden on radiologists associated with diagnostic imaging, uneven diagnostic results due to differences in experience, and undetected lesions. Therefore, in this study, we propose a diagnosis support method that classifies osteoporosis from phalangeal computed radiography (CR) images and presents the classification results to physicians. In the proposed method, we construct classifiers using convolutional neural networks and classify cases as normal or abnormal with respect to osteoporosis. In our experiments, two kinds of CNN models were constructed using input images generated from 101 cases of CR images and evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. Finally, an AUC of 0.995 was obtained.
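    The AUC used for the evaluation above can be computed without any special library via the rank statistic. This Mann-Whitney formulation is a standard identity, not code from the paper:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen abnormal case scores higher than a normal one,
    counting ties as 1/2."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0; chance-level gives ~0.5.
print(roc_auc([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # prints 1.0
```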

    Download PDF (2305K)
  • Yuriko YOSHINO, Huimin LU, Hyoungseop KIM, Seiichi MURAKAMI, Takato ...
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 77-82
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    A temporal subtraction image is obtained by subtracting a previous image, warped so that its structures match those of a current image, from the current image. The temporal subtraction technique removes normal structures and enhances interval changes, such as new lesions and changes in existing abnormalities, in a medical image. However, many artifacts remain on a temporal subtraction image, and these can be detected as false positives. In this paper, we propose a 3D-CNN applied after initial nodule candidates are detected using the temporal subtraction technique. We compared 7 model architectures (3D ShallowNet, 3D-AlexNet, 3D-VGG11, 3D-VGG13, 3D-ResNet8, 3D-ResNet20, and 3D-ResNet32) on 28 thoracic MDCT cases including 28 small-sized lung nodules. The highest performance was shown by 3D-AlexNet.
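    The basic temporal subtraction step (align the previous image to the current one, then subtract) can be sketched as follows. A brute-force integer-shift search stands in for the nonrigid warping actually used in such pipelines; the toy "anatomy" and "lesion" arrays are, of course, fabricated for illustration:

```python
import numpy as np

def temporal_subtraction(current, previous, max_shift=3):
    """Align `previous` to `current` with the best integer (dy, dx) shift
    (a crude stand-in for nonrigid warping), then subtract."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(previous, dy, axis=0), dx, axis=1)
            err = np.mean((current - shifted) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    aligned = np.roll(np.roll(previous, best[0], axis=0), best[1], axis=1)
    return current - aligned  # interval changes remain; static anatomy cancels

# A new "lesion" appears in the current image; normal structure cancels out.
prev = np.zeros((32, 32)); prev[8:12, 8:12] = 1.0           # anatomy
curr = np.roll(prev, (1, 1), axis=(0, 1)); curr[20, 20] = 2.0  # shifted + lesion
diff = temporal_subtraction(curr, prev)
```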

    Download PDF (1362K)
  • Tomona YAMADA, Yongbum LEE, Akira HASEGAWA
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 83-87
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    The purpose of this paper is to develop a computerized method for classifying the laterality (right or left) and direction of the arm in forearm X-ray images using a deep convolutional neural network (DCNN). 648 radiographs were obtained using X-ray lower arm phantoms. These images were downsized to 213×256 pixels and used as training and test images for the DCNN. AlexNet and GoogLeNet were used as the DCNNs. All radiographs were classified into eight categories by the DCNN. Classification accuracies were obtained by nine-fold cross-validation tests. The accuracies using AlexNet and GoogLeNet were 79.3% and 92.6%, respectively. GoogLeNet would be useful for classifying forearm radiographs automatically, and the proposed method may contribute to quality assurance for medical images.

    Download PDF (1165K)
  • Miho Koshidaka, Kazuma Enomoto, Atsushi Teramoto, Hiroshi Fujita
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 88-92
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Breast density is important diagnostic information because it is related to the detection sensitivity of breast cancer and to cancer risk. However, since observers classify density subjectively, judgments vary with individual and experience differences. In this study, we developed a novel method for the automated classification of mammograms using a deep convolutional neural network (DCNN). Ninety-three mammograms from cancer screening programs were included in this study. A two-dimensional image was provided to the input layer of the DCNN, and four output units corresponding to the four levels of breast density were obtained via two convolution, pooling, and fully connected layers. High-pass filtered images were given to the input layer in order to emphasize the skin line and the mammary glands in the mammogram, and trimming of the mammogram background was conducted for data augmentation. The evaluation of the 93 mammograms gave a correct classification rate of 86%. Moreover, when the preprocessing (high-pass filtering and trimming) was applied to the input image, the classification ability was improved compared with the case where the mammogram was input directly. These results indicate that DCNNs will be useful for breast density evaluation and risk assessment using mammograms.
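    One simple way to realize a high-pass filter of the kind mentioned above is to subtract a blurred copy from the image (unsharp-mask style). The box blur below is an illustrative stand-in; the paper does not specify which filter was used:

```python
import numpy as np

def high_pass(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """High-pass filter as image minus a local mean (box blur).
    Emphasizes edges such as the skin line and mammary gland structure."""
    size = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = padded[i:i + size, j:j + size].mean()
    return img - blurred

mammo = np.random.rand(32, 32)  # stand-in for a mammogram
hp = high_pass(mammo)           # smooth background suppressed, edges kept
```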

    Download PDF (1902K)
  • Takuya YOSHIOKA, Yoshikazu UCHIYAMA
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 93-97
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    The treatment plan of a lung cancer patient is determined based on the TNM classification. However, this treatment plan is not necessarily based on prognosis. The ability to predict a patient's prognosis from an imaging examination would yield new information for formulating a treatment plan. The purpose of this study is to develop a method for prognostic prediction among lung cancer patients. The public database NSCLC-Radiomics was used in this study. Sixty-seven patients classified as stage I were selected, and their pretreatment computed tomography (CT) images and survival times were obtained. First, we selected the slice containing the largest tumor area and manually segmented the tumor region. We subsequently determined 294 radiomic features, such as tumor size, shape, CT values, and texture. Four radiomic features were selected using the least absolute shrinkage and selection operator (Lasso). A Cox regression model and a random survival forest (RSF) with the selected 4 radiomic features were employed to estimate the survivor functions of the 67 patients. Time-dependent receiver operating characteristic (ROC) analysis was used to evaluate the estimation accuracy. The average area under the curve (AUC) values of the Cox regression model and the RSF were 0.741 and 0.826, respectively; the RSF thus had the higher accuracy in prognostic prediction. Our proposed method for the prognostic prediction of lung cancer patients can provide useful information for formulating patients' treatment plans.
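    The Lasso feature-selection step above can be sketched with plain coordinate descent; features whose coefficients are driven to zero are discarded. The synthetic 67×294 feature matrix, the regularization strength, and the informative-feature indices are all illustrative assumptions, and the downstream survival models (Cox, RSF) are omitted:

```python
import numpy as np

def lasso_select(X, y, alpha=0.1, n_iter=100):
    """Select features via Lasso, solved by coordinate descent with
    soft-thresholding; returns indices of nonzero coefficients."""
    n, p = X.shape
    Xs = (X - X.mean(0)) / X.std(0)  # standardize features
    yc = y - y.mean()
    w = np.zeros(p)
    resid = yc.copy()
    for _ in range(n_iter):
        for j in range(p):
            resid += Xs[:, j] * w[j]           # remove feature j's contribution
            rho = Xs[:, j] @ resid / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)  # soft threshold
            resid -= Xs[:, j] * w[j]           # add back updated contribution
    return np.flatnonzero(np.abs(w) > 1e-8)

# 294 radiomic-style features, of which only a few truly drive the outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(67, 294))
y = 2.0 * X[:, 0] - 1.5 * X[:, 5] + 0.1 * rng.normal(size=67)
selected = lasso_select(X, y, alpha=0.3)  # should recover features 0 and 5
```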

    Download PDF (2247K)
Brief Article
  • Akira HASEGAWA, Eika NOGUCHI, Yongbum LEE
    Article type: research-article
    2019 Volume 36, Issue 2, Pages 98-101
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Unsharpness is likely to occur at high heart rates in angiography. In this study, U-Net was used to remove unsharpness for the purpose of improving the image quality of x-ray movies in cardiovascular imaging. Dynamic x-ray images including unsharpness were taken with a metronome moving at 100 and 200 beats/minute (bpm). The standard deviation (SD) and the modulation transfer function (MTF) were measured and used to evaluate the effect of artifact removal. As a result, the mean SDs of the original images and the images processed by U-Net were 4.34 and 0.54, respectively. Similarly, the mean cut-off frequencies of the MTF of the original and processed images were 0.52 mm−1 and 4.6 mm−1, respectively. Since the SD was greatly reduced and the MTF was greatly improved, U-Net would improve the image quality of cardiovascular dynamic x-ray images.
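    An MTF of the kind cited above can be estimated from an edge profile: differentiate the edge spread function (ESF) to obtain the line spread function (LSF), then take the magnitude of its Fourier transform. In this minimal sketch the synthetic ramp edge, the 0.1 mm sampling pitch, and the 10% cut-off criterion are all assumptions, not the paper's measurement protocol:

```python
import numpy as np

def mtf_from_edge(esf: np.ndarray, pitch_mm: float):
    """MTF from an edge spread function: LSF = d(ESF)/dx, MTF = |FFT(LSF)|
    normalized to 1 at zero frequency. Returns (frequencies in 1/mm, MTF)."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pitch_mm)
    return freqs, mtf

# Synthetic blurred edge: a linear ramp 4 samples wide (0.4 mm at 0.1 mm pitch).
esf = np.clip((np.arange(128) - 64) / 4.0 + 0.5, 0.0, 1.0)
freqs, mtf = mtf_from_edge(esf, pitch_mm=0.1)
cutoff = freqs[np.argmax(mtf < 0.1)]  # first frequency where MTF drops below 0.1
```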

    Download PDF (1043K)
Review Article
  • [in Japanese]
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 102-104
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS
    Download PDF (6685K)
  • Yoshitaka BITO, Takashi SHIRAHATA, Yoshihiro IWATA, Koji YAMAGUCHI, ...
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 105-108
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS

    Artificial intelligence (AI) is expected to drastically improve the quality and efficiency of healthcare, especially of image diagnosis. Image diagnosis usually comprises a workflow including patient acceptance, scanning, image interpretation, and reporting. The workflow is supported by diagnostic imaging modalities and information systems, and thus it has many tasks that can be improved by AI: patient risk check, protocol optimization, scan preparation, fast scanning, image quality improvement, image quantification, image interpretation, and reporting. Hitachi is working to improve the whole workflow according to two concepts. One is a product development concept: "plus digital," to make imaging modalities intelligent, and "pure digital," to make information systems intelligent. The other is a technology development concept: "hybrid learning," to build AI by combining existing knowledge and machine learning. Several examples of our development are shown: automatic positioning of imaging planes on MRI, fast scanning using sparse sampling and deep learning reconstruction on MRI, simultaneous multi-parameter mapping on MRI, computer-aided detection of lung cancer on CT, and a transverse AI application to early MRI diagnosis of dementia. Through these developments, AI shows promising performance in improving diagnostic imaging.

    Download PDF (2812K)
  • [in Japanese]
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 109-111
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS
    Download PDF (2118K)
  • [in Japanese], [in Japanese], [in Japanese]
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 112-113
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS
    Download PDF (1947K)
  • [in Japanese], [in Japanese], [in Japanese], [in Japanese]
    Article type: review-article
    2019 Volume 36, Issue 2, Pages 114-116
    Published: June 30, 2019
    Released on J-STAGE: June 28, 2019
    JOURNAL FREE ACCESS
    Download PDF (1532K)