Medical Imaging Technology
Online ISSN : 2185-3193
Print ISSN : 0288-450X
ISSN-L : 0288-450X
Volume 35, Issue 4
Displaying 1-11 of 11 articles from this issue
Main Topics / Deep Learning Applications, Research and Development in Medical Imaging
  • Kenji SUZUKI
    2017 Volume 35 Issue 4 Pages 177-179
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    Download PDF (745K)
  • Hayaru SHOUNO
    2017 Volume 35 Issue 4 Pages 180-186
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    In this paper, we explain the basic architecture and training of the deep convolutional neural network (DCNN), a well-known class of deep learning (DL) system, and show an application to medical image classification. The DCNN combines the neural network architecture called “Neocognitron” with the learning method called error back-propagation (BP). One important factor in DCNN performance is the balance between the number of free parameters in the network and the size of the training dataset. In several fields, such as medical imaging, labeled data are hard to acquire, and a small dataset can cause overtraining. To prevent overtraining, we introduce a transfer-learning method into the DCNN, which improves classification performance.
    Download PDF (1690K)
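The transfer-learning idea described above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' actual network: a fixed random projection stands in for convolutional layers pretrained on a large dataset, the data are synthetic, and only the final classifier layer is trained on the small labeled set.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a fixed random projection standing in for
# convolutional layers trained on a large source dataset (assumption: a real
# system would reuse e.g. ImageNet-trained DCNN weights and freeze them).
W_pre = rng.normal(size=(64, 16)) * 0.1

def extract_features(x):
    """Frozen layer: only the classifier below is trained on the small dataset."""
    return np.tanh(x @ W_pre)

# Small labeled dataset (synthetic stand-in for scarce medical images).
X = rng.normal(size=(100, 64))
y = (X[:, 0] > 0).astype(float)

# Train only the final logistic-regression layer (the transfer-learning step).
F = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * (F.T @ (p - y)) / len(y)   # gradient of the logistic loss
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the frozen layer drastically cuts the number of free parameters that must be fitted, the small dataset is less likely to cause the overtraining discussed in the abstract.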
  • Xiangrong ZHOU, Hiroshi FUJITA
    2017 Volume 35 Issue 4 Pages 187-193
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    This paper introduces research that applies deep learning approaches based on ConvNets to automatic multi-organ segmentation on CT images covering a wide range of the human body. In particular, we describe our recent work as an example of multi-organ segmentation on CT images using ConvNets. We discuss the strengths and weaknesses of the ConvNet, which is mainly used for 2D image processing, and its extension to 3D images, together with the latest research progress. Finally, we compare the deep learning approaches with the conventional approach, whose processing procedures are designed from human experience, and show the advantages and potential of ConvNets for automatic multi-organ segmentation on CT images covering a wide range of the human body.
    Download PDF (1713K)
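The core operation behind ConvNet-based segmentation is a per-pixel labeling produced by convolving score maps over a slice. A toy NumPy sketch (the hand-set kernels and the 2-class setup are illustrative assumptions; a real network learns many such kernels over many layers):

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution (no kernel flip, as in ConvNet layers)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def segment(ct_slice, kernels):
    """Per-pixel label map: argmax over one score map per organ class."""
    scores = np.stack([conv2d(ct_slice, k) for k in kernels])
    return np.argmax(scores, axis=0)

# Toy 2-class example: hypothetical kernels for "background" vs "organ".
slice_ = np.zeros((10, 10))
slice_[3:6, 3:6] = 1.0                         # bright square = "organ"
kernels = [np.full((3, 3), -1.0),              # background score
           np.full((3, 3), 1.0)]               # organ score
labels = segment(slice_, kernels)
print(labels.shape)  # (8, 8)
```

The 3D extension discussed in the paper amounts to replacing the 2D window above with a 3D one over the CT volume, at a correspondingly higher computational cost.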
  • Yasushi HIRANO, Takayoshi ITO, Noriaki HASHIMOTO, Shoji KIDO, Kenji SU ...
    2017 Volume 35 Issue 4 Pages 194-199
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    In this paper, a brief overview of massive-training artificial neural network (MTANN) deep learning and its applications is given. MTANN deep learning is a class of neural networks that directly learn and output images, whereas other deep learning models generally output classes. The input to the neural network is the pixel values in a local region (image patch) of an input image, and the output is a single pixel value. The entire output image is obtained by scanning the neural network with the local window (region) over the input image in a convolutional manner. In the training stage, a map of the likelihood of being a lesion is given as a teaching image for the MTANN. For example, to classify between lung nodules and non-nodules, a Gaussian distribution whose peak is located at the center of the nodule is given for a positive (i.e., nodule) sample, and an all-zero map for a negative (i.e., non-nodule) sample. The authors introduce applications of MTANN deep learning to false-positive reduction in computer-aided detection of non-polypoid (“flat”) lesions in CT colonography, differential diagnosis of lung nodules in chest CT, and classification of diffuse lung diseases in chest CT.
    Download PDF (1358K)
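The two MTANN ingredients described above, the Gaussian teaching image and the convolutional scan of a pixel-output regressor, can be sketched as follows. The patch-mean "regressor" is a hypothetical stand-in; the actual MTANN uses a trained regression neural network at that position.

```python
import numpy as np

def gaussian_teaching_map(shape, center, sigma):
    """Likelihood-of-lesion teaching image: a Gaussian peaked at the nodule
    center for a positive sample; an all-zero map is used for negatives."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def mtann_scan(img, regressor, half):
    """Scan a single-pixel-output regressor over local windows, one output
    pixel per window position, producing the entire output image."""
    H, W = img.shape
    out = np.zeros((H - 2 * half, W - 2 * half))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + 2 * half + 1, j:j + 2 * half + 1].ravel()
            out[i, j] = regressor(patch)
    return out

# Demo: teaching map for a positive sample, then a scan with the stand-in.
teach = gaussian_teaching_map((9, 9), center=(4, 4), sigma=2.0)
out = mtann_scan(teach, lambda p: p.mean(), half=1)
print(out.shape)  # (7, 7)
```

Note how the output image is smaller than the input by the window margin, exactly as with valid-mode convolution.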
  • Mitsutaka NEMOTO
    2017 Volume 35 Issue 4 Pages 200-205
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    Recently, there has been a variety of research on medical image analysis and computer-aided diagnosis based on deep learning. In this paper, we introduce our studies applying deep learning to the detection of cerebral aneurysms on brain magnetic resonance images (MRI). It is also known that manual optimization of the many hyper-parameters of deep learning is laborious and time-consuming, so we additionally introduce our studies on optimizing the hyper-parameters automatically.
    Download PDF (1973K)
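Automatic hyper-parameter optimization of the kind mentioned above can be as simple as random search over the search space. This sketch uses a synthetic stand-in for the validation error (the quadratic bowl, its optimum, and the parameter names are assumptions for illustration; the authors' actual objective is the detection performance of the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(lr, n_hidden):
    """Hypothetical stand-in for training a detector and measuring validation
    error; best around lr = 1e-3 and n_hidden = 64 by construction."""
    return (np.log10(lr) + 3) ** 2 + (n_hidden - 64) ** 2 / 1000.0

# Random search: sample a log-uniform learning rate and an integer width,
# keep the configuration with the lowest validation error.
best = None
for _ in range(200):
    lr = 10 ** rng.uniform(-5, -1)
    n_hidden = int(rng.integers(8, 257))
    err = validation_error(lr, n_hidden)
    if best is None or err < best[0]:
        best = (err, lr, n_hidden)

print(f"best error {best[0]:.3f} at lr={best[1]:.2e}, n_hidden={best[2]}")
```

More sample-efficient schemes (e.g. Bayesian optimization) follow the same loop but choose the next candidate from a model of past evaluations.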
  • Ken’ichi MOROOKA, Kaoru KOBAYASHI
    2017 Volume 35 Issue 4 Pages 206-211
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    Surgical support systems such as surgical simulation and preoperative surgical planning use 3D object models of human organs. One fundamental technique in such systems is estimating organ deformation in real time. The finite element method (FEM) is a well-known technique for accurately simulating the physical behavior of objects. However, FE analysis requires substantial computation to obtain more realistic simulations. To solve this problem, we have been constructing neural networks that estimate nonlinear organ deformation. Using training data generated by nonlinear FEM, the network learns the organ deformation that occurs when an external force acts on the organ surface. The computation in the network is a weighted sum of simple nonlinear functions, so our method achieves real-time FE analysis while maintaining accuracy. In this paper, we give an overview of our method and the experimental results obtained with it.
    Download PDF (1533K)
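The "weighted sum of simple nonlinear functions" above is just a forward pass through a small network. A minimal sketch, with assumed dimensions (a 3-D force in, displacements of 10 mesh nodes out) and random weights where the authors would use weights fitted to nonlinear-FEM training pairs:

```python
import numpy as np

def surrogate_displacement(force, W1, b1, W2, b2):
    """One-hidden-layer surrogate: the output is a weighted sum (W2) of
    simple nonlinear functions (tanh units) of the input force, so one
    forward pass is far cheaper than re-running a nonlinear FE solve."""
    return W2 @ np.tanh(W1 @ force + b1) + b2

# Hypothetical sizes: 3 force components in, 10 nodes x 3 displacements out.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 3)), np.zeros(32)
W2, b2 = rng.normal(size=(30, 32)) * 0.1, np.zeros(30)

u = surrogate_displacement(np.array([0.0, 0.5, -0.2]), W1, b1, W2, b2)
print(u.shape)  # (30,)
```

With zero biases, a zero force maps to zero displacement, matching the physical rest state; training on FEM data would then shape the response to nonzero forces.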
Survey Paper
  • Kenji SUZUKI
    2017 Volume 35 Issue 4 Pages 212-226
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    Recently, a machine learning (ML) area called deep learning emerged in the computer-vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer-vision competition, ImageNet Classification. Since then, researchers in many fields, including medical image analysis, have started actively participating in the explosively growing field of deep learning. In this paper, deep learning techniques and their applications to medical image analysis are surveyed. This survey overviewed 1) standard ML techniques in the computer-vision field, 2) what changed in ML before and after the introduction of deep learning, 3) ML models in deep learning, and 4) applications of deep learning to medical image analysis. The comparisons between ML before and after deep learning revealed that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is learning image data directly without object segmentation or feature extraction; this is the source of the power of deep learning, although the depth of the model is also an important attribute. The survey also revealed that there is a long history of deep-learning techniques in the class of ML with image input, apart from the new term “deep learning”. Even before the term existed, “deep learning”, namely the class of ML with image input, was applied to various problems in medical image analysis, including classification between lesions and non-lesions, classification between lesion types, segmentation of lesions or organs, and detection of lesions.
ML with image input, including deep learning, is a very powerful, versatile technology with higher performance, which can bring the current state-of-the-art performance of medical image analysis to the next level. “Deep learning”, or ML with image input, in medical image analysis is an explosively growing, promising field, and it is expected to be the mainstream technology in the field in the next few decades.
    Download PDF (1742K)
Papers
  • Yudai YAMAZAKI, Eiichi TAKAHASHI, Masaya IWATA, Hirokazu NOSATO, Ayumi ...
    2017 Volume 35 Issue 4 Pages 227-238
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among women worldwide, and screening is essential for its early detection. Recently, breast ultrasound imaging has become one of the most popular screening techniques. However, reading ultrasound images requires well-trained radiologists. To reduce this burden, computer-aided detection (CADe) systems have been developed to help radiologists in the detection task. However, many false detections occur in muscle and fat tissue. To reduce these false detections, we propose a tumor detection method using mammary gland segmentation, and we show its effectiveness for tumor detection in experiments.
    Download PDF (2940K)
  • Yoshitomi HARADA, Tatsuya NOMURA, Hidetoshi MIYAKE
    2017 Volume 35 Issue 4 Pages 239-249
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    Detection errors for lung cancer nodules comprise cognitive errors and judgment errors. First, to reduce cognitive errors, it is important to make candidate nodules clearly visible to readers. The ribs and pulmonary vessels are easily misinterpreted as candidate nodules, and for computer-aided automatic detection of pulmonary nodules it is necessary to handle the pulmonary hilar vessels, which are frequently extracted as false positives. We herein propose a new method to suppress the pulmonary vascular shadow near the hilum and to clarify pulmonary nodules in chest radiographs. A pulmonary vessel (or its en face view) is extracted as a continuous linear shadow and is rearranged to the neighboring brightness level using a two-dimensional histogram. Pulmonary nodules are relatively enhanced by equalizing the brightness level of the false-positive pulmonary vessels. We evaluated the images obtained by applying the proposed technique to 154 images with nodules in the JSRT database. A radiologist and a student evaluated the new images in terms of the visibility of the nodules. In this evaluation of 117 images, excluding “extremely subtle” and “obvious” cases, 76% of cases were clearly depicted. The brightness of the pulmonary vessels was controlled and the visibility of the pulmonary nodules was well improved. Further improvement is expected by displaying the original radiographs and the processed images simultaneously.
    Download PDF (2506K)
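The rearrangement idea above, pulling an extracted linear shadow toward the neighboring brightness level, can be illustrated loosely. This sketch substitutes a simple local-median replacement for the paper's two-dimensional-histogram rearrangement, and uses a toy image and a crude threshold mask; all of those are assumptions, not the authors' algorithm.

```python
import numpy as np

def suppress_linear_shadow(img, mask, half=2):
    """Replace pixels flagged as vessel shadow with the median of nearby
    non-vessel pixels, pulling them to the neighboring brightness level.
    (Loose stand-in for the paper's 2-D-histogram rearrangement.)"""
    out = img.copy()
    H, W = img.shape
    for i, j in zip(*np.nonzero(mask)):
        i0, i1 = max(0, i - half), min(H, i + half + 1)
        j0, j1 = max(0, j - half), min(W, j + half + 1)
        region = img[i0:i1, j0:j1][~mask[i0:i1, j0:j1]]
        if region.size:
            out[i, j] = np.median(region)
    return out

# Toy stand-in: a uniform lung field crossed by one bright vertical "vessel".
img = np.full((16, 16), 100.0)
img[:, 8] = 180.0                  # continuous linear vessel shadow
mask = img > 150.0                 # crude extraction of the shadow
flat = suppress_linear_shadow(img, mask)
print(float(flat[:, 8].max()))  # 100.0
```

After suppression, a nodule superimposed on the field would stand out relative to the now-flattened vessel shadow, which is the enhancement effect the paper evaluates.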
Tutorial
  • Chiyo YAMAUCHI-KAWAURA
    2017 Volume 35 Issue 4 Pages 250-254
    Published: 2017
    Released on J-STAGE: September 30, 2017
    JOURNAL FREE ACCESS
    X-ray computed tomography (CT) examinations are frequently used in diagnosing various diseases because they can provide high-definition images of arbitrary cross-sections of the body in a short time. However, CT examinations are known to deliver relatively higher radiation doses to patients than other radiological imaging examinations. Furthermore, according to recent epidemiological surveys of children, an increase in cancer rates has been observed in children who have undergone multiple CT examinations. This does not mean, however, that the carcinogenic risk of CT examinations cannot be reduced. If we correctly understand the trade-off between the benefit to patients that image quality brings and the disadvantage of radiation dose, the carcinogenic risk to patients from unnecessary radiation exposure could be reduced. To realize this, it is first necessary to determine more precisely the doses received by patients undergoing CT examinations. This paper introduces the dose levels of patients undergoing CT examinations and some efforts toward optimizing radiation protection for patients in Japan.
    Download PDF (791K)
Editors’ Note