Because a huge amount of new data is generated every day, the data we handle are "big data," which cannot be processed manually. Machine learning (ML), which can handle such big data automatically, has become a rapidly growing, indispensable area of research in the fields of medical imaging and computer vision. Recently, the term deep learning emerged and became very popular in the computer vision field. It started from an event in 2012, when a deep learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, the ImageNet classification challenge. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the field of machine learning in medical imaging before and after the introduction of deep learning is reviewed to clarify 1) what deep learning is exactly, 2) what changed with the introduction of deep learning, and 3) what the source of the power of deep learning is. This review reveals that object/feature-based ML was dominant before the introduction of deep learning, and that the major, essential difference between ML before and after deep learning is learning from image data directly, without object segmentation or feature extraction; this is the source of the power of deep learning. The class of image/pixel-based ML, including deep learning, has a long history, but it gained popularity only recently owing to the new terminology of deep learning. Image/pixel-based ML is a versatile technology with substantially high performance. ML, including deep learning, in medical imaging is an explosively growing, promising field. It is expected that image/pixel-based ML, including deep learning, will be the mainstream technology in the field of medical imaging in the next few decades.
This paper introduces bioimage informatics, a relatively new interdisciplinary research field formed through collaboration between biology and image informatics. Modern biology often needs to support its findings through appropriate quantitative and objective analysis of bioimages, such as microscopic images or videos. Standard image analysis techniques are not sufficient for bioimage analysis, because bioimages present many difficulties: for example, lower spatiotemporal resolution, more noise, less appearance information, and more ambiguous object boundaries. We therefore need to develop new image analysis techniques robust enough to deal with these difficulties. An important approach to realizing robust analysis is to introduce optimization methods and machine learning methods into image analysis techniques. This paper explains the basic ideas of these methods and their usefulness for bioimage informatics.
In this paper, I describe the latest use cases of deep learning in several fields, as well as use cases and recent studies of deep learning in the medical imaging field. There are three reasons why the adoption of deep learning has accelerated over the past couple of years. One reason is the GPU (graphics processing unit). I explain the computational performance generally required for deep learning and how to utilize GPU power for better performance. Moreover, I also describe the NVIDIA deep learning platform, including the latest GPU hardware, the Deep Learning SDK (software development kit), and the DIGITS (Deep Learning GPU Training System) software.
Deep neural networks (DNNs) play important roles in medical image processing: given a sufficiently large set of training data, one can construct a high-performance pattern recognition/regression machine by using a DNN. In this article, the author briefly describes research trends of DNNs in the field of medical image processing.
At the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, Hinton et al. achieved a far higher recognition rate than their competitors and won the competition. Since then, deep learning has become a focus of attention. In the field of image recognition, the convolutional neural network (CNN), one of the deep learning methods, is most frequently used. Prior to the deep learning era, there had already been two AI booms, and deep learning triggered a third. Owing to deep learning, the accuracy of computer image recognition and speech recognition has improved dramatically. Regarding images, at the ILSVRC mentioned above, AI surpassed human image recognition ability in 2015. The current innovation in AI is essentially an improvement in the image recognition ability of computers. However, if radiologists increase their reading throughput by using AI in the future, the clinical importance of radiologists will become greater than it is now. Currently, the applications of AI to diagnostic imaging that companies are developing are not limited to applying deep learning to image recognition; they also involve constructing systems that combine various existing AI technologies.
In this paper, we explain a typical deep learning system, the deep convolutional neural network (DCNN), which has become the de facto standard in the field of computer vision. Moreover, we also explain an application of a DCNN to medical image analysis for computer-aided diagnosis. Balancing the sample size against the number of weights is an important factor in developing a deep learning system; however, acquiring data is a hard task in the field of medical imaging. Thus, we introduce a kind of transfer learning method into the DCNN. As a result, we confirm an improvement in classification performance for diffuse lung disease (DLD) patterns.
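The core idea of this kind of transfer learning, keeping the pre-trained convolutional layers fixed and training only a new final classifier on the small medical dataset, can be sketched as follows. This is a minimal illustration with synthetic data and a random projection standing in for the frozen convolutional feature extractor; it is not the authors' actual DCNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen convolutional part of a pre-trained DCNN:
# a fixed projection whose weights are NOT updated during fine-tuning.
proj = rng.standard_normal((64, 16)) / 8.0

def frozen_features(x):
    return np.tanh(x @ proj)

# Tiny synthetic dataset of flattened 8x8 "patches" with binary labels
# (purely illustrative, not real DLD data).
x = rng.standard_normal((200, 64))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

feats = frozen_features(x)

# Fine-tune only the final classification layer (logistic regression).
w, b = np.zeros(feats.shape[1]), 0.0

def log_loss():
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    return float(np.mean(-y * np.log(p + 1e-12) - (1 - y) * np.log(1 - p + 1e-12)))

loss_before = log_loss()
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.1 * feats.T @ (p - y) / len(y)   # gradient step on the new layer only
    b -= 0.1 * float(np.mean(p - y))
loss_after = log_loss()
print(loss_before, loss_after)
```

Because only the small final layer is trained, far fewer labeled medical images are needed than for training the whole network from scratch, which is exactly the sample-size/weight-count balance discussed above.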
Our research group has developed an automated method of lung nodule detection in PET/CT images by means of deep learning techniques. This review article describes the outline of our study as one application of deep learning to medical image processing. In the proposed method, initial nodule candidates are detected by nodule enhancement and thresholding techniques. For false positive reduction, both conventional shape/metabolic features and a deep convolutional neural network are employed. In the performance evaluation, the proposed method showed better false positive reduction performance than the conventional method.
Forensic identification using dental records is one of the most efficient methods in large-scale disasters. In order to facilitate the record filing process and to alleviate the mental burden on dentists, who are generally not used to observing corpses, we are investigating an automated dental record filing method. This paper introduces our recent study on automated classification of tooth types using a deep convolutional neural network.
It is important to build a teaching file system for medical staff. However, in order to effectively utilize the accumulated cases, various support functions are necessary. Therefore, we developed augmentation tools that utilize case data using artificial intelligence technology. The artificial intelligence technology we used is case-based reasoning (CBR), a kind of expert system. CBR consists of machine learning from cases and reasoning over similar cases. We developed a similar-case retrieval tool and a chest X-ray image education support tool using CBR. In this article, we describe the implementation method, effects, and issues.
Our research group has been working on using deep learning (DL) to address a critical issue, automatic image segmentation, which is a fundamental part of computer-based medical image analysis. This review article describes the outline of our recent study as one application of DL to multiple-organ segmentation on CT images. We treat image segmentation as a multi-class, pixel-wise classification problem and employ a fully convolutional network to solve this difficult classification task with a fully data-driven approach. Compared with previous works, our method uses an end-to-end DL approach that learns image features together with a classifier. In segmentations of 19 types of organs on 240 cases of 3D CT scans, our method demonstrated performance comparable to other state-of-the-art works, with much better efficiency, generality, and flexibility.
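Multi-class, pixel-wise classification means that the network outputs one score map per organ class and each pixel receives the label with the highest score; segmentation quality per organ is then typically measured with an overlap score such as the Dice coefficient. A minimal sketch of these two steps on toy data (not the authors' network or evaluation code):

```python
import numpy as np

def label_map(score_maps):
    """Per-pixel class labels from a (C, H, W) stack of class scores."""
    return np.argmax(score_maps, axis=0)

def dice(pred, truth, cls):
    """Dice coefficient for one class between two label maps."""
    p, t = (pred == cls), (truth == cls)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy example: 3 classes (0 = background) on a 4x4 "image".
scores = np.zeros((3, 4, 4))
scores[0] = 0.2                 # weak background score everywhere
scores[1, :2, :2] = 1.0         # "organ 1" in the top-left corner
scores[2, 2:, 2:] = 1.0         # "organ 2" in the bottom-right corner

seg = label_map(scores)
truth = np.zeros((4, 4), dtype=int)
truth[:2, :2] = 1
truth[2:, 2:] = 2

print(dice(seg, truth, 1), dice(seg, truth, 2))  # → 1.0 1.0
```

In the real setting the score maps come from the fully convolutional network and the label maps are 3D volumes, but the per-pixel argmax and per-class overlap evaluation work the same way.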
This paper presents a brief overview of applications of machine learning in medical image analysis. We explain machine learning starting from Bayes' theorem and traditional statistical pattern recognition. Random forests and feed-forward neural networks are then explained, and convolutional neural networks are also covered. Examples of medical image analysis applications using machine learning techniques are presented in this short manuscript.
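The role of Bayes' theorem in statistical pattern recognition can be illustrated with a screening-test posterior; the prevalence and test characteristics below are illustrative numbers, not data from any study:

```python
# Posterior probability of disease given a positive test, via Bayes' theorem:
# P(D|+) = P(+|D) P(D) / [P(+|D) P(D) + P(+|not D) P(not D)]
prior = 0.01           # P(D): assumed disease prevalence
sensitivity = 0.90     # P(+|D): assumed true positive rate
false_pos_rate = 0.05  # P(+|not D): assumed false positive rate

evidence = sensitivity * prior + false_pos_rate * (1 - prior)
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # → 0.154
```

Even a fairly accurate test yields a modest posterior when the prior is low, which is why classifiers in medical imaging must account for class priors and not only per-class accuracy.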
Lung cancer is one of the most important cancers in the world. Among its CT findings, ground-glass opacity (GGO) appears as a hazy area of increased attenuation in the lung image. In recent years, the development of computer-aided diagnosis (CAD) systems for reducing the workload and improving the detection rate of lesions has advanced. In this paper, we propose a CAD system to extract GGO from CT images. First, we extract the lung region from the input CT images and remove the vessel and bronchial regions based on a 3D line filter algorithm. After that, we extract initial GGO regions using intensity and gradient information. Next, we calculate statistical features on the segmented regions. Then we classify GGO regions using a support vector machine (SVM). Finally, we detect the final GGO regions using a deep convolutional neural network (DCNN). The proposed method was tested on 31 cases of CT images from the Lung Image Database Consortium (LIDC). The results demonstrate that the proposed method achieves a true positive rate of 86.05% with 39.03 false positives per case.
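The "statistical features on the segmented regions" step in pipelines like this typically means first-order statistics of the voxel intensities inside each candidate mask, which are then fed to the classifier. A minimal sketch with an illustrative feature set (the paper does not list its exact features):

```python
import numpy as np

def region_stats(image, mask):
    """Simple first-order statistical features of a candidate region,
    of the kind often fed to an SVM in CAD pipelines (illustrative set)."""
    vals = image[mask]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "min": float(vals.min()),
        "max": float(vals.max()),
        "area": int(mask.sum()),
    }

# Toy 2D "CT slice" with one brighter candidate region.
img = np.zeros((8, 8))
img[2:5, 2:5] = 100.0
mask = img > 50

feats = region_stats(img, mask)
print(feats["area"], feats["mean"])  # → 9 100.0
```

Each candidate region becomes one fixed-length feature vector, so any standard classifier such as an SVM can score it.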
Detection of unruptured intracranial aneurysms is important because their rupture is a main cause of subarachnoid hemorrhage. The purpose of this study is to develop a computer-aided diagnosis scheme for the detection of unruptured aneurysms in order to assist radiologists' image interpretation. The vessel regions were first segmented by using a region-growing technique to limit the search areas for unruptured aneurysms. To determine the initial candidate regions of aneurysms, ring-type gradient concentration filters were applied to the segmented regions. Fourteen three-dimensional shape and texture features were obtained from the candidate regions. Rule-based schemes and a random forest with these features were employed for distinguishing unruptured aneurysms from false positives (FPs). Our proposed method was evaluated by using 25 cases. The sensitivity for the detection of unruptured aneurysms was 88.0% with 1.76 FPs per patient. Therefore, our proposed method would be useful for the detection of unruptured aneurysms in MRA images.
Computed tomographic colonography (CTC), also known as virtual colonoscopy, provides a minimally invasive screening method for early detection of colorectal lesions. It can be used to solve the problems of accuracy, capacity, cost, and safety that have been associated with conventional colorectal screening methods. Computer-aided detection (CADe) has been shown to increase radiologists' sensitivity and to reduce inter-observer variance in detecting colonic polyps in CTC. However, although CADe systems can prompt locations of abnormalities at a higher sensitivity than that of radiologists, they also prompt relatively large numbers of false positives (FPs). In this study, we developed and evaluated the effect of a transfer-learning deep convolutional neural network (TL-DCNN) on the classification of polyp candidates detected by a CADe system from dual-energy CTC images. A deep convolutional neural network (DCNN) that had been pre-trained with millions of natural non-medical images was fine-tuned to identify polyps by use of pseudo-colored images that were generated by assigning axial, coronal, and sagittal images of the polyp candidates to the red, green, and blue channels of the images, respectively. The classification performances of the TL-DCNN and the corresponding non-transfer-learning DCNN were evaluated by use of 5-fold cross validation on 20 clinical CTC cases. The TL-DCNN yielded true- and false-positive rates of 73.6% and 1.79%, respectively, which were significantly better than those of the non-transfer-learning DCNN. This preliminary result demonstrates the effectiveness of the TL-DCNN in the classification of polyp candidates from CTC images.
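The pseudo-coloring step described above, mapping the axial, coronal, and sagittal views of a candidate to the R, G, and B channels so a network pre-trained on natural color images can accept volumetric input, can be sketched as follows. The normalization choice here is an assumption for illustration; the paper does not specify its preprocessing.

```python
import numpy as np

def pseudo_color(axial, coronal, sagittal):
    """Assemble three orthogonal grayscale slices of a candidate into one
    RGB image (axial -> red, coronal -> green, sagittal -> blue)."""
    rgb = np.stack([axial, coronal, sagittal], axis=-1)
    # Rescale jointly to [0, 255], as a natural-image network expects
    # (illustrative normalization, not necessarily the authors' choice).
    rgb = rgb - rgb.min()
    if rgb.max() > 0:
        rgb = rgb / rgb.max() * 255.0
    return rgb.astype(np.uint8)

# Toy 32x32 slices standing in for resampled views of a polyp candidate.
ax = np.full((32, 32), 100.0)
co = np.full((32, 32), 150.0)
sa = np.full((32, 32), 200.0)

img = pseudo_color(ax, co, sa)
print(img.shape)  # → (32, 32, 3)
```

Packing three orthogonal views into one three-channel image lets the pre-trained 2D network see a summary of the 3D context without any architectural change.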
The sternocleidomastoid muscle is the largest skeletal muscle in the neck region and has medical significance for evaluating the influence of amyotrophic lateral sclerosis (ALS). Since morphological change of the muscle is often associated with ALS, precise measurement of the volume and density of the muscle is important for early and quantitative diagnosis. The purpose of this study was to evaluate the initial results of automatic segmentation of the sternocleidomastoid muscle in whole-body and torso CT images. We constructed a probabilistic atlas for the sternocleidomastoid muscle without any abnormalities. The procedure to construct the atlas was based on a technique developed for internal organs. The muscle shapes for the atlas were created by manual procedures and were used as the gold standard for the evaluation of segmentation results. The probabilistic atlas was aligned with each individual muscle on the basis of the anatomical location of the bones and the edge of the muscle. We used 10 cases of whole-body CT images with abnormalities in the skeletal muscles and 20 cases of torso CT images with no abnormalities in the skeletal muscles. As a result, the average concordance rates for the sternocleidomastoid muscle were 60.3% and 65.4%, respectively. We successfully segmented the major area of the sternocleidomastoid muscle, because the atlas, deformed using the anatomical location of the bones and the edge of the muscle, fits the shape of the individual muscle.
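Once a probabilistic atlas has been registered to a patient, a common final step is to keep the voxels where the atlas probability is high and the CT value lies in the expected range for the structure. The sketch below illustrates only this decision rule on toy data; the thresholds and images are assumptions, not the authors' parameters.

```python
import numpy as np

def atlas_segment(prob_atlas, image, intensity_range, prob_thresh=0.5):
    """Segment a structure by combining a (registered) probabilistic atlas
    with an intensity window -- a simplified stand-in for atlas-guided
    segmentation."""
    lo, hi = intensity_range
    return (prob_atlas >= prob_thresh) & (image >= lo) & (image <= hi)

# Toy 6x6 example: the atlas says the structure occupies the center block.
atlas = np.zeros((6, 6))
atlas[2:4, 2:4] = 0.9
img = np.full((6, 6), 60.0)   # muscle-like CT values (illustrative HU)
img[0, 0] = -500.0            # air voxel, well outside the structure

mask = atlas_segment(atlas, img, intensity_range=(0.0, 100.0))
print(mask.sum())  # → 4
```

The atlas term encodes where the muscle is likely to be after alignment to the bones, while the intensity window rejects voxels that cannot be muscle tissue.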
The purpose of this study is to investigate the effectiveness of a method for automatic classification of infant hip types on ultrasonography. A convolutional neural network (CNN) was adopted for the automated classification of hip types according to the Graf method, the de facto standard for ultrasonographic assessment of infant hip dysplasia. AlexNet was employed as the neural network model for the CNN. We collected 49 ultrasound images that had been classified based on the Graf method by an ultrasonographer. Data augmentation by rotating, mirroring, adjusting contrast, etc., generated an additional 246,960 images from the original 49. The augmented images were used as training data for the CNN. The accuracy by 10-fold cross validation was 73%. The CNN would be potentially effective for automatic classification of infant hip types.
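Data augmentation of the kind described, multiplying a tiny dataset by systematically combining rotations, mirroring, and contrast changes, can be sketched as follows. The particular operations and parameters here are illustrative; the study's full recipe (which yields 5,040 variants per image) is not reproduced.

```python
import numpy as np

def augment(image):
    """Generate augmented variants of one image by combining rotations,
    mirroring, and contrast scaling (an illustrative subset of the
    operations used to enlarge small medical datasets)."""
    variants = []
    for k in range(4):                            # 0/90/180/270 degree rotations
        rot = np.rot90(image, k)
        for flipped in (rot, np.fliplr(rot)):     # with and without mirroring
            for contrast in (0.8, 1.0, 1.2):      # mild contrast adjustments
                variants.append(flipped * contrast)
    return variants

img = np.arange(16, dtype=float).reshape(4, 4)
out = augment(img)
print(len(out))  # → 24 variants per original image (4 * 2 * 3)
```

Because the operation counts multiply, even a modest set of transforms turns 49 originals into hundreds of thousands of training images, at the cost of the variants being highly correlated with each other.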