T1-weighted magnetic resonance imaging (MRI) is used in the diagnosis of neuropsychiatric disorders accompanied by brain degeneration because of its high soft-tissue contrast. Recently, thanks to image analysis techniques developed by engineering and informatics researchers, it has become possible to quantify brain tissue volume in anatomical regions of interest or voxel by voxel, providing an objective and reproducible surrogate biomarker of these disorders. However, the reproducibility of MRI measurements remains problematic because of geometric distortion, signal inhomogeneity, and differences in analysis values between scanners. Reported countermeasures include unification of imaging protocols and pre-processing corrections that remove signal non-uniformity and geometric distortion. To address inter-scanner reproducibility, statistical harmonization has been attracting attention in recent years. This paper introduces points to be considered when engineering and informatics researchers conduct analysis and research using clinical images.
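To illustrate the idea of statistical harmonization, the following is a minimal sketch of a location-scale adjustment in the spirit of ComBat, assuming regional volume measurements in a NumPy array with known scanner labels. Full ComBat additionally models biological covariates and uses empirical Bayes shrinkage; this simplified version only aligns per-scanner mean and variance to the pooled reference. The function name and example values are hypothetical.

import numpy as np

def harmonize_location_scale(values, scanner_ids):
    # Align each scanner's mean and variance of a measurement to the
    # pooled reference (simplified sketch; no covariates, no shrinkage).
    values = np.asarray(values, dtype=float)
    out = np.empty_like(values)
    grand_mean, grand_std = values.mean(), values.std()
    for s in np.unique(scanner_ids):
        mask = scanner_ids == s
        m, sd = values[mask].mean(), values[mask].std()
        out[mask] = (values[mask] - m) / sd * grand_std + grand_mean
    return out

# Example: hypothetical hippocampal volumes (mm^3) from two scanners
vols = np.array([3500.0, 3600.0, 3400.0, 3900.0, 4000.0, 3800.0])
scanners = np.array(["A", "A", "A", "B", "B", "B"])
print(harmonize_location_scale(vols, scanners))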
Deep learning has received much attention because of its excellent performance in image quality and reconstruction time. It was reported that image quality in MR compressed sensing (CS) can be improved by using phase scrambling Fourier transform (PSFT) imaging, which applies quadratic phase modulation to the subject. In this paper, image reconstruction using Generic-ADMM-Net as a CNN-based reconstruction method was examined. Simulation studies showed that sharpness, structure preservation, and image contrast were improved compared with standard Fourier-transform-based CS-CNN and iterative reconstruction methods. These results indicate that PSFT can reconstruct higher-quality images in deep learning image reconstruction as well as in iterative reconstruction.
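For readers unfamiliar with PSFT, the following is a minimal sketch of its forward model: the object is multiplied by a quadratic phase before Fourier encoding, and the resulting data can then be undersampled as in CS. The quadratic phase coefficient b and the sampling rate are hypothetical values chosen for illustration; an actual PSFT-CS reconstruction (e.g., with Generic-ADMM-Net) would invert this operator.

import numpy as np

def psft_encode(image, b):
    # PSFT forward model: quadratic phase modulation, then 2D FFT.
    ny, nx = image.shape
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    quad_phase = np.exp(1j * b * (x**2 + y**2))
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image * quad_phase)))

# Example: randomly undersampled PSFT data, as in compressed sensing
rng = np.random.default_rng(0)
img = rng.random((128, 128))
data = psft_encode(img, b=1e-3)          # b is a hypothetical coefficient
mask = rng.random(img.shape) < 0.3       # keep ~30% of the samples
measured = data * mask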
Differential diagnosis of early well-differentiated hepatocellular carcinoma (ewHCC) from non-cancerous tissue is very difficult because the cellular and structural atypia in most ewHCCs is very slight. We previously developed a method to visualize the distribution of nuclear density in whole-slide images of ewHCC sections. While nuclear density can help diagnose ewHCC, visualizing the distribution of additional features would make the function more useful. We have therefore developed an automatic method that re-extracts the contours of cell nuclei and visualizes the distribution of shape features, including circularity, that are useful for diagnosis. The extracted shape features are circularity, the ratio of major to minor axis, the standard deviation of the distance between the center of gravity and the contour, and the nuclear area. The mean absolute percentage errors for these features were 0.26%, 2.02%, 9.75%, and 6.94%, respectively. All processing is automated, and the computation time on a PC is less than an hour even for large surgical sections.
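The four shape features are standard contour measurements, and the following sketch shows one plausible way to compute them, assuming a binary nucleus mask and OpenCV; the paper's exact extraction pipeline may differ.

import cv2
import numpy as np

def nucleus_shape_features(mask):
    # Compute circularity, major/minor axis ratio, std of the
    # centroid-to-contour distance, and area from a binary nucleus mask.
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perim = cv2.arcLength(c, closed=True)
    circularity = 4.0 * np.pi * area / perim**2
    (_, _), (d1, d2), _ = cv2.fitEllipse(c)   # needs >= 5 contour points
    axis_ratio = max(d1, d2) / min(d1, d2)
    m = cv2.moments(c)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    pts = c.reshape(-1, 2).astype(float)
    dist_std = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy).std()
    return circularity, axis_ratio, dist_std, area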
It is not easy to acquire case image data, which are important for evaluating and improving the performance of computer-aided diagnosis (CAD) systems, so various attempts have been made to generate case images artificially. In this study, the elliptic Fourier descriptor (EFD), a quantitative representation of contour information, was used to analyze the contours of calcification distribution shapes, and a method to apply it to the generation of artificial mammograms with calcifications was investigated. The shape of each calcification distribution was converted to EFD coefficients, and principal component analysis was performed. The first to fourth principal components and the area were used as features and classified with a support vector machine (SVM); the three relatively high-malignancy categories (clustered, linear, and segmental) among the five calcification distributions in BI-RADS (Breast Imaging Reporting and Data System) could be identified with 90.4% accuracy. We also developed a method to generate artificial mammograms with various calcification distributions by randomly arranging previously extracted calcifications within the contours of the generated distribution shapes and embedding them in other mammograms. Seven radiologists rated the 15 generated artificial mammograms and 15 real mammograms on a "reality" scale for the calcifications (0: fake to 100: real). A two-tailed t-test showed no statistically significant difference in the "reality" ratings between the artificial and real mammograms, and the averaged AUC of ROC analysis was 0.466. These results confirm that the proposed method can generate artificial mammograms whose calcification distributions are indistinguishable from actual case images. In the future, it is expected to be used to generate image data for performance evaluation of CAD systems and for data augmentation in deep learning.
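The EFD-PCA-SVM pipeline can be sketched as follows, assuming the pyefd and scikit-learn packages; the harmonic order (here 10) and the RBF kernel are assumptions for illustration, not the paper's reported settings.

import numpy as np
from pyefd import elliptic_fourier_descriptors   # pip install pyefd
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def contour_features(contours):
    # Flattened EFD coefficients for each closed contour (each an (N, 2) array).
    return np.array([elliptic_fourier_descriptors(c, order=10,
                                                  normalize=True).ravel()
                     for c in contours])

def fit_classifier(contours, areas, labels):
    # Reduce EFD coefficients to 4 principal components, append the
    # distribution area, and train an SVM on the resulting features.
    coeffs = contour_features(contours)
    pca = PCA(n_components=4)
    feats = np.column_stack([pca.fit_transform(coeffs), areas])
    clf = SVC(kernel="rbf").fit(feats, labels)
    return pca, clf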
It has been about a decade since deep learning first attracted attention in medical image processing and recognition. Initially, many researchers had the impression that it would not work without a GPU, and because GPUs were difficult to use at the time, many may have hesitated to incorporate them into their research. In recent years, development environments have stabilized, databases have expanded, and GPUs have become easier to use, so anyone can now write deep learning programs. This course extracts the important content from the hands-on seminars held at the JAMIT annual meetings and explains the execution environment for deep learning along with simple programs. The material is based on building a deep learning environment with TensorFlow and Keras, and the sample programs are distributed online in Jupyter Notebook format. Part 1 covers environment construction and image classification with a convolutional neural network, Part 2 covers environment construction with a GPU and region extraction from images, and Part 3 covers unsupervised learning with an autoencoder.
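As a flavor of the Part 1 material, the following is a minimal TensorFlow/Keras example of image classification with a convolutional neural network, using the built-in MNIST dataset; it is a sketch in the same spirit as the seminar, not the distributed notebook itself.

from tensorflow import keras

# Load MNIST, add a channel axis, and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# A small convolutional neural network for 10-class classification
model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))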