Since the 2010s, rapid advances in AI technology have paved the way for significant contributions to human culture and welfare. The medical field is no exception and has seen notable interest and growth in AI research and development over the past decade. Particularly since 2020, many medical AI systems have gained approval as programmed medical devices, signaling AI's growing foothold in clinical settings. However, the actual implementation of AI in clinics remains limited, suggesting that the AI boom in healthcare has yet to fully mature, and developers face challenges in bridging this gap. Those involved in medical AI, including us, are continuously striving to ensure that the current interest in medical AI does not end as a fleeting trend but becomes mainstream in clinical and research settings. The healthcare AI industry is undergoing rapid change, and the coming years may prove pivotal. This article aims to provide a broad perspective on medical AI trends, including specific examples of products developed by our company, LPIXEL, in the realm of AI-powered image diagnostics.
This study investigated factors affecting truncation artifacts in magnetic resonance images, using phantom experiments and numerical phantom simulations. A custom-made phantom was prepared by filling a plastic container with olive oil and a diluted gadolinium contrast agent. The phantom was repeatedly imaged while shifting the field of view (FOV) in 0.1 mm increments, up to a maximum of 2.0 mm. A numerical phantom was also created to simulate the shape and position of the physical phantom. In the simulation, the numerical phantom was 2D Fourier-transformed to obtain data in the spatial frequency domain. The low-spatial-frequency region was sampled according to the acquisition matrix size, and the surrounding area was zero-filled according to the reconstruction matrix size to create the k-space. The k-space was then inverse 2D Fourier-transformed to produce an image containing truncation artifacts. These artifacts were affected by the position of the FOV and the imaging target when the reconstruction matrix size was insufficient. Analysis of the physical and numerical phantoms revealed similar trends: when the reconstruction matrix size was increased, the effect of the position of the FOV and the object was reduced.
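The simulation pipeline described above (Fourier transform, low-frequency sampling, zero-filling, inverse transform) can be sketched in a few lines of NumPy. The phantom shape and the matrix sizes below are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def simulate_truncation(phantom: np.ndarray, acq: int, recon: int) -> np.ndarray:
    """Sample the central acq x acq region of the phantom's k-space,
    zero-fill to a recon x recon matrix, and inverse-transform to
    obtain an image containing truncation (Gibbs) artifacts."""
    # 2D Fourier transform of the numerical phantom (full k-space).
    k_full = np.fft.fftshift(np.fft.fft2(phantom))
    c = phantom.shape[0] // 2
    # Keep only the low-spatial-frequency region (acquisition matrix).
    k_acq = k_full[c - acq // 2 : c + acq // 2, c - acq // 2 : c + acq // 2]
    # Zero-fill the surroundings up to the reconstruction matrix size.
    k_recon = np.zeros((recon, recon), dtype=complex)
    r = recon // 2
    k_recon[r - acq // 2 : r + acq // 2, r - acq // 2 : r + acq // 2] = k_acq
    # Inverse 2D Fourier transform yields the artifact-bearing image.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_recon)))

# Numerical phantom: a uniform unit-intensity disc on a 256x256 grid
# (an assumed stand-in for the oil/contrast-agent phantom).
n = 256
y, x = np.ogrid[:n, :n]
phantom = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n // 4) ** 2).astype(float)
img = simulate_truncation(phantom, acq=64, recon=256)
```

Because the hard edge of the disc is truncated in k-space, `img` shows ringing and overshoot above the disc's unit intensity near its boundary; shifting the disc within the grid mimics the FOV shifts used in the experiment.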
Ideally, X-ray images of artificial knee joints should show the joint surfaces perfectly aligned. However, owing to factors such as individual differences between patients and the skill of technologists, it is difficult to achieve perfect alignment in a single exposure. In addition, the acceptance criteria for images are often left to individual judgment, and unnecessary retakes frequently occur. To solve the problems caused by the increasing number of retakes, we developed an automated assessment system for the alignment of artificial knee joints in X-ray images using a convolutional neural network (CNN), which excels at image recognition. In this study, we first performed a preliminary study using an artificial knee phantom, and then examined the usefulness of the method in detail using clinical images. In the former, we used VGG16 as the CNN model for image classification and evaluated its classification performance. In the latter, we used clinical images from 461 cases for which the acceptance or rejection of the imaging had been confirmed, applying several CNN models and comparing their performance. In both examinations, the overall classification rate exceeded 80%, and VGG16 had the highest classification performance among the CNN models. These results suggest that this method could reduce unnecessary retakes.
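The accept/reject judgment reduces to binary image classification, and the overall classification rate is simply the fraction of images the model labels correctly. A minimal sketch, using hypothetical softmax outputs rather than results from the 461 clinical cases:

```python
import numpy as np

# Hypothetical per-image softmax outputs of a CNN classifier
# (columns: class 0 = acceptable, class 1 = retake required).
probs = np.array([[0.9, 0.1],
                  [0.3, 0.7],
                  [0.8, 0.2],
                  [0.2, 0.8]])
labels = np.array([0, 1, 0, 0])  # ground-truth accept/reject judgments

# Predicted class = argmax over the softmax outputs.
preds = probs.argmax(axis=1)

# Overall classification rate: fraction of images classified correctly.
overall_rate = (preds == labels).mean()  # 3 of 4 correct -> 0.75
```

In the study, the analogous rate computed over the phantom and clinical test sets exceeded 80% for all CNN models examined.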
In emergency medicine, imaging diagnosis by unenhanced computed tomography (CT) is frequently performed, owing to the remarkable progress and widespread availability of CT. In acute conditions such as trauma, accurate diagnosis of the upper abdominal region can be difficult, depending on the experience of the interpreting doctor and the display conditions. In this study, we propose a method using deep convolutional neural networks (DCNNs) to automatically classify the presence or absence of traumatic hematoma in coronal images reconstructed from plain CT images. Coronal images are often used because they allow a wide area of the upper abdomen to be observed. A total of 337 images with traumatic hematoma and 492 images without were divided into 8 data sets. Seventeen types of DCNN were used, and the images were classified into two categories, with and without traumatic hematoma, by 8-fold cross-validation. Receiver operating characteristic (ROC) analysis was performed to calculate accuracy and the area under the curve (AUC). The highest accuracy was 0.841 and the AUC was 0.909 when DenseNet-201 was used.
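The AUC reported above can be understood through its rank interpretation: the probability that a randomly chosen image with hematoma receives a higher score than a randomly chosen image without. A minimal sketch, with illustrative scores rather than the study's DenseNet-201 outputs:

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC via the Mann-Whitney (rank) formulation: the fraction of
    positive/negative pairs where the positive scores higher,
    counting ties as half."""
    pos = scores[labels == 1]   # scores of images with hematoma
    neg = scores[labels == 0]   # scores of images without
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative classifier scores and ground-truth labels.
scores = np.array([0.9, 0.7, 0.3, 0.6, 0.2, 0.8])
labels = np.array([1, 1, 1, 0, 0, 0])
auc = roc_auc(scores, labels)  # 6 of 9 pairs correctly ordered
```

An AUC of 0.909, as obtained with DenseNet-201, means roughly 91% of such hematoma/no-hematoma pairs were ranked correctly.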
The development of computer systems that assist radiologists in medical image diagnosis requires recognition of target organ regions on images. Fully automated organ segmentation of medical images is therefore desirable, as manual pixel-by-pixel annotation of target organ regions is tedious and error-prone. Recent work has focused on two types of segmentation methods: CNNs, which tend to capture local features, and Transformers, which tend to capture global context. In this study, we aim to improve the performance of CNN networks by integrating them with a Transformer for multi-organ and tissue region segmentation, which has not been previously explored. Whereas previous studies used three orthogonal cross-sections, this study additionally uses sections in non-orthogonal directions to validate their use. We also use pre-trained models to examine the variability of organ region extraction accuracy. We validated the accuracy of organ extraction using multiple cross-sectional orientations. The proposed method improved the extraction accuracy by 3.2% in terms of the Jaccard coefficient compared to a baseline using axial sections only.
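The Jaccard coefficient used as the evaluation metric above is the intersection-over-union of the predicted and ground-truth organ masks. A minimal sketch on toy binary masks (the 4x4 arrays are illustrative, not the study's CT data):

```python
import numpy as np

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard coefficient (intersection over union) between two
    binary segmentation masks; defined as 1.0 when both are empty."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy predicted and ground-truth organ masks.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
j = jaccard(pred, gt)  # 3 overlapping pixels / 4 in the union = 0.75
```

Per-organ Jaccard scores computed this way, averaged over the test volumes, give the 3.2% improvement figure reported for the multi-orientation method over the axial-only baseline.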