In total hip arthroplasty, pelvic tilt in the standing position is important for preoperative planning of the optimal cup placement angle. However, this tilt angle cannot be assessed from CT images, which are scanned in the supine position, so previous studies have focused on radiographs acquired in the standing position. 2D-3D registration between a radiograph and a patient-specific CT image can recover the angle, but its application has been limited by the radiation exposure of the CT acquisition. To address this problem, we previously proposed a method that estimates the pelvic tilt angle from only a single radiograph using convolutional neural networks, and validated it with simulated images. Applying it to real radiographs is difficult, however, because of noise and differences in the X-ray spectrum. In this paper, we introduce estimation of pelvic tilt from real radiographs using a generative adversarial network that translates a real radiograph into a simulated image.
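As a rough illustration of the regression step described above, the sketch below shows a small CNN that maps a single radiograph to one scalar tilt angle. The architecture, input size, and names here are hypothetical stand-ins, not the authors' actual network.

```python
# Minimal sketch (hypothetical architecture): regress one pelvic tilt angle
# from a single 1-channel radiograph with a CNN, in PyTorch.
import torch
import torch.nn as nn

class TiltRegressor(nn.Module):
    """Small CNN mapping a radiograph to one tilt angle (e.g., degrees)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # one scalar: sagittal pelvic tilt

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TiltRegressor()
x = torch.randn(4, 1, 256, 256)   # batch of simulated radiographs (DRRs)
pred = model(x)                   # predicted tilt angles, shape (4, 1)
loss = nn.functional.mse_loss(pred, torch.zeros(4, 1))  # vs. ground truth
loss.backward()
```

Training on simulated images with known ground-truth angles is what makes the real-to-simulated GAN translation step necessary at test time.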
Medical images depict organs with different contrasts depending on the measurement technique. In a clinical setting, patients may undergo multiple modalities for certain purposes; however, acquisition with multiple modalities is time-consuming and not cost-effective. In this research, we address image synthesis, i.e. translating images so that they resemble the contrast of a target modality. Image synthesis long required "paired" training data, i.e. images of the same patients acquired with multiple modalities in the same posture, until CycleGAN recently removed this requirement. CycleGAN enables image synthesis without paired data, learning the translation toward each modality. Although CT-MR synthesis methods have been proposed, they consider MR images of only a single sequence, whereas MR images of multiple sequences in the same posture are often available. In this paper, we examine image synthesis between MR images of three sequences and CT images of the hip region using CycleGAN.
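The mechanism that removes the need for paired data is CycleGAN's cycle-consistency loss: each image translated to the other modality must be translatable back to itself. The sketch below shows only that term, with tiny placeholder generators rather than the paper's actual architectures.

```python
# Minimal sketch of CycleGAN's cycle-consistency term for unpaired MR<->CT
# synthesis. G_mr2ct / G_ct2mr are placeholder generators, not real models.
import torch
import torch.nn as nn

G_mr2ct = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for a full generator
G_ct2mr = nn.Conv2d(1, 1, 3, padding=1)

mr = torch.randn(2, 1, 128, 128)          # unpaired MR slices
ct = torch.randn(2, 1, 128, 128)          # unpaired CT slices

fake_ct = G_mr2ct(mr)                     # MR -> synthetic CT
rec_mr  = G_ct2mr(fake_ct)                # back to MR: should match the input
fake_mr = G_ct2mr(ct)
rec_ct  = G_mr2ct(fake_mr)

l1 = nn.L1Loss()
cycle_loss = l1(rec_mr, mr) + l1(rec_ct, ct)  # enforced without paired data
# Full CycleGAN training adds adversarial losses from two discriminators
# (omitted here for brevity).
```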
Convolutional Neural Network (CNN)-based accurate prediction typically requires large-scale annotated training data. In medical imaging, however, both obtaining medical data and annotating them by expert physicians are challenging. To overcome this lack of data, Data Augmentation (DA) using Generative Adversarial Networks (GANs) is essential, since GANs can synthesize additional annotated training data to handle small and fragmented medical image datasets from various scanners; the generated images, realistic but completely novel, can fill in regions of the real image distribution not covered by the original dataset. As a tutorial, this paper introduces GAN-based medical image augmentation, along with tricks to boost classification, object detection, and segmentation performance with it, based on our experience and related work. Moreover, we present our first GAN-based DA work using automatic bounding box annotation, for robust CNN-based brain metastases detection on 256×256 MR images; GAN-based DA can boost sensitivity by 10% with a clinically acceptable number of additional false positives, even with highly rough and inconsistent bounding boxes.
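At its core, GAN-based DA simply appends synthesized, automatically annotated samples to the real training set before training the downstream CNN. The sketch below illustrates that pipeline; the generator `G` and all shapes and labels are hypothetical placeholders, not the tutorial's models.

```python
# Minimal sketch of GAN-based data augmentation: GAN-synthesized annotated
# samples are concatenated with real data. `G` is a hypothetical stand-in
# for a pretrained generator.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

G = torch.nn.Sequential(                      # placeholder "generator"
    torch.nn.Linear(128, 256 * 256),
    torch.nn.Unflatten(1, (1, 256, 256)),
)

real_imgs = torch.randn(100, 1, 256, 256)     # real annotated MR slices
real_lbls = torch.randint(0, 2, (100,))       # e.g. metastasis present/absent

with torch.no_grad():
    z = torch.randn(50, 128)                  # latent codes
    synth_imgs = G(z)                         # GAN-synthesized 256x256 images
    synth_lbls = torch.ones(50, dtype=torch.long)  # labels carried by the GAN

augmented = ConcatDataset([
    TensorDataset(real_imgs, real_lbls),
    TensorDataset(synth_imgs, synth_lbls),
])
loader = DataLoader(augmented, batch_size=16, shuffle=True)
# `loader` now mixes real and synthetic samples for detector/classifier training.
```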
Generative Adversarial Networks (GANs) have been applied to a variety of tasks such as denoising, image transfer, and super-resolution, and have proven to be a promising way to reconstruct high-quality images. In this paper, we report a super-resolution method using a GAN for medical image processing. Specifically, it consists of two networks: a generator that produces high-resolution (HR) images and a discriminator that distinguishes generated HR images from real ones. We train these two networks alternately to obtain a generator that reconstructs HR images. We show that the GAN reduces blurring in the restored HR images and that visually high-quality HR images can be obtained.
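The alternating training described above can be sketched as follows: one step updates the discriminator on real versus generated HR images, the next updates the generator to fool the discriminator while staying close to the reference image. The tiny networks and the L1 fidelity term here are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of adversarial super-resolution training (tiny stand-in
# networks, not the paper's architectures).
import torch
import torch.nn as nn

G = nn.Sequential(                            # LR -> HR (x2 upsampling)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = nn.Sequential(                            # HR image -> real/fake logit
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

lr_img = torch.randn(4, 1, 64, 64)            # low-resolution input
hr_img = torch.randn(4, 1, 128, 128)          # matching real HR image

# Discriminator step: real HR -> 1, generated HR -> 0.
fake = G(lr_img).detach()
d_loss = bce(D(hr_img), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool D, plus a pixel-wise fidelity term.
fake = G(lr_img)
g_loss = bce(D(fake), torch.ones(4, 1)) + nn.functional.l1_loss(fake, hr_img)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```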
Pathologists visually observe hematoxylin-eosin (HE) stained images under a microscope to perform pathological diagnosis. When a sufficient diagnosis cannot be made from morphology in HE-stained specimens alone, an additional evaluation method such as immunohistochemistry (immunostaining) is required. To identify tumors accurately and rapidly, this study proposes a method of automatically identifying tumors in pathological images by estimating immunostaining features from an HE-stained image. The method consists of three steps: 1. features indicating tumor presence or absence are extracted from the HE-stained image using a convolutional neural network (CNN); 2. a classifier is trained with the CNN so that the features obtained from the HE-stained image approach the tumor presence/absence features derived from immunostaining; and 3. the presence or absence of a tumor is judged with the classifier. Experimental results on digital images of pathological tissue specimens of prostate cancer show improved identification accuracy.
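Step 2 of this pipeline amounts to a feature-matching objective: the HE encoder's features are pulled toward the features extracted from the corresponding immunostained patch while a small head learns the tumor label. The sketch below assumes hypothetical encoders and patch sizes; it is not the authors' implementation.

```python
# Minimal sketch of the feature-matching step (hypothetical architectures):
# HE features are pushed toward immunostaining (IHC) features while a head
# classifies tumor presence.
import torch
import torch.nn as nn

def encoder():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

enc_he  = encoder()                      # trained on HE patches
enc_ihc = encoder()                      # reference features from immunostaining
head = nn.Linear(32, 2)                  # tumor present / absent

he_patch  = torch.randn(8, 3, 128, 128)
ihc_patch = torch.randn(8, 3, 128, 128)  # same tissue region, immunostained
labels    = torch.randint(0, 2, (8,))

f_he = enc_he(he_patch)
with torch.no_grad():
    f_ihc = enc_ihc(ihc_patch)           # target features, held fixed

loss = (nn.functional.mse_loss(f_he, f_ihc)              # feature matching
        + nn.functional.cross_entropy(head(f_he), labels))  # classification
loss.backward()
```

At inference time only the HE branch is needed, which is what allows tumor identification without actually performing immunostaining.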
In this article, we describe the basics of diffusion-weighted imaging (DWI), the challenges in DW-EPI, and recent techniques that address them: MUSE (MUltiplexed Sensitivity Encoding) and RPG (Reversed Polarity Gradient).
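For readers new to DWI, the basics rest on the standard monoexponential signal model (textbook background, not a result of this article): diffusion weighting attenuates the signal exponentially with the b-value, and the apparent diffusion coefficient (ADC) is recovered from the ratio of two acquisitions.

```latex
% Standard monoexponential DWI signal model (general background):
S(b) = S_0 \, e^{-b \cdot \mathrm{ADC}},
\qquad
\mathrm{ADC} = -\frac{1}{b}\,\ln\frac{S(b)}{S_0}
```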