Medical Imaging and Information Sciences
Online ISSN : 1880-4977
Print ISSN : 0910-1543
ISSN-L : 0910-1543
Volume 33, Issue 3
Displaying 1-4 of 4 articles from this issue
Original Article
  • Takahiro NAKAMOTO, Hidetaka ARIMURA, Tomonari SASAKI, Ken'ichi MOROOKA ...
    2016 Volume 33 Issue 3 Pages 48-56
    Published: September 30, 2016
    Released on J-STAGE: October 01, 2016
    JOURNAL FREE ACCESS

    The aim of this study was to develop a framework for estimating four-dimensional (4D) dose distributions during stereotactic body radiation therapy (SBRT) based on 2D/3D registration with an adaptive transformation parameter (ATP) approach. The 4D dose distributions during treatment delivery were estimated by applying a dose calculation algorithm to 4D computed tomography (CT) images simulated for the treatment time. The simulated 4D-CT images were obtained by registering the 3D planning CT images to 2D electronic portal imaging device (EPID) dynamic images through portal dose images for all frames. The transformation parameters of the 2D/3D registration were optimized using a Levenberg-Marquardt algorithm with the ATP approach (a minimal sketch of the per-frame registration step follows this entry). Ten lung cancer patients who underwent SBRT (183 EPID frames in total) were included in this study. The mean gamma pass rates (3 mm/3%) for the ten cases with and without the ATP approach were 96.10% and 94.61%, respectively (p>0.05). Over all cases, the framework with the ATP approach significantly reduced the computational time to 48.40 min, a 70.92% reduction compared with the framework without the ATP approach (166.44 min). Therefore, the framework with the ATP approach would be more efficient for estimating the 4D dose distributions while maintaining the gamma pass rates.

    Download PDF (9834K)
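    A minimal sketch, in Python, of the per-frame 2D/3D registration described above: the transformation parameters are refined with SciPy's Levenberg-Marquardt solver, and, as one plausible reading of the adaptive transformation parameter idea, each frame is warm-started from the previous frame's converged solution. The projector project_to_portal() is a hypothetical stand-in for the portal dose image simulation and is not part of the published method.

```python
# Sketch only: per-frame 2D/3D registration with a Levenberg-Marquardt solver.
# project_to_portal() is a hypothetical projector that simulates a portal dose
# image from the planning CT for a given transformation parameter vector.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, planning_ct, epid_frame, project_to_portal):
    """Pixel-wise difference between the simulated portal image and the EPID frame."""
    simulated = project_to_portal(planning_ct, params)  # hypothetical projection step
    return (simulated - epid_frame).ravel()

def register_frames(planning_ct, epid_frames, project_to_portal, initial_params):
    """Optimize transformation parameters for every EPID frame.

    Each frame is warm-started from the previous frame's solution, one plausible
    reading of the adaptive transformation parameter (ATP) idea.
    """
    params = np.asarray(initial_params, dtype=float)
    per_frame_params = []
    for frame in epid_frames:
        fit = least_squares(residuals, params, method="lm",
                            args=(planning_ct, frame, project_to_portal))
        params = fit.x                       # reuse as the next frame's initial guess
        per_frame_params.append(params.copy())
    return per_frame_params
```

    Warm-starting each optimization from the previous frame would shorten convergence, which is consistent with the reported reduction in computational time.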
  • Mei HIROSE, Bing-Yu SUN, Dai MIYANAKA, Yoshihiko HAYAKAWA
    2016 Volume 33 Issue 3 Pages 57-62
    Published: September 30, 2016
    Released on J-STAGE: October 01, 2016
    JOURNAL FREE ACCESS

    Dysfunctions of mastication and swallowing are caused by growth during the mixed dentition period, postoperative symptoms of surgical procedures, missing teeth, aging, and other factors. A dietary report has described reduced food intake due to prolonged chewing. We therefore developed a noncontact chewing-cycle analyzer that runs on tablet PCs. After the face area is captured, the top far-left pixel is tracked to measure the whole-face translation, and two pixels at the corners of the mouth are tracked with an optical flow method. Mouth movement during food intake, mastication, and swallowing was recorded. The recorded movement produced waveforms that sometimes contained irregular components, but compensation using the whole-face movement clearly revealed the repeated mastication cycles (a minimal sketch of this compensation step follows this entry). Owing to the computational load on tablet PCs, the practically measurable frame rate was 8 to 9 fps out of a maximum of 30 fps. We reduced the size of the captured image and the number of pixels used for the optical flow analysis, and chose an effective edge-extraction filter for faster calculation. Data captured at 8 fps were sufficient to record the waveforms of mouth movement. Such a noncontact chewing-cycle analyzer is valuable for evaluating mastication dysfunction and recovery.

    Download PDF (1529K)
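    A minimal sketch, assuming OpenCV's sparse Lucas-Kanade optical flow, of the compensation step described above: the mouth-corner points are tracked frame to frame and the whole-face reference position is subtracted, so the remaining vertical signal reflects chewing rather than head movement. The point coordinates and the face reference point are placeholders that a real analyzer would obtain from a face detector.

```python
# Sketch only: track mouth-corner points with sparse Lucas-Kanade optical flow
# and subtract the whole-face reference position so the remaining vertical
# signal reflects chewing rather than head movement. Requires OpenCV (cv2);
# frames are grayscale uint8 images, points are float32 arrays of shape (N, 1, 2).
import cv2
import numpy as np

def chewing_signal(frames, mouth_pts, face_pt):
    pts = np.vstack([mouth_pts, face_pt]).astype(np.float32)  # mouth corners + face reference
    prev = frames[0]
    signal = []
    for cur in frames[1:]:
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
        mouth_y = nxt[:-1, 0, 1].mean()         # mean vertical position of the mouth corners
        face_y = nxt[-1, 0, 1]                  # whole-face reference position
        signal.append(float(mouth_y - face_y))  # compensated signal, one sample per frame
        prev, pts = cur, nxt
    return signal
```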
  • Eri MATSUYAMA, Yongbum LEE, Noriyuki TAKAHASHI, Du-Yih TSAI
    2016 Volume 33 Issue 3 Pages 63-68
    Published: September 30, 2016
    Released on J-STAGE: October 01, 2016
    JOURNAL FREE ACCESS

    The purpose of this study was to investigate and compare the effectiveness of two previously proposed wavelet-based medical image enhancement methods, i.e., the exponential-type (E-type) and sigmoid-type (S-type) mapping methods (an illustrative sketch of coefficient mapping follows this entry). A total of 30 chest radiographs, CT images, and mammograms were evaluated visually using Scheffe's method of paired comparison. For the visual evaluation, images obtained with and without enhancement processing, or images processed with the two enhancement methods, were displayed side by side on a light-emitting diode monitor. Each pair of images, i.e., the original image vs. the image processed with the E-type mapping method, the original image vs. the image processed with the S-type mapping method, and the image processed with the E-type mapping method vs. that processed with the S-type mapping method, was then rated on a discrete scale. The experimental results showed that the images processed with the S-type mapping method were significantly better than the original images for all three modalities in both the visual and quantitative evaluations. In addition, the images processed with the E-type mapping method were significantly better than the original chest radiographs in both evaluations. The results demonstrate that the previously proposed image enhancement methods are effective and feasible.

    Download PDF (2698K)
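    An illustrative sketch of wavelet-coefficient mapping for enhancement. The abstract does not give the exact E-type or S-type mapping functions, so the sigmoid gain below is only an assumed placeholder applied to the detail sub-bands of a one-level 2D wavelet transform (using PyWavelets); it shows the general mechanism rather than the authors' formulas.

```python
# Sketch only: enhance an image by remapping the detail coefficients of a
# one-level 2D wavelet transform. The sigmoid gain below is an assumed
# placeholder, not the published E-type/S-type formula. Requires PyWavelets.
import numpy as np
import pywt

def sigmoid_gain(c, gain=2.0, k=10.0):
    """Illustrative S-shaped mapping that boosts detail coefficients."""
    m = np.max(np.abs(c)) + 1e-12
    return gain * m * (2.0 / (1.0 + np.exp(-k * c / m)) - 1.0)

def enhance(image, wavelet="db2"):
    approx, details = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
    details = tuple(sigmoid_gain(d) for d in details)   # remap only the detail sub-bands
    return pywt.idwt2((approx, details), wavelet)
```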
Brief Article
  • Xiangrong ZHOU, Takaaki ITO, Ryosuke TAKAYAMA, Song WANG, Takeshi HARA ...
    2016 Volume 33 Issue 3 Pages 69-74
    Published: September 30, 2016
    Released on J-STAGE: October 01, 2016
    JOURNAL FREE ACCESS

    We propose a novel approach for semantic CT image segmentation based only on a fully convolutional network (FCN), which performs an end-to-end, voxel-wise, multiple-class prediction that maps each voxel in a CT image directly to an anatomical label. The proposed method simplifies the segmentation of the anatomical structures (including multiple organs) in a CT image (generally in 3D) to majority voting over the semantic segmentations of multiple 2D slices drawn, with redundancy, from three orthogonal viewpoints. An FCN consisting of “convolution” and “de-convolution” parts is trained and re-used for the 2D semantic segmentation of the different slices of the CT scans (a minimal sketch of the voting step follows this entry). We applied the proposed method to segment a wide range of anatomical structures comprising 19 types of targets in the human torso. A database of 240 3D CT scans with human-annotated ground truth was used for training (230 cases) and testing (the remaining 10 cases). The results showed that the target regions in the entire set of CT test scans were segmented with acceptable accuracy (89% of voxels were labeled correctly) against the human annotations. This performance is comparable to other recently reported state-of-the-art results. Compared with previous segmentation methods that must be guided by human expertise, this data-driven approach showed better efficiency, generality, and flexibility.

    Download PDF (7069K)
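    A minimal sketch of the slice-wise prediction and majority-voting fusion described above: a trained 2D FCN, represented here by the hypothetical callable fcn_predict_slice that returns an integer label map per slice, is applied along each of the three orthogonal axes, and the three per-voxel label maps are combined by majority vote.

```python
# Sketch only: apply a trained 2D FCN slice-by-slice along the three orthogonal
# axes of a CT volume and fuse the three label maps by per-voxel majority vote.
# fcn_predict_slice is a hypothetical callable returning an integer label map
# with the same height and width as the input slice.
import numpy as np

def predict_along_axis(volume, fcn_predict_slice, axis):
    """Label every voxel by running the 2D FCN over slices taken along one axis."""
    labels = np.stack([fcn_predict_slice(s) for s in np.moveaxis(volume, axis, 0)])
    return np.moveaxis(labels, 0, axis)          # back to the volume's orientation

def majority_vote(volume, fcn_predict_slice, n_classes):
    votes = np.zeros(volume.shape + (n_classes,), dtype=np.int32)
    for axis in range(3):                        # axial, coronal, sagittal views
        pred = predict_along_axis(volume, fcn_predict_slice, axis)
        votes += np.eye(n_classes, dtype=np.int32)[pred]   # accumulate one-hot votes
    return votes.argmax(axis=-1)                 # the label with the most votes wins
```

    Voting over the three redundant views is what lets a purely 2D network yield a single consistent 3D labeling.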