IIEEJ Transactions on Image Electronics and Visual Computing
Online ISSN : 2188-1901
Print ISSN : 2188-1898
ISSN-L : 2188-191X
Special issues: IIEEJ Transactions on Image Electronics and Visual Computing
Volume 7, Issue 1
Displaying 1-5 of 5 articles from this issue
  • Antoine CHAUVET, Tomo MIYAZAKI, Yoshihiro SUGAYA, Shinichiro OMACHI
    Article type: Contributed Papers --Special Issue on Internet of Things and its Related Technologies in Image Electronics--
    2019 Volume 7 Issue 1 Pages 2-12
    Published: June 15, 2019
    Released on J-STAGE: April 01, 2021
    JOURNAL RESTRICTED ACCESS

    This study proposes a lightweight adaptive postfilter based on neural networks for use in H.265 High Efficiency Video Coding (HEVC). The proposed filter is adaptive because it uses a different set of parameters depending on the encoding settings and, most significantly, on the quantization parameter (QP). With this information, the filter can improve each block more effectively. We trained the filter for four different QP values and demonstrate that its use reduces the bitrate by over 4% in a few cases and by 1.5% on average for both All Intra and Random Access modes. In contrast to filters that use several passes and require a specific ordering, the proposed filter changes each pixel at most once and takes only the initial decoded values as input, thereby allowing perfect parallelization. Furthermore, using only one convolutional layer and eight feature maps keeps the computing cost and memory footprint to a minimum, making real-time processing possible even on embedded hardware. (A minimal illustrative sketch of this design follows this entry.)

    Download PDF (3248K)
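    The abstract above describes a single-convolution, QP-adaptive postfilter. The following is a minimal PyTorch sketch of that idea; the layer shapes, the residual formulation, the QP values, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LightweightPostfilter(nn.Module):
    """One convolutional layer producing eight feature maps, followed by a 1x1
    projection back to a single-channel correction added to the decoded frame."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.proj = nn.Conv2d(8, 1, kernel_size=1)

    def forward(self, decoded_luma):
        # The correction is computed only from the decoded values, so each pixel
        # is changed at most once and all pixels can be processed in parallel.
        return decoded_luma + self.proj(self.act(self.conv(decoded_luma)))

# One trained parameter set per quantization parameter (the QP values are hypothetical).
filters = {qp: LightweightPostfilter() for qp in (22, 27, 32, 37)}

decoded = torch.rand(1, 1, 64, 64)   # stand-in for a decoded luma block
restored = filters[27](decoded)      # select the filter matching the encoder's QP
```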
  • Yan LI, Rong XU, Artus KROHN-GRIMBERGHE, Jun OHYA, Hiroyasu IWATA
    Article type: Contributed Papers
    2019 Volume 7 Issue 1 Pages 13-23
    Published: June 15, 2019
    Released on J-STAGE: April 01, 2021
    JOURNAL RESTRICTED ACCESS

    This paper proposes deep learning-based methods for automatically detecting the uterus in ultrasound (US) images and segmenting the detected uterus into anatomical structures. For accurate detection of the uterus and segmentation of multiple fine-grained anatomical structures from the US image, we use a two-tier deep learning-based algorithm: (I) localizing the bounding box of the uterus, and (II) segmenting the areas of amniotic fluid and fetal body from the uterine image. To achieve (I), we design and train a convolutional neural network (CNN) based bounding-box regression model that regresses candidate positions of the uterus. We then use the cropped uterus region as the input to a semantic segmentation approach. For (II), we apply a fully convolutional architecture that segments the fetal body and amniotic fluid from fetal US images in an end-to-end, supervised learning pipeline. We use additional inner layers and intermediate supervision to improve the segmentation accuracy and smooth the boundaries. We experimentally evaluate our methods and demonstrate accurate uterus detection and anatomical structure segmentation. (A simplified sketch of the two-tier pipeline follows this entry.)

    Download PDF (3940K)
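    As a rough illustration of the two-tier idea in the abstract above, the PyTorch sketch below regresses a bounding box, crops that region, and runs a small fully convolutional segmenter on the crop. The architectures, class count, and input size are simplifying assumptions and do not reproduce the authors' networks.

```python
import torch
import torch.nn as nn

class BBoxRegressor(nn.Module):
    """Tier I: regress a normalized (x, y, w, h) box for the uterus."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

class FCNSegmenter(nn.Module):
    """Tier II: per-pixel scores for background / amniotic fluid / fetal body."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

us_image = torch.rand(1, 1, 256, 256)                 # stand-in for a fetal US frame
box = BBoxRegressor()(us_image)[0]
x0, y0, w0, h0 = (box.clamp(0.1, 0.9) * 256).long().tolist()
crop = us_image[:, :, y0:y0 + h0, x0:x0 + w0]          # cropped uterus region
labels = FCNSegmenter()(crop).argmax(1)                # per-pixel anatomical labels
```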
  • Suleiman MUSTAFA, Akio KIMURA
    Article type: Contributed Papers
    2019 Volume 7 Issue 1 Pages 24-35
    Published: June 15, 2019
    Released on J-STAGE: April 01, 2021
    JOURNAL RESTRICTED ACCESS

    In this study, we propose a semi-automated system for detecting cervical cancer from cervigram photographs of affected cervix regions. Cervical cancer is among the most common cancers affecting women in the world, particularly in developing countries where few people have access to proper screening due to the high cost of laboratory testing. For this reason, a simple, inexpensive test by visual inspection with acetic acid (VIA) is used, in which the cervix region is observed with the naked eye for changes in color, texture, and appearance. We consider that applying adequate image processing techniques to the images captured during VIA is effective in assisting gynecologists in detecting, diagnosing, and examining the cervix region based on the visual inspection observations; that is, it is possible to construct a kind of computer-aided system for detecting and diagnosing cervical cancer. In our framework, we first segment an input image into lesions of interest with the GrabCut algorithm, and then extract many color- and texture-based features using image processing. Based on these extracted features, the segmented image is categorized as cancerous ("malignant") or non-cancerous ("benign") using ensemble classification methods that combine three or five machine learning algorithms. We conducted experiments using real cervigram images and found, through statistical analysis, that only 10–13 of the extracted features are sufficient to detect cervical cancer, and that our method improved detection accuracy compared with naked-eye visual inspection. (A rough sketch of this pipeline follows this entry.)

    Download PDF (4160K)
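    The sketch below loosely mirrors the pipeline in the abstract above: GrabCut segmentation of a region of interest, simple color statistics as features, and a soft-voting ensemble of three classifiers. The paper's actual 10–13 color and texture features and its specific classifiers are not reproduced; the rectangle initialization, the feature set, and the estimator choices here are assumptions.

```python
import cv2
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def segment_lesion(bgr_image, rect):
    """Run GrabCut initialized from a rectangle and return a binary foreground mask."""
    mask = np.zeros(bgr_image.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr_image, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

def color_features(bgr_image, mask):
    """Mean and standard deviation of each HSV channel over the segmented region."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    pixels = hsv[mask == 1].astype(np.float32)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Soft-voting ensemble of three classifiers (stand-ins for the paper's choices).
ensemble = VotingClassifier([
    ("svm", SVC(probability=True)),
    ("rf", RandomForestClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="soft")

# Typical usage (hypothetical file and labels):
# image = cv2.imread("cervigram.png")
# mask = segment_lesion(image, rect=(40, 40, 200, 200))   # rough cervix region
# features = color_features(image, mask)
# ensemble.fit(training_features, training_labels)        # 0 = benign, 1 = malignant
# prediction = ensemble.predict([features])
```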
  • Muhammad ARIEF, Hideki TODO, Koji MIKAMI, Kunio KONDO
    Article type: Contributed Papers
    2019 Volume 7 Issue 1 Pages 36-45
    Published: June 15, 2019
    Released on J-STAGE: April 01, 2021
    JOURNAL RESTRICTED ACCESS

    Recent works in Non-Photorealistic Rendering (NPR) are capable of stylizing global appearance to fit artists' shading designs. Meanwhile, local appearance control is still required for artists to depict geometric and emotional features. We propose a simple region control algorithm for an existing light-based texture projection technique. Since the shading effects of the texture projection cannot be overlaid, we instead deform the original projection result by modifying its texture coordinates. To give artists a familiar interface, we incorporate multiple lighting effects and painting control into the original light-based texture projection process. Our shading pipeline is implemented on the GPU, enabling a real-time preview of the shading design at interactive rates. We also conducted user studies to evaluate the usability of our system. (A small sketch of the coordinate-deformation idea follows this entry.)

    Download PDF (7352K)
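    The numpy sketch below illustrates the general idea of deforming texture coordinates in a light-based projection: a lookup coordinate derived from the diffuse term is shifted by an artist-painted offset before sampling a shading ramp. It is a conceptual stand-in, not the paper's GPU shader; the projection and deformation formulas are assumptions.

```python
import numpy as np

def project_uv(normals, light_dir):
    """Map the diffuse term N.L from [-1, 1] to a 1-D texture coordinate in [0, 1]."""
    ndotl = np.clip(normals @ light_dir, -1.0, 1.0)
    return 0.5 * (ndotl + 1.0)

def deform_uv(u, offset):
    """Shift the projected coordinate by a painted per-pixel offset, then re-clamp."""
    return np.clip(u + offset, 0.0, 1.0)

def shade(u, ramp):
    """Sample a 1-D shading ramp (nearest lookup) with the deformed coordinate."""
    idx = np.clip((u * (len(ramp) - 1)).astype(int), 0, len(ramp) - 1)
    return ramp[idx]

normals = np.random.randn(4, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])
offsets = np.array([0.0, 0.1, -0.2, 0.0])   # painted region-control offsets
ramp = np.linspace(0.2, 1.0, 8)             # dark-to-bright shading ramp
colors = shade(deform_uv(project_uv(normals, light), offsets), ramp)
```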
  • Yan LI, Rong XU, Artus KROHN-GRIMBERGHE, Jun OHYA, Hiroyasu IWATA
    Article type: Short Paper
    2019 Volume 7 Issue 1 Pages 46-51
    Published: June 15, 2019
    Released on J-STAGE: April 01, 2021
    JOURNAL RESTRICTED ACCESS

    To locate the fetal head in ultrasound (US) images, this paper proposes a deep learning-based method for weakly supervised learning from image-level annotations. We first modify and train fetal head classification models based on existing backbone structures, and then use the feature maps and learned weights to visualize the high-response areas of the fetal head. To improve localization accuracy, we further optimize the completeness of the salient area of the fetal head by adopting multiple feature maps from different feature levels. The final bounding box of the fetal head is obtained from the mined regions by thresholding. We evaluate both fetal head plane classification and the weakly learned localization results in US images. In the experiments, we compare several backbone structures and verify the effectiveness of the proposed method. (A short sketch of this localization scheme follows this entry.)

    Download PDF (1385K)
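    Below is a short sketch of the class-activation-map style localization described above, under simplifying assumptions: a toy classifier stands in for the backbones, a single feature level is used rather than multiple levels, and the box is the tight rectangle around thresholded high-response pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeadClassifier(nn.Module):
    """Image-level classifier: fetal-head plane vs. other."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(16, 2)

    def forward(self, x):
        fmap = self.features(x)                   # B x 16 x H' x W'
        logits = self.fc(fmap.mean(dim=(2, 3)))   # global average pooling
        return logits, fmap

def localize(model, image, threshold=0.5):
    logits, fmap = model(image)
    cls = logits.argmax(1)
    # Weight the feature maps by the learned classifier weights (CAM-style map).
    cam = torch.einsum("c,bchw->bhw", model.fc.weight[cls[0]], fmap)
    cam = F.interpolate(cam[:, None], size=image.shape[-2:], mode="bilinear",
                        align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    ys, xs = torch.nonzero(cam > threshold, as_tuple=True)
    if len(xs) == 0:
        return None
    # Tight bounding box around the thresholded high-response region.
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()

box = localize(HeadClassifier(), torch.rand(1, 1, 128, 128))
```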