NIHON GAZO GAKKAISHI (Journal of the Imaging Society of Japan)
Online ISSN : 1880-4675
Print ISSN : 1344-4425
ISSN-L : 1344-4425
Volume 59, Issue 6
Displaying 1-10 of 10 articles from this issue
Invited Review
  • Toshiyuki TAKABAYASHI, Akio MAEDA, Hirotaka IIJIMA, Masashi IKEDA
    2020 Volume 59 Issue 6 Pages 564-569
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    Single-pass inkjet printing is a key technology behind the high productivity of the “AccurioJet KM-1.” We have developed several new technologies for the printer: a new print head, “KM1800i”; a new UV-curable ink, “HS-ink”; and new image-forming processes such as halftone screen pattern optimization, nozzle compensation, shading correction, and misfiring correction.

    The ink offers the extremely stable jetting reliability necessary for single-pass printing. It immobilizes the dots formed on the media so that they do not coalesce with one another, producing a high-quality image instantaneously. The ink can print on a wide variety of media without pre-conditioning liquids.

    This technology received the Technology Award from the Imaging Society of Japan in 2018. We report on it here with a focus on the ink.

    Download PDF (1296K)
Special Topic
  • Motoi IWATA, Hiroyuki ARAI, Yoshihisa KITANO
    2020 Volume 59 Issue 6 Pages 570
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS
    Download PDF (22K)
  • Yoshikatsu NAKAJIMA, Hideo SAITO
    2020 Volume 59 Issue 6 Pages 571-578
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    This work addresses the task of open world semantic segmentation using RGB-D sensing to discover new semantic classes over time. Although there are many types of objects in the real world, current semantic segmentation methods make a closed world assumption and are trained only to segment a limited number of object classes. Towards a more open world approach, we propose a novel method that incrementally learns new classes for image segmentation. The proposed system first segments each RGB-D frame using both color and geometric information, and then aggregates that information to build a single segmented dense 3D map of the environment. The segmented 3D map representation is a key component of our approach, as it is used to discover new object classes by identifying coherent regions in the 3D map that have no semantic label. The use of coherent regions in the 3D map as primitive elements, rather than traditional elements such as surfels or voxels, also significantly reduces the computational complexity and memory use of our method. It thus achieves semi-real-time performance at 10.7 Hz while incrementally updating the dense 3D map at every frame. Through experiments on the NYUDv2 dataset, we demonstrate that the proposed method is able to correctly cluster objects of both known and unseen classes. We also present a quantitative comparison with state-of-the-art supervised methods and report the processing time of each step.
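As a rough, non-authoritative illustration of the key step, the sketch below (ours, not the authors' pipeline) finds coherent connected regions of a map that carry no semantic label and treats each as a candidate new class. A 2D grid stands in for the 3D map, and UNLABELED marks cells without a label.

```python
from collections import deque

UNLABELED = -1

def unlabeled_regions(grid):
    """Return connected components (4-neighborhood) of unlabeled cells."""
    h, w = len(grid), len(grid[0])
    seen, regions = set(), []
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != UNLABELED or (sy, sx) in seen:
                continue
            # Breadth-first flood fill over adjacent unlabeled cells.
            queue, region = deque([(sy, sx)]), []
            seen.add((sy, sx))
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                            and grid[ny][nx] == UNLABELED):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            regions.append(region)
    return regions

semantic_map = [[0,  0, -1],
                [1, -1, -1],
                [1,  2,  0]]
candidates = unlabeled_regions(semantic_map)  # one coherent unlabeled region
```

Each returned region could then be assigned a new class label and fed back into incremental training.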

    Download PDF (1467K)
  • Wataru SHIMODA, Keiji YANAI
    2020 Volume 59 Issue 6 Pages 579-584
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    In this paper, we introduce our ICCV (IEEE/CVF International Conference on Computer Vision) 2019 paper1) entitled “Self-supervised Difference Detection for Weakly-supervised Semantic Segmentation.” To minimize the annotation costs associated with training semantic segmentation models, weakly-supervised segmentation approaches have been studied. Recent weakly-supervised segmentation methods widely adopt visualization-based approaches. However, the visualization results are not generally equivalent to semantic segmentation, so performing highly accurate semantic segmentation requires mapping functions that convert the visualization results into segmentation masks. Such general mapping functions do not always guarantee an improvement in accuracy. We therefore consider that the results of the mapping functions include noise and aim to improve accuracy by removing it. To achieve this, we proposed a self-supervised difference detection (SSDD) module that estimates the noise in the results of the mapping functions by predicting the difference between segmentation masks.
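The intuition can be sketched in plain Python (all names are ours; the SSDD module in the paper *learns* to predict this difference with a network, rather than computing it directly): pixels where two intermediate masks disagree are treated as candidate noise and excluded from the supervision signal.

```python
def difference_mask(mask_a, mask_b):
    """Return a 2D boolean grid marking pixels where the two masks disagree."""
    return [[a != b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

def confident_labels(mask_a, mask_b, ignore_label=255):
    """Keep labels only where both masks agree; mark disagreements as ignore."""
    return [[a if a == b else ignore_label
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

# Two segmentation masks produced at different stages of a mapping pipeline.
seg_before = [[0, 0, 1],
              [0, 1, 1]]
seg_after  = [[0, 1, 1],
              [0, 1, 1]]

diff = difference_mask(seg_before, seg_after)     # candidate noise pixels
clean = confident_labels(seg_before, seg_after)   # supervision with noise masked out
```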

    Download PDF (652K)
  • Satoshi KOSUGI, Toshihiko YAMASAKI
    2020 Volume 59 Issue 6 Pages 585-590
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    Weakly supervised object detection, in which a detector is trained with only image-level annotations, is attracting increasing attention. A common way to obtain a well-performing detector is to update the detector and the instance labels iteratively. In this study, to make this iterative updating more efficient, we focus on the instance labeling problem, i.e., which label should be assigned to each region based on the most recent localization result, and propose two instance labeling methods. First, to address the problem that regions covering only part of the object tend to be labeled as positive, we find regions covering the whole object by focusing on the context classification loss. Second, considering that the other objects in the image can be labeled as negative, we impose a spatial restriction on the regions labeled as negative. Using these methods, we obtain the best results on the PASCAL VOC 2007 and 2012 datasets.
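For context, the common starting point that the paper refines is IoU-based instance labeling: regions are labeled by their overlap with the current top-scoring box. The sketch below is our illustrative baseline, not the authors' code; boxes are (x1, y1, x2, y2) and the thresholds are typical values, not the paper's.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def label_regions(regions, top_box, pos_thr=0.5, neg_thr=0.1):
    """Label each region positive/ignore/negative by IoU with the top box."""
    labels = []
    for r in regions:
        overlap = iou(r, top_box)
        if overlap >= pos_thr:
            labels.append("positive")
        elif overlap >= neg_thr:
            labels.append("ignore")   # ambiguous overlap: exclude from training
        else:
            labels.append("negative")
    return labels

top = (0, 0, 10, 10)
regions = [(0, 0, 10, 10), (0, 0, 5, 5), (20, 20, 30, 30)]
labels = label_regions(regions, top)
```

The paper's two contributions then modify which regions become positive (whole-object regions found via the context classification loss) and which are allowed to become negative (a spatial restriction).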

    Download PDF (1322K)
  • Mikihiro TANAKA, Takayuki ITAMOCHI, Kenichi NARIOKA, Ikuro SATO, Yoshi ...
    2020 Volume 59 Issue 6 Pages 591-600
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    For communication between humans and intelligent agents such as robots, it is important for agents to be able to tell humans what they see. In this article, we introduce the results of our research on generating sentences that not only refer to objects correctly but also let humans find them quickly. If the target is not salient, finding it becomes difficult, so we designed the model to utilize the salient contexts around the target (e.g., “beside a car”) to help humans locate it. Moreover, we optimized the generated sentences for comprehensibility using the time humans required to locate the referred objects and the accuracy of their responses. To evaluate our system, we created a new dataset using images from Grand Theft Auto V (GTA V). Experimental results showed that our system generated sentences that are easily comprehended by humans, especially for less salient targets.

    Download PDF (1795K)
  • Shunta MAEDA
    2020 Volume 59 Issue 6 Pages 601-606
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    This article is intended to provide an explanatory review on the paper, “Unpaired Image Super-Resolution using Pseudo-Supervision,” which we published in CVPR 2020 (IEEE/CVF Conference on Computer Vision and Pattern Recognition).

    In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In the paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset.
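The conventional paired-dataset construction that the article contrasts against can be shown with a toy stand-in (ours): LR images are synthesized from HR images with one fixed operator. Here 2x2 box averaging replaces bicubic, on a grayscale grid of floats; real pipelines would use an image library.

```python
def box_downscale_2x(hr):
    """Average each 2x2 block of the HR grid to synthesize an LR grid."""
    h, w = len(hr), len(hr[0])
    return [[(hr[y][x] + hr[y][x + 1] + hr[y + 1][x] + hr[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

hr_image = [[0.0, 4.0, 8.0, 8.0],
            [4.0, 8.0, 8.0, 8.0],
            [1.0, 1.0, 2.0, 2.0],
            [1.0, 1.0, 2.0, 2.0]]
lr_image = box_downscale_2x(hr_image)   # paired (LR, HR) training sample
```

Because real-world LR images are degraded by an unknown, more complicated process than any such fixed operator, models trained only on synthetic pairs like this tend to fail on them, which is what motivates the unpaired approach.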

    In this article, we classify existing single image super-resolution methods in terms of their training datasets, show how the proposed method resolves their shortcomings, and discuss potential applications to real-world use cases to address industrial demands.

    Download PDF (754K)
  • Atsuhiro NOGUCHI, Tatsuya HARADA
    2020 Volume 59 Issue 6 Pages 607-616
    Published: December 10, 2020
    Released on J-STAGE: December 10, 2020
    JOURNAL FREE ACCESS

    In this review, we introduce our proposed novel method for training an image generation model from only a small number of unknown-category images. Image generation models can learn the distribution of images from the training images and generate new images according to that distribution. Recent advances in image generation models have made it possible to generate high-quality images; however, the need for large training datasets has limited the application of such models. Therefore, in this study, we realized image generation from a small number of images by reusing the feature representations acquired by an image generator pre-trained on a large dataset and learning only how to combine those feature representations. The proposed method focuses on the batch statistics that contribute to this combination and trains only these parameters. This method enabled us to generate higher quality images from a small dataset (fewer than 100 images) compared to conventional methods.
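A minimal conceptual sketch (ours, not the authors' implementation) of adapting only batch-statistics parameters: the pre-trained weights stay frozen, and only the per-channel scale (gamma) and shift (beta) applied after normalization are updated for the new domain.

```python
def batch_stat_layer(features, gamma, beta, eps=1e-5):
    """Normalize a list of per-channel values, then rescale with gamma/beta."""
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    return [gamma * (f - mean) / (var + eps) ** 0.5 + beta for f in features]

# Frozen generator activations for one channel of a hypothetical pre-trained model.
feats = [1.0, 2.0, 3.0, 4.0]

# Adapting to a new domain updates only gamma/beta (two scalars per channel)
# rather than the millions of frozen convolution weights.
adapted = batch_stat_layer(feats, gamma=2.0, beta=0.5)
```

Because the normalized activations have zero mean, beta directly sets the channel's mean and gamma its spread, which is what makes these few parameters so effective at recombining frozen features.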

    Download PDF (3013K)
Imaging Highlight
Lectures in Science