The Journal of the Institute of Image Electronics Engineers of Japan
Online ISSN : 1348-0316
Print ISSN : 0285-9831
ISSN-L : 0285-9831
Volume 49, Issue 2
Displaying 1-13 of 13 articles from this issue
  • Yongchi XU, Jinlin XU
    2020 Volume 49 Issue 2 Pages 102-109
    Published: 2020
    Released on J-STAGE: July 31, 2023
    JOURNAL FREE ACCESS

    Multicolor printing inevitably involves the overprinting of ink layers and wet-on-wet printing. In this case, the ink trapping rate is defined to describe how much smaller the volume of the second ink printed on a wet ink layer is than when it is printed on a dried ink layer or on blank paper. To accurately predict the color of overprint solids, this paper builds a new color prediction model based on the simplest case, the two-color overprint solid. Firstly, the uneven distribution of the ink layers in a two-color overprint solid is approximated as evenly distributed layers, and the ink trapping rate is then given as the ratio of the thickness of the second ink layer printed on the wet first ink layer to its thickness on blank paper. Secondly, referring to a color prediction model based on the light scattering characteristics of the paper and ink layers, the paper explains why a simulation ink layer must be selected to measure the optical parameters of the ink layer, describes a new preparation method for the simulation ink layer, builds an algorithm that calculates the thickness of the ink layer from its reverse transmittance at a specific wavelength, provides a new production method for experimental samples that accurately simulates the overprinting process, emphasizes the necessity and method of correcting the reflectance of the simulation ink layer, and gives the color prediction model of the two-color overprint solid based on a solid print. Finally, a comparison of the predicted and measured values of the experimental samples shows that the proposed model can accurately predict the color of two-color overprint solids.
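
    The thickness-based trapping rate described above can be illustrated with a rough sketch. The Python fragment below is a minimal illustration under strong assumptions (a simple Beer-Lambert-style two-pass transmittance model and hypothetical function names); it is not the authors' model, which additionally prepares simulation ink layers and corrects their measured reflectance.

        import numpy as np

        def trapping_rate(thickness_on_wet_ink, thickness_on_paper):
            # Thickness-based ink trapping rate: ratio of the second-ink layer
            # thickness when printed wet-on-wet to its thickness on blank paper.
            return thickness_on_wet_ink / thickness_on_paper

        def overprint_reflectance(r_paper, t1, t2, trap):
            # Highly simplified two-layer overprint reflectance: light passes
            # through both ink layers, is reflected by the paper, and passes
            # through them again; the trapping rate scales the optical thickness
            # (and hence the transmittance) of the second layer.
            t2_effective = t2 ** trap
            return r_paper * (t1 * t2_effective) ** 2

        # Example with spectral arrays (one value per sampled wavelength):
        # r = overprint_reflectance(r_paper, t1, t2, trapping_rate(0.7, 1.0))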

    Download PDF (1720K)
  • Naoto KAWAMURA, Hyunho KANG, Keiichi IWAMURA
    2020 Volume 49 Issue 2 Pages 110-118
    Published: 2020
    Released on J-STAGE: July 31, 2023
    JOURNAL FREE ACCESS

    Digital watermarking by the Green-Noise Diffusion Method offers visually high image quality together with print resistance and reversibility, even when the watermark is embedded strongly. However, problems remain: the embedding capacity is not yet sufficient, and the block-type watermark is easily attacked. In this work, we increase the embedding capacity by multi-level embedding and reduce extraction errors by using both Gray codes and an error correction code, so that printing durability is further improved and watermark extraction from plain paper or soiled printed matter becomes possible. We also improve security against collusion attacks by introducing camouflage patterns. In annotation applications, a stable watermark can be extracted from a wide variety of images by applying smoothing to the images.
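
    As an illustration of why Gray codes help with multi-level embedding, the following sketch shows binary-reflected Gray coding, in which adjacent levels differ in only one bit, so a one-level extraction error corrupts a single bit that the error correction code can then repair. This is a generic illustration, not the authors' implementation.

        def to_gray(level):
            # Binary-reflected Gray code: adjacent integers map to codewords
            # that differ in exactly one bit.
            return level ^ (level >> 1)

        def from_gray(codeword):
            # Inverse mapping: XOR-fold the codeword back to a plain integer.
            level = 0
            while codeword:
                level ^= codeword
                codeword >>= 1
            return level

        # e.g. a 4-level (2-bit) symbol: levels 0..3 map to 0b00, 0b01, 0b11, 0b10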

    Download PDF (4685K)
  • Takumi KOBAYASHI, Yuta SUGIURA, Hideo SAITO, Yuji UEMA
    2020 Volume 49 Issue 2 Pages 119-127
    Published: 2020
    Released on J-STAGE: July 31, 2023
    JOURNAL FREE ACCESS

    This paper presents a 3D virtual eyeglasses try-on system for practical use. When fitting eyeglasses in a shop, consumers wish to look at themselves in a mirror while trying on various eyeglass styles. However, people who need eyeglasses to correct their eyesight cannot clearly observe their own face in the mirror without wearing eyeglasses, which makes fitting them for new eyeglasses difficult. Many studies on virtual glasses try-on have been conducted, and some eyewear brands have already put such systems into practical use, but none of these mixed reality systems consider the case in which users are wearing their own glasses. This research proposes a virtual try-on system that can be used while wearing eyeglasses: we virtually replace the user’s eyeglasses in the input video with new eyeglasses. In addition, a fast and accurate face tracking tool enables our system to automatically display the 3D virtual glasses following the user’s head motion. Moreover, by taking into account the distortion caused by the corrective lens and the refraction and reflection of light, it is possible to generate more realistic glasses by rendering a virtual lens. Experimental results demonstrate that the proposed method can render virtual glasses naturally while the user is wearing real eyeglasses.
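
    A minimal sketch of the head-pose step that such a system needs, assuming a landmark tracker that provides 2D positions of a few standard facial points and a known camera matrix; the generic 3D model coordinates and the OpenCV-based pose estimation shown here are illustrative assumptions, not the authors' face tracking tool.

        import numpy as np
        import cv2

        # Approximate 3D coordinates (mm) of six landmarks on a generic head model.
        MODEL_POINTS = np.array([
            [   0.0,    0.0,    0.0],   # nose tip
            [   0.0, -330.0,  -65.0],   # chin
            [-225.0,  170.0, -135.0],   # left eye, outer corner
            [ 225.0,  170.0, -135.0],   # right eye, outer corner
            [-150.0, -150.0, -125.0],   # left mouth corner
            [ 150.0, -150.0, -125.0],   # right mouth corner
        ], dtype=np.float64)

        def head_pose(image_points_2d, camera_matrix):
            # Estimate head rotation/translation from the tracked 2D landmarks,
            # so the renderer can draw the 3D virtual frames and lenses
            # following the user's head motion.
            dist_coeffs = np.zeros(4)   # assume no camera lens distortion
            ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points_2d,
                                          camera_matrix, dist_coeffs,
                                          flags=cv2.SOLVEPNP_ITERATIVE)
            return rvec, tvec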

    Download PDF (5721K)
  • Xin LU, Daiki SHIROSAWA, Akio KIMURA
    2020 Volume 49 Issue 2 Pages 128-135
    Published: 2020
    Released on J-STAGE: July 31, 2023
    JOURNAL FREE ACCESS

    In object recognition, HOG features and machine learning methods that learn them to identify objects are known to be extremely effective techniques. Recently, emphasizing feature variance and increasing feature resolution have also been proposed to further improve recognition performance. This paper computes multiple variants of the conventional HOG by combining various cell sizes and block sizes within an appropriate range, proposes a simple method to learn them, and evaluates the corresponding object recognition performance. In evaluation experiments using real images of faces and bodies, the results show that the proposed method achieves learning performance comparable to far more complicated recent methods such as CoHOG and MRCoHOG, although its execution time increases slightly compared to the conventional HOG.
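
    A minimal sketch of the multi-resolution idea, assuming scikit-image's HOG implementation; the specific cell and block sizes below are illustrative placeholders, not the ranges evaluated in the paper.

        import numpy as np
        from skimage.feature import hog

        CELL_SIZES  = [(4, 4), (6, 6), (8, 8)]   # hypothetical range of cell sizes
        BLOCK_SIZES = [(2, 2), (3, 3)]           # hypothetical range of block sizes

        def multi_size_hog(gray_image):
            # Concatenate HOG descriptors computed with several cell/block
            # sizes, so the classifier sees the image at multiple feature
            # resolutions at once.
            feats = []
            for cell in CELL_SIZES:
                for block in BLOCK_SIZES:
                    feats.append(hog(gray_image,
                                     orientations=9,
                                     pixels_per_cell=cell,
                                     cells_per_block=block))
            return np.concatenate(feats)   # feed to e.g. a linear SVM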

    Download PDF (1790K)
  • Kazuki NAKAMICHI, Huimin LU, Hyoungseop KIM, Kazue YONEDA, Fumihiro TA ...
    2020 Volume 49 Issue 2 Pages 136-143
    Published: 2020
    Released on J-STAGE: July 31, 2023
    JOURNAL FREE ACCESS

    Cancer is currently the leading cause of death in Japan, and early detection and treatment before metastasis occurs are known to be important. Circulating tumor cells (CTCs) are therefore expected to serve as a useful biomarker for new cancer tests. CTCs exist in the blood of patients with metastatic cancer, and pathologists analyze them to diagnose the condition of the cancer. Pathologists analyze blood samples from images taken with a fluorescence microscope, but this is time-consuming because the number of CTCs in the blood is very small. In this paper, we develop an automatic method for detecting CTCs in fluorescence microscopy images. In the proposed method, we detect cell regions by using a selective enhancement filter and blob analysis, and then identify CTCs by using SqueezeNet, a kind of convolutional neural network (CNN). The input image to SqueezeNet is a composite of three images taken with the fluorescence microscope. Applying the proposed method to 5,040 microscope images (6 cases) yielded a true positive rate (TPR) of 97.30% and a false positive rate (FPR) of 3.150%.
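
    A minimal sketch of the classification stage, assuming PyTorch/torchvision; adapting SqueezeNet's final 1x1 convolution to a two-class CTC/non-CTC output is an illustrative choice, and the candidate detection step (selective enhancement filter and blob analysis) is not shown.

        import torch.nn as nn
        import torchvision

        def build_ctc_classifier(num_classes=2):
            # SqueezeNet adapted to classify candidate cell patches as
            # CTC / non-CTC; the three fluorescence-channel images are
            # stacked as a 3-channel input, like an RGB image.
            model = torchvision.models.squeezenet1_1(weights=None)
            model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
            model.num_classes = num_classes
            return model

        # composite: tensor of shape (N, 3, 224, 224) built from the three channels
        # logits = build_ctc_classifier()(composite)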

    Download PDF (1318K)
  • Shun OHUE, Ai MORIKUBO, Yuko TASHIRO
    2020 Volume 49 Issue 2 Pages 146-153
    Published: 2020
    Released on J-STAGE: July 31, 2023
    JOURNAL FREE ACCESS

    We report on the 19th Visual Information Processing Camp (VIP Camp), held from September 21 to 23, 2019. The VIP Camp is held for students studying visual information processing and related research fields, and this year marked its 19th edition. It is planned and run every year by volunteer university students from all over Japan. The primary purpose of the Camp is to provide opportunities for practicing research presentations and for face-to-face discussion about them. The catchword of this year's Camp was "Next Stage". In keeping with the catchword, we arranged poster sessions with fast-forward presentations, a group work session in which participants discussed topics such as "What is a good poster presentation?", and so on. In this paper, we report the organizing activities for the 19th VIP Camp and discuss possibilities for improving future VIP Camps by analyzing the answers to participant questionnaires.

    Download PDF (4890K)