IIEEJ Transactions on Image Electronics and Visual Computing
Online ISSN : 2188-1901
Print ISSN : 2188-1898
ISSN-L : 2188-191X
Volume 5, Issue 2
Displaying 1-9 of 9 articles from this issue
  • Jaime SANDOVAL, Kazuma UENISHI, Munetoshi IWAKIRI, Kiyoshi TANAKA
    Article type: Contributed Paper --Special Issue on IEVC2017--
    2017 Volume 5 Issue 2 Pages 60-73
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    3D plane detection is an important task with numerous applications in urban environments. However, current methods do not deal appropriately with the noise and quantization artifacts of low-cost sensors. In this paper, we present the Scaled Difference of Normals, a point filter that addresses these issues and is implemented on top of the Fast and Deterministic Planes Detection Based on Hough Transform. We evaluated its precision by comparing the coefficients of the detected planes with semi-automatically generated ground-truth data, and confirmed that, compared with state-of-the-art methods, the proposed method is fast and achieves superior precision even in the presence of high noise levels, quantization artifacts, and variations in point distribution caused by registration.

    Download PDF (6146K)
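
The abstract above describes a Difference-of-Normals style point filter, but not its exact scaling, so the following is a minimal illustrative sketch of the classic idea under assumed parameters (two support radii r_small and r_large, and an angle threshold, all hypothetical): points whose estimated normals agree across both scales are kept as planar candidates for the subsequent Hough-based plane detection.

```python
# Minimal Difference-of-Normals point filter sketch (not the paper's exact
# "Scaled" formulation): keep points whose surface normals agree at two scales.
import numpy as np
from scipy.spatial import cKDTree

def pca_normal(points):
    """Unit normal of a neighborhood: right-singular vector of the smallest
    singular value of the centered point matrix."""
    c = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return vt[-1]

def don_filter(cloud, r_small=0.05, r_large=0.20, max_angle_deg=10.0):
    """Keep points whose normals at two support radii differ by less than a
    threshold; these are candidate planar points for Hough plane detection."""
    tree = cKDTree(cloud)
    keep = np.zeros(len(cloud), dtype=bool)
    cos_thresh = np.cos(np.radians(max_angle_deg))
    for i, p in enumerate(cloud):
        idx_s = tree.query_ball_point(p, r_small)
        idx_l = tree.query_ball_point(p, r_large)
        if len(idx_s) < 3 or len(idx_l) < 3:
            continue  # too few neighbors to estimate a normal
        n_s = pca_normal(cloud[idx_s])
        n_l = pca_normal(cloud[idx_l])
        # |dot| tolerates the sign ambiguity of PCA normals
        keep[i] = abs(np.dot(n_s, n_l)) > cos_thresh
    return cloud[keep]
```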
  • Yoshihiro TAKEMURA, Toru TAMAKI, Bisser RAYTCHEV, Kazufumi KANEDA
    Article type: Contributed Paper --Special Issue on IEVC2017--
    2017 Volume 5 Issue 2 Pages 74-82
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    This paper proposes a novel method for rendering wavelength-dependent phenomena, such as dispersion and interference of light, using Monte Carlo integration and importance sampling. The method, which we call tristimulus value rendering, is simple and generates a physically correct image with fewer computational resources than traditional spectral rendering. Tristimulus value rendering exploits a property of human visual perception to sample several sets of wavelengths of light in conjunction with importance sampling. Several examples, including the effects of dispersion and interference of light, demonstrate the usefulness of the proposed method.

    Download PDF (5092K)
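
As a rough illustration of the wavelength importance sampling that tristimulus value rendering relies on, the sketch below estimates the tristimulus value X = ∫ L(λ) x̄(λ) dλ by drawing wavelengths in proportion to the matching function. The single-Gaussian x̄(λ) and the toy radiance are stand-ins for illustration, not the paper's formulation or the true CIE 1931 curves.

```python
# Monte Carlo estimate of a tristimulus integral with wavelength importance
# sampling: draw lambda with pdf proportional to the matching function.
import numpy as np

rng = np.random.default_rng(0)

def xbar(lam):
    """Toy color-matching function: one Gaussian lobe peaking near 600 nm."""
    return 1.06 * np.exp(-0.5 * ((lam - 599.8) / 38.0) ** 2)

def radiance(lam):
    """Toy wavelength-dependent scene radiance (hypothetical)."""
    return 0.01 + 0.004 * np.sin(lam / 40.0) ** 2

def estimate_X(n=4096):
    """Unbiased estimate of X = integral of L(lambda) * xbar(lambda)."""
    lam = rng.normal(599.8, 38.0, n)  # samples proportional to xbar
    pdf = np.exp(-0.5 * ((lam - 599.8) / 38.0) ** 2) / (38.0 * np.sqrt(2 * np.pi))
    return np.mean(radiance(lam) * xbar(lam) / pdf)
```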
  • Masayuki MIYAMA
    Article type: Contributed Paper --Special Issue on IEVC2017--
    2017 Volume 5 Issue 2 Pages 83-91
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    Super-pixel segmentation over-divides an image so as to capture object boundaries, serving as preprocessing for advanced image processing tasks such as image segmentation and object recognition. This paper proposes a novel fast and accurate super-pixel segmentation algorithm. The proposed method starts from an initial regular-grid division and hierarchically repeats boundary updates using a local energy function. The energy function consists of a penalty term, expressing the color similarity between a pixel and a super-pixel, and a regularization term, expressing the spatial continuity of the super-pixel. Experimental results show that the proposed method is two to ten times faster than conventional methods with no significant decrease in accuracy; in particular, accuracy relative to boundary length is substantially higher. Object interiors are meaningfully divided, and object detail remains clear in images where each pixel is replaced with the average of its super-pixel. Furthermore, a dedicated processor based on this algorithm was designed. By adopting a line buffer, a 16-bank region table, forwarding of the left-neighbor result, and a cache for updating region data, the processor runs free of pipeline stalls and achieves a throughput of one pixel per cycle. Implemented on an FPGA operating at 30 MHz, the processor processes VGA video at 42.2 fps, five times faster than a software implementation running on a 3 GHz PC.

    Download PDF (6456K)
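
A minimal sketch of the boundary-update energy described above: a color-similarity penalty between a pixel and a super-pixel plus a spatial-continuity regularization term. The abstract does not give the paper's weights or hierarchical schedule, so the trade-off parameter lam and the SLIC-style squared-distance terms are assumptions.

```python
# Boundary-update step sketch: assign a boundary pixel to the adjacent
# super-pixel minimizing (color penalty) + lam * (spatial continuity penalty).
import numpy as np

def energy(pixel_color, pixel_xy, sp_mean_color, sp_centroid, lam=0.5):
    """Local energy of assigning a pixel to a super-pixel (assumed form)."""
    color_term = np.sum((pixel_color - sp_mean_color) ** 2)
    spatial_term = np.sum((pixel_xy - sp_centroid) ** 2)
    return color_term + lam * spatial_term

def update_boundary_pixel(pixel_color, pixel_xy, neighbor_sps):
    """Reassign a boundary pixel to the lowest-energy adjacent super-pixel.
    neighbor_sps: dicts with 'mean_color' and 'centroid' arrays."""
    return min(neighbor_sps,
               key=lambda sp: energy(pixel_color, pixel_xy,
                                     sp["mean_color"], sp["centroid"]))
```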
  • Masahiro TOYOURA, Takumi TANAKA, Atsushi SUGIURA, Xiaoyang MAO
    Article type: Short Paper --Special Issue on IEVC2017--
    2017 Volume 5 Issue 2 Pages 92-98
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    We focus on eye tracking by head motion. This type of eye tracking does not provide the most accurate results, but it does not require the user to wear cumbersome sensors such as glasses-mounted cameras. The approach suits many applications, such as estimating human attention from surveillance cameras or providing an intuitive interface for tablet devices. Through a preliminary experiment, we confirmed that head direction often differs greatly from eye direction. We propose estimating eye direction accurately from head motion history: a sequence of head directions and their differentials provides richer information than the head direction at a single moment. Using multiple regression analysis (MRA) and dynamic coupled component analysis (DCCA), we examined the relationship between eye direction and head motion history, reducing the error rate by 7.2% and 0.8% on average, respectively.

    Download PDF (3804K)
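
The multiple-regression side of the method can be pictured as below: eye direction is regressed on a short history of head directions and their frame-to-frame differentials. The window length and feature layout are assumptions; the abstract specifies only that the history and its differentials are used.

```python
# Least-squares regression from head-motion history to eye direction (sketch).
import numpy as np

def history_features(head_dirs, window=5):
    """Per frame, stack the last `window` head directions and their
    frame-to-frame differences into one feature vector."""
    T, D = head_dirs.shape
    diffs = np.diff(head_dirs, axis=0, prepend=head_dirs[:1])
    rows = []
    for t in range(window - 1, T):
        rows.append(np.concatenate([head_dirs[t - window + 1:t + 1].ravel(),
                                    diffs[t - window + 2:t + 1].ravel()]))
    return np.asarray(rows)

def fit_mra(head_dirs, eye_dirs, window=5):
    """Fit W so that history_features(head_dirs) @ W approximates eye_dirs."""
    X = history_features(head_dirs, window)
    Y = eye_dirs[window - 1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W  # predict with history_features(new_head_dirs, window) @ W
```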
  • Yasuhiro YAO, Harumi KAWAMURA, Akira KOJIMA, Tatsuya OSAWA, Yoshinori ...
    Article type: Contributed Paper --Special Issue on Application-Based Image Processing Technologies--
    2017 Volume 5 Issue 2 Pages 100-109
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    One purpose of augmented reality (AR) applications is to render virtual objects realistically in actual scenes. To achieve this, the objects' shading should be consistent with the scene's illumination. Although many methods have been proposed to achieve this shading consistency, they require specific light probes (such as a mirrored sphere), user annotations, or intensive scene geometry reconstruction; hence, they are not well suited to interactive AR applications in general user environments. In this paper, we propose a method that derives diffuse shading consistent with an actual scene by referring to an unspecified object. The object can be either shiny or matte, as long as its surface exhibits diffuse reflection. We experimentally confirmed that the method derives diffuse shading stably from objects with various specular shininess intensities and that the derived shadings are comparable to those of the most accurate conventional method. We also developed two systems to confirm that the method requires little user effort and runs in real time in scenes with dynamic illumination.

    Download PDF (6103K)
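
The abstract does not spell out the estimation model, so as a stand-in the sketch below fits low-order spherical-harmonics (SH) illumination to intensity/normal samples taken from a reference object, a common way to obtain scene-consistent diffuse shading; treat it as an illustration of the problem, not the paper's method.

```python
# Fit first-order SH lighting from a reference object, then shade a virtual
# object with the estimated coefficients (illustrative stand-in method).
import numpy as np

def sh_basis(n):
    """First 4 real SH basis functions at unit normals n (N x 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([np.full_like(x, 0.2821),  # Y_0^0
                     0.4886 * y, 0.4886 * z, 0.4886 * x], axis=1)

def fit_diffuse_shading(normals, intensities):
    """Least-squares SH lighting coefficients from observed samples."""
    A = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def shade(normals, coeffs):
    """Diffuse shading of a virtual object under the estimated lighting."""
    return sh_basis(normals) @ coeffs
```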
  • Junichi SUGITA, Tokiichiro TAKAHASHI
    Article type: Contributed Paper --Special Issue on Application-Based Image Processing Technologies--
    2017 Volume 5 Issue 2 Pages 110-122
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    The red-yellow-blue (RYB) color model is a subtractive model based on pigment color mixing and is widely used in art education. In the RYB color model, red, yellow, and blue are defined as the primary colors. In this study, we apply this model to computers by formulating a conversion between the red-green-blue (RGB) and RYB color spaces. In addition, we present a class of compositing methods in the RYB color space and prescribe their appropriate uses in different situations. With RYB color compositing, paint-like compositing can be achieved easily. We verified the effectiveness of the proposed method through several experiments and demonstrated applications based on RYB color compositing.

    Download PDF (3595K)
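
The abstract formulates an RGB/RYB conversion without giving its form; a widely cited earlier formulation (Gossett and Chen's trilinear interpolation over the RYB unit cube) is sketched below as an illustration of what such a conversion can look like, not as this paper's formula.

```python
# RYB -> RGB by trilinear interpolation of RGB values assigned to the eight
# corners of the RYB unit cube (Gossett & Chen's corner colors).
import numpy as np

# Indexed CORNERS[r][y][b] -> RGB.
CORNERS = np.array([
    [[[1.0, 1.0, 1.0], [0.163, 0.373, 0.6]],   # (0,0,0)=white, (0,0,1)=blue
     [[1.0, 1.0, 0.0], [0.0, 0.66, 0.2]]],     # (0,1,0)=yellow, (0,1,1)=green
    [[[1.0, 0.0, 0.0], [0.5, 0.0, 0.5]],       # (1,0,0)=red, (1,0,1)=purple
     [[1.0, 0.5, 0.0], [0.2, 0.094, 0.0]]],    # (1,1,0)=orange, (1,1,1)=black
])

def ryb_to_rgb(r, y, b):
    """Trilinear interpolation of the corner RGB values, r/y/b in [0, 1]."""
    c00 = CORNERS[0, 0, 0] * (1 - r) + CORNERS[1, 0, 0] * r
    c01 = CORNERS[0, 0, 1] * (1 - r) + CORNERS[1, 0, 1] * r
    c10 = CORNERS[0, 1, 0] * (1 - r) + CORNERS[1, 1, 0] * r
    c11 = CORNERS[0, 1, 1] * (1 - r) + CORNERS[1, 1, 1] * r
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - b) + c1 * b
```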
  • Natsuki TAKAYAMA, Hiroki TAKAHASHI
    Article type: Contributed Paper --Special Issue on Application-Based Image Processing Technologies--
    2017 Volume 5 Issue 2 Pages 123-133
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    The performance evaluation of partial blur segmentation algorithms is a crucial part of research, yet current evaluation criteria are used without sufficient discussion of the target requirements: the validity of performance metrics and statistical methods depends on the context of the particular problem. This study considers the evaluation of partial blur segmentation in the contexts of blurred/sharp region classification and segmentation, and discusses two key issues: (1) the performance metrics and (2) the scheme for summarizing metric values for accurate comparison. An experimental comparison of performance metrics shows that Informedness and its absolute value are the most suitable metrics for evaluating partial blur segmentation. The study employs the maximum and averaged metric values obtained by naive thresholding with multiple parameters; these values capture the performance and parameter sensitivity of algorithms in detail. Otsu's method and Graphcuts are used to evaluate how strongly performance depends on the segmentation algorithm in automatic applications. These approaches enable a more accurate comparison than previous evaluation criteria. Four state-of-the-art partial blur segmentation algorithms are evaluated with the proposed criteria; the results reveal detailed conditional dependencies of superiority and some previously overlooked characteristics of the algorithms.

    Download PDF (2276K)
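
Informedness is a standard metric (Youden's J): the true positive rate plus the true negative rate minus one. A minimal implementation for binary blurred/sharp masks follows; the per-image summarization choices discussed in the paper are separate from this definition.

```python
# Informedness (Youden's J) for binary blurred/sharp segmentation masks.
import numpy as np

def informedness(pred, gt):
    """pred, gt: boolean arrays where True marks blurred pixels."""
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tpr = tp / (tp + fn)        # recall on the blurred class
    tnr = tn / (tn + fp)        # recall on the sharp class
    return tpr + tnr - 1.0      # in [-1, 1]; 0 = chance-level performance
```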
  • Ran CHOI, Kota WATANABE, Hiroaki JINGUJI, Nobuyuki FUJITA, Yoji OGURA, ...
    Article type: Contributed Paper --Special issue on Image Processing for Life and Its Application--
    2017 Volume 5 Issue 2 Pages 135-144
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    Adolescent idiopathic scoliosis (AIS) causes serious health problems when left untreated after onset. In Japan, moire images obtained from moire screening systems have been widely used for early detection of AIS. However, this approach requires manual diagnosis after screening and classifies results into only two classes, normal or abnormal, which cannot provide the diagnostic information essential for treating AIS. We therefore propose a screening system that estimates spinal positions from a moire image using a convolutional neural network (CNN) and then automatically screens for spinal deformity from the estimated spine. The training dataset is generated by merging a moire image with the spine positions on a radiograph. The CNN-estimated spine is evaluated for scoliosis by the proposed measuring method, which calculates the Cobb angle, the standard measure for scoliosis diagnosis. Results show that the proposed system has low error compared with the published results of similar systems and the observer error of manual diagnosis. The proposed system not only screens the spine from the moire image alone, as an alternative to radiography, but also provides detailed spinal information for treatment.

    Download PDF (2410K)
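
As an illustration of the Cobb-angle measurement mentioned above, the sketch below takes a polyline of estimated spine positions and reports the largest angle between tangent directions along the curve, which mirrors the standard definition (the angle between the two most tilted vertebrae). The paper's measuring method may differ in detail.

```python
# Cobb angle from an estimated spine polyline (simplified tangent method).
import numpy as np

def cobb_angle(spine_xy):
    """spine_xy: (N, 2) points along the spine, top to bottom.
    Returns the largest tangent-angle spread along the curve, in degrees."""
    tangents = np.diff(spine_xy, axis=0).astype(float)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    angles = np.degrees(np.arctan2(tangents[:, 1], tangents[:, 0]))
    return np.max(angles) - np.min(angles)
```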
  • Wenjun ZHOU, Shun’ichi KANEKO, Dong LIANG, Manabu HASHIMOTO, Yutaka SA ...
    Article type: Contributed Paper --Special Issue on Visual Computing--
    2017 Volume 5 Issue 2 Pages 146-159
    Published: December 15, 2017
    Released on J-STAGE: July 20, 2018
    JOURNAL FREE ACCESS

    This paper presents a novel background subtraction method called co-occurrence pixel-block pairs (CPB) for detecting objects in dynamic scenes. Based on a “pixel to block” structure, it uses the correlation of multiple co-occurrence pixel-block pairs to detect objects, offering robust background subtraction against a dynamically changing background. We first propose a correlation measure for co-occurrence pixel-block pairs to build a robust background model. We then introduce a novel evaluation strategy, named the correlation-dependent decision function, for accurate object detection based on the correlation of co-occurrence pixel-block pairs. Finally, CPB estimates the foreground from a dynamic background with a sensitive criterion. We describe CPB in full detail and compare it with other background subtraction approaches. Experimental results on several challenging datasets demonstrate the effective performance of CPB.

    Download PDF (4371K)
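
A hedged sketch of the co-occurrence intuition behind CPB: over background frames, a target pixel and a supporting block elsewhere in the frame vary together, so the block's mean intensity predicts the pixel, and a pixel that strays from that prediction is labeled foreground. The paper's pair selection and correlation-dependent decision function are simplified here to a linear fit with an assumed 3-sigma tolerance.

```python
# Simplified co-occurrence pixel-block background model (illustrative only).
import numpy as np

def train_pair(pixel_series, block_series):
    """Fit pixel ~ a * block_mean + b over background training frames;
    return the fit plus a 3-sigma residual tolerance (assumed criterion)."""
    a, b = np.polyfit(block_series, pixel_series, deg=1)
    resid = pixel_series - (a * block_series + b)
    return a, b, 3.0 * resid.std()

def is_foreground(pixel_value, block_mean, a, b, tol):
    """Foreground when the co-occurrence prediction breaks down."""
    return abs(pixel_value - (a * block_mean + b)) > tol
```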