-
Article type: Cover
Pages: Cover1-
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Index
Pages: Toc1-
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Toshio YASUE, Tetsuya HAYASHIDA, Jun YONAI, Kazuya KITAMURA, Toshihisa ...
Article type: Article
Session ID: IST2013-28/ME2013-70
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A 33-Mpixel, 120-fps CMOS image sensor with higher sensitivity and low random noise was fabricated. By using a 0.11-μm CIS process, the prototype sensor achieved a conversion gain of 112 μV/e^-, 1.8 times higher than that of a conventional image sensor. As a result, the sensitivity and random noise of the sensor were 2.25 V/lx/s (measured under an A-light source with an IR-cut filter) and 2.47 e^-rms (measured at 120 fps with a gain of 6.8), respectively.
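As a quick consistency check on the figures quoted above, the random noise can also be expressed as an output voltage. The sketch below uses only numbers stated in the abstract; the resulting voltage figure is our own arithmetic, not a value from the paper:

```python
# Back-of-envelope check using the abstract's figures: conversion gain
# 112 uV/e- and input-referred random noise 2.47 e-rms.
conversion_gain_uV_per_e = 112.0   # microvolts per electron
random_noise_e_rms = 2.47          # electrons rms, at 120 fps and gain 6.8

# Input-referred noise expressed as an output voltage:
noise_uV_rms = conversion_gain_uV_per_e * random_noise_e_rms
print(f"random noise: {noise_uV_rms:.2f} uV rms")  # prints 276.64 uV rms
```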
-
Hiroshi SHIMAMOTO, Kazuya KITAMURA, Toshio YASUE, Tadaaki YANAGI, Tets ...
Article type: Article
Session ID: IST2013-29/ME2013-71
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
NHK is researching and developing Super Hi-Vision (SHV), the 8K version of UHDTV, as the next-generation broadcast system. We have developed a 120-fps, 33-megapixel (7,680 × 4,320 pixels) CMOS image sensor and a three-chip color image capture device. In this work, we developed a 4K-resolution, 120-fps liquid crystal display (LCD) monitor for the SHV capture device.
-
Yuta SASADA, Junichi AKITA
Article type: Article
Session ID: IST2013-30/ME2013-72
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Line-of-sight (LoS) detection measures eye movement and is attracting attention as a new type of user interface. The human eyeball also moves very rapidly in motions called saccades, which are expected to serve as another type of user interface. However, real-time tracking of saccades is difficult with conventional LoS detection systems. We have been developing an LoS detection system using a vision chip to achieve fast, low-latency processing, but the frame rate required to capture saccades has rarely been discussed. In this paper, we describe the relation between the eyeball's travel distance and speed during a saccade, and propose a frame rate sufficient to track saccades.
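The frame-rate question above can be framed as a simple budget: how often must the gaze be sampled so that it moves at most a given angle between consecutive frames? The sketch below is our own illustration, not the paper's analysis; the 500 deg/s peak velocity is a commonly cited figure for large saccades, and the 1-degree step tolerance is an assumed example value.

```python
# Minimum frame rate so the per-frame gaze displacement stays within a budget.
def min_frame_rate(peak_velocity_deg_s: float, max_step_deg: float) -> float:
    """Frames per second needed so gaze moves <= max_step_deg per frame."""
    return peak_velocity_deg_s / max_step_deg

# A saccade peaking near 500 deg/s, tracked to within 1 degree per frame:
print(min_frame_rate(500.0, 1.0))  # 500.0 fps
```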
-
Tatsuhiko SAKAGUCHI, Junichi AKITA
Article type: Article
Session ID: IST2013-31/ME2013-73
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Traditional image processing operates on a sequence of still images sampled in the time domain. The frame rate must be chosen to capture and track the moving objects of interest, for example, a high frame rate for faster objects. A higher frame rate, however, may incur an expensive processing cost for slower objects, and a fixed rate has difficulty adapting to drastic changes in speed. In this paper, we describe a motion detection algorithm based on an independent, time-continuous domain in each pixel. We also estimate the robustness of motion detection with this algorithm against external noise factors.
-
Shun KAWADA, Rihito KURODA, Shigetoshi SUGAWA
Article type: Article
Session ID: IST2013-32/ME2013-74
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We propose a color reproduction method based on the estimated spectral reflectance of objects in order to improve the color reproducibility of a single-chip image sensor with a White-RGB color filter. Virtual color filter information was newly introduced into the spectrum estimation in addition to the four physical color filters (White-RGB). The estimation accuracy was evaluated with the 24 patches of the Macbeth ColorChecker. The average color difference ΔEab over the 24 colors was 7.62 with only the physical RGB color filters; in contrast, it was reduced to 1.88 by the new estimation method, which combines the three physical color filters with the four newly added virtual color filters.
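The ΔEab metric used in the evaluation above is the CIE 1976 color difference, i.e. the Euclidean distance between two colors in L*a*b* space. A minimal reference implementation follows; the Lab triples in the example are arbitrary illustration values, not Macbeth ColorChecker data.

```python
# CIE 1976 Delta-E*ab: Euclidean distance between two (L*, a*, b*) triples.
import math

def delta_e_ab(lab1, lab2):
    """Color difference between two (L*, a*, b*) colors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Example with made-up Lab values:
print(delta_e_ab((50.0, 10.0, -5.0), (52.0, 7.0, -1.0)))  # sqrt(29) ~ 5.39
```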
-
Masashi Morita, Tetsuzo Fujita, Yutaka Nakano, Toshiyuki Yoshida
Article type: Article
Session ID: IST2013-33/ME2013-75
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
A multi-parameter motion estimation/compensation model is desirable for improving video coding efficiency. The authors have therefore proposed a new multi-parameter motion estimation technique based on reliability metrics, subject to a constraint on computational cost. This paper aims to improve the estimation accuracy and efficiency of the technique by detecting macroblocks that contain a single edge and reducing the estimation precision along that edge.
-
Masamitsu Taue, Shusuke Kondo, Toshiyuki Yoshida
Article type: Article
Session ID: IST2013-34/ME2013-76
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
The authors have proposed a video coding technique that maximizes an estimated spatio-temporal mean opinion score (EMOS), together with temporal and spatial MOS estimation techniques. Although we have evaluated the accuracy of the spatial MOS estimation technique by comparing its results with actually measured MOSs, it has not been compared with other widely used video quality measures. This paper therefore derives a coefficient set of the spatial MOS estimation technique for H.264 video encoders; the EMOSs estimated by the technique, actually measured MOSs, and SSIM values are then compared with each other to confirm the advantage of our technique over SSIM.
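For reference, SSIM, the measure the paper compares against, is defined (Wang et al., 2004) from the means, variances, and covariance of two signals. The sketch below implements the single-window form with the standard constants K1 = 0.01, K2 = 0.03 for an 8-bit dynamic range; practical SSIM computes this over sliding windows and averages, which this sketch omits.

```python
# Single-window SSIM between two equal-length intensity sequences.
import math

def ssim(x, y, L=255.0):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # stabilizing constants
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

print(ssim([100, 120, 130], [100, 120, 130]))  # 1.0 for identical signals
```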
-
Hiroki INANO, Akihiro KITAMURA, Kousuke IMAMURA
Article type: Article
Session ID: IST2013-35/ME2013-77
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this paper, we propose a moving-object extraction method using the level set method that considers depth information obtained from a single camera image. Many conventional methods for moving-object extraction are based on inter-frame difference information, but extraction accuracy easily degrades in uncovered background and occluded regions when only frame-difference information is used. In our proposed method, we improve the accuracy of moving-object extraction by designing a new speed function for the level set method that uses edge information and depth information obtained from a single image, as well as inter-frame difference information. Experimental results from computer simulations show that the proposed method provides good extraction accuracy when there is a significant difference in depth between the object and the background.
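The flavor of such a speed function can be sketched as below: the level-set front moves freely in flat regions and stalls wherever any cue (edge strength, frame difference, or a depth discontinuity) indicates an object boundary. The combination rule and weights here are our own illustrative choices, not the authors' formulation.

```python
# Illustrative level-set front speed combining three boundary cues.
def speed(edge: float, frame_diff: float, depth_grad: float,
          w_e: float = 1.0, w_f: float = 1.0, w_d: float = 1.0) -> float:
    """Front speed in (0, 1]; approaches 0 near evidence of a boundary."""
    return 1.0 / (1.0 + w_e * edge + w_f * frame_diff + w_d * depth_grad)

print(speed(0.0, 0.0, 0.0))  # 1.0: front moves freely in flat regions
print(speed(4.0, 3.0, 2.0))  # 0.1: front stalls at a strong boundary
```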
-
Takayuki Fujimura, Tadaaki Hosaka, Takayuki Hamamoto
Article type: Article
Session ID: IST2013-36/ME2013-78
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
Demand for Adaptive Driving Beam (ADB) technology, which controls vehicle headlights adaptively, has increased in recent years. Implementing ADB requires real-time detection of leading vehicles. In this paper, we propose a novel method for detecting and tracking leading vehicles at night. The framework of our method is to 1) detect white lines to define traffic lanes, 2) search for vehicles only within those lanes, and 3) track the captured vehicles in subsequent frames. A major benefit of our method lies in the reduction of false detections and processing time by restricting the search range. Experimental results quantitatively and qualitatively show the effectiveness of the proposed method.
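The payoff of step 2 above is easy to quantify: once the white lines bound the lane, only the columns between them need to be scanned. The sketch below is a hypothetical illustration (the image width and lane positions are made-up values, not the paper's data):

```python
# Restrict the vehicle search to the columns between the detected lane lines.
def lane_search_region(width: int, left_x: int, right_x: int) -> range:
    """Columns to scan after white-line detection bounds the lane."""
    return range(max(0, left_x), min(width, right_x))

cols = lane_search_region(width=1280, left_x=400, right_x=912)
print(f"search reduced to {100 * len(cols) / 1280:.0f}% of the image width")
# prints: search reduced to 40% of the image width
```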
-
Hironobu TAKANO, Akira MORI, Kiyomi NAKAMURA
Article type: Article
Session ID: IST2013-37/ME2013-79
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
In this study, the recognition performance of non-contact fingerprint recognition using either local luminance variation or the HOG feature was investigated with fingerprint images under 3-D rotation (roll, yaw, and pitch directions). Sixty fingerprint images of 15 subjects captured by a camera were used in the experiment. The experimental results show that the equal error rate (EER) gradually increased with the rotation angle in all rotation directions. The error rates for the HOG feature were lower than those for local luminance variation. Rotation correction in the roll direction using the finger shape significantly improved recognition performance for both features.
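The core of the HOG feature mentioned above is a magnitude-weighted histogram of gradient orientations per cell. The sketch below shows that one step with 9 unsigned-orientation bins, a common HOG configuration; the binning and example gradients are our own defaults, not necessarily the paper's settings.

```python
# Orientation histogram for one HOG cell, unsigned gradients in [0, 180).
import math

def cell_histogram(gx, gy, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations."""
    hist = [0.0] * n_bins
    bin_width = 180.0 / n_bins
    for dx, dy in zip(gx, gy):
        mag = math.hypot(dx, dy)
        ang = math.degrees(math.atan2(dy, dx)) % 180.0  # fold to unsigned
        hist[int(ang / bin_width) % n_bins] += mag
    return hist

# Three example gradient vectors at 0, 90, and 135 degrees:
h = cell_histogram([1.0, 0.0, -1.0], [0.0, 1.0, 1.0])
print(h)
```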
-
Tomiaki SAITOU, Hironori YAMAGISHI, Takayuki HAMAMOTO
Article type: Article
Session ID: IST2013-38/ME2013-80
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
We have studied methods of free-viewpoint image synthesis that operate in real time using a multi-view image sensor system. In the previous methods, the distance information required for free-viewpoint image synthesis could be estimated in real time, but the processing speed was rather low, so the frame rate of the real-time processing was insufficient. In this paper, we propose a new distance estimation method that accelerates the processing.
-
Article type: Appendix
Pages: App1-
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Appendix
Pages: App2-
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS
-
Article type: Appendix
Pages: App3-
Published: June 17, 2013
Released on J-STAGE: September 21, 2017
CONFERENCE PROCEEDINGS
FREE ACCESS