A 1-inch optical format, 80 fps, 10.8-Mpixel CMOS image sensor that employs a row-shared dual conversion gain pixel for both high-end video and still camera applications is presented. The 80 fps readout speed was chosen to meet the minimum strobe sync speed of 1/60 s required for a still camera without a mechanical shutter, and to reduce rolling-shutter distortion of moving objects. To satisfy this frame-rate requirement with a minimal increase in analog power consumption, a new pipelined pixel-to-ADC scheme was introduced. A noise floor of 1.6 e-rms and a column fixed-pattern noise (FPN) of 0.045 e-rms were obtained at the highest gain of 27 dB, while the maximum signal handling capacity is 25 ke- at the lowest gain, indicating that the pixel and column analog signals settle sufficiently within the 4.58 μs row time under the newly introduced pipelined operation.
A digital calibration algorithm is proposed for a 2-stage cyclic analog-to-digital converter (ADC) with 12-bit resolution used in a 33-Mpixel 120-fps CMOS image sensor for Super Hi-Vision. The algorithm corrects errors generated in the ADC by capacitor mismatch, finite amplifier gain, and incomplete settling. A simulation was performed to verify the proposed algorithm. The results show that the maximum differential nonlinearity (DNL) improves from +4.5/-1.5 LSB to +0.49/-0.48 LSB, and the maximum integral nonlinearity (INL) improves from +7.5/-1.5 LSB to +0.23/-0.27 LSB.
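The class of error this calibration targets can be illustrated with a toy model. The sketch below is a behavioral model of a 1.5-bit-per-cycle cyclic ADC, not the authors' circuit or algorithm; the gain value and cycle count are assumptions. It shows the essence of digital gain-error correction: reconstructing the output with the actual (measured) residue-amplifier gain instead of the ideal value of 2 removes the resulting nonlinearity.

```python
VREF = 1.0       # reference voltage (normalized), assumed
N_CYCLES = 12    # one 1.5-bit decision per cycle for ~12-bit resolution

def cyclic_adc(vin, gain):
    """Behavioral model of a 1.5-bit/cycle cyclic ADC.
    `gain` is the actual residue-amplifier gain (ideally 2)."""
    codes, v = [], vin
    for _ in range(N_CYCLES):
        d = 1 if v > VREF / 4 else (-1 if v < -VREF / 4 else 0)
        codes.append(d)
        v = gain * v - d * VREF          # residue amplification
    return codes

def reconstruct(codes, assumed_gain):
    """Digital reconstruction; substituting the measured gain for the
    ideal value of 2 is the core of gain-error calibration."""
    return sum(d * VREF / assumed_gain ** (i + 1)
               for i, d in enumerate(codes))
```

With a gain error (e.g. an actual gain of 1.95), reconstructing with `assumed_gain=2.0` leaves a large code-dependent error, while reconstructing with the actual gain recovers the input to within the quantization floor. The paper's algorithm additionally handles capacitor mismatch and settling errors, which this toy model omits.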
This paper describes dark-current characterization of two-stage charge-transfer pixels, which enable global shuttering and kTC-noise canceling. The proposed pixel uses pinned-diode structures for both the photodiode (PD) and the storage diode (SD), so a very low dark current is expected. The effects of negative gate biasing and the temperature dependence are discussed using device simulations and measurement results. The measured dark currents of the PD and SD under negative gate bias are 19.5 e-/s and 7.3 e-/s, respectively (26.8 e-/s in total), at an ambient temperature of 25°C (chip temperature approximately 30°C). This value is much smaller than that of conventional global-shutter pixels, demonstrating the effectiveness of the pinned storage diode.
The authors have developed an ultra-high-speed video camera with in-pixel storage and a backside-illuminated structure, named the "Backside Illuminated In-situ Storage Image Sensor (BSI ISIS)". To achieve a much higher frame rate, toward one gigaframe per second, a multi-collection-gate structure was proposed. Based on this structure, an example pixel design for a "Nanosecond-resolution image sensor (NanoSIS)" is reported. Each pixel has six collection gates, and an image sensor with this pixel design can be applied to a multi-framing camera. The travel time to the appropriate collection gate is less than 1 ns for electrons generated in most of the pixel area, except near the pixel boundary. Based on device-simulation results, this paper presents a preliminary analysis of the performance and possible design modifications toward sub-nanosecond temporal resolution.
In this paper, an ultraviolet (UV)-sensitive and highly robust Si photodiode technology based on an atomically flattened Si surface is summarized, and its application to a CMOS image sensor is demonstrated. By uniformly forming a high-concentration surface layer with a steep dopant profile on the flattened Si surface, nearly 100% internal quantum efficiency in the UV waveband and negligibly small degradation of photosensitivity were achieved for both n+pn and p+np photodiodes. The developed photodiode technology was applied to a 5.6 μm pixel-pitch front-side-illuminated CMOS image sensor. The fabricated sensor chip exhibited a spectral response over a wide waveband of 200-1000 nm, and no sensitivity degradation occurred after strong UV exposure stress.
A polarization-analyzing CMOS image sensor fabricated in a standard 65 nm process was designed and characterized. The polarization-analyzing pixel was realized with wire-grid structures formed in a metal wiring layer within the standard CMOS process. Taking advantage of the sub-100 nm process, a fine grid pitch was realized, and polarization-analyzing performance significantly higher than that of our previous sensors in a 0.35 μm CMOS process was obtained. Polarization imaging capability was demonstrated for a scene with local polarization variation. Aiming at further performance improvement, a subtraction readout scheme and a multiple-layer stacked on-pixel polarizer are proposed and discussed.
Conventional imaging systems have pixels arranged at regular lattice positions, i.e., lattice pixel placement (LPP). LPP is employed in most imaging systems because of its advantages for pixel readout and image reconstruction. In LPP, however, the clarity of the image representation depends on the orientation of the object in the image; that is, a directional dependency exists. In this paper, we propose pseudorandom pixel placement (PPP) to reduce this directional dependency in the accuracy of area measurement. We carried out simulations to evaluate the reduction in directional dependency for various pixel parameters, and we discuss the optimum pixel parameters and image resolution.
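The directional dependency of area measurement, and its reduction by pseudorandom placement, can be reproduced in a small Monte-Carlo experiment. The sketch below is our own illustration; the grid size, bar geometry, and jitter amplitude are assumptions, not the authors' parameters. It estimates the area of a thin bar at many orientations by counting sampling points that fall inside it, comparing lattice placement against jittered placement.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64            # sensor is N x N pixels (assumed)
AREA = 40 * 1.3   # true area of the test bar (length x width)

def pixel_centers(jitter):
    """Pixel sampling positions: jitter=0 is lattice placement (LPP);
    jitter=0.5 displaces each center pseudorandomly (PPP)."""
    xs, ys = np.meshgrid(np.arange(N) + 0.5, np.arange(N) + 0.5)
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    return pts + rng.uniform(-jitter, jitter, pts.shape)

def measured_area(pts, angle, length=40.0, width=1.3):
    """Estimate the area of a thin bar (a rotated rectangle through the
    image center) by counting sampling points inside it."""
    rel = pts - N / 2
    d = np.array([np.cos(angle), np.sin(angle)])
    along = np.abs(rel @ d)
    across = np.abs(rel @ np.array([-d[1], d[0]]))
    return np.count_nonzero((along < length / 2) & (across < width / 2))

angles = np.linspace(0, np.pi, 90, endpoint=False)
lpp = np.array([measured_area(pixel_centers(0.0), a) for a in angles])
ppp = np.array([measured_area(pixel_centers(0.5), a) for a in angles])
# directional dependency: spread of the area estimate across orientations
```

For the lattice, the count swings strongly with orientation (e.g. a near-horizontal bar straddles whole rows, a diagonal one hits only one diagonal of centers), while the jittered placement trades that systematic swing for small orientation-independent sampling noise.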
A CMOS image sensor that can selectively vary the exposure time of each pixel, supported by a modified X-Y addressing scheme and a pixel-sequential readout architecture, is discussed; it increases the intra-scene dynamic range for the macro-pixels into which the focal plane is divided. The proposed architecture provides finer granularity of exposure-time control than our previous image sensor, from a macro-pixel down to a single pixel. As a result, under-exposure caused by large illumination differences within a macro-pixel can be suppressed, which enables sophisticated high-level functionality in machine-vision applications based on scene recognition. We fabricated a test chip in a 0.18-μm 1P5M standard CMOS process, integrating twelve 10-b 2.5-MS/s pipelined analog-to-digital converters (ADCs) for on-chip digitization. The measurement results demonstrate high flexibility in extended-dynamic-range imaging, improved pixel fixed-pattern noise, and an ADC differential nonlinearity of +0.94/-1.38 LSB.
We propose and demonstrate a 3-D acquisition system with background-suppression capability based on a structured-light projection method. We designed and demonstrated a CMOS image sensor for frequency and phase detection using an in-pixel digital demodulation circuit. The demodulation circuits suppress background light and extract the modulated signal light, which encodes angular information in its frequency and phase. These in-pixel demodulation circuits enable efficient and robust demodulation. The modulated signal light is projected using a high-speed DMD projector. We demonstrated a signal-to-background ratio of -21 dB without any optical filters and a distance precision of 0.68 cm for the entire 3-D range-finding system.
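The background rejection performed by in-pixel demodulation can be illustrated with a one-pixel synchronous (lock-in) detection model. The sketch below is our own simplified illustration, not the sensor's actual circuit; the sample rate, modulation frequency, and light levels are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 100_000.0                 # sample rate in Hz (assumed)
f_mod = 5_000.0                # projector modulation frequency (assumed)
t = np.arange(2000) / fs       # one integration window

# light at one pixel: a strong unmodulated background plus a weak
# modulated component whose phase carries the angular information
true_phase = 1.1
pixel = (50.0 + 5.0 * rng.normal(size=t.size)                 # background
         + 1.0 * np.sin(2 * np.pi * f_mod * t + true_phase))  # signal

# synchronous demodulation against quadrature references: the DC
# background and uncorrelated noise average out over the window,
# while the modulated component survives
in_phase = 2 * np.mean(pixel * np.sin(2 * np.pi * f_mod * t))
quadrature = 2 * np.mean(pixel * np.cos(2 * np.pi * f_mod * t))
est_phase = np.arctan2(quadrature, in_phase)
```

Even with the background 50 times stronger than the signal, the recovered phase stays close to the true value, which is the same reason the sensor needs no optical filters to reject ambient light.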
Line-of-sight (LoS) detection is used for various user interfaces, and rapid eye movements, called saccades, are expected to enable new user interfaces. The authors propose a column-parallel architecture for a LoS-detection image sensor that achieves a high frame rate of 1,000 fps at a resolution of 640 x 480 pixels. An emulation system using a high-speed camera and an FPGA was developed for algorithm evaluation; operating at 1,000 fps, it succeeded in tracking saccadic movement. In addition, a preliminary integrated image sensor with 16 x 16 pixels has been designed.
The authors propose an algorithm for miniaturizing an optical tactile sensor that uses a thin, flexible elastomer membrane. The tactile sensor is composed of a flexible elastomer membrane, a layered (sheet-like) light source, and an ultra-miniature camera. Irradiating the deformed elastomer with the sheet-like light represents the deformation of the elastomer as contours. The two-dimensional images acquired by the camera are used to detect three-dimensional force information. In this study, quadratic polynomials were used to approximate the deformation of the elastomer. Three points, namely the vertex of the elastomer, its fixing point, and one point acquired by a slit light, are sufficient to estimate the parameters, leading to the miniaturization of the sensor. Evaluation experiments showed that our method can detect force with an error of 0.1 N or less compared with the output of a 6-axis force sensor.
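The three-point parameter estimation can be sketched directly: a quadratic y = ax^2 + bx + c has three unknowns, so the fixing point, one slit-light point, and the vertex determine it exactly. The coordinates below are hypothetical, for illustration only.

```python
import numpy as np

def fit_quadratic(p1, p2, p3):
    """Fit y = a*x**2 + b*x + c exactly through three (x, y) points,
    e.g. the fixing point, a slit-light point, and the vertex."""
    xs, ys = zip(p1, p2, p3)
    A = np.vstack([np.square(xs), xs, np.ones(3)]).T
    return np.linalg.solve(A, ys)          # coefficients (a, b, c)

# hypothetical membrane profile points in mm:
# fixing point, slit-light point, vertex region
a, b, c = fit_quadratic((0.0, 0.0), (2.0, 1.5), (5.0, 2.0))
peak_x = -b / (2 * a)    # location of maximum deformation along x
```

From the fitted coefficients, quantities such as the peak deformation can be read off and mapped to force through a calibration, which is why only three tracked points suffice and the sensor can be made small.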
We present a method for object tracking and image restoration in the dark in which the target motion and a reference image are estimated simultaneously within a Bayesian framework. To avoid being trapped in a local minimum during the gradient calculation, a broader search is performed by computing differences after applying a strong low-pass filter to the input images. Deblurring is then performed using the motion parameters estimated from the blurred images. As a result, we achieved object tracking and image restoration from simulated video with an SNR as low as -6 dB, and from real video captured in a dark environment of less than 0.05 lx illuminance at the subject surface. In addition, we examined the optimal frame rate for image restoration and found that a higher frame rate is better under relatively low noise, while a lower frame rate is better under heavy noise.
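The coarse-search idea, applying a strong low-pass filter before computing differences so the matching objective becomes smooth and broad, can be shown in a 1-D toy problem. This sketch is our own illustration under made-up signal, noise, and filter parameters, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def lowpass(x, k=15):
    """Strong circular moving-average low-pass filter."""
    ext = np.concatenate([x[-k:], x, x[:k]])    # wrap-around padding
    return np.convolve(ext, np.ones(k) / k, mode="same")[k:-k]

# 1-D stand-in for a video frame: periodic texture observed with an
# unknown shift and heavy noise
t = np.arange(400) / 400
ref = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 14 * t)
true_shift = 23
obs = np.roll(ref, true_shift) + rng.normal(0, 0.5, ref.size)

def objective(shift, smooth):
    """Sum of squared differences, optionally after strong low-pass
    filtering of both signals (the broader search)."""
    a, b = (lowpass(obs), lowpass(ref)) if smooth else (obs, ref)
    return float(np.sum((a - np.roll(b, shift)) ** 2))

shifts = np.arange(-50, 51)
coarse = int(shifts[np.argmin([objective(s, True) for s in shifts])])
```

The smoothed objective suppresses the fine texture and the noise that create spurious local minima, so a simple exhaustive or gradient search lands near the true shift; a fine search on the unfiltered signals can then refine it.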
Preferred viewing distance (PVD), which is the viewing distance favored in the home viewing environment, is important for overall TV broadcasting system assessments to guarantee the image quality of TV programs. Previous studies have suggested that image size and resolution have little effect on the PVD. However, in future TV systems such as ultra-high-definition TVs (UHDTVs), which are expected to have a wide field of view (FOV) and high spatial resolution, program production methods may change depending on the specific values of these parameters, and viewer behaviors may change accordingly. Here we used a program consisting of still images to show that viewer behavior differs between two spatial resolutions. This result suggests that the conditions in previous studies discouraged viewers from finding an optimized PVD for each scene because there was little need and insufficient time to move spontaneously.
Recent progress in coded imaging has made it possible to recover sharp images from undesirable blurs caused by depth and motion. However, existing methods are limited to a single type of blur, such as depth blur or motion blur. Coded imaging must also contend with the image noise caused by the reduced input light. In this paper, we propose a method that removes both depth blur and motion blur simultaneously by coding the image capture in both space and time. In particular, we show that by changing the aperture pattern efficiently, accurate image deblurring can be achieved even in the presence of image noise.
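A standard way to see why coded capture helps deblurring, shown here for the temporal (motion-blur) half of the problem, is to compare the frequency response of a conventional open-shutter exposure with that of a coded one: the box exposure has exact spectral zeros that destroy information, while a broadband code does not. This is the classic flutter-shutter intuition, illustrated with an arbitrary binary code of our own choosing, not the authors' specific space-time code.

```python
import numpy as np

# exposure patterns over 16 time slots, each with 8 "open" slots
box = np.array([1] * 8 + [0] * 8, dtype=float)    # conventional shutter
coded = np.array([1, 1, 0, 1, 0, 0, 1, 1,
                  0, 0, 1, 0, 1, 0, 1, 0],
                 dtype=float)                      # arbitrary binary code

def min_mtf(pattern):
    """Smallest non-DC magnitude of the pattern's frequency response;
    a zero means that spatial frequency of the scene is unrecoverable."""
    return float(np.min(np.abs(np.fft.fft(pattern))[1:]))
```

Deconvolution divides the blurred spectrum by this response, so the box shutter's exact zeros make some frequencies unrecoverable and amplify noise nearby, whereas a well-chosen code keeps every frequency invertible; the same reasoning extends to coding the aperture in space.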