Ever-increasing demand for high-quality displays has driven two decades of research in high dynamic range (HDR) imaging, which has the potential to replace standard dynamic range imaging. The most significant feature of HDR is the accurate reproduction of a scene with its entire spectrum of visible lighting and color depth. This capability, however, comes with expensive capture, display, storage, and distribution requirements. Additionally, displaying HDR video content on a standard display device with limited dynamic range requires some form of adaptation. Many adaptation algorithms, widely known as tone mapping (TM) operators, have been studied and proposed over these two decades. In this paper, we discuss the TM algorithms that have been implemented on graphics processing units (GPU)1-12), field-programmable gate arrays (FPGA)13-47), and application-specific integrated circuits (ASIC)48-60). Like other image enhancement algorithms, tone mapping is resource intensive; accordingly, it has to be accelerated in hardware for real-time performance. This adaptation effort leads to various design challenges encountered during hardware (HW) development. Any real-time application poses strict timing constraints that require time-exact processing of the algorithm. Moreover, most embedded systems have limited battery, computational power, and memory resources. These design challenges require novel solutions, and in this paper we focus on these HW issues. Recently, machine-learning (ML) algorithms have provided new approaches to various image processing problems, including tone mapping. Besides HW TMOs, we briefly review learning-based TM algorithms and discuss important architectures and datasets. Additionally, we present HDR-Net-like neural networks as ideal candidate baselines for realizing TM operators with deep neural networks (DNN) on hardware/embedded systems.
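To make the per-pixel workload that such hardware must accelerate concrete, the following is a minimal sketch of a simple global tone mapping operator in the style of Reinhard's photographic operator. It is an illustration, not one of the surveyed implementations; the function name, the key value `a`, and the default parameters are assumptions for this example.

```python
import numpy as np

def reinhard_global_tmo(hdr, a=0.18, eps=1e-6):
    """Global tone mapping sketch: compress HDR luminance into [0, 1].

    hdr : float array (H, W, 3), linear RGB radiance.
    a   : "key" value controlling overall brightness (illustrative default).
    """
    # Luminance from linear RGB (Rec. 709 weights)
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    # Log-average luminance of the scene
    lum_avg = np.exp(np.mean(np.log(lum + eps)))
    # Scale to the key, then apply the compressive curve L / (1 + L)
    lum_scaled = (a / lum_avg) * lum
    lum_mapped = lum_scaled / (1.0 + lum_scaled)
    # Re-apply color by scaling each channel with the luminance ratio
    ratio = lum_mapped / (lum + eps)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```

Even this simplest global operator needs a full-frame reduction (the log-average) before any pixel can be mapped, which is exactly the kind of data dependency that complicates streaming GPU/FPGA/ASIC implementations; local operators are heavier still.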
We present our work on a complementary metal oxide semiconductor (CMOS) image sensor that uses crystalline selenium as the photoconversion layer and enables avalanche multiplication at low voltage, with the goal of realizing a high-definition, high-sensitivity camera. Gallium oxide, used as a hole-blocking layer, and nickel oxide, used as an electron-blocking layer, effectively prevent the increase in external dark current caused by carrier injection from an external electrode. In addition, a new crystallization method was developed to improve the crystallinity of selenium for the fabrication of crystalline selenium films. Using these film structures and deposition conditions, we were able to capture high-quality images with a crystalline selenium-based CMOS image sensor and confirm signal amplification by a factor of approximately 1.4 at a reverse bias voltage of 22.6 V.
This study proposes to adapt the fundus camera for use as a personal healthcare tool. The proposed system uses near-infrared light to avoid blinding the subject and three-wavelength near-infrared imaging to acquire colorized fundus images. First, the optical system with the fundus camera was tested using a three-plate near-infrared snapshot camera. Subsequently, image processing and denoising techniques, including tracking and image integration, were applied to reduce the blur caused by biological scattering. Furthermore, a single-chip three-wavelength near-infrared-compatible image sensor is required for device miniaturization, for which a dielectric multilayer Fabry-Perot bandpass filter was adopted because its transmission wavelength can be easily controlled. In this study, the optical design of the dielectric multilayer structure, the fabrication process of the mosaic filter, and the chip mounting technology are investigated. The demosaicing process and color space conversion corresponding to the spectral response characteristics of the fabricated image sensor are also discussed.
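The colorization step after demosaicing can be pictured as a linear color space conversion from the three NIR channel responses to pseudo-RGB. The sketch below illustrates the idea only; the matrix values are hypothetical placeholders, whereas the real matrix would be derived from the fabricated sensor's measured spectral response characteristics.

```python
import numpy as np

# Hypothetical 3x3 conversion matrix (placeholder values, not measured data).
# Each row sums to 1 so that a flat (gray) NIR response maps to gray.
M_NIR_TO_RGB = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.2, -0.5,  1.7],
])

def nir_to_pseudo_rgb(nir):
    """Map a demosaiced 3-channel NIR image (H, W, 3), values in [0, 1],
    to a colorized pseudo-RGB image via a linear color space conversion."""
    rgb = nir @ M_NIR_TO_RGB.T
    return np.clip(rgb, 0.0, 1.0)
```

The negative off-diagonal entries model the cross-talk correction that is typically needed when the three Fabry-Perot passbands overlap spectrally.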
Holographic data storage (HDS) is a promising novel storage technology with the potential for higher recording density and data transfer rates. A computer-generated hologram (CGH) technique enables the optical configuration of HDS to be in-line and simple, and has therefore been applied to HDS. In our previous papers, two approaches were adopted to improve the storage density of CGH-based HDS: increasing the data density per data page and increasing the number of holograms multiplexed in a medium. Multilevel phase data pages and a phase retrieval technique based on Fourier fringe analysis were introduced, and a multiplexing technique based on reference wave correlation was also employed. In this paper, these studies are introduced individually, and finally the feasibility of combining them is discussed.
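The phase retrieval step based on Fourier fringe analysis can be sketched in one dimension as the classic Takeda-style procedure: Fourier-transform the fringe pattern, isolate the positive-frequency sideband around the carrier, shift it to baseband, and take the argument. This is a simplified illustration under assumed conditions (a clean cosine fringe with a known integer carrier frequency), not the authors' actual processing chain.

```python
import numpy as np

def fourier_fringe_phase(intensity, carrier_freq):
    """Retrieve a 1-D phase profile phi(x) from a carrier fringe pattern
    a + b*cos(2*pi*f0*x/n + phi(x))  (simplified Fourier fringe analysis).

    intensity    : 1-D fringe signal of length n.
    carrier_freq : integer carrier frequency f0 in cycles per array length.
    """
    n = len(intensity)
    spec = np.fft.fft(intensity)
    # Band-pass: keep only the positive sideband centered on f0,
    # rejecting the DC term and the negative sideband
    mask = np.zeros(n)
    lo, hi = carrier_freq // 2, carrier_freq + carrier_freq // 2 + 1
    mask[lo:hi] = 1.0
    analytic = np.fft.ifft(spec * mask)  # ~ (b/2) * exp(i(2*pi*f0*x/n + phi))
    # Remove the carrier, then the argument of what remains is phi(x)
    x = np.arange(n)
    return np.angle(analytic * np.exp(-2j * np.pi * carrier_freq * x / n))
```

For multilevel phase data pages the recovered phase would then be quantized back to the nearest phase level; here the sketch stops at the continuous phase estimate.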
This paper introduces an adaptive depth compression method that enables content directors to express their intentions for 3D views on light-field displays with limited depth reconstruction capability. Depth compression methods have demonstrated that scenes with significant depth can be shown with appealing subjective quality by contracting scene depth while taking the characteristics of human vision into account. The present study extends the method to add more flexibility to the 3D expression. We introduced a new depth remapping function with many parameters and optimized the parameter values to satisfy directors' production demands under the constraints of depth reconstruction capability and subjective quality. Objective and subjective evaluation experiments showed that the proposed method consistently allocated sufficient depth to specific objects in animated scenes and provided a stronger sense of depth and preferable 3D expression. These results suggest that our method will expand the availability of light-field displays in future 3DTV services.
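The core idea of a parametric depth remapping function can be illustrated with a simple piecewise-linear sketch that allocates a larger share of the display's reconstructable depth budget to a director-specified region of interest. This is not the paper's actual remapping function; the function name, the ROI parameterization, and the `roi_share` parameter are assumptions for this example, and the parameter values would in practice be optimized as the paper describes.

```python
import numpy as np

def remap_depth(z, roi=(0.4, 0.6), roi_share=0.6):
    """Piecewise-linear depth remapping (illustrative sketch).

    z         : scene depths normalized to [0, 1] (0 = nearest).
    roi       : (lo, hi) depth interval to emphasize, with 0 < lo < hi < 1.
    roi_share : fraction of the output depth range allocated to the ROI;
                a value above (hi - lo) expands the ROI's depth.
    """
    lo, hi = roi
    z = np.asarray(z, dtype=float)
    # Split the remaining output range between the near and far segments
    # in proportion to their input widths
    rest = 1.0 - roi_share
    near_out = rest * lo / (lo + (1.0 - hi))
    far_out = rest - near_out
    out = np.empty_like(z)
    near, far = z < lo, z > hi
    mid = ~near & ~far
    out[near] = z[near] / lo * near_out
    out[mid] = near_out + (z[mid] - lo) / (hi - lo) * roi_share
    out[far] = near_out + roi_share + (z[far] - hi) / (1.0 - hi) * far_out
    return out
```

With the defaults above, the ROI `[0.4, 0.6]` (20% of the input depth) receives 60% of the output range, tripling its depth gradient while the background is compressed; in the paper's setting the per-segment slopes would additionally be constrained by the display's depth reconstruction capability.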