In this paper, we propose and discuss design parameter modeling for design space exploration of real-time high dynamic range (HDR) synthesis on FPGAs. HDR synthesis is a technique that produces an image with a wider dynamic range from multiple camera images taken at different exposures. Mertens et al. proposed an adaptable HDR synthesis method based on visual criteria, which works without knowledge of exposure times or camera-specific parameters. Our research group has proposed a fully-pipelined FPGA implementation of a simplified version of Mertens' method, whose low latency makes it suitable for real-time applications such as autonomous vehicle systems. The implementation has two major design parameters that affect the quality of HDR synthesis: the number of input images and the number of image pyramid layers. Finding the best combination of these two parameters under the resource constraints of a target FPGA device is typically a difficult task requiring repeated runs of the time-consuming logic synthesis process. We therefore propose a mathematical model that estimates the resource usage of the HDR synthesis system to accelerate design space exploration. Since Block RAM (BRAM) is the dominant resource of the system, the model focuses on BRAM usage. Comparison results targeting a Zynq UltraScale+ MPSoC FPGA demonstrate the usefulness of the proposed model for efficient design space exploration. Although the model has some estimation error due to the allocation granularity of BRAM, the error is predictable because it mainly depends on two parameters (bit width and image width) that are independent of the parameters to be explored. Under a typical setting, for example, the error is approximately 12.5% and almost constant.
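The paper's actual resource model is not reproduced here, but the idea behind it can be illustrated with a toy estimator. The sketch below is a minimal Python illustration, assuming a line-buffer-based image pyramid and 18 Kb BRAM primitives as found on Zynq UltraScale+ devices; the function names, the four-lines-per-layer figure, and the default image parameters are illustrative assumptions, not the paper's model. The ceiling to whole BRAM blocks is the source of the granularity error mentioned above.

```python
import math

def bram_blocks(depth_pixels: int, bit_width: int, bram_kbits: int = 18) -> int:
    """BRAMs for one line buffer of `depth_pixels` entries of `bit_width` bits,
    rounded up to whole 18 Kb blocks (hence the granularity error)."""
    return math.ceil(depth_pixels * bit_width / (bram_kbits * 1024))

def estimate_brams(n_inputs: int, n_layers: int,
                   image_width: int = 1920, bit_width: int = 8,
                   lines_per_layer: int = 4) -> int:
    """Toy total: each pyramid layer halves the image width, each input image
    needs its own set of line buffers per layer (illustrative structure only)."""
    total = 0
    for layer in range(n_layers):
        width = max(1, image_width >> layer)  # width halves at each layer
        total += n_inputs * lines_per_layer * bram_blocks(width, bit_width)
    return total
```

Because the block count depends on `bit_width` and `image_width` only through the per-buffer ceiling, the rounding error stays roughly constant as the two explored parameters (`n_inputs`, `n_layers`) vary, mirroring the behavior described in the abstract.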
We also conduct design space exploration considering the quality of HDR synthesis based on the Multi-Exposure Fusion Structural Similarity index (MEF-SSIM); the results suggest that a 6-input, 7-layer configuration provides the best trade-off between resource usage and HDR synthesis quality under the given constraints.
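Once resource usage can be estimated analytically, the exploration itself can be a simple exhaustive sweep. The sketch below is a generic brute-force search, not the paper's code: the `resource_of` and `quality_of` callbacks are hypothetical stand-ins for the BRAM model and the MEF-SSIM evaluation.

```python
from itertools import product

def explore(input_counts, layer_counts, resource_of, quality_of, budget):
    """Exhaustive DSE: among (n_inputs, n_layers) configurations whose
    estimated resource usage fits the budget, return the highest-quality one."""
    best, best_q = None, float("-inf")
    for cfg in product(input_counts, layer_counts):
        if resource_of(cfg) <= budget and quality_of(cfg) > best_q:
            best, best_q = cfg, quality_of(cfg)
    return best
```

With an analytical resource model in place of logic synthesis, each candidate is evaluated in microseconds rather than hours, which is the speed-up that motivates the model.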