2026, Vol. 14, No. 1, pp. 2-10
We propose a hybrid reconstruction method for coded light-field imaging. Most previous methods rely on pre-trained reconstruction: the reconstruction process is first trained on a light-field dataset collected from various 3-D scenes and then applied to new target scenes. However, such pre-trained reconstruction is not necessarily optimal for a specific 3-D scene and sometimes yields insufficient quality for fine details. To address this issue, we first introduce a self-supervised reconstruction method that focuses on the data observed from the specific target scene. To this end, we incorporate a learning-based 3-D representation technique, neural radiance fields (NeRF), into the framework of coded light-field imaging. Moreover, we seamlessly combine the pre-trained and self-supervised approaches to exploit the strengths of both. Experimental results demonstrate that our method consistently achieves better reconstruction quality than previous pre-trained methods across various 3-D scenes.
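The core of the self-supervised approach described above can be illustrated with a toy sketch: a scene estimate is optimized so that, after passing through the known coding (measurement) operator, it reproduces the coded observations of that specific scene, with no external training dataset. The sketch below is a minimal, hypothetical stand-in, assuming a simplified coded-aperture model in which each light-field view is modulated by a known mask and summed onto a single sensor image; it uses plain gradient descent on a direct per-ray estimate rather than an actual NeRF, and all names and dimensions are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light field: V views of P pixels each (flattened).
V, P = 4, 16
L_true = rng.random((V, P))        # ground-truth scene (unknown at test time)

# Known coding operator: per-view masks, then summation onto one sensor image
# (a simplified coded-aperture acquisition model, assumed for illustration).
masks = rng.random((V, P))
y = (masks * L_true).sum(axis=0)   # the observed coded image

# Self-supervised reconstruction: fit a scene estimate so that re-applying
# the coding operator matches the actual observation of this scene.
L_est = np.full((V, P), 0.5)
lr = 0.05
for _ in range(2000):
    residual = (masks * L_est).sum(axis=0) - y   # forward-model error
    L_est -= lr * masks * residual               # gradient of 0.5 * ||residual||^2

final_loss = float(np.linalg.norm((masks * L_est).sum(axis=0) - y))
```

In the paper's actual method, the per-ray estimate `L_est` would be replaced by a NeRF whose rendered views are fed through the coding operator, so the same measurement-consistency loss trains the 3-D representation end to end; the problem is also underdetermined, which is why a pre-trained prior is combined with this scene-specific fitting.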