This paper focuses on the sampling problem in light field rendering (LFR), a fundamental approach to image-based rendering. The quality of LFR depends on the light-ray database generated from pre-acquired images, since image synthesis is the process of gathering appropriate light-ray data from this database. Interpolating light-ray data is effective for improving quality; the interpolation relies on the assumption that objects in the scene lie on a plane called the "focal plane". Depending on the depth of the focal plane (the distance between the cameras and the focal plane), a focus-like effect appears in the synthesized images. In this paper, we formulate the depth of field in LFR to characterize the range of depths over which scene objects are rendered in focus. Our theory is based on the plenoptic sampling theory and subsumes several related works. The proposed concept is applicable to intuitive measurement of synthesis quality, configuration of sampling conditions, and evaluation of spatial coding methods.