Abstract
Recently, studies of daily-living support in intelligent spaces, such as informational support delivered on a smart device via AR technology, have attracted much attention. Providing such support information requires localizing the mobile camera. However, localization based on interest point detectors fails in textureless scenes, and drift inevitably accumulates during localization. In this paper, we propose a framework that corrects the errors accumulated in mobile camera localization by exploiting human pose information obtained from our markerless motion capture system.