We propose a method for visualizing, as a three-dimensional (3D) heat map, the gaze distribution measured from an observer who perceives dynamism in a subject's movement in a movie, in a manner independent of time-series changes in the subject's posture. A previous method represents the observer's gaze distribution as a heat map on a 3D human body model estimated from the subject in a still image. However, that method cannot handle time-series changes in the subject's posture in a movie. Furthermore, it cannot visualize the gaze distribution in the region surrounding the subject's body because it considers only gaze measured on the body's surface. Our method introduces the angle between the gaze direction vector and the vertex position vector to visualize the gaze distribution in both the surface and surrounding regions. The experimental results confirm that, unlike the previous method, our method visualizes the gaze distribution measured not only in the surface region but also in the surrounding region. We also confirm that the gaze distribution over a subject's movements in a movie can be visualized independently of time-series posture changes by using a standard-posture human body model.
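The abstract specifies only that the angle between the gaze direction vector and the vertex position vector drives the visualization. A minimal sketch of that core step is given below; the function name, the Gaussian angular falloff, and the `sigma_deg` bandwidth are assumptions introduced for illustration, not details from the paper.

```python
import numpy as np

def gaze_heat(eye_pos, gaze_dir, vertices, sigma_deg=5.0):
    """Per-vertex heat from one gaze sample (hypothetical falloff model).

    eye_pos:   (3,) observer eye position
    gaze_dir:  (3,) unit gaze direction vector
    vertices:  (N, 3) vertices of the standard-posture body model
    sigma_deg: angular bandwidth in degrees (assumed parameter)
    """
    # Vertex position vectors, seen from the eye, normalized to unit length.
    to_vertex = vertices - eye_pos
    to_vertex = to_vertex / np.linalg.norm(to_vertex, axis=1, keepdims=True)

    # Angle between the gaze direction and each vertex position vector.
    cos_theta = np.clip(to_vertex @ gaze_dir, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_theta))

    # Small angles mean the gaze passes near the vertex, so gaze landing in
    # the surrounding region (not on the surface) still contributes heat.
    return np.exp(-0.5 * (theta / sigma_deg) ** 2)
```

Accumulating these per-vertex values over all frames, via the fixed vertex correspondence of the standard-posture model, would yield a heat map that does not depend on the posture in any individual frame.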
The video sequence acquired from a camera can be used to estimate a person's position accurately. However, the camera cannot estimate the person's position when its field of view is unavoidably blocked by temporary shielding. In recent years, several methods have been proposed to estimate a person's position using radio waves, whose wavelengths are longer than those of visible light and which are therefore unaffected by shielding. However, collecting training samples consisting of pairs of radio wave strength and person position is difficult because, unlike video sequences from a camera, the radio wave strength acquired from wireless devices cannot be intuitively annotated by humans. We propose a method for estimating a person's position, even under temporary shielding, by combining a camera with wireless devices. Specifically, our method automatically collects training samples of radio wave strength and person position while no shielding occurs. When shielding occurs, our method estimates the person's position with a regression model using only the radio wave strength acquired from the wireless devices. The experimental results show that the estimation error was 15.9 ± 5.2 cm when one person was present in a poster panel area with temporary shielding and 18.7 ± 6.6 cm when two persons were present in that area at the same time.
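The abstract does not name the regression model or the signal format, so the sketch below is only one plausible instantiation: a random forest mapping radio wave strength (RSSI-style readings from four devices) to a 2D floor position, with synthetic data standing in for the camera-collected training pairs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training samples, collected automatically while the camera can
# see the person: radio wave strength from 4 wireless devices (e.g., RSSI in
# dBm) paired with the camera-estimated floor position (x, y) in cm.
rssi_train = rng.uniform(-80.0, -30.0, size=(500, 4))
pos_train = rng.uniform(0.0, 300.0, size=(500, 2))

# The regressor choice is an assumption; the paper only states that a
# regression model maps radio wave strength to position.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(rssi_train, pos_train)

# When shielding blocks the camera, fall back to the radio-wave-only estimate.
rssi_now = rng.uniform(-80.0, -30.0, size=(1, 4))
x, y = model.predict(rssi_now)[0]
print(f"estimated position: ({x:.1f} cm, {y:.1f} cm)")
```

The key design point carried over from the abstract is the switching behavior: the camera supplies labels for free while visibility holds, and the learned radio-wave model takes over only during shielding.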
We propose a new pixel-selection template matching method that imposes conditions on pixel selection to ensure robustness against rotation. Without impairing the high-speed processing that characterizes pixel-selection template matching, the method improves matching accuracy when rotational distortion occurs, relative to the undistorted case, by moving each selected pixel to a position where its value changes less under that distortion.
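The paper's exact selection conditions are not spelled out in the abstract. The sketch below conveys the general idea under two assumptions: the matching score is a sum of absolute differences over the selected pixels only (the usual formulation of pixel-selection matching), and rotation stability is approximated by keeping the pixels whose template values change least across small rotated copies. Both the stability measure and the `±max_deg` range are illustrative, not the authors' conditions.

```python
import numpy as np
from scipy.ndimage import rotate

def match_score(image_patch, template, selected):
    """SAD over the selected pixels only: the speed of pixel-selection
    matching comes from evaluating this small subset, not every pixel."""
    ys, xs = selected[:, 0], selected[:, 1]
    return np.abs(image_patch[ys, xs].astype(int)
                  - template[ys, xs].astype(int)).sum()

def select_rotation_stable_pixels(template, n_pixels=64, max_deg=10):
    """Assumed stability criterion: prefer positions whose values change
    least when the template is rotated by up to +/- max_deg degrees."""
    diffs = np.zeros(template.shape, dtype=float)
    for deg in (-max_deg, max_deg):
        rot = rotate(template.astype(float), deg, reshape=False, mode="nearest")
        diffs = np.maximum(diffs, np.abs(rot - template))
    flat = np.argsort(diffs, axis=None)[:n_pixels]  # most stable positions
    return np.column_stack(np.unravel_index(flat, template.shape))
```

Because the stability check runs once, offline, on the template, the per-frame matching cost stays proportional to the number of selected pixels, which is how the approach can gain rotation robustness without sacrificing speed.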