Abstract
A sound rendering system comprising a loudspeaker in front of a listener, a fully open-air headphone set, and adaptive filters is described. The system cancels the loudspeaker sound at one ear of the listener and, in addition, generates a delayed and attenuated version of the loudspeaker sound; the delay and attenuation are adjusted to control the sound image direction. Unlike conventional systems, this adjustment is accomplished irrespective of the listener's position. The performance was evaluated in terms of the estimation error and the perception of the sound image. The estimation error was simulated under the assumption of an impulsive head movement and was insignificant for small or slow movements. The sound image direction and distance were investigated psychoacoustically. The sound image direction was controllable from left to right, and there was no significant difference between the perceived distance with the proposed system and that with an actual source. These results indicate that the proposed system enables individual localization control over a frontal semicircle.
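As an illustration of the delay-and-attenuation idea described above, the following minimal sketch (not taken from the paper) applies an adjustable delay and gain to a loudspeaker signal; the sampling rate, delay, and gain values are hypothetical, and the adaptive cancellation filter is not shown.

```python
import numpy as np

fs = 48000            # sampling rate in Hz (assumed)
delay_samples = 24    # delay of ~0.5 ms (assumed)
gain = 0.7            # attenuation factor (assumed)

def delayed_attenuated(x, delay_samples, gain):
    """Return a delayed and attenuated copy of the loudspeaker signal x."""
    y = np.zeros_like(x)
    y[delay_samples:] = gain * x[:-delay_samples]
    return y

# Stand-in for the loudspeaker signal; the delayed/attenuated copy would be
# presented through the open-air headphones while the adaptive filter
# (omitted here) cancels the direct loudspeaker sound at one ear.
x = np.random.randn(fs)
y = delayed_attenuated(x, delay_samples, gain)
```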