2020 Volume 41 Issue 1 Pages 245-248
The perception of acoustic motion is not uniform as a function of azimuth; listeners need roughly twice as much motion at the side as at the front to judge the two motions as equivalent. Self-generated acoustic motion perception has also been shown to be distorted: sounds that move slightly with the listener's head are more consistently judged to be world-stable than those that are truly static. These distortions can be captured by a model that incorporates a head-centric warping of perceived sound location, characterized by a displacement of apparent sound location away from the acoustic midline. Such a distortion has been demonstrated; listeners tend to overestimate azimuth when asked to point at a sound source while keeping their head and eyes fixated straight ahead. Here we show that this mathematical framework can be inverted, and we demonstrate the benefits of re-mapping sound source locations toward the auditory midline. Listeners preferred different amounts of spatial remapping, but none preferred zero remapping. Modelling shows minimal impact on spatial release from masking for small amounts of remapping, demonstrating that it is possible to achieve a more stable perceptual environment without sacrificing speech intelligibility in spatially complex environments.
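The inversion idea can be sketched in a few lines. This is a hypothetical illustration only: the paper's actual warping model is not reproduced here, and the multiplicative form and the gain value `k` below are assumptions chosen solely to show how an overestimating warp would be inverted to compress rendered azimuths toward the midline.

```python
# Hypothetical sketch of midline remapping, assuming a simple
# multiplicative warp in which perceived azimuth overshoots the true
# azimuth by a gain k > 1 (listeners overestimate azimuth).
# Neither the functional form nor k = 1.3 comes from the paper.

def perceived_azimuth(true_az_deg: float, k: float = 1.3) -> float:
    """Assumed head-centric warp: apparent location is displaced away
    from the acoustic midline (0 degrees) by a gain k."""
    return k * true_az_deg

def remap_azimuth(source_az_deg: float, k: float = 1.3) -> float:
    """Inverse of the assumed warp: compress rendered source azimuths
    toward the midline so the warped percept lands near the intended
    angle."""
    return source_az_deg / k

# A source intended at 30 degrees is rendered at roughly 23 degrees,
# so that after the assumed perceptual warp it is heard near 30.
rendered = remap_azimuth(30.0)
print(round(rendered, 1), round(perceived_azimuth(rendered), 1))
```

Under this toy model, varying `k` per listener would correspond to the individually preferred amounts of remapping reported above, with `k = 1` being the (never-preferred) no-remapping case.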