Over the last 50 years or so, a large body of empirical research has demonstrated the importance of a variety of low-level spatiotemporal factors in the multisensory integration of auditory and visual stimuli (as indexed, for example, by research on the ventriloquism effect). Here, the evidence highlighting the contribution of both spatial and temporal factors to multisensory integration is briefly reviewed. The role played by the temporal correlation between auditory and visual signals, stimulus motion, intramodal versus crossmodal perceptual grouping, semantic congruency, and the unity assumption in modulating multisensory integration is also discussed. Taken together, the evidence now supports the view that a number of different factors, both structural and cognitive, conjointly contribute to the multisensory integration (or binding) of auditory and visual information.
In the acoustic design of special spaces such as studios, it is sometimes necessary to realize diffuse reflection from a specific wall surface, and this requirement often changes with time or situation. In this study, a new acoustic wall system with the potential to cope with such strict requirements is proposed. The proposed system varies its reflection characteristics by radiating additional sound from a source located behind the absorbing material. If no sound is emitted from the source, the system can be used as a simple sound-absorbing material; if sound of equal amplitude to the incident sound is emitted, it behaves as a rigid wall. Furthermore, a trial was carried out to make the radiated sound diffuse by using appropriate devices. Several models are proposed, and fundamental characteristics such as directivity were examined. The results show the possibility of realizing uniform reflection directivity and variable absorption characteristics.
Howling cancelation is an important issue in realizing both monaural and binaural hearing assistant systems, including hearing aids. Although binaural hearing assistant systems have become popular, most of them use two independent monaural howling cancelation systems without exploiting the advantages of binaural processing. In this paper, we propose a simple howling cancelation method for a binaural hearing assistant system using the interaural level difference. The interaural level difference computed from the observed signals is compared with an interaural level difference database prepared a priori to determine whether howling is occurring. A simulation is performed to verify the feasibility of implementing the proposed method in a binaural hearing assistant system. Since the howling canceler works frame by frame because of its fast Fourier transform based processing, its performance is measured with respect to the howling margin, which characterizes how the canceling performance depends on the frequency resolution. The results show that the proposed method suppresses multiple oscillating frequency components simultaneously. In a simulation of the effect of frequency resolution, the maximum howling margin reached 3.0 dB.
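The detection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the FFT size, the ILD bounds standing in for the a priori database, and the function name are all assumptions. Frequency bins whose ILD falls outside the range plausible for an external acoustic source are flagged as howling candidates.

```python
import numpy as np

def detect_howling(left_frame, right_frame, ild_min, ild_max, fft_size=512):
    """Flag frequency bins whose interaural level difference (ILD)
    is implausible for a real external source.

    ild_min/ild_max stand in for the a priori ILD database mentioned
    in the abstract (assumed bounds, in dB)."""
    L = np.fft.rfft(left_frame, fft_size)
    R = np.fft.rfft(right_frame, fft_size)
    eps = 1e-12  # avoid log of zero in empty bins
    # per-bin ILD in dB (left level relative to right level)
    ild = 20.0 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))
    # bins outside the plausible ILD range are howling candidates
    return (ild < ild_min) | (ild > ild_max)
```

A suppression stage would then attenuate only the flagged bins before resynthesis, which is how multiple oscillating components can be handled simultaneously.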
In this paper, we propose a method of estimating the reflection coefficients of an adaptive lattice filter. In this method, conventional adaptive algorithms, for example, the normalized least mean square (NLMS) algorithm, are used for the estimation. In general, the reflection coefficients are estimated as cross-correlation coefficients between the forward and backward prediction errors in each stage of the adaptive lattice filter. Accordingly, two divisions are required in each stage, that is, twice as many divisions as stages in total. The problem is that the processing cost of a division is higher than that of a multiplication, especially on inexpensive digital signal processors (DSPs), so reducing the number of divisions is strongly desired in practice. The proposed technique decreases the number of divisions to one, provided that the NLMS algorithm is used. Moreover, system identification is an important application of the adaptive lattice filter; we also present a system identification technique derived from the proposed method.
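For orientation, the following sketch shows one stage of a conventional gradient-adaptive lattice with a normalized (NLMS-style) reflection-coefficient update; the division by the running power estimate in the update is exactly the kind of per-stage cost the paper aims to reduce. This is the baseline scheme, not the proposed method, and the step size, leak factor, and function name are assumptions.

```python
import numpy as np

def lattice_stage(f_in, b_in, mu=0.5, eps=1e-8):
    """One stage of a conventional gradient-adaptive lattice filter.

    f_in, b_in: forward/backward prediction errors from the previous
    stage. Returns the next-stage errors and the final reflection
    coefficient. The normalized update needs a division by the
    running power estimate at every sample."""
    k = 0.0          # reflection coefficient
    power = eps      # leaky power estimate of the stage inputs
    f_out = np.empty_like(f_in)
    b_out = np.empty_like(b_in)
    b_prev = 0.0     # backward error delayed by one sample
    for n in range(len(f_in)):
        # lattice recursions
        f = f_in[n] - k * b_prev
        b = b_prev - k * f_in[n]
        # running power of the inputs driving the update
        power = 0.99 * power + f_in[n] ** 2 + b_prev ** 2
        # normalized gradient update: one division per stage per sample
        k += mu * (f * b_prev + b * f_in[n]) / power
        f_out[n], b_out[n] = f, b
        b_prev = b_in[n]
    return f_out, b_out, k
```

For a first-order autoregressive input, the stage's reflection coefficient converges toward the lag-one correlation coefficient of the signal, which is what the cross-correlation interpretation in the abstract refers to.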
A speech signal captured by a distant microphone is generally smeared by reverberation. This severely degrades both the speech intelligibility and the Automatic Speech Recognition (ASR) performance. We have already proposed a novel dereverberation technique named “Harmonicity based dEReverBeration (HERB),” which utilizes an essential feature of speech, namely harmonics, and estimates an inverse filter for an unknown impulse response. If a large amount of acoustically stable training data is available, HERB is able to estimate an accurate inverse filter even in severely reverberant environments. In general, however, a dereverberation algorithm has to work with small amounts of training data, because the acoustic property of a real world environment changes according to various factors such as the speaker’s position and room temperature. In this paper, we propose a new dereverberation scheme based on HERB, aiming primarily at reducing the amount of training data needed to estimate an inverse filter. The proposed method re-estimates a more accurate source signal based on the output signal of conventional HERB, and re-calculates the inverse filter. We show experimentally that our new dereverberation scheme successfully achieves high quality dereverberation with much smaller amounts of training data, and is very effective at improving both audible quality and ASR performance, even in unknown severely reverberant environments.
In this paper, we propose a new adaptive microphone system characterized by a linear prediction circuit inserted before the noise reconstruction filter, corresponding to the adaptive delay used in conventional systems. The insertion of this circuit gives the adaptive microphone system several advantages. One is that a null can be steered toward a noise source even while a speech signal is arriving, whereas conventional systems must stop null steering during speech incidence. Another is that the new microphone system works as an omnidirectional microphone for the speech signal, so the microphones can be placed at arbitrary intervals. For example, one microphone can be built into a handset and another into a telephone base set, from which a higher noise reduction effect is expected. We also verify the performance of the proposed system using an experimental system.
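The null-steering idea that the system builds on can be illustrated with a generic two-microphone NLMS noise canceller: one microphone signal is adaptively filtered and subtracted from the other, which places a spatial null on the dominant coherent source. This sketch shows only that baseline mechanism, not the proposed linear prediction circuit; the filter length, step size, and function name are assumptions.

```python
import numpy as np

def nlms_null_steer(mic1, mic2, taps=16, mu=0.5, eps=1e-8):
    """Generic two-microphone adaptive noise canceller (NLMS).

    mic2 is filtered by an adaptive FIR filter and subtracted from
    mic1; as the filter converges, a null is steered toward the
    source coherent between the two microphones."""
    w = np.zeros(taps)
    out = np.zeros(len(mic1))
    for n in range(taps, len(mic1)):
        x = mic2[n - taps + 1:n + 1][::-1]  # most-recent-first tap vector
        e = mic1[n] - w @ x                 # null-steered output
        w += mu * e * x / (x @ x + eps)     # NLMS coefficient update
        out[n] = e
    return out, w
```

In a conventional system of this kind, the adaptation must be frozen while speech is present so the null is not steered onto the talker; the abstract's linear prediction circuit is what removes that restriction.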