A new model is constructed for predicting the acoustical properties of fibrous materials, namely the characteristic impedance Zc and the propagation constant γ, solely from their airflow resistivity. The Delany-Bazley and Miki models are well-known conventional prediction methods; their formulas have the same form and differ only in the values of the coefficients and exponents. In this research, the acoustical properties and airflow resistivity of 24 fibrous materials, 15 types of glass wool and 9 types of rock wool, were measured and compared with the values predicted by the conventional methods. It was found that introducing an expression involving the common logarithm improved the conventional models. The new model is more effective than the conventional ones, particularly for high-density fibrous materials, where f/σ < 0.01 m³/kg, and low-density ones, where f/σ > 0.1 m³/kg.
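The paper's logarithmic model itself is not reproduced in the abstract; as a reference point, the classical Delany-Bazley formula that it improves upon can be sketched as follows (the air density and sound speed values are assumptions):

```python
import math

def delany_bazley(f, sigma, rho0=1.205, c0=343.0):
    """Delany-Bazley empirical model: characteristic impedance Zc [Pa*s/m]
    and propagation constant gamma [1/m] of a fibrous absorber, predicted
    solely from its airflow resistivity sigma [N*s/m^4] at frequency f [Hz].
    Nominally valid for roughly 0.01 < rho0*f/sigma < 1."""
    X = rho0 * f / sigma  # dimensionless frequency/resistivity parameter
    Zc = rho0 * c0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    gamma = (2 * math.pi * f / c0) * (0.189 * X**-0.595
                                      + 1j * (1 + 0.0978 * X**-0.700))
    return Zc, gamma

# Example: glass wool with sigma = 10,000 N*s/m^4 at 1 kHz
Zc, gamma = delany_bazley(1000.0, 10000.0)
```

The Miki model uses the same functional form with different coefficients and exponents, which is the similarity the abstract refers to.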
In this paper, we propose a new approximation method for realizing a digital filter model of the lossy vocal tract, with a view to evaluating the effect of the interaction between the vocal tract and the subglottal system; the loss appears as a frequency-dependent factor in the vocal-tract analog. In our approximation, the propagation factor is realized as an analog filter composed of pole-zero pairs having different real parts but the same imaginary part, which is then transferred from the s domain to the z domain by the delta transform. The reflection coefficient is realized as a 511th-order finite impulse response (FIR) filter computed by the fast Fourier transform (FFT). It is known that a ripple component attributable to the interaction can be observed in the glottal source flow. Here we show that this ripple component is reduced and the duration of the glottal open phase is increased as effects of the frequency-dependent characteristics of the loss.
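The abstract does not detail the FFT computation used for the reflection-coefficient filter; a generic frequency-sampling design of a 511-tap FIR filter, with a hypothetical ideal low-pass magnitude response standing in for the actual reflection-coefficient response, might look like this:

```python
import numpy as np

def fir_from_magnitude(mag, n_taps=511):
    """Frequency-sampling FIR design: inverse-FFT a desired magnitude
    response (sampled from 0 Hz to Nyquist), rotate the resulting
    zero-phase impulse response to make it causal, truncate it to
    n_taps coefficients, and apply a Hamming window."""
    n_fft = 2 * (len(mag) - 1)              # mag covers 0..Nyquist inclusive
    h = np.fft.irfft(mag, n=n_fft)          # zero-phase impulse response
    h = np.roll(h, n_taps // 2)[:n_taps]    # centre the main lobe, truncate
    return h * np.hamming(n_taps)

# Hypothetical stand-in response: ideal low-pass over 513 frequency bins
mag = np.concatenate([np.ones(128), np.zeros(513 - 128)])
h = fir_from_magnitude(mag)
```

In practice the desired response would be the measured or modeled frequency-dependent reflection coefficient rather than this placeholder.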
Sound source localization and signal segregation using a small number of microphone elements are desired not only in multimedia products but also in daily-use products such as hearing aids. The frequency-domain binaural model can localize a sound source and segregate signals arriving from a specific direction using two input signals. In this paper, a method for localizing two sound sources in azimuth and elevation using interaural phase and level differences is proposed. The performance of this localization is examined by computer simulations with two concurrent speakers. In addition, the performance of the proposed method on the median plane is confirmed. As a result, the proposed method is found to localize two sound sources in azimuth and elevation simultaneously. When the segmental power ratio is 0 dB, the probability of localizing one sound source within a 10° error is 60–80%, and that for the other sound source is 40–70%.
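The abstract does not specify how the cues are extracted; a minimal sketch of computing per-frequency interaural phase and level differences from one frame of a two-channel input (the frame length and window are assumptions, not the paper's parameters) is:

```python
import numpy as np

def interaural_cues(left, right, fs, n_fft=512):
    """Compute per-frequency interaural phase difference (IPD, rad) and
    interaural level difference (ILD, dB) from one windowed frame of a
    two-channel signal."""
    w = np.hanning(n_fft)
    L = np.fft.rfft(left[:n_fft] * w)
    R = np.fft.rfft(right[:n_fft] * w)
    ipd = np.angle(L * np.conj(R))                                  # [rad]
    ild = 20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))  # [dB]
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    return freqs, ipd, ild
```

A localization method of this kind maps the (IPD, ILD) pair in each frequency bin to a candidate direction; the mapping itself depends on the head model and is not shown here.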
Virtual and augmented reality applications provide us with increasingly compelling multisensory worlds. Although spatial sound technologies are often used in such applications, headphone-based sound reproduction may result in an undesired “mediation awareness” for the end-user. An alternative is provided by bone-conducted sound technologies, traditionally used in hearing-aid applications. Recent studies with bilaterally fitted bone-conduction transducers suggest that binaural sound cues can be rendered with this technology. In this paper we used binaural bone-conducted sound reproduction to enhance a multimodal self-motion simulator prototype. Similar to previous results with headphone-based reproduction, the present study shows that adding moving sound images to visual stimuli significantly increases vection and spatial presence responses. These results provide empirical evidence that dynamic auditory scenes can be created using spatial bone-conducted sound with at least 45° horizontal resolution. The present research demonstrates the feasibility of using binaural bone-conducted sound in mediated environments.
The rating of tonal content is a prevalent issue not only in sound design but also in annoyance and noise problems. Two physical parameters influencing the evaluated tonal content are tone length and S/N ratio. To integrate these parameters into a reliable, objective measure of the perceived tonal content of interior car noise, the following experiments were conducted: realistic “howling,” i.e., tonal content in interior car noise, was presented with variations in tone length and S/N ratio. Thirteen stimuli were judged by 41 subjects in a complete pair comparison. It was shown that the subjective judgments can be modeled quite well by probabilistic choice models. A one-dimensional choice model, the BTL model, must be rejected in favor of the more general elimination-by-aspects model. The results are as follows: (1) The subjects’ decision aspects are identified by modeling the choice behavior: a non-howling aspect describes the sound evoking a sensation not described by the howling adjective, and a pitch-salience aspect indicates the equivalent S/N ratio of a tonal component. No subjective duration aspect was found. (2) The log-ratio scale obtained yields a constant factor for doubling the tone length in the interval from 250 ms to 2,000 ms at equal level, as well as a constant factor for each 3 dB increase. Furthermore, the conception of tonal content and its implications are introduced and discussed.
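For reference, the BTL (Bradley-Terry-Luce) model mentioned here assigns each stimulus a ratio-scale value v > 0 and predicts every pairwise choice from those values alone; a minimal sketch (the example scale values are illustrative, not the paper's data):

```python
def btl_probability(v_a, v_b):
    """Bradley-Terry-Luce choice model: probability that stimulus a is
    chosen over stimulus b in a pair comparison, given positive
    ratio-scale values v_a and v_b."""
    return v_a / (v_a + v_b)

# A stimulus with twice the scale value is chosen two times out of three:
p = btl_probability(2.0, 1.0)
```

The elimination-by-aspects model that the paper favors generalizes this by letting choices depend on distinct aspects (here, a non-howling aspect and a pitch-salience aspect) rather than on a single scale value per stimulus.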
A model of the flow passing through the glottis that assumes a boundary layer is presented. Fluid dynamic theory implies that a thin boundary layer formed in the vicinity of the glottal wall characterizes the flow behavior, including flow separation, jet formation, and the pressure loss across the channel. To analyze the boundary layer accurately, methods have been developed for solving the integral momentum relation on the basis of the similarity of the velocity profiles inside the layer, under the assumption that the core flow velocity is known. On the other hand, the development of the boundary layer reduces the effective size of the channel and increases the core flow velocity, giving rise to the problem of viscous-inviscid interaction. In this paper, this interactive boundary-layer problem is solved for the glottal flow, and the numerical results are compared with those of a conventional non-interactive model. In addition, the effects of the Reynolds number and the glottal configuration on the flow behavior are examined to validate the usefulness of the proposed flow analysis method.
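The interactive solution is beyond the scope of an abstract, but the non-interactive momentum-integral approach it is compared against can be illustrated with Thwaites' classical similarity-based approximation, which takes the core velocity U(x) as given (the flat-plate velocity distribution in the example is a placeholder, not a glottal geometry):

```python
import numpy as np

def thwaites_momentum_thickness(x, U, nu=1.5e-5):
    """Thwaites' approximate solution of the momentum-integral equation:
        theta^2(x) = 0.45 * nu / U(x)^6 * integral_0^x U^5 dx'
    Returns the momentum thickness theta and the pressure-gradient
    parameter lambda = theta^2 * dU/dx / nu; incipient separation is
    conventionally taken near lambda = -0.09."""
    # trapezoidal cumulative integral of U^5 along x
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (U[1:]**5 + U[:-1]**5) * np.diff(x))))
    theta2 = 0.45 * nu * integral / U**6
    dUdx = np.gradient(U, x)
    return np.sqrt(theta2), theta2 * dUdx / nu

# Placeholder core flow: constant 10 m/s over a 1 cm channel wall
x = np.linspace(0.0, 0.01, 101)
U = np.full(101, 10.0)
theta, lam = thwaites_momentum_thickness(x, U)
```

An interactive scheme would additionally feed the displacement effect of theta back into U(x) and iterate, which is the coupling the paper addresses.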
Consumer products currently available on the market, such as electric home appliances, often adopt auditory signals in their user interfaces. The signals are expected to improve the usability of such appliances for all users, including elderly people and visually impaired people. The Association for Electric Home Appliances (AEHA) established guidelines for auditory signals in 1999, which recommended temporal patterns of auditory signals for electric home appliances. The AEHA Guidelines were later adopted as a Japanese Industrial Standard (JIS S 0013:2002) without any major changes, but embracing a wider range of consumer products. For users, it is desirable that the specifications for auditory signals in the standard be widely accepted internationally and applied to every product. In this Technical Report, we translate and summarize the AEHA Report written in Japanese, on which the AEHA Guidelines and the JIS are based, in the hope that the JIS will become the foundation of an international standard on the auditory signals of consumer products.