This article reviews the development and application of Computational AeroAcoustics (CAA) to acoustic propagation on subsonic mean flows, with a particular focus on methods used to predict acoustic radiation from turbofan aeroengines. The governing equations are presented, and particular issues, such as the formulation of impedance and far-field boundary conditions, the treatment of Kelvin-Helmholtz instabilities, resolution requirements, and methods for controlling dispersion error, are discussed. The status of current CAA methods is reviewed. Finally, the matter of validation against benchmark problems and measured data is explored.
In this paper we examine how the precedence effect is affected by concurrent visual information about the source of a sound. In the experiment, speech, snare drum, and guitar sounds were used as auditory stimuli, and moving images of the speech utterances or musical performances, recorded synchronously with the sounds, were presented as visual stimuli. Differences in the lower limit (summing localization threshold, SLT) and the upper limit (echo threshold, ET) of the range over which the precedence effect operates were observed with and without the visual stimuli. The following conclusions were drawn: 1) The influence of a visual stimulus on SLT was significant for speech and the drum sound, with a tendency for SLT to be shortened, but for the guitar sound the influence was less significant. 2) The presentation of a visual stimulus along with an auditory stimulus significantly prolonged ET for the guitar sound, but no definite tendency was observed for speech and the drum sound. 3) SLT appears to be significantly influenced by the characteristics of the audiovisual stimulus, particularly by the dynamics of its movement. ET also changed significantly in the presence of a visual stimulus, but the change was not uniform; it appeared to depend on the object of attention, i.e., the auditory stimulus, the visual stimulus, or both.
We propose an automatic utterance type recognizer that distinguishes declarative questions from statements in Indonesian speech. Since utterances of these two types contain the same words in the same order and differ only in intonation, their classification requires not only a speech recognizer but also an intonation recognizer. In this paper, the part of the utterance most important for distinguishing the two types is first identified by perceptual experiments. Then, an utterance type recognizer using that part is proposed, in which polynomial expansion is used as the feature extractor and a neural network as the classifier. We evaluated our method on an Indonesian speech database containing 29 pairs of sentences of the two types, each uttered by 35 speakers. The final word and the final two syllables were found to be equally effective for discriminating each utterance. The proposed recognizer achieved its best accuracy of 89.1% when the order of the polynomial expansion was three and the neural network was a linear perceptron.
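As a rough illustration of the recognizer's two stages, the sketch below fits a polynomial to a time-normalized pitch contour of the utterance-final portion and classifies the coefficients with a plain linear perceptron. All names, parameter values, and the toy contours are illustrative assumptions, not the paper's data or implementation.

```python
import numpy as np

def poly_features(contour, order=3):
    """Fit a polynomial of the given order to a time-normalized F0 contour
    and return its coefficients as a fixed-length feature vector."""
    t = np.linspace(0.0, 1.0, len(contour))
    return np.polyfit(t, contour, order)  # order+1 coefficients, highest first

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Plain linear perceptron; labels y are in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified -> update
                w += lr * yi * xi
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Toy data: rising final contours (question-like, +1) vs. falling (statement-like, -1)
rng = np.random.default_rng(0)
rising = [np.linspace(120, 180, 20) + rng.normal(0, 2, 20) for _ in range(20)]
falling = [np.linspace(150, 110, 20) + rng.normal(0, 2, 20) for _ in range(20)]
X = np.array([poly_features(c) for c in rising + falling])
y = np.array([1] * 20 + [-1] * 20)

w = train_perceptron(X, y)
acc = float(np.mean(predict(w, X) == y))
```

On these clearly separable toy contours the perceptron reaches high training accuracy; the paper's reported 89.1% was obtained on real utterance-final speech features.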
A comparative study of longitudinal wave propagation in cancellous bone was performed. We simulated wave propagation with the finite-difference time-domain (FDTD) method using a three-dimensional X-ray computed tomography (CT) model of an actual cancellous bone. We also experimentally measured the waves that propagated in the identical specimen under similar conditions. The speeds of the fast waves and the amplitudes of the fast and slow waves at small regions of interest (ROIs) in the specimen were examined. We found good correlations between the simulated and measured data in fast wave speed and in wave amplitudes. The peak amplitude ratio of the two simulated waves was also similar to the experimental result. These results show the importance of FDTD simulation for understanding wave propagation phenomena in this complicated medium.
Herein, the potential of the FDTD method, which considers longitudinal and shear waves and two types of damping terms, is investigated as a new prediction method for architectural structure-borne sound. The equations of motion and Hooke's law for three-dimensional elastic bodies are directly discretized with a difference scheme based on the leap-frog algorithm. With these formulations, vibration-mode transformations at the junctions of building frames can be handled without special treatment. This method seems especially effective for providing both visible and audible predictions because time responses can be obtained directly. First, the formulations and the stability condition are discussed. Next, a parallel computing algorithm for efficiently performing the FDTD calculation is introduced. Finally, the predicted results are compared with the measured data in terms of energy decay and frequency characteristics. These comparisons confirm that the predicted results and the measured data are in good agreement when appropriate damping terms are employed.
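The leap-frog velocity-stress discretization described above can be sketched in one dimension. The following sketch uses illustrative material constants, a single resistive damping term rather than the paper's two, and 1D rather than 3D; it only shows the alternating update pattern and the Courant stability condition, not the paper's formulation.

```python
import numpy as np

# 1D velocity-stress FDTD on a staggered grid:
#   rho * dv/dt = d(sigma)/dx - R*v,   d(sigma)/dt = E * dv/dx
nx, nt = 200, 400
rho, E = 2500.0, 3.0e10          # density [kg/m^3], elastic modulus [Pa] (illustrative)
c = np.sqrt(E / rho)             # longitudinal wave speed
dx = 0.01                        # grid spacing [m]
dt = 0.5 * dx / c                # satisfies the 1D Courant condition dt <= dx/c
R = 1.0e4                        # simple resistive damping coefficient

v = np.zeros(nx)                 # particle velocities at integer grid points
s = np.zeros(nx + 1)             # stresses, staggered half a cell from v
v[nx // 2] = 1.0                 # initial velocity impulse at the centre

for _ in range(nt):
    # leap-frog: update v from the stress gradient, then s from the v gradient
    v += dt / rho * (s[1:] - s[:-1]) / dx - dt * R / rho * v
    s[1:-1] += dt * E * (v[1:] - v[:-1]) / dx

kinetic = 0.5 * rho * float(np.sum(v**2)) * dx  # kinetic energy after nt steps
```

Because the Courant condition is satisfied and the damping term only removes energy, the kinetic energy stays bounded by the initial value (here 0.5 * 2500 * 1.0 * 0.01 = 12.5 J per unit area).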
Muscle activations during speech production are important for understanding speech motor control. In this paper, we put forward a physiological articulatory model-based approach to estimating the muscle activations involved in producing five sustained Japanese vowels by minimizing the morphological difference between model simulations and target MRI observations, where the model is an improved version of Dang's partial 3D model. The initial muscle activations in the model simulation are set according to EMG measurements obtained during vowel production. The activation levels of the tongue muscles are then gradually adjusted, using an optimization approach, to reduce the difference between the simulations and the target MRI observations. The results show that the proposed method can provide more detail on the muscle activations than EMG alone. In addition, the results suggest that the Transversus and Verticalis muscles play important roles in manipulating the length of the tongue for vowel production, and that it is better to separate the Styloglossus into two control units, an anterior portion and a posterior portion, in vowel production.
The ATR MRI database of Japanese vowel production was used to evaluate the acoustic characteristics of the vocal tract for the five Japanese vowels through measurements of the frequency responses of solid vocal-tract models formed by a stereolithographic technique. The database includes speech sounds as well as volumetric magnetic resonance imaging (MRI) data, but the speech sounds were recorded separately from the acquisition of the MRI data; therefore, their spectra are not appropriate as references for the transfer functions of the vocal tract. A time-stretched pulse signal generated by a horn driver unit was introduced into each physical model at the lips, and the response signal of the model was recorded at the model's glottis. In the measurements, the glottis of each model was sealed with a plastic plate, and the response signal was measured through a small hole in the plate using a probe microphone. This method permits accurate measurement of the transfer functions of the vocal tract under a closed-glottis condition. The resulting transfer functions of the five Japanese vowels provide a benchmark for testing the numerical analysis methods that have been used to study vocal-tract acoustics, although the solid wall lowers the frequencies of the lower resonances.
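A time-stretched pulse (TSP) of the kind used as the excitation above is a flat-magnitude, quadratic-phase signal whose inverse filter compresses it back to an impulse, which is what makes it convenient for transfer-function measurement. A minimal sketch, with an illustrative length and stretch parameter rather than the paper's measurement settings:

```python
import numpy as np

N = 4096                          # signal length (power of two), illustrative
m = N // 4                        # stretch parameter controlling pulse duration
k = np.arange(N // 2 + 1)

# Flat magnitude, quadratic phase over the positive-frequency bins.
H = np.exp(-2j * np.pi * m * (k / (N / 2)) ** 2)
tsp = np.fft.irfft(H, N)          # real TSP waveform
inv = np.fft.irfft(1.0 / H, N)    # inverse filter (|H| = 1, so 1/H is its conjugate)

# Circular convolution of the TSP with its inverse filter recovers an impulse,
# so deconvolving a recorded response yields the system's impulse response.
imp = np.fft.irfft(np.fft.rfft(tsp) * np.fft.rfft(inv), N)
```

In a measurement, `tsp` is played through the driver, the microphone signal replaces `tsp` in the last line, and `imp` becomes the measured impulse response of the vocal-tract model.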
We develop a new blind source separation (BSS) microphone, named SSM-001, which can separate multiple sounds in real time under noisy conditions. The BSS microphone is based on our previously proposed BSS algorithm, in which single-input multiple-output (SIMO)-model-based BSS and SIMO-model-based binary masking are combined. We modify this algorithm and implement it in a digital signal processor for more effective and realistic real-time operation. In this paper, the real-time implementation in the BSS microphone is described in detail, and experimental evaluations of the hardware reveal the efficacy of the proposed BSS microphone.
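The abstract does not spell out the SIMO-model-based algorithm itself; as a toy illustration of the binary-masking stage alone (the array shapes and random stand-in spectrograms below are assumptions, not the authors' processing), each time-frequency bin is assigned to whichever separated output dominates there:

```python
import numpy as np

rng = np.random.default_rng(1)
F, T = 64, 100                     # frequency bins x time frames (illustrative)
Y1 = rng.rayleigh(1.0, (F, T))     # stand-in magnitude spectrograms of the
Y2 = rng.rayleigh(1.0, (F, T))     # two separated SIMO outputs

mask1 = (Y1 > Y2).astype(float)    # 1 where output 1 dominates the bin
mask2 = 1.0 - mask1                # complementary binary mask for output 2

S1 = mask1 * Y1                    # masked spectrograms: each bin is kept in
S2 = mask2 * Y2                    # exactly one of the two separated sources
```

Because the masks are binary and complementary, every time-frequency bin contributes to exactly one output, which is what suppresses the residual crosstalk left by the preceding BSS stage.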