In 1939, Pauline Davis reported the first study of event-related potentials (ERPs) in awake humans. ERPs are time-locked brain potentials elicited by cognitive, motor, or perceptual events. The events Davis used were sounds, and in the decades that followed her landmark study, ERP research contributed substantially to today's knowledge of auditory perception and neurophysiology. ERPs are well suited to studying neural responses to sound stimuli, since the researcher can monitor the brain's registration of sound edges and spectral changes on a millisecond-by-millisecond basis. In this overview we introduce the basic concepts of auditory ERP research, including typical ERP components, experimental paradigms, sound stimuli, research methodology, and ways to analyze data.
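The central step in ERP data analysis is time-locked averaging: EEG epochs aligned to stimulus onsets are baseline-corrected and averaged, so that activity unrelated to the event tends to cancel while the event-locked response remains. A minimal sketch in Python; the function name, parameters, and baseline-correction choice are illustrative assumptions, not taken from any particular toolbox:

```python
import numpy as np

def average_erp(eeg, event_samples, pre, post):
    """Estimate an ERP by time-locked averaging of single-channel EEG.

    eeg: 1-D array of samples from one channel.
    event_samples: sample indices of stimulus onsets.
    pre/post: number of samples to keep before/after each onset.
    """
    epochs = []
    for onset in event_samples:
        # keep only epochs that fit entirely inside the recording
        if onset - pre >= 0 and onset + post <= len(eeg):
            seg = eeg[onset - pre: onset + post].astype(float)
            # baseline-correct using the mean of the pre-stimulus interval
            seg = seg - seg[:pre].mean()
            epochs.append(seg)
    # average across epochs: non-time-locked activity cancels out
    return np.mean(epochs, axis=0)
```

Real toolboxes add artifact rejection and filtering before this step, but the averaging itself is as simple as shown.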
To investigate the effects of aging on loudness perception, equal-loudness-level contours were derived from loudness estimates for pure tones with various combinations of frequency and sound pressure level. The listeners were young and older adults with otologically normal hearing. Comparison of the contours between the listener groups revealed a large difference at frequencies above 1,000 Hz, indicating substantial effects of age and gender: older listeners were less sensitive to high-frequency tones than young listeners, and older males were even less sensitive to those tones than older females. Loudness recruitment was also observed for older listeners at high frequencies. These findings call into question the use of A-weighting, which is based on the hearing characteristics of young people, in noise evaluation when the assumed listeners are senior citizens. A method that takes the effects of aging into account should be developed to estimate more accurately the loudness of sounds as perceived by older listeners.
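The A-weighting questioned above applies a frequency-dependent gain modeled on the sensitivity of young normal-hearing listeners. A sketch of the standard analog approximation from IEC 61672 (the pole frequencies 20.6, 107.7, 737.9, and 12194 Hz and the +2.00 dB normalization are the published constants; the function name is our own):

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672 formula."""
    f2 = f * f
    num = (12194.0 ** 2) * f2 * f2
    den = ((f2 + 20.6 ** 2)
           * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
           * (f2 + 12194.0 ** 2))
    # +2.00 dB normalizes the curve to approximately 0 dB at 1 kHz
    return 20.0 * math.log10(num / den) + 2.00
```

The curve strongly attenuates low frequencies (about -19 dB at 100 Hz) while leaving the region around 1 kHz essentially unchanged, which is exactly why it can misrepresent loudness for listeners whose high-frequency sensitivity is reduced.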
The discrimination of an /r/–/l/ speech continuum presented in an isolated monosyllabic context has been studied by a number of researchers. In normal listening environments, however, people rarely hear syllables in isolation. In the present research, we investigated whether the characteristics of discrimination performance for a /ra/–/la/ continuum presented in isolation are also observed when the continuum is presented under nonisolated, more ordinary listening conditions. Two nonisolated conditions were employed: 1) the continuum was presented within a sentence, and 2) the continuum was preceded and followed by pure tones. The experiments revealed that discrimination performance under condition 2) was similar to that under the isolated condition, whereas performance under condition 1) differed from that under the other conditions. These results suggest that the characteristics of discrimination performance under nonisolated conditions are not necessarily identical to those under the isolated condition.
To better understand wave propagation and scattering phenomena in noncontact ultrasonic testing (UT), we have developed a time-domain simulation tool to model wave propagation in air and solids. The tool is based on the finite integration technique (FIT) and an image-based modeling approach. The image-based FIT can treat the calculation of acoustic, elastic, and coupled waves in a unified grid form. In this paper, we first investigate the accuracy of the interface condition in FIT modeling by comparing numerical results with analytical solutions for transmission and reflection coefficients. We then validate the FIT against measured signals from guided wave testing with air-coupled transducers. Simulating noncontact UT requires a fine mesh in the FIT, because the wavelength in air is much smaller than that in a solid. To run such fine-mesh computations at high speed, we introduce parallel computation using general-purpose computing on graphics processing units (GPUs). It is shown that the noncontact UT simulation can be executed within a reasonable time and with reasonable accuracy using multiple GPUs.
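On a uniform grid, the finite integration technique for the acoustic equations reduces to a staggered-grid leapfrog update of pressure and particle velocity, which is the kernel that must run on a fine mesh. A much-simplified 1-D sketch of that update (the air parameters, grid sizes, source, and rigid boundaries are illustrative assumptions, not the authors' configuration, and the real tool is 2-D/3-D with coupled elastic media):

```python
import numpy as np

def simulate_1d_acoustic(nx=400, nt=600, c=343.0, rho=1.2, dx=1e-3):
    """Staggered-grid leapfrog update for the 1-D acoustic equations.

    p: pressure at cell centers; v: particle velocity at cell faces.
    Ends are rigid walls (v fixed at zero there).
    """
    dt = 0.5 * dx / c                      # CFL number 0.5 -> stable time step
    kappa = rho * c * c                    # bulk modulus from density and sound speed
    p = np.zeros(nx)
    v = np.zeros(nx + 1)
    for n in range(nt):
        # update velocity from the pressure gradient ...
        v[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
        # ... then pressure from the velocity divergence (leapfrog in time)
        p -= (dt * kappa / dx) * (v[1:] - v[:-1])
        # soft Gaussian-pulse source injected at one cell
        p[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)
    return p
```

The per-step cost is a few vector operations per field, which is why the full 3-D fine-mesh version maps so well onto GPUs.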
In this study, we investigated the influence of the distribution of sound-pressure levels (SPLs) on the subjective loudness of environmental noise. A subjective-loudness evaluation test was performed using road-traffic noise with various SPL distributions, and skewness was used to characterize each distribution. Our results show that the presented sound was perceived as softer when the skewness was at its largest value, 0.8 (i.e., when low SPLs occurred frequently), even though the equivalent continuous A-weighted sound-pressure level (LAeq) was almost the same for all samples. In the other conditions (where low SPLs occurred infrequently), subjective loudness did not change significantly. These results indicate that subjective loudness decreases under a specific condition in which low SPLs occur frequently.
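The two quantities used to characterize the stimuli can be computed directly from a sequence of short-term SPL readings: LAeq is an energy average (levels are converted to intensity ratios before averaging), while skewness is the standardized third moment of the level distribution. A sketch, assuming equal-duration readings; the function names are our own:

```python
import math

def laeq(levels_db):
    """Equivalent continuous level: energy-average of equal-duration SPL readings (dB)."""
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

def skewness(levels_db):
    """Sample skewness (standardized third central moment) of the SPL distribution."""
    n = len(levels_db)
    mean = sum(levels_db) / n
    m2 = sum((l - mean) ** 2 for l in levels_db) / n
    m3 = sum((l - mean) ** 3 for l in levels_db) / n
    return m3 / (m2 ** 1.5)
```

Because LAeq is energy-based, it is dominated by the loudest moments; two signals with the same LAeq can therefore have very different level distributions, which is precisely the situation the experiment exploits: positive skewness means the distribution's mass sits at low SPLs with a tail of high-level events.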