The auditory brain stem response (ABR) and auditory steady state response (ASSR) are two objective audiometric assessments that are widely used in clinical practice, especially for pediatric patients with hearing impairments. Unfortunately, these assessment techniques can both take up to two hours to complete due to the paucity of recording data that can be collected for each stimulus (a click with the ABR and an amplitude-modulated (AM) or frequency-modulated (FM) tone with the ASSR). These stimuli result in a sequential response by the frequency areas of the cochlea, which requires repeated stimuli to obtain sufficient data. This extended assessment period presents a real challenge when the patient is a neonate or young child. This challenge has been addressed by the development of a new type of stimulus, a “chirp,” which produces a simultaneous response by all frequency areas of the cochlea. This simultaneous response results in enhanced neural synchronicity in cochlear nerve fibers and produces a higher amplitude of auditory responses that can shorten the ABR and ASSR test duration times.
In this article, we review the efficacy of this new objective audiometric assessment method using chirp stimuli, as well as its remaining challenges, in comparison with the conventional ABR and ASSR assessment methods.
Evaluation of the functions of a hearing aid is necessary for proper hearing aid fitting. In this paper, I introduce two tests used for hearing aid fitting: real-ear measurement and wide-band tympanometry. Real-ear measurement is useful for hearing aid fitting in pediatric patients and for open fitting, and concrete examples are presented in this paper. Wide-band tympanometry, a relatively new method for measuring middle ear function, is also useful for hearing aid fitting.
We investigated the gender categorization ability and the cues used to judge a speaker's gender by children with cochlear implants (CIs) and children with normal hearing (NH). Ten children with CIs and 34 children with NH were administered a gender categorization task, in which the fundamental frequency (F0) and formant frequency of a female voice were manipulated as the cues for gender discrimination.
The results showed that the children with NH could correctly categorize the gender of the unambiguous typical voice, but the more ambiguously the test voice was manipulated, the less accurate their gender recognition performance became. Children with CIs could partially categorize the gender of the unambiguous typical voice; however, they could not easily recognize the gender from ambiguously manipulated voices, and their performance was significantly worse than that of the children with NH.
The findings indicate that the children with NH relied on both F0 and formant frequency cues to judge a speaker's gender, while those with CIs relied mainly on F0 cues, with limited use of formant frequency cues.
These findings suggest that we must understand these audiological characteristics of children with CIs in order to provide appropriate support for them in educational settings.
There are many opportunities for exposure to radiation during diagnosis and treatment in clinical practice. Currently, it is not clear how hearing aids are affected by radiation exposure. Therefore, in this study, hearing aids were irradiated and we investigated the effects of radiation exposure on different parts of a hearing aid. The results revealed that radiation affects electret-type microphones. Based on this result, wearing hearing aids during radiological examination or treatment is not recommended.
We examined the effects of digital wireless hearing systems (DWHS) using speech-in-noise perception tests and two subjective evaluation methods: the Visual Analog Scale (VAS) and the Speech, Spatial and Qualities of Hearing Scale (SSQ). The subjects were seven hearing-impaired students (four-frequency pure-tone average, 78.9±18.6 dB HL; range, 52.5-107.5 dB HL) in inclusive settings. The results showed that speech-in-noise perception scores improved significantly, by 10%-20%, with the use of DWHS in high-noise conditions (SNR 0 dB to -5 dB). In addition, the VAS scores of hearing difficulty in classroom environments improved by 40%, suggesting that a reduction in the listening effort (SSQ) required for speech is an important background factor. These results suggest the effectiveness of using speech-in-noise perception tests and subjective evaluation methods for the evaluation of DWHS.
The present study aimed to examine the music recognition ability of individuals who are deaf or hard of hearing. Eight adolescents who were deaf or hard of hearing were given tasks of pitch discrimination and six-tone melody discrimination, with controlled tonality in the presented pieces. In the pitch discrimination task, 6 participants who were deaf or hard of hearing showed accuracy similar to that of people with normal hearing, while 2 had difficulty with the task. Hearing threshold level and music experience affected the accuracy of pitch discrimination. The 6 participants who showed high accuracy in the pitch discrimination task also participated in the melody discrimination task; however, only 2 showed accuracy above chance level. Moreover, one subject whose hearing threshold level at 250 Hz was relatively good showed higher accuracy for tonal than for atonal melodies. These results suggest that people who are deaf or hard of hearing have difficulty in melody recognition, not to mention comprehension of tonality, even if they are able to discriminate pitch patterns accurately. Hearing threshold levels in the low-frequency range might have an effect on melody recognition in people who are deaf or hard of hearing.