When an event occurs in the world around us, say a lion roars or a cat meows, information about the event flows to us in light waves and sound waves. Our eyes form a visual image of the event, our ears form an auditory image of the event, and the two are combined with any other sensory inputs to produce our initial experience of the event. The purpose of this paper is to describe: 1) the three natural categories of sounds: noises, transients, and tones; 2) a computer model designed to explain how the auditory system converts sounds into auditory images; 3) the characteristics of the auditory images of noises, transients, and tones; and 4) the role of auditory image construction in the initial segregation of tones and noises.
The probability distribution of the state variables of a stochastic system is often expressed in the generalized form of an infinite orthonormal series expansion. When this expansion is applied to actual random phenomena, however, infinitely many expansion terms cannot be employed, and only finite sampled level data are available for estimating the expansion coefficients. In this study, a new method for determining the optimal order of the expansion for the state probability distribution is proposed. Furthermore, the effectiveness of the proposed method is partly confirmed experimentally by applying it to data actually observed in a sound environment.
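As an illustration of the truncated-expansion idea, the sketch below fits a Gram-Charlier-type Hermite series to sampled data, estimating the expansion coefficients from the samples and cutting the series off at a finite order N. The skewed synthetic sample and the fixed order N = 4 are assumptions for this example; the paper's contribution is precisely how to choose that order optimally from the data.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

# Sampled "level" data (synthetic and skewed; a stand-in for observed data)
rng = np.random.default_rng(0)
x = rng.standard_normal(5000) + 0.3 * rng.standard_normal(5000) ** 2

# Standardize so the leading term of the series is the standard Gaussian
z = (x - x.mean()) / x.std()

# Estimate expansion coefficients c_n = E[He_n(z)] / n! from the samples
N = 4  # finite truncation order, fixed by hand for this sketch
coeffs = []
for n in range(N + 1):
    basis = np.zeros(n + 1)
    basis[n] = 1.0  # selects the n-th probabilists' Hermite polynomial
    coeffs.append(hermeval(z, basis).mean() / factorial(n))

# Reconstruct the truncated density estimate on a grid
grid = np.linspace(-4.0, 4.0, 200)
gauss = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)
pdf_est = gauss * hermeval(grid, coeffs)
```

Because the data are standardized, the first coefficient is exactly 1 and the first- and second-order coefficients vanish, so the truncation order effectively controls how much skewness and higher-order structure the estimate captures.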
This paper describes an approach to voice characteristics conversion for an HMM-based text-to-speech synthesis system using speaker interpolation. Although most text-to-speech synthesis systems that synthesize speech by concatenating speech units can produce speech of acceptable quality, they still cannot synthesize speech with varied voice qualities such as speaker individualities and emotions; to control speaker individualities and emotions, they therefore need a large database that records speech units with various voice characteristics for the synthesis phase. Our system, in contrast, synthesizes speech with an untrained speaker's voice quality by interpolating HMM parameters among the HMM sets of several representative speakers. Accordingly, it can synthesize speech with various voice qualities without a large database in the synthesis phase. The HMM interpolation technique is derived from a probabilistic similarity measure for HMMs. The results of subjective experiments show that the voice quality of synthesized speech can be changed gradually from one speaker's to another's by changing the interpolation ratio.
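A minimal sketch of the interpolation idea: per-state Gaussian output parameters from two speakers' models are combined according to an interpolation ratio. The toy parameter values and the simple weighted combination below are assumptions for illustration only; the paper derives its interpolation from a probabilistic similarity measure between HMMs rather than from a plain linear blend.

```python
import numpy as np

# Toy per-state Gaussian output parameters for two representative speakers
# (values are made up for illustration)
speaker_a = {"mean": np.array([1.0, 2.0, 3.0]), "var": np.array([0.5, 0.4, 0.6])}
speaker_b = {"mean": np.array([2.0, 1.0, 4.0]), "var": np.array([0.3, 0.6, 0.2])}

def interpolate_hmm(models, weights):
    """Combine per-state Gaussian parameters with normalized weights.

    Simplified stand-in for the paper's interpolation, which is derived
    from a probabilistic similarity measure between HMMs.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # interpolation ratios sum to one
    mean = sum(wi * m["mean"] for wi, m in zip(w, models))
    var = sum(wi * m["var"] for wi, m in zip(w, models))
    return {"mean": mean, "var": var}

# Interpolation ratio 0.7 : 0.3 between the two speakers' models;
# sliding the ratio gradually shifts the resulting voice characteristics
mixed = interpolate_hmm([speaker_a, speaker_b], [0.7, 0.3])
```

Sweeping the ratio from 1:0 to 0:1 moves the synthetic voice continuously from one speaker's characteristics to the other's, which is the effect the subjective experiments evaluate.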
In this paper we address the problem of resynthesizing the stimulus signal from the adapted auditory neural firing pattern. The major issues discussed are a new method of nonlinearity inversion and the effects of the stimulus signal's properties on the forward and reverse outputs. A simple and efficient inner hair cell (IHC) inversion method based on the Meddis IHC simulation has been constructed. With this inversion method, and in response to tone bursts of increasing frequencies and intensities, it was possible to reverse-process the nonlinearity of the auditory system and regenerate an estimate of the stimulus signal. The estimated signal showed good recovery of information such as amplitude, frequency, and phase, even at frequencies above 3 kHz. At intensities higher than 65 dB, however, amplitude recovery was not satisfactory. We conclude that, in reverse auditory simulations, our inversion method recovers important information pertaining to the identity of the original stimulus signal and could be employed as an IHC output monitoring or evaluation system.
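The forward/inverse principle can be illustrated with a generic saturating nonlinearity and its closed-form inverse. The curve y = x/(x + K) and the constant K below are illustrative assumptions, not the Meddis IHC model or the paper's actual inversion method; they only show why recovery degrades near saturation (here, at high intensities).

```python
import numpy as np

# Illustrative saturating input-output stage (NOT the Meddis IHC model):
# a compressive curve that flattens at high input levels, plus its inverse.
K = 0.1  # assumed half-saturation constant

def forward(x):
    """Compressive nonlinearity: output approaches 1 as input grows."""
    return x / (x + K)

def inverse(y):
    """Closed-form inverse of the forward stage (valid for 0 <= y < 1)."""
    return K * y / (1.0 - y)

# Positive drive (e.g., a rectified tone-burst envelope), processed forward
# through the nonlinearity and then reverse-processed to recover the input
x = np.linspace(0.01, 1.0, 50)
recovered = inverse(forward(x))
```

Near saturation the forward curve is nearly flat, so small output errors map to large input errors under inversion, which parallels the paper's observation that amplitude recovery degrades at high intensities.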