Abstract
In the auditory cortex, auditory-evoked onset activities have been extensively investigated as a cortical representation of sound information and of the emotional information associated with the sound. To date, however, less attention has been paid to the steady-state activities that follow these onset activities. In this study, we used machine learning to investigate whether steady-state neural activities in the presence of continuous sounds represent the sound frequency and the emotion associated with the sound. A microelectrode array with a grid of 96 sites recorded local field potentials (LFPs) in layer 4 of the auditory cortex of anesthetized rats, and Sparse Logistic Regression (SLR) decoded the sound frequency from the steady-state LFP activities. Band-specific powers at every recording site and phase-locking values between every pair of recording sites served as the input vectors for SLR. We also investigated how the decoding accuracy depended on either aversive or appetitive auditory classical conditioning. We found that SLR was able to decode the sound frequency from steady-state neural activities as well as from onset activities, and that the decoding accuracy in some particular bands significantly improved or worsened in a conditioning-dependent manner. These results demonstrate that steady-state neural activities contain rich information not only about test sounds but also about the emotions associated with those sounds.
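To make the feature-extraction-and-decoding pipeline concrete, the following is a minimal illustrative sketch, not the authors' implementation: it simulates multi-site "LFP" trials, computes band-specific powers per site and phase-locking values (PLVs) per site pair, and decodes a binary class with an L1-penalized logistic regression (used here as a stand-in for the Bayesian SLR of the paper). All sizes, frequencies, and the simulated signal are hypothetical (the actual study used a 96-site array).

```python
import numpy as np
from scipy.signal import welch, hilbert, butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 1000                                # sampling rate (Hz), hypothetical
n_trials, n_sites, n_samples = 60, 8, 1000  # toy sizes; paper used 96 sites

# Simulated steady-state LFP: class 0 trials carry a shared 40 Hz component
# on half the sites, giving both a band-power and a PLV difference.
X_raw = rng.standard_normal((n_trials, n_sites, n_samples))
labels = np.repeat([0, 1], n_trials // 2)
t = np.arange(n_samples) / fs
for i in np.flatnonzero(labels == 0):
    X_raw[i, :4] += 0.8 * np.sin(2 * np.pi * 40 * t)

def band_power(x, lo, hi):
    """Mean spectral power per site within [lo, hi] Hz (x: sites x samples)."""
    f, p = welch(x, fs=fs, nperseg=256)
    return p[:, (f >= lo) & (f <= hi)].mean(axis=1)

def plv(x, lo, hi):
    """Phase-locking value for every pair of sites in the [lo, hi] Hz band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    vals = []
    for i in range(x.shape[0]):
        for j in range(i + 1, x.shape[0]):
            vals.append(np.abs(np.mean(np.exp(1j * (phase[i] - phase[j])))))
    return np.array(vals)

# One feature vector per trial: band powers (per site) + PLVs (per pair),
# here for a single 30-50 Hz band; the study used multiple bands.
feats = np.array([
    np.concatenate([band_power(trial, 30, 50), plv(trial, 30, 50)])
    for trial in X_raw
])

# L1 penalty induces sparsity over features, loosely mimicking SLR's
# automatic relevance determination.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
acc = cross_val_score(clf, feats, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```

Note that sklearn's L1 logistic regression is only an approximation of SLR, which obtains sparsity through a hierarchical Bayesian prior rather than a fixed penalty; the feature layout (powers at every site plus PLVs between every site pair, per frequency band) is the part that mirrors the abstract.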