In this paper, we propose a method to improve the recognition performance of multiparty conversations by using both close-talking and throat microphones. A throat microphone is robust against external noise, but its characteristics differ greatly from those of conventional acoustic microphones such as close-talking microphones, so it is generally unsuitable for speech recognition because of the acoustic mismatch between them. We map the spectra of the throat microphone to those of the close-talking microphone and estimate clean speech by controlling the weights given to the speech from each microphone. The proposed method improved performance on a large-vocabulary continuous speech recognition task, both for data with manually superimposed utterances and for group-discussion data.
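The weighted combination described above can be sketched as follows. This is a minimal illustration only: the linear mapping matrix and the per-frame weights are assumptions for concreteness, and the paper's actual spectral-mapping and weight-estimation procedures are not specified here.

```python
import numpy as np

def combine_spectra(close_spec, throat_spec, mapping, weights):
    """Estimate a clean magnitude spectrogram from two channels.

    close_spec, throat_spec : (frames, bins) magnitude spectrograms
    mapping : (bins, bins) matrix mapping throat spectra toward the
              close-talk domain (assumed linear for this sketch)
    weights : per-frame weights in [0, 1] for the close-talk channel
    """
    # Map the throat-microphone spectra into the close-talk domain.
    mapped = throat_spec @ mapping
    # Broadcast the per-frame weight over all frequency bins and mix.
    w = np.asarray(weights, dtype=float).reshape(-1, 1)
    return w * close_spec + (1.0 - w) * mapped

# Toy usage: 10 frames, 4 frequency bins, identity mapping, equal weights.
rng = np.random.default_rng(0)
close = np.abs(rng.standard_normal((10, 4)))
throat = np.abs(rng.standard_normal((10, 4)))
estimate = combine_spectra(close, throat, np.eye(4), np.full(10, 0.5))
```

With a weight of 1 the estimate falls back to the close-talk channel alone; with a weight of 0 it uses only the mapped throat-microphone spectrum, so the weight trades off noise robustness against spectral fidelity.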