The following findings emerged from a study of Japanese speech-sound perception in seven patients with the Nucleus multichannel cochlear implant. For nonsense monosyllables, percent recognition with the cochlear implant alone or with lipreading alone did not correlate with that of the cochlear implant plus lipreading. For words and sentences, however, percent recognition with the cochlear implant alone correlated highly with that of the cochlear implant plus lipreading. Thus, the auditory and visual signals provided mutually complementary information for monosyllable recognition, whereas the auditory signal provided the major information for word and sentence recognition. The role of the visual signal was not merely supplemental; it contributed more actively to semantic identification in Japanese word and sentence recognition.