Infants can learn their native language. What kind of learning mechanism operates within them? Elucidating the mechanisms of language acquisition is an important issue for understanding the brain. Recently, there have been some intriguing reports on language acquisition models that link phenomena with mechanisms. In this paper, we introduce some of them.
Two models are presented for generating a representation of words from input phoneme sequences. They use an unsupervised learning algorithm that compares the input with its internal representation and generates a new representation for each subsequence. Simulations using child-directed utterances from the CHILDES database as training stimuli showed that the models perform lexical segmentation better than a simple recurrent network (SRN) and have fairly good generalization ability.
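The text does not spell out the segmentation algorithm, so as a hedged illustration of how unsupervised lexical segmentation from phoneme sequences can work in principle, the sketch below implements a classic transitional-probability baseline (positing word boundaries at local minima of phoneme-to-phoneme transitional probability) rather than the models described above. The toy corpus and all function names are hypothetical.

```python
from collections import defaultdict

def transition_probs(utterances):
    """Estimate P(next phoneme | current phoneme) from a list of
    phoneme strings (one character per phoneme in this toy example)."""
    bigram = defaultdict(int)
    unigram = defaultdict(int)
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            bigram[(a, b)] += 1
            unigram[a] += 1
    return {pair: c / unigram[pair[0]] for pair, c in bigram.items()}

def segment(utt, tp):
    """Posit a word boundary wherever the transitional probability
    dips to a local minimum, a standard cue for lexical segmentation."""
    if len(utt) < 3:
        return [utt]
    probs = [tp.get((a, b), 0.0) for a, b in zip(utt, utt[1:])]
    words, start = [], 0
    for i in range(1, len(probs) - 1):
        if probs[i] < probs[i - 1] and probs[i] < probs[i + 1]:
            words.append(utt[start:i + 1])  # close the word before the dip
            start = i + 1
    words.append(utt[start:])
    return words

# Toy corpus built from the "words" badi, ku, go; within-word
# transitions are deterministic, cross-word transitions are not.
corpus = ["badiku", "badigo", "kubadi", "gobadi"]
tp = transition_probs(corpus)
print(segment("badiku", tp))  # → ['badi', 'ku']
```

This baseline captures only local bigram statistics; the models discussed in the paper build explicit internal representations of subsequences, which is what gives them their reported advantage over an SRN.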