In the field of language development, one interesting issue is how Japanese-speaking children acquire the case markers that signal a sentence's structure, because previous studies have reported that caregivers often omit them when talking to their children (e.g., Rispoli, 1991). Although grasping the characteristics of parental input on case markers is crucial for understanding how children acquire them, studies to date have provided insufficient data to clarify the qualitative and developmental characteristics of case-marker input, owing to small sample sizes or a limited target age. This study used a larger sample of mothers (N = 52) of children aged 1 to 3 years and measured their speech to their children with a structured production-elicitation task. Our results revealed that Japanese-speaking mothers tended to omit case markers more frequently when speaking to children than to adults. The omission rate also differed depending on the child's age, the type of case marker, verb transitivity, and maternal views about speech to children. Additionally, the mothers tended to omit arguments more frequently when speaking to children, suggesting that Japanese-speaking children have fewer opportunities to hear case markers because of sentence simplification. These findings have important implications for investigating the relationship between parental language input and child language development.
In Japanese, some homophones can be distinguished by their lexical pitch-accent patterns. When and how do Japanese children start using pitch-accent information as a cue to lexical distinction? In this research, we taught children two novel labels as names for two different objects. One label was a novel homophone of a familiar word, differing from it only in its accentual pattern; the other was a novel non-homophone of a familiar word. The children's learning of these two labels was tested with a picture-fixation task and an object-choice task. The two-year-old children learned the novel non-homophone but failed to learn the novel homophone (Experiment 1). In contrast, three- to five-year-old children succeeded in learning both labels, and their performance improved with age (Experiment 2). These results suggest that Japanese children gradually develop the ability to use pitch-accent information as a cue to lexical distinction throughout childhood. The findings are discussed in terms of how Japanese children attend to pitch information when learning words.
Onomatopoeias are frequently used in everyday Japanese conversation and form part of children's early vocabularies. Previous studies have revealed that the phonetic structure and rhythm of onomatopoeia promote word memory and production, and that sound symbolism functions as a cue for inferring word meaning. This study examined whether acoustic features of speech are another factor that facilitates onomatopoeic word learning in children. The focus of the study was on voiced/unvoiced consonant contrasts related to the size of the referent (e.g., dondon for large objects / tonton for small objects). First, we analyzed mothers' speech while they read a picture book containing onomatopoeic pairs contrasted by word-initial voiced/unvoiced consonants. Mothers read onomatopoeias referring to small objects with a higher fundamental frequency (f0) and lower amplitude than those referring to large objects. Then, three-year-old children's understanding of the onomatopoeic pairs was examined under three conditions: 1) original (acoustic features nearly identical between the onomatopoeias for small and large objects), 2) high 50 (the f0 of the onomatopoeias for small objects was 50 Hz higher than that for large objects), and 3) high 100 (as in the high 50 condition, but with a 100 Hz difference). The results indicated that f0 is a possible cue for inferring the meaning of onomatopoeias related to object size, and that acoustic features of speech may facilitate children's learning of onomatopoeias.
In an utterance, paralinguistic information sometimes conveys a speaker's affect that differs from what the lexical content indicates. In such cases, adults rely on paralinguistic information more heavily than on lexical content to judge the speaker's affect. However, young children often show a lexical bias (Friend & Bryant, 2000): they rely on lexical content rather than paralinguistic information. Why do young children show this bias, even though infants are highly sensitive to speaker affect conveyed by emotional prosody? We reviewed the literature and identified two factors that may contribute to the appearance of this bias in young children. First, once children become capable of understanding speech, they rely more on lexical content than on emotional prosody, because their ability to infer speaker affect from emotional prosody is still not as developed as adults'. Second, owing to their immature ability to shift attention, young children have difficulty moving their focus from lexical content to emotional prosody when they encounter utterances whose lexical content indicates an affect different from the one inferred from the emotional prosody. We also suggest that future research should explore cultural influences on the appearance and disappearance of lexical bias, as well as investigate the relationship between infants' implicit sensitivity to, and children's and adults' explicit understanding of, speaker affect conveyed through speech.
Language acquisition is a process of symbol grounding, that is, the construction of symbol systems adapted to the environment. Environmental adaptation defines the values that cognitive agents pursue, primarily by means of hypothesis-test cycles encompassing both the inside and the outside of their bodies. In addition to these directly grounded cycles, there are also hypothesis-test cycles within cognitive agents. Cognitive processes are combinations of these cycles, where the cycles embody typical cognitive phenomena such as navigation and language use. Cycles are essentially countable, so that systems comprising cycles necessarily have discrete structures. A cognitive agent is hence formulated as a discrete system consisting of cycles, including both directly grounded cycles and symbols (indirectly grounded cycles), where each cycle embodies some value or meaning directly or indirectly associated with environmental adaptation. Computational models of cognition as combinations of such cycles (values = meanings) are far more efficient (simpler and less prone to overdesign) than traditional models stipulating possibly non-cyclic information flows. Environmental-adaptation cycles operate at multiple spatiotemporal scales, including real-time adaptive behavior, medium-term learning, and evolution across generations. It is vitally important to address real-time adaptive behavior in terms of cycles, which will raise the efficiency of the computational model not only at the level of real-time adaptation but also, accordingly, at higher levels. Cycle-based (meaning-based) computational models are also necessary because cycles give rise to meta-level constraints such as the symmetry bias and the naming insight, which are indispensable for abductive reasoning and language acquisition. Existing technologies, including deep learning, fail to reflect such a value-based (meaning-based) architecture of cognition. To achieve thorough symbol grounding, novel approaches are needed that integrate environmental-adaptation cycles into the entire computational model at multiple levels of meaning and value.