2024 Volume 31 Issue 2 Pages 433-455
The success of neural language models (LMs) has drawn considerable attention to their language acquisition. This work focuses on the second language (L2) acquisition of LMs, whereas previous studies have typically explored their first language (L1) acquisition. Specifically, we trained bilingual LMs in a scenario similar to human L2 acquisition and analyzed their cross-lingual transfer from linguistic perspectives. Our exploratory experiments demonstrated that L1 pre-training accelerated linguistic generalization in L2, and that language transfer configurations (for example, the choice of L1 and the presence of parallel texts) substantially affected that generalization. These findings clarify in which respects the L2 acquisition of LMs is (non-)humanlike.
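The staged training scenario described above (L1 pre-training followed by L2 exposure) can be illustrated with a minimal sketch; this is not the authors' implementation, and TinyLM, train_on, and the random placeholder corpora are all hypothetical stand-ins for the bilingual LMs and the real L1/L2 text used in the paper.

```python
import torch
import torch.nn as nn

# Toy shared vocabulary; a real setup would use a joint subword
# vocabulary built over the L1 and L2 corpora.
VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 100, 32, 64

class TinyLM(nn.Module):
    """A minimal recurrent LM standing in for the bilingual LMs studied."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)

def train_on(model, corpus, epochs, lr=1e-3):
    """Next-token prediction training on token-ID sequences of shape [B, T]."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        logits = model(corpus[:, :-1])
        loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), corpus[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Random token IDs as placeholders for real monolingual text.
l1_corpus = torch.randint(0, VOCAB_SIZE, (16, 33))  # "first language" data
l2_corpus = torch.randint(0, VOCAB_SIZE, (16, 33))  # "second language" data

model = TinyLM()
train_on(model, l1_corpus, epochs=50)  # stage 1: L1 pre-training
train_on(model, l2_corpus, epochs=10)  # stage 2: L2 exposure (continued training)
```

A variant of stage 2 could interleave L1-L2 parallel sentences into the L2 corpus, corresponding to the "presence of parallel texts" configuration mentioned in the abstract.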