Cognitive diagnostic models (CDMs) have been attracting attention as a method for diagnosing the learning subgoals mastered by examinees. In particular, CDMs for multiple-choice assessments have been developed to elicit more information about examinees when they choose an incorrect option. However, these CDMs require test developers to specify a Q-matrix for each option of every item. Owing to the difficulty of developing such multiple-choice tests, as well as the unavailability of a public dataset, existing studies have often been limited to simulations and have not examined applications to real data. Hence, knowledge about the empirical performance of CDMs for multiple-choice items is limited. To address this problem, this study constructed a multiple-choice English assessment with an option-level Q-matrix and evaluated the performance of a CDM for multiple-choice items. The results imply that, although the model passed the predictive check, it could not fully capture the examinees’ response processes for multiple-choice items, indicating the need for further model development.
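To make the option-level Q-matrix requirement concrete, the following is a minimal illustrative sketch in Python. The two items, three attributes, and attribute codings are hypothetical, and the deterministic choice rule (an MC-DINA-style "most demanding satisfied option" rule) is an assumption for illustration; it is not the assessment or model used in this study.

```python
# Minimal sketch of an option-level Q-matrix (all items, options, and
# attribute codings below are hypothetical, for illustration only).
import numpy as np

# Each (item, option) pair gets its own attribute vector: a 1 in column k
# means the option is keyed to examinees who have mastered attribute k.
q_matrix = {
    ("item1", "A"): np.array([1, 1, 0]),  # key: requires attributes 1 and 2
    ("item1", "B"): np.array([1, 0, 0]),  # distractor: attracts mastery of 1 only
    ("item1", "C"): np.array([0, 0, 0]),  # distractor: uninformative
    ("item2", "A"): np.array([0, 1, 1]),  # key: requires attributes 2 and 3
    ("item2", "B"): np.array([0, 0, 1]),  # distractor: attracts mastery of 3 only
    ("item2", "C"): np.array([0, 0, 0]),  # distractor: uninformative
}

def expected_option(item: str, profile: np.ndarray) -> str:
    """Deterministic core of an MC-DINA-style rule: among the options whose
    required attributes the examinee has fully mastered, pick the most
    demanding one (real CDMs add slip/guess-type noise on top of this)."""
    satisfied = [
        (option, int(q.sum()))
        for (itm, option), q in q_matrix.items()
        if itm == item and np.all(profile >= q)
    ]
    return max(satisfied, key=lambda pair: pair[1])[0]

# An examinee who has mastered attributes 1 and 2 but not 3:
profile = np.array([1, 1, 0])
print(expected_option("item1", profile))  # -> "A" (satisfies the key)
print(expected_option("item2", profile))  # -> "C" (cannot satisfy A or B)
```

Under such a coding, an incorrect choice is itself diagnostic: selecting B on item1 suggests mastery of attribute 1 but not attribute 2, which is the additional information that option-level CDMs are designed to extract.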