Japanese Journal for Research on Testing
Online ISSN : 2433-7447
Print ISSN : 1880-9618
Volume 1, Issue 1
  • Yoshikazu Sato, Eiji Muraki
    2005 Volume 1 Issue 1 Pages 59-66
    Published: 2005
    Released on J-STAGE: June 17, 2022
    JOURNAL FREE ACCESS

    This paper uses the delta method to formulate how the standard errors of the item difficulty parameter estimates in the Rasch model propagate to the maximum likelihood estimates of the ability parameter. The delta method is a commonly used statistical technique for deriving approximate standard error expressions. The formulation reveals that the standard errors of the item parameters propagate as errors in the standard errors of the ability parameter. It also shows that the error in the standard error of the ability parameter can be expressed as a function of the examinee's correct/incorrect response probabilities for the items and the standard errors of the item parameter estimates. In a simulation study, the errors included in the standard errors of the ability parameter amount to a few percent under conditions of n = 25, 50, and 75 items and N = 200, 400, and 600 examinees. The results also suggest that computerized adaptive testing may have an advantage over paper-and-pencil testing in terms of the errors in the standard errors of the ability parameter estimates.
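    The kind of propagation the abstract describes can be sketched numerically. The snippet below is a minimal illustration, not the paper's derivation: it computes the usual standard error of the Rasch ability MLE from the test information, then applies the delta method, via the implicit-function derivative of the ability estimate with respect to each item difficulty, to add the variance contributed by the item-difficulty standard errors. All item values are invented for illustration.

    ```python
    import numpy as np

    def rasch_p(theta, b):
        """Rasch model probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def ability_se_with_item_error(theta, b, se_b):
        """Delta-method sketch: fold item-difficulty SEs into the SE of theta-hat.

        From the estimating equation sum(x_i - P_i) = 0, the implicit-function
        theorem gives d(theta-hat)/d(b_i) = P_i(1-P_i) / sum_j P_j(1-P_j).
        """
        p = rasch_p(theta, b)
        info = np.sum(p * (1.0 - p))        # test information at theta
        se_theta = 1.0 / np.sqrt(info)      # usual SE, item parameters treated as known
        dtheta_db = p * (1.0 - p) / info    # sensitivity to each difficulty estimate
        extra_var = np.sum((dtheta_db * se_b) ** 2)
        se_theta_adj = np.sqrt(se_theta ** 2 + extra_var)
        return se_theta, se_theta_adj

    # 25 hypothetical items with difficulties spread over [-2, 2] and a
    # uniform (made-up) difficulty SE of 0.15.
    b = np.linspace(-2.0, 2.0, 25)
    se_b = np.full_like(b, 0.15)
    se0, se1 = ability_se_with_item_error(0.0, b, se_b)
    ```

    The adjusted SE is only slightly larger than the naive one here, consistent with the abstract's finding that the induced errors are on the order of a few percent for tests of this length.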

  • Noriaki Sasaki, Eiji Muraki
    2005 Volume 1 Issue 1 Pages 93-102
    Published: 2005
    Released on J-STAGE: June 17, 2022
    JOURNAL FREE ACCESS

    In this paper, we investigated what kind of cognitive framework examiners hold when performance assessment is carried out in school education. A cognitive framework was defined as the points of view on performance that examiners hold. We sampled 139 persons (68 teachers and 71 students) and used items on performance in school education, called “working style”. Respondents rated their degree of recognition for 18 performances. We regarded each factor as a component of the cognitive framework. Factor analysis extracted three factors, named “execution”, “activity”, and “insistence”. Furthermore, we compared the scale scores of teachers and students. Students' scores were significantly higher than teachers' on “execution”, teachers' scores were significantly higher than students' on “activity”, and there was no significant difference between the two groups on “insistence”.
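    The two analysis steps the abstract describes, factor extraction followed by a group comparison on scale scores, can be sketched as below. This is a rough numerical companion on simulated data: the ratings, the group split, the eigenvalue-greater-than-one retention rule, and Welch's t statistic are all illustrative stand-ins, not the authors' actual data or test.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ratings: 139 respondents x 18 performance items
    # (simulated values, not the study's data).
    X = rng.normal(size=(139, 18))
    group = np.array([0] * 68 + [1] * 71)   # 0 = teachers, 1 = students

    # Step 1: factor extraction. Eigenvalues of the item correlation matrix;
    # the "eigenvalue > 1" rule is one common way to pick the factor count.
    R = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)[::-1]   # sorted descending
    n_factors = int(np.sum(eigvals > 1.0))

    # Step 2: group comparison on a scale score. Here the mean of the first
    # six items stands in for one extracted factor's scale score.
    scale = X[:, :6].mean(axis=1)
    a, b = scale[group == 0], scale[group == 1]
    t = (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    )
    ```

    With real rating data one would also rotate the retained factors and inspect loadings before naming them, as the authors did.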

  • Teruhisa Uchida, Naoko Nakaune, Kojiro Shojima
    2005 Volume 1 Issue 1 Pages 117-127
    Published: 2005
    Released on J-STAGE: June 17, 2022
    JOURNAL FREE ACCESS

    The study implemented experimental English listening comprehension testing conditions with low-level noise. Two noise sources, environmental sound and Japanese speech, were mixed over the spoken scripts of the test at levels of -12 dB(A) and -6 dB(A). Participants were 569 college freshmen, who took the same test under noise conditions that differed in source and level. The results indicated that scores declined under the Japanese speech noise condition, and that students reported more interference from the Japanese speech than from the environmental noise. Further, item response theory was used to adjust the scores in order to compensate for the effects of the noise, and the attempt was successful. Items presented without noise, which every test-taker had answered, were used as anchor items to estimate θs.
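    The anchor-item θ estimation mentioned at the end can be sketched as a Rasch maximum-likelihood step: given difficulties calibrated for the noise-free (anchor) items, each examinee's θ is estimated from the responses to those items alone. Below is a minimal Newton-Raphson version with made-up difficulties and response patterns; it illustrates the idea only and is not the authors' procedure (their model and estimation details are not given in the abstract).

    ```python
    import numpy as np

    def rasch_p(theta, b):
        """Rasch model probability of a correct response."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def mle_theta(responses, b, iters=25):
        """Newton-Raphson MLE of theta given fixed item difficulties b.

        Note: a perfect or zero score has no finite MLE; real scoring
        software bounds or adjusts theta in those cases.
        """
        theta = 0.0
        for _ in range(iters):
            p = rasch_p(theta, b)
            score = np.sum(responses - p)       # gradient of the log-likelihood
            info = np.sum(p * (1.0 - p))        # observed = expected info (Rasch)
            theta += score / info
        return theta

    # Hypothetical anchor items calibrated without noise, and one examinee's
    # responses to them (1 = correct, 0 = incorrect).
    b_anchor = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    resp = np.array([1, 1, 1, 0, 0])
    theta_hat = mle_theta(resp, b_anchor)
    ```

    Because every examinee answered the anchor items, their θs are on a common scale regardless of which noise condition the remaining items were presented in.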
