Abstract
One of the critical issues in the testing and assessment practices of English
interpreting courses at universities in Japan is the lack of a methodology for systematic
testing and of related assessment criteria. The present paper proposes a new testing model
that can be employed for criterion-referenced testing at universities in place of the
conventional “computer-based recorded verbal performance test.” It is called “The
Performance Test of English Interpreting in the Written Form,” and it comprises an
assessment instrument and a scoring rubric. Using data from 160 students who
concurrently took identical interpreting performance tests in the recorded verbal form
and the written form, the two tests were examined against several theoretical constructs. The
findings demonstrate the superiority of the written form over the recorded form and illustrate
how the rubric-based rating system is a determining factor in the legitimacy of the
written-form performance test.