JLTA Journal
Online ISSN : 2189-9746
Print ISSN : 2189-5341
ISSN-L : 2189-5341
Volume 20
  • 2017 Volume 20 Pages 0-
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
JLTA 2016 Keynote Speech
Approaches and Challenges to Assessing Oral Communication on Japanese Entrance Exams (Gary J. OCKEY)
JLTA 2016 KELTA Delegate Presentation
Validations of an English Placement Test for a General English Language Program at the Tertiary Level (Young-Mi KIM, Misook KIM)
Research Papers
1. The Effects of Questions on EFL Learners’ Situation Models: Types of Question, Text Levels and Learners’ L2 Reading Proficiency (Masahiro MAEDA)
2. Analyzing Explicit and Implicit Reading Questions in a Term-Exam: A Case Study (Yusuke HASEGAWA)
3. The Assessment Beliefs and Practices of English Teachers in Japanese Universities (Paul WICKING)
4. Measuring Vocabulary Size of Japanese Junior High School Students: Cross-Sectional and Longitudinal Viewpoints (Tsuyoshi SATO)
General Correspondence
1. The Rationale for the Establishment of the Japan Language Testing Association
2. Constitution of the Japan Language Testing Association
3.
4.
5. Guidelines for Contributors to the JLTA Journal
6. Rules and Regulations of the JLTA Best Paper Award
7. Editors and Referees for the JLTA Journal Vol. 20 (2017)
    Download PDF (338K)
  • Gary J. OCKEY
    2017 Volume 20 Pages 3-14
    Published: 2017
    Released on J-STAGE: December 14, 2017
    JOURNAL OPEN ACCESS
The Japanese university entrance exam does not include an oral communication section, and as a result, teachers are often conflicted as to whether they should spend class time on oral communication activities, as mandated by the national curriculum, or on preparing their students for the entrance exams. In addition, universities are unlikely to be able to accurately select the students with the strongest English abilities when they are given no information about students’ abilities to communicate orally. The obvious solution, it would seem, is simply to include an oral communication section on the exam. However, valid oral communication assessments are immensely challenging to implement in large-scale, high-stakes contexts. This paper discusses some of the approaches that have been used to assess oral communication in such contexts and their appropriateness for the Japanese entrance exam. It also explores current areas of research that might help to alleviate the challenges of including an oral communication section on the exam. It focuses on three approaches currently in use: computer-delivered and computer-scored, computer-delivered and human-scored, and human-delivered and human-scored.
    Download PDF (843K)
  • Young-Mi KIM, Misook KIM
    2017 Volume 20 Pages 17-34
    Published: 2017
    Released on J-STAGE: December 14, 2017
    JOURNAL OPEN ACCESS
The present study sought to validate an English placement test (EPT) developed for a General English Language Program (GELP). The goal of the GELP curriculum at the host university is to improve reading skills as well as speaking and writing skills. Standardized English tests (e.g., TOEIC or TOEFL), however, do not match the objectives of the host university’s English language curriculum well. In the present study, a total of 2,467 freshmen took the EPT online prior to the beginning of the semester. The test consisted of 40 reading comprehension questions, and 60 minutes was allotted to complete the exam. The findings showed that the EPT was highly reliable (alpha = .898). The item difficulty indices indicated that the EPT was appropriately developed, although more than half of the test items turned out to be challenging. The discrimination indices showed that the EPT discriminated relatively well between upper-level and lower-level students. In addition, a survey of 1,007 students who took the exam was conducted to obtain their perceptions of the EPT. When asked about the necessity of the EPT, nearly 80% of the students expressed the view that it was an essential procedure for optimal learning. With respect to test difficulty, the students responded that the EPT was difficult, in line with the results of the item analysis. With respect to which skills should be tested in the EPT, the majority of the students selected “reading” as a fundamental component. Over 60% of the students perceived that the EPT actually measured their ability accurately. Lastly, the correlation between the EPT and the KSAT was statistically significant, suggesting that the EPT of the host university is valid. (A sketch of how such item statistics can be computed follows this entry.)
    Download PDF (1180K)
  • Masahiro MAEDA
    2017 Volume 20 Pages 37-56
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
The present study investigated the effects of three types of questions—fact-finding questions (FFQ), backward inference questions (BIQ), and self-reference questions (SRQ)—on learners’ construction of situation models during reading instruction in an English-as-a-foreign-language (EFL) environment. In this experiment, 89 Japanese high-school students read two passages at different text levels, and their comprehension and construction of situation models were assessed via a free recall task (FRT) and a sentence recognition task (SRT). The FRT recall data were analyzed both quantitatively and qualitatively. The results indicated that participants in the upper group did not generate inferences without questions in the difficult text, but did so without questions in the easy text. However, upper-group participants given SRQ generated inferences even in the difficult text, whereas FFQ and BIQ did not activate inference generation for the upper group in the difficult text. In addition, the results suggest that questions may sometimes interfere with both inference generation and surface-level comprehension in the easy text for the upper group. The results also indicated that participants in the lower group did not generate inferences without questions, but did generate them when given SRQ.
    Download PDF (1178K)
  • Yusuke HASEGAWA
    2017 Volume 20 Pages 57-75
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
This research explores how teachers can validate True/False questions in a foreign language reading term-exam. Two English teachers from Japanese junior and senior high schools and six Japanese graduate students majoring in Foreign Language Education participated in a case study (Study 1). The eight participants and the author of this article independently classified five reading questions into text-explicit and text-implicit types using two criteria. The first criterion was whether the question requires students to make inferences to understand implicit information; the second was whether test-takers need to read only one sentence or more than one sentence to answer the question correctly. Although the participants were not informed that the two criteria were designed for the same purpose (i.e., distinguishing text-explicit and text-implicit items), the final classifications under the two criteria were identical, with high interrater reliability. A further case study was conducted using the same five questions (Study 2). These reading questions were given to 80 undergraduate students as part of a term-exam. The term-exam was an achievement test designed to measure how well the examinees understood what they had learned; the students had already read the text in a previous lesson, but they had never answered the five questions before. On average, the questions that had been classified as text-implicit were more difficult for the students than the text-explicit questions. Thus, the test was composed of both easy and difficult items, as intended by the test creator. The validity, reliability, and practicability of the taxonomy of question types proposed in this research are further discussed based on the data. (A sketch of one common interrater agreement statistic follows this entry.)
    Download PDF (1057K)
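The abstract above reports high interrater reliability for the classification task but does not name the statistic here. As a hedged illustration, assuming agreement between a pair of raters is summarized with Cohen’s kappa (with nine raters, as in Study 1, a multi-rater statistic such as Fleiss’ kappa would typically be used instead), a minimal sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    (e.g., 'explicit' / 'implicit') to the same items. Illustrative only."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: proportion of items labeled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: sum over labels of the product of marginal proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)

    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters classify five reading questions
print(cohens_kappa(
    ["explicit", "implicit", "implicit", "explicit", "implicit"],
    ["explicit", "implicit", "explicit", "explicit", "implicit"],
))  # ≈ 0.615
```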
  • Paul WICKING
    2017 Volume 20 Pages 76-89
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
Any discussion of English education in Japan is invariably bound up with a discussion of assessment. All too often, such discussions have focused on entrance examinations and other high-stakes summative tests. However, language testing and assessment do not take place in a vacuum; they are deeply affected by broader social and cultural contexts, as well as by individual features of the school and the classroom. In particular, perhaps more than any other factor, it is teachers who most influence how tests are created, conducted, and interpreted. For universities to foster an environment in which reliable assessment can take place, it is not enough to provide well-designed, rigorous tests; teachers’ beliefs and practices concerning assessment must also be taken into account. This paper seeks to answer the question: what are the assessment beliefs and practices of EFL teachers working in Japanese universities? To that end, survey responses were gathered from English language teachers working in Japanese higher education. The results indicated slight differences in belief and practice between native speaker and non-native speaker teachers, as well as between full-time and part-time teachers. Despite these differences, most teachers appear to take a learning-oriented approach to assessment. The findings provide support and direction for policy-makers and educational leaders seeking to promote better testing practice.
    Download PDF (935K)
  • Tsuyoshi SATO
    2017 Volume 20 Pages 90-106
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
The present study examines the current state and development of the vocabulary size of Japanese junior high school students learning English at the beginner level, using a vocabulary size test (Sato, 2016). A total of 505 junior high school students participated in the research, and their vocabulary size was measured at the end of the first and second terms. The results show that, at the end of the first term, students’ vocabulary sizes differed significantly by grade level, with average estimated sizes of 795.03 for seventh graders, 1,024.59 for eighth graders, and 1,070.76 for ninth graders. The acquisition process showed different patterns across word frequency levels: for seventh graders, the 400- and 1,200-word levels developed significantly, while only the 1,200-word level increased significantly for eighth graders. Given these results, further research is needed to describe more precisely the process of vocabulary development in junior high school students, and to improve the test adopted in the present study. (A sketch of a typical frequency-band size estimation follows this entry.)
    Download PDF (1098K)
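The abstract above does not state how vocabulary size was estimated from test scores; the grouping into 400- and 1,200-word frequency levels suggests a frequency-band sampling design. A minimal sketch, assuming the common approach of scaling each band’s proportion correct by the number of words that band represents (the band names and numbers below are hypothetical, not taken from Sato’s test):

```python
def estimate_vocab_size(band_scores, band_sizes):
    """Estimate total vocabulary size from a frequency-band sampling test.

    band_scores: {band_name: (items_correct, items_sampled)}
    band_sizes:  {band_name: number of words the band represents}

    Each band's proportion correct is scaled up to the number of words
    that band samples, then the bands are summed. Illustrative sketch only.
    """
    total = 0.0
    for band, (correct, sampled) in band_scores.items():
        total += (correct / sampled) * band_sizes[band]
    return total

# Hypothetical example: three 400-word frequency bands, 10 items each
scores = {"1-400": (9, 10), "401-800": (6, 10), "801-1200": (4, 10)}
sizes = {"1-400": 400, "401-800": 400, "801-1200": 400}
print(estimate_vocab_size(scores, sizes))  # 0.9*400 + 0.6*400 + 0.4*400 = 760.0
```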
  • 2017 Volume 20 Pages 109-123
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
General Correspondence
1. The Rationale for the Establishment of the Japan Language Testing Association
2. Constitution of the Japan Language Testing Association
3.
4.
5. Guidelines for Contributors to the JLTA Journal
6. Rules and Regulations of the JLTA Best Paper Award
7. Editors and Referees for the JLTA Journal Vol. 20 (2017)
    Download PDF (1207K)
  • 2017 Volume 20 Pages 125
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
    Download PDF (332K)
  • 2017 Volume 20 Pages 126
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL OPEN ACCESS
    Download PDF (838K)