JLTA Journal
Online ISSN : 2189-9746
Print ISSN : 2189-5341
ISSN-L : 2189-5341
Volume 16
Displaying 1-28 of 28 articles from this issue
  • Article type: Cover
    2013 Volume 16 Pages Cover1-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (22828K)
  • Article type: Index
    2013 Volume 16 Pages Toc1-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (62K)
  • David BEGLAR
    Article type: Article
    2013 Volume 16 Pages 1-13
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    A perusal of the extensive reading literature shows that assessment has played a minor role that has been confined to the presentation of lower-order factual questions. In this paper I propose a broader role for assessment in extensive reading classrooms that is designed to bring extensive reading closer to the center of the foreign language curriculum. This requires that teachers conceptualize the role of extensive reading in foreign language curricula more broadly by viewing it as an important opportunity to work on cognitive, affective, and social goals and objectives. In this conceptualization, extensive reading is combined with the Revised Bloom's taxonomy to yield a richer form of pedagogy that gives extensive reading a more important role in the foreign language curriculum, allows for the teaching of the six levels of cognizing that make up the taxonomy, and thereby yields a firm foundation for assessing extensive reading, particularly when the assessment follows a particular set of general classroom assessment principles. The result is a rich set of assessment practices that go well beyond the mere assessment of factual recall; they have the potential to enhance the extensive reading experience, lead students to engage in higher-order thinking, and make important contributions to student learning.
    Download PDF (817K)
  • Dong Wan CHO, Jung Young PARK
    Article type: Article
    2013 Volume 16 Pages 15-42
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Language courses taught at universities in Korea have usually adopted a letter grading system without much consideration of its effects on learning outcomes or the psychological burden it places on learners. A letter grading system, which is based on norm-referenced testing, aims to differentiate students' achievement in relation to others and to boost their motivation for higher grades. A strict proportion for each grade has often been mandated, causing irrational competition among students. In contrast, a Satisfactory/Unsatisfactory (S/U) grading system, which is grounded in criterion-referenced testing, measures the extent to which students have achieved the goals set for a class and what they can and cannot do. In the field of language teaching and testing, little research has been done on the effects of a grading system on the teaching and learning of language courses. This research intends to show how a change from a letter grading system to an S/U grading system at a university in Korea has affected students' motivation for learning, attitudes in class, and achievement. Based on the results of two questionnaire surveys of 896 students and interviews with twenty students and three English teaching faculty members, this research found that the S/U grading system functioned well for the university's language program. However, its failure to promote motivation, along with students' insincere and passive attitudes in class and their lack of interest, were pointed out as the system's major drawbacks. Suggestions for addressing the limitations of the S/U grading system are presented.
    Download PDF (1412K)
  • Yusuke HASEGAWA
    Article type: Article
    2013 Volume 16 Pages 43-63
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    There is a widely held belief that L2 vocabulary should be taught in an L2 context. However, studies in this field do not always support this idea because the effect of contextualized learning is rarely found in ordinary immediate-posttest scores. To explore the possibility of another way of assessing vocabulary cognition, Hasegawa (2010) used an imageability rating scale and found that context presentation enables learners to realize and imagine what a target word refers to. However, a follow-up study was imperative because the original study involved only 22 university students and did not consider the factor of proficiency. According to past studies, the effect of contextualized learning might be found only among adult or advanced learners. Therefore, the present study carefully re-examined how context presentation affects learners' imageability ratings, as well as ordinary posttest scores, with an additional 118 high school students. In the experiment, the participants learned 21 unfamiliar adjectives under three types of translation-based learning that differed in context presentation: (a) no context, (b) collocational phrase context, and (c) sentential context. Study 1 analyzed the data from the new participants, and Study 2 compared the current data with Hasegawa's (2010) investigation. The results indicate that context presentation does not enhance retention of vocabulary meanings and that the context effect on word imageability differs across learner groups. Therefore, the previous conclusion should be modified as follows: when learners are provided with both translations and contexts, they tend to use translations as the immediate source for learning new words; however, university students are ready to enrich their representations of lexical meaning in context.
    Download PDF (1540K)
  • Article type: Appendix
    2013 Volume 16 Pages 65-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (46K)
  • Yoshihito SUGITA
    Article type: Article
    2013 Volume 16 Pages 67-86
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    In this study, two sets of accuracy and communicability tasks (Original and New) are compared to determine whether they are parallel in terms of difficulty. Three experienced Japanese teachers of English examined the degree of parallelism of five accuracy tasks and five communicability tasks (both Original and New) with regard to time pressure, support, stakes, and overall difficulty. The two Original tasks and the two selected New tasks were administered to 24 university students in Japan. Two raters were asked to evaluate each of the two task sets separately after an interval of one month. Performance data from the different task sets were analyzed using classical procedures (correlation and ANOVA) and FACETS. The results indicate that the Original and New tasks can be considered parallel at the overall test level. At the individual task level, the two communicability tasks can be considered equally difficult, while the accuracy tasks can be separated into different levels of difficulty with a reasonable level of certainty. The variability in the difficulty of the selected accuracy task may be attributed to prompt effects in the assessment of writing performance. The results provide useful insights for further task development.
    Download PDF (1274K)
  • Akihiro ITO, Hiroshi SHIMATANI, Ken NORIZUKI, Masayoshi KINOSHITA
    Article type: Article
    2013 Volume 16 Pages 87-105
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    A test should be reliable and valid enough for a test-taker to predict, assess, and/or judge his/her own test performance immediately after taking it. Based on this point of view, the present study investigated the TOEIC[R]'s validity from the perspective of data on test-takers' post-test psychological attitudes. The investigation involved three steps: (a) developing a self-assessment questionnaire as a tool for gathering data on participants' psychological attitudes, (b) conducting validation studies on the self-assessment questionnaire, and (c) examining the level of matching between TOEIC[R] test scores and test-takers' judgments of their own test performance. Results indicated that both versions of the TOEIC[R] showed high reliability coefficients, as did the self-assessment questionnaire. To examine the construct validity of the self-assessment questionnaires, principal component analyses were used (see the sketch after this entry). On the basis of the scree plot, a single-factor solution was deemed appropriate for each of the two data sets, implying that the factor structure underlying each of the self-assessment questionnaires is unidimensional. Moreover, there were low to moderate correlations between each part of the TOEIC[R] and the corresponding self-assessment questionnaire items, implying that the two self-assessment questionnaires are valid indicators of TOEIC[R] test scores.
    Download PDF (1438K)
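    A note on the unidimensionality check mentioned in the abstract above: the following is a minimal, illustrative sketch, not the authors' code or data, of a principal component analysis with a scree plot. The sample size (200), item count (10), and loadings are assumptions made purely for illustration.

    # Minimal sketch (simulated data): principal component analysis of
    # questionnaire items with a scree plot, the kind of check used to argue
    # that a self-assessment scale is essentially unidimensional.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n_respondents, n_items = 200, 10

    # Simulate items that load mainly on a single latent trait.
    trait = rng.normal(size=(n_respondents, 1))
    items = 0.7 * trait + 0.5 * rng.normal(size=(n_respondents, n_items))

    # PCA via eigendecomposition of the inter-item correlation matrix.
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]

    plt.plot(range(1, n_items + 1), eigenvalues, marker="o")
    plt.xlabel("Component")
    plt.ylabel("Eigenvalue")
    plt.title("Scree plot (simulated self-assessment items)")
    plt.show()

    In a plot like this, a single dominant first eigenvalue followed by a sharp drop (the elbow) supports a one-factor, unidimensional interpretation of the scale.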
  • Takanori SATO
    Article type: Article
    2013 Volume 16 Pages 107-126
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    The construct of general-purpose oral proficiency tests has been defined on the basis of theoretical models of second language (L2) communication established by language specialists. In contrast, the perspectives on L2 communication ability held by linguistic laypersons (non-specialists in language testing and teaching) have not been incorporated into language assessment. However, it is important to understand how linguistic laypersons conceptualize L2 communication ability because they are the eventual interlocutors of L2 speakers in most real-world contexts. This study explores the features that influence linguistic laypersons' evaluative judgments of L2 oral communication ability. Four graduate students majoring in disciplines other than applied linguistics and Teaching English to Speakers of Other Languages (TESOL) participated in the study. They watched 10 speakers' performances on the College English Test-Spoken English Test and indicated their impressions of each test-taker's communication ability. Three of the participants' ratings were moderately correlated with the test scores, whereas the ratings of one participant were weakly correlated. Their retrospective verbal protocols were also collected and analyzed. Fluency appeared to affect rater impressions the most, whereas grammar and vocabulary were shown to be peripheral factors in their judgments. The protocols also revealed that the participants attended to various non-linguistic features, which implies that language proficiency does not guarantee a positive evaluation from linguistic laypersons. The study also showed individual differences in the ratings and protocols, a sign of the complexity and subjectivity of linguistic laypersons' judgments. It is suggested that their unique criteria can be used to supplement conventional, linguistically oriented assessment criteria and to more accurately predict linguistic laypersons' impressions in real-life contexts.
    Download PDF (1586K)
  • Kazuhiko KATAGIRI
    Article type: Article
    2013 Volume 16 Pages 127-146
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    The purpose of this study is to compare the progress of receptive vocabulary knowledge and the listening, reading, and speaking proficiencies of Japanese senior high school (SHS) EFL students over the approximately three years of SHS. Thirty-nine students in an intact class at a Japanese SHS took Mochizuki's (1998) Vocabulary Size Tests (VSTs) and the TOEIC Bridge test (listening and reading sections) twice (about two weeks after they entered the SHS and three months before their graduation). They also took speaking interview tests twice (about seven months after they entered the SHS and five months before their graduation). Their utterances on the speaking tests were tape-recorded, transcribed, and analyzed in terms of four aspects (fluency, accuracy, syntactic complexity, and lexical complexity). Standard Errors of Difference (SEDiffs) were used to identify progress (↑), no progress (→), or deterioration (↓) for each participant on the estimated VST scores and on the TOEIC Bridge listening and reading scores, respectively (see the sketch after this entry). For the four speaking indicators derived from the transcriptions, the degrees of progress were judged by comparison with the standard deviations. The results showed that all the students made progress in receptive vocabulary size; the corresponding percentages were 69.2% for listening, 94.9% for reading, and, for speaking, 51.3% for fluency, 66.7% for accuracy, 51.3% for syntactic complexity, and 51.3% for lexical complexity. Further, the results showed various combinations of progress, no progress, and deterioration in fluency, accuracy, syntactic complexity, and lexical complexity, and only 10.3% of the students made progress in all four speaking areas. Thus, it appears difficult to make progress in all four areas of speaking. Leaving the four speaking indicators aside, most (69.2%) of the students made progress in receptive vocabulary size, listening, and reading; the second largest group (25.6%) made no progress in listening but did make progress in vocabulary and reading.
    Download PDF (1338K)
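    A note on the Standard Error of Difference (SEDiff) procedure mentioned in the abstract above: the sketch below is a minimal, hypothetical illustration under classical test theory (SEM = SD * sqrt(1 - reliability); SEDiff = sqrt(SEM1^2 + SEM2^2)), not the author's code. The reliabilities, standard deviations, scores, and the 1.96 cutoff are assumptions made for illustration only.

    # Minimal sketch (hypothetical values): classifying a pre/post score change
    # as progress, no progress, or deterioration using a Standard Error of
    # Difference from classical test theory.
    import math

    def sediff(sd1, rel1, sd2, rel2):
        """SEDiff = sqrt(SEM1^2 + SEM2^2), where SEM = SD * sqrt(1 - reliability)."""
        sem1 = sd1 * math.sqrt(1 - rel1)
        sem2 = sd2 * math.sqrt(1 - rel2)
        return math.sqrt(sem1 ** 2 + sem2 ** 2)

    def classify(score1, score2, sd1, rel1, sd2, rel2, z=1.96):
        """Label the observed gain relative to a z * SEDiff band."""
        band = z * sediff(sd1, rel1, sd2, rel2)
        gain = score2 - score1
        if gain > band:
            return "progress"
        if gain < -band:
            return "deterioration"
        return "no progress"

    # Hypothetical pre/post listening scores with assumed SDs and reliabilities.
    print(classify(score1=60, score2=74, sd1=10, rel1=0.85, sd2=10, rel2=0.85))

    With these hypothetical values the 14-point gain exceeds the 1.96 * SEDiff band of roughly 10.7 points, so it would be labelled progress.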
  • Tomoko FUJITA, Shin-ichi MAYEKAWA
    Article type: Article
    2013 Volume 16 Pages 147-165
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    It has become a trend for English language programs at Japanese universities to introduce Can-do statements (CDS) into their curricula. CDS should ideally be tailor-made for the target learners who study in a specific language program. Therefore, case studies investigating efficient ways to create valid CDS for specific language programs are quite important. This research focuses on CDS for a listening course in an English language program at a Japanese university. First, a panel of teachers selected 28 CDS for three different proficiency levels of students. Then, before the listening course began, students answered the CDS in the form of a questionnaire referred to as a Can-do self-checklist (SCL). After students completed the listening course, they answered the SCL again. The results were analyzed with a one-parameter item response theory (IRT) model, and students' ability levels (θ) were estimated (see the sketch after this entry). The average θ on the SCL increased toward the end of the semester, and students at the basic level showed the largest increase in θ. The SCL results were also compared with θ estimates from the students' listening tests; the correlation coefficients between these measures were moderate, although the relationship was strong at the basic level. Finally, the difficulty level of each SCL item expected by the teachers was compared with the calculated item difficulty parameter, and five SCL items showing remarkable discrepancies were analyzed. Results indicated that the wording of SCL items is quite important and should be systematically organized. Moreover, special attention should be paid to Japanese students' language learning environment: an SCL item describing a situation students have never experienced is not an accurate means of self-evaluation.
    Download PDF (1616K)
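    A note on the one-parameter IRT analysis mentioned in the abstract above: the following is a minimal sketch, not the authors' code, of estimating a single respondent's ability θ under the Rasch (1PL) model, assuming dichotomous Can-do responses and known item difficulties. The six difficulties and the response pattern are hypothetical.

    # Minimal sketch (hypothetical values): maximum-likelihood estimation of one
    # respondent's ability (theta) under the one-parameter (Rasch) IRT model,
    # given assumed item difficulties and dichotomous Can-do responses.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def rasch_prob(theta, b):
        """P(endorsing an item | theta) under the 1PL model."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def estimate_theta(responses, difficulties):
        """Return the bounded ML estimate of theta for one response pattern."""
        r = np.asarray(responses, dtype=float)
        b = np.asarray(difficulties, dtype=float)

        def neg_log_lik(theta):
            p = rasch_prob(theta, b)
            return -np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))

        return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

    # Hypothetical checklist: six Can-do items with assumed difficulties (logits).
    difficulties = [-1.5, -0.8, -0.2, 0.4, 1.0, 1.8]
    responses = [1, 1, 1, 1, 0, 0]
    print(round(estimate_theta(responses, difficulties), 2))

    For this hypothetical response pattern the printed estimate is roughly 1.0 logits; in practice, the item difficulties would themselves be estimated from the full response matrix.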
  • [in Japanese], [in Japanese], [in Japanese]
    Article type: Article
    2013 Volume 16 Pages 167-184
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    This paper focuses on the validity of paper-and-pencil tests of pronunciation and accent. Because these tests are included in the National Center Test, the validity of these subtests, with regard to whether the scores they yield can represent actual oral performance of pronunciation and accent, has been debated for over two decades. However, comprehensive research, such as investigation of the test makers' intentions and students' perspectives on the tests, has not yet been conducted. Therefore, this paper investigated the relationship between written pronunciation and accent tests and actual oral performance at the word, sentence, and discourse levels. In addition, a questionnaire was administered to the students to examine their learning experiences regarding pronunciation and accent and their perspectives on these tests. The correlation analyses between the written and oral tests detected small correlations between the written and oral pronunciation tasks. On the other hand, a higher correlation was found between the written and oral accent tasks at the word level, but the correlation coefficients were smaller between the written accent task and the oral tasks at the sentence level, and smaller still at the discourse level. The questionnaire revealed that the students felt the written pronunciation and accent tests did not measure their actual pronunciation and accent abilities, but rather their knowledge of these subskills. These findings were discussed from the perspective of the test makers' intentions.
    Download PDF (1460K)
  • Yuji USHIRO, Shuichi TAKAKI, Mayuko KOBAYASHI, Yusuke HASEGAWA, Shingo ...
    Article type: Article
    2013 Volume 16 Pages 185-204
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    The summary writing task has been widely used to examine how well readers comprehend texts (Alderson, 2000). As a scoring criterion for summary protocols, previous studies have considered whether or not a reader can effectively use macrorules, which reflect the process by which readers construct their mental representation (e.g., Johns & Mayes, 1990; Kim, 2001). In fact, the use of macrorules is assumed to be closely related to the process by which readers construct their mental representation of a passage (van Dijk & Kintsch, 1983). However, the summary task has a methodological problem in that readers' performance in the use of macrorules reflects not only the process of reading comprehension but also their writing skills (Cohen, 1993). Therefore, we used another psycholinguistic measure of reading comprehension that suppresses the influence of writing skills: the Meaning Identification Technique (MIT). The present study examined English as a foreign language (EFL) learners' macrorule use with the MIT as well as in a summary writing task. Focusing on three types of macrorules (i.e., the deletion, generalization, and construction rules), Experimental Study 1 showed that EFL students used all three types of macrorules when writing summaries. In Experimental Study 2, which used the MIT, the generalization and construction rules were more difficult for learners to use than the deletion rule. These two rules required learners to generate inferences to construct the implicit main ideas of texts, whereas the deletion rule simply required learners to select explicit main ideas from texts. Comparison of the two series of experimental studies indicated that summary writing encouraged the participants to use the generalization and construction rules by requiring them to integrate pieces of information to produce the summary. Therefore, macrorule use measured by summary protocols should be interpreted carefully, taking into account the strategic macrorule use promoted by the summary task itself. Furthermore, the results implied that the MIT could be superior to a summary writing task as a reading comprehension test, insofar as macrorule use under natural reading conditions reflects pure comprehension of a text.
    Download PDF (1580K)
  • Article type: Appendix
    2013 Volume 16 Pages 205-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (39K)
  • Article type: Appendix
    2013 Volume 16 Pages 207-208
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (153K)
  • Article type: Appendix
    2013 Volume 16 Pages 209-211
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (194K)
  • Article type: Appendix
    2013 Volume 16 Pages 212-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (76K)
  • Article type: Appendix
    2013 Volume 16 Pages 213-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (36K)
  • Article type: Appendix
    2013 Volume 16 Pages 213-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (36K)
  • Article type: Appendix
    2013 Volume 16 Pages 213-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (36K)
  • Article type: Appendix
    2013 Volume 16 Pages 214-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (53K)
  • Article type: Appendix
    2013 Volume 16 Pages 214-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (53K)
  • Article type: Appendix
    2013 Volume 16 Pages 215-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (71K)
  • Article type: Appendix
    2013 Volume 16 Pages 216-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (11K)
  • Article type: Appendix
    2013 Volume 16 Pages 217-220
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (181K)
  • Article type: Appendix
    2013 Volume 16 Pages 221-222
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (67K)
  • Article type: Appendix
    2013 Volume 16 Pages 223-224
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (49K)
  • Article type: Appendix
    2013 Volume 16 Pages App1-
    Published: December 25, 2013
    Released on J-STAGE: January 27, 2017
    JOURNAL OPEN ACCESS
    Download PDF (19K)