The Journal of Information and Systems in Education
Online ISSN : 2186-3679
Print ISSN : 1348-236X
ISSN-L : 1348-236X
Volume 16, Issue 1
Displaying 1-4 of 4 articles from this issue
Short Note
  • Asami Shiwaku, Nobuyuki Kobayashi, Hiromitsu Shiina
    2017 Volume 16 Issue 1 Pages 1-6
    Published: 2017
    Released on J-STAGE: September 02, 2017
    JOURNAL FREE ACCESS

    Recently, as part of faculty development, universities administer questionnaires about lectures and plan improvements based on surveys of student satisfaction with teachers and lectures. In this study, we predict the values of free comments and of the words they contain in such a lecture questionnaire. The comment evaluation technique predicts the value of an entire comment by mutually and recursively iterating between partial comment evaluations by multiple people and value prediction for those comments and words. In addition, we evaluate the differences between individually evaluated words and the predicted comment values.

    Download PDF (533K)
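The mutually recursive scheme described in the abstract can be illustrated with a minimal sketch. This is an assumption about the general idea, not the authors' algorithm: comment values are estimated from word values, word values are re-estimated from the comments they appear in, and the two updates are iterated; the sample comments and the 1-5 scale are hypothetical.

```python
from collections import defaultdict

# Hypothetical data: tokenized free comments, some with seed evaluations
# given by human raters (scale 1-5); None marks a value to be predicted.
comments = [
    (["lecture", "was", "clear"], 5.0),
    (["slides", "were", "confusing"], 2.0),
    (["clear", "slides"], None),
]

word_value = defaultdict(lambda: 3.0)  # neutral prior for unseen words

for _ in range(20):  # iterate word <-> comment updates until stable
    # Comment value = mean of its word values (rated comments keep labels).
    comment_value = [
        label if label is not None else sum(word_value[w] for w in ws) / len(ws)
        for ws, label in comments
    ]
    # Word value = mean value of the comments the word appears in.
    sums, counts = defaultdict(float), defaultdict(int)
    for (ws, _), cv in zip(comments, comment_value):
        for w in ws:
            sums[w] += cv
            counts[w] += 1
    for w in sums:
        word_value[w] = sums[w] / counts[w]

predicted = comment_value[2]  # predicted value of the unlabeled comment
```

In this toy fixed point, a word shared with a highly rated comment pulls the unlabeled comment's prediction up, while a word shared with a poorly rated one pulls it down.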
  • Katsunori Kotani, Takehiko Yoshimi
    2017 Volume 16 Issue 1 Pages 7-11
    Published: 2017
    Released on J-STAGE: September 02, 2017
    JOURNAL FREE ACCESS

    Because the ease of grasping the content of listening material influences learners' motivation and learning outcomes, language teachers need to choose materials appropriate to their learners' proficiency. To ease this heavy task, a traditional readability measurement approach has been adapted into an automatic method that measures the ease of listening comprehension using linear regression analysis over listening materials. Because machine learning methods such as decision tree classification can properly handle heterogeneous features, recent readability measurement methods have adopted classification approaches such as decision trees. We therefore propose a measurement method that applies decision tree classification to linguistic features of listening materials as well as learner features of listening proficiency. The experimental results showed that the accuracy of our method (47.0%) exceeded the baseline accuracy (25.2%), and that, among the learner features, the listening test score and experience visiting English-speaking areas were discriminative for measurement accuracy.

    Download PDF (311K)
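A decision tree over mixed material and learner features, as the abstract describes, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the specific feature names, the synthetic data, and the four-level label construction are all assumptions for demonstration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200

# Linguistic features of the material (hypothetical examples)
speech_rate = rng.uniform(2.0, 5.0, n)   # syllables per second
sentence_len = rng.uniform(5, 25, n)     # words per sentence
# Learner features (hypothetical examples)
test_score = rng.uniform(300, 990, n)    # listening test score
visited = rng.integers(0, 2, n)          # visited English-speaking area (0/1)

X = np.column_stack([speech_rate, sentence_len, test_score, visited])

# Synthetic 4-level ease-of-comprehension label: harder material and lower
# proficiency lower the predicted ease level.
difficulty = (speech_rate / 5 + sentence_len / 25
              - test_score / 990 - 0.2 * visited)
y = np.digitize(difficulty, np.quantile(difficulty, [0.25, 0.5, 0.75]))

# A decision tree handles the continuous and binary features uniformly.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
train_acc = clf.score(X, y)
```

The tree's split thresholds make it easy to see which learner features (e.g. test score, visiting experience) are discriminative, which mirrors the analysis reported in the abstract.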
Report on Practice
  • Kei Amano, Shigeki Tsuzuku, Katsuaki Suzuki, Naoshi Hiraoka
    2017 Volume 16 Issue 1 Pages 12-17
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL FREE ACCESS

    This paper describes the design of a digital badge that supports learners who participate in blended instructional-design (ID) workshops. The workshops are hosted by Kumamoto University as lifelong learning activities; their purpose is to introduce a practical method that enhances participants' educational practice and supports them in applying ID in their jobs. We designed the digital badge as a tool that continues to support participants after the workshop has finished. The badge constitutes not just a program certification but also an index of the learning outcomes of the blended workshop, such as online report assignments and the asynchronous record of forum posts made during the workshop. Through actual use, we confirmed that the acquired badge accumulated learning outcomes useful in the participants' jobs and reflected their skill mastery.

    Download PDF (1382K)
Practical Paper
  • Sei Sumi, Yoshimitsu Miyazawa
    2017 Volume 16 Issue 1 Pages 18-25
    Published: 2017
    Released on J-STAGE: December 13, 2017
    JOURNAL FREE ACCESS

    The purposes of this paper are to (a) develop an adaptive system with the help of the graded response model of IRT, (b) test the system with a small group of L2 learners, (c) examine the feasibility of computerized dynamic assessment, and (d) propose a new set of scoring mechanisms. We developed a computer-based adaptive system that provides students with appropriate test items and a set of graduated prompts from implicit to explicit under the conditions of mediation. We found that the set of graduated prompts successfully guided students to the correct answer and provided rich diagnostic information needed for L2 English education. With the help of the graded response model of IRT, computerized dynamic assessment will become a more advanced and practical educational assessment tool in the field of L2 studies.

    Download PDF (601K)
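The adaptive item selection that the abstract attributes to the graded response model (GRM) of IRT can be sketched in a few lines. This is a minimal sketch of the standard GRM, not the authors' system: the item bank and its parameters are hypothetical, and selection here simply maximizes item information at the current ability estimate.

```python
import math

def boundary_prob(theta, a, b):
    """P*(X >= k): probability of responding at or above a category boundary."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def category_probs(theta, a, bs):
    """GRM category probabilities as successive boundary differences."""
    stars = [1.0] + [boundary_prob(theta, a, b) for b in bs] + [0.0]
    return [stars[k] - stars[k + 1] for k in range(len(stars) - 1)]

def item_information(theta, a, bs):
    """Samejima's item information: sum of (P_k')^2 / P_k over categories."""
    stars = [1.0] + [boundary_prob(theta, a, b) for b in bs] + [0.0]
    info = 0.0
    for k in range(len(stars) - 1):
        p = stars[k] - stars[k + 1]
        dp = a * (stars[k] * (1 - stars[k])
                  - stars[k + 1] * (1 - stars[k + 1]))
        if p > 1e-12:
            info += dp * dp / p
    return info

def pick_next_item(theta, item_bank):
    """Choose the item most informative at the current ability estimate."""
    return max(item_bank, key=lambda item: item_information(theta, *item))

# Hypothetical bank: (discrimination a, ordered boundary difficulties bs)
bank = [(1.2, [-1.0, 0.0, 1.0]),
        (0.8, [-2.0, -1.0, 0.0]),
        (1.5, [0.5, 1.2, 2.0])]
next_item = pick_next_item(0.0, bank)
```

In a full system of the kind the abstract describes, a wrong answer would additionally trigger the next graduated prompt (from implicit to explicit) before re-estimating ability and selecting the next item.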