Bulletin of Research Center for Computing and Multimedia Studies, Hosei University
Online ISSN : 1882-7594
Exploring the Use of CLIP Model for Images Recommendation in Noun Memorization using Various Learning Context
Mohammad Nehal Hasnine, Thuy Thi Thu Tran, Hiroshi Ueda
RESEARCH REPORT / TECHNICAL REPORT

2022 Volume 37 Pages 54-56

Abstract

CLIP (Contrastive Language–Image Pre-training) is a neural network that learns visual features from still images under a wide variety of natural language supervision. The model's efficacy has not been explored much in vocabulary-learning research, particularly how CLIP performs when images are retrieved to represent a noun, with or without a learner-described learning context. Hence, this paper developed a web-based system for noun learning that creates learning materials from CLIP-recommended images and translation data obtained from a translation API. The research aspect of this study explored how CLIP's image ranking varies when an image search is performed for a noun with or without a language learner-described learning context. The web application is intended for foreign language learners who wish to learn new nouns using such learning materials.
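The ranking behavior studied here can be illustrated with a minimal sketch. CLIP retrieval ranks images by the cosine similarity between a text embedding and each image embedding; appending a learner-described context to the noun changes the text embedding and can therefore reorder the images. The embeddings below are toy stand-ins invented for illustration (the real system would obtain them from CLIP's text and image encoders), and all prompt strings and values are hypothetical:

```python
import numpy as np

def rank_images(text_emb, image_embs):
    """Rank image indices by cosine similarity to a text embedding,
    mirroring CLIP-style zero-shot retrieval (most similar first)."""
    t = text_emb / np.linalg.norm(text_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = im @ t
    return np.argsort(-sims), sims

# Toy 3-D embeddings standing in for CLIP image features (hypothetical):
image_embs = np.array([
    [0.9, 0.1, 0.0],   # image 0: e.g., an apple on a table
    [0.1, 0.9, 0.0],   # image 1: e.g., an apple tree in an orchard
    [0.0, 0.1, 0.9],   # image 2: an unrelated scene
])

# Hypothetical text embeddings for the noun alone vs. noun + context:
noun_only    = np.array([0.8, 0.3, 0.1])   # "an apple"
with_context = np.array([0.2, 0.9, 0.1])   # "an apple I saw in the orchard"

order_plain, _ = rank_images(noun_only, image_embs)
order_ctx, _ = rank_images(with_context, image_embs)
print(list(order_plain))  # context-free ranking
print(list(order_ctx))    # context changes which image ranks first
```

In this toy setup the context-free query ranks image 0 first, while the context-augmented query promotes image 1, which is exactly the kind of ranking shift the study examines.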

© 2022 Hosei University