Bulletin of the Research Center for Computing and Multimedia Studies, Hosei University (法政大学情報メディア教育研究センター研究報告)
Online ISSN : 1882-7594
Exploring the Use of the CLIP Model for Image Recommendation in Noun Memorization Using Various Learning Contexts
Mohammad Nehal Hasnine, Thuy Thi Thu Tran, Hiroshi Ueda
Research and Technical Reports (free access)

2022, Vol. 37, pp. 54-56

Abstract

CLIP (Contrastive Language–Image Pre-training) is a neural network that learns visual features from still images under a wide variety of natural language supervision. The model's efficacy has been little explored in vocabulary learning research, particularly how CLIP performs when images are retrieved to represent a noun with or without a learner-described learning context. Hence, this paper presents a web-based system for noun learning that creates learning materials from CLIP-recommended images and translation data obtained from a translation API. The research aspect of this study explores how the image ranking produced by the CLIP model varies when an image search is performed for a noun with or without a learner-described learning context. The web application is intended for foreign language learners who wish to learn new nouns through such learning materials.
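The ranking behavior the abstract describes can be sketched in a few lines: CLIP scores a text query against candidate images by cosine similarity of their embeddings, and appending a learner-described context to the noun changes the text embedding and hence the ranking. The snippet below is a minimal illustration using random placeholder vectors in place of real CLIP embeddings; the function names and vector dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_images(text_emb, image_embs):
    """Return image indices sorted by descending similarity to the text embedding."""
    scores = [cosine_sim(text_emb, e) for e in image_embs]
    return sorted(range(len(image_embs)), key=lambda i: scores[i], reverse=True)

# Placeholder embeddings standing in for CLIP outputs (hypothetical data).
rng = np.random.default_rng(0)
image_embs = [rng.normal(size=8) for _ in range(3)]

noun_emb = rng.normal(size=8)                              # e.g., "apple"
context_emb = noun_emb + rng.normal(scale=0.5, size=8)     # e.g., "apple on the teacher's desk"

print(rank_images(noun_emb, image_embs))     # ranking for the bare noun
print(rank_images(context_emb, image_embs))  # ranking may differ once context shifts the embedding
```

In a real pipeline the two embeddings would come from a CLIP text encoder applied to the bare noun and to the noun plus learner context, with the same scoring step deciding which images are recommended.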

© 2022 Hosei University