Proceedings of the Annual Conference of JSAI
Online ISSN: 2758-7347
34th (2020)
Session ID: 4I3-GS-2-02

Improved Meta-learning by Parameter Adjustment via Latent Variables and Probabilistic Inference
*Eiki SHIMIZU, Shogo AOKI, Kenta MIKAWA, Masayuki GOTO
Abstract

Standard deep neural networks require large amounts of training data and fail to achieve good performance in the small-data regime. To overcome this limitation, meta-learning approaches have recently been explored. The goal of meta-learning is to enable models to automatically acquire across-task knowledge, usually referred to as meta-knowledge, so that task-specific knowledge for new tasks can be obtained from only a few examples. Among these methods, Model-Agnostic Meta-Learning (MAML) is one of the most successful, showing high performance in many settings. However, MAML does not account for the varying effectiveness of meta-knowledge across tasks, since its learning rate is held constant for all tasks. In this paper, we propose a model that adjusts the learning rate for each task by introducing latent variables and applying probabilistic inference. We demonstrate that this approach improves the performance of MAML on a few-shot image classification benchmark dataset, and confirm that the learning rate is adaptively adjusted by visualizing the latent variables.

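The abstract describes modulating MAML's inner-loop learning rate per task by way of a task-specific latent variable. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the authors' implementation: a small encoder (here called TaskEncoder) produces a latent vector from the support set and maps it to a scalar that scales a base inner-loop learning rate (alpha_base). The class and function names, the deterministic (rather than probabilistic) encoder, and the toy regression setup are all assumptions made for illustration only.

```python
# Minimal MAML-style sketch with a task-adaptive inner-loop learning rate.
# Hypothetical illustration of the idea in the abstract, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """Tiny base learner whose forward pass can use externally supplied parameters."""
    def __init__(self, in_dim=1, out_dim=1):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 40)
        self.fc2 = nn.Linear(40, out_dim)

    def forward(self, x, params=None):
        if params is None:
            params = dict(self.named_parameters())
        h = F.relu(F.linear(x, params["fc1.weight"], params["fc1.bias"]))
        return F.linear(h, params["fc2.weight"], params["fc2.bias"])


class TaskEncoder(nn.Module):
    """Maps the support set to a latent vector z, then to a learning-rate scale."""
    def __init__(self, in_dim=2, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, z_dim))
        self.to_scale = nn.Linear(z_dim, 1)

    def forward(self, x_support, y_support):
        z = self.enc(torch.cat([x_support, y_support], dim=-1)).mean(dim=0)  # pool over shots
        scale = torch.sigmoid(self.to_scale(z)) * 2.0  # per-task scale in (0, 2)
        return z, scale


def inner_adapt(model, encoder, x_s, y_s, alpha_base=0.01, steps=1):
    """MAML inner loop where the step size is alpha_base * (task-specific scale)."""
    _, scale = encoder(x_s, y_s)
    params = {k: v for k, v in model.named_parameters()}
    for _ in range(steps):
        loss = F.mse_loss(model(x_s, params), y_s)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: v - alpha_base * scale * g
                  for (k, v), g in zip(params.items(), grads)}
    return params


if __name__ == "__main__":
    model, encoder = SmallNet(), TaskEncoder()
    x_s, y_s = torch.randn(5, 1), torch.randn(5, 1)    # 5-shot support set (toy regression)
    x_q, y_q = torch.randn(10, 1), torch.randn(10, 1)  # query set
    adapted = inner_adapt(model, encoder, x_s, y_s)
    meta_loss = F.mse_loss(model(x_q, adapted), y_q)   # outer-loop objective
    meta_loss.backward()                               # meta-gradients reach model and encoder
```

In this sketch the latent variable is a deterministic function of the support set for brevity; the paper instead treats it as a latent variable handled by probabilistic inference.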
© 2020 The Japanese Society for Artificial Intelligence