2019, Vol. 7, No. 2, pp. 97-105
Despite the recent success of deep neural networks in the visual domain, training them still requires a large amount of data. Previous works have addressed this issue through few-shot learning: the task of classifying examples from new classes not seen during training, given only a few labeled examples of each new class. Some methods perform well on few-shot tasks but require complex architectures and/or specialized loss functions, such as metric losses, meta-learners, and external memory. In this paper, we evaluate the performance of an ensemble approach that aggregates a large number of simple neural network models (up to 128 models) on standard few-shot datasets. Surprisingly, despite its simplicity, our experimental results show that the ensemble approach is competitive with state-of-the-art methods of similar architecture in some settings.
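As a minimal sketch of the kind of aggregation an ensemble of classifiers can use (probability averaging over model outputs; the function names, shapes, and the averaging rule here are illustrative assumptions, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(all_logits):
    """Average per-model class probabilities, then take the argmax.

    all_logits: array of shape (n_models, n_queries, n_classes),
    one logit vector per model per query example.
    Returns predicted class indices of shape (n_queries,).
    """
    probs = softmax(all_logits, axis=-1)   # (M, Q, C) probabilities
    mean_probs = probs.mean(axis=0)        # (Q, C) ensemble average
    return mean_probs.argmax(axis=-1)      # (Q,) predicted classes

# Toy example: 128 hypothetical models emit logits for 15 query
# examples in a 5-way few-shot episode.
logits = rng.normal(size=(128, 15, 5))
preds = ensemble_predict(logits)
print(preds.shape)  # (15,)
```

Averaging probabilities (rather than, say, majority voting over hard labels) keeps the aggregation differentiable-friendly and tends to smooth out the high variance of models trained on only a few examples.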