Nonlinear Theory and Its Applications, IEICE
Online ISSN : 2185-4106
ISSN-L : 2185-4106
Special Section on Recent Progress in Nonlinear Theory and Its Applications
Backdoor poisoning attacks against few-shot classifiers based on meta-learning
Ganma Kato, Chako Takahashi, Koutarou Suzuki

2023 Volume 14 Issue 2 Pages 491-499

Abstract

Few-shot classification is classification performed on the basis of very few samples, and meta-learning methods (also called “learning to learn”) are often employed to accomplish it. Research on poisoning attacks against meta-learning-based few-shot classifiers has only recently begun. While poisoning attacks aimed at disrupting the availability of the classifier during meta-testing have been studied by Xu et al. [1] and Oldewage et al. [2], backdoor poisoning in meta-testing has only been briefly explored by Oldewage et al. [2] under limited conditions. In this study, we formulate a backdoor poisoning attack on meta-learning-based few-shot classification. Through experiments, we show that the proposed backdoor poisoning attack is effective against few-shot classification using model-agnostic meta-learning (MAML) [3].
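To make the attack setting concrete, the following is a minimal sketch of how a backdoor could be injected into a few-shot support set: a small trigger patch is stamped onto a fraction of the support images and their labels are flipped to an attacker-chosen target class. This is an illustrative assumption of the general backdoor-poisoning recipe, not the specific procedure of the paper; the function names (`add_trigger`, `poison_support_set`) and all parameters (patch size, poison fraction) are hypothetical.

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner.
    Hypothetical trigger design for illustration only."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_support_set(images, labels, target_class, poison_fraction=0.3, seed=0):
    """Poison a fraction of the support samples: add the trigger and
    relabel those samples as the attacker's target class.
    Returns the poisoned copies and the poisoned indices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_fraction * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels, idx

# Example: a 5-way 5-shot support set of 28x28 grayscale images
support_x = np.random.rand(25, 28, 28)
support_y = np.repeat(np.arange(5), 5)
poisoned_x, poisoned_y, idx = poison_support_set(support_x, support_y, target_class=0)
```

At meta-test time, a model fine-tuned on such a poisoned support set would be expected to associate the trigger pattern with the target class, so that any query image carrying the trigger is misclassified into that class while clean queries remain largely unaffected.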

© 2023 The Institute of Electronics, Information and Communication Engineers

This article is licensed under a Creative Commons [Attribution-NonCommercial-NoDerivatives 4.0 International] license.
https://creativecommons.org/licenses/by-nc-nd/4.0/