Nonlinear Theory and Its Applications, IEICE
Online ISSN : 2185-4106
ISSN-L : 2185-4106
Special Section on Recent Progress in Nonlinear Theory and Its Applications
Backdoor poisoning attacks against few-shot classifiers based on meta-learning
Ganma Kato, Chako Takahashi, Koutarou Suzuki
Author information
Journal, Open Access

2023, Volume 14, Issue 2, pp. 491-499

Abstract

Few-shot classification is classification performed on the basis of very few samples, and meta-learning methods (also called “learning to learn”) are often employed to accomplish it. Poisoning attacks against meta-learning-based few-shot classifiers have only recently begun to be investigated. While poisoning attacks aimed at disrupting the availability of the classifier during meta-testing have been studied by Xu et al. [1] and Oldewage et al. [2], backdoor poisoning in meta-testing has only been briefly explored by Oldewage et al. [2] under limited conditions. In this study, we formulate a backdoor poisoning attack on meta-learning-based few-shot classification. Through experiments, we show that the proposed attack is effective against few-shot classification using model-agnostic meta-learning (MAML) [3].
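To make the attack setting concrete: a backdoor poisoning attack on few-shot classification typically stamps a small trigger pattern onto a fraction of the support samples supplied at meta-test time and relabels them to an attacker-chosen target class, so that the adapted classifier maps any trigger-bearing query to that class. The following sketch is only an illustration of that general idea, not the paper's method; the trigger shape, poison fraction, and helper names are all hypothetical.

```python
import numpy as np

def apply_trigger(image, trigger_value=1.0, patch_size=3):
    """Stamp a small square trigger patch into the bottom-right corner
    of a single image (hypothetical trigger pattern)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = trigger_value
    return poisoned

def poison_support_set(support_images, support_labels, target_class,
                       poison_fraction=0.3, rng=None):
    """Backdoor-poison a few-shot support set: a random fraction of the
    support samples receive the trigger patch and are relabeled to the
    attacker's target class."""
    rng = rng or np.random.default_rng(0)
    n = len(support_images)
    n_poison = max(1, int(poison_fraction * n))
    idx = rng.choice(n, size=n_poison, replace=False)
    images = support_images.copy()
    labels = support_labels.copy()
    for i in idx:
        images[i] = apply_trigger(images[i])
        labels[i] = target_class
    return images, labels

# Toy 5-way 5-shot support set of 8x8 "images" (all zeros for illustration).
support_x = np.zeros((25, 8, 8))
support_y = np.repeat(np.arange(5), 5)
poisoned_x, poisoned_y = poison_support_set(support_x, support_y, target_class=0)
```

In the meta-learning setting studied here, the victim would then adapt a MAML-trained model on this poisoned support set, after which trigger-bearing queries are steered toward `target_class` while clean queries remain largely unaffected.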

© 2023 The Institute of Electronics, Information and Communication Engineers

This article is licensed under a Creative Commons [Attribution-NonCommercial-NoDerivatives 4.0 International] license.
https://creativecommons.org/licenses/by-nc-nd/4.0/