2025 Volume E108.A Issue 3 Pages 254-266
Backdoor attacks on machine learning are a class of attacks in which an adversary causes a model to produce an expected output for inputs containing a particular pattern called a trigger. Existing work, the latent backdoor attack (Yao et al., CCS 2019), can resist backdoor-removal countermeasures, i.e., pruning and transfer learning. In this paper, we present a novel backdoor attack, TALPA, which outperforms the latent backdoor attack with respect to the attack success rate while maintaining the same level of accuracy. The key idea of TALPA is to directly override the parameters of latent representations through competitive learning between a generative model for triggers and the victim model, and it can therefore optimize model parameters and trigger generation more effectively than the latent backdoor attack. We experimentally demonstrate that TALPA outperforms the latent backdoor attack with respect to the attack success rate, and we also show through extensive experiments that TALPA resists both pruning and transfer learning. We further provide various discussions, such as the impact of hyperparameters and extensions to layers other than the latent representation, to shed light on the properties of TALPA. Our code is publicly available (https://github.com/fseclab-osaka/talpa).
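The competitive-learning idea sketched in the abstract can be illustrated with a toy example: a trigger pattern and the victim model's latent-layer parameters are updated alternately so that the latent representation of a triggered input is pulled toward a target-class latent. This is a minimal, purely illustrative sketch; the linear "model", the dimensions, and the joint gradient updates are our own assumptions for exposition, not TALPA's actual architecture or training procedure.

```python
import random

random.seed(0)
d_in, d_latent = 6, 3

# Hypothetical setup: a linear latent layer W standing in for the victim
# model's feature extractor, an additive trigger to be learned, a clean
# input x, and a target-class latent representation z_target.
W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_latent)]
trigger = [0.0] * d_in
x = [random.gauss(0, 1) for _ in range(d_in)]
z_target = [random.gauss(0, 1) for _ in range(d_latent)]

def latent(W, v):
    """Latent representation: matrix-vector product W @ v."""
    return [sum(W[i][j] * v[j] for j in range(d_in)) for i in range(d_latent)]

lr = 0.05
for _ in range(1000):
    xt = [x[j] + trigger[j] for j in range(d_in)]   # triggered input
    z = latent(W, xt)
    err = [z[i] - z_target[i] for i in range(d_latent)]
    # Gradients of the loss 0.5 * ||W(x + trigger) - z_target||^2.
    grad_t = [sum(W[i][j] * err[i] for i in range(d_latent)) for j in range(d_in)]
    for j in range(d_in):
        trigger[j] -= lr * grad_t[j]                # refine the trigger
    for i in range(d_latent):
        for j in range(d_in):
            W[i][j] -= lr * err[i] * xt[j]          # override latent-layer weights

xt = [x[j] + trigger[j] for j in range(d_in)]
loss = 0.5 * sum((zi - ti) ** 2 for zi, ti in zip(latent(W, xt), z_target))
print(loss)  # small after the two components co-adapt
```

After a few hundred alternating updates, the triggered input's latent representation matches the target latent, which is the property that makes latent-level backdoors hard to remove by retraining later layers.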