Bulletin of the Society of Photography and Imaging of Japan
Online ISSN : 2188-9937
DNN Data Hiding for Transformer Encoder in ViT Models
Shuntaro FUKUOKA, Shoko IMAIZUMI, Minoru KURIBAYASHI
Journal (Open Access)

2024, Volume 34, Issue 2, Pages 20-30

Abstract
In this paper, we investigate the effects of data hiding on the vision transformer (ViT), a transformer-based model. In the field of deep neural network data hiding, methods for protecting models against piracy have been studied intensively for convolutional neural network (CNN) models, and it has been observed that applying watermarking to CNN models has little effect on model performance and training convergence. To the best of our knowledge, this is the first study to apply data hiding to ViT, whose architecture differs completely from that of CNN models. We apply a quantization-based data hiding method to ViT and evaluate its effects on performance. Our experiments confirm that the proposed method does not degrade ViT performance in terms of classification accuracy or loss-function transition.
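The abstract does not spell out how the quantization-based embedding operates; the sketch below illustrates one common form of such a scheme (QIM-style parity quantization) applied to a flat weight tensor, such as an encoder projection matrix in a ViT. The function names, step size, secret-key handling, and the use of NumPy are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def embed_bits_qim(weights, bits, step=1e-3, key=0):
    """Embed watermark bits by snapping secretly chosen weights to an even
    or odd multiple of `step`, depending on the bit (QIM-style). Sketch only."""
    w = weights.copy().ravel()
    rng = np.random.default_rng(key)
    idx = rng.choice(w.size, size=len(bits), replace=False)  # key-dependent positions
    for i, b in zip(idx, bits):
        q = np.round(w[i] / step)
        if int(q) % 2 != b:                      # adjust parity to encode the bit
            q += 1 if w[i] / step >= q else -1   # move in the lower-distortion direction
        w[i] = q * step
    return w.reshape(weights.shape), idx

def extract_bits_qim(weights, idx, step=1e-3):
    """Recover the embedded bits from the parity of the quantized weights."""
    w = weights.ravel()
    return [int(np.round(w[i] / step)) % 2 for i in idx]

# Example: hide an 8-bit message in a stand-in encoder weight matrix.
enc_w = (np.random.randn(768, 768) * 0.02).astype(np.float32)
message = [1, 0, 1, 1, 0, 0, 1, 0]
marked_w, positions = embed_bits_qim(enc_w, message, step=1e-3, key=42)
assert extract_bits_qim(marked_w, positions) == message
```

Because the per-weight change is bounded by the quantization step, a small `step` keeps the perturbation of the encoder weights far below their typical magnitude, which is consistent with the paper's finding that classification accuracy is not degraded.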
© 2024 The Society of Photography and Imaging of Japan