Abstract
In this paper, we investigate the effects of data hiding on the vision transformer (ViT), a transformer-based model. In the field of deep neural network data hiding, methods for protecting models against piracy have been studied intensively for convolutional neural network (CNN) models, and it has been observed that watermarking has little effect on the performance and training convergence of CNN models. To the best of our knowledge, this is the first study to apply data hiding to ViT, whose architecture differs completely from that of CNN models. We apply a quantization-based data hiding method to ViT and evaluate its effects on performance. Our experiments confirm that the proposed method does not degrade ViT performance in terms of classification accuracy or the transition of the loss function.
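The sketch below illustrates the general idea of quantization-based data hiding in model weights, in which a hidden bit is encoded in the parity of a weight's quantization index. It is not the method proposed in the paper; the step size, the choice of weights, and the parity mapping are assumptions made purely for demonstration.

```python
# Illustrative sketch of quantization-based data hiding in model weights
# (NOT the paper's exact method; step size, weight selection, and bit
#  mapping are assumptions for demonstration only).
import numpy as np


def embed_bits(weights: np.ndarray, bits: list[int], step: float = 1e-3) -> np.ndarray:
    """Embed one bit per selected weight by forcing the parity of its quantization index."""
    w = weights.flatten().copy()
    for i, bit in enumerate(bits):
        q = int(np.round(w[i] / step))           # quantization index of the weight
        if q % 2 != bit:                          # adjust index so its parity encodes the bit
            q += 1 if w[i] / step >= q else -1
        w[i] = q * step                           # write back the re-quantized weight
    return w.reshape(weights.shape)


def extract_bits(weights: np.ndarray, n_bits: int, step: float = 1e-3) -> list[int]:
    """Recover hidden bits from the parity of each weight's quantization index."""
    w = weights.flatten()
    return [int(np.round(w[i] / step)) % 2 for i in range(n_bits)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.02, size=(16, 16))     # stand-in for a ViT weight matrix
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    w_marked = embed_bits(w, message)
    assert extract_bits(w_marked, len(message)) == message
    # The perturbation of each weight is bounded by the quantization step,
    # which is why such embedding tends to leave model accuracy almost unchanged.
    print("max weight perturbation:", np.abs(w_marked - w).max())
```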