Organizer: The Japanese Society for Artificial Intelligence
Conference: 2023 Annual Conference of the Japanese Society for Artificial Intelligence (37th)
Edition: 37
Venue: Kumamoto-jo Hall + Online
Dates: 2023/06/06 - 2023/06/09
In recent years, Transformers have achieved remarkable results in computer vision tasks, matching or even surpassing those of convolutional neural networks. However, to reach state-of-the-art performance, vision transformers rely on large architectures and extensive pre-training on very large datasets. One of the main reasons for this limitation is that vision transformers, whose core is the global self-attention computation, inherently lack inductive biases, so training often converges to poor local minima. This work presents a new method to pre-train vision transformers, denoted self-attention misdirection. In this pre-training method, an adversarial U-Net-like network pre-processes the input images, altering them with the goal of misdirecting the self-attention computation in the vision transformer. It uses style representations of image patches to generate inputs that are difficult for self-attention learning, leading the vision transformer to learn representations that generalize better to unseen data.
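The abstract does not specify the architecture or objectives, but the pipeline it describes (per-patch style representations used to perturb inputs before global self-attention) can be illustrated with a toy sketch. Everything below is an assumption for illustration only: `style_stats` stands in for the style representations (here just per-patch mean and standard deviation, AdaIN-style), `misdirect` is a hypothetical perturbation that swaps style statistics between patches, and the single-head attention uses random weights in place of a trained vision transformer.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, p):
    # img: (H, W, C) -> (N, p*p*C) flattened non-overlapping patches
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)

def style_stats(patches):
    # Per-patch first- and second-order statistics, a simplified stand-in
    # for the style representations mentioned in the abstract.
    mu = patches.mean(axis=1, keepdims=True)
    sigma = patches.std(axis=1, keepdims=True) + 1e-6
    return mu, sigma

def misdirect(patches, strength=0.5):
    # Hypothetical perturbation: shuffle style statistics across patches,
    # so patches that "look alike" to attention no longer share content.
    # (The paper's adversarial U-Net would learn this transformation.)
    mu, sigma = style_stats(patches)
    perm = rng.permutation(len(patches))
    normed = (patches - mu) / sigma
    return (1 - strength) * patches + strength * (normed * sigma[perm] + mu[perm])

def self_attention(x, Wq, Wk, Wv):
    # Single-head global self-attention over patch embeddings.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)
    return a @ v, a

img = rng.random((32, 32, 3))
patches = extract_patches(img, p=8)   # (16, 192): 4x4 grid of 8x8x3 patches
adv = misdirect(patches)              # style-perturbed ("misdirected") patches
d = patches.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.02 for _ in range(3))
out, attn = self_attention(adv, Wq, Wk, Wv)
print(out.shape, attn.shape)  # → (16, 192) (16, 16)
```

In the actual method, the perturbation network would be trained adversarially, maximizing the difficulty of the vision transformer's self-attention learning while the transformer is trained on the perturbed inputs; the fixed shuffle above only shows where in the pipeline that pressure is applied.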