Transactions of the Japanese Society for Artificial Intelligence
Online ISSN : 1346-8030
Print ISSN : 1346-0714
ISSN-L : 1346-0714
Original Paper
Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains
Xin Zhang, Shixiang Shane Gu, Yutaka Matsuo, Yusuke Iwasawa

2023 Volume 38 Issue 6 Pages B-MC2_1-10

Abstract

Domain generalization (DG) is a difficult transfer learning problem that aims to learn a model generalizable to unseen domains. Recent foundation models (FMs) are robust to many distribution shifts and should therefore substantially improve DG performance. In this work, we study generic ways to adapt contrastive language-image pre-training (CLIP), a visual-language foundation model, to DG problems in image classification. While empirical risk minimization (ERM) greatly improves accuracy on standard DG benchmarks as backbones and training datasets grow larger, fine-tuning FMs is impractical in many real-world situations. We propose Domain Prompt Learning (DPL), a novel approach to domain inference in the form of conditional prompt generation. DPL achieved a significant accuracy improvement while training only a lightweight prompt generator (a three-layer MLP), whose parameter count is comparable to that of the classification projector in the previous DG literature. Combining DPL with CLIP yields surprising performance, raising the accuracy of zero-shot CLIP from 73.7% to 79.3% on several standard datasets, namely PACS, VLCS, OfficeHome, and TerraIncognita. We hope the simplicity and success of our approach lead to broader adoption and analysis of foundation models in the domain generalization field.
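To make the mechanism concrete, the abstract's "lightweight prompt generator (a three-layer MLP)" can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: all layer sizes (feature dimension, hidden width, number of prompt tokens, token embedding dimension) and the random initialization are assumptions for the sketch, and a real system would train the MLP end-to-end and feed the generated tokens into CLIP's text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only (not from the paper):
D_FEAT, D_HID, N_CTX, D_EMB = 512, 256, 4, 512

# Three-layer MLP weights, randomly initialized for the sketch.
W1, b1 = rng.standard_normal((D_FEAT, D_HID)) * 0.02, np.zeros(D_HID)
W2, b2 = rng.standard_normal((D_HID, D_HID)) * 0.02, np.zeros(D_HID)
W3, b3 = rng.standard_normal((D_HID, N_CTX * D_EMB)) * 0.02, np.zeros(N_CTX * D_EMB)

def domain_prompt(image_feature):
    """Map an image feature to N_CTX conditional prompt token embeddings."""
    h = np.maximum(image_feature @ W1 + b1, 0.0)  # layer 1 + ReLU
    h = np.maximum(h @ W2 + b2, 0.0)              # layer 2 + ReLU
    return (h @ W3 + b3).reshape(N_CTX, D_EMB)    # layer 3 -> prompt tokens

# The generated tokens would be prepended to each class-name embedding
# before CLIP's text encoder, conditioning the prompt on the input image.
feat = rng.standard_normal(D_FEAT)
tokens = domain_prompt(feat)
print(tokens.shape)  # (4, 512)
```

Because only the MLP is trained, the number of trainable parameters stays small relative to fine-tuning the CLIP backbone, which is the efficiency argument the abstract makes.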

© The Japanese Society for Artificial Intelligence 2023