StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning (CVPR 2023)
- Yuqian Fu, Fudan University
- Yu Xie, Fudan University
- Yanwei Fu, Fudan University
- Yu-Gang Jiang, Fudan University
Abstract
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains. It aims at transferring prior knowledge learned on a source dataset to novel target datasets. The CD-FSL task is especially challenged by the huge domain gap between datasets. Critically, this domain gap largely stems from changes in visual styles, and wave-SAN empirically shows that spanning the style distribution of the source data helps alleviate this issue. However, wave-SAN simply swaps the styles of two images. Such a vanilla operation makes the generated styles "real" and "easy", so they still fall within the original set of source styles. Thus, inspired by vanilla adversarial learning, we propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method for CD-FSL. In particular, our style attack synthesizes both "virtual" and "hard" adversarial styles for model training by perturbing the original styles with the signed style gradients. By continually attacking styles and forcing the model to recognize these challenging adversarial styles, our model gradually becomes robust to visual styles, thus boosting its generalization ability to novel target datasets. Besides the typical CNN-based backbone, we also apply StyleAdv to a large-scale pretrained Vision Transformer (ViT). Extensive experiments conducted on eight different target datasets show the effectiveness of our method. Whether built upon ResNet or ViT, we achieve a new state of the art for CD-FSL.
StyleAdv Method
StyleAdv tackles CD-FSL by generating both "virtual" and "hard" styles. It alternates between two iterative loops:
- Inner loop: synthesizing new adversarial styles by attacking the original source styles;
- Outer loop: optimizing the whole network by classifying source images with both original and adversarial styles.
For the details of how we perform the style attack and apply StyleAdv to both the ResNet and ViT backbones, please refer to the paper; a minimal sketch of the core idea is given below.
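To make the two loops concrete, here is a minimal PyTorch sketch, not the implementation in this repo: `embed`, `classifier`, `style_fgsm`, `styleadv_step`, and the attack strength `epsilon` are illustrative placeholders, and the sketch assumes a simple non-episodic classification setup with a single feature level.

```python
import torch
import torch.nn.functional as F


def calc_style(feat, eps=1e-6):
    """Channel-wise style statistics (mean, std) of a (B, C, H, W) feature map."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    return mu, sigma


def restyle(feat, new_mu, new_sigma):
    """AdaIN-style re-normalization: strip the original style, inject a new one."""
    mu, sigma = calc_style(feat)
    return new_sigma * (feat - mu) / sigma + new_mu


def style_fgsm(embed, classifier, x, y, epsilon):
    """Inner loop (sketch): perturb the original style with the signed style gradients."""
    feat = embed(x)
    mu, sigma = calc_style(feat)
    mu = mu.detach().requires_grad_(True)
    sigma = sigma.detach().requires_grad_(True)

    # forward pass with the (now differentiable) style re-injected
    logits = classifier(restyle(feat, mu, sigma).mean(dim=(2, 3)))
    loss = F.cross_entropy(logits, y)
    grad_mu, grad_sigma = torch.autograd.grad(loss, [mu, sigma])

    # FGSM-style ascent in style space yields "virtual" and "hard" styles
    adv_mu = (mu + epsilon * grad_mu.sign()).detach()
    adv_sigma = (sigma + epsilon * grad_sigma.sign()).detach()
    return adv_mu, adv_sigma


def styleadv_step(embed, classifier, optimizer, x, y, epsilon=0.08):
    """Outer loop (sketch): classify images under both original and adversarial styles."""
    adv_mu, adv_sigma = style_fgsm(embed, classifier, x, y, epsilon)

    feat = embed(x)
    logits_ori = classifier(feat.mean(dim=(2, 3)))
    logits_adv = classifier(restyle(feat, adv_mu, adv_sigma).mean(dim=(2, 3)))
    loss = F.cross_entropy(logits_ori, y) + F.cross_entropy(logits_adv, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Unlike pixel-space FGSM, the perturbation here is applied only to the feature-wise style statistics (mean and std), so image content is preserved while the style is pushed outside the original source style set. The actual method is richer than this sketch (episodic few-shot training, style attacks at multiple feature levels, and other details); see the paper.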
Results
We conduct experiments with both ResNet-10 (RN10) and ViT-small backbones on eight target datasets: ChestX, ISIC, EuroSAT, CropDisease, CUB, Cars, Places, and Plantae. We report results for both StyleAdv (meta-trained only) and StyleAdv-FT (further finetuned with a few target support images).
We also provide visualization results. Note that wave-SAN, the predecessor of StyleAdv, also augments styles, but in an easier way.
Related Works
Wave-SAN is our earlier work that tackles CD-FSL via style augmentation.
Meta-FDMixup, Me-D2N, and TGDM are our works that address CD-FSL with a few labeled target examples.