Mind The Gap:

Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks


Peihao Zhu, Rameen Abdal, John Femiani, Peter Wonka


Paper

Video

Abstract

We present a new method for one-shot domain adaptation. The input to our method is a trained GAN that can produce images in domain A and a single reference image I_B from domain B. The proposed algorithm can translate any output of the trained GAN from domain A to domain B. There are two main advantages of our method over the current state of the art: First, our solution achieves higher visual quality, e.g., by noticeably reducing overfitting. Second, our solution allows for more degrees of freedom to control the domain gap, i.e., which aspects of image I_B are used to define domain B. Technically, we realize the new method by building on a pre-trained StyleGAN generator as the GAN and a pre-trained CLIP model for representing the domain gap. We propose several new regularizers for controlling the domain gap and use them to optimize the weights of the pre-trained StyleGAN generator so that it outputs images in domain B instead of domain A. The regularizers prevent the optimization from taking on too many attributes of the single reference image. Our results show significant visual improvements over the state of the art, as well as multiple applications that highlight improved control.
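The snippet below is a minimal, unofficial sketch of the core idea, not the authors' released implementation. It assumes a StyleGAN-like generator callable as generator(w) that maps latents to RGB images in [-1, 1] and the OpenAI clip package; the names generator_A, generator_B, w_anchor, and sample_w are hypothetical placeholders. It illustrates one plausible across-domain term: the CLIP-space direction from a domain-A anchor image to the reference image I_B defines the domain gap, and a copy of the generator is fine-tuned so that each sample's CLIP-space shift aligns with that direction.

# Hedged sketch of CLIP-guided one-shot generator fine-tuning.
# Assumptions (not the authors' code): a StyleGAN-like generator(w)
# producing images in [-1, 1], and the OpenAI `clip` package.
import copy
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.float().eval()
for p in clip_model.parameters():
    p.requires_grad_(False)

def clip_embed(images):
    # Resize generator output to CLIP's input size and normalize it.
    images = F.interpolate(images, size=(224, 224), mode="bilinear",
                           align_corners=False)
    images = (images + 1.0) / 2.0  # map [-1, 1] -> [0, 1]
    mean = torch.tensor([0.48145466, 0.4578275, 0.40821073],
                        device=images.device).view(1, 3, 1, 1)
    std = torch.tensor([0.26862954, 0.26130258, 0.27577711],
                       device=images.device).view(1, 3, 1, 1)
    feats = clip_model.encode_image((images - mean) / std)
    return F.normalize(feats, dim=-1)

def adapt_one_shot(generator_A, ref_image_B, w_anchor, sample_w,
                   steps=300, lr=2e-3):
    # generator_A : frozen source-domain generator (domain A)
    # ref_image_B : single reference image I_B, shape (1, 3, H, W) in [-1, 1]
    # w_anchor    : latent whose domain-A image anchors the gap direction
    # sample_w    : callable returning a batch of training latents
    generator_B = copy.deepcopy(generator_A)  # trainable copy -> domain B
    for p in generator_A.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(generator_B.parameters(), lr=lr)

    with torch.no_grad():
        e_ref = clip_embed(ref_image_B.to(device))    # CLIP code of I_B
        e_anchor = clip_embed(generator_A(w_anchor))  # its domain-A anchor
        gap = F.normalize(e_ref - e_anchor, dim=-1)   # domain-gap direction

    for _ in range(steps):
        w = sample_w()
        with torch.no_grad():
            e_A = clip_embed(generator_A(w))
        e_B = clip_embed(generator_B(w))
        # Across-domain term: each sample's CLIP-space shift should
        # align with the single domain-gap direction.
        shift = F.normalize(e_B - e_A, dim=-1)
        loss = (1.0 - F.cosine_similarity(shift, gap)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator_B

In the paper, additional regularizers constrain which attributes of I_B are transferred; the sketch above shows only a directional alignment term for intuition.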


BibTeX

@misc{zhu2021mind,
    title={Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks},
    author={Peihao Zhu and Rameen Abdal and John Femiani and Peter Wonka},
    year={2021},
    eprint={2110.08398},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}