Publication
EMNLP 2022
Short paper

Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering

Abstract

Machine learning models are prone to overfitting their training (source) domains, which is commonly believed to be the reason they falter in novel target domains. Here we examine the contrasting view that multi-source domain generalization (DG) is first and foremost a problem of mitigating source domain underfitting: models not adequately learning the signal already present in their multi-domain training data. Experiments on a reading comprehension DG benchmark show that as a model learns its source domains better (using familiar methods such as knowledge distillation from a bigger model), its zero-shot out-of-domain utility improves at an even faster pace. Improved source domain learning also yields better out-of-domain generalization than three popular existing DG approaches that aim to reduce overfitting to the source domains.
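
The abstract's main lever for learning the source domains better is knowledge distillation from a larger teacher model. As a minimal sketch of a standard (Hinton-style) distillation objective, not the paper's exact training setup, the loss below mixes a soft term against the teacher's temperature-scaled output distribution with the usual hard-label cross-entropy; the temperature T and mixing weight alpha are illustrative assumptions, not values from the paper.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          labels: torch.Tensor,
                          T: float = 2.0,
                          alpha: float = 0.5) -> torch.Tensor:
        # Illustrative sketch: T and alpha are assumed hyperparameters,
        # not reported settings from the paper.
        # Soft-target term: KL divergence between temperature-scaled
        # teacher and student distributions, scaled by T^2 so gradient
        # magnitudes stay comparable across temperatures.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard-target term: ordinary cross-entropy on the gold labels
        # (e.g., answer-span start/end positions in extractive QA).
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

In this formulation, training the student on its multi-domain source data with both terms pushes it to fit that data more fully, which is the "better source domain learning" the abstract credits for the improved zero-shot out-of-domain results.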
