Machine Learning Lab PhD Student Forum Series (Session 31): Learning to Generalize to Unseen Domains

Abstract:

A basic assumption of traditional machine learning models is that the training and testing data are independently and identically distributed (i.i.d.). However, this assumption does not always hold in practice, which leads to performance degradation. Domain generalization, which aims to predict well on unseen test domains that differ from those seen during training, has therefore attracted increasing interest in recent years.
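
For concreteness, the standard setting can be sketched as follows (the notation here is illustrative and not taken from the talk): a predictor is trained on data from several source domains and is evaluated on a test domain whose distribution was never observed during training.

```latex
% Illustrative notation (ours): M source domains, each with its own joint
% distribution over inputs and labels; the test domain T is unseen in training.
\[
  \mathcal{D}_{\mathrm{src}} = \{ P^{(1)}_{XY}, \dots, P^{(M)}_{XY} \}, \qquad
  P^{(T)}_{XY} \notin \mathcal{D}_{\mathrm{src}} .
\]
% Goal: learn a predictor f, using only source-domain data, that has low risk
% on the unseen test domain, for a given loss function l.
\[
  \min_{f} \; \mathbb{E}_{(x, y) \sim P^{(T)}_{XY}} \big[ \ell\big(f(x), y\big) \big].
\]
```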


In this talk, we will start with the problem formulation of domain generalization and present a brief theoretical analysis. We will then focus on recent progress in domain generalization. In general, a prediction model consists of a feature extractor followed by a classifier, so existing methods can be categorized by whether they mainly operate on the data samples, the features, or the classifier; a minimal sketch of this decomposition is given below.
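
The sketch below (our own illustration, not the speaker's code) shows the feature-extractor + classifier decomposition and a plain ERM baseline trained on pooled source domains; the network sizes and synthetic data are assumptions made purely for illustration. Sample-level methods would modify the input batches (e.g., augmentation), feature-level methods would add terms acting on the featurizer output, and classifier-level methods would alter the final layer.

```python
# Minimal sketch of the feature-extractor + classifier decomposition used to
# organize domain generalization methods. Sizes and data are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DGModel(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int, num_classes: int):
        super().__init__()
        # Feature extractor: many DG methods act here, e.g., by aligning
        # features across source domains or learning invariant representations.
        self.featurizer = nn.Sequential(
            nn.Linear(in_dim, feat_dim),
            nn.ReLU(),
        )
        # Classifier: other methods act here, e.g., by regularizing or
        # ensembling domain-specific classifiers.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.featurizer(x))

def erm_step(model, optimizer, source_batches):
    """One step of the ERM baseline: pool batches from all source domains and
    minimize the average classification loss. Sample-level DG methods would
    instead modify source_batches (e.g., via augmentation)."""
    optimizer.zero_grad()
    loss = 0.0
    for x, y in source_batches:
        loss = loss + F.cross_entropy(model(x), y)
    loss = loss / len(source_batches)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = DGModel(in_dim=20, feat_dim=64, num_classes=5)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    # Three synthetic "source domains" with slightly shifted input statistics.
    batches = [(torch.randn(32, 20) + shift, torch.randint(0, 5, (32,)))
               for shift in (0.0, 0.5, 1.0)]
    print(erm_step(model, opt, batches))
```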