Abstract
Standard supervised learning algorithms assume that training and test data follow the same probability distribution. However, because of sample selection bias or non-stationarity of the environment, this important assumption is often violated in practice, causing a significant estimation bias. In this article, we review semi-supervised adaptation techniques for coping with such distribution changes. We focus on two scenarios of distribution change: covariate shift (input distributions change while the input-output dependency remains unchanged) and class-balance change in classification (class-prior probabilities change while class-wise input distributions remain unchanged). We also describe methods for detecting changes in probability distributions.