Many domain adaptation methods are based on learning a projection or transformation of the source and target domains into a common domain and training a classifier there; however, the performance of such algorithms has not yet been studied theoretically. Previous studies proposing generalization bounds for domain adaptation relate the target loss to the discrepancy between the source and target distributions, but they do not take into account the possible effects of learning a transformation between the two domains. In this work, we present generalization bounds that characterize the target performance of domain adaptation methods that learn a transformation of the source and target domains along with a hypothesis. We show that, under some regularity conditions on the loss, if the learned transformations reduce the distance between the two distributions at a sufficiently high rate, then the expected target loss can be bounded with a probability that improves at an exponential rate with the number of labeled samples.
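As a concrete illustration of the setting above (mapping the source toward the target and training a classifier on the transformed labeled data), the sketch below uses a CORAL-style second-order alignment as the learned transformation. This is only a minimal, hypothetical example of the class of methods the bounds cover: the synthetic data, the choice of covariance alignment as the transformation, and the least-squares classifier are all illustrative assumptions, not the algorithm analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled source domain: two Gaussian classes in 2-D
n = 200
Xs = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
ys = np.array([0] * n + [1] * n)

# Unlabeled target domain: the same classes under an affine shift
# (labels ys carry over because the same points are transformed)
A = np.array([[2.0, 0.5], [0.0, 1.5]])
Xt = Xs @ A.T + np.array([1.0, -1.0])

def coral_align(Xs, Xt, eps=1e-6):
    """Affine map matching source mean/covariance to the target's.

    One simple, illustrative choice of learned transformation; the
    generalization bounds apply to learned transformations generically.
    """
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)
    # Whiten the source (W^T Cs W = I), then re-color with the
    # target covariance (the composed map has covariance Ct).
    W = np.linalg.inv(np.linalg.cholesky(Cs)).T
    Lt = np.linalg.cholesky(Ct)
    return (Xs - Xs.mean(0)) @ W @ Lt.T + Xt.mean(0)

Xs_aligned = coral_align(Xs, Xt)

# Train a least-squares linear classifier on the transformed source
Phi = np.hstack([Xs_aligned, np.ones((len(Xs_aligned), 1))])
w, *_ = np.linalg.lstsq(Phi, 2.0 * ys - 1.0, rcond=None)

# Evaluate on the target domain
Phi_t = np.hstack([Xt, np.ones((len(Xt), 1))])
acc = np.mean((Phi_t @ w > 0) == (ys == 1))
print(f"target accuracy after alignment: {acc:.2f}")
```

After the transformation, the first and second moments of the transformed source match those of the target, which is one simple way of "reducing the distribution distance" in the sense the bounds require; stronger transformations (and more source samples) would tighten the bound further.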