Uneven Bi-Classifier Learning for Domain Adaptation

The bi-classifier paradigm is widely adopted as an adversarial method for addressing the domain-shift challenge in unsupervised domain adaptation (UDA) by training two classifiers evenly. In this paper, we report that although the two even classifiers can strengthen the generalization ability of the feature extractor, their decision boundaries shrink toward the source domain during the adversarial process, which weakens the discriminative ability of the learned model. To resolve this dilemma, we disentangle the roles of the two classifiers and introduce uneven bi-classifier learning for domain adaptation. Specifically, we leverage the Frobenius norm (F-norm) of the classifier predictions, instead of the classifier disagreement, to drive adversarial learning. In this way, the feature extractor can be adversarially trained with a single classifier, while the other classifier preserves the target-specific decision boundaries. The proposed uneven bi-classifier learning protocol simultaneously enhances the generalization ability of the feature extractor and expands the decision boundary of the target classifier. Extensive experiments on large-scale datasets show that our method significantly surpasses previous domain adaptation methods, even when only a single classifier is involved in the adversarial training.
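The abstract's key ingredient is replacing classifier disagreement with the Frobenius norm of the prediction matrix as the adversarial signal. The sketch below is a minimal, hedged illustration of that quantity (not the authors' code; the function names are invented for illustration): for a batch of softmax predictions, the F-norm is small when predictions are near-uniform and grows toward √B as predictions become confident, so maximizing it encourages discriminative target predictions.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over a (batch, classes) logit matrix."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prediction_fnorm(logits):
    """Frobenius norm of the batch prediction matrix P (shape B x C).

    For B samples and C classes, uniform predictions give sqrt(B / C),
    while fully confident (one-hot) predictions give sqrt(B), so the
    F-norm acts as a batch-level confidence/discriminability measure.
    """
    p = softmax(logits)
    return np.sqrt((p ** 2).sum())

# Confident logits yield a larger F-norm than uninformative (zero) logits.
confident = prediction_fnorm(np.array([[10.0, 0.0, 0.0],
                                       [0.0, 10.0, 0.0]]))
uniform = prediction_fnorm(np.zeros((2, 3)))
print(confident > uniform)  # True
```

In an adversarial setup one would, roughly speaking, train the feature extractor to maximize this norm on target-domain batches while the held-out classifier keeps its boundaries fixed; the exact loss composition follows the paper, not this sketch.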
