Uncertainty-aware Active Learning for Optimal Bayesian Classifier

In pool-based active learning, each iteration selects a candidate training sample for labeling by optimizing an acquisition function. Expected Loss Reduction (ELR) methods maximize the expected reduction in classification error from a newly labeled candidate using a one-step-lookahead strategy. ELR is the optimal strategy for a single query; however, because such myopic strategies cannot account for the long-term effect of a query on the classification error, ELR may get stuck before reaching the optimal classifier. To improve convergence in the context of optimal Bayesian classification, we employ a one-step-lookahead acquisition function based on a weighted form of the mean objective cost of uncertainty (MOCU) that focuses on the uncertainty directly responsible for classification error. The weighting scheme is designed to guarantee that the weighted-MOCU algorithm converges to the optimal classifier of the true model. We demonstrate its performance on both synthetic and real-world datasets.
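The paper's weighting scheme is not specified in the abstract, so the following is only a minimal sketch of the general idea: a plain (unweighted) MOCU-based one-step-lookahead acquisition for a toy Bayesian setting with a finite feature space and a finite uncertainty class of candidate models. All names, sizes, and the uniform prior here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): 5 discrete feature values,
# an uncertainty class of 8 candidate models, and a uniform prior.
n_x, n_models = 5, 8
p_x = np.full(n_x, 1.0 / n_x)                 # feature distribution P(x)
p1 = rng.uniform(0.1, 0.9, size=(n_models, n_x))  # P(y=1 | x) under each model
prior = np.full(n_models, 1.0 / n_models)

def obc_error(post):
    """Expected error of the optimal Bayesian classifier (OBC) under `post`."""
    q1 = post @ p1                            # predictive P(y=1 | x)
    pred = q1 >= 0.5                          # OBC predicts the majority label
    err_m = np.where(pred, 1.0 - p1, p1) @ p_x  # error of the OBC under each model
    return post @ err_m

def mocu(post):
    """Mean objective cost of uncertainty: OBC error minus the
    model-specific optimal error, averaged over the posterior."""
    opt_err_m = np.minimum(p1, 1.0 - p1) @ p_x
    return obc_error(post) - post @ opt_err_m

def acquisition(post, x):
    """Expected remaining MOCU after querying the label of candidate x
    (one-step lookahead: average over both possible label outcomes)."""
    expected = 0.0
    for py_m in (p1[:, x], 1.0 - p1[:, x]):   # y = 1, then y = 0
        p_y = post @ py_m                     # predictive label probability
        new_post = post * py_m / p_y          # Bayes update of the model posterior
        expected += p_y * mocu(new_post)
    return expected

# One active-learning step: query the candidate minimizing expected MOCU.
post = prior.copy()
best_x = min(range(n_x), key=lambda x: acquisition(post, x))
```

Minimizing the expected posterior MOCU is equivalent to maximizing the expected MOCU reduction, since the current MOCU is the same for every candidate; in expectation a query can never increase MOCU, because the current OBC remains a feasible classifier under any updated posterior.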
