A Comparative Evaluation of Curriculum Learning with Filtering and Boosting

17 Dec 2013  ·  Michael R. Smith, Tony Martinez ·

Not all instances in a data set are equally beneficial for inferring a model of the data. Some instances (such as outliers) are detrimental to inferring a model of the data. Several machine learning techniques treat instances in a data set differently during training, such as curriculum learning, filtering, and boosting. However, an automated method for determining how beneficial an instance is for inferring a model of the data does not exist. In this paper, we present an automated method that orders the instances in a data set by complexity based on their likelihood of being misclassified (instance hardness). The underlying assumption of this method is that instances with a high likelihood of being misclassified represent more complex concepts in a data set. Ordering the instances in a data set allows a learning algorithm to focus on the most beneficial instances and ignore the detrimental ones. We compare ordering the instances in a data set in curriculum learning, filtering, and boosting. We find that ordering the instances significantly increases classification accuracy and that filtering has the largest impact on classification accuracy. On a set of 52 data sets, ordering the instances increases the average accuracy from 81% to 84%.
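The abstract describes estimating instance hardness (the likelihood that an instance is misclassified) and then ordering or filtering instances by that score before training. The sketch below is a minimal illustration of that idea, not the authors' code: it assumes scikit-learn-style estimators, uses out-of-fold predicted probabilities from a small, arbitrary set of learners as a stand-in for the paper's hardness measure, and picks an illustrative data set and filtering threshold.

    # Minimal sketch: estimate instance hardness as the average out-of-fold
    # probability of misclassification under several learners, then order
    # instances (curriculum-style) or filter the hardest ones before training.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_predict

    X, y = load_breast_cancer(return_X_y=True)  # illustrative data set

    # A few diverse learners stand in for the paper's set of classifiers
    # (the particular choice here is an assumption, not the authors' setup).
    learners = [
        make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        RandomForestClassifier(n_estimators=100, random_state=0),
        GaussianNB(),
    ]

    # Instance hardness: mean probability, across learners, that an instance's
    # true class is NOT predicted, estimated out-of-fold to avoid optimism.
    hardness = np.zeros(len(y))
    for clf in learners:
        proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
        hardness += 1.0 - proba[np.arange(len(y)), y]
    hardness /= len(learners)

    # Order instances from easiest to hardest (a curriculum-style ordering) ...
    order = np.argsort(hardness)

    # ... or drop the hardest 10% before training (the filtering treatment;
    # the 10% cutoff is an illustrative choice).
    keep = hardness <= np.quantile(hardness, 0.9)
    final_model = RandomForestClassifier(n_estimators=100, random_state=0)
    final_model.fit(X[keep], y[keep])

In a boosting treatment, the same hardness scores could instead be used to cap or reweight the attention given to the hardest instances, rather than removing them outright.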
