Generalizing Cross Entropy Loss with a Beta Proper Composite Loss: An Improved Loss Function for Open Set Recognition

29 Sep 2021  ·  Matthew Lyle Olson, Neale Ratzlaff, Weng-Keen Wong

Open set recognition involves identifying data instances encountered at test time that do not belong to any of the known classes in the training set. The majority of recent deep learning approaches to open set recognition train their networks with a cross entropy loss; surprisingly, other loss functions are seldom used. In our work, we explore generalizing cross entropy with a Beta loss. This Beta loss is a proper composite loss with a Beta weight function. The weight function adds the flexibility to place more emphasis on different parts of the observation-conditioned class probability range (i.e. $P(Y|X)$) during training. We show that the flexibility gained through this Beta loss function produces consistent improvements over cross entropy loss for open set recognition and yields state-of-the-art results relative to recent methods.
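The abstract's key idea is a proper composite loss whose weight function is a Beta density, $w(c) = c^{\alpha-1}(1-c)^{\beta-1}$, so that $\alpha$ and $\beta$ control which region of the predicted probability range the loss emphasizes. As a rough illustration of the binary case (not the paper's implementation; the function name `beta_loss` and its signature are hypothetical), the partial losses of a proper loss with weight $w$ are $\ell_1(q)=\int_q^1 (1-t)\,w(t)\,dt$ and $\ell_0(q)=\int_0^q t\,w(t)\,dt$, which for the Beta weight reduce to incomplete Beta integrals:

```python
import numpy as np
from scipy.special import beta as beta_fn, betainc


def beta_loss(q, y, alpha=1.0, beta=1.0):
    """Binary Beta proper loss with weight w(c) = c^(alpha-1) * (1-c)^(beta-1).

    q : predicted probability of the positive class, in (0, 1).
    y : label, 1 or 0.
    Requires alpha, beta > 0 (the incomplete Beta integral must converge).
    """
    q = np.asarray(q, dtype=float)
    # Partial loss for y = 1:  l1(q) = int_q^1 t^(alpha-1) (1-t)^beta dt.
    # betainc is the *regularized* incomplete Beta, so rescale by B(a, b).
    l1 = beta_fn(alpha, beta + 1.0) * (1.0 - betainc(alpha, beta + 1.0, q))
    # Partial loss for y = 0:  l0(q) = int_0^q t^alpha (1-t)^(beta-1) dt.
    l0 = beta_fn(alpha + 1.0, beta) * betainc(alpha + 1.0, beta, q)
    return np.where(y == 1, l1, l0)
```

With $\alpha = \beta = 1$ the weight is constant and the loss reduces to a squared (Brier-type) loss, e.g. $\ell_1(q) = (1-q)^2/2$; as $\alpha, \beta \to 0$ the weight approaches $1/(c(1-c))$ and the loss recovers log loss, i.e. cross entropy, which is the sense in which the Beta loss generalizes it.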
