Optimizing Class Distribution in Memory for Multi-Label Continual Learning

29 Sep 2021 · Yan-Shuo Liang, Wu-Jun Li

Continual learning, which aims to learn from a data stream with a non-stationary distribution, is an important yet challenging problem. Among the most effective approaches are replay-based methods, which maintain a replay buffer, called memory, that stores a small subset of past samples; the model rehearses these samples to preserve its performance on old distributions while learning on new ones. Most existing replay-based methods focus on single-label problems, in which each sample in the data stream has only one label. However, many real applications are multi-label problems, in which each sample may have more than one label. To the best of our knowledge, there exists only one method, called partition reservoir sampling (PRS), for multi-label continual learning. PRS suffers from low speed due to its complicated sampling procedure. In this paper, we propose a novel method, called optimizing class distribution in memory (OCDM), for multi-label continual learning. OCDM formulates the memory update mechanism as an optimization problem and updates the memory by solving this problem. Experiments on two widely used multi-label datasets show that OCDM outperforms other state-of-the-art methods, including PRS, in terms of accuracy, and that its speed is also much faster than PRS.
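
To make the memory-update idea concrete, the sketch below shows one plausible way to cast the update as an optimization over which samples to keep: when the buffer would overflow, greedily drop the sample whose removal brings the memory's multi-label class distribution closest to a uniform target. This is an illustrative sketch only; the exact objective, target distribution, and solver used by OCDM may differ from what is shown here, and the function names are hypothetical.

    import numpy as np

    def class_distribution(label_matrix):
        # label_matrix: (n_samples, n_classes) binary array; a sample may
        # have several active labels. Returns the normalized class distribution.
        counts = label_matrix.sum(axis=0).astype(float)
        total = counts.sum()
        return counts / total if total > 0 else counts

    def greedy_memory_update(mem_labels, batch_labels, memory_size):
        # Illustrative sketch (not OCDM's exact algorithm): choose which
        # samples to keep so that the memory's class distribution stays
        # close (in L1 distance) to a uniform target.
        labels = np.vstack([mem_labels, batch_labels])
        keep = list(range(len(labels)))
        n_classes = labels.shape[1]
        target = np.full(n_classes, 1.0 / n_classes)  # uniform target distribution

        # Remove samples one at a time until the memory budget is met; at each
        # step drop the sample whose removal minimizes the distance between the
        # remaining class distribution and the target.
        while len(keep) > memory_size:
            best_idx, best_dist = None, np.inf
            for i in keep:
                remaining = [j for j in keep if j != i]
                dist = np.abs(class_distribution(labels[remaining]) - target).sum()
                if dist < best_dist:
                    best_idx, best_dist = i, dist
            keep.remove(best_idx)
        return keep  # indices into the concatenated [memory; batch] set

The inner search is kept as an explicit loop for clarity; in practice the candidate removals can be evaluated in a vectorized way, since removing one sample only subtracts its label vector from the class counts.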
