"Forget" the Forget Gate: Estimating Anomalies in Videos using Self-contained Long Short-Term Memory Networks

Abnormal event detection is a challenging task that requires effectively handling intricate appearance and motion features. In this paper, we present an approach for detecting anomalies in videos by learning a novel self-contained, LSTM-based network on normal dense optical flow. Owing to its sigmoid implementation, the forget gate of a standard LSTM is prone to overlooking and dismissing relevant content in long-sequence tasks such as abnormality detection. The forget gate suppresses the contribution of the previous hidden state to the computation of the cell state, prioritizing the current input. In addition, the hyperbolic tangent activation of standard LSTMs degrades performance as networks grow deeper. To tackle these two limitations, we introduce a light, bi-gated LSTM cell that discards the forget gate and replaces the hyperbolic tangent with sigmoid activation. Specifically, the proposed LSTM architecture fully retains the content of the previous hidden state, enabling the trained model to be robust and to make context-independent decisions during evaluation. Removing the forget gate yields a simpler, lighter LSTM cell with improved effectiveness and computational efficiency. Empirical evaluations show that the proposed bi-gated LSTM-based network outperforms various LSTM-based models, verifying its effectiveness for abnormality detection and generalization on the CUHK Avenue and UCSD datasets.
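
The abstract names the two architectural changes (no forget gate, sigmoid in place of tanh) but not the exact gate equations. Below is a minimal PyTorch sketch of such a bi-gated cell, assuming the cell-state update c_t = c_{t-1} + i_t * g_t; the class name BiGatedLSTMCell and the gate layout are illustrative assumptions, not the paper's published implementation.

```python
import torch
import torch.nn as nn


class BiGatedLSTMCell(nn.Module):
    """Hypothetical sketch of a bi-gated LSTM cell: the forget gate is
    removed (the previous cell state is carried over unscaled) and the
    tanh activations are replaced with sigmoids, as the abstract describes."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Only the input gate, output gate, and candidate update remain,
        # so each projection produces 3 * hidden_size features instead of 4.
        self.x2h = nn.Linear(input_size, 3 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 3 * hidden_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        gates = self.x2h(x) + self.h2h(h_prev)
        i, o, g = gates.chunk(3, dim=-1)
        i = torch.sigmoid(i)      # input gate
        o = torch.sigmoid(o)      # output gate
        g = torch.sigmoid(g)      # candidate update (sigmoid, not tanh)
        c = c_prev + i * g        # no forget gate: previous cell state kept intact
        h = o * torch.sigmoid(c)  # sigmoid in place of tanh on the cell state
        return h, (h, c)


if __name__ == "__main__":
    # Toy step over a batch of 8 feature vectors (e.g., flattened flow features).
    cell = BiGatedLSTMCell(input_size=64, hidden_size=128)
    x = torch.randn(8, 64)
    state = (torch.zeros(8, 128), torch.zeros(8, 128))
    h, state = cell(x, state)
    print(h.shape)  # torch.Size([8, 128])
```

One design consequence of dropping the forget gate is that the cell state can only accumulate, which is consistent with the abstract's claim that content from earlier steps is fully sustained rather than attenuated.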
