"Forget" the Forget Gate: Estimating Anomalies in Videos using Self-contained Long Short-Term Memory Networks

3 Apr 2021  ·  Habtamu Fanta, Zhiwen Shao, Lizhuang Ma

Abnormal event detection is a challenging task that requires effectively handling intricate features of appearance and motion. In this paper, we present an approach to detecting anomalies in videos by learning a novel LSTM-based self-contained network on normal dense optical flow. Due to its sigmoid implementation, the forget gate of standard LSTMs is susceptible to overlooking and dismissing relevant content in long-sequence tasks such as abnormality detection: it diminishes the contribution of the previous hidden state to the computation of the cell state, prioritizing the current input. In addition, the hyperbolic tangent activation of standard LSTMs sacrifices performance as the network gets deeper. To tackle these two limitations, we introduce a bi-gated, lightweight LSTM cell by discarding the forget gate and introducing sigmoid activation. Specifically, the LSTM architecture we propose fully retains content from the previous hidden state, enabling the trained model to be robust and to make context-independent decisions during evaluation. Removing the forget gate results in a simpler, less demanding LSTM cell with improved effectiveness and computational efficiency. Empirical evaluations show that the proposed bi-gated LSTM-based network outperforms various LSTM-based models, verifying its effectiveness for abnormality detection and generalization tasks on the CUHK Avenue and UCSD datasets.
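The abstract does not give the cell-update equations, so the following PyTorch sketch is one plausible reading of the described design: the forget gate is dropped so the previous cell state is carried over unscaled, and sigmoid replaces the usual tanh activations. The class name `BiGatedLSTMCell` and the exact update rule are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class BiGatedLSTMCell(nn.Module):
    """Sketch of a forget-gate-free LSTM cell: only input and output
    gates remain, and sigmoid replaces the usual tanh activations.

    This is an interpretation of the paper's abstract, not the authors'
    implementation; the actual update equations may differ."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        # Three affine transforms instead of the standard four:
        # input gate, output gate, and candidate cell content.
        self.gates = nn.Linear(input_size + hidden_size, 3 * hidden_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        z = self.gates(torch.cat([x, h_prev], dim=-1))
        i, o, g = z.chunk(3, dim=-1)
        i = torch.sigmoid(i)      # input gate
        o = torch.sigmoid(o)      # output gate
        g = torch.sigmoid(g)      # candidate content (sigmoid, not tanh)
        # No forget gate: the previous cell state passes through intact,
        # so earlier context is never scaled down.
        c = c_prev + i * g
        h = o * torch.sigmoid(c)  # sigmoid also replaces tanh on the output
        return h, c

# Example: unroll the cell over a short sequence of (hypothetical)
# optical-flow feature vectors.
cell = BiGatedLSTMCell(input_size=128, hidden_size=64)
x = torch.randn(8, 128)                 # batch of 8 flow features
h = c = torch.zeros(8, 64)
for _ in range(10):                     # 10 time steps
    h, c = cell(x, (h, c))
```

Note that without a forget gate the cell state accumulates additively, much like a residual connection; the sigmoid on the output path keeps the emitted hidden state bounded even as the cell state grows.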
