Leveraging Recent Advances in Deep Learning for Audio-Visual Emotion Recognition

16 Mar 2021 · Liam Schoneveld, Alice Othmani, Hazem Abdelkawy

Emotional expressions are the behaviors that communicate our emotional state or attitude to others. They are expressed through verbal and non-verbal communication. Complex human behavior can be understood by studying physical features from multiple modalities: mainly facial expressions, vocal cues, and physical gestures. Recently, spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis. In this paper, we propose a new deep learning-based approach for audio-visual emotion recognition. Our approach leverages recent advances in deep learning such as knowledge distillation and high-performing deep architectures. The deep feature representations of the audio and visual modalities are fused using a model-level fusion strategy. A recurrent neural network is then used to capture the temporal dynamics. Our proposed approach substantially outperforms state-of-the-art approaches in predicting valence on the RECOLA dataset. Moreover, our proposed visual facial expression feature extraction network surpasses state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets.
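The abstract describes two main ingredients: model-level fusion of audio and visual feature representations followed by a recurrent network over time, and knowledge distillation for the facial expression feature extractor. The PyTorch sketch below illustrates both ideas in minimal form; the class name, feature dimensions, concatenation-based fusion, GRU choice, and the soft-label distillation loss are illustrative assumptions, not the paper's exact architecture or training recipe.

```python
import torch
import torch.nn as nn


class AudioVisualValenceModel(nn.Module):
    """Illustrative model-level fusion of per-frame audio and visual features,
    followed by a GRU that models temporal dynamics (dims are assumptions)."""

    def __init__(self, audio_dim=128, visual_dim=512, hidden_dim=256):
        super().__init__()
        # Project each modality's frame-level features to a shared size.
        self.audio_proj = nn.Linear(audio_dim, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # Model-level fusion here is a simple concatenation of the projections.
        self.rnn = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        # Regression head for a continuous affect dimension (e.g. valence).
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (batch, time, audio_dim); visual_feats: (batch, time, visual_dim)
        fused = torch.cat(
            [self.audio_proj(audio_feats), self.visual_proj(visual_feats)], dim=-1
        )
        out, _ = self.rnn(fused)            # temporal dynamics over the fused sequence
        return self.head(out).squeeze(-1)   # per-frame valence prediction


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Standard soft-label knowledge distillation: KL divergence between the
    temperature-softened teacher and student class distributions."""
    t = temperature
    teacher_probs = torch.softmax(teacher_logits / t, dim=-1)
    student_log_probs = torch.log_softmax(student_logits / t, dim=-1)
    return nn.functional.kl_div(
        student_log_probs, teacher_probs, reduction="batchmean"
    ) * (t * t)


# Quick shape check with random tensors standing in for real extractor outputs.
model = AudioVisualValenceModel()
audio = torch.randn(2, 100, 128)
visual = torch.randn(2, 100, 512)
print(model(audio, visual).shape)  # torch.Size([2, 100])
```

In this sketch the distillation loss would be applied when training a compact "student" facial expression network against a larger "teacher" network's logits, while the fused recurrent model handles the audio-visual valence prediction.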


Datasets

RECOLA, AffectNet, Google Facial Expression Comparison (FEC)

Results from the Paper


Ranked #11 on Facial Expression Recognition (FER) on AffectNet (using extra training data)

Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data
Facial Expression Recognition (FER) | AffectNet | Distilled student | Accuracy (7 emotion) | 65.4 | #13 | Yes
Facial Expression Recognition (FER) | AffectNet | Distilled student | Accuracy (8 emotion) | 61.60 | #11 | Yes

Methods