Distinguishing Homophenes Using Multi-Head Visual-Audio Memory for Lip Reading

Recognizing speech from silent lip movement, known as lip reading, is a challenging task due to 1) the inherent insufficiency of lip movement to fully represent speech, and 2) the existence of homophenes, words that have similar lip movements but different pronunciations. In this paper, we alleviate these two challenges by proposing a Multi-head Visual-audio Memory (MVM). First, MVM is trained on audio-visual datasets and remembers audio representations by modeling the inter-relationships of paired audio-visual representations. At inference, visual input alone can retrieve the saved audio representations from the memory by examining the learned inter-relationships, so the lip reading model can complement the insufficient visual information with the retrieved audio representations. Second, MVM is composed of multi-head key memories for saving visual features and one value memory for saving audio knowledge, a design intended to distinguish homophenes. With the multi-head key memories, MVM extracts possible candidate audio features from the memory, allowing the lip reading model to consider which pronunciations the input lip movement could represent. This can also be viewed as an explicit implementation of the one-to-many viseme-to-phoneme mapping. Moreover, MVM is employed at multiple temporal levels so that context is considered when retrieving from the memory and distinguishing homophenes. Extensive experimental results verify the effectiveness of the proposed method in lip reading and in distinguishing homophenes.
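To make the key-value retrieval concrete, below is a minimal sketch of a multi-head visual-audio memory in PyTorch. It is an illustration under stated assumptions, not the authors' released implementation: the class name, the slot count `mem_slots`, the head count `num_heads`, and the head-fusion rule are all hypothetical choices; only the overall idea (multiple key memories addressed by a visual query, one shared value memory holding audio knowledge, per-head candidate readouts) follows the abstract.

```python
# Minimal sketch of a multi-head visual-audio key-value memory (assumed PyTorch setting).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadVisualAudioMemory(nn.Module):
    def __init__(self, dim=512, mem_slots=112, num_heads=4):
        super().__init__()
        # Multi-head key memories store visual addressing patterns;
        # a single shared value memory stores the corresponding audio knowledge.
        self.key_mem = nn.Parameter(torch.randn(num_heads, mem_slots, dim))
        self.value_mem = nn.Parameter(torch.randn(mem_slots, dim))

    def forward(self, visual_feat):
        # visual_feat: (B, D) visual query from the lip-reading encoder.
        q = F.normalize(visual_feat, dim=-1)                        # (B, D)
        k = F.normalize(self.key_mem, dim=-1)                       # (H, S, D)
        # Address each key-memory head with the visual query.
        addr = F.softmax(torch.einsum('bd,hsd->bhs', q, k), dim=-1)  # (B, H, S)
        # Each head reads out a candidate audio representation from the value memory,
        # giving the one-to-many viseme-to-phoneme candidates.
        candidates = torch.einsum('bhs,sd->bhd', addr, self.value_mem)  # (B, H, D)
        # Fuse candidates, here by weighting heads by their agreement with the query
        # (an illustrative fusion rule, not the paper's).
        head_score = F.softmax((candidates * q.unsqueeze(1)).sum(-1), dim=-1)  # (B, H)
        fused_audio = torch.einsum('bh,bhd->bd', head_score, candidates)       # (B, D)
        return fused_audio, candidates
```

In use, the fused audio representation would be combined with the visual feature before the classifier, so the insufficient visual information is complemented by the retrieved audio knowledge; applying such a module at several temporal resolutions of the visual encoder corresponds to the multi-temporal-level usage described above.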

Published at AAAI 2022.
| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Lipreading | CAS-VSR-W1k (LRW-1000) | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory | Top-1 Accuracy | 53.8% | #2 |
| Lipreading | Lip Reading in the Wild (LRW) | 3D Conv + ResNet-18 + MS-TCN + Multi-Head Visual-Audio Memory | Top-1 Accuracy | 88.5% | #5 |
| Lipreading | LRS2 | Multi-head Visual-Audio Memory | Word Error Rate (WER) | 44.5% | #9 |
