Efficient Face Detection with Audio-Based Region Proposals for Human-Robot Interactions

14 Sep 2023 · William Aris, François Grondin

Efficient face detection is critical for natural human-robot interaction. However, computer vision tends to impose a large computational load because of the amount of data (i.e. pixels) that must be processed in a short amount of time. This is undesirable on robotic platforms, where multiple processes run in parallel and processing power is limited by portability constraints. Existing solutions often reduce image quality, which can degrade detection performance. The literature also reports methods that generate regions of interest in images from pixel data. Although this is a promising idea, these methods often rely on computationally heavy vision algorithms themselves. In this paper, we evaluate how audio can be used to generate regions of interest in optical images and thereby reduce the number of pixels to process with computer vision. To this end, we propose an attention mechanism that localizes a speech source and evaluate its impact on an existing face detection algorithm. Our results show that the attention mechanism reduces the computational load and offers an interesting trade-off between speed and accuracy. The proposed pipeline is flexible and can easily be adapted to other applications such as robot surveillance, video conferencing or smart glasses.
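As a rough illustration of the idea described in the abstract (not the authors' exact pipeline), the sketch below assumes a direction-of-arrival (DOA) estimate from a microphone array is already available in the camera frame, projects it onto the image plane with a pinhole camera model, and runs an off-the-shelf OpenCV face detector only on the resulting crop. All numerical values, the ROI size, and the file names are placeholders.

```python
import numpy as np
import cv2


def doa_to_pixel(doa_unit_vec, K):
    """Project a unit direction-of-arrival vector (camera frame) onto the
    image plane with a pinhole model: p = K @ d, normalized by depth."""
    p = K @ doa_unit_vec
    return int(p[0] / p[2]), int(p[1] / p[2])


def crop_roi(image, center, size):
    """Extract a square region of interest around the projected point,
    clamped to the image borders. Returns the crop and its offset."""
    h, w = image.shape[:2]
    half = size // 2
    x0, y0 = max(center[0] - half, 0), max(center[1] - half, 0)
    x1, y1 = min(center[0] + half, w), min(center[1] + half, h)
    return image[y0:y1, x0:x1], (x0, y0)


# Hypothetical inputs: camera intrinsics and a DOA unit vector expressed in
# the camera frame. In a real system these would come from calibration and
# from the audio localization module.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
doa = np.array([0.1, 0.0, 1.0])
doa /= np.linalg.norm(doa)

image = cv2.imread("frame.jpg")                    # placeholder camera frame
center = doa_to_pixel(doa, K)
roi, (ox, oy) = crop_roi(image, center, size=200)  # placeholder ROI size

# Run a standard face detector on the small crop instead of the full frame.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY))

# Map detections back to full-image coordinates.
faces_full = [(x + ox, y + oy, w, h) for (x, y, w, h) in faces]
print(faces_full)
```

The design choice this sketch highlights is that the face detector only ever sees the audio-selected crop, so its cost scales with the ROI size rather than the full image resolution; any detector could be swapped in place of the Haar cascade used here for illustration.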
