A Retina-inspired Sampling Method for Visual Texture Reconstruction

Conventional frame-based cameras cannot meet the demand for rapid reaction in real-time applications, while the emerging dynamic vision sensor (DVS) can capture moving objects at high speed. However, to achieve visual texture reconstruction, a DVS needs extra information beyond its output spikes. This paper introduces a fovea-like sampling method, inspired by neural signal processing in the retina, that aims at visual texture reconstruction using only the properties of the spikes themselves. In the proposed method, each pixel independently responds to luminance changes with temporally asynchronous spikes. Analyzing the arrival times of these spikes makes it possible to restore the luminance information and thereby reconstruct the natural scene for visualization. Three decoding methods for the spike stream are proposed for texture reconstruction in both high-speed motion and stationary scenes. Compared with conventional frame-based cameras and the DVS, our model achieves better image quality and higher flexibility, and has the potential to change the way demanding machine vision applications are built.
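
To illustrate the general idea of decoding luminance from spike arrival times, the sketch below estimates per-pixel intensity from inter-spike intervals of a binary spike stream. This is a minimal, assumption-laden example: the array layout, the threshold parameter theta, and the function name texture_from_intervals are illustrative and are not taken from the paper, which proposes three specific decoding methods of its own.

```python
# Minimal sketch: reconstruct a luminance map from a spike stream by
# inspecting inter-spike intervals. All names and constants here are
# illustrative assumptions, not the paper's actual decoding methods.
import numpy as np

def texture_from_intervals(spikes: np.ndarray, theta: float = 255.0) -> np.ndarray:
    """Estimate per-pixel luminance from a binary spike stream.

    spikes : array of shape (T, H, W); 1 where a pixel fired at that time step.
    theta  : assumed integration threshold. Brighter pixels fire more often,
             so intensity is taken as proportional to theta / mean interval.
    """
    T, H, W = spikes.shape
    image = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            t_fire = np.flatnonzero(spikes[:, y, x])
            if len(t_fire) >= 2:
                # Mean inter-spike interval -> estimated luminance level.
                isi = np.diff(t_fire).mean()
                image[y, x] = theta / isi
            elif len(t_fire) == 1:
                # Single spike: fall back to the full observation window.
                image[y, x] = theta / T
    return np.clip(image, 0, 255)
```

Interval-based decoding of this kind suits stationary or slowly varying scenes; for high-speed motion, shorter temporal windows or per-spike estimates would be needed, which is the kind of trade-off the paper's multiple decoding methods address.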
