Sensing Diversity and Sparsity Models for Event Generation and Video Reconstruction from Events

Events-to-video (E2V) reconstruction and video-to-events (V2E) simulation are two fundamental research topics in event-based vision. Current deep neural networks for E2V reconstruction are usually complex and difficult to interpret. Moreover, existing event simulators are designed to generate realistic events, but research on how to improve the event generation process has so far been limited. In this paper, we propose a lightweight, simple, model-based deep network for E2V reconstruction, explore sensing diversity across adjacent pixels in V2E generation, and finally build a video-to-events-to-video (V2E2V) architecture to validate how alternative event generation strategies improve video reconstruction. For E2V reconstruction, we model the relationship between events and intensity using sparse representation models. A convolutional ISTA network (CISTA) is then designed using the algorithm unfolding strategy. Long short-term temporal consistency (LSTC) constraints are further introduced to enhance temporal coherence. For V2E generation, we introduce the idea of interleaving pixels with different contrast thresholds and lowpass bandwidths, and conjecture that this diversity helps extract more useful information from the intensity signal. Finally, the V2E2V architecture is used to verify the effectiveness of this strategy. Results highlight that our CISTA-LSTC network outperforms state-of-the-art methods and achieves better temporal consistency. Introducing sensing diversity in event generation reveals finer scene details and leads to significantly improved reconstruction quality.
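
To make the algorithm-unfolding idea behind CISTA concrete, here is a minimal PyTorch sketch of an unfolded convolutional ISTA network. It assumes a LISTA-style parameterisation in which each ISTA iteration becomes a layer with learnable convolutions and a learnable soft-threshold; class names, channel counts, and the 2-channel event-tensor input are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CISTABlock(nn.Module):
    """One unfolded ISTA iteration: z <- soft(z - W_e(D z - x), theta)."""
    def __init__(self, in_ch: int, code_ch: int, k: int = 3):
        super().__init__()
        self.decode = nn.Conv2d(code_ch, in_ch, k, padding=k // 2)  # D: sparse code -> signal
        self.encode = nn.Conv2d(in_ch, code_ch, k, padding=k // 2)  # plays the role of (1/L) D^T
        self.theta = nn.Parameter(torch.full((1, code_ch, 1, 1), 0.1))  # learnable threshold

    def forward(self, z, x):
        residual = self.decode(z) - x                  # reconstruction error in signal domain
        z = z - self.encode(residual)                  # gradient step mapped to code domain
        return torch.sign(z) * F.relu(z.abs() - self.theta)  # soft-thresholding

class CISTA(nn.Module):
    """Stack of unfolded ISTA iterations followed by a decoder to intensity."""
    def __init__(self, in_ch: int = 2, code_ch: int = 32, iters: int = 5):
        super().__init__()
        self.init = nn.Conv2d(in_ch, code_ch, 3, padding=1)
        self.blocks = nn.ModuleList(CISTABlock(in_ch, code_ch) for _ in range(iters))
        self.out = nn.Conv2d(code_ch, 1, 3, padding=1)

    def forward(self, x):                              # x: e.g. stacked event-count channels
        z = self.init(x)
        for blk in self.blocks:
            z = blk(z, x)
        return torch.sigmoid(self.out(z))              # reconstructed intensity in [0, 1]
```

Because each layer mirrors one ISTA iteration, the network depth corresponds to the number of optimisation steps, which is what makes unfolded models lighter and more interpretable than generic E2V architectures.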
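For the V2E side, the sketch below illustrates event generation with interleaved sensing diversity: neighbouring pixels fire at different contrast thresholds. The checkerboard threshold layout, the single-event-per-pixel-per-frame simplification, and the omission of the lowpass stage are assumptions made for brevity, not the paper's exact scheme.

```python
import numpy as np

def generate_events(log_frames, c_low=0.15, c_high=0.3):
    """Emit (t, y, x, polarity) events from a sequence of log-intensity frames,
    with contrast thresholds interleaved across adjacent pixels."""
    h, w = log_frames[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    thresholds = np.where((yy + xx) % 2 == 0, c_low, c_high)  # checkerboard diversity
    ref = log_frames[0].copy()                                # per-pixel reference level
    events = []
    for t, frame in enumerate(log_frames[1:], start=1):
        diff = frame - ref
        fired = np.abs(diff) >= thresholds                    # pixels crossing their threshold
        for y, x in zip(*np.nonzero(fired)):
            events.append((t, int(y), int(x), int(np.sign(diff[y, x]))))
            ref[y, x] = frame[y, x]                           # reset reference after firing
    return events
```

The intuition is that low-threshold pixels capture faint intensity changes while high-threshold pixels stay robust to noise, so the interleaved stream carries finer detail than a uniform-threshold simulator.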
