Deeply-Sparse Signal rePresentations ($\text{D}\text{S}^2\text{P}$)

5 Jul 2018 · Demba Ba

A recent line of work shows that a deep neural network with ReLU nonlinearities arises from a finite sequence of cascaded sparse coding models, the outputs of which, except for the last element in the cascade, are sparse and unobservable. That is, intermediate outputs deep in the cascade are sparse, hence the title of this manuscript. We show here, using techniques from the dictionary learning literature, that if the measurement matrices in the cascaded sparse coding model (a) satisfy the restricted isometry property (RIP) and (b) all have sparse columns except for the last, then they can be recovered with high probability. We propose two algorithms for this purpose: one that recovers the matrices in a forward sequence, and another that recovers them in a backward sequence. The method of choice in deep learning for solving this problem is to train an auto-encoder. Our algorithms provide a sound alternative, with theoretical guarantees, as well as upper bounds on sample complexity. The theory shows that the learning complexity of the forward algorithm depends on the number of hidden units at the deepest layer and the number of active neurons at that layer (sparsity). In addition, the theory relates the number of hidden units in successive layers, thus giving a practical prescription for designing deep ReLU neural networks. Because it puts fewer restrictions on the architecture, the backward algorithm requires more data. We demonstrate the deep dictionary learning algorithm via simulations. Finally, we use a coupon-collection argument to conjecture a lower bound on sample complexity that gives some insight as to why deep networks require more data to train than shallow ones.
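
To make the cascaded sparse coding model concrete, the sketch below builds a hypothetical two-level cascade in which only the observation is available, the deepest code has a handful of active neurons, the inner dictionary has sparse columns (so the intermediate output is itself sparse and unobservable), and a single thresholding step per layer reduces to a ReLU with a bias. The dimensions, sparsity levels, bias values, and the one-step encoder are illustrative assumptions, not the paper's exact construction or recovery algorithms.

```python
# Minimal sketch (assumed dimensions and sparsity): a two-level cascaded
# sparse coding generative model y = A1 x1, x1 = A2 x2, and a one-step
# ReLU "encoder" illustrating how thresholded sparse coding yields ReLU layers.
import numpy as np

rng = np.random.default_rng(0)

n0, n1, n2 = 64, 128, 256   # observation dim and hidden dims (assumed)
s2 = 5                      # active neurons at the deepest layer (assumed)

# Dictionaries: A1 dense; A2 with sparse columns, as required of all but the
# last dictionary in the cascade for recovery.
A1 = rng.standard_normal((n0, n1)) / np.sqrt(n0)
A2 = np.zeros((n1, n2))
for j in range(n2):
    support = rng.choice(n1, size=8, replace=False)   # sparse column support
    A2[support, j] = rng.standard_normal(8) / np.sqrt(8)

# Generative cascade: the deepest code x2 is sparse and nonnegative; the
# intermediate output x1 = A2 @ x2 is also sparse because A2 has sparse columns.
x2 = np.zeros(n2)
x2[rng.choice(n2, size=s2, replace=False)] = np.abs(rng.standard_normal(s2))
x1 = A2 @ x2
y = A1 @ x1                 # only y is observed

def relu_layer(A, v, bias):
    """One thresholding step with dictionary A: a ReLU unit with a bias."""
    return np.maximum(A.T @ v - bias, 0.0)

x1_hat = relu_layer(A1, y, bias=0.1)       # estimate of the intermediate code
x2_hat = relu_layer(A2, x1_hat, bias=0.1)  # estimate of the deepest code
print("nonzeros in x1 vs. estimate:",
      np.count_nonzero(x1), np.count_nonzero(x1_hat > 1e-6))
```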

