Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent in parameter space is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK). Lee et al. (2019) built on this result by establishing that the output of a neural network trained using gradient descent can be approximated by a linear model for wide networks. In parallel, a recent line of studies (Schoenholz et al. 2017; Hayou et al. 2019) has suggested that a special initialization, known as the Edge of Chaos, improves training. In this paper, we bridge the gap between these two concepts by quantifying the impact of the initialization and the activation function on the NTK when the network depth becomes large. In particular, we show that the performance of wide deep neural networks cannot be explained by the NTK regime and we provide experiments illustrating our theoretical results.
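
To make the object of study concrete, below is a minimal sketch (not code from the paper) of the empirical NTK, Theta(x, x') = <df/dtheta(x), df/dtheta(x')>, for a small tanh MLP in JAX. The network widths, the scaling sigma_w / sqrt(fan_in), and the choice sigma_w = 1, sigma_b = 0 (often associated with the Edge of Chaos for tanh) are illustrative assumptions, not the paper's exact setup.

import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def init_params(key, widths, sigma_w=1.0, sigma_b=0.0):
    # Gaussian initialization with variance sigma_w^2 / fan_in (illustrative choice).
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, k_w, k_b = jax.random.split(key, 3)
        W = sigma_w / jnp.sqrt(d_in) * jax.random.normal(k_w, (d_in, d_out))
        b = sigma_b * jax.random.normal(k_b, (d_out,))
        params.append((W, b))
    return params

def mlp(params, x):
    # Tanh hidden layers, linear scalar output.
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze()

def empirical_ntk(params, x1, x2):
    # Inner product of the parameter gradients of the scalar output at x1 and x2.
    g1, _ = ravel_pytree(jax.grad(mlp)(params, x1))
    g2, _ = ravel_pytree(jax.grad(mlp)(params, x2))
    return jnp.dot(g1, g2)

key = jax.random.PRNGKey(0)
params = init_params(key, [3, 64, 64, 1])
x1, x2 = jnp.ones(3), jnp.arange(3.0)
print(empirical_ntk(params, x1, x2))

In the infinite-width limit this kernel stays (approximately) fixed during training, which is what makes the linear-model approximation of Lee et al. (2019) possible; the paper's question is how this kernel behaves as the depth grows.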
