2 code implementations • 17 Nov 2020 • Varun Ranganathan, Alex Lewandowski
Despite ongoing success, training a neural network with gradient descent can be a slow and strenuous affair.
no code implementations • 25 Jan 2018 • Varun Ranganathan, S. Natarajan
In this paper, we develop an alternative to backpropagation that does not use the gradient descent algorithm; instead, we devise a new algorithm that finds the error in the weights and biases of an artificial neuron using the Moore-Penrose pseudoinverse.
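The core idea of a pseudoinverse-based update can be illustrated on the simplest case, a single linear neuron: given inputs X (augmented with a constant column for the bias) and targets y, the least-squares weights are obtained in closed form as pinv(X) @ y, with no iterative gradient steps. This is a minimal NumPy sketch of that idea, not the paper's exact algorithm; all variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Synthetic, noiseless data from a known linear neuron (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
true_b = 1.0
y = X @ true_w + true_b

# Augment inputs with a ones column so the bias is learned jointly
Xb = np.hstack([X, np.ones((X.shape[0], 1))])

# Closed-form least-squares solution via the Moore-Penrose pseudoinverse
w = np.linalg.pinv(Xb) @ y

weights, bias = w[:3], w[3]
```

Because the data here is noiseless, the recovered `weights` and `bias` match `true_w` and `true_b` to numerical precision; with noisy targets the same formula yields the least-squares fit instead.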