Search Results for author: Amit Vikram Singh

Found 5 papers, 0 papers with code

Explicitising The Implicit Interpretability of Deep Neural Networks Via Duality

no code implementations1 Mar 2022 Chandrashekar Lakshminarayanan, Amit Vikram Singh, Arun Rajkumar

Using the dual view, in this paper, we rethink the conventional interpretations of DNNs, thereby explicitising the implicit interpretability of DNNs.

Disentangling deep neural networks with rectified linear units using duality

no code implementations6 Oct 2021 Chandrashekar Lakshminarayanan, Amit Vikram Singh

To address `black box'-ness, we propose a novel interpretable counterpart of DNNs with ReLUs namely deep linearly gated networks (DLGN): the pre-activations to the gates are generated by a deep linear network, and the gates are then applied as external masks to learn the weights in a different network.
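The DLGN description above can be sketched in code. This is an illustrative reconstruction, not the authors' implementation: the function names, layer shapes, and the sign-based gate rule are assumptions for the sketch, with the gating path kept strictly linear and its gates applied as external masks to a separate value network.

```python
import numpy as np

def dlgn_forward(x, gate_weights, value_weights):
    """Sketch of a deep linearly gated network (DLGN) forward pass.

    Hypothetical helper: gate_weights parameterise a deep *linear*
    network whose pre-activations generate the gates; value_weights
    belong to a different network whose weights are learned with the
    gates applied as external masks.
    """
    g_pre = x  # gating path: composed linear layers, no nonlinearity
    v = x      # value path: the network that actually carries the signal
    for Wg, Wv in zip(gate_weights, value_weights):
        g_pre = Wg @ g_pre                 # linear pre-activation for the gates
        gate = (g_pre > 0).astype(float)   # 0/1 external mask (assumed gate rule)
        v = gate * (Wv @ v)                # mask the value network's pre-activation
    return v
```

Separating the gating path from the value path is what makes the counterpart interpretable: the gates come from a linear (hence analysable) network rather than from the same nonlinear computation they modulate.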

Disentanglement

Neural Path Features and Neural Path Kernel : Understanding the role of gates in deep learning

no code implementations NeurIPS 2020 Chandrashekar Lakshminarayanan, Amit Vikram Singh

To this end, we encode the on/off state of the gates of a given input in a novel 'neural path feature' (NPF), and the weights of the DNN are encoded in a novel 'neural path value' (NPV).
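The NPF/NPV encoding can be illustrated on a one-hidden-layer ReLU network with a single output. This is a sketch under assumed conventions (function names and shapes are hypothetical): each input-to-output path contributes a neural path feature equal to the input coordinate times the gate activity along the path, and a neural path value equal to the product of the weights along the path; the network output is then the inner product of the two.

```python
import numpy as np
from itertools import product

def relu_net(x, W1, W2):
    """Plain one-hidden-layer ReLU network, single output (sketch)."""
    h = np.maximum(W1 @ x, 0.0)
    return (W2 @ h)[0]

def npf_npv(x, W1, W2):
    """Hypothetical path decomposition: enumerate paths input i -> hidden j -> output.

    NPF entry: x[i] times the on/off gate of hidden unit j (input-dependent).
    NPV entry: product of the weights along the path (input-independent).
    """
    gates = ((W1 @ x) > 0).astype(float)   # on/off state of each hidden gate
    npf, npv = [], []
    for i, j in product(range(len(x)), range(W1.shape[0])):
        npf.append(x[i] * gates[j])        # neural path feature of path (i, j)
        npv.append(W1[j, i] * W2[0, j])    # neural path value of path (i, j)
    return np.array(npf), np.array(npv)
```

The decomposition is exact for this architecture: summing `npf * npv` over all paths recovers the ReLU network's output, which is what lets the gates' role be studied in isolation from the weights.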

Deep Gated Networks: A framework to understand training and generalisation in deep learning

no code implementations10 Feb 2020 Chandrashekar Lakshminarayanan, Amit Vikram Singh

In DGNs, a single neuronal unit has two components: the pre-activation input (equal to the inner product of the weights of the layer and the previous layer's outputs), and a gating value in $[0, 1]$. The output of the neuronal unit is the product of the pre-activation input and the gating value.
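A single DGN neuronal unit as described above can be written in a few lines. This is a minimal sketch with hypothetical names; only the two-component structure (inner-product pre-activation, gating value in $[0, 1]$, output equal to their product) is taken from the description.

```python
import numpy as np

def dgn_unit(prev_outputs, weights, gate):
    """One DGN neuronal unit (illustrative sketch, names assumed).

    pre-activation: inner product of the layer's weights with the
    previous layer's outputs; output: pre-activation times a gating
    value in [0, 1].
    """
    assert 0.0 <= gate <= 1.0, "gating value must lie in [0, 1]"
    pre_activation = np.dot(weights, prev_outputs)
    return pre_activation * gate
```

Note that a standard ReLU unit is the special case where the gate is 1 when the pre-activation is positive and 0 otherwise, which is what makes DGNs a framework for studying gated networks in general.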
