no code implementations • 26 Aug 2022 • Vinod Ganesan, Anwesh Bhattacharya, Pratyush Kumar, Divya Gupta, Rahul Sharma, Nishanth Chandran
For instance, the model provider could be a diagnostics company that has trained a state-of-the-art DenseNet-121 model for interpreting a chest X-ray and the user could be a patient at a hospital.
no code implementations • 7 Jun 2021 • Anwesh Bhattacharya, Marios Mattheakis, Pavlos Protopapas
In certain situations, neural networks are trained on data that obey underlying symmetries.
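As a concrete illustration of what it can mean to build such a symmetry into a model, the minimal sketch below hard-wires an even (sign-flip) invariance f(x) = f(-x) by averaging a plain network over the two group elements. This is a standard symmetrization trick, not necessarily the construction used in the paper; all names, shapes, and parameters are illustrative.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """A plain two-layer network g(x)."""
    return np.tanh(x @ w1 + b1) @ w2 + b2

def symmetric_net(x, params):
    """Hard-wire an even symmetry f(x) = f(-x) by averaging the
    unconstrained network over the group orbit {x, -x}."""
    return 0.5 * (mlp(x, *params) + mlp(-x, *params))

rng = np.random.default_rng(0)
params = (rng.normal(size=(3, 16)), rng.normal(size=16),
          rng.normal(size=(16, 1)), rng.normal(size=1))
x = rng.normal(size=(5, 3))

# the symmetrized network is exactly invariant under x -> -x by construction
assert np.allclose(symmetric_net(x, params), symmetric_net(-x, params))
```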
1 code implementation • 10 Apr 2021 • Anwesh Bhattacharya, Snehanshu Saha, Nithin Nagaraj
It has been well documented that the use of exponentially-averaged momentum (EM) in particle swarm optimization (PSO) is advantageous over the vanilla PSO algorithm.
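For readers unfamiliar with the idea, the sketch below adds an exponentially-averaged momentum buffer to the standard PSO velocity update. The coefficient names, the placement of the momentum term, and the blending factor beta are illustrative assumptions and may differ from the exact formulation in the paper.

```python
import numpy as np

def em_pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, beta=0.9):
    """Minimal EM-PSO sketch: vanilla PSO plus an exponentially-averaged
    momentum term M (the precise update rule in the paper may differ)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # velocities
    M = np.zeros_like(x)                         # exponentially-averaged momentum
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # assumed EM update: blend previous momentum with the current velocity
        M = beta * M + (1 - beta) * v
        # velocity update: inertia + momentum + cognitive + social terms
        v = w * v + beta * M + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# usage: minimize the sphere function in 5 dimensions
best_x, best_val = em_pso(lambda z: np.sum(z**2), dim=5)
```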
no code implementations • 10 Apr 2021 • Urvil Nileshbhai Jivani, Omatharv Bharat Vaidya, Anwesh Bhattacharya, Snehanshu Saha
This paper introduces the application of Exponentially Averaged Momentum Particle Swarm Optimization (EM-PSO) as a derivative-free optimizer for neural networks.
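To illustrate what a derivative-free optimizer for neural networks means in practice, the sketch below flattens a tiny network's weights into a single vector and exposes its loss as a black-box function that a swarm optimizer (for example, the em_pso sketch above) could minimize without any gradients. The architecture, data, and parameter packing are hypothetical and not taken from the paper.

```python
import numpy as np

def unpack(theta, n_in=2, n_hidden=8):
    """Split a flat parameter vector into the weights of a 1-hidden-layer net."""
    w1 = theta[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = theta[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = theta[-n_hidden - 1:-1]
    b2 = theta[-1]
    return w1, b1, w2, b2

def loss(theta, X, y):
    """Mean squared error of the tiny network on (X, y), as a black-box of theta."""
    w1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ w1 + b1)
    pred = h @ w2 + b2
    return np.mean((pred - y) ** 2)

# XOR-like toy data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

dim = 2 * 8 + 8 + 8 + 1   # total parameter count of the tiny network
# best_theta, best_loss = em_pso(lambda t: loss(t, X, y), dim=dim)
```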
1 code implementation • 23 Nov 2020 • Anwesh Bhattacharya, Nehal C. P., Mousumi Das, Abhishek Paswan, Snehanshu Saha, Francoise Combes
We present GOTHIC (Graph BOosted iterated HIll Climbing), a novel algorithm for detecting double nuclei galaxies (DNGs), which determines whether a given galaxy image contains two or more closely separated nuclei.
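The toy sketch below conveys only the iterated hill-climbing part of the idea: climbs launched from many starting pixels converge on the distinct local intensity maxima of an image, and two closely separated maxima would flag a double-nucleus candidate. It does not reproduce the graph-boosting stage or the actual GOTHIC pipeline, and the image and parameters are invented for illustration.

```python
import numpy as np

def hill_climb(img, start):
    """Climb from a starting pixel to the brightest pixel in its 8-neighbourhood
    until no neighbour is brighter; return the local maximum's coordinates."""
    r, c = start
    while True:
        r0, r1 = max(r - 1, 0), min(r + 2, img.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, img.shape[1])
        window = img[r0:r1, c0:c1]
        dr, dc = np.unravel_index(np.argmax(window), window.shape)
        nr, nc = r0 + dr, c0 + dc
        if (nr, nc) == (r, c):
            return r, c
        r, c = nr, nc

def find_peaks(img, n_starts=200, seed=0):
    """Iterated hill climbing: launch climbs from random pixels and keep the
    set of distinct local maxima reached (candidate nuclei)."""
    rng = np.random.default_rng(seed)
    starts = zip(rng.integers(0, img.shape[0], n_starts),
                 rng.integers(0, img.shape[1], n_starts))
    return {hill_climb(img, s) for s in starts}

# toy image: two nearby Gaussian blobs standing in for a double nucleus
yy, xx = np.mgrid[0:64, 0:64]
img = (np.exp(-((xx - 28)**2 + (yy - 32)**2) / 20)
       + np.exp(-((xx - 38)**2 + (yy - 32)**2) / 20))
peaks = find_peaks(img)   # should recover two closely separated peaks
```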
2 code implementations • 19 May 2020 • Rohan Mohapatra, Snehanshu Saha, Carlos A. Coello Coello, Anwesh Bhattacharya, Soma S. Dhavala, Sriparna Saha
This paper introduces AdaSwarm, a novel gradient-free optimizer whose performance is similar to or better than that of the Adam optimizer widely adopted in neural networks.