no code implementations • 16 Apr 2024 • Truman Hickok, Dhireesha Kudithipudi
One of the most widely used approaches in continual learning is referred to as replay.
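As a minimal illustration of the replay idea (not this paper's specific method), a rehearsal buffer can be kept with reservoir sampling so that every example seen so far is equally likely to be retained; the names below are hypothetical:

```python
import random

class ReplayBuffer:
    """Fixed-size buffer using reservoir sampling so every example
    seen so far has an equal chance of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Replace a stored example with probability capacity/seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        # Draw a rehearsal mini-batch to interleave with new-task data.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```

During training on a new task, batches drawn with `sample` are mixed into the current mini-batch to mitigate catastrophic forgetting.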
no code implementations • 13 Mar 2024 • Fatima Tuz Zohora, Vedant Karia, Nicholas Soures, Dhireesha Kudithipudi
We demonstrate the efficacy of the proposed mechanism by integrating probabilistic metaplasticity into a spiking network trained on an error threshold with low-precision memristor weights.
no code implementations • 8 Mar 2024 • Gido M. van de Ven, Nicholas Soures, Dhireesha Kudithipudi
This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to let models learn continuously from new data, accumulating knowledge without forgetting what was learned in the past.
no code implementations • 5 Oct 2023 • Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein
Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI).
1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi
The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.
no code implementations • 18 Jan 2023 • Megan M. Baker, Alexander New, Mario Aguilar-Simon, Ziad Al-Halah, Sébastien M. R. Arnold, Ese Ben-Iwhiwhu, Andrew P. Brna, Ethan Brooks, Ryan C. Brown, Zachary Daniels, Anurag Daram, Fabien Delattre, Ryan Dellana, Eric Eaton, Haotian Fu, Kristen Grauman, Jesse Hostetler, Shariq Iqbal, Cassandra Kent, Nicholas Ketz, Soheil Kolouri, George Konidaris, Dhireesha Kudithipudi, Erik Learned-Miller, Seungwon Lee, Michael L. Littman, Sandeep Madireddy, Jorge A. Mendez, Eric Q. Nguyen, Christine D. Piatko, Praveen K. Pilly, Aswin Raghavan, Abrar Rahman, Santhosh Kumar Ramakrishnan, Neale Ratzlaff, Andrea Soltoggio, Peter Stone, Indranil Sur, Zhipeng Tang, Saket Tiwari, Kyle Vedder, Felix Wang, Zifan Xu, Angel Yanguas-Gil, Harel Yedidsion, Shangqun Yu, Gautam K. Vallabha
Despite recent advances in machine learning, state-of-the-art systems lack robustness to "real world" events: the input distributions and tasks encountered after deployment will not be limited to the original training context, so systems must adapt to novel distributions and tasks while deployed.
no code implementations • 6 Apr 2021 • Hamed F. Langroudi, Vedant Karia, Tej Pandit, Dhireesha Kudithipudi
In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models.
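TENT's tapered fixed-point format is not reproduced here, but the plain fixed-point quantization it improves upon can be sketched in a few lines (a hedged illustration; the Q-format parameters are illustrative, not from the paper):

```python
def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Quantize a real value to signed fixed-point Q(m.f):
    round to the nearest multiple of 2**-frac_bits, then
    saturate to the representable integer range."""
    scale = 1 << frac_bits
    qmax = (1 << (total_bits - 1)) - 1   # e.g. 127 for 8 bits
    qmin = -(1 << (total_bits - 1))      # e.g. -128
    q = round(x * scale)
    q = max(qmin, min(qmax, q))          # saturating clamp
    return q / scale
```

A tapered format instead adapts where the precision is concentrated, which is what makes it attractive for the small dynamic ranges of TinyML model parameters.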
no code implementations • 22 Jun 2020 • Abdullah M. Zyarah, Kevin Gomez, Dhireesha Kudithipudi
We also illustrate that the system offers 3.46X reduction in latency and 77.02X reduction in power consumption when compared to a custom CMOS digital design implemented at the same technology node.
2 code implementations • 22 Apr 2020 • Nicholas Soures, David Chambers, Zachariah Carmichael, Anurag Daram, Dimpy P. Shah, Kal Clark, Lloyd Potter, Dhireesha Kudithipudi
The SARS-CoV-2 infectious outbreak has rapidly spread across the globe and precipitated varying policies to effectuate physical distancing to ameliorate its impact.
no code implementations • 26 Feb 2020 • Fatima Tuz Zohora, Abdullah M. Zyarah, Nicholas Soures, Dhireesha Kudithipudi
In the $128\times128$ network, it is observed that the number of input patterns the multistate synapse can classify is $\simeq$2.1x that of a simple binary synapse model, at a mean accuracy of $\geq$75%.
no code implementations • 6 Aug 2019 • Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, Dhireesha Kudithipudi
Additionally, the framework is amenable for different quantization approaches and supports mixed-precision floating point and fixed-point numerical formats.
no code implementations • 30 Jul 2019 • Hamed F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit).
no code implementations • 1 Jul 2019 • Zachariah Carmichael, Humza Syed, Dhireesha Kudithipudi
Echo state networks are computationally lightweight reservoir models inspired by the random projections observed in cortical circuitry.
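A minimal sketch of what makes echo state networks lightweight: the input and recurrent weights are random and fixed, and only a linear readout on the reservoir states would be trained (hyperparameter values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
spectral_radius = 0.9   # illustrative value; tuned per task in practice

# Random, fixed input and recurrent weights (the "random projections").
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
# Rescale so the echo state property (fading memory) plausibly holds.
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def step(x, u, leak=0.3):
    """One leaky-integrator reservoir update; no weights are learned here."""
    return (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)

x = np.zeros(n_res)
for u in np.sin(np.linspace(0, 6, 50)).reshape(-1, 1):
    x = step(x, u)
```

Because only the readout is trained (e.g. by ridge regression on collected states), training cost is a single linear solve rather than backpropagation through time.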
no code implementations • 25 Mar 2019 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi
Our results indicate that posits are a natural fit for DNN inference, outperforming at $\leq$8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.
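To make the posit format concrete, here is an unoptimized decoder sketch following the published posit definition (sign, variable-length regime, `es` exponent bits, fraction); it is an illustration, not the paper's hardware implementation:

```python
def decode_posit(bits, n=8, es=1):
    """Decode an n-bit posit with es exponent bits to a float.
    Returns None for NaR (not-a-real)."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return None  # NaR
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask  # negative posits decode via two's complement
    # Regime: run length of identical bits after the sign bit.
    rest = [(bits >> i) & 1 for i in range(n - 2, -1, -1)]
    run = 1
    while run < len(rest) and rest[run] == rest[0]:
        run += 1
    regime = run - 1 if rest[0] == 1 else -run
    tail = rest[run + 1:]                # skip the regime terminator bit
    exp_bits, frac_bits = tail[:es], tail[es:]
    e = 0
    for b in exp_bits:
        e = (e << 1) | b
    e <<= es - len(exp_bits)             # shifted-out exponent bits are zeros
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(frac_bits))
    return sign * 2.0 ** (regime * (1 << es) + e) * (1.0 + frac)
```

The variable-length regime is what gives posits their tapered accuracy: values near 1 get more fraction bits than values at the extremes of the dynamic range.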
no code implementations • 27 Dec 2018 • Abdullah M. Zyarah, Dhireesha Kudithipudi
A memristor that is suitable for emulating the HTM synapses is identified and a new Z-window function is proposed.
no code implementations • 5 Dec 2018 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi
We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
no code implementations • 20 Oct 2018 • Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi
Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters with a nonlinear distribution and small dynamic range.
no code implementations • 17 Aug 2018 • Abdullah M. Zyarah, Dhireesha Kudithipudi
The spatial pooler architecture is synthesized on a Xilinx ZYNQ-7, achieving 91.16% classification accuracy on MNIST and 90% accuracy on EUNF with noise.
no code implementations • 1 Aug 2018 • Zachariah Carmichael, Humza Syed, Stuart Burtner, Dhireesha Kudithipudi
Neuro-inspired recurrent neural network algorithms, such as echo state networks, are computationally lightweight and thereby map well onto untethered devices.
no code implementations • 22 May 2018 • Seyed H. F. Langroudi, Tej Pandit, Dhireesha Kudithipudi
Performing the inference step of deep learning in resource constrained environments, such as embedded devices, is challenging.
no code implementations • 20 Feb 2018 • Qiuyi Wu, Ernest Fokoue, Dhireesha Kudithipudi
We create, develop, and implement a family of predictably optimal, robust, and stable ensembles of Echo State Networks by regularizing the training and perturbing the input.
no code implementations • 3 Nov 2017 • Dillon Graham, Seyed Hamed Fatemi Langroudi, Christopher Kanan, Dhireesha Kudithipudi
Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively.
no code implementations • 27 Jan 2016 • Cory Merkel, Dhireesha Kudithipudi
Neuromemristive systems (NMSs) currently represent the most promising platform to achieve energy efficient neuro-inspired computation.
1 code implementation • 22 Jan 2016 • James Mnatzaganian, Ernest Fokoué, Dhireesha Kudithipudi
Hierarchical temporal memory (HTM) is an emerging machine learning algorithm, with the potential to provide a means to perform predictions on spatiotemporal data.
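The HTM spatial pooler's core computation can be sketched compactly (a hedged toy version: overlap scoring plus top-k inhibition, with the learning/permanence-update step omitted; sizes and thresholds are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

n_inputs, n_columns, k_active = 64, 128, 5
# Each column connects to inputs via synapses; a synapse counts as
# "connected" when its permanence exceeds a threshold.
permanence = rng.uniform(0.0, 1.0, (n_columns, n_inputs))
connected = permanence > 0.5

def spatial_pool(sdr):
    """Overlap = number of connected synapses on active input bits;
    inhibition keeps only the top-k columns, yielding a sparse
    distributed representation (SDR)."""
    overlap = connected @ sdr
    winners = np.argsort(overlap)[-k_active:]
    out = np.zeros(n_columns, dtype=int)
    out[winners] = 1
    return out

x = (rng.random(n_inputs) > 0.8).astype(int)  # a random binary input SDR
y = spatial_pool(x)
```

In the full algorithm, permanences of winning columns are then nudged toward the active input bits, which is what makes the pooler learn stable representations of spatiotemporal input.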