no code implementations • 20 Nov 2023 • Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri
Continual learning has gained substantial attention within the deep learning community, offering promising solutions to the challenging problem of sequential learning.
1 code implementation • 4 Oct 2023 • Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash
Parameter-efficient fine-tuning methods can reduce the number of parameters needed to fine-tune an LLM by several orders of magnitude.
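The scale of that reduction is easy to see with a back-of-the-envelope parameter count. The sketch below uses a generic low-rank adapter (LoRA-style) purely as an illustrative baseline; the layer width `d` and rank `r` are hypothetical values, not figures from the paper.

```python
# Hypothetical sizes for illustration only, not taken from the paper.
d = 4096   # width of one dense LLM layer
r = 8      # adapter rank

full_finetune = d * d     # trainable params when updating the full d x d matrix
low_rank = 2 * d * r      # two thin factors instead: A (d x r) and B (r x d)

print(full_finetune)                 # 16777216
print(low_rank)                      # 65536
print(full_finetune // low_rank)     # 256x fewer trainable parameters per layer
```

Across all the layers of a large model, per-layer factors like this compound into the multiple-orders-of-magnitude savings the snippet mentions.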
2 code implementations • ICCV 2023 • Parsa Nooralinejad, Ali Abbasi, Soroush Abbasi Koohpayegani, Kossar Pourahmadi Meibodi, Rana Muhammad Shahroz Khan, Soheil Kolouri, Hamed Pirsiavash
We demonstrate that a deep model can be reparametrized as a linear combination of several randomly initialized and frozen deep models in the weight space.
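Stated as code, the key point is that only the mixing coefficients are trainable while the basis networks stay frozen. Below is a minimal NumPy sketch for a single weight matrix; the shapes, the number of basis models `k`, and the variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, k = 8, 16, 4   # illustrative sizes; k frozen random basis models

# k randomly initialized, frozen weight matrices (never updated during training).
basis = rng.standard_normal((k, d_out, d_in))

# The only trainable parameters: k mixing coefficients.
alpha = rng.standard_normal(k)

# Effective weight = linear combination of the frozen basis in weight space.
W = np.tensordot(alpha, basis, axes=1)   # shape (d_out, d_in)

# Storage cost: k scalars (plus an RNG seed to regenerate the basis)
# instead of d_out * d_in full weights.
print(alpha.size, d_out * d_in)          # 4 128
```

Because the frozen basis can be regenerated from a seed, only `alpha` needs to be stored or communicated, which is what makes this reparametrization compact.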
no code implementations • 12 Mar 2022 • Ali Abbasi, Parsa Nooralinejad, Vladimir Braverman, Hamed Pirsiavash, Soheil Kolouri
Overcoming catastrophic forgetting in deep neural networks has become an active field of research in recent years.
1 code implementation • 22 Oct 2021 • Kossar Pourahmadi, Parsa Nooralinejad, Hamed Pirsiavash
However, most such methods assume that a large subset of the data can be annotated.
no code implementations • 11 Oct 2020 • Reza Hojabr, Kamyar Givaki, Kossar Pourahmadi, Parsa Nooralinejad, Ahmad Khonsari, Dara Rahmati, M. Hassan Najafi
In this work, we first present a novel approach to adding training capability to a baseline, inference-only DNN accelerator by splitting the SGD algorithm into simple computational elements.
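As a minimal sketch of what splitting SGD into simple computational elements can mean (a generic illustration, not the paper's actual datapath): the update w ← w − η·∇w factors into an elementwise multiply followed by an elementwise subtract, both of which map onto arithmetic units an inference accelerator already provides.

```python
import numpy as np

def sgd_step(w, grad, lr):
    """Reference SGD update: w <- w - lr * grad."""
    return w - lr * grad

def sgd_step_split(w, grad, lr):
    """The same update decomposed into two primitive elementwise operations,
    the kind of simple units an inference-only accelerator already has."""
    scaled = lr * grad    # elementwise multiplier unit
    return w - scaled     # elementwise subtractor/accumulator unit

w = np.array([1.0, -2.0, 0.5])
g = np.array([0.1, 0.4, -0.2])
assert np.allclose(sgd_step(w, g, 0.01), sgd_step_split(w, g, 0.01))
```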