1 code implementation • 26 May 2024 • Shuvendu Roy, Elham Dolatabadi, Arash Afkanpour, Ali Etemad
For the first time, we explore few-shot tuning of vision foundation models for class-incremental learning.
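One way to picture the few-shot class-incremental setup is a frozen foundation-model backbone whose class set grows by a handful of labelled examples per session. The sketch below uses nearest-prototype classification with a stand-in backbone; it is a generic baseline to illustrate the setting, not the paper's method.

```python
# Hypothetical sketch: few-shot class-incremental learning with a frozen
# vision backbone and nearest-prototype classification. Generic baseline,
# not the paper's method; the backbone here is a stand-in.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in foundation model
backbone.eval()  # frozen: no gradient updates in this sketch

prototypes = {}  # class id -> mean embedding, grown session by session

@torch.no_grad()
def add_session(images_by_class):
    """Add new classes from a few-shot session (dict: class id -> [N, 3, 32, 32])."""
    for cls, imgs in images_by_class.items():
        prototypes[cls] = backbone(imgs).mean(dim=0)

@torch.no_grad()
def predict(images):
    emb = backbone(images)                           # [B, D]
    protos = torch.stack(list(prototypes.values()))  # [C, D]
    classes = list(prototypes.keys())
    dists = torch.cdist(emb, protos)                 # [B, C]
    return [classes[i] for i in dists.argmin(dim=1)]

# Session 1: 5-shot examples for two new classes
add_session({0: torch.randn(5, 3, 32, 32), 1: torch.randn(5, 3, 32, 32)})
print(predict(torch.randn(2, 3, 32, 32)))
```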
1 code implementation • 23 May 2024 • Adibvafa Fallahpour, Mahshid Alinoori, Arash Afkanpour, Amrit Krishnan
We also introduce a novel approach to Multitask Prompted Finetuning (MTF) for EHR data, which enables EHRMamba to learn multiple clinical tasks simultaneously in a single finetuning phase, simplifying deployment and improving cross-task generalization.
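The core mechanic of MTF-style training is mixing prompt-prefixed examples from all tasks into one stream so that a single finetuning phase covers every task. The task names, prompt templates, and toy EHR records below are illustrative assumptions, not EHRMamba's actual schema.

```python
# Minimal sketch of multitask prompted finetuning (MTF) data mixing.
# Task names, templates, and records are illustrative, not EHRMamba's
# actual schema; the point is one mixed stream for all tasks.
import random

TASK_PROMPTS = {
    "mortality": "Predict in-hospital mortality:",
    "los": "Predict length of stay:",
    "readmission": "Predict 30-day readmission:",
}

def build_mtf_examples(records_by_task):
    """Flatten per-task records into one shuffled stream of (input, target)."""
    examples = []
    for task, records in records_by_task.items():
        for events, label in records:
            examples.append((f"{TASK_PROMPTS[task]} {events}", label))
    random.shuffle(examples)  # a single finetuning phase sees all tasks mixed
    return examples

records = {
    "mortality": [("age=71 hr=110 lactate=4.2", "yes")],
    "los": [("age=54 admit=ED diag=pneumonia", "6 days")],
    "readmission": [("age=80 discharge=home prior_admits=3", "no")],
}
for x, y in build_mtf_examples(records):
    print(x, "->", y)
```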
no code implementations • 9 Mar 2024 • Sana Ayromlou, Arash Afkanpour, Vahid Reza Khazaie, Fereshteh Forghani
By directly conditioning generative models on a source image representation, our method enables the generation of diverse augmentations while maintaining the semantics of the source image, thus offering a richer set of data for self-supervised learning.
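The idea can be sketched as: encode the source image, perturb the resulting representation, and decode each perturbed embedding into a new view. The encoder, decoder, and noise model below are placeholder assumptions rather than the paper's architecture.

```python
# Hedged sketch of representation-conditioned augmentation: encode the
# source image, perturb the embedding, decode variants. Encoder, decoder,
# and noise scale are placeholders, not the paper's generative model.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
dec = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))

@torch.no_grad()
def generate_augmentations(image, n_views=4, noise_scale=0.1):
    z = enc(image.unsqueeze(0))                # source representation
    z = z.repeat(n_views, 1)
    z = z + noise_scale * torch.randn_like(z)  # diversity around the source
    return dec(z)                              # semantically anchored views

views = generate_augmentations(torch.randn(3, 32, 32))
print(views.shape)  # torch.Size([4, 3, 32, 32])
```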
no code implementations • 7 Nov 2023 • Philip Andrew Mansfield, Arash Afkanpour, Warren Richard Morningstar, Karan Singhal
In this work, we propose a new family of local transformations based on Gaussian random fields to generate image augmentations for self-supervised representation learning.
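One simple instance of such a field: smooth white noise with a Gaussian kernel so that nearby pixels receive correlated perturbations, then use the field to modulate pixel intensities locally. The field construction and modulation rule below are illustrative choices, not necessarily the paper's.

```python
# Sketch of a Gaussian-random-field local transformation: smoothed white
# noise gives a spatially varying field that modulates pixel intensities.
import numpy as np
from scipy.ndimage import gaussian_filter

def grf_augment(image, length_scale=8.0, strength=0.3, rng=None):
    """image: [H, W, C] float array in [0, 1]."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    field = gaussian_filter(rng.standard_normal((h, w)), sigma=length_scale)
    field = field / (np.abs(field).max() + 1e-8)       # normalize to [-1, 1]
    out = image * (1.0 + strength * field[..., None])  # local brightness shift
    return np.clip(out, 0.0, 1.0)

augmented = grf_augment(np.random.rand(64, 64, 3))
print(augmented.shape)  # (64, 64, 3)
```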
no code implementations • 23 May 2023 • Elahe Vedadi, Joshua V. Dillon, Philip Andrew Mansfield, Karan Singhal, Arash Afkanpour, Warren Richard Morningstar
We then approximate this process with variational inference to train our model efficiently.
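As a generic illustration of the training recipe (not the paper's federated model), the sketch below fits a Gaussian variational posterior over a latent mean by maximizing the ELBO with the reparameterization trick.

```python
# Generic variational inference sketch (not the paper's federated model):
# fit a Gaussian posterior over a latent mean by maximizing the ELBO.
import torch
import torch.distributions as dist

data = torch.randn(100) + 2.0              # observations with true mean ~2
q_mu = torch.zeros(1, requires_grad=True)  # variational parameters
q_log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([q_mu, q_log_sigma], lr=0.05)
prior = dist.Normal(0.0, 5.0)

for step in range(200):
    q = dist.Normal(q_mu, q_log_sigma.exp())
    z = q.rsample()                        # reparameterized sample
    log_lik = dist.Normal(z, 1.0).log_prob(data).sum()
    elbo = log_lik + prior.log_prob(z).sum() - q.log_prob(z).sum()
    loss = -elbo
    opt.zero_grad(); loss.backward(); opt.step()

print(float(q_mu))  # approaches the sample mean of the data
```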
no code implementations • 4 Nov 2022 • Arash Afkanpour, Shabir Adeel, Hansenclever Bassani, Arkady Epshteyn, Hongbo Fan, Isaac Jones, Mahan Malihi, Adrian Nauth, Raj Sinha, Sanjana Woonna, Shiva Zamani, Elli Kanal, Mikhail Fomitchev, Donny Cheung
Transformer models have achieved great success across many NLP problems.
no code implementations • 30 Sep 2022 • Raviteja Vemulapalli, Warren Richard Morningstar, Philip Andrew Mansfield, Hubert Eichner, Karan Singhal, Arash Afkanpour, Bradley Green
In this work, we focus on federated training of dual encoding models on decentralized data composed of many small client datasets that are non-IID (not independent and identically distributed).
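A plain baseline for this setting pairs local contrastive updates on each client's small batch with FedAvg weight averaging at the server. The sketch below shows that generic setup, not the specific method proposed in the paper.

```python
# Baseline sketch of federated dual-encoder training: each client runs a
# local contrastive step, then the server averages weights (plain FedAvg).
# Generic setup for illustration, not the paper's proposed method.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    # two encoders mapping paired inputs (e.g., query/document) to one space
    return nn.ModuleDict({"a": nn.Linear(16, 8), "b": nn.Linear(16, 8)})

def local_step(model, xa, xb, lr=0.1):
    za = F.normalize(model["a"](xa), dim=1)
    zb = F.normalize(model["b"](xb), dim=1)
    logits = za @ zb.t() / 0.1                       # in-batch negatives
    loss = F.cross_entropy(logits, torch.arange(len(xa)))
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():                 # one manual SGD step
            p -= lr * p.grad
            p.grad = None

global_model = make_model()
for rnd in range(3):                                 # federated rounds
    client_models = []
    for _ in range(4):                               # small non-IID clients
        m = copy.deepcopy(global_model)
        local_step(m, torch.randn(8, 16), torch.randn(8, 16))
        client_models.append(m)
    with torch.no_grad():                            # FedAvg aggregation
        for name, p in global_model.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in client_models])
            p.copy_(stacked.mean(0))
```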
2 code implementations • 5 Nov 2021 • Michal Lisicki, Arash Afkanpour, Graham W. Taylor
We consider policies based on a Gaussian process (GP) and a Student's t-process (TP).
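For intuition, a GP-based policy can be written as upper-confidence-bound selection over a candidate grid, as in the scikit-learn sketch below; scikit-learn provides no Student's t-process, so only the GP case is shown, and the kernel and exploration constant are default assumptions.

```python
# Sketch of a GP-based bandit policy with a UCB acquisition rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # unknown function to maximize
    return -(x - 0.3) ** 2

candidates = np.linspace(0, 1, 101).reshape(-1, 1)
X, y = [[0.0]], [objective(0.0)]        # one initial observation

for t in range(10):
    gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 2.0 * sigma              # optimism in the face of uncertainty
    x_next = candidates[int(np.argmax(ucb))]
    X.append(list(x_next))
    y.append(objective(x_next[0]))

print(X[int(np.argmax(y))])             # best point found, near 0.3
```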
no code implementations • NeurIPS Workshop LMCA 2020 • Michal Lisicki, Arash Afkanpour, Graham W. Taylor
Neural combinatorial optimization (NCO) aims to design efficient, problem-independent neural network strategies for solving combinatorial optimization problems.
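Such a strategy typically takes the form of a learned construction policy: a network scores the next element to add until a full solution is built. The untrained toy policy below shows this shape for TSP; actual NCO methods would train it, for example with reinforcement learning.

```python
# Minimal (untrained) sketch of a neural construction policy for TSP:
# a small network scores unvisited cities given the current city, and
# the tour is built greedily. Illustrates the shape only; real NCO
# methods train such a policy.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

@torch.no_grad()
def construct_tour(cities):             # cities: [N, 2] coordinates
    n = len(cities)
    tour, current = [0], 0
    unvisited = set(range(1, n))
    while unvisited:
        cand = list(unvisited)
        feats = torch.cat([cities[current].expand(len(cand), 2),
                           cities[cand]], dim=1)  # (current, candidate) pairs
        scores = scorer(feats).squeeze(1)
        current = cand[int(scores.argmax())]
        tour.append(current)
        unvisited.remove(current)
    return tour

print(construct_tour(torch.rand(6, 2)))
```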