Search Results for author: Arash Afkanpour

Found 9 papers, 3 papers with code

Few-shot Tuning of Foundation Models for Class-incremental Learning

1 code implementation • 26 May 2024 • Shuvendu Roy, Elham Dolatabadi, Arash Afkanpour, Ali Etemad

For the first time, we explore few-shot tuning of vision foundation models for class-incremental learning.

EHRMamba: Towards Generalizable and Scalable Foundation Models for Electronic Health Records

1 code implementation • 23 May 2024 • Adibvafa Fallahpour, Mahshid Alinoori, Arash Afkanpour, Amrit Krishnan

We also introduce a novel approach to Multitask Prompted Finetuning (MTF) for EHR data, which enables EHRMamba to simultaneously learn multiple clinical tasks in a single finetuning phase, significantly enhancing deployment and cross-task generalization.
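Below is a minimal, illustrative sketch of the general idea behind multitask prompted finetuning on EHR data: each clinical task is mapped to a prompt prefix, and examples from all tasks are mixed into a single finetuning stream. The task names, prompt tokens, and record fields are assumptions for illustration, not the EHRMamba implementation.

```python
# A minimal sketch of multitask prompted finetuning (MTF) data preparation,
# assuming hypothetical task names and record fields; this is NOT the
# EHRMamba implementation, only an illustration of mixing clinical tasks
# into one prompted finetuning stream.
import random

# Hypothetical clinical tasks, each mapped to a textual prompt prefix.
TASK_PROMPTS = {
    "mortality": "[TASK=MORTALITY]",
    "readmission": "[TASK=READMISSION]",
    "length_of_stay": "[TASK=LOS]",
}

def to_prompted_example(task, event_tokens, label):
    """Prefix an EHR event sequence with its task prompt so a single model
    can be finetuned on all tasks at once."""
    return {"input": [TASK_PROMPTS[task]] + event_tokens, "label": label, "task": task}

def build_multitask_stream(datasets, seed=0):
    """Interleave examples from all tasks into a single shuffled stream."""
    stream = [
        to_prompted_example(task, ex["events"], ex["label"])
        for task, examples in datasets.items()
        for ex in examples
    ]
    random.Random(seed).shuffle(stream)
    return stream

if __name__ == "__main__":
    # Toy EHR sequences (event codes) with binary labels.
    datasets = {
        "mortality": [{"events": ["ICD9_428", "LAB_HIGH_CREAT"], "label": 1}],
        "readmission": [{"events": ["ICD9_401", "RX_METFORMIN"], "label": 0}],
        "length_of_stay": [{"events": ["PROC_CABG"], "label": 1}],
    }
    for ex in build_multitask_stream(datasets):
        print(ex["task"], ex["input"], ex["label"])
```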

Can Generative Models Improve Self-Supervised Representation Learning?

no code implementations • 9 Mar 2024 • Sana Ayromlou, Arash Afkanpour, Vahid Reza Khazaie, Fereshteh Forghani

By directly conditioning generative models on a source image representation, our method enables the generation of diverse augmentations while maintaining the semantics of the source image, thus offering a richer set of data for self-supervised learning.

Representation Learning • Self-Supervised Learning
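The sketch below illustrates the stated idea of conditioning a generator on a source-image representation to produce semantically consistent views for a self-supervised objective. The module sizes, noise conditioning, and InfoNCE-style loss are illustrative assumptions, not the authors' architecture.

```python
# A minimal PyTorch sketch: a generator conditioned on the source-image
# representation produces an augmented view, and both views feed a
# contrastive SSL loss. Everything here is an assumption for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Maps an image to a representation vector (stand-in for a real backbone)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
    def forward(self, x):
        return self.net(x)

class ConditionalGenerator(nn.Module):
    """Generates an augmented view conditioned on the source representation."""
    def __init__(self, dim=128, size=32):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(dim + dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * size * size), nn.Tanh(),
        )
    def forward(self, rep):
        noise = torch.randn_like(rep)                    # diversity comes from noise
        out = self.net(torch.cat([rep, noise], dim=1))   # semantics come from rep
        return out.view(-1, 3, self.size, self.size)

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    encoder, generator = TinyEncoder(), ConditionalGenerator()
    images = torch.randn(8, 3, 32, 32)             # toy batch
    rep = encoder(images)
    generated_view = generator(rep.detach())       # augmentation from the generator
    loss = info_nce(encoder(images), encoder(generated_view))
    loss.backward()
    print("SSL loss on generated views:", float(loss))
```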

Random Field Augmentations for Self-Supervised Representation Learning

no code implementations • 7 Nov 2023 • Philip Andrew Mansfield, Arash Afkanpour, Warren Richard Morningstar, Karan Singhal

In this work, we propose a new family of local transformations based on Gaussian random fields to generate image augmentations for self-supervised representation learning.

Representation Learning
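As a concrete illustration of local transformations driven by Gaussian random fields, the sketch below samples smooth random fields (white noise filtered with a Gaussian kernel) and uses them as per-pixel displacements to warp an image. The field construction and parameters are assumptions for illustration, not the exact transformation family proposed in the paper.

```python
# A minimal NumPy/SciPy sketch of a Gaussian-random-field-style local
# transformation: smooth random fields act as per-pixel displacements
# that warp the image. Parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def gaussian_random_field(shape, smoothness=8.0, rng=None):
    """Smooth white noise with a Gaussian kernel to get a spatially correlated field."""
    rng = rng or np.random.default_rng()
    field = gaussian_filter(rng.standard_normal(shape), sigma=smoothness)
    return field / (np.abs(field).max() + 1e-8)   # roughly normalize to [-1, 1]

def random_field_warp(image, max_shift=4.0, smoothness=8.0, rng=None):
    """Warp an HxWxC image with two independent random displacement fields."""
    h, w = image.shape[:2]
    dx = gaussian_random_field((h, w), smoothness, rng) * max_shift
    dy = gaussian_random_field((h, w), smoothness, rng) * max_shift
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [ys + dy, xs + dx]
    warped = np.stack(
        [map_coordinates(image[..., c], coords, order=1, mode="reflect")
         for c in range(image.shape[-1])],
        axis=-1,
    )
    return warped

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))          # toy image in [0, 1]
    aug = random_field_warp(img, rng=rng)
    print(img.shape, aug.shape, float(np.abs(img - aug).mean()))
```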

Federated Training of Dual Encoding Models on Small Non-IID Client Datasets

no code implementations • 30 Sep 2022 • Raviteja Vemulapalli, Warren Richard Morningstar, Philip Andrew Mansfield, Hubert Eichner, Karan Singhal, Arash Afkanpour, Bradley Green

In this work, we focus on federated training of dual encoding models on decentralized data composed of many small, non-IID (not independent and identically distributed) client datasets.

Federated Learning • Representation Learning
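To make the problem setting concrete, here is a generic federated-averaging sketch for a dual encoding (two-tower) model trained over many small clients. This illustrates the setting only; it is not the specific training method proposed in the paper, and all model sizes, losses, and client counts are assumptions.

```python
# A generic FedAvg-style sketch for a dual encoding model over many small
# clients. Illustrative only: NOT the method proposed in the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Two towers mapping paired inputs (e.g., image/text features) to a shared space."""
    def __init__(self, dim_a=32, dim_b=48, dim_out=16):
        super().__init__()
        self.tower_a = nn.Linear(dim_a, dim_out)
        self.tower_b = nn.Linear(dim_b, dim_out)
    def forward(self, a, b):
        return F.normalize(self.tower_a(a), dim=1), F.normalize(self.tower_b(b), dim=1)

def contrastive_loss(za, zb, temperature=0.1):
    logits = za @ zb.t() / temperature
    targets = torch.arange(za.size(0))
    return F.cross_entropy(logits, targets)

def local_update(global_model, client_data, steps=5, lr=1e-2):
    """One client: copy the global model and train on its small local dataset."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    a, b = client_data
    for _ in range(steps):
        opt.zero_grad()
        contrastive_loss(*model(a, b)).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Average client weights into a new global state dict."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    # Many small, heterogeneous (non-IID-like) client datasets of paired examples.
    clients = [(torch.randn(4, 32) + i, torch.randn(4, 48) + i) for i in range(10)]
    global_model = DualEncoder()
    for round_idx in range(3):                      # a few federated rounds
        states = [local_update(global_model, data) for data in clients]
        global_model.load_state_dict(fed_avg(states))
        print("finished round", round_idx)
```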

Evaluating Curriculum Learning Strategies in Neural Combinatorial Optimization

no code implementations • NeurIPS Workshop LMCA 2020 • Michal Lisicki, Arash Afkanpour, Graham W. Taylor

Neural combinatorial optimization (NCO) aims at designing problem-independent and efficient neural network-based strategies for solving combinatorial problems.

Combinatorial Optimization • Efficient Neural Network • +2
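One common curriculum strategy in neural combinatorial optimization is to order training instances from small to large problem sizes. The sketch below shows that ordering on toy TSP instances; the instance generator and the train_step stub are assumptions for illustration, not the specific strategies evaluated in the paper.

```python
# A minimal sketch of a size-based curriculum for NCO training data.
# Illustrative assumptions throughout; not the paper's evaluated strategies.
import random

def make_tsp_instance(num_nodes, rng):
    """A toy TSP instance: a list of 2-D city coordinates."""
    return [(rng.random(), rng.random()) for _ in range(num_nodes)]

def size_curriculum(instances):
    """Order instances by a difficulty proxy (here: number of nodes)."""
    return sorted(instances, key=len)

def train_step(batch):
    """Stand-in for one gradient step of an NCO policy on a batch of instances."""
    return sum(len(inst) for inst in batch) / len(batch)   # reports mean size only

if __name__ == "__main__":
    rng = random.Random(0)
    instances = [make_tsp_instance(rng.randint(5, 50), rng) for _ in range(100)]
    curriculum = size_curriculum(instances)
    batch_size = 10
    for start in range(0, len(curriculum), batch_size):
        mean_size = train_step(curriculum[start:start + batch_size])
        print(f"batch starting at {start}: mean instance size {mean_size:.1f}")
```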
