Search Results for author: Ashish Ramayee Asokan

Found 5 papers, 2 papers with code

DeiT-LT: Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets

1 code implementation • 3 Apr 2024 Harsh Rangwani, Pradipto Mondal, Mayank Mishra, Ashish Ramayee Asokan, R. Venkatesh Babu

In DeiT-LT, we introduce an efficient and effective way of distilling from a CNN via a dedicated distillation (DIST) token, using out-of-distribution images and re-weighting the distillation loss to enhance focus on tail classes (a sketch of such a re-weighted loss follows below).

Ranked #1 on Image Classification on iNaturalist (Overall metric)

Image Classification • Inductive Bias • +1
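
The abstract above outlines the core recipe: hard-label distillation from a CNN teacher through a separate DIST token, with the distillation loss re-weighted toward tail classes and computed on out-of-distribution images. Below is a minimal sketch of such a re-weighted loss, assuming PyTorch; the inverse-frequency weighting scheme and all names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def reweighted_distillation_loss(dist_logits, teacher_logits, class_counts, tau=1.0):
    """Hard-label distillation from a CNN teacher, re-weighted so that
    samples the teacher assigns to rare (tail) classes contribute more.

    dist_logits:    [B, C] logits from the student's DIST token
    teacher_logits: [B, C] logits from the CNN teacher (no gradient)
    class_counts:   [C] per-class training-set frequencies
    """
    # Hard teacher labels, as in DeiT-style distillation
    teacher_labels = teacher_logits.argmax(dim=1)

    # Illustrative re-weighting (assumption, not the paper's exact scheme):
    # inverse class frequency, normalized so the weights average to 1
    weights = class_counts.sum() / (len(class_counts) * class_counts.float())
    sample_weights = weights[teacher_labels]

    # Per-sample cross-entropy against the teacher's hard labels,
    # scaled up for samples falling in tail classes
    loss = F.cross_entropy(dist_logits / tau, teacher_labels, reduction="none")
    return (sample_weights * loss).mean()
```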

Aligning Non-Causal Factors for Transformer-Based Source-Free Domain Adaptation

no code implementations • 27 Nov 2023 Sunandini Sanyal, Ashish Ramayee Asokan, Suvaansh Bhambri, Pradyumna YM, Akshay Kulkarni, Jogendra Nath Kundu, R. Venkatesh Babu

Conventional domain adaptation algorithms aim to achieve better generalization by aligning only the task-discriminative causal factors between a source and target domain.

Disentanglement • Privacy Preserving • +1

Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification

1 code implementation • 12 Oct 2023 Sravanti Addepalli, Ashish Ramayee Asokan, Lakshay Sharma, R. Venkatesh Babu

The proposed approach achieves state-of-the-art results on standard Domain Generalization benchmarks, both in a black-box teacher setting, where only the VLM's predictions are available, and in a white-box setting, where the weights of the VLM are accessible (a sketch of black-box distillation follows below).

Domain Generalization • Image Classification
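
In the black-box teacher setting mentioned above, only the VLM's output probabilities (e.g., CLIP zero-shot scores) can be queried, never its weights or gradients. Below is a minimal sketch of one such distillation step, assuming PyTorch; the function name, temperature, and the choice of KL-divergence matching are illustrative assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def blackbox_distillation_step(student, images, teacher_probs, optimizer, tau=2.0):
    """One distillation step with a black-box teacher: the student is
    trained to match soft labels pre-computed by querying the VLM.

    images:        [B, ...] input batch
    teacher_probs: [B, C] teacher probabilities obtained via API/forward
                   queries only (no access to VLM weights)
    """
    optimizer.zero_grad()
    student_logits = student(images)

    # KL divergence between softened student and teacher distributions;
    # the tau**2 factor keeps gradient magnitudes comparable across temperatures
    loss = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        teacher_probs,
        reduction="batchmean",
    ) * tau * tau

    loss.backward()
    optimizer.step()
    return loss.item()
```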

Domain-Specificity Inducing Transformers for Source-Free Domain Adaptation

no code implementations • ICCV 2023 Sunandini Sanyal, Ashish Ramayee Asokan, Suvaansh Bhambri, Akshay Kulkarni, Jogendra Nath Kundu, R. Venkatesh Babu

We are the first to utilize vision transformers for domain adaptation in a privacy-oriented source-free setting, and our approach achieves state-of-the-art performance on single-source, multi-source, and multi-target benchmarks.

Disentanglement • Source-Free Domain Adaptation • +1

Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors

no code implementations • 2 Feb 2022 Ashish Ramayee Asokan, Nidarshan Kumar, Anirudh Venkata Ragam, Shylaja S Sharath

We then evaluate the influence of our proposed concepts at multiple layers of the Bi-directional Contextual LSTM (BC-LSTM) network, showing that the reasoning process of neural networks for emotion recognition can be represented using human-understandable concepts (a TCAV-style sketch of computing a concept activation vector follows below).

Decision Making • Multimodal Emotion Recognition
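
Concept Activation Vectors, which this paper applies to the BC-LSTM, come from the TCAV framework (Kim et al., 2018): a linear classifier is fit to separate a layer's activations on concept examples from its activations on random examples, and the normal to its decision boundary serves as the concept's direction. The sketch below, assuming NumPy and scikit-learn, illustrates that general recipe; the helper names are hypothetical and this is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts, random_acts):
    """Compute a CAV for one layer: fit a linear classifier separating
    concept-example activations from random-example activations, and
    return the unit normal to its decision boundary.

    concept_acts, random_acts: [N, D] arrays of layer activations
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_sensitivity(grad_of_logit_wrt_acts, cav):
    """Directional derivative of a class logit along the CAV: a positive
    value means the concept pushes the prediction toward that class."""
    return grad_of_logit_wrt_acts @ cav
```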
