Search Results for author: Neha Kalibhat

Found 6 papers, 1 paper with code

Disentangling the Effects of Data Augmentation and Format Transform in Self-Supervised Learning of Image Representations

no code implementations • 2 Dec 2023 • Neha Kalibhat, Warren Morningstar, Alex Bijamov, Luyang Liu, Karan Singhal, Philip Mansfield

We define augmentations in frequency space called Fourier Domain Augmentations (FDA) and show that training SSL models on a combination of these and image augmentations can improve the downstream classification accuracy by up to 1.3% on ImageNet-1K.

Data Augmentation · Self-Supervised Learning +1
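The abstract above describes augmentations defined in frequency space. As a rough illustration of the idea (not the paper's exact FDA recipe), one can jitter the Fourier amplitude spectrum of an image while preserving its phase; the function name and the strength parameter below are assumptions made purely for this sketch.

```python
import numpy as np

def fourier_amplitude_jitter(image, strength=0.1, rng=None):
    """Illustrative frequency-space augmentation: scale the Fourier
    amplitude spectrum of an image with random noise while keeping the
    phase. A generic sketch only, not the paper's FDA definition."""
    rng = np.random.default_rng() if rng is None else rng
    # 2D FFT per channel for an (H, W, C) image in [0, 1]
    spectrum = np.fft.fft2(image, axes=(0, 1))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Multiplicative jitter on the amplitude spectrum
    noise = 1.0 + strength * rng.standard_normal(amplitude.shape)
    augmented = amplitude * noise * np.exp(1j * phase)
    # Back to image space; keep the real part and clip to a valid range
    out = np.real(np.fft.ifft2(augmented, axes=(0, 1)))
    return np.clip(out, 0.0, 1.0)

# Example: augment a random 224x224 RGB image
img = np.random.rand(224, 224, 3)
aug = fourier_amplitude_jitter(img, strength=0.2)
```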

Adapting Self-Supervised Representations to Multi-Domain Setups

no code implementations • 7 Sep 2023 • Neha Kalibhat, Sam Sharpe, Jeremy Goodsitt, Bayan Bruss, Soheil Feizi

Current state-of-the-art self-supervised approaches are effective when trained on individual domains but show limited generalization on unseen domains.

Disentanglement

Identifying Interpretable Subspaces in Image Representations

1 code implementation • 20 Jul 2023 • Neha Kalibhat, Shweta Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi

Although many existing approaches interpret features independently, we observe that in state-of-the-art self-supervised and supervised models, less than 20% of the representation space can be explained by individual features.

Counterfactual · Language Modelling

Measuring Self-Supervised Representation Quality for Downstream Classification using Discriminative Features

no code implementations • 3 Mar 2022 • Neha Kalibhat, Kanika Narang, Hamed Firooz, Maziar Sanjabi, Soheil Feizi

Fine-tuning with Q-Score regularization can boost the linear probing accuracy of SSL models by up to 5.8% on ImageNet-100 and 3.7% on ImageNet-1K compared to their baselines.

Linear evaluation · Self-Supervised Learning
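The reported gains above are measured with linear probing. For readers unfamiliar with that protocol, here is a generic sketch of linear probing on a frozen SSL encoder: a single linear layer is trained on frozen features and its test accuracy is reported. Q-Score regularization itself is not shown, and the encoder, feature dimension, and data loaders are placeholders, not the paper's code.

```python
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, test_loader, feat_dim=2048,
                 num_classes=1000, epochs=10, lr=0.1, device="cpu"):
    """Generic linear-probing sketch: train a linear classifier on
    frozen encoder features, then report top-1 test accuracy."""
    encoder.eval()                          # freeze the SSL backbone
    for p in encoder.parameters():
        p.requires_grad_(False)

    probe = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(probe.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                feats = encoder(images)     # frozen features
            loss = loss_fn(probe(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()

    correct = total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            feats = encoder(images.to(device))
            preds = probe(feats).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total
```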
