Search Results for author: Kuluhan Binici

Found 4 papers, 2 papers with code

CRISP: Hybrid Structured Sparsity for Class-aware Model Pruning

1 code implementation • 24 Nov 2023 • Shivam Aggarwal, Kuluhan Binici, Tulika Mitra

Machine learning pipelines for classification tasks often train a universal model to achieve accuracy across a broad range of classes.

Computational Efficiency
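
As a rough illustration of the class-aware pruning idea, the sketch below scores weights using gradients from a user-preferred subset of classes and applies a 2:4 structured-sparsity mask. The scoring rule, function names, and the 2:4 pattern are assumptions for illustration, not CRISP's exact procedure.

```python
# Illustrative class-aware pruning sketch (NOT CRISP's exact algorithm):
# score weights with gradients from a preferred-class subset, then keep
# the top 2 weights in every group of 4 (a common N:M sparsity pattern).
import torch

def class_aware_saliency(model, loader, preferred_classes, loss_fn):
    """Accumulate |w * dL/dw| over samples whose labels fall in
    `preferred_classes` (a 1-D tensor of class ids).
    Only 2-D (linear) weights are scored in this sketch."""
    saliency = {n: torch.zeros_like(p)
                for n, p in model.named_parameters() if p.dim() == 2}
    for x, y in loader:
        keep = torch.isin(y, preferred_classes)
        if keep.sum() == 0:
            continue
        model.zero_grad()
        loss_fn(model(x[keep]), y[keep]).backward()
        for n, p in model.named_parameters():
            if n in saliency and p.grad is not None:
                saliency[n] += (p.detach() * p.grad).abs()
    return saliency

def apply_2_4_mask(weight, score):
    """Zero the 2 lowest-scoring weights in each group of 4 along the
    input dimension (assumes in_features is divisible by 4)."""
    out_f, in_f = weight.shape
    grouped = score.reshape(out_f, in_f // 4, 4)
    top = grouped.topk(2, dim=-1).indices
    mask = torch.zeros_like(grouped).scatter_(-1, top, 1.0)
    return weight * mask.reshape(out_f, in_f)
```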

Visual-Policy Learning through Multi-Camera View to Single-Camera View Knowledge Distillation for Robot Manipulation Tasks

no code implementations • 13 Mar 2023 • Cihan Acar, Kuluhan Binici, Alp Tekirdağ, Yan Wu

Our proposed method uses knowledge distillation, in which a pre-trained "teacher" policy trained with multiple camera viewpoints guides a "student" policy in learning from a single camera viewpoint.

Data Augmentation • Knowledge Distillation • +2
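
A minimal sketch of the view-to-view distillation step described above, assuming the teacher and student are policies mapping observations to continuous actions; the MSE imitation loss and the names are illustrative, and the paper's actual objective may differ.

```python
# Illustrative distillation step for a multi-view teacher -> single-view
# student setup (assumed names and loss, not the paper's exact objective).
import torch
import torch.nn.functional as F

def distill_step(teacher, student, multi_view_obs, single_view_obs, optimizer):
    with torch.no_grad():
        target = teacher(multi_view_obs)  # frozen teacher sees all cameras
    pred = student(single_view_obs)       # student sees a single camera
    loss = F.mse_loss(pred, target)       # imitate the teacher's actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```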

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay

1 code implementation • 9 Jan 2022 • Kuluhan Binici, Shivam Aggarwal, Nam Trung Pham, Karianto Leman, Tulika Mitra

In particular, we design a Variational Autoencoder (VAE) with a training objective that is customized to learn the synthetic data representations optimally.

Data-free Knowledge Distillation • Image Classification • +1
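
For context, a standard VAE training objective (the ELBO) looks like the sketch below; the paper customizes this objective to retain the synthetic-data representations, and those customizations are not reproduced here.

```python
# Standard VAE objective (ELBO) as a reference point; the paper's customized
# variant for generative pseudo replay is not reproduced here.
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar, beta=1.0):
    # Reconstruction term: how faithfully the decoder reproduces the batch.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # KL term: pulls the approximate posterior toward the N(0, I) prior.
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kld
```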

Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data

no code implementations • 11 Aug 2021 • Kuluhan Binici, Nam Trung Pham, Tulika Mitra, Karianto Leman

Moreover, the sample generation strategies in some of these methods could result in a mismatch between the synthetic and real data distributions.

Knowledge Distillation • Model Compression
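
One common way to counter forgetting of earlier synthetic samples is to replay them from a fixed-size memory during student training; the reservoir-sampling buffer below is a generic sketch of that idea, with a hypothetical class name, not the paper's exact mechanism.

```python
# Generic replay-memory sketch for retaining earlier synthetic samples
# (reservoir sampling; class name and mechanics are illustrative).
import random
import torch

class SyntheticReplayBuffer:
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.samples = []
        self.seen = 0  # total samples offered so far

    def add(self, batch):
        for x in batch.detach().cpu():
            self.seen += 1
            if len(self.samples) < self.capacity:
                self.samples.append(x)
            elif random.randrange(self.seen) < self.capacity:
                # Reservoir sampling keeps a uniform subsample of history.
                self.samples[random.randrange(self.capacity)] = x

    def sample(self, n):
        """Draw up to n stored samples for replay alongside fresh ones."""
        return torch.stack(random.sample(self.samples,
                                         min(n, len(self.samples))))
```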
