1 code implementation • 24 Nov 2023 • Shivam Aggarwal, Kuluhan Binici, Tulika Mitra
Machine learning pipelines for classification tasks often train a single universal model to achieve high accuracy across a broad range of classes.
no code implementations • 13 Mar 2023 • Cihan Acar, Kuluhan Binici, Alp Tekirdağ, Yan Wu
Our proposed method uses knowledge distillation, in which a pre-trained "teacher" policy trained with multiple camera viewpoints guides a "student" policy that learns from a single camera viewpoint.
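The teacher-student setup described above rests on a standard distillation loss: the student is trained to match the teacher's temperature-softened output distribution. The sketch below is a generic illustration of that loss, not the paper's implementation; the temperature value and the function names (`softmax`, `distillation_loss`) are assumptions for illustration.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-softened softmax; higher T flattens the distribution
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as is conventional so gradients stay comparable across temperatures
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

When student and teacher logits agree, the loss is zero; any divergence in the softened distributions makes it positive, pushing the single-viewpoint student toward the multi-viewpoint teacher's behavior.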
1 code implementation • 9 Jan 2022 • Kuluhan Binici, Shivam Aggarwal, Nam Trung Pham, Karianto Leman, Tulika Mitra
In particular, we design a Variational Autoencoder (VAE) with a training objective customized to learn optimal representations of the synthetic data.
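The abstract does not specify the customized objective, but any VAE objective builds on the standard ELBO: a reconstruction term plus a KL term that regularizes the latent posterior toward a standard normal prior. A minimal sketch of that baseline objective, assuming a diagonal-Gaussian encoder and squared-error reconstruction (the `beta` weight and function names are illustrative assumptions, not the paper's design):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) in closed form
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return float(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))

def vae_objective(x, x_recon, mu, log_var, beta=1.0):
    # Negative ELBO: reconstruction error plus weighted KL regularizer
    recon = float(np.sum((np.asarray(x, float) - np.asarray(x_recon, float)) ** 2))
    return recon + beta * gaussian_kl(mu, log_var)
```

A customized objective such as the paper's would typically add or reweight terms on top of this baseline to suit the synthetic-data setting.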
no code implementations • 11 Aug 2021 • Kuluhan Binici, Nam Trung Pham, Tulika Mitra, Karianto Leman
Moreover, the sample generation strategies in some of these methods could result in a mismatch between the synthetic and real data distributions.
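One common way to quantify the kind of synthetic-vs-real distribution mismatch mentioned above is the Maximum Mean Discrepancy (MMD). The sketch below is a generic biased MMD estimator with an RBF kernel, offered purely as an illustration of how such a mismatch can be measured; it is not drawn from the paper, and the `gamma` bandwidth and function names are assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of a and rows of b
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def mmd2(real, synth, gamma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy:
    # zero when the two sample sets coincide, positive as they diverge
    real = np.asarray(real, dtype=float)
    synth = np.asarray(synth, dtype=float)
    kxx = rbf_kernel(real, real, gamma).mean()
    kyy = rbf_kernel(synth, synth, gamma).mean()
    kxy = rbf_kernel(real, synth, gamma).mean()
    return float(kxx + kyy - 2.0 * kxy)
```

A large MMD between generated samples and real data signals exactly the mismatch the abstract warns about.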