1 code implementation • 11 May 2024 • Md Mostafijur Rahman, Mustafa Munir, Radu Marculescu
An efficient and effective decoding mechanism is crucial in medical image segmentation, especially in scenarios with limited computational resources.
no code implementations • 10 May 2024 • Mustafa Munir, William Avery, Md Mostafijur Rahman, Radu Marculescu
Our smallest model, GreedyViG-S, achieves 81.1% top-1 accuracy on ImageNet-1K, 2.9% higher than Vision GNN and 2.2% higher than Vision HyperGraph Neural Network (ViHGNN), with fewer GMACs and a similar number of parameters.
no code implementations • 7 Feb 2024 • Arash Amini, Yigit Ege Bayiz, Ashwin Ram, Radu Marculescu, Ufuk Topcu
In the era of social media platforms, identifying the credibility of online content is crucial to combat misinformation.
1 code implementation • 1 Feb 2024 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu
This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models.
no code implementations • 22 Dec 2023 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu
The rapid growth of machine learning has spurred legislative initiatives such as "the Right to be Forgotten," allowing users to request data removal.
1 code implementation • 24 Oct 2023 • Md Mostafijur Rahman, Radu Marculescu
The encoder utilizes the self-attention mechanism to capture long-range dependencies, while the decoder refines the feature maps, preserving long-range information thanks to the global receptive fields of the graph convolution block.
Ranked #1 on Retinal Vessel Segmentation on DRIVE (Specificity metric)
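A minimal sketch of the graph-convolution aggregation such a decoder relies on: each node (feature-map location) updates its feature by mixing in its neighbors', so information propagates globally as rounds stack. The graph, weights, and function names below are illustrative assumptions, not the paper's code.

```python
# Toy mean-neighbor graph convolution: one round mixes each node's feature
# with the average of its neighbors' features.
def graph_conv(features, adjacency, self_weight=0.5):
    new_features = []
    for i, f in enumerate(features):
        neighbors = [features[j] for j in adjacency[i]]
        neighbor_mean = sum(neighbors) / len(neighbors) if neighbors else 0.0
        new_features.append(self_weight * f + (1 - self_weight) * neighbor_mean)
    return new_features

feats = [1.0, 0.0, 0.0, 0.0]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a simple path graph
step1 = graph_conv(feats, adj)   # node 2 still sees nothing
step2 = graph_conv(step1, adj)   # node 2 now receives node 0's signal
```

Stacking rounds is what gives distant nodes an effectively global receptive field.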
1 code implementation • 5 Jul 2023 • Guihong Li, Duc Hoang, Kartikeya Bhardwaj, Ming Lin, Zhangyang Wang, Radu Marculescu
Recently, zero-shot (or training-free) Neural Architecture Search (NAS) approaches have been proposed to liberate NAS from the expensive training process.
1 code implementation • 1 Jul 2023 • Mustafa Munir, William Avery, Radu Marculescu
Our work shows that well-designed hybrid CNN-GNN architectures can be a new avenue of exploration for designing models that are extremely fast and accurate on mobile devices.
no code implementations • 13 May 2023 • Guihong Li, Kartikeya Bhardwaj, Yuedong Yang, Radu Marculescu
Anytime neural networks (AnytimeNNs) are a promising solution to adaptively adjust the model complexity at runtime under various hardware resource constraints.
1 code implementation • 29 Mar 2023 • Md Mostafijur Rahman, Radu Marculescu
Transformers have shown great success in medical image segmentation.
1 code implementation • 26 Jan 2023 • Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, Radu Marculescu
Based on this theoretical analysis, we propose a new zero-shot proxy, ZiCo, the first proxy that works consistently better than #Params.
1 code implementation • Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023 • Md Mostafijur Rahman, Radu Marculescu
To address this issue, we propose a novel attention-based decoder, namely CASCaded Attention DEcoder (CASCADE), which leverages the multiscale features of hierarchical vision transformers.
Ranked #3 on Polyp Segmentation on Kvasir-SEG
1 code implementation • CVPR 2023 • Yuedong Yang, Guihong Li, Radu Marculescu
Despite its importance for federated learning, continuous learning and many other applications, on-device training remains an open problem for EdgeAI.
1 code implementation • 31 Mar 2022 • Zihui Xue, Radu Marculescu
In this work, we propose dynamic multimodal fusion (DynMM), a new approach that adaptively fuses multimodal data and generates data-dependent forward paths during inference.
Ranked #43 on Semantic Segmentation on NYU Depth v2
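The core idea of data-dependent forward paths can be sketched in a few lines: a cheap gate decides, per input, which modality branches to execute before fusing. Everything here (branch functions, the threshold, the gating statistic) is a hypothetical stand-in for the learned components in DynMM, not the paper's implementation.

```python
# Stand-in "experts" for two modalities.
def branch_rgb(x):
    return sum(x) / len(x)

def branch_depth(x):
    return max(x)

def gate(rgb, threshold=0.5):
    # A real gating network would be learned end-to-end;
    # here a cheap input statistic picks the active branches.
    confidence = sum(rgb) / len(rgb)
    if confidence > threshold:
        return ["rgb"]            # easy input: one branch suffices
    return ["rgb", "depth"]       # hard input: fuse both modalities

def dynamic_fusion(rgb, depth):
    active = gate(rgb)
    outputs = []
    if "rgb" in active:
        outputs.append(branch_rgb(rgb))
    if "depth" in active:
        outputs.append(branch_depth(depth))
    # Late fusion averages whichever branches actually ran.
    return sum(outputs) / len(outputs), active

y_easy, path_easy = dynamic_fusion([0.9, 0.8, 0.7], [0.2, 0.1])
y_hard, path_hard = dynamic_fusion([0.1, 0.2, 0.0], [0.6, 0.4])
```

Skipping branches on easy inputs is what yields the inference-time savings.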
no code implementations • 31 Jan 2022 • Zihui Xue, Yuedong Yang, Mengtian Yang, Radu Marculescu
Graph Neural Networks (GNNs) have demonstrated a great potential in a variety of graph-based applications, such as recommender systems, drug discovery, and object recognition.
no code implementations • 1 Aug 2021 • Guihong Li, Sumit K. Mandal, Umit Y. Ogras, Radu Marculescu
This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform.
1 code implementation • CVPR 2021 • Kartikeya Bhardwaj, Guihong Li, Radu Marculescu
In this paper, we reveal that the topology of the concatenation-type skip connections is closely related to the gradient propagation which, in turn, enables a predictable behavior of DNNs’ test performance.
Ranked #34 on Neural Architecture Search on CIFAR-10
no code implementations • 1 Jan 2021 • Kartikeya Bhardwaj, Guihong Li, Radu Marculescu
(ii) Can certain topological characteristics of deep networks indicate a priori (i.e., without training) which models, with a different number of parameters/FLOPs/layers, achieve a similar accuracy?
no code implementations • 25 Aug 2020 • Kartikeya Bhardwaj, Wei Chen, Radu Marculescu
In this paper, we first highlight three major challenges to large-scale adoption of deep learning at the edge: (i) Hardware-constrained IoT devices, (ii) Data security and privacy in the IoT era, and (iii) Lack of network-aware deep learning algorithms for distributed inference across multiple IoT devices.
1 code implementation • 7 Apr 2020 • Wei Chen, Kartikeya Bhardwaj, Radu Marculescu
In this paper, we identify a new phenomenon called activation-divergence which occurs in Federated Learning (FL) due to data heterogeneity (i.e., data being non-IID) across multiple users.
no code implementations • 23 Oct 2019 • Kartikeya Bhardwaj, Naveen Suda, Radu Marculescu
The significant computational requirements of deep learning present a major bottleneck for its large-scale adoption on hardware-constrained IoT-devices.
no code implementations • 26 Jul 2019 • Kartikeya Bhardwaj, Chingyi Lin, Anderson Sartor, Radu Marculescu
Therefore, we propose Network of Neural Networks (NoNN), a new distributed IoT learning paradigm that compresses a large pretrained 'teacher' deep network into several disjoint and highly-compressed 'student' modules, without loss of accuracy.
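The teacher-to-students split can be sketched as follows: partition the teacher's final-layer filters into disjoint groups, have each small student reproduce only its own group, and let a central node reassemble the parts. The partitioning scheme and helper names are illustrative assumptions, not NoNN's actual method.

```python
def partition_filters(num_filters, num_students):
    """Split filter indices into disjoint, near-equal groups (round-robin)."""
    groups = [[] for _ in range(num_students)]
    for i in range(num_filters):
        groups[i % num_students].append(i)
    return groups

def student_outputs(teacher_features, groups):
    # Each student only reproduces its own slice of the teacher's features.
    return [[teacher_features[i] for i in g] for g in groups]

def fuse(outputs, groups, num_filters):
    # A central node concatenates the students' disjoint parts back together.
    fused = [0.0] * num_filters
    for g, out in zip(groups, outputs):
        for idx, val in zip(g, out):
            fused[idx] = val
    return fused

teacher = [0.1, 0.4, 0.3, 0.9, 0.5, 0.2]   # toy final-layer activations
groups = partition_filters(len(teacher), 3)
fused = fuse(student_outputs(teacher, groups), groups, len(teacher))
```

Because the groups are disjoint, each student can run on a separate IoT device with no communication until the final fusion step.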
no code implementations • 17 May 2019 • Kartikeya Bhardwaj, Naveen Suda, Radu Marculescu
Model compression is eminently suited for deploying deep learning on IoT-devices.
no code implementations • 20 Jan 2019 • Brian Davis, Umang Bhatt, Kartikeya Bhardwaj, Radu Marculescu, José M. F. Moura
In this paper, we present a new approach to interpret deep learning models.
no code implementations • 1 Dec 2018 • Jiqian Dong, Gopaljee Atulya, Kartikeya Bhardwaj, Radu Marculescu
To this end, we propose a new network science- and representation learning-based approach that can quantify economic indicators and visualize the growth of various regions.
1 code implementation • 20 Oct 2018 • Biresh Kumar Joardar, Ryan Gary Kim, Janardhan Rao Doppa, Partha Pratim Pande, Diana Marculescu, Radu Marculescu
Our results show that these generalized 3D NoCs only incur a 1.8% (36-tile system) and 1.1% (64-tile system) average performance loss compared to application-specific NoCs.
no code implementations • 30 Nov 2017 • Ryan Gary Kim, Janardhan Rao Doppa, Partha Pratim Pande, Diana Marculescu, Radu Marculescu
Tight collaboration between experts of machine learning and manycore system design is necessary to create a data-driven manycore design framework that integrates both learning and expert knowledge.