no code implementations • 15 Jan 2024 • Rongyu Zhang, Zefan Cai, Huanrui Yang, Zidong Liu, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Baobao Chang, Yuan Du, Li Du, Shanghang Zhang
Finetuning a pretrained vision model (PVM) is a common technique for learning downstream vision tasks.
no code implementations • 27 Dec 2023 • Rongyu Zhang, Yulin Luo, Jiaming Liu, Huanrui Yang, Zhen Dong, Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata, Kurt Keutzer, Yuan Du, Shanghang Zhang
In this work, we propose an efficient MoE architecture with weight sharing across the experts.
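The entry above describes a mixture-of-experts layer whose experts share weights. A minimal numpy sketch of that general idea (the paper's exact sharing scheme is not given here; the shared base matrix plus per-expert diagonal scaling below is an illustrative assumption, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts = 8, 8, 4

# All experts share one base weight matrix (the memory saving); each
# expert adds only a cheap per-output scaling as its specialization.
# This sharing scheme is a hypothetical stand-in for the paper's design.
W_shared = rng.normal(size=(d_in, d_out)) * 0.1
expert_scales = rng.normal(loc=1.0, scale=0.05, size=(n_experts, d_out))
W_gate = rng.normal(size=(d_in, n_experts)) * 0.1

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def moe_forward(x):
    """x: (batch, d_in) -> (batch, d_out), gated mix of shared-weight experts."""
    gates = softmax(x @ W_gate)                 # (batch, n_experts)
    base = x @ W_shared                         # computed once, reused by all experts
    expert_out = base[:, None, :] * expert_scales[None, :, :]
    return (gates[:, :, None] * expert_out).sum(axis=1)
```

The shared projection `x @ W_shared` is computed once regardless of the number of experts, which is where the parameter and compute savings come from in this toy version.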
1 code implementation • 16 May 2023 • Denis Gudovskiy, Tomoyuki Okuno, Yohei Nakata
FlowEneDet achieves promising results on Cityscapes, Cityscapes-C, FishyScapes and SegmentMeIfYouCan benchmarks in IDM/OOD detection when applied to pretrained DeepLabV3+ and SegFormer semantic segmentation models.
Ranked #5 on Anomaly Detection on Fishyscapes L&F
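FlowEneDet combines a normalizing flow with an energy-based formulation for IDM/OOD detection on segmentation logits. As a simplified stand-in (not the paper's model), the standard free-energy score `E = -logsumexp(logits)` already illustrates the energy view: confident in-distribution pixels get low energy, uncertain pixels get high energy:

```python
import numpy as np

def energy_score(logits):
    """Per-pixel free energy E = -logsumexp(logits); higher => more OOD-like.
    logits: (H, W, C) segmentation logits. This is the generic energy
    score only -- FlowEneDet additionally fits a flow on top."""
    m = logits.max(axis=-1, keepdims=True)
    return -(m.squeeze(-1) + np.log(np.exp(logits - m).sum(axis=-1)))

rng = np.random.default_rng(1)
confident = rng.normal(size=(4, 4, 19)); confident[..., 0] += 10.0  # one strong class
uniform = np.zeros((4, 4, 19))                                      # maximally uncertain
```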
1 code implementation • 3 May 2022 • Jinze Yu, Jiaming Liu, Xiaobao Wei, Haoyi Zhou, Yohei Nakata, Denis Gudovskiy, Tomoyuki Okuno, JianXin Li, Kurt Keutzer, Shanghang Zhang
To address this problem, we propose MTTrans, an end-to-end cross-domain detection Transformer based on the mean-teacher framework; it fully exploits unlabeled target-domain data during object detection training and transfers knowledge between domains via pseudo labels.
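The two core mean-teacher ingredients the entry mentions, EMA teacher weights and confidence-filtered pseudo labels, can be sketched as follows (generic mean-teacher mechanics, not MTTrans's full pipeline; the threshold value is an assumption):

```python
import numpy as np

def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher update: teacher weights track an exponential
    moving average of the student weights."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k] for k in teacher}

def pseudo_labels(teacher_logits, threshold=0.8):
    """Keep only confident teacher predictions as pseudo labels for
    the unlabeled target domain; returns (labels, keep_mask)."""
    probs = np.exp(teacher_logits) / np.exp(teacher_logits).sum(-1, keepdims=True)
    conf, labels = probs.max(-1), probs.argmax(-1)
    return labels, conf >= threshold
```

The student is trained on both labeled source data and the teacher's confident pseudo labels on target data, while the teacher is only ever updated via `ema_update`.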
1 code implementation • 24 Oct 2021 • Konstantinos Kallidromitis, Denis Gudovskiy, Kazuki Kozuka, Iku Ohama, Luca Rigazio
In this paper, we propose a novel self-supervised learning framework that combines contrastive learning with neural processes.
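Of the two ingredients named above, the contrastive half is standard and easy to sketch; the InfoNCE loss below is that generic ingredient only (the neural-process side and how the paper combines the two are not shown):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss between two embedding batches.
    z1, z2: (batch, dim); z2[i] is the positive for z1[i], all other
    rows of z2 act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

The loss is small when each `z1[i]` is closest to its own positive `z2[i]` and large when positives are mismatched.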
3 code implementations • 27 Jul 2021 • Denis Gudovskiy, Shun Ishizaka, Kazuki Kozuka
Our approach results in a computationally and memory-efficient model: CFLOW-AD is 10x faster and smaller than the prior state of the art with the same input setting.
Ranked #13 on Anomaly Detection on VisA (Detection AUROC metric)
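CFLOW-AD scores anomalies by the likelihood a normalizing flow assigns to feature vectors. A single affine-coupling layer already gives an exact, tractable log-likelihood; the sketch below is that one-layer toy (the real model stacks many conditional coupling layers over multi-scale CNN features with positional conditioning):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4  # toy feature dimension; half is transformed by the coupling

# One affine coupling layer: x2' = x2 * exp(s(x1)) + t(x1),
# with randomly initialized (untrained) s/t networks for illustration.
W_s = rng.normal(size=(d // 2, d // 2)) * 0.1
W_t = rng.normal(size=(d // 2, d // 2)) * 0.1

def flow_log_likelihood(x):
    """Exact log p(x) under one coupling layer + standard normal base.
    In CFLOW-AD-style anomaly detection, low likelihood flags anomalous
    feature vectors."""
    x1, x2 = x[:, : d // 2], x[:, d // 2 :]
    s, t = np.tanh(x1 @ W_s), x1 @ W_t
    z = np.concatenate([x1, x2 * np.exp(s) + t], axis=1)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)
    return log_base + s.sum(axis=1)  # + log|det Jacobian| of the coupling
```

Because the coupling's Jacobian is triangular, its log-determinant is just `s.sum()`, which is what keeps the likelihood computation cheap.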
1 code implementation • CVPR 2021 • Denis Gudovskiy, Luca Rigazio, Shun Ishizaka, Kazuki Kozuka, Sotaro Tsukizawa
To overcome these limitations, we reformulate AutoAugment as a generalized automated dataset optimization (AutoDO) task that minimizes the distribution shift between the test data and the distorted training dataset.
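The core idea, tuning dataset-distortion hyperparameters by gradient descent against a held-out distribution, can be shown on a deliberately tiny toy problem (this is a hypothetical illustration of the objective, not the paper's algorithm, which optimizes per-sample augmentation and loss hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: training data is shifted relative to validation data; we
# learn a single "augmentation" offset that minimizes the shift between
# the augmented training distribution and the validation distribution.
train = rng.normal(loc=2.0, size=500)
val = rng.normal(loc=0.0, size=500)

offset = 0.0
for _ in range(200):
    # gradient of ((mean(train) + offset) - mean(val))^2 w.r.t. offset
    grad = 2 * ((train + offset).mean() - val.mean())
    offset -= 0.1 * grad
# offset converges near -2, canceling the distribution shift
```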
1 code implementation • CVPR 2020 • Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Sotaro Tsukizawa
We theoretically derive an optimal acquisition function for AL in this setting.
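For context, an acquisition function in active learning scores unlabeled samples and picks which to label next. The paper derives a setting-specific optimal one; the predictive-entropy rule below is only the generic baseline it improves upon:

```python
import numpy as np

def entropy_acquisition(probs):
    """Select the unlabeled sample with maximal predictive entropy --
    a standard AL acquisition function, shown here as a generic
    illustration rather than the paper's derived optimum.
    probs: (n_samples, n_classes) predicted class probabilities."""
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return int(np.argmax(ent))

probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
print(entropy_acquisition(probs))  # index 1: the most uncertain sample
```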
2 code implementations • 19 Dec 2019 • Denis Gudovskiy, Gyuri Han, Takuya Yamaguchi, Sotaro Tsukizawa
Current home appliances are capable of executing a limited number of voice commands, such as turning devices on or off, adjusting music volume, or changing light conditions.
no code implementations • ICLR Workshop LLD 2019 • Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Sotaro Tsukizawa
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.
1 code implementation • 19 Nov 2018 • Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Yasunori Ishii, Sotaro Tsukizawa
We qualitatively and quantitatively show that the proposed explanation method can be used to find image features which cause failures in DNN object detection.
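One common way to find image features responsible for detection failures is perturbation-based attribution; the occlusion-sensitivity sketch below is a generic baseline of that kind, standing in for (not reproducing) the paper's explanation method:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Occlusion sensitivity: zero out each patch and record how much
    the model score drops. Large drops mark regions the prediction
    depends on -- a generic attribution baseline, not the paper's method."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0
            heat[i // patch, j // patch] = base - score_fn(masked)
    return heat

# Toy "detector" whose score depends only on the top-left corner,
# so only that region should light up in the heatmap.
score = lambda img: img[:2, :2].sum()
heat = occlusion_map(np.ones((4, 4)), score)
```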