2 code implementations • 21 Dec 2023 • Han Shu, Wenshuo Li, Yehui Tang, Yiman Zhang, Yihao Chen, Houqiang Li, Yunhe Wang, Xinghao Chen
Extensive experiments on various zero-shot transfer tasks demonstrate that our TinySAM significantly outperforms counterpart methods.
no code implementations • 16 Aug 2023 • Lei Guo, Wenshuo Li, Yukai Zhu, Xiang Yu, Zidong Wang
State estimation has long been a fundamental problem in the areas of signal processing and control.
no code implementations • 13 Dec 2022 • Wenshuo Li
Therefore, the main direction for improving the accuracy of object detection is to improve the neural network's ability to extract features.
4 code implementations • CVPR 2022 • Wenshuo Li, Hanting Chen, Jianyuan Guo, Ziyang Zhang, Yunhe Wang
However, due to the simplicity of their structures, performance depends heavily on the local feature communication mechanism.
1 code implementation • CVPR 2021 • Hanting Chen, Tianyu Guo, Chang Xu, Wenshuo Li, Chunjing Xu, Chao Xu, Yunhe Wang
Experiments on various datasets demonstrate that the student networks learned by the proposed method can achieve comparable performance with those using the original dataset.
no code implementations • 12 May 2021 • Wenshuo Li, Hanting Chen, Mingqiang Huang, Xinghao Chen, Chunjing Xu, Yunhe Wang
Adder neural network (AdderNet) is a new kind of deep model that replaces the massive multiplications in convolutions with additions while preserving high performance.
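The core idea can be sketched in a few lines: where a standard convolution computes a filter response as a multiply-accumulate, AdderNet uses the negative L1 distance between the input patch and the filter weights. The snippet below is a minimal NumPy illustration of that operation, not the authors' implementation; the function name `adder_filter_response` is a hypothetical helper introduced here.

```python
import numpy as np

def conv_filter_response(patch, weight):
    # Standard convolution: multiply-accumulate between patch and filter.
    return (patch * weight).sum()

def adder_filter_response(patch, weight):
    # AdderNet variant: negative L1 distance, using only additions,
    # subtractions, and absolute values (no multiplications).
    return -np.abs(patch - weight).sum()

patch = np.array([1.0, 2.0, 3.0])
weight = np.array([1.0, 1.0, 1.0])
print(conv_filter_response(patch, weight))   # multiply-accumulate response
print(adder_filter_response(patch, weight))  # addition-only response
```

Both responses are maximized when the patch matches the filter, which is why the L1-distance form can serve as a drop-in similarity measure inside a convolutional layer.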
1 code implementation • 22 Dec 2020 • Xuefei Ning, Junbo Zhao, Wenshuo Li, Tianchen Zhao, Yin Zheng, Huazhong Yang, Yu Wang
In this paper, considering scenarios with capacity budget, we aim to discover adversarially robust architecture at targeted capacities.
1 code implementation • 25 Nov 2020 • Xuefei Ning, Changcheng Tang, Wenshuo Li, Songyi Yang, Tianchen Zhao, Niansong Zhang, Tianyi Lu, Shuang Liang, Huazhong Yang, Yu Wang
Neural Architecture Search (NAS) has received extensive attention due to its capability to discover neural network architectures in an automated manner.
no code implementations • 28 Sep 2020 • Xuefei Ning, Wenshuo Li, Zixuan Zhou, Tianchen Zhao, Shuang Liang, Yin Zheng, Huazhong Yang, Yu Wang
A major challenge in NAS is to conduct a fast and accurate evaluation of neural architectures.
1 code implementation • NeurIPS 2021 • Xuefei Ning, Changcheng Tang, Wenshuo Li, Zixuan Zhou, Shuang Liang, Huazhong Yang, Yu Wang
Conducting efficient performance estimations of neural architectures is a major challenge in neural architecture search (NAS).
no code implementations • 31 Jul 2020 • Tong Wu, Xuefei Ning, Wenshuo Li, Ranran Huang, Huazhong Yang, Yu Wang
In this paper, we tackle the issue of physical adversarial examples for object detectors in the wild.
1 code implementation • ECCV 2020 • Xuefei Ning, Tianchen Zhao, Wenshuo Li, Peng Lei, Yu Wang, Huazhong Yang
In budgeted pruning, how to distribute the resources across layers (i.e., sparsity allocation) is the key problem.
no code implementations • 20 Mar 2020 • Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang
By inspecting the discovered architectures, we find that the operation primitives, the weight quantization range, the capacity of the model, and the connection pattern all influence the fault resilience of NN models.
no code implementations • 14 May 2018 • Wenshuo Li, Jincheng Yu, Xuefei Ning, Pengjun Wang, Qi Wei, Yu Wang, Huazhong Yang
In this paper, we propose a hardware-software collaborative attack framework that injects hidden neural network Trojans, which work as a back-door without requiring manipulation of input images and are flexible across different scenarios.
no code implementations • 24 May 2017 • Huizi Mao, Song Han, Jeff Pool, Wenshuo Li, Xingyu Liu, Yu Wang, William J. Dally
Since memory references are more than two orders of magnitude more expensive than arithmetic operations, regularity in the sparse structure leads to more efficient hardware designs.
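The benefit of regularity can be illustrated with coarse-grained (block) pruning: instead of zeroing individual weights, whole blocks are removed by their L2 norm, so the surviving nonzeros stay clustered and the hardware can fetch contiguous blocks and skip the zeroed ones. The sketch below is a simplified NumPy illustration under these assumptions, not the paper's actual pruning procedure; `prune_blocks` is a hypothetical helper name.

```python
import numpy as np

def prune_blocks(weights, block_size, sparsity):
    """Zero out the weight blocks with the smallest L2 norm.

    Pruning at block granularity (rather than per element) keeps the
    nonzero pattern regular, reducing irregular memory references.
    """
    n = weights.size
    assert n % block_size == 0, "weights must divide evenly into blocks"
    blocks = weights.reshape(-1, block_size).copy()
    norms = np.linalg.norm(blocks, axis=1)        # importance of each block
    k = int(len(blocks) * sparsity)               # number of blocks to drop
    drop = np.argsort(norms)[:k]                  # least important blocks
    blocks[drop] = 0.0
    return blocks.reshape(weights.shape)

w = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(prune_blocks(w, block_size=4, sparsity=0.5))
# The lower-norm block [1, 2, 3, 4] is zeroed; [5, 6, 7, 8] survives intact.
```

Element-wise pruning at the same sparsity would scatter zeros arbitrarily, forcing the hardware to track each nonzero's position individually.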