no code implementations • 6 Sep 2023 • Guang Yang, Yin Tang, Zhijian Wu, Jun Li, Jianhua Xu, Xili Wan
Recent mainstream masked distillation methods work by reconstructing selectively masked regions of a student network's feature map from the corresponding feature map of its teacher counterpart.
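
A minimal PyTorch sketch of this general masked feature distillation idea, not the paper's specific method: the module names, the mask_ratio parameter, the 1x1 alignment layer, and the small convolutional generator are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedFeatureDistillation(nn.Module):
    """Generic masked feature distillation (illustrative sketch):
    mask the student's features, reconstruct the teacher's feature
    map from what remains, and penalize the reconstruction error."""
    def __init__(self, student_dim, teacher_dim, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # 1x1 conv aligns student channels to the teacher's (assumption)
        self.align = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)
        # lightweight generator that fills in the masked regions (assumption)
        self.generator = nn.Sequential(
            nn.Conv2d(teacher_dim, teacher_dim, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(teacher_dim, teacher_dim, 3, padding=1),
        )

    def forward(self, feat_s, feat_t):
        # assumes feat_s and feat_t share the same spatial resolution
        n, _, h, w = feat_s.shape
        # random spatial mask: 1 keeps a location, 0 masks it out
        mask = (torch.rand(n, 1, h, w, device=feat_s.device)
                > self.mask_ratio).float()
        recon = self.generator(self.align(feat_s) * mask)
        # reconstruction loss against the (detached) teacher features
        return nn.functional.mse_loss(recon, feat_t.detach())
```
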
1 code implementation • 16 Jul 2023 • Yin Tang, Tao Chen, Xiruo Jiang, Yazhou Yao, Guo-Sen Xie, Heng-Tao Shen
Existing methods have demonstrated that the domain agent-based attention mechanism is effective for few-shot video object segmentation (FSVOS) by learning the correlation between support images and query frames.
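
The core support-query correlation can be sketched as plain cross-attention between support-image tokens and query-frame tokens; this is a generic approximation under assumed token shapes, not the paper's agent design.

```python
import torch
import torch.nn as nn

class SupportQueryAttention(nn.Module):
    """Cross-attention correlating support-image features with
    query-frame features (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, query_feat, support_feat):
        # query_feat: (B, Nq, C) tokens from query video frames
        # support_feat: (B, Ns, C) tokens from support images
        q = self.to_q(query_feat)
        k = self.to_k(support_feat)
        v = self.to_v(support_feat)
        # similarity between every query token and every support token
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # support-conditioned query features
```
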
no code implementations • 31 Jan 2023 • Guang Yang, Yin Tang, Jun Li, Jianhua Xu, Xili Wan
As a general model compression paradigm, feature-based knowledge distillation allows the student model to learn expressive features from the teacher counterpart.
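
For reference, the generic feature-based distillation setup this line describes can be written as a feature-mimicking loss; the 1x1 channel projection and plain L2 distance below are standard generic choices, not this paper's contribution.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureKDLoss(nn.Module):
    """Plain feature-mimicking distillation loss (illustrative sketch):
    project student features to the teacher's channel dimension and
    minimize the L2 distance to the teacher's features."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, feat_s, feat_t):
        # teacher features are detached: only the student is trained
        return F.mse_loss(self.proj(feat_s), feat_t.detach())
```
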
no code implementations • 5 Jun 2020 • Xin Cheng, Lei Zhang, Yin Tang, Yue Liu, Hao Wu, Jun He
In deep learning, performance gains rely heavily on increasing model size or capacity to scale to ever-larger datasets, which inevitably increases the number of operations.
no code implementations • 8 May 2020 • Yin Tang, Qi Teng, Lei Zhang, Fuhong Min, Jun He
A set of lower-dimensional filters is used as Lego bricks that are stacked to compose conventional filters, without relying on any special network structure.
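
A rough sketch of the Lego-brick idea: a small shared set of low-dimensional filters is combined to form the filters of a full convolution. The brick count, channel splitting, and softmax-weighted combination below are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class LegoConv2d(nn.Module):
    """Compose a conv layer from a small set of shared lower-dimensional
    'Lego brick' filters (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, n_bricks=4, n_splits=2, k=3):
        super().__init__()
        assert in_ch % n_splits == 0
        brick_ch = in_ch // n_splits
        # shared bricks: each spans only a fraction of the input channels
        self.bricks = nn.Parameter(torch.randn(n_bricks, brick_ch, k, k) * 0.01)
        # soft weights selecting bricks per output filter and channel split
        self.combine = nn.Parameter(torch.randn(out_ch, n_splits, n_bricks))
        self.n_splits, self.k = n_splits, k

    def forward(self, x):
        groups = torch.chunk(x, self.n_splits, dim=1)
        out = 0
        for s, g in enumerate(groups):
            # build this split's full filters as weighted sums of bricks
            w = torch.einsum('ob,bihw->oihw',
                             torch.softmax(self.combine[:, s], dim=-1),
                             self.bricks)
            out = out + nn.functional.conv2d(g, w, padding=self.k // 2)
        return out
```

Because the bricks are shared across splits and output filters, the parameter count scales with the brick set rather than with the full filter bank, which is the source of the claimed compression.
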