2 code implementations • 14 Feb 2024 • Siyuan Li, Zicheng Liu, Juanxi Tian, Ge Wang, Zedong Wang, Weiyang Jin, Di Wu, Cheng Tan, Tao Lin, Yang Liu, Baigui Sun, Stan Z. Li
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization that learns flat optima for better generalization, at no extra cost, in deep neural network (DNN) optimization.
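The EMA update can be sketched in a few lines; the decay value and the toy weight dictionary below are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of EMA weight averaging: ema <- decay * ema + (1 - decay) * w.
# The decay value (0.999) and the toy "model" weights are illustrative only.
import numpy as np

def update_ema(ema_weights, weights, decay=0.999):
    """In-place exponential moving average of each parameter tensor."""
    for name, w in weights.items():
        ema_weights[name] = decay * ema_weights[name] + (1 - decay) * w

# toy model weights
weights = {"w": np.ones((2, 2)), "b": np.zeros(2)}
ema_weights = {k: v.copy() for k, v in weights.items()}

# after an optimizer step changes the live weights...
weights["w"] += 0.5
update_ema(ema_weights, weights)
```

At evaluation time the EMA copy, rather than the raw weights, is typically used, since the averaged trajectory tends to sit in a flatter region of the loss surface.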
1 code implementation • 4 Oct 2023 • Siyuan Li, Weiyang Jin, Zedong Wang, Fang Wu, Zicheng Liu, Cheng Tan, Stan Z. Li
The main challenge is distinguishing high-quality pseudo-labels while avoiding confirmation bias.
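A common baseline for filtering pseudo-labels is confidence thresholding (FixMatch-style); the sketch below is a generic illustration of that idea, not the paper's method, and the threshold value is an assumption:

```python
# Hedged sketch of confidence-threshold pseudo-labeling (generic, not this
# paper's method). Predictions below the threshold are discarded so that
# low-quality pseudo-labels do not reinforce the model's own mistakes
# (confirmation bias).
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Return (labels, indices) for samples whose max class probability
    clears the confidence threshold."""
    conf = probs.max(axis=1)
    mask = conf >= threshold
    labels = probs.argmax(axis=1)
    return labels[mask], np.nonzero(mask)[0]

probs = np.array([[0.97, 0.02, 0.01],   # confident prediction -> kept
                  [0.40, 0.35, 0.25]])  # uncertain prediction -> dropped
labels, idx = select_pseudo_labels(probs)
```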
no code implementations • 11 Apr 2023 • Weiyang Jin
Rolling bearings are critical components in rotating machinery, and their faults can cause severe damage.
no code implementations • 14 Mar 2023 • Weiyang Jin, Yongpei Zhu, Yuxi Peng
This paper aims to identify various physical models that are suitable for constructing ODE-style generative diffusion models accurately from a mathematical perspective.
1 code implementation • 11 Sep 2022 • Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin, Stan Z. Li
Data mixing, or mixup, is a data-dependent augmentation technique that has greatly enhanced the generalizability of modern deep neural networks.
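The core mixup operation (convex combinations of paired inputs and their labels, as in Zhang et al.'s original formulation) can be sketched as follows; the alpha value and toy batch shapes are illustrative assumptions:

```python
# Minimal sketch of mixup: blend each sample with a randomly paired sample
# from the same batch, mixing both inputs and (one-hot) labels. alpha=0.2
# is an illustrative choice.
import numpy as np

def mixup(x, y, alpha=0.2, rng=None):
    """Return mixed inputs and soft labels for one batch."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    idx = rng.permutation(len(x))     # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y + (1 - lam) * y[idx]
    return x_mix, y_mix

x = np.random.rand(8, 3, 32, 32)              # toy image batch
y = np.eye(10)[np.random.randint(0, 10, 8)]   # one-hot labels
x_mix, y_mix = mixup(x, y)
```

Because the same coefficient mixes inputs and labels, the resulting soft labels remain valid probability distributions, which is what lets the model train directly on the mixed pairs.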