1 code implementation • 13 Mar 2024 • Li Lin, Yamini Sri Krubha, Zhenhuan Yang, Cheng Ren, Thuc Duy Le, Irene Amerini, Xin Wang, Shu Hu
In the realm of medical imaging, particularly COVID-19 detection, deep learning models face substantial challenges: the need for extensive computational resources, a paucity of well-annotated datasets, and large amounts of unlabeled data.
1 code implementation • 10 Sep 2023 • Shu Hu, Zhenhuan Yang, Xin Wang, Yiming Ying, Siwei Lyu
Theoretically, we show that the learning objective of ORAT satisfies $\mathcal{H}$-consistency in binary classification, which establishes it as a proper surrogate for the adversarial 0/1 loss.
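For reference, the adversarial 0/1 loss that the surrogate targets can be written, in standard notation (the perturbation set and norm here are illustrative, not necessarily the paper's exact setup), as:

```latex
\ell^{\mathrm{adv}}_{0/1}(h; x, y)
\;=\;
\max_{x' \,:\, \|x' - x\| \le \epsilon}
\mathbf{1}\!\left\{\operatorname{sign}\big(h(x')\big) \neq y\right\}
```

$\mathcal{H}$-consistency then means that driving the excess surrogate risk to zero over the hypothesis class $\mathcal{H}$ also drives the excess adversarial 0/1 risk to zero over $\mathcal{H}$.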
no code implementations • 16 Mar 2023 • Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying
Recently, there has been increasing adoption of differential-privacy-guided algorithms for privacy-preserving machine learning tasks.
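As a minimal illustration of the differential-privacy building block such algorithms rely on, here is a sketch of the Gaussian mechanism; the function name and interface are hypothetical, and the paper's algorithm is more involved than this:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release value + Gaussian noise calibrated to (epsilon, delta)-DP.

    Uses the classical calibration
        sigma = sensitivity * sqrt(2 ln(1.25 / delta)) / epsilon,
    valid for epsilon in (0, 1). Illustrative sketch only.
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return np.asarray(value, dtype=float) + rng.normal(scale=sigma, size=np.shape(value))
```

Larger privacy budgets (bigger `epsilon` or `delta`) yield smaller `sigma`, i.e., less noise and weaker privacy, which is the utility/privacy trade-off these algorithms navigate.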
1 code implementation • 22 Aug 2022 • Zhenhuan Yang, Yan Lok Ko, Kush R. Varshney, Yiming Ying
We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.
no code implementations • 22 Jan 2022 • Zhenhuan Yang, Shu Hu, Yunwen Lei, Kush R. Varshney, Siwei Lyu, Yiming Ying
We further provide its utility analysis in the nonconvex-strongly-concave setting, which is the first known result in terms of the primal population risk.
no code implementations • NeurIPS 2021 • Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying
A popular approach to handling streaming data in pairwise learning is the online gradient descent (OGD) algorithm, in which the current instance must be paired with a buffer of previous instances; since the buffer needs to be sufficiently large, this approach suffers from a scalability issue.
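The buffering idea can be sketched as follows; the pairwise hinge-type loss, the reservoir-style buffer, and all names here are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def buffered_ogd_pairwise(stream, dim, buffer_size=100, lr=0.01, seed=0):
    """Online gradient descent for pairwise learning with a bounded buffer.

    Each incoming instance (x, y), y in {+1, -1}, is paired with every
    buffered instance of the opposite class, and the model is updated with
    the averaged pairwise gradient of a hinge-type surrogate.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    buffer = []  # stores past (x, y) instances
    for x, y in stream:
        grad = np.zeros(dim)
        n_pairs = 0
        for xb, yb in buffer:
            if y == yb:
                continue  # only pair instances from different classes
            # hinge-type pairwise loss: max(0, 1 - (y - yb)/2 * w.(x - xb))
            margin = 0.5 * (y - yb) * w.dot(x - xb)
            if margin < 1.0:
                grad -= 0.5 * (y - yb) * (x - xb)
                n_pairs += 1
        if n_pairs > 0:
            w -= lr * grad / n_pairs
        # reservoir-style bounded buffer: per-step cost grows with
        # buffer_size, which is the scalability issue the abstract notes
        if len(buffer) < buffer_size:
            buffer.append((x, y))
        else:
            buffer[rng.integers(0, buffer_size)] = (x, y)
    return w
```

Each step costs O(buffer_size) gradient evaluations, so the required large buffer makes per-instance updates expensive; the papers above study how to avoid this.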
1 code implementation • 8 May 2021 • Yunwen Lei, Zhenhuan Yang, Tianbao Yang, Yiming Ying
In this paper, we provide a comprehensive generalization analysis of stochastic gradient methods for minimax problems under both convex-concave and nonconvex-nonconcave cases through the lens of algorithmic stability.
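A minimal example of the kind of stochastic gradient method analyzed in such work is stochastic gradient descent-ascent (SGDA) with averaged iterates; this sketch assumes access to noisy gradient oracles and is illustrative, not the paper's specific algorithm:

```python
import numpy as np

def sgda(grad_x, grad_y, x0, y0, lr=0.05, steps=2000, noise=0.1, seed=0):
    """Stochastic gradient descent-ascent for min_x max_y f(x, y).

    Descends in the primal variable x and ascends in the dual variable y
    using noisy gradient oracles, returning averaged iterates (a common
    choice in convex-concave analyses).
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    y = np.array(y0, dtype=float)
    x_avg = np.zeros_like(x)
    y_avg = np.zeros_like(y)
    for _ in range(steps):
        gx = grad_x(x, y) + noise * rng.normal(size=x.shape)  # noisy oracle
        gy = grad_y(x, y) + noise * rng.normal(size=y.shape)
        x = x - lr * gx  # descent step for the min player
        y = y + lr * gy  # ascent step for the max player
        x_avg += x
        y_avg += y
    return x_avg / steps, y_avg / steps
```

For the strongly-convex-strongly-concave saddle function $f(x, y) = x^2/2 + xy - y^2/2$, the unique saddle point is $(0, 0)$, and the averaged SGDA iterates approach it.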
1 code implementation • 4 Nov 2020 • Zhenhuan Yang, Baojian Zhou, Yunwen Lei, Yiming Ying
In this paper, we aim to develop stochastic hard thresholding algorithms for the important problem of AUC maximization in imbalanced classification.
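A stochastic hard-thresholding iteration for sparse AUC maximization can be sketched as follows; the squared pairwise surrogate loss and all names here are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w, zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argpartition(np.abs(w), -k)[-k:]
    out[idx] = w[idx]
    return out

def sht_auc(X_pos, X_neg, k, lr=0.01, steps=500, seed=0):
    """Stochastic hard-thresholding sketch for sparse AUC maximization.

    Each step samples one positive/negative pair, takes a gradient step on
    the squared pairwise surrogate (1 - w.(x_pos - x_neg))^2, then projects
    the iterate onto the set of k-sparse vectors.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X_pos.shape[1])
    for _ in range(steps):
        xp = X_pos[rng.integers(len(X_pos))]
        xn = X_neg[rng.integers(len(X_neg))]
        diff = xp - xn
        grad = -2.0 * (1.0 - w.dot(diff)) * diff  # d/dw of (1 - w.diff)^2
        w = hard_threshold(w - lr * grad, k)      # k-sparse projection
    return w
```

The hard-thresholding projection keeps the iterate exactly k-sparse at every step, which is what makes the nonconvexity of the sparsity constraint tractable in this family of methods.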
no code implementations • 25 Apr 2019 • Wei Shen, Zhenhuan Yang, Yiming Ying, Xiaoming Yuan
From this fundamental trade-off, we obtain lower bounds for the optimization error of SGD algorithms and the excess expected risk over a class of pairwise losses.