no code implementations • 27 Jan 2024 • Shao-Bo Lin
This paper focuses on scattered data fitting problems on spheres.
no code implementations • 16 Jan 2024 • Xiaotong Liu, Jinxin Wang, Di Wang, Shao-Bo Lin
In this paper, we introduce a weighted spectral filter approach to reduce the condition number of the kernel matrix and then stabilize kernel interpolation.
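The snippet does not spell out the filter, so the sketch below only illustrates the general idea in numpy: eigendecompose the kernel matrix, invert the well-conditioned part of the spectrum, and discard the rest. The Gaussian kernel, the hard-threshold weights, and all parameter values are assumptions rather than the paper's construction.

```python
import numpy as np

def gaussian_kernel(X, Y, width=0.5):
    # Pairwise Gaussian kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def filtered_interpolant(X, y, width=0.5, cutoff=1e-8):
    # Eigendecompose the symmetric positive semi-definite kernel matrix.
    K = gaussian_kernel(X, X, width)
    evals, evecs = np.linalg.eigh(K)
    # Spectral filtering: invert only the well-conditioned directions and
    # zero out the rest (a hard threshold; the paper's weighted filter is
    # more refined).
    w = np.zeros_like(evals)
    keep = evals > cutoff * evals.max()
    w[keep] = 1.0 / evals[keep]
    alpha = evecs @ (w * (evecs.T @ y))
    return lambda Xnew: gaussian_kernel(Xnew, X, width) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])
f = filtered_interpolant(X, y)
print(np.abs(f(X) - y).max())   # near-interpolation on the data sites
```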
no code implementations • 10 Dec 2023 • Shao-Bo Lin
This paper focuses on parameter selection issues of kernel ridge regression (KRR).
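For context, a minimal hold-out grid search over the KRR regularization parameter might look like the following numpy sketch; the kernel, the grid, and the split are illustrative assumptions, and the paper's selection rule may well differ.

```python
import numpy as np

def kernel(a, b, width=0.3):
    # Gaussian kernel matrix for 1-D inputs.
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

def krr_fit(K, y, lam):
    # Kernel ridge regression coefficients: (K + n*lam*I)^{-1} y.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 300)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(300)

# Hold-out split used to select the regularization parameter.
idx = rng.permutation(300)
tr, va = idx[:200], idx[200:]
Ktr, Kva = kernel(x[tr], x[tr]), kernel(x[va], x[tr])

best = None
for lam in 10.0 ** np.arange(-8, 1):
    alpha = krr_fit(Ktr, y[tr], lam)
    err = np.mean((Kva @ alpha - y[va]) ** 2)
    if best is None or err < best[0]:
        best = (err, lam)
print("selected lambda:", best[1])
```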
no code implementations • 27 Oct 2023 • Shao-Bo Lin, Tao Li, Shaojie Tang, Yao Wang, Ding-Xuan Zhou
In this paper, we make fundamental contributions to the field of reinforcement learning by answering the following three questions: Why does deep Q-learning perform so well?
no code implementations • 25 Oct 2023 • Shao-Bo Lin, Xingping Sun, Di Wang
For radial basis function (RBF) kernel interpolation of scattered data, Schaback in 1995 proved that the attainable approximation error and the condition number of the underlying interpolation matrix cannot be made small simultaneously.
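The trade-off is easy to observe numerically: widening a Gaussian kernel tends to shrink the attainable interpolation error while the condition number of the interpolation matrix explodes. A small illustrative experiment (all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-1, 1, 40))     # scattered 1-D data sites
f = lambda x: np.sin(2 * np.pi * x)
t = np.linspace(-1, 1, 1000)            # fine evaluation grid

for width in [0.05, 0.1, 0.2, 0.4]:
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * width ** 2))
    # lstsq instead of solve: the wide-kernel matrices are nearly singular.
    alpha = np.linalg.lstsq(K, f(X), rcond=None)[0]
    Kt = np.exp(-(t[:, None] - X[None, :]) ** 2 / (2 * width ** 2))
    err = np.abs(Kt @ alpha - f(t)).max()
    print(f"width={width:4.2f}  cond={np.linalg.cond(K):10.2e}  err={err:8.2e}")
```

On typical runs the approximation error falls as the width grows while the condition number climbs by many orders of magnitude, which is the Schaback trade-off in miniature.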
no code implementations • 8 Sep 2023 • Di Wang, Xiaotong Liu, Shao-Bo Lin, Ding-Xuan Zhou
Data silos, mainly caused by privacy and interoperability constraints, significantly limit collaboration among different organizations that hold similar data for the same purpose.
no code implementations • 7 Aug 2023 • Shao-Bo Lin
This paper focuses on approximation and learning performance analysis for deep convolutional neural networks with zero-padding and max-pooling.
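As a toy picture of the architecture under study, the numpy sketch below stacks zero-padded 1-D convolutions with ReLU activations and max-pooling; the filter size, depth, and random weights are assumptions, and the sketch is a stand-in rather than the paper's construction.

```python
import numpy as np

def conv1d_zero_pad(x, w):
    # Zero-padded 1-D convolution whose output keeps the input length.
    s, pad = len(w), len(w) // 2
    xp = np.pad(x, (pad, s - 1 - pad))
    return np.array([xp[i:i + s] @ w for i in range(len(x))])

def max_pool(x, size=2):
    # Non-overlapping max-pooling.
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

relu = lambda z: np.maximum(z, 0.0)

rng = np.random.default_rng(3)
x = rng.standard_normal(64)             # a toy input signal
h = x
for _ in range(3):                      # three conv-ReLU-pool layers
    w = rng.standard_normal(5) / np.sqrt(5.0)
    h = max_pool(relu(conv1d_zero_pad(h, w)))
print("extracted feature length:", len(h))
```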
1 code implementation • 30 Jul 2023 • Zhi Han, Baichen Liu, Shao-Bo Lin, Ding-Xuan Zhou
This paper studies the performance of deep convolutional neural networks (DCNNs) with zero-padding in feature extraction and learning.
no code implementations • 8 Mar 2023 • Shao-Bo Lin, Di Wang, Ding-Xuan Zhou
These findings show that the proposed sketching strategy is capable of fitting massive and noisy data on spheres.
no code implementations • 21 Feb 2023 • Di Wang, Yao Wang, Shaojie Tang, Shao-Bo Lin
The novelties of our research are as follows: 1) From a methodological perspective, we present a novel and scalable approach for generating DTRs by combining distributed learning with Q-learning.
no code implementations • 5 Dec 2021 • Han Feng, Shao-Bo Lin, Ding-Xuan Zhou
This paper proposes a distributed weighted regularized least squares algorithm (DWRLS) based on spherical radial basis functions and spherical quadrature rules to tackle spherical data that are stored across numerous local servers and cannot be shared among them.
no code implementations • 28 Nov 2021 • Shao-Bo Lin, Yao Wang, Ding-Xuan Zhou
In this paper, we study the generalization performance of global minima for implementing empirical risk minimization (ERM) on over-parameterized deep ReLU nets.
no code implementations • 13 Nov 2021 • Zirui Sun, Mingwei Dai, Yao Wang, Shao-Bo Lin
This paper focuses on learning rate analysis of Nyström regularization with sequential sub-sampling for $\tau$-mixing time series.
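For time series, a natural reading of sequential sub-sampling is to keep the first $m$ observations in time order instead of drawing them uniformly at random. A minimal numpy sketch of Nyström-regularized regression built on such a sub-sample (the AR(1) toy data, the kernel, and all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, lam = 2000, 100, 1e-3             # series length, sub-sample size, ridge

# A toy dependent series: AR(1) inputs with noisy responses.
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
y = np.sin(x) + 0.1 * rng.standard_normal(n)

def kernel(a, b, width=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

# Sequential sub-sampling: keep the first m observations in time order.
xm = x[:m]
Knm = kernel(x, xm)                     # n x m cross-kernel
Kmm = kernel(xm, xm)                    # m x m sub-sample kernel

# Nystrom-regularized estimator: f(t) = k_m(t) @ alpha.
A = Knm.T @ Knm + n * lam * Kmm
alpha = np.linalg.solve(A + 1e-10 * np.eye(m), Knm.T @ y)

t_new = np.linspace(x.min(), x.max(), 5)
print(kernel(t_new, xm) @ alpha)        # predictions at a few test inputs
```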
no code implementations • 23 Jun 2021 • Shao-Bo Lin, Kaidong Wang, Yao Wang, Ding-Xuan Zhou
Compared with the avid research activity on deep convolutional neural networks (DCNNs) in practice, the study of their theoretical behavior lags far behind.
no code implementations • 16 Sep 2020 • Yao Wang, Xin Guo, Shao-Bo Lin
Numerically, we carry out a series of simulations to show the promising performance of KReBooT in terms of its good generalization, near resistance to over-fitting, and structure constraints.
no code implementations • 3 Sep 2020 • Shao-Bo Lin, Xiangyu Chang, Xingping Sun
Data sites selected from modeling high-dimensional problems often appear scattered in non-paternalistic ways.
no code implementations • 1 Apr 2020 • Jinshan Zeng, Min Zhang, Shao-Bo Lin
Boosting is a well-known method for improving the accuracy of weak learners in machine learning.
no code implementations • 1 Apr 2020 • Zhi Han, Siquan Yu, Shao-Bo Lin, Ding-Xuan Zhou
One of the most important challenges of deep learning is to figure out the relation between a feature and the depth of deep neural networks (deep nets for short), so as to reflect the necessity of depth.
no code implementations • 27 Mar 2020 • Shao-Bo Lin, Di Wang, Ding-Xuan Zhou
This paper focuses on generalization performance analysis for distributed algorithms in the framework of learning theory.
no code implementations • 10 Feb 2020 • Zirui Sun, Shao-Bo Lin
This paper focuses on learning rate analysis of distributed kernel ridge regression for strong mixing sequences.
no code implementations • 9 Jan 2020 • Xiangyu Chang, Shao-Bo Lin
In this paper, we propose an adaptive stopping rule for kernel-based gradient descent (KGD) algorithms.
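A bare-bones kernel gradient descent loop with early stopping might look like the numpy sketch below; the residual-stabilization check used here is only a placeholder for the paper's adaptive, data-driven rule.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(-1, 1, n)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(n)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.2 ** 2))

eta = 1.0 / np.linalg.eigvalsh(K).max()  # step size from the top eigenvalue
alpha = np.zeros(n)
prev = np.inf
for t in range(1, 2001):
    resid = y - K @ alpha
    alpha += eta * resid                 # one kernel gradient descent step
    rss = (resid ** 2).mean()
    # Placeholder stopping check: halt once the empirical residual stops
    # decreasing appreciably; it stands in for the paper's adaptive rule.
    if prev - rss < 1e-6 * rss:
        break
    prev = rss
print("stopped at iteration", t, "with residual", rss)
```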
no code implementations • 16 Dec 2019 • Charles K. Chui, Shao-Bo Lin, Bo Zhang, Ding-Xuan Zhou
The great success of deep learning poses urgent challenges for understanding its working mechanism and rationality.
1 code implementation • 24 Nov 2019 • Jinshan Zeng, Minrun Wu, Shao-Bo Lin, Ding-Xuan Zhou
In the era of big data, it is desired to develop efficient machine learning algorithms to tackle massive data challenges such as storage bottleneck, algorithmic scalability, and interpretability.
no code implementations • 6 Oct 2019 • Shao-Bo Lin, Yu Guang Wang, Ding-Xuan Zhou
This paper develops distributed filtered hyperinterpolation for noisy data on the sphere, which assigns the data-fitting task to multiple servers to find a good approximation of the mapping between input and output data.
no code implementations • 3 Apr 2019 • Charles K. Chui, Shao-Bo Lin, Ding-Xuan Zhou
Based on the tree architecture, the objective of this paper is to design deep neural networks with two or more hidden layers (called deep nets) for the realization of radial functions, so as to enable rotational invariance for near-optimal function approximation in an arbitrarily high-dimensional Euclidean space.
1 code implementation • 6 Feb 2019 • Jinshan Zeng, Shao-Bo Lin, Yuan Yao, Ding-Xuan Zhou
In this paper, we develop an alternating direction method of multipliers (ADMM) for training deep neural networks with sigmoid-type activation functions (called the sigmoid-ADMM pair). The approach is mainly motivated by the gradient-free nature of ADMM, which avoids the saturation of sigmoid-type activations, and by the advantages of deep neural networks with sigmoid-type activations (deep sigmoid nets) over their rectified linear unit counterparts (deep ReLU nets) in terms of approximation.
no code implementations • 1 Jan 2019 • Zheng-Chu Guo, Lei Shi, Shao-Bo Lin
Based on refined covering number estimates, we find that, in realizing some complex data features, deep nets can improve on the performance of shallow neural networks (shallow nets for short) without incurring additional capacity costs.
no code implementations • 22 Mar 2018 • Jian Fang, Shao-Bo Lin, Zongben Xu
Supervised learning frequently boils down to determining hidden and bright parameters in a parameterized hypothesis space based on finite input-output samples.
no code implementations • 10 Mar 2018 • Shao-Bo Lin
Generalization and expressivity are two widely used measurements to quantify theoretical behaviors of deep learning.
no code implementations • 9 Mar 2018 • Charles K. Chui, Shao-Bo Lin, Ding-Xuan Zhou
The subject of deep learning has recently attracted users of machine learning from various disciplines, including medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines.
2 code implementations • 1 Mar 2018 • Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, Yuan Yao
Deep learning has attracted extensive attention due to its great empirical success.
no code implementations • 28 Feb 2017 • Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang
This paper aims at a refined error analysis for binary classification using the support vector machine (SVM) with a Gaussian kernel and convex loss.
no code implementations • 11 Aug 2016 • Shao-Bo Lin, Xin Guo, Ding-Xuan Zhou
We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS).
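The divide-and-conquer strategy analyzed in this line of work is simple to state: partition the sample into disjoint blocks, solve a regularized least squares problem on each block, and average the local estimators. A minimal numpy sketch (the Gaussian kernel and all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, lam = 1200, 6, 1e-3               # sample size, number of blocks, ridge

x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(n)

def kernel(a, b, width=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

blocks = np.array_split(rng.permutation(n), m)
t = np.linspace(-1, 1, 5)               # a few test inputs

# Each block solves its own regularized least squares problem ...
local_preds = []
for idx in blocks:
    Kb = kernel(x[idx], x[idx])
    alpha = np.linalg.solve(Kb + len(idx) * lam * np.eye(len(idx)), y[idx])
    local_preds.append(kernel(t, x[idx]) @ alpha)

# ... and the global estimator averages the local predictions.
print(np.mean(local_preds, axis=0))
```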
no code implementations • 20 Apr 2016 • Lin Xu, Shao-Bo Lin, Jinshan Zeng, Xia Liu, Zongben Xu
In this paper, we find that SGD is not the only possible greedy criterion, and we introduce a new greedy criterion, called the "$\delta$-greedy threshold", for learning.
no code implementations • 23 Jan 2016 • Xiangyu Chang, Shao-Bo Lin, Yao Wang
After theoretically analyzing the pros and cons, we find that although divide-and-conquer local average regression can reach the optimal learning rate, the restriction on the number of data blocks is rather strong, which makes it feasible only for a small number of data blocks.
no code implementations • 17 May 2015 • Lin Xu, Shao-Bo Lin, Yao Wang, Zongben Xu
Re-scale boosting (RBoosting) is a variant of boosting that can substantially improve the generalization performance of boosting learning.
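A minimal sketch of the re-scaling idea with least-squares stumps as weak learners: at every step the current estimator is shrunk before the new learner is added. The schedule $\theta_k = 2/(k+2)$ and the choice of weak learner are assumptions, not necessarily the paper's.

```python
import numpy as np

def fit_stump(x, r):
    # Least-squares decision stump on the residuals r (the weak learner).
    best = None
    for thr in np.unique(x)[:-1]:
        left, right = r[x <= thr].mean(), r[x > thr].mean()
        sse = ((r - np.where(x <= thr, left, right)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left, right)
    return best[1:]

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 300)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(300)

F = np.zeros_like(y)
for k in range(1, 101):
    theta = 2.0 / (k + 2)               # re-scale factor (assumed schedule)
    thr, cl, cr = fit_stump(x, y - F)
    g = np.where(x <= thr, cl, cr)
    F = (1 - theta) * F + theta * g     # shrink the old estimator, then add
print("training MSE:", ((y - F) ** 2).mean())
```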
no code implementations • 18 Sep 2014 • Jian Fang, Shao-Bo Lin, Zongben Xu
We consider the approximation capability of orthogonal super greedy algorithms (OSGA) and their applications in supervised learning.
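One common reading of OSGA is orthogonal matching pursuit that selects several dictionary atoms per iteration before re-projecting; the numpy sketch below follows that reading, with the dictionary size, the batch size $s$, and the iteration count all being assumptions.

```python
import numpy as np

def osga(D, y, s=3, iters=5):
    # Orthogonal super greedy algorithm: at each step, pick the s dictionary
    # columns most correlated with the residual, then re-fit by least squares
    # on everything selected so far.
    selected, coef = [], None
    r = y.copy()
    for _ in range(iters):
        corr = np.abs(D.T @ r)
        corr[selected] = -np.inf         # never pick a column twice
        selected += list(np.argsort(corr)[-s:])
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        r = y - D[:, selected] @ coef
    return selected, coef

rng = np.random.default_rng(8)
D = rng.standard_normal((100, 400))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
truth = np.zeros(400)
truth[[5, 50, 300]] = [1.0, -2.0, 0.5]
y = D @ truth + 0.01 * rng.standard_normal(100)

sel, coef = osga(D, y)
print("selected atoms:", sorted(int(j) for j in sel))
```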