1 code implementation • 9 Mar 2024 • Binghao Lu, Caiwen Ding, Jinbo Bi, Dongjin Song
Moreover, we designed a Multiscale Sigmoid Inference (MSI) module as a post-processing step to further refine the change probability map from the trained student network.
no code implementations • 23 Feb 2024 • Jieren Deng, Aaron Palmer, Rigel Mahmood, Ethan Rathbun, Jinbo Bi, Kaleel Mahmood, Derek Aguiar
Achieving resiliency against adversarial attacks is necessary prior to deploying neural network classifiers in domains where misclassification incurs substantial costs, e.g., self-driving cars or medical imaging.
no code implementations • 14 Nov 2022 • Chandan Chunduru, Chun Jiang Zhu, Blake Gains, Jinbo Bi
Graph sparsification is a powerful tool to approximate an arbitrary graph and has been used in machine learning over homogeneous graphs.
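The core idea behind sampling-based sparsification can be sketched in a few lines (a generic illustration under my own assumptions, not the construction proposed in this paper): keep each edge with probability p and reweight survivors by 1/p, so total and cut weights are preserved in expectation.

```python
import random

def sparsify(edges, p, seed=0):
    """Keep each weighted edge (u, v, w) with probability p and reweight
    kept edges by 1/p, so the expected weight of every cut is preserved.
    This is the basic unbiased-sampling idea behind graph sparsifiers."""
    rng = random.Random(seed)
    return [(u, v, w / p) for (u, v, w) in edges if rng.random() < p]

# Hypothetical toy graph: (node, node, weight) triples.
graph = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 1.0), (2, 3, 4.0)]
sparse = sparsify(graph, p=0.5)
```

Practical sparsifiers bias the sampling probabilities (e.g., toward structurally important edges) rather than using a uniform p, but the reweight-by-1/p step is the same.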
no code implementations • 12 Oct 2022 • Aaron Palmer, Zhiyi Chi, Derek Aguiar, Jinbo Bi
Goodness of fit (GoF) hypothesis tests provide a measure of statistical indistinguishability between the latent distribution and a target distribution class.
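As a deliberately simple example of such a statistic (my own illustration of one classical GoF measure, the Kolmogorov-Smirnov distance against a standard-normal target; the tests developed in the paper are more involved):

```python
from math import erf, sqrt
import numpy as np

def ks_statistic(samples):
    """Kolmogorov-Smirnov distance between the empirical CDF of `samples`
    and a standard-normal target: a simple goodness-of-fit measure of the
    kind used to compare a latent distribution to a target class."""
    x = np.sort(samples)
    n = len(x)
    cdf = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in x])  # N(0,1) CDF
    ecdf_hi = np.arange(1, n + 1) / n   # ECDF just after each sample
    ecdf_lo = np.arange(0, n) / n       # ECDF just before each sample
    return max(np.max(np.abs(ecdf_hi - cdf)), np.max(np.abs(ecdf_lo - cdf)))

rng = np.random.default_rng(0)
d_normal = ks_statistic(rng.standard_normal(1000))   # drawn from the target
d_uniform = ks_statistic(rng.uniform(-3, 3, 1000))   # drawn elsewhere
# samples from the target distribution give a smaller KS distance
```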
no code implementations • 7 Aug 2022 • Hongwu Peng, Shaoyi Huang, Shiyang Chen, Bingbing Li, Tong Geng, Ang Li, Weiwen Jiang, Wujie Wen, Jinbo Bi, Hang Liu, Caiwen Ding
Particularly, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm.
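Functionally, a sparse attention operator computes ordinary attention restricted to positions permitted by a sparsity mask. The sketch below shows that computation only (mask pattern, shapes, and names are my own assumptions; the paper's contribution is a hardware-friendly realization of it, not this reference code):

```python
import numpy as np

def sparse_attention(q, k, v, mask):
    """Scaled dot-product attention restricted by a boolean sparsity mask:
    disallowed positions get -inf scores and hence zero softmax weight."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -np.inf)          # drop masked positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
mask = np.tril(np.ones((4, 4), dtype=bool))           # e.g. a causal pattern
out = sparse_attention(q, k, v, mask)
```

The hardware benefit comes from never materializing the masked-out score entries; this dense sketch only mirrors the mathematics.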
no code implementations • 29 Sep 2021 • Tan Zhu, Fei Do, Chloe Becquey, Jinbo Bi
Identifying interpretable interactions among categorical predictors for predictive modeling is crucial in various research fields.
no code implementations • 12 Apr 2021 • Tan Zhu, Guannan Liang, Chunjiang Zhu, Haining Li, Jinbo Bi
In this work, we formulate the SCB that uses a DNN reward function as a non-convex stochastic optimization problem, and design a stage-wise stochastic gradient descent algorithm to optimize the problem and determine the action policy.
no code implementations • 7 Mar 2021 • Guannan Liang, Qianqian Tong, Chunjiang Zhu, Jinbo Bi
Stochastically controlled stochastic gradient (SCSG) methods have been proved to converge efficiently to first-order stationary points which, however, can be saddle points in nonconvex optimization.
1 code implementation • ICLR 2021 • Chao Shang, Jie Chen, Jinbo Bi
Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model.
no code implementations • 31 Dec 2020 • Qianqian Tong, Guannan Liang, Tan Zhu, Jinbo Bi
Nonconvex sparse learning plays an essential role in many areas, such as signal processing and deep network compression.
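A common building block in this setting is iterative hard thresholding (shown here as a generic sketch of the nonconvex projection step, not necessarily this paper's algorithm): after each gradient step, keep only the k largest-magnitude coordinates.

```python
import numpy as np

def hard_threshold(x, k):
    """Project onto the nonconvex l0 ball: keep the k largest-magnitude
    entries of x and zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    out[idx] = x[idx]
    return out

x = np.array([0.1, -2.0, 0.3, 1.5, -0.05])
s = hard_threshold(x, k=2)
# keeps -2.0 and 1.5, zeroes everything else
```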
no code implementations • 14 Sep 2020 • Guannan Liang, Qianqian Tong, Jiahao Ding, Miao Pan, Jinbo Bi
Sparse learning is a very important tool for mining useful information and patterns from high dimensional data.
no code implementations • 14 Sep 2020 • Qianqian Tong, Guannan Liang, Jinbo Bi
Federated learning allows large numbers of edge computing devices to collaboratively learn a global model without data sharing.
no code implementations • 28 Aug 2020 • Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran
Large model sizes, high computational cost, and vulnerability to membership inference attacks (MIA) have impeded the popularity of deep neural networks (DNNs), especially on mobile devices.
no code implementations • 11 Aug 2020 • Jiahao Ding, Jingyi Wang, Guannan Liang, Jinbo Bi, Miao Pan
In PP-ADMM, each agent approximately solves a perturbed optimization problem that is formulated from its local private data in an iteration, and then perturbs the approximate solution with Gaussian noise to provide the DP guarantee.
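The noise-injection step described above is output perturbation: Gaussian noise is added to the local solution before it is shared. A minimal sketch (variable names are hypothetical; in PP-ADMM the noise scale is calibrated to the sensitivity of the local update and the privacy budget):

```python
import numpy as np

def perturb_solution(theta, sigma, rng):
    """Add isotropic Gaussian noise to an approximate local solution
    before sharing it: the output-perturbation step that provides the
    differential-privacy guarantee for each iterate."""
    return theta + rng.normal(scale=sigma, size=theta.shape)

rng = np.random.default_rng(42)
theta_local = np.array([0.5, -1.2, 3.0])   # hypothetical local solution
theta_shared = perturb_solution(theta_local, sigma=0.1, rng=rng)
```

Only `theta_shared` ever leaves the agent, so the exact local solution (and hence the private data behind it) is never exposed.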
2 code implementations • 2 Aug 2019 • Qianqian Tong, Guannan Liang, Jinbo Bi
Theoretically, we provide a new way to analyze the convergence of AGMs and prove that the convergence rate of Adam also depends on its hyper-parameter $\epsilon$, which has been overlooked previously.
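The role of $\epsilon$ is easy to see from the Adam update itself: it sits inside the denominator, so it directly scales the effective step size. A single-step sketch (standard Adam, written from the usual update rule rather than this paper's code):

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update. Note that eps appears inside the denominator,
    so its value changes the effective step size."""
    m = b1 * m + (1 - b1) * g               # first-moment estimate
    v = b2 * v + (1 - b2) * g**2            # second-moment estimate
    m_hat = m / (1 - b1**t)                 # bias correction
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

g = np.array([0.5])
small_eps, _, _ = adam_step(np.zeros(1), g, np.zeros(1), np.zeros(1), t=1, eps=1e-8)
large_eps, _, _ = adam_step(np.zeros(1), g, np.zeros(1), np.zeros(1), t=1, eps=1.0)
# a larger eps damps the update, so it is not a harmless numerical constant
```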
1 code implementation • 11 Nov 2018 • Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, Bo-Wen Zhou
The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure.
Ranked #28 on Link Prediction on FB15k-237
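The graph-convolution operation that GCNs use to exploit connectivity structure can be sketched in a few lines (the standard symmetric-normalization propagation rule, given here for context; the paper's model builds on it rather than being identical to it):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W), i.e. features
    averaged over each node's neighborhood (with self-loops) and then
    linearly transformed."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
X = np.eye(3)                                      # one-hot node features
W = np.ones((3, 2))                                # toy weight matrix
H = gcn_layer(A, X, W)                             # node embeddings
```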
1 code implementation • 14 Feb 2018 • Chao Shang, Qinqing Liu, Ko-Shin Chen, Jiangwen Sun, Jin Lu, Jin-Feng Yi, Jinbo Bi
The proposed GCN model, which we call edge attention-based multi-relational GCN (EAGCN), jointly learns attention weights and node features in graph convolution.
no code implementations • 18 Dec 2017 • Guoqing Chao, Shiliang Sun, Jinbo Bi
With advances in information acquisition technologies, multi-view data become ubiquitous.
1 code implementation • 22 Aug 2017 • Chao Shang, Aaron Palmer, Jiangwen Sun, Ko-Shin Chen, Jin Lu, Jinbo Bi
In particular, when certain samples are missing an entire view of the data, this creates the missing-view problem.
no code implementations • 8 Dec 2016 • Ioannis Papavasileiou, Wenlong Zhang, Xin Wang, Jinbo Bi, Li Zhang, Song Han
An advanced machine learning method, multi-task feature learning (MTFL), is used to jointly train classification models of a subject's gait in three classes, post-stroke, PD and healthy gait.
no code implementations • NeurIPS 2016 • Jin Lu, Guannan Liang, Jiangwen Sun, Jinbo Bi
We prove that when the side features can span the latent feature space of the matrix to be recovered, the number of observed entries needed for an exact recovery is $O(\log N)$ where $N$ is the size of the matrix.
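In the standard inductive matrix completion formalization (given here for context with my own notation; the paper's exact model may differ), side features enter as

```latex
M \;=\; X \, Z \, Y^{\top},
\qquad M \in \mathbb{R}^{N \times N},\;
Z \in \mathbb{R}^{d_1 \times d_2},\;
d_1, d_2 \ll N,
```

where the columns of $X$ and $Y$ span the column and row spaces of $M$. Only the small core matrix $Z$ must then be estimated rather than all of $M$, which is what makes an $O(\log N)$ sample complexity plausible.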
no code implementations • NeurIPS 2014 • Xin Wang, Jinbo Bi, Shipeng Yu, Jiangwen Sun
We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers.
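A representative member of this family of joint regularizers (the widely used $\ell_{2,1}$ norm, shown as one concrete instance; the paper's result covers a more general class) couples the $T$ per-task weight vectors $w_t$, stacked as columns of $W \in \mathbb{R}^{d \times T}$:

```latex
\min_{W} \;\sum_{t=1}^{T} L\big(w_t;\, X_t, y_t\big) \;+\; \lambda \, \|W\|_{2,1},
\qquad
\|W\|_{2,1} \;=\; \sum_{j=1}^{d} \sqrt{\sum_{t=1}^{T} W_{jt}^{2}}.
```

The row-wise grouping drives entire feature rows of $W$ to zero, so all tasks share a common subset of selected features.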