1 code implementation • 7 May 2024 • Chen Qian, Jiahao Li, Yufan Dang, Wei Liu, Yifei Wang, Zihao Xie, Weize Chen, Cheng Yang, Yingli Zhang, Zhiyuan Liu, Maosong Sun
We propose two fundamental patterns: the successive pattern, refining based on nearest experiences within a task batch, and the cumulative pattern, acquiring experiences across all previous task batches.
1 code implementation • 10 Apr 2024 • Yifei Wang, Wenhan Ma, Stefanie Jegelka, Yisen Wang
Relying only on unlabeled data, self-supervised learning (SSL) can learn rich features in an economical and scalable way.
no code implementations • 24 Mar 2024 • Yifei Wang, Chuhong Zhu
This method uses three strategies - scaling-up image size, multi-class mixing, and object shape jittering - to improve the ability to learn semantic features within medical images.
1 code implementation • 19 Mar 2024 • Yifei Wang, Jizhe Zhang, Yisen Wang
Contrastive Learning (CL) has emerged as one of the most successful paradigms for unsupervised visual representation learning, yet it often depends on intensive manual data augmentations.
1 code implementation • 19 Mar 2024 • Yifei Wang, Qi Zhang, Yaoyu Guo, Yisen Wang
In this paper, we propose Non-negative Contrastive Learning (NCL), a renaissance of Non-negative Matrix Factorization (NMF) aimed at deriving interpretable features.
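The classic NMF that NCL revisits can be sketched in a few lines. The toy data, rank, and Lee-Seung multiplicative updates below are illustrative assumptions, not the NCL training procedure itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-negative data matrix X (n_samples x n_features).
X = rng.random((20, 8))

def nmf(X, k=4, n_iter=200, eps=1e-10):
    """Classic NMF via Lee-Seung multiplicative updates: X ~= W @ H,
    with W and H kept elementwise non-negative throughout."""
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Non-negativity is what makes the factors interpretable: each sample is an additive mixture of non-negative parts, which is the property NCL carries over to contrastive features.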
no code implementations • 2 Mar 2024 • Emi Zeger, Yifei Wang, Aaron Mishkin, Tolga Ergen, Emmanuel Candès, Mert Pilanci
We prove that training neural networks on 1-D data is equivalent to solving a convex Lasso problem with a fixed, explicitly defined dictionary matrix of features.
1 code implementation • 23 Feb 2024 • Yihao Zhang, Hangzhou He, Jingyu Zhu, Huanran Chen, Yifei Wang, Zeming Wei
Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape and improve generalization.
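The SAM update described above can be sketched on a toy differentiable loss; the loss function, learning rate, and perturbation radius below are illustrative choices, not the paper's setup:

```python
import numpy as np

def loss(w):
    # Toy non-convex loss (illustrative only): quadratic plus ripples.
    return 0.5 * np.dot(w, w) + np.sin(3 * w).sum()

def grad(w):
    return w + 3 * np.cos(3 * w)

def sam_step(w, lr=0.05, rho=0.05):
    """One SAM update: ascend to the (first-order) worst-case neighbor
    within an L2 ball of radius rho, then descend using the gradient
    evaluated at those perturbed weights."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case weight perturbation
    g_adv = grad(w + eps)                        # gradient at perturbed weights
    return w - lr * g_adv

w = np.array([1.5, -2.0])
for _ in range(100):
    w = sam_step(w)
```

The key contrast with sample-wise adversarial training is visible in `sam_step`: the perturbation `eps` is applied to the weights, not the inputs.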
no code implementations • 25 Dec 2023 • Yupei Zhang, Yuxin Li, Yifei Wang, Shuangshuang Wei, Yunan Xu, Xuequn Shang
To this end, this study proposes a distributed grade prediction model, dubbed FecMap, by exploiting the federated learning (FL) framework that preserves the private data of local clients and communicates with others through a global generalized model.
1 code implementation • 13 Dec 2023 • Shengsheng Qian, Yifei Wang, Dizhan Xue, Shengjie Zhang, Huaiwen Zhang, Changsheng Xu
After obtaining the threat model trained on the poisoned dataset, our method can precisely detect poisonous samples based on the assumption that masking the backdoor trigger can effectively change the activation of a downstream clustering model.
1 code implementation • 29 Nov 2023 • Pengqian Han, Partha Roop, Jiamou Liu, Tianzhe Bao, Yifei Wang
The main reason is that trajectories are complex data, containing both spatial and temporal information, which is crucial for accurate prediction.
no code implementations • 18 Nov 2023 • Yifei Wang, Mert Pilanci
Using this convex formulation, we prove that the hardness of approximation of ReLU networks not only mirrors the complexity of the Max-Cut problem but also, in certain special cases, exactly corresponds to it.
no code implementations • 11 Nov 2023 • Hao Xu, Yifei Wang, Yunrui Li, Pengyu Hong
Through practical tasks such as isomer discrimination and uncovering crucial chemical properties for drug discovery, ACML exhibits its capability to revolutionize chemical research and applications, providing a deeper understanding of chemical semantics of different modalities.
1 code implementation • NeurIPS 2023 • Ang Li, Yifei Wang, Yiwen Guo, Yisen Wang
A well-known theory by Ilyas et al. (2019) explains adversarial vulnerability from a data perspective by showing that one can extract non-robust features from adversarial examples and these features alone are useful for classification.
3 code implementations • NeurIPS 2023 • Jiangyan Ma, Yifei Wang, Yisen Wang
However, from a theoretical perspective, the universal expressive power of spectral embedding comes at the price of losing two important invariance properties of graphs, sign and basis invariance, which also limits its effectiveness on graph data.
no code implementations • 27 Oct 2023 • Weixu Zhang, Yifei Wang, Yuanfeng Song, Victor Junqiu Wei, Yuxing Tian, Yiyan Qi, Jonathan H. Chan, Raymond Chi-Wing Wong, Haiqin Yang
This survey presents a comprehensive overview of natural language interfaces for tabular data querying and visualization, which allow users to interact with data using natural language queries.
no code implementations • 25 Oct 2023 • Chen Liu, Hongyu Zang, Xin Li, Yong Heng, Yifei Wang, Zhen Fang, Yisen Wang, Mingzhong Wang
Image-based Reinforcement Learning is a practical yet challenging task.
1 code implementation • 19 Oct 2023 • Lin Li, Yifei Wang, Chawin Sitawarin, Michael Spratling
Based on this, we are able to predict the upper limit of OOD robustness for existing robust training schemes.
no code implementations • 17 Oct 2023 • Zeyu Zhang, Jiamou Liu, Kaiqi Zhao, Yifei Wang, Pengqian Han, Xianda Zheng, Qiqi Wang, Zijian Zhang
Signed graphs are valuable for modeling complex relationships with positive and negative connections, and Signed Graph Neural Networks (SGNNs) have become crucial tools for their analysis.
no code implementations • 10 Oct 2023 • Zeming Wei, Yifei Wang, Yisen Wang
Large Language Models (LLMs) have shown remarkable success in various tasks, but concerns about their safety and the potential for generating harmful content have emerged.
no code implementations • 29 Aug 2023 • Hong Zhu, Runpeng Yu, Xing Tang, Yifei Wang, Yuan Fang, Yisen Wang
Data in real-world classification problems are always imbalanced or long-tailed, wherein the majority classes hold most of the samples and dominate model training.
1 code implementation • 7 Jun 2023 • Qi Zhang, Yifei Wang, Yisen Wang
Multi-modal contrastive learning (MMCL) has recently garnered considerable interest due to its superior performance in visual tasks, achieved by embedding multi-modal data, such as visual-language pairs.
no code implementations • 7 Jun 2023 • Jingyi Cui, Weiran Huang, Yifei Wang, Yisen Wang
Therefore, to explore the mechanistic differences between semi-supervised and noisy-labeled information in helping contrastive learning, we establish a unified theoretical framework of contrastive learning under weak supervision.
1 code implementation • 29 May 2023 • Yifei Wang, Zhengyang Zhou, Liqin Wang, John Laurentiev, Peter Hou, Li Zhou, Pengyu Hong
The confounding factors, which are non-sensitive variables but manifest systematic differences, can significantly affect fairness evaluation.
no code implementations • 16 May 2023 • Yifei Wang, Yiyang Zhou, Jihua Zhu, Xinyuan Liu, Wenbiao Yan, Zhiqiang Tian
Label distribution learning (LDL) is a new machine learning paradigm for solving label ambiguity.
1 code implementation • CVPR 2023 • Zeming Wei, Yifei Wang, Yiwen Guo, Yisen Wang
Adversarial training has been widely acknowledged as the most effective method to improve the adversarial robustness against adversarial examples for Deep Neural Networks (DNNs).
2 code implementations • 12 Mar 2023 • Xiaojun Guo, Yifei Wang, Tianqi Du, Yisen Wang
Instead of characterizing oversmoothing from the view of complete collapse in which representations converge to a single point, we dive into a more general perspective of dimensional collapse in which representations lie in a narrow cone.
Ranked #8 on Node Property Prediction on ogbn-arxiv
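The dimensional-collapse view above can be made concrete with an entropy-based effective rank of a representation matrix; this particular metric and the synthetic data are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_rank(Z):
    """Entropy-based effective rank of a representation matrix Z
    (n_samples x dim): exp of the Shannon entropy of the normalized
    singular-value distribution. Values near 1 indicate dimensional
    collapse; values near dim indicate a well-spread representation."""
    s = np.linalg.svd(Z - Z.mean(0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]
    return float(np.exp(-(p * np.log(p)).sum()))

healthy = rng.normal(size=(500, 32))               # spread across all 32 dims
direction = rng.normal(size=32)
collapsed = rng.normal(size=(500, 1)) * direction  # narrow cone: ~1 effective dim
collapsed += 0.01 * rng.normal(size=(500, 32))     # tiny off-cone noise

print(effective_rank(healthy), effective_rank(collapsed))
```

Both matrices live in 32 dimensions, but the collapsed one concentrates in a narrow cone, which is exactly the regime the abstract distinguishes from complete collapse to a single point.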
1 code implementation • 8 Mar 2023 • Yifei Wang, Qi Zhang, Tianqi Du, Jiansheng Yang, Zhouchen Lin, Yisen Wang
In recent years, contrastive learning has achieved impressive results on self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking.
1 code implementation • 4 Mar 2023 • Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang
In this work, we propose a unified theoretical understanding for existing variants of non-contrastive learning.
1 code implementation • 2 Mar 2023 • Rundong Luo, Yifei Wang, Yisen Wang
Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: either strong or weak data augmentations are harmful to self-AT, and a medium strength is insufficient to bridge the gap.
no code implementations • 28 Feb 2023 • Wenbiao Yan, Jihua Zhu, Yiyang Zhou, Yifei Wang, Qinghai Zheng
In this way, the semantic consistency learned from multi-view data improves the information bottleneck, distinguishing consistent information more precisely and learning a unified feature representation with more discriminative consistent information for clustering.
no code implementations • 26 Feb 2023 • Yiyang Zhou, Qinghai Zheng, Wenbiao Yan, Yifei Wang, Pengcheng Shi, Jihua Zhu
Further, we designed a multi-level consistency collaboration strategy, which utilizes the consistent information of semantic space as a self-supervised signal to collaborate with the cluster assignments in feature space.
Ranked #1 on Multiview Clustering on Fashion-MNIST
1 code implementation • 12 Feb 2023 • Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu
To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy.
no code implementations • 18 Dec 2022 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang
Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.
no code implementations • 16 Dec 2022 • Matthias Eisenmann, Annika Reinke, Vivienn Weru, Minu Dietlinde Tizabi, Fabian Isensee, Tim J. Adler, Patrick Godau, Veronika Cheplygina, Michal Kozubek, Sharib Ali, Anubha Gupta, Jan Kybic, Alison Noble, Carlos Ortiz de Solórzano, Samiksha Pachade, Caroline Petitjean, Daniel Sage, Donglai Wei, Elizabeth Wilden, Deepak Alapatt, Vincent Andrearczyk, Ujjwal Baid, Spyridon Bakas, Niranjan Balu, Sophia Bano, Vivek Singh Bawa, Jorge Bernal, Sebastian Bodenstedt, Alessandro Casella, Jinwook Choi, Olivier Commowick, Marie Daum, Adrien Depeursinge, Reuben Dorent, Jan Egger, Hannah Eichhorn, Sandy Engelhardt, Melanie Ganz, Gabriel Girard, Lasse Hansen, Mattias Heinrich, Nicholas Heller, Alessa Hering, Arnaud Huaulmé, Hyunjeong Kim, Bennett Landman, Hongwei Bran Li, Jianning Li, Jun Ma, Anne Martel, Carlos Martín-Isla, Bjoern Menze, Chinedu Innocent Nwoye, Valentin Oreiller, Nicolas Padoy, Sarthak Pati, Kelly Payette, Carole Sudre, Kimberlin Van Wijnen, Armine Vardazaryan, Tom Vercauteren, Martin Wagner, Chuanbo Wang, Moi Hoon Yap, Zeyun Yu, Chun Yuan, Maximilian Zenk, Aneeq Zia, David Zimmerer, Rina Bao, Chanyeol Choi, Andrew Cohen, Oleh Dzyubachyk, Adrian Galdran, Tianyuan Gan, Tianqi Guo, Pradyumna Gupta, Mahmood Haithami, Edward Ho, Ikbeom Jang, Zhili Li, Zhengbo Luo, Filip Lux, Sokratis Makrogiannis, Dominik Müller, Young-tack Oh, Subeen Pang, Constantin Pape, Gorkem Polat, Charlotte Rosalie Reed, Kanghyun Ryu, Tim Scherr, Vajira Thambawita, Haoyu Wang, Xinliang Wang, Kele Xu, Hung Yeh, Doyeob Yeo, Yixuan Yuan, Yan Zeng, Xin Zhao, Julian Abbing, Jannes Adam, Nagesh Adluru, Niklas Agethen, Salman Ahmed, Yasmina Al Khalil, Mireia Alenyà, Esa Alhoniemi, Chengyang An, Talha Anwar, Tewodros Weldebirhan Arega, Netanell Avisdris, Dogu Baran Aydogan, Yingbin Bai, Maria Baldeon Calisto, Berke Doga Basaran, Marcel Beetz, Cheng Bian, Hao Bian, Kevin Blansit, Louise Bloch, Robert Bohnsack, Sara Bosticardo, Jack Breen, Mikael Brudfors, Raphael 
Brüngel, Mariano Cabezas, Alberto Cacciola, Zhiwei Chen, Yucong Chen, Daniel Tianming Chen, Minjeong Cho, Min-Kook Choi, Chuantao Xie Chuantao Xie, Dana Cobzas, Julien Cohen-Adad, Jorge Corral Acero, Sujit Kumar Das, Marcela de Oliveira, Hanqiu Deng, Guiming Dong, Lars Doorenbos, Cory Efird, Sergio Escalera, Di Fan, Mehdi Fatan Serj, Alexandre Fenneteau, Lucas Fidon, Patryk Filipiak, René Finzel, Nuno R. Freitas, Christoph M. Friedrich, Mitchell Fulton, Finn Gaida, Francesco Galati, Christoforos Galazis, Chang Hee Gan, Zheyao Gao, Shengbo Gao, Matej Gazda, Beerend Gerats, Neil Getty, Adam Gibicar, Ryan Gifford, Sajan Gohil, Maria Grammatikopoulou, Daniel Grzech, Orhun Güley, Timo Günnemann, Chunxu Guo, Sylvain Guy, Heonjin Ha, Luyi Han, Il Song Han, Ali Hatamizadeh, Tian He, Jimin Heo, Sebastian Hitziger, SeulGi Hong, Seungbum Hong, Rian Huang, Ziyan Huang, Markus Huellebrand, Stephan Huschauer, Mustaffa Hussain, Tomoo Inubushi, Ece Isik Polat, Mojtaba Jafaritadi, SeongHun Jeong, Bailiang Jian, Yuanhong Jiang, Zhifan Jiang, Yueming Jin, Smriti Joshi, Abdolrahim Kadkhodamohammadi, Reda Abdellah Kamraoui, Inha Kang, Junghwa Kang, Davood Karimi, April Khademi, Muhammad Irfan Khan, Suleiman A. Khan, Rishab Khantwal, Kwang-Ju Kim, Timothy Kline, Satoshi Kondo, Elina Kontio, Adrian Krenzer, Artem Kroviakov, Hugo Kuijf, Satyadwyoom Kumar, Francesco La Rosa, Abhi Lad, Doohee Lee, Minho Lee, Chiara Lena, Hao Li, Ling Li, Xingyu Li, Fuyuan Liao, Kuanlun Liao, Arlindo Limede Oliveira, Chaonan Lin, Shan Lin, Akis Linardos, Marius George Linguraru, Han Liu, Tao Liu, Di Liu, Yanling Liu, João Lourenço-Silva, Jingpei Lu, Jiangshan Lu, Imanol Luengo, Christina B. 
Lund, Huan Minh Luu, Yi Lv, Uzay Macar, Leon Maechler, Sina Mansour L., Kenji Marshall, Moona Mazher, Richard McKinley, Alfonso Medela, Felix Meissen, Mingyuan Meng, Dylan Miller, Seyed Hossein Mirjahanmardi, Arnab Mishra, Samir Mitha, Hassan Mohy-ud-Din, Tony Chi Wing Mok, Gowtham Krishnan Murugesan, Enamundram Naga Karthik, Sahil Nalawade, Jakub Nalepa, Mohamed Naser, Ramin Nateghi, Hammad Naveed, Quang-Minh Nguyen, Cuong Nguyen Quoc, Brennan Nichyporuk, Bruno Oliveira, David Owen, Jimut Bahan Pal, Junwen Pan, Wentao Pan, Winnie Pang, Bogyu Park, Vivek Pawar, Kamlesh Pawar, Michael Peven, Lena Philipp, Tomasz Pieciak, Szymon Plotka, Marcel Plutat, Fattaneh Pourakpour, Domen Preložnik, Kumaradevan Punithakumar, Abdul Qayyum, Sandro Queirós, Arman Rahmim, Salar Razavi, Jintao Ren, Mina Rezaei, Jonathan Adam Rico, ZunHyan Rieu, Markus Rink, Johannes Roth, Yusely Ruiz-Gonzalez, Numan Saeed, Anindo Saha, Mostafa Salem, Ricardo Sanchez-Matilla, Kurt Schilling, Wei Shao, Zhiqiang Shen, Ruize Shi, Pengcheng Shi, Daniel Sobotka, Théodore Soulier, Bella Specktor Fadida, Danail Stoyanov, Timothy Sum Hon Mun, Xiaowu Sun, Rong Tao, Franz Thaler, Antoine Théberge, Felix Thielke, Helena Torres, Kareem A. Wahid, Jiacheng Wang, Yifei Wang, Wei Wang, Xiong Wang, Jianhui Wen, Ning Wen, Marek Wodzinski, Ye Wu, Fangfang Xia, Tianqi Xiang, Chen Xiaofei, Lizhan Xu, Tingting Xue, Yuxuan Yang, Lin Yang, Kai Yao, Huifeng Yao, Amirsaeed Yazdani, Michael Yip, Hwanseung Yoo, Fereshteh Yousefirizi, Shunkai Yu, Lei Yu, Jonathan Zamora, Ramy Ashraf Zeineldin, Dewen Zeng, Jianpeng Zhang, Bokai Zhang, Jiapeng Zhang, Fan Zhang, Huahong Zhang, Zhongchen Zhao, Zixuan Zhao, Jiachen Zhao, Can Zhao, Qingshuo Zheng, Yuheng Zhi, Ziqi Zhou, Baosheng Zou, Klaus Maier-Hein, Paul F. Jäger, Annette Kopp-Schneider, Lena Maier-Hein
Of these, 84% were based on standard architectures.
no code implementations • 1 Nov 2022 • Yifei Wang, Tavor Baharav, Yanjun Han, Jiantao Jiao, David Tse
In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be sampled further to obtain noisy estimates of the average reward of that arm.
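The infinite-armed setting can be simulated directly. The reservoir distribution (Uniform(0,1)), the noise level, and the naive uniform-exploration strategy below are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Infinite-armed bandit toy: each newly sampled arm's mean reward is
# drawn from an unknown reservoir distribution (Uniform(0,1) here, an
# illustrative assumption); pulling an arm returns its mean plus noise.
means = rng.uniform(0, 1, size=50)   # sample 50 fresh arms from the reservoir

def pull(i, n):
    """Pull arm i a total of n times, returning noisy reward samples."""
    return means[i] + rng.normal(0, 0.1, size=n)

# Naive strategy: pull every sampled arm 20 times, commit to the
# empirical best. Smarter strategies adaptively allocate pulls.
emp = np.array([pull(i, 20).mean() for i in range(len(means))])
best = int(np.argmax(emp))
```

The tension the problem studies is visible here: sampling more fresh arms improves the best available mean, while pulling existing arms more reduces estimation noise, and the budget must be split between the two.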
2 code implementations • 15 Oct 2022 • Qi Zhang, Yifei Wang, Yisen Wang
Masked Autoencoders (MAE) based on a reconstruction task have risen to be a promising paradigm for self-supervised learning (SSL) and achieve state-of-the-art performance across different benchmark datasets.
1 code implementation • 14 Oct 2022 • Yichuan Mo, Dongxian Wu, Yifei Wang, Yiwen Guo, Yisen Wang
We find that, when randomly masking gradients from some attention blocks or masking perturbations on some patches during adversarial training, the adversarial robustness of ViTs can be remarkably improved; this may open up a line of work exploring the architectural information inside newly designed models like ViTs.
1 code implementation • 13 Oct 2022 • Qixun Wang, Yifei Wang, Hong Zhu, Yisen Wang
In this paper, we empirically show that sample-wise AT has limited improvement on OOD performance.
1 code implementation • 30 Sep 2022 • Yifei Wang, Yixuan Hua, Emmanuel Candès, Mert Pilanci
For randomly generated data, we show the existence of a phase transition in recovering planted neural network models, which is easy to describe: whenever the ratio between the number of samples and the dimension exceeds a numerical threshold, the recovery succeeds with high probability; otherwise, it fails with high probability.
1 code implementation • 9 Aug 2022 • Yifei Wang, Shiyang Chen, Guobin Chen, Ethan Shurberg, Hang Liu, Pengyu Hong
MCM builds a motif vocabulary in an unsupervised way and deploys a novel motif convolution operation to extract the local structural context of individual nodes, which is then used to learn higher-level node representations via multilayer perceptron and/or message passing in graph neural networks.
1 code implementation • 29 Jun 2022 • Qi Chen, Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Moreover, we show that the optimization-induced variants of our models can boost the performance and improve training stability and efficiency as well.
no code implementations • 2 Jun 2022 • Yifei Wang, Qichao Ying, Zhenxing Qian, Sheng Li, Xinpeng Zhang
To address this issue, we present a new video watermarking scheme based on the joint Dual-Tree Complex Wavelet Transform (DTCWT) and Singular Value Decomposition (SVD), which is resistant to frame rate conversion.
1 code implementation • 26 May 2022 • Yifei Wang, Peng Chen, Mert Pilanci, Wuchen Li
We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation.
1 code implementation • 19 Apr 2022 • Alex Leviyev, Joshua Chen, Yifei Wang, Omar Ghattas, Aaron Zimmerman
Meanwhile, Stein variational Newton (SVN), a Newton-like extension of SVGD, dramatically accelerates the convergence of SVGD by incorporating Hessian information into the dynamics, but also produces biased samples.
no code implementations • 15 Apr 2022 • Tong Yang, Yifei Wang, Long Sha, Jan Engelbrecht, Pengyu Hong
To the best of our knowledge, this work is the first to apply abstract algebra in statistical learning to develop a formal language for general knowledge graphs; it also sheds light on the problem of neural-symbolic integration from an algebraic perspective.
no code implementations • ICLR 2022 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
On the other hand, our unified framework can be extended to the unsupervised scenario, which interprets unsupervised contrastive learning as an important sampling of CEM.
1 code implementation • 25 Mar 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Our theory suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and the overlapping augmented views (i.e., the chaos) create a ladder for contrastive learning to gradually learn class-separated representations.
no code implementations • 19 Nov 2021 • Zhirui Wang, Yifei Wang, Yisen Wang
Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.
no code implementations • NeurIPS 2021 • Yifei Wang, Zhengyang Geng, Feng Jiang, Chuming Li, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Multi-view methods learn representations by aligning multiple views of the same image and their performance largely depends on the choice of data augmentation.
no code implementations • 13 Oct 2021 • Yifei Wang, Tolga Ergen, Mert Pilanci
Recent work has proven that strong duality holds (i.e., zero duality gap) for regularized finite-width two-layer ReLU networks, and consequently provided an equivalent convex training problem.
no code implementations • ICLR 2022 • Yifei Wang, Mert Pilanci
We then show that the limit points of non-convex subgradient flows can be identified via primal-dual correspondence in this convex optimization problem.
no code implementations • 12 Oct 2021 • Justin Li, Dakang Zhang, Yifei Wang, Christopher Ye, Hao Xu, Pengyu Hong
Since the late 1960s, there have been numerous successes in the exciting new frontier of asymmetric catalysis.
no code implementations • ICLR 2022 • Yifei Wang, Jonathan Lacotte, Mert Pilanci
As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.
no code implementations • 29 Sep 2021 • Shiji Xin, Yifei Wang, Jingtong Su, Yisen Wang
Extensive experiments show that our proposed DAT can effectively remove the domain-varying features and improve OOD generalization on both correlation shift and diversity shift tasks.
no code implementations • ICLR 2022 • Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Our work suggests an alternative understanding of contrastive learning: the role of aligning positive samples is more like a surrogate task than an ultimate goal, and it is the overlapping augmented views (i.e., the chaos) that create a ladder for contrastive learning to gradually learn class-separated representations.
no code implementations • 29 Sep 2021 • Zhirui Wang, Yifei Wang, Yisen Wang
Adversarial training is widely believed to be a reliable approach to improve model robustness against adversarial attack.
1 code implementation • 1 Jul 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
no code implementations • ICML Workshop AML 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Based on these, we propose principled adversarial sampling algorithms in both supervised and unsupervised scenarios.
no code implementations • 15 May 2021 • Jonathan Lacotte, Yifei Wang, Mert Pilanci
Our first contribution is to show that, at each iteration, the embedding dimension (or sketch size) can be as small as the effective dimension of the Hessian matrix.
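The effective dimension referred to above is the usual regularized quantity d_eff(mu) = trace(H (H + mu I)^{-1}); the spectrum and regularization level below are hypothetical, chosen only to show how far d_eff can fall below the ambient dimension:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned Hessian: fast-decaying eigenvalues, so
# the effective dimension is far below the ambient dimension d.
d = 200
eigs = 1.0 / (1 + np.arange(d)) ** 2   # fast spectral decay
A = rng.normal(size=(d, d))
Q, _ = np.linalg.qr(A)                 # random orthogonal basis
H = (Q * eigs) @ Q.T                   # H = Q diag(eigs) Q^T

def effective_dimension(H, mu):
    """d_eff(mu) = trace(H (H + mu I)^{-1}), computed from the
    eigenvalues; sketch sizes of this order suffice in sketched
    Newton-type methods."""
    lam = np.clip(np.linalg.eigvalsh(H), 0, None)
    return float((lam / (lam + mu)).sum())

d_eff = effective_dimension(H, mu=1e-3)
```

Each eigenvalue contributes lam/(lam + mu), so directions with lam >> mu count as full dimensions while directions with lam << mu contribute almost nothing, which is why a fast-decaying spectrum yields a small d_eff.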
1 code implementation • NeurIPS 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Graph Convolutional Networks (GCNs) have attracted increasing attention in recent years.
1 code implementation • 12 Feb 2021 • Yifei Wang, Peng Chen, Wuchen Li
We propose a projected Wasserstein gradient descent method (pWGD) for high-dimensional Bayesian inference problems.
1 code implementation • ICLR 2021 • Peizhao Li, Yifei Wang, Han Zhao, Pengyu Hong, Hongfu Liu
Disparate impact has raised serious concerns about machine learning applications and their societal impacts.
no code implementations • 1 Jan 2021 • Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs).
1 code implementation • ICCV 2021 • Miao Zhang, Jie Liu, Yifei Wang, Yongri Piao, Shunyu Yao, Wei Ji, Jingjing Li, Huchuan Lu, Zhongxuan Luo
Our bidirectional dynamic fusion strategy encourages the interaction of spatial and temporal information in a dynamic manner.
Ranked #12 on Video Polyp Segmentation on SUN-SEG-Easy (Unseen)
no code implementations • COLING 2020 • Chao Tian, Yifei Wang, Hao Cheng, Yijiang Lian, Zhihua Zhang
In this paper we propose a unified approach for supporting different generation manners of machine translation, including autoregressive, semi-autoregressive, and refinement-based non-autoregressive models.
no code implementations • 5 Aug 2020 • Yijiang Lian, Zhijie Chen, Xin Pei, Shuang Li, Yifei Wang, Yuefeng Qiu, Zhiheng Zhang, Zhipeng Tao, Liang Yuan, Hanju Guan, Kefeng Zhang, Zhigang Li, Xiaochun Liu
Industrial sponsored search system (SSS) can be logically divided into three modules: keywords matching, ad retrieving, and ranking.
no code implementations • 2 Jul 2020 • Yifei Wang, Dan Peng, Furui Liu, Zhenguo Li, Zhitang Chen, Jiansheng Yang
Adversarial Training (AT) is proposed to alleviate the adversarial vulnerability of machine learning models by extracting only robust features from the input, which, however, inevitably leads to severe accuracy reduction as it discards the non-robust yet useful features.
no code implementations • 10 Jun 2020 • Yifei Wang, Jonathan Lacotte, Mert Pilanci
As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.
no code implementations • 13 Jan 2020 • Yifei Wang, Wuchen Li
We introduce a framework for Newton's flows in probability space with information metrics, named information Newton's flows.
no code implementations • 1 Nov 2019 • Yifei Wang, Rui Liu, Yong Chen, Hui Zhangs, Zhiwen Ye
Spectral clustering is a popular technique to split data points into groups, especially for complex datasets.
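A minimal two-way spectral clustering can be written with numpy alone; the Gaussian affinity, bandwidth, and sign-based split below are standard textbook choices, not this paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated blobs in 2-D.
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(3, 0.3, (30, 2))])

def spectral_bipartition(X, sigma=1.0):
    """Minimal 2-way spectral clustering: Gaussian affinity, normalized
    graph Laplacian, split by the sign of the Fiedler (second-smallest)
    eigenvector."""
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)    # pairwise squared dists
    W = np.exp(-sq / (2 * sigma ** 2))               # Gaussian affinity
    deg = W.sum(1)
    L = np.eye(len(X)) - W / np.sqrt(deg[:, None] * deg[None, :])
    vals, vecs = np.linalg.eigh(L)                   # ascending eigenvalues
    return (vecs[:, 1] > 0).astype(int)              # Fiedler-vector sign

labels = spectral_bipartition(X)
```

The strength of the approach on "complex datasets" comes from clustering in the Laplacian's eigenspace, where connectivity, rather than raw Euclidean geometry, determines the groups.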
1 code implementation • 4 Sep 2019 • Yifei Wang, Wuchen Li
We present a framework for Nesterov's accelerated gradient flows in probability space to design efficient mean-field Markov chain Monte Carlo (MCMC) algorithms for Bayesian inverse problems.
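The finite-dimensional scheme whose probability-space analogue the paper develops is classical Nesterov acceleration; the quadratic objective and step size below are a toy illustration, not the paper's mean-field MCMC algorithm:

```python
import numpy as np

# Classical Nesterov accelerated gradient descent on a quadratic
# f(x) = 0.5 x^T A x - b^T x with known minimizer x* = A^{-1} b.
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])

def gradf(x):
    return A @ x - b

x_star = np.linalg.solve(A, b)

x = np.zeros(3)
y = np.zeros(3)
lr = 1.0 / 100.0                         # 1/L, L = largest eigenvalue of A
for k in range(1, 500):
    x_new = y - lr * gradf(y)            # gradient step at the lookahead point
    y = x_new + (k - 1) / (k + 2) * (x_new - x)   # Nesterov momentum
    x = x_new
```

The momentum term (k-1)/(k+2) yields the O(1/k^2) convergence rate that plain gradient descent lacks; the paper lifts this dynamics from vectors to probability measures.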
no code implementations • 13 Jul 2017 • Yifei Wang, Wen Li, Dengxin Dai, Luc van Gool
Our work builds on the recently proposed Deep CORAL method, which proposed to train a convolutional neural network and simultaneously minimize the Euclidean distance of covariance matrices between the source and target domains.
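The Deep CORAL alignment term mentioned above, the squared Frobenius distance between source and target feature covariances, is simple to compute on its own; the synthetic features below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def coral_loss(Xs, Xt):
    """CORAL alignment term: squared Frobenius distance between source
    and target feature covariances, scaled by 1/(4 d^2) as in Deep CORAL."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return np.sum((Cs - Ct) ** 2) / (4 * d * d)

source = rng.normal(0, 1.0, (200, 8))
target = rng.normal(0, 2.0, (200, 8))    # shifted second-order statistics
aligned = rng.normal(0, 1.0, (200, 8))   # same distribution as source

print(coral_loss(source, target), coral_loss(source, aligned))
```

In Deep CORAL this term is added to the classification loss so that minimizing it pulls the network's source and target feature covariances together during training.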