no code implementations • 27 May 2024 • Yixiong Zou, Shanghang Zhang, Haichen Zhou, Yuhua Li, Ruixuan Li
Few-shot class-incremental learning (FSCIL) aims to continually learn novel classes from only a few samples after (pre-)training on base classes with sufficient data.
no code implementations • 8 May 2024 • Haichen Zhou, Yixiong Zou, Ruixuan Li, Yuhua Li, Kui Xiao
We first interpret the confusion as a collision between the novel-class and base-class regions in the feature space.
1 code implementation • 24 Mar 2024 • Ziwen Zhao, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang
Graph self-supervised learning is now a go-to method for pre-training graph foundation models, including graph neural networks, graph transformers, and more recent large language model (LLM)-based graph models.
no code implementations • 1 Mar 2024 • Yixiong Zou, Yicong Liu, Yiman Hu, Yuhua Li, Ruixuan Li
To enhance transferability and facilitate fine-tuning, we introduce a simple yet effective approach that achieves long-range flattening of the minima in the loss landscape.
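The excerpt does not spell out how the flattening is performed, so the snippet below is only a minimal sketch of a generic flat-minima technique (a SAM-style two-pass update), not the authors' actual method; the function name `sam_step`, the radius `rho`, and the training-loop interface are all illustrative assumptions.

```python
import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One SAM-style update (illustrative, not the paper's method):
    perturb the weights toward the locally worst-case direction within an
    L2 ball of radius rho, then descend on the perturbed loss so that the
    optimizer prefers flatter minima."""
    inputs, targets = batch
    optimizer.zero_grad()

    # First pass: gradient at the current weights.
    loss_fn(model(inputs), targets).backward()

    # Climb to the (approximate) worst point inside the rho-ball.
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in model.parameters() if p.grad is not None]))
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # Second pass: gradient at the perturbed weights.
    optimizer.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Restore the original weights and take the descent step.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```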
1 code implementation • 6 Feb 2024 • Ziwen Zhao, Yuhua Li, Yixiong Zou, Jiliang Tang, Ruixuan Li
Inspired by these findings, we explore non-discrete edge masks that are sampled from a continuous and dispersive probability distribution rather than the discrete Bernoulli distribution.
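As a rough sketch of the idea, one could replace hard Bernoulli edge dropping with soft, continuously sampled edge weights. The code below assumes a PyG-style `edge_index` tensor and uses a Beta distribution purely as one example of a continuous, dispersive choice; the paper's actual distribution and how the masks enter its training objective are not specified here.

```python
import torch

def continuous_edge_mask(edge_index, edge_weight=None, alpha=0.5, beta=0.5):
    """Sample a continuous edge mask instead of a discrete Bernoulli one.

    edge_index: [2, E] tensor of edges (PyG-style).
    Each edge receives a weight drawn from Beta(alpha, beta); with
    alpha, beta < 1 the samples are dispersive (concentrated near 0 and 1),
    and every edge is kept with a soft weight rather than hard-dropped.
    """
    num_edges = edge_index.size(1)
    mask = torch.distributions.Beta(alpha, beta).sample((num_edges,))
    if edge_weight is None:
        edge_weight = torch.ones(num_edges)
    return edge_index, edge_weight * mask

# Hypothetical usage with a GNN encoder that accepts edge weights:
# edge_index, soft_w = continuous_edge_mask(data.edge_index)
# z = encoder(data.x, edge_index, edge_weight=soft_w)
```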
1 code implementation • 8 May 2023 • Han Chen, Ziwen Zhao, Yuhua Li, Yixiong Zou, Ruixuan Li, Rui Zhang
Graph Contrastive Learning (GCL) is an effective way to learn generalized graph representations in a self-supervised manner and has attracted rapidly growing attention in recent years.
1 code implementation • 10 Oct 2022 • Yixiong Zou, Shanghang Zhang, Yuhua Li, Ruixuan Li
Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only a few training samples after (pre-)training on base classes with sufficient samples, focusing on both base-class performance and novel-class generalization.
no code implementations • 21 Feb 2019 • Chengjie Li, Ruixuan Li, Haozhao Wang, Yuhua Li, Pan Zhou, Song Guo, Keqin Li
Distributed asynchronous offline training has received widespread attention in recent years because of its high performance on large-scale data and complex models.