no code implementations • 24 May 2024 • Qichao Shentu, Beibu Li, Kai Zhao, Yang Shu, Zhongwen Rao, Lujia Pan, Bin Yang, Chenjuan Guo
The significant divergence of time series data across different domains presents two primary challenges in building such a general model: (1) meeting the diverse requirements for appropriate information bottlenecks tailored to different datasets within one unified model, and (2) enabling the model to distinguish between multiple normal and abnormal patterns; both are crucial for effective anomaly detection in various target scenarios.
no code implementations • 24 May 2024 • Yihang Wang, Yuying Qiu, Peng Chen, Kai Zhao, Yang Shu, Zhongwen Rao, Lujia Pan, Bin Yang, Chenjuan Guo
Enabling general time series forecasting faces two challenges: how to obtain unified representations from multi-domain time series data, and how to capture domain-specific features from time series across various domains for adaptive transfer in downstream tasks.
1 code implementation • 4 Feb 2024 • Peng Chen, Yingying Zhang, Yunyao Cheng, Yang Shu, Yihang Wang, Qingsong Wen, Bin Yang, Chenjuan Guo
Multi-scale division partitions the time series into different temporal resolutions using patches of various sizes.
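As a minimal sketch of what such multi-scale patch division could look like, the snippet below splits a 1-D series into non-overlapping patches at several sizes; the function name, default patch sizes, and non-overlapping choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def multi_scale_patches(series, patch_sizes=(4, 8, 16)):
    """Split a 1-D series into non-overlapping patches at several scales.

    Each patch size yields a different temporal resolution of the same
    series: small patches keep fine detail, large ones capture coarse trends.
    """
    scales = {}
    for p in patch_sizes:
        n = len(series) // p * p          # drop the ragged tail, if any
        scales[p] = series[:n].reshape(-1, p)
    return scales

if __name__ == "__main__":
    x = np.sin(np.linspace(0, 10, 64))
    for p, patches in multi_scale_patches(x).items():
        print(f"patch size {p}: {patches.shape[0]} patches")
```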
1 code implementation • 2 Feb 2023 • Yang Shu, Xingzhuo Guo, Jialong Wu, Ximei Wang, Jianmin Wang, Mingsheng Long
This paper aims at generalizing CLIP to out-of-distribution test data on downstream tasks.
no code implementations • 8 Jun 2022 • Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang, Mingsheng Long
The proposed framework can be trained end-to-end with the target task-specific loss, where it learns to explore better pathway configurations and exploit the knowledge in pre-trained models for each target datum.
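To make the pathway idea concrete, here is a hedged sketch of one way per-datum routing over frozen pre-trained models could be wired up; the class name, gating scheme, and toy encoders are assumptions for illustration, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class PathwayRouter(nn.Module):
    """Sketch: route each target datum through frozen pre-trained encoders,
    weighting them with a learned gate; only the gate and the task head
    receive gradients from the target task-specific loss."""

    def __init__(self, encoders, feat_dim, num_classes):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        for enc in self.encoders:
            enc.requires_grad_(False)      # keep pre-trained weights frozen
        self.gate = nn.Linear(feat_dim, len(encoders))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # feats: (batch, num_encoders, feat_dim)
        feats = torch.stack([enc(x) for enc in self.encoders], dim=1)
        # per-datum pathway weights computed from the mean feature
        weights = torch.softmax(self.gate(feats.mean(dim=1)), dim=-1)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)
        return self.head(fused)

# toy usage: two "pre-trained" encoders mapping 32-d inputs to 16-d features
encoders = [nn.Linear(32, 16), nn.Linear(32, 16)]
model = PathwayRouter(encoders, feat_dim=16, num_classes=5)
print(model(torch.randn(8, 32)).shape)  # torch.Size([8, 5])
```

Because the whole module is differentiable, the gate learns pathway configurations end-to-end from the task loss, in the spirit described above.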
1 code implementation • 15 Jan 2022 • Junguang Jiang, Yang Shu, Jianmin Wang, Mingsheng Long
The success of deep learning algorithms generally depends on large-scale data, while humans appear to have an inherent ability to transfer knowledge, recognizing and applying relevant knowledge from previous learning experiences when encountering and solving unseen tasks.
no code implementations • 14 Oct 2021 • Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long
While pre-training and meta-training can create deep models powerful for few-shot generalization, we find that they focus, respectively, on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift.
no code implementations • 29 Jun 2021 • Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, Mingsheng Long
We propose \emph{Zoo-Tuning} to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task.
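A minimal sketch of adaptive parameter transfer in this spirit is shown below: a layer whose weight is a learned softmax-weighted combination of weights taken from several pre-trained source layers. The class name and the simple per-layer mixing weights are illustrative assumptions, not Zoo-Tuning's exact aggregation mechanism:

```python
import torch
import torch.nn as nn

class ZooLinear(nn.Module):
    """Sketch: the layer's effective weight is a learned convex combination
    of weight matrices taken from several pre-trained source layers."""

    def __init__(self, zoo_weights):
        super().__init__()
        # zoo_weights: list of (out_dim, in_dim) tensors from source models
        self.register_buffer("zoo", torch.stack(zoo_weights))
        self.logits = nn.Parameter(torch.zeros(len(zoo_weights)))

    def forward(self, x):
        alpha = torch.softmax(self.logits, dim=0)          # mixing weights
        weight = (alpha.view(-1, 1, 1) * self.zoo).sum(0)  # aggregated weight
        return x @ weight.t()

# toy usage: aggregate two source layers of shape (16, 32)
zoo = [torch.randn(16, 32), torch.randn(16, 32)]
layer = ZooLinear(zoo)
print(layer(torch.randn(8, 32)).shape)  # torch.Size([8, 16])
```

Only the mixing logits are trained here, so the target task adapts how the zoo's parameters are combined rather than relearning them from scratch.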
no code implementations • CVPR 2021 • Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, Mingsheng Long
Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when annotated data from the unseen domain are unavailable.