Search Results for author: Yang Shu

Found 9 papers, 3 papers with code

Towards a General Time Series Anomaly Detector with Adaptive Bottlenecks and Dual Adversarial Decoders

no code implementations • 24 May 2024 • Qichao Shentu, Beibu Li, Kai Zhao, Yang Shu, Zhongwen Rao, Lujia Pan, Bin Yang, Chenjuan Guo

The significant divergence of time series data across domains poses two main challenges in building such a general model: (1) meeting the diverse requirements for appropriate information bottlenecks tailored to different datasets within one unified model, and (2) enabling the model to distinguish between multiple normal and abnormal patterns; both are crucial for effective anomaly detection across target scenarios.
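
To make the bottleneck tension concrete, here is a minimal reconstruction-based detector sketch in PyTorch, assuming fixed-length windows; the class name and its bottleneck argument are illustrative inventions, not the paper's Adaptive Bottlenecks module.

import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    """Autoencoder over fixed-length windows; `bottleneck` controls how
    much information the model may keep, i.e. the capacity the abstract
    argues must adapt across domains. Illustrative only."""
    def __init__(self, window: int, bottleneck: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 64), nn.ReLU(),
                                     nn.Linear(64, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 64), nn.ReLU(),
                                     nn.Linear(64, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, windows):
    # Higher reconstruction error -> more anomalous.
    with torch.no_grad():
        recon = model(windows)
    return ((windows - recon) ** 2).mean(dim=-1)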

ROSE: Register Assisted General Time Series Forecasting with Decomposed Frequency Learning

no code implementations • 24 May 2024 • Yihang Wang, Yuying Qiu, Peng Chen, Kai Zhao, Yang Shu, Zhongwen Rao, Lujia Pan, Bin Yang, Chenjuan Guo

Enabling general time series forecasting faces two challenges: how to obtain unified representations from multi-domain time series data, and how to capture domain-specific features from time series across various domains for adaptive transfer to downstream tasks.
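
As a rough illustration of decomposed frequency learning, the sketch below splits a series into low- and high-frequency components with an FFT mask; the function name and cutoff parameter are hypothetical, and ROSE's actual decomposition may differ.

import torch

def frequency_decompose(x: torch.Tensor, cutoff: int):
    """x: (batch, length). Returns (low_freq, high_freq) time-domain parts."""
    spec = torch.fft.rfft(x, dim=-1)
    low = spec.clone()
    low[..., cutoff:] = 0  # keep only the first `cutoff` frequency bins
    high = spec - low
    return (torch.fft.irfft(low, n=x.size(-1), dim=-1),
            torch.fft.irfft(high, n=x.size(-1), dim=-1))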

CLIPood: Generalizing CLIP to Out-of-Distributions

1 code implementation • 2 Feb 2023 • Yang Shu, Xingzhuo Guo, Jialong Wu, Ximei Wang, Jianmin Wang, Mingsheng Long

This paper aims to generalize CLIP to out-of-distribution test data on downstream tasks.
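
For context, this is standard zero-shot CLIP inference with the openai/CLIP package, the baseline that CLIPood adapts to distribution shift; the image path is a placeholder, and this is not CLIPood's fine-tuning procedure.

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a dog", "a photo of a cat"]
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # Cosine similarity between image and label embeddings
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)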

Hub-Pathway: Transfer Learning from A Hub of Pre-trained Models

no code implementations • 8 Jun 2022 • Yang Shu, Zhangjie Cao, Ziyang Zhang, Jianmin Wang, Mingsheng Long

The proposed framework can be trained end-to-end with the target task-specific loss, learning to explore better pathway configurations and to exploit the knowledge in pre-trained models for each target datum.

Transfer Learning
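
One simple way to realize per-example routing over a hub of pre-trained models is a learned softmax gate over backbone features, sketched below; GatedHub and its assumption that all backbones share one output dimension are illustrative, not the Hub-Pathway architecture.

import torch
import torch.nn as nn

class GatedHub(nn.Module):
    def __init__(self, backbones, feat_dim: int, num_classes: int):
        super().__init__()
        # Assumes every backbone maps an input to a feat_dim vector.
        self.backbones = nn.ModuleList(backbones)
        self.gate = nn.Linear(feat_dim, len(backbones))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.backbones], dim=1)  # (B, M, D)
        weights = self.gate(feats.mean(dim=1)).softmax(dim=-1)      # (B, M)
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)          # (B, D)
        return self.head(fused)  # trained end-to-end with the target loss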

Transferability in Deep Learning: A Survey

1 code implementation • 15 Jan 2022 • Junguang Jiang, Yang Shu, Jianmin Wang, Mingsheng Long

The success of deep learning algorithms generally depends on large-scale data, whereas humans appear to have an inherent ability to transfer knowledge, recognizing and applying relevant knowledge from previous learning experiences when solving unseen tasks.

Domain Adaptation • Transfer Learning
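
The survey's core subject fits in a few lines: the standard fine-tuning recipe that transfers an ImageNet-pre-trained backbone to a new task. This is generic torchvision usage, not code from the survey's repository.

import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False  # freeze the transferred knowledge
model.fc = nn.Linear(model.fc.in_features, 10)  # new 10-class task head
# Train model.fc (and optionally unfreeze top blocks) on the target data.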

Omni-Training: Bridging Pre-Training and Meta-Training for Few-Shot Learning

no code implementations • 14 Oct 2021 • Yang Shu, Zhangjie Cao, Jinghan Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long

While pre-training and meta-training can both produce deep models with strong few-shot generalization, we find that they focus respectively on cross-domain transferability and cross-task transferability, which restricts their data efficiency in the entangled settings of domain shift and task shift.

Few-Shot Learning • Transfer Learning
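
To contrast meta-training with plain pre-training, here is one episode of metric-based meta-training in the style of prototypical networks; this is a common few-shot baseline sketch, not the Omni-Training method.

import torch
import torch.nn.functional as F

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    z_s = encoder(support_x)                       # (N*K, D) support embeddings
    protos = torch.stack([z_s[support_y == c].mean(0)
                          for c in range(n_way)])  # (N, D) class prototypes
    z_q = encoder(query_x)                         # (Q, D) query embeddings
    logits = -torch.cdist(z_q, protos)             # nearer prototype = higher score
    return F.cross_entropy(logits, query_y)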

Zoo-Tuning: Adaptive Transfer from a Zoo of Models

no code implementations • 29 Jun 2021 • Yang Shu, Zhi Kou, Zhangjie Cao, Jianmin Wang, Mingsheng Long

We propose Zoo-Tuning to address these challenges, which learns to adaptively transfer the parameters of pretrained models to the target task.

Facial Landmark Detection • Image Classification • +1
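
A toy version of adaptive parameter transfer: mix the same layer's weights from several pre-trained models with learned coefficients. The AggregatedLinear module below is a simplification for illustration, not the Zoo-Tuning implementation.

import torch
import torch.nn as nn

class AggregatedLinear(nn.Module):
    def __init__(self, weight_list):  # K weight tensors of identical shape (out, in)
        super().__init__()
        self.weights = nn.Parameter(torch.stack(weight_list))     # fine-tunable copies
        self.alpha = nn.Parameter(torch.zeros(len(weight_list)))  # mixing logits

    def forward(self, x):
        mix = self.alpha.softmax(0)                     # learned convex combination
        w = (mix.view(-1, 1, 1) * self.weights).sum(0)  # aggregated layer weights
        return x @ w.t()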

Open Domain Generalization with Domain-Augmented Meta-Learning

no code implementations • CVPR 2021 • Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, Mingsheng Long

Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when annotated data from the unseen domains are unavailable.

Domain Generalization • Meta-Learning
