no code implementations • 22 May 2024 • Peiwang Tang, Weitai Zhang
We attribute the effectiveness of these models largely to the adopted Patch mechanism, which enhances sequence locality to some extent but fails to fully address the loss of temporal information inherent in the permutation-invariant self-attention mechanism.
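For context, a minimal sketch of the patching idea described above, assuming a PyTorch setting; the function name, shapes, and the `patch_len`/`stride` parameters are illustrative, not the paper's actual configuration:

```python
import torch

def patchify(x: torch.Tensor, patch_len: int, stride: int) -> torch.Tensor:
    """Split a multivariate time series into (possibly overlapping) patches.

    x: (batch, seq_len, n_vars) -> (batch, n_vars, n_patches, patch_len)
    Each patch becomes one token, so locality within a patch is preserved,
    but the self-attention applied across patch tokens remains
    permutation-invariant unless positional information is added.
    """
    x = x.transpose(1, 2)                      # (batch, n_vars, seq_len)
    return x.unfold(-1, patch_len, stride)     # (batch, n_vars, n_patches, patch_len)

# Example: a batch of 8 series, 96 steps, 7 variables, patches of length 16
patches = patchify(torch.randn(8, 96, 7), patch_len=16, stride=8)
print(patches.shape)  # torch.Size([8, 7, 11, 16])
```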
no code implementations • 4 Jan 2023 • Peiwang Tang, Xianchao Zhang
The Transformer architecture yields state-of-the-art results in many tasks such as natural language processing (NLP) and computer vision (CV), owing to its ability to efficiently capture precise long-range dependencies within input sequences.
no code implementations • 4 Oct 2022 • Peiwang Tang, Xianchao Zhang
Large-scale self-supervised pre-trained Transformer architectures have significantly boosted performance on various tasks in natural language processing (NLP) and computer vision (CV).
Multivariate Time Series Forecasting • Self-Supervised Learning +1
no code implementations • 5 Sep 2022 • Peiwang Tang, Xianchao Zhang
First, complex features are extracted according to the irregular patterns of different events.