How Features Benefit: Parallel Series Embedding for Multivariate Time Series Forecasting with Transformer

ICTAI 2022  ·  Xuande Feng, Zonglin Lyu

Forecasting time series is an engaging and vital mathematical topic. Theories and applications in related fields have been studied for decades, and deep learning has provided reliable tools in recent years. The Transformer, capable of capturing longer sequence dependencies, has been exploited as a powerful architecture in time series forecasting. While existing work has mainly focused on breaking the memory bottleneck of the Transformer, how to effectively leverage multivariate time series has received little attention. In this work, a novel architecture built on a primary Transformer is proposed to conduct multivariate time series prediction. The proposed architecture has two main advantages. First, it accurately predicts multivariate time series across shorter and longer sequence lengths and prediction steps. We benchmark the proposed model against various baseline architectures on real-world datasets, and it improves on their performance significantly. Second, it can easily be leveraged in Transformer-based variants.
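To make the idea concrete, below is a minimal sketch (not the authors' code, which is not reproduced here) of one plausible reading of "parallel series embedding": each variate of the multivariate input is embedded by its own linear layer in parallel, the per-series embeddings are fused (here simply summed, an assumption), and a standard Transformer encoder produces the forecast. All names, dimensions, and the fusion rule are illustrative choices, not details taken from the paper.

```python
# Hypothetical sketch of a parallel-series-embedding Transformer forecaster.
import torch
import torch.nn as nn

class ParallelSeriesForecaster(nn.Module):
    """Embed each series in parallel, fuse, and forecast with a Transformer."""
    def __init__(self, n_series: int, seq_len: int, pred_len: int,
                 d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # One linear embedding per series, applied in parallel (assumed design).
        self.embeddings = nn.ModuleList(
            [nn.Linear(1, d_model) for _ in range(n_series)]
        )
        # Learned positional encoding over the input window.
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Project the encoded window to pred_len steps for every series.
        self.head = nn.Linear(seq_len * d_model, pred_len * n_series)
        self.pred_len, self.n_series = pred_len, n_series

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_series)
        embedded = sum(
            emb(x[..., i:i + 1]) for i, emb in enumerate(self.embeddings)
        )  # (batch, seq_len, d_model): per-series embeddings, fused by summing
        z = self.encoder(embedded + self.pos)
        out = self.head(z.flatten(1))
        return out.view(-1, self.pred_len, self.n_series)

# Usage on ETT-like data: 7 variates, 96-step window, 720-step horizon.
model = ParallelSeriesForecaster(n_series=7, seq_len=96, pred_len=720)
y = model(torch.randn(8, 96, 7))  # -> (8, 720, 7)
```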


Results from the Paper


Task                      Dataset                  Model                        Metric  Value  Global Rank
Time Series               ETTh1 (720)              Parallel Series Transformer  MSE     0.129  #1
Time Series Forecasting   ETTh1 (720) Univariate   Parallel Series Transformer  MSE     0.129  #8
                                                                                MAE     0.286  #5
Time Series               Weather (720)            Parallel Series Transformer  MSE     0.319  #1

Methods


No methods listed for this paper.