no code implementations • 29 Dec 2021 • Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo
With this in mind, we propose a new Transformer-based Hawkes process model, the Temporal Attention Augmented Transformer Hawkes Process (TAA-THP). We modify the traditional dot-product attention structure and introduce temporal encoding into the attention structure.
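To make the idea concrete, below is a minimal sketch of dot-product attention augmented with a temporal-encoding term in the attention scores. The class name `TemporalAttention`, the projection `w_t`, the sinusoidal encoding of event timestamps, and the additive-score formulation are illustrative assumptions; the exact TAA-THP formulation may differ from this sketch.

```python
# Minimal sketch (assumed, not the paper's exact formulation):
# dot-product attention whose scores are augmented with a temporal
# encoding of continuous event timestamps.
import math
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    """Scaled dot-product attention with an additive temporal-encoding term."""

    def __init__(self, d_model: int):
        super().__init__()
        self.d_model = d_model
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        # Projection for the temporal encoding (hypothetical name).
        self.w_t = nn.Linear(d_model, d_model)

    def temporal_encoding(self, times: torch.Tensor) -> torch.Tensor:
        # Sinusoidal encoding of continuous event timestamps,
        # analogous to positional encoding but driven by event times.
        half = self.d_model // 2
        freqs = torch.exp(
            -math.log(10000.0) * torch.arange(half, dtype=times.dtype) / half
        )
        angles = times.unsqueeze(-1) * freqs       # (batch, seq, half)
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

    def forward(self, x: torch.Tensor, times: torch.Tensor) -> torch.Tensor:
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        # Standard dot-product attention scores.
        scores = q @ k.transpose(-2, -1)
        # Additional temporal term: queries attend to temporal encodings of keys.
        t_enc = self.w_t(self.temporal_encoding(times))
        scores = (scores + q @ t_enc.transpose(-2, -1)) / math.sqrt(self.d_model)
        return torch.softmax(scores, dim=-1) @ v


# Usage: a batch of 2 event sequences with 5 events each, model dimension 16.
attn = TemporalAttention(d_model=16)
events = torch.randn(2, 5, 16)                 # event-type embeddings
timestamps = torch.rand(2, 5).cumsum(dim=-1)   # strictly increasing event times
out = attn(events, timestamps)
print(out.shape)  # torch.Size([2, 5, 16])
```

The point of the extra term is that attention weights can depend on when events occurred, not only on their embeddings, which is what distinguishes this from plain dot-product attention.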
no code implementations • 29 Dec 2021 • Lu-ning Zhang, Jian-wei Liu, Zhi-yan Song, Xin Zuo
Inspired by the Transformer model, which can learn from sequence data efficiently without recurrent or convolutional structures, the Transformer Hawkes process was proposed and achieves state-of-the-art performance.