Are Negative Samples Necessary in Entity Alignment? An Approach with High Performance, Scalability and Robustness

11 Aug 2021 · Xin Mao, Wenting Wang, Yuanbin Wu, Man Lan

Entity alignment (EA) aims to find equivalent entities across different knowledge graphs (KGs), which is a crucial step in integrating multiple KGs. However, most existing EA methods have poor scalability and are unable to cope with large-scale datasets. We summarize three issues leading to the high time-space complexity of existing EA methods: (1) inefficient graph encoders, (2) the dilemma of negative sampling, and (3) "catastrophic forgetting" in semi-supervised learning. To address these challenges, we propose a novel EA method with three new components to enable high Performance, high Scalability, and high Robustness (PSR): (1) a simplified graph encoder with relational graph sampling, (2) a symmetric negative-free alignment loss, and (3) incremental semi-supervised learning. Furthermore, we conduct detailed experiments on several public datasets to examine the effectiveness and efficiency of our proposed method. The experimental results show that PSR not only surpasses the previous SOTA in performance but also has impressive scalability and robustness.
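The abstract's second component, a symmetric negative-free alignment loss, can be illustrated with a minimal sketch. The paper's exact formulation is not given on this page, so the function name, the use of cosine similarity, and the symmetric 1-minus-cosine form below are assumptions: the sketch only conveys the idea of pulling seed-aligned pairs together without sampling any negative entities.

```python
import numpy as np

def symmetric_alignment_loss(src_emb, tgt_emb):
    """Hypothetical sketch of a negative-free alignment loss.

    src_emb, tgt_emb: (n, d) arrays of embeddings for n pre-aligned
    (seed) entity pairs, row i of src_emb aligned to row i of tgt_emb.
    No negative samples are drawn: the loss simply pulls each aligned
    pair's normalized embeddings together, symmetrically in both KGs.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    cos = np.sum(src * tgt, axis=1)    # per-pair cosine similarity
    return float(np.mean(1.0 - cos))   # 0 when every pair coincides
```

Because no negative pairs are enumerated, the cost per step is linear in the number of seed alignments rather than quadratic in the number of entities, which is the scalability point the abstract makes.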


Results from the Paper


Ranked #6 on Entity Alignment on DBP15k ja-en (using extra training data).

Task              Dataset       Model  Metric  Value  Global Rank
Entity Alignment  DBP15k fr-en  PSR    Hits@1  0.958  #6
Entity Alignment  DBP15k ja-en  PSR    Hits@1  0.908  #6
Entity Alignment  DBP15k zh-en  PSR    Hits@1  0.883  #6
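The results above report Hits@1, the standard EA retrieval metric: the fraction of test entities whose true counterpart is the top-ranked candidate. As a sketch (the function name and cosine-similarity ranking are assumptions, not from the paper), Hits@k can be computed as:

```python
import numpy as np

def hits_at_k(src_emb, tgt_emb, k=1):
    """Compute Hits@k for entity alignment.

    Row i of src_emb is assumed aligned to row i of tgt_emb.
    A test pair counts as a hit if the true counterpart is among
    the k nearest target embeddings by cosine similarity.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                        # (n, n) similarity matrix
    topk = np.argsort(-sim, axis=1)[:, :k]   # top-k candidate indices
    hits = np.any(topk == np.arange(len(src))[:, None], axis=1)
    return float(np.mean(hits))
```

A Hits@1 of 0.958 on DBP15k fr-en thus means 95.8% of test entities are matched to their true counterpart on the first try.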
