1 code implementation • spnlp (ACL) 2022 • Guirong Fu, Zhao Meng, Zhen Han, Zifeng Ding, Yunpu Ma, Matthias Schubert, Volker Tresp, Roger Wattenhofer
In this paper, we tackle the temporal knowledge graph completion task by proposing TempCaps, a Capsule network-based embedding model for Temporal knowledge graph completion.
no code implementations • 4 May 2024 • Zeyu Yang, Zhao Meng, Xiaochen Zheng, Roger Wattenhofer
Large Language Models (LLMs) have revolutionized natural language processing, but their robustness against adversarial attacks remains a critical concern.
1 code implementation • 29 Oct 2022 • Yu Fei, Ping Nie, Zhao Meng, Roger Wattenhofer, Mrinmaya Sachan
We further explore the applicability of our clustering approach by evaluating it on 14 datasets with more diverse topics, text lengths, and numbers of classes.
1 code implementation • 12 Aug 2022 • Zifeng Ding, Zongyue Li, Ruoxia Qi, Jingpei Wu, Bailan He, Yunpu Ma, Zhao Meng, Shuo Chen, Ruotong Liao, Zhen Han, Volker Tresp
To this end, we propose ForecastTKGQA, a TKGQA model that employs a TKG forecasting module for future inference, to answer all three types of questions.
1 code implementation • 2022 • Zhao Meng, Roger Wattenhofer
Cosmic rays consist mostly of highly energetic protons that emanate from the sun, the Milky Way, and distant galaxies.
1 code implementation • 17 Oct 2021 • Zai Shi, Zhao Meng, Yiran Xing, Yunpu Ma, Roger Wattenhofer
3D-RETR is capable of 3D reconstruction from a single view or multiple views.
no code implementations • 15 Sep 2021 • Jens Hauser, Zhao Meng, Damián Pascual, Roger Wattenhofer
We combine a human evaluation of individual word substitutions and a probabilistic analysis to show that between 96% and 99% of the analyzed attacks do not preserve semantics, indicating that their success is mainly based on feeding poor data to the model.
1 code implementation • Findings (NAACL) 2022 • Zhao Meng, Yihan Dong, Mrinmaya Sachan, Roger Wattenhofer
In this paper, we present an approach to improve the robustness of BERT language models against word substitution-based adversarial attacks by leveraging adversarial perturbations for self-supervised contrastive learning.
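As a rough illustration of the underlying idea (a minimal sketch, not the paper's implementation), adversarial contrastive learning treats each clean example and its adversarially perturbed version as a positive pair, and pulls their representations together with an InfoNCE-style loss while pushing apart other examples in the batch:

```python
import numpy as np

def info_nce_loss(clean, perturbed, temperature=0.1):
    """InfoNCE contrastive loss: each clean embedding's positive is its
    adversarially perturbed counterpart; all other perturbed embeddings
    in the batch serve as negatives. Inputs: (batch, dim) arrays."""
    # L2-normalize so dot products become cosine similarities
    clean = clean / np.linalg.norm(clean, axis=1, keepdims=True)
    perturbed = perturbed / np.linalg.norm(perturbed, axis=1, keepdims=True)
    logits = clean @ perturbed.T / temperature          # (batch, batch)
    # softmax cross-entropy where the diagonal entry is the correct class
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
clean = rng.normal(size=(8, 32))
# stand-in for adversarial views: in practice these would come from
# word substitution attacks against the language model
adv = clean + 0.05 * rng.normal(size=(8, 32))
loss = info_nce_loss(clean, adv)
```

In the actual setting the embeddings would come from BERT and the perturbed views from word substitution attacks; the toy perturbation here only demonstrates the loss.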
no code implementations • 25 Feb 2021 • Nikola Jovanović, Zhao Meng, Lukas Faber, Roger Wattenhofer
We study the problem of adversarially robust self-supervised learning on graphs.
1 code implementation • ACL 2021 • Yiran Xing, Zai Shi, Zhao Meng, Gerhard Lakemeyer, Yunpu Ma, Roger Wattenhofer
We present Knowledge Enhanced Multimodal BART (KM-BART), a Transformer-based sequence-to-sequence model capable of reasoning about commonsense knowledge from multimodal inputs of images and texts.
no code implementations • COLING 2020 • Zhao Meng, Roger Wattenhofer
Generating adversarial examples for natural language is hard, as natural language consists of discrete symbols, and examples are often of variable lengths.
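Because the input space is discrete, a common family of attacks (sketched here with a hypothetical scoring function, not this paper's method) frames generation as a search problem: greedily substitute one word at a time from a candidate list and keep any swap that lowers the victim model's confidence:

```python
from typing import Callable

def greedy_substitute(tokens, candidates, score: Callable):
    """Greedy word-substitution attack sketch.
    tokens: list of words; candidates: dict mapping a word to substitutes;
    score: victim model's confidence for a token list (lower = better
    for the attacker). Returns the perturbed tokens and final score."""
    tokens = list(tokens)
    best = score(tokens)
    for i, word in enumerate(list(tokens)):
        for sub in candidates.get(word, []):
            trial = tokens[:i] + [sub] + tokens[i + 1:]
            s = score(trial)
            if s < best:          # keep the substitution that hurts most
                best, tokens = s, trial
    return tokens, best

# Toy "model": confidence drops once the word "good" is replaced
score = lambda toks: 0.9 if "good" in toks else 0.2
adv, conf = greedy_substitute(["a", "good", "movie"],
                              {"good": ["fine", "nice"]}, score)
```

Handling variable lengths and preserving fluency are exactly the difficulties the sentence above points to; this greedy loop sidesteps them only by restricting edits to one-for-one substitutions.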
no code implementations • LREC 2018 • Zhao Meng, Lili Mou, Zhi Jin
Neural network-based dialog systems are attracting increasing attention in both academia and industry.
1 code implementation • 22 Mar 2017 • Zhao Meng, Lili Mou, Zhi Jin
Speaker change detection (SCD) is an important task in dialog modeling.
no code implementations • 10 Oct 2016 • Tiancheng Zhao, Ran Zhao, Zhao Meng, Justine Cassell
Social norms are shared rules that govern and facilitate social interaction.
no code implementations • EMNLP 2016 • Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, Zhi Jin
Transfer learning aims to leverage valuable knowledge from a source domain to improve model performance in a target domain.