Search Results for author: Yunzhi Yao

Found 19 papers, 18 papers with code

Knowledge Circuits in Pretrained Transformers

1 code implementation • 28 May 2024 • Yunzhi Yao, Ningyu Zhang, Zekun Xi, Mengru Wang, Ziwen Xu, Shumin Deng, Huajun Chen

In this paper, we delve into the computation graph of the language model to uncover the knowledge circuits that are instrumental in articulating specific knowledge.
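The paper asks which subgraphs of the transformer's computation graph carry a given fact. As a rough illustration of this style of analysis (not the paper's actual discovery pipeline; the model, layer, and head below are arbitrary choices for the example), one can zero-ablate a single attention head with TransformerLens and check how much the logit of the fact token drops — a large drop suggests the head participates in the circuit for that fact:

```python
# Minimal circuit-style probe (illustrative only, not the paper's pipeline).
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
prompt = "The Eiffel Tower is located in the city of"
tokens = model.to_tokens(prompt)
answer_id = model.to_single_token(" Paris")

with torch.no_grad():
    clean_logits = model(tokens)
clean_logit = clean_logits[0, -1, answer_id].item()

LAYER, HEAD = 9, 8  # arbitrary head chosen for the example

def ablate_head(z, hook):
    z[:, :, HEAD, :] = 0.0  # zero out this head's output
    return z

with torch.no_grad():
    ablated_logits = model.run_with_hooks(
        tokens, fwd_hooks=[(f"blocks.{LAYER}.attn.hook_z", ablate_head)]
    )
drop = clean_logit - ablated_logits[0, -1, answer_id].item()
print(f"Logit drop for ' Paris' after ablating L{LAYER}H{HEAD}: {drop:.3f}")
```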

WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models

1 code implementation • 23 May 2024 • Peng Wang, Zexi Li, Ningyu Zhang, Ziwen Xu, Yunzhi Yao, Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen

In WISE, we design a dual parametric memory scheme, which consists of the main memory for the pretrained knowledge and a side memory for the edited knowledge.

Tasks: Hallucination, Model Editing, +2
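WISE keeps pretrained knowledge in a frozen main FFN and stores edits in a trainable side copy, with a router deciding per input which memory answers. The sketch below is a simplified reading of that scheme — the threshold routing signal and module structure are assumptions for illustration, not WISE's actual implementation:

```python
import copy
import torch
import torch.nn as nn

class DualMemoryFFN(nn.Module):
    """Simplified dual parametric memory in the spirit of WISE: a frozen
    main FFN (pretrained knowledge) plus a trainable side FFN (edited
    knowledge), selected by an activation-based router."""

    def __init__(self, main_ffn: nn.Module, threshold: float = 0.5):
        super().__init__()
        self.main_ffn = main_ffn
        for p in self.main_ffn.parameters():
            p.requires_grad = False          # pretrained memory stays frozen
        self.side_ffn = copy.deepcopy(main_ffn)  # trained on edits only
        self.threshold = threshold           # routing threshold (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        main_out = self.main_ffn(x)
        side_out = self.side_ffn(x)
        # Stand-in routing signal: how differently the two memories respond.
        score = (side_out - main_out).norm(dim=-1, keepdim=True)
        use_side = (score > self.threshold).float()
        return use_side * side_out + (1 - use_side) * main_out
```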

Detoxifying Large Language Models via Knowledge Editing

1 code implementation • 21 Mar 2024 • Mengru Wang, Ningyu Zhang, Ziwen Xu, Zekun Xi, Shumin Deng, Yunzhi Yao, Qishen Zhang, Linyi Yang, Jindong Wang, Huajun Chen

This paper investigates using knowledge editing techniques to detoxify Large Language Models (LLMs).

Tasks: Knowledge Editing

Editing Conceptual Knowledge for Large Language Models

1 code implementation • 10 Mar 2024 • Xiaohan Wang, Shengyu Mao, Ningyu Zhang, Shumin Deng, Yunzhi Yao, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen

Recently, there has been a growing interest in knowledge editing for Large Language Models (LLMs).

Tasks: Knowledge Editing

Unveiling the Pitfalls of Knowledge Editing for Large Language Models

1 code implementation • 3 Oct 2023 • Zhoubo Li, Ningyu Zhang, Yunzhi Yao, Mengru Wang, Xi Chen, Huajun Chen

This paper pioneers the investigation into the potential pitfalls associated with knowledge editing for LLMs.

Tasks: Knowledge Editing

EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models

2 code implementations • 14 Aug 2023 • Peng Wang, Ningyu Zhang, Bozhong Tian, Zekun Xi, Yunzhi Yao, Ziwen Xu, Mengru Wang, Shengyu Mao, Xiaohan Wang, Siyuan Cheng, Kangwei Liu, Yuansheng Ni, Guozhou Zheng, Huajun Chen

Large Language Models (LLMs) usually suffer from knowledge cutoff or fallacy issues, meaning they are unaware of unseen events or generate text with incorrect facts owing to outdated or noisy data.

Tasks: Knowledge Editing
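EasyEdit wraps many editing methods behind a single editor interface. The snippet below follows the usage pattern from the project README as I recall it — the method, hyperparameter path, and exact signatures may differ across versions, so treat it as a sketch rather than the canonical API:

```python
from easyeditor import BaseEditor, ROMEHyperParams

# Load hyperparameters for one editing method (ROME here); the path
# follows the repository's hparams/ layout.
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl')
editor = BaseEditor.from_hparams(hparams)

# Apply a single factual edit and get evaluation metrics back.
metrics, edited_model, _ = editor.edit(
    prompts=['The headquarters of Apple is in'],
    ground_truth=['Cupertino'],
    target_new=['Seattle'],
)
print(metrics)
```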

Editing Large Language Models: Problems, Methods, and Opportunities

2 code implementations • 22 May 2023 • Yunzhi Yao, Peng Wang, Bozhong Tian, Siyuan Cheng, Zhoubo Li, Shumin Deng, Huajun Chen, Ningyu Zhang

Our objective is to provide valuable insights into the effectiveness and feasibility of each editing technique, thereby assisting the community in making informed decisions on the selection of the most appropriate method for a specific task or context.

Tasks: Model Editing

LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities

1 code implementation • 22 May 2023 • Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, Ningyu Zhang

We conduct experiments across eight diverse datasets, covering four representative tasks (entity and relation extraction, event extraction, link prediction, and question answering), thereby thoroughly exploring LLMs' performance in knowledge graph construction and reasoning.

Tasks: Event Extraction, Graph Construction, +4

Knowledge Rumination for Pre-trained Language Models

1 code implementation • 15 May 2023 • Yunzhi Yao, Peng Wang, Shengyu Mao, Chuanqi Tan, Fei Huang, Huajun Chen, Ningyu Zhang

Previous studies have revealed that vanilla pre-trained language models (PLMs) lack the capacity to handle knowledge-intensive NLP tasks alone; thus, several works have attempted to integrate external knowledge into PLMs.

Tasks: Language Modelling

Reasoning with Language Model Prompting: A Survey

2 code implementations • 19 Dec 2022 • Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen

Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc.

Tasks: Arithmetic Reasoning, Common Sense Reasoning, +4

Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph Construction

1 code implementation • 19 Oct 2022 • Yunzhi Yao, Shengyu Mao, Ningyu Zhang, Xiang Chen, Shumin Deng, Xi Chen, Huajun Chen

With the development of pre-trained language models, many prompt-based approaches to data-efficient knowledge graph construction have been proposed and achieved impressive performance.

Tasks: Event Extraction, Graph Construction, +2

Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction

1 code implementation • 7 May 2022 • Xiang Chen, Ningyu Zhang, Lei Li, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance.

Tasks: Named Entity Recognition, +3
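HVPNeT guides each transformer layer with image-derived prefixes. As a rough sketch of the general visual-prefix idea (the names, shapes, and per-layer projections here are illustrative, not the paper's code), one can map pooled image features into per-layer key/value prefixes to be prepended to the text attention:

```python
import torch
import torch.nn as nn

class VisualPrefix(nn.Module):
    """Illustrative visual-prefix generator: maps image features to
    per-layer key/value prefixes for a text transformer (a simplified
    take on hierarchical visual prefixes, not HVPNeT's exact design)."""

    def __init__(self, img_dim: int, hidden: int, n_layers: int, prefix_len: int = 4):
        super().__init__()
        self.prefix_len, self.hidden = prefix_len, hidden
        # One projection per layer -> layer-specific ("hierarchical") prefixes.
        self.proj = nn.ModuleList(
            nn.Linear(img_dim, 2 * prefix_len * hidden) for _ in range(n_layers)
        )

    def forward(self, img_feat: torch.Tensor):
        # img_feat: (batch, img_dim) pooled visual features
        prefixes = []
        for layer_proj in self.proj:
            kv = layer_proj(img_feat).view(-1, 2, self.prefix_len, self.hidden)
            prefixes.append((kv[:, 0], kv[:, 1]))  # (key_prefix, value_prefix)
        return prefixes  # one (K, V) prefix pair per transformer layer
```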

Kformer: Knowledge Injection in Transformer Feed-Forward Layers

1 code implementation • 15 Jan 2022 • Yunzhi Yao, Shaohan Huang, Li Dong, Furu Wei, Huajun Chen, Ningyu Zhang

In this work, we propose a simple model, Kformer, which takes advantage of the knowledge stored in PTMs and external knowledge via knowledge injection in Transformer FFN layers.

Tasks: Language Modelling, Question Answering
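Kformer treats the FFN's two projections as key/value memories and injects retrieved knowledge by extending them with knowledge embeddings. The sketch below follows that description in simplified form — sharing one embedding per knowledge entry for both the key and value sides is an assumption for brevity, not the paper's exact parameterization:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KnowledgeFFN(nn.Module):
    """FFN with injected knowledge, in the spirit of Kformer: knowledge
    embeddings act as extra key rows and extra value columns."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)   # keys of the FFN memory
        self.w2 = nn.Linear(d_ff, d_model)   # values of the FFN memory

    def forward(self, x: torch.Tensor, knowledge: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); knowledge: (n_k, d_model) retrieved embeddings
        h = F.gelu(self.w1(x))               # standard FFN key activations
        k_scores = F.gelu(x @ knowledge.T)   # match input against knowledge keys
        # Standard value aggregation plus knowledge-value aggregation.
        return self.w2(h) + k_scores @ knowledge
```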

KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction

1 code implementation • 15 Apr 2021 • Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).

Ranked #5 on Dialog Relation Extraction on DialogRE (F1 (v1) metric)

Tasks: Dialog Relation Extraction, Language Modelling, +3
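KnowPrompt turns relation labels into learnable virtual answer words whose embeddings are seeded from the labels' semantics and then tuned jointly with the model. As a hedged illustration (the initialization is simplified and the model and relation names below are arbitrary, not the authors' code), one can average the token embeddings of each relation label to initialize its virtual word:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

relations = ["place of birth", "employer", "spouse"]  # example relation labels
embed = model.get_input_embeddings()

# One virtual answer-word embedding per relation, initialized from the
# average of its label tokens' embeddings (simplified KnowPrompt-style init).
virtual_words = []
for label in relations:
    ids = tokenizer(label, add_special_tokens=False)["input_ids"]
    virtual_words.append(embed.weight[ids].mean(dim=0).detach())
virtual_words = torch.nn.Parameter(torch.stack(virtual_words))  # tuned jointly
print(virtual_words.shape)  # (num_relations, hidden_size)
```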
