1 code implementation • 16 Apr 2024 • Pancheng Wang, Shasha Li, Dong Li, Kehan Long, Jintao Tang, Ting Wang
Our insights are twofold: first, summary candidates can provide instructive information from both positive and negative perspectives; second, selecting higher-quality candidates from multiple options contributes to producing better summaries.
2 code implementations • 7 Apr 2024 • Shezheng Song, Shasha Li, Shan Zhao, Xiaopeng Li, Chengyu Wang, Jie Yu, Jun Ma, Tianwei Yan, Bin Ji, Xiaoguang Mao
Multimodal entity linking (MEL) aims to utilize multimodal information (usually textual and visual information) to link ambiguous mentions to unambiguous entities in a knowledge base.
1 code implementation • 4 Mar 2024 • Kehan Long, Shasha Li, Pancheng Wang, Chenlong Bao, Jintao Tang, Ting Wang
To help improve citations of full papers, we first define a novel task of Recommending Missed Citations Identified by Reviewers (RMC) and construct a corresponding expert-labeled dataset called CitationR.
no code implementations • 31 Jan 2024 • Xiaopeng Li, Shasha Li, Shezheng Song, Huijun Liu, Bin Ji, Xi Wang, Jun Ma, Jie Yu, Xiaodong Liu, Jing Wang, Weimin Zhang
In particular, local editing methods, which directly update model parameters, are more suitable for updating a small amount of knowledge.
1 code implementation • 19 Dec 2023 • Shezheng Song, Shan Zhao, Chengyu Wang, Tianwei Yan, Shasha Li, Xiaoguang Mao, Meng Wang
Multimodal Entity Linking (MEL) aims to link ambiguous mentions with multimodal information to entities in a Knowledge Graph (KG) such as Wikipedia, and plays a key role in many applications.
no code implementations • 10 Nov 2023 • Shezheng Song, Xiaopeng Li, Shasha Li, Shan Zhao, Jie Yu, Jun Ma, Xiaoguang Mao, Weimin Zhang
The study categorizes existing modal alignment methods in MLLMs into four groups: (1) Multimodal Converters, which transform data into representations LLMs can understand; (2) Multimodal Perceivers, which improve how LLMs perceive different types of data; (3) Tools Assistance, which converts data into one common format, usually text; and (4) Data-Driven methods, which teach LLMs to understand specific types of data in a dataset.
1 code implementation • 17 Aug 2023 • Xiaopeng Li, Shasha Li, Shezheng Song, Jing Yang, Jun Ma, Jie Yu
To achieve more precise model editing, we analyze hidden states of MHSA and FFN, finding that MHSA encodes certain general knowledge extraction patterns.
no code implementations • 11 Jul 2023 • Chunxi Guo, Zhiliang Tian, Jintao Tang, Shasha Li, Zhihua Wen, Kaixuan Wang, Ting Wang
Prompt learning with large language models (LLMs) has emerged as a recent approach, which designs prompts to lead LLMs to understand the input question and generate the corresponding SQL.
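As a rough illustration of this prompt-learning setup, the sketch below renders a database schema and a natural-language question into a text-to-SQL prompt for an LLM. The schema format and prompt template are assumptions for illustration, not the paper's actual prompt design.

```python
# Minimal sketch of prompt construction for LLM-based text-to-SQL.
# The template and schema encoding here are hypothetical.

def build_sql_prompt(schema: dict, question: str) -> str:
    """Render table schemas and a question into a text-to-SQL prompt."""
    lines = [f"Table {table}({', '.join(columns)})"
             for table, columns in schema.items()]
    schema_text = "\n".join(lines)
    return (
        "Given the database schema:\n"
        f"{schema_text}\n"
        "Translate the question into SQL.\n"
        f"Question: {question}\n"
        "SQL:"
    )

prompt = build_sql_prompt(
    {"singer": ["singer_id", "name", "country"]},
    "How many singers are from France?",
)
```

The resulting string would then be sent to the LLM, whose completion is taken as the candidate SQL query.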
1 code implementation • 20 Jun 2023 • Yongzhu Miao, Shasha Li, Jintao Tang, Ting Wang
We evaluate the effectiveness of MuDPT on few-shot vision recognition and out-of-domain generalization tasks.
no code implementations • 10 May 2023 • Chengxian Zhang, Jintao Tang, Ting Wang, Shasha Li
Address matching plays a crucial role in many areas, such as express delivery and online shopping.
no code implementations • 9 Mar 2023 • Jing Yang, Bin Ji, Shasha Li, Jun Ma, Long Peng, Jie Yu
Recently, many studies have incorporated external knowledge into character-level feature-based models to improve the performance of Chinese relation extraction.
no code implementations • 23 Oct 2022 • Bin Ji, Shasha Li, Hao Xu, Jie Yu, Jun Ma, Huijun Liu, Jing Yang
On the one hand, the core architecture enables our model to learn token-level label information via the sequence tagging mechanism and then use that information in span-based joint extraction; on the other hand, it establishes a bi-directional information interaction between NER and RE.
no code implementations • 20 Sep 2022 • Abhishek Aich, Shasha Li, Chengyu Song, M. Salman Asif, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury
Our goal is to design an attack strategy that can learn from such natural scenes by leveraging the local patch differences that occur inherently in such images (e.g., the difference between the local patch on the object 'person' and the local patch on the object 'bike' in a traffic scene).
1 code implementation • COLING 2022 • Pancheng Wang, Shasha Li, Kunyuan Pang, Liangliang He, Dong Li, Jintao Tang, Ting Wang
Multi-Document Scientific Summarization (MDSS) aims to produce coherent and concise summaries for clusters of topic-relevant scientific papers.
no code implementations • 18 Aug 2022 • Bin Ji, Hao Xu, Jie Yu, Shasha Li, Jun Ma, Yuke Ji, Huijun Liu
An exhaustive study has been conducted to investigate span-based models for the joint entity and relation extraction task.
no code implementations • COLING 2022 • Bin Ji, Shasha Li, Shaoduo Gan, Jie Yu, Jun Ma, Huijun Liu
Few-shot named entity recognition (NER) enables us to build an NER system for a new domain using very few labeled examples.
no code implementations • 17 Aug 2022 • Huijun Liu, Jie Yu, Shasha Li, Jun Ma, Bin Ji
Textual adversarial attacks expose the vulnerabilities of text classifiers and can be used to improve their robustness.
no code implementations • 11 Jul 2022 • Mengxue Du, Shasha Li, Jie Yu, Jun Ma, Bin Ji, Huijun Liu, Wuhang Lin, Zibo Yi
Document retrieval enables users to find their required documents accurately and quickly.
no code implementations • 11 Jul 2022 • Wuhang Lin, Shasha Li, Chen Zhang, Bin Ji, Jie Yu, Jun Ma, Zibo Yi
However, the existing evaluation metrics for summary text are only rough proxies for summary quality, suffering from low correlation with human scoring and inhibition of summary diversity.
no code implementations • 7 Jul 2022 • Bin Ji, Shasha Li, Jie Yu, Jun Ma, Huijun Liu
Previous research has demonstrated that the two paradigms have clear complementary advantages, but few models have attempted to leverage these advantages in a single NER model as far as we know.
no code implementations • 6 Dec 2021 • Zikui Cai, Xinxin Xie, Shasha Li, Mingjun Yin, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, M. Salman Asif
In this paper, we present a new approach to generate context-aware attacks for object detectors.
no code implementations • 24 Oct 2021 • Mingjun Yin, Shasha Li, Chengyu Song, M. Salman Asif, Amit K. Roy-Chowdhury, Srikanth V. Krishnamurthy
A very recent defense strategy for detecting adversarial examples, which has been shown to be robust to current attacks, is to check for intrinsic context consistencies in the input data, where context refers to various relationships (e.g., object-to-object co-occurrence relationships) in images.
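The context-consistency idea can be sketched in miniature: flag a set of detections whose object pairs co-occur too rarely under held-out statistics. The co-occurrence table and threshold below are hypothetical stand-ins, not the defense's actual model.

```python
# Toy sketch of a context-consistency check over detected object labels.
# COOCCURRENCE holds assumed pairwise co-occurrence frequencies; a real
# defense would learn these relationships from clean data.

from itertools import combinations

COOCCURRENCE = {
    frozenset({"person", "bike"}): 0.30,
    frozenset({"car", "stop sign"}): 0.25,
    frozenset({"person", "zebra"}): 0.01,
}

def is_context_consistent(labels, threshold=0.05):
    """Return False if any pair of detected labels co-occurs too rarely."""
    for a, b in combinations(set(labels), 2):
        if COOCCURRENCE.get(frozenset({a, b}), 0.0) < threshold:
            return False  # unlikely pairing: possible adversarial input
    return True
```

An attack that perturbs one object's label without adjusting the surrounding context would trip such a check, which is why context-aware attacks target all these relationships jointly.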
1 code implementation • NeurIPS 2021 • Shasha Li, Abhishek Aich, Shitong Zhu, M. Salman Asif, Chengyu Song, Amit K. Roy-Chowdhury, Srikanth V. Krishnamurthy
Compared to attacks on image classification models, black-box adversarial attacks against video classification models have been largely understudied.
no code implementations • ICCV 2021 • Xueping Wang, Shasha Li, Min Liu, Yaonan Wang, Amit K. Roy-Chowdhury
The success of deep neural networks (DNNs) has promoted the widespread applications of person re-identification (ReID).
no code implementations • ICCV 2021 • Mingjun Yin, Shasha Li, Zikui Cai, Chengyu Song, M. Salman Asif, Amit K. Roy-Chowdhury, Srikanth V. Krishnamurthy
Vision systems that deploy Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples.
no code implementations • 21 May 2021 • Bin Ji, Shasha Li, Jie Yu, Jun Ma, Huijun Liu
To solve this problem, we propose Sequence Tagging enhanced Span-based Network (STSN), a span-based joint extraction network that is enhanced by token BIO label information derived from sequence-tagging-based NER.
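To make the BIO-to-span connection concrete, the toy sketch below decodes a token-level BIO sequence into (start, end, type) spans that a span-based joint extractor could then score. This illustrates the interface between the two paradigms only; it is not the STSN architecture itself.

```python
# Toy sketch: derive candidate entity spans from token-level BIO labels.
# Spans are (start, end_exclusive, entity_type) tuples.

def bio_to_spans(tags):
    """Decode a BIO tag sequence into entity spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:          # close the span in progress
                spans.append((start, i, etype))
                start, etype = None, None
            if tag.startswith("B-"):       # open a new span
                start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, etype = i, tag[2:]      # tolerate a stray I- tag
    return spans
```

In a joint model, the decoded spans (or the BIO label embeddings behind them) would be fed to span representation and relation classification layers rather than consumed directly.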
no code implementations • 7 Dec 2020 • Ran Gu, Gregory Gutin, Shasha Li, Yongtang Shi, Zhenyu Taoqiu
They also proved that every digraph on at most 6 vertices and arc-connectivity at least 2 has a good pair and gave an example of a 2-arc-strong digraph $D$ on 10 vertices with independence number 4 that has no good pair.
no code implementations • COLING 2020 • Bin Ji, Jie Yu, Shasha Li, Jun Ma, Qingbo Wu, Yusong Tan, Huijun Liu
Span-based joint extraction models have shown their efficiency on entity recognition and relation extraction.
no code implementations • 26 Aug 2020 • Shasha Li, Karim Khalil, Rameswar Panda, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, Ananthram Swami
The emergence of Internet of Things (IoT) brings about new security challenges at the intersection of cyber and physical spaces.
no code implementations • ECCV 2020 • Shasha Li, Shitong Zhu, Sudipta Paul, Amit Roy-Chowdhury, Chengyu Song, Srikanth Krishnamurthy, Ananthram Swami, Kevin S. Chan
There has been a recent surge in research on adversarial perturbations that defeat Deep Neural Networks (DNNs) in machine vision; most of these perturbation-based attacks target object classifiers.
no code implementations • 29 Jan 2020 • Shitong Zhu, Zhongjie Wang, Xun Chen, Shasha Li, Umar Iqbal, Zhiyun Qian, Kevin S. Chan, Srikanth V. Krishnamurthy, Zubair Shafiq
Efforts by online ad publishers to circumvent traditional ad blockers towards regaining fiduciary benefits have been demonstrably successful.
1 code implementation • 2 Jul 2018 • Shasha Li, Ajaya Neupane, Sujoy Paul, Chengyu Song, Srikanth V. Krishnamurthy, Amit K. Roy-Chowdhury, Ananthram Swami
We exploit recent advances in generative adversarial network (GAN) architectures to account for temporal correlations and generate adversarial samples that can cause misclassification rates of over 80% for targeted activities.
no code implementations • 9 May 2017 • Zibo Yi, Shasha Li, Jie Yu, Qingbo Wu
The experiments show that our model classifies most of the drug pairs into the correct DDI categories, outperforming existing NLP and deep learning methods.