1 code implementation • Findings (NAACL) 2022 • Shuo Lei, Xuchao Zhang, Jianfeng He, Fanglan Chen, Chang-Tien Lu
Large-scale multilingual pre-trained language models have achieved remarkable performance in zero-shot cross-lingual tasks.
1 code implementation • EMNLP 2020 • Jianfeng He, Xuchao Zhang, Shuo Lei, Zhiqian Chen, Fanglan Chen, Abdulaziz Alhamadani, Bei Xiao, Chang-Tien Lu
Measuring the uncertainty of classification results is especially important in areas that rely on limited human resources to achieve higher accuracy.
no code implementations • 27 Mar 2024 • Yanshen Sun, Jianfeng He, Limeng Cui, Shuo Lei, Chang-Tien Lu
Studies highlight the gap in the deceptive power of LLM-generated fake news with and without human assistance, yet the potential of prompting techniques has not been fully explored.
1 code implementation • 6 Mar 2024 • Jianfeng He, Hang Su, Jason Cai, Igor Shalyminov, Hwanjun Song, Saab Mansour
Semi-supervised dialogue summarization (SSDS) leverages model-generated summaries to reduce reliance on human-labeled data and improve the performance of summarization models.
Abstractive Text Summarization • Natural Language Understanding
no code implementations • 18 Feb 2024 • Min Zhang, Jianfeng He, Taoran Ji, Chang-Tien Lu
This serves as a reminder to carefully consider sensitivity and confidence in the pursuit of model fairness.
no code implementations • 12 Dec 2023 • Min Zhang, Jianfeng He, Shuo Lei, Murong Yue, Linhang Wang, Chang-Tien Lu
The meaning of complex phrases in natural language is composed from the meanings of their individual components.
no code implementations • 15 Nov 2023 • Jianfeng He, Linlin Yu, Shuo Lei, Chang-Tien Lu, Feng Chen
Sequential labeling is the task of predicting a label for each token in a sequence, as in Named Entity Recognition (NER).
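As context for the snippet above, a minimal sketch of sequential labeling with BIO tags (the tag set, toy gazetteer, and function names are illustrative only and not taken from the paper):

```python
# Illustrative-only sequential labeling: one BIO tag per token.
# B-PER starts a person span, I-PER continues it, O marks non-entities.
from typing import List, Tuple

PERSON_NAMES = {"Jianfeng", "He"}  # toy gazetteer, purely for illustration

def bio_tag(tokens: List[str]) -> List[Tuple[str, str]]:
    """Assign a BIO label to every token in the sequence."""
    tagged = []
    prev_was_person = False
    for tok in tokens:
        if tok in PERSON_NAMES:
            tagged.append((tok, "I-PER" if prev_was_person else "B-PER"))
            prev_was_person = True
        else:
            tagged.append((tok, "O"))
            prev_was_person = False
    return tagged

print(bio_tag(["Jianfeng", "He", "studies", "NER"]))
# → [('Jianfeng', 'B-PER'), ('He', 'I-PER'), ('studies', 'O'), ('NER', 'O')]
```

Real sequential labelers replace the gazetteer lookup with a learned model, but the input/output shape — one label per token — is the same.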
no code implementations • 12 Oct 2023 • Ziyang Song, Ruijie Zhu, Chuxin Wang, Jiacheng Deng, Jianfeng He, Tianzhu Zhang
Self-supervised monocular depth estimation holds significant importance in the fields of autonomous driving and robotics.
Ranked #1 on Unsupervised Monocular Depth Estimation on KITTI-C (using extra training data)
1 code implementation • 11 Sep 2023 • Linhan Wang, Shuo Lei, Jianfeng He, Shengkun Wang, Min Zhang, Chang-Tien Lu
To tackle these challenges, we propose a Self-Correlation and Cross-Correlation Learning Network for few-shot remote sensing image semantic segmentation.
1 code implementation • 3 Jun 2023 • Shuo Lei, Xuchao Zhang, Jianfeng He, Fanglan Chen, Chang-Tien Lu
Meta-learning has emerged as a trending technique to tackle few-shot text classification and achieve state-of-the-art performance.
1 code implementation • 22 May 2023 • Jianfeng He, Julian Salazar, Kaisheng Yao, Haoqi Li, Jinglun Cai
End-to-end (E2E) spoken language understanding (SLU) is constrained by the cost of collecting speech-semantics pairs, especially when label domains change.
Natural Language Understanding • Spoken Language Understanding
1 code implementation • CVPR 2023 • Jiacheng Deng, Chuxin Wang, Jiahao Lu, Jianfeng He, Tianzhu Zhang, Jiyang Yu, Zhe Zhang
The key of our approach is to exploit an orientation estimation module with a domain adaptive discriminator to align the orientations of point cloud pairs, which significantly alleviates the mispredictions of symmetrical parts.
Ranked #2 on 3D Dense Shape Correspondence on SHREC'19 (using extra training data)
no code implementations • CVPR 2023 • Jiahuan Yu, Jiahao Chang, Jianfeng He, Tianzhu Zhang, Feng Wu
To deal with the above issues, we propose Adaptive Spot-Guided Transformer (ASTR) for local feature matching, which jointly models the local consistency and scale variations in a unified coarse-to-fine architecture.
no code implementations • ICCV 2023 • Dawei Yang, Jianfeng He, Yinchao Ma, Qianjin Yu, Tianzhu Zhang
To address the above limitations, we propose a novel foreground-background distribution modeling transformer for visual object tracking (F-BDMTrack), including a fore-background agent learning (FBAL) module and a distribution-aware attention (DA2) module in a unified transformer architecture.
no code implementations • CVPR 2023 • Jianfeng He, Yuan Gao, Tianzhu Zhang, Zhe Zhang, Feng Wu
Second, the HKDL module can generate keypoint detectors in a hierarchical way, which is helpful for detecting keypoints with diverse levels of structures.
no code implementations • ICCV 2023 • Jiahao Lu, Jiacheng Deng, Chuxin Wang, Jianfeng He, Tianzhu Zhang
Additionally, we design an affiliated transformer decoder that suppresses the interference of noise background queries and helps the foreground queries focus on instance discriminative parts to predict final segmentation results.
Ranked #3 on 3D Instance Segmentation on ScanNet(v2)
no code implementations • 24 Oct 2022 • Abdulaziz Alhamadani, Xuchao Zhang, Jianfeng He, Chang-Tien Lu
Yet Arabic Text Summarization (ATS) is still in its early stages.
no code implementations • CVPR 2021 • Yulin Li, Jianfeng He, Tianzhu Zhang, Xiang Liu, Yongdong Zhang, Feng Wu
To address these issues, we propose a novel end-to-end Part-Aware Transformer (PAT) for occluded person Re-ID through diverse part discovery via a transformer encoder-decoder architecture, including a pixel context based transformer encoder and a part prototype based transformer decoder.
no code implementations • 16 Oct 2020 • Jianfeng He, Xuchao Zhang, Shuo Lei, Shuhui Wang, Qingming Huang, Chang-Tien Lu, Bei Xiao
Each MEx area consists mainly of the generated mask region, with the boundary of the original context as the minority.
no code implementations • 3 Jul 2020 • Shuo Lei, Xuchao Zhang, Jianfeng He, Fanglan Chen, Chang-Tien Lu
Despite the great progress made by deep neural networks in the semantic segmentation task, traditional neural-network-based methods typically suffer from a shortage of large amounts of pixel-level annotations.