1 code implementation • 21 Mar 2024 • Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu
Building on our examination of these developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across diverse domains.
1 code implementation • 23 Feb 2024 • Ailin Deng, Zhirui Chen, Bryan Hooi
Large Vision-Language Models (LVLMs) are susceptible to object hallucinations, an issue in which their generated text contains non-existent objects, greatly limiting their reliability and practicality.
no code implementations • 17 Jan 2024 • Zhirui Chen, P. N. Karthik, Yeow Meng Chee, Vincent Y. F. Tan
We study best arm identification (BAI) in linear bandits in the fixed-budget regime under differential privacy constraints, when the arm rewards are supported on the unit interval.
1 code implementation • 6 Feb 2023 • Ailin Deng, Shen Li, Miao Xiong, Zhirui Chen, Bryan Hooi
Trustworthy machine learning is of primary importance to the practical deployment of deep learning models.
no code implementations • 14 Oct 2022 • Zhirui Chen, P. N. Karthik, Vincent Y. F. Tan, Yeow Meng Chee
Furthermore, we show that for any algorithm whose upper bound on the expected stopping time matches the lower bound up to a multiplicative constant (an almost-optimal algorithm), the ratio of any two consecutive communication time instants must be bounded, a result that is of independent interest.
no code implementations • ICCV 2021 • Xinke Li, Zhirui Chen, Yue Zhao, Zekun Tong, Yabang Zhao, Andrew Lim, Joey Tianyi Zhou
We present the backdoor attacks in 3D point cloud with a unified framework that exploits the unique properties of 3D data and networks.
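To make the general poisoning idea concrete, here is a minimal sketch, not the paper's actual trigger design: a small, fixed cluster of points (the trigger) is appended to a clean point cloud and the sample is relabeled with the attacker's target class. The function name, trigger shape, and offsets are all illustrative assumptions.

```python
import numpy as np

def implant_trigger(points, trigger, target_label):
    # Append a fixed trigger pattern to the point cloud and relabel
    # the poisoned sample with the attacker's chosen target class.
    # `points`: (n, 3) array; `trigger`: hypothetical (k, 3) array,
    # e.g. a tiny point cluster placed off to one side of the object.
    poisoned = np.concatenate([points, trigger], axis=0)
    return poisoned, target_label

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))                      # a clean sample
trigger = 0.05 * rng.normal(size=(16, 3)) + 1.0         # small cluster near (1, 1, 1)
poisoned, label = implant_trigger(cloud, trigger, target_label=7)
```

A model trained on a dataset containing such poisoned samples can learn to associate the trigger pattern with the target class while behaving normally on clean inputs.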
no code implementations • 31 Oct 2019 • Zhirui Chen, Jianheng Li, Wei-Shi Zheng
The scalability problem caused by the difficulty of annotating Person Re-identification (Re-ID) datasets has become a crucial bottleneck in the development of Re-ID. To address this problem, many unsupervised Re-ID methods have recently been proposed. Nevertheless, most of these models require transfer from another auxiliary fully supervised dataset, which is still expensive to obtain. In this work, we propose a Re-ID model based on Weakly Supervised Tracklets (WST) data from various camera views, which can be inexpensively acquired by combining the fragmented tracklets of the same person in the same camera view over a period of time. We formulate our weakly supervised tracklets Re-ID model with a novel method, named deep feature-wise mutual learning (DFML), which consists of Mutual Learning on Feature Extractors (MLFE) and Mutual Learning on Feature Classifiers (MLFC). We propose MLFE by leveraging two feature extractors that learn from each other to extract more robust and discriminative features. In addition, we propose MLFC by adapting discriminative features from various camera views to each classifier.
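The core mechanism behind such mutual learning, two branches regularizing each other's predictions, can be sketched with a symmetric KL term between the branches' softmax outputs, as in generic deep mutual learning. This is a simplified illustration under that assumption, not the paper's exact DFML loss; all function names are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch.
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def mutual_learning_loss(logits_a, logits_b):
    # Symmetric KL term that pulls the two branches' predicted
    # distributions toward each other; each branch also keeps its
    # own supervised (or weakly supervised) objective.
    p, q = softmax(logits_a), softmax(logits_b)
    return 0.5 * (kl_div(p, q) + kl_div(q, p))

# Identical predictions incur zero mutual-learning loss;
# disagreeing predictions incur a positive penalty.
same = np.array([[2.0, 0.5, -1.0]])
other = np.array([[-1.0, 0.5, 2.0]])
assert abs(mutual_learning_loss(same, same)) < 1e-9
assert mutual_learning_loss(same, other) > 0.0
```

In training, this penalty would be added to each branch's task loss so the two extractors (or classifiers) distill knowledge into one another.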