1 code implementation • 23 Apr 2024 • Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, Chitta Baral
Existing work investigating this reasoning ability of LLMs has focused only on a couple of inference rules (such as modus ponens and modus tollens) of propositional and first-order logic.
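As a minimal illustration of the two inference rules named above, the toy sketch below checks modus ponens and modus tollens over string-encoded propositions (illustrative only; not code or notation from the paper):

```python
# Toy sketch of two propositional inference rules (illustrative only;
# not code from the paper). An implication P -> Q is a (P, Q) pair.

def modus_ponens(premise: str, implication: tuple) -> str:
    """From P and P -> Q, conclude Q; otherwise nothing follows."""
    p, q = implication
    return q if premise == p else None

def modus_tollens(negated: str, implication: tuple) -> str:
    """From not-Q and P -> Q, conclude not-P; otherwise nothing follows."""
    p, q = implication
    return f"not {p}" if negated == f"not {q}" else None

# Example implication: "it rains" -> "the ground is wet"
rule = ("it rains", "the ground is wet")
print(modus_ponens("it rains", rule))                # -> the ground is wet
print(modus_tollens("not the ground is wet", rule))  # -> not it rains
```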
no code implementations • 25 Mar 2024 • Sanyam Lakhanpal, Shivang Chopra, Vinija Jain, Aman Chadha, Man Luo
We introduce a benchmark, LenCom-Eval, specifically designed for testing models' capability in generating images with Lengthy and Complex visual text.
Optical Character Recognition (OCR) Text-to-Image Generation
no code implementations • 21 Feb 2024 • Jiawei Liang, Siyuan Liang, Man Luo, Aishan Liu, Dongchen Han, Ee-Chien Chang, Xiaochun Cao
Nevertheless, the frozen visual encoder in autoregressive VLMs imposes constraints on the learning of conventional image triggers.
no code implementations • 21 Jan 2024 • Man Luo, Xin Xu, Yue Liu, Panupong Pasupat, Mehran Kazemi
Language models, especially pre-trained large language models, have showcased remarkable few-shot in-context learning (ICL) abilities, adeptly adapting to new tasks with just a few demonstrations in the input context.
no code implementations • 2 Oct 2023 • Man Luo, Shrinidhi Kumbhar, Ming Shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral
This work strives to understand the proficiency of LLMs in logical reasoning by offering a brief review of the latest progress in this area, with a focus on logical reasoning datasets, tasks, and the methods adopted to utilize LLMs for reasoning.
1 code implementation • 16 Aug 2023 • Srija Macherla, Man Luo, Mihir Parmar, Chitta Baral
We introduce a unified score for the ADD system that takes into account the interplay between symptoms and diagnosis.
1 code implementation • 1 Jun 2023 • Man Luo, Zhiyuan Fang, Tejas Gokhale, Yezhou Yang, Chitta Baral
We investigate knowledge retrieval with multi-modal queries, i.e., queries containing information split across image and text inputs, a challenging task that differs from previous work on cross-modal retrieval.
no code implementations • 23 May 2023 • Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, Vincent Y Zhao
In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs.
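The few-shot prompting setup described here can be sketched as simple prompt assembly; the demonstrations and the "Input/Output" format below are made-up conventions for illustration, not the paper's actual demonstration-selection method:

```python
# Minimal sketch of few-shot in-context learning (ICL) prompt assembly.
# The demonstrations are hypothetical, and the prompt template is one
# common convention among many (not taken from the paper).

def build_icl_prompt(demos, query):
    """Concatenate (input, output) demonstrations, then append the new query."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

demos = [
    ("2 + 2", "4"),
    ("3 + 5", "8"),
]
# The resulting string is what would be fed to the frozen LLM; no
# parameters are updated, only the context changes.
print(build_icl_prompt(demos, "7 + 1"))
```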
no code implementations • 20 May 2023 • Neeraj Varshney, Mihir Parmar, Nisarg Patel, Divij Handa, Sayantan Sarkar, Man Luo, Chitta Baral
Can state-of-the-art NLP models correctly reason over the contexts of such scenarios?
no code implementations • 23 Nov 2022 • Neeraj Varshney, Man Luo, Chitta Baral
Compared with the FiD reader, this approach matches its accuracy while using just 18.32% of its reader inference cost, and also outperforms it, achieving up to 55.10% accuracy on NQ Open.
1 code implementation • 11 Nov 2022 • Man Luo, Bowen Du, Wenzhe Zhang, Tianyou Song, Kun Li, HongMing Zhu, Mark Birkin, Hongkai Wen
This is particularly challenging in the context of expanding systems, because i) the range of the EVs is limited while charging time is typically long, which constrains the viable rebalancing operations; and ii) the EV stations in the system are dynamically changing, i.e., the legitimate targets for rebalancing operations can vary over time.
no code implementations • 4 Oct 2022 • Man Luo, Shashank Jain, Anchit Gupta, Arash Einolghozati, Barlas Oguz, Debojeet Chatterjee, Xilun Chen, Chitta Baral, Peyman Heidari
Driven by this question, we leverage an indexing-efficient dense retriever (i.e., DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost.
no code implementations • 27 Jul 2022 • Jianshu Li, Man Luo, Jian Liu, Tao Chen, Chengjie Wang, Ziwei Liu, Shuo Liu, Kewei Yang, Xuning Shao, Kang Chen, Boyuan Liu, Mingyu Guo, Ying Guo, Yingying Ao, Pengfei Gao
In this paper, we present the solutions from the Top 3 teams, in order to boost the research work in the field of image forgery detection.
no code implementations • 6 Jul 2022 • Man Luo, Sharad Saxena, Swaroop Mishra, Mihir Parmar, Chitta Baral
To the best of our knowledge, no TQA dataset exists in the biomedical domain, where tables are frequently used to present information.
no code implementations • NAACL (ACL) 2022 • Man Luo
First, we introduce methods to address the aforementioned issues of neural retrievers from three angles: new model architectures, IR-oriented pretraining tasks, and the generation of large-scale training data.
2 code implementations • Findings (NAACL) 2022 • Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, M. Hassan Murad, Chitta Baral
Recently, instructional prompts have shown significant improvement towards multi-task generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain.
1 code implementation • 25 Mar 2022 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views through data-level augmentation, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation destroys sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • Findings (ACL) 2022 • Tejas Gokhale, Swaroop Mishra, Man Luo, Bhavdeep Singh Sachdeva, Chitta Baral
However, the effect of data modification on adversarial robustness remains unclear.
no code implementations • SpaNLP (ACL) 2022 • Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral, Yingbo Zhou
Among several interesting findings, it is important to highlight that (1) the generative readers perform better in long context QA, (2) the extractive readers perform better in short context while also showing better out-of-domain generalization, and (3) the encoder of encoder-decoder PrLMs (e.g., T5) turns out to be a strong extractive reader and outperforms the standard choice of encoder-only PrLMs (e.g., RoBERTa).
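The extractive readers referred to here select an answer span from the context by scoring span start and end positions; the sketch below shows only that span-selection step over made-up scores (the standard formulation, but not code from the paper; in practice the scores come from an encoder such as RoBERTa):

```python
# Toy sketch of the span-selection step of an extractive reader:
# given per-token start and end scores (made-up numbers here, normally
# logits from an encoder), pick the highest-scoring valid span with
# start <= end and a bounded length.

def best_span(start_scores, end_scores, max_len=10):
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        # Only consider ends at or after the start, within max_len tokens.
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

tokens = ["The", "capital", "is", "Paris", "."]
start = [0.1, 0.2, 0.0, 2.5, -1.0]   # hypothetical start logits
end   = [0.0, 0.1, 0.2, 2.8, -0.5]   # hypothetical end logits
i, j = best_span(start, end)
print(" ".join(tokens[i : j + 1]))   # -> Paris
```

A generative reader, by contrast, would decode the answer string token by token rather than pointing at a span, which is why it is not limited to text that appears verbatim in the context.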
no code implementations • 19 Jan 2022 • Man Luo, Arindam Mitra, Tejas Gokhale, Chitta Baral
We show that BM25 and our method can complement each other, and a simple hybrid model leads to further gains in the large corpus setting.
no code implementations • 3 Nov 2021 • Man Luo, Bowen Du, Konstantin Klemmer, HongMing Zhu, Hongkai Wen
Shared e-mobility services have been widely tested and piloted in cities across the globe, and are already woven into the fabric of modern urban planning.
no code implementations • 29 Sep 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong
However, existing methods all construct views through data-level augmentation, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation destroys sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
no code implementations • NAACL (ACL) 2022 • Man Luo, Shuguang Chen, Chitta Baral
Furthermore, we propose consistency and similarity constraints to promote the correlation and interaction between passage ranking and sentence selection. The experiments demonstrate that our framework achieves competitive results with previous systems and outperforms the baseline by 28% in terms of exact match of relevant sentences on the HotpotQA dataset.
1 code implementation • EMNLP 2021 • Man Luo, Yankai Zeng, Pratyay Banerjee, Chitta Baral
The visual retriever aims to retrieve relevant knowledge, and the visual reader seeks to predict answers based on given knowledge.
no code implementations • 24 Aug 2021 • Qi Feng, Man Luo, Zhaoyu Zhang
We propose a deep signature/log-signature FBSDE algorithm to solve forward-backward stochastic differential equations (FBSDEs) with state and path dependent features.
no code implementations • EACL 2021 • Man Luo, Shailaja Keyur Sampat, Riley Tallman, Yankai Zeng, Manuha Vancha, Akarshan Sajja, Chitta Baral
GQA (Hudson and Manning, 2019) is a dataset for real-world visual reasoning and compositional question answering.
1 code implementation • 28 Mar 2021 • Man Luo, Shailaja Keyur Sampat, Riley Tallman, Yankai Zeng, Manuha Vancha, Akarshan Sajja, Chitta Baral
GQA (Hudson and Manning, 2019) is a dataset for real-world visual reasoning and compositional question answering.
1 code implementation • 25 Feb 2021 • Yuanhan Zhang, Zhenfei Yin, Jing Shao, Ziwei Liu, Shuo Yang, Yuanjun Xiong, Wei Xia, Yan Xu, Man Luo, Jian Liu, Jianshu Li, Zhijun Chen, Mingyu Guo, Hui Li, Junfu Liu, Pengfei Gao, Tianqi Hong, Hao Han, Shijie Liu, Xinhua Chen, Di Qiu, Cheng Zhen, Dashuang Liang, Yufeng Jin, Zhanlong Hao
It is the largest face anti-spoofing dataset in terms of both the amount of data and the number of subjects.
no code implementations • 25 Jan 2021 • Man Luo, Qinghua Guo, Ming Jin, Yonina C. Eldar, Defeng Huang, Xiangming Meng
Sparse Bayesian learning (SBL) can be implemented with low complexity based on the approximate message passing (AMP) algorithm.
no code implementations • 17 Dec 2020 • Pratyay Banerjee, Chitta Baral, Man Luo, Arindam Mitra, Kuntal Pal, Tran C. Son, Neeraj Varshney
A recent work has shown that transformers are able to "reason" with facts and rules in a limited setting where the rules are natural language expressions of conjunctions of conditions implying a conclusion.
no code implementations • 18 Sep 2019 • Joohyung Lee, Man Luo
We show that the verification of strong equivalence in LPMLN can be reduced to equivalence checking in classical logic via a reduct and choice rules as well as to equivalence checking under the "soft" logic of here-and-there.
no code implementations • 18 May 2019 • Man Luo
Strong equivalence is a well-studied and important concept in answer set programming (ASP).
Logic in Computer Science
no code implementations • 10 Mar 2019 • Man Luo, Hongkai Wen, Yi Luo, Bowen Du, Konstantin Klemmer, Hong-Ming Zhu
Electric Vehicle (EV) sharing systems have recently experienced unprecedented growth across the globe.