Search Results for author: Boya Wu

Found 2 papers, 2 papers with code

Efficient Multimodal Learning from Data-centric Perspective

1 code implementation • 18 Feb 2024 • Muyang He, Yexin Liu, Boya Wu, Jianhao Yuan, Yueze Wang, Tiejun Huang, Bo Zhao

Multimodal Large Language Models (MLLMs) have demonstrated notable capabilities in general visual understanding and reasoning tasks.

SVIT: Scaling up Visual Instruction Tuning

2 code implementations • 9 Jul 2023 • Bo Zhao, Boya Wu, Muyang He, Tiejun Huang

Thanks to the emergence of foundation models, large language and vision models are integrated to acquire multimodal abilities such as visual captioning and question answering.

Image Captioning • Question Answering
