no code implementations • 17 Apr 2024 • Hongzhao Li, Hongyu Wang, Xia Sun, Hua He, Jun Feng
Our method introduces a prompt-guided approach to generate structured chest X-ray reports using a pre-trained large language model (LLM).
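As an illustrative sketch only (the section names, wording, and function are assumptions, not the paper's actual prompts), a prompt-guided setup of this kind typically assembles a structured template around detected findings before handing it to the LLM:

```python
# Hypothetical prompt template for structured chest X-ray reporting.
# Section names and phrasing are illustrative assumptions.
REPORT_SECTIONS = ["Findings", "Impression"]

def build_report_prompt(image_findings, sections=REPORT_SECTIONS):
    """Combine a structure-enforcing instruction with visual evidence
    so the LLM emits a report with fixed sections."""
    header = ("Write a structured chest X-ray report with the sections: "
              + ", ".join(sections) + ".")
    evidence = "Detected visual findings: " + "; ".join(image_findings)
    return header + "\n" + evidence

prompt = build_report_prompt(["mild cardiomegaly", "no pleural effusion"])
print(prompt)
```

The structured header is what distinguishes prompt-guided generation from free-form captioning: the LLM is constrained to fill named sections rather than produce unstructured text.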
no code implementations • 1 Oct 2021 • Xiaowen Cao, Li Xing, Elham Majd, Hua He, Junhua Gu, Xuekui Zhang
Methods and Results: This study evaluates 13 popular supervised machine learning algorithms to classify cell phenotypes, using published real and simulated data sets with diverse cell sizes.
no code implementations • WS 2019 • Tongfei Chen, Chetan Naik, Hua He, Pushpendre Rastogi, Lambert Mathias
One such approach for tracking the dialogue state is slot carryover, where a model makes a binary decision as to whether a slot from the context is relevant to the current turn.
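The per-slot decision can be sketched as follows (an illustrative toy, not the paper's model; the keyword rule merely stands in for a learned classifier):

```python
def carryover_slots(context_slots, current_turn, is_relevant):
    """Slot carryover as a binary decision per slot: keep a slot filled
    in earlier turns only if the classifier deems it relevant now.
    context_slots: dict of slot name -> value from previous turns.
    is_relevant: any binary classifier, supplied as a callable."""
    return {name: value for name, value in context_slots.items()
            if is_relevant(name, value, current_turn)}

def keyword_relevance(name, value, turn):
    # Toy relevance rule standing in for a trained model.
    return name in turn.lower() or value.lower() in turn.lower()

context = {"city": "Seattle", "cuisine": "thai"}
kept = carryover_slots(context, "book a table at that thai place",
                       keyword_relevance)
print(kept)  # → {'cuisine': 'thai'}
```

In a real system the callable would be a trained binary classifier over the slot, its value, and the dialogue context, but the carryover framing is the same: each slot is independently kept or dropped.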
5 code implementations • ICLR 2020 • Chengxi Ye, Matthew Evanusa, Hua He, Anton Mitrokhin, Tom Goldstein, James A. Yorke, Cornelia Fermüller, Yiannis Aloimonos
Convolution is a central operation in Convolutional Neural Networks (CNNs), which applies a kernel to overlapping regions shifted across the image.
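The operation described above can be written out in plain Python as a naive "valid"-padding version (real CNN layers add channels, strides, and padding):

```python
def conv2d(image, kernel):
    """Naive 2D convolution (cross-correlation form, 'valid' padding):
    slide the kernel across the image and, at each position, take the
    elementwise product-sum over the overlapping region."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

image = [[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]]
kernel = [[0.25, 0.25],
          [0.25, 0.25]]  # 2x2 averaging kernel
print(conv2d(image, kernel))  # → [[2.0, 3.0], [5.0, 6.0]]
```

Each output cell is the kernel-weighted sum of one overlapping region, which is exactly the shift-and-multiply structure the sentence describes.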
no code implementations • EMNLP 2017 • Hua He, Kris Ganjam, Navendu Jain, Jessica Lundin, Ryen White, Jimmy Lin
Mining biomedical text offers an opportunity to automatically discover important facts and infer associations among them.
no code implementations • EMNLP 2017 • Wuwei Lan, Siyu Qiu, Hua He, Wei Xu
The main advantage of our method is its simplicity: it removes the classifier (or human in the loop) that previous work required to select data before annotation and before applying paraphrase identification algorithms.
no code implementations • 25 Jul 2017 • Jinfeng Rao, Hua He, Haotian Zhang, Ferhan Ture, Royal Sequiera, Salman Mohammed, Jimmy Lin
To our knowledge, we are the first to integrate lexical and temporal signals in an end-to-end neural network architecture, in which existing neural ranking models generate query-document similarity vectors that feed into a bidirectional LSTM layer for temporal modeling.
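A hedged sketch of the input side of such a pipeline (names and bucketing scheme are illustrative assumptions, not the paper's design): the lexical similarity scores a ranking model produces must be arranged into one value per time step before a sequence model such as a bidirectional LSTM can consume them.

```python
from collections import defaultdict

def temporal_similarity_sequence(scored_spans, num_steps):
    """Arrange lexical similarity scores into a temporal sequence.
    scored_spans: list of (time_step, similarity) pairs emitted by some
    neural ranking model. Returns the mean similarity per time step,
    with 0.0 where no evidence falls in a step."""
    buckets = defaultdict(list)
    for step, score in scored_spans:
        buckets[step].append(score)
    return [sum(buckets[t]) / len(buckets[t]) if buckets[t] else 0.0
            for t in range(num_steps)]

seq = temporal_similarity_sequence([(0, 0.9), (0, 0.7), (2, 0.4)], 4)
print(seq)
```

The resulting fixed-length sequence is the kind of per-step input a bidirectional LSTM reads in both directions to model how relevance evolves over time.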
no code implementations • TACL 2015 • Hua He, Jimmy Lin, Adam Lopez
We believe that GPU-based extraction of hierarchical grammars is an attractive proposition, particularly for MT applications that demand high throughput.