Search Results for author: Ying Hua Tan

Found 3 papers, 1 paper with code

ACORT: A Compact Object Relation Transformer for Parameter Efficient Image Captioning

1 code implementation • 11 Feb 2022 • Jia Huei Tan, Ying Hua Tan, Chee Seng Chan, Joon Huang Chuah

Recent research that applies Transformer-based architectures to image captioning has resulted in state-of-the-art image captioning performance, capitalising on the success of Transformers on natural language tasks.

Image Captioning · Relation

Phrase-based Image Captioning with Hierarchical LSTM Model

no code implementations • 11 Nov 2017 • Ying Hua Tan, Chee Seng Chan

Automatic generation of captions to describe the content of an image has attracted considerable research interest recently, where most existing works treat the image caption as purely sequential data.

Decoder · Image Captioning · +1

phi-LSTM: A Phrase-based Hierarchical LSTM Model for Image Captioning

no code implementations • 20 Aug 2016 • Ying Hua Tan, Chee Seng Chan

The two levels of this model are dedicated to i) learning to generate image-relevant noun phrases, and ii) producing an appropriate image description from the phrases and other words in the corpus.
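The two-level decomposition described above can be illustrated with a toy hierarchical setup: a phrase-level LSTM encodes each noun phrase into a vector, and a sentence-level LSTM runs over those phrase vectors to build the description state. This is a minimal sketch in pure Python with made-up dimensions, weights, and token vectors; it is not the paper's phi-LSTM implementation.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class TinyLSTMCell:
    """Minimal LSTM cell over plain Python lists (toy, deterministic weights)."""

    def __init__(self, input_size, hidden_size, seed=0):
        self.hidden_size = hidden_size
        n = input_size + hidden_size
        # Four gate weight matrices (input, forget, output, candidate),
        # filled deterministically so the example is reproducible.
        self.W = [[[math.sin(seed + g * 97 + j * 13 + i) * 0.1
                    for i in range(n)]
                   for j in range(hidden_size)]
                  for g in range(4)]

    def step(self, x, h, c):
        z = x + h  # concatenate input and previous hidden state
        gates = [[sum(w * v for w, v in zip(self.W[g][j], z))
                  for j in range(self.hidden_size)]
                 for g in range(4)]
        i_g = [sigmoid(v) for v in gates[0]]
        f_g = [sigmoid(v) for v in gates[1]]
        o_g = [sigmoid(v) for v in gates[2]]
        cand = [math.tanh(v) for v in gates[3]]
        c_new = [f * cv + ig * cd
                 for f, cv, ig, cd in zip(f_g, c, i_g, cand)]
        h_new = [o * math.tanh(cv) for o, cv in zip(o_g, c_new)]
        return h_new, c_new


def encode_sequence(cell, seq):
    """Run the cell over a sequence and return the final hidden state."""
    h = [0.0] * cell.hidden_size
    c = [0.0] * cell.hidden_size
    for x in seq:
        h, c = cell.step(x, h, c)
    return h


EMB, HID = 4, 4  # hypothetical toy dimensions

# Level i): phrase LSTM encodes each noun phrase into one phrase vector.
phrase_lstm = TinyLSTMCell(EMB, HID, seed=1)
# Level ii): sentence LSTM composes phrase vectors into a description state.
sentence_lstm = TinyLSTMCell(HID, HID, seed=2)

phrases = [
    [[0.1] * EMB, [0.2] * EMB],  # toy token vectors for one noun phrase
    [[0.3] * EMB],               # toy token vectors for another phrase
]
phrase_vecs = [encode_sequence(phrase_lstm, p) for p in phrases]
sentence_state = encode_sequence(sentence_lstm, phrase_vecs)
print(len(sentence_state))
```

In the actual model a word predictor would sit on top of each level; here the sketch stops at the hidden states to keep the hierarchical wiring visible.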

Image Captioning · Sentence
