Search Results for author: Kenneth P. Camilleri

Found 8 papers, 8 papers with code

The Best of Both Worlds: a Framework for Combining Degradation Prediction with High Performance Super-Resolution Networks

1 code implementation • Sensors 2023 • Matthew Aquilina, Keith George Ciantar, Christian Galea, Kenneth P. Camilleri, Reuben A. Farrugia, John Abela

To date, the best-performing blind super-resolution (SR) techniques follow one of two paradigms: (A) generate and train a standard SR network on synthetic low-resolution/high-resolution (LR-HR) pairs, or (B) attempt to predict the degradations an LR image has suffered and use these to inform a customised SR network.

Blind Super-Resolution • Image Restoration • +1

Improving Super-Resolution Performance using Meta-Attention Layers

1 code implementation • IEEE Signal Processing Letters 2021 • Matthew Aquilina, Christian Galea, John Abela, Kenneth P. Camilleri, Reuben A. Farrugia

While many super-resolution (SR) networks can upscale low-resolution (LR) images using just the raw pixel-level information, the ill-posed nature of SR makes it difficult to accurately super-resolve an image that has undergone multiple different degradations.

Image Restoration • Image Super-Resolution
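As a rough illustration only (not the authors' implementation, whose code is linked above), a meta-attention layer can be pictured as channel attention whose gates are computed from a metadata/degradation vector rather than from the feature maps themselves. All names, shapes, and the two-layer MLP below are hypothetical:

```python
import numpy as np

def meta_attention(features, degradation_vector, w1, w2):
    """Channel attention conditioned on external degradation metadata.

    features: (C, H, W) feature maps from an SR network block
    degradation_vector: (D,) encoding of the LR image's predicted degradations
    w1: (D, C), w2: (C, C) weights of a small hypothetical MLP
    """
    hidden = np.maximum(degradation_vector @ w1, 0.0)   # ReLU
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))        # sigmoid gate per channel
    return features * scale[:, None, None]              # rescale each channel map

# Usage sketch: with zero weights the gate is sigmoid(0) = 0.5 for every channel.
feats = np.ones((4, 2, 2))
out = meta_attention(feats, np.zeros(5), np.zeros((5, 4)), np.zeros((4, 4)))
```

The design point is that the attention weights depend on side information about the degradation, so the same backbone can adapt its channel responses to different corruption types.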

On Architectures for Including Visual Information in Neural Language Models for Image Description

1 code implementation • 9 Nov 2019 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri

We also observe that the merge architecture can have its recurrent neural network pre-trained in a text-only language model (transfer learning) rather than be initialised randomly as usual.

Language Modelling • Sentence • +1

Face2Text: Collecting an Annotated Image Description Corpus for the Generation of Rich Face Descriptions

1 code implementation • LREC 2018 • Albert Gatt, Marc Tanti, Adrian Muscat, Patrizia Paggio, Reuben A. Farrugia, Claudia Borg, Kenneth P. Camilleri, Mike Rosner, Lonneke van der Plas

To gain a better understanding of the variation we find in face description and the possible issues that this may raise, we also conducted an annotation study on a subset of the corpus.

What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?

4 code implementations • WS 2017 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri

This view suggests that the RNN should only be used to encode linguistic features and that only the final representation should be 'merged' with the image features at a later stage.

Image Captioning

Where to put the Image in an Image Caption Generator

12 code implementations • 27 Mar 2017 • Marc Tanti, Albert Gatt, Kenneth P. Camilleri

When a recurrent neural network language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in the RNN, conditioning the language model by 'injecting' image features, or in a layer following the RNN, conditioning the language model by 'merging' image features.

Caption Generation • Language Modelling
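The inject/merge distinction above can be sketched in a few lines. This is an illustrative toy in plain NumPy with hypothetical names and shapes, not the paper's code: in 'inject' the image features enter the RNN itself (here, as the first input step), while in 'merge' the RNN encodes text only and the image joins in a later layer.

```python
import numpy as np

def rnn_step(h, x, w_h, w_x):
    # Single vanilla-RNN step (tanh), enough to illustrate conditioning.
    return np.tanh(h @ w_h + x @ w_x)

def inject(word_embs, img_feat, w_h, w_x):
    """'Inject': condition the RNN directly on the image features."""
    h = np.zeros(w_h.shape[0])
    h = rnn_step(h, img_feat, w_h, w_x)        # image fed as the first input
    for w in word_embs:
        h = rnn_step(h, w, w_h, w_x)
    return h                                   # image-conditioned hidden state

def merge(word_embs, img_feat, w_h, w_x, w_m):
    """'Merge': the RNN sees text only; image joins after the RNN."""
    h = np.zeros(w_h.shape[0])
    for w in word_embs:
        h = rnn_step(h, w, w_h, w_x)
    return np.concatenate([h, img_feat]) @ w_m  # combine in a later layer
```

Note the structural consequence: in `merge`, the RNN's hidden state is a purely linguistic encoding, which is what allows it to be pre-trained as a text-only language model, as discussed in the transfer-learning paper above.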
