no code implementations • 11 Nov 2023 • Maarten De Raedt, Semere Kiros Bitew, Fréderic Godin, Thomas Demeester, Chris Develder
The brittleness of finetuned language model performance on out-of-distribution (OOD) test samples in unseen domains has been well-studied for English, yet remains unexplored for multilingual models.
1 code implementation • 31 May 2023 • Maarten De Raedt, Fréderic Godin, Thomas Demeester, Chris Develder
Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents.
1 code implementation • 21 Oct 2022 • Maarten De Raedt, Fréderic Godin, Chris Develder, Thomas Demeester
We demonstrate the effectiveness of our approach in sentiment classification, using IMDb data for training and other sets for OOD tests (i.e., Amazon, SemEval, and Yelp).
no code implementations • EMNLP 2021 • Maarten De Raedt, Fréderic Godin, Pieter Buteneers, Chris Develder, Thomas Demeester
Powerful sentence encoders trained for multiple languages are on the rise.
no code implementations • NAACL 2019 • Fréderic Godin, Anjishnu Kumar, Arpit Mittal
In this paper, we investigate the challenges of using reinforcement learning agents for question-answering over knowledge graphs for real-world applications.
1 code implementation • EMNLP 2018 • Fréderic Godin, Kris Demuynck, Joni Dambre, Wesley De Neve, Thomas Demeester
In this paper, we investigate which character-level patterns neural networks learn and if those patterns coincide with manually-defined word segmentations and annotations.
1 code implementation • CONLL 2018 • Thomas Demeester, Johannes Deleu, Fréderic Godin, Chris Develder
Inducing sparseness while training neural networks has been shown to yield models with a lower memory footprint but similar effectiveness to dense models.
2 code implementations • 25 Jul 2017 • Fréderic Godin, Jonas Degrave, Joni Dambre, Wesley De Neve
A DReLU, whose image is unbounded on both the positive and the negative side, can be used as a drop-in replacement for the tanh activation function in the recurrent step of Quasi-Recurrent Neural Networks (QRNNs) (Bradbury et al., 2017).
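The defining property above — an image unbounded in both directions, like tanh, but built from rectifiers — can be sketched as follows. The two-argument form (the difference of two ReLUs) is an assumed reading of "dual" rectified linear units; consult the paper for the exact parameterization.

```python
import numpy as np

def drelu(x, y):
    """Dual ReLU sketch: the difference of two rectified inputs.

    Unlike a single ReLU, the output can be any real number
    (unbounded positive AND negative), so it can stand in for
    tanh in a recurrent update without saturating.
    """
    return np.maximum(0.0, x) - np.maximum(0.0, y)
```

In a QRNN-style recurrence, `x` and `y` would be two separate affine transformations of the layer input, with the gating applied around the activation as usual.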
no code implementations • WS 2017 • Fréderic Godin, Joni Dambre, Wesley De Neve
In this paper, we introduce the novel concept of densely connected layers into recurrent neural networks.