Search Results for author: Wolfgang Lehner

Found 4 papers, 3 papers with code

To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models

1 code implementation · 6 Oct 2022 · Julius Gonsior, Christian Falkenberg, Silvio Magino, Anja Reusch, Maik Thiele, Wolfgang Lehner

Despite achieving state-of-the-art results in nearly all Natural Language Processing applications, fine-tuning Transformer-based language models still requires a significant amount of labeled data to work.

Active Learning
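The title's question is whether softmax outputs are a good confidence signal for Active Learning. As a minimal illustration (not the paper's actual method), the classic least-confidence strategy scores unlabeled samples by one minus their top softmax probability; all function names here are hypothetical:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def least_confidence(logits):
    # Uncertainty score: 1 - max predicted probability.
    # Higher scores mark samples worth labeling next.
    probs = softmax(np.asarray(logits, dtype=float))
    return 1.0 - probs.max(axis=-1)

def select_for_labeling(logits, k):
    # Indices of the k most uncertain unlabeled samples.
    scores = least_confidence(logits)
    return np.argsort(-scores)[:k]
```

The paper's point is that such softmax-derived confidence can be miscalibrated for fine-tuned Transformers, which is why the choice of uncertainty measure matters.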

ImitAL: Learned Active Learning Strategy on Synthetic Data

1 code implementation · 24 Aug 2022 · Julius Gonsior, Maik Thiele, Wolfgang Lehner

Basically, most existing AL strategies are a combination of the two simple heuristics of informativeness and representativeness, and the big differences lie in how these often conflicting heuristics are combined.

Active Learning · Informativeness +1
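The two heuristics named in the abstract can be sketched generically: informativeness as prediction entropy and representativeness as closeness to the rest of the pool, blended by a weight. This is a minimal sketch of that idea, not ImitAL's learned strategy; `alpha` and the function names are assumptions for illustration:

```python
import numpy as np

def informativeness(probs):
    # Prediction entropy: high when the model is unsure.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def representativeness(X):
    # Negative mean distance to all other samples:
    # high for samples near the bulk of the data.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return -dists.mean(axis=1)

def combined_score(probs, X, alpha=0.5):
    # alpha trades off the two (often conflicting) heuristics,
    # after min-max normalizing each score to [0, 1].
    def norm(s):
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    return alpha * norm(informativeness(probs)) + (1 - alpha) * norm(representativeness(X))
```

ImitAL's contribution is to learn such a combination from synthetic data via imitation learning rather than hand-tuning a weight like `alpha`.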

ImitAL: Learning Active Learning Strategies from Synthetic Data

1 code implementation · 17 Aug 2021 · Julius Gonsior, Maik Thiele, Wolfgang Lehner

Active Learning (AL) is a well-known standard method for efficiently obtaining labeled data by first labeling the samples that contain the most information based on a query strategy.

Active Learning · Imitation Learning +1
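The pool-based AL process the abstract describes — repeatedly labeling the sample a query strategy deems most informative — can be sketched as a generic loop. This is an illustrative skeleton with hypothetical callback names (`oracle_label`, `train`, `query_strategy`), not the paper's implementation:

```python
import numpy as np

def active_learning_loop(X_pool, oracle_label, train, query_strategy, budget, seed_size=2):
    # Pool-based AL: start from a small random labeled seed, then
    # repeatedly ask the query strategy for the next sample to label.
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X_pool), size=seed_size, replace=False))
    y = {i: oracle_label(i) for i in labeled}
    for _ in range(budget):
        model = train([X_pool[i] for i in labeled], [y[i] for i in labeled])
        unlabeled = [i for i in range(len(X_pool)) if i not in y]
        if not unlabeled:
            break
        pick = query_strategy(model, X_pool, unlabeled)
        y[pick] = oracle_label(pick)
        labeled.append(pick)
    return labeled, y
```

The query strategy is the pluggable part; ImitAL replaces hand-crafted strategies with one learned on synthetic tasks.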

RETRO: Relation Retrofitting For In-Database Machine Learning on Textual Data

no code implementations · 28 Nov 2019 · Michael Günther, Maik Thiele, Wolfgang Lehner

Thus, we argue to additionally incorporate the information given by the database schema into the embedding, e.g. which words appear in the same column or are related to each other.

BIG-bench Machine Learning · Imputation +3
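Retrofitting, as the title suggests, pulls word vectors toward vectors of related words while keeping them close to their originals. A minimal sketch of this general idea (not RETRO's actual algorithm), where `related` would encode schema relations such as co-occurrence in a column:

```python
import numpy as np

def retrofit_embeddings(emb, related, alpha=0.5, iterations=10):
    # Iteratively move each word's vector toward the mean of its
    # related words' vectors (e.g. values in the same column),
    # while staying anchored to the original text embedding.
    # `related` maps word index -> list of related word indices.
    orig = emb.copy()
    new = emb.copy()
    for _ in range(iterations):
        for i, neigh in related.items():
            if neigh:
                new[i] = alpha * orig[i] + (1 - alpha) * new[neigh].mean(axis=0)
    return new
```

Words with no schema relations keep their original vectors, so retrofitting only adds information where the database provides it.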
