no code implementations • 11 May 2023 • Anna-Christina Glock, Florian Sobieczky, Johannes Fürnkranz, Peter Filzmoser, Martin Jech
A change point detection (CPD) framework called "Predict and Compare", assisted by a predictive machine learning model, is introduced and characterised in relation to other state-of-the-art online CPD routines, which it outperforms in terms of false positive rate and out-of-control average run length.
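The predict-and-compare idea can be sketched as follows: fit a predictive model on past observations, compare its forecast against each incoming value, and flag a change point when the residual grows too large. A minimal illustration using a moving-average predictor and a residual threshold (the function name and thresholding scheme are simplifications for illustration, not the paper's method):

```python
import statistics

def predict_and_compare(stream, window=5, threshold=3.0):
    """Flag a change point when an observation deviates from the
    prediction of a simple model (here: a moving average) by more
    than `threshold` times the spread of past residuals."""
    history, residuals = [], []
    for t, x in enumerate(stream):
        if len(history) >= window:
            pred = sum(history[-window:]) / window   # predict
            resid = abs(x - pred)                    # compare
            if len(residuals) >= window:
                scale = statistics.pstdev(residuals) or 1.0
                if resid > threshold * scale:
                    return t  # index of first detected change point
            residuals.append(resid)
        history.append(x)
    return None  # no change detected
```

On a stable stream followed by a level shift, the first shifted observation is flagged; on a stationary stream, nothing is flagged.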
1 code implementation • 24 Jan 2023 • Van Quoc Phuong Huynh, Johannes Fürnkranz, Florian Beck
Instead, we propose an efficient algorithm that aims at finding the best rule covering each training example in a greedy optimization consisting of one specialization and one generalization loop.
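The specialization/generalization scheme described above can be sketched on boolean attributes: specialize by greedily adding conditions from the example until no negative example is covered, then generalize by dropping conditions that are no longer needed. This is a hypothetical illustration of the general idea, not the authors' algorithm:

```python
def best_rule_for(example, negatives):
    """Build a conjunctive rule (list of attribute=value conditions)
    covering `example`.  Specialization loop: add the condition that
    excludes the most negatives until none remain covered.
    Generalization loop: drop conditions whose removal still excludes
    all negatives."""
    def covers(rule, inst):
        return all(inst.get(a) == v for a, v in rule)

    # Specialize: greedily add conditions taken from the example itself.
    rule, remaining = [], list(negatives)
    candidates = list(example.items())
    while remaining and candidates:
        cond = max(candidates,
                   key=lambda c: sum(inst.get(c[0]) != c[1] for inst in remaining))
        candidates.remove(cond)
        rule.append(cond)
        remaining = [inst for inst in remaining if covers(rule, inst)]

    # Generalize: drop conditions not needed to exclude the negatives.
    for cond in list(rule):
        trimmed = [c for c in rule if c != cond]
        if not any(covers(trimmed, inst) for inst in negatives):
            rule = trimmed
    return rule
```

By construction the returned rule covers the seed example, since every condition is drawn from it.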
no code implementations • 3 Aug 2022 • Timo Bertram, Johannes Fürnkranz, Martin Müller
In this work, we adapt a training approach inspired by the original AlphaGo system to play the imperfect information game of Reconnaissance Blind Chess.
no code implementations • 20 Apr 2022 • Timo Bertram, Johannes Fürnkranz, Martin Müller
In this paper, we study learning in probabilistic domains where the learner may receive incorrect labels but can improve the reliability of labels by repeatedly sampling them.
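The benefit of repeated sampling can be illustrated with a simple majority vote over a noisy binary labeler (a baseline sketch, not the paper's method; all names here are hypothetical):

```python
import random

def majority_label(true_label, flip_prob, n_queries, rng):
    """Query a noisy labeler `n_queries` times; each query returns the
    true binary label flipped with probability `flip_prob`.  Return the
    majority-voted label, which matches the true label far more often
    than a single query does."""
    votes = sum(true_label if rng.random() > flip_prob else 1 - true_label
                for _ in range(n_queries))
    return int(votes > n_queries / 2)

rng = random.Random(0)
single = sum(majority_label(1, 0.3, 1, rng) for _ in range(1000))
voted = sum(majority_label(1, 0.3, 9, rng) for _ in range(1000))
# With flip_prob = 0.3 a single query is right roughly 70% of the
# time, while the 9-fold majority vote is right roughly 90% of the time.
```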
no code implementations • 9 Mar 2022 • Aïssatou Diallo, Johannes Fürnkranz
Cross-domain alignment plays a key role in tasks ranging from machine translation to transfer learning.
no code implementations • 28 Feb 2022 • Aïssatou Diallo, Johannes Fürnkranz
Entity Set Expansion is an important NLP task that aims at expanding a small set of entities into a larger one with items from a large pool of candidates.
1 code implementation • 13 Dec 2021 • Eneldo Loza Mencía, Moritz Kulessa, Simon Bohlender, Johannes Fürnkranz
However, the method requires a fixed, static order of the labels.
1 code implementation • 18 Oct 2021 • Michael Rapp, Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz
Early outbreak detection is a key aspect in the containment of infectious diseases, as it enables the identification and isolation of infected individuals before the disease can spread to a larger population.
no code implementations • 9 Jul 2021 • Timo Bertram, Johannes Fürnkranz, Martin Müller
We discuss and compare two different Siamese network architectures for this task: a twin network that compares the two sets resulting after the addition, and a triplet network that models the contribution of each candidate to the existing set.
no code implementations • 22 Jun 2021 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier
Based on the derivatives computed during training, we dynamically group the labels into a predefined number of bins to impose an upper bound on the dimensionality of the linear system.
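The binning step can be sketched as follows: sort the labels by the magnitude of their derivative and split them into a fixed number of contiguous groups, so that the linear system has at most one unknown per bin rather than one per label. A simplified illustration, not the authors' exact criterion:

```python
def bin_labels(gradients, n_bins):
    """Group label indices into at most `n_bins` bins by sorting them
    on the absolute value of their derivative and cutting the sorted
    order into contiguous, equally sized chunks."""
    order = sorted(range(len(gradients)), key=lambda i: abs(gradients[i]))
    size = -(-len(order) // n_bins)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]
```

Labels with similar derivative magnitudes end up in the same bin, which is what bounds the dimensionality of the system to be solved.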
no code implementations • 18 Jun 2021 • Florian Beck, Johannes Fürnkranz
Inductive rule learning is arguably among the most traditional paradigms in machine learning.
no code implementations • 18 Jun 2021 • Florian Beck, Johannes Fürnkranz
We investigate whether it is possible to learn rule sets efficiently in a network structure with a single hidden layer using iterative refinements over mini-batches of examples.
1 code implementation • 25 May 2021 • Timo Bertram, Johannes Fürnkranz, Martin Müller
Drafting, i.e., the selection of a subset of items from a larger candidate set, is a key element of many games and related problems.
no code implementations • 21 May 2021 • Aïssatou Diallo, Johannes Fürnkranz
Typically, each object is mapped onto a point vector in a low dimensional metric space.
1 code implementation • 28 Jan 2021 • Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz
Infectious disease surveillance is of great importance for the prevention of major outbreaks.
no code implementations • 26 Jan 2021 • Tobias Joppen, Johannes Fürnkranz
In this paper, we examine Monte Carlo tree search (MCTS), a popular algorithm for solving MDPs, highlight a recurring problem concerning its use of rewards, and show that an ordinal treatment of the rewards overcomes this problem.
no code implementations • 8 Dec 2020 • Johannes Fürnkranz, Eyke Hüllermeier, Eneldo Loza Mencía, Michael Rapp
Arguably the key reason for the success of deep neural networks is their ability to autonomously form non-linear combinations of the input features, which can be used in subsequent layers of the network.
no code implementations • 2 Nov 2020 • Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencía, Johannes Fürnkranz, Michael Rapp
For evaluating such predictions, the set of predicted labels needs to be compared to the ground-truth label set associated with that instance, and various loss functions have been proposed for this purpose.
no code implementations • 16 Jul 2020 • Eyke Hüllermeier, Johannes Fürnkranz, Eneldo Loza Mencía
We advocate the use of conformal prediction (CP) to enhance rule-based multi-label classification (MLC).
1 code implementation • 23 Jun 2020 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen, Eyke Hüllermeier
In multi-label classification, where the evaluation of predictions is less straightforward than in single-label classification, various meaningful, though different, loss functions have been proposed.
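Two of the standard multi-label losses alluded to here behave quite differently on the same prediction: Hamming loss penalizes each wrong label, while subset 0/1 loss penalizes any deviation from the exact label set. A minimal illustration on binary label vectors:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of individual label positions predicted incorrectly."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def subset_zero_one_loss(y_true, y_pred):
    """1 unless the entire label vector is predicted exactly."""
    return int(y_true != y_pred)

y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]  # one of four labels is wrong
# hamming_loss(y_true, y_pred) -> 0.25
# subset_zero_one_loss(y_true, y_pred) -> 1
```

A predictor tuned for one of these losses need not be good for the other, which is why the choice of loss matters.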
no code implementations • 21 Jun 2020 • Vu-Linh Nguyen, Eyke Hüllermeier, Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz
While a variety of ensemble methods for multilabel classification have been proposed in the literature, the question of how to aggregate the predictions of the individual members of the ensemble has received little attention so far.
no code implementations • 11 Nov 2019 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz
We analyze the trade-off between model complexity and accuracy for random forests by breaking the trees up into individual classification rules and selecting a subset of them.
no code implementations • 8 Nov 2019 • Tomáš Kliegr, Štěpán Bahník, Johannes Fürnkranz
The areas of machine learning and knowledge discovery in databases have considerably matured in recent years.
3 code implementations • 19 Aug 2019 • Johannes Czech, Moritz Willig, Alena Beyer, Kristian Kersting, Johannes Fürnkranz
Crazyhouse is a game with a higher branching factor than chess, and only limited data of lower quality is available compared to the data used for AlphaGo.
1 code implementation • 8 Aug 2019 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz
Many rule learning algorithms employ a heuristic-guided search for rules that model regularities contained in the training data, and it is commonly accepted that the choice of the heuristic has a significant impact on the predictive performance of the learner.
no code implementations • 17 Jul 2019 • Moritz Kulessa, Eneldo Loza Mencía, Johannes Fürnkranz
Our results on synthetic data show that it is challenging to improve the performance with a trainable fusion method based on machine learning.
no code implementations • 31 May 2019 • Tobias Joppen, Tilman Strübig, Johannes Fürnkranz
In this paper, we present a simple and cheap ordinal bucketing algorithm that approximately generates $q$-quantiles from an incremental data stream.
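For comparison, a much simpler baseline (not the paper's algorithm) for approximate $q$-quantiles on a stream is reservoir sampling: keep a bounded random sample of the stream and read quantile boundaries off the sorted sample. All names and parameters below are illustrative:

```python
import random

def stream_quantiles(stream, q, reservoir_size=256, seed=0):
    """Baseline sketch: maintain a fixed-size reservoir sample of the
    stream via standard reservoir sampling, then return approximate
    q-quantile boundaries read off the sorted reservoir."""
    rng = random.Random(seed)
    reservoir = []
    for n, x in enumerate(stream, start=1):
        if len(reservoir) < reservoir_size:
            reservoir.append(x)
        else:
            j = rng.randrange(n)  # keep item with prob reservoir_size/n
            if j < reservoir_size:
                reservoir[j] = x
    reservoir.sort()
    return [reservoir[(k * len(reservoir)) // q] for k in range(1, q)]
```

On a stream of 0..999, the estimated quartile boundaries land near 250, 500, and 750, with error shrinking as the reservoir grows.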
1 code implementation • 6 May 2019 • Alexander Zap, Tobias Joppen, Johannes Fürnkranz
Reinforcement learning usually makes use of numerical rewards, which have nice properties but also come with drawbacks and difficulties.
2 code implementations • 14 Jan 2019 • Tobias Joppen, Johannes Fürnkranz
In many problem settings, most notably in game playing, an agent receives a possibly delayed reward for its actions.
1 code implementation • 14 Dec 2018 • Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz
Exploiting dependencies between labels is considered to be crucial for multi-label classification.
1 code implementation • 30 Nov 2018 • Eneldo Loza Mencía, Johannes Fürnkranz, Eyke Hüllermeier, Michael Rapp
Multi-label classification (MLC) is a supervised learning problem in which, contrary to standard multiclass classification, an instance can be associated with several class labels simultaneously.
no code implementations • 27 Nov 2018 • Lukas Fleckenstein, Sebastian Kauschke, Johannes Fürnkranz
With today's abundant streams of data, the only constant we can rely on is change.
no code implementations • 17 Jul 2018 • Tobias Joppen, Christian Wirth, Johannes Fürnkranz
To deal with such cases, the experimenter has to supply an additional numeric feedback signal in the form of a heuristic, which intrinsically guides the agent.
no code implementations • 9 Apr 2018 • Tomáš Kliegr, Štěpán Bahník, Johannes Fürnkranz
While the interpretability of machine learning models is often equated with their mere syntactic comprehensibility, we think that interpretability goes beyond that, and that human interpretability should also be investigated from the point of view of cognitive science.
1 code implementation • 4 Mar 2018 • Johannes Fürnkranz, Tomáš Kliegr, Heiko Paulheim
It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones.
no code implementations • NeurIPS 2017 • Jinseok Nam, Eneldo Loza Mencía, Hyunwoo J. Kim, Johannes Fürnkranz
Multi-label classification is the task of predicting a set of labels for a given input instance.
no code implementations • 22 Dec 2014 • Jinseok Nam, Johannes Fürnkranz
We present a novel method to learn vector representations of a label space given a hierarchy of labels and label co-occurrence patterns.
no code implementations • 19 Dec 2013 • Jinseok Nam, Jungi Kim, Eneldo Loza Mencía, Iryna Gurevych, Johannes Fürnkranz
Neural networks have recently been proposed for multi-label classification because they are able to capture and model label dependencies in the output layer.