Search Results for author: Sandjai Bhulai

Found 13 papers, 4 papers with code

Robustly overfitting latents for flexible neural image compression

no code implementations · 31 Jan 2024 · Yura Perugachi-Diaz, Arwin Gansekoele, Sandjai Bhulai

Finally, we show how refinement of the latents with our best-performing method improves the compression performance on the Tecnick dataset and how it can be deployed to partly move along the rate-distortion curve.

Decoder · Image Compression
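
As a rough illustration of the latent-refinement idea in the entry above, the sketch below optimizes the latents of a frozen toy decoder against a rate-distortion objective at encoding time. The tiny decoder, the rate proxy, and the weight lam are assumptions made for illustration; this is not the refinement method proposed in the paper.

```python
# Hedged sketch: test-time refinement of latents under a rate-distortion
# objective. The tiny frozen decoder and the rate proxy are illustrative
# assumptions, not the architecture or method used in the paper.
import torch

torch.manual_seed(0)

decoder = torch.nn.Sequential(                    # stand-in for a pretrained decoder
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 784)
)
for p in decoder.parameters():
    p.requires_grad_(False)                       # the decoder stays frozen

x = torch.rand(1, 784)                            # image to compress (flattened)
y = torch.randn(1, 16, requires_grad=True)        # initial latents (e.g. from an encoder)
lam = 0.01                                        # rate-distortion trade-off weight

opt = torch.optim.Adam([y], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    distortion = torch.mean((decoder(y) - x) ** 2)
    rate_proxy = torch.mean(y ** 2)               # crude stand-in for the bit cost of y
    loss = distortion + lam * rate_proxy
    loss.backward()
    opt.step()
```

Sweeping lam over a range of values trades bits for distortion, which is what moving along the rate-distortion curve refers to.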

Multi-Agent Reinforcement Learning for Power Grid Topology Optimization

no code implementations · 4 Oct 2023 · Erica van der Sar, Alessandro Zocca, Sandjai Bhulai

Recent challenges in operating power networks arise from increasing energy demands and unpredictable renewable sources like wind and solar.

Multi-agent Reinforcement Learning · reinforcement-learning +1

The Berkelmans-Pries Feature Importance Method: A Generic Measure of Informativeness of Features

1 code implementation · 11 Jan 2023 · Joris Pries, Guus Berkelmans, Sandjai Bhulai, Rob van der Mei

We prove that our method has many useful properties, and accurately predicts the correct FI values for several cases where the ground truth FI can be derived in an exact manner.

Feature Importance · Informativeness

The Optimal Input-Independent Baseline for Binary Classification: The Dutch Draw

no code implementations · 9 Jan 2023 · Joris Pries, Etienne van de Bijl, Jan Klein, Sandjai Bhulai, Rob van der Mei

The goal of this paper is to examine all baseline methods that are independent of feature values and determine which model is the 'best' and why.

Binary Classification

RangL: A Reinforcement Learning Competition Platform

no code implementations · 28 Jul 2022 · Viktor Zobernig, Richard A. Saldanha, Jinke He, Erica van der Sar, Jasper van Doorn, Jia-Chen Hua, Lachlan R. Mason, Aleksander Czechowski, Drago Indjic, Tomasz Kosmala, Alessandro Zocca, Sandjai Bhulai, Jorge Montalvo Arvizu, Claude Klöckl, John Moriarty

The RangL project hosted by The Alan Turing Institute aims to encourage the wider uptake of reinforcement learning by supporting competitions relating to real-world dynamic decision problems.

OpenAI Gym · reinforcement-learning +1

The Dutch Draw: Constructing a Universal Baseline for Binary Prediction Models

1 code implementation · 24 Mar 2022 · Etienne van de Bijl, Jan Klein, Joris Pries, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei

Summarizing, the DD baseline is: (1) general, as it is applicable to all binary classification problems; (2) simple, as it is quickly determined without training or parameter-tuning; (3) informative, as insightful conclusions can be drawn from the results.

Binary Classification
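
To make the notion of an input-independent baseline concrete, here is a hedged sketch that scores a few trivial classifiers which never look at the features and keeps the best one. The candidate set and the metric are illustrative choices, not the Dutch Draw procedure itself.

```python
# Hedged sketch: input-independent baselines for binary classification.
# Every candidate ignores the features entirely; the candidate set and the
# metric are illustrative choices, not the Dutch Draw itself.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)            # ground-truth labels of some task

candidates = {
    "all_zeros": np.zeros_like(y_true),
    "all_ones": np.ones_like(y_true),
    "coin_flip": rng.integers(0, 2, size=y_true.size),
}

scores = {name: f1_score(y_true, pred, zero_division=0)
          for name, pred in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best input-independent baseline:", best)
```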

The BP Dependency Function: a Generic Measure of Dependence between Random Variables

1 code implementation · 23 Mar 2022 · Guus Berkelmans, Joris Pries, Sandjai Bhulai, Rob van der Mei

To this end, we also provide Python code to determine the dependency function for use in practice.

Job Recommender Systems: A Review

no code implementations · 26 Nov 2021 · Corné de Ruijt, Sandjai Bhulai

This paper provides a review of the job recommender system (JRS) literature published in the past decade (2011-2021).

Fairness · Recommendation Systems

The Generalized Cascade Click Model: A Unified Framework for Estimating Click Models

no code implementations · 22 Nov 2021 · Corné de Ruijt, Sandjai Bhulai

To arrive at that conclusion, we will present the Generalized Cascade Model (GCM) and show how this model can be estimated using the IO-HMM EM framework, and provide two examples of how existing click models can be mapped to GCM.
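
For context, the sketch below simulates the classic cascade click model, the simplest member of the family that the GCM unifies: the user scans results from top to bottom and clicks the first attractive one. The attractiveness values are invented, and the IO-HMM EM estimation mentioned above is not shown.

```python
# Hedged sketch: the classic cascade click model, the simplest member of the
# family the Generalized Cascade Model unifies. Attractiveness values are made up.
import numpy as np

rng = np.random.default_rng(42)
attractiveness = np.array([0.6, 0.3, 0.2, 0.1])   # P(click | examined) for each rank

def simulate_session(attr, rng):
    """Scan results top to bottom; click the first attractive one, then stop."""
    for rank, a in enumerate(attr):
        if rng.random() < a:
            return rank                           # position of the single click
    return None                                   # abandoned session, no click

clicks = [simulate_session(attractiveness, rng) for _ in range(10_000)]
for rank in range(len(attractiveness)):
    rate = np.mean([c == rank for c in clicks])
    print(f"rank {rank}: empirical click rate {rate:.3f}")
```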

Jasmine: A New Active Learning Approach to Combat Cybercrime

no code implementations · 13 Aug 2021 · Jan Klein, Sandjai Bhulai, Mark Hoogendoorn, Rob van der Mei

These approaches use a query function to choose specific unlabeled instances that are expected to improve overall classification performance.

Active Learning · Intrusion Detection
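
The query function mentioned in the entry above can be illustrated with uncertainty sampling, a common generic choice: pick the unlabeled instances the current model is least sure about. This is only a sketch of the general mechanism, not Jasmine's actual query strategy.

```python
# Hedged sketch: uncertainty sampling, a common active-learning query function.
# It selects the unlabeled instances the current model is least certain about.
# Generic illustration only; not the query strategy used by Jasmine.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)     # small labeled seed set
X_pool = rng.normal(size=(500, 5))                # large unlabeled pool

model = LogisticRegression().fit(X_labeled, y_labeled)
proba = model.predict_proba(X_pool)[:, 1]
uncertainty = 1.0 - np.abs(proba - 0.5) * 2       # 1 at p=0.5, 0 at p=0 or p=1

query_idx = np.argsort(uncertainty)[-10:]         # 10 most uncertain instances
print("indices to send to the oracle for labeling:", query_idx)
```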

Personalized Stopping Rules in Bayesian Adaptive Mastery Assessment

no code implementations · 5 Mar 2021 · Anni Sapountzi, Sandjai Bhulai, Ilja Cornelisz, Chris van Klaveren

We propose a new model to assess the mastery level of a given skill efficiently.

Decision Making · Optimization and Control
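
A hedged sketch of the general idea behind adaptive mastery assessment with a stopping rule: keep a posterior probability of mastery, update it after every observed response, and stop assessing once the posterior is confident either way. The slip, guess, prior, and threshold values below are invented; this is not the personalized model proposed in the paper.

```python
# Hedged sketch: a simple Bayesian mastery assessment with a stopping rule.
# The slip, guess, prior, and threshold values are invented for illustration;
# this is not the personalized model proposed in the paper.
p_mastery = 0.3          # prior belief that the skill is mastered
slip, guess = 0.1, 0.2   # P(wrong | mastered), P(correct | not mastered)
threshold = 0.95         # stop once we are this confident either way

responses = [1, 1, 0, 1, 1, 1]   # 1 = correct answer, 0 = incorrect

for i, correct in enumerate(responses, start=1):
    if correct:
        lik_mastered, lik_not = 1 - slip, guess
    else:
        lik_mastered, lik_not = slip, 1 - guess
    numer = lik_mastered * p_mastery
    p_mastery = numer / (numer + lik_not * (1 - p_mastery))
    print(f"after item {i}: P(mastered) = {p_mastery:.3f}")
    if p_mastery >= threshold or p_mastery <= 1 - threshold:
        print("stopping: confident enough to make a mastery decision")
        break
```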

Invertible DenseNets with Concatenated LipSwish

1 code implementation · NeurIPS 2021 · Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai

Furthermore, we propose a learnable weighted concatenation, which not only improves the model performance but also indicates the importance of the concatenated weighted representation.

Density Estimation
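
The learnable weighted concatenation mentioned above can be sketched, under assumptions, as two branches joined by concatenation with trainable, normalized weights, so that the learned weights can be read as relative importance. The parameterization below (softmax-normalized weights, a linear branch) is an illustrative guess, not the exact construction used in the paper.

```python
# Hedged sketch: a learnable weighted concatenation of an identity branch and a
# transformed branch. The softmax-normalized weights and the linear branch are
# illustrative assumptions, not the exact construction used in the paper.
import torch
import torch.nn as nn

class WeightedConcat(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.branch = nn.Linear(dim, dim)
        self.eta = nn.Parameter(torch.zeros(2))   # raw weights, one per branch

    def forward(self, x):
        w = torch.softmax(self.eta, dim=0)        # normalized; readable as importance
        return torch.cat([w[0] * x, w[1] * self.branch(x)], dim=-1)

layer = WeightedConcat(8)
out = layer(torch.randn(4, 8))
print(out.shape)                                  # (4, 16): concatenated representation
print(torch.softmax(layer.eta, dim=0))            # relative importance of each branch
```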

Invertible DenseNets

no code implementations · AABI Symposium 2021 · Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai

We introduce Invertible Dense Networks (i-DenseNets), a more parameter efficient alternative to Residual Flows.
