no code implementations • 10 Feb 2024 • Bernhard A. Moser, Michael Lunglmayr
Leaky-integrate-and-fire (LIF) is studied as a non-linear operator that maps an integrable signal $f$ to a sequence $\eta_f$ of discrete events, the spikes.
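The LIF operator described above can be sketched in a few lines. This is an illustrative discretization, not the paper's construction: the threshold `theta`, leak rate `alpha`, and reset-by-subtraction rule are assumptions chosen for the sketch.

```python
import numpy as np

def lif_spikes(f, t, theta=1.0, alpha=0.1):
    """Sketch of LIF as an operator mapping a sampled signal f(t)
    to a sequence of spike times. theta (threshold), alpha (leak rate)
    and the reset rule are illustrative assumptions."""
    dt = t[1] - t[0]
    u, spikes = 0.0, []
    for ti, fi in zip(t, f):
        u += dt * (fi - alpha * u)   # leaky integration of the input
        if u >= theta:               # threshold crossing emits a spike
            spikes.append(ti)
            u -= theta               # reset by subtraction
    return spikes

t = np.linspace(0.0, 10.0, 1000)
spikes = lif_spikes(np.ones_like(t), t, theta=1.0, alpha=0.0)
```

With a constant unit input and no leak, spikes occur roughly once per unit of accumulated integral, so the spike train is a strictly increasing sequence of event times.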
no code implementations • 24 Nov 2023 • Daniel Windhager, Bernhard A. Moser, Michael Lunglmayr
We present synthesis and performance results showing that this architecture can be implemented for networks of more than 1000 neurons with high clock speeds on a state-of-the-art FPGA.
1 code implementation • 13 May 2023 • Bernhard A. Moser, Michael Lunglmayr
In spiking neural networks (SNN), at each node, an incoming sequence of weighted Dirac pulses is converted into an output sequence of weighted Dirac pulses by a leaky-integrate-and-fire (LIF) neuron model based on spike aggregation and thresholding.
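The node-level conversion described above — weighted Dirac pulses in, weighted Dirac pulses out via aggregation and thresholding — can be sketched as an event-driven loop. The exponential leak between events, the signed unit-spike output, and the reset-by-subtraction step are assumptions for this sketch, not necessarily the paper's exact model.

```python
import math

def lif_node(in_times, in_weights, theta=1.0, alpha=0.0):
    """Sketch of one SNN node: an incoming sequence of weighted Dirac
    pulses is aggregated into a potential u; each threshold crossing
    emits an output pulse of weight +/- theta (assumed reset rule)."""
    u, t_prev = 0.0, None
    out_times, out_weights = [], []
    for t, w in sorted(zip(in_times, in_weights)):
        if t_prev is not None:
            u *= math.exp(-alpha * (t - t_prev))  # leak between events
        t_prev = t
        u += w                                    # spike aggregation
        while abs(u) >= theta:                    # thresholding
            s = math.copysign(theta, u)
            out_times.append(t)
            out_weights.append(s)
            u -= s                                # reset by subtraction
    return out_times, out_weights
```

For example, two input pulses of weight 0.6 at times 1 and 2 with `theta=1.0` and no leak accumulate to 1.2 at time 2, producing a single output pulse of weight 1.0 there.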
1 code implementation • 9 May 2023 • Bernhard A. Moser, Michael Lunglmayr
A central question is the adequate structure for a space of spike trains, and its implications for the design of error measures for SNNs that account for time delays, threshold deviations, and the reinitialization mode of the leaky-integrate-and-fire (LIF) neuron model.
1 code implementation • 2 May 2023 • Marius-Constantin Dinu, Markus Holzleitner, Maximilian Beck, Hoan Duc Nguyen, Andrea Huber, Hamid Eghbal-zadeh, Bernhard A. Moser, Sergei Pereverzyev, Sepp Hochreiter, Werner Zellinger
Our method outperforms deep embedded validation (DEV) and importance weighted validation (IWV) on all datasets, setting a new state-of-the-art performance for solving parameter choice issues in unsupervised domain adaptation with theoretical error guarantees.
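Importance weighted validation (IWV), one of the baselines mentioned above, can be sketched as follows. The idea is to estimate target-domain risk as a density-ratio-weighted average of source validation losses; the weight vector (approximating p_target(x)/p_source(x), e.g. from a domain classifier) is assumed given here.

```python
import numpy as np

def iwv_risk(losses, weights):
    """Sketch of importance-weighted validation: estimate the target
    risk as a self-normalized, density-ratio-weighted average of
    per-example source validation losses. `weights` is an assumed
    input approximating p_target(x) / p_source(x)."""
    w = np.asarray(weights, dtype=float)
    l = np.asarray(losses, dtype=float)
    return float(np.sum(w * l) / np.sum(w))

# Model selection: pick the hyperparameter with the smallest estimated risk.
candidate_losses = {0.1: [1.0, 0.0], 1.0: [1.0, 1.0]}
best = min(candidate_losses, key=lambda k: iwv_risk(candidate_losses[k], [1.0, 1.0]))
```

With uniform weights the estimate reduces to the plain validation mean, which is the degenerate no-shift case.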
no code implementations • 3 Apr 2023 • Mohit Kumar, Bernhard A. Moser, Lukas Fischer
The privacy-utility tradeoff remains one of the fundamental issues of differentially private machine learning.
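The tradeoff can be made concrete with the classic Laplace mechanism (an illustration, not the paper's method): a smaller privacy budget epsilon gives stronger privacy but injects larger noise, degrading utility.

```python
import numpy as np

def laplace_mean(x, epsilon, lo=0.0, hi=1.0, rng=None):
    """Illustrative Laplace mechanism for a differentially private mean.
    Smaller epsilon => stronger privacy but noisier output; lo/hi are
    assumed clipping bounds that fix the query's sensitivity."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(x)   # changing one record moves the mean by at most this
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return float(x.mean() + noise)
```

For a very large epsilon the noise is negligible and the released mean is close to the true mean; as epsilon shrinks, the expected error grows like sensitivity/epsilon.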
no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.
1 code implementation • NeurIPS 2021 • Werner Zellinger, Natalia Shepeleva, Marius-Constantin Dinu, Hamid Eghbal-zadeh, Hoan Nguyen, Bernhard Nessler, Sergei Pereverzyev, Bernhard A. Moser
Our approach starts with the observation that the widely used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with regularized ill-posed inverse problems.
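The penalized objective this observation refers to can be sketched as follows. The distance term here is a simple mean-embedding (linear MMD-style) discrepancy chosen for illustration, and `lam` plays the role of the regularization parameter in the inverse-problem view; neither is claimed to be the paper's choice.

```python
import numpy as np

def da_objective(source_feats, source_loss, target_feats, lam=0.1):
    """Sketch of a penalized domain-adaptation objective: source error
    plus lam times a distance between source and target feature
    representations. Distance = squared gap between feature means
    (an illustrative stand-in for the distance measure)."""
    gap = np.mean(source_feats, axis=0) - np.mean(target_feats, axis=0)
    return float(source_loss) + lam * float(np.dot(gap, gap))
```

When the two feature distributions already match in mean, the penalty vanishes and the objective reduces to the source error alone.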
no code implementations • 6 Jun 2021 • Mohit Kumar, Bernhard A. Moser, Lukas Fischer, Bernhard Freudenthaler
A variational membership-mapping Bayesian model is used for the analytical approximations of the defined information theoretic measures for privacy-leakage, interpretability, and transferability.
no code implementations • 14 Apr 2021 • Mohit Kumar, Bernhard A. Moser, Lukas Fischer, Bernhard Freudenthaler
An analytical approach to the variational learning of a membership-mappings-based data representation model is considered.
no code implementations • 6 Jul 2020 • Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer
Data augmentation techniques have become standard practice in deep learning, as they have been shown to greatly improve the generalisation abilities of models.
no code implementations • 19 Feb 2020 • Werner Zellinger, Bernhard A. Moser, Susanne Saminger-Platz
Domain adaptation algorithms are designed to minimize the misclassification risk of a discriminative model for a target domain with little training data by adapting a model from a source domain with a large amount of training data.
1 code implementation • 22 Jun 2018 • Hamid Eghbal-zadeh, Lukas Fischer, Niko Popitsch, Florian Kromp, Sabine Taschner-Mandl, Khaled Koutini, Teresa Gerber, Eva Bozsaky, Peter F. Ambros, Inge M. Ambros, Gerhard Widmer, Bernhard A. Moser
We show that DeepSNP is capable of successfully predicting the presence or absence of a breakpoint in large genomic windows and outperforms state-of-the-art neural network models.
2 code implementations • 16 Nov 2017 • Werner Zellinger, Bernhard A. Moser, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, Susanne Saminger-Platz
A novel approach for unsupervised domain adaptation for neural networks is proposed.