no code implementations • 16 Feb 2024 • Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann
This paper tackles the scarcity of benchmarking data in disentangled auditory representation learning.
no code implementations • 12 Feb 2024 • Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
Our research demonstrates the significant benefits of using fine-tuning with explanations to enhance the performance of language models.
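For context, "fine-tuning with explanations" amounts to training on targets that pair the answer with its rationale. Below is a minimal Python sketch of one plausible data-formatting step; the field names and prompt format are illustrative assumptions, not taken from the paper.

```python
# Hypothetical formatting step for explanation-augmented fine-tuning.
# Field names ("question", "answer", "explanation") are assumptions.
def to_training_pair(example):
    prompt = f"question: {example['question']}"
    # The target couples the label with its rationale, so the model is
    # fine-tuned to produce both.
    target = f"answer: {example['answer']} explanation: {example['explanation']}"
    return prompt, target

sample = {
    "question": "Is 7 a prime number?",
    "answer": "yes",
    "explanation": "7 has no divisors other than 1 and itself.",
}
print(to_training_pair(sample))
```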
no code implementations • 4 Nov 2023 • Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann
This benchmark dataset and framework address the gap in the rigorous evaluation of state-of-the-art disentangled speech representation learning methods.
no code implementations • 7 Sep 2023 • Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann
This study provides an empirical analysis of Barlow Twins (BT), a self-supervised learning (SSL) technique inspired by theories of redundancy reduction in human perception.
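For readers unfamiliar with the method: Barlow Twins drives the cross-correlation matrix between the embeddings of two augmented views of the same input toward the identity, decorrelating feature dimensions (redundancy reduction). A minimal PyTorch sketch of the loss; the off-diagonal weight is illustrative, not the paper's setting.

```python
import torch

def barlow_twins_loss(z_a, z_b, lambda_offdiag=5e-3):
    """z_a, z_b: (N, D) embeddings of two augmentations of the same batch."""
    n, _ = z_a.shape
    # Standardize each feature dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    # Cross-correlation matrix between the two views, shape (D, D).
    c = (z_a.T @ z_b) / n
    # Pull diagonal entries toward 1 (invariance across views) ...
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # ... and off-diagonal entries toward 0 (redundancy reduction).
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag

loss = barlow_twins_loss(torch.randn(64, 128), torch.randn(64, 128))
```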
no code implementations • 21 Jun 2023 • Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
The pre-trained models all achieve similar performance, and they outperform transformers trained from scratch by a large margin.
1 code implementation • 21 Jun 2023 • Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
Investigating deep learning language models has always been a significant research area due to the "black box" nature of most advanced models.
no code implementations • 29 Sep 2021 • Mats Leon Richter, Krupal Shah, Anna Wiedenroth, Saketh Bachu, Ulf Krumnack
The architecture of a convolutional neural network (CNN) has a great impact on the model's predictive performance and efficiency.
1 code implementation • 23 Jun 2021 • Mats L. Richter, Julius Schöning, Anna Wiedenroth, Ulf Krumnack
When optimizing convolutional neural networks (CNN) for a specific image-based task, specialists commonly overshoot the number of convolutional layers in their designs.
no code implementations • 17 Jun 2021 • Mats L. Richter, Leila Malihi, Anne-Kathrin Patricia Windler, Ulf Krumnack
In this work, we explore the information processing inside neural networks using logistic regression probes and the saturation metric.
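A logistic regression probe is simply a linear classifier fit on the activations recorded at one layer; its test accuracy indicates how linearly decodable the target classes are at that depth. A minimal sketch with scikit-learn, assuming the per-layer activations have already been extracted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(feats, labels, seed=0):
    """Fit a linear probe on one layer's activations, return test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, labels, test_size=0.2, random_state=seed, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Toy check with random features; real probes use recorded layer activations.
rng = np.random.default_rng(0)
print(probe_accuracy(rng.normal(size=(200, 64)), rng.integers(0, 2, size=200)))
```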
no code implementations • 2 Feb 2021 • Mats L. Richter, Wolf Byttner, Ulf Krumnack, Ludwig Schallner, Justin Shenk
Fully convolutional neural networks can process input of arbitrary size by applying a combination of downsampling and pooling.
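To illustrate the claim: because no layer hard-codes a spatial size, strided (downsampling) convolutions followed by global pooling let a single network handle any input resolution. A minimal PyTorch sketch; the architecture is illustrative, not the one studied in the paper.

```python
import torch
import torch.nn as nn

# Fully convolutional classifier: downsampling convolutions, a 1x1 conv head,
# and global pooling; no layer assumes a fixed input resolution.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 10, 1),          # 1x1 convolution as the classifier head
    nn.AdaptiveAvgPool2d(1),       # global pooling collapses any spatial size
    nn.Flatten(),
)

for size in (32, 64, 97):          # arbitrary input resolutions
    x = torch.randn(1, 3, size, size)
    print(size, net(x).shape)      # always torch.Size([1, 10])
```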