Search Results for author: Gunther Heidemann

Found 7 papers, 1 paper with code

Learning Disentangled Audio Representations through Controlled Synthesis

no code implementations · 16 Feb 2024 · Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann

This paper tackles the scarcity of benchmarking data in disentangled auditory representation learning.

Benchmarking · Disentanglement

Show Me How It's Done: The Role of Explanations in Fine-Tuning Language Models

no code implementations · 12 Feb 2024 · Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger

Our research demonstrates the significant benefits of fine-tuning language models with explanations to enhance their performance.

Learning Disentangled Speech Representations

no code implementations · 4 Nov 2023 · Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann

This benchmark dataset and framework address a gap in the rigorous evaluation of state-of-the-art disentangled speech representation learning methods.

Disentanglement

Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction

no code implementations · 7 Sep 2023 · Yusuf Brima, Ulf Krumnack, Simone Pika, Gunther Heidemann

This study provides an empirical analysis of Barlow Twins (BT), an SSL technique inspired by theories of redundancy reduction in human perception.

Keyword Spotting · Self-Supervised Learning · +1
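The Barlow Twins objective named in the abstract above implements redundancy reduction directly: the empirical cross-correlation matrix of two augmented views' embeddings is pushed toward the identity. A minimal NumPy sketch of that published loss follows (the function name, `lam` value, and plain-NumPy setting are illustrative choices, not taken from this paper):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins loss: drive the cross-correlation matrix of two
    embedding views toward the identity. The diagonal term enforces
    invariance across views; the off-diagonal term reduces redundancy
    between embedding dimensions."""
    n, d = z_a.shape
    # Standardize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = z_a.T @ z_b / n  # d x d cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

With identical views the diagonal of the correlation matrix is exactly 1, so only the small off-diagonal penalty remains; independent views incur a large invariance penalty.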

Opening the Black Box: Analyzing Attention Weights and Hidden States in Pre-trained Language Models for Non-language Tasks

1 code implementation · 21 Jun 2023 · Mohamad Ballout, Ulf Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger

Investigating deep learning language models has always been a significant research area due to the "black box" nature of most advanced models.

Language Modelling · ListOps

Embedding semantic relationships in hidden representations via label smoothing

no code implementations1 Jan 2021 Michael Marino, Pascal Nieters, Gunther Heidemann, Joachim Hertzberg

Further, we use a new method for analyzing class hierarchy in hidden representations, Neurodynamical Agglomerative Analysis (NAA), to show that latent class relationships in this analysis model tend toward the relationships of the label vectors as the data is projected deeper into the network.
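Standard label smoothing, the starting point of the approach described above, mixes each one-hot target with the uniform distribution. The abstract's idea of embedding semantic relationships suggests distributing that smoothing mass non-uniformly toward related classes instead. The sketch below shows the standard scheme plus a hypothetical hierarchy-weighted variant; the paper's exact formulation is not given here, and both function names and the `similarity` matrix are illustrative assumptions:

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """Standard label smoothing: mix the one-hot target with the
    uniform distribution over classes."""
    one_hot = np.eye(num_classes)[y]
    return one_hot * (1.0 - eps) + eps / num_classes

def hierarchical_smooth_labels(y, similarity, eps=0.1):
    """Hypothetical variant: spread the smoothing mass according to a
    class-similarity matrix rather than uniformly, so semantically
    related classes receive more probability (illustrating the general
    idea in the abstract, not the paper's exact scheme)."""
    num_classes = similarity.shape[0]
    sim = similarity[y].astype(float)          # one similarity row per sample
    sim = sim / sim.sum(axis=1, keepdims=True) # normalize to a distribution
    one_hot = np.eye(num_classes)[y]
    return one_hot * (1.0 - eps) + eps * sim
```

Each returned row is a valid probability distribution; with an all-ones similarity matrix the hierarchical variant reduces to standard smoothing.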
