1 code implementation • 14 Apr 2024 • Gerhard Stenzel, Sebastian Zielinski, Michael Kölle, Philipp Altmann, Jonas Nüßlein, Thomas Gabor
To address the computational complexity associated with state-vector simulation for quantum circuits, we propose a combination of advanced techniques to accelerate circuit execution.
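The listing gives only this one-line summary, so the paper's specific acceleration techniques are not shown here. As background, a minimal sketch of the naive state-vector simulation that such techniques speed up: the state of n qubits is a vector of 2^n amplitudes, and applying a single-qubit gate is a tensor contraction along the target qubit's axis. Function and variable names below are illustrative, not the paper's.

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to qubit `target` of an n-qubit state vector.

    Reshaping the 2**n-amplitude vector into an n-dimensional tensor exposes
    the target qubit as one axis, so the gate acts as a 2x2 matrix product.
    """
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)  # restore the original axis order
    return state.reshape(-1)

# Hadamard on qubit 0 of a 3-qubit |000> state
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = apply_gate(state, H, target=0, n_qubits=n)
```

The memory cost (2^n complex amplitudes) and the per-gate contraction cost are what make naive simulation expensive for larger circuits.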
no code implementations • 4 Apr 2024 • Philipp Altmann, Céline Davignon, Maximilian Zorn, Fabian Ritz, Claudia Linnhoff-Popien, Thomas Gabor
To enhance the interpretability of Reinforcement Learning (RL), we propose Revealing Evolutionary Action Consequence Trajectories (REACT).
no code implementations • 13 Jan 2024 • Michael Kölle, Tom Schubert, Philipp Altmann, Maximilian Zorn, Jonas Stein, Claudia Linnhoff-Popien
With recent advancements in quantum computing technology, optimizing quantum circuits and ensuring reliable quantum state preparation have become increasingly vital.
no code implementations • 13 Jan 2024 • Michael Kölle, Mohamad Hgog, Fabian Ritz, Philipp Altmann, Maximilian Zorn, Jonas Stein, Claudia Linnhoff-Popien
In this work, we propose a novel quantum reinforcement learning approach that combines the Advantage Actor-Critic algorithm with variational quantum circuits by substituting parts of the classical components.
1 code implementation • 18 Dec 2023 • Philipp Altmann, Jonas Stein, Michael Kölle, Adelina Bärligea, Thomas Gabor, Thomy Phan, Sebastian Feld, Claudia Linnhoff-Popien
Quantum computing (QC) in the current NISQ era is still limited in size and precision.
no code implementations • 9 Dec 2023 • Jonas Stein, Navid Roshani, Maximilian Zorn, Philipp Altmann, Michael Kölle, Claudia Linnhoff-Popien
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
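The entry does not describe the paper's method, but the training of a PQC can be illustrated in miniature: a single-qubit circuit RY(θ) applied to |0⟩ has ⟨Z⟩ = cos θ, and its exact gradient can be obtained with the standard parameter-shift rule. This is a generic textbook sketch, not the authors' circuit design.

```python
import numpy as np

def expectation(theta):
    """<Z> after RY(theta) applied to |0>; analytically equals cos(theta)."""
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    psi = ry @ np.array([1.0, 0.0])
    z = np.diag([1.0, -1.0])
    return psi @ z @ psi

def parameter_shift_grad(theta):
    """Exact gradient via the parameter-shift rule: (f(t+pi/2) - f(t-pi/2)) / 2."""
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

# Gradient descent on <Z>: the minimum <Z> = -1 is reached at theta = pi.
theta = 0.3
for _ in range(200):
    theta -= 0.1 * parameter_shift_grad(theta)
```

The same shift rule extends to multi-parameter circuits, where each parameter is shifted independently; designing which gates and entanglers to include is exactly the ansatz-design problem the entry refers to.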
no code implementations • 27 Nov 2023 • Daniëlle Schuman, Leo Sünkel, Philipp Altmann, Jonas Stein, Christoph Roch, Thomas Gabor, Claudia Linnhoff-Popien
Quantum Transfer Learning (QTL) recently gained popularity as a hybrid quantum-classical approach for image classification tasks by efficiently combining the feature extraction capabilities of large Convolutional Neural Networks with the potential benefits of Quantum Machine Learning (QML).
no code implementations • 9 Nov 2023 • Michael Kölle, Felix Topp, Thomy Phan, Philipp Altmann, Jonas Nüßlein, Claudia Linnhoff-Popien
We show that our Variational Quantum Circuit approaches perform significantly better than a neural network with a comparable number of trainable parameters.
no code implementations • 9 Nov 2023 • Michael Kölle, Jonas Maurer, Philipp Altmann, Leo Sünkel, Jonas Stein, Claudia Linnhoff-Popien
We propose a novel hybrid architecture: instead of utilizing a pre-trained network for compression, we employ an autoencoder to derive a compressed version of the input data.
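The entry specifies only that an autoencoder compresses the input before the quantum part of the model. As a hedged sketch of the general idea (the dimensions, data, and linear architecture here are illustrative, not the paper's), a tiny linear autoencoder trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))          # toy data: 256 samples, 8 features
X = X - X.mean(axis=0)                 # center the data

d_in, d_lat = 8, 2                     # compress 8 features down to 2
W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))
W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))

lr = 0.05
for _ in range(500):
    Z = X @ W_enc                      # encode
    X_hat = Z @ W_dec                  # decode
    err = X_hat - X                    # reconstruction error
    # gradients of the mean squared reconstruction error
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

latent = X @ W_enc                     # compressed representation for the downstream model
```

The low-dimensional `latent` codes would then be fed into the quantum model in place of the raw input, which is the role the entry assigns to the autoencoder.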
1 code implementation • 26 Apr 2023 • Philipp Altmann, Fabian Ritz, Leonard Feuchtinger, Jonas Nüßlein, Claudia Linnhoff-Popien, Thomy Phan
Current state-of-the-art approaches for generalization apply data augmentation techniques to increase the diversity of training data.
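One common instance of such augmentation for pixel-based RL is a random shift: pad the observation and crop it back to its original size at a random offset. The sketch below illustrates this baseline technique in general (it is not taken from the paper); the padding size and helper name are illustrative.

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Pad an HxWxC observation and randomly crop it back to HxW.

    A widely used augmentation for increasing the diversity of
    pixel-based RL training data.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = obs.shape
    padded = np.pad(obs, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

obs = np.arange(84 * 84 * 3, dtype=np.float32).reshape(84, 84, 3)
aug = random_shift(obs, pad=4, rng=np.random.default_rng(0))
```

Each training batch sees a slightly different view of the same observation, which is how augmentation increases data diversity without collecting new experience.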
no code implementations • 18 Jan 2023 • Philipp Altmann, Thomy Phan, Fabian Ritz, Thomas Gabor, Claudia Linnhoff-Popien
We propose discriminative reward co-training (DIRECT) as an extension to deep reinforcement learning algorithms.
no code implementations • 18 Jan 2023 • Michael Kölle, Tim Matheis, Philipp Altmann, Kyrill Schmid
Enabling autonomous agents to act cooperatively is an important step toward integrating artificial intelligence into our daily lives.
1 code implementation • 6 Jan 2023 • Philipp Altmann, Leo Sünkel, Jonas Stein, Tobias Müller, Christoph Roch, Claudia Linnhoff-Popien
However, as high-dimensional real-world applications are not yet feasible to be solved using purely quantum hardware, hybrid methods using both classical and quantum machine learning paradigms have been proposed.
no code implementations • 10 Aug 2022 • Fabian Ritz, Thomy Phan, Andreas Sedlmeier, Philipp Altmann, Jan Wieghardt, Reiner Schmid, Horst Sauer, Cornel Klein, Claudia Linnhoff-Popien, Thomas Gabor
We define a comprehensive software development (SD) process model for ML that encompasses, in a consistent way, most tasks and artifacts described in the literature.
1 code implementation • NeurIPS 2021 • Thomy Phan, Fabian Ritz, Lenz Belzner, Philipp Altmann, Thomas Gabor, Claudia Linnhoff-Popien
We evaluate VAST in three multi-agent domains and show that VAST can significantly outperform state-of-the-art value function factorization (VFF) when the number of agents is sufficiently large.
no code implementations • 22 Sep 2021 • Tobias Müller, Christoph Roch, Kyrill Schmid, Philipp Altmann
Reinforcement learning has driven impressive advances in machine learning.
no code implementations • 8 Aug 2019 • Thomas Gabor, Philipp Altmann
The surrogate is used to recommend new items to the user, which are then evaluated according to the user's liking and subsequently removed from the search space.