1 code implementation • 6 Feb 2024 • David Peer, Philemon Schöpf, Volckmar Nebendahl, Alexander Rietzler, Sebastian Stabinger
However, evaluating GLLMs presents a challenge as the binary true or false evaluation used for discriminative models is not applicable to the predictions made by GLLMs.
2 code implementations • 1 Aug 2022 • David Peer, Bart Keulen, Sebastian Stabinger, Justus Piater, Antonio Rodríguez-Sánchez
We show empirically that we can therefore train a "vanilla" fully connected network and convolutional neural network -- no skip connections, batch normalization, dropout, or any other architectural tweak -- with 500 layers by simply adding the batch-entropy regularization term to the loss function.
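The batch-entropy regularization mentioned above can be illustrated with a minimal, framework-free sketch. Note this is an assumption-laden illustration, not the paper's exact formulation: the Gaussian differential-entropy estimate per neuron and the `alpha` weight are illustrative choices.

```python
import math

def batch_entropy(activations, eps=1e-8):
    # Differential entropy of a Gaussian fitted to one neuron's
    # activations across the batch: H = 0.5 * ln(2*pi*e*var).
    # (Illustrative estimator; the paper's definition may differ.)
    n = len(activations)
    mean = sum(activations) / n
    var = sum((a - mean) ** 2 for a in activations) / n
    return 0.5 * math.log(2 * math.pi * math.e * (var + eps))

def regularized_loss(task_loss, layer_activations, alpha=0.5):
    # Add a batch-entropy term to the task loss so that every layer
    # keeps information flowing through the batch.
    # layer_activations: list of layers, each a list of per-neuron
    # activation lists (one value per batch element).
    penalty = 0.0
    for layer in layer_activations:
        # average entropy over the neurons of this layer
        penalty += sum(batch_entropy(neuron) for neuron in layer) / len(layer)
    # One illustrative form: reward high batch entropy by subtracting
    # the (weighted) mean layer entropy from the loss.
    return task_loss - alpha * penalty / len(layer_activations)
```

A layer whose activations collapse to a constant across the batch gets a very low (negative) entropy, so the regularizer pushes training away from such information bottlenecks — the intuition behind training very deep "vanilla" networks.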
1 code implementation • 31 May 2021 • David Peer, Sebastian Stabinger, Stefan Engl, Antonio Rodriguez-Sanchez
Knowledge distillation maintains high performance and reaches high compression rates; nevertheless, the size of the student model is fixed after pre-training and cannot be changed individually for a given downstream task and use case to reach a desired performance/speedup ratio.
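The knowledge distillation referenced above is, in its classic form (Hinton et al.), a weighted sum of hard-label cross-entropy and a temperature-softened teacher term. The following pure-Python sketch shows that standard objective — not this paper's pruning method — with `temperature` and `alpha` as assumed hyperparameters:

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with optional temperature scaling.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    # Hard term: cross-entropy against the ground-truth label.
    hard = -math.log(softmax(student_logits)[label] + 1e-12)
    # Soft term: KL(teacher || student) on temperature-softened outputs.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = sum(pt * math.log((pt + 1e-12) / (ps + 1e-12))
               for pt, ps in zip(p_t, p_s))
    # T^2 rescales the soft-term gradients to match the hard term.
    return alpha * hard + (1 - alpha) * temperature ** 2 * soft
```

Because the teacher's architecture is independent of the student's, distillation compresses well, but the student's size must be chosen before this pre-training — the fixed-size limitation the abstract points out.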
1 code implementation • 7 Mar 2021 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez
In the worst-case scenario, we prove that such a layer could lead to a network that cannot be trained at all.
no code implementations • 23 Feb 2021 • Sebastian Stabinger, David Peer, Antonio Rodríguez-Sánchez
Convolutional neural networks have established themselves over the past years as the state-of-the-art method for image classification, and for many datasets, they even surpass humans in categorizing images.
1 code implementation • 5 Nov 2020 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez
In this paper, we introduce a novel theory and metric to identify layers that decrease the test accuracy of trained models; this identification can be done as early as the beginning of training.
no code implementations • 29 Jan 2020 • Sebastian Stabinger, David Peer, Justus Piater, Antonio Rodríguez-Sánchez
Convolutional Neural Networks (CNNs) have become the state-of-the-art method for image classification in the last ten years.
3 code implementations • LREC 2020 • Alexander Rietzler, Sebastian Stabinger, Paul Opitz, Stefan Engl
Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Based Sentiment Analysis (ABSA), which has many applications, e.g., in e-commerce, where data and insights from reviews can be leveraged to create value for businesses and customers.
Ranked #7 on Aspect-Based Sentiment Analysis (ABSA) on SemEval-2014 Task-4 (using extra training data)
no code implementations • 21 May 2019 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez
A recently proposed method in deep learning groups multiple neurons to capsules such that each capsule represents an object or part of an object.
no code implementations • 30 Mar 2019 • Gregor Ehrensperger, Sebastian Stabinger, Antonio Rodríguez Sánchez
Deep convolutional neural networks (CNNs) are widely known for their outstanding performance in classification and regression tasks over high-dimensional data.
1 code implementation • 23 Dec 2018 • David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez
In this paper we introduce a new inductive bias for capsule networks and call networks that use this prior $\gamma$-capsule networks.
no code implementations • 6 Dec 2017 • Sebastian Stabinger, Antonio Rodriguez-Sanchez
Over the last couple of years, deep learning and especially convolutional neural networks have become one of the workhorses of computer vision.
no code implementations • 25 Aug 2017 • Sebastian Stabinger, Antonio Rodriguez-Sanchez
Convolutional Neural Networks have become state-of-the-art methods for image classification over the last couple of years.
no code implementations • 28 Jul 2016 • Sebastian Stabinger, Antonio Rodríguez-Sánchez, Justus Piater
We try to determine the progress made by convolutional neural networks over the past 25 years in classifying images into abstract classes.
no code implementations • 17 Jun 2016 • Sebastian Stabinger, Antonio Rodriguez-Sanchez, Justus Piater
Humans are generally good at learning abstract concepts about objects and scenes (e.g., spatial orientation, relative sizes, etc.).