Search Results for author: Gunar Schirner

Found 8 papers, 0 papers with code

Enhancing Automatic Modulation Recognition for IoT Applications Using Transformers

no code implementations · 8 Mar 2024 · Narges Rashvand, Kenneth Witham, Gabriel Maldonado, Vinit Katariya, Nishanth Marer Prabhu, Gunar Schirner, Hamed Tabkhi

Automatic modulation recognition (AMR) is vital for accurately identifying modulation types within incoming signals, a critical task for optimizing operations within edge devices in IoT ecosystems.

Automatic Modulation Recognition · Edge-computing

Multistatic-Radar RCS-Signature Recognition of Aerial Vehicles: A Bayesian Fusion Approach

no code implementations · 28 Feb 2024 · Michael Potter, Murat Akcakaya, Marius Necsoiu, Gunar Schirner, Deniz Erdogmus, Tales Imbiriba

To address this, we propose a fully Bayesian RATR framework employing Optimal Bayesian Fusion (OBF) to aggregate classification probability vectors from multiple radars.
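Aggregating per-radar classification probability vectors can be illustrated with a minimal sketch. This assumes conditional independence of the radar observations given the class and a simple product-rule fusion; the paper's Optimal Bayesian Fusion (OBF) formulation may differ in detail.

```python
import numpy as np

def fuse_posteriors(posteriors, prior):
    """Fuse per-radar class-probability vectors under a naive
    conditional-independence assumption (illustrative sketch only;
    not the paper's exact OBF rule).

    posteriors: array of shape (K radars, C classes), each row a posterior.
    prior: length-C class prior.
    """
    posteriors = np.asarray(posteriors, dtype=float)
    K = posteriors.shape[0]
    # p(c | all radars) ∝ prior(c)^(1-K) * Π_k p_k(c | radar k)
    fused = prior ** (1 - K) * np.prod(posteriors, axis=0)
    return fused / fused.sum()

# Hypothetical example: two radars, three target classes, uniform prior.
p1 = np.array([0.6, 0.3, 0.1])
p2 = np.array([0.5, 0.4, 0.1])
prior = np.ones(3) / 3
print(fuse_posteriors([p1, p2], prior))
```

Because both radars favor the first class, the fused posterior concentrates more sharply on it than either individual radar does.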

Classification

Inference of Upcoming Human Grasp Using EMG During Reach-to-Grasp Movement

no code implementations · 19 Apr 2021 · Mo Han, Mehrshad Zandigohar, Sezen Yagmur Gunay, Gunar Schirner, Deniz Erdogmus

We collected and utilized data from large gesture vocabularies with multiple dynamic actions to encode the transitions from one grasp intent to another based on common sequences of the grasp movements.

Electromyography (EMG) · General Classification · +2

Multimodal Fusion of EMG and Vision for Human Grasp Intent Inference in Prosthetic Hand Control

no code implementations · 8 Apr 2021 · Mehrshad Zandigohar, Mo Han, Mohammadreza Sharif, Sezen Yagmur Gunay, Mariusz P. Furmanek, Mathew Yarossi, Paolo Bonato, Cagdas Onal, Taskin Padir, Deniz Erdogmus, Gunar Schirner

Conclusion: Our experimental analyses demonstrate that EMG and visual evidence have complementary strengths; as a consequence, fusing the multimodal evidence can outperform each individual evidence modality at any given time.
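Combining complementary evidence modalities can be sketched as a log-linear (weighted product) fusion of the two probability vectors. The weight `w` here is a hypothetical confidence parameter for illustration, not a quantity from the paper.

```python
import numpy as np

def fuse_modalities(p_emg, p_vis, w=0.5):
    """Illustrative log-linear fusion of EMG- and vision-based
    grasp-intent probability vectors (a sketch, not the paper's method).

    w in [0, 1] weights the EMG evidence; 1 - w weights the vision evidence.
    """
    log_p = w * np.log(p_emg) + (1 - w) * np.log(p_vis)
    p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
    return p / p.sum()

# Hypothetical grasp-intent posteriors over three grasp types.
p_emg = np.array([0.7, 0.2, 0.1])
p_vis = np.array([0.5, 0.3, 0.2])
print(fuse_modalities(p_emg, p_vis, w=0.5))
```

With `w=1.0` the fused result reduces to the EMG posterior alone, and with `w=0.0` to the vision posterior, so the individual modalities are recovered as special cases.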

Electroencephalogram (EEG) · Electromyography (EMG)

From Hand-Perspective Visual Information to Grasp Type Probabilities: Deep Learning via Ranking Labels

no code implementations · 8 Mar 2021 · Mo Han, Sezen Yağmur Günay, İlkay Yıldız, Paolo Bonato, Cagdas D. Onal, Taşkın Padır, Gunar Schirner, Deniz Erdoğmuş

Convolutional neural network-based computer vision control of prosthetic hands has received increasing attention as a method to replace or complement physiological signals, since models trained on visual information can reliably predict the intended hand gesture.

HANDS: A Multimodal Dataset for Modeling Towards Human Grasp Intent Inference in Prosthetic Hands

no code implementations · 8 Mar 2021 · Mo Han, Sezen Yağmur Günay, Gunar Schirner, Taşkın Padır, Deniz Erdoğmuş

Specifically, paired images from the human eye-view and hand-view of various objects placed at different orientations were captured at the initial state of each grasping trial, followed by paired video, EMG, and IMU data from the participant's arm during a grasp, lift, put-down, and retract trial structure.

Motion Planning

NetCut: Real-Time DNN Inference Using Layer Removal

no code implementations · 13 Jan 2021 · Mehrshad Zandigohar, Deniz Erdogmus, Gunar Schirner

Deep Learning plays a significant role in assisting humans in many aspects of their lives.

Transfer Learning
