no code implementations • 3 Sep 2023 • Mohamed Akrout
Existing large language models (LLMs) are known for generating "hallucinated" content, i.e., fabricated text of plausible-looking yet unfounded facts.
no code implementations • 13 Mar 2023 • Mohamed Akrout, Amal Feriani, Faouzi Bellili, Amine Mezghani, Ekram Hossain
Data-driven machine learning (ML) is promoted as a potential technology for next-generation wireless systems.
no code implementations • 12 Jan 2023 • Mohamed Akrout, Bálint Gyepesi, Péter Holló, Adrienn Poór, Blága Kincső, Stephen Solis, Katrina Cirone, Jeremy Kawahara, Dekker Slade, Latif Abid, Máté Kovács, István Fazekas
Similar to recent applications of generative models, our study suggests that diffusion models are indeed effective in generating high-quality skin images that do not sacrifice classifier performance, and can improve the augmentation of training datasets after curation.
no code implementations • 26 Mar 2022 • Mohamed Akrout, Amal Feriani, Bob McLeod
We study the benefits of reinforcement learning (RL) environments based on agent-based models (ABMs).
no code implementations • 16 Nov 2021 • Mohamed Akrout, Douglas Tweed
Why does the Adam optimizer work so well in deep-learning applications?
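For context, the standard Adam update combines a momentum estimate with a per-parameter RMS-normalized step; a minimal sketch on a hypothetical toy quadratic (not the paper's analysis):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), with bias correction for the zero init."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy objective f(x) = x^2, gradient 2x, starting from x = 5
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    grad = 2.0 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
```

The normalized step keeps the update magnitude near `lr` regardless of gradient scale, which is one ingredient behind Adam's robustness across problems.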
no code implementations • 30 Oct 2021 • Mohamed Akrout, Faouzi Bellili, Amine Mezghani, Hayet Amdouni
Symptom checkers have been widely adopted as an intelligent e-healthcare application during the ongoing pandemic crisis.
1 code implementation • 30 Aug 2021 • Albert Jiménez Sanfiz, Mohamed Akrout
While interest in the field is growing, there is a need for open-source libraries and toolkits to foster research and benchmark algorithms.
no code implementations • 26 Jun 2020 • Firas Fredj, Yasser Al-Eryani, Setareh Maghsudi, Mohamed Akrout, Ekram Hossain
First, we propose a fully centralized beamforming method that uses the deep deterministic policy gradient (DDPG) algorithm with a continuous action space.
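The deterministic policy gradient at the heart of DDPG updates the actor by chaining the critic's action gradient through the policy. A minimal sketch on a hypothetical one-dimensional problem with a known critic (not the paper's beamforming setup, and without replay buffers or target networks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical critic Q(s, a) = -(a - s)^2: the best action equals the state.
def dQ_da(s, a):
    return -2.0 * (a - s)

w = 0.0                              # linear actor a = w * s; optimum is w = 1
lr = 0.05
for _ in range(200):
    s = rng.normal(size=32)          # batch of states
    a = w * s                        # deterministic (continuous) action
    # Deterministic policy gradient: dJ/dw = E[dQ/da * da/dw], with da/dw = s
    grad_w = np.mean(dQ_da(s, a) * s)
    w += lr * grad_w                 # gradient *ascent* on expected return
```

In full DDPG the critic is itself a learned network trained by temporal-difference updates; this sketch only isolates the actor's chain-rule update for a continuous action.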
no code implementations • 29 Jan 2020 • Yasser Al-Eryani, Mohamed Akrout, Ekram Hossain
To significantly reduce the complexity of jointly processing users' signals in the presence of a large number of devices and APs, we propose a novel dynamic cell-free network architecture.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Mohamed Akrout
Neural networks trained with backpropagation, the standard algorithm of deep learning which uses weight transport, are easily fooled by existing gradient-based adversarial attacks.
3 code implementations • NeurIPS 2019 • Mohamed Akrout, Collin Wilson, Peter C. Humphreys, Timothy Lillicrap, Douglas Tweed
Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, where forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically.
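One well-known way to avoid weight transport is feedback alignment, where the backward pass uses a fixed random matrix instead of the transposed forward weights. A minimal sketch on a hypothetical toy regression task (illustrating the idea, not the paper's specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: the backward pass uses a fixed random matrix B
# instead of W2.T, so no forward weights are "transported" backward.
W1 = rng.normal(0, 0.5, (20, 4))
W2 = rng.normal(0, 0.5, (1, 20))
B = rng.normal(0, 0.5, (20, 1))      # fixed random feedback weights

X = rng.normal(size=(4, 64))
y = np.sum(X, axis=0, keepdims=True)  # toy target: sum of the inputs

def forward(X):
    h = np.tanh(W1 @ X)
    return h, W2 @ h

_, out0 = forward(X)
loss0 = np.mean((out0 - y) ** 2)

lr = 0.01
for _ in range(500):
    h, out = forward(X)
    e = out - y                       # output error
    dW2 = e @ h.T / X.shape[1]
    dh = (B @ e) * (1 - h ** 2)       # random feedback, not W2.T @ e
    dW1 = dh @ X.T / X.shape[1]
    W2 -= lr * dW2
    W1 -= lr * dW1

_, out1 = forward(X)
loss1 = np.mean((out1 - y) ** 2)
```

Empirically, the forward weights tend to align with the fixed feedback weights during training, which is why learning still succeeds without transporting `W2` to the backward path.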
no code implementations • 8 Mar 2019 • Mohamed Akrout, Amir-Massoud Farahmand, Tory Jarmain, Latif Abid
Moreover, accuracy improves by up to 10% compared to the approach that uses the visual information provided by the CNN along with a conventional decision-tree-based QA system.
no code implementations • 3 Mar 2019 • Ismail Akrout, Amal Feriani, Mohamed Akrout
We present a Reinforcement Learning (RL) methodology to bypass Google reCAPTCHA v3.
no code implementations • 15 Nov 2018 • Mohamed Akrout, Amir-Massoud Farahmand, Tory Jarmain
We present a skin condition classification methodology based on a sequential pipeline of a pre-trained Convolutional Neural Network (CNN) and a Question Answering (QA) model.
no code implementations • 16 Mar 2018 • Hongyu Zhu, Mohamed Akrout, Bojian Zheng, Andrew Pelegris, Amar Phanishayee, Bianca Schroeder, Gennady Pekhimenko
Our primary goal in this work is to break this myopic view by (i) proposing a new benchmark for DNN training, called TBD (short for Training Benchmark for DNNs), that uses a representative set of DNN models covering a wide range of machine learning applications: image classification, machine translation, speech recognition, object detection, adversarial networks, and reinforcement learning; and (ii) performing an extensive performance analysis of training these different applications on three major deep learning frameworks (TensorFlow, MXNet, CNTK) across different hardware configurations (single-GPU, multi-GPU, and multi-machine).