Search Results for author: Olivier Sentieys

Found 4 papers, 0 papers with code

AdaQAT: Adaptive Bit-Width Quantization-Aware Training

no code implementations22 Apr 2024 Cédric Gernigon, Silviu-Ioan Filip, Olivier Sentieys, Clément Coggiola, Mickael Bruno

Compared to other methods that are generally designed to be run on a pretrained network, AdaQAT works well in both training from scratch and fine-tuning scenarios. Initial results on the CIFAR-10 and ImageNet datasets using ResNet20 and ResNet18 models, respectively, indicate that our method is competitive with other state-of-the-art mixed-precision quantization approaches.

Quantization
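AdaQAT's actual bit-width update rule is not reproduced here, but the quantize–dequantize ("fake quantization") forward pass at the heart of any quantization-aware training loop can be sketched as follows. This is an illustrative uniform quantizer, not the paper's implementation; the value range [-1, 1] is an assumption.

```python
import numpy as np

def fake_quantize(x, num_bits, x_min=-1.0, x_max=1.0):
    """Uniformly quantize x to 2**num_bits levels, then dequantize.

    In quantization-aware training, weights/activations pass through a
    quantize-dequantize step in the forward pass so the network learns
    to tolerate the rounding error of the target bit-width. An adaptive
    scheme like AdaQAT would adjust num_bits during training.
    """
    levels = 2 ** num_bits - 1
    scale = (x_max - x_min) / levels
    q = np.round((np.clip(x, x_min, x_max) - x_min) / scale)
    return q * scale + x_min

w = np.array([-0.7, -0.2, 0.05, 0.4, 0.9])
w_lo = fake_quantize(w, num_bits=2)   # 2-bit: only 4 representable values
w_hi = fake_quantize(w, num_bits=8)   # 8-bit: close to the original values
```

A straight-through estimator would typically be used in the backward pass so gradients flow through the non-differentiable rounding.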

When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence

no code implementations23 Nov 2023 Benoit Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid

Our method combines hardware and software attacks: a side-channel attack exploits electromagnetic leakage to extract the logits for a given input, allowing an attacker to estimate gradients and craft state-of-the-art adversarial examples that fool the targeted neural network.

Adversarial Attack · object-detection +1
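The paper's contribution is recovering logits through electromagnetic leakage; what an attacker can then do with query-only access can be illustrated with a generic finite-difference gradient estimate followed by an FGSM-style step. This is a hedged sketch, not the authors' method; `score_fn` stands in for the recovered score of the target class.

```python
import numpy as np

def estimate_gradient(score_fn, x, eps=1e-3):
    """Estimate d(score)/dx by central finite differences.

    With only side-channel-recovered logits, the attacker has black-box
    query access to score_fn and cannot backpropagate, so gradients must
    be estimated numerically, one coordinate at a time.
    """
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        grad.flat[i] = (score_fn(x + d) - score_fn(x - d)) / (2 * eps)
    return grad

# Toy stand-in "model": a linear score, so the true gradient is w.
w = np.array([0.5, -1.0, 2.0])
score = lambda x: float(w @ x)

x0 = np.array([0.1, 0.2, 0.3])
g = estimate_gradient(score, x0)
x_adv = x0 + 0.05 * np.sign(g)   # one FGSM-style step raises the score
```

Real attacks (e.g. NES-style estimators) use far fewer queries per step; the per-coordinate loop above is only the simplest possible illustration.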

Low-Precision Floating-Point for Efficient On-Board Deep Neural Network Processing

no code implementations18 Nov 2023 Cédric Gernigon, Silviu-Ioan Filip, Olivier Sentieys, Clément Coggiola, Mickaël Bruno

Using a Thin U-Net 32 model, only a 0.3% accuracy degradation is observed with 6-bit minifloat quantization (a 6-bit equivalent integer-based approach leads to a 0.5% degradation).

Earth Observation · Quantization +1
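To illustrate what minifloat quantization means, here is a sketch of rounding to a small 1-sign/3-exponent/2-mantissa (6-bit) format. The exponent/mantissa split and the omission of special values are assumptions for illustration; the paper's actual format may differ.

```python
import math

def to_minifloat(x, exp_bits=3, man_bits=2):
    """Round x to the nearest value in a tiny float format:
    1 sign bit + exp_bits exponent bits + man_bits mantissa bits.
    Overflow saturates to the largest normal value; no Inf/NaN handling.
    """
    if x == 0.0:
        return 0.0
    bias = 2 ** (exp_bits - 1) - 1
    e_min, e_max = 1 - bias, (2 ** exp_bits - 2) - bias
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # Clamp the exponent; below e_min this naturally yields subnormal steps.
    e = max(e_min, min(e_max, math.floor(math.log2(mag))))
    step = 2.0 ** (e - man_bits)          # spacing of representable values
    q = round(mag / step) * step
    q = min(q, (2.0 - 2.0 ** -man_bits) * 2.0 ** e_max)  # saturate
    return sign * q

x = 0.3
x_q = to_minifloat(x)   # nearest representable 6-bit minifloat value
```

With these parameters the largest representable magnitude is 14.0, so the narrow dynamic range, not just the rounding step, drives the accuracy trade-off the abstract reports.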

Customizing Number Representation and Precision

no code implementations8 Dec 2022 Olivier Sentieys, Daniel Menard

There is growing interest in reduced-precision arithmetic, driven by the recent surge of activity in artificial intelligence, and in deep learning in particular.
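Customizing the number representation in practice often means replacing 32-bit floats with a fixed-point format of chosen word length and fractional length. A minimal two's-complement fixed-point quantizer (an illustrative sketch, not taken from the paper) looks like:

```python
def quantize_fixed(x, word_len, frac_len):
    """Round x to a two's-complement fixed-point format:
    word_len total bits, frac_len of them fractional.
    Values outside the representable range saturate.
    """
    scale = 2 ** frac_len
    lo, hi = -(2 ** (word_len - 1)), 2 ** (word_len - 1) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

y = quantize_fixed(0.3, word_len=8, frac_len=6)   # nearest multiple of 1/64
z = quantize_fixed(3.7, word_len=8, frac_len=6)   # saturates to the max value
```

Choosing `word_len` and `frac_len` per variable is exactly the kind of precision/range trade-off that word-length optimization tools automate.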
