no code implementations • 14 Aug 2023 • Justin Veiner, Fady Alajaji, Bahman Gharesifard
A unifying $\alpha$-parametrized generator loss function is introduced for a dual-objective generative adversarial network (GAN) that uses a canonical (or classical) discriminator loss function, such as the one in the original GAN (VanillaGAN) system.
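The $\alpha$-parametrized loss appears to build on the $\alpha$-loss family from the classification literature; one common form, assumed here for illustration (the paper's exact parametrization may differ), is

```latex
% alpha-loss of order alpha (hedged sketch, not necessarily the paper's exact form):
\ell_\alpha(\hat{y}) = \frac{\alpha}{\alpha - 1}
  \left(1 - \hat{y}^{\frac{\alpha - 1}{\alpha}}\right),
  \qquad \alpha > 0,\ \alpha \neq 1,
```

which recovers the log-loss $-\log \hat{y}$ underlying the VanillaGAN objective in the limit $\alpha \to 1$, and the linear loss $1 - \hat{y}$ as $\alpha \to \infty$.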
no code implementations • 15 Feb 2023 • William Paul, Philip Mathew, Fady Alajaji, Philippe Burlina
This paper investigates the degree and magnitude of the tradeoffs between utility, fairness, and attribute privacy in computer vision.
no code implementations • 28 Jun 2022 • Ferenc Cole Thierrin, Fady Alajaji, Tamás Linder
The Rényi cross-entropy measure between two distributions, a generalization of the Shannon cross-entropy, was recently used as a loss function for the improved design of deep learning generative adversarial networks.
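For concreteness, a minimal numerical sketch of one common definition of the Rényi cross-entropy (assumed here; conventions in the literature vary, e.g. in the base of the logarithm):

```python
import numpy as np

def renyi_cross_entropy(p, q, alpha):
    """Renyi cross-entropy of order alpha, in nats.

    Assumed definition (conventions vary):
        H_alpha(P; Q) = (1 / (1 - alpha)) * log( sum_x p(x) * q(x)**(alpha - 1) ),
    which recovers the Shannon cross-entropy -sum_x p(x) log q(x) as alpha -> 1.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(q))  # Shannon cross-entropy limit
    return np.log(np.sum(p * q ** (alpha - 1.0))) / (1.0 - alpha)
```

For $P = Q$ uniform on two points, every order $\alpha$ yields $\log 2$, matching the Shannon case.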
1 code implementation • 20 Jun 2022 • Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
Designing machine learning algorithms that are accurate yet fair, not discriminating based on any sensitive attribute, is of paramount importance for society to accept AI for critical applications.
no code implementations • 9 Mar 2022 • Adam Gronowski, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
We develop a novel method for ensuring fairness in machine learning, which we term the Rényi Fair Information Bottleneck (RFIB).
no code implementations • 29 Jan 2021 • Jian-Jia Weng, Fady Alajaji, Tamás Linder
This paper considers an information bottleneck problem with the objective of obtaining a maximally informative representation of a hidden feature subject to a Rényi entropy complexity constraint.
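A minimal sketch of the Rényi entropy appearing in the complexity constraint (standard definition; the paper may use a different base of logarithm):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha, in nats:
        H_alpha(X) = (1 / (1 - alpha)) * log( sum_x p(x)**alpha ),
    recovering the Shannon entropy -sum_x p(x) log p(x) as alpha -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop zero-probability outcomes
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))  # Shannon entropy limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)
```

For a uniform distribution on $n$ points, every order $\alpha$ gives $\log n$, the same value as the Shannon entropy.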
Information Theory
no code implementations • 11 Dec 2020 • William Paul, Armin Hadzic, Neil Joshi, Fady Alajaji, Phil Burlina
Our experiments also demonstrate the ability of these novel metrics in assessing the Pareto efficiency of the proposed methods.
no code implementations • 3 Jun 2020 • Himesh Bhatia, William Paul, Fady Alajaji, Bahman Gharesifard, Philippe Burlina
Another novel GAN generator loss function is next proposed in terms of Rényi cross-entropy functionals with order $\alpha >0$, $\alpha\neq 1$.
no code implementations • 25 Feb 2020 • William Paul, I-Jeng Wang, Fady Alajaji, Philippe Burlina
Our work focuses on unsupervised and generative methods that address the following goals: (a) learning unsupervised generative representations that discover latent factors controlling image semantic attributes, (b) studying how this ability to control attributes formally relates to the issue of latent factor disentanglement, clarifying related but dissimilar concepts that had been confounded in the past, and (c) developing anomaly detection methods that leverage representations learned in (a).
no code implementations • 2 Apr 2019 • Jian-Jia Weng, Fady Alajaji, Tamás Linder
In this report, we generalize Shannon's push-to-talk two-way channel (PTT-TWC) by allowing reliable full-duplex transmission as well as noisy reception in the half-duplex (PTT) mode.
Information Theory
no code implementations • 7 Nov 2015 • Shahab Asoodeh, Mario Diaz, Fady Alajaji, Tamás Linder
To this end, the so-called rate-privacy function is introduced to quantify the maximal amount of information (measured in terms of mutual information) that can be extracted from $Y$ under a privacy constraint between $X$ and the extracted information, where privacy is measured using either mutual information or maximal correlation.
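One common formalization of such a quantity, sketched here under the mutual-information privacy measure (an assumption; the paper's exact definition and constraint set may differ), is

```latex
% Rate-privacy function (hedged sketch): maximize the utility I(Y;U) over
% representations U satisfying the Markov chain X -> Y -> U, subject to
% leaking at most epsilon (in mutual information) about the private X:
g_\varepsilon(X; Y) = \sup_{\substack{U:\ X \to Y \to U \\ I(X; U) \le \varepsilon}} I(Y; U).
```

Setting $\varepsilon = 0$ corresponds to perfect privacy: the extracted information $U$ must then be statistically independent of $X$.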