no code implementations • 21 Jul 2023 • Toan N. Nguyen, Phuong Ha Nguyen, Lam M. Nguyen, Marten van Dijk
In this paper, we propose a new adaptive layerwise clipping (ALC) method and provide rigorous differential privacy (DP) proofs for both batch clipping (BC) and ALC.
no code implementations • 8 Mar 2023 • Marten van Dijk, Phuong Ha Nguyen
In federated learning, collaborative learning takes place among a set of clients who each want to remain in control of how their local training data is used; in particular, how can each client's local training data remain private?
no code implementations • 12 Dec 2022 • Marten van Dijk, Phuong Ha Nguyen, Toan N. Nguyen, Lam M. Nguyen
Classical differentially private SGD (DP-SGD) implements individual clipping with random subsampling, which forces a mini-batch SGD approach.
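A minimal sketch of the individual-clipping step described above, assuming a generic per-example gradient function `grad_fn(w, example)` and numpy; this illustrates classical DP-SGD, not the paper's proposed variant.

```python
import numpy as np

def dp_sgd_step(w, data, grad_fn, lr=0.1, batch_size=32, C=1.0, sigma=2.0, rng=None):
    """One classical DP-SGD step: random subsampling, per-example (individual)
    clipping to norm C, then Gaussian noise calibrated to the clipping bound."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(data), size=batch_size, replace=False)  # random subsampling
    clipped_sum = np.zeros_like(w)
    for i in idx:
        g = grad_fn(w, data[i])                              # per-example gradient
        g = g * min(1.0, C / (np.linalg.norm(g) + 1e-12))    # individual clipping
        clipped_sum += g
    noise = rng.normal(0.0, sigma * C, size=w.shape)         # Gaussian mechanism
    return w - lr * (clipped_sum + noise) / batch_size
```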
no code implementations • 17 Feb 2021 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Phuong Ha Nguyen
Generally, DP-SGD is $(\epsilon\leq 1/2,\delta=1/N)$-DP if $\sigma=\sqrt{2(\epsilon +\ln(1/\delta))/\epsilon}$ with $T$ at least $\approx 2k^2/\epsilon$ and $(2/e)^2k^2-1/2\geq \ln(N)$, where $T$ is the total number of rounds and $K=kN$ is the total number of gradient computations, with $k$ measuring $K$ in epochs of size $N$, the size of the local data set.
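As a quick numeric check of the stated calibration, a small script that evaluates $\sigma$, the minimum $T$, and the epoch condition; the concrete values of $\epsilon$, $N$, and $k$ are illustrative, not from the paper.

```python
import math

def dp_sgd_parameters(eps, N, k):
    """Evaluate the stated DP-SGD conditions: the noise multiplier sigma,
    the minimum number of rounds T ~ 2k^2/eps, and the epoch condition
    (2/e)^2 * k^2 - 1/2 >= ln(N), with delta = 1/N."""
    delta = 1.0 / N
    sigma = math.sqrt(2 * (eps + math.log(1 / delta)) / eps)
    T_min = 2 * k**2 / eps
    epoch_ok = (2 / math.e) ** 2 * k**2 - 0.5 >= math.log(N)
    return sigma, T_min, epoch_ok

# Illustrative values: eps = 0.5 (<= 1/2), N = 50,000 examples, k = 5 epochs
print(dp_sgd_parameters(0.5, 50_000, 5))
```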
no code implementations • 1 Jan 2021 • Kaleel Mahmood, Phuong Ha Nguyen, Lam M. Nguyen, Thanh V Nguyen, Marten van Dijk
Based on our study of these defenses, we develop three contributions.
no code implementations • 27 Oct 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
We consider big data analysis where training data is distributed among local data sets in a heterogeneous way, and we wish to move SGD computations to the local compute nodes where the local data resides.
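A minimal local-SGD sketch in this spirit, where each compute node runs SGD on its own (heterogeneous) data and only models are exchanged; the plain model averaging shown here is an assumption for illustration, not necessarily the paper's protocol.

```python
import numpy as np

def local_sgd(w0, local_datasets, grad_fn, lr=0.05, local_steps=10, rounds=20, rng=None):
    """Each round, every node runs SGD on its own local data set,
    then the resulting models are averaged; raw data never moves."""
    rng = rng or np.random.default_rng()
    w = np.array(w0, dtype=float)
    for _ in range(rounds):
        models = []
        for data in local_datasets:                 # one model copy per compute node
            w_local = w.copy()
            for _ in range(local_steps):
                x = data[rng.integers(len(data))]   # sample from local data only
                w_local -= lr * grad_fn(w_local, x)
            models.append(w_local)
        w = np.mean(models, axis=0)                 # aggregate models, not data
    return w
```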
no code implementations • 17 Jul 2020 • Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
The feasibility of federated learning is highly constrained by the server-clients infrastructure in terms of network communication.
1 code implementation • 18 Jun 2020 • Kaleel Mahmood, Deniz Gurevin, Marten van Dijk, Phuong Ha Nguyen
We provide this large scale study and analyses to motivate the field to move towards the development of more robust black-box defenses.
1 code implementation • 1 Mar 2020 • Nhan H. Pham, Lam M. Nguyen, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk, Quoc Tran-Dinh
We propose a novel hybrid stochastic policy gradient estimator by combining an unbiased policy gradient estimator, the REINFORCE estimator, with another biased one, an adapted SARAH estimator, for policy optimization.
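A toy version of the hybrid idea on a generic stochastic gradient oracle `stoch_grad(w, seed)`, assumed to return an unbiased gradient estimate at `w` from randomness derived from `seed` (so reusing a seed reuses the sample, which the recursive difference needs); the REINFORCE-specific machinery such as importance weighting is omitted.

```python
import numpy as np

def hybrid_estimator_loop(w0, stoch_grad, lr=0.01, beta=0.5, iters=1000, rng=None):
    """Hybrid estimator: a convex combination of a fresh unbiased estimate
    (the REINFORCE role) and a SARAH-style recursive difference evaluated
    on the SAME sample at the current and previous iterates."""
    rng = rng or np.random.default_rng()
    w_prev = np.array(w0, dtype=float)
    v = stoch_grad(w_prev, int(rng.integers(1 << 30)))   # unbiased initialization
    w = w_prev - lr * v
    for _ in range(iters):
        s_fresh, s_diff = (int(s) for s in rng.integers(1 << 30, size=2))
        unbiased = stoch_grad(w, s_fresh)                          # unbiased term
        diff = stoch_grad(w, s_diff) - stoch_grad(w_prev, s_diff)  # same-sample difference
        v = beta * unbiased + (1.0 - beta) * (v + diff)            # hybrid combination
        w_prev, w = w, w - lr * v
    return w
```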
no code implementations • 19 Feb 2020 • Lam M. Nguyen, Quoc Tran-Dinh, Dzung T. Phan, Phuong Ha Nguyen, Marten van Dijk
We also study uniformly randomized shuffling variants with different learning rates and model assumptions.
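A sketch of one such shuffling variant: random reshuffling with a diminishing per-epoch learning rate; the specific schedule is an illustrative choice, not the paper's.

```python
import numpy as np

def shuffling_sgd(w0, data, grad_fn, lr0=0.5, epochs=20, rng=None):
    """Shuffling-type SGD: each epoch visits every example exactly once in a
    freshly drawn, uniformly random order (random reshuffling)."""
    rng = rng or np.random.default_rng()
    w = np.array(w0, dtype=float)
    n = len(data)
    for epoch in range(epochs):
        lr = lr0 / (epoch + 1)             # one common diminishing schedule
        for i in rng.permutation(n):       # uniformly random reshuffle
            w -= lr * grad_fn(w, data[i])
    return w
```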
no code implementations • 3 Oct 2019 • Kaleel Mahmood, Phuong Ha Nguyen, Lam M. Nguyen, Thanh Nguyen, Marten van Dijk
We argue that our defense based on buffer zones offers significant improvements over state-of-the-art defenses.
no code implementations • 22 Jan 2019 • Lam M. Nguyen, Marten van Dijk, Dzung T. Phan, Phuong Ha Nguyen, Tsui-Wei Weng, Jayant R. Kalagnanam
The total complexity (measured as the total number of gradient computations) of a stochastic first-order optimization algorithm that finds a first-order stationary point of a finite-sum smooth nonconvex objective function $F(w)=\frac{1}{n} \sum_{i=1}^n f_i(w)$ has been proven to be at least $\Omega(\sqrt{n}/\epsilon)$ for $n \leq \mathcal{O}(\epsilon^{-2})$, where $\epsilon$ denotes the attained accuracy $\mathbb{E}[ \|\nabla F(\tilde{w})\|^2] \leq \epsilon$ of the output approximation $\tilde{w}$ (Fang et al., 2018).
no code implementations • 22 Jan 2019 • Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan, Jayant R. Kalagnanam, Marten van Dijk
This paper contains some inconsistent results; i.e., some of our claims fail because we made mistakes in applying the convergence test criterion for a series.
no code implementations • 10 Nov 2018 • Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk
We show the convergence of SGD for strongly convex objective functions without using the bounded gradient assumption, when $\{\eta_t\}$ is a diminishing sequence and $\sum_{t=0}^\infty \eta_t = \infty$.
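A minimal sketch of SGD under this step-size condition, using the illustrative schedule $\eta_t = \eta_0/(1+\mu t)$, which is diminishing while $\sum_t \eta_t$ diverges:

```python
import numpy as np

def sgd_diminishing(w0, data, grad_fn, eta0=1.0, mu=0.1, iters=10_000, rng=None):
    """SGD with diminishing step sizes eta_t = eta0 / (1 + mu * t):
    eta_t -> 0 while the sum of the eta_t diverges, as in the excerpt."""
    rng = rng or np.random.default_rng()
    w = np.array(w0, dtype=float)
    for t in range(iters):
        eta_t = eta0 / (1.0 + mu * t)          # diminishing, divergent sum
        x = data[rng.integers(len(data))]      # uniform single-sample draw
        w -= eta_t * grad_fn(w, x)
    return w
```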
no code implementations • NeurIPS 2019 • Phuong Ha Nguyen, Lam M. Nguyen, Marten van Dijk
We study the convergence of Stochastic Gradient Descent (SGD) for strongly convex objective functions.
no code implementations • 9 Oct 2018 • Marten van Dijk, Lam M. Nguyen, Phuong Ha Nguyen, Dzung T. Phan
We study Stochastic Gradient Descent (SGD) with diminishing step sizes for convex objective functions.
no code implementations • ICML 2018 • Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč
In (Bottou et al., 2016), a new analysis of the convergence of SGD is performed under the assumption that stochastic gradients are bounded with respect to the true gradient norm.
no code implementations • 21 Mar 2017 • Raihan Sayeed Khan, Nadim Kanan, Chenglu Jin, Jake Scoggin, Nafisa Noor, Sadid Muneer, Faruk Dirisaglik, Phuong Ha Nguyen, Helena Silva, Marten van Dijk, Ali Gokirmak
Physical Obfuscated Keys (POKs) allow tamper-resistant storage of random keys based on physical disorder.