no code implementations • NeurIPS 2020 • Vitalii Aksenov, Dan Alistarh, Janne H. Korhonen
The ability to leverage large-scale hardware parallelism has been one of the key enablers of the accelerated recent progress in machine learning.
no code implementations • NeurIPS 2021 • Dan Alistarh, Janne H. Korhonen
We focus on the communication complexity of this problem: our main result provides the first fully unconditional bounds on the total number of bits that need to be sent and received by the $N$ machines to solve this problem under point-to-point communication, within a given error tolerance.
no code implementations • 28 Sep 2020 • Janne H. Korhonen, Dan Alistarh
Motivated by the interest in communication-efficient methods for distributed machine learning, we consider the communication complexity of minimising a sum of $d$-dimensional functions $\sum_{i = 1}^N f_i (x)$, where each function $f_i$ is held by one of the $N$ different machines.
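To make the problem setting concrete, here is a minimal sketch (hypothetical, not the paper's method or bounds) of the distributed setup: $N$ machines each hold one $f_i$, and a coordinator minimises $\sum_{i=1}^N f_i(x)$ by averaging the machines' local gradients each round. The function and machine names are illustrative; what the communication-complexity results measure is the number of bits exchanged in such rounds.

```python
import numpy as np

def distributed_gradient_descent(grads, x0, lr=0.1, steps=100):
    """Minimise sum_i f_i(x). Each entry of `grads` is machine i's
    gradient oracle for its private function f_i (hypothetical API)."""
    x = x0.copy()
    for _ in range(steps):
        # Each machine sends its local gradient to the coordinator,
        # which averages them; the bits sent in these exchanges are
        # the quantity that communication-complexity bounds govern.
        g = sum(g_i(x) for g_i in grads) / len(grads)
        x -= lr * g
    return x

# Toy instance: N = 3 machines, each holding f_i(x) = ||x - c_i||^2 / 2,
# so the minimiser of the sum is the mean of the centres c_i.
centres = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 2.0])]
grads = [(lambda x, c=c: x - c) for c in centres]
x_star = distributed_gradient_descent(grads, np.zeros(2))
```

In this toy instance the iterates contract toward the mean of the centres geometrically, so `x_star` is close to `np.mean(centres, axis=0)` after 100 steps.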
no code implementations • 13 May 2016 • James Cussens, Matti Järvisalo, Janne H. Korhonen, Mark Bartlett
The challenging task of learning structures of probabilistic graphical models is an important problem within modern AI research.
1 code implementation • NeurIPS 2015 • Janne H. Korhonen, Pekka Parviainen
Both learning and inference tasks on Bayesian networks are NP-hard in general.