Search Results for author: Artem M. Grachev

Found 2 papers, 0 papers with code

Compression of Recurrent Neural Networks for Efficient Language Modeling

no code implementations • 6 Feb 2019 • Artem M. Grachev, Dmitry I. Ignatov, Andrey V. Savchenko

We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling.

Language Modelling · Quantization
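The Quantization tag refers to a generic compression technique: storing network weights at reduced precision. Below is a minimal sketch of post-training dynamic quantization of an RNN language model using PyTorch's standard API; the model dimensions are made up for illustration, and this is not the specific pipeline the paper proposes.

```python
# Sketch: int8 dynamic quantization of a small RNN language model.
# Generic technique for illustration only, not the paper's pipeline.
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)          # (batch, seq, embed_dim)
        out, _ = self.lstm(x)           # (batch, seq, hidden_dim)
        return self.head(out)           # (batch, seq, vocab_size)

model = RNNLanguageModel().eval()

# Convert LSTM and Linear weights to int8; activations stay in float,
# so no calibration data is needed.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
```

Dynamic quantization is a reasonable fit for RNN language models because most of the memory footprint sits in the recurrent and softmax-projection weight matrices, which this pass shrinks roughly 4x relative to float32.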

Neural Networks Compression for Language Modeling

no code implementations • 20 Aug 2017 • Artem M. Grachev, Dmitry I. Ignatov, Andrey V. Savchenko

In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs).

Language Modelling · Quantization
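Another compression technique commonly considered for RNN language models is low-rank matrix factorization. The sketch below replaces a trained linear layer with two smaller factors via truncated SVD; the layer sizes and rank are hypothetical, and the paper's exact methods and settings may differ.

```python
# Sketch: low-rank factorization of a trained linear layer via
# truncated SVD. Illustrative of the general technique only.
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace an (out x in) weight with factors of total size
    rank * (in + out), keeping the bias on the second stage."""
    W = layer.weight.data                              # (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    # Keep the top-`rank` singular triplets, splitting sqrt(S)
    # between the two factors so B @ A approximates W.
    A = Vh[:rank, :] * S[:rank].sqrt().unsqueeze(1)    # (rank, in)
    B = U[:, :rank] * S[:rank].sqrt().unsqueeze(0)     # (out, rank)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features,
                       bias=layer.bias is not None)
    first.weight.data.copy_(A)
    second.weight.data.copy_(B)
    if layer.bias is not None:
        second.bias.data.copy_(layer.bias.data)
    return nn.Sequential(first, second)

# Example: compress a 512 -> 10000 softmax projection to rank 64,
# cutting its parameters from 5.12M to about 0.67M.
proj = nn.Linear(512, 10000)
compressed = factorize_linear(proj, rank=64)
```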
