Search Results for author: Salar Latifi

Found 1 paper, 0 papers with code

Efficient Sparsely Activated Transformers

no code implementations • 31 Aug 2022 • Salar Latifi, Saurav Muralidharan, Michael Garland

Transformer-based neural networks have achieved state-of-the-art task performance in a number of machine learning domains, including natural language processing and computer vision.

Language Modelling
