Search Results for author: Javier S. Turek

Found 12 papers, 2 papers with code

Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay

no code implementations12 May 2021 Hsiang-Yun Sherry Chien, Javier S. Turek, Nicole Beckage, Vy A. Vo, Christopher J. Honey, Ted L. Willke

Altogether, we found that an LSTM with the proposed forget gate can learn long-term dependencies, outperforming other recurrent networks in multiple domains; such a gating mechanism can be integrated into other architectures to improve the learning of long-timescale information in recurrent neural networks.

Image Classification, Language Modelling
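
A minimal sketch of the intuition behind this excerpt, not the paper's architecture: in an LSTM, the forget-gate value f sets how quickly the cell state decays (with no new input, c_t = f * c_{t-1}), so a gate held near 1 retains information over longer timescales. All values below are illustrative.

```python
# Sketch: the forget-gate value f controls the timescale of cell-state decay.
# A unit with forget gate f retains information with timescale T ≈ -1 / ln(f).
import numpy as np

for f in (0.5, 0.9, 0.99):          # candidate forget-gate values
    timescale = -1.0 / np.log(f)    # analytic steps until decay to ~1/e
    c, steps = 1.0, 0
    while c > np.exp(-1):           # simulate c_t = f * c_{t-1} until below 1/e
        c *= f
        steps += 1
    print(f"f={f:.2f}  analytic T≈{timescale:6.1f}  simulated steps={steps}")
```

Gates closer to 1 give slower decay, which is the lever the paper's modified forgetting mechanism exploits.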

Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech

no code implementations NeurIPS 2020 Shailee Jain, Vy Vo, Shivangi Mahto, Amanda LeBel, Javier S. Turek, Alexander Huth

To understand how the human brain represents this information, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs).
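
A hedged sketch of that encoding-model setup, with synthetic stand-ins for both the LM features and the fMRI responses; the sizes, the ridge penalty, and the use of scikit-learn are illustrative choices, not the paper's.

```python
# Sketch: ridge regression maps language-model features (one vector per TR)
# to voxel responses. Feature extraction from an actual LM is omitted.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 500, 128, 1000   # synthetic sizes
X = rng.standard_normal((n_trs, n_features))   # stand-in for LM representations
true_w = 0.1 * rng.standard_normal((n_features, n_voxels))
Y = X @ true_w + rng.standard_normal((n_trs, n_voxels))  # synthetic voxel data

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)      # regularized multi-output fit
pred = model.predict(X_te)
# Per-voxel prediction correlation, the usual encoding-model metric.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxelwise correlation: {np.mean(r):.3f}")
```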

Multi-timescale Representation Learning in LSTM Language Models

no code implementations ICLR 2021 Shivangi Mahto, Vy A. Vo, Javier S. Turek, Alexander G. Huth

Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law.

Language Modelling, Representation Learning
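
As a worked illustration of the power-law decay the excerpt refers to: if a dependency statistic m(d) between words at distance d follows m(d) ∝ d^(-alpha), the exponent is a line fit in log-log space. The data and exponent here are synthetic assumptions, not values from the paper.

```python
# Sketch: recover a power-law exponent from noisy decay measurements.
import numpy as np

alpha_true = 0.6
d = np.arange(1, 200)
noise = np.random.default_rng(1).normal(0, 0.05, d.size)
m = d ** (-alpha_true) * np.exp(noise)          # synthetic dependency statistic

slope, intercept = np.polyfit(np.log(d), np.log(m), deg=1)
print(f"fitted exponent: {-slope:.3f} (true {alpha_true})")
```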

Clinically Deployed Distributed Magnetic Resonance Imaging Reconstruction: Application to Pediatric Knee Imaging

no code implementations11 Sep 2018 Michael J. Anderson, Jonathan I. Tamir, Javier S. Turek, Marcus T. Alley, Theodore L. Willke, Shreyas S. Vasanawala, Michael Lustig

Our improvements to the pipeline on a single machine provide a 3x overall reconstruction speedup, which allowed us to incorporate algorithmic changes that improve image quality.
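
A minimal sketch of the parallelization pattern only, under strong simplifications: each slice is reconstructed independently and the slices are distributed across worker processes. The plain inverse FFT stands in for the paper's compressed-sensing reconstruction.

```python
# Sketch: distribute independent per-slice reconstructions across processes.
import numpy as np
from multiprocessing import Pool

def recon_slice(kspace_2d):
    """Reconstruct one slice from fully sampled k-space (toy stand-in)."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_2d)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (32, 128, 128)                     # slices x rows x cols, illustrative
    kspace = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    with Pool() as pool:                       # one task per slice
        images = pool.map(recon_slice, list(kspace))
    print(len(images), images[0].shape)
```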

Efficient, sparse representation of manifold distance matrices for classical scaling

1 code implementation CVPR 2018 Javier S. Turek, Alexander Huth

Thus for large point sets it is common to use a low-rank approximation to the distance matrix, which fits in memory and can be efficiently analyzed using methods such as multidimensional scaling (MDS).
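
For reference, a compact sketch of classical scaling from a dense distance matrix, the memory-bound computation that motivates the paper's sparse representation; the sparse variant itself is not reproduced here.

```python
# Sketch: classical MDS via double-centering and eigendecomposition.
import numpy as np

def classical_mds(D, k=2):
    """Embed points in k dimensions from a Euclidean distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]             # top-k eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

pts = np.random.default_rng(0).standard_normal((100, 3))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D, k=2)
print(emb.shape)   # (100, 2)
```

The full n x n matrix D is exactly what becomes infeasible for large point sets, which is the problem the paper's sparse representation addresses.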

A Searchlight Factor Model Approach for Locating Shared Information in Multi-Subject fMRI Analysis

no code implementations29 Sep 2016 Hejia Zhang, Po-Hsuan Chen, Janice Chen, Xia Zhu, Javier S. Turek, Theodore L. Willke, Uri Hasson, Peter J. Ramadge

In this work, we examine a searchlight-based shared response model to identify shared information in small contiguous regions (searchlights) across the whole brain.

General Classification
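
A minimal sketch of the shared-response idea within a single searchlight, under simplifying assumptions (orthogonal per-subject maps, alternating least squares); the paper's full searchlight factor model is richer than this.

```python
# Sketch: alternate between a shared response S and per-subject orthogonal
# maps W_i so that each subject's data is approximated by W_i @ S.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox, n_feat, n_trs = 5, 60, 10, 200
S_true = rng.standard_normal((n_feat, n_trs))
data = [rng.standard_normal((n_vox, n_feat)) @ S_true
        + 0.1 * rng.standard_normal((n_vox, n_trs)) for _ in range(n_subj)]

W = [np.linalg.qr(rng.standard_normal((n_vox, n_feat)))[0] for _ in range(n_subj)]
for _ in range(20):
    S = np.mean([w.T @ x for w, x in zip(W, data)], axis=0)  # shared response
    for i, x in enumerate(data):                             # Procrustes update
        u, _, vt = np.linalg.svd(x @ S.T, full_matrices=False)
        W[i] = u @ vt
resid = np.mean([np.linalg.norm(x - w @ S) for w, x in zip(W, data)])
print(f"mean reconstruction residual: {resid:.2f}")
```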

A Convolutional Autoencoder for Multi-Subject fMRI Data Aggregation

no code implementations17 Aug 2016 Po-Hsuan Chen, Xia Zhu, Hejia Zhang, Javier S. Turek, Janice Chen, Theodore L. Willke, Uri Hasson, Peter J. Ramadge

We examine two ways to combine the ideas of a factor model and a searchlight-based analysis to aggregate multi-subject fMRI data while preserving spatial locality.

Anatomy
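
A toy convolutional autoencoder in PyTorch to make the building block concrete; the shapes are illustrative (2-D slices rather than volumes) and the multi-subject searchlight aggregation is omitted.

```python
# Sketch: a small convolutional autoencoder with a reconstruction objective.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(                 # compress spatially
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                 # reconstruct the input
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

x = torch.randn(4, 1, 32, 32)                    # stand-in fMRI slices
model = ConvAE()
loss = nn.functional.mse_loss(model(x), x)       # reconstruction loss
loss.backward()
print(loss.item())
```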

Enabling Factor Analysis on Thousand-Subject Neuroimaging Datasets

no code implementations16 Aug 2016 Michael J. Anderson, Mihai Capotă, Javier S. Turek, Xia Zhu, Theodore L. Willke, Yida Wang, Po-Hsuan Chen, Jeremy R. Manning, Peter J. Ramadge, Kenneth A. Norman

The scale of functional magnetic resonance imaging data is rapidly increasing as large multi-subject datasets become widely available and high-resolution scanners are adopted.

A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation

no code implementations NeurIPS 2014 Eran Treister, Javier S. Turek

Numerical experiments on both synthetic and real gene expression data demonstrate that our approach outperforms existing state-of-the-art methods, especially for large-scale problems.
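
To make the problem concrete, here is a small-scale instance solved with scikit-learn's GraphicalLasso as a stand-in solver; the paper's block-coordinate descent method targets problems far too large for such dense solvers.

```python
# Sketch: l1-penalized sparse inverse covariance (precision) estimation.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Synthetic data with a known sparse precision matrix.
prec = np.eye(5)
prec[0, 1] = prec[1, 0] = 0.4
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)        # l1-penalized max likelihood
print(np.round(model.precision_, 2))             # recovered sparse structure
```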
