Search Results for author: Noga Mudrik

Found 7 papers, 3 papers with code

Multiway Multislice PHATE: Visualizing Hidden Dynamics of RNNs through Training

no code implementations4 Jun 2024 Jiancheng Xie, Lou C. Kohler Voinov, Noga Mudrik, Gal Mishne, Adam Charles

Recurrent neural networks (RNNs) are a widely used tool for sequential data analysis; however, they are still often seen as black boxes of computation.

CrEIMBO: Cross Ensemble Interactions in Multi-view Brain Observations

no code implementations27 May 2024 Noga Mudrik, Ryan Ly, Oliver Ruebel, Adam S. Charles

We assume that brain observations stem from the joint activity of a set of functional neural ensembles (groups of co-active neurons) that are similar in functionality across recordings, and propose a new model, CrEIMBO (Cross-Ensemble Interactions in Multi-view Brain Observations), to discover these ensembles and their non-stationary dynamical interactions.

Dictionary Learning

LINOCS: Lookahead Inference of Networked Operators for Continuous Stability

no code implementations28 Apr 2024 Noga Mudrik, Eva Yezerets, Yenho Chen, Christopher Rozell, Adam Charles

Such systems, often modeled as dynamical systems, typically exhibit noisy, high-dimensional, and non-stationary temporal behavior that renders their identification challenging.

Time Series

SiBBlInGS: Similarity-driven Building-Block Inference using Graphs across States

1 code implementation7 Jun 2023 Noga Mudrik, Gal Mishne, Adam S. Charles

Time series data across scientific domains are often collected under distinct states (e.g., tasks), wherein latent processes (e.g., biological factors) create complex inter- and intra-state variability.

Dictionary Learning Time Series

Multi-Lingual DALL-E Storytime

2 code implementations22 Dec 2022 Noga Mudrik, Adam S. Charles

While DALL-E is a promising tool for many applications, its decreased performance on input in languages other than English limits its audience and deepens the gap between populations.

Decomposed Linear Dynamical Systems (dLDS) for learning the latent components of neural dynamics

1 code implementation7 Jun 2022 Noga Mudrik, Yenho Chen, Eva Yezerets, Christopher J. Rozell, Adam S. Charles

Learning interpretable representations of neural dynamics at a population level is a crucial first step to understanding how observed neural activity relates to perception and behavior.

Dictionary Learning Time Series Analysis
