Search Results for author: Vivswan Shah

Found 2 papers, 1 paper with code

Leveraging Continuously Differentiable Activation Functions for Learning in Quantized Noisy Environments

no code implementations • 4 Feb 2024 • Vivswan Shah, Nathan Youngblood

Real-world analog systems intrinsically suffer from noise that can impede model convergence and degrade accuracy across a variety of deep learning models (see the sketch after this entry).

Quantization
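
As a rough illustration of the setting this paper addresses, the sketch below runs a forward pass through a quantized, noisy layer followed by a continuously differentiable activation (GELU). The bit width, noise level, and straight-through-estimator quantization here are illustrative assumptions, not the paper's exact method.

```python
# Illustrative sketch (not the paper's exact method): a forward pass with
# uniform quantization, additive Gaussian noise, and a continuously
# differentiable activation (GELU). Bit width and noise std are assumed.
import torch
import torch.nn as nn


class QuantizeNoise(nn.Module):
    def __init__(self, bits: int = 4, noise_std: float = 0.05):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Uniform quantization to [-1, 1]; the straight-through estimator
        # lets gradients pass through the rounding step unchanged.
        x = torch.clamp(x, -1.0, 1.0)
        q = torch.round((x + 1.0) / 2.0 * self.levels) / self.levels * 2.0 - 1.0
        x = x + (q - x).detach()
        # Additive Gaussian noise stands in for analog hardware noise.
        if self.training:
            x = x + torch.randn_like(x) * self.noise_std
        return x


model = nn.Sequential(
    nn.Linear(16, 32),
    nn.GELU(),            # continuously differentiable activation
    QuantizeNoise(),
    nn.Linear(32, 10),
)
out = model(torch.randn(8, 16))
```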

AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks

1 code implementation • 14 Oct 2022 • Vivswan Shah, Nathan Youngblood

AnalogVNN is a simulation framework built on PyTorch that can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators (see the sketch after this entry).

BIG-bench Machine Learning • Hyperparameter Optimization • +1
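
The sketch below gives a flavor of the effects AnalogVNN simulates (signal normalization, limited precision, and optoelectronic noise) wrapped around a standard linear layer. It is not the AnalogVNN API; the class and parameter names here are hypothetical.

```python
# Minimal sketch of the kinds of effects AnalogVNN models. This is NOT the
# AnalogVNN API; NoisyPhotonicLinear and its parameters are hypothetical.
import torch
import torch.nn as nn


class NoisyPhotonicLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int,
                 precision_bits: int = 6, noise_std: float = 0.02):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = 2 ** precision_bits - 1
        self.noise_std = noise_std

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Signal normalization keeps values in the representable range.
        x = x / (x.abs().amax(dim=-1, keepdim=True) + 1e-8)
        # Limited precision: round to a fixed number of levels
        # (straight-through estimator for the backward pass).
        q = torch.round(x * self.scale) / self.scale
        x = x + (q - x).detach()
        # Optoelectronic noise on the analog matrix-vector product.
        y = self.linear(x)
        if self.training:
            y = y + torch.randn_like(y) * self.noise_std
        return y


layer = NoisyPhotonicLinear(64, 32)
y = layer(torch.randn(4, 64))
```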
