Search Results for author: Mehrazin Alizadeh

Found 3 papers, 0 papers with code

CompactifAI: Extreme Compression of Large Language Models using Quantum-Inspired Tensor Networks

no code implementations25 Jan 2024 Andrei Tomut, Saeed S. Jahromi, Abhijoy Sarkar, Uygar Kurt, Sukhbinder Singh, Faysal Ishtiaq, Cesar Muñoz, Prabdeep Singh Bajaj, Ali Elborady, Gianni Del Bimbo, Mehrazin Alizadeh, David Montero, Pablo Martin-Ramiro, Muhammad Ibrahim, Oussama Tahiri Alaoui, John Malcolm, Samuel Mugel, Roman Orus

Traditional compression methods such as pruning, distillation, and low-rank approximation focus on reducing the effective number of neurons in the network, while quantization focuses on reducing the numerical precision of individual weights to reduce the model size while keeping the number of neurons fixed.
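The distinction the abstract draws can be made concrete with a minimal sketch (not from the paper, purely illustrative): low-rank approximation shrinks the effective parameter count of a weight matrix, while quantization keeps every weight but stores it at lower numerical precision.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)

# Low-rank approximation: keep only the top-k singular directions,
# reducing the effective parameter count from 64*64 to 2*64*k.
k = 8
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_lowrank = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Quantization: keep all 64*64 weights, but store each in 8 bits
# instead of 32, with a single shared scale factor.
scale = np.abs(W).max() / 127.0
W_int8 = np.round(W / scale).astype(np.int8)   # stored form
W_dequant = W_int8.astype(np.float32) * scale  # form used at inference

print(W_lowrank.shape, W_int8.dtype)
```

Both approximations keep the matrix shape intact; they differ only in what information is discarded (directions vs. precision).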

Tasks: Model Compression, Quantization, +1

Power Control with QoS Guarantees: A Differentiable Projection-based Unsupervised Learning Framework

no code implementations31 May 2023 Mehrazin Alizadeh, Hina Tabassum

Utilizing a differentiable projection function, two novel deep learning (DL) solutions are pursued.
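The core idea of a differentiable projection can be sketched as follows (a hypothetical illustration, not the paper's actual function): map the unconstrained outputs of a DNN onto the feasible power set with a smooth function, so gradients flow through the constraint instead of being blocked by a hard clip.

```python
import numpy as np

def differentiable_projection(x, p_max=1.0):
    # A sigmoid keeps every transmit power strictly inside (0, p_max)
    # and is differentiable everywhere, unlike np.clip.
    return p_max / (1.0 + np.exp(-x))

raw = np.array([-3.0, 0.0, 4.0])           # unconstrained network outputs
p = differentiable_projection(raw, p_max=0.5)
print(p)                                    # all entries lie in (0, 0.5)
```

A hard projection such as `np.clip(x, 0, p_max)` would satisfy the same constraint but has zero gradient outside the box, which stalls unsupervised training.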

Deep Unsupervised Learning for Generalized Assignment Problems: A Case-Study of User-Association in Wireless Networks

no code implementations26 Mar 2021 Arjun Kaushik, Mehrazin Alizadeh, Omer Waqar, Hina Tabassum

More specifically, we propose a new approach that facilitates training a deep neural network (DNN) using a customized loss function.
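One way such a customized loss can work for an assignment problem (a hypothetical sketch; the names and utility matrix are illustrative, not the paper's formulation): the DNN emits logits for each user-to-base-station pair, a softmax relaxes the discrete one-hot assignment, and the loss is the negative total utility of the soft assignment, making the objective differentiable end to end without labels.

```python
import numpy as np

def soft_assignment(logits):
    # Row-wise softmax: each user's row becomes a soft one-hot assignment.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def customized_loss(logits, utility):
    # utility[u, b]: payoff if user u associates with base station b.
    # Minimizing this loss maximizes the expected total utility.
    probs = soft_assignment(logits)
    return -(probs * utility).sum()

logits = np.array([[2.0, -1.0], [0.5, 0.5]])    # 2 users, 2 base stations
utility = np.array([[3.0, 1.0], [1.0, 4.0]])
loss = customized_loss(logits, utility)
print(loss)
```

No ground-truth assignments appear anywhere in the loss, which is what makes the training unsupervised.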
