no code implementations • 1 Apr 2024 • Qi Zhang, Yi Zhou, Ashley Prater-Bennette, Lixin Shen, Shaofeng Zou
We prove that our algorithm finds an $\epsilon$-stationary point with a computational complexity of $\mathcal O(\epsilon^{-3k_*-5})$, where $k_*$ is the parameter of the Cressie-Read divergence.
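For reference, the Cressie-Read family is usually parametrized by a generator $f_k$; the standard form is shown below with $k_*$ playing the role of $k$ (the paper's exact normalization may differ):

$$ f_k(t) = \frac{t^{k} - k\,t + k - 1}{k(k-1)}, \qquad D_{f_k}(P \,\|\, Q) = \mathbb{E}_{Q}\!\left[ f_k\!\left(\tfrac{dP}{dQ}\right) \right], \qquad k \neq 0, 1. $$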
no code implementations • 9 Oct 2023 • Yash Garg, Nebiyou Yismaw, Rakib Hyder, Ashley Prater-Bennette, M. Salman Asif
In this paper, we propose a factorized tensor network (FTN) that can achieve accuracy comparable to independent single-task/domain networks with a small number of additional parameters.
no code implementations • 6 Oct 2023 • Md Kaykobad Reza, Ashley Prater-Bennette, M. Salman Asif
We conduct a series of experiments to demonstrate the robustness of our proposed method to missing modalities on five different datasets for multimodal semantic segmentation, multimodal material segmentation, and multimodal sentiment analysis tasks.
1 code implementation • 7 Sep 2023 • Md Kaykobad Reza, Ashley Prater-Bennette, M. Salman Asif
Furthermore, our ablation studies highlight how different input modalities improve performance in identifying different types of materials.
Ranked #1 on Semantic Segmentation on MCubeS (P)
no code implementations • 17 May 2023 • Yue Wang, Alvaro Velasquez, George Atia, Ashley Prater-Bennette, Shaofeng Zou
Robust Markov decision processes (MDPs) address the challenge of model uncertainty by optimizing the worst-case performance over an uncertainty set of MDPs.
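To fix notation, the robust control objective is the usual max-min over the uncertainty set; the discounted form below is the textbook statement, shown only for reference (the performance criterion studied in a given paper may differ, e.g., average reward):

$$ \max_{\pi} \; \min_{P \in \mathcal{P}} \; \mathbb{E}^{\pi}_{P}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right]. $$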
no code implementations • 2 Jan 2023 • Yue Wang, Alvaro Velasquez, George Atia, Ashley Prater-Bennette, Shaofeng Zou
We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
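A minimal sketch of robust relative value iteration on a finite average-reward MDP is given below, assuming (purely for illustration) an R-contamination uncertainty set, for which the worst-case expectation has a closed form; the function and variable names are invented here and this is not the authors' implementation.

```python
import numpy as np

def robust_relative_value_iteration(P, r, radius=0.1, ref_state=0,
                                    n_iters=1000, tol=1e-8):
    """Sketch of robust relative value iteration for an average-reward MDP.

    Assumes an R-contamination uncertainty set around the nominal kernel P,
    for which the worst-case expectation of a value vector w is
    (1 - radius) * P w + radius * min(w). This set is an illustrative choice,
    not necessarily the one analyzed in the paper.

    P: nominal transition kernel, shape (S, A, S)
    r: reward table, shape (S, A)
    """
    S, A, _ = P.shape
    w = np.zeros(S)
    for _ in range(n_iters):
        # Worst-case expected relative value under the contamination set
        worst = (1.0 - radius) * (P @ w) + radius * w.min()  # shape (S, A)
        w_new = (r + worst).max(axis=1)
        w_new -= w_new[ref_state]            # relative value normalization
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    q = r + (1.0 - radius) * (P @ w) + radius * w.min()
    gain = q.max(axis=1)[ref_state]          # estimated optimal robust gain
    policy = q.argmax(axis=1)                # greedy robust policy
    return gain, w, policy
```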
1 code implementation • 19 Jul 2022 • Rakib Hyder, Ken Shao, Boyu Hou, Panos Markopoulos, Ashley Prater-Bennette, M. Salman Asif
Our method also offers better memory efficiency compared to episodic memory- and mask-based approaches.
no code implementations • 29 Sep 2021 • Rakib Hyder, Ken Shao, Boyu Hou, Panos Markopoulos, Ashley Prater-Bennette, Salman Asif
To update the network for a new task, we learn a low-rank (or rank-1) matrix and add that to the weights of every layer.
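A minimal sketch of such a rank-1 per-task weight increment, for a single linear layer, might look as follows (class and parameter names are hypothetical; the authors' actual architecture and code may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Rank1AdaptedLinear(nn.Module):
    """Linear layer whose frozen base weight is adapted per task with a
    learned rank-1 (outer-product) increment. Illustrative sketch only."""

    def __init__(self, in_features, out_features, n_tasks):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)        # shared weights stay fixed
        # One pair of rank-1 factors per task: delta_W = u v^T
        self.u = nn.Parameter(torch.empty(n_tasks, out_features))
        self.v = nn.Parameter(torch.empty(n_tasks, in_features))
        nn.init.normal_(self.u, std=0.01)
        nn.init.normal_(self.v, std=0.01)

    def forward(self, x, task_id):
        delta = torch.outer(self.u[task_id], self.v[task_id])  # (out, in)
        return F.linear(x, self.base.weight + delta, self.base.bias)
```

With this kind of update, only the $u$ and $v$ factors are trained for a new task, so the per-task parameter overhead is on the order of (in + out) values per layer rather than in × out.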
1 code implementation • 29 Apr 2021 • Tian Tong, Cong Ma, Ashley Prater-Bennette, Erin Tripp, Yuejie Chi
Tensors, which provide a powerful and flexible model for representing multi-attribute data and multi-way interactions, play an indispensable role in modern data science across various fields in science and engineering.
no code implementations • 3 Mar 2021 • Ashley Prater-Bennette, Lixin Shen, Erin E. Tripp
The log-sum penalty is often adopted as a replacement for the $\ell_0$ pseudo-norm in compressive sensing and low-rank optimization.
Compressive Sensing • Optimization and Control • MSC 49J53, 49J52, 90C26
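For reference, the log-sum penalty is commonly written with a smoothing parameter $\varepsilon > 0$ (the paper's exact normalization may differ), and it recovers the $\ell_0$ count in the small-$\varepsilon$ limit:

$$ P_{\varepsilon}(x) = \sum_{i} \log\!\left(1 + \frac{|x_i|}{\varepsilon}\right), \qquad \frac{P_{\varepsilon}(x)}{\log(1/\varepsilon)} \;\to\; \|x\|_0 \quad \text{as } \varepsilon \to 0^{+}. $$

Applied to the singular values of a matrix, the same penalty serves as a surrogate for rank in low-rank optimization.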