1 code implementation • 1 Feb 2023 • Kavya Gupta, Sagar Verma
We show that CertViT networks have better certified accuracy than state-of-the-art Lipschitz-trained networks.
no code implementations • 24 Nov 2022 • Benjamin Kiefer, Matej Kristan, Janez Perš, Lojze Žust, Fabio Poiesi, Fabio Augusto de Alcantara Andrade, Alexandre Bernardino, Matthew Dawkins, Jenni Raitoharju, Yitong Quan, Adem Atmaca, Timon Höfer, Qiming Zhang, Yufei Xu, Jing Zhang, DaCheng Tao, Lars Sommer, Raphael Spraul, Hangyue Zhao, Hongpu Zhang, Yanyun Zhao, Jan Lukas Augustin, Eui-ik Jeon, Impyeong Lee, Luca Zedda, Andrea Loddo, Cecilia Di Ruberto, Sagar Verma, Siddharth Gupta, Shishir Muralidhara, Niharika Hegde, Daitao Xing, Nikolaos Evangeliou, Anthony Tzes, Vojtěch Bartl, Jakub Špaňhel, Adam Herout, Neelanjan Bhowmik, Toby P. Breckon, Shivanand Kundargi, Tejas Anvekar, Chaitra Desai, Ramesh Ashok Tabib, Uma Mudengudi, Arpita Vats, Yang song, Delong Liu, Yonglin Li, Shuman Li, Chenhao Tan, Long Lan, Vladimir Somers, Christophe De Vleeschouwer, Alexandre Alahi, Hsiang-Wei Huang, Cheng-Yen Yang, Jenq-Neng Hwang, Pyong-Kun Kim, Kwangju Kim, Kyoungoh Lee, Shuai Jiang, Haiwen Li, Zheng Ziqiang, Tuan-Anh Vu, Hai Nguyen-Truong, Sai-Kit Yeung, Zhuang Jia, Sophia Yang, Chih-Chung Hsu, Xiu-Yu Hou, Yu-An Jhang, Simon Yang, Mau-Tsuen Yang
The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAVs) and Unmanned Surface Vehicles (USVs), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation, and (iv) USV-based Maritime Obstacle Detection.
no code implementations • CVPR 2022 • Sagar Verma, Siddharth Gupta, Hal Shin, Akash Panigrahi, Shubham Goswami, Shweta Pardeshi, Natanael Exe, Ujwal Dutta, Tanka Raj Joshi, Nitin Bhojwani
In this paper, we introduce the GeoEngine platform for reproducible and production-ready geospatial machine learning research.
no code implementations • 1 Jan 2021 • Sagar Verma, Jean-Christophe Pesquet
Sparsifying deep neural networks is of paramount interest in many areas, especially when those networks have to be implemented on low-memory devices.
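As an illustration of the kind of sparsification this line of work targets, here is a minimal magnitude-pruning sketch; it is not the method of the paper, just the common baseline of zeroing out the smallest-magnitude fraction of a weight matrix so it can be stored compactly on a low-memory device.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of entries.

    Returns a pruned copy; the original array is left untouched.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Toy example: prune 30% of a 2x5 weight matrix.
w = np.arange(1.0, 11.0).reshape(2, 5)
p = magnitude_prune(w, 0.3)
```

Note that with tied magnitudes the `<=` comparison can zero slightly more than the requested fraction; a production pruner would break ties explicitly.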
no code implementations • 8 Oct 2020 • Sagar Verma, Nicolas Henwood, Marc Castella, Francois Malrait, Jean-Christophe Pesquet
In this paper, we explore the feasibility of modeling the dynamics of an electrical motor by following a data-driven approach, which uses only its inputs and outputs and does not make any assumption on its internal behaviour.
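The input–output modeling idea can be made concrete with a much simpler classical baseline than the paper's approach: an ARX (autoregressive with exogenous input) model fitted by least squares, which likewise uses only measured inputs and outputs and no physical motor model. The sketch below is illustrative; the function name and orders are invented for the example.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[t] ~ sum_i a_i*y[t-i] + sum_j b_j*u[t-j].

    u, y: 1-D arrays of measured input and output samples.
    Returns the stacked coefficient vector [a_1..a_na, b_1..b_nb].
    """
    start = max(na, nb)
    rows, targets = [], []
    for t in range(start, len(y)):
        # Most recent samples first: y[t-1], ..., y[t-na], u[t-1], ...
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# Noise-free synthetic system y[t] = 0.5*y[t-1] + 1.0*u[t-1]:
rng = np.random.default_rng(1)
u = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + 1.0 * u[t - 1]
theta = fit_arx(u, y, na=1, nb=1)
```

On noise-free data the least-squares fit recovers the true coefficients exactly; with real motor measurements one would expect noise, nonlinearity, and the need for richer models, which is what motivates the data-driven approach above.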
no code implementations • 21 Sep 2020 • Sagar Verma
This survey is on recent advancements in the intersection of physical modeling and machine learning.
1 code implementation • 18 Oct 2019 • Sagar Verma, Sukhad Anand, Chetan Arora, Atul Rai
In this paper, we propose to recommend images by explicitly learning and exploiting part-based similarity.
1 code implementation • 17 Oct 2019 • Sagar Verma, Prince Patel, Angshul Majumdar
The possibility of employing restricted Boltzmann machine (RBM) for collaborative filtering has been known for about a decade.
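For readers unfamiliar with the setup, here is a minimal numpy sketch of an RBM trained with one-step contrastive divergence (CD-1) on a tiny binary user–item matrix; it illustrates the general RBM-for-collaborative-filtering framing only, not the specific model of the paper, and the toy ratings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Bernoulli-Bernoulli restricted Boltzmann machine, trained with CD-1."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # One Gibbs step: sample hidden units, reconstruct visibles.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence gradient estimate.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Toy binary preference matrix: rows are users, columns are items.
ratings = np.array([[1., 1., 0., 0.],
                    [1., 1., 0., 0.],
                    [0., 0., 1., 1.]])
rbm = RBM(n_visible=4, n_hidden=2)
for _ in range(200):
    rbm.cd1_step(ratings)
# Reconstructed preference probabilities serve as recommendation scores.
scores = rbm.visible_probs(rbm.hidden_probs(ratings))
```

In a real recommender, unobserved ratings would be handled specially (e.g., conditioning only on observed items per user) rather than treated as zeros as in this toy.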
1 code implementation • 17 Oct 2019 • Sagar Verma, Richa Verma, P. B. Sujit
We present a detailed analysis of how these two cooperation methods perform as the number of agents in the game increases.
no code implementations • 17 Oct 2019 • Sagar Verma, Shikha Singh, Angshul Majumdar
Some recent studies have proposed that if we frame Non-Intrusive Load Monitoring (NILM) as a multi-label classification problem, the need for appliance-level data can be avoided.
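To make the multi-label framing concrete: each aggregate meter reading maps to a vector of per-appliance on/off labels, so no appliance-level submetering is needed at inference time. The sketch below is a deliberately trivial decoder (nearest subset-sum over known rated powers) rather than a learned classifier; the appliance names and wattages are invented for illustration.

```python
from itertools import product

# Hypothetical rated power draws in watts (illustrative only).
APPLIANCES = {"fridge": 150, "kettle": 2000, "tv": 100}

def predict_states(aggregate_watts):
    """Multi-label prediction from a single aggregate reading.

    Picks the on/off combination whose summed rated power is
    closest to the observed aggregate; returns {appliance: 0 or 1}.
    """
    names = list(APPLIANCES)
    best, best_err = None, float("inf")
    for states in product([0, 1], repeat=len(names)):
        total = sum(s * APPLIANCES[n] for s, n in zip(states, names))
        err = abs(total - aggregate_watts)
        if err < best_err:
            best, best_err = dict(zip(names, states)), err
    return best
```

A learned multi-label classifier would replace this exhaustive decoder with a model mapping windows of the aggregate signal to the same label vector, but the target structure is identical.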
4 code implementations • 17 Oct 2019 • Maria Papadomanolaki, Sagar Verma, Maria Vakalopoulou, Siddharth Gupta, Konstantinos Karantzalos
The advent of multitemporal high-resolution data, like the Copernicus Sentinel-2, has significantly enhanced the potential of monitoring the Earth's surface and environmental dynamics.
1 code implementation • 17 Oct 2019 • Sagar Verma, Pravin Nagar, Divam Gupta, Chetan Arora
Unlike the third-person domain, researchers have divided first-person actions into two categories, those involving hand-object interactions and those without, and have developed separate techniques for the two action categories.