no code implementations • 5 Feb 2024 • Ruihan Wu, Siddhartha Datta, Yi Su, Dheeraj Baby, Yu-Xiang Wang, Kilian Q. Weinberger
This paper addresses the prevalent issue of label shift in an online setting with missing labels, where data distributions change over time and obtaining timely labels is challenging.
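The label-shift setting the abstract describes can be illustrated with a minimal sketch (names and setup are illustrative, not the paper's method): under label shift, p_t(y|x) ∝ p_s(y|x) · q_t(y)/q_s(y), so a source-trained classifier's posteriors can be reweighted once the target label marginal is estimated from unlabeled data.

```python
# Hypothetical sketch of posterior reweighting under label shift.
# All names here are illustrative; the paper's online algorithm is
# more involved (it tracks a drifting target marginal over time).
import numpy as np

def reweight_posteriors(probs, source_prior, target_prior):
    """Adjust source posteriors p_s(y|x) for a shifted label prior."""
    w = target_prior / source_prior        # per-class importance weights
    adjusted = probs * w                   # elementwise reweighting
    return adjusted / adjusted.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.3], [0.4, 0.6]])  # source-model posteriors
src = np.array([0.5, 0.5])                  # source label marginal
tgt = np.array([0.9, 0.1])                  # estimated target marginal
print(reweight_posteriors(probs, src, tgt))
```

In the online setting, the target marginal `tgt` would be re-estimated as the stream drifts rather than fixed up front.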
no code implementations • 27 Dec 2023 • Siddhartha Datta, Alexander Ku, Deepak Ramachandran, Peter Anderson
Text-to-image generation models are powerful but difficult to use.
no code implementations • 27 Jan 2023 • Siddhartha Datta, Nigel Shadbolt
Large models offer strong zero-shot and few-shot capabilities.
no code implementations • 15 Nov 2022 • Siddhartha Datta
As digital realities become an increasingly impactful aspect of human lives, we investigate the design of a system that enables users to manipulate the perception of both their physical realities and digital realities.
no code implementations • 29 Sep 2022 • Siddhartha Datta, Nigel Shadbolt
Adapting model parameters to incoming streams of data is crucial to the scalability of deep learning.
no code implementations • 19 May 2022 • Siddhartha Datta, Nigel Shadbolt
Inspired by recent work on neural subspaces and mode connectivity, we revisit parameter subspace sampling for shifted and/or interpolatable input distributions (instead of a single, unshifted distribution).
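Parameter subspace sampling in its simplest (linear, two-endpoint) form can be sketched as follows; this is an assumed minimal setup in the spirit of mode-connectivity work, not the paper's implementation:

```python
# Minimal sketch of linear parameter-subspace sampling (assumed setup):
# given two endpoint solutions w0 and w1, models on the segment between
# them are sampled by convex interpolation of their parameters.
import numpy as np

rng = np.random.default_rng(0)

def sample_subspace(w0, w1, alpha=None):
    """Sample a parameter vector on the segment between two solutions."""
    if alpha is None:
        alpha = rng.uniform()              # random point on the line
    return (1 - alpha) * w0 + alpha * w1

w0 = np.zeros(4)                            # endpoint solution A
w1 = np.ones(4)                             # endpoint solution B
print(sample_subspace(w0, w1, alpha=0.25))  # quarter of the way along
```

Mode-connectivity results suggest such segments (or low-dimensional simplices) can stay in low-loss regions, which is what makes sampling them useful for shifted input distributions.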
no code implementations • COLING 2022 • Siddhartha Datta
Recent work in black-box adversarial attacks for NLP systems has attracted much attention.
no code implementations • NAACL (WOAH) 2022 • Siddhartha Datta, Konrad Kollnig, Nigel Shadbolt
Digital harms can manifest across any interface.
no code implementations • 7 Mar 2022 • Siddhartha Datta, Nigel Shadbolt
This motivates this paper's work on the construction of multi-agent backdoor defenses that maximize accuracy w.r.t. clean labels.
no code implementations • 28 Jan 2022 • Siddhartha Datta, Nigel Shadbolt
Malicious agents in collaborative learning and outsourced data collection threaten the training of clean models.
no code implementations • 24 Jan 2022 • Siddhartha Datta, Nigel Shadbolt
Attack vectors that compromise machine learning pipelines in the physical world have been demonstrated in recent research, from perturbations to architectural components.
1 code implementation • 20 Dec 2021 • Siddhartha Datta, Konrad Kollnig, Nigel Shadbolt
Digital harms are widespread in the mobile ecosystem.
no code implementations • 9 Oct 2021 • Siddhartha Datta, Giulio Lovisotto, Ivan Martinovic, Nigel Shadbolt
As collaborative learning and the outsourcing of data collection become more common, malicious actors (or agents) that attempt to manipulate the learning process face an additional obstacle: they must compete with each other.
1 code implementation • 23 Feb 2021 • Konrad Kollnig, Siddhartha Datta, Max Van Kleek
Dark patterns in mobile apps take advantage of cognitive biases of end-users and can have detrimental effects on people's lives.
no code implementations • 1 Jan 2021 • Siddhartha Datta
Recent work in black-box adversarial attacks for NLP systems has attracted attention.
1 code implementation • 3 Sep 2019 • Siddhartha Datta
The paper explores a novel methodology for source code obfuscation, applying text-based recurrent neural network (RNN) encoder-decoder models to ciphertext generation and key generation.
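The encoder-decoder framing can be sketched with a toy, untrained forward pass (assumed shapes and weights; a real system would learn this mapping end to end): the encoder compresses source-code characters into a hidden state, which the decoder unrolls into ciphertext tokens.

```python
# Toy forward-pass sketch of an RNN encoder-decoder (assumed setup,
# random untrained weights): encoder state acts as a compressed key,
# and the decoder greedily emits "ciphertext" tokens from it.
import numpy as np

rng = np.random.default_rng(0)
V, H = 64, 16                               # vocab size, hidden size
Wx = rng.normal(size=(H, V))                # input-to-hidden weights
Wh = rng.normal(size=(H, H))                # hidden-to-hidden weights
Wo = rng.normal(size=(V, H))                # hidden-to-output weights

def encode(tokens):
    h = np.zeros(H)
    for t in tokens:                        # simple (Elman) recurrence
        h = np.tanh(Wx @ np.eye(V)[t] + Wh @ h)
    return h                                # state used to seed decoding

def decode(h, length):
    out = []
    for _ in range(length):
        y = int(np.argmax(Wo @ h))          # greedy token choice
        out.append(y)
        h = np.tanh(Wx @ np.eye(V)[y] + Wh @ h)
    return out

cipher = decode(encode([3, 14, 15, 9]), length=4)
print(cipher)
```

With random weights the output is meaningless; the sketch only shows the interface shape the abstract describes.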