no code implementations • 15 Mar 2022 • Kendra Albert, Maggie Delano
False assumptions about sex and gender are deeply embedded in the medical system, including that they are binary, static, and concordant.
no code implementations • ICML Workshop AML 2021 • Kendra Albert, Maggie Delano, Bogdan Kulynych, Ram Shankar Siva Kumar
In this paper, we review the broader impact statements that adversarial ML researchers wrote as part of their NeurIPS 2020 papers and assess the assumptions those authors make about the goals of their work.
no code implementations • 20 Jan 2021 • Kendra Albert, Maggie Delano
Smart weight scales offer bioimpedance-based body composition analysis as a supplement to pure body weight measurement.
Computers and Society
no code implementations • 3 Dec 2020 • Kendra Albert, Maggie Delano, Jonathon Penney, Afsaneh Rigot, Ram Shankar Siva Kumar
This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects.
Computers and Society
no code implementations • 29 Jun 2020 • Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert
Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities.
no code implementations • 1 Feb 2020 • Kendra Albert, Jonathon Penney, Bruce Schneier, Ram Shankar Siva Kumar
In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems.
2 code implementations • 25 Nov 2019 • Ram Shankar Siva Kumar, David R. O'Brien, Kendra Albert, Salomé Viljoen, Jeffrey Snover
In the last two years, more than 200 papers have been written on how machine learning (ML) systems can fail because of adversarial attacks on the algorithms and data; this number would balloon if we were to incorporate papers covering non-adversarial failure modes.
no code implementations • 25 Oct 2018 • Ram Shankar Siva Kumar, David R. O'Brien, Kendra Albert, Salomé Viljoen
When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond?