Search Results for author: Kendra Albert

Found 8 papers, 1 paper with code

Sex Trouble: Common pitfalls in incorporating sex/gender in medical machine learning and how to avoid them

no code implementations • 15 Mar 2022 • Kendra Albert, Maggie Delano

False assumptions about sex and gender are deeply embedded in the medical system, including that they are binary, static, and concordant.

BIG-bench Machine Learning

Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks

no code implementations • ICML Workshop AML 2021 • Kendra Albert, Maggie Delano, Bogdan Kulynych, Ram Shankar Siva Kumar

In this paper, we review the broader impact statements that adversarial ML researchers wrote as part of their NeurIPS 2020 papers and assess the assumptions that authors have about the goals of their work.

"This Whole Thing Smacks of Gender": Algorithmic Exclusion in Bioimpedance-based Body Composition Analysis

no code implementations • 20 Jan 2021 • Kendra Albert, Maggie Delano

Smart weight scales offer bioimpedance-based body composition analysis as a supplement to pure body weight measurement.

Computers and Society

Ethical Testing in the Real World: Evaluating Physical Testing of Adversarial Machine Learning

no code implementations • 3 Dec 2020 • Kendra Albert, Maggie Delano, Jonathon Penney, Afsaneh Rigot, Ram Shankar Siva Kumar

This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects.

Computers and Society

Legal Risks of Adversarial Machine Learning Research

no code implementations • 29 Jun 2020 • Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert

Adversarial Machine Learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities.

BIG-bench Machine Learning

Politics of Adversarial Machine Learning

no code implementations • 1 Feb 2020 • Kendra Albert, Jonathon Penney, Bruce Schneier, Ram Shankar Siva Kumar

In this paper, we draw on insights from science and technology studies, anthropology, and human rights literature to inform how defenses against adversarial attacks can be used to suppress dissent and limit attempts to investigate machine learning systems.

BIG-bench Machine Learning

Failure Modes in Machine Learning Systems

2 code implementations • 25 Nov 2019 • Ram Shankar Siva Kumar, David O'Brien, Kendra Albert, Salomé Viljoen, Jeffrey Snover

In the last two years, more than 200 papers have been written on how machine learning (ML) systems can fail because of adversarial attacks on the algorithms and data; this number balloons if we also incorporate papers covering non-adversarial failure modes.

BIG-bench Machine Learning

Law and Adversarial Machine Learning

no code implementations • 25 Oct 2018 • Ram Shankar Siva Kumar, David R. O'Brien, Kendra Albert, Salomé Viljoen

When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond?

BIG-bench Machine Learning
