Human-Imitating Metrics for Training and Evaluating Privacy Preserving Emotion Recognition Models Using Sociolinguistic Knowledge

18 Apr 2021  ·  Mimansa Jaiswal, Emily Mower Provost

Privacy preservation is a crucial component of any real-world application. However, in applications that rely on machine learning backends, privacy is challenging because models often capture more than what they were originally trained for, resulting in the potential leakage of sensitive information. In this paper, we propose an automatic and quantifiable metric for evaluating humans' perception of a model's ability to preserve privacy with respect to sensitive variables. We focus on saliency-based explanations, which highlight regions of the input text, to infer the internal workings of a black-box model. We use the degree to which differences in the interpretations of general vs. privacy-preserving models correlate with sociolinguistic biases to inform the metric's design. We show that certain commonly used methods that seek to preserve privacy do not align with human perception of privacy preservation, leading to distrust of a model's claims. We demonstrate the versatility of the proposed metric by validating its utility for measuring cross-corpus generalization for both privacy and emotion. Finally, we conduct crowdsourcing experiments to evaluate how inclined evaluators are to choose a particular model for a given purpose when model explanations are provided, and show a positive relationship with the proposed metric. To the best of our knowledge, this is the first step toward automatic and quantifiable metrics that align with human perception of a model's ability to preserve privacy, allowing for cost-effective model development.
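To make the idea of comparing saliency-based explanations concrete, below is a minimal sketch (not the authors' code) of one way to contrast the explanations of a "general" and a "privacy-preserving" emotion model. The occlusion-based saliency, the toy lexicon models, and the Spearman comparison are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: compare token-level saliency maps from two black-box
# emotion scorers and measure how differently they weight the evidence.
# All model definitions here are hypothetical stand-ins, not the paper's models.
import numpy as np
from scipy.stats import spearmanr

def occlusion_saliency(score_fn, tokens):
    """Token saliency = drop in the model's score when that token is removed."""
    base = score_fn(tokens)
    return np.array([base - score_fn(tokens[:i] + tokens[i + 1:])
                     for i in range(len(tokens))])

# Hypothetical "general" model: its score also leans on sociolinguistic style cues.
def general_model_score(tokens):
    lexicon = {"furious": 0.9, "honestly": 0.4, "y'all": 0.3}
    return sum(lexicon.get(t, 0.0) for t in tokens)

# Hypothetical "privacy-preserving" model: its score relies on emotion words only.
def private_model_score(tokens):
    lexicon = {"furious": 0.9}
    return sum(lexicon.get(t, 0.0) for t in tokens)

tokens = ["honestly", "y'all", "I", "am", "furious"]
s_general = occlusion_saliency(general_model_score, tokens)
s_private = occlusion_saliency(private_model_score, tokens)

# A low rank correlation indicates the two models highlight different regions
# of the input, e.g. the private model ignoring style/demographic markers.
rho, _ = spearmanr(s_general, s_private)
print("general saliency:", dict(zip(tokens, s_general)))
print("private saliency:", dict(zip(tokens, s_private)))
print("rank correlation:", rho)
```

In this toy setup, the divergence between the two saliency maps over sociolinguistically marked tokens is the kind of signal the paper's metric design builds on; the actual metric additionally grounds this comparison in sociolinguistic knowledge and human judgments.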
