The Value of Context: Human versus Black Box Evaluators

17 Feb 2024 · Andrei Iakovlev, Annie Liang

Evaluations once solely within the domain of human experts (e.g., medical diagnosis by doctors) can now also be carried out by machine learning algorithms. This raises a new conceptual question: what is the difference between being evaluated by humans and by algorithms, and when should an individual prefer one form of evaluation over the other? We propose a theoretical framework that formalizes one key distinction between the two: machine learning algorithms are standardized, fixing a common set of covariates by which to assess all individuals, while human evaluators customize which covariates they acquire for each individual. Our framework defines and analyzes the advantage of this customization -- the value of context -- in environments with very high-dimensional data. We show that unless the agent has precise knowledge about the joint distribution of covariates, the value of more covariates exceeds the value of context.
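The two quantities being compared can be made concrete with a toy Monte Carlo sketch. This is not the paper's model: the sparse-support data-generating process, the acquisition budgets `k` and `K`, and the MSE-based stand-ins for the "value of context" (gain from customizing which covariates are observed, at a fixed budget) and the "value of more covariates" (gain from enlarging a fixed covariate set) are illustrative assumptions. The simulation only shows what the two quantities measure; it does not reproduce the paper's ranking between them.

```python
# Toy sketch (not the paper's framework): a "standardized" evaluator always
# observes the same fixed covariates, while a "contextual" evaluator tailors
# the observed covariates to each individual. All modeling choices below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d = 200        # ambient number of covariates
s = 5          # covariates that actually matter for each individual
k = 5          # acquisition budget shared by both evaluators
K = 60         # larger budget for the standardized evaluator
n = 20_000     # simulated individuals
noise_sd = 0.5

# Each individual has an individual-specific relevant subset of covariates.
X = rng.standard_normal((n, d))
supports = np.array([rng.choice(d, size=s, replace=False) for _ in range(n)])
signal = np.take_along_axis(X, supports, axis=1).sum(axis=1)
y = signal + noise_sd * rng.standard_normal(n)

def standardized_prediction(budget):
    """Always observe covariates 0..budget-1 and predict E[y | observed],
    averaging over the unknown individual-specific support."""
    observed = X[:, :budget]
    # Any given observed covariate is relevant with probability s/d.
    return (s / d) * observed.sum(axis=1)

def contextual_prediction():
    """Observe exactly the k covariates relevant to this individual
    (the evaluator's 'context' reveals which ones matter)."""
    return np.take_along_axis(X, supports[:, :k], axis=1).sum(axis=1)

mse = lambda pred: np.mean((y - pred) ** 2)

mse_std_k = mse(standardized_prediction(k))
mse_std_K = mse(standardized_prediction(K))
mse_ctx_k = mse(contextual_prediction())

print(f"MSE, standardized, budget {k}: {mse_std_k:.3f}")
print(f"MSE, standardized, budget {K}: {mse_std_K:.3f}")
print(f"MSE, contextual,   budget {k}: {mse_ctx_k:.3f}")
print(f"value of context at budget {k}:        {mse_std_k - mse_ctx_k:.3f}")
print(f"value of more covariates ({k} -> {K}): {mse_std_k - mse_std_K:.3f}")
```

Which quantity dominates depends on what the evaluator knows about the covariate distribution and the individual-specific structure, which is the trade-off the paper analyzes formally.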
