Thursday 26 July 2018

Updating Prior Beliefs Based on Ambiguous Evidence


Suppose two nations, North Bayesland and South Bayesland, are independently testing new missile technology. Each has made six detonation attempts: North Bayesland has been successful once and South Bayesland four times. You observe another detonation on the border between the two countries but cannot determine the source. Based only on the provided information:
  1. What is the probability that North (or South) Bayesland is the source of this missile? 
  2. What is your best estimate of the propensity for success of North and South Bayesland after this latest observation (i.e. the probability, for each nation, that a future missile they launch will detonate)?
The general form of this problem arises in many areas of life. But how well do people answer such questions?

Our paper "Updating Prior Beliefs Based on Ambiguous Evidence", which was accepted at the prestigious 40th Annual Meeting of the Cognitive Science Society (CogSci 2018) in Madison, Wisconsin, addresses this problem. Stephen Dewitt (former QMUL PhD student) is presenting the paper on 27 July. 

First of all, the normative answer to Question 1 - based on simple Bayesian reasoning - is 20% for North Bayesland and 80% for South Bayesland, since the observed success rates of 1/6 and 4/6 are in the ratio 1:4. But Question 2 is much more complex, because we cannot assume the small amount of data on previous detonation attempts represents a 'fixed' propensity for success (the normative Bayesian solution requires a non-trivial Bayesian network that models our uncertainty about the success propensities).
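For readers who want to see the arithmetic, the following minimal Python sketch reproduces the Question 1 answer and shows one illustrative way of handling the propensity uncertainty in Question 2. The Beta(1,1) priors on each nation's propensity and the equal prior on the source of the ambiguous detonation are assumptions made purely for illustration here; they are not necessarily the exact model used in the paper.

```python
from fractions import Fraction

# --- Question 1: which nation launched the ambiguous missile? ---
# Treat the observed success rates (1/6 and 4/6) as the likelihoods of a
# successful detonation, with an equal prior on the source.
rate_north = Fraction(1, 6)
rate_south = Fraction(4, 6)
total = rate_north + rate_south
print(f"Q1: P(North) = {float(rate_north / total):.0%}, "
      f"P(South) = {float(rate_south / total):.0%}")   # 20% and 80%

# --- Question 2: updated propensity estimates (illustrative sketch only) ---
# Assumption: each nation's unknown propensity gets a uniform Beta(1,1) prior,
# and the ambiguous detonation came from either nation with prior probability 1/2.
# After 1 of 6 and 4 of 6 observed successes, the posteriors are Beta(2,6) and Beta(5,3).
def beta_mean(a, b):
    return Fraction(a, a + b)

a_n, b_n = 1 + 1, 1 + 5   # North: Beta(2, 6), mean 0.25
a_s, b_s = 1 + 4, 1 + 2   # South: Beta(5, 3), mean 0.625

# Posterior probability of each source, weighted by expected propensity
# (under this fuller model the source split is 2/7 vs 5/7 rather than 20/80).
w_n = Fraction(1, 2) * beta_mean(a_n, b_n)
w_s = Fraction(1, 2) * beta_mean(a_s, b_s)
z_n, z_s = w_n / (w_n + w_s), w_s / (w_n + w_s)

# Each propensity estimate is a mixture: a nation is credited with the extra
# success only in proportion to the probability that it was actually the source.
north_est = z_n * beta_mean(a_n + 1, b_n) + z_s * beta_mean(a_n, b_n)
south_est = z_s * beta_mean(a_s + 1, b_s) + z_n * beta_mean(a_s, b_s)
print(f"Q2: North = {float(north_est):.3f}, South = {float(south_est):.3f}")
```

Under these illustrative assumptions the estimates move from 0.25 and 0.625 (the posterior means before the ambiguous launch) to roughly 0.274 and 0.655: each nation is credited with a share of the extra detonation in proportion to how likely it was to be the source.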

Based on experiments involving 250 paid participants, we discovered two types of errors in the answers.
  1. A ‘double updating’ error: individuals appear first to use their prior beliefs to interpret the ambiguous evidence, and then to use this interpreted form of the evidence, rather than the raw form, when updating their beliefs. 
  2. A second error: individuals convert the probabilistic representation of the evidence into a categorical one and use this categorical representation when updating. 
Both errors have the effect of exaggerating the evidence in favour of the solver’s prior belief and could lead to confirmation bias and polarisation (see the illustrative sketch below). Given the importance of the class of problems to which the study applies, we believe that a greater understanding of the cognitive processes underlying these errors is an important avenue for future research.
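To make the second error concrete, here is a small, purely illustrative Python sketch (it is not data or analysis from the paper). It contrasts a graded update, in which each nation is credited with its expected share of the ambiguous detonation, against the categorical shortcut in which the launch is attributed entirely to the more likely source; the graded rule used here is our own simplification for illustration.

```python
from fractions import Fraction

# Purely illustrative numbers based on the scenario above (North 1/6, South 4/6)
# and the 80% attribution to South Bayesland from Question 1.

def rate(successes, attempts):
    # propensity estimate = successes / attempts
    return Fraction(successes) / Fraction(attempts)

p_south = Fraction(4, 5)   # Question 1: 80% South
p_north = 1 - p_south      # 20% North

# Graded update: each nation is credited with its expected share of the
# ambiguous launch (an extra attempt and an extra success, in proportion).
north_graded = rate(1 + p_north, 6 + p_north)   # 6/31  ~ 0.194
south_graded = rate(4 + p_south, 6 + p_south)   # 24/34 ~ 0.706

# Categorical shortcut: the detonation is attributed wholly to South.
north_categorical = rate(1, 6)                  # 1/6   ~ 0.167 (unchanged)
south_categorical = rate(4 + 1, 6 + 1)          # 5/7   ~ 0.714

print(f"graded:      North {float(north_graded):.3f}, South {float(south_graded):.3f}")
print(f"categorical: North {float(north_categorical):.3f}, South {float(south_categorical):.3f}")
```

In this toy calculation the categorical shortcut widens the gap between the two nations' estimates, which is the direction of distortion described above: the evidence is exaggerated in favour of the prior belief that South Bayesland is the stronger nation.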

The full paper details and pdf (also available here):
Dewitt, S., Lagnado, D., & Fenton, N. E. (2018), "Updating Prior Beliefs Based on Ambiguous Evidence", CogSci 2018, Madison, Wisconsin, 25-28 July 2018.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under Contract [2017-16122000003]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Funding was also provided by the ERC project ERC-2013-AdG339182-BAYES_KNOWLEDGE and the Leverhulme Trust project RPG-2016-118 CAUSAL-DYNAMICS.

UPDATE: Stephen Dewitt presenting the paper in Madison:

