Thursday, 26 July 2018

Updating Prior Beliefs Based on Ambiguous Evidence


Suppose two nations, North Bayesland and South Bayesland, are independently testing new missile technology. Each has made six detonation attempts: North Bayesland has been successful once and South Bayesland four times. You observe another detonation on the border between the two countries but cannot determine the source. Based only on the provided information:
  1. What is the probability that North (or South) Bayesland is the source of this missile? 
  2. What is your best estimate of the propensity for success of North and South Bayesland after this latest observation (i.e. the probability, for each nation, that a future missile they launch will detonate)?
The general form of this problem arises in many areas of life. But how well do people answer such questions?

Our paper "Updating Prior Beliefs Based on Ambiguous Evidence", which was accepted at the prestigious 40th Annual Meeting of the Cognitive Science Society (CogSci 2018) in Madison, Wisconsin, addresses this problem. Stephen Dewitt (former QMUL PhD student) is presenting the paper on 27 July. 

First of all, the normative answer to Question 1 - based on simple Bayesian reasoning - is 20% for North Bayesland and 80% for South Bayesland. But Question 2 is much more complex because we cannot assume that the small amount of data on previous detonation attempts represents a 'fixed' propensity of success (the normative Bayesian solution requires a non-trivial Bayesian network that models our uncertainty about the success propensities).
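As a quick illustration, here is a minimal sketch in Python of both calculations. The Question 1 part simply applies Bayes' theorem with a 50/50 prior on the source and the raw frequencies 1/6 and 4/6 as detonation probabilities; the Question 2 part is my own rough Monte Carlo approximation (uniform Beta priors on each propensity), not the Bayesian network used in the paper:

  # Question 1: which nation is the source of the observed detonation?
  # Assumes a 50/50 prior on the source and treats the raw success frequencies
  # (1/6 for North, 4/6 for South) as fixed detonation probabilities.
  p_detonate = {"North": 1 / 6, "South": 4 / 6}
  prior_source = {"North": 0.5, "South": 0.5}
  joint = {n: prior_source[n] * p_detonate[n] for n in p_detonate}
  total = sum(joint.values())
  print({n: joint[n] / total for n in joint})      # {'North': 0.2, 'South': 0.8}

  # Question 2: rough Monte Carlo sketch (NOT the Bayesian network in the paper).
  # Uniform Beta(1,1) priors on each nation's propensity, updated on the six past
  # attempts, then conditioned on one further detonation of unknown source.
  import random

  sum_n = sum_s = kept = 0
  for _ in range(200_000):
      theta_n = random.betavariate(1 + 1, 1 + 5)   # North: 1 success, 5 failures
      theta_s = random.betavariate(1 + 4, 1 + 2)   # South: 4 successes, 2 failures
      theta = theta_n if random.random() < 0.5 else theta_s   # unknown source, 50/50
      if random.random() < theta:                  # keep only runs where it detonates
          sum_n += theta_n
          sum_s += theta_s
          kept += 1
  print(sum_n / kept, sum_s / kept)                # posterior mean propensities

The Question 2 answer depends on the priors placed on the propensities, which is exactly why it is a much harder question than Question 1.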

Based on experiments involving 250 paid participants, we discovered two types of errors in the answers.
  1. There was a ‘double updating’ error: individuals appear to first use their prior beliefs to interpret the evidence, then use the interpreted form of the evidence, rather than the raw form, when updating. 
  2. We also found an error in which individuals convert from a probabilistic representation of the evidence to a categorical one and use this representation when updating. 
Both errors have the effect of exaggerating the evidence in favour of the solver’s prior belief and could lead to confirmation bias and polarisation. Given the importance of the class of problems to which the study applies, we believe that greater understanding of the cognitive processes underlying these errors is an important avenue for future study.

The full paper details and pdf (also available here):
Dewitt, S., Lagnado, D., & Fenton, N. E. (2018). "Updating Prior Beliefs Based on Ambiguous Evidence", CogSci 2018, Madison, Wisconsin, 25-28 July 2018.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under Contract [2017-16122000003]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Funding was also provided by the ERC project ERC-2013-AdG339182-BAYES_KNOWLEDGE and the Leverhulme Trust project RPG-2016-118 CAUSAL-DYNAMICS.

UPDATE: Stephen Dewitt presenting the paper in Madison:


Saturday, 14 July 2018

How to handle uncertain priors in Bayesian reasoning

In the classic simple Bayesian problem we have:
  • a hypothesis H (such as 'person has specific disease') with a prior probability (say 1 in 1000) and
  • evidence E (such as a test result which may be positive or negative for the disease) for which we know the probability of E given H (for example, the probability of a false positive is 5% and the probability of a false negative is 0%). 
With those particular values Bayes' theorem tells us that a randomly selected person who tests positive has a 1.96% probability of having the disease.
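For reference, that 1.96% figure drops straight out of Bayes' theorem; a minimal check in Python:

  # P(disease | positive test) with the stated point values.
  p_disease = 1 / 1000          # prior P(H)
  p_pos_given_disease = 1.0     # false negative rate is 0%
  p_pos_given_healthy = 0.05    # false positive rate is 5%

  p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
  print(p_pos_given_disease * p_disease / p_pos)   # 0.0196..., i.e. about 1.96%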

But what if there is uncertainty about the prior probabilities (i.e. the 1 in 1000, the 5% and the 0%)? Maybe the 5% means 'anywhere between 0 and 10%'. Maybe the 1 in 1000 means we only saw the disease once in 1000 people. This new technical report explains how to properly incorporate uncertainty about the priors using a Bayesian network.
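As a rough flavour of what such uncertainty does (this simple simulation is my own illustration, not the Bayesian network model developed in the report), suppose the false positive rate is uniform between 0 and 10% and the '1 in 1000' prior is itself uncertain because it comes from seeing one case in 1000 people:

  # Illustrative Monte Carlo only - not the Bayesian network in the report.
  import random

  results = []
  for _ in range(100_000):
      prevalence = random.betavariate(1 + 1, 1 + 999)   # 'one case seen in 1000 people'
      fp_rate = random.uniform(0.0, 0.10)               # 'anywhere between 0 and 10%'
      p_pos = prevalence + fp_rate * (1 - prevalence)   # false negative rate still 0%
      results.append(prevalence / p_pos)                # P(disease | positive test)
  results.sort()
  print(results[len(results) // 2])                     # median
  print(results[2_500], results[97_500])                # central 95% interval

The single 1.96% figure becomes a distribution, and it can be surprisingly wide.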


Fenton, N. E., "Handling Uncertain Priors in Basic Bayesian Reasoning", July 2018, DOI 10.13140/RG.2.2.16066.89280

Friday, 13 July 2018

How much do we trust academic 'experts'?


Queen Mary has released the following press release about our new paper: Osman, M., Fenton, N. E., Pilditch, T., Lagnado, D. A., & Neil, M. (2018). "Who do we trust on social policy interventions", to appear next week in the journal Basic and Applied Social Psychology (6/8/18 update: the published paper is here). The preprint of the paper is here. There are already a number of press reports on it (see below).

People trust scientific experts more than the government even when the evidence is outlandish


Members of the public in the UK and US have far greater trust in scientific experts than in the government, according to a new study by Queen Mary University of London. In three large-scale experiments, participants were asked to make several judgments about nudges - behavioural interventions designed to improve decisions in our day-to-day lives.

The nudges were introduced either by a group of leading scientific experts or a government working group consisting of special interest groups and policy makers. Some of the nudges were real and had been implemented, such as using catchy pictures in stairwells to encourage people to take the stairs, while others were fictitious and actually implausible, such as stirring coffee anti-clockwise for two minutes to avoid any cancerous effects.

The study, published in Basic and Applied Social Psychology, found that trust was higher for scientists than the government working group, even when the scientists were proposing fictitious nudges. Professor Norman Fenton, from Queen Mary’s School of Electronic Engineering and Computer Science, said: “While people judged genuine nudges as more plausible than fictitious nudges, people trusted some fictitious nudges proposed by scientists as more plausible than genuine nudges proposed by government. For example, people were more likely to trust the health benefits of coffee stirring than exercise if the former was recommended by scientists and the latter by government.”

The results also revealed that there was a slight tendency for the US sample to find the nudges more plausible and more ethical overall compared to the UK sample. Lead author Dr Magda Osman from Queen Mary’s School of Biological and Chemical Sciences, said: “In the context of debates regarding the loss of trust in experts, what we show is that in actual fact, when compared to a government working group, the public in the US and UK judge scientists very favourably, so much so that they show greater levels of trust even when the interventions that are being proposed are implausible and most likely ineffective. This means that the public still have a high degree of trust in experts, in particular, in this case, social scientists.” She added: “The evidence suggests that trust in scientists is high, but that the public are sceptical about nudges in which they might be manipulated without them knowing. They consider these less ethical, and trust the experts proposing them less, than nudges in which they do have an idea of what is going on.”

Nudges have become highly popular decision-support methods used by governments to help in a wide range of areas such as health, personal finances, and general wellbeing. The scientific claim is that, to help people make better decisions regarding their lifestyle choices and those that improve the welfare of the state, it is potentially effective to subtly change the framing of the decision-making context so that the option which maximises long-term future gains becomes more prominent. In essence the position adopted by nudge enthusiasts is that poor social outcomes are often the result of poor decision-making, and in order to address this, behavioural interventions such as nudges can be used to reduce the likelihood of poor decisions being made in the first place.

Dr Osman said: “Overall, the public make pretty sensible judgments, and what this shows is that people will scrutinize the information they are provided by experts, so long as they are given a means to do it. In other words, ask the questions in the right way, and people will show a level of scrutiny that is often not attributed to them. So, before there are strong claims made about public opinion about experts, and knee-jerk policy responses to this, it might be worth being a bit more careful about how the public are surveyed in the first place.”
Press reports:
  • The Daily Record: Stirred by science:

Tuesday, 3 July 2018

How Bayesian Networks are pioneering the ‘smart data’ revolution

The July issue of Open Access Government has a 2-page article summarising our recent research and tool developments on Bayesian networks. A high-res pdf of the article can be found here or here.



Thursday, 28 June 2018

Guilty Until Proven Innocent: The Crisis in Our Justice System



As mentioned in my previous posting, I was invited by Jon Robins (the Justice Gap) to speak at the third meeting of the All-Party Parliamentary Group on Miscarriages of Justice, hosted by Barry Sheerman MP, in the House of Commons on 25 June 2018. The meeting was based around the launch of Jon Robins' outstanding new book "Guilty Until Proven Innocent: The Crisis in Our Justice System". Other speakers were: Michael Mansfield QC and lawyer Matt Foot, who have been involved in many of the cases described in the book; Waney Squier, the world-renowned neuropathologist who suffered for being one of the few medical experts to question the mainstream medical guidelines on 'shaken baby syndrome'; Gloria Morrison, who spoke about the problems of Joint Enterprise relevant to some of the cases; and Liam Allan and Eddie Gilfoyle, who spoke about their own experiences (theirs are two of the cases discussed in the book). It was a very powerful and informative meeting which was very well attended (with many having to stand for the full two hours).

I have now written a detailed review of the book (also available here) which includes more about the House of Commons meeting. (Note: there are two errors in the researchgate version: Waney Squier is misspelt as "Squire" on page 2, and the chapter describing the Ben Geen case should be 9, not 8 as stated on page 1.)






Monday, 25 June 2018

On the Role of Statistics in Miscarriages of Justice


I have been invited by Jon Robins (the Justice Gap) to speak today at the third meeting of the All-Party Parliamentary Group on Miscarriages of Justice, hosted by Barry Sheerman MP, in the House of Commons. Jon Robins will be talking about his outstanding new book "Guilty Until Proven Innocent: The Crisis in Our Justice System" at the event. The book includes a description of the Ben Geen case for which I provided a report to the Criminal Cases Review Commission in 2015 showing that the sequence of 'unusual events' at the Horton General Hospital (where Ben Geen worked as a nurse) was not especially unusual.

My short talk today focuses on the role of statistics in miscarriages of justice. A transcript of the talk can be found here.

Norman Fenton


Friday, 22 June 2018

Bias in AI Algorithms


This is an update of a posting originally made on 18 Jan 2018 (see below for the update).

On 17 Jan 2018 multiple news sources (e.g. see here, here, and here) ran a story about a new research paper that claims to expose both the inaccuracies and racial bias in COMPAS - one of the most common algorithms used for parole and sentencing decisions to predict recidivism (i.e. whether or not a defendant will re-offend).

The research paper was written by the world-famous computer scientist Hany Farid (along with a student, Julia Dressel).

But the real story here is that the paper’s accusation of racial bias (specifically that the algorithm is biased against black people) is based on a fundamental misunderstanding of causation and statistics. The algorithm is no more ‘biased’ against black people than it is biased against white single parents, old people, people living in Beattyville, Kentucky, or women called ‘Amber’. In fact, as we show in this brief article, if you choose any factor that correlates with poverty you will inevitably replicate the statistical ‘bias’ claimed in the paper. And if you accept the validity of the claims in the paper then you must also accept, for example, that a charity which uses poverty as a factor to identify and help homeless people is being racist because it is biased against white people (and also, interestingly, Indian Americans).
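To see why, here is a toy simulation (my own illustration of the statistical point, not the analysis in our article): reoffending is driven purely by poverty, group A simply has a higher poverty rate than group B, and the 'algorithm' predicts reoffending from poverty alone, never seeing the group label. The false positive rate still comes out higher for group A.

  # Toy simulation: a poverty-only predictor produces different false positive
  # rates for two groups that differ only in their poverty rates.
  import random

  random.seed(0)
  false_pos = {"A": 0, "B": 0}
  non_reoffenders = {"A": 0, "B": 0}
  for _ in range(100_000):
      group = "A" if random.random() < 0.5 else "B"
      poor = random.random() < (0.6 if group == "A" else 0.3)   # only difference between groups
      reoffends = random.random() < (0.4 if poor else 0.1)      # same relationship for everyone
      predicted = poor                                          # 'algorithm' uses poverty only
      if not reoffends:
          non_reoffenders[group] += 1
          if predicted:
              false_pos[group] += 1
  print({g: false_pos[g] / non_reoffenders[g] for g in false_pos})
  # roughly {'A': 0.5, 'B': 0.22}: group A's false positive rate is higher even
  # though the predictor never used group membership at all.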

What is most important here is that the paper was published and that none of the media running the story realise they are pushing fake news. Depressingly, many similar research studies involving the same kind of misinterpretation of statistics result in popular media articles that push a false narrative of one kind or another.

22 June 2018 Update: It turns out that Microsoft is now "developing a tool to help engineers catch bias in algorithms". This article also cites the case of the COMPAS software:
 "...., which uses machine learning to predict whether a defendant will commit future crimes, was found to judge black defendants more harshly than white defendants." 
Interestingly, this latest news article about Microsoft does NOT refer to the 2018 Dressel and Farid article but, rather, to an earlier 2016 article by Larson et al: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm. From a quick inspection it does seem to be a more comprehensive study than the flawed Dressel and Farid article. But my quick impression is that the same fundamental misunderstandings of statistics/causality are there. Given the great degree of interest in AI/bias, and given also that we were unaware of the 2016 study, we plan to do an update to our unpublished paper.

Our article (5 pages): Fenton, N. E., & Neil, M. (2018). "Criminally Incompetent Academic Misinterpretation of Criminal Data - and how the Media Pushed the Fake News", http://dx.doi.org/10.13140/RG.2.2.32052.55680. Also available here.

The research paper: Dressel, J. & Farid, H. The accuracy, fairness, and limits of predicting recidivism. Sci. Adv. 4, eaao5580 (2018). 

Thanks to Scott McLachlan for the tip-off on this story.

See some previous articles on poor use of statistics: