Saturday, 14 July 2018

How to handle uncertain priors in Bayesian reasoning

In the classic simple Bayesian problem we have:
  • a hypothesis H (such as 'person has specific disease') with a prior probability (say 1 in 1000) and
  • evidence E (such as a test result which may be positive or negative for the disease) for which we know the probability of E given H (for example, the probability of a false positive is 5% and the probability of a false negative is 0%).
With those particular values Bayes' theorem tells us that a randomly selected person who tests positive has a 1.96% probability of having the disease.
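The calculation can be checked directly. This is a minimal sketch of the standard Bayes' theorem computation using the point values quoted above (the variable names are my own, chosen for illustration):

```python
# Point values from the example: prior P(H) = 1/1000,
# false positive rate P(E|not H) = 5%, false negative rate = 0%
# (so P(E|H) = 1).
p_h = 0.001             # prior probability of the disease
p_e_given_h = 1.0       # probability of a positive test given disease
p_e_given_not_h = 0.05  # probability of a positive test given no disease

# Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e

print(round(100 * p_h_given_e, 2))  # prints 1.96 (i.e. 1.96%)
```

The small posterior despite a positive test is the familiar base-rate effect: with a 1-in-1000 prior, the 5% false positives among the healthy majority swamp the true positives.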

But what if there is uncertainty about the prior probabilities themselves (i.e. the 1 in 1000, the 5% and the 0%)? Maybe the 5% means 'anywhere between 0 and 10%'. Maybe the 1 in 1000 means we only observed one case in 1000 people. This new technical report explains how to properly incorporate uncertainty about the priors using a Bayesian Network.
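To get a feel for why parameter uncertainty matters, one can replace the point values with distributions and average over them. The sketch below is a simple Monte Carlo illustration, not the Bayesian Network construction used in the report; the specific distribution choices (a Beta(1, 999) prior for the disease rate, a uniform 0-10% false positive rate) are my own assumptions for illustration:

```python
import random

# Monte Carlo illustration of uncertain priors (assumed distributions,
# NOT the report's Bayesian Network method):
#   disease prevalence: Beta(1, 999), loosely "one case seen in 1000 people"
#   false positive rate: Uniform(0, 0.10), i.e. "anywhere between 0 and 10%"
#   false negative rate: fixed at 0, as in the original example
random.seed(0)

posteriors = []
for _ in range(100_000):
    p_h = random.betavariate(1, 999)     # sampled prevalence
    fp = random.uniform(0.0, 0.10)       # sampled false positive rate
    p_e = 1.0 * p_h + fp * (1 - p_h)     # P(positive test)
    posteriors.append(1.0 * p_h / p_e)   # P(disease | positive test)

mean_posterior = sum(posteriors) / len(posteriors)
print(round(mean_posterior, 3))
```

The averaged posterior differs noticeably from the 1.96% point answer, because small sampled false positive rates push individual posteriors towards 1; this asymmetry is exactly the kind of effect that a proper treatment of uncertain priors has to account for.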


Fenton NE, "Handling Uncertain Priors in Basic Bayesian Reasoning", July 2018,  DOI 10.13140/RG.2.2.16066.89280

Friday, 13 July 2018

How much do we trust academic 'experts'?


Queen Mary has released the following press release about our new paper: Osman, M., Fenton, N. E., Pilditch, T., Lagnado, D. A., & Neil, M. (2018). "Who do we trust on social policy interventions", to appear next week in the journal Basic and Applied Social Psychology. The preprint of the paper is here. There are already a number of press reports on it (see below).

People trust scientific experts more than the government even when the evidence is outlandish


Members of the public in the UK and US have far greater trust in scientific experts than the government, according to a new study by Queen Mary University of London. In three large scale experiments, participants were asked to make several judgments about nudges - behavioural interventions designed to improve decisions in our day-to-day lives.

The nudges were introduced either by a group of leading scientific experts or a government working group consisting of special interest groups and policy makers. Some of the nudges were real and had been implemented, such as using catchy pictures in stairwells to encourage people to take the stairs, while others were fictitious and actually implausible, like stirring coffee anti-clockwise for two minutes to avoid any cancerous effects.

The study, published in Basic and Applied Social Psychology, found that trust was higher for scientists than the government working group, even when the scientists were proposing fictitious nudges. Professor Norman Fenton, from Queen Mary’s School of Electronic Engineering and Computer Science, said: “While people judged genuine nudges as more plausible than fictitious nudges, people trusted some fictitious nudges proposed by scientists as more plausible than genuine nudges proposed by government. For example, people were more likely to trust the health benefits of coffee stirring than exercise if the former was recommended by scientists and the latter by government.”

The results also revealed that there was a slight tendency for the US sample to find the nudges more plausible and more ethical overall compared to the UK sample. Lead author Dr Magda Osman from Queen Mary’s School of Biological and Chemical Sciences, said: “In the context of debates regarding the loss of trust in experts, what we show is that in actual fact, when compared to a government working group, the public in the US and UK judge scientists very favourably, so much so that they show greater levels of trust even when the interventions that are being proposed are implausible and most likely ineffective. This means that the public still have a high degree of trust in experts, in particular, in this case, social scientists.” She added: “The evidence suggests that trust in scientists is high, but that the public are sceptical about nudges in which they might be manipulated without them knowing. They consider these as less ethical, and trust the experts proposing them less, than nudges in which they do have an idea of what is going on.”

Nudges have become highly popular decision-support methods used by governments to help in a wide range of areas such as health, personal finances, and general wellbeing. The scientific claim is that to help people make better decisions regarding their lifestyle choices, and those that improve the welfare of the state, it is potentially effective to subtly change the framing of the decision-making context, which makes the option which maximises long term future gains more prominent. In essence the position adopted by nudge enthusiasts is that poor social outcomes are often the result of poor decision-making, and in order to address this, behavioural interventions such as nudges can be used to reduce the likelihood of poor decisions being made in the first place.

Dr Osman said: “Overall, the public make pretty sensible judgments, and what this shows is that people will scrutinize the information they are provided by experts, so long as they are given a means to do it. In other words, ask the questions in the right way, and people will show a level of scrutiny that is often not attributed to them. So, before there are strong claims made about public opinion about experts, and knee-jerk policy responses to this, it might be worth being a bit more careful about how the public are surveyed in the first place.”
Press reports:
  • The Daily Record: Stirred by science

Tuesday, 3 July 2018

How Bayesian Networks are pioneering the ‘smart data’ revolution

The July issue of Open Access Government has a 2-page article summarising our recent research and tool developments on Bayesian networks. A high-res pdf of the article can be found here or here.