Recent reports have highlighted the difficulties faced by the criminal justice system in adequately responding to the dramatic increase in the amount and complexity of forensic science, particularly given its (not infrequently) questionable value. Despite the growing consensus that the role of experts should be limited to making statements about the probability of their findings under competing hypotheses (instead of, for example, making categorical source attributions), and the ability of Bayes’ theorem to encapsulate the proper or normative effect of probabilistic evidence, Bayesian reasoning has been largely ignored or misunderstood by criminal justice professionals.
Proper use of probabilistic reasoning has the potential to dramatically improve the efficiency, transparency and fairness of the criminal justice system and the accuracy of its verdicts, by enabling the value of any given piece of evidence to be meaningfully evaluated and communicated. Bayesian reasoning employs the likelihood ratio (the probability of seeing the evidence given the prosecution hypothesis divided by the probability of seeing the evidence given the defence hypothesis) to capture the relevance and strength of each piece of evidence. Bayesian reasoning can therefore help the expert formulate accurate and informative opinions; help the court determine the admissibility of evidence; help identify which cases should and should not be pursued; and help lawyers explain, and jurors evaluate, the weight of evidence during a trial. It would also help identify error rates and unjustified assumptions entailed in expert opinions, which would in turn contribute to the transparency and legitimacy of the criminal justice process.
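To make this concrete, here is a minimal sketch of how a likelihood ratio combines with prior odds under Bayes' theorem. The numbers are purely illustrative assumptions, not figures from any real case:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    return odds / (1.0 + odds)

# Purely illustrative (hypothetical) numbers:
# P(evidence | prosecution hypothesis) = 0.99
# P(evidence | defence hypothesis)     = 0.01
lr = 0.99 / 0.01   # likelihood ratio = 99

# Suppose the rest of the evidence puts the prior odds on the prosecution
# hypothesis at 1 to 4 (i.e. prior probability 0.2).
prior = 1 / 4
post = posterior_odds(prior, lr)
print(f"Likelihood ratio: {lr:.0f}")
print(f"Posterior probability of prosecution hypothesis: {odds_to_probability(post):.2f}")
```

The same two-line calculation applies whatever the type of evidence; only the two conditional probabilities change, which is precisely why the likelihood ratio is such a useful common currency for experts and courts.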
Unfortunately, there is widespread disagreement about the kinds of evidence to which Bayesian reasoning should be applied and the manner in which it should be presented. Much of the disagreement over when it should be applied arises from fundamental misunderstandings about the way Bayesian reasoning works, whereas disagreement over how it should be presented could be resolved by empirical research. Misunderstandings in the criminal justice system are exacerbated by the fact that in the few areas where Bayesian reasoning has been applied (such as DNA profiling) its application has often been faulty and its ramifications poorly communicated. This has further resulted in widespread recourse to probabilistic fallacies in legal proceedings.
A dramatic and worrying example of this was a recent appeal court decision (R v T [2010] EWCA Crim 2439; see our draft article about this here) which appears to reject the use of Bayesian analysis and likelihood ratios for all but a very narrowly defined class of forensic evidence. Instead of being accepted as a standard tool of the forensic science trade, Bayesian analysis is perceived by much of the legal profession as an exotic, somewhat eccentric method to be wheeled out for occasional specialist appearances, whereupon a judge or lawyer will cast doubts on, and even ridicule, its integrity (hence ensuring it is kept firmly locked in the cupboard for more years to come).
Ultimately this represents a failure by the community of academics, expert witnesses and lawyers who understand the potentially crucial and wide role that can be played by Bayesian analysis and likelihood ratios in legal arguments. This failure must be attributed to our inability to communicate the core ideas effectively. Resorting to the formulas and calculations in court is a dead-end strategy since these will never be understood by most lawyers, judges and juries.
Thursday, 21 July 2011
Sunday, 17 July 2011
Using Bayes to prove Obama did not write his own book?
There have been many questions about the closeness of President Obama's relationship with Weather Underground terrorist Bill Ayers. A whole new angle on the relationship has been raised in Jack Cashill's book Deconstructing Obama. Using information in this book, Andre Lofthus has applied Bayes' theorem to conclude that Bill Ayers was actually the ghost writer of Obama's best-selling book Dreams from My Father.
Lofthus's analysis is based on a) a comparison of Dreams with one of Ayers's own books, Fugitive Days; and b) a comparison of Dreams with a different book, Sucker Punch, based on similar material to that of both Dreams and Fugitive Days.
Specifically, in a) there were 759 similarities, of which 180 were categorized by Cashill as "striking similarities", whereas in b) Cashill claims there were just six definite similarities, with a maximum of sixteen possible or definite similarities. As Lofthus's Bayesian analysis is not complete I have done my own analysis here. My own conclusions are not as definite. The evidence does indeed provide very strong support in favour of the books being written by the same author. However, if you have a strong prior belief that the books were written by different authors (say you are 99.9% sure) then, even after observing the evidence of 180 striking similarities, it turns out that (with what I believe are more reasonable assumptions than those made by Lofthus) there is still a better than 50% chance that the books were written by different authors.
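For readers who want to see why the prior matters so much, here is a simplified sketch of the odds-form calculation. The likelihood ratio of 500 is an assumed, purely illustrative value (my full analysis, linked above, discusses what values might actually be defensible):

```python
def posterior_prob_same_author(prior_same, likelihood_ratio):
    """Odds-form Bayes update for the 'same author' hypothesis."""
    prior_odds = prior_same / (1.0 - prior_same)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical likelihood ratio for the '180 striking similarities' evidence.
# The value 500 is purely illustrative; the real figure depends on how probable
# such similarities are for independent authors writing on similar material.
lr = 500

# A sceptic who starts out 99.9% sure the books had different authors:
prior_same = 0.001
p = posterior_prob_same_author(prior_same, lr)
print(f"Posterior P(same author) = {p:.2f}")   # about 0.33, i.e. still under 50%
```

Even an impressively large likelihood ratio can be outweighed by sufficiently strong prior odds, which is exactly the point of the analysis.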
Monday, 11 July 2011
An interesting Conference
I just got back from the International Centre for Comparative Criminological Research annual conference (programme is here), held at the Open University, Milton Keynes. There were some outstanding keynote speakers, such as Lord Justice Leveson, Prof John Hatchard, Prof Jim Fraser and Dr Itiel Dror, and the panel included Iain McKie (ex-policeman and campaigner on behalf of his daughter Shirley McKie, who was wrongly accused of leaving her fingerprint at a crime scene and lying about it). When I was originally invited to speak at the conference I was going to talk about the latest research we were doing in collaboration with David Lagnado at UCL on using Bayesian networks to help build complex legal arguments (dealing with things like alibi evidence, motive and opportunity). But that was before the R v T ruling and its potentially devastating impact on using Bayes in English courts (the analogy would have been like talking about differential equations after being told that you were not allowed to use addition and subtraction). So I ended up doing a presentation based on our draft paper addressing the R v T ruling (my slides are here). This turned out to be a good move because I think it also addressed some of the core recurrent themes of the conference. A full report is here.
Wednesday, 6 July 2011
Drug traces on banknotes: problems with the statistics in the legal arguments?
A colleague at our Law School has alerted us to a range of possible problems associated with evidence about drug traces in a number of cases. Most people are unaware of the extent to which drug traces are found on banknotes in circulation. As long ago as 1999 there were reports that 99% of banknotes in circulation in London were tainted with cocaine, and that one in 20 of the notes showed levels high enough to indicate they had been handled by dealers or used to snort the drug. Similar results have been reported for Euros in Germany. Nevertheless, in England and Wales (although apparently not in Scotland) drug trace evidence on banknotes has been used to help convict suspects of drug-related offences. One company in the UK specialises in analysing drug traces on banknotes and has provided evidence in many cases. Even if we ignore the intriguing question of why high levels of drugs on a person's banknotes suggest that they are a drug dealer, a quick look at some of the case materials and papers suggests that there may be some fundamental statistical flaws in the way this kind of evidence is presented. I have produced a very simplified version of the problem in these slides (which include relevant references at the end).
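To see why the background contamination rate matters, here is a highly simplified sketch. All of the probabilities are assumptions made up for illustration only; they are not taken from the case materials or from the company's data:

```python
# Illustrative sketch only: the numbers below are assumptions for the sake of
# the argument, not figures from any real case.

# Suppose 'high' contamination is found on a suspect's banknotes, and assume:
p_high_given_dealer = 0.80      # assumed: P(high contamination | drug dealer)
p_high_given_innocent = 0.05    # assumed: roughly the '1 in 20' background rate

likelihood_ratio = p_high_given_dealer / p_high_given_innocent   # = 16

# The prosecutor's fallacy would be to report "only a 5% chance the suspect is
# innocent". The correct statement is that the evidence is 16 times more probable
# if the suspect is a dealer than if not, and the final probability still depends
# on the prior. With an assumed prior of 1 in 1000:
prior_dealer = 0.001
prior_odds = prior_dealer / (1 - prior_dealer)
post_odds = prior_odds * likelihood_ratio
print(f"LR = {likelihood_ratio:.0f}")
print(f"P(dealer | evidence) = {post_odds / (1 + post_odds):.3f}")   # about 0.016
```

The point of the sketch is simply that when the background rate of contamination is high, the likelihood ratio for this evidence is modest, and transposing the conditional grossly overstates its strength.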
Friday, 1 July 2011
An introduction to using Bayes and causal modelling in decision making, uncertainty and risk
We have produced a new overview paper on Bayesian networks for risk assessment. It was actually an invited paper for a special issue of Upgrade, the journal of CEPIS (the Council of European Professional Informatics Societies).
I guess this means we can finally 'retire' our previous overview paper "Managing Risk in the Modern World" that we produced on behalf of the London Mathematical Society and Knowledge Transfer Network for Industrial Mathematics.
Marco Ramoni: a great Bayesian and a great guy
I only just discovered the terrible news that Marco died last June (2010), aged just 47. Marco was actually the first consultant we ever employed at Agena back in 1998. He was working at the Open University and had developed the Bayesian Knowledge Discovery tool with Paola Sebastiani (who we also worked with at the time at City University). We used Marco's expertise and tool on a project with a major insurance company. The last time I saw him was around 2000 or 2001, when I took him and Paola Sebastiani to a Turkish restaurant in North London to try to convince both of them to come and work at Queen Mary University of London, where I had just moved with my research group. Although they turned down my offer in favour of a move to the USA, I kept in touch with Marco until quite recently. I only found out about his death when I tried to contact him about a new project I wanted to involve him in.
Marco was a brilliant man and (unlike some academics) was also a pleasure to be around.
Why risk models used by financial analysts are fundamentally flawed
A letter I sent to the Financial Times, 2 March 2011:
John Kay's analysis of why the models used by financial analysts are fundamentally flawed when it comes to predicting rare events ("Don't blame luck when your models misfire", FT 1 March 2011) is correct but overly pessimistic, as he focuses only on 'traditional' statistical techniques that rely on relevant historical data. These flawed methods cannot accommodate even simple causal explanations that involve new risk factors for which no previous data has been accumulated. It is like trying to predict what happens to the surface area of a balloon as you puff into it by relying only on data from previous puffs of this balloon. If, after each puff, you measure the surface area and record it, then after, say, the 23rd puff you can create a statistical model showing how surface area increases with each puff, and you will then have a 'model' to predict what will happen after a further 20, 50 or 100 puffs. None of these predictions will tell you that the surface area will drop to zero as a result of the balloon bursting, because your model does not incorporate the basic causal knowledge.

Fortunately, and in contrast to the article's dire conclusions, there are formal modelling techniques that enable you to incorporate causal, subjective judgements about previously unseen risks and so predict rare events with some accuracy. We have been using such techniques - causal Bayesian networks - successfully in research and in practice for several years, in real applications ranging from transport accidents through to terrorist threats. We remain stunned by the financial markets' poor take-up of these methods, as opposed to those which have consistently proved not to work.
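For anyone who wants the balloon analogy made concrete, here is a toy sketch (all numbers invented) contrasting a purely data-driven extrapolation with a model that encodes one extra piece of causal knowledge, namely that balloons burst:

```python
import numpy as np

# Toy illustration of the balloon analogy; all numbers are invented.
# Observed data: surface area (cm^2) after each of the first 23 puffs.
puffs = np.arange(1, 24)
area = 5.0 * puffs + np.random.default_rng(1).normal(0, 1.0, size=puffs.size)

# A purely statistical model fitted to the history...
slope, intercept = np.polyfit(puffs, area, 1)
print(f"Statistical prediction after 100 puffs: {slope * 100 + intercept:.0f} cm^2")

# ...versus a (crude) causal model that encodes one extra piece of knowledge:
# beyond some capacity the balloon bursts and the surface area drops to zero.
BURST_CAPACITY = 200.0   # assumed causal knowledge, not learnable from the 23 puffs

def causal_prediction(n_puffs):
    predicted = slope * n_puffs + intercept
    return predicted if predicted < BURST_CAPACITY else 0.0

print(f"Causal prediction after 100 puffs: {causal_prediction(100):.0f} cm^2")
```

A full causal Bayesian network would of course represent the burst mechanism probabilistically rather than as a hard threshold, but even this crude sketch shows the difference between extrapolating history and modelling the mechanism.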