Thursday, 4 July 2019

Challenging claims that probability theory is incompatible with legal reasoning

 

The published version of our paper "Resolving the so-called 'probabilistic paradoxes in legal reasoning' with Bayesian networks" is available for free download courtesy of Elsevier until 16 Aug. This is the link: https://authors.elsevier.com/c/1ZIQf4q6IcgUdA

The previous blog posting about this article is here.

The full citation:
de Zoete, J., Fenton, N. E., Noguchi, T., & Lagnado, D. A. (2019). "Countering the ‘probabilistic paradoxes in legal reasoning’ with Bayesian networks". Science & Justice, 59(4), 367-379. https://doi.org/10.1016/j.scijus.2019.03.003

Friday, 14 June 2019

Review of clinical practice guidelines for gestational diabetes


Gestational diabetes is the most common metabolic disorder of pregnancy, and it is important that well-written clinical practice guidelines (CPGs) are used to optimise healthcare delivery and improve patient outcomes. This paper published today in BMJ Open is a review of such hospital-based CPGs. Seven CPGs met the criteria for inclusion in the review. Only two of these were considered to be of acceptable quality (one from the Canadian Diabetes Association and the other from the Auckland DHB, New Zealand).

Full reference citation:
Daley, B., Hitman, G., Fenton, N. E., & McLachlan, S. (2019). "Assessment of the methodological quality of local clinical practice guidelines on the identification and management of gestational diabetes". BMJ Open, 9(6), e027285. https://doi.org/10.1136/bmjopen-2018-027285. Full paper (pdf)
The work was funded by EPSRC as part of the PAMBAYESIAN project



Wednesday, 22 May 2019

Defining the dreaded 'prior probability of guilt' - a new paper that does just that


One of the greatest impediments to the use of probabilistic reasoning in legal arguments is the difficulty of agreeing on an appropriate prior probability that the defendant is guilty. The 'innocent until proven guilty' assumption technically means a prior probability of 0 - a figure that (by Bayesian reasoning) can never be overturned no matter how much evidence follows. Some have suggested the logical equivalent of 1/N, where N is the number of people in the world. But this probability is clearly too low, since N includes many people who could not physically have committed the crime. On the other hand, the often-suggested prior of 0.5 is too high, as it stacks the odds too heavily against the defendant.

Therefore, even strong supporters of a Bayesian approach tend to think they can, and must, avoid considering a prior probability of guilt (indeed, it is this thinking that explains the prominence of the 'likelihood ratio' approach discussed so often on this blog).

This new paper, published online in the OUP journal Law, Probability and Risk (and extending a previous paper presented at the 2017 International Conference on Artificial Intelligence and Law), shows that, in a large class of cases, it is possible to arrive at a realistic prior that is also as consistent as possible with the legal notion of ‘innocent until proven guilty’. The approach is based on first identifying the ‘smallest’ time window and location around the actual crime scene within which the defendant was definitely present, and then estimating the number of people - other than the suspect - who were also within that time/area. If there were n people in total, then before any other evidence is considered each person, including the suspect, has the same prior probability 1/n of having carried out the crime.

The method applies to cases where we assume a crime has definitely taken place and that it was committed by one person against one other person (e.g. murder, assault, robbery). The work considers both the practical and legal implications of the approach and demonstrates how the prior probability is naturally incorporated into a generic Bayesian network model that allows us to integrate other evidence about the case.
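The arithmetic behind the opportunity prior, and how such a prior combines with subsequent evidence, can be sketched in a few lines. This is a minimal illustration with made-up numbers (the paper itself uses a full Bayesian network model, not this bare calculation): the figure of 100 people and the likelihood ratio of 1000 are assumptions for illustration only.

```python
# Illustrative sketch of the "opportunity prior" idea: if n people
# (including the suspect) were within the relevant time/area, each
# starts with the same prior probability 1/n of being the offender.
# All numbers below are made up for illustration.

def opportunity_prior(n_people_in_area: int) -> float:
    """Equal prior probability of guilt for each person present."""
    return 1.0 / n_people_in_area

def posterior_odds_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(guilt) given evidence with the stated likelihood ratio
    P(evidence | guilty) / P(evidence | not guilty)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

n = 100                       # assumed: 100 people in the opportunity window
prior = opportunity_prior(n)  # 0.01 - non-zero, so evidence can overturn it
# Suppose forensic evidence with an (assumed) likelihood ratio of 1000 follows:
posterior = posterior_odds_update(prior, 1000.0)
print(round(prior, 4), round(posterior, 4))  # → 0.01 0.9099
```

The point of the sketch is that, unlike a prior of exactly 0, the opportunity prior is small but non-zero, so accumulating evidence can legitimately raise the probability of guilt.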

Full details:
Fenton, N. E., Lagnado, D. A., Dahlman, C., & Neil, M. (2019). "The Opportunity Prior: A proof-based prior for criminal cases". Law, Probability and Risk. https://doi.org/10.1093/lpr/mgz007

Monday, 13 May 2019

When 'absence of forensic evidence' is not 'neutral'


It is widely accepted that ‘evidence of absence’ (such as an alibi confirming that the defendant was not at the crime scene) is not the same as ‘absence of evidence’ (such as where there is no evidence about whether or not the defendant was at the crime scene).

However, for forensic evidence there is often confusion between these concepts. If DNA found at the crime scene does not match the defendant, is that ‘evidence of absence’ or ‘absence of evidence’? It depends, of course, on the circumstances. If there is a high probability that the DNA found must have come from the person who committed the crime, then this is clearly ‘evidence of absence’: the fact that it does not match the defendant is highly probative in favour of the defence. On the other hand, if the only DNA found at the crime scene is actually unrelated to the person who committed the crime, then this is clearly ‘absence of evidence’: the fact that it does not match the defendant is no more probative for the defence than for the prosecution (so the evidence is ‘neutral’). The problem is that lawyers and forensic scientists often wrongly assume that ‘evidence of absence’ is ‘neutral’.
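The difference between the two situations can be made concrete with a simple likelihood-ratio calculation. This is a hedged sketch, not the Bayesian network model in the report: it assumes that if the crime-scene DNA came from the offender and the defendant is guilty, a non-match is essentially impossible, and it uses an assumed random-match probability.

```python
def lr_no_match(p_dna_from_offender: float,
                random_match_prob: float = 1e-6) -> float:
    """Likelihood ratio P(no match | guilty) / P(no match | innocent)
    for the event 'crime-scene DNA does not match the defendant'.

    Assumptions (for illustration only): if the DNA is the offender's and
    the defendant is guilty, a non-match cannot occur; if the DNA is from
    an unrelated person, a non-match is near certain either way.
    """
    p = p_dna_from_offender
    p_no_match_given_guilty = (1 - p) * (1 - random_match_prob)
    p_no_match_given_innocent = 1 - random_match_prob
    return p_no_match_given_guilty / p_no_match_given_innocent

# DNA almost certainly the offender's: LR ≈ 0.01, strongly favours the defence
print(lr_no_match(0.99))
# DNA unrelated to the offender: LR ≈ 1, the non-match is neutral
print(lr_no_match(0.0))
```

A likelihood ratio well below 1 means the non-match supports the defence ('evidence of absence'); a ratio of about 1 means it is neutral ('absence of evidence').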

The full report (5 pages) includes a 'proof' (using a simple Bayesian network model) of how the experts get it wrong in a real example.
Fenton, N. E. (2019). "When ‘absence of forensic evidence’ is not ‘neutral’". https://doi.org/10.13140/RG.2.2.14517.73440

The Bayesian network model is available here. It can be run in the trial version of AgenaRisk.

Wednesday, 1 May 2019

House of Lords Report on Forensic Science and the Criminal Justice System


The House of Lords Report published today contains the following quote from me that was part of The Alan Turing Institute submission:



I said a lot more in the Turing submission about the use of probability and statistics in evidence, including concerns about low template DNA evidence and the possibility of using Bayesian networks to properly assess the overall impact of multiple pieces of related evidence.

Two other Queen Mary colleagues (Amber Marks and Ian Walden) also contributed to the Turing submission. 

For full details see:

  • House of Lords, The Science and Technology Select Committee "Forensic science and the criminal justice system: a blueprint for change" HL Paper 333, 1 May 2019, https://t.co/M6utVY8Z0b 
  • The Alan Turing Institute "Response to the House of Lords inquiry: Forensic Science in Criminal Justice", 13 September 2018, https://t.co/OBNeceVqhu
  • Fenton, N. E., Neil, M., & Berger, D. (2016). "Bayes and the Law". Annual Review of Statistics and Its Application, 3 (June), 51-77. https://doi.org/10.1146/annurev-statistics-041715-033428 (This is cited in both of the above reports. See also the blog posting about this article.)

Sunday, 31 March 2019

Modelling competing legal arguments using Bayesian networks

We have previously always tried to capture all of the competing hypotheses and evidence in a legal case in a single coherent Bayesian network model. But our new paper explains why this may not always be sensible and how to deal with it by using "competing" models. The full published version can be read here.


This work arose out of the highly successful Isaac Newton Institute Cambridge Programme on Probability and Statistics in Forensic Science.

Full reference:
Neil, M., Fenton, N. E., Lagnado, D. A., & Gill, R. (2019). "Modelling competing legal arguments using Bayesian Model Comparison and Averaging". Artificial Intelligence and Law. https://doi.org/10.1007/s10506-019-09250-3

Saturday, 16 March 2019

Hannah Fry’s “Hello World” and the Example of Algorithm Bias




“Hello World” is an excellent book by Hannah Fry that provides lay explanations of both the potential and the threats of AI and machine learning algorithms in the modern world. It is filled with many excellent examples, and one that is especially important appears in Chapter 3 (“Justice”), about the use of algorithms in the criminal justice system. The example demonstrates the extremely important point that there is an inevitable trade-off between ‘accuracy’ and ‘fairness’ when it comes to algorithms that make decisions about people.

While the overall thrust and conclusions of the example are correct, the need to keep any detailed maths out of the book might leave careful readers unconvinced that the example really demonstrates the stated conclusions. I feel it is important to get the details right because the issue of algorithmic fairness is of increasing importance for the future of AI, yet it is widely misunderstood.

I have therefore produced a short report that provides a fully worked explanation of the example. I explain what is missing from Hannah's presentation, namely any explicit calculation of the algorithm's false positive rates. I show how Bayes' theorem (together with some other assumptions) is needed to compute the false positive rates for men and women. I also show why and how a causal model of the problem (namely a Bayesian network model) makes everything much clearer.
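To see in miniature how Bayes' theorem yields false positive rates from an algorithm's predictive values, here is an illustrative calculation. The predictive values and base rates below are made up for this sketch; they are not the figures used in the book or in my report.

```python
def false_positive_rate(base_rate: float, ppv: float, npv: float) -> float:
    """Derive P(flagged | will not reoffend) via Bayes' theorem, given a
    group's base rate of reoffending and the algorithm's positive and
    negative predictive values (assumed equal across groups)."""
    # Fraction flagged, from the law of total probability:
    #   base_rate = P(flag) * PPV + (1 - P(flag)) * (1 - NPV)
    p_flag = (base_rate - (1 - npv)) / (ppv - (1 - npv))
    false_positives = p_flag * (1 - ppv)      # flagged but will not reoffend
    return false_positives / (1 - base_rate)  # among all non-reoffenders

# Made-up numbers: identical PPV/NPV for both groups (so the algorithm is
# equally 'accurate' for each), but different base rates of reoffending.
print(round(false_positive_rate(0.40, 0.8, 0.8), 3))  # group A: 0.111
print(round(false_positive_rate(0.25, 0.8, 0.8), 3))  # group B: 0.022
```

Even though both groups get the same predictive accuracy, the group with the higher base rate suffers a much higher false positive rate, which is the accuracy/fairness trade-off the example illustrates.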

Fry, H. (2018). "Hello World: How to Be Human in the Age of the Machine". New York: W. W. Norton & Company, Inc.

My report:
Fenton, N. E. (2019). "Hannah Fry’s ‘Hello World’ and the Example of Algorithm Bias". https://doi.org/10.13140/RG.2.2.14339.55844
A pdf of the report is also available here