Thursday, 4 February 2016

Problems with the Likelihood Ratio method for determining probative value of evidence: the need for exhaustive hypotheses

Norman Fenton, 4 Feb 2016

I have written several times before about the likelihood ratio (LR) method that is recommended for use by forensic scientists when presenting evidence (such as the fact that DNA collected at a crime scene is found to have a profile that matches the DNA profile of a defendant in a case). In general the LR is a very good and simple method for communicating the impact of evidence (in this case on the hypothesis that the defendant was at the crime scene), but its correct use is based on strict assumptions that have been routinely ignored by forensic experts and statisticians. This leads to the very kind of confusion and misunderstanding (when presented to lawyers and juries) that it was supposed to help avoid. The papers [1] and [2] provide an in-depth analysis of the problems. In this short article I will highlight just one of these problems, which invalidates the use of the LR. Subsequent articles will focus on the other problems and issues.

To recap: The LR is the probability of finding the evidence E if the prosecution hypothesis Hp is true (formally we write this as 'Probability of E given Hp') divided by the probability of finding the evidence E if the defence hypothesis Hd is true (formally we write this as  'probability of E given Hd').

So, to compute the LR, the forensic expert is forced to consider the probability of finding the evidence under both the prosecution and defence hypotheses.  This is a very good thing to do because it helps to avoid common errors of communication that can mislead lawyers and juries (notably the prosecutor's fallacy). Even more importantly, the LR is a measure of the probative value of the evidence because:
  • when the LR is greater than one the evidence supports the prosecution hypothesis (increasingly for larger values); 
  • when the LR is less than one it supports the defence hypothesis (increasingly as the LR gets closer to zero); 
  • when the LR is equal to one the evidence supports neither hypothesis and so is 'neutral'. In such cases, since the evidence has no probative value, lawyers and forensic experts believe it should not be admissible.
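The arithmetic behind these bullet points can be made concrete with a minimal sketch. The numbers below (a 1-in-10,000 random match probability and prior odds of 1 in 1,000) are purely illustrative assumptions, not from any real case:

```python
def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """LR = P(E given Hp) / P(E given Hd)."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR.
    This only 'works' when Hd is the negation of Hp."""
    return prior_odds * lr

# Illustrative: a match is certain under Hp, and has an assumed
# 1-in-10,000 random match probability under Hd.
lr = likelihood_ratio(1.0, 1 / 10_000)
print(round(lr))                            # LR of 10,000: strongly supports Hp

# With assumed prior odds of 1:1000 against Hp, the posterior odds
# become 10:1 in favour of Hp.
print(round(posterior_odds(1 / 1000, lr), 6))
```

An LR above one multiplies the odds up (supporting Hp), an LR below one multiplies them down (supporting Hd), and an LR of exactly one leaves the prior odds unchanged, which is why such evidence is called 'neutral'.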
However, as explained in [1] and [2], because of Bayes' Theorem, for the LR to 'work' as a measure of probative value, the two hypotheses considered must be mutually exclusive and exhaustive. This means that the defence hypothesis Hd must simply be the negation of the prosecution hypothesis Hp. So, for example, if Hp is "Defendant was at the crime scene" then Hd must be "Defendant was not at the crime scene". Now, while there is more or less unanimity within the statistics and forensics field that the hypotheses must be mutually exclusive in order for the LR to be used, there is no such unanimity about the hypotheses being exhaustive. Indeed, the Royal Statistical Society Practitioner Guide to Case Assessment and Interpretation of Expert Evidence [3] (page 32) specifies that the LR requires two mutually exclusive but not necessarily exhaustive hypotheses (which, interestingly, contradicts what is stated in the earlier guide by the same group [4], page 96). To see why incorrect conclusions may be drawn when the hypotheses are not exhaustive, we consider a very simple example:

Fred is the defendant for a crime. The main evidence against Fred is that his DNA profile is found to match a DNA sample found at the scene of the crime (for simplicity we ignore the possibility of errors in the DNA match). The DNA profile is of a type that is found in only 1 in 10,000 people. However, Fred has an identical twin brother, Joe. Using the following:
  • Prosecution hypothesis Hp:  "Fred is the source of the DNA"
  • Defence hypothesis Hd: "Joe is the source of the DNA"
and
  • Evidence E: "the DNA found matches Fred's profile"
The defence reasons (correctly, using the likelihood ratio approach) that the evidence E has no probative value with respect to the above two hypotheses, because the twins have the same DNA profile, i.e.
P(E given Hp) = P(E given Hd) = 1.
Hence, the defence demands the evidence is withdrawn because it is 'neutral'.

The problem here is that, even if we assume the hypotheses are mutually exclusive (i.e. we exclude the possibility that both the twins committed the crime) they are certainly NOT exhaustive. The correct defence hypothesis in this case should be "Fred is NOT the source of the DNA". This is made up of two cases:
  • Hd: "Joe is the source of the DNA"
  • Ho: "Another person (not Fred or Joe) is the source of the DNA"
If we assume - before any evidence is known - that Hp, Hd and Ho are equally likely then the impact of observing the evidence is certainly NOT neutral - it is probative in favour of the prosecution hypothesis as can be shown from running the calculations in a Bayesian network tool:


The probability of Hp increases from 33% to just under 50%.
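The same result can be checked without a Bayesian network tool by applying Bayes' theorem directly to the three exhaustive hypotheses. This sketch uses the figures stated above (equal priors of 1/3 and a 1-in-10,000 random match probability):

```python
# Three mutually exclusive and exhaustive hypotheses with equal priors.
priors = {'Hp': 1/3, 'Hd': 1/3, 'Ho': 1/3}

# Fred and his identical twin Joe share the profile, so a match is certain
# under Hp and Hd; under Ho (another person) it has probability 1 in 10,000.
likelihoods = {'Hp': 1.0, 'Hd': 1.0, 'Ho': 1 / 10_000}

p_e = sum(priors[h] * likelihoods[h] for h in priors)            # P(E)
posteriors = {h: priors[h] * likelihoods[h] / p_e for h in priors}

print(posteriors['Hp'])   # just under 0.5 (about 0.49998), up from 1/3
```

So the 'neutral' evidence raises the probability of the prosecution hypothesis from 33% to just under 50%, essentially because it all but eliminates Ho.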

But the supposedly 'neutral' evidence can have an even more dramatic impact in practice. Suppose, for example, that Joe has an alibi that is considered pretty reliable. Then this might reduce our prior belief in Hd (that Joe is the source) to 2%. In this case the before and after probabilities are:

The belief in the prosecution hypothesis in this case has shifted to above 95% - possibly sufficient for a jury to be convinced it is the truth.
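This second calculation can be reproduced in the same way. Note that splitting the remaining 98% of prior probability equally between Hp and Ho is my assumption for illustration; the text above does not specify the split:

```python
# Joe's alibi lowers the prior on Hd to 2%; the remaining 98% is split
# equally between Hp and Ho (an illustrative assumption).
priors = {'Hp': 0.49, 'Hd': 0.02, 'Ho': 0.49}
likelihoods = {'Hp': 1.0, 'Hd': 1.0, 'Ho': 1 / 10_000}

p_e = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / p_e for h in priors}

print(posteriors['Hp'])   # about 0.96 - above 95%
```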

If the DNA evidence in the above example was a non-match then the LR approach using the original hypotheses is even more obviously flawed because in this case:
        P(E given Hp) = P(E given Hd) = 0
But the evidence is certainly anything but 'neutral' because, after observing the evidence, the prosecution hypothesis Hp must be false (as must Hd).

While the example above is obviously simplistic and contrived, more realistic examples are provided in [1], which also highlights this very problem in the case of Barry George (convicted, and subsequently acquitted, of the murder of TV celebrity Jill Dando after an appeal ruled that the gunpowder residue evidence presented in the original trial was inadmissible in a re-trial, on the basis that it had a LR equal to one and so had 'no probative value').

References
  1. Fenton, N. E., D. Berger, D. Lagnado, M. Neil and A. Hsu, (2014). "When ‘neutral’ evidence still has probative value (with implications from the Barry George Case)", Science and Justice, 54(4), 274-287, http://dx.doi.org/10.1016/j.scijus.2013.07.002. A pre-publication draft of the article can be found here.
  2. Fenton N.E, Neil M, Berger D, “Bayes and the Law”, Annual Review of Statistics and Its Application, Volume 3, 2016 to appear. Pre-publication version here
  3. Jackson, G., Aitken, C., & Roberts, P. (2015). PRACTITIONER GUIDE NO 4: Case Assessment and Interpretation of Expert Evidence. Royal Statistical Society.  Available here.
  4. Aitken, C., Roberts, P., & Jackson, G. (2010). PRACTITIONER GUIDE NO 1: "Fundamentals of Probability and Statistical Evidence in Criminal Proceedings: Guidance for Judges, Lawyers, Forensic Scientists and Expert Witnesses". Royal Statistical Society. Available here.

Thursday, 28 January 2016

Misleading DNA evidence and the current damaged winning lottery ticket story


Norman Fenton, 28 January 2016

This post is primarily about how DNA match evidence is often presented in a way that is highly misleading (it is an important issue in an ongoing case I'm involved with). But in order to illustrate the point it turns out that we can use a simple analogy based loosely on the current lottery story that is getting a lot of media attention in the UK. This concerns an unverified £33 million winning ticket from a recent draw. About 200 people are claiming to have bought the (single) winning ticket but, until today*, none had actually provided proof of possessing such a ticket. The claim of one - Miss Susan Hinte - is the one that has grabbed media attention because she has produced a ticket in which key identifying information cannot be read because, she claims, the ticket was put through a washing machine.

But first let's look at the DNA issue, which is concerned with the following generic problem:
  • The prosecution claims that defendant Joe was at the crime scene. This hypothesis is denoted as Hp.
  • A tiny trace of DNA from the crime scene has been analysed and found to match the profile of Joe. This evidence (of the match) is denoted E.
Typically the defence will argue that Joe was not at the crime scene and that any DNA matching Joe - especially as it was a tiny trace - got there through secondary transfer or other means. So the defence hypothesis Hd is simply the negation of Hp.

The DNA experts have correctly recognised that, in determining the probative value of the evidence E,  they have to use the ‘likelihood ratio’ approach [1]. This means they have to consider both of the following probabilities:
  1. The probability that E is the result of the prosecution hypothesis Hp being true  - formally we write this as P(E given Hp)
  2. The probability that E is the result of the defence hypothesis Hd being true - formally we write this as P(E given Hd)
If probability 1 is greater than probability 2 then the evidence E supports Hp over Hd, and vice versa. The likelihood ratio is simply probability 1 divided by probability 2, and provides a simple and compelling measure of the probative value of evidence. If the ratio is greater than one the evidence E supports Hp, with higher values indicating stronger support. If the ratio is less than one the evidence E supports Hd, with smaller values indicating stronger support.

However, for reasons explained in [1], this whole notion of probative value is not meaningful if the defence hypothesis Hd is not the negation of the prosecution hypothesis Hp. One of the common errors made by DNA experts is to replace Hd with a different hypothesis, namely Hd': "DNA from Joe got there by secondary transfer". In this case Hd' excludes other possibilities of observing E even though Joe was not at the crime scene (such as errors or contamination during the DNA testing, or the DNA belonging to a different person with the same profile), and it is not even mutually exclusive of Hp, since Joe may have been at the crime scene even though the trace sample got there through secondary transfer. But, while this common error is serious, it is not the real concern I wish to raise here. In fact, let's suppose that no such error is made and that the expert considers the correct Hd.

The real concern is how a jury member reacts when the DNA expert now makes the following assertions:
  1. “The findings are what I would have expected if Hp were true.” i.e. P(E given Hp) is very high
  2. “The findings are considerably more likely to have been the result of Hp rather than Hd” i.e. P(E given Hp) is much higher than P(E given Hd)
Notwithstanding the redundancy of statement 1, these assertions sound very important and suggest very strong support for the prosecution hypothesis, especially as most people would already have assumed (wrongly) that the DNA 'match' means the trace certainly belongs to Joe.

But to demonstrate how misleading they are I will return now to the lottery example. For simplicity I will assume the old 6-ball lottery with 49 numbers. Suppose the winning numbers were:
1, 7, 21, 28, 40, 46

Mrs Smith has a damaged ticket that she claims has the winning numbers. The evidence E is that the first number (which is the only number clearly visible) is 1.

Our hypotheses are:
  • Hp: “Mrs Smith's ticket is the winning ticket”
  • Hd: “Mrs Smith's ticket is not the winning ticket”
In this case we know the following:
  • P(E given Hp) = 1  (it is certain that the first number on the ticket would be 1 if it was the winning ticket)
  • P(E given Hd) = 0.122 (this is the proportion of non-winning tickets that have 1 as the first number) 
So we could certainly make exactly the same assertions in this case as the DNA experts above:
  1. “The findings are what I would have expected if Hp were true.” (since the probability of E given Hp is 1)
  2. “The findings are considerably more likely to have been the result of Hp rather than Hd” (since 1 is considerably greater than 0.122).
However, despite these (correct) assertions it is almost certain that Hd rather than Hp is true - Mrs Smith's ticket is not the winning ticket. In fact, the probability of Hp being true is less than one in 1.7 million (because there are over 1.7 million non-winning combinations in which the first number is 1).
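The numbers in the lottery example can be verified with simple combinatorics. This is a sketch in which the evidence "the first (only visible) number is 1" is read as "the ticket contains the number 1":

```python
from math import comb

total = comb(49, 6)      # all possible 6-from-49 tickets: 13,983,816
with_one = comb(48, 5)   # tickets containing the number 1: 1,712,304

# P(E given Hd): the proportion of the non-winning tickets that contain a 1
# (the winning ticket itself contains a 1, so subtract it from both counts).
p_e_given_hd = (with_one - 1) / (total - 1)
print(round(p_e_given_hd, 3))   # 0.122

# The LR is 1/0.122, about 8.2, so E genuinely supports Hp. Yet with a
# uniform prior over the tickets containing a 1, P(Hp given E) is tiny:
print(1 / with_one)             # about 5.8e-07, i.e. less than 1 in 1.7 million
```

This is exactly the point of the example: an LR of about 8 is probative, but it is nowhere near enough to overturn prior odds of more than 1.7 million to one against Hp.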

So what is the moral of this story? The likelihood ratio of the evidence might often suggest the evidence is highly probative in favour of one of the hypotheses, but if the prior probability of the alternative hypothesis was much higher to start with then the evidence will not ‘overturn’ the prior belief in favour of the alternative.

Lay people ignore this in connection with DNA evidence. Because the random match probability associated with a DNA match is typically less than one in a billion, the very fact that the evidence E is a "DNA match" already puts into their mind the notion that this 'must tie the defendant to the crime scene'. But the random match probability is almost irrelevant in this case - it accounts for only a tiny proportion of P(E given Hd). Lay people can also easily be tricked into believing that the (redundant) assertion 1 “The findings are what I would have expected if Hp were true” provides additional weight to assertion 2.

Unfortunately, this type of evidence is increasingly prejudicing juries and, I believe, leading to serious miscarriages of justice.

*The real winner has now been found, and since their ticket was not damaged it cannot have been Miss Hinte's.

[1] Fenton, N. E., D. Berger, D. Lagnado,  M. Neil and A. Hsu, (2014). "When ‘neutral’ evidence still has probative value (with implications from the Barry George Case)",  Science and Justice, 54(4), 274-287 http://dx.doi.org/10.1016/j.scijus.2013.07.002. (pre-publication draft here)

Tuesday, 8 December 2015

Norman Fenton at Maths in Action Day (Warwick University)

Today Norman Fenton was one of the five presenters at the Mathematics in Action Day at Warwick University - the others included writer and broadcaster Simon Singh and BBC presenter Steve Mould (who is also part of the amazing trio Festival of the Spoken Nerd which features Queen Mary's Matt Parker). The Maths in Action day is specifically targeted at A-Level Maths students and their teachers.

Norman says:
This was probably the biggest live event I have spoken at - an audience of 550 in the massive Butterworth Hall (which has recently hosted Paul Weller and the Style Council, Jools Holland) - so it was quite intimidating. My talk was on "Fallacies of Probability and Risk" (the powerpoint slides are here). I hope to get some photos of the event uploaded shortly.
Butterworth Hall (hopefully some real photos from the event to come)

Friday, 27 November 2015

Another international award for the BBC documentary co-presented by Norman Fenton


Earlier this month I reported that the BBC documentary "Climate Change by Numbers" (that I co-presented) won the American Association for the Advancement of Science (AAAS) Science Journalism Gold Award for "best in-depth TV reporting".

Now the programme has won another prestigious award: the European Science TV and New Media Award for the best Science programme on an environmental issue, 2015.

The new award (see photo below) was presented to BBC Executive Director Jonathan Renouf at a ceremony in Lisbon on 25 November 2015. Jonathan thanked the team involved in the programme, saying:


"I'm absolutely delighted to see the film gain such widespread international recognition. It really is a tribute to the way you managed to bring fresh television insight to a very well trodden subject, and to do it in a way that was genuinely entertaining as well as so innovative. Everyone I've spoken to out here is so impressed with the film. Thank you again for all your hard work, passion and commitment in making the show."

The programme has also recently been screened on TV in a number of other countries. Here is a comprehensive review that appeared in Le Monde.

The European Science TV and New Media Award





Wednesday, 11 November 2015

BBC Documentary co-presented by Norman Fenton wins AAAS Science Journalism Gold Award for "best in-depth TV reporting"


1 Dec Update: the programme has now won another award.

In March I reported on my experience of presenting the BBC documentary "Climate Change by Numbers". The programme has won the American Association for the Advancement of Science (AAAS) Science Journalism Gold Award for "best in-depth TV reporting". The summary citation says:
The Gold Award for in-depth television reporting went to a BBC team for a documentary that used clever analogies and appealing graphics to discuss three key numbers that help clarify important questions about the scale and pace of human influence on climate. The program featured a trio of mathematicians who use numbers to reveal patterns in data, assess risk, and help predict the future.
Jonathan Renouf, Executive Producer at BBC Science, said (to those involved in the making of the programme):
It’s a huge honour to win this award; it’s a global competition, open to programmes in every area of science, and it’s judged by science journalists. I can’t think of a finer and more prestigious endorsement of the research and journalistic rigour that you brought to bear in the film. We all know how difficult it is to make programmes about climate change that tread the line between entertainment, saying something new, and keeping the story journalistically watertight. I’m really thrilled to see your efforts recognised in top scientific circles.
Full details of the awards can be found on the AAAS website.

Friday, 6 November 2015

Update on the use of Bayes in the Netherlands Appeal Court


In July I reported about the so-called Breda 6 case in the Netherlands and how a Bayesian argument was presented in the review of the case. My own view was that the Bayesian argument was crying out for a Bayesian network representation (I provided a model in my article to do that).

Now Richard Gill has told me the following:
Finally there has been a verdict in the 'Breda 6' case. The suspects were (again) found guilty. The court is somewhat mixed with respect to the Bayesian analysis: on the one hand they ruled that Frans Alkemade had the required expertise, and that he was rightly appointed as a 'Bayesian expert'. On the other hand they ruled that a Bayesian analysis is still too controversial to be used in court. Therefore they disregarded 'the conclusion' of Frans's report. This is a remarkable and unusual formulation in verdicts; the normal wording is that the report has been disregarded.
This unusual wording is no accident: if the court said that they had disregarded the report, that would be untrue, since actually quite a lot of the Bayesian reasoning is included in their judgment. A number of considerations from Frans's report are fully paraphrased, and sometimes quoted almost verbatim.
Also I noticed that the assessment of certain findings is expressed in a nicely Bayesian manner.
However, contrary to Frans's assessment, the court still thinks that the original confessions of three of the suspects contain strong evidence. Unfortunately, the case is not yet closed, but has been taken to the high court.
Frans Alkemade has also been appointed as a Bayesian expert in yet another criminal case.

The ruling that the Bayesian analysis is too controversial is especially disappointing since we have recently been in workshops with Dutch judges who are very keen to use Bayesian reasoning - and even Bayesian networks (in the Netherlands there are no juries so the judges really do have to make the decisions themselves). These judges - along with Frans Alkemade - will be among many of the world's top lawyers, legal scholars, forensic scientists, and mathematicians participating in the Isaac Newton Institute Cambridge Programme on Probability and Statistics in Forensic Science that will take place July-Dec 2016. This is a programme that I have organised along with David Lagnado, David Balding, Richard Gill and Leila Schneps. It derives from our Bayes and the Law consortium which states that, despite the obvious benefits of using Bayes:

The use of Bayesian reasoning in investigative and evaluative forensic science and the law is, however, the subject of much confusion. It is deployed in the adduction of DNA evidence, but expert witnesses and lawyers struggle to articulate the underlying assumptions and results of Bayesian reasoning in a way that is understandable to lay people. The extent to which Bayesian reasoning could benefit the justice system by being deployed more widely, and how it is best presented, is unclear and requires clarification.
One of the core objectives of the 6-month programme is to address this issue thoroughly. Within the programme there are three scheduled workshops:
  1. "The nature of questions arising in court that can be addressed via probability and statistical methods", Tue 30th Aug 2016
  2. "Bayesian networks in evidence analysis", Mon 26th Sep 2016 - Thurs 29th Sep 2016
  3. "Statistical methods in DNA analysis and analysis of trace evidence", Mon 7th Nov 2016

Monday, 26 October 2015

Cyber security risk of nuclear facilities using Bayesian networks



Scientists from Korea (Jinsoo Shin, Hanseong Son, Rahman Khalilur, and Gyunyoung Heo) have published an article describing their Bayesian network model for assessing the cyber security risk of nuclear facilities (using the AgenaRisk tool). It is based on combining two models: one is process-based (considering how well security procedures were followed) and the other is architecture-based (considering system vulnerabilities and controls). The full paper is here:

Shin, J., Son, H., Khalil ur, R., & Heo, G. (2015). Development of a cyber security risk model using Bayesian networks. Reliability Engineering & System Safety, 134, 208–217. doi:10.1016/j.ress.2014.10.006