Thursday, 26 March 2015

The risk of flying


I have just done an interview on BBC Radio Scotland about aircraft safety in the light of the Germanwings crash, which now appears to have been a deliberate act of sabotage by the co-pilot*. I have uploaded a (not very good) recording of it here (mp3 file, just under 4 minutes) or here (a more compact m4a file).

Because this type of event is so rare, classical frequentist statistics provides no real help when it comes to risk assessment. In fact, it is exactly the kind of risk assessment problem for which you need causal models and expert judgement (as explained in our book) if you want any kind of risk insight.

Irrespective of this particular incident, the interview gave me the opportunity to highlight a very common myth, namely that “flying is the safest form of travel”. If you look at deaths per million travellers then, indeed, there are 50 times as many car deaths as plane deaths. However, this is a silly measure because there are so many more car travellers than plane travellers. So, typically, analysts use deaths per million miles travelled; by this measure car travel is still 'riskier' than air travel, but its death rate is only about twice that of air travel. Yet even this measure is biased in favour of planes, because the average plane journey is much longer than the average car journey.

So a much fairer measure is the number of deaths per passenger journey. And by this measure the rate of plane deaths is actually three times higher than that of car deaths; in fact, only bikes and motorbikes are worse than planes.
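To make the effect of the denominator concrete, here is a short calculation. The figures below are invented purely so that the ratios match those quoted above (car deaths per mile roughly twice those of planes, plane deaths per journey roughly three times those of cars); they are not real accident statistics:

```python
# Hypothetical figures, chosen only to reproduce the ratios in the text.
modes = {
    #         deaths         journeys (millions)   miles (millions)
    "car":   {"deaths": 2000, "journeys": 20000, "miles": 400000},
    "plane": {"deaths": 30,   "journeys": 100,   "miles": 12000},
}

for mode, d in modes.items():
    per_journey = d["deaths"] / d["journeys"]  # deaths per million journeys
    per_mile = d["deaths"] / d["miles"]        # deaths per million miles
    print(f"{mode}: {per_journey:.4f} deaths/million journeys, "
          f"{per_mile:.4f} deaths/million miles")
# car: 0.1000 deaths/million journeys, 0.0050 deaths/million miles
# plane: 0.3000 deaths/million journeys, 0.0025 deaths/million miles
```

With these made-up numbers, planes are twice as safe per mile yet three times riskier per journey: the same deaths, ranked oppositely depending on the denominator.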

Despite all this, there is still a very low probability of a plane journey resulting in fatalities - about 1 in half a million (and much lower on commercial flights in Western Europe). However, if we have reason to believe that, say, recent converts to a terrorist ideology have been training and becoming pilots, then the probability of the next plane journey resulting in fatalities becomes much higher, despite the past data.

*I had an hour’s notice of the interview and was told what I would be asked.  I was actually not expecting to be asked about how to assess the risk of this specific type of incident;  I was assuming I would only be asked about aircraft safety risk in general and about the safety record of the A320.

Tuesday, 24 March 2015

The problem with big data and machine learning


The advent of ‘big data’, coupled with fancy statistical machine learning techniques, is increasingly seducing people into believing that new insights and better predictions can be achieved in a wide range of important applications without relying on the input of domain experts. The applications range from learning how to retain customers through to learning what makes people susceptible to particular diseases. I have written before about the dangers of this kind of 'learning' from data alone (no matter how 'big' the data is).

Contrary to the narrative being sold by the big data community, if you want accurate predictions and improved decision-making then, invariably, you need to incorporate human knowledge and judgment. This enables you to build rational causal models based on 'smart' data. The main objections to using human knowledge - that it is subjective and difficult to acquire - are, of course, key drivers of the big data movement. But this movement underestimates the typically very high costs of collecting, managing and analysing big data. So the sub-optimal outputs you get from pure machine learning do not even come cheap.

To clarify the dangers of relying on big data and machine learning, and to show how smart data and causal modelling (using Bayesian networks) give you better results, I have collected together the following short stories and examples:
The whole subject of 'smart data' rather than 'big data' is also the focus of the research project BAYES-KNOWLEDGE.

Thursday, 26 February 2015

The Statistics of Climate Change

From left to right: Norman Fenton, Hannah Fry, David Spiegelhalter. Link to the Programme's BBC website
(This is a cross posting of the article here)

I had the pleasure of being one of the three presenters of the BBC documentary “Climate Change by Numbers”, first screened on BBC4 on 2 March 2015.

The motivation for the programme was to take a new look at the climate change debate by focusing on three key numbers that all come from the most recent IPCC report. The numbers were:
  • 0.85 degrees - the amount of warming the planet has undergone since 1880
  • 95% - the degree of certainty climate scientists have that at least half the warming in the last 60 years is man-made
  • one trillion tonnes - the cumulative amount of carbon that can be burnt, ever, if the planet is to stay below ‘dangerous levels’ of climate change
The idea was to get mathematicians/statisticians who had not been involved in the climate change debate to explain in lay terms how and why climate scientists had arrived at these three numbers. The other two presenters were Dr Hannah Fry (UCL) and Prof Sir David Spiegelhalter (Cambridge) and we were each assigned approximately 25 minutes on one of the numbers. My number was 95%.

Being neither a climate scientist nor a classical statistician (my research uses Bayesian probability rather than classical statistics to reason about uncertainty) I have to say that I found the complexity of the climate models and their underlying assumptions to be daunting. The relevant sections in the IPCC report are extremely difficult to understand and they use assumptions and techniques that are very different to the Bayesian approach I am used to. In our Bayesian approach we build causal models that combine prior expert knowledge with data. 

In attempting to understand and explain how the climate scientists had arrived at their 95% figure I used a football analogy - both because of my lifetime interest in football and because, along with my colleagues Anthony Constantinou and Martin Neil, I have worked extensively on models for football prediction. The climate scientists had performed what is called an “attribution study” to understand the extent to which different factors - such as human CO2 emissions - contributed to changing temperatures. The football analogy was to understand the extent to which different factors contributed to the changing success of premiership football teams, as measured by the total number of points they achieved season by season. In contrast to our normal Bayesian approach - but consistent with what the climate scientists did - we used data and classical statistical methods to generate a model of success in terms of the various factors. Unlike the climate models, which involve thousands of variables, we had to restrict ourselves to a very small number of variables (due to a combination of time limitations and lack of data). Specifically, for each team and each year we considered:
  • Wages (this was the single financial figure we used)
  • Total days of player injuries
  • Manager experience
  • Squad experience
  • Number of new players
The statistical model generated from these factors produced, for most teams, a good fit to success over the years for which we had the data. Our ‘attribution study’ showed that wages was by far the major influence. When wages was removed from the study, the resulting statistical model was no longer a good fit. This was analogous to what the climate scientists’ models showed when the human CO2 emissions factor was removed from their models: the previously good fit to temperature was no longer evident. And, analogous to the climate scientists’ 95% derived from their models, we were able to conclude there was a 95% chance that an increase in turnover of 10 per cent would result in at least one extra premiership point. (Update: note that this was a massive simplification to make the analogy. I am certainly not claiming that increasing wages causes an increase in points. If I had had the time I would have explained that in a proper model - like the Bayesian networks we have previously built - wages offered is one of the many factors influencing the quality of players that can be bought which, in turn, along with other factors, influences performance.)
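For readers curious about the general shape of such an attribution study, here is a minimal sketch using ordinary least squares on synthetic data. Every number, distribution, and factor weight below is an invented assumption for illustration; this is not the model built for the programme, which used actual premiership data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # invented team-seasons

# Synthetic explanatory factors (all values are assumptions)
wages = rng.normal(50, 15, n)          # wage bill
injuries = rng.normal(300, 80, n)      # total days of player injuries
manager_exp = rng.normal(8, 4, n)      # manager experience (years)
new_players = rng.integers(2, 12, n)   # number of new players

# Points driven mainly by wages, mimicking the attribution finding
points = (0.9 * wages - 0.02 * injuries + 0.5 * manager_exp
          - 0.3 * new_players + rng.normal(0, 4, n))

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

full = r_squared(np.column_stack([wages, injuries, manager_exp, new_players]), points)
no_wages = r_squared(np.column_stack([injuries, manager_exp, new_players]), points)
print(f"R^2 with wages: {full:.2f}; without wages: {no_wages:.2f}")
```

Removing the dominant factor collapses the fit, which is the same qualitative behaviour as removing the CO2 emissions factor from the climate models.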

Obviously there was no time in the programme to explain either the details or the limitations of my hastily put-together football attribution study and I will no doubt receive criticism for it (I am preparing a detailed analysis).  But the programme also did not have the time or scope to address the complexity of some of the broader statistical issues involved in the climate debate (including issues that lead some climate scientists to claim the 95% figure is underestimated and others to believe it is overestimated). In particular, the issues that were not covered were:
  • The real probabilistic meaning of the 95% figure. In fact it comes from a classical hypothesis test in which observed data are used to test the credibility of the ‘null hypothesis’. The null hypothesis is the ‘opposite’ of the statement believed to be true, i.e. ‘Less than half the warming in the last 60 years is man-made’. If, as in this case, there is only a 5% probability of observing the data if the null hypothesis is true, statisticians equate this figure (called a p-value) to a 95% confidence that we can reject the null hypothesis. But the probability here is a statement about the data given the hypothesis. It is not generally the same as the probability of the hypothesis given the data (equating the two is often referred to as the ‘prosecutor's fallacy’, since it is an error often made by lawyers when interpreting statistical evidence). See here and here for more on the limitations of p-values and confidence intervals.
  • Any real details of the underlying statistical methods and assumptions. For example, there has been controversy about the way a method called principal component analysis was used to create the famous hockey stick graph that appeared in previous IPCC reports. Although the problems with that method were recognised it is not obvious how or if they have been avoided in the most recent analyses.
  • Assumptions about the accuracy of historical temperatures. Much of the climate debate (such as that concerning whether the recent rate of temperature increase is exceptional) depends on assumptions about historical temperatures dating back thousands of years. There has been some debate about whether sufficiently large uncertainty ranges were used.
  • Variety and choice of models. There are many common assumptions in all of the climate models used by the IPCC and it has been argued that there are alternative models not considered by the IPCC which provide an equally good fit to climate data, but which do not support the same conclusions.
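The gap between the two probabilities in the first point can be made concrete with a small Bayes'-theorem calculation. The three input numbers below are arbitrary assumptions, chosen only to show that a p-value of 5% need not mean a 5% probability that the null hypothesis is true:

```python
# P(data | H0) is not P(H0 | data): a direct Bayes'-theorem check.
# All three input numbers are illustrative assumptions.

p_data_given_h0 = 0.05   # the classical p-value
p_data_given_h1 = 0.50   # likelihood of the same data under the alternative
prior_h0 = 0.50          # assumed prior probability of the null hypothesis

# Bayes' theorem
posterior_h0 = (p_data_given_h0 * prior_h0) / (
    p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0))

print(f"P(data | H0) = {p_data_given_h0:.2f}")   # 0.05
print(f"P(H0 | data) = {posterior_h0:.3f}")      # 0.091, not 0.05
```

Here the posterior probability of the null hypothesis is about 9%, nearly twice the p-value, and with a different prior it could be larger still; the two quantities simply measure different things.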
Although I obviously have a bias, my enduring impression from working on the programme is that the scientific discussion about the statistics of climate change would benefit from a more extensive Bayesian approach. Recently some researchers have started to do this, but it is an area where I feel causal Bayesian network models could shed further light and this is something that I would strongly recommend.

Acknowledgements: I would like to thank the BBC team (especially Jonathan Renouf, Alex Freeman, Eileen Inkson, and Gwenan Edwards) for their professionalism, support, encouragement, and training; and my colleagues Martin Neil and Anthony Constantinou for their technical support and advice. 

My fee for presenting the programme has been donated to the charity Magen David Adom.

Watching the programme as it is screened

Wednesday, 18 February 2015

Climate Change Statistics

From left to right: Norman Fenton, Hannah Fry, David Spiegelhalter. Link to the Programme's BBC website
Please see Update here.

I am presenting a documentary called "Climate Change by Numbers" - to be screened at 9.00pm on BBC4 on Monday 2 March 2015. The trailer is here. This is the BBC Press Release about the programme:
BBC Four explores the science behind three key climate change statistics

In a special film for BBC Four, three mathematicians will explore three key statistics linked to climate change.

In Climate Change by Numbers, Dr Hannah Fry, Prof Norman Fenton and Prof David Spiegelhalter home in on three numbers that lie at the heart of science’s current struggle to get a handle on the precise processes and impact of global climate change.

Prof Norman Fenton said: “My work on this programme has revealed the massive complexity of climate models and the novel challenges this poses for making statistical predictions from them.”

The three numbers are:

-       0.85 degrees - the amount of warming the planet has undergone since 1880
-       95% - the degree of certainty climate scientists have that at least half the recent warming is man-made
-       one trillion tonnes - the cumulative amount of carbon that can be burnt, ever, if the planet is to stay below ‘dangerous levels’ of climate change

All three numbers come from the most recent set of reports from the Intergovernmental Panel on Climate Change.

Prof David Spiegelhalter said: “It's been eye-opening to find out what these important numbers are actually based on.”
In this programme, the three scientists unpack the history of these three numbers: where did they come from? How have they been measured? How confident can we be in their accuracy? In their journeys they drill into the very heart of how science itself works, from data collection through testing theories and making predictions, giving us a unique perspective on the past, present and future of our changing climate.

Cassian Harrison, Channel Editor BBC Four, said: “This 75-minute special takes a whole new perspective on the issue of climate change. It puts aside the politics to concentrate on the science. It offers no definitive answers, but it does show the extraordinary achievements and the challenges still facing scientists who are attempting to get a definitive answer to what are perhaps the biggest scientific questions currently facing mankind.”

Executive Producer Jonathan Renouf said: “Who would have thought there’d be a link between the navigation system used to put men on the moon, and the way scientists work out how much the planet is warming up? It’s been great fun to come at climate change from a fresh angle, and discover stories that I don’t think anyone will have heard before.”
 



Saturday, 15 November 2014

Ben Geen: another possible case of miscarriage of justice and misunderstanding statistics?


Imagine if you asked people to roll eight dice to see if they can 'hit the jackpot' by rolling 8 out of 8 sixes. The chance of this is about 1 in 1.7 million (6^8 = 1,679,616). So if you saw somebody - let's call him Fred - who has a history of 'trouble with authority' getting a jackpot, then you might be convinced that Fred is somehow cheating or the dice are loaded. It would be easy to make a convincing case against Fred just on the basis of the unlikeliness of him getting the jackpot by chance, together with his problematic history.

But now imagine Fred was just one of the 60 million people in the UK who all had a go at rolling the dice. It would actually be extremely unlikely if fewer than 25 of them hit the jackpot with fair dice (and without cheating) - the expected number is about 36. In any set of 25 people it is also extremely unlikely that there will not be at least one person who has a history of 'trouble with authority'. In fact you are likely to find something worse, since about 10 million people in the UK have criminal convictions, meaning that in a random set of 25 people there are likely to be about 4 with some criminal conviction.
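The arithmetic here is easy to check. A short calculation gives both the single-jackpot probability and, via a Poisson approximation for the number of jackpots among 60 million independent rollers, the probability of seeing fewer than 25:

```python
from math import exp, factorial

p_jackpot = (1 / 6) ** 8   # probability of 8 sixes from 8 fair dice
population = 60_000_000

expected = population * p_jackpot
print(f"Chance of a jackpot: 1 in {1 / p_jackpot:,.0f}")      # 1 in 1,679,616
print(f"Expected jackpots among 60 million: {expected:.1f}")  # 35.7

# Poisson approximation to the number of jackpots
lam = expected
p_fewer_than_25 = sum(exp(-lam) * lam**k / factorial(k) for k in range(25))
print(f"P(fewer than 25 jackpots): {p_fewer_than_25:.3f}")
```

With these numbers the chance of seeing fewer than 25 jackpots comes out at only a few per cent, so a couple of dozen 'suspicious' jackpot winners is the norm, not an anomaly.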

So the fact that you find a character like Fred rolling 8 out of 8 sixes purely by chance is actually almost inevitable. There is nothing to see here and nothing to investigate. As we showed in Section 4.6.3 of our book (or in the examples here), many events which people think of as 'almost impossible' or 'unbelievable' are in fact routine and inevitable.

Now, instead of thinking about 'clusters' of sixes rolled from dice, think about clusters of patient deaths in hospitals. Just as Fred got his cluster of sixes, if you look hard enough it is inevitable you will find some nurses associated with abnormally high numbers of patient deaths. In Holland a nurse called Lucia de Berk was wrongly convicted of multiple murders as a result of people initially reading too much into such statistics (and then also getting the relevant probability calculations wrong). There have been other similar cases, and as my colleague Richard Gill explains so well, it seems that Ben Geen may also have been the victim of such misunderstandings.

See also: Justice for Ben Geen

Update: See Richard Gill's excellent comments below

Update 16 Feb 2015: A Guardian article discusses my statement made to the Criminal Cases Review Board.

How to measure anything


Douglas Hubbard (left) and Norman Fenton in London 15 Nov 2014.
If you want to know how to use measurement to reduce risk and uncertainty in a wide range of business applications, then there is no better book than Douglas Hubbard's "How to Measure Anything: Finding the Value of Intangibles in Business" (now in its 3rd edition). Douglas is also the author of the excellent "The Failure of Risk Management: Why It's Broken and How to Fix It".

Anyone who has read our Bayesian Networks book or the latest (3rd edition) of my Software Metrics book (the one I gave Douglas in the above picture!) will know how much his work has influenced us recently.

Although we have previously communicated about technical issues by email, today I had the pleasure of meeting Douglas for the first time when we met for lunch in London. We discussed numerous topics of mutual interest, including the problems with classical hypothesis testing - and how Bayes provides a better alternative - and evolving work on the 'value of information', which enables you to identify where to focus your measurement to optimise your decision-making.

Friday, 14 November 2014

An even more blatant case of plagiarism of our Bayesian Networks work


Spot the difference in our article plagiarised by Daniel and Etuk
Last year I reported the blatant case of plagiarism whereby one of our papers was 'rewritten' by Milan Tuba and Dusan Bulatovic and published in the Journal WSEAS Transactions on Computers. At least Tuba and Bulatovic made an attempt to cover up the plagiarism by inserting our work into a paper with a very different title and with some additional material.

In the latest case (discovered thanks to a tip-off by Emilia Mendes) the 'authors' Matthias Daniel and Ette Harrison Etuk of the Dept. of Mathematics and Computer Science, Rivers State University of Science and Technology, Nigeria, have made no attempt to cover up their plagiarism other than rearranging the words in the title and in a small number of other places. So, for example, "reliability and defects" has been replaced by "defects and reliability" at the end of the abstract. The only other difference is that our Acknowledgements have been removed.
Here is the full pdf of our published original paper whose full title and reference is:
Fenton, N.E., Neil, M., and Marquez, D., "Using Bayesian Networks to Predict Software Defects and Reliability". Proceedings of the Institution of Mechanical Engineers, Part O, Journal of Risk and Reliability, 2008. 222(O4): p. 701-712, 10.1243/1748006XJRR161: 

Here is the full pdf of the plagiarised paper whose full title and reference is: Matthias Daniel, Ette Harrison Etuk, "Predicting Software Reliability and Defects Using Bayesian Networks", European Journal of Computer Science and Information Technology Vol.2, No.1, pp.30-44, March 2014
What is really worrying is that Emilia came across the 'new' paper when doing some Google searches on using BNs for defect prediction; it was one of the top results listed!
We are waiting for a response from the European Journal of Computer Science and Information Technology.

Incidentally, the new third edition of my book Software Metrics: A Rigorous and Practical Approach has just been published and (for the first time) it covers the use of Bayesian networks for software reliability and defect prediction.