Saturday, 16 March 2019

Hannah Fry’s “Hello World” and the Example of Algorithm Bias

“Hello World” is an excellent book by Hannah Fry that gives lay readers a clear account of both the potential and the threats of AI and machine-learning algorithms in the modern world. It is filled with excellent examples, and an especially important one appears in Chapter 3 (“Justice”), on the use of algorithms in the criminal justice system. The example demonstrates the crucial point that there is an inevitable trade-off between ‘accuracy’ and ‘fairness’ when algorithms make decisions about people.

While the overall thrust and conclusions of the example are correct, the need to keep any detailed maths out of the book may leave careful readers unconvinced that the example really demonstrates the stated conclusions. I feel it is important to get the details right because the issue of algorithmic fairness is of increasing importance for the future of AI, yet it is widely misunderstood.

I have therefore produced a short report that provides a fully worked explanation of the example. I explain what is missing from Hannah's presentation, namely any explicit calculation of the algorithm's false positive rates. I show how Bayes' theorem (together with some other assumptions) is needed to compute the false positive rates for men and women, and I also show why and how a causal model of the problem (namely a Bayesian network model) makes everything much clearer.
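To see why Bayes' theorem is needed, here is a minimal sketch in Python. The numbers are invented for illustration and are not taken from the book or my report; the point is only that an algorithm with identical sensitivity and positive predictive value for two groups can still have very different false positive rates, because the false positive rate also depends on each group's base rate:

```python
# Illustrative sketch (all numbers are invented, not from Fry's book):
# the false positive rate cannot be read off from an algorithm's
# accuracy figures alone -- it must be derived, via Bayes' theorem,
# from the group's base rate.

def false_positive_rate(base_rate, sensitivity, ppv):
    """P(flagged high risk | did not reoffend), derived from:
       base_rate    = P(reoffend)
       sensitivity  = P(flagged | reoffend)
       ppv          = P(reoffend | flagged)  (positive predictive value)
    """
    # Bayes' theorem rearranged: P(flagged) = P(flagged|R) * P(R) / P(R|flagged)
    p_flagged = sensitivity * base_rate / ppv
    # Probability of being flagged AND not reoffending
    p_flagged_and_innocent = p_flagged * (1 - ppv)
    # Condition on not reoffending
    return p_flagged_and_innocent / (1 - base_rate)

# Same sensitivity (70%) and same predictive value (70%) for both groups,
# but different base rates give very different false positive rates:
for group, base_rate in [("group A", 0.5), ("group B", 0.2)]:
    fpr = false_positive_rate(base_rate, sensitivity=0.7, ppv=0.7)
    print(f"{group}: base rate {base_rate:.0%} -> false positive rate {fpr:.1%}")
```

With these assumed figures, group A's false positive rate works out to 30% while group B's is 7.5%, even though the algorithm is "equally accurate" for both groups in the predictive-value sense. This is the kind of calculation that the report makes explicit.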

Fry, H. (2018). "Hello World: How to Be Human in the Age of the Machine". New York: W. W. Norton & Company, Inc.

My report:
Fenton, N. E. (2019). "Hannah Fry's 'Hello World' and the Example of Algorithm Bias", DOI: 10.13140/RG.2.2.14339.55844
A PDF of the report is also available here