Monday, 23 November 2020

Latest UK COVID-19 stats roundup

Updated 28 November 2020

Using only the data from today (but taking account of numbers tested) here is an update of the key trend plots.

First we have the new 'cases' (i.e. those testing positive from a PCR test) as a proportion of the number of tests (this covers all PCR tests conducted in laboratories together with lateral flow device tests):

So the proportion of positive tests continues to fall, but it is important to see this article for why there may be multiple explanations for previous increases.

Consequently, we are now starting to see a divergence between 'cases' (i.e. positive tests) and number of tests, as seen in this plot over the last 6 weeks (note the different linear scales used for each, as shown; moving 7-day averages with a 2-day delay on cases).

Next, the proportion of 'cases' which lead to hospital admissions (but, as usual, note that any person admitted to hospital who has a positive PCR test within 14 days prior to admission or within 28 days after admission is classified as a COVID admission irrespective of their reason for being in hospital). 


Next we have the current case fatality rate (proportion of people currently dying having tested positive any time up to 28 days prior to death; so, again as with hospital admissions, these are people with 'COVID19'). This has been steady at around 2% for some time.


And finally the cumulative 'death rate', i.e. the proportion of all people who have died having tested positive any time up to 28 days prior to death. This continues to drop.

As usual all the caveats discussed previously apply.  This includes the fact that the government data actually tells us very little about what we really need to know; for example, none of the above tell us what the real death rate or infection rate is - as explained in the following two graphics:

And of course the fact that we are still waiting for the evidence needed to determine whether the 'benefits' of lockdowns exceed the 'costs':


Finally here is a summary of the data/evidence we really need to determine if the 'benefits' of lockdowns exceed the 'costs':

Oh and here is an interesting comparison between the UK and Sweden which suggests lockdowns do not reduce deaths.

See also

Friday, 13 November 2020

Statistical analyses attempting to determine election fraud: the need for a causal framework

Norman Fenton and Martin Neil

UPDATED 19 Nov 2020

There has been much discussion about whether statistical analysis alone can establish if there was fraud in the US election. While the claim that Benford's Law provides proof of fraud is easily dismissed (see below), there are certainly other examples of statistical anomalies and sampling anomalies that cannot be so easily dismissed. Any self-respecting statistician should be looking at these - it is curious that almost no academic statisticians are. If there is no causal explanation (such as fraud, or something else like systemic bugs in the Dominion counting software) for the many observed statistical anomalies, then statisticians would have to suspend their belief in much of classical statistics. Consider, for example, the following data from Justin Hart that tracks votes in Wisconsin and Michigan respectively:

Similarly, in all of the key swing states (as shown below) Trump was well ahead when the counting was (unusually) stopped in the early hours of 4 Nov, with most of the votes counted; the fact that he ended up losing each of these states means that the remaining votes cannot have come (in a purely statistical sense) from the 'same population'. It is argued that it was indeed a 'different' population, comprising primarily postal votes heavily favouring Biden; but the question statisticians need to address is whether there was also a difference between the population of early postal votes (which were already counted) and the population of late postal votes sufficient to explain the reversed outcome in each case.


Other claims to consider:

  1. The claim that the first digits of the total votes for Biden in Chicago districts defy Benford’s Law and hence that the counts must be fraudulent. This video by Matt Parker provides a description of Benford’s Law along with an explanation of why it is not relevant in this case. In other words, the ‘statistical analysis’ based on Benford’s Law does not establish fraud.
  2. The claim that large batches of postal votes (in one case a batch of over 23,000) all of which were votes for Biden must represent fraud because it is a statistical impossibility otherwise. This claim only works if ‘fraud’ or ‘luck’ were the only possible causal explanations for 23,000 consecutive votes all cast for Biden. If these were the only possible causal explanations then this would certainly prove fraud, even if the ballots came from a district where, say, 90% of people really are Biden supporters. The probability that all 23,000 votes would be for Biden purely by chance, given that each vote has a 90% probability of being for Biden, is 0.9 to the power of 23,000. That is a number much smaller than 1 divided by the number of atoms in the observable universe. People saying it is ‘as likely as being struck by lightning’ are massively understating how unlikely it would be; it is more like being struck by lightning on several consecutive days. However, the argument is flawed if there is another plausible causal explanation for the batch contents other than fraud and luck. For example, it may be possible that these ballots were part of a set that had already been counted and sorted. Or, perhaps, this was a deliberate hoax or set-up. So, the focus needs to be on whether any of these alternative explanations is feasible rather than on the statistical analysis. The statistical analysis only proves that the batch cannot have come from a random set of ballots.
  3. The claim that an unusually high pattern of people voting 'Republican but not Trump' compared to 'straight Republican' in districts in key swing states proves fraud. Assuming the data here are accurate, there could still be causal explanations other than fraud, including the possibility that these are the areas where the Republicans are 'never Trumpers'.
  4. The claim that the sudden large swings to Biden which started happening in key swing states after the counting stopped at 3.00am on election night (as in this analysis by an anonymous data scientist) prove fraud. Assuming the data here are accurate, this does indeed look like convincing evidence of fraud. However, because it is at a State level, there could still be a causal explanation other than fraud. For example, it may be possible that large numbers of ballots that came in late were primarily from Biden-supporting areas.
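The scale of the number in point 2 above can be checked with a few lines of code. A minimal sketch, using the 90% figure from the text; it has to work in log space because the probability itself underflows ordinary double-precision arithmetic:

```python
import math

# Figures from the text: 23,000 consecutive ballots, each assumed to
# have a 90% chance of being a vote for Biden.
p = 0.9
n = 23_000

# 0.9 ** 23000 underflows to 0.0 in double precision,
# so work with the base-10 logarithm instead.
log10_prob = n * math.log10(p)
print(f"log10 of the probability: {log10_prob:.0f}")  # about -1052

# The observable universe contains roughly 10**80 atoms, so this
# probability is indeed far smaller than 1 / (number of atoms).
print(log10_prob < -80)  # True
```

So the probability is around 10 to the power of minus 1052, vastly beyond any everyday notion of 'unlikely'.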

If there really was fraud, then (ignoring the possibility of automated counting machine fraud) the simplest and most efficient way of identifying it statistically would be a variation of what was done by the anonymous data scientist above, but at a much more local/granular level and focused only on postal ballots. In other words, districts sufficiently small that there is less chance of a systematic or random interference in the natural process by which ballots are collected (no mail sorting, no sorting at the centre into for/against bundles etc., i.e. the “draws” come as naturally as possible on a per-household basis). The more granular we get, the closer we are to detecting anomalies that are not explainable by anything other than fraud. If there is some model of causal interference, then the normal and hypothesised abnormal processes need to be tested against each other, i.e. against patterns from previous elections.

We hypothesise that districts with a total of no more than 5,000 postal votes may be a suitable level of granularity to analyse. In other words, at this level of granularity there seems to be no reasonable causal explanation for the distribution of votes in postal ballots counted before and after the 3.00am ‘cut off’ point on election night to be significantly different.

So, let’s consider a hypothetical example of how we would undertake the necessary analysis if we had the relevant district-level data. Consider a district with, say, 4,000 postal ballots. Suppose that 2,000 ballots are counted before the cut-off and candidate A has, say, 55% of these (i.e. 1,100). Using a Bayesian analysis** (which assumes that the ‘true proportion’ of people favouring candidate A before we see any votes cast can be anything between 0 and 100%), observing the 1,100 votes for A out of 2,000 means that we can update the ‘true proportion’ of people favouring A as shown in Figure 1. Specifically, the true proportion is still quite uncertain – but there is only a 5% chance it is less than 53.2% and only a 5% chance it is more than 56.8%.
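The update described here is a standard Beta-Binomial one: a uniform Beta(1, 1) prior combined with 1,100 votes for A out of 2,000 (under the Binomial assumption in the footnote) gives a Beta(1101, 901) posterior. A quick sketch of the quantile calculation, approximated by Monte Carlo so that it needs only the Python standard library (a package such as scipy would give the quantiles exactly):

```python
import random

random.seed(0)

# Posterior for the 'true proportion' favouring candidate A:
# uniform Beta(1, 1) prior updated with 1,100 votes for A out of 2,000
# gives a Beta(1101, 901) posterior.
alpha, beta_ = 1 + 1100, 1 + 900

# Approximate the 5% and 95% posterior quantiles by sorting a large
# sample of posterior draws (random.betavariate is in the standard library).
samples = sorted(random.betavariate(alpha, beta_) for _ in range(100_000))
q05 = samples[int(0.05 * len(samples))]
q95 = samples[int(0.95 * len(samples))]
print(f"5% quantile:  {100 * q05:.1f}%")   # about 53.2%
print(f"95% quantile: {100 * q95:.1f}%")   # about 56.8%
```

This reproduces the 53.2% and 56.8% figures quoted above.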

As also shown in the Figure, we can use this revised probability distribution of the true proportion of A voters to predict the expected number of ballots for A out of the 2,000 counted after the cut-off – since we are assuming that these come from the same population of voters. This also enables us to calculate how unlikely any observed ‘swing’ is.

Figure 1 Bayesian analysis

For example, we can calculate the probability that there will be a swing of more than 5% between the before and after proportions of votes for A (i.e. at least 1,200 votes after, meaning 60% after compared to 55% before) as shown in Figure 1. The probability is extremely low (0.078%, i.e. less than 1 in 1,000). Even a swing of just 2% in favour of A is unlikely (roughly a 10% probability).
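Under the same assumptions, these swing probabilities can be approximated by sampling: draw a 'true proportion' from the Beta(1101, 901) posterior, then draw a predictive count for the 2,000 ballots counted after the cut-off. A sketch (using a normal approximation to each Binomial draw, which is accurate for counts this large and avoids non-stdlib dependencies):

```python
import math
import random

random.seed(1)

N_AFTER = 2_000     # ballots counted after the cut-off
trials = 100_000

count_5pct = 0      # at least 1,200 of 2,000 (a 5% swing: 60% vs 55%)
count_2pct = 0      # at least 1,140 of 2,000 (a 2% swing: 57% vs 55%)
for _ in range(trials):
    # Draw a 'true proportion' from the Beta(1101, 901) posterior
    p = random.betavariate(1101, 901)
    # Approximate the Binomial(2000, p) count with a normal draw
    k = random.gauss(N_AFTER * p, math.sqrt(N_AFTER * p * (1 - p)))
    if k >= 1200:
        count_5pct += 1
    if k >= 1140:
        count_2pct += 1

print(f"P(swing >= 5%): {100 * count_5pct / trials:.3f}%")  # well under 0.2%
print(f"P(swing >= 2%): {100 * count_2pct / trials:.1f}%")  # roughly 10%
```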

Now, assuming we have the before and after ballot count data for a large number of districts in the same state – say 100 – then if there is just one district with a swing larger than 5% to candidate A, this would not be so unusual that it cannot have happened by chance. There is a probability of about 7% that at least one of the 100 districts would have such a swing without some other causal explanation.

If, however, 4 out of the 100 had swings of more than 5% and all were in the direction of candidate A then this would also be so unlikely (about 1 in a million probability) that it would almost certainly require some other causal explanation. The same applies if there is even a relatively small number of instances of smaller swings all in the same direction. For example, if there are 10 swings above 2% which are all in the direction of candidate A then this would also be so unlikely (about 1 in 100,000 probability) that it would require some other causal explanation.
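Assuming the per-district swings are independent and taking the 0.078% figure above, the multi-district probabilities are straightforward to reproduce:

```python
import math

p_5pct = 0.00078   # per-district probability of a >5% swing to A (from text)
n = 100            # number of districts

# Probability that AT LEAST ONE of the 100 districts shows such a swing
p_at_least_one = 1 - (1 - p_5pct) ** n
print(f"P(at least one district): {100 * p_at_least_one:.1f}%")  # about 7.5%

# Probability that FOUR OR MORE districts do: exact binomial tail
p_at_least_four = sum(
    math.comb(n, k) * p_5pct ** k * (1 - p_5pct) ** (n - k)
    for k in range(4, n + 1)
)
print(f"P(four or more districts): {p_at_least_four:.1e}")  # about 1.4e-06,
# i.e. on the order of 1 in a million
```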

Hence, the data needed to establish fraud in the swing states are the postal ballots for a reasonable number of small areas, separated into those counted before and after the night of the election. If anybody has such data we would be happy to analyse it.

(A pdf version of an earlier version of this article can be found here.)
**We also assume a Binomial distribution for the number of ballots cast for candidate A.

Friday, 6 November 2020

How to explain an increasing proportion of people testing positive for COVID if there is neither an increase in the proportion of genuine cases nor an increase in the false positive rate

A few people have argued (see e.g. here and here) that, despite the official Government statistics, real COVID cases are not increasing and that most COVID positive test results are false positives. The obvious strong argument against this is the evidence that the proportion of positive tests is increasing, as shown here:

Ignoring the possibility that COVID cases are being redefined, then unless the false positive rate itself is increasing, surely it cannot be possible for the proportion of positive tests to be increasing without a proportional increase in genuine COVID cases? In fact, as we have pointed out many times, there could be simple causal explanations for an increasing proportion of positive tests being observed even if the underlying COVID infection rate and test accuracy rates are unchanged. The most obvious causal explanation would be a change in the type of people being tested – for example, if a lower proportion of people without symptoms were tested, or a higher proportion of older people were tested; in other words, if a higher proportion of people who had the virus were being tested.

However, it has been pointed out that it is possible for the proportion of people wrongly testing positive (as opposed to the proportion of tests that are wrongly positive) to increase even if there is also no change to the type of people being tested. How? Because if some people are being tested more than once – as is certainly happening for those admitted to hospital – the probability of those people wrongly testing positive at least once does increase. Suppose, for example, the ‘constant’ false positive rate is 1 in 100 (i.e. 0.01 probability). Then, if you have two tests, the probability you test positive at least once is higher than 0.01. In fact, it is almost double: it is one minus the probability you do NOT test positive both times, i.e.

1 - (0.99 x 0.99)  =  1 - 0.9801  =  0.0199

which is close to 2%. But we do not even need to use this calculation to show the impact of testing people more than once.
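The same calculation extends to any number of repeat tests; a minimal sketch with the 1% per-test false positive rate used above:

```python
# Probability of at least one (wrong) positive over n independent tests,
# with a constant 1% false positive rate per test.
fpr = 0.01
for n in (1, 2, 3):
    p_wrong_pos = 1 - (1 - fpr) ** n
    print(f"{n} test(s): {p_wrong_pos:.4f}")
# 1 test(s): 0.0100
# 2 test(s): 0.0199
# 3 test(s): 0.0297
```

So each extra test adds almost a full percentage point to the chance of at least one false positive.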

Consider the following example:

Suppose the real COVID infection rate among those being tested is constant at 5%, and that 10,000 people are tested in each consecutive period. Suppose the true positive rate for the testing is 80% and the false positive rate is 1%.

In period 1 everybody is tested just once. Then about 500 of these people have COVID and 9,500 do not. Of the 500 with COVID, 400 will test positive (the 80% true positive rate), and 95 of the 9,500 without COVID will wrongly test positive. Hence 495 out of 10,000 people test positive – i.e. a proportion of 4.95%.

If the rates are unchanged in subsequent periods, then the proportion testing positive remains the same no matter how many more (or fewer) are tested.

However, suppose that in period 2 some of those who test negative – let’s say 20% – are retested and are recorded as a positive case if the second test is positive. Then, if 10,000 are tested in period 2, we know from above that 495 will test positive first time. However, 20% of those who test negative are retested. That means 20 of the 100 who wrongly tested negative and 1,881 of the 9,405 who correctly tested negative will be retested. Of the 20 with COVID, about 16 will test positive. Of the 1,881 without COVID, about 19 will test positive. Hence, in period 2 – because some get tested twice – the number testing positive is now about 530 out of 10,000, i.e. a proportion of about 5.3%.
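The two-period arithmetic above can be reproduced directly (all figures are the ones used in the text):

```python
# Reproduce the two-period worked example: 10,000 people tested,
# 5% infection rate, 80% true positive rate, 1% false positive rate.
N = 10_000
infection_rate = 0.05
tpr, fpr = 0.80, 0.01

infected = N * infection_rate            # 500
healthy = N - infected                   # 9,500

# Period 1: everybody is tested once
true_pos = infected * tpr                # 400
false_pos = healthy * fpr                # 95
period1 = true_pos + false_pos           # 495 -> 4.95%

# Period 2: 20% of those who tested negative are retested
false_neg = infected - true_pos          # 100 infected who tested negative
true_neg = healthy - false_pos           # 9,405 healthy who tested negative
retest_rate = 0.20
extra_pos = retest_rate * false_neg * tpr + retest_rate * true_neg * fpr
period2 = period1 + extra_pos            # about 530 -> about 5.3%

print(f"period 1: {100 * period1 / N:.2f}%")   # 4.95%
print(f"period 2: {100 * period2 / N:.2f}%")   # 5.30%
```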

It follows that if, over a sequence of periods, a steadily increasing proportion of people get retested, then the proportion of people testing positive will also steadily increase even though the true infection rate and false positive rate remain constant and there is no change in the type of people being tested. Moreover, if some people get tested more than twice (as is happening with hospital admissions) then this leads to further increases in the proportion testing positive.

It should be noted that an increase in the true positive rate (i.e. a reduction in the false negative rate) will also lead to an increase in the proportion of positive tests even if the true infection rate and false positive rate are unchanged and nobody gets retested. In fact, this is a point which has largely been missed in the discussion about false positives for PCR testing. At the start of the crisis false negative rates were typically as high as 67%, and it would be understandable in such cases both to a) do retesting; and b) increase the sensitivity of the tests to reduce the false negative rate. The sensitivity can be increased by increasing the cycle threshold of the PCR test. However, it is this increase which has led to claims of a reduction in specificity (the proportion of true negatives correctly identified), i.e. an increase in the false positive rate.

It is widely assumed that the false negative rates have been significantly reduced. This would normally lead to an increase in the false positive rate. But if, somehow, the false positive rate remained the same, there would still be an increasing proportion of people testing positive even if there was no change in the infection rate and even if we ignore retesting and just consider proportion of tests that are positive (and not proportion of people who test positive). To see this, consider a similar example as above:

Period 1 is exactly as above (a constant 5% real COVID infection rate among those being tested, a true positive rate of 80%, and a false positive rate of 1%), so the proportion of positive tests is 4.95%.

Suppose the only thing that changes in period 2 is an increase in the true positive rate to 90%. Then this time 450 (as opposed to 400) of those who have COVID test positive, making a total of 545 testing positive, so the proportion of positive tests increases to 5.45%.
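Again, the effect of raising the true positive rate alone is easy to reproduce with the same figures:

```python
# Effect of raising the true positive rate alone, with everything else
# as in the example above: 10,000 tests, 5% infected, 1% false positive rate.
N, infection_rate, fpr = 10_000, 0.05, 0.01
infected = N * infection_rate            # 500
healthy = N - infected                   # 9,500

for tpr in (0.80, 0.90):
    positives = infected * tpr + healthy * fpr
    print(f"TPR {tpr:.0%}: {100 * positives / N:.2f}% of tests positive")
# TPR 80%: 4.95% of tests positive
# TPR 90%: 5.45% of tests positive
```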

So, in summary, assuming there are no changes to the way COVID cases are defined, there are still several different ways in which we could see an increasing proportion of people testing positive even if the underlying COVID infection rate and the false positive rate are unchanged, namely:

  1. A causal explanation such as a change in the type of people being tested
  2. An increasing number of people being retested
  3. An increasing true positive rate for the testing

Obviously combining any of the above will lead to even greater increases.

PDF version of the article

All COVID articles on this blog