A short paper by Neil et al. uses Bayesian analysis of the latest (up to 22 Sept) Covid data to examine whether there is evidence to support the Government's claim of an exponential 'second wave'. It concludes that, despite the number of tests done, there remains insufficiently solid evidence to support any claim of an exponential increase. There is no reason to panic.
[Figure: Infection prevalence between April and September (Week 1 is 12 Aug and week 39 is 19 Sept)]
Link to paper:
https://qm-rim.org/wp-content/uploads/2020/09/Neil-et-al-2020-Limits-to-UK-PCR-testing.pdf
My understanding of Bayesian analysis of a time series, such as the one here, is that at each time point you can use the posterior probability distribution to update the prior for the next time point. Hence the posterior density for the whole time series is computed taking into account all the data (and in some formulations there is a forward-backward algorithm so each new estimated point takes account of all the preceding and all the following observations up to the present time).
I would be interested to know whether your analysis does this, or whether it just does a static analysis on each data point, which would give the wide error bars.
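The sequential updating the comment describes can be sketched with a simple conjugate model. This is a minimal illustration, not the paper's actual model: it assumes a Beta prior on weekly infection prevalence and a Binomial likelihood for the test results, so each week's posterior becomes the prior for the next week. The weekly counts are made up for the example.

```python
# Sketch (assumption, not the paper's method): sequential Bayesian
# updating of infection prevalence with a Beta-Binomial model.
# Each week's posterior becomes the prior for the following week.

# Hypothetical weekly data: (positive results, tests performed).
weekly_data = [(50, 10000), (60, 10000), (55, 12000), (80, 15000)]

a, b = 1.0, 1.0  # flat Beta(1, 1) prior on prevalence

posteriors = []
for positives, tests in weekly_data:
    # The Beta prior is conjugate to the Binomial likelihood, so the
    # posterior is Beta(a + positives, b + negatives) in closed form.
    a += positives
    b += tests - positives
    mean = a / (a + b)
    posteriors.append((a, b, mean))
    print(f"posterior Beta({a:.0f}, {b:.0f}), mean prevalence = {mean:.4f}")
```

Because each update folds in the previous posterior, the final estimate reflects all the data seen so far; a forward-backward smoother would additionally revise earlier estimates using later observations, which this forward-only sketch does not do.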