This is a good example of correct-yet-duplicitous reporting, with diffuse accountability. The UK dashboard reports the “total number of COVID-19 associated UK deaths”, described as “deaths of people who have had a positive test result.” However, as the CEBM complained, these do not necessarily have COVID-19 coded on the death certificate as a cause.
All along, the NHS/PHE have (blithely) acknowledged:

Quote:
Deaths of people who have tested positive for COVID-19 could in some cases be due to a different cause.

Nonetheless, their data resulted in the following news reports:

Quote:
UK Deaths From Confirmed COVID-19 Cases Rise by 148 to 44,798 - NYT
The United Kingdom's death toll from confirmed cases of COVID-19 rose to 44,830 - US News
A further 13 people who tested positive for coronavirus have died in UK hospitals, all of them in England - Evening Standard

The UK is now pausing its death count. But that won’t even fix the problem going forward, much less repair past damage. If you search for “COVID19 UK deaths”, Google’s summary lacks these distinctions. Apple presents an even shorter one before you press search.
Brevity is no excuse. These numbers have occupied headlines for months, yet the vast majority of readers do not understand what they actually count.
I work on “trustworthy” (noise-resilient, fair, interpretable, etc.) machine learning, so this discussion of noisy data and accountability is of interest to me. From a technical standpoint, this means eschewing the predominant approach of expanding models until standard optimization algorithms work (“improper” learning, or overparameterization) in favor of keeping the model class fixed and designing cleverer algorithms (“proper” learning). The models I’m studying are linear dynamical systems and halfspaces.
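To make the proper/improper distinction concrete, here is a toy numpy sketch (my own illustration, not from the post): proper learning of a halfspace keeps the hypothesis a linear classifier in the original dimension (the classic perceptron), while the improper/overparameterized route expands into many random features so that plain gradient descent succeeds, at the cost of a much larger model. The data, the random-feature expansion, and all hyperparameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: labels given by a true halfspace w_star (linearly separable).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)

# "Proper" learning: the hypothesis stays a halfspace in the original
# d dimensions -- the perceptron updates only on mistakes.
w = np.zeros(d)
for _ in range(100):                    # passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w) <= 0:          # mistake -> additive update
            w += yi * xi
proper_acc = np.mean(np.sign(X @ w) == y)

# "Improper" / overparameterized learning: lift into D >> n random
# nonlinear features, then run vanilla gradient descent on logistic
# loss. The learned predictor is no longer a d-dimensional halfspace.
D = 2000
W_feat = rng.normal(size=(d, D))
Phi = np.tanh(X @ W_feat)               # random-feature expansion
v = np.zeros(D)
t = (y + 1) / 2                         # labels in {0, 1}
for _ in range(500):
    p = 1 / (1 + np.exp(-Phi @ v))      # predicted probabilities
    v -= 0.1 * Phi.T @ (p - t) / n      # gradient step on logistic loss
improper_acc = np.mean(np.sign(Phi @ v) == y)

print(proper_acc, improper_acc)
```

Both routes can fit this easy separable data; the point of the contrast is that the proper learner's guarantee comes from a tailored algorithm over a fixed model class, while the improper learner buys optimization ease by inflating the model.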