Ferguson’s Code for COVID Model Finally Released, Heavily Modified by Microsoft, Reviewers Say It’s Crap

Neil Ferguson used 13-plus-year-old code designed for flu to model COVID-19. Most of his previous models and predictions have been wrong, yet this was ignored or “forgotten” as he touted his new model as “accurate” enough for the government to listen to him again. And they did. The lockdown in the UK, and subsequently elsewhere, was triggered by his grossly erroneous model: a model whose source code had yet to be seen by anyone, and which was never peer-reviewed before governments jumped on it to justify their draconian policies.

At first, he projected 500,000 dead if nothing was done, and 250,000 with lesser mitigation such as businesses remaining open; a full lockdown would produce 20,000 or fewer deaths. Applied to Sweden, which adopted lesser mitigation at most, the model would have projected about 38,000 deaths. Is that what happened in Sweden? No.

This is what it looks like in a graph:

[Graph not reproduced here; see the original source.]
I think his model is crap. Others agree, now that the model code has been released. Well, that is to say, a version of it that was reworked and modified by Microsoft in an effort to make it “better”. We still don’t have the original. Instead of releasing the code alongside the model for peer review, Ferguson went to Microsoft to “clean” it up and “wash” it. Microsoft: the company of Bill Gates, a major funder of Imperial College.

The highly modified code can be found on GitHub. An initial review of the code by a career programmer said this:

Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.

This problem makes the code unusable for scientific purposes, given that a key part of the scientific method is the ability to replicate results. Without replication, the findings might not be real at all – as the field of psychology has been finding out to its cost.

Conclusions. All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.
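The non-determinism complaint is worth making concrete. A stochastic model is perfectly compatible with science, provided the randomness comes from an explicitly seeded generator, so that the same seed and the same inputs reproduce the same run bit-for-bit. Here is a minimal Python sketch of that property (a toy illustration, not Imperial’s model or code):

```python
import random

def toy_epidemic(seed, days=30, infected=1000):
    """Toy stochastic outbreak curve (illustration only, not Imperial's model)."""
    rng = random.Random(seed)   # dedicated, explicitly seeded RNG
    curve = []
    for _ in range(days):
        # each infected person passes the disease on with 30% probability...
        new = sum(1 for _ in range(infected) if rng.random() < 0.3)
        # ...and a random share of the currently infected recover
        recovered = rng.randint(0, infected)
        infected = max(infected + new - recovered, 0)
        curve.append(infected)
    return curve

# Identical seed and inputs give bit-identical output: the run is replicable.
assert toy_epidemic(42) == toy_epidemic(42)
```

The reported bug is the opposite situation: even with the seed and inputs held fixed, the code produced different outputs, which makes replication, and therefore verification, impossible.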

Others who have reviewed the code on GitHub had a lot to say as well. Basically, even the rewritten version is crap, which suggests the original was even worse.


Wow. Tests are just comparing hashes of the results??

How is it possible that country-wide policies are based on that?


As a software engineer, I’m appalled at the quality of this code and the role it’s played in public policy. The deficits in testing and quality assurance need to be addressed immediately to assure the claims made from its data are valid.


In a time where faith in scientific models is more important than ever, it’s truly disheartening to see that such widely-used models are based on such faulty testing logic.


Replication is essential for robust validation. It’s one thing for a model’s assumptions to be incorrect, however if a model used for life and death policy decisions cannot replicate its calculations, it goes from being scientifically useless to being dangerously negligent.


I totally agree. As a software engineer myself, I know we all did bad code like this at least once. But, I cannot condone the usage of such poor quality software for policy-making. Even though I can sympathize with the effort to make the original code more legible and usable, we still NEED to have the original source code for proper auditing AND we need the original input data used to render the results that have been heavily publicized in the media and used by governments. We must be sure that we can reproduce the same results from the same data, using the same code.


This is some of the worst code I have ever seen; there is no way of knowing what it is doing due to giant chunks of bad variable names and no tests. It is on par with some of the worst 1st-year code I used to mark.
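One complaint above deserves unpacking: a regression test that only compares a hash of the output can confirm *that* the output changed, but says nothing about whether the baseline was ever correct, and gives no clue what broke when it fails. A sketch of the difference, using a hypothetical `simulate()` stand-in rather than anything from the actual repository:

```python
import hashlib

def simulate(beta):
    """Stand-in for a model run (hypothetical, for illustration only)."""
    pop = 1.0
    out = []
    for _ in range(10):
        pop *= (1.0 + beta)   # simple compound growth
        out.append(pop)
    return out

# Hash-style "test": detects any change at all, but passes even if the
# baseline output was garbage, and a failure tells you nothing useful.
baseline = hashlib.sha256(repr(simulate(0.1)).encode()).hexdigest()
assert hashlib.sha256(repr(simulate(0.1)).encode()).hexdigest() == baseline

# Behavioral tests: assert properties the output must actually satisfy.
out = simulate(0.1)
assert len(out) == 10                              # right shape
assert all(b > a for a, b in zip(out, out[1:]))    # monotone growth for beta > 0
assert abs(out[0] - 1.1) < 1e-12                   # first step matches hand calc
```

The second style is what lets a reviewer tell whether the model computes what its authors claim, not merely whether it computes the same thing it did yesterday.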


As if that wasn’t enough, the programmer posted a second review:

I’d like to provide a followup to my first analysis. Firstly because new information has come to light, and secondly to address a few points of disagreement I noticed in a minority of responses.

Looking at some earlier commits on GitHub, which still aren’t the original Ferguson code, further demonstrates how poor (shit) this model is, and the false statements Imperial College has made about the credibility of the model.

  • ICL staff claimed the released and original code are “essentially the same functionally”, which is why they “do not think it would be particularly helpful to release a second codebase which is functionally the same”.

In fact the second change in the restored history is a fix for a critical error in the random number generator. Other changes fix data corruption bugs (another one), algorithmic errors, fixing the fact that someone on the team can’t spell household, and whilst this was taking place other Imperial academics continued to add new features related to contact tracing apps.

The released code at the end of this process was not merely reorganised but contained fixes for severe bugs that would corrupt the internal state of the calculations. That is very different from “essentially the same functionally”.

  • The stated justification for deleting the history was to make “the repository rather easier to download” because “the history squash (erase) merged a number of changes we were making with large data files”. “We do not think there is much benefit in trawling through our internal commit histories”.

The entire repository is less than 100 megabytes. Given they recommend a computer with 20 gigabytes of memory to run the simulation for the UK, the cost of downloading the data files is immaterial. Fetching the additional history only took a few seconds on my home WiFi.

Even if the files had been large, the tools make it easy to not download history if you don’t want it, to solve this exact problem.
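The reviewer’s point about the tooling is easy to verify: git supports shallow clones that skip history entirely, so repository size was never a reason to erase it. A quick demonstration with a throwaway local repository (the paths are arbitrary; this is not the actual covid-sim repo):

```shell
set -e
# Build a toy repository with two commits of "history"
rm -rf /tmp/demo_src /tmp/demo_shallow
mkdir -p /tmp/demo_src && cd /tmp/demo_src
git init -q
git config user.email demo@example.com && git config user.name demo
echo "version 1" > model.c && git add model.c && git commit -qm "first"
echo "version 2" > model.c && git commit -qam "second"

# A shallow clone fetches only the latest snapshot, not the history
git clone -q --depth 1 file:///tmp/demo_src /tmp/demo_shallow
git -C /tmp/demo_shallow rev-list --count HEAD   # prints 1 (history skipped)
git -C /tmp/demo_src     rev-list --count HEAD   # prints 2 (full history)
```

So anyone who found the history too heavy to download could simply have cloned with `--depth 1`; deleting the published history solved no real problem for downloaders.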

You can dig more into this by reading the full reviews and the comments within them. The GitHub comments I quoted are funny to read. Ferguson is a poor modeler (as his history demonstrates), and politicians gobbled up his crap model’s predictions. Why?

Was it simply fear of “what if” he’s right? Why not consult others? Why was only this Bill Gates Foundation-funded modeler used to justify the world’s reaction when he’s been so wrong before? I suspect this model was, in part, a manufactured excuse used to justify expanding government powers.

Whether Ferguson intentionally made a crappy model to produce an outcome that favored the control agenda is another question. But regardless, I think there was top-down pressure on political leaders to conform to a narrative of a “devastating” pandemic in order to bring about change on a global scale.

Just as PNAC favored a new Pearl Harbor to bring about “transformation”, I think the hidden hands and secret powers also wanted a PNAC-style “catastrophic and catalysing event” to bring about even greater change on a global scale.


Originally published on Hive

Have something to say? Please let me know.