The Imperial College code

The Imperial College code, the results from which are thought to have changed the UK government’s coronavirus policy, has been available for a while now on GitHub. Since being made available, it’s received criticism from some quarters, as discussed by Stoat in this post. The main criticism seems to be that if you run the model twice, you don’t get exactly repeatable results.

As Stoat points out, this could simply be due to parallelisation; when you repeat a simulation, the processors won’t necessarily return their results in the same order as before. However, it could also be due to other factors, like not quite using the same random number seed. These simulations are intended to be stochastic. The code uses random numbers to represent the probability of an outcome given some event (for example, a susceptible person contracting the virus if encountering an infected person). Different runs won’t produce precisely the same results, but the general picture should be roughly the same (just like the difference between weather and climate in GCMs).
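As a toy illustration of the seeding point, consider the following Python sketch (my own illustration; it has nothing to do with the actual Imperial College code, and all the numbers are invented). Two runs with the same seed reproduce each other exactly, while different seeds give different trajectories with the same broad shape:

    import numpy as np

    def toy_outbreak(seed, n=100_000, beta=0.3, gamma=0.1, days=120):
        # Stochastic toy: daily new infections and recoveries are binomial
        # draws, so the trajectory depends on the random number seed.
        rng = np.random.default_rng(seed)
        s, i = n - 10, 10
        trajectory = []
        for _ in range(days):
            new_cases = rng.binomial(s, min(beta * i / n, 1.0))
            recoveries = rng.binomial(i, gamma)
            s, i = s - new_cases, i + new_cases - recoveries
            trajectory.append(i)
        return trajectory

    print(toy_outbreak(1) == toy_outbreak(1))  # True: same seed, identical run
    print(toy_outbreak(1) == toy_outbreak(2))  # False: different seed, but a
                                               # similar-looking epidemic curve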

For a while now I’ve been playing around with the Imperial College code. I should be clear that I’m not an epidemiologist and I haven’t delved into the details of the code. All I’ve been doing is seeing if I can largely reproduce the results they presented in the first paper. The paper gives much more detail about the code than I intend to reproduce here. However, it is an individual-based model in which individuals reside in areas defined by high-resolution population density data. Census data were used to define the age distribution and household size distribution, and contacts with other individuals in the population are made within the household, at school, in the workplace and in the wider community.
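To give a flavour of what “individual-based” means, here is a heavily stripped-down sketch (again my own illustration in Python, with invented transmission probabilities, not code from the model). Each person belongs to contact settings, and each contact is a chance for a random infection draw:

    import random

    # Illustrative per-contact, per-day transmission probabilities.
    P_TRANSMIT = {"household": 0.10, "school": 0.05,
                  "workplace": 0.03, "community": 0.01}

    def daily_step(state, contacts, rng=random):
        # state: dict person_id -> "S" (susceptible), "I" (infected) or "R"
        # contacts: list of (setting, person_a, person_b) tuples for one day
        newly_infected = set()
        for setting, a, b in contacts:
            for src, dst in ((a, b), (b, a)):
                if state[src] == "I" and state[dst] == "S":
                    if rng.random() < P_TRANSMIT[setting]:
                        newly_infected.add(dst)
        for person in newly_infected:
            state[person] = "I"
        return len(newly_infected)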

I’ve run a whole suite of simulations, the results of which are shown on the right. It shows the critical care beds occupied, per 100,000 of the population, under different scenarios. If you’ve downloaded the paper, you should see that this largely reproduces their Figure 2, although I did have to adjust some of the parameters to get a reasonable match. The different scenarios are Do nothing, Case Isolation (CI), Case Isolation plus Household Quarantine (CI + HQ), Case Isolation, Household Quarantine plus Social Distancing of the over-70s (CI + HQ + SD70), and Place Closures (PC). To give a sense of the severity, the UK has just under 10 ICU beds per 100,000 of population.

I’ve also included (dashed line) the scenario where you impose Case Isolation, Place Closure (schools and universities) and general Social Distancing for 150 days (which they show in their Figure 3). As you can see, this really suppresses the infection initially, but there is a large second peak when the interventions are lifted. This, of course, is what is concerning people at the moment: will the lifting of the lockdown in some parts of the UK lead to a second wave?

So, I seem to be able to largely reproduce what they presented in the paper. This doesn’t really say anything about whether or not the results are reasonable representations of what might have been expected, but it’s a reasonable basic test. I will add, though, that there are a large number of parameters and I can’t quite work out how to implement the somewhat more dynamic intervention strategies.

Something else I wanted to add is that I’ve also played around with some other codes, including a simple SIR code, a SEIR code, and one that included an age distribution and a contact matrix. Whatever you might think of the Imperial College code, all of the models seem to suggest that without some kind of substantive intervention, we would have overrun the health service.
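For anyone who wants to play along, the simple SIR version really is only a few lines. Here is a minimal Python sketch (with illustrative parameters, not any of the specific codes I ran) that makes the capacity point:

    def sir_peak(beta=0.5, gamma=0.2, i0=1e-6, days=300, dt=0.1):
        # Classic SIR: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
        # with S and I as population fractions, integrated with Euler steps.
        s, i = 1.0 - i0, i0
        peak = 0.0
        for _ in range(int(days / dt)):
            new_inf = beta * s * i * dt
            new_rec = gamma * i * dt
            s, i = s - new_inf, i + new_inf - new_rec
            peak = max(peak, i)
        return peak

    # With R0 = beta/gamma = 2.5 and no intervention, roughly 23% of the
    # population is infected at the peak; even if only a few percent of
    # those needed critical care, that dwarfs ~10 ICU beds per 100,000.
    print(sir_peak())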


296 Responses to The Imperial College code

  1. jamesannan says:

    Um…we did over-run the health service. Unless you think that refusing to treat a huge number of ill people was just for fun?

  2. Um…we did over-run the health service. Unless you think that refusing to treat a huge number of ill people was just for fun?

    Fair point. Poorly phrased on my part. I was simply trying to stress that whether you trust the Imperial College code or not, there is little to indicate that we over-reacted.

  3. Joshua says:

    > Unless you think that refusing to treat a huge number of ill people was just for fun?

    You might also consider a shortage of PPE as a measure of being overrun.

  4. Everett F Sargent says:

    Any way you slice it, the World will not drop below 1000 deaths/day in 2020 …

    World = World – CN = EU + US + RoW
    RoW = World – CN – EU – US
    SA = South America
    RHS = x-axis log scale divisions are 30, 60, 90 and 120-days

    A 2nd wave in the fall will make the time series look like a double-humped camel with a global saddle point above 1000 deaths/day. That is the best prognostication that I can make as of today.

  5. Everett F Sargent says:

    I forgot to mention that those time series are 7-day rolling means. Done to remove the so-called weekend effects (which are real).

  6. dhogaza says:

    I dug into this non-determinism issue being raised to discredit the model.

    First of all, in regard to getting two different results in single-processing mode, it was a bug introduced by the Microsoft team. There’s really no reason to believe it exists in the original version of the model used by IC to generate the earlier projections used by government.

    As so often happens, this bug was introduced when a new feature was added. Generating the network of individuals, households, etc is time consuming, and the developers decided to introduce the option of generating and saving a network without running the simulation, then loading it for a later run. This resulted in the pseudo-random number generator being seeded twice. If you generated and then simulated the network in one run, the seeding was only done once. Different results. Hmmm. BUT once saved, every time you run the simulation on the same network you get the same results using the same seed. The developers had never compared the generate-save-restart-load-simulate workflow with the generate-simulate workflow and hadn’t noticed the two scenarios gave different results with the same seed. It was fixed two days after it was reported and diagnosed, but the fallout has not died down.
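    Schematically, the double-seeding pattern looked something like this (a Python sketch of the idea, not the actual C code):

        import random

        def build_network(seed, size=5):
            random.seed(seed)                  # first seeding
            return [random.random() for _ in range(size)]

        def run_simulation(network, seed, loaded_from_disk):
            if loaded_from_disk:
                random.seed(seed)              # second seeding on the load path
            return sum(network) + random.random()

        net = build_network(7)
        one_shot = run_simulation(net, 7, loaded_from_disk=False)
        net = build_network(7)                 # stands in for save + reload
        reloaded = run_simulation(net, 7, loaded_from_disk=True)
        print(one_shot == reloaded)            # False: same seed, different
                                               # answers, though each path is
                                               # itself perfectly repeatable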

    Now, regarding the multi-processing case, given the expense of network generation they don’t serialize the processes to guarantee that each time it is run, individuals are assigned to the exact same households or schools. The assignments do follow the parameter values used to drive the algorithm, so the statistical distribution does fit those. The developers state this is intentional because the time savings is more important to them than guaranteeing reproducibility – after all, you can save a network and then rerun using that same network to your heart’s content (regression testing, debugging, etc).

    When run in multi-processing mode they guarantee reproducibility when you simulate the same network with the same number of threads. Again, important for regression testing etc.

    Now, I can think of two possibilities here:

    1. The developers from Microsoft who are working on it haven’t actually tested reproducibility under the conditions where they guarantee it, and are lying when they say that they have and, indeed, depend on it. I’ve found no evidence for this.

    2. lockdownsceptic doesn’t know what he’s talking about. Having read his commentary, I’ll say this is definitely true in some cases, at least.

  7. dhogaza,
    Thanks, that is very useful.

  8. Willard says:

  9. Keith McClary says:

    Has anyone read Tommaso Dorigo’s post, starting out:

    “First off, let me say I do not wish to sound disrespectful to anybody here, leave alone my colleagues, to most of which goes my full esteem and respect (yes, not all of them, doh). Yet, I feel compelled to write today about a sociological datum I have paid attention to …”

  10. Keith,
    No, I haven’t read that. I’m not even sure of the context. Do you have a link?

  11. Joshua,
    Thanks. I think that article makes some pretty good points.

  12. Joshua says:

    Yup.

    With respect to comparing the effects of a “lockdown” in one country to voluntary social distancing in another (or, I might add, extrapolating a national fatality rate from an infection rate in a non-random sample from one locality that isn’t nationally representative on basic metrics such as SES and race/ethnicity):

    >… I saw curves describing data from one country overlaid with other curves describing data from other countries, shifted according to ad-hoc criteria;…

  13. dhogaza says:

    Willard

    Actually there really was a bug, as I described above. lockdownsceptic clearly didn’t understand what the bug is when he jumped on it, nor the fact that it had been recently introduced along with the new feature I described, and ignored the fact that it was fixed two days after being diagnosed with the help of the team that reported it. Or that the development team hadn’t noticed because the way they configured and ran the model DID lead to deterministic (reproducible) results. Blah blah.

    It’s just FUD, though. It’s the same strategy used to try to discredit models like the NASA GISS Model E. Scream for them to be open sourced. Then scream “the code is unreadable (I don’t think Model E is, though the physics is incomprehensible 🙂 )”, OMIGOD it is written in FORTRAN it must suck!!! etc etc.

    Of course everything thus far is unrelated to the MODEL part of the model source code, i.e. the parts that implement the SIR model that moves people through various states, the parts that model the geographic spreading of the disease, the parts that model the pace of infection through households and schools and all that. The parts that correspond to Ferguson’s paper describing how the model works. It’s much easier to say “your variable names suck!” than to address issues of substance.

  14. Willard says:

    > OMIGOD it is written in FORTRAN it must suck

    Jinx:

    If you read back the thread you’ll notice auditors who fail to realize that R (which is as ugly as it can get imo) is a bit newer than they presume.

  15. dikranmarsupial says:

    Requiring bit-level reproducibility for stochastic simulation is a bit unreasonable – in most cases we ought to be more interested in whether the stochastic variability from run-to-run is suitably small that we can place reasonable confidence in any given run.

    I did an experiment a few years ago that was basically a low-priority job that could be run when there wasn’t something more important to be done, and it ended up taking N months to generate the results. The individual simulations took longer than the maximum allowed on the cluster (about 5 days), so I designed the system to checkpoint, including saving the state of the random number generators, so in principle it would be possible to re-run all the simulations again and get the same numeric answer (I’m not naive enough to think that would hold in practice). This ended up taking quite a bit of work to get just right. But then it occurred to me it would be an egregious waste of computer time to re-run all of these simulations for another few months, so what was the point in aiming for bit-level reproducibility for a study that would never be replicated (because of the cost and because stochastic reproducibility would have been acceptable for all practical purposes)?
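    The core of that checkpointing is simple in principle, even if getting it exactly right wasn’t (a minimal Python sketch of the pattern, not my actual code):

        import pickle
        import random

        def save_checkpoint(path, sim_state):
            # Store the PRNG state alongside the simulation state, so a
            # resumed run continues the exact same random sequence.
            with open(path, "wb") as f:
                pickle.dump({"sim": sim_state, "rng": random.getstate()}, f)

        def load_checkpoint(path):
            with open(path, "rb") as f:
                saved = pickle.load(f)
            random.setstate(saved["rng"])
            return saved["sim"]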

  16. dikranmarsupial says:

    There is some irony in people preferring R to Fortran as programming languages – they are both fairly horrible in their own peculiar fashions! ;o)

    (They both have their uses though)

  17. dikranmarsupial says:

    It was also amusing that in that twitter thread some were suggesting MATLAB as better than Fortran, despite the fact that MATLAB (especially back in the 90s) was mostly a front end for a set of highly efficient and reliable library routines written in, errr, Fortran (with some nice graphics)!

  18. dhogaza says:

    dikranmarsupial

    “Requiring bit-level reproducibility for stochastic simulation is a bit unreasonable – in most cases we ought to be more interested in whether the stochastic variability from run-to-run is suitably small that we can place reasonable confidence in any given run.”

    Well, the issue here was whether running the same executable on the same machine, no parallel processing, same seed, same data would give the same result twice. Which it should, and does. The point is for regression testing – do your changes to the code change the output? Sometimes changes should change the output, after all that’s the point of changing the underlying theoretical model and then implementing those changes. Other changes – say speeding up I/O or the like – shouldn’t.

    Obviously this has nothing to do with the stochastic simulation itself, and the developers talk about doing a lot of testing to make sure the model is giving reasonable outputs for a range of parameters, seeds, etc.

    But as I said earlier, it’s just FUD. When calls were made to open source the thing, it was obvious the point was to discredit it, just as was attempted when climate model sources were published.

  19. Willard says:

    > Requiring bit-level reproducibility for stochastic simulation is a bit unreasonable

    It’s at the very least bit-level unreasonable.

    I’ll grab my coat.

  20. Willard says:

    You can be sure that no Real Programmer would be caught dead writing accounts-receivable programs in COBOL [1], but

    [1]: See https://web.mit.edu/humor/Computers/real.programmers

  21. David B Benson says:

    Pfooey! Ed Post was 30 years too late.

    My mother was a REAL programmer on the brace of IBM 704s at what was then named Los Alamos Scientific Laboratory. Only assembly code, FORTRAN II wasn’t good enough for blowing up atomic bombs to propel Stan Ulam’s space ship.

    Operating systems? What’s that? REAL programmers ran their codes from the front panel after stacking their cards in the hopper.

  22. dikranmarsupial says:

    dhogaza true – my simulations were all compute-bound batch jobs, and methods for speeding it up (e.g. choosing different methods to solve large sets of linear equations) don’t necessarily give bit-level identical results either, so not really typical. I’ve been experimenting with reproducible code for quite a while (the aim is to be able to type “make” and have the computer re-run the experiments, patch the results into the LaTeX source for the paper and then recompile that), but it is much harder to do once the time taken to run the experiments is more than a couple of days.

    Fully agree about the FUD, has *any* skeptic ever done *anything* substantive with the source code of a climate model? [I’d genuinely be interested in positive examples]

  23. Ben McMillan says:

    What about the divide-by-zero error in the agricultural damages in FUND? Skeptics identified the problem and showed that it led to climate damages being systematically underestimated.

    Of course, you meant ‘skeptics’, rather than actual skeptics. And this is not really a ‘climate model’ in any normal sense.

    Or in a different field, the ‘fun with Excel’ in Reinhart-Rogoff.

  24. dhogaza says:

    dikranmarsupial

    “methods for speeding it up (e.g. choosing different methods to solve large sets of linear equations) don’t necessarily give bit-level identical results either, so not really typical.”

    Sure. Change compilers and you might see differences, too. And, back in the day before the IEEE floating point standard was adopted, floating point hardware differences guaranteed that bit-level results for floating point operations would always differ from machine to machine.

    And Willard’s comment points out how unfortunate it is that log2(10) is an irrational number, and how annoying bankers are for wanting their pennies to balance 🙂

  25. dhogaza says:

    Willard

    FORTRAN had its issues. The first Mariner probe failed because a DO loop of the form

    DO 10 I = 1,100

    was accidentally written as an assignment statement of the form

    DO 10 I = 1.100

    Insignificant spaces, in space …

    FORTRAN ignored spaces (should that be present tense???), so with the spaces removed the compiler read it as an assignment of 1.100 to a variable named DO10I. The error was probably made by a keypunch operator, not the engineer. Don’t quote me on that. Real programmers always blame the keypunch operator 🙂

  26. Steven Mosher says:

    “despite MATLAB (especially back in the 90s) was mostly a front end for a set of highly efficient and reliable library routines written in, errr, Fortran (with some nice graphics)!”

    most of the heavy duty math in R is just wrapped Fortran libraries
    (for matrix calcs)

    in the end it is quite a bitch, because some things (like SVD) will call these old Fortran libs
    and you get error numbers that reference this old legacy Fortran code. so ugly I gave up

  27. Steven Mosher says:

    thank you for that Joshua

    ‘As I said at the very beginning, I hope my colleagues understand that here I am not accusing anybody. For I have, many times over during these days of confinement, felt that urge to interpolate data and make sense of the apparent inconsistencies they present. But I resisted it, largely because I knew it would have been a dilettantesque form of entertainment – and I do have a good substitute to data fitting in that compartment, as I spend hours playing online blitz chess, where I am, indeed, a dilettante (although a good one at that). So we are human beings: and we want entertainment, and we find it in weird ways sometimes. Nothing to be too concerned about.”

    Every time I start a chart or download covid data I ask myself the same question.
    why? why mosher? really, think about hard steven. Why are you doing this?
    oh? you’re afraid. And then I realize that no amount of number fiddling will work better than
    a mask and hand washing.
    And so I have spent my time playing blitz chess again after years.. maybe 18 years since I was at a board.. oh wait it was 9-11 last time I sat down at the board. how long ago was that?

  28. dikranmarsupial says:

    dhogaza indeed – I’ve always thought that “significant birthdays” ought to be 16, 32, 64 and then aim for 128. The years get shorter as you get older, so it sort of makes them equal intervals.

  29. dikranmarsupial says:

    The good bit about R is the libraries, which is the main reason for using it (python is a bit like that, but not as bad).

    I’ve got very used to using python libraries. I do remember the days when I used to type out subroutines from the Numerical Recipes in Fortran book (I eventually got hold of a disk that had them all on there – still had to copy them directly into the code though).

  31. jamesannan says:

    From the linked crackpot article:

    “I saw fits using Gaussian approximations to uncertainties which ignored that the data had completely different sampling distributions”

    This sounds pretty much exactly what the nonsense IHME model was doing. Completely ridiculous. These people are supposed to be pros, and yet they pumped out this garbage for weeks (it’s improved recently). I’m afraid I am increasingly coming (rather belatedly) to the realisation that “mathematical epidemiologists” aren’t really mathematicians at all, rather they are biologists who are slightly better at maths than other biologists. Which is a pretty low bar.

  32. James,
    Yes, as I think I may have pointed out to you before, I’m finding this quite a tricky situation. I’m a big fan of leaving things to the experts (as that article about physics crackpots is suggesting) but there are some indications that the experts themselves didn’t always do a great job. Of course, that doesn’t necessarily invalidate the point being made in the article 🙂

  33. dikranmarsupial says:

    ATTP – I had a similar experience, but with “Numerical Recipes in [Fortran Brutally Transliterated into] C”, before moving to MATLAB, where I eventually started adding C “mex” files to optimise the code by selecting the best BLAS routines for the operations, rather than letting MATLAB choose them. Then we got a High Performance Computing facility, which meant that I had to spend less time (over-) optimising code and could (in theory) spend more time thinking about the theory and algorithms (in practice, I have been teaching instead).

  34. dhogaza says:

    James Annan

    “This sounds pretty much exactly what the nonsense IHME model was doing. Completely ridiculous. These people are supposed to be pros, and yet they pumped out this garbage for weeks (it’s improved recently).”

    Their PR machine is still pumping out garbage. That bit has not improved, IMO.

  35. Joshua says:

    Steven –

    > Every time I start a chart or download covid data I ask myself the same question.
    why? why mosher? really, think about hard steven. Why are you doing this?

    Bingo. Is this just pure inquiry? Or are there other “motivations” as well? Why are you being an armchair epidemiologist?

    > oh? you’re afraid.

    Sure. I think there are other factors as well. But yah, much does boil down to fear. The need to distract from the reality of mortality. Mortality has become much harder to avoid lately.

    What’s ironic is that we wind up being divided by an experience that is so fundamentally common among us.

    Don’t know if you saw this:

    https://www.theguardian.com/world/2020/may/15/flying-long-haul-during-covid-19-air-travel-has-never-been-stranger

  36. Willard says:

    > My mother was a REAL programmer on the brace of IBM 704s at what was then named Los Alamos Scientific Laboratory.

    Wonderful.

  37. Steven Mosher says:

    “Don’t know if you saw this:

    https://www.theguardian.com/world/2020/may/15/flying-long-haul-during-covid-19-air-travel-has-never-been-stranger

    ya someone passed that on to me.

    This has been the absolute weirdest few days of my life.

    In all of it there was this little piece of luck

    I was sitting at LAX waiting for the Hotel shuttle ( no cabs no uber) pretty beat.
    I got on the shuttle and as we got to the hotel I realized.

    I
    Left
    my
    Computer
    Bag
    At
    The
    Bus
    Stand.

    That would be all my phones, passport, everything except my wallet.

    Oops. I have never done that in years and years of travel.

    I paid the driver $100 to race back to the airport.
    Since I am typing this you know the ending. We got to the airport and there were 3 flight attendants
    standing where I had left my bag. Thankfully they had given it to security.

    Whew.

    Otherwise, I would be stuck in LA.

  38. Willard says:

    If y’all have a chance to watch Midnight Gospel, go for it.

    It reminded me:

    The Tao gave birth to machine language. Machine language gave birth to the assembler.

    The assembler gave birth to the compiler. Now there are ten thousand languages.

    Each language has its purpose, however humble. Each language expresses the Yin and Yang of software. Each language has its place within the Tao.

    But do not program in COBOL if you can avoid it.

    https://www.mit.edu/~xela/tao.html

  39. dhogaza says:

    Willard

    “The Tao gave birth to machine language. Machine language gave birth to the assembler.

    The assembler gave birth to the compiler.”

    Well I first learned machine language, and began writing an assembler because we didn’t have access to the assembler from the manufacturer.

    And then became a compiler writer …

  40. Bob Loblaw says:

    Oh, my. Reading about the company that had to revert to COBOL because they didn’t understand rounding brings back so many memories.

    Numerical Methods is a specialized branch of Computer Science. I’ve seen too many people that are “computer experts” who have no clue. I started learning computer programming (nobody ever finishes) in the punch-card/mainframe days, and we were actually taught about the perils of floating-point arithmetic. Things like:

    What you think is a simple decimal number in base 10 probably can’t be represented exactly in base 2. (0.5, 0.25, 0.125, 0.0625 etc can be, because they are integer powers of 2, but 0.1, 0.2, 0.3 can’t – they are infinite repeating series in base 2.)

    Avoid logical comparisons that expect exact equality when using floating point values.

    …so things like “IF (b-a) = 0.2…” probably won’t behave the way you expect. And thinking that “(N/10)*10” will actually be N is a dangerous place to put your brain. Double-precision reduces the problem, but does not remove it.
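    The effect takes one line to demonstrate (Python shown here, but any language using IEEE 754 doubles behaves the same way):

        import math

        print(0.1 + 0.2 == 0.3)              # False: both sides carry rounding error
        print(f"{0.1 + 0.2:.17f}")           # 0.30000000000000004
        print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead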

    A quick Google search produced this nice short list:

    https://my.eng.utah.edu/~cfurse/ece6340/LECTURE/links/Embarrassments%20due%20to%20rounding%20error.htm

    (The stock exchange example was the one I was looking for).

    …and that site has a link to the ubiquitous Risks Digest that has been tracking these kinds of boners for decades.

    http://catless.ncl.ac.uk/Risks/

  41. Mal Adapted says:

    Dikranmarsupial:

    The good bit about R is the libraries, which is the main reason for using it (python is a bit like that, but not as bad).

    I was working as a Linux system administrator at Los Alamos National Laboratory when I retired. I didn’t do a lot of programming there, mostly ad hoc scripting with Bourne shell, perl and finally python. I wrote a fair amount of Fortran77 and C early in my career, but perl was my most productive language ever since it was introduced, for its string processing. Assembly language was fun when I took a course in it, and felt like talking to the machine in its own language, but I didn’t use it after that.

    The computational physicists I worked with at LANL, OTOH, were familiar with Fortran and C, but were happy to use R and Matlab for their diverse add-on functionality, because it saved them a lot of writing what they needed themselves: imagine that! Python immediately became popular because of the easy access to contributed extensions and class libraries. The lab contracted with a private outfit to maintain a python IDE, together with specified modules and extensions. And many of [the scientists – W] found python’s object orientation more intuitive than their earlier procedural approach. TBH, I myself did not. I thought python’s rich pre-written functionality was quite handy, but my own code was all procedural. I eventually decided it might be time to retire 8^)!

    David B. Benson:

    My mother was a REAL programmer on the brace of IBM 704s at what was then named Los Alamos Scientific Laboratory. Only assembly code, FORTRAN II wasn’t good enough for blowing up atomic bombs to propel Stan Ulam’s space ship.

    Operating systems? What’s that? REAL programmers ran their codes from the front panel after stacking their cards in the hopper.

    I grew up in a college town and liked science, so I was inculcated with the Los Alamos legend early. I still like the stories from the first decades, of historic physics achieved by heroic scientists who happened to be building weapons of mass destruction. I’m afraid I was rather disillusioned when I started working there. They’re still working on WMDs, but it’s hard to imagine the present-day LANL producing any Nobel-level physics 8^(. I came to recognize various causes for the lab’s decline from its heyday, to be sure. The computing infrastructure at my arrival was especially underwhelming, although the raw processing power was impressive. Be that as it may, the PhD physicists of my acquaintance were domain specialists, who cared about meeting scientific standards. They didn’t care so much what language or OS REAL programmers used, but went with whatever got in the way of their productivity the least. I for one am a Linux evangelist, so they probably all thought I was a geek. Hey, if the foo shits… 8^}.

  42. Mal Adapted says:

    I orphaned a “them”. The sentence should be “And many of the scientists found python’s object orientation more intuitive than their earlier procedural approach.”

    [Fixed. -W]

  43. dikranmarsupial says:

    Vaguely remember Goldberg’s “What every computer scientist should know about floating-point arithmetic” was a reasonable place to start (doi:10.1145/103162.103163).

  44. Clive Best says:

    I finally got the IC code to run on my iMac with 32GB. It works fine and is not too slow at all.
    Basically you must also:
    1. Install CMake
    2. Switch off the parallel processing option in the make file
    3. Install the R package “sp”
    4. Fix various host-specific paths.

    The example “non-intervention” scenario they provide is extremely scary, with R0=3 and over 600,000 deaths in the UK over 2.5 months! In his original paper I believe Neil Ferguson actually used R0=2.4 and an IFR of 1%, resulting in 500,000 deaths.

    There is nothing wrong with FORTRAN. You can write clear and structured code.
    However, the code has now all been converted to C++ 😉

  45. dikranmarsupial says:

    Mal – I’m starting to move to Python, but only reluctantly. The main reason is so that I can give away research toys for others in my field to play with. Python is widely used in machine learning, mostly because of libraries like scikit-learn, and most of the deep learning tools have python bindings (I have been experimenting with Stan, which is a probabilistic programming language with a python interface). I tend to write object-oriented code a fair bit, but I don’t like python’s approach to this (especially the lack of proper encapsulation). With matplotlib, though, it is a good second best to MATLAB, with the advantage of being free.

    C & C++ are my favourite languages for general programming, but I get to do hardly any of that these days (mostly just answers to the coursework I set). It is a shame that programmers don’t get much exposure to assembly these days, it helps you to have some appreciation of what the computer is likely to do with your code, and having some empathy with the hardware makes you a better high-level language programmer (IMHO). It’s also enjoyable in the same way that a difficult sudoku puzzle is enjoyable.

  46. Willard says:

  47. dikranmarsupial says:

    Clive wrote “There is nothing wrong with FORTRAN.”

    I wouldn’t go quite that far!

    “You can write clear and structured code.”

    agree with that though. Use a programming language that allows you to program in the style that is best suited to the structure of the problem you are trying to solve. Shoehorning a procedural problem into an object-oriented structure is sometimes a recipe for inefficiency and a less maintainable, baroque architecture.

  48. Clive Best says:

    Object Orientated Programming became too much of a religion.
    It’s brilliant for user interfaces and smartphone apps, but fairly irrelevant to scientific FORmula TRANslation 🙂

  49. Joshua says:

    dhogaza (or anyone else for that matter) –

    Would you do me a favor? Andrew rightfully reprimanded David and me for gumming up the recent comments.

    I want to respect that, but on the other hand David’s now commenting on a statistics thread and I think that Andrew should have a little background. I was going to post the following and thought that maybe it’s better if someone else does it (it’s still a distraction, but maybe less so if I don’t do it?). Anyway, here’s what I was going to post – a few tidbits from David’s past reflections on Andrew. If you post it, don’t forget to let Andrew know that dpy6629 = David Young

    ————————————-

    dpy6629 | April 19, 2020 at 11:29 pm |
    Josh, This Gelman is a nothingburger. He admits he’s not an expert on serological testing and that he doesn’t know if the Ioannidis paper is right or not. I think I’m done with your low value references.

    and

    Gelman looks like someone who likes to hold forth on subjects he is ignorant of such as serologic testing. He then tries to shame other scientists who know much much more than he does. Typical blog thrill seeker whose conclusions can’t be trusted.

  50. Joshua says:

    Woops. I forgot the time stamp for the 2nd comment…

    dpy6629 | April 20, 2020 at 5:15 pm |
    Gelman looks like someone who likes to hold forth on subjects he is ignorant of such as serologic testing. He then tries to shame other scientists who know much much more than he does. Typical blog thrill seeker whose conclusions can’t be trusted.

  51. Mal Adapted says:

    Clive Best:

    Object Orientated Programming became too much of of a religion.
    It’s brilliant for user interfaces and smartphone apps, but fairly irrelevant to scientific FORmula TRANslation 🙂

    The “brilliant for user interfaces” is compatible with DM’s “research toys for others in my field to play with”. Python was an immediate hit with guys doing exploratory data visualization with sliders 8^).

    In my callow formula-translating days, I said “COBOL is the COmmon Business Oriented Language for common business-oriented people” in self-congratulation. I hereby apologize.

  52. Mal Adapted says:

    Thanks Willard!

  53. Joshua,
    Where is that comment from DPY from?

  54. Willard says:

    > Gelman looks like someone who likes to hold forth on subjects he is ignorant of such as serologic testing. He then tries to shame other scientists who know much much more than he does. Typical blog thrill seeker whose conclusions can’t be trusted.

    Where has David Young from the Boeing Company (who recently made a white paper about engineering practice disappear from his publications page) said that?

  55. Joshua says:

    If you go to Gelman’s, you’ll see he has a listing in the recent comments thread where he asks Andrew a question.

    He also, of course, left a dig after I apologized to Andrew for gumming up the recent comments. Such a classy guy, eh?

    The two comments I put up were from Climate Etc., a few threads ago:

    https://judithcurry.com/2020/04/14/in-favor-of-epistemic-trespassing/#comment-914924

    https://judithcurry.com/2020/04/14/in-favor-of-epistemic-trespassing/#comment-914978

    Afterwards he did walk it back just a tad:

    https://judithcurry.com/2020/04/14/in-favor-of-epistemic-trespassing/#comment-915002

  56. Joshua says:

    If anyone is going to drop off David’s comments, please only do it briefly and in the thread where he asked Andrew his question. I don’t want to add anything to the other thread where Andrew asked that the childishness cease.

    I’ll also point out that when I criticized the quality of the human subject methodology in the Santa Clara study and said it shouldn’t have passed an IRB review, and that anyone who does human subject science would know that, David first explained that I am not a scientist and then he explained that I post anonymously, and then he explained that he’s been publishing research for 40 years.

    I didn’t realize that they do human subject research at Boeing.

    David’s gonna David.

  57. dikranmarsupial says:

    “Josh, This Gelman is a nothingburger. ”

    ROTFLMAO

  58. Joshua says:

    Oh, and also, aside from the childishness, I think some folks here might be interested in the discussion up at Andrew’s about “informative priors.” I recall that was a subject of a discussion here a while back – I think in particular in connection with Nic’s ability to determine which priors are “objective?” 🙂

    James left a comment in his customarily diplomatic tone:

    https://statmodeling.stat.columbia.edu/2020/05/17/are-informative-priors-incompatible-with-standards-of-research-integrity-click-to-find-out/#comment-1339344

  59. dikranmarsupial says:

    Mal “The “brilliant for user interfaces” is compatible with DM’s “research toys for others in my field to play with”. Python was an immediate hit with guys doing exploratory data visualization with sliders 8^).”

    ????

    By “research toys” I meant libraries implementing my methods so that other researchers could build on them. No sliders involved (although they are often object oriented as that facilitates their extension/modification by the users).

  60. dikranmarsupial says:

    Re James’ comment – calling it “objective” is even worse (as it is a jargon meaning of “objective” rather than the one in general usage, but that distinction is rarely made by those promoting them).

  61. Ben McMillan says:

    This is a neat demonstration of a research toy you can “play with”. “Build your own zero-emissions energy system”:
    https://model.energy/

    For those who are bored of armchair epidemiology and looking forward to armchair energy system planning.

  62. Joshua,
    Which of Andrew’s threads was the one where he “reprimanded” you and DPY?

  63. Everett F Sargent says:

    “…so things like “IF (b-a) = 0.2…” probably won’t behave the way you expect. And thinking that “(N/10)*10” will actually be N is a dangerous place to put your brain. Double-precision reduces the problem, but does not remove it.”

    In over 45 years of Fortran programming, particularly since my default has been dp code for most of that time, I have very rarely run into any such issues, as long as I keep my floats over here and my integers over there. When dealing with mixed ops I have always converted integers to floats (well, the integers are still there, I just will not do comparisons between data types). I code in baby steps now, in fact all my codes are baby codes, so that I rather quickly run into problems that need corrections. The baby algorithms are mostly in my head now.

    I am very wary of those potential issues though. Always.

  64. Joshua says:

    Anders –

    Here:

    https://statmodeling.stat.columbia.edu/2020/05/14/so-much-of-academia-is-about-connections-and-reputation-laundering/#comment-1339357

    I seriously love David’s parting shot. He steadfastly turns down the chance to show any grace whenever provided an opportunity.

  65. Reading that thread, I thought it nice that DPY excused the human failings of the researchers involved in the Stanford study. Based on my past interactions with DPY, that seemed quite out of character. You might think that it’s because the Stanford study produced results that suited DPY’s preferred narrative, but it can’t be that, surely?

  66. Joshua says:

    I’ll be generous to David and assume that he has no idea what kinds of expectations there are for human subject research. Too bad that he can’t just admit that, rather than insist that he has some expert perspective on it by virtue of his background. It just makes him look bad instead of just lacking knowledge.

    It will be interesting to see what happens with that “whistleblower” report described at BuzzFeed. Is it really a whistleblower? Is there evidence to support the accusations?

    If so, I would hope there will be disciplinary action taken. If not, if they defer to the reputation and esteem of Ioannidis et al., it would be a stain on Stanford.

  67. Willard says:

    > Which of Andrew’s threads was the one where he “reprimanded” you and DPY?

    This one:

    To all in this sub-thread: enough has been said on this topic! Give it a rest, as it overwhelms our comment threads. Please agree to disagree, or take the disagreement elsewhere. Thank you.

    https://statmodeling.stat.columbia.edu/2020/05/14/so-much-of-academia-is-about-connections-and-reputation-laundering/#comment-1339357

  68. Clive Best says:

    Willard,

    “Real programmers always blame the keypunch operator”

    Real programmers used IBM card punch machines and then fed the cards through a RIOS (Remote Input Output Station)

    http://cds.cern.ch/record/1816218

  69. dhogaza says:

    Joshua

    “I didn’t realize that they do human subject research at Boeing.”

    Boeing 737 MAX … researching how quickly human pilots can react in a crisis situation …

    OK, that’s not very nice of me.

  70. Joshua says:

    I’m trying to imagine what would explain a researcher testing a few thousand participants for antibodies, with a test that gives false positives, and knowing that at least some participants have been told that a positive test could be a passport for going back to work, and then being resistant to informing the participants of the implications of a false positive and offering them a follow-up test.

    To the point where someone else in the research team would withdraw from the publication because of the ethical implications.

    I mean, I get motivated reasoning – but that’s just indefensible in my book. Knowing that some infectious people could be walking around thinking they have been informed that they can’t infect anyone – as in, say, their grandmother or spouse or child?

  71. Willard says:

    I guess that puts me into the “data cleaning” bin:

  72. Steven Mosher says:

    what did you do before enlightenment? chop wood, clean data.

  73. Bob Loblaw says:

    “…as long as you keep your floats over here and your integers over there, when dealing with mixed ops I have always..”

    Well, that’s much of the point. In the code, (0.4-0.3) and (0.3-0.2) might be expected to be equal, but in base 2 there is no guarantee. In VBA single-precision code in Excel, the results are 0.1 and 0.09999999. You won’t notice it unless you force Excel to show you more decimals than it wants to. (Excel normally uses double-precision internally. Fewer problems, but not zero.)

    And (N/10)*10 might be equal to N for sufficiently large values of N, after rounding, but if N is an integer, and N/10 is converted to floating point, and the result*10 is stored back into an integer variable with truncation, you can bet your sweet bippy that something will eventually go wrong.

    I was also taught that when doing mixed mode, make sure that you make the decisions where to cast to a new type. When I and J are integers (there’s that FORTRAN train of thought), and the (maybe ancient) compiler is happy doing integer math without forcing to floating point, I=1 and J=10 and K=I/J will result in K=0. Even when I/J is cast to float, truncation can give you a smaller result than you expect.

    Don’t trust the compiler to cover your @$$.
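    Both traps are one-liners to reproduce (a Python sketch, with // standing in for old-style integer division):

        N = 7
        print((N // 10) * 10)    # 0, not 7: the division truncates before the multiply
        print(0.4 - 0.3 == 0.1)  # False: none of these values is exact in base 2
        print(0.4 - 0.3)         # 0.10000000000000003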

  74. Willard says:

    Well, we have a winner:

  75. izen says:

    While the arcana of the various features and flaws of FORTRAN, PYTHON, R and MATLAB are undoubtedly of importance in all this (what’s wrong with FORTH?), it somewhat misses the purpose to which this argument has been put.
    All the major newspapers in the UK are now running the story that the computer modelling used is WRONG, with the clear implication that the lockdown, social distancing, and testing are all a malicious imposition of government control that is both unnecessary and economically disastrous.
    The ‘SUN’ as usual has the most dismissive headline –

    “-‘IT’S A MESS’ Professor Pantsdown’s ‘Stay At Home’ lockdown advice based on badly written and unreliable computer code, experts say”

    The issue has ceased to be about the quality of the code, or the modelling; this is now being used to attack and change the policy response to the pandemic for reasons other than scientific quibbles about computer language or variations in stochastic models.

    As with climate change, the attacks on models have little to do with computational purity, but are a proxy battle against the inevitable policy conclusions that can be derived from the best scientific knowledge we have of the issue.

  76. dikranmarsupial says:

    “what’s wrong with FORTH?”

    great language for a small computer (like my first computer – the Jupiter Ace)

  77. jsam says:

    Treating it as a Fermi problem (“how many piano tuners are there in New York?”), just take fifteen minutes to knock up a spreadsheet, and you’ll soon find out you’ll overwhelm the health service within weeks.*

    Box was right. All models are wrong. But Ferguson’s was useful.

    *there are also a slew of studies that point out almost every spreadsheet on the planet contains errors. We still use them.
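    The Fermi version takes even fewer lines than the spreadsheet (a Python sketch with deliberately round, made-up numbers):

        # Assumptions: R0 ~ 3, generation time ~ 5 days, ~2% of cases needing
        # critical care, and ~10 ICU beds per 100,000 people.
        cases, population = 100.0, 66e6
        icu_beds = population / 100_000 * 10
        day = 0
        while cases * 0.02 < icu_beds:
            cases *= 3 ** (1 / 5)  # daily growth factor for R0 = 3
            day += 1
        print(day, "days until ICU demand exceeds capacity")  # ~5 weeks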

  78. izen says:

    @-dikran
    “great language for a small computer (like my first computer – the Jupiter Ace)”

    Yeah, I wrote a sound-to-light show program in FORTH on a Sinclair Spectrum back in the day…
    easier than machine code/assembler (grin)

  79. Ben McMillan says:

    The articles (not sure they are really in ‘all the papers’ but certainly in most of the populist and right-wing ones) are a pretty broad-spectrum hit-job on Ferguson, really, although some of them lead with the code stuff. Actually the Daily Mail one even includes more relevant/reasonable criticisms about the choice of parameters used in the models.

    This is a general challenge for any domain which involves computer models, which is basically all of them. It is now standard practice to open-source anything with public-policy implications, and I think nobody really expects or wants that to change. But any bug, or even a deviation from someone’s idea of ‘coding standards’, no matter how minor, will be presented by some as a fatal flaw discrediting the work. Won’t get that much traction unless it significantly changes the results.

    But the most effective ‘big hit’ against Ferguson is still going to be his personal contravention of lockdown. I mean, obviously, if you are in the public gaze, you are going to be subject to personal attacks, so your life isn’t going to be the same. You are now a minor celebrity: deal with it. Maybe the peak science bodies should do more to provide media and public image advice/services for heavily exposed scientists?

    Don’t think much of this is a problem that scientists can solve. You need a press corps with sufficient science literacy and integrity to be able to correctly identify which issues are consequential, rather than one that makes a habit out of skewed hit pieces. Don’t think that will happen any time soon, given the slow apocalypse traditional media is undergoing.

  80. Ben,
    One problem (I think) is that modellers don’t always do a great job of highlighting the limitations of their model. In the case of a model trying to represent something complex, like how our various interactions might spread a virus, it’s probably impossible to capture all the complexity. So, it seems unlikely that any model results will accurately represent what happens. However, this doesn’t necessarily matter if you’re trying to check if we might swamp the healthcare system (it mostly matters whether or not we will end up over-capacity, not whether it’s 10 times over, or 50 times over). Similarly, a model might not be able to precisely predict the outcome, but it can still probably tell you something of how various different strategies might change the outcome. If there are indications that we might end up many times over-capacity if we do nothing, we would want a strategy that substantially reduces this, rather than one that would only have a modest effect.

    But the most effective ‘big hit’ against Ferguson is still going to be his personal contravention of lockdown.

    Yes, even though this has little bearing on the science, I still find it annoying. He must have been aware that he would be in the spotlight.

  81. Bob Loblaw says:

    “…for any domain which involves computer models…”

    A minor nit to pick, but it always bugs me when people talk about “computer models”. Science makes use of mathematical models. Computers are just one way of solving these. Calculus and Algebra exist independently of computers. Analytical solutions exist for many cases of mathematical models, independently of computers. Computers just happen to be a convenient and fast way of finding solutions to some problems, but the concepts are inherently mathematical.

    If the contrarians’ bloviating of “you can’t trust computer models” were expressed as “you can’t trust mathematics”, then it would be obvious how empty their rhetoric is.

  82. dikranmarsupial says:

    You can’t make predictions without a model, even if there is no computer and no maths either, there is still some conceptual model involved. If you don’t have that, it isn’t a prediction, it is merely a guess.

    In my experience, most contrarians bloviating about other peoples computer models become rather reticent when you ask them about the model underpinning their predictions.

  83. Bob Loblaw says:

    Yes, even a “descriptive model” is a model…. a collection of words describing how you think something behaves.

    Mathematics is just a well-defined collection of such descriptive models, with standard symbols (its own language) using well-defined concepts with well-defined rules on how those concepts link together. “Y = mX+b” is just shorthand for “Variable Y appears to be related to variable X in that any increase in X is matched by an increase in Y in proportion m, and when X is zero, Y still has a non-zero value of b”.

    The definition of “model” that I tend to use is “an abstract representation of reality”. There is always a level of abstraction. It is never an exact duplicate of reality. Yet such abstractions can be useful if they do a good job of approximating some portion of reality. (There is probably a shorter, niftier, well-recognized descriptive model for that concept. 🙂 )

    Even this comment does not completely express the true nature of my thoughts – it uses the English language to provide you with a descriptive model of my thoughts.

  84. Everett F Sargent says:

    Is this a model, or just Excel curve fitting at its so-called finest?

    I’d go with curve fitting. Taking the most conservative (slowest decay), which is the power law decay (log-log), I’d expect half a million deaths in the July time frame (the same applies to the exponential decay (log-normal)).

    Really not much of a prediction. All fits may decay slower than shown.

  85. Everett F Sargent says:

    ‘log-normal’ should be ‘log-linear’ per graph labels

  86. Willard says:

    It’s simple, really. No model, no measurement. No model, no data. No model, no implementation of any theory whatsoever. Unusable science.

    And yet:

  87. Joshua says:

    Just for the sake of amusement…

    I didn’t let Andrew know of David’s previous reflections on Andrew’s contributions. And after Andrew noted that David was rather “rude,” David responded.

    David Young says:
    May 18, 2020 at 12:05 am
    Andrew, Joshua is quote mining a long comment thread with lots of other comments. What I meant to say is that your post is in my opinion a nothingburger. I don’t know about the statistics part but it seems to me the critical issue is the serologic testing. In that comment thread it is also stated that you are a world class statistician, a comment I agree with. I apologize for the thrill seeker comment.

    Joshua has been heckling me on blogs for several years. He shows up with very repetitious and often unscientific comments and always brings up his motivated reasoning ad hominem. It just gets very frustrating to have every interaction taken down the same road into internet diagnoses of my “reasoning” and state of mind. In addition its unethical.

    ++++++++++++++++++

    Ah, irony. Where would the blogosphere be without irony?

  88. Joshua says:

    Ooops. I “did” let Andrew know…not “didn’t.”

  89. Joshua says:

    On perhaps a positive note – one that perhaps presents a different dynamic than the attacks-on-scientists dynamic:

    I think a few folks who read here are from Oz? I’m curious if any of them have any thoughts?

  90. Dave_Geologist says:

    “what’s wrong with FORTH?”

    Dunno. I was waiting for FIFTH to come out.

    Still waiting 😉

  91. Ben McMillan says:

    Yeah, sure, what the “skeptics” are attacking is really details of the code implementing a mathematical model. Because it turns out that whining about the appearance of code is a lot easier than actual verification and validation, or understanding maths.

    ATTP: I agree that the limitations of the models are not well described in popular media. But I don’t think it is realistic to expect a careful conversation with any significant amount of nuance. And I certainly don’t think this is the ‘main problem’.

    ‘We predict between X and Y deaths if no policy measures are taken’ is about as good as it is going to get. I guess you could have ‘this will require 50x as many hospital beds as we have’ as well.

    Something like ‘the model uses a continuum model that represents viral spread on a coarse-grained single-pool population level, rather than simulating the details of individual interactions, or considering inhomogeneous subgroups’ seems several steps too far to me.

    I guess you could have ‘these are highly idealised models of disease spread through the population, but have been effective at capturing the broad features of previous epidemics’.

  92. Ben,

    I agree that the limitations of the models are not well described in popular media. But I don’t think it is realistic to expect a careful conversation with any significant amount of nuance. And I certainly don’t think this is the ‘main problem’.

    Indeed, but I wonder if the model limitations are even made clear to the policy makers.

  93. Joshua,
    One of the issues I have with DPY (amongst a number) is that he seems to expect people to respect his expertise while regularly dismissing the expertise of others (Gavin Schmidt and Andrew Gelman being two prominent examples). It’s not only that this is rather rude, it also suggests a rather severe case of motivated reasoning, which would then suggest that one should be cautious of taking anything he says seriously (IMO, at least).

  94. Joshua says:

    Anders –

    Agreed. His apparent disregard for, or inability to recognize, his obviously hypocritical complaints about ad hominems also suggests a severe case of motivated reasoning.

  95. Willard says:

    I left a comment at Andrew’s:

    > Joshua is quote mining

    Interested readers may wish to read David’s comments at AT’s:

    https://andthentheresphysics.wordpress.com/2020/05/09/attacking-scientists-who-tell-the-truth

    He made 13 comments on that thread until he got caught telling a second porky.

    Readers may also be interested in the comment thread on a post dedicated to him:

    http://julesandjames.blogspot.com/2020/04/euromomo_10.html

    David made more than 25 comments over there.

    Readers might also wish to read David’s comments at Judy’s:

    https://judithcurry.com/2020/05/06/covid-discussion-thread-vi/

    David made more than 30 comments under his other sock.

    David’s modus operandi should be fairly obvious.

    Caveat emptor.

    More than one link gets you in moderation at Andrew’s.

    But srsly, we all should give it a rest. David’s gonna David.

  96. Joshua says:

    Willard –

    > But srsly, we all should give it a rest.

    I agree. Done.

  97. Bob Loblaw says:

    “Indeed, but I wonder if the model limitations are even made clear to the policy makers.”

    Well, having worked in government for the last 25 years, and having seen some policy shops in action, the policy wonks frequently don’t have the time to listen, and don’t necessarily have the technical skills to understand the limitations. The really bad policy wonks have drunk the kool-aid and honestly think they can learn everything they need to know to advise the policy makers, after only a few hours of “talking to the right people”.

    The good policy wonks will have a strong background in the subjects they are asked to advise on. The bad ones think that changing areas/departments every few years is a sign of broad experience and knowledge, and is required for a successful career. (i.e., my idea of bad is their idea of good.)

    Welcome to “Yes, Minister”.

  98. Ben McMillan says:

    “Yes, Minister” was the first thing I thought of, too…

    AFAICT, the initial judgement that it would be a bad idea to intervene too early was an actual error, though, rather than a failure of communication.

  99. Nathan says:

    I’m Australian and the Federal Govt hasn’t changed its ideology. They have long been anti-China and blandly racist; this was just an opportunity to shine… Cutting off flights from China was a key part of the response.
    Then the State Govts did their part so we can’t travel between States without going into quarantine.

    Australia doesn’t have the ‘individualism’ that seems apparent in the US, our history is more about helping each other (as long as you’re not black or Asian) through the tough times, so there is a streak of collectivism. We worship events (like Gallipoli, Tobruk, The Kokoda Trail) that were tough and where we all had to work together to succeed or fail (like Gallipoli) rather than heroic individuals (we tend to mock heroes), so it wasn’t hard to get people to work together.

    The Federal Govt will use this to pursue their agenda of watering down labour laws and the stimulus will support their big-business mates… Same same really.

  100. Nathan says:

    Australia also did very well during and after the GFC because the Labor Govt (the more Socialist one) spent big on infrastructure programs and handed out cash. Although the Conservatives mocked them and ridiculed the programs it was pretty clear they worked, so in this worse situation there’s no way they couldn’t follow suit.

  101. Steven Mosher says:

    Hmm. In the beginning I was rather surprised at De Blasio’s get-it-done approach, and heartened by Cuomo’s data-driven approach to decision-making. However, De Blasio has gone mad, and watching Cuomo’s team explain needlessly complex metrics has been a disappointment.

    Looks like NY will employ Imperial College. Watch the guy. Yuck.

    meanwhile,

    feet on the ground
    https://wwwnc.cdc.gov/eid/article/26/8/20-0633_article

  102. JCH says:

    I think part of the leveling that started in April is due to the fact that April is warmer than March. That would mean the worry about the reopening soon causing a 2nd wave may be all for nought. If so, the fall could be a disaster, with the nation behaving recklessly with a virus that has its mojo back.

    COVID-LAB: MAPPING COVID-19 IN YOUR COMMUNITY

  103. David B Benson says:

    A successful virus type is able to replicate and then transmit. So if it is too virulent, the host goes to hospital, breaking the transmission.

    Examples include the viruses causing the so-called common cold. Just enough to keep going.

    My amateur opinion.

  104. jamesannan says:

    Hey ATTP, would you care to output a zoomed-in view of what was happening in the early stages of the run, say up to mid or late March, in terms of cases and deaths? Would be interesting to see how it compares to reality…

  105. James,
    Sure. I’ve put two figures below. Unless I’ve messed up, these are from the sample parameter files they provided using R0 = 3.0. Some of what they presented in the papers was for R0 = 2.4. I’ve plotted cumulative deaths and then infections, both for 100 days, starting on 01 Jan (i.e., ending around 09 April). In both cases it’s for scenarios where no interventions were introduced.

    Let me know if there is anything else you’d like. I’m certainly not yet completely familiar with all the parameters in the code, but I can give it a try.

  106. Ben McMillan says:

    James,
    Do you have any idea what is going on with this Imperial model: they have a ‘country-specific lockdown factor’ that does most of the work, but I’m struggling to explain how it worked for Sweden:
    https://mrc-ide.github.io/covid19estimates/#/details/Sweden

    I’m wondering if the country-specific results on the web are produced using the country-specific Stan code.

  107. an_older_code says:

    What’s illuminating is that some of the critics complain about the model being built for “flu”.

    This is an example I have seen on a “skeptic” website:

    “The model itself does not withstand independent scrutiny and is based on some deeply flawed assumptions, namely that it’s based on the spread of a flu like virus”

    I think these idiots mentally picture a “model” a bit like a model aircraft or model boat

    “The model itself does not withstand independent scrutiny and is based on some deeply flawed assumptions, namely that it’s based on a model of a Sopwith Camel not a Messerschmitt BF 109”

    the modellers have literally built a physical model of a Sopwith Camel – doh

    I don’t think they have the cognitive strategies to internally represent what a computer model is and what it isn’t.

  108. jamesannan says:

    Thanks ATTP, I was hoping for something visible :-) How about a log scale, so we can see the rate? Putting real data on would be good too 🙂

  109. James,
    Now you’re asking a lot 🙂 Let me see what I can do.

  110. verytallguy says:

    Seeing as you’re here James, it would be intriguing to see what your model now calculates R for Sweden. Any chance of running that? My assumption is R~1 as deaths are stable or very slightly declining?

  111. Clive Best says:

    @Ben

    That is a different IC “model”. It is mainly R code which tries to perform Bayesian statistical inference on the actual case data.

    https://github.com/ImperialCollegeLondon/covid19model/tree/master/stan-models

    It is indeed very confusing.

  112. Ben McMillan says:

    Clive: yes, I guess I should have mentioned that the Imperial model I linked is not the model in the OP. I don’t think the model itself is too complicated, but I’m struggling to understand the results…

    This looks suspiciously good. Of course, it assumes no interventions (which started around day 83). I have a telecon starting in 15 minutes, but will try to look at this some more later. Black points are the data from here.

  114. Dave_Geologist says:

    David, this virus has little or no selection pressure favouring less lethal forms.

    Death rates are minuscule among the population that will breed the next generation of hosts. And its long asymptomatic but infective period and high reproduction ratio mean that even those who are sick enough to stay at home undirected won’t slow it down. Doubling times for deaths (as a proxy for infection 2-3 weeks earlier) were down to 2.5 days in the UK before lockdown, despite its current lethality. It would need a “hopeful monster” mutation that made it less harmful without impacting transmission, rather than a chain of small mutations each favouring a slightly less lethal form.

    Something like Ebola, with a 50% death rate across generations, would be a completely different matter.

  115. This is what I get if I include the basic interventions. PC = place closures. SD = social distancing. CI = case isolation. HQ = household quarantine.

    Finally, this is what I get if I consider cumulative infections. The black dots are, again, from here and almost certainly undercount the total number of infections. The model would suggest that a few million people in the UK have already been infected. This would be ~5% of the population, which is – as far as I’m aware – consistent with testing that has been done elsewhere.

  117. dhogaza says:

    Dave_Geologist

    “Something like Ebola, with a 50% death rate across generations, would be a completely different matter.”

    Ebola’s a wildcard because the bodies themselves are highly infectious … and in fact burial practices were a major factor in its spread and a cultural barrier that had to be overcome to help knock it down. The 50% death rate just increased the odds of coming into contact with bodily fluids (let’s face it, Ebola’s symptoms are gross) while preparing a body for burial, etc.

    In the case of respiratory diseases, bodies don’t breathe, so a 50% death rate would be a different matter, and indeed MERS, with a death rate of about 36% in humans, has a basic reproduction number of less than 1. Except it doesn’t kill camels, which just get a snotty nose, so once the mutated form capable of infecting humans became established in camels, oh well, things sucked.

    So for David Benson … things aren’t as simple as they might seem on the surface.

    Of course I find wanting to maintain the burial practices of one’s culture a lot more defensible than the urge to maintain our cultural practice of assembling in large groups waving AR-15s and proclaiming the desire to use force to stop government from implementing policies meant to save lives, but that’s just me.

  118. dhogaza says:

    ATTP

    That’s very cool and does suggest that the underprediction of 7K-20K UK deaths with interventions was perhaps due to their use of an R0=2.4 rather than any fatal structural issues with their agent-based SIR model.

  119. jamesannan says:

    That’s interesting, ATTP, though it’s clearly not the simulation they presented in March. I wonder how much has changed apart from R… they said in the paper it was initialised to have the same deaths to March 14. Maybe I should try to get it running – is it easy? I have a MacBook Pro laptop.

    How do the interventions cause a change before they are introduced, I wonder? Is that just a random seed thing?

  120. James,
    Yes, there are a large number of parameters and I’m not convinced that how they’re set in the publicly available version is the same as was used in the model results they presented in March.

    Maybe I should try to get it running – is it easy? I have a MacBook Pro laptop.

    It is pretty straightforward. However, I’m running on a cluster which I think has all of the necessary compilers installed by default. As a rough guide, it’s taking about 20 minutes using 24 cores. So, probably a few hours on a Mac (depending on how many cores you have). I do know of someone who was trying to run it on their MacBook and ran into memory problems.

    How do the interventions cause a change before they are introduced, I wonder? Is that just a random seed thing?

    Yes, I think it must be this. The interventions should be doing nothing until after day 84.

  121. dhogaza says:

    James Annan

    “That’s interesting, ATTP, though it’s clearly not the simulation they presented in March. I wonder how much has changed apart from R…”

    When Microsoft first took charge – moving to C++, using OpenMP, etc. – they did validate against the original model outputs. However, it’s not clear if that original model was exactly the same as the one used to present information to the UK government back in March, i.e. the IC team might’ve been working on improvements between then and when MS got involved.

    Currently, the model is definitely changing. For example this commit:

    “Added Death from influenza-like-illness (ILI). Previously could only die from Severe Acute Respiratory Illness (SARI) or Critical. ” Comforting to know we can die in three ways now 🙂

    Ferguson has been doing some tweaking in portions of the model, as have some other members of the team. So it’s a moving target. What you see in the repository is what they’re using in their active work consulting with governments, judging from comments they’ve made.

    So the closest you can get to the model’s state back in March would be to use git to grab the initial files used to create the repository, but it’s not clear how close that would be.

  122. Everett F Sargent says:

    vtg sez …
    “Seeing as you’re here James, it would be intriguing to see what your model now calculates R for Sweden. Any chance of running that? My assumption is R~1 as deaths are stable or very slightly declining?”
    .https://www.folkhalsomyndigheten.se/contentassets/4b4dd8c7e15d48d2be744248794d1438/riket-skattning-av-effektiva-reproduktionsnumret-2020-05-15.pdf
    See Figure 2 for R (in Swedish, but it looks to be ~0.9)

  123. Everett F Sargent says:

    OK something is being parsed so that it inserts a big empty space,
    riket-skattning-av-effektiva-reproduktionsnumret-2020-05-15.pdf
    From here …
    folkhalsomyndigheten.se/smittskydd-beredskap/utbrott/aktuella-utbrott/covid-19/analys-och-prognoser/

    [Added a dot in front of the PDF line; it seems that sometimes WP wants to embed it and gets stuck. -W]

  124. verytallguy says:

    Thank you very much, Everett!

    We’re writing in Swedish today.

  125. Joshua says:

    Ioannidis meta-analysis of IFR. Shocker: he finds it lower than other analyses.

    https://www.medrxiv.org/content/10.1101/2020.05.13.20101253v1

    This part is unreal:

    > For the other studies, healthy volunteer bias may lead to underestimating seroprevalence and this is likely to have been the case in at least one case (the Santa Clara study)19 where wealthy healthy people were rapidly interested to be recruited when the recruiting Facebook ad was released. The design of the study anticipated correction with adjustment of the sampling weights by zip code, gender, and ethnicity, but it is likely that healthy volunteer bias may still have led to some underestimation of seroprevalence. Conversely, attracting individuals who might have been concerned of having been infected (e.g. because they had symptoms) may lead to overestimation of seroprevalence in surveys.

    So he ignores some of the ways that the Santa Clara study might have been an overestimation because of the recruitment processes.

    But then he doubles down, ignoring the many reasons why Santa Clara would be an overestimate – due to higher median income, lower minority population, etc. – with respect to a broader extrapolation beyond Santa Clara.

    Another example of John’s thumb on the scale:

    >Locations with high burdens of nursing home deaths may have high IFR estimates, but the IFR would still be very low among non-elderly, non-debilitated people.

    He ignores the uncertainty in the other direction; i.e., does Santa Clara have *fewer* long term care facility residents than what would be nationally representative? He consistently looks at the uncertainties only in one direction.

    As someone who has long respected Ioannidis, I am having a hard time understanding how poorly he’s approaching the uncertainties in all of this.

    More on the Santa Clara team:

  126. Steven Mosher says:

    “So he ignores some of the ways that the Santa Clara study might have been an overestimation because of the recruitment processes.”

    I am not a fan of his but there is a fundamental problem in designing the collection
    of any serology data.

    Take something as simple as where the testing is and the requirement that people drive to the testing center.

    Imagine that mass transit is a source of transmission; that, if you ride the bus, your chance of catching a case is 2x that of someone driving. We don’t know.

    Now, we know that race plays a role in mortality, as does gender, so we can adjust for these factors in our sampling. We can sample across factors we know play a role: age, race, gender, etc. But there are holes in our understanding of all the factors that lead to infection, asymptomatic presentation, and outcomes.

    As in Korea, where all of our early cases were young people. The testing was skewed way young.

    Bottom line: I would hate to have to design any sampling strategy for these tests. Whatever you do is going to be subject to second-guessing and unknowns.

    In New York it appears they will test 280,000: 140K essential workers and 140K people who sign up.

    My bet is they won’t collect enough profile or behavioral data, and they definitely won’t publish it.

    meanwhile, where is the CDC?
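    [Ed.: to make the adjust-for-these-factors idea Steven describes concrete, here is a minimal post-stratification sketch with made-up numbers – the strata, prevalences and population shares are purely illustrative, not the Santa Clara values.]

    ```python
    # Minimal post-stratification sketch with made-up numbers.
    # Each stratum: (population share, positives in sample, number tested).
    strata = {
        "age<40":   (0.50, 12, 400),
        "age40-64": (0.35,  9, 250),
        "age65+":   (0.15,  4, 100),
    }

    # The crude prevalence ignores who actually turned up to be tested.
    total_pos = sum(pos for _, pos, _ in strata.values())
    total_n = sum(n for _, _, n in strata.values())
    crude = total_pos / total_n

    # The post-stratified estimate reweights each stratum's prevalence
    # by that stratum's share of the real population.
    weighted = sum(share * pos / n for share, pos, n in strata.values())

    print(f"crude: {crude:.2%}, post-stratified: {weighted:.2%}")
    # The gap between the two is the convenience-sampling bias that
    # weighting can correct -- but only for the factors you measured.
    ```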

  127. dikranmarsupial says:

    Model suggests masks effective ;o)

  128. Joshua says:

    Steven –

    I agree the rush to characterize seroprevalence has people way out in front of the data. The data are what they are. The problem is trying to extrapolate from those data as Ioannidis is doing.

    He then rationalizes the data to match his priors, as in saying that Santa Clara should be an underestimate. How has he quantified his speculated reasons for lower prevalence there, as compared to relevant factors such as race, ethnicity, and SES, and their associated factors such as access to healthcare, comorbidities, likelihood of being an essential worker, prevalence of multi-generational households (exposing older people to infection), and, yes, rate of use of public transportation? It doesn’t appear he has quantified any of that. He has just speculated away without providing evidence.

    I think it’s bizarre.

  129. dhogaza says:

    David Benson

    Well, there’s nothing new in the phys.org piece that I can see.

    They built a SEIR model and found it’s non-linear and highly sensitive to R and the length of the incubation period. You don’t have to build a model to learn that; a few minutes in Google would suffice. And it’s well known that the quality of the available data isn’t great. It’s a bit like an epidemiologist rushing up excitedly to a physicist and saying “I just experienced this cool thing called ‘gravity’, have you heard of it?”.

    This conclusion is a bit odd, though:

    “Preliminary results show that implementing lockdown measures when infections are in a full exponential growth phase poses serious limitations for their success,” said Faranda.

    When else would you? In the very beginning when there are relatively few people infected, it’s in “full exponential growth phase”. After a few doublings it’s still in “full exponential growth phase”. When exactly is it a good time to knock Rt down below one, if not then? Are they suggesting we sit back and wait for herd immunity to kick in rather than lock down?

  130. Everett F Sargent says:

    “meanwhile, where is the CDC?”

    Same old question, same old, and all too obvious, answer. Two words. Small Hands.

    You haven’t been watching the 4th season of The Apprentice: White House Edition wherein Small Hands plays with himself on both sides of the table.

  131. Joshua says:

    Meanwhile…in Sweden…

    I’m not particularly critical of Sweden’s approach. It’s one of the variety of bad choices.

    But when you look at the metric of deaths per capita, you will note that the rate of decline in Sweden is considerably lower than in many other countries, such as Switzerland, the Netherlands, even France, and many, many others. Sweden is rising up the chart at a consistent pace.

    In fact, Sweden has had the highest per capita deaths in Europe over the last seven days. Even higher than the UK.

    Cross-country comparisons are of limited value, and the reasons for Sweden’s relatively slower decline are complicated. There are necessarily tradeoffs in all of this, but you can’t even evaluate the tradeoffs if your vision is limited by your ideological blinders.

  132. Willard says:

    Audits never end:

  133. Joshua says:

    Willard –

    Assuming what will happen with the rapid peer-review… can wrong preprints be considered to be “wrong” published research?

  134. Willard says:

    > can wrong preprints be considered to be “wrong” published research?

    Preprints are not published.

    I think about that often.

  135. dhogaza says:

    “Preprints are not published.”

    They’re just publicized …

  136. dikranmarsupial says:

    pre-prints are fine, it is the press releases that are the problem.

  137. dikranmarsupial says:

    (IMHO)

  138. Clive Best says:

    I just ran Ferguson’s model for Sweden

  139. Clive,
    How did you set the parameters for Sweden?

  140. dhogaza says:

    “IMPORTANT: The parameter files are provided as a sample only and do not necessarily reflect runs used in published papers.”

    Now, on Clive’s page, regarding the run for the UK with lockdown using R=3.0 (which he thinks is too high), he says:

    “If you look at the bottom red curve which shows excess deaths directly related to COVID-19 then you can see that the lockdown predictions more or less agree with the outcomes. Does that mean that Neil Ferguson really did call the right shot?

    Probably he did, yet still today we have a fundamental lack of knowledge of the real-time numbers of infected and recovered persons in the UK.”

    So it works well for the UK case, suggesting that maybe it’s not a model flaw causing the results for Sweden to be so off. I’ve seen no indication that Sweden has been working with the IC group, so I’m guessing that no one, except Clive, cares about the model run results for that country.

    “This current obsession with R is actually misleading government policy because R will naturally change during any epidemic. R will always reach 1 in coincidence with the peak in cases and then fall rapidly towards zero.”

    Well, yes, R will always be 1 at the peak. It doesn’t appear that Clive understands that without intervention, that happens when herd immunity kicks in, i.e. lots of cases and lots of deaths. The obsession (as he calls it) with R is because the goal is to get control over the thing without reaching that very high level of infections and deaths.

    And he goes on:

    “At some point the costs of continuous stop-go lockdowns based on fear alone will become unbearable. We will have to learn to live with this virus.”

    Based on fear alone?

  141. dhogaza says:

    “How did you set the parameters for Sweden?”

    I would assume:

    ./run_sample.py Sweden

    But note the caveat about the parameter files …

  142. dhogaza,
    Yes, I realise that it has admin files for Sweden, but the available parameter files appear to be for the UK, the US and Nigeria.

  143. dhogaza says:

    ATTP

    Right, for the US and Nigeria it uses those param files, everything else uses “preUK_R0=2.0.txt”

  144. dhogaza says:

    ATTP

    So a reasonable assumption would be that they’re only working on production projections for the US, Nigeria, and the UK.

  145. Clive Best says:

    I used “run_sample.py Sweden”. However, I think there is definitely a bug, because the intervention scenario comes back as “Sweden_PC7_CI_HQ_SD_R0=3.0.avNE.age.xls” but it is still using the UK population size!

    This is what it should look like (IMHO)

  146. dhogaza,
    Yes, that’s my understanding too. So, that may impact why Sweden’s results look a bit odd.

  147. I don’t know if anyone else noticed, but James Annan got a mention in George Monbiot’s latest column for highlighting that starting the lockdown a week earlier would have substantially reduced the number of deaths.

    I posted to Twitter a set of IC model runs which considered lockdown starting one week earlier, or two weeks earlier. I deleted it when James pointed out that the earlier lockdowns had more deaths early on than the later lockdowns. I thought this might be because I’d messed up the initial start times between the different runs, but I get the same kind of result when I try to fix that. I’m not quite sure what’s going on, but it could be that closing schools and universities then leads to some extra contact in other environments (the model does assume that there are enhanced household and community contacts when schools and universities close). Anyway, the figure I produced is below. It certainly shows quite a substantial reduction if intervention had started a week earlier, but maybe not quite as much as James suggests.

  148. Joshua says:

    This is interesting.

    “Bayesian adjustment for preferential testing in estimating the COVID-19 infection fatality rate”:

    https://arxiv.org/abs/2005.08459

    Very mathy.

    But it seems they only looked at the likelihood of the sampling being skewed by infected people being more likely to want to get tested.

    So much other important unrepresentativeness (e.g., variables highly predictive of health outcomes like SES and race/ethnicity) basically makes most of these seroprevalence studies worthless, in my non-expert opinion (that plus $2.50 would have gotten you a cup of coffee pre-shutdown).

  149. Joshua says:

    Anders –

    > I’m not quite sure what’s going on, but it could be that closing schools and universities then leads to some extra contact in other environments (the model does assume that there are enhanced household and community contacts when schools and universities close).

    That’s interesting.

  150. Clive Best says:

    @dhogaza

    Thanks ! I think you are right !

    Is there any documentation as to how to set up the parameter files?

    Joshua,
    It’s one of the parameters in the parameter file. I don’t know whether the assumption about enhanced contact in the household/community when schools/universities close is reasonable or not.

  152. Clive,

    Is there any documentation as to how to set up the parameter files?

    Other than the brief instructions on GitHub, I don’t think so. I’ve had trouble trying to work it all out; I’ve managed some of it, but I still can’t work out how to toggle interventions on and off, for example.

  153. Clive Best says:

    ATTP,

    I posted to Twitter a set of IC model runs which considered lockdown starting one week earlier, or two weeks earlier. I deleted it when James pointed out that the earlier lockdowns had more deaths early on than the later lockdowns.

    This seems to happen in all scenarios at the beginning of a lockdown even their sample run.

    It doesn’t make sense to me either.

  154. Joshua says:

    Anders –

    It certainly was my reaction when I first heard that they were sending university students home – that they would just spread the virus to their families. But when I thought about it more, I thought that just leaving students in student housing would maybe be even worse long-term.

    Then again, if it makes intuitive sense to me, that’s probably an indication that it’s wrong. 🙂

  155. Ben McMillan says:

    I’m wondering if the model is automatically rescaling to match data at a certain date. I guess the log-scale graphs would show this, if that is what is happening. In other words, the pre-intervention curves should match each other.

  156. Joshua,
    There is an age dependence, so maybe the model suggests that leaving students at school/university leads to fewer deaths, even if it doesn’t impact the overall number of infections.

    Clive,
    Yes, even the table in Report 9 that they produced shows some oddities. I think it may be that there are assumptions about contacts once the interventions start that can lead to some counter-intuitive results. These parameters may be wrong, of course.

  157. Ben,
    No, even the log-scale graphs don’t match. It could be that I’ve made some change that somehow changes when the infection is initialised.

  158. Joshua says:

    Anders –

    > There is an age dependence, so maybe the model suggests that leaving students at school/university leads to fewer deaths, even if it doesn’t impact the overall number of infections.

    Except it’s unrealistic, imo, to think that you can keep them segregated, long term. They would destroy the student housing before moving out into the community (says someone who has rented houses to students).

    That’s what so many of the rightwingers miss with the whole “just protect the old people and stop stealing my freedom” rhetoric. That’s also unrealistic, especially for older people in poorer communities, not least because they’re more likely to live in multi-generational households.

  159. Steven Mosher says:

    “Joshua,
    There is an age dependence, so maybe the model suggests that leaving students at school/university leads to fewer deaths, even if it doesn’t impact the overall number of infections.”

    Some of the models have “mixing” matrices that “capture” how much/often old people mix with young people.

    As a lover of models I have to say I think they are being misused at this point.

    I don’t think you can use them to fine-tune policy. Well, you can use them; I’m just not convinced it will beat trial and error.

    To put it bluntly: some communities will open beaches, some will open them for walking only, some will drag you from the water if you swim, and some will keep them closed. None of it is based on any science or data analysis whatsoever. People will use models as cover for whatever they want to do.

  160. Willard says:

    I too like models:

  161. Joshua says:

    > People will use models as cover for whatever they want to do.

    And people will use the policies local governments implement to address COVID for whatever they want to do.

    As one example, freedom fighters will yell “freedom” to feel good about their identity.

    Think of “Keep the government’s hands off my Medicare” if you want a good example.

  162. Joshua says:

    Oops. Anyone have Nic’s phone number?

    > STOCKHOLM (Reuters) – A Swedish study found that just 7.3 percent of Stockholmers developed COVID-19 antibodies by late April, which could fuel concern that a decision not to lock down Sweden against the pandemic may bring little herd immunity in the near future.

    https://www.reuters.com/article/us-health-coronavirus-sweden-strategy/swedish-antibody-study-shows-long-road-to-immunity-as-covid-19-toll-mounts-idUSKBN22W2YC

  163. izen says:

    @-David B Benson

    From your link to the medicalexpress article –

    “Sweden’s strategy is aimed at pressing down the curve so the healthcare system is not overwhelmed, while allowing the rest of society to function as near normally as possible.”

    A strict lockdown also avoids overwhelming the healthcare system, AND reduces the total number of excess deaths. But at the ‘expense’ of the near normal functioning of society. Especially the economic activity.
    This makes explicit the policy choice that governments are making between the number of people who die and sustaining the status quo during a pandemic with a significant IFR. The near normal functioning of society is surprisingly resilient to a big jump in daily deaths, so that is preferred by some (US, Brazil?) to the alternative of a significant disruption of the economic and social system by policies that minimise the number of dead.
    A death toll is chosen over an economic cost.

  164. Joshua says:

    What’s up with this article? The headline says:

    > 1 in 5 in Stockholm have virus antibodies: Sweden

    The first paragraph says:

    > Sweden, which has controversially taken a softer approach to the coronavirus pandemic, said Wednesday that more than one in five people in Stockholm were believed to have developed antibodies to the virus.

    The 2nd paragraph says:

    > An ongoing study by the country’s Public Health Agency showed that 7.3 percent of a sample of randomly selected people in Stockholm—Sweden’s worst-hit region—had antibodies when they were tested in the last week of April.

    ? 7.3% = more than 1 in 5?

    Also interesting that the souffle I linked has a very similar text but a completely different slant.

  165. Joshua says:

    Lol. Souflle = article.

  166. Joshua,
    Isn’t the argument that it was around 7% towards the end of April but is now something like 20%?

  167. Ben McMillan says:

    7.3% is the number actually found in tests a couple of weeks ago. The ‘1 in 5’ is an estimate for the current number thrown out by the head of the health agency in a press conference: it’s not clear how they arrived at that figure, but they have been saying similar things before, so I suspect they haven’t ‘updated their estimate to reflect new information’.

    (clearly more than 7% have now been infected, but three times that amount now seems unlikely)

    5% of the overall population is really not encouraging for the ‘herd immunity’ proponents.

  168. verytallguy says:

    Anyone who believes Swedish society is operating near normal need only speak to a Swede.

    The effect of their voluntary lockdown is little different economically from other countries’ mandatory lockdowns.

    Sweden currently has the highest death rate per capita in Europe.

    Whether their strategy works better over the long term regarding mortality remains open.

    They have accepted more deaths as the price for more freedom in the short term. The short-term economic effects have been similar to other European countries’.

  169. Steven Mosher says:

    Here joshua,

    santa clara sampling

  170. Dave_Geologist says:

    Apropos various comments:

    So Sweden only has to kill ten times as many people to reach herd immunity. Assuming of course that infection, and particularly mild symptoms, actually confers immunity and that it lasts.

    The large impact of a week’s delay in lockdown should be obvious. Assuming the same R time series after lockdown, the final number infected and the final death toll simply scale with the number of seed cases at the time of lockdown. Since UK deaths, and presumably prior infections, were doubling more than twice a week, every week’s delay equates to five times as many cases and five times as many deaths before it’s over.

    As I’ve said before, estimates of asymptomatic rates high enough to put us anywhere close to herd immunity yet, with an IFR anything like flu’s, require vastly different asymptomatic-to-symptomatic ratios between countries. Otherwise there would be countries and regions with more than 100% infected. While that may be plausible if you compare Italy with countries in sub-Saharan Africa with young populations, it would seem unlikely for those affected so far.

    Or alternatively, that people in those places have already had it more than once and infection, asymptomatic infection at least, doesn’t confer immunity. In which case “painless” herd immunity is a pipe dream.
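    [Ed.: spelling out the week’s-delay arithmetic above – a sketch using plain exponential growth, with pre-lockdown doubling times like those Dave quotes, not anything taken from the IC runs.]

    ```python
    # Seed cases at lockdown scale as 2^(delay / doubling time) under
    # unchecked exponential growth, and the final toll scales with them.
    delay_days = 7
    for doubling_time in (2.5, 3.0, 3.5):
        factor = 2 ** (delay_days / doubling_time)
        print(f"Td = {doubling_time} days: a week's delay -> {factor:.1f}x the toll")
    # Td = 2.5 gives ~7x, Td = 3.5 (doubling twice a week) gives 4x;
    # "five times" corresponds to a doubling time of about 3 days.
    ```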

  171. Joshua says:

    Anders –

    > Isn’t the argument that it was around 7% towards the end of April but is now something like 20%?

    OK. So I guess that’s their argument. But where’d they get that number? Why would it take months to get to 7%, then a bit under 4 weeks to go from 7% to more than 20%? Would exponential growth alone explain that without some undescribed reason for a rate change?

  172. Joshua,
    In the UK, the doubling time was initially less than 5 days. If that were the case in Sweden, then going from 7% to 20% would occur in less than 4 weeks. However, that assumes that people don’t change their behaviour, which does seem a bit unlikely.
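    [Ed.: a quick check of that arithmetic – the 5-day doubling time is the UK-like assumption from the comment above, not a measured Swedish value.]

    ```python
    from math import log2

    doubling_time_days = 5.0                 # assumed, UK-like early growth
    doublings_needed = log2(0.20 / 0.073)    # to grow from 7.3% to 20%
    days = doublings_needed * doubling_time_days
    print(f"{doublings_needed:.2f} doublings ~ {days:.0f} days")
    # ~1.45 doublings, i.e. about a week at that pace -- so "less than
    # 4 weeks" holds even with a considerably slower doubling time.
    ```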

  173. Joshua says:

    Anders –

    So with a constant doubling rate, would 7% over two(?) months (the first case in Sweden was Jan 31, but let’s assume it took a while to get up to doubling speed) get you to over 20% in 3 months?

  174. Joshua,
    Yes, I think so. If you look at the blue line in the figure in the post (“Do nothing”) the UK would have gone from virtually nothing to the peak in just over 2 months.

  175. Joshua says:

    Steven –

    Thanks!

    So I’m not a total idiot after all?

    I absolutely love the illustration of the problem with convenience sampling (at about 28 minutes)!

  176. Joshua says:

    Anders –

    Thanks. But clearly behaviors did change over that entire period. They’re always talking about how well they’re social distancing. So the doubling rate didn’t remain constant. It seems we could presume the doubling rate was higher during that Feb. – late April period, and I presume that one or two weeks at the highest doubling rates has a disproportionate impact.

    So I’m skeptical about that number.

  177. Ben McMillan says:

    ATTP: yeah, if the pre-intervention curves don’t even match and the initial infection changes, then the code and/or the way you are running it isn’t right.

    I also think Dave’s and James’ analyses make sense: the whole thing is close to linear, so moving the intervention a week earlier should just stop two doublings from happening in the model.

  178. Ben,
    Yes, that is odd. All I’m doing is adding 7 days on to the time when the interventions should start, but this seems to then be influencing what happens before the interventions are introduced. The only other possibility is that it’s to do with the stochastic nature of the simulation, but it seems too big of a difference for that. I’ll maybe have another look.

    Okay, I’m not entirely sure what is going on, but it looks like there is some trigger threshold that you set, along with the day on which it occurs. When you then change the start of the interventions, it looks like this changes the initial phase of the infections so as to reach this threshold on the specified day.

  180. Ben McMillan says:

    Going from 7% to over 20% would require 2/3rds of the infections to happen in the last month. That isn’t consistent with the case or death curves and would suggest that things are getting out of control. But given the 20% figure seems to have ‘limited empirical support’ it doesn’t seem worth spending too much mental energy on.

  181. Ben,
    Yes, I agree. It doesn’t look like it’s going out of control, so 7% to 20% in a month might be an unrealistic rise.

  182. Joshua says:

    Steven –

    Great point about whether they should have even provided confidence intervals for convenience sampling!

    And that chart at about 39 minutes is killer. Do the Santa Clara authors even understand the implications of the base rate fallacy for their studies?

    Wtf is wrong with Ioannidis?
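    [Ed.: for anyone who has not seen the base-rate problem Joshua mentions spelled out, here is a minimal sketch. The sensitivity and specificity values are hypothetical, not the actual characteristics of the Santa Clara test.]

    ```python
    def positive_predictive_value(prevalence, sensitivity, specificity):
        """P(truly infected | positive test), via Bayes' rule."""
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # Hypothetical test: 90% sensitive, 99% specific.
    for prevalence in (0.01, 0.03, 0.10):
        ppv = positive_predictive_value(prevalence, 0.90, 0.99)
        print(f"true prevalence {prevalence:.0%}: PPV = {ppv:.0%}")
    # At 1% true prevalence roughly half of all positives are false,
    # which is why a low raw positive rate can't be read at face value.
    ```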

  183. Joshua says:

    Oh – the point about confidence intervals and convenience sampling is at 45 minutes.

  184. dhogaza says:

    ATTP

    “When you then change the start of the interventions, it looks like this then changes the initial phase of the infections so as to then reach this threshold on the specified day.”

    In the original flu modelling paper, the scenario was that the virus crosses from animals to a person, and then the infection ramp-up begins, unknown to the authorities. Action by the authorities is impossible during this time, of course. Then, at a certain point, when there are a certain number of infected people, the authorities become aware that something novel is going on and can intervene. The parameterization you’re talking about seems related. Maybe …

  185. jamesannan says:

    ATTP, I do agree that a more sophisticated model might not precisely reproduce the result of the simple SEIR model. Lockdown increases transmission in the household, for example, and the effect of school closing depends on when schools are open. But it seems very strange that by taking action on day 69 (isn’t it?), the model generates a perceptible rise in deaths in a mere 5 days, rising to several hundred by about day 80. I would suspect the modelling is not quite right there unless a plausible explanation can be found.

    Again, a log plot of the start would be useful. I also very much prefer looking at daily values rather than cumulative – you can see any change in slope much more easily, for both cases and deaths.

  186. jamesannan says:

    Ah I’ve crossed a few comments there while having my lunch. Point about looking at daily numbers on a log scale still stands though 🙂

  187. James,
    I’m trying to run everything again, but I think the issue is that the parameters are set to match a certain number of deaths on day 100, which is then influencing the early phase of the infection when you set the interventions to be much earlier.

  188. Willard says:

    > Souflle = article.

    If only.

    In other news:

  189. jamesannan says:

    Ah, ATTP, that makes some sort of sense, as they specifically talk about hitting a death total in their 16 March paper (on 14 March in that case).

  190. James,
    Yes, that must be what it is. Whatever date I set the interventions to start, the model will try to match the number of deaths on 14 March, which then means that the initial spread of the infection ends up being influenced by the assumed date on which the interventions start. I think I’ve worked out how to change that, but it now seems that all my colleagues have found time to start using our cluster and my jobs aren’t starting as fast as they used to 🙂
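    [Ed.: a toy version of that calibration logic – this is only the general idea, not how the IC code actually implements it.]

    ```python
    from math import log

    # Toy model: cumulative deaths(t) = IFR * exp(r * (t - t0)) for a
    # single seed infection at day t0 (days counted from 01 Jan).
    r = log(2) / 3.5                     # assumed ~3.5-day doubling
    ifr = 0.01                           # assumed 1% infection fatality rate
    target_deaths, target_day = 21, 74   # ~UK cumulative deaths on 14 March

    # Calibration back-solves the seeding day so the curve hits the
    # target on the target day -- whenever the interventions start.
    t0 = target_day - log(target_deaths / ifr) / r
    print(f"implied seeding day: {t0:.0f}")  # ~day 35, i.e. early February

    # Moving the interventions earlier while keeping this constraint
    # forces the model to reshape the *pre-intervention* phase so it
    # still hits the 14 March target -- the behaviour described above.
    ```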

  191. Joshua says:

    Anyone seen David?

  192. Joshua says:

  193. Everett F Sargent says:

    Steven Mosher,

    “santa clara sampling”

    Thanks for that video presentation, very informative.

  194. Everett F Sargent says:

    Joshua,

    That twitter thread is also very informative; JA posted it, and that’s where I read the whole thing.
    https://threadreaderapp.com/thread/1262956011872280577.html
    The whole thread (I hope) all in one place.

  195. Clive Best says:

    In reality, Neil Ferguson got it mostly right for the UK!

  196. Joshua says:

    More thoughts about that 7.3% to 20% in Sweden.

    The number for late April reflects infections from two weeks earlier – around April 7th. And the 20% now isn’t seroprevalence but the actual infection estimate. So that makes it more like 7 weeks to go from 7% to 20%.

  197. Clive Best says:

    If R reduces naturally in Sweden to say ~1.2 then herd immunity can be reached at just a 30% final infection rate.
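    [Ed.: Clive’s 30% is roughly what the standard final-size relation for a simple epidemic model gives – textbook SIR algebra, not output from the IC code.]

    ```python
    from math import exp

    R0 = 1.2

    # Spread slows once a fraction 1 - 1/R0 is immune (the threshold)...
    threshold = 1 - 1 / R0
    # ...but an unchecked epidemic overshoots: the final attack rate z
    # solves the classic final-size relation z = 1 - exp(-R0 * z).
    z = 0.5
    for _ in range(200):   # simple fixed-point iteration converges here
        z = 1 - exp(-R0 * z)

    print(f"threshold: {threshold:.1%}, final size: {z:.1%}")
    # ~16.7% threshold, ~31.4% final size -- hence "just a 30% final
    # infection rate" for R ~ 1.2.
    ```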

  198. Everett F Sargent says:

    Clive Best,

    Don’t really know what you are showing, mirror image like?

    Per a comment upthread, the Swedes currently think that their R~0.9, not R~1.2.

  199. Steven Mosher says:

    “If R reduces naturally in Sweden to say ~1.2 then herd immunity can be reached at just a 30% final infection rate.”

    except of course there are neighborhoods in the Bronx that have infection rates over 40%

    Oh, and they are still reporting cases.

    Basically, NYC realized that cases are highly geographically concentrated, so they have started doing full-population testing in key areas.

    This stuff has a very high spatial frequency, so extreme care should be taken with any spatial averages.

    There is no UK, no Sweden, no USA. Heck, even in Korea we have different patterns for different regions.

    Even cities have wildly varying numbers. I saw that early on in Beijing, which had crazily different numbers across the various districts. Same with NYC.

    As data, this reminds me of rainfall data (complete with “floods” overwhelming dams). There are downpours and droughts and drizzle. It’s ugly, ugly stuff from a geostats perspective.

  200. Steven Mosher says:

    “So I’m not a total idiot afterall?”

    Nope. I was on WUWT after this first came out, trying to explain the problems with sampling. It’s funny how all the skeptics became gullible-ists.

    The slide at 39 minutes is great.

    Only a few more days left in quarantine.

    Going kinda stir-crazy. It’s been quite an adventure since leaving Beijing in Jan.

    Not exactly what I would have predicted for 2020.

  201. Steven Mosher says:

    “This also means John can go back to writing his sequel – I propose calling it “Why Most Published Research Findings Are False – with Personal Examples from Ongoing Pandemics”.

    that left a mark

  202. Willard says:

    Guys,

    It’s time for some game theory:

  203. dhogaza says:

    David B Benson

    The article makes it quite clear what went wrong.

  204. Steven,

    except of course there are neighborhoods in the Bronx that have infection rates over 40%

    Is there a source for this?

  205. Okay, I found it. Seems to be data here. There are regions where the percentage of cases that test positive is around 40%. Not sure what the testing strategy is.

  206. Dave_Geologist says:

    Clive, Sweden will only be able to maintain herd immunity with 30% infected for as long as they also maintain their current soft lockdown. Which (combined, presumably, with cross-border effects like loss of tourism and customers, and broken supply chains) looks set to hurt their economy just as much as the hard lockdowns are affecting their neighbours’. As soon as they release it, up goes R and bang goes their herd immunity. So they’re stuck in a circular treadmill like all the other hamsters; it’s just a different-shaped treadmill.

  207. Clive Best says:

    Dave_Geologist ,

    Yes, I think you are right. We are all caught in a Catch-22 situation. The more successful we are at curbing the outbreak, the harder it becomes to return to normal. Even countries like New Zealand, which may eliminate the virus completely, will have to self-isolate from the rest of the world indefinitely.

    At some point the knock-on effects, including deaths from a collapse in the world economy, will outweigh the effects from the virus. The only way out of this dilemma is either a vaccine or a new drug treatment which renders COVID-19 a mild disease.

  208. jamesannan says:

    The other solution is herd immunity via all the under-40s getting it and none of the over-60s (at least, over-70s). That way basically no-one dies. Though it’s not trivial keeping the oldies safe while this happens.

  209. verytallguy says:

    I think the reality may turn out more complex.

    As we work out best clinical practice, with or without new drugs, we’ll reduce the fatality rate.

    As we understand transmission better, we’ll be able to better target relaxing restrictions.

    As vaccines become available (perhaps as early as late this year) even if not 100% effective, they may reduce transmission and/or fatality enough to reduce the seriousness of the pandemic to levels we can tolerate.
    https://www.theguardian.com/world/2020/may/21/astrazeneca-could-supply-potential-coronavirus-vaccine-from-september

    As we understand the immune response and its longevity, and observe outcomes in different jurisdictions, the effectiveness of herd immunity type strategies will become clearer.

    As testing speed and accuracy improve, and tracing workflows are honed, fewer restrictions will be needed to contain the virus.

    And our response will vary accordingly as evidence emerges on all these points.

    I think the catch-22 scenario outlined is possible, but very much worst case.

  210. Joshua says:

    Make of this what you will

    > The Population Fatality Rate (PFR) has reached 0.22% in the most affected region of Lombardia and 0.57% in the most affected province of Bergamo,which constitutes a lower bound to the Infection Fatality Rate (IFR)…Combining PFR with the Princess Diamond cruise ship IFR for ages above 70 we estimate the infection rates(IR) of regions in Italy, which peak in Lombardia at 23% (12%-41%, 95% c.l.), and for provinces in Bergamo at 67% (33%-100%, 95% c.l.).

    https://www.medrxiv.org/content/10.1101/2020.04.15.20067074v2

  211. Joshua says:

    James –

    > The other solution is herd immunity via all the under-40s getting it and none of the over-60s (at least, over-70s). That way basically no-one dies. Though it’s not trivial keeping the oldies safe while this happens.

    Not trivial?

    In the United States, at least, it is unrealistic in the extreme.

    There are many multi-generational families who cannot segregate older people. There are millions of older people who serve as primary caregivers for grandchildren (and they have higher rates of comorbidities than their non-caregiver counterparts). Many older people (say 65+) are employed. They have to shop. They take public transportation to get to doctors, to go shopping. Many depend on others to come into their homes for caregiving. They have to interact with the general public in myriad ways.

    We can’t get anywhere near to your no-one-dies scenario.

    On top of which, some younger people do die – at varying prevalence in different localities – and morbidity from COVID is also a huge impact.

  212. Joshua says:

    In the US, we have nowhere near the infrastructure to realize that scenario. In Sweden, with a robust national healthcare infrastructure and a general attitude of shared responsibility for the public welfare, it is one thing to talk of a herd immunity approach.

    Talk of a herd immunity approach in the US is an entirely different discussion.

    I imagine the UK might be somewhere in between, but still light years away from Sweden (given the % of people who live alone, population density, etc.).

  213. Joshua says:

    Er… Multi-generational households (not families).

  214. Ben McMillan says:

    Life as normal but travel to infected countries is inconvenient = nice problem to have.

  215. Joshua says:

    Yeah. I’m gonna go there.

    I hope that everyone who enjoys the ability to isolate themselves and their families from risk, to at least some significant degree relative to their desire to do so, reflects on the difficulty that people from other segments of society (i.e., people of lower economic status) have in protecting themselves from risk.

    As such, I sincerely hope that they remain cognizant of those differences whenever they reflect on the wisdom of a “herd immunity” approach, or those approaches that might be closely related.

    Yes, it’s theoretically possible that, longer term, the same number of people will be infected either way. It’s also possible that, longer term, considering economic impact, a herd immunity approach will result in less pain and suffering than an approach that relies to some extent on government-mandated social distancing.

    But there’s no way to avoid the reality that it’s a gamble either way. It’s decision making in the face of vast uncertainty.

    Respect the uncertainty. Avoid magical thinking.

  216. izen says:

    @-Clive Best
    “At some point the knock-on effects, including deaths from a collapse in the world economy, will outweigh the effects from the virus.”

    I am unconvinced this is an unavoidable inevitability.
    Deaths could occur from a collapse in the agricultural infrastructure, such that famine becomes a problem. But historical examples show that food production within a nation, and the transport of basic necessities, can sustain a food supply that avoids widespread death; it just means you no longer have a choice of 57 varieties of breakfast cereal, or much meat.

    The ‘world economy’ is a very recent invention; much of it was not in place a few decades ago, and 75 years ago the world economy was a very different animal. While the current world economy may lack resilience in the face of extended lockdowns and other measures to reduce the death toll, it may be possible to modify it in ways that avoid significant deaths from its collapse. Unless there really is no way that the basic needs of food, water management, and shelter can be met EXCEPT by BAU, or the resistance to change is so great that alternatives that would avoid extra deaths cannot be made.

  217. Clive Best says:

    I just had a Zoom conversation with my cousin. He was in University College Hospital London for a heart problem (130 bpm when resting) and awaiting surgery. He was instead discharged home because of the coronavirus emergency, with a follow-up appointment one month later, to which he went. Unbeknown to him, all appointments had since been made telephone-only, but the bureaucracy had failed to inform him. When he arrived for the appointment, the staff were drinking coffee, and he was asked what he was doing there. He needs an operation, but everything else is now delayed indefinitely because of the coronavirus panic. His consultant said there are now up to 1000 on the waiting list, so if he can afford it, go private.

    I am sure there are thousands of other examples across the UK.

  218. Willard says:

    Indeed, Clive.

    And I’m sure you thought about what this implies regarding your usage of “panic.”

  219. Clive Best says:

    Willard,

    Perhaps a better description would, with the benefit of hindsight, be “over-reaction”.

  220. Willard says:

    You indeed overreacted, Clive. You’re describing a situation by implying that doing nothing would have been even worse for the medical system. That’s false:

  221. Just to be clear, it must be terrible to be waiting for surgery that is now delayed because of the current crisis. However (as I think Willard is highlighting) it may well have been preferable to have acted faster, rather than slower.

  222. Willard says:

    Of course it’s terrible to wait for surgery. But consider the kind of magical thinking it requires to believe that fewer interventions would lead to more surgery and, overall, more GRRRRROWTH.

    Unless one can show that less intervention leads to fewer deaths, the whole line of argument is pretty theoretical. Decisions had to be made. They were. They mostly were suboptimal. Now what?

    Modelling is not worth much without constructive proposals, and in fact most policies are quite agnostic regarding what contrarians are trying to peddle.

    But then what else is new.

  223. JCH says:

    “coronavirus panic”

    If they had panicked when China panicked, there would be a very low number of COVID-19 deaths and the population of SARS-CoV-2 would be essentially zero worldwide. The economic damage would be minimal, and we would be going forward to a largely SARS-CoV-2-free world, just as we live in a SARS-CoV-1-free world.

  224. JCH says:

    China has a drug therapy in the works which appears to be exceedingly effective, and they just reported successful results from one of their Phase 2 vaccine trials in The Lancet.

    Herd immunity with no vaccine, and I have seen it in real life more than any of you, was/is a stupid plan. JFC.

  225. Joshua says:

    The other counterfactual: imagine no government-mandated shelter in place.

    More infected people wandering around. More supply chain interruptions. More over-burdened healthcare workers. More people unable to get surgery for a longer period of time.

    Prove that wouldn’t have happened. Hell, prove that it’s less likely than “things would have been better absent a lockdown.”

    Meanwhile, I have a fixed value that takes higher priority for me. We owe deference to the heroes who have put their lives on the line to make others safer and healthier, myself and Clive included.

    Ask them what they think. Value their input.

    Choose your favorite counterfactual. It’s your right. But don’t pretend you actually know what the fuck would have happened had things been different. And don’t discount what the heroes have to say.

    Thank your wife for me, Willard.

  226. Joshua says:

    JCH –

    Imagine a viable vaccine, manufactured and distributed, before the curves of “herd immunity” and government-mandated shelter in place would have equalized.

    Now imagine the openers responding when it is pointed out that the herd immunity approach cost lives through the faster drive to infect enough people to achieve herd immunity.

    Watch how they respond. We can ask Clive what he thinks if that happens.

  227. Willard says:

    God that makes me cringe:

    Meanwhile, American billionaires got $434 billion richer during the pandemic.

    That’s something like $1,200 per American.

    Not family.

    Person.

  228. Joshua says:

    Willard –

    > Meanwhile, American billionaires got $434 billion richer during the pandemic.

    Source?

    But draconian!

    But Tyrants!

    But Lysenko! Oh, wait, that’s climate change.

  229. Willard says:

  230. Everett F Sargent says:


    World = World – CN, RoW = World – (CN + EU + US) and SA = South America
    LHS = log-linear and RHS = linear (LHS and RHS show the same data; one scale is log, the other linear)

    Speaking of herd immunity, we just might see that occur (e.g. in SA) sooner rather than later. I’d also expect the totals for many 3rd-world nations (which includes my homeland, the US) to be gross underestimates, as the poorest people will go mostly unnoticed (numerically speaking). In other words, BAU. 😦

    I’d call them The Uncounted.

  231. izen says:

    @-Clive Best
    ” He needs an operation but everything else is now delayed indefinitely because of the coronavirus panic.”

    The NHS in the UK has a regrettable tendency to delay and postpone any elective treatment until it develops into an unavoidable emergency. Like Sweden, it has been subject to ‘austerity measures’.

    But just to present a hypothetical alternative: your cousin is admitted and gets his heart surgery while the hospital is dealing with an increasing number of COVID19 cases. Hospitals are notorious for cross-infection problems. He catches the coronavirus and requires ICU or treatment on a high-dependency ward when there is a shortage of beds. The fatality rate of his surgery gets multiplied by the risk of fatality from COVID19. There are reasons why hospitals close to all but emergencies when they have an outbreak of antibiotic-resistant staph infections.

    There is a judgement to be made about the trade-off between continuing with the ‘normal’ pattern of elective treatment when the risks are increased by an infection, and the risk of deterioration and acute conditions because of cancellations during a period of high cross-infection risk and a shortage of facilities.
    Such judgements are sub-optimal, often because the system is under-resourced for ‘normal’ operation, never mind a rapidly expanding infection with a 10% IFR for the elderly with comorbidities.

  232. Ben McMillan says:

    In the UK, the hospitals have indeed stopped doing any non-urgent procedures.

    This is because the COVID cases were forecast to occupy the full capacity of the healthcare system, and that was pretty close to the mark. Also, a substantial fraction of patients coming in without COVID were contracting it in hospitals. Now that there are somewhat fewer cases coming in, it looks as though hospitals will go back to more normal operation soonish.

    This is why the ‘official government slogan’ said ‘protect the NHS’. By squishing the epidemic, you prevent the epidemic from overwhelming the health care system, and prevent too many of the health care workers getting sick at the same time, and the NHS can get back to work sooner.

    What would have mitigated this problem significantly is starting policy/lockdown earlier (1 week means about 4x fewer beds occupied by COVID patients). If the lockdown were more effective (e.g. track+trace), that would also help shorten the time that the NHS is not able to offer the usual services.
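
    (Where the “about 4x” comes from, as a back-of-envelope sketch: with cases doubling every ~3.5 days – an assumed figure – a 7-day head start is two doublings:)

```python
# Back-of-envelope: how a one-week delay scales the epidemic, assuming simple
# exponential growth with a constant doubling time (3.5 days is an assumption).
doubling_time_days = 3.5
delay_days = 7

factor = 2 ** (delay_days / doubling_time_days)
print(f"a {delay_days}-day delay scales cases (and beds needed) by ~{factor:.0f}x")
```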

  233. Everett F Sargent says:

    As to the World, it just might have a 2nd peak, which is really just the 1st peak circling the globe, largely unchecked.

  234. izen says:

    That should be – Methicillin-resistant Staphylococcus aureus (MRSA) infection – above.
    It’s a reasonable parallel with how a hospital has to respond to any raised risk of cross-infection.

  235. Everett F Sargent says:

    “What would have mitigated this problem significantly is starting policy/lockdown earlier (1 week means about 4x fewer beds occupied by COVID patients).”

    Which is what I said almost a month ago (following JA’s lead) …
    “Except NYC started with a doubling time of ~ONE DAY! IMHO tens of thousands of lives could have been saved if NYC lockdowns had started 2-3 weeks before their actual 2020-03-22 lockdown.”
    https://bskiesresearch.wordpress.com/2020/04/20/5-day-doubling-and-the-great-covid-19-uncalibrated-modelling-fiasco/#comment-758

    The CU study used March 8th and March 1st which is almost exactly 2-3 weeks before the official NYC March 22 lockdown.
    https://www.medrxiv.org/content/10.1101/2020.05.15.20103655v1

    In hindsight, I made a lucky guess.

  236. Ben McMillan says:

    Izen: Sorry, I cross-posted. You covered most of this ground.

    Another thing starting measures a week earlier does is dramatically shorten the down-slope of the epidemic, because it takes about 3 times as long to go down as to go up (at least with a UK-style response).

  237. Everett F Sargent says:

    The infamous, and fugly, doubling time graph …

    World = World – CN, RoW = World – (CN + EU + US), SA = South America and BR = Brazil

    Of greatest concern is the RoW (black line), which has taken ~37 days to double its doubling time (from ~10.5 days to ~21 days, with the last 30 days looking almost frighteningly linear in log-linear space). In other words, the RoW currently has the lowest long-term doubling slope.

  238. I think I’ve managed to get the Imperial College code working so that I can check the impact of earlier lockdowns. I’ve essentially set the code to fit the numbers on 5 March, which is earlier than the start of all the interventions. I’ve also set the number on this date so that the cumulative deaths in the latest intervention roughly match what’s occurred.

    It looks pretty much the same as James got. Locking down a week earlier would have reduced the number of deaths by almost 30000.

  239. jamesannan says:

    Thanks ATTP that looks rather good. I’m sure you will be tweeting it 🙂

    Of course the long-term outcome is still highly uncertain. We may end up all getting it and 1% dying anyway, but even in that case we can try to manage it at a tolerable level.

  240. Just sent you a tweet 🙂

  241. Ben McMillan says:

    Seems to me this gives essentially the same answer as solving some simple coupled ODEs (i.e. a basic SEIR model), but without the benefit of being able to quickly see exactly what is going on. But I guess the point was to check that this complicated IC code is roughly equivalent to the very simple models.
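
    (For concreteness, here’s a minimal sketch of the simple version – an SEIR model whose R0 drops on a lockdown day. All parameter values are illustrative guesses, not the IC calibration:)

```python
# Minimal SEIR model with a crude "lockdown" (R0 drops on a trigger day).
# All numbers are illustrative guesses, not the IC calibration.
from scipy.integrate import solve_ivp

N = 66e6                      # UK-ish population
R0_pre, R0_post = 3.5, 0.8    # assumed R0 before/after lockdown
t_incub, t_inf = 5.0, 5.0     # assumed mean latent and infectious periods (days)
IFR = 0.01                    # assumed infection fatality rate

def seir(t, y, lockdown_day):
    S, E, I, R = y
    R0 = R0_post if t >= lockdown_day else R0_pre
    new_inf = (R0 / t_inf) * S * I / N
    return [-new_inf, new_inf - E / t_incub, E / t_incub - I / t_inf, I / t_inf]

y0 = [N - 10, 0, 10, 0]        # seed with 10 infectious people
for lockdown_day in (60, 53):  # baseline vs one week earlier (illustrative days)
    sol = solve_ivp(seir, (0, 365), y0, args=(lockdown_day,), max_step=0.5)
    deaths = IFR * sol.y[3, -1]   # IFR times everyone who was ever infected
    print(f"lockdown on day {lockdown_day}: ~{deaths:,.0f} deaths")
```

    (With these made-up numbers the week-earlier run ends up with several times fewer deaths; the exact ratio depends entirely on the assumed doubling time and post-lockdown R0.)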

  242. Ben,

    But I guess the point was to check that this complicated IC code is roughly equivalent to the very simple models.

    Yes, exactly. The Imperial College code does have lots of parameters and I still don’t have a good sense of how sensitive it is to these parameters (i.e., could I have got a wildly different answer if I’d made some small changes to some of the parameters). However, it does seem consistent with what the basic SEIR codes are suggesting, which does add some confidence that locking down a week earlier would have had a substantial impact (I also don’t think that this is all that surprising).

  243. Dave_Geologist says:

    ATTP, perhaps too much effort was made (in government committees as well as with the public) to educate people who don’t understand the exponential function on what the second term in A x 2^B does, and not enough into explaining what the first term does.

    In fairness most people didn’t take maths far enough to reach natural logarithms and the exponential function, whereas multiplication is part of the 3R’s. But people have blind spots about stuff they know as well as about things they think they know but don’t (“exponential means really really fast”). How many think you have a fixed amount of post-lockdown deaths and just add them onto the pre-lockdown deaths, so losing a week and having a few thousand more deaths is small beer compared to the tens of thousands of post-lockdown deaths? I suspect rather a lot. I also suspect that of those countries which instituted European-style lockdowns, the biggest impact on the death toll is at what point during virus spread lockdown was imposed, rather than how tight the rules were or how compliant the population.

  244. Everett F Sargent says:

    Dave_Geologist.

    I don’t believe you.

    Over hear wee haz Pheedoom Phiters an hay dont ned know bok learnin’. Oven hear Small Hands iz ann biznezzmen whoz nowz compond intrest lik itz waz nside hiz gutz.

  245. dhogaza says:

    Clive Best

    ” He needs an operation but everything else is now delayed indefinitely because of the coronavirus panic.”

    Be careful what you ask for, Clive.

    Izen and Ben McMillan outlined the potential dangers for you in the abstract. The dangers have nothing to do with “panic”.

    In the real world, ten days ago we lost an acquaintance in the UK to covid-19. He was checked into the hospital for non-elective surgery, caught covid-19, and while at first he appeared to be doing OK, after a week or so deteriorated and was transferred to the ICU, where he died two weeks later.

  246. Everett F Sargent says:

    Exponentiation explained so that even morans like Small Hands would (maybe) understand …

    “This exercise can be used to demonstrate how quickly exponential sequences grow, as well as to introduce exponents, zero power, capital-sigma notation and geometric series. Updated for modern times using pennies and the hypothetical question, “Would you rather have a million dollars or the sum of a penny doubled every day for a month?”, the formula has been used to explain compounded interest. (In this case, the total value of the resulting pennies would surpass two million dollars in February or ten million dollars in other months.)”
    https://en.wikipedia.org/wiki/Wheat_and_chessboard_problem

  247. Dave_Geologist says:

    It’s the starting value I was getting at too Everett. So in your example, do it again but put two grains on the first square. That’s several days delay in lockdown. Then do it with four grains on the first square. That’s a week or so delay in lockdown. Actually I had my own blind spot and the curve was already gently flattening with the hand-washing, self-isolation of symptomatic cases and initial social distancing. So allowing for time-lag, make it one week for two grains on the first square and two weeks for four grains.

    I suspect that if UK ministers think about the consequences of a week’s delay, they think in terms of the 74 deaths on March 23rd vs. the 1684 deaths in the following week. About 1600. Or if they’re clever and slip it by two or three weeks to allow for incubation time and sickness before death, the thousand or so per day we were getting at the peak. But some of those would have died anyway, so call it 5,000, and while regrettable, it’s on a par with an average flu season and better than a bad one.

    But most of the deaths are in the long slow decline from the peak, and are driven by the height of that peak. By the time we reached our peak, daily deaths were doubling about once a week, so a week’s delay makes the peak twice as high and two weeks’ four times as high. And all the plateau and decline deaths correspondingly double and quadruple. And they’re already three quarters of total deaths, and daily deaths are barely down to pre-lockdown levels. With most of the deaths post-lockdown, you can say to a first approximation that a week earlier would have halved deaths, and a fortnight earlier quartered them. So I have no problem finding 30,000 plausible. Of course I could download the code and do it the hard way, but this is a more mentally stimulating rainy-day activity 😉

    You can do it the delayed-action way, and do it well, but only if you go for a draconian lockdown and comprehensive test, track and trace, so you get R right down post peak infection and don’t have that long, slow decline.
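
    (The grains-on-the-first-square point as a toy calculation, assuming clean weekly doubling up to a fixed lockdown date – the prefactor carries through the whole tail:)

```python
# Toy version of "more grains on the first square": with a fixed lockdown date
# and weekly doubling beforehand, each extra week of delay doubles the
# prefactor A in A * 2^B, so peak, plateau and decline deaths all scale together.
for delay_weeks in (0, 1, 2):
    scale = 2 ** delay_weeks
    print(f"lockdown {delay_weeks} week(s) later: peak and total deaths x{scale}")
```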

  248. Clive Best says:

    @ATTP

    Did you update the change times for all the interventions, or just Social Distancing?

    If I remember correctly, case isolation and household quarantine had already been implemented about a week earlier. The biggest effect was Social Distancing (closing shops, pubs, work places, transport etc.)

  249. Clive,
    What I did was set them all to start at the same time. I’m, however, trying what you’re suggesting now (i.e., what’s the impact if some of the interventions start earlier).

  250. Willard says:

    Sometimes I wonder why we should write fiction:

    The source says that during the meeting, the attending government officials suggested that the UK will not implement strong restrictions on citizens’ movements – of the kind seen in China and Italy – and is instead aiming at “flattening the curve” of the contagion, staggering the number of cases over time in order to avoid overwhelming the hospitals.

    That seems to chime with the strategy outlined two days ago by the head of the government-owned Behavioural Insights Team, David Halpern, who said the government would be “cocooning” vulnerable patients while the general population attains “herd immunity.” The Department for Health and Social care did not respond to WIRED’s questions about whether aiming for “herd immunity” is official government policy.

    When asked about the meeting, a Number 10 spokesperson said that tech companies had been invited to discuss what they could do to help model and track the disease and the impact of government interventions.

    https://www.wired.co.uk/article/dominic-cummings-coronavirus-big-tech

    That was on March 12.

    I don’t always want to flatten a curve, but when I do I simply say “we need to flatten the curve” and cocoon.

  251. Ben McMillan says:

    Hmm, I guess the other one is, if the UK had managed to catch half the incoming cases before they stepped off the plane (and ideally before they got onto the plane) and taken other quarantine measures, it is plausible they could have cut the number of seed infections by a factor of two. That should also reduce the epidemic peak by a factor of 2.

    But great news, everyone: quarantine for incomers will now be introduced in the UK! I’m finding the media coverage of that bizarre. Surely the obvious response is ‘seriously, are you joking introducing this now rather than in Feb/March?’, but the BBC coverage didn’t take that angle. People are talking as if the UK is a low-risk country that could allow no-quarantine travel with other low-risk countries in Summer. France has kindly introduced a reciprocal 2 week quarantine period, even though they weren’t consulted.

    The change to infection load made by even small timing changes to quarantine procedures and other interventions is massive: this is a big part of the reason that the epidemic is orders of magnitude lower in some countries than others.

    But this was all clear to the Sage committee at the time: the problem was the initial ‘flatten the curve’ strategy, which appears to be herd immunity in all but name, and meant that the UK didn’t even really try to suppress. It took them a while to realise that killing off a few hundred thousand might not be politically viable.

  252. Willard says:

    > the BBC coverage didn’t take that angle

    The Beeb proves once again to be a leftist megaphone:

  253. Steven Mosher says:

    “In the real world, ten days ago we lost an acquaintance in the UK to covid-19. He was checked into the hospital for non-elective surgery, caught covid-19, and while at first he appeared to be doing OK, after a week or so deteriorated and was transferred to the ICU, where he died two weeks later.”

    When I first got to Korea I also got a normal pneumonia vaccine.
    No way I wanted to come down with something normal and have to go to the hospital.

    The spread in hospitals is pretty well documented here in SK. I don’t know why they don’t write more papers. Every cluster is pretty detailed: number of staff, number of patients, visitors, and cases from contacts with those.

    I imagine those numbers – secondary and tertiary attack rates – would be important in calibrating models.

  254. Steven Mosher says:

    “ATTP, perhaps too much effort was made (in government committees as well as with the public) to educate people who don’t understand the exponential function on what the second term in A x 2^B does, and not enough into explaining what the first term does.”

    Reminds me of my experience on WUWT in the early days when folks were arguing that there were
    only 68 cases and 0 deaths.

  255. Willard says:

  256. Steven Mosher says:

    “The source says that during the meeting, the attending government officials suggested that the UK will not implement strong restrictions on citizens’ movements – of the kind seen in China and Italy – and is instead aiming at “flattening the curve” of the contagion, staggering the number of cases over time in order to avoid overwhelming the hospitals.”

    China was uniquely positioned and equipped to do a hard lockdown.
    First I want you all to see something

    Before I could grab this data from my Chinese sources, the data was removed. But trust this:
    the chart is correct.

    What you will see is something quite remarkable.

    On Jan 23rd Wuhan was physically closed. Flights ended, trains, cars, everything.
    So there were seeds that spread out before that, but nothing after that.
    EVERY CITY in China followed the same path (with 3 exceptions).
    Curve flattened in 15 to 20 days. (The 3 exceptions had imported cases.)

    And that is exactly what you expect given the details of the disease.

    So what was unique about China?

    1. Jan 24th was the start of CNY, which meant BUSINESS SHUTS DOWN, not all business but
    the vast majority. Usually people return “home” like salmon, but in this CNY they just hunkered
    down in place.
    2. Staying inside: the approach to housing in many areas makes monitoring and control
    pretty easy. There are large living compounds, think 30-story apartment buildings,
    dozens of them in gated communities. Ordinarily the guards just let you come and go.
    But if they want to they can demand an ID and see if you belong there. Heck, your ID will
    tell the guard where you are allowed to live, go to school, etc.
    These guards and the local party members were immediately put into service.
    A) checking people.
    B) helping the elderly.
    As an American travelling around China I always kinda chuckled at these guards that
    patrol the communities. Excess labor. Kinda like the road workers in the USA who stand
    there watching the 1 guy who digs. But that labor pool was ready to be activated to
    enforce the rules.
    3. Delivery: Normally I hate these guys

    they clog up the sidewalks and roads. Along with the motorcycle guys. But they were
    key to surviving those initial 20 days

    4. Wide testing coverage

    anyway its almost June and some people still dont get it

  257. Steven Mosher says:

    9K new cases a day for UK

  258. Everett F Sargent says:

    “anyway its almost June and some people still dont get it”

    Well we know of one individual that will never get it. COVID-19 that is. And to them that is all that really matters.

    Oh wait, you meant testing strategy. Same person. Same problem. Two words: Small Hands.

    Willard, that NYT tweet, what a hoot, the visuals of Small Hands, as golfer, as demented, as reaper, as personality disorders … gallows humor at its finest.

  259. Steven Mosher says:

    “Good vid.”

    I haven’t been exactly honest when I said I did no charts.
    Early on I did one chart.
    X axis: tests per million.
    Y axis: positivity rate.

    What mattered were the outliers on this chart.
    Low penetration rate of testing and high positivity (not testing enough).
    Low penetration and low positivity (looking the other way).
    High penetration and high positivity (overload; this was NY for the longest time).

    Watching the testing numbers in Korea daily, it was pretty clear: for every person who
    tested positive they were finding an extra 50-100 people to test. Positivity rates were consistently
    low, while penetration was good.

    And the numbers were consistent with the typical number of contacts per case.

    When I look at the metrics people are using for reopening, I’m a little concerned that they are looking at the wrong thing. Like deaths, which is a lagging indicator. Deaths is what you get for last month’s fuck-up. Deaths is also demographically dependent. They also measure silly things, like number of tracers. Tracers per 100K is a metric for reopening in the US.

    That is a HEAD COUNT approach, which means you can hit the metric by hiring, NOT BY ACTUALLY DOING THE JOB. Really bad from an operations-excellence standpoint. You don’t care about the heads, you care about the output.

    So you want to meter your system by 1) contacts traced, 2) contacts actually tested, and 3) CLOSURE of epidemiological links. Korea closes 80-90% of cases. They report this metric. They work to improve this metric. I can’t find how many tracers they have, because the number doesn’t matter. You can only improve what you measure. They don’t report head count, they report cases resolved. If that takes 30 tracers per 100K, fine; 15, fine; 300, fine. Head count is not some magical number. They focus on case closure.

    Closure means: dude tests positive, you tie him to PREVIOUS CASES 80% of the time.
    Not sure if there is a magic number, but at 80% they are successful. So you measure THAT,
    not head count.

    So the CDC has given metrics that really don’t allow you to build and monitor and improve
    a “test, trace, and isolation MACHINE”, which is what you need for containment.
    They look to be instrumenting a process for mitigation. Control deaths, control ICU, etc.,
    rather than building a machine to contain the spread.

    Still reactive.
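
    (A toy sketch of that chart’s quadrant logic, for anyone who wants to play; the country figures and cut-offs below are invented for illustration:)

```python
# Toy version of the tests-per-million vs positivity chart: the interesting
# countries are the outliers in each corner. All numbers below are invented.
countries = {
    "A": (2_000, 0.25),    # low penetration, high positivity: not testing enough
    "B": (2_000, 0.01),    # low penetration, low positivity: looking the other way
    "C": (40_000, 0.20),   # high penetration, high positivity: overload (NY-like)
    "D": (40_000, 0.02),   # high penetration, low positivity: containment working
}

PEN_CUT, POS_CUT = 10_000, 0.05   # assumed cut-offs between quadrants

for name, (tests_per_million, positivity) in countries.items():
    quadrant = (("high" if tests_per_million >= PEN_CUT else "low") +
                " penetration / " +
                ("high" if positivity >= POS_CUT else "low") + " positivity")
    print(f"country {name}: {quadrant}")
```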

  260. Willard says:

    Personal communication with Eric Winsberg (who’s not happy with the whole ordeal) made me find this:

  261. Steven Mosher says:

    I would think that with data on how many people are in ICU, and knowledge of the harvesting rate,
    weekly deaths ought to be easy to forecast.

    count deBody

  262. Steven Mosher says:

    Kinda off topic, but if you are bored

    From the director of Parasite

  263. izen says:

    The UK quarantine rules start on the 8th of June.
    On the same morning (23rd May) that the plans were announced the news also had drone footage of mass graves being dug in Sao Paulo, Brazil.
    On the same morning the flightradar site showed a plane arriving from Sao Paulo into Heathrow…

  264. izen says:

    There are now places in the US refusing to admit people WEARING a mask.
    This seems deserved….

  265. Ben McMillan says:

    In a way the public health people have it even worse than the climate people, because their work is all about tail risks that happen extremely infrequently (once per century), so the control measures are almost always excessive in hindsight. But not always.

    At least a fair fraction of the climate stuff is gradual and observable.

  266. Willard says:

    The NYT issued a correction:

  267. Some updates. I’ve rerun the IC code and set the R0 to be 3.5 (to better match the initial phase of the infection). I’ve also changed it to set the number of deaths at day 74, rather than day 65, and – consequently – am only considering the case where interventions were to start a week earlier. Results essentially the same.

    I’m also now doing both runs off the same build, which fixes the issue with the initial phases not being the same.

    I’ve also done a run where some of the interventions start earlier (CI and HQ), rather than them all starting on March 23. This slightly reduces the impact of the late start, but not by a huge amount.

  268. dhogaza says:

    ATTP

    Good stuff.

    “I’m also now doing both runs off the same build”

    Does this mean you’re generating a network, then reusing it? Curious because network generation was the cause of the supposed “non-determinism” of the model. The code didn’t bother serializing network generation to the point where the same seed will generate the exact same network (though the network will satisfy the parameterized statistical properties). Apparently guaranteeing that the same network will be generated from the same seed makes it run too slowly, and for production they generate one network and use it repeatedly for different intervention scenarios anyway.
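
    (In outline, that production pattern looks something like the sketch below – Python standing in for the model’s actual C/C++, with the network construction grossly simplified:)

```python
# Sketch of "build one network, reuse it for every scenario": the network is
# generated once from a seeded RNG and serialized, so each intervention run
# sees the identical contact structure. Grossly simplified vs the IC model.
import pickle
import numpy as np

def build_network(n_people, mean_contacts, seed):
    rng = np.random.default_rng(seed)
    # crude random contact lists standing in for household/school/work layers
    return [rng.integers(0, n_people, size=rng.poisson(mean_contacts))
            for _ in range(n_people)]

network = build_network(n_people=10_000, mean_contacts=10, seed=42)
with open("network.pkl", "wb") as f:
    pickle.dump(network, f)

# every intervention scenario then reloads the identical network:
with open("network.pkl", "rb") as f:
    shared_network = pickle.load(f)
```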

    Also, the jump in the actual data at around day 95 … is this when the UK began reporting out-of-hospital deaths? At least that’s my interpretation of what James Annan was talking about on April 29th:

  269. dhogaza says:

    ATTP

    No, April 29th was well after the time of that little jump in your plotted black real data points.

  270. dhogaza,
    I think it does generate a network and then reuse it.

    The jump is because I’ve misread some of the lines in the data files. I’m fixing that now.

  271. Willard says:

    ICUs matter:

    The new coronavirus is believed to be spreading throughout Yemen, where the health-care system “has in effect collapsed,” the United Nations said on Friday, appealing for urgent funding.

    “Aid agencies in Yemen are operating on the basis that community transmission is taking place across the country,” Jens Laerke, spokesman for the UN Office for the Coordination of Humanitarian Affairs (OCHA), told a Geneva briefing.

    “We hear from many of them that Yemen is really on the brink right now. The situation is extremely alarming, they are talking about that the health system has in effect collapsed,” he said.

    Aid workers report having to turn people away because they do not have enough medical oxygen or sufficient supplies of personal protective equipment, Laerke said.

    https://www.cbc.ca/news/world/yemen-coronavirus-health-system-1.5579982

  272. Okay, fixed the problem with the data and have rerun the model.

  273. dhogaza says:

    ATTP

    That’s very nice.

  274. Clive Best says:

    @ATTP

    Here are my results (linear scale). It looks like you fixed the timing issue – just by adjusting R0?

  275. Clive,
    In the preUK_R0=2.0.txt file, there are parameters called “Number of deaths to accumulate before alert” and “Day of year trigger is reached”. As far as I can tell, these essentially set a target number of deaths to occur by that day (although it doesn’t seem to be exact, and I may not fully understand this). At the moment, this is set for day 100, which is probably why all your runs seem to coincide on that day. This probably means that your week earlier has more deaths than it should have (i.e., the model is forcing the infection to start earlier), while your 1 week later may have too few (the model is forcing it to start later).

    What I’ve done is vary the number so that the baseline intervention case roughly matches (by eye) the data, and then used that for all the other runs. I’m also now running them all off the same build (I’m just creating new parameter files and increasing the number of elements in the root array in run_sample.py).

  276. I should have added that I’ve shifted the day from day 100 to day 74, and the number of deaths from 10000 to around 35 (IIRC) for R0 = 3.5.
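
    (In case anyone wants to reproduce this: a small script can generate the parameter-file variants. A sketch, assuming each bracketed parameter name is followed by its value on the next line – do check your copy’s layout before trusting this:)

```python
# Sketch: write a variant of preUK_R0=2.0.txt with a different calibration
# target. Assumes a "[Parameter name]" header line with the value on the next
# line; adjust if the file layout in your copy differs.
def set_param(lines, name, value):
    out = list(lines)
    for i, line in enumerate(out):
        if line.strip() == f"[{name}]":
            out[i + 1] = f"{value}\n"
            return out
    raise KeyError(f"parameter not found: {name}")

with open("preUK_R0=2.0.txt") as f:
    lines = f.readlines()

lines = set_param(lines, "Number of deaths to accumulate before alert", 35)
lines = set_param(lines, "Day of year trigger is reached", 74)

with open("preUK_R0=2.0_day74.txt", "w") as f:
    f.writelines(lines)
```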

  277. Clive Best says:

    Thanks!

    I wasted a few hours trying to fine-tune the change-day parameters for each intervention in p_PC7_CI_HQ_SD, but they are simply dependent on preUK_R0=2.0.txt.

  278. Steven Mosher says:

    Gay clubs in Itaewon, cluster climbs to over 200.
    Reading about the chains of transmission is fascinating: club to restaurant to taxi driver, etc.,
    chains 6 deep.

    Now, because club goers gave false info to club owners, the city will
    get a new system

  279. Clive Best says:

    @ATTP

    I think that the target value of 10,000 deaths on day 100 is because this corresponds to the real number of deaths as recorded on 10th April.

    In other words, Ferguson seems to be calibrating the model so as to agree with this figure on that date. So when we change the lockdown start date all results agree on April 10 (my birthday).

  280. Pingback: Did the UK lockdown too late ? | Clive Best

  281. Clive,
    Yes, I think that’s right. If you leave the target date as day 100 (April 10) then they will all agree on that date, irrespective of the date on which interventions start. This can then be unrealistic, because earlier interventions may not reach 10000 deaths, and so to match this target date, the model will then assume that the infection starts earlier when the interventions are earlier.

    However, in their Report 9 they claim to have used a target date of March 14, so the target date of April 10 in the public version may not be the same as what they used in their first report.

  282. jamesannan says:

    My prediction is that they don’t want to release the code/parameters for the real, uncalibrated runs that were shown to SAGE.

  283. Clive Best says:

    On March 14th (day 74) the UK had 24 deaths.

  284. Clive,
    Yes, but I don’t think that parameter sets an exact number. Also, I’m trying to set it so as to best fit the overall data (my simulation with interventions starting on 23 March had 31 deaths on day 74). It is stochastic, so I should probably be running an ensemble of simulations, but our cluster is about to go down, so I probably can’t do any more at the moment. Also, given all the ones I have run, it doesn’t seem to change the overall comparison between the two scenarios much (i.e., it always seems to end up being that starting a week earlier would have reduced the number of deaths by ~75%, bearing in mind that this assumes that we do nothing before implementing lockdown).
