It’s mostly about risk

I wanted to post this video (see end of post), which I first came across in this comment (H/T Pehr Björnbom). It’s a few years old, so some things may have changed, but it’s still mostly relevant.

It’s a discussion between Kerry Emanuel (Professor of Atmospheric Science at MIT) and John Christy (Distinguished Professor of Atmospheric Science at the University of Alabama, Huntsville), moderated by Russ Roberts.

John Christy promoted many of what I would normally call “skeptic” themes: fossil fuels are, and will continue to be, the most reliable and economically viable energy source; the climate has always changed and there is nothing special about today’s changes; models have projected much more warming than has been observed; and we can’t tell how much of the recently observed warming is natural, and how much is anthropogenic.

Without a carbon tax, his first point may well end up being right. However, the same can’t be said for the rest of what he promoted. The climate has indeed always changed, but studying these past changes has played a key role in understanding what’s causing it to change today (mostly us). When comparisons are done carefully, and we make sure that they are really like-for-like, models actually compare well with observations. We actually can disentangle how much of the observed warming is natural, and how much is anthropogenic. The best estimate is that we’re responsible for slightly more than all of it.

Kerry Emanuel highlighted that the basics have been well understood since the 19th century, and that a key thing is that this is mostly about risk. We are taking a risk with our climate by emitting CO2 into the atmosphere; doing so will change our climate, these changes could be substantial, and the impacts of these changes could be severe, potentially catastrophic. Of course, there are also risks associated with what we might do to address this. Hence, this is not simple and we should think about it rationally.

What I found quite interesting was how Kerry Emanuel approached the discussion. It was very measured and thoughtful, and he often broadly agreed with what John Christy was saying. Observations are not perfect, climate models do have problems, there is a lot of uncertainty, etc. However, he kept going back to the basics and highlighted that even though we can’t be certain as to what will happen, we are still taking a risk.

I partly thought that this was quite good as it came across well, but I wondered how it would be perceived by a more neutral observer, or by those who are already doubtful. It’s possible that they would walk away thinking that the doubts are quite justified and that maybe there isn’t really any reason to do anything just yet. On the other hand, I don’t really know how else to approach this kind of discussion. Being more confrontational may well come across poorly and be ineffective. I think it mostly highlights how difficult these kinds of discussions can be. Anyway, I’ve said more than enough. Video below.


147 Responses to It’s mostly about risk

  1. Something I thought I would add is that there are really two main factors that will determine how much we change our climate and, consequently, the severity of the impacts. These are climate sensitivity (which is still somewhat uncertain) and how much we emit into the atmosphere. It would be nice if we could better constrain climate sensitivity, but we haven’t really succeeded in doing so.

    So, climate sensitivity could be low enough that we’d need to emit a lot of CO2 into the atmosphere for the impacts to be very severe. However, it could also be high enough that the impacts will be severe even if we don’t emit as much as we possibly could. Similarly, we could emit enough that even if climate sensitivity is low, the impacts will still be severe.

    Given that we can’t really do anything about climate sensitivity, we tend to focus on emissions; it’s one thing over which we do have some control. So, if we think that we should do something to mitigate the risk associated with climate change, then all we can really do is try to emit less CO2 into the atmosphere than we possibly could (okay, we could also consider geo-engineering, but that also carries risks). How we do this, and how much less, are questions that don’t have easy answers, but there is still quite a simple reason as to why we typically think that addressing climate change will require emission reductions.
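
    To make this a bit more concrete, below is a deliberately crude, illustrative sketch (not a projection) of how the two factors interact. It uses a TCRE-like parameter (warming per 1000 GtC of cumulative emissions) as a stand-in for sensitivity, and all of the numbers are assumptions chosen only to show the shape of the dependence.

```python
# Illustrative only: warming scales roughly with cumulative carbon emissions,
# with a TCRE-like parameter playing the role of "sensitivity".
# The parameter values and emission totals below are assumptions, not projections.

def approx_warming(cumulative_emissions_gtc, tcre_per_1000_gtc):
    """Very rough warming estimate: TCRE (degC per 1000 GtC) times cumulative emissions."""
    return tcre_per_1000_gtc * cumulative_emissions_gtc / 1000.0

for tcre in (0.8, 1.6, 2.5):              # low / central / high "sensitivity"
    for emitted in (500, 1000, 2000):     # hypothetical cumulative emissions (GtC)
        print(f"TCRE {tcre} degC per 1000 GtC, {emitted} GtC emitted"
              f" -> ~{approx_warming(emitted, tcre):.1f} degC")
```

    The point is simply that either factor on its own can push the outcome into uncomfortable territory, which is why attention tends to fall on the one we can actually influence.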

  2. Michael Lloyd says:

    I’ve just listened to the speakers’ introductions. Interesting that John Christy started by saying it was a moral issue and that burning fossil fuels was a good thing.

    I am in agreement that there are more issues at stake here than focusing on climate and emissions reduction and welcome any discussion that moves out of narrow compartmentalising.

    It is important to state that fossil fuels are non-renewable (except over geological timescales) and that there will come a time when extracting fossil fuels will become uneconomic. The uncertainty here is when and there are studies on just this question.

    We will need fossil fuels to build a renewable energy infrastructure and hopefully such an infrastructure will be sustainable by renewable energy alone.

    We should start sooner rather than later and perhaps it would not be moral to encourage a fossil fuel infrastructure on those who currently don’t have it.

  3. Michael,
    Indeed, there will come a time when we will need to be developing, and implementing, alternatives. I do find it interesting that those who seem to think that we’re so innovative that we can find ways to deal with the impacts of climate change often seem reluctant to promote the idea that we use the same innovative nature to develop, and implement, alternatives ahead of when we might need to if the only constraint were the availability of fossil fuels.

  4. Leto says:

    “there will come a time when extracting fossil fuels will become uneconomic.”

    Yes. As soon as the accounting of all the costs is fair and complete.

  5. HAS says:

    aTTP, couldn’t disagree with that. I’d just basically repeat what I said in partial response to the video when it was posted at Climate etc.

    …. there are better ways to deal with the uncertainty than to act regardless, particularly when the uncertainty evolves over time and acting to avoid carries a high cost. The issue shouldn’t be about an imperative to do everything possible now (and ignore the consequences) which is where the [precautionary principle] takes you if you aren’t careful. Rather the issue is what should we be doing when.

    This leads to investing today into the things that will buy you options in the future, should the evidence start to roll in that things are going awry. Significant investment in R&D would be a classic example. Relatively low cost, you’ll wish you’d done it if things go pear shaped, and through potential spill-overs you probably won’t regret it too much if you got it wrong.

    In retrospect I’d add that investing in shining light on the uncertainty falls into the same category.

  6. Marco says:

    “This leads to investing today into the things that will buy you options in the future, should the evidence start to roll in that things are going awry.”

    …and then you hope we can roll out the proposed solutions fast enough in a system where you know there is inertia, meaning that your (late) actions will not have any truly positive effects until many years (decades) later.

    Of course, the evidence is *already in* that things are going awry.

    Someone may need to be reminded of the ozone depletion effect of CFCs: we can expect the ozone layer to have recovered to 1980s levels in 50 years from now, despite having taken action already 30 years ago. But taking action was so costly, and it was so uncertain CFCs were involved! To quote Heckert, chair of DuPont in 1988, when the Montreal Protocol was actually already in place: “we will not produce a product unless it can be made, used, handled and disposed of safely and consistent with appropriate safety, health and environmental quality criteria. At the moment, scientific evidence does not point to the need for dramatic CFC emission reductions. There is no available measure of the contribution of CFCs to any observed ozone change”.
    I don’t even want to think about what would have happened if the obfuscationists had had their way and we had waited and just done more research. Fortunately DuPont had a few scientists in-house that they trusted – and equally fortunately, DuPont didn’t financially depend on CFCs.

  7. Hyperactive Hydrologist says:

    aTTP,

    What evidence is there that low climate sensitivity would result in low impacts? Given the uncertainty in climate projections, there could be a scenario whereby relatively small increases in GMST lead to significant changes in the probability and magnitude of extreme events. This could be caused by a broader destabilisation of the current atmospheric circulation patterns as a result of rapid Arctic sea ice loss, for example.

    There has always been an assumption that low sensitivity = low impacts but what is the evidence for this?

  8. Magma says:

    As far as I’m concerned Christy’s religious biases (creationism and certain aspects of Christian dominionism) have swamped whatever scientific competence he may possess. (I won’t comment on the latter.) Personally, I like a medical analogy for climate change risk assessment.

    “Sure, maybe 19 oncologists out of 20 would recommend urgent surgery in your case, but that’s expensive and people have died from post-op complications. I’ve been a GP for 50 years, and I say let’s just take it easy and see what happens.”

  9. Sceptical Wombat says:

    I only got through a small part of the video but I thought that Christy got away with quite a lot. He is obviously referring to his satellite measurements of tropospheric temperatures and his claim that this is quite straightforward is clearly wrong. He also argues that surface instrumentation changes, without acknowledging that satellites and their instrumentation also change, and that there is less scope for calibrating the new against the old. Not to mention the cooling stratosphere and the drifting satellites.

  10. Do you think enough attention has been focused on risks pertaining to the proposed strategies for combating anthropogenic contributions to climate change?

    To take just one example, I have seen much discussion of the social cost of carbon. I have seen none regarding the social cost of removing carbon.

  11. I have seen none regarding the social cost of removing carbon.

    What do you mean by “the social cost of removing carbon”?

  12. Ragnaar says:

    Assume we agree on everything except the risk part. The insurance agent gets in the middle. Between you and the risk. And gets paid. I get between my clients and the IRS, and get paid. So from this we learn, the way to get paid is to get into the middle.

    I’ve seen commercials where I can buy insurance for my A/C getting broke, or my truck getting broke. I pass. Yes, I consider those middlemen as possibly being exploitative. By not paying a premium to them, I have more money to pay for my stuff getting broke. In the long run, I should come out ahead, as the middleman is not taking 10% of the handle.

    Self insurance is underrated. I have seen the rise of Health Savings Accounts and High Deductible Health Plans (HDHPs). I think they are a positive development with the difficult problem of healthcare in the United States. A HDHP is a form of self insurance. You don’t buy insurance to get your oil changed. Why buy insurance to get a medical check-up? When you do, the insurance company gets in the middle.

    The rich have the advantage of being more able to self insure in general. Sometimes the poor self insure as they have no money to pay the premiums.

    Everyone must have insurance. This is part of the divide. Conservatives say no and call you a commie. You parade victims and say this is the result of you not paying enough in taxes. Because you have a solution to risk.

    Here’s some money so you can put solar panels on your roof. We call this insurance. We’ll take money from other people and give it to you, and the planet will be better. It’s like giving you a stop smoking kit like people got in Minnesota when some shysters sued big tobacco. The insurance companies sued big tobacco because insurance companies were too stupid to know that smoking was bad. And the liberals cheered.

    How do we do insurance? Ask the shysters. I have two clients who are workers comp attorneys. They sue for the workers. Both are in my top 10 of most income earned.

  13. A social cost for remediation of other pollutants is both described in theory and observed in practice. For example, when China rejects shiploads of electronic trash that it used to accept, it is good for China, good for the environment and an overall gain to society. But there is a real cost to the armies of poor Chinese who used to scratch out a living by picking through this trash.

    When the UK government re-aligns incentives for utility companies, allowing them to pass capital costs for renewable installations to their customers, it may be good for the UK. It is certainly good for the utilities. But the increased cost to the consumer contributes to energy poverty at the margins.

    When USAID and the UK’s DFID bluntly tell the World Bank not to loan money for the construction of a coal plant in South Africa, it reduces global emissions of CO2 and other pollutants. It also retards development amongst a population that sorely needs it.

    History gives us numerous examples of progress having costs and consequences and many historians have noted and lamented those costs. Now in the age of spreadsheets and ‘AI’, we can more easily calculate the value at risk, the potential gains from addressing that risk, but also the costs of addressing that risk and use the results of those calculations as a part of policy formulation.

    So why wouldn’t we? Why haven’t we?

  14. Joshua says:

    Anders –

    I watched a good portion of that video also, having seen it over at Judith’s. I very much liked Kerry’s framing: in particular his care in constructing the framework of overlapping domains of risk in the face of uncertainty (uncertainty w/r/t impact from emissions and uncertainty w/r/t economic outcomes from action to mitigate emissions). I recall quite a while back seeing some other interesting stuff from Kerry where he talked about the problem of confronting low probability, high damage function tails of risk – and being similarly impressed with his approach.

    You say:

    I partly thought that this was quite good as it came across well, but I wondered how it would be perceived by a more neutral observer, or by those who are already doubtful. It’s possible that they would walk away thinking that the doubts are quite justified and that maybe there isn’t really any reason to do anything just yet. On the other hand, I don’t really know how else to approach this kind of discussion.

    My own belief is that there may not be a less sub-optimal way to approach this discussion. We can’t control, necessarily, whether people might walk away with doubts about taking immediate action, IMO – because (1) in fact, uncertainties do exist, and, (2) we can’t exercise control over how people approach risk in the face of uncertainty. Thinking that you can strong-arm away either of those realities is likely, in the end, to be more sub-optimal.

    Being more confrontational may well come across poorly and be ineffective.

    I don’t think that the main problem with such an approach is that it comes across poorly. I think that the main problem with such an approach is that it isn’t likely to be effective. Being confrontational does not target, directly, the uncertainties or the problematic aspects of how people approach risk. You can’t deal with those issues directly by ignoring them or trying to strong-arm them away, IMO.

    I think it mostly highlights how difficult these kinds of discussions can be.

    Yup.

  15. Joshua says:

    HAS –

    …. there are better ways to deal with the uncertainty than to act regardless, particularly when the uncertainty evolves over time and acting to avoid carries a high cost.

    I notice that you are suggesting a form of acting “regardless” in characterizing action as necessarily carrying a high cost. In doing so, you aren’t dealing with uncertainty.

    The issue shouldn’t be about an imperative to do everything possible now (and ignore the consequences) which is where the [precautionary principle] takes you if you aren’t careful.

    We could create the same one-sided portrayal of the otter side – where we find people advocating that nothing should be done, based essentially on a precautionary principle (say, the law of unintended consequences resulting from government action to implement mitigation). Such a position basically ignores inconvenient uncertainties.

    A big part of the problem, IMO, is that some people are more interested in characterizing otters than actually negotiating the uncertainties.

    Rather the issue is what should we be doing when.

    Hmmm. I agree that in the end, the issue is what we should be doing when. But perhaps a more immediate issue is what is preventing us from getting to addressing the issue of what we should be doing, when.

    This leads to investing today into the things that will buy you options in the future, should the evidence start to roll in that things are going awry.

    And here you ignore uncertainties.

    Significant investment in R&D would be a classic example. Relatively low cost, you’ll wish you’d done it if things go pear shaped, and through potential spill-overs you probably won’t regret it too much if you got it wrong.

    And again.

  16. Thomas,
    Except, I think that that is essentially taken into account when estimating the social cost of carbon. Ideally we should include all the costs associated with generating energy (from whatever source). Then, we can balance the benefit of using that energy against the cost of generating that energy, and we would presumably use the cheapest form of energy. That’s really all that a social cost of carbon is trying to do; it’s trying to estimate all the future costs associated with emitting CO2 into the atmosphere so that, today, we pay the full cost, rather than passing some of these costs onto future generations (technically, I’d argue that we still do this, but by including this cost we’d be doing so in a more optimal way).
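
    As a minimal numerical sketch of that accounting idea (the damage stream and the discount rates below are made up for illustration, and this is not an actual social cost of carbon estimate), the cost attached to a tonne emitted today is roughly the discounted sum of the future damages attributed to it.

```python
# Toy illustration of social-cost-of-carbon style accounting: sum the stream of
# future damages attributed to a tonne of CO2 emitted today, discounted back to
# the present. The damage stream and the discount rates are made up.

def present_value(annual_damages, discount_rate):
    """Discounted sum of a list of future annual damages (years 1, 2, ...)."""
    return sum(d / (1.0 + discount_rate) ** t
               for t, d in enumerate(annual_damages, start=1))

damages = [1.0] * 100   # hypothetical: $1/year of damage per tonne, for a century
for rate in (0.01, 0.03, 0.05):
    print(f"discount rate {rate:.0%}: ~${present_value(damages, rate):.0f} per tonne")
```

    Mostly this shows why the choice of discount rate, i.e. how much weight we give to costs borne by future generations, does so much of the work in these estimates.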

  17. Joshua,

    My own belief is that there may not be a less sub-optimal way to approach this discussion.

    I’m not sure if you mean less optimal or more optimal, but if you mean that there probably isn’t a better way to do this, then that may be true. I do think that Kerry Emanuel could have pushed back a bit harder against some of what John Christy said (all models are running too hot, for example) but that’s just my view.

  18. JCH says:

    Health Savings Plans are idiotic. My wife and I are in the upper 5%, and the federal government is helping us pay for fancy eyeglass frames, expensive lenses, and bolt-on teeth. Maybe they get it back somewhere, but with the deficits we have how can that be argued? We’re rich; Trump gives us welfare. It’s freakin’ nuts.

  19. Marco says:

    ATTP, it was 2014, right when the models did ‘worst’. Three years later, however…

  20. Marco,
    Good point. Yes, quite a lot has changed in that respect in the last 4 years or so.

  21. RICKA says:

    Nuclear power is a replacement for fossil fuel power. It emits very little carbon, will last quite a while and is technically available right now.

    Since we have to run out of fossil fuels someday, it is surprising how much resistance there is to building lots and lots of nuclear power plants right now.

    A bit irrational if you ask me.

    The problem of nuclear waste is much smaller than the problem of CO2 emissions (in my opinion).

    If we were really worried about CO2 emissions, we could use thorium for reactors in 3rd world countries we do not trust (like Yemen or Syria).

    Or we could supply small scale reactors which could be swapped out every 30 years for a fresh one, with self-contained waste.

    But instead of focusing on nuclear power technology, greens want 100% renewable, or some other fantasy.

    Talk about not letting the perfect be the enemy of the good!

    Nuclear works.
    It provides 20% of our commercial power now.
    It could easily provide 80% of our power, or even 100% (if we wanted).
    Yet new reactors are very rare (except for military vessels).

    I suspect we will get there eventually.

    But fear of the invisible radiation is quite powerful (except people are not afraid of the sun).

  22. Professors Emanuel and even Professor John Marshall, both of MIT, are non-alarmists in the sense that while they believe there is a risk of pronounced if not abrupt change, there is no evidence for it as yet. That said, they are both theorists and modellers, and they have both published excellent work which explains many features of the atmosphere and oceans as observed in the large. Still, I wonder about a few things:

    (a) There are limits to the explanations, e.g., synoptic scale and mesoscale oceanic eddies, which is where the frontier of their science is.

    (b) Their science does not extend directly to the behavior of viscoelastic flow in ice sheets.

    (c) Is there some diagnostic in their explanations which would signal if climate has dramatically shifted to a new set of meteorological rules which makes historically applicable rules-of-thumb and explanations less successfully predictive?

    In a recent Lorenz-Charney Symposium, a panel discussion in which they participated also left aside the relevance of machine learning and data science work to their field (where it has seen some successful applicability), which set off alarm bells ringing in my head. I’m no geophysicist, or fluid dynamicist, but I know a bit about numerical modeling and the increasingly impressive string of successes using these methods in computational biology, including neurosciences, and even in explaining nonlinear physical phenomena. To disregard the relevance of such results in a technical forum, in such a cavalier manner, smells to me of scientific ossification.

    The frontier of oceanography, geophysics, and biology is experimental. This is why I’ve decided, with my wife, Claire, to contribute significantly to Woods Hole Oceanographic Institution (WHOI) and no longer give a dime to the MIT Lorenz Center.

  23. Willard says:

    Those who frown upon the precautionary principle might prefer Nassim’s way of putting things in perspective:

    Push a complex system too far and it will not come back. The popular belief that uncertainty undermines the case for taking seriously the ’climate crisis’ that scientists tell us we face is the opposite of the truth. Properly understood, as driving the case for precaution, uncertainty radically underscores that case, and may even constitute it.

    Click to access climateletter.pdf

    An analogy:

    Better to miss a zillion opportunities than blow up once. I learned this at my first job, from the veteran traders at a New York bank that no longer exists. Most people don’t understand how to handle uncertainty. They shy away from small risks, and without realizing it, they embrace the big, big risk. Businessmen who are consistently successful have the exact opposite attitude: Make all the mistakes you want, just make sure you’re going to be there tomorrow.

    https://www.esquire.com/lifestyle/money/amp19181300/nassim-nicholas-taleb-money-advice/

    Since we only have one planet so far, not blowing it sounds like a good idea.

    That said, I’m no fan of Nassim’s SpeedoScience, so mileage varies.

  24. Rick,
    Yes, I agree that there are a lot of irrational objections to nuclear and that this has impacted its implementation. However, I’m also not convinced that nuclear is the solution in all situations. Partly, we currently do not have enough people to build and maintain them, and there are probably still parts of the world where this would not be the obvious solution today, even if it might be in the future.

  25. Willard says:

    > there are a lot of irrational objections to nuclear

    In fairness, there are a lot of irrational arguments in favor of nuclear too. Take RickA, who’s a self-proclaimed libertarian. He’s pushing for a product that comes with more regulations and bigger government involvement in the economy.

  26. Ragnaar says:

    “Health Savings Plans are idiotic.”

    The tax treatment of health costs is frequently idiotic.

    I have been self employed for a long time. At first we won the ability to deduct our health insurance premiums above the line, while for years prior, many employees got that treatment while we didn’t. When we tried to buy our own policies since we lacked a certain group size, we paid higher premiums than employees of larger companies.

    So we bought HDHPs. They were invented for us. They are now being adopted by more and more large companies. My sample size of about 900 clients is the basis of this statement. The company might throw Health Savings Account (HSA) money for free (and tax free) at the employees at the same time as they lose their Cadillac coverage.

    W-2s a few years ago started showing the employer’s cost of the insurance the employee gets. The record holder is Hennepin County, frequently exceeding $20,000/year. They get Cadillac coverage and will be one of the last to adopt HDHPs.

    Is the subject still insurance?

    When I started doing this in the early 80s, medical expenses were more deductible. Congress has squeezed that over the years. For instance the 10% or 7.5% AGI threshold for medical deductions. Most of my medical itemizers are old people. A lot of it driven by insurance premiums. Squeezing old and sick people. That’s our democracy.

    HSAs are self insurance. Pay into them. No middle man. If you don’t get sick, go online and look at your money. Yes they are a tax break which favors people self insuring. HDHPs cut out the insurance companies to some extent. In the alternative, I want you to pay into insurance for the most minor thing that may happen to you. Self insurance is putting on your big boy pants.

    I think this is a good example of health insurance taking on a life of its own and dominating the field.

  27. Joshua says:

    Anders –

    I’m not sure if you mean less optimal or more optimal,

    I basically mean more optimal…but I think of it as less sub-optimal because there is no really good (or even optimal) way to go about this.

    I do think that Kerry Emanuel could have pushed back a bit harder against some of what John Christy said (all models are running too hot, for example) but that’s just my view.

    Perhaps. I did walk away from watching the video with questions about the technical issue of whether the models are as far out of sync with the actual warming as Christy argues. But ultimately it doesn’t matter that much because what we’re dealing with, as Emanuel says, is how to deal with low probability, high impact events. The precision of the models vs. observations is relevant to that question, but ultimately I will never be in a position to resolve that question and so IMO, the point is to approach policy development in that state of ignorance.

  28. Willard says:

    > Self insurance is putting on your big boy pants.

    It’s also not insurance.

  29. Ragnaar says:

    “It’s (self insurance) also not insurance.”

    A rose by any other name would smell as sweet.

    1. a practice or arrangement by which a company or government agency provides a guarantee of compensation for specified loss, damage, illness, or death in return for payment of a premium.

    2. a thing providing protection against a possible eventuality.

    1. I get paid. The money is not protection from change, but reimbursement for it.
    2. A seawall is insurance and provides protection. Planting wheat instead of corn is protection from lack of rainfall in general. Restoring annually plowed and pulverized farm fields to grasslands protects the soil from carbon depletion and protects watersheds.

    I prefer 2. Getting reimbursed isn’t the point. I want a protector. And I want to protect my stuff. I can advise you to have a 16 character password but more important is my password.

  30. Hyperactive Hydrologist says:

    A protector can be exceeded. Hence the need for insurance.

  31. HH,

    There has always been an assumption that low sensitivity = low impacts but what is the evidence for this?

    Good question, and I don’t really have an answer. Of course, I would argue that it’s not so much low sensitivity, but that a low sensitivity implies that for a given level of emissions, the resulting warming (and the resulting impacts) will be lower than if the sensitivity were higher. I would argue that if sensitivity, and our emissions, are low enough that we only warm to something similar to the range we’ve experienced in the last ~thousand years, then I would expect the impacts to be low. However, we’re probably already there, so there is probably little we can do to avoid warming to a level higher than anything we’ve seen for ~1000 years.

  32. Willard says:

    > A rose by any other name would smell as sweet.

    A car crash seldom smells sweet, Ragnaar. Neither does a burnt house. Victims of “self-insured” persons who go bankrupt after getting sued may not agree with your estimation of who’s putting on which big boy pants.

    While it makes sense “to keep powder dry” as Nassim recalls, it’s called savings. And personal savings need to be pooled to meet societal risks. In the case of car or home ownership, you’re oftentimes required by law to do so.

  33. Willard says:

    > I would argue that it’s not so much low sensitivity, but that a low sensitivity implies that for a given level of emissions, the resulting warming (and the resulting impacts) will be lower than if the sensitivity were higher.

    OTOH, less warming for the same amount of energy seems to imply more energy in the system somewhere else than in the atmosphere.

    I doubt it’s a good thing for the oceans.

  34. RICKA says:

    Each state could build 2 new nuclear reactors, and we would double the power generated with nuclear from 20 to 40%.

    If we standardized the design, got it approved and built 100 of the same exact reactors (4th or 5th generation – with passive cooling), the costs would drop, the time to build would drop and we could convert 20% of our power from fossil fuel to nuclear within 5 years.

    I would design for on-site storage (since that is de facto what we are doing anyway).

    Or I would build 8 or so regional recycling reactors and send all processed fuel to the regional recycling centers to be processed again.

    If it works, we could do the whole thing again, and within 10 or 15 years, be generating 60% of our power with nuclear.

    That would cut carbon emissions, and make electric cars a whole lot cleaner than getting their electricity from plants run by coal.

    Or we could do what Germany did and triple the cost of electricity and not reduce carbon emissions at all.

    This really isn’t that hard, people.

    It is a matter of priorities.

    Which is more important, being anti-CO2 or being anti-nuclear?

  35. @RICKA,

    Nuclear works. It provides 20% of our commercial power now. It could easily provide 80% of our power, or even 100% (if we wanted). Yet new reactors are very rare (except for military vessels).

    While I agree using nuclear would in many senses be ideal, actually, it doesn’t work in the sense that we really don’t know how to build them, as I’ve noted here before, elsewhere. (See the link.) That isn’t subjective: My criterion for knowing how to build an item involving some technology is that we’ve mastered it enough so copy N+1 costs less to build than copy N. That’s not true with nuclear, even if one controls for additional safety and environmental regulations, and sets aside complications of waste storage.

    Modular Thorium nuclear power stations sound attractive, but the technology hasn’t been mastered.

    And nuclear’s need for cooling sometimes puts it in awkward situations: Both in Peru and in, I believe, France, near the Alps, the rivers which were counted on to provide water to cool are running dry because of depleting mountain ice. At least in France, the utility is working as quickly as possible to offset their reactors, which need to be shut when this happens, with onshore wind.

  36. Rick,
    I’m really not quite sure what your point is (other than suggesting that accepting AGW implies being anti-nuclear, which is clearly not true). Yes, it would be possible to develop a nuclear energy infrastructure in a relatively short space of time if we could overcome the political, and societal, factors that get in the way. I think many would argue that overcoming these is actually remarkably difficult. It’s similar to suggesting that simply highlighting scientific truths will lead to acceptance and policy action (deficit model thinking).

  37. @Ragnaar,

    I s’ppose health insurance plans have something to do with risk, but, as there seems to be an offtopic subthread here …

    I had lunch and dinner with my older son in NYC a few weeks back. He works for a financial firm in London and lives there. On a stroll one of the afternoons, he remarked that when their company posts people to other countries, they pay something like US$2000-US$5000 per person per annum for health insurance, excepting the United States. (I think the US$5000 per person per annum is Switzerland, but I could be wrong.) In the United States it costs more like US$12,000 per person per annum.

    Obviously, health insurance in the United States is inexpensive. Don’t want any of that single payer stuff. Sure. Obviously. Silly.

  38. RICKA says:

    ATTP:

    My point is that we already have a solution – but it is not being implemented yet.

    Why?

    I guess because the risk of CO2 emissions is not perceived to be bad enough yet to overcome the perceived risk of the known solution.

    Nuclear is the only known solution (in my opinion).

    We have already proven we can generate 20% of our power needs with nuclear.

    We have no clue how to generate more than about 35% with renewables and it doesn’t really reduce CO2 emissions enough, because of back-up power. We haven’t yet invented power storage on the scale necessary. We haven’t yet invented a non-nuclear non-carbon power source which is cheap enough to replace fossil fuels.

    But we refuse to go nuclear.

    We refuse to pull our military nukes into port and hook them up to provide power.

    Instead, we wait for pie in the sky, rather than going with what we know can work.

    France showed it can be done, but Europe is rejecting the French model – why?

    We will be going nuclear anyway once fossil fuels get expensive enough – why not get going now and reap the CO2 emission reductions early rather than late?

    This is an easy call and I really wonder why we cannot at least agree to double our nuclear percentage.

    It seems like a no-brainer to me.

    If CO2 emissions are the existential crisis people say they are.

  39. Rick,
    I know what point you’re making. I’m just not quite sure why you keep making it.

    We have already proven we can generate 20% of our power needs with nuclear.

    This isn’t correct. It’s about 11% of electricity and about 4% of primary energy.

    There are currently about 447 nuclear power plants. If we wanted nuclear power to provide all electricity, we’d need to build another ~4000. We’re currently building 61.

    Given what France has done, it must be possible to build nuclear power plants fairly quickly if we wanted to. However, I expect there are regions of the Earth where this is currently not viable, either because of lack of cooling, or lack of expertise (on top of a lack of societal willingness).
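
    As a quick back-of-envelope check of that scaling (crudely assuming that future reactors would have roughly the same average output as today’s fleet, and ignoring demand growth):

```python
# Crude scaling: if ~447 reactors supply ~11% of global electricity, roughly how
# many similar reactors would supplying all of it require? This ignores load
# factors, demand growth, retirements, and so on.
current_reactors = 447
current_share = 0.11   # approximate fraction of global electricity from nuclear

needed_total = current_reactors / current_share
print(f"~{needed_total:.0f} reactors in total, i.e. roughly "
      f"{needed_total - current_reactors:.0f} more than exist today")
```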

  40. RICKA says:

    ATTP:

    I am sure your numbers are correct worldwide.

    I was speaking USA only for 20% and 100 new reactors (2 per state).

  41. @Willard, @Ragnaar,

    Let’s give The Bard a little respect, please:

  42. Rick,

    I was speaking USA only

    Oh, I see. Carry on, then. I think Kerry Emanuel made the point in the video that if France could build a large nuclear power infrastructure in about a decade, then the US could too. I’m sure this is true. Not quite sure why you keep making your point about the US in comments on a blog run by someone in the UK.

  43. RICKA says:

    ATTP:

    Sorry about that.

    For some reason I had gotten the idea you lived and worked in the USA.

  44. Rick,
    Okay, no problem. One other correction: if you’re talking USA, then nuclear is 20% of electricity and 9% of total energy.

  45. KiwiGriff says:

    Nuclear power ?
    It’s mostly about risk
    At this point in time no one has developed practical safe long term storage for high level waste.
    The majority of shut-down plants are in SAFSTOR: long-term storage with no plans to fully decontaminate.
    The risk of disaster is uninsurable on the commercial market.
    To place this in perspective, the cost of Fukushima is estimated to be 192 billion, and the cost of decommissioning is estimated to be 90 billion. Looking at the past history of cost-estimate inflation, we could reasonably expect both of these figures to grow in the future.
    In the USA the total available to cover nuclear liability is 13 billion.
    Without government taking up this risk there is no viable nuclear industry.
    I cannot see how anyone who claims to be for market forces would back an industry only able to operate on a privatize-the-profits, socialize-the-risks model.

  46. Ragnaar says:

    “Yes, it would be possible to develop a nuclear energy infrastructure in a relatively short space of time if we could overcome the political, and societal, factors that get in the way.”

    Which sounds, to this heretic, like it’s possible to develop wind and solar in a short time if we could overcome resistance to doing so.

    I see a parallel but others may not. The blocking of nuclear energy has been practiced and refined to the point of pro-nuclear forces giving up. They won. That option is off the table. You can’t draw on it.

  47. Ken Fabian says:

    RickA – “I guess because the risk of CO2 emissions is not perceived to be bad enough yet to overcome the perceived risk of the known solution.”
    True of Conservative/Right politics, where most existing popular support for nuclear can be found. Climate science denial and obstructionism prevents that body of support being mobilised and that is more profoundly damaging to nuclear than all the anti-nuclear activism.

    “Nuclear is the only known solution (in my opinion).”
    There are no all-of-problem solutions using 100% nuclear on offer and climate science denial has prevented any being developed. Nuclear requires far greater levels of sustained government intervention in energy markets than RE and that is not possible with climate science denial infecting mainstream politics.

    We don’t know how the end game of moving to near zero emissions will play out but the early-middle stage of the game is going to be dominated by growth in renewable energy and any future nuclear build will get incorporated into energy systems with lots of them in place. ie they will probably need to operate intermittently.

    In market terms, they will have to be financially viable just with the power they sell outside the times when wind and solar are abundant – which means nuclear will not be competing with them directly, but competing with hydro, batteries and demand management in that space. The value of energy in that space is going to be higher than any average price and a lot higher than wind and solar whenever those are operating, but the competition is going to be fierce and it remains to be seen whether those shortening periods at ‘off peak’ prices are enough to support nuclear power.

  48. izen says:

    @-ATTP
    “…there are really two main factors that will determine how much we change our climate and, consequently, the severity of the impacts. These are climate sensitivity … and how much we emit into the atmosphere.”

    I see HH and Willard have already suggested that giving such status to climate sensitivity may be a mistake.

    If the SAME amount of energy is added to the system it may get ‘expressed’ in a variety of ways. Ocean current changes and ITCZ movement will be the source of the impacts, not a global mean temperature, which will depend on the pattern of warming as much as on the total added energy.

    Climate sensitivity derived from a metric – GMST – is a variable dependent on the distribution of energy at least as much as the amount. (Isaac Held had a couple of posts on this a while ago.)

    A low climate sensitivity could result from the regional shifts in weather patterns generating a low GMST while those same regional shifts are far more impactful than the GMST and derived TCR/ECS might imply.

    I understand that it is a simple and available metric that captures the main issue, that this is a GLOBAL change of significance.
    But the apposite convenience of the metric does not really help pin down the likely impacts. Certainly not to the extent that justifies arguments about differences of a degree or so.

    It is much more likely that the magnitude of the impacts will scale in some way with the amount of energy added, via regional changes in weather patterns, ocean currents and sea level rise. Such detail about regional changes is beyond present modulz. It is an area of uncertainty that is obscured by the obsession with ECS.

    Uncertainty increases risk. Uncertainty should promote the urgency for taking climate change more seriously as a collective global problem because we are so uncertain of the local impacts.

    Uncertainty is not a justification for Luckwarmism, quite the opposite.

  49. izen says:

    @-Ragnaar
    “The rich have the advantage of being more able to self insure in general. Sometimes the poor self insure as they have no money to pay the premiums.”

    Renaming wealth (self)insurance smells a little fishy. I am not convinced it can be called any kind of insurance, and if it can, it is the least efficient.
    You recognise one problem with individual self insurance in this comment –

    @-” When we tried to buy our own policies since we lacked a certain group size, we paid higher premiums than employees of larger companies.”

    There are economies of scale. More than this, the insurance middle-men can use the regular income from premiums to be significant capital investors. With the size to diversify and balance financial risk/return, there are some insurance markets where the outgoing costs are covered by investment income; premiums are just the factor that enables the investment side of the business.

    But there is a more fundamental reason why self insurance fails. It is incapable of providing the financial resources in a manner that can provide protection. Take a basic social protection, a Fire Service.
    If you were sufficiently wealthy and could rebuild from a fire, or employed enough staff who could prevent the worst damage, perhaps you are self insured.
    But eventually small groups collected enough resources to insure their property, by financing a dedicated Fire Service, just for them. To show they were wealthy enough to self insure in this way they had signs on their houses to indicate which Fire Service they had financed.

    The ensuing farce, wherever and whenever it has played out, is predictable. Competition led to arson to bankrupt rival Fire Services. The funds raised by the various groups were too small or diverted and did not provide a service.
    The result is that in most civilisations, sooner or later, they establish a communal and universally applied Fire Service, with a consistent, often communal tax as the source of finance. A protection, or insurance against loss, far better than any self insurance on the individual level might achieve.

    Medical services are subject to similar problems. A consistent source of funding for education, research and treatment facilities is not available where individuals pay when they need to from current resources.

    @-“I have two clients who are workers comp attorneys. They sue for the workers. Both are in my top 10 of most income earned.”

    So… the big money is where business is rich enough and willing to pay out the cost for breaking rules about worker treatment. Is that self insurance?

  50. Ragnaar says:

    “Where most (from the right) existing popular support for nuclear can be found. Climate science denial and obstructionism prevents that body of support being mobilised and that is more profoundly damaging to nuclear than all the anti-nuclear activism.”

    I puzzled over the above for a few minutes before I understood it. The right is moderately in favor of nuclear power. When you look at the lack of new plants, that agrees with this. Some redneck state seems to be failing to get their new one online. It’s a failure because of its costs, it seems to me. But at least they are moderately on the right side of the issue.

    You’re saying that the right is not sufficiently worried about global warming and thus not motivated enough to support nuclear power more. Part of the problem, and I have my bias, is that the deal is tied up in knots by regulation. Part of the problem is that no one, including huge corporations, has gone modular and just kept making the same thing. Part of the problem is natural gas, a fossil fuel which is cheaper, agile and modular.

    I think the right would be more in favor of taking the lead on the issue, if the left covered them. But it may be about winning elections, which applies to both parties.

  51. Joshua says:

    Ragnaar –

    The right is moderately in favor of nuclear power.

    Ostensibly.

    By what scenario do you see significant nuclear buildup with support from “the right,” particularly the libertarian right?

    Support for nuclear is a convenient rhetorical tool for “the right.” Our lack of nuclear power is not simply attributable to “the left.”

  52. Steven Mosher says:

    “Nuclear power ?
    It’s mostly about risk
    At this point in time no one has developed practical safe long term storage for high level waste.”

    we beg to differ
    http://www.deepisolation.com/

  53. Ragnaar says:

    izen:

    Rather than looking at why self insurance doesn’t work, we should look at why regular insurance doesn’t work. The biggest problem with healthcare is insurance. Remember the Hennepin County worker with $20,000 employer’s cost per year for health insurance. He’s about 60 and married. I have a single guy about age 62 holding out until Medicare kicks in at age 65. $12,000 a year for a HDHP. This is regular insurance applied to a problem and regulated by our governments.

    People switch to health sharing deals. Frequently the self employed. The system is screaming that there’s a problem.

    As I said, larger companies see the problem and are going toward HDHPs and HSAs. The money knows. Big Health is being transformed.

    This may have started out as: someone wanted climate insurance to solve the problem. I am saying it didn’t solve the healthcare problem.

  54. KiwiGriff says:

    Steve Mosher, Director for Asia/Pacific………

    It may be a solution.
    The technique does not yet exist beyond a proposal, so it does not negate my statement.
    I wish your team the best in this endeavor and hope the technique survives full examination of its risks and costs.

  55. Steven Mosher says:

    Kiwi. Costs are well known and predictable since it uses known technology with no required innovation.

    On risks..

    Click to access Deep-Isolation-Repository-Technical-Discussion.pdf

  56. izen says:

    @-“The right is moderately in favor of nuclear power.”

    Except in Iran, apparently.

  57. angech says:

    “It’s mostly about risk”.
    Tricky business, insurance.
    I look around Melbourne [small Australian city] and see that the biggest buildings, and the biggest type of buildings, are insurance companies.
    Why is this so?
    Because they have a product, snake oil, with miracle properties which costs them nothing.
    The customers give them the money and they give a little bit back to some of them and wine and dine, sail big yachts and live in large houses with all the food in the world, etc.
    Harvey, cry your eyes out.
    Most insurance relies on claims, in general being a lot less than the premiums.
    So no-one will give you insurance for things you really need unless they charge you more than you can afford.
    Why is this so hard to understand?
    You can, I believe, get insurance playing roulette, and still lose.
    Your odds of having a burnt house are remote, like throwing money in the fire.
    What drives insurance is fear and the promise of a big reward for what seems on the surface a small output.

  58. angech,

    Most insurance relies on claims, in general being a lot less than the premiums.

    In total, maybe (otherwise it wouldn’t be viable), but individually not true. Insurance is meant to protect you against things that would be far too costly for you to deal with by yourself.
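
    As a toy illustration of that pooling point (all of the numbers are invented): each policyholder pays a premium slightly above their expected loss and, in return, never faces a loss they couldn’t absorb on their own, while the pool as a whole covers the claims.

```python
# Toy risk-pooling example with invented numbers: many policyholders each face
# a small chance of a loss far larger than their premium.
import random

random.seed(0)
n = 100_000
p_loss = 0.001                  # 0.1% chance per year of a large loss
loss = 200_000                  # far more than most could absorb out of pocket
premium = loss * p_loss * 1.2   # expected loss plus a 20% margin

total_claims = sum(loss for _ in range(n) if random.random() < p_loss)
total_premiums = premium * n
print(f"premium per head: {premium:.0f}, total premiums: {total_premiums:.0f}, "
      f"total claims: {total_claims:.0f}")
# Individually, a claim is hundreds of times the premium; collectively, the
# pooled premiums cover the claims with high probability.
```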

  59. “Because they have a product, snake oil, with miracle properties which costs them nothing.”

    we can add insurance to the list of things angech doesn’t understand….

    Most insurance relies on claims, in general being a lot less than the premiums.

    you don’t say? LOL! Hint: the skill lies in being able to set premiums high enough for that to be true (at least in probability), but low enough not to lose custom to other insurers.

  60. izen says:

    @-angech
    “Most insurance relies on claims, in general being a lot less than the premiums.”

    Most insurance relies on claims, in general being a lot less than the return on investments.
    Premium rates are mainly set by stock and bond return rates rather than claim size.

  61. Dave_Geologist says:

    hyper

    Their science does not extend directly to the behavior of viscoelastic flow in ice sheets.

    I presume glaciologists do consider viscoelastic flow of ice sheets. For example, A Viscoelastic Model of Ice Stream Flow with Application to Stick-Slip Motion was the first hit on Google for “viscoelastic flow in ice sheets.” I would not expect GCM builders to model individual ice flows as they are too small and need to be parameterised anyway. I’d expect them to focus on upscaling the work of more detailed glaciological modelling.

    At the ice-sheet scale, I’d see the stick-slip component as more challenging to predict than the viscoelastic component. Especially if you are concerned about timing of tipping points. It’s essentially the same problem as earthquake prediction. How far have we got with that?

    There’s also a literature on the interaction between the ice sheet and isostatic loading (which will be important for sill/grounding-line interactions as well as SLR), e.g. Fast computation of a viscoelastic deformable Earth model for ice-sheet simulations.

    And even on heating of the Earth’s mantle by loading/unloading of large ice sheets, e.g. Short time-scale heating of the Earth’s mantle by ice-sheet dynamics. Possibly enough to trigger short-lived bursts of volcanism, at least in a volcanism-prone region, although it would be hard to distinguish in practice from volcanism induced by stress changes or deformation.

  62. Dave_Geologist says:

    Obviously, health insurance in the United States is expensive.

    (Yes, I know that’s what you meant hyper, you were being sarcastic with inexpensive).

    It was brought home to me when I bought travel insurance last year. There were three or four categories: home (UK if you haven’t guessed from the spelling), where healthcare is free and it’s only travel and accommodation costs that are covered; EU/EHIC area, where healthcare is free (reimbursed by reciprocal arrangement between countries) but there may be repatriation costs; one or two for RoW (based on distance from UK and healthcare costs); and a whole separate category for the USA (but not Canada or Mexico, which are the same repatriation distance).

    Why? Because the USA healthcare cover was three or four times that of the next most expensive country.

  63. Joshua says:

    Most insurance relies on claims, in general being a lot less than the premiums.

    I’ve read (I think here) that actually insurance companies don’t make all of their money in that fashion: instead, income from premiums and payments on claims are not grossly out of balance, but insurance companies make most of their money by leveraging the capital from cash flows.

  64. Joshua says:

    Oops. I see izen already weighed in.

  65. verytallguy says:

    I would not expect GCM builders to model individual ice flows as they are too small and need to be parameterised anyway. I’d expect them to focus on upscaling the work of more detailed glaciological modelling.

    My understanding is that ice sheet dynamics are not included in GCMs at all.

    Indeed, albedo effects from ice sheet retreat are specifically excluded from the definition of ECS, being “slow” rather than “fast” feedbacks.

    A quick google suggests even earth system models (ESMs) don’t currently account for ice sheets:

    Most ESMs do not directly simulate the growth and decay of ice sheets on land, but ice sheet model components are being developed to address the potential for ice sheet collapse in the future.

    https://www.nature.com/scitable/knowledge/library/studying-and-projecting-climate-change-with-earth-103087065

  66. Sceptical Wombat says:

    I presume that Christy’s claim that the models are running hot (three times too hot) is based on the assumption that the UAH temperature series represents reality. That should have been challenged.

  67. Dave_Geologist says:

    Surprisingly, SM, I find myself agreeing with you for once re your Deep Repository. At least wrt claystones as the optimal containment formation, and with a “fire-and-forget” disposal strategy. I know/knew a couple of people who worked in the British Geological Survey’s radwaste programme and was told over a beer that they thought the London Clay was the ideal repository, but politics prevented them from considering it.

    I’ve always seen hard-rock disposal sites as less attractive because (a) they’re always highly fractured and fracturable so prone to leakage – hence the irony of radwaste and geothermal energy targeting the same rocks; and (b) they’re strong enough to support tunnels which tempts you into making it accessible for inspection and maintenance, which leads to terrorism/proliferation issues and means you have to maintain them for millennia. Better to put it somewhere it’s inaccessible and where the natural earth processes act as a barrier. Salt is an alternative but it’s quite mobile when hot or when it gets wet, so I wouldn’t trust the waste to remain in place. For example, I’ve seen salt come 1000 feet up a ruptured wellbore in the 3-4 years it took to organise an intervention, and abandoned machinery in a salt mine entombed within a decade.

    Cost will be an issue. But of course the alternatives are not cheap either. If you want to use off-the-shelf drilling tech you’ll be restricted to an 8.5 inch diameter horizontal borehole. Even if you drill multilaterals, and only bury the most radioactive materials that way, with spacing between the waste cylinders you’ll need a lot of boreholes. You rather gloss over the local thermal effects on the shale, although you acknowledge that, so perhaps you have looked at it in more detail. I accept that the damage envelope will be small relative to the depth of burial and thickness of the shale (assuming you pick the right shale), but would worry about wellbore isolation. IOW leakage up the cemented annulus of fluids which had used the damage zone to bypass the original barrier.

    I can foresee issues with getting a shale with the right degree of compactedness. I’ve seen shales approaching microdarcy matrix permeability which are sufficiently hard to be pervasively fractured. You’d ideally want something softer than that, which means you may get up to concerning levels of matrix permeability. Or have to go uncomfortably shallow (1 km is uncomfortable to me; that’s too close to the rule-of-thumb maximum depth to which rocks can sustain absolute tensile stress, permitting vertical, open fractures to surface). Having said that, the weakest link is almost always the borehole. In zonal isolation assessments we typically used a rule-of-thumb maximum permeability for oil-industry Class “G” cement of 1 millidarcy. That’s conservative, as I’ve seen tests on cement from real North Sea wells undergoing decommissioning give 0.1 millidarcy. But that’s for a one inch core plug, probably not containing microfractures or it would have shattered during plugging. (As a side issue, that means that most wellbores are not absolutely sealed to gas, just enough that leakage is acceptably close to background rates.) A little dribble of methane is acceptable in a basin where there are natural leaks from the subsurface and from ponds and lakes. Is a little dribble of polonium acceptable?

    Unfortunately I wouldn’t choose the Bakken or its ilk for a number of reasons. (1) Too hard. It’s why they can be fracced. If you read the literature on fraccing and particularly around microseismic monitoring and frac optimisation, you soon realise that these reservoirs are already naturally fractured. Just not enough to provide unstimulated commercial flow rates. (2) Some of them already leak. Some of the shale gas provinces don’t just have natural seeps of biogenic gas, they also have thermogenic gas seeps typed to the source shale. Which is prima facie evidence of a pre-existing leakage path to surface. (3) The best-performing ones are partially mature oil and gas source rocks. Part of the fracturing and leakage is probably due to the volume increase associated with kerogen maturation, especially to gas. (4) Related, heating will generate more oil and gas, fracturing the shale and potentially exceeding overburden pressure. That happens fast. I’ve run pyrolysis samples through a portable Source Hound machine in minutes.

    Which is not to say I don’t like it, at least for fuel pellets and other small volume stuff. Just need to be sure all the angles are covered, and, more difficult, overcome the political objections to a “fire-and-forget” technology. Which also applies to CCS BTW. Anything which requires a caretaker to monitor or intervene for centuries or millennia is not geological storage in my eyes.

  68. Dave_Geologist says:

    My understanding is that ice sheet dynamics are not included in GCMs at all.

    You’re right vtg, at least AFAICS and to date. I was sure I had seen GCM SLR projections that include ice sheet melting as well as ocean temperature and circulation changes, but this one, for example, just takes global SLR from an ice sheet melting projection and adds it onto the GCM-modelled SL.

  69. Steven Mosher says:

    “Cost will be an issue. But of course the alternatives are not cheap either. If you want to use off-the-shelf drilling tech you’ll be restricted to an 8.5 inch diameter horizontal borehole. Even if you drill multilaterals, and only bury the most radioactive materials that way, with spacing between the waste cylinders you’ll need a lot of boreholes.”

    Cost is estimated to be around $10M per borehole. 400 or so are needed to store the 80,000 tons of waste, plus an additional 10 boreholes per year (we add 2k tons per year).
    A standard assembly is 13 cm on a side. Fits down the hole with no changes.

  70. @dikranmarsupial,

    I think it’s necessary to distinguish between insurance as presently done and insurance as it was first conceived. The origins of insurance were, as I understand it, with life insurance and benefit societies, where upon death of the insured, principally a breadwinner, there would be some monies and income for the survivors, without which a trip to the poorhouse and imprisonment was nearly assured. It’s an interesting story, because it’s bound up with the emergence of modern Statistics as a field, which at the time was known principally through its application to games of chance. Before life insurance there were attempts to insure cargoes, something which the Great Atlantic and Pacific Tea Company (GAPTC and its forebears) pursued. Indeed, much of the early development of insurance saw its purpose as a great social good, and that as part of its mission. It operated and won because deaths and payouts were not correlated with one another, and rates of death at given ages could be reasonably predicted (actuarial tables), so annual payouts were known, and the minimum number of subscribers therefore known. The earliest ones, as I understand it, were non-profits. Of course GAPTC saw itself as providing customers a service at which it could profit. However, in their case they were clearly incentivized not to lose insured cargoes.

  71. @Dave_Geologist,

    I presume glaciologists do consider viscoelastic flow of ice sheets. For example, A Viscoelastic Model of Ice Stream Flow with Application to Stick-Slip Motion was the first hit on Google for “viscoelastic flow in ice sheets.” I would not expect GCM builders to model individual ice flows as they are too small and need to be parameterised anyway. I’d expect them to focus on upscaling the work of more detailed glaciological modelling.

    Hi Dave, yes, thanks. Of course they do. But the MIT Lorenz Center consists of mostly atmospheric scientists with an interest in oceanography and fluid dynamics. I’m not plugged in enough to understand all the political undercurrents, but, from what I understand, there’s a certain sense of superiority expressed by people there and at places like NCAR over “mere” field people, to the degree that glaciologists and others who want to work field campaigns have a hard time getting funded in comparison with the big computer modellers.

    But what I meant about ice sheets was that even Navier-Stokes is insufficient to deal with the viscoelastic flow and stick-slip, because it doesn’t naturally handle effects of asperities, as I’m sure you know. Indeed, as I understand it, even rock fracturing is something which is treated using a more empirical approach, particularly on the small scale. That is an area where, in my judgment and hunch, the kind of data-driven, model-free forecasting pioneered by Perretti, Munch, and Sugihara (reported in 2013) is likely to shine. In fact, I’m working on a blog post addressing data-driven explanations for such processes. (See also.)

  72. @verytallguy,

    Well, when Professor Manabe gave a talk at Harvard, afterwards I went up and asked him about this, and he said that GCMs would need to be changed to consider details of ice flow in the same way that oceans needed to be modeled in some detail. I expressed astonishment, saying that that was really hard. And, I’ll always remember his response: “Yes, it is. But computers are getting very fast.”

  73. Dave_Geologist says:

    hyper
    I’m not familiar with how glaciologists handle it, other than noting that the quoted paper used a MATLAB FEM. Which is similar to what we’d do for rocks, i.e. finite/boundary/discrete element or ball-and-spring model. Which sorta side-steps deep questions of continuity because everything is discretised and least-squared. Wrong, but, hopefully, good enough to be useful. At least rocks, and I presume glaciers, don’t have the additional complexity of turbulence (except perhaps when we get to Mantle or Core conditions). In the sort of things I am familiar with (not earthquakes, other than reading papers), we cheat when it comes to things like faults or fractures. Set some threshold of stress or elastic or plastic strain at which the rock will break or a fault will slip. Then stop and redraw the mesh and change or add new properties, then restart what is effectively a new model. Rinse and repeat.

    Of course a lot of the time you don’t have to worry about that level of detail, which is why I thought GCMs might just take the glaciologists’ results and parameterise them. We can’t predict when and where the next earthquake will occur on the Parkfield Fault, but we can predict that the Pacific Plate will continue to slide past the North America Plate at about 1.5cm/year. We also happily use “effective” properties which are history-matched and often wildly different from theoretical or lab values. For example conductive-fracture models use playing-card-shaped or ellipsoidal fracture apertures and impose a permeability orders of magnitude different from the analytical calculation. We shrug our shoulders and say that’s down to the natural fracture roughness. The Hoek-Brown rock-failure criterion has a fudge factor which civil engineers honestly call a “damage” or “jointedness” parameter, but (some) geologists use it as a fudge factor at depths where the rocks in question can’t support open joints. (I don’t like Hoek-Brown because there are alternative fudge factors which can be related to something more physical, at least visualisable if not easily measurable). Auditors would have a field day!
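
    For what it’s worth, here is a minimal, purely illustrative sketch (Python, made-up numbers) of the break-then-restart loop described above: a 1-D chain of springs is loaded in steps, any element whose strain exceeds a threshold has its stiffness knocked down, and the static problem is re-solved with the updated properties. This is not any particular geomechanics code, just the general idea.

```python
import numpy as np

n = 20
k = np.full(n, 1.0)              # spring stiffnesses (arbitrary units)
k[n // 2] = 0.8                  # a pre-existing weak element
strain_limit = 0.05              # failure threshold
damaged_k = 0.1                  # post-failure (slipped/broken) stiffness

for end_disp in np.linspace(0.0, 1.0, 50):   # load the chain in small steps
    while True:
        # springs in series carry a common force; strain_i = F / k_i (unit spring length)
        force = end_disp / np.sum(1.0 / k)
        strain = force / k
        failing = (strain > strain_limit) & (k > damaged_k)
        if not failing.any():
            break                 # equilibrium is consistent with current properties
        k[failing] = damaged_k    # "redraw the mesh": update properties, then re-solve

print("final stiffnesses:", k)
```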

  74. Ragnaar says:

    JCH:

    “Health Savings Plans are idiotic.” There are two main types: Flexible Spending Accounts (FSAs) and Health Savings Accounts (HSAs). The FSAs are often paired with the Cadillac plans. Use it by the end of the year or lose it, apart from a quite minor carryover amount. HSAs can roll over from year to year. Once you die, the money may have to be kicked out of the account.

    HSA contributions are, without a doubt, tied to an HDHP. Having an HDHP is the key that allows HSA contributions. HSA contributions are in the top five of best tax deductions. If done through your paycheck, you save by paying less Social Security and Medicare tax. Social Security taxes cap out, so if you are over that cap, you don’t save on Social Security taxes. Compare that to a 401(k) contribution, where you still have to pay Social Security and Medicare taxes even on the contribution, with the same cap conditions.
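
    For concreteness, a rough back-of-envelope comparison (illustrative rates and a hypothetical $3,000 contribution; it assumes the HSA contribution goes through a pre-tax payroll deduction and income is below the Social Security wage cap):

```python
contribution = 3000.00
fica_rate = 0.062 + 0.0145          # Social Security + Medicare, employee share
marginal_income_tax = 0.22          # hypothetical federal bracket

hsa_saving = contribution * (fica_rate + marginal_income_tax)   # avoids both
k401_saving = contribution * marginal_income_tax                # still pays FICA
print(f"payroll HSA saves ~${hsa_saving:.0f} up front, 401(k) saves ~${k401_saving:.0f}")
```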

    Now withdraw the 401(k) money. The government has been patiently waiting for its money. Pay them. Instead, use HSA money and pay no tax. 401(k) contributions are huge. The other retirement for many people. 50 million taxpayers can’t be wrong and HSA contributions are better.

    What will happen to healthcare? Whatever happens, having a large balance in your HSA makes it less worse. So rather than having to be saved by some government, you can save yourself.

    Does my health insurance company have any say in what I do with my HSA money? No. As far as I know, you are in complete control and only need to worry about the IRS. Follow the rules and keep adequate records.

    HDHPs and HSAs are libertarian to an extent in that they increase personal responsibility and control. But since we can’t have all private roads, maybe they aren’t a good idea.

  75. Willard says:

    > But since we can’t have all private roads, maybe they aren’t a good idea.

    Maybe not:

    Once you start paying attention to the unequal distribution of the capacity for self-control, there is a certain feature of libertarianism that immediately becomes apparent. Over the years, I’ve spent a lot of time listening to libertarians criticizing one or another form of intrusive state power and demanding that it be rolled back, to create more room for individual freedom. And yet in all these years, I’ve never heard a libertarian demand this in a case where he or she did not also expect to benefit personally from such a roll-back.

    http://induecourse.ca/what-do-libertarians-and-pedophiles-have-in-common/

    The answer to Joseph’s rhetorical question is: Before the internet, nobody realized how many of them there were.

  76. @Dave_Geologist,

    The Hoek-Brown rock-failure criterion has a fudge factor which civil engineers honestly call a “damage” or “jointedness” parameter, but (some) geologists use it as a fudge factor at depths where the rocks in question can’t support open joints. (I don’t like Hoek-Brown because there are alternative fudge factors which can be related to something more physical, at least visualisable if not easily measurable). Auditors would have a field day!

    This is an interesting problem which affects a lot of the newer data science and machine learning methods. In particular, there’s a dearth of techniques for validating that the gizmo which has just been trained to forecast or predict something actually works. They are typically opaque to inspection. Available techniques generally rely upon methods like cross-validation, but that assumes one has in hand a large and representative set of data labelled as to what kind of thing it is, that is, the objective of the prediction or forecast. Often, such a set of data is small, and there is no systematic way of telling how representative it is because covariates of significance just aren’t known.

    Still, the gizmo might seem to do well in instances that are known, and it separates data into groups sensibly. But how the next step is best taken under these circumstances — to use it in an important task — is not known. In fact, much of my professional work for the last two years has concerned developing auxiliary statistical methods for validating such techniques.

    Note, in the cases where there is a forecast to be made, presumably if the gizmo runs long enough and predicts in advance well enough, it builds credibility, using tests and circumstances suitably judged by things like Brier Scores. Could there be a surprise which left the gizmo quivering helplessly on the floor? Sure.
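
    As an aside, the Brier score mentioned above is simple to compute; here is a minimal sketch with made-up forecasts, comparing a hypothetical black-box “gizmo” against a base-rate baseline:

```python
import numpy as np

def brier_score(forecast_prob, outcome):
    """Mean squared difference between forecast probabilities and 0/1 outcomes.
    Lower is better: 0 is perfect, 0.25 is what a constant 0.5 forecast earns."""
    forecast_prob = np.asarray(forecast_prob, dtype=float)
    outcome = np.asarray(outcome, dtype=float)
    return np.mean((forecast_prob - outcome) ** 2)

# Hypothetical example
events = np.array([1, 0, 0, 1, 1, 0, 1, 0])
gizmo = np.array([0.9, 0.2, 0.1, 0.7, 0.8, 0.3, 0.6, 0.2])
baseline = np.full(len(events), events.mean())   # always forecast the base rate

print("gizmo   :", brier_score(gizmo, events))
print("baseline:", brier_score(baseline, events))
```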

    But the thing is, and, I think, this is where this is relevant to climate forecasting, it’s reasonable to think that the set of forecast-making models is bigger than the set of forecast-making models which are inspectable, understandable, and interpretable. If there’s some kind of score applicable to the forecasts against what actually happens, there is no reason to necessarily believe the inspectable, understandable, and interpretable models will contain “the best” forecaster. That could be a model which is opaque. So it’s possible that the best climate forecast might be made by a calibrated model which no one can understand. I’m not saying this is true, but it’s possible.

    And that’s something we, in my field, need to figure out.

  77. Steven Mosher says:

    “And yet in all these years, I’ve never heard a libertarian demand this in a case where he or she did not also expect to benefit personally from such a roll-back.”

    thus proving the superiority of libertarianism.
    go figure… when state power takes what is ours, demanding its return will benefit us.
    by
    definition

    finally never trust people who promote positions antithetical to their interests.

  78. Ragnaar says:

    Willard:

    Your like talks about the self control aristocracy. Most people are not part of that but libertarians are. So what libertarians want is everyone to have self control. But the world is truly not like that. I have clients that all they have for income is social security benefits and that’s all they will have for the rest of their lives. Before you say, What do they need me for, we have a deal called a property tax refund in Minnesota.

    Let’s make America great again. Well, we don’t do that by insuring the world. We do it by insuring our country. The same criticisms of libertarians certainly apply to the United States as a whole. We want other countries to have self control, and when they don’t, then we need to change because lord knows, the libertarians can’t be right. They have so little control over their own lives, 2.3 inches of sea level rise per decade is too much for them.

    When we buy insurance, we buy it for ourselves. But if the Republicans win, we continue on our insurance theme but because of the meanies, now we have to do it ourselves. So we can continue with mitigation or adaptation. And because of Republicans, money is tight, so now we need to buy the insurance with the most value. Not the kind that gets spread all around the world.

    Now we might feel bad as poor countries are being flooded while we eat quiche. So then we may say, Florida now has self control and can deal with its own problems while we go off and save more people overseas. We can say most of us have self control but a lot of people who aren’t like us don’t. But then we’d have to say, Florida has to self-insure, but of course that can’t be right, so then we’ll remind them how many times they voted for Republicans, and they say they are kind of the party of self control. But not as much as those libertarians. But for once they might be right, because Florida is going to have to exercise self control so we can save someone less fortunate. Because self control is good in some respects, because then you can afford to free up some of your money for us to keep saving the world.

  79. JCH says:

    “2.3 inches of sea level rise per decade is too much for them.”

    Ridiculous.

  80. Willard says:

    > So what libertarians want is everyone to have self control.

    I wish they’d simply want a pony like everybody else.

    Libertarianism may not be the final state in the self control business. It may not even be a middle state. When risks are involved, they may not even be in a stable state at all, if what you say is representative of their position, Ragnaar.

    ***

    > never trust people who promote positions antithetical to their interests.

    Establishing policies that redistribute wealth need not be antithetical to anyone’s interests. The Niskanen Center is trying to sell a free-market welfare state as we speak:

    If libertarians need to rebrand good ol’ social democracy to get in the US of A’s political landscape, so be it. Proper labels are less important than getting the country socially up to speed.

  81. Ragnaar says:

    JCH:

    The 2.3 inches per decade is about the average of the two middle scenarios from AR5 straightlined to about 2090. While above I wrote Like, I meant Link, Willard’s link where I am in the libertarian self control aristocracy. The closest I’ll get to being royalty.

    “As a result, they have great difficulty seeing the world through the eyes of someone who lacks it. And so they spend their days advocating political ideas that would, in many cases, only benefit members of their narrow social class, and yet this never even occurs to them.”

    So while Florida has a problem with sea level rise, I don’t. But they are smart. They’ll figure it out. Self control is the opposite of saying, give me some government money. I figure they could just move to Minnesota. And have self control to boot. But we can up the lack of self control and bring up all the poor of the world. They all can’t move to Minnesota but I’d favor 5 million Chinese doing so. Bill Holm came up with that one.

    The link helps explain why we never get elected. I actually can empathize and more so if I am getting paid, but I don’t like opening my wallet.

  82. izen says:

    @-W
    “If libertarians need to rebrand good ol’ social democracy to get in the US of A’s political landscape, so be it. Proper labels are less important than getting the country socially up to speed.”

    Thanks for the pointer to the Niskanen Center ‘libertarian'(?) take on social welfare systems and individual freedom. It was an interesting read.

    Click to access Final_Free-Market-Welfare-State.pdf

    There is something slightly farcical about the American political dispute about healthcare when almost every other advanced nation manages to provide better healthcare at half the cost. The a priori rejection of a universal state-run system that everyone else has found effective amputates the discussion in the US. The central flaw, healthcare provided by employers, which disrupts the equal universal coverage that pooling risk would provide, is apparently off limits.

    The main beneficiaries of a US healthcare system that is twice as expensive and less effective than every other state welfare system are the healthcare insurance and provision businesses, which extract twice as much from the US economy as they do in any other modern nation-state.

    That similar arguments apply to unemployment welfare systems is unsurprising. The essay identifies the benefits:

    “Well-designed welfare programs go beyond relieving poverty and inequality to represent a type of cooperative institution, not unlike the market, which exists to enable individual self-authorship and planning in the face of uncertainty.”

    The essay did display the common Randian myopia on this subject.
    As does the rest of the Niskanen notes and essays.
    The welfare, health and education of children is conspicuous by its absence.
    The glaring example of state involvement in child welfare and education, at least at some basic ‘safety net’ level, is accepted(?) but ignored as an empirical outcome in free societies.

  83. Dave_Geologist says:

    hyper

    Could there be a surprise which left the gizmo quivering helplessly on the floor? Sure.

    Which is why I prefer to be guided by something physical where possible. As in this partially-fictional anecdote, which is a bit long so I’ll split it into chunks like chapters.

    1. The unphysical approach
    The Hoek-Brown “fracturedness” parameter is related to the tensile strength of the rock (or the ratio of compressive to tensile, depending on the formulation). Fractures weaken the rock more in tension than in compression. OK so far. Measuring compressive strength is (relatively) easy. Tensile strength? Relatively easy on a plug of intact rock, although even that typically uses indirect methods like the Brazilian Test to get round edge and rig-coupling effects. But that’s not what we’re interested in. We want to know the strength of the bulk fractured rock. Short of testing a sample the size of a car, we can’t directly measure that.

    So you empirically calibrate, which is fine at shallow depths where civil engineers work. Confining stresses are low and differential stresses small, and gaps can open up between blocks during deformation. So something that relates to an isotropic tensile strength is a reasonable physical representation of reality. In the deep subsurface (outside of hydraulic fracturing), we have large confining stresses, large stress anisotropy and, in mudstones, large anisotropy in the orientation of fractures and other weak planes. Which, deeper than 1km, will almost invariably fail in shear rather than in tension, even in an extensional stress regime (readily demonstrated using a Mohr Diagram). The Hoek-Brown parameter is still used as a generic weakening knob, but the physics is all wrong.
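
    A back-of-envelope illustration of that Mohr-diagram point, using rough generic stress and pore-pressure gradients (illustrative only, not data from any particular basin):

```python
import numpy as np

depth = 2000.0                      # m
sv = 23e-3 * depth                  # vertical stress, ~23 MPa/km lithostatic
pp = 10e-3 * depth                  # pore pressure, ~10 MPa/km hydrostatic
shmin = 0.7 * (sv - pp) + pp        # a typical extensional-regime minimum stress (assumed)

sig1_eff = sv - pp                  # effective principal stresses
sig3_eff = shmin - pp

centre = 0.5 * (sig1_eff + sig3_eff)
radius = 0.5 * (sig1_eff - sig3_eff)

print(f"effective sigma3 = {sig3_eff:.1f} MPa (> 0, so the circle never reaches the tensile cutoff)")
print(f"Mohr circle: centre {centre:.1f} MPa, radius {radius:.1f} MPa")
# Any failure therefore has to be shear, i.e. the circle touching tau = c + mu*sigma_n.
```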

  84. Dave_Geologist says:

    2. The problem
    Now suppose you have to develop a field using data from vertical exploration wells. A H-B model works fine for vertical development wells. Then you start drilling inclined wells and they fail. So you increase the mudweight until they don’t fail, and keep doing that until you get to whatever angle you need to reach your farthest target. At least you got there, but you wasted tens or hundreds of millions of dollars and suffered months of production delays in the process.

    But how do you plan future wells when your numerical model doesn’t work? You tweak it by letting the weakening parameter vary with inclination. Effectively introducing strength anisotropy, but purely empirically. Unfortunately your software probably only lets you use one value, so you have to run the model five or ten times and stitch the results together. If you’re lucky the software is scriptable and you can go for a coffee while it runs. But now you want to drill N-S as well as E-W, and you know there’s a horizontal stress anisotropy. You have no idea how that will affect your empirical calculation, because you don’t understand it beyond a hand-wavey level. So you rinse and repeat.

    3. Modifying the existing model
    You then notice that there are some recent academic papers proposing an anisotropic modification of H-B, but they’re so new there is no consensus and three different equations are in the frame. They came too late to help with the current field, but at least you get some assurance that it is a shared problem and the industry is addressing it.

    Unfortunately the new equations make different out-of-sample predictions, and while one is fairly simple, another has introduced FOUR new fitting parameters, which risks over-fitting, and the third is simply an empirical fit to lab data so has no out-of-sample power. You bookmark them and if you’re lucky, maybe you have enough data to pick the one which best suits your application, at least in-sample.

  85. Dave_Geologist says:

    4. A fully physical approach
    Alternatively you stop and think and ask: is there more appropriate physics I can use? Well yes, there’s a large body of literature which says the Modified Lade failure criterion best represents a wide range of rocks under triaxial tests, is fully three-dimensional and can be further modified to handle strength anisotropy. That does introduce two new tuning parameters (one of the scalar parameters is now a diagonal tensor), but they directly relate to physical properties which can be measured in the lab. Still, you go ahead and build a big, chunky model and for good measure include poroelastic effects, because wireline logs have shown evidence of wellbore fluid invading the rocks on a timescale of a week or so. You back that up with an extensive programme of triaxial testing. Expensive, but cheaper than a failed well.

    Hurrah for physics! This more physically realistic approach works really well, out of the box and with little or no tuning. But it’s impractical for well planning because it takes a huge effort to make a point calculation, and for drilling because in heterogeneous rocks, a point-by-point calculation would run slower than real time.

  86. Dave_Geologist says:

    5. A slightly less physical, but good-enough-in-practice approach
    Again you look at the literature, and discover that tunnelling engineers had addressed a very similar problem in the 1950s-1960s, when deep tunnels under the Alps took them into territory where approaches that had worked fine for shallow tunnels and for building foundations and bridge abutments had failed catastrophically. They used a simpler approach: identify and assign/measure the strength of the weakest pre-existing plane in the rock. Then do a matrix calculation and a weak-plane calculation, crucially involving shear failure (Mohr-Coulomb), because they had had the advantage of re-entering 30-50-foot diameter tunnels and seeing how the rocks had failed.

    You try that and, hey, it works really well! You’re a bit puzzled, but then you review the poroelastic simulations and see that the invasion you were worried about was preferentially along the weak planes, and that over a week, the more complex model with 3D anisotropy degenerated to something virtually indistinguishable from the matrix-plus-weak-plane model. The simpler model also works pretty well out of the box, at all inclinations, with small modifications to laboratory-measured parameters.
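
    Something like the following toy calculation (hypothetical numbers, Mohr-Coulomb for both matrix and weak plane) captures the spirit of the matrix-plus-weak-plane check described above:

```python
import numpy as np

def shear_normal(sig1, sig3, theta):
    """Stresses on a plane whose normal makes angle theta (radians) with sigma1."""
    sn = 0.5 * (sig1 + sig3) + 0.5 * (sig1 - sig3) * np.cos(2 * theta)
    tau = 0.5 * (sig1 - sig3) * np.abs(np.sin(2 * theta))
    return sn, tau

sig1, sig3 = 60.0, 15.0            # effective stresses, MPa (illustrative only)
c_m, mu_m = 10.0, 0.8              # intact-matrix cohesion and friction coefficient
c_w, mu_w = 2.0, 0.5               # weak-plane (bedding/fracture) properties

theta = np.radians(np.arange(0, 91))
sn, tau = shear_normal(sig1, sig3, theta)

matrix_fails = tau > c_m + mu_m * sn   # failure through intact rock
plane_fails = tau > c_w + mu_w * sn    # slip on a weak plane at this orientation

print("matrix failure possible:", matrix_fails.any())
print("weak-plane slip for plane normals at (deg):",
      np.degrees(theta[plane_fails]).astype(int))
```

    With these made-up numbers the matrix never fails but a suitably oriented weak plane slips, which is the behaviour the anecdote is about.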

    So now you’re in a win-win situation. You understand what was going on in 3D, you understand why, in the time-frame of interest for drilling a hole section, it degenerates to 3D matrix plus weak planes, you can cut cores in new areas and use lab data to intelligently change your parameters, and you can go in different directions relative to the maximum horizontal stress confident that the physics is right and your prognosis is reliable. As a bonus, you can run the calculation pretty fast in a script, and because the tunnelling industry got there 50 years earlier, there are tried and tested algorithms out there which can be quickly incorporated into your favourite software.

  87. Dave_Geologist says:

    6. Lessons?
    One should be obvious. The more physics-based approach requires much less fudging and knob-twiddling than the physics-which-is-OK-but-out-of-its-comfort-zone, and performs much better out-of-sample than the brute-force or curve-fitting approach. Same with climate science, I would argue.

    A second, more subtle one (which also applies to climate science where it’s not politicised) is that being able to point to the underlying physics is a very powerful persuader when the fundholders or gatekeepers are drilling engineers or field development managers (who’ll often have a civil, chemical or mechanical engineering background). They’re used to controlled experiments using specified materials, and are uncomfortable with the uncertainties in geoscience at the best of times. Saying “we’ll make it up as we go along” or “we’ll just extrapolate from what we’ve done so far” leaves them even more uncomfortable. Even the financial-background economists get that: “stock prices can go down as well as up”, “the past didn’t predict the present, so the present can’t predict the future”.

    The third is the old adage, “a model needs to be good enough but no better”. We ended up with a happy medium, not an impractical Rolls-Royce.

    The fourth is that “good enough” depends on what the model is used for. The all-singing, all-dancing model led to an understanding of the physical process, but was impractical for field-scale application. But it could be used to validate and benchmark the simpler model and demonstrate that it was good enough for field-scale use. There are obvious climatology analogies with local models or models where a parameter like SST is imposed, and indeed in reservoir engineering where pseudos (parameterisation) are tested in sector models.

    The fifth is that sometimes you don’t have to re-invent the wheel. It’s already been invented in another industry. The digitisation of the literature should help there, but its proliferation does add to the needle-in-a-haystack problem.

    And finally, share as much as you can. The climate community is pretty good at that, better in my view than most sciences (open(ish) data and code, CMIP). Notwithstanding the calumnies of the denialati. The oil industry is actually pretty good too through SPE and others. Lab data is widely published (ours was, same for many others, and most data in the literature comes from industry-funded not grant-funded experiments). As well as open conferences, there are no-names-no-notes-no-packdrill meetings where people talk about and share issues they’ve faced, knowing that only anonymised lessons-learned will be published and managers won’t be embarrassed or trade secrets revealed.

    Despite that, I haven’t given too much away about the source of my anecdote because about a dozen companies went through the same experience at the same time, and most came to a similar resolution which is now available in off-the-shelf software. Small operators can buy a full service from Schlumberger, who did a lot of the lab and theoretical work, partly for hire and partly to position themselves as the go-to experts.

    The literature looks like there was a lot of wheel-reinventing going on, but actually we were all talking to each other (and most fields involve several Partners who are in the loop) and sharing data through JIPs. The issue had come to prominence with the widespread adoption of large-stepout directional drilling for reasons like deep water (expensive single installations vs. many satellites), Arctic drilling (expensive pads to build and maintain, as few as possible), mountain/jungle areas (ditto) and scale/environmental concerns in the Lower 48 (vertical wells with nodding donkeys every block were OK when there were hundreds, when you drill 20,000 wells in a few years, people want their surface facilities confined to industrial sites).

  88. Dave said:
    ” We can’t predict when and where the next earthquake will occur on the Parkfield Fault”

    There’s a significant difference of opinion on this topic in relation to orbital gravitational forcing terms. The bigwig at the USGS @SeismoSue says No! but several of her colleagues are saying yes. She actually has a twitter poll running on the question of whether seismologists have this figured out:

  89. Hypergeometric said:

    “This is an interesting problem which affects a lot of the newer data science and machine learning methods. In particular, there’s a dearth of techniques for validating that the gizmo which has just been trained to forecast or predict something actually works. They are typically opaque to inspection. Available techniques generally rely upon methods like cross-validation, but that assumes one has in hand a large and representative set of data labelled as to what kind of thing it is, that is, the objective of the prediction or forecast. Often, such a set of data is small, and there is no systematic way of telling how representative it is because covariates of significance just aren’t known.”

    This is what I have been struggling with. Due to the lack of controlled experiments in the geosciences, all one can really do is cross-validation. It’s sub-optimal to have to wait decades before untainted fresh validation data is available. Yet, there is a large amount of data available from historical proxy records that provides another approach to validation. Over the last week I have been using a 200-year proxy time series to validate a model fitted over a 100-year instrumental record. It appears to work very well, but only after some tweaking, and so of course the next question raised concerns the validity of the tweaking.
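
    The general shape of that check, sketched with synthetic series and a deliberately trivial stand-in model (not the actual proxy or instrumental data), looks something like this:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1800, 2000)
forcing = np.sin(2 * np.pi * years / 60) + 0.01 * (years - 1800)   # stand-in driver
truth = 0.8 * forcing + 0.1 * rng.standard_normal(years.size)      # stand-in "climate"

calib = years >= 1900          # the "instrumental" calibration window
valid = ~calib                 # earlier "proxy" years, held out entirely

beta = np.polyfit(forcing[calib], truth[calib], 1)   # fit only on the calibration window
pred = np.polyval(beta, forcing)

rmse_in = np.sqrt(np.mean((pred[calib] - truth[calib]) ** 2))
rmse_out = np.sqrt(np.mean((pred[valid] - truth[valid]) ** 2))
print(f"in-sample RMSE {rmse_in:.3f}, out-of-sample RMSE {rmse_out:.3f}")
```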

  90. Dave, I kind of lost where you were going with your 6-point anecdote. I think it is well understood by now that fracking and injection activity is responsible for the recent increased earthquake frequency in places like Oklahoma. Are you trying to find safer ways to do the drilling without risking the possibility of triggering an earthquake?

    I would at the very least suggest staying away from drilling or fracking near Yellowstone or the Monterey Formation in California.

  91. Dave_Geologist says:

    We can’t predict when and where the next earthquake will occur on the Parkfield Fault

    I had in mind the sort of earthquake we care about Paul, say M4 or above, like the 4.5 a couple of days ago. Moderate shaking, enough to notice. Not microseisms. Tidal forcing of them has long been known about. Indeed people doing microseismic monitoring of offshore oilfields have to filter out the tidal cycle. That is also required when analysing the late-time pressure response of a well test, where the decline rate is very small. You can even derive information about natural fracture properties by treating the daily tidal cycle as a forcing which squeezes oil into and out of the fractures. Since most naturally fractured fields are critically stressed, some of those dilations and contractions will make something go pop. These are all very small effects which require very sensitive pressure gauges (0.001psi resolution) and downhole seismometers to detect.

    The question is whether they ever trigger anything bigger than a microseism, if so, how and how does that aid predictability. It’s certainly possible in principle that a tidal cycle causes a fault which has accumulated almost-but-not-quite enough stress to fail to be triggered on a Tuesday and not the preceding Thursday. But that’s the easy part of the prediction. The hard part is tracking the accumulating strain energy and predicting when it has reached the almost-but-not-quite stage. IOW not just predicting that it will fail at a particular point in a bi-weekly cycle, but that it will fail on cycle 323 since last time and not cycle 287 or 401.

    I’d answer “fairly well established” to Susan’s question. But you’ll note that the characteristic earthquake model doesn’t say we could have predicted a 4.5 last Tuesday. It says we can predict the statistical properties of earthquakes (their size and frequency distribution) and that there is some sort of consistent and persistent relationship between that distribution and the type of fault involved. I see Tuesday’s quake has an oblique-slip focal mechanism so it’s obviously not on the main strand of the San Andreas, to the extent that there is one, and it’s on a bend so “it’s complicated”. Which would fit with the “small but frequent” idea.
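
    For illustration, removing a tidal signal from a slowly declining late-time pressure record can be as simple as least-squares fitting in-phase and quadrature terms at the M2 period and subtracting them; a minimal sketch with synthetic data (not a real well test):

```python
import numpy as np

t = np.arange(0, 14 * 24, 0.25)                    # hours, 15-min samples over two weeks
m2 = 2 * np.pi / 12.4206                           # M2 angular frequency (rad/h)
pressure = 5000 - 0.002 * t + 0.01 * np.sin(m2 * t + 1.0)   # psi: decline plus small tide

# Design matrix: constant, linear trend, and in-phase/quadrature tidal terms
A = np.column_stack([np.ones_like(t), t, np.cos(m2 * t), np.sin(m2 * t)])
coef, *_ = np.linalg.lstsq(A, pressure, rcond=None)

detided = pressure - A[:, 2:] @ coef[2:]           # remove only the tidal component
print("fitted tidal amplitude ~", np.hypot(coef[2], coef[3]), "psi")
print("residual scatter after detiding:", np.std(detided - (coef[0] + coef[1] * t)))
```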

  92. Dave_Geologist says:

    Dave, I kind of lost where you were going with your 6 -point anecdote.

    It wandered a bit Paul! I tried to help by splitting it… The point was that models which incorporate physical mechanisms are less likely to suffer out-of-sample surprises than models which are purely based on interpolation, or on the right physics extended out of the context where it is right. Nothing to do with fraccing; the issue is wellbore collapse, usually in mudstones, because the mudweight (imposed wellbore pressure) is too low, not too high.

    I think it is by well understood by now that fracking and injection activity is responsible for the recent increased earthquake frequency in places like Oklahoma.

    Actually it’s not the fraccing you have to worry about. That very rarely causes a noticeable earthquake, probably one in a hundred or one in a thousand. For the same reason that most tidal cycles don’t trigger one: you need a very special confluence of circumstances, which rarely happens (but is hard to prevent because it’s very hard to predict the particular one which will be favourable).

    It’s the waste-water injection which is causing the earthquakes, but 99% of the wastewater doesn’t come from the frac backflow, which only lasts a few days. And is often trucked to a surface waste disposal site because the pipelines are not yet in place to take it down the production system. It’s the formation water which flows back over the decades of the well’s life which drives the volume. And it’s also nastier, with hydrocarbons, heavy metals and naturally radioactive salts, so has more stringent disposal requirements. You get more of it in unconventional reservoirs, due to a combination of small pores with high capillary entry pressure and multiple short oil or gas columns with little buoyancy, which leads to a high water saturation in the pore space. Plus if it’s a carbonate-rich or organic-matter rich reservoir it will probably be oil-wet, which means that the oil is coating the pore walls rather than sitting in the middle of the pores. So you essentially have to wash it out of the rock by sucking lots and lots of water past it.

    From published papers, some of the disposal wells appear to have had poor management practices. In one of the big quakes (I forget which), it was obvious to me that things had been going wrong for years before the big one (wells had fractured out-of-zone). But obviously not to the engineers who were pumping.

  93. @Dave_Geologist,

    Thanks for all the efforts and the entertaining story, and I mean that in the best sense of “story.” (I’ve often thought problems as in major components of problem sets ought to be thought of as narratives and presented that way.)

    My concern is that there are a set of techniques which a conservative approach will miss, not necessarily recursive neural networks, but things like boosting, recommender systems, random forests, manifold alignment, and topic models which are well developed and have been shown to work, partly through theoretical derivations, particularly if well-labelled instances of testing and training data are in hand. The upshot of the miss is that money is being left on the table.

    I also suspect that in complicated applications like rock fracturing or viscous physical flow, the criteria for success when appealing to a physical model might in the end be inappropriate. It’s often some variation on mean-square error or, in other words, an L_{2} norm. This permits approaching these problems in the same general manner as the atomic theory of gases. There the model was approximate but it nevertheless works well, especially after corrections for various kinds of special interactions (van der Waals) are incorporated. I posit that there are systems where that model is inappropriate. My poster child for that is the design of FIR filters, where the L_{2} approach gave rise to the phenomenon of Gibbs towers and the problem succumbed only when the L_{\infty} norm was brought in (minimizing maximum error). But that required the adoption of a vastly different kind of calculation, the Remez exchange algorithm, which looks nothing at all like the least squares approach. I’d say things like boosting offer such alternatives.
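
    To make the L_{2} versus L_{\infty} contrast concrete, here is a small sketch using SciPy’s least-squares and Parks-McClellan (Remez exchange) FIR designers on an arbitrary low-pass specification (the filter spec is made up for illustration):

```python
import numpy as np
from scipy import signal

fs = 1000.0                      # sample rate, Hz (illustrative)
numtaps = 101                    # odd length, as firls requires
bands = [0, 100, 150, 500]       # passband 0-100 Hz, stopband 150-500 Hz

# Least-squares (L2) design: minimizes mean-square error; ripple piles up at band edges
h_l2 = signal.firls(numtaps, bands, [1, 1, 0, 0], fs=fs)

# Parks-McClellan / Remez exchange (L-infinity): minimizes the maximum error, equiripple
h_linf = signal.remez(numtaps, bands, [1, 0], fs=fs)

# Compare worst-case stopband error
w, H_l2 = signal.freqz(h_l2, worN=8000, fs=fs)
_, H_linf = signal.freqz(h_linf, worN=8000, fs=fs)
stop = w >= 150
print("max stopband ripple, L2  :", np.max(np.abs(H_l2[stop])))
print("max stopband ripple, Linf:", np.max(np.abs(H_linf[stop])))
```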

    By the way, and to nod to @JeffH and @mal_adapted, boosting has seen a successful application in species distribution modeling, per

    C. Kremen, A. Cameron, A. Moilanen, S. J. Phillips, C. D. Thomas, H. Beentje, J. Dransfield, B. L. Fisher, F. Glaw, T. C. Good, G. J. Harper, R. J. Hijmans, D. C. Lees, E. Louis Jr., R. A. Nussbaum, C. J. Raxworthy, A. Razafimpahanana, G. E. Schatz, M. Vences, D. R. Vieites, P. C. Wright, M. L. Zjhra, “Aligning conservation priorities across taxa in Madagascar with high-resolution planning tools”, Science, 11 Apr 2008: 320(5873), 222-226, DOI: 10.1126/science.1155193.

    The notable thing about this application is that, where the idealized machine learning discrimination problem has both negative and positive instances of the phenomenon under consideration, this problem featured only positive examples, and was used, in effect, to do density estimation.
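
    A minimal sketch of the presence-only workaround, using synthetic covariates and random background points as pseudo-absences with an off-the-shelf gradient-boosting classifier (not the method of the cited paper, just the general idea):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
presence = rng.normal(loc=[0.6, 0.4], scale=0.1, size=(200, 2))   # covariates at occurrences
background = rng.uniform(0, 1, size=(2000, 2))                    # covariates at random background points

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]      # presence vs background label

model = GradientBoostingClassifier().fit(X, y)

# Relative habitat-suitability surface over the covariate space
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
suitability = model.predict_proba(grid)[:, 1]
print("suitability range:", suitability.min(), suitability.max())
```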

    You appeal to the auditors and inspectors looking over your shoulders. I would submit that in the case of climate models, the demand is for higher, faster accuracy in climate impact projections. Now, the chaos people and others will say that can’t be done without time frames of 30 years and longer, at least based upon how they look at things. But quantum people have managed to sneak around Bell’s inequality, and it’s not clear to me that, if the appropriate techniques are applied, we might not be in for some surprises here, too.

  94. Dave said:
    “It’s certainly possible in principle that a tidal cycle causes a fault which has accumulated almost-but-not-quite enough stress to fail to be triggered on a Tuesday and not the preceding Thursday.”

    OK, here is a remarkable bit of research if the guy Kolvankar didn’t goof it up. This is a plot of the loci of earth-moon-distance plus sun-earth-moon-angle against universal time for detected earthquakes in the major global database

    The implication is that the majority are triggered when and where the combined pull of the sun and moon is strongest. Two points: (1) this should have been done long ago, and (2) perhaps Kolvankar is plotting time versus time and this is a meaningless correlation.

  95. @Paul_Pukite_(@WHUT),

    Hi Paul!

    This problem is not limited to the geosciences. Results from cross-validation can be pretty compelling (to @Dave_Geologist’s auditors, for instance), but when the data’s not there, the direct cross-validation cannot be done.

    This is why a lot of work is being done in semi-supervised training and in understanding means of automatically generalizing findings based upon the features and manifold structure of the problem domain. There’s also work being done on means of validating ML algorithms and codes using statistical means, often treating them as black boxes.

    This is best pursued in areas where the cost of failure isn’t high, such as in Internet applications. In fact, I’m pursuing these techniques professionally, at Akamai (LI).

  96. Ragnaar says:

    Here’s the story with some parts left out as they aren’t hard to look up.

    Wage controls. Get around them by giving your employees health insurance.

    Health insurance becomes wedded to employment because why?

    Health insurance as provided by almost all large companies is tax free.

    How to give employees more tax free money? You think this doesn’t matter. The IRS has 50 accountants thinking up ways to prevent that.

    Give employees Cadillac health insurance. High personal tax rates increase the incentive to do this.

    At the same time give none of this to the self-employed and small employers. The real un-bleeped deal emerges: High Deductible Health Plans (HDHPs). The HDHPs signal reality.

    Reality is ignored. Repeat, Repeat.

    Problems emerge.

    Ignore the problems and be an afraid politician.

  97. Hi Jan,
    I am willing to bet that semi-supervised training and machine learning will soon identify many patterns in nature before humans finish arguing over mechanisms.

  98. izen says:

    @-Paul
    I see you have already discussed this at your blog, but …
    While the Kolvankar graphs look impressive, they may not be as definitive as they first appear.
    The earthquake count is for small selected areas and includes ALL magnitudes (I did not see a mention of a lower cutoff).
    It is not clear that any of the areas chosen actually had an earthquake of a magnitude with any significant surface/human impact.

    What might be revealed here is that indeed most seismic events happen during a particular Sun-Earth-Moon ‘window’, but the correlation remains poor for large events, which still fail to show any strong timing constraints.
    I doubt the results are good enough to tell people in Chile (on a common longitude) that a major earthquake is most likely to happen during a particular hour of the day.
    This limit is hinted at in the conclusions:

    Click to access Sun-Moon-and-Earthquakes.pdf

    “3. Earthquakes of magnitude up to 3.0 and at a shallow-focus depth range of up to 10 km are triggered directly by the combined pull of the Moon and Sun. However in some areas even earthquakes in the higher magnitude ranges (typically 3-5) at shallow depth up to 10 km are triggered by the combined pull of the Moon and Sun.”

  99. @Paul_Pukite (@WHUT),

    It’s interesting that CERN/LHC is turning to these methods, mostly out of desperation.

  100. Willard says:

    > Here’s the story with some parts left out as they aren’t hard to look up.

    You’re reinventing RickA’s trick, Ragnaar.

    Say one thing.

    Or half a thing.

    Then skip two lines.

    And then add another one.

    And another one.

    Just so stories.

    Or whatever.

    Over and over again.

    This time, just insinuate that unless one subscribes to your view of what isn’t even an insurance, one is a frightened politician.

    Whence All. The. Industrialized. Countries. Prove. You. Wrong.

    If you like a story:

    This is not Judy’s, you know.

  101. RICKA says:

    Willard:

    Thank you for giving me credit for inventing double spacing.

    Quite a trick!

  102. izen said:

    “It is not clear that any of the areas chosen actually had a Earthquake of a magnitude with any significant surface/human impact.”

    More important is that the gravitational pull has any effect at all. I was more expecting that someone would suggest that Kolvankar had screwed up in his analysis. The next step would be to find out what the difference is between small and large earthquakes, and what the transition is. That’s what earth science is about — finding any kind of correlation and then pursuing it. There are no controlled experiments available, so finding this kind of connection is data mining gold.

    Seismic Susan Hough apparently isn’t worried about this mode, but people that work directly under her at the USGS are looking closely at it:

    Delorey, Andrew A., Nicholas J. van der Elst, and Paul A. Johnson. “Tidal triggering of earthquakes suggests poroelastic behavior on the San Andreas Fault.” Earth and Planetary Science Letters 460 (2017): 164-170.

  103. Dave_Geologist says:

    Paul, I’m not saying gravity (or the change in fluid pressure tides induce) can’t have an effect. All earthquake zones and most naturally fractured reservoirs are critically stressed in some sense (Mark Zoback would say the whole world is), which by definition means they are very close to failure.

    The challenge for a useful earthquake prediction framework is to predict on which tidal cycle a particular fault will break. Being extremely confident (pretending for the moment that it’s an exact fortnightly cycle) that you’re more likely, even much more likely, to get an earthquake on every second Tuesday than on other days is no help to planners if you can’t say which second Tuesday it is within a ten or twenty year window. They’re not going to evacuate San Francisco every second Tuesday or send the kids home from school* every second Tuesday on the off-chance that the earthquake hits this Tuesday and not on a Tuesday twenty years into the future.

    *An aspect of human nature which exasperates risk professionals (gallantly trying to be vaguely on-topic 🙂 ). Twenty kids dying in one classroom will cause far more public angst and attract far more opprobrium than forty kids dying one-by-one in their bedrooms. Even though the bedrooms may be less safe (at least in school they have desks and tables to duck under).

  104. Dave_Geologist says:

    Or, Ragnaar, look at the RoW, or at least Europe, where universal provision means that everyone gets decent healthcare, not just 80% or whatever, and per head it only costs half of what the US pays. There is of course the option to raise spending to US levels and give everyone rich-person’s healthcare, but the voters seem to like the existing tradeoff, while grumbling around the edges. The ultimate insurance: everyone contributes, and everyone is protected.

    For all the grumbling about the NHS, it kept Stephen Hawking alive for three quarters of a century*. As the disease set in when he was a student, he’d have been dead long ago if he’d been born American. He’d have been unable to get private healthcare, and no employer with a healthcare plan could have afforded to take him on. Maybe he’d have used his skills to become a Wall Street quant and paid his own bills, but then the world would have been deprived of his science.

    In practice, his brilliance showed early enough that if he’d made it to his thirties, he could have emigrated to Europe and got free healthcare. Immigration officials might have grumbled about health tourism, but I bet they’d have been over-ruled to bring a superstar to town.

    My advice would be: be a brave politician, ditch the ideology and accept that healthcare is one of those things, like armies or central banks, that is best run by the state. You can still have ideological fights about whether service provision is contracted out to private providers. Europe has pretty much the whole spectrum in that regard.

    *Yes, I know he made money from his books, but he has said himself he used that to meet his non-medical needs like the vocoder, assistance with transportation etc. All his medical treatment was on the NHS, of which he was a vocal supporter, to the extent that right-wing media in the UK attacked him for it. The old how-dare-famous-scientists-express-views-outside-their-science line.

  105. Dave_Geologist says:

    Paul

    Susan Hough apparently isn’t worried about this mode, but people that work directly under her at the USGS are looking closely at it

    Well, I would imagine she has some say in what people under her work on 😉 . The USGS is a managerial institution, not like a university department where faculty get to pursue whatever takes their fancy and grants are awarded by bodies outside of their line-management chain.

    I quickly skimmed the paper and they are clearly well aware of tidal forcing and of poroelastic behaviour around faults. They’re not reporting its discovery, they’re using established science as a tool to probe the properties of the fault.

    Unsurprising in the case of poroelasticity, as it’s been part and parcel of the geomechanics, pore pressure/fracture gradient and reservoir depletion toolkit for decades. There’s a chapter devoted to it in the two most commonly recommended textbooks, Reservoir Geomechanics and Fundamentals of Rock Mechanics (although not so much in the 1969 1st Edition), and it’s the subject of entire decades-old textbooks such as Theory of Linear Poroelasticity with Applications to Geomechanics and Hydrogeology.

    From Zimmerman et al. (random page opened in the chapter):

    Roeloffs and Rudnicki (1984) analyzed the induced pore pressures caused by creep along a planar fault in a poroelastic medium. … Using parameters appropriate to the rocks along the San Andreas fault near Hollister, California, and an assumed propagation velocity of 1 km/day, they were able to model with reasonable accuracy the pore pressure changes measured in wells located near the fault.

  106. Dave said:

    “They’re not going to evacuate San Francisco every second Tuesday or send the kids home from school* every second Tuesday on the off-chance that the earthquake hits this Tuesday and not on a Tuesday twenty years into the future.”

    Two points to this. First, I was coincidentally talking about this situation to a USGS representative at the AGU who was presenting a poster on last year’s Mexico earthquake recovery efforts. I mentioned to her that of course it couldn’t predict the timing exactly but that you could be creative about reducing risk. For example, one could time school recesses to occur during the risky intervals. Then kids would be outside on a playground instead of inside buildings.

    Secondly, it’s important just for the scientific knowledge it brings to the table. Since the post topic is on risk, the process of risk reduction is also about incremental knowledge.

  107. Dave said:

    “Well, I would imagine she has some say in what people under her work on 😉 . The USGS is a managerial institution, not like a university department where faculty get to pursue whatever takes their fancy and grants are awarded by bodies outside of their line-management chain.”

    Because of her position, Susan Hough can push views and make statements that get lots of publicity. Her latest high-profile research was a peer-reviewed paper from 2018 titled “Do Large (Magnitude ≥8) Global Earthquakes Occur on Preferred Days of the Calendar Year or Lunar Cycle?”. Her abstract contained one word: “NO”. This got some press publicity:
    https://gizmodo.com/study-with-one-word-abstract-finds-moon-phases-dont-pre-1822190714

    Her flippancy is pretty ridiculous IMO. I am waiting for someone that works for her to take a look at Kolvankar’s paper and either debunk it or figure out what to do with the knowledge.

    ATTP, thank you for paying attention to this video, which I appreciate very much. I think this discussion gave me valuable views of the fundamentals of climate science, of important aspects of climate models, and of the role of climate science as a basis for risk analysis. It influenced my personal opinion and became important for my decision to support the Paris Agreement.

  109. Pehr,
    Thanks for highlighting it. I often don’t have time, or can’t be bothered, to listen to something that’s an hour long, but I found this very interesting.

  110. Dave_Geologist says:

    Her flippancy is pretty ridiculous IMO.

    But is she wrong Paul? The data appears to be freely available so you could check. I did download a couple of Kolvankar’s papers, despite my misgivings on observing that he’s an elderly retired physicist attached to an atomic research centre, apparently pursuing a hobby in an area of science outside that of his primary career. I know that shouldn’t influence my thinking, but it is a bit of a crank template. Since I have to triage what I read, and his favourite publication forum (Journal?) has the slogan “What is today’s contrarian science, may become tomorrow’s established science.”, he falls into the “probably wrong” category in today’s triage.

    The image you posted looks too good to be true for real data. It looks like something cyclical plotted against itself. Apart from a few outliers, the scatter is about an hour. That must be smaller than the uncertainty range on timing for some earthquakes, due to complex raypaths and different receiver choices and reduction algorithms. I think that somewhere hidden in the calculations is an identity or circular reasoning.

    The other paper “EARTH TIDES AND EARTHQUAKES” has more realistic-looking data. In fact it looks like noise.

    If one looks at this plot then generally it is difficult to find any pattern or conclude anything. However the four corner areas … provide good information. We observed more earthquakes where perigee is a common factor. Also the number of earthquakes differs substantially. … Among the four corner areas, we observed more earthquakes at Perigee end, than at Apogee. So it can be concluded that at perigee due to higher gravitational force we have more earthquakes.

    So, no statistical tests. Just the Mark One Eyeball.

  111. Dave,
    I don’t know how to access the earthquake data from USGS NEIC, otherwise I would try to duplicate Kolvankar’s plots.

    ” he’s an elderly retired physicist attached to an atomic research centre, apparently pursuing a hobby in an area of science outside that of his primary career. “

    I guess his career was “Chief designer, radio telemetered seismic network Bhabha Atomic Research Center, Mumbai, science officer c, 1972—1977, science officer d, 1977—1983, science officer e, 1983—1989, science officer f, 1989—1996, science officer g, 1996—2001, science officer h, 2001—2007, science officer h+, since 2007, head seismic instrumentation. Consultant seismic instrumentation United Nations Educational, Manila, 1977.” Apparently, he discovered the correlation after dealing with seismic data over a long career.

    Over at the Azimuth Project forum we did try to figure out whether he was inadvertently plotting X=X, but gave up on that. The fact that his transformed data has outliers points to the possibility that there is something there, otherwise the outliers wouldn’t exist.

    Even if this doesn’t pan out, it does show some creative ways to look at data. One of the keys to finding new patterns in data is to do variant transformations, which is what Kolvankar is doing. The transformation is not always obvious and usually requires some insight, otherwise you have an infinite number to consider. Consider the contrasting situation of moonquake patterns. The effect of the earth’s gravitational forces on moonquakes doesn’t need a transformation because it is visible from the fault-lines mapped on the moon, see below. Of course, the moonquake situation is more clear-cut because the earth generates a much greater asymmetrical gravitational pull on the moon than vice-versa. According to Kolvankar, the earthquake pattern is temporally-driven so that the historical record would smear over all geographical locations on earth. He may have teased out the pattern that remained invisible to others all these years.

  112. Dave_Geologist says:

    Paul
    “I guess his career was …”. Well yes, but that doesn’t make him a seismologist. Looks more like an electrical and telemetry engineer. Most of his publications are to do with data acquisition and management, not data analysis. Although I do see his hobby (EM and tides) goes back into his professional career. I’d expect to see some papers on focal plane solutions, on earthquake mechanisms, etc. if he’d been working as a seismologist or with seismologists. If nothing else it would demonstrate a knowledge of the field and a low likelihood of falling into rookie errors. It would be surprising if he did not have an opportunity to discuss his ideas with the seismologists who used his data. If he did, he probably got feedback he didn’t like. Or perhaps not. In my experience (around 1990) Indian State institutions were very rigid and hierarchical and at least in those days, perpetuated some of the worst features of the Raj. It was hard for a subordinate to challenge a superior.

    Or maybe Bhabha was just a clearing house and others did the analysis? Oh dear, if I Scholar the Institute the first hit is cold fusion. OTOH they have lots of mainstream-looking stuff in chemistry, biology, medical isotopes and environmental radiation, so I presume it is a typical large research reactor/synchrotron/whatever site that both does its own research and provides a service to universities etc. Also looks to be involved in civil nuclear. Which doesn’t mean it can’t have a few cranks. Think MIT or Princeton.

    The simplest explanation for the outliers in the “perfect” plot is that they represent coding errors in transcribing the original data to the catalogue. Time zone confusion, or -999s recognised as nulls but not 999s. With cyclic data an anomalously high value may not be obvious: it appears in place, but in the wrong cycle.

  113. @Dave_Geologist,

    … [H]e’s an elderly retired physicist attached to an atomic research centre, apparently pursuing a hobby in an area of science outside that of his primary career. I know that shouldn’t influence my thinking, but it is a bit of a crank template.

    Hah!

    While I’m not yet retired, and a Quant and engineer rather than a physicist (even if my undergrad was in Physics), that could describe me very well.

    I knew a mathematician once who said his ambition was to survive IBM until he retired, and then go off like the retired British colonel, tend his garden, and work on his ciphers. He was a professional expert in combinatorics, and specialized in an area where, he said rather proudly, there were perhaps 20-30 other people in the world who worked on it.

    But, too, I’m a professional Statistician and I get to play in many people’s backyards. One downside of doing that is that most people ignore one’s technical advice. The best thing about being one right now is that Statistics is changing so fast, with the influx of methods from Data Science and Machine Learning, that it amounts to the deprofessionalization not only of Statistics but of quantitative work generally. That isn’t to say there aren’t standards and criteria. But there are, too, greater expectations, including performance with actual data, typically predictive performance. Still, I’ve been around long enough to realize that people who’ve trained long and hard, and had to suffer through arbitrary changes to perfectly acceptable papers to get them published, might not like the March of the Quants.

    And, besides, Statistics belongs to the people, and educating people to be more numerate can only be good.

    So, if I’m a Crank, I’m a very happy Crank. And perhaps a bit of an Alchemist.

  114. Dave_Geologist says:

    hyper
    re cranks and retired physicists (BTW I’m a retired geologist 😉 )

    I’ll quote from a high-school Venn Diagram text, which used made-up categories so people couldn’t cheat by knowing that dogs and whales are mammals but crocodiles aren’t:

    All Lipe Shends are Umpty, but not all Umpty Shends are Lipe.

    Again a bit of humour (Lewis-Carrollesque in this case) goes a long way when it comes to learning retention.

    Dave, you may be right about Kolvankar. Yet the evidence for tidal triggering of earthquakes is gaining steam based on other peer-reviewed papers. In the Wikipedia article, there is a link to the work from U. of Tokyo (Ide, Yabe & Tanaka, 2016, doi:10.1038/ngeo2796).

    Susan Hough may want to tell one of her underlings to remove that reference from Wikipedia. The Japanese needn’t worry about being able to predict earthquakes 😉

    I hyper-agree with hyper. At some point down the line, we will be arguing against the results from machine-learning experiments, but our counter-arguments will fall on deaf ears. The machines will not listen to our criticisms and will keep on going deeper and deeper. The statistical analysis will be built into the algorithms, as they will include complexity factors that will automatically reject patterns that are deemed too complicated.

    Guys like hypergeometric will be around to reap the rewards as they sit back, wait for the results, and pick out the promising directions to pursue.

  117. @WHUT, @Dave_Geologist,

    There are intriguing cycles in the Keeling Curve, too, and not only the respiration from Northern Hemisphere forests. What’s interesting are the deviations from these. (See also, earlier.) That is, take away the long-term trend, and take away the periodic portion of y_{t} = a (t-t_{0}) + d_{t} + x_{t} + \eta_{t}, which I (and others) define so that 0 = \sum_{i=0}^{N-1} x_{i}, where N is the number of equispaced y_{t}s measured in a single year. So the first term of a (t-t_{0}) + d_{t} + x_{t} + \eta_{t} is the trend (with rate of increase a), the d_{t} is the signal I’m talking about, x_{t} is the periodic portion, and \eta_{t} is some kind of non-stationary noise term, principally identified because |\eta_{t}| is much smaller than the remaining components and \eta_{t}‘s long-term spectrum is white. (Okay, maybe a little red.) And, yes, there are identifiability issues confounding d_{t} with \eta_{t}, but (a) that’s where all the fun is, and (b) this is not a stranger to climate work. Here it’s a CO2 curve, but, as near as I can tell, for global mean surface temperature, \eta_{t}-like things are what climatologists mean by internal variability.
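
    For concreteness, here is a minimal sketch of that decomposition on synthetic monthly data (not the actual Keeling record): fit and remove the linear trend a(t − t0), remove a mean annual cycle x_t that sums to zero over a year, and what remains is d_t plus \eta_t.

```python
import numpy as np

# Synthetic stand-in for a monthly CO2-like series: trend + seasonal cycle + noise.
rng = np.random.default_rng(1)
n_years, period = 40, 12
t = np.arange(n_years * period)
y = 330 + 0.17 * t + 3.0 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.3, t.size)

# Trend term a*(t - t0): ordinary least-squares straight line.
a, b = np.polyfit(t, y, 1)
detrended = y - (a * t + b)

# Periodic term x_t: mean annual cycle, re-centred so it sums to zero over a year.
seasonal = np.array([detrended[m::period].mean() for m in range(period)])
seasonal -= seasonal.mean()
x_t = np.tile(seasonal, n_years)

# Remainder d_t + eta_t: the deviations of interest plus noise.
residual = detrended - x_t
print(a, residual.std())
```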

  118. Dave_Geologist says:

    Paul

    This suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.

    I don’t disagree with that, because importantly, it proposes a physical mechanism. But it also qualifies for my previous comments. They allow a whole day for matching, so you can’t tell the kids to get under their desks at 3pm on Tuesday; you have to declare all day Tuesday a red alert.

    Also, look at Supp. Fig. 2 (the SM is usually free even if the article is paywalled; I can’t tell whether this one is, as I have a library subscription active). Yes, the quake happened on the peak with rank 1, which was indeed a local maximum. But not on ranks 12-15, which were bigger and occurred on the previous cycle. So between those two cycles something changed on the fault which made it more vulnerable. Maybe a butterfly flapped its wings, but there is a deterministic explanation. The fault was gradually accumulating strain energy/stress but not quite enough to be triggered by the first cycle. By the second cycle it had accumulated a tiny bit more and was triggered even though that was a weaker cycle. Back-of-envelope calculation: for a 50-year repeat and steady accumulation, which is reasonable for a master subsection fault, it was 99.985% of the way there by the first cycle, and 99.995% by the second. So to tell which cycle it will go on, you need to predict the criticality of the fault’s stress state to within 0.01%. We can’t even do it to 10%. Not only that, if it had reached 99.995% by the previous cycle, it probably wouldn’t have waited for the peak stress, but would have gone a day or two earlier. Plus I need to read how they do the stress calculation. Is it for a simple layered Earth, or is each one different, with fault geometry, pore pressure etc. specified? If the latter, imagine doing that pre-emptively for 2,000 faults.

    I’m not trying to be difficult here, but those are the kind of questions you have to address if this is to be useful for prediction, as opposed to understanding the nitty-gritty of event mechanisms.

    P.S. Did you try here for the earthquake data :

    https://ngdc.noaa.gov/nndc/struts/form?t=101650&s=1&d=1

  119. Dave_Geologist says:

    Sorry, that should have been: Yes, the quake happened on the peak with rank 11

  120. Dave_Geologist says:

    And the spell-checker changed subduction to subsection…..

  121. Dave said:

    “I’m not trying to be difficult here, but those are the kind of questions you have to address if this is to be useful for prediction, as opposed to understanding the nitty-gritty of event mechanisms.”

    Not everything is about prediction. Having written a textbook on reliability modeling, I can safely say that prediction of the kind described is not essential. Yet knowing that tidal forcing plays a role in triggering is important with respect to the physics-of-failure mechanisms.

  122. Dave_Geologist says:

    Indeed Paul, although the mechanisms of physics of failure are the same whether the forcing is tidal or not.

    The proximate cause of failure is different, but at the rock-level the physics don’t change whether it’s tidal, shaking from another earthquake, pore-pressure changes caused by a passing P-wave, shear stress changes from a passing S-wave, pore pressure changes due to water injection or just gradual stress accumulation reaching a critical point.

    The ultimate cause is stress accumulated due to plate-tectonic strain (if we ignore the likes of volcanic earthquakes and mining- or dam-induced events).

    And yes, understanding the science helps us understand the science, by definition. And your last reference is interesting, because it goes to why previous statistical approaches failed to discern a pattern. BTW the reason why there were so many failed attempts previously is not because seismologists thought it was a kooky idea. They looked for a correlation because it was a perfectly sensible idea, based on known physics. But they failed to find it, so concluded it was a good idea that didn’t work, not that it was a kooky idea. Hough doesn’t say that it’s a kooky idea. She says that it’s “either well below the level that can be detected with the present catalog or gives rise to effects that are far more complex than a global clustering through either the calendar year or the lunar month” and that “A low level of modulation would be of no practical use for earthquake prediction”.

    Ide et al. show that there is a correlation, but only by using an approach “far more complex than a global clustering through either the calendar year or the lunar month”.

  123. @WHUT, @Dave_Geologist,

    Dave_Geologist:

    I’m not trying to be difficult here, but those are the kind of questions you have to address if this is to be useful for prediction …

    WHUT:

    Not everything is about prediction.

    Oh, then, I was confused about @Dave_Geologist’s post. Perhaps some terminology distinction is appropriate? Not trying to be pedantic, only suggesting a distinction.

    Setting aside the colloquial equivocation of the two terms, predict and forecast, formally, to predict generally means estimating what a (possibly multivariate) response will be given a set of values for its treatment parameters (and covariates if need be) which are different from those upon which the model was fit. So, the idea is that there was a first, calibrating treatment upon which the model was fit, and, then, in an experiment, the treatments are changed to estimate how well the model does in an out-of-sample situation.

    A forecast can be thought of as a special case of prediction, but for a later time in a temporally evolving system. Often forecasts are predictions which are recast in temporal terms for interpretability or actionability, such as when sales in a future quarter are forecast. Nevertheless, most predictions tend to be best done using phenomenological proxies instead of time, explicitly. Phenomena rarely if ever have an internal clock which they consult. Also, this has the advantage that different subjects can be fit in the same model, even if they are not measured at the same time, as long as the values of treatments (and, if necessary, covariates) are pinned down.
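
    A minimal sketch of the distinction, with made-up data: the same fitted model is “predicting” when evaluated at treatment values outside the calibration sample, and “forecasting” when the new values simply sit later in time (with the future treatments assumed known for the sketch).

```python
import numpy as np

# Made-up calibration data: response vs. a treatment covariate, observed over time.
rng = np.random.default_rng(2)
t_obs = np.arange(0, 50)                    # observation times
x_obs = rng.uniform(0, 10, t_obs.size)      # treatment / covariate values
y_obs = 2.0 * x_obs + 1.0 + rng.normal(0, 0.5, t_obs.size)

coef = np.polyfit(x_obs, y_obs, 1)          # fit on the calibration sample

# Prediction: out-of-sample response at covariate values not seen in calibration.
y_pred = np.polyval(coef, np.array([11.0, 12.5]))

# Forecast: the special case where "new" just means later in time; here the
# covariate itself must first be projected forward (assumed known for the sketch).
x_future = np.array([9.0, 9.5])             # assumed future treatment values
y_forecast = np.polyval(coef, x_future)
print(y_pred, y_forecast)
```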

    So I was confused since the idea of prediction is very much to understand mechanism when conditions vary around faults, but only forecasting is related to @Dave_Geologist’s `oh, it’s 99.99% there’ thing.

    Also, on an unrelated question, doesn’t the nature of the fault matter? I mean, I believe in this instance there’s a lot of discussion about strike-slip faults, but things could be different with the other kinds, normal, or reverse, no? Or is the mechanism the same in all of them?

  124. Dave_Geologist says:

    My loose terminology, hyper. In those terms, the characteristic earthquake model is a prediction and is well supported (consensus-wise and evidence-wise). “The next big one will hit San Francisco in 2025” is a forecast. “Earthquakes are 1% (or 2%, or 10%, or 50%) more likely to occur during a certain lunisolar conjunction” is a prediction. “It will happen on conjunction 253, starting the count from today” is a forecast. I’m challenging the feasibility of the latter, not the former. And emphasising the societal value of the latter. But not denying the scientific value of the former.

    Ah, fault mechanisms, my favourite topic! The physics is the same for all and is usually represented in a Mohr Diagram. The combination of effective shear stress and effective normal stress (effective = total minus pore pressure) exceeds the critical stress for failure. High shear stress promotes failure, high normal stress (acting perpendicular to the fault plane) inhibits failure by “clamping” it together. The failure envelope is defined by the properties of the fault plane (simplistically, friction angle and cohesion) and may change over time. I’m not explaining that very well and will try to do better tomorrow when I have more time.

    The fault doesn’t know what stress regime it’s in. Only the stress magnitudes, pore pressure, fault plane orientation and fault properties. Changes in those parameters can be driven by all sorts of things – plate tectonics, gravity, pumping, other earthquakes, volcanic doming or collapse, nuclear weapons test, tides, building an airport extension (Nice, although heavy rain also contributed by changing the pore pressure and frictional properties of the rock). Strike-slip vs. reverse etc. doesn’t change the physics of the process. All the calculations are done in the local coordinate frame of the fault. There may be another definitional problem here. The physics of the fault plane failure is like the Greenhouse Effect. The driver (tides, pumping, whatever) is like the forcings. I’m tempted to say strike-slip vs. normal is like CO2 vs. aerosols, but that’s probably pushing the analogy too far.
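
    As a minimal sketch of that failure check (a linear Coulomb envelope with illustrative numbers, not any particular fault): failure occurs when the shear stress exceeds cohesion plus friction times the effective normal stress, regardless of what drove the stress change.

```python
import numpy as np

def coulomb_failure(shear_mpa, normal_mpa, pore_pressure_mpa,
                    cohesion_mpa=0.0, friction_angle_deg=30.0):
    """Return True if a fault plane fails under a linear Coulomb criterion.

    Illustrative only: effective normal stress = total normal stress minus
    pore pressure; failure when |tau| >= cohesion + mu * sigma_n_effective.
    """
    mu = np.tan(np.radians(friction_angle_deg))
    sigma_n_eff = normal_mpa - pore_pressure_mpa
    return abs(shear_mpa) >= cohesion_mpa + mu * sigma_n_eff

# A fault already close to its envelope: with these illustrative numbers the
# threshold shear stress is ~28.87 MPa, so a small extra nudge (tides, pumping,
# tectonic loading) matters only because the fault was near-critical already.
print(coulomb_failure(28.86, 60.0, 10.0))   # False: just inside the envelope
print(coulomb_failure(28.88, 60.0, 10.0))   # True: a ~20 kPa nudge tips it over
```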

  125. Dave said:

    “Indeed Paul, although the mechanisms of physics of failure are the same whether the forcing is tidal or not. “

    And I want to keep hammering this point home: Since there are no controlled experiments possible in earth sciences, having any knowledge of a quantitative forcing should be considered a gold-mine of information. It’s really a matter of whether seismologists want to put the effort into the analysis, or have their minds made up already, as with Susan Hough. Recall that she was the one who gave a binary answer to an obviously shades-of-gray physical phenomenon.

    And the other point about prediction — in presenting a reliability model for a part failure or system failure, a reliability analyst will never make a prediction as to when a failure will occur. At best, it will be cast as either an MTBF for parts or a probability of system failure within a certain operating time.
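
    For instance, a minimal sketch of the kind of statement a reliability analyst does make, assuming a constant-hazard (exponential) model purely for illustration:

```python
import numpy as np

def exponential_failure_probability(hours, mtbf_hours):
    """P(failure by time t) for a constant-hazard part: 1 - exp(-t / MTBF)."""
    return 1.0 - np.exp(-hours / mtbf_hours)

# No claim about *when* the part fails; only an MTBF and a probability of
# failure within a stated operating time (numbers are illustrative).
mtbf = 50_000.0                                          # hours
print(exponential_failure_probability(8_760.0, mtbf))    # ~16% within one year of operation
```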

  126. @WHUT, @Dave_Geologist,

    (Dave_Geologist):

    Changes in those parameters can be driven by all sorts of things – plate tectonics, gravity, pumping, other earthquakes, volcanic doming or collapse, nuclear weapons test, tides, building an airport extension …

    (WHUT):

    Since there are no controlled experiments possible in earth sciences, having any knowledge of a quantitative forcing should be considered a gold-mine of information.

    For @Dave_Geologist, surely the “critical stress for failure” is not the same all along a fault. That is, there must be local variation. Accordingly, something of keen interest is how a fracture, once it occurs at one spot, whether quickly or slowly, propagates along the fault. At a much smaller scale, rock fracturing mechanisms are similarly involved. Does the fracture propagate like energy along a waveguide? Can this be energized with remote triggering? Can it stop? I mean, it must stop. When and why?

    @WHUT, @Dave_Geologist, so, given an event is describable only in terms of probability within a duration, what’s the nature of that probability? Is the model fundamentally Poisson statistics? Or is there under- or overdispersion, so that something like a quasi-Poisson or Negative Binomial is more appropriate? And are the means of these, in fact, AR(1) or other ARIMA processes, so that the presence, proximity, and magnitude of an event affect the rate of subsequent events? Or instead, is some model which uses sloshing around in energy wells with varying wall heights more appropriate?
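
    As a minimal sketch of what an operational check of the first question could look like (synthetic counts, not a fitted earthquake catalogue): bin events into equal windows and compare the variance of the counts with their mean; equality is the Poisson benchmark, while variance well above the mean points towards something like a quasi-Poisson or Negative Binomial.

```python
import numpy as np

# Synthetic event counts per (say) monthly window; a real check would use
# counts from a declustered catalogue instead.
rng = np.random.default_rng(3)
counts = rng.negative_binomial(n=2, p=0.2, size=600)   # deliberately overdispersed

mean, var = counts.mean(), counts.var(ddof=1)
dispersion = var / mean          # ~1 for Poisson; >> 1 signals overdispersion
print(mean, var, dispersion)
```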

    @Dave_Geologist, has not any of this kind of thing been subjected to experiment, whether directly on materials at high pressure, or perhaps in physical analogues? So, for instance, drying wet corn starch has been found to be an analogue for the formation of columnar structures in, for example, cooling basalt.

  127. On that last point, @Dave_Geologist, see:

    L. Goehring, Z. Lin, S. W. Morris, “An experimental investigation of the scaling of columnar joints”, Physical Review E, October 2006.

  128. Dave_Geologist says:

    Paul and hyper, There’s an encyclopedia’s worth of theoretical and experimental papers on fault nucleation, (re)activation, propagation and termination. Using analogue materials, and actual rocks at the temperature, pressure, stress and slip rates of actual faults. And using numerical models. And wells have been drilled through the San Andreas Fault and a Japanese subduction-zone fault and fitted with measuring equipment. And statistical distributions have been generated for experimental and observed data (generally they fit a power-law). Hence the characteristic earthquake model and Gutenberg-Richter Law. The statistical and scaling properties of joints, faults, pressure-solution seams, granulation seams, you-name-it have been studied since forever. It’s in textbooks. It’s so standard that tools for deriving them and applying them stochastically in reservoir models have been in commercial software for decades. I’ve used them in anger since the 1990s. The graduate student in the office next to mine in the 1970s was working on it (pressure-solution seams or veins IIRC). Her supervisor had been publishing on it for a decade.

    On your “critical stress for failure” paragraph, yes to pretty much all of the above. The fault fails at a point where the combination of frictional strength, stress and pore pressure makes it weakest. Propagation is generally easier than initiation so it can grow until it hits a barrier it can’t propagate through. Earthquake slip rate and extent are routinely measured from seismograms, with sufficient precision to know that it generally propagates supersonically. I wouldn’t think in terms of a waveguide because it’s the interface between two discrete blocks which is slipping. And remember, the energy doesn’t come from the slip event. It’s stored in the wall rocks from the elastic strain which accumulated while the fault was locked but the plates were still moving. Obviously it can be remotely triggered. There’s a whole literature about earthquake A triggering earthquake B, and about which aftershocks are on the main fault, which are away from it and triggered by stress relaxation after the quake, which are triggered by pore pressure changes and which are directly triggered by seismic waves. Tidal triggering in hard rocks would be analogous to the first case; tidal triggering in porous rocks by sea-level induced pore-pressure changes would be analogous to the second case.

    If I’m beginning to sound a bit exasperated it’s because we’re veering into the earthquake equivalent of “how do you know CO2 is a greenhouse gas”, “how do we know it’s ours”, “why don’t you include clouds or the sun”, “is it volcanoes”. When Susan Hough gets approached by someone who thinks they’ve solved it all with an Excel spreadsheet, as I’m sure she does regularly because earthquakes are of public interest (I appreciate you’ve done more than that), she probably feels the same way Michael Mann or James Hansen do when they get approached by someone waving a graph with the “pause” highlighted in red. IOW, if someone’s background knowledge of the subject doesn’t meet undergraduate level, it’s possible that they’ll make a massive breakthrough by looking at some number series. But far more likely that they’ll re-invent the wheel, explore already-mapped blind alleys or make rookie mistakes. Remember, Einstein was thoroughly versed in contemporary physics and Galileo in contemporary astronomy. They weren’t working in isolation.

    Sorry if that sounds a bit harsh.

  129. @Dave_Geologist,

    If I’m beginning to sound a bit exasperated it’s because we’re veering into the earthquake equivalent of “how do you know CO2 is a greenhouse gas”, “how do we know it’s ours”, “why don’t you include clouds or the sun”, “is it volcanoes”.

    My questions came from curiosity and a genuine interest in the subject, even if I am ignorant about

    The statistical and scaling properties of joints, faults, pressure-solution seams, granulation seams, you-name-it have been studied since forever. It’s in textbooks. It’s so standard that tools for deriving them and applying them stochastically in reservoir models have been in commercial software for decades.

    My tone, if it seemed challenging, was simply that of a graduate-school seminar.

    I very much appreciate your answers, and respect your knowledge, and those of others in the field.

    The closest I ever came to any of this was some of the maths involved in seismic tomography. I also did examine slant stacking, and some of the signal processing problems attending recovery when the whatever-its-called (I forget) rack of acoustic sensors with the spark gap at the bottom got snagged in a borehole and wasn’t uniformly sampled as it was hauled up. One of the favorite books from the time was Underground Sound. But I never got much into rock mechanics back in those days.

  130. Dave_Geologist says:

    Thanks for the polite reply hyper and apologies if I seemed a bit grumpy this morning. Hadn’t had lunch 😉 .

    And I do appreciate that you and Paul are not like the people climatologists have to deal with, where “Just Asking Questions” is usually done not out of interested ignorance but in bad faith.

    Both can be frustrating though as the thought, even for the honest enquirer, is “can’t you Google that”. But on reflection, if you don’t know what you don’t know, it can be hard to formulate the search.

    Zoback’s Reservoir Geomechanics textbook is very good, although somewhat expensive (about $100). More than half of it is general, not just oil-and-gas focused. More readable than Jaeger, Cook & Zimmerman. You’ll probably find older editions (authored just by Jaeger & Cook) secondhand, but if you’re interested in earthquakes you need to consider poroelasticity, which is greatly updated and expanded in the latest edition.

    The Stanford online course Reservoir Geomechanics is also very good, although I see it’s archived at the moment. Apparently, though, you can still do it, just get no course credits (email registration required). That’s basically the slides from Zoback’s private and in-house industry course, which was expanded to form the textbook.

  131. Dave, you said:

    “And statistical distributions have been generated for experimental and observed data (generally they fit a power-law).”

    Curious as to what the current understanding of the power-law relationship is for earthquake magnitude and frequency. Someone with knowledge of universal behavior can derive this distribution. A good book on this is D. Sornette, Critical Phenomena in Natural Sciences: Chaos, Fractals, Self-organization and Disorder.

  132. Dave_Geologist says:

    Oops, guess I forgot to close the hyperlink 😦

  133. Dave_Geologist says:

    Thanks for fixing the link, (Willard?)

  134. Dave_Geologist says:

    Paul and hyper
    Gutenberg–Richter law. The Delorey et al. paper is worth reading in full if you haven’t already (link to a non-paywalled version). It’s pretty much what I was saying.

    1) It puts numbers to my 99.99% sentence. The stress drop on the Parkfield quake is of order 1 to 10 MPa, so based on the return period the fault is stressing up at around 166 Pa/day. The tidal-cycle stress range is of order 1 kPa. In my example the tidal cycle was about the same magnitude as the tectonic stress increase per cycle; here it’s an order of magnitude smaller. But the principle is the same. For hundreds or thousands of cycles the tide is not enough to flip the fault over the edge. Eventually you get close enough and the tide flips it. But the presence or absence of tides would make a difference of only a few days or weeks to earthquake timing (see the arithmetic sketch after point 3 below). And note that they are not measuring “the big one”, just microseisms. Gutenberg-Richter notwithstanding, it’s not guaranteed that the events remain self-similar over such a large range.

    2) It’s complicated by stress shadowing, periods when the tide is ineffective, which would require a bespoke calculation for each fault and not just simple cycle-matching.

    The stress shadowing is so significant that APS is only exceeded ∼8% of the time for a background loading rate of 166 Pa/day, and ∼25% of the time for a background loading rate of 1660 Pa/day. These times are not evenly distributed over the semi-diurnal or fortnightly cycles

    3) They’re using the tidal forcings as a tool to probe the physics of the fault. That’s not heretical, it’s mainstream. If van der Elst works under Hough, I’d be very surprised if she objected to this work. We can’t carry out a controlled experiment on our solitary earth, but when Nature experiments and we can measure or calculate the forcings, that’s the next-best thing. Same as using natural earthquakes to determine the layering of the Earth.
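
    Putting rough numbers on point 1, as a back-of-envelope sketch using only the figures quoted above:

```python
# Back-of-envelope from the quoted numbers: tidal stress range ~1 kPa versus a
# tectonic loading rate of ~166 Pa/day and a total stress drop of 1-10 MPa.
tide_pa = 1_000.0
loading_pa_per_day = 166.0
stress_drop_pa = 1e6          # lower end of the 1-10 MPa range

# The tide is worth only a few days of tectonic loading...
print(tide_pa / loading_pa_per_day)          # ~6 days

# ...while the fault needs thousands of days to recharge the full stress drop,
# so tides can shift the timing by days-to-weeks, not pick the year.
print(stress_drop_pa / loading_pa_per_day)   # ~6000 days (~16 years)
```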

    hyper, in my book most things in nature are self-similar, self-organising and display critical behaviour. So power laws should be the norm. One difficulty in persuading people of that is that if you Monte Carlo a 3-order-of-magnitude power law with upper- and lower-bound rolloffs, it will typically pass a test for lognormality. Statisticians tend to plump for normal or lognormal by preference, so lognormal is the null hypothesis and you have to prove power-law. Rarely the other way round.
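
    A minimal sketch of that Monte Carlo exercise, assuming a hard-truncated power law rather than soft rolloffs: draw a modest sample spanning three decades and ask whether a standard normality test on the logs would reject lognormality.

```python
import numpy as np
from scipy import stats

# Sample a power law p(x) ~ x**(-alpha), hard-truncated to [xmin, xmax] (three
# decades), by inverting its CDF; soft rolloffs would only blur it further
# towards lognormal-looking behaviour.
rng = np.random.default_rng(4)
alpha, xmin, xmax, n = 2.0, 1.0, 1000.0, 200
u = rng.uniform(size=n)
c = 1.0 - (xmin / xmax) ** (alpha - 1.0)
x = xmin * (1.0 - u * c) ** (-1.0 / (alpha - 1.0))

# D'Agostino-Pearson normality test applied to log(x): a large p-value means
# the sample is not distinguishable from lognormal at this sample size.
stat, p = stats.normaltest(np.log(x))
print(p)
```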

    In my experience faults and shear fractures are almost always power-law. Joints can be all sorts: power-law, regular, Poisson-with-an-exclusion-window-around-each-preexisting-joint, exponential length distribution but anti-clustered spacing… Faults can be multi-fractal with the exponent depending on fault intensity, but statistics can be extracted by considering domains of similar intensity. I would rationalise that using the same logic as the damage parameter in the Hoek-Brown failure criterion. Once you add enough discontinuities (which can be stronger or weaker than the host), you’ve changed the bulk mechanical properties of the rock and it follows different rules (or the same rules with a different exponent).

    Oh, and dikran, they use confidence rather than significance 😉

  135. @Dave_Geologist,

    One difficulty in persuading people of that is that if you Monte Carlo a 3-order-of-magnitude power law with upper- and lower-bound rolloffs, it will typically pass a test for lognormality. Statisticians tend to plump for normal or lognormal by preference, so lognormal is the null hypothesis and you have to prove power-law. Rarely the other way round.

    Of course, Gaussian or Poisson or Gamma or Weibull are all good constructs for conceptual modeling, because there are mechanisms and processes known to produce them. And they are natural building blocks for models. But I would disagree modern statisticians “plump for normal or lognormal by preference” just as I would disagree on the “pass a test for lognormality” part. On the latter, doing a formal test of compatibility with a given distribution is seen today as pretty much a fool’s errand, since all you need to do is take more data and it will, with most actual datasets, fail it. For rough modeling purposes a Q-Q plot is fine as a check.

    On the former, plumping for normality or lognormality, the field has grown up enough to stop expecting reality to conform to our abstractions and tends to accept reality for what it is. In that respect, much effort goes to using empirical densities, mixtures of densities, empirical likelihoods, constructed complicated hierarchical likelihoods, and, for the worst systems and when heavy computation is feasible, likelihood-free inference (see approximate Bayesian computation). This is, as I said, a mark of maturity, but it is also because ample computational resources and larger, plentiful datasets make techniques from Data Science, Machine Learning, and Genetic and Stochastic Optimization available, even if their rough edges and ad hoc origins had to be filed off a bit.

    Modeling of the Internet went through a time when power laws were all the rage. Eventually, I think, people realized declaring something was a power law was pretty useless from a theoretical perspective: It’s not constraining enough. You’ll get things like Zipf, Zeta, Pareto, and Gibrat distributions by creating objects in various random ways. Zipf on words, for example, can be had by composing words of length M from a fixed alphabet (including a single space for prefixes and suffixes to obtain words of length < M) by choosing elements from that alphabet at random. When I model such, say some set of patterns, if I find a frequent non-Zipf part of a density, followed by a Zipf-like tail, I'll manually split the density where the Zipf begins, and throw away the Zipf part, because there's little or no information there.
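
    A minimal sketch of that random-words construction (the classic “monkey text” argument, with a made-up alphabet size): draw characters uniformly from a small alphabet plus a space, split on the spaces, and the rank-frequency of the resulting “words” comes out roughly Zipf-like with no linguistics involved.

```python
import collections
import numpy as np

# Random "monkey text": uniform draws from a small alphabet plus a space.
rng = np.random.default_rng(5)
alphabet = list("abcde ")                 # 5 letters + word separator
chars = rng.choice(alphabet, size=2_000_000)
words = "".join(chars).split()

# Rank-frequency of the resulting "words"; the slope of log(freq) vs log(rank)
# in the tail is roughly constant, i.e. Zipf-like, despite the purely random origin.
freqs = np.array(sorted(collections.Counter(words).values(), reverse=True), dtype=float)
ranks = np.arange(1, freqs.size + 1)
slope, _ = np.polyfit(np.log(ranks[10:1000]), np.log(freqs[10:1000]), 1)
print(slope)
```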

    By the way, on rock mechanics, besides my tectonics book, and a couple of volumes related to mineralogy and rocks, I have and study from time to time,

    * R. R. Long, K. R. Wadleigh, Mechanics of Solids And Fluids, 1961
    * S. F. Borg, Matrix-Tensor Methods in Continuum Mechanics (2nd edition), 1990
    * R. K. Matthews, Dynamic Stratigraphy, 1974.
    * J. F. Nye, Physical Properties of Crystals: Their representation by tensors and matrices, 1985.

    and that’s about it. (The rest of the tensor stuff I have is fluid mechanics-related.) This is quite separate from my oceanography and hydrology mini-library, which is much richer, and larger, including 2 texts on physical oceanography, one on descriptive, and Jenkins and Doney’s book, Modeling Methods for Marine Science. I have many more biology-related texts, and then perhaps 5 times as many of those in maths and stats, the latter being dominated by Bayesian books.

  136. Dave_Geologist says:

    the field has grown up enough to stop expecting reality to conform to our abstractions and tends to accept reality for what it is

    Ah, I guess I have a nineties and partial oughties chip on my shoulder 😦

    As you can guess from commercial software being available for decades and used where hundreds of millions or billions of dollars are at stake, geology and the O&G industry ignored the naysayers and used distributions that describe the data 😉 . If it walks like a duck and quacks like a duck…

    In general though, we were not trying to derive insight into the underlying phenomena or worrying about the “true” vs. the sample distribution. When you have a zillion datapoints from sampling multiple wells at 6-inch spacing, you have a sample distribution which is asymptotically close to the population for those wells. Trouble is, those wells represent only a trillionth of the reservoir volume. What have you missed?

    In practice we tended to take the approach of “representing what is there”. For example sometimes you get multi-modal distributions because you’re lumping two rock-types together which you should really split, but can’t because most wells don’t have the right logs or you need core to do it. One of the popular software packages “has an app for that”. You can choose to be completely agnostic about the expected form of the distribution and sample directly from the raw PDF for your Monte Carlo simulation.

  137. @Dave_Geologist,

    Did you use kriging much?

    Power-law is the default model because it’s straightforwardly generated by Maximum Entropy dispersion on the mean value (maximum entropy is always the default model when all one knows is a mean value). The data on California earthquakes over the span of 5 orders of magnitude follows the MaxEnt-derived power-law distribution.
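
    One way to spell that out, as a sketch that reads “the mean value” as the mean magnitude (itself a logarithmic measure of size): maximum entropy subject to a fixed mean magnitude gives an exponential density in magnitude, which is the Gutenberg-Richter relation and hence a power law in seismic moment.

```latex
% Sketch: maximum entropy given only a mean magnitude \langle M \rangle yields an
% exponential density in M, i.e. the Gutenberg--Richter relation, which becomes a
% power law when re-expressed in seismic moment M_0.
\begin{align}
  p(M) &\propto e^{-\beta M}
    && \text{(max-entropy density for } M \ge M_{\min},\ \langle M\rangle \text{ fixed)} \\
  \log_{10} N(\ge M) &= a - bM
    && \text{(Gutenberg--Richter form, with } b = \beta/\ln 10) \\
  N(\ge M_0) &\propto M_0^{-2b/3}
    && \text{(since } M_0 \propto 10^{1.5 M})
\end{align}
```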

  139. The distinction between lunisolar forcing effects on earthquakes versus other physical phenomena is that earthquakes are discrete while the other behaviors are continuous. Consider these continuous responses to lunisolar forcing:
    1. Ocean tides — everyone knows this one
    2. Length-of-day (LOD) variation — almost completely lunisolar angular momentum interactions
    3. Chandler wobble — precession of pole forced by nodal cycle
    4. Ocean standing wave dipoles — angular momentum response resulting in sloshing
    5. Atmospheric tides — best example is the equatorial QBO
    6. Eclipses — continuous (but appearing discrete) yet not a significant physical behavior

    That’s why the earthquake effects can only be detected in statistical ensembles, where the discrete nature of individual earthquakes is smoothed into a functional response relating to spatiotemporal coordinates.

  140. Dave_Geologist says:

    Did you use kriging much?

    Lots. I was surprised climatologists didn’t.

    Given that it was developed by and for the mining industry, there’s a delicious irony in the thought that Cowtan and Way are giving a certain retired mining executive conniptions 🙂

  141. “Given that it was developed by and for the mining industry, there’s a delicious irony in the thought that Cowtan and Way are giving a certain retired mining executive conniptions”

    Given his background, I could never understand why McIntyre was so interested in climate science, when analyzing the futility of oil sands mining would instead have been such easy pickings. But then it all made sense. Watch his incredulous response when confronted with that topic.

  142. Frank says:

    ATTP wrote: “I partly thought that this was quite good as it came across well, but I wondered how it would be perceived by a more neutral observer, or by those who are already doubtful. It’s possible that they would walk away thinking that the doubts are quite justified and that maybe there isn’t really any reason to do anything just yet.”

    Exactly. Interestingly, the balance between mitigation and adaptation depends very much upon the discount rate used in calculations. In theory, the optimal discount rate depends on two factors, one of which is the future economic growth rate. Expressed non-mathematically, the richer you expect your descendants to be, the less you should spend now minimizing their future problems. So rich environmentalists – who believe we are destroying our planet and who expect their descendants to be less prosperous – mathematically and emotionally want to spend far more on mitigation than those who are far more optimistic about future economic and technological growth and man’s ability to adapt to future circumstances. And both will have very different views from those in the less developed world (and the poor in the developed world), who want more than anything to emulate the recent Chinese economic growth. (Their emissions/capita are already equal to those in the EU.) Essentially all pledges from developing countries amounted to little more than a continuation of business as usual (emissions growth in 2016-2030 at the same rate as 1990-2015), contingent on delivery of unrealistic promises of aid.

    Then there is the debate between central planning and the free market. And skepticism about the ability of democracies (such as the one that elected Trump) to carry out any kind of long-term planning and policy. Bankruptcy of the US Social Security Trust Fund (with a 30% cut in benefits under current law) is anticipated in a decade. Any system that hasn’t already solved that well-understood looming problem arguably has no business worrying about the vastly more uncertain and challenging problem of GHG-mediated climate change.

  143. Willard says:

    Is this a Poe, FrankB?

  144. Frank says:

    Willard: Not sure what Poe refers to, but I have commented here before as “Frank” and never as “FrankB”. If you are interested in the mathematical rationale for why different people might logically choose different discount rates when calculating an SCC, the passage below is from Nordhaus:

    “We can use the Ramsey equation to evaluate the SCC as a function of the key variables. The Ramsey equation provides the equilibrium rate of return in an optimal growth model with constant growth in population and per capita consumption without risk or taxes. In this equilibrium, the real interest rate (r) equals the pure rate of social time preference (ρ) plus the rate of growth of per capita consumption (g) times the consumption elasticity of the utility function (α). In long-run equilibrium, we have the Ramsey equation r = ρ + αg. The key variable will be the “growth-corrected discount rate,” r − g. Under our assumptions, r − g = ρ + (α − 1)g. To simplify, assume that α = 1, or that the utility function is logarithmic, which implies that r − g = ρ. (These long-run growth and discounting are used in the Stern Review and are approximately the case for the DICE model.)”

    https://www.journals.uchicago.edu/doi/pdfplus/10.1086/676035

    There are, of course, a wide variety of opinions about an appropriate discount rate, but the Ramsey equation has been mathematically proven to be optimum under a particular set of constraints. And it provides a good rationale for understanding why people who are more optimistic about the future (or relatively poor today and hoping to catch up) are less willing to invest in mitigation today in hopes of making the world a better place for their far-richer descendants in the future.
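
    As a worked illustration of why those inputs matter so much (the parameter values are illustrative assumptions, not taken from Nordhaus or Stern):

```python
def ramsey_rate(rho, alpha, g):
    """Equilibrium real interest rate from the Ramsey equation r = rho + alpha * g."""
    return rho + alpha * g

# Two illustrative parameter sets: a very low pure rate of time preference with
# logarithmic utility, versus a higher time preference with a more
# consumption-elastic utility function.
for label, rho, alpha, g in [("low discounting", 0.001, 1.0, 0.013),
                             ("high discounting", 0.015, 1.45, 0.02)]:
    r = ramsey_rate(rho, alpha, g)
    print(label, f"r = {r:.3%}", f"r - g = {r - g:.3%}")
```

    With the first set the growth-corrected discount rate r − g comes out near 0.1%; with the second it comes out near 2.4%. That spread alone is enough to produce very different social cost of carbon estimates, which is the point of the passage above.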

  145. Dave_Geologist says:

    the Ramsey equation has been mathematically proven to be optimum under a particular set of constraints
    Yes, like the invisible hand of the market is mathematically proven to be optimum under a particular set of constraints. Constraints which, of course, don’t exist in the real world, never have, and never will.

    I can prove mathematically that the three internal angles of a triangle add up to more than 180°. I just have to make the claim under a particular set of constraints. Viz., a 2D universe with positive curvature, like the surface of a sphere.

  146. @Dave_Geologist,

    😉

    Yeah, but I prefer saddle surfaces so the angles add up to less than 180 degrees … It makes me look thinner.
