Watts Up With That (WUWT) has a recent post called Mann on mathematics, alcohol and “proof”. The post discusses a recent comment by Michael Mann in which he says

Proof is for mathematical theorems and alcoholic beverages. It’s not for science.

The WUWT post links to an article on the Heartland Institute website called Michael Mann redefines science which is critical of what Michael Mann has said. As already pointed out elsewhere (Open Mind – Michael Mann understands science) Michael Mann is essentially correct.

Science doesn’t really work the way that some think. Scientists don’t simply propose theories or hypotheses that can be tested. Typically, scientists are using well-tested theories to understand the world around us. Given that this has been discussed elsewhere, I thought I would discuss something related that I’ve been thinking about recently: the issue of falsifiability, an idea attributed to Karl Popper. The basic argument is that something is only scientific if it can be falsified. This, however, is not how scientists think and isn’t really how science works today. Let me try to illustrate this using the example Tamino used in his post. Before I start, however, let me make it clear that I’m not an expert in the philosophy of science and I’m sure there are many who could come here and out-philosophize me. These are my thoughts as an active scientist.

So, Newton proposed what is now called Newton’s Law of Universal Gravitation. He surmised that the force, F, between two bodies of mass M1 and M2 a distance R apart is F = G M1 M2 / R², where G is the gravitational constant. So, how did we test this? Well, as far as I’m aware, people did experiments that actually measured the force between two masses. You could suggest that they framed this as an expectation that no force would be measured, but once they’d shown the force existed, experiments were carried out to determine the value of the gravitational constant.
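As a quick illustration, the law is simple enough to evaluate directly. Here is a minimal Python sketch using rounded textbook values for the constant and the Sun and Earth masses (the numbers are illustrative approximations, not precision values):

```python
# Newton's law of universal gravitation: F = G * M1 * M2 / R^2 (SI units).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Magnitude (in newtons) of the attraction between point masses
    m1 and m2 (kg) separated by a distance r (m)."""
    return G * m1 * m2 / r**2

# Rounded values for the Sun-Earth pair: masses in kg, separation in m.
M_SUN, M_EARTH, AU = 1.989e30, 5.972e24, 1.496e11
print(gravitational_force(M_SUN, M_EARTH, AU))  # roughly 3.5e22 N
```

It was Cavendish-style torsion-balance experiments, measuring exactly this force between laboratory masses, that pinned down the value of G itself.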

However, there are at least two problems with Newton’s Law of Universal Gravitation. One is that it assumes the force is transmitted instantly (or, rather, it doesn’t address how the force is transmitted). Einstein’s theory of Special Relativity tells us that nothing can travel faster than the speed of light. Furthermore, observations tell us that Mercury precesses faster in its orbit around the Sun than it should if only Newton’s Law of Gravity applied. Both of these problems were solved by Einstein’s Theory of General Relativity which suggests that gravity is actually a manifestation of the curvature of spacetime (which is curved in the presence of mass), rather than some kind of invisible, instantaneous force.
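The Mercury anomaly is small but concrete. General Relativity’s leading-order correction gives a perihelion advance per orbit of 6πGM/(c²a(1−e²)), and a back-of-the-envelope check (rounded orbital parameters, nothing more precise intended) recovers the famous ~43 arcseconds per century that Newton’s law cannot account for:

```python
import math

# GR perihelion advance per orbit: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2)).
GM_SUN = 1.327e20     # G * M_sun, m^3 s^-2
C = 2.998e8           # speed of light, m/s
A = 5.791e10          # Mercury's semi-major axis, m
E = 0.2056            # Mercury's orbital eccentricity
PERIOD_DAYS = 87.97   # Mercury's orbital period

dphi = 6 * math.pi * GM_SUN / (C**2 * A * (1 - E**2))  # radians per orbit
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
print(round(arcsec_per_century, 1))  # ~43 arcsec per century
```

That 43″/century excess, unexplained by Newtonian perturbations from the other planets, was one of the first observational successes of General Relativity.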

So, there you go. Newton’s Law of Universal Gravitation has been falsified. We should stop using it, right? No, it works extremely well in most circumstances where we want to consider the influence of gravity. There are some circumstances where it doesn’t work, but as long as we understand where and when, we can add suitable corrections or use the full theory of General Relativity. So, we have a theory that’s been falsified that we still use. Let me take this analogy one step further though.

Imagine we want to try and understand the formation and evolution of the planets in our own Solar System. Once the Sun has finished forming, it should be surrounded by a disc of asteroid-like bodies that collide and grow to form the planets (there’s some gas involved in the formation of the outer planets, but let’s ignore that for now). One can set up a simulation that puts a large number of asteroid-like bodies in orbit around the young Sun, and run this forward in time. The evolution will be determined largely by gravity. However, the chances that you’ll end up with a final system that matches our own are vanishingly small. What does this mean? Do we claim that something’s been falsified? If so, what? It can’t be Newton’s Laws because the model was built on these laws, so the model might depend on the laws, but the laws don’t depend on the model. Has the model been falsified? Well, its results might not match our reality, but that’s not the same as the model having been falsified.
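The gravitational core of such a simulation is conceptually simple. Here is a bare-bones sketch in pure Python (real planet-formation codes use many thousands of bodies and also handle collisions, accretion and gas drag, none of which is attempted here; the sanity check at the end just puts one Earth-mass body on a roughly circular orbit):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body from all the others."""
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * masses[j] * dx / r3
            ay += G * masses[j] * dy / r3
        acc.append((ax, ay))
    return acc

def step(pos, vel, masses, dt):
    """One leapfrog (kick-drift-kick) step. Leapfrog is symplectic, so it
    conserves energy well over the very long integrations this problem needs."""
    acc = accelerations(pos, masses)
    vel = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + dt * vx, y + dt * vy) for (x, y), (vx, vy) in zip(pos, vel)]
    acc = accelerations(pos, masses)
    vel = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
           for (vx, vy), (ax, ay) in zip(vel, acc)]
    return pos, vel

# Sanity check: a Sun plus one body at 1 au with roughly circular speed.
M_SUN, M_EARTH, AU = 1.989e30, 5.972e24, 1.496e11
pos = [(0.0, 0.0), (AU, 0.0)]
vel = [(0.0, 0.0), (0.0, 29780.0)]
masses = [M_SUN, M_EARTH]
for _ in range(240):  # ten days in one-hour steps
    pos, vel = step(pos, vel, masses, 3600.0)
```

Scatter a few hundred planetesimal-mass bodies on near-circular orbits instead and the same `step` function evolves the disc; the point in the text is that two such runs with imperceptibly different starting positions end up as different planetary systems.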

So, what might we conclude from such a situation? There are a large number of possibilities. Maybe the initial conditions weren’t suitable. Maybe some physics has been left out. Understanding why the model results differ from reality provides evidence about the system, so even if the model is “wrong” that doesn’t mean it has no value. Also, maybe the system is inherently chaotic? If so, then the model isn’t actually wrong; it has simply produced a different reality to the one that we observe. Understanding such a system would then require a large number of simulations to determine if our Solar System is a possible outcome and to determine the likelihood of such an outcome. So the basic point I’m trying to make is that applying falsifiability to a model doesn’t really make sense. It is neither a theory nor a hypothesis. Also, just because a model result doesn’t match our observed reality doesn’t necessarily make it wrong or valueless.
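To see what “inherently chaotic” means in practice, a toy example helps. The logistic map below is a standard stand-in for chaotic dynamics (it is not a model of anything planetary, just about the simplest system with the same sensitivity to initial conditions):

```python
def logistic_orbit(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 60)           # one "reality"
b = logistic_orbit(0.3 + 1e-10, 60)   # same model, imperceptibly different start

# The two trajectories agree at first, then decorrelate completely.
print(abs(a[1] - b[1]))                       # still tiny after one step
print(max(abs(x - y) for x, y in zip(a, b)))  # grows to order one
```

Neither run is “wrong”; each is a possible history of the same system, which is why characterising such a system takes an ensemble of runs rather than a single simulation.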

So, how does this apply to climate science? Well, global climate models are simply models that are based on fundamental, well-tested, well-founded science. Just because the climate models do not produce results that match our observed reality doesn’t mean that something’s been falsified. You could argue that even if this is true, the model is still wrong. Well, if your goal was to exactly match our observed reality then that may be true, but such a goal – given the complexity of the system – is probably completely unrealistic. It could be that the model is ignoring something important and so, in some sense, is wrong. It could, however, be that it’s not possible to include some aspects accurately (some natural variability, for example) or it could be – as it almost certainly is – that the models are inherently chaotic.

Hence using climate models to understand the future evolution of our climate requires a large number of simulations so that we can use them to determine the most likely outcome and the range of possible outcomes. That some short-term variations are very difficult to model also means that such ensembles will tend to smooth out short-term variations and enhance any long-term trends (which is what we’re essentially interested in). So, that the model results don’t precisely match our observed reality doesn’t mean the models have been falsified, nor does it mean that they are simply wrong. The results still provide information about the future evolution of our climate. So, I’m suggesting that applying falsifiability to global climate models doesn’t really make sense. Additionally, just because the model results don’t precisely match our observed reality doesn’t mean that they’re wrong. You need to understand something about how these models work and also how likely it is that such models could precisely match our observed reality. Judging them simply on how well they match our observed reality is not really enough.
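A toy calculation makes the smoothing point concrete. Suppose each model run is a fixed warming trend plus random year-to-year variability (the numbers below are purely illustrative, not taken from any real model):

```python
import random

random.seed(1)  # reproducible toy example

def one_run(n_years=30, trend=0.02, noise=0.15):
    """One simulated temperature-anomaly trajectory (degrees C): a fixed
    trend of 0.02 C/yr plus Gaussian year-to-year variability."""
    return [trend * t + random.gauss(0.0, noise) for t in range(n_years)]

ensemble = [one_run() for _ in range(200)]
mean = [sum(run[t] for run in ensemble) / len(ensemble) for t in range(30)]

# Any single run wanders well away from the trend line, but the ensemble
# mean lands close to the underlying 0.02 * 29 = 0.58 C: the variability
# averages out while the forced long-term signal survives.
print(round(mean[-1], 2))
```

No single run matches the ensemble mean, and none is expected to match observed reality year by year; the information is in the distribution of runs.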

Really, the people who should be judging the evidence provided by these models should be the climate scientists themselves. They understand the models and their limitations. They know what’s included and what isn’t and the likely influence of what might have been left out. I know that it’s going to take a lot for some to trust what climate scientists say, but I really can’t see an alternative. Trusting those who think we should use the philosophy of science to determine the merits of global climate models, rather than trusting those who actually understand the models, just seems like the wrong thing to do.


### 54 Responses to Watt about falsifiability?

1. Latimer Alder says:

Newton’s Laws are useful because – for most practical circumstances – they can be relied upon to give accurate predictions of what will happen. And we know enough to know when those circumstances do apply and when we have to use a more refined approach.

‘Just because the climate models do not produce results that match our observed reality doesn’t mean that something’s been falsified’

Maybe not. But it means that they are of no practical use to us. Models that do not produce accurate results are useless. And it could be argued that they are positively dangerous.

‘Really, the people who should be judging the evidence provided by these models should be the climate scientists themselves.’

Oh perleeese!

Science says the judge of the theory/model/hypothesis is ‘does it agree with experiment’. Not ‘does the guy who dreamt up the theory think it’s pretty cool?’

And bitter experience over many generations tells us that self-assertion of one’s virtue really isn’t a good way to judge anything.

Here it is again:

‘Then you compute the consequences. Compare the consequences to experience. If it disagrees with experience, the guess is wrong. In that simple statement is the key to science. It doesn’t matter how beautiful your guess is or how smart you are or what your name is. If it disagrees with experiment, it’s wrong. That’s all there is to it’.

AFAIK not a single one of the 30+ climate models has been able to accurately predict anything (beyond the most banal – ‘it’ll be warmer’) about the climate of the last 30 years. Rather than attempting to brush over this Very Inconvenient Truth, modellers would be better advised to spend their time and effort working out why their work is wrong.

2. I disagree that they have no practical use, or maybe I think the term “practical use” doesn’t apply. The models provide information/evidence about the system even if the results don’t precisely match reality. They either tell us that something that’s been left out is more important than we think, or that it’s just not possible to model certain aspects accurately.

My comment about trusting climate scientists rather than those philosophizing was not a suggestion that no one else should be involved, simply a comment that taking the views of those who probably do not understand the models over those who do does not make sense to me.

I think you’re doing what I was criticising in this post. These models are neither theories nor hypotheses, and so applying a simple “right” or “wrong” analysis is simplistic and ignores a great deal of information that these models are providing.

3. chris says:

Latimer, your post shows a lack of understanding – it’s usually a good idea to think a little before jumping in with ill-considered opinionating.

Here’s a tiny bit of what we know of climate model success. Early models successfully predicted polar amplification of warming in an enhanced greenhouse world and predicted the delayed response of the Antarctic to warming (models in the late 1970s and early 1980s). They predicted enhanced tropospheric warming, enhanced tropospheric moistening with the expectation that the troposphere would preserve something close to constant relative humidity; they predicted an enhanced height of the tropopause and stratospheric cooling. They predicted changes in hydrological cycles that are broadly observed in the real world… and so on.

Some of these examples are exactly pertinent to the “philosophical” issue at hand with respect to model value; here’s two:

1. It was predicted by models that as the troposphere warmed its moisture content would rise, with a positive feedback on primary warming. A very prominent atmospheric scientist, Richard Lindzen, asserted (early 1990s) that the models were rubbish and that the upper troposphere would in fact dry out. There was an interesting exchange between Lindzen and James Hansen, in fact in the pages of Nature. We know that Hansen (and the models) was correct and that Lindzen was wrong. In other words, models are fundamental for encapsulating our expectations about the natural world according to our understanding of physics and our empirical observations.

2. During a long 15-year period a couple of scientists named Roy Spencer and John Christy made measurements from microwave sounding units which they interpreted as showing tropospheric cooling in a greenhouse-enhanced world. The models predicted that the troposphere should warm as a result of greenhouse forcing and water vapour feedback (see 1. above). In fact, so confident were scientists in their physical understanding and their models that they largely disregarded the MSU data and got on with enhancing their knowledge through empirical measurements and improved models.

We now know (since around 2005, as is well documented in the pages of Science) that Spencer and Christy were entirely incorrect in their tropospheric temperature analysis. They made fundamental (one astonishingly misguided) errors in their analyses. The models were right.

That should tell you something about the value of models, Latimer. They provide an updatable encapsulation of our knowledge of the natural world and its responses. Without models we’d be floundering incoherently in the face of empirical observations. With respect to current climate models and the fact that they are generally “running warm” for perhaps a decade, that observation provides incredibly useful knowledge since it defines specific areas for deeper investigation. In doing so we discover that there has been a (likely temporary) enhanced sequestering of greenhouse-forced heat into the deeper oceans and a strong excess of warming in the Arctic – neither of which was (fully) expected.

So when models match reality that’s great. When they don’t that’s pretty good too.

4. You make a very good point about observations that I didn’t address in this post. There is a sense that all that we need to do is collect data. The problem is that simply taking measurements tells us very little if we don’t also do modelling of some sort so as to interpret this data. So modelling isn’t only about making predictions about the future. It’s also about interpreting existing data. Both are an important part of advancing our knowledge.

5. Latimer Alder says:

@chris

‘So when models match reality that’s great. When they don’t that’s pretty good too’

And when we can reliably distinguish which model is going to do which *in advance* then I’ll agree. But looking back and saying ‘look, number 27 got this right and 16 did that bit and 43 another’ is just Texas sharpshooting. They are of no practical value.

If all you want to use models for is to play around with, that’s fine. But when they are used to make predictions – at which they are demonstrably extremely bad – they are not practically useful. They are wrong.

And thanks for the patronising remark about ‘ill-considered opinionating’. I suspect that I started writing atmosphere-related models before you were out of nappies.

And the test of a good model we used then remains as it is now – do the models reflect reality? Can we make accurate predictions? If they don’t they are not good models.

6. That’s why the models give likelihoods and ranges. Currently the models are consistent with observations at the 5 – 10% level. How is that wrong?

7. chris says:

Oh dear Latimer,

I showed you a whole bunch of phenomena that were successfully predicted by atmospheric and climate models. Successfully predicted “*in advance*”. That’s what “prediction” means Latimer! Are you playing Monty Python-style “wot did the Romans ever do for us” games?

I don’t think I was being patronizing, was I? I was expressing a truism. Wotts’ top post was barely dry on the page and you produced a response that must have been made with the most limited consideration. Your opinions are objectively incorrect, since climate models have been shown to be correct *in advance* in many examples (I gave you a whole bunch); likewise both Wotts and I have explained rather carefully the epistemological value of models. It’s not patronizing to point out that you failed to address this.

Anyway, it’s interesting to hear that you wrote atmospheric models in your prime. Can you tell us a bit more about these? Did you publish any of this work? Were your models successful/useful?

8. Latimer Alder says:

Nope – they weren’t useful/successful. Sitting alongside me was our experimental team and they were able to devise practical experiments to check the theoretical predictions from the model. When they ran the experiments the theory was shown to be wildly out. So we ditched it. That’s exactly how science is supposed to work. Test the theory against experiment. See Feynman.

And I’d got more interested in practical applications of IT than I was in the atmosphere, so that’s where we parted company.

You have missed the point that to be *useful* the models have to be reliably accurate in advance. Not in hindsight. That a model got particular phenomenon B right does not make it a reliable predictor for phenomena D, E, F, G etc. And especially not a reason to believe that model Y will tell you anything about Z. If you fire enough bullets randomly at a target one will hit the bullseye. If you can do it consistently and reliably you have demonstrated shooting skill. If it just happens once in a while it’s random.

It may be that professional climatologists find models useful – or at least it gives them something to play with – but from out here in reality, they are of as much interest as engine management systems are to most car drivers.

They care not a jot about the internal workings or the elegance of the design or whether Bloggs or Coggs or Doggs interpretation of catalyst temperature is best. At a BMW dealer’s convention they are probably of huge significance. But to the man at the Clapham traffic lights they are of no interest whatsoever. His test is ‘does the car go’.

Similarly the test of climate models is ‘can I use this model to accurately forecast the future climate?’

If they can’t do that, they have failed. And they can’t.

9. chris says:

I might just add to the points that Wotts and I have made (since I’m trying to write a grant application and any diversion is a welcome relief!). I hope my points aren’t so obvious as to be patronising, but sometimes simple explanations of the obvious can appear to be so:

1. Climate models evolve temporally according to their parameterizations which themselves are based on empirical knowledge and theory. Just like the climate in the real world, the particular trajectory of a climate model depends on initial conditions and so if one wishes to encompass the possible range of values of an observable of the model (say the surface temperature) the model should be run a number of times to produce an ensemble (or a number of different models run, which would actually be testing something a little different). Only one or a small group of these will match the trajectory of the observable (surface temperature) in the real world.

2. This has a direct real world correlate. Within a particular climate regime (say the temperate maritime climate of the UK) a range of trajectories are possible. One year (2012) late Feb-March may be gloriously warm…another year (2013) the same period may be unpleasantly cold. This is rather unpredictable. However an ensemble of simulations of a climate model parameterized either to simulate the global climate or (say) the temperate maritime UK climate should encompass these behaviours. Obviously if we ran an ensemble of simulations projecting forwards from 2010 (say) we might find only one (or maybe none) that actually “predicted” a warm late winter 2012 and a cold late winter 2013.

3. That’s because short-term (decadal perhaps) real world (and computed) trajectories are dominated by the range of natural variability that can occur within a particular climate state (set of parameterizations). However in a greenhouse warming world we expect that the real world will warm such that local climates (e.g. the temperate maritime UK climate) will transition to a new climate state (characterized likely by a warmer annual temperature, earlier springs, enhanced rainfall (boo-hiss!) and so on). We expect that this behaviour will be captured within the ensemble of model runs. That would be a successful model. We would likely find that very few, or maybe none, matched the actual real-world trajectory on the decadal timescale. That wouldn’t mean that the model is wrong….

10. chris says:

come on Latimer. The examples I gave were predictions from models that were reliably accurate in advance. For goodness sake! If models predict tropospheric warming and tropospheric moistening and yet not only were empirical measurements insufficient to determine those parameters at the time of model prediction, but a tiny number of prominent and vociferous scientists were making erroneous assertions and misanalysis, then clearly the models were making predictions *in advance* of the empirical observations required to test the model success.

I hope I’m not being patronising in suggesting that it really isn’t so difficult to understand the temporal relationship of model construction and prediction/publication, and the subsequent acquisition of the correct analytical tools to test whether the predictions were correct!

It’s a shame that your models were unsuccessful. But even if your models were poor (or their theory base was), surely your model was useful since it stimulated a specific set of experiments.

11. Latimer Alder says:

‘Warmer’, ‘earlier’, ‘enhanced’. Lovely terms. And all so deliciously vague and unscientific. Could be straight out of TV cosmetics ads.

Let’s do Newton the same way.

Here we are. The 2nd Law in climo speak:

‘If you push something hard enough for a long enough time it’ll go faster and be harder to stop’

Probably true – but of no practical value. I can’t make good policy based on ‘warmer’ or ‘earlier’ any more than I can go to the moon with ‘push harder’. I can’t design a seawall from them any more than going ‘faster’ helps me with building a railway.

For all practical purposes such models are useless. And if they are of no practical value they are mere curiosities.

12. Firstly, I think you’re seeing everything through the eyes of an engineer and are unwilling to accept that your experiences in engineering don’t necessarily mean that you have the abilities to critique the way climate scientists should conduct themselves. As I may have mentioned to you in the past, applying engineering-like procedures to science is unlikely to make for better science, in my opinion at least.

As far as your second law analogy is concerned, I don’t think I’d need to run a model to convince you that not fitting brakes to a car might be a bad idea.

13. BBD says:

And thanks for the patronising remark about ‘ill-considered opinionating’. I suspect that I started writing atmosphere-related models before you were out of nappies.

Bluff.

14. BBD says:

So, in summary:

– LA violently dislikes what models indicate – that increasing RF from CO2 will warm the planet significantly on a centennial timescale.

– So LA resorts to an array of spurious argument in an attempt to deny the validity of the information derived from the models.

15. Fragmeister says:

Luckily for Newton and his second law, there are only three easily measurable terms. I think it is a surprise that we can even get close to modelling the climate for an entire planet with all the variables involved in it. To get a reasonable approximation means we have done rather well.

16. Yes, that’s what I find very odd about this whole issue. I imagine climate models – although based on some fairly basic physics and chemistry – are remarkably complicated and hence it should be seen as impressive that they perform so well. Instead they’re completely rejected by some because they don’t satisfy an ideal that is probably virtually impossible to actually achieve.

17. BBD says:

Demanding an impossible standard of evidence/proof then using this as a basis for rejecting the scientific basis for AGW is standard denialist rhetoric. As you doubtless know, but if you feed me the lines…

🙂

18. Yes, something I have yet to fully appreciate about this whole “debate” is that it’s important to not feed people the lines they want you to feed them 🙂

19. BBD says:

You might find this amusing. Card tricks.

20. Martin says:

I think that gets Popper wrong on several levels:

1) You mention “Science doesn’t really work the way that some think. Scientists don’t simply propose theories or hypotheses that can be tested.” Then “The basic argument is that something is only scientific if it can be falsified. This, however, is not how scientists think and isn’t really how science works today.” But this conflates, paraphrasing Popper, “psychological problems with epistemological ones”. Popper indeed says things like “I said above that the work of the scientist consists in putting forward and testing theories.” – but this is not about what scientists are actually doing. Rather, Popper gives a logical framework in order to judge a certain truth content of a theory. I quote at length from “Elimination of Psychologism” (“Logic of Scientific Discovery”, 2002, Routledge Classics):

“Some might object that it would be more to the purpose to regard it as the business of epistemology to produce what has been called a ‘rational reconstruction’ of the steps that have led the scientist to a discovery—to the finding of some new truth. But the question is: what, precisely, do we want to reconstruct? If it is the processes involved in the stimulation and release of an inspiration which are to be reconstructed, then I should refuse to take it as the task of the logic of knowledge. Such processes are the concern of empirical psychology but hardly of logic. It is another matter if we want to reconstruct rationally the subsequent tests whereby the inspiration may be discovered to be a discovery, or become known to be knowledge. In so far as the scientist critically judges, alters, or rejects his own inspiration we may, if we like, regard the methodological analysis undertaken here as a kind of ‘rational reconstruction’ of the corresponding thought processes. But this reconstruction would not describe these processes as they actually happen: it can give only a logical skeleton of the procedure of testing. Still, this is perhaps all that is meant by those who speak of a ‘rational reconstruction’ of the ways in which we gain knowledge.”

I.e. what you are alluding to is exactly what Popper is not talking about in his entire epistemology, as he makes clear from the get-go. His is the question of a logical criterion to judge the truth-content of a theory, not a working procedure.

2) Classical mechanics makes certain predictions. Some of them are refuted by observations, e.g. where relativistic effects are relevant. One can, therefore, assert that classical mechanics as a theory is refuted in the sense that we can say with some confidence that its predictions do not hold universally. We do not know if relativistic mechanics is true, but we know that classical mechanics is false. We can still say that – depending on the requirements for precision and/or the limits of measurement – classical mechanics is a good enough approximation to reality where relativistic effects are small enough. But no corresponding claim can be made in the other direction. That is, classical mechanics may be useful as a model, but as a theory about reality it has been refuted. Who ever said that we should “stop using it”? It’s falsified as a theory; as a model within certain boundaries it is still very useful (thus, as an approximation to reality, though we now know that it is only that), no doubt about that. But that has nothing to do with falsificationism. Which brings me directly to

3) Throughout your whole post you make no distinction between theory and model. Falsificationism makes assertions about the scientific validity of the former, not about the “usefulness” of the latter.

21. I did point out in my post that there would be some who could out-philosophize me and you may well have done just that. This wasn’t meant to be a post about Popper as such, but a post about those who misuse Popper to make claims about, in this case, climate science. I wasn’t intending to suggest that Popper got it wrong. I was intending to suggest that those who use Popper get it wrong.

As far as your point 3 is concerned, either you didn’t read the post, didn’t understand the point I was making, or I didn’t explain myself clearly. I think I did differentiate between theory and model. Models tend to be built using theories but are not themselves theories. The whole point of the post was to make the case that one can falsify theories but not models. So, as you quite rightly say, one can falsify a theory, but not a model. If that wasn’t clear, I apologise. If you didn’t actually read the post properly, maybe you could try doing so before making another comment.

22. Martin says:

I quoted you for 1). If my quotes are not a positive assertion by you, but a critique of assertions made by others, you say so nowhere.
For 2), you write “So, there you go. Newton’s Law of Universal Gravitation has been falsified. We should stop using it; right?” Again, if this is anything other than a positive assertion about what you yourself would think has to be deduced from a theory being falsified, I could only have guessed it, as you never point it out. On the contrary, you seem to double down in the same paragraph: “So, we have a theory that’s been falsified that we still use.” The one has nothing to do with the other. How else is this to be read than as an implied refutation of a claim that a falsified theory is to be discarded altogether? I am open to suggestions that I read your text more carefully. But while I find several quotable instances where you make claims that seem to corroborate my interpretation, I find none that would even resemble the claim that it is “those who use Popper” who get it wrong. Could you help me to find anything of the sort in your text?
For 3), you go directly from an exploration of Newton’s gravitational theory to a model of the genesis of the solar system (and then to climate models). Nowhere do you mention the switch in what follows. Yes, you point out that models are not falsified by the observation that their results do not match reality. However, if you ascribe this to a difference between models and theories, I have missed it, indeed. Also, how does the last sentence make sense if you think that not Popper is the problem, but those wrongly applying him? “Trusting those who think we should use the philosophy of science to determine the merits of global climate models, rather than trusting those who actually understand the models, just seems like the wrong thing to do.” Given your response, one would think that what you want to say is that those who got the philosophy of science wrong are the problem. But this is not what you say.

23. As I mentioned above, you’re almost certainly out-philosophizing me. The point of the post (which I may not have made clearly) was to discuss how people perceive Popper rather than what Popper was suggesting himself. I accept that I may not have made this as clear as I could have. So, it seems that your criticism is more in how I expressed myself in the post, than in the point I was trying to make. Maybe, more correctly, I don’t actually disagree with what I think your comments are trying to say. You may well be making the point better than I was able to do myself. So, again, if I’ve written this post poorly I apologise. I don’t claim to get everything right or to be able to write everything as clearly as I would like. So, unless I’ve mis-interpreted the gist in your comments, there doesn’t seem to be much point in discussing – at length – my post writing abilities. Of course, if I was getting paid for this and you’d paid to read it, your criticism may well be justified 🙂

So, to clarify. The point of the post was not to criticise the philosophy of science. It was to discuss how some seem to misuse the philosophy of science so as to try and make claims about the validity of some area in science (climate models in this case). One can falsify a theory or hypothesis, but one cannot falsify a model based on a fundamental theory given that the model may be based on the theory, but the theory is not based on the model. That was really the point and if I didn’t make it clearly then that’s my mistake (the whole Newton’s law discussion followed by the discussion as to how one might build a model of how the planets in the Solar System formed was meant to illustrate that, but maybe it wasn’t as obvious as I had hoped). On the other hand, if you disagree with that basic assessment, then maybe I’ve misunderstood the basis of your comments.

24. chris says:

This topic is a minefield of potential semantic confusion so it’s important to be clear about what one means by certain terms:

1. There are at least two meanings of the term “model”. (A) Our view of any aspect of the natural world is a model, and as scientists, our investigations are done in the context of a model even if we may not explicitly formulate this. This is a mental or conceptual model. In climate science such a model might be “as the Earth warms water expands and land ice melts and sea levels rise as a result.” A scientist might choose to investigate mechanistic and quantitative aspects within this model (e.g. how much sea level rise accrues from how much warming). (B) Nowadays we might construct and run computational models, such as General Circulation Models. It’s worth being clear about what sort of model one is referring to (a mental/conceptual model inherent in all worldviews or a computational model).

2. Are models “falsifiable”? I would have thought the answer is yes. But we have to be clear about the criterion of falsifiability in any particular case (as highlighted in the top article). So a GCM model isn’t falsified because a single trajectory doesn’t match the real world progression of a chosen observable (e.g. surface temperature). Clearly a computational model could be fundamentally wrong (e.g. because the parameterization is rubbish). If so the model might well be shown to be “falsified”, though we likely wouldn’t use that term.

3. Is a model a theory (or maybe a hypothesis)? This is tricky! Operationally speaking I think of computational models as theories (a weakish analogy maybe) and the results/interpretations of models as hypotheses (a strong analogy). The latter can be tested experimentally. For example a computational chemist might run an atomistic simulation of a protein known to be a drug target, using a model in which interatomic forces describing interactions between bonded and non-bonded atoms are parameterized according to well characterized empirical measurement. The result of the simulation might be the observation that a drug molecule binds at this particular site on a protein. To my mind that observation is a hypothesis. We could test this experimentally (e.g. by measuring drug binding to mutated versions of the protein where amino acids in the predicted drug binding site are changed to perturb the drug interaction).

4. It’s commonly asserted that in a Popperian sense hypotheses and theories can only be disproven but not proven. In reality the sort of hypothesis in the example in 3 can be corroborated to the extent that there may be little doubt of its truth. For example you might manage to determine a crystal structure of the protein drug complex and find the drug sitting exactly in the site predicted by the model.

5. Generally speaking this whole area and its “philosophical” aspects are very difficult to deal with if one doesn’t discuss specific examples. Otherwise one can generate fruitless argumentation in which the “debaters” are continually, accidentally or willfully, missing each other’s point. Lots of efforts to misrepresent science in support of dubious agendas make use of the tactic of contrived semantic confusion to avoid focusing on the fundamental and specific aspects of the matter at hand.

25. Martin says:

Once interpreted in a certain way, it’s near-impossible to see it from a different point of view without some distance (I often make general claims like this when I am actually talking about me). I’ll re-read it tomorrow, perhaps I am more able then to judge if there is actually anything I disagree with, or if I am just unable (in the moment) to change perspective.

26. Yes, I suspect you’re out-philosophising me too here 🙂 Indeed, it is a minefield and I’m sure I haven’t defined things as clearly as maybe I could have. When I was referring to a model I was probably primarily thinking of computational models which, typically, would be developed using what one might regard as fundamental theories and initialised with a set of initial conditions. Your final point may be the most crucial. Understanding the context is extremely important. One needs to understand if the lack of agreement between a model and reality is important (i.e., did you expect the model to represent observations accurately) or not. There will be some systems that are sufficiently well understood that a mismatch between a model and an observation may be very significant. There are others where a mismatch may not be that significant given that there are plenty of uncertainties or unknowns in the modelling.

I guess, however, that the basic point of the post was to make the case that to simply reject a model because it doesn’t match reality sufficiently well (according to some criteria) isn’t necessarily justified, especially if you’re basing your judgement on some philosophical argument, rather than on a detailed understanding of the model.

28. BBD says:

I wouldn’t hazard my feeble wits in the minefield, but I agree with your concluding paragraph wholeheartedly.

29. chris says:

Actually, I didn’t think I was philosophizing in the knowledgeable sense at all, but trying to think about this in the context of how scientists actually do stuff and find things out. I agree with your assessment of climate models; I expanded on your point in explicit detail in my post at August 3, 2013 at 11:07 am above, because that to my mind is a way of cutting through any semantic confusion (i.e. spell things out with explicit examples). However as Latimer showed in his response to that post it’s easy to continue the pretence of a scientific disagreement simply by (in his case) taking one phrase, misrepresenting this and then sneering at the misrepresentation. Excellent!

Perhaps another philosophical question to address would be why people write comments on blogs.

30. My comment about you out-philosophising me was intended as a compliment, not a criticism 🙂 Although, maybe that wasn’t obvious as I’d used it in an earlier, less pleasant exchange – although even there the intent was for that to be a compliment, even though the exchange itself had a somewhat curt element to it.

31. Chris, on a related note, an interesting philosophical question might be why people write blogs in the first place 🙂

32. It has been a long time since I read about the philosophy of science, so I hope I do the philosophers justice in my attempt to out-philosophize Wotts.

As an aside, I would recommend that every student who wants to become a scientist read about this. You can do science without it, by looking at what other scientists do. However, in this way it is less likely that you will do something interesting by doing science in a different way from your tutors.

The question Karl Popper wanted to answer was: “what is the difference between scientific and non-scientific ideas?” The answer is that scientific ideas are formulated so precisely that you can falsify them. This was a very important idea, but it does not tell you how to do science, nor does it say that non-scientific ideas are not important in their own right.

I thus do not agree with Martin’s representation of Popper’s work:

Martin : “His [Popper] is the question of a logical criterion to judge the truth-content of a theory”

Popper did not care whether a hypothesis is falsified or not. Wotts’ example of gravity is a good one. Classical mechanics is falsified. It is still a scientific theory, just a wrong one.

Even if it is wrong, it is still useful. Popper did not write about which theories are useful.

The morphic fields of Rupert Sheldrake are a clear example of ideas that are not sufficiently precise to be science. There is no experiment that could refute Sheldrake’s ideas. He could always claim that the effect was just a bit weaker than could be detected by the experiment, because he does not quantify anything. Other examples would be the weekly horoscopes or the psychology of Freud. Too vague to be science. (Still maybe helpful to some.)

Wotts: “I was intending to suggest that those who use Popper get it wrong.”

The people who quote Popper and say that the climate models are falsified are, by Popper’s criterion, saying that climate models are scientific theories. Possibly without realising they are saying that.

chris: “2. Are models “falsifiable”? I would have thought the answer is yes. But we have to be clear about the criterion of falsifiability in any particular case (as highlighted in the top article). So a GCM model isn’t falsified because a single trajectory doesn’t match the real world progression of a chosen observable (e.g. surface temperature).”

I fully agree with that. Models are falsifiable, they are well defined (the model code and the input data) and produce clear output. No vagueness of the kind of Sheldrake or Freud. Thus I would see models as falsifiable theories about the climate system.

The only reason I could see to deny them that status is that the code is rather long. That makes them a little less crisp, and probably no one alive can claim to have read and understood every line of a comprehensive global climate model. However, putting a maximum length on a theory seems rather arbitrary.

Some more name dropping (for the students). Thomas Kuhn wrote a good book about what scientists do when they do science, especially about the difficult times when a major shift in scientific thinking occurs. He can tell you a bit more about the usefulness of theories and models.

Imre Lakatos wrote a good book about what scientists do when they do not do science.

Feyerabend is famous for this quote: “Anything goes” when it comes to coming up with scientific ideas or what to do when one idea is not superior to another one in all respects. Creativity and intuition are very important in this phase.

Reading these four philosophers will bring you a lot. (In case of Popper, it is probably sufficient to read a few chapters, for a “user” Logik der Forschung is rather repetitive.)

33. “You can do science without, but looking at what other scientists do.” should be
“You can do science without, by looking at what other scientists do.”

34. BBD says:

Perhaps another philosophical question to address would be why people write comments on blogs.

35. Reich.Eschhaus says:

“Models are falsifiable, they are well defined (the model code and the input data) and produce clear output.”

Not trying to contradict what you have said, I want to add the dimension to the discussion that falsifiability may be hampered by the chaotic nature of the models as well. As discussed recently here:

So, although the models may be well defined, the output of model runs may vary strongly with small changes in initial states. My guess is that such model behavior actually makes it more difficult to falsify the model.
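This sensitivity is easy to demonstrate with a toy chaotic system. The sketch below is not a climate model, just the logistic map in its chaotic regime (a standard textbook example, not anything from the thread): two runs whose initial states differ by one part in a billion end up completely different within fifty steps.

```python
# Toy illustration of sensitivity to initial conditions (not a climate model):
# iterate the logistic map x -> r*x*(1 - x) in its chaotic regime (r = 4)
# from two starting points that differ by one part in a billion.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Return the first `steps` iterates of the logistic map from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

run_a = logistic_trajectory(0.400000000)
run_b = logistic_trajectory(0.400000001)  # perturbed by 1e-9

# The runs start indistinguishable but diverge to order-one differences.
gaps = [abs(a - b) for a, b in zip(run_a, run_b)]
print(f"initial gap: {gaps[0]:.1e}, largest gap within 50 steps: {max(gaps):.3f}")
```

Both runs are “well defined” in Victor’s sense (same code, fully specified inputs), yet neither individual trajectory is a meaningful prediction of the other, which is exactly why a single mismatched trajectory cannot falsify the model.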

36. I’ll repeat what I’ve said on my blog here:

I’ve also seen the claim that global warming has stopped a lot lately; often this is done to criticize the IPCC and claim that the projections made by climate models are wrong. But that is at the very least based on a misunderstanding of the models involved and how these projections are made, as plateaus like the one we are currently experiencing are replicated by the models.

This is the very reason why I used an explanation from Ben Santer on climate noise in my video on this subject:

This is from a climate model, a Japanese climate model, uhm, these are from experiments that were performed in support of the IPCC fourth assessment report in 2007.

What you see here are tropospheric temperature time series. And you can see that there is this black line here, this small overall warming trend. This is an experiment, or a model, that is driven by changes in greenhouse gases, changes in the sun’s energy output, changes in aerosols in ozone. We call them also colloquially everything in the kitchen sink experiments, where you try and drive a climate model with your best estimate of the actual.. uh… factors that have been important over the twentieth century.

And you can calculate from this climate model something akin to the satellite temperatures that I’ve shown you. And in this particular run of this Japanese model you have a La Niña, that’s that blue thing, near the end which tends to cool it down. And you have an El Niño near the beginning, that’s the red thing, which tends to warm things up. Because this is a short record uh… only a little over twenty years, and you have a warm blip near the beginning and a cool blip near the end you don’t get much overall change.

[…]

But in the climate model world you can essentially rerun the twentieth century many times and get many different sequences of these things. And what you do then at the end is you average over all of these things, you average over these five realisations, and that beats down the noise. Because the noise is not correlated from one realisation to the next so you get a better estimate of the thing you’re really interested in which is the slow overall increase in temperature. In this case in response to human caused changes in greenhouse gases and and all that other nice stuff.

To summarize: the models used by the IPCC replicate periods similar to what we are now experiencing. But you don’t see those periods in the projections, as they are filtered out by doing multiple runs so you can see the underlying climate signal. They do this to make sure climate noise doesn’t mask these signals or skew the projections.
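The “averaging beats down the noise” point from Santer’s explanation can be sketched numerically. The following is a toy construction of my own (not actual GCM output): each “realisation” is the same slow warming trend plus independent noise, and the five-member ensemble mean suppresses the uncorrelated noise by roughly a factor of √5 while leaving the trend intact.

```python
# Toy sketch of "averaging beats down the noise": each realisation is the
# same slow trend plus independent internal variability; the ensemble mean
# keeps the trend and suppresses the uncorrelated noise by ~sqrt(N).
import random

random.seed(42)
YEARS = 100
N_RUNS = 5
TREND = 0.01       # degrees per "year" of underlying warming
NOISE_SD = 0.2     # internal variability within each realisation

def realisation():
    return [TREND * t + random.gauss(0.0, NOISE_SD) for t in range(YEARS)]

runs = [realisation() for _ in range(N_RUNS)]
ensemble_mean = [sum(vals) / N_RUNS for vals in zip(*runs)]

def residual_sd(series):
    """Standard deviation of a series after removing the known trend."""
    resid = [x - TREND * t for t, x in enumerate(series)]
    mean = sum(resid) / len(resid)
    return (sum((r - mean) ** 2 for r in resid) / len(resid)) ** 0.5

print("noise in one run:       %.3f" % residual_sd(runs[0]))
print("noise in ensemble mean: %.3f" % residual_sd(ensemble_mean))
```

Because the noise is independent from one realisation to the next, the second number comes out near 1/√5 of the first, which is the mechanism Santer describes for recovering the slow underlying signal.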

Also, the scenario Ben Santer used to explain this is very close to our current one: we had a very strong El Niño in 1998, and we’re now in a period where we’ve had more La Niña events and weaker El Niño events. None of this is new to climatology.

Context matters when you’re talking about projections and how accurately they represent our world.

37. Victor and Reich, you make some interesting and valid points. If I was being a little pedantic, I might suggest that one should be careful when referring to a model being “falsified”. I could envisage a scenario in which a model based on well-defined and well-tested theory never even comes close to matching reality. In this case, it could simply be that the model is missing something, so is essentially wrong. One could say that the model has been falsified, but the fundamental, underlying theory is not. On the other hand, one could envisage a model that never matches reality that you believe is complete. In such a case one might consider that the underlying theory is wrong. If true, then the theory would have been falsified. So, I suspect that we all agree that this can be quite complex, but applying philosophy alone to determine if a model is wrong/falsified is not a particularly scientific strategy.

Victor, I should probably go and read some of the books you recommend. There’s certainly much I could learn 🙂

38. Collin, that’s very interesting. Thanks.

39. toby52 says:

Just to distinguish theory from model, a model is (usually) a representation of a theory that can (often) be simulated on a computer.

I think most philosophers of science would recognise that the model depends on what are known as auxiliary hypotheses, for example that the theory, represented by approximate equations on a grid, can supply an output that corresponds within limits to reality. If the model output fails to represent reality, then we have to find out whether it is the theory or one of the (often many) auxiliary hypotheses that is flawed.

The Duhem–Quine thesis is that it is impossible to test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses). The hypothesis in question is by itself incapable of making predictions. Instead, deriving predictions from the hypothesis typically requires background assumptions that several other hypotheses are correct (thanks, Wikipedia).

Popper’s theory of falsifiability is much more complex than observing white swans until we find a black one. He understood this himself very well, but his detractors prefer to over-simplify his theory. I also think he was addressing Hume’s problem of induction: how can we validly derive scientific laws from sequences of observations that depend on the assumption that nature is bound by laws?

The philosophy of science is richer than many scientists understand, but Popper’s falsifiability principle is still the one “9 out of 10 scientists prefer”, even though it may not be as simple as they believe.

40. chris says:

Nice post, Toby. I think we’re all in agreement about the relationship between models and theories. I said above that a model might be a weak analogy of a theory, whereas the results of running a model may be more strongly considered (testable) hypotheses. However, I think your (and Wotts’) depiction of a model as a representation of a theory is more sound.

However, at that point we might find a disagreement between the philosophy of science accounts and science-at-the-coalface realities, and here I align with David Hume, who I think deals with this in a very Scottish fashion. This relates to the Duhem–Quine thesis on the testability of hypotheses. I can understand the philosophical difficulty of hypothesis testing in isolation according to D–Q; however, practically speaking, hypotheses in the real world are eminently testable since many auxiliary hypotheses are so well supported as to be virtual truisms. So in the example I gave in a post above, about the observation from a computational model that a drug binds at such-and-such a site on a protein target (I would call an observation from a computational model a hypothesis), the determination of a crystal structure that shows the drug bound just as the computational model predicts is a very strong test (and in this case confirmation) of the hypothesis from the model, since the auxiliary hypotheses (that crystallography is a valid means of structure determination; that X-rays are diffracted by electrons arranged on a regular lattice according to Bragg’s law, etc.) are largely beyond practical question. That’s not to say that we don’t recognise the contingent nature of all our interpretations and conclusions.

So whereas the philosophy of science notion, and especially the D–Q thesis, would lead to a severe skepticism (it’s very difficult to be confident we know stuff), Hume recognised that this is a severely limiting stance on reasoning, especially in a well-informed world, and proposed a more practical skepticism with a strong element of common sense. I suspect that in the modern world, where magic has largely been expunged from explanations about the real world and the nature of causality is extremely strongly supported, we can be even more confident of our ability to test hypotheses.

Where does this leave us wrt climate models? Popper’s criterion of falsifiability is an excellent one, and I think most working scientists have internalised this concept whether by learning or experience (there might be an element of innateness about this idea that we could expand on). A climate model (or the theory of which it is a representation) has to be falsifiable else it isn’t science. But as I think we all agree, the criteria by which a model (or its theory) is falsified require careful consideration.

41. Chris, possibly this is semantics, but I guess my issue is with your last paragraph. A climate model could be wrong (because it doesn’t include everything necessary to represent reality), but does that mean it’s been falsified? The climate model being wrong doesn’t mean that any of the underlying theories from which the model was built have been falsified. If, by falsified, you essentially mean “wrong” then I largely agree (with the caveat that chaos can add an extra complication), but even this is non-trivial. As you say at the end, it does indeed require careful consideration, as virtually all models (simulations to be precise) will be wrong at some level, and so some judgement is required to assess the credibility of the model results and whether or not we regard them as “wrong”.

42. chris says:

I guess what I’m saying is that we can’t afford models special privilege with respect to falsifiability, especially when we may be using them to inform policy. I think we agree that climate models must be assessed in relation to their ensembles, and that the wrongness of a model (i.e. extant reality isn’t represented within the ensemble) may be a result of underlying problems with the theory, or poor model implementation (e.g. bad parameterization or misformulating interactions between parameters). The model may be only partly wrong (e.g. it gets the climate sensitivity about right, so that the model under a forcing re-equilibrates at a temperature that matches the real-world altered surface temperature… but (say) the time constants for atmospheric, surface and ocean warming are not right, since real-world trajectories of these observables aren’t represented within the ensemble). But there have to be some criteria against which we test the rightness/wrongness of models, otherwise the situation isn’t very sciency!

Yes, I am interchanging “wrong” and “falsified” with respect to models in my description and this might be semantically deficient! Clearly there is a scale of “wrongness” (all models are wrong!), but surely a seriously wrong model has to be considered to be falsified (whether or not the problem is with the theory underlying the model or model parameterization/implementation which we would hope to discover by investigation). I do think “falsified” is an appropriate description since a serious inability of a model ensemble to represent reality seems to map onto how Popper designates falsifiability. Hopefully by being explicit in my description I’m disarming semantic confusion, but perhaps not.

Incidentally, I would be including smiley emoticons in my responses if I knew how to do so, to indicate that I am completely relaxed about the possibility that I might be wrong and that I have no problem with being robustly criticised (since you raised that possibility in a response yesterday) or even insulted or ignored…. [smiley emoticon].

43. I think we largely agree then 🙂 (unless your system is different to mine it’s a colon, dash, right bracket). Indeed, I was certainly not suggesting that models can’t be criticised, be wrong, or maybe even falsified, but it does require an understanding of the model and its strengths and limitations if one is to do so. In a sense the hypothesis is not simply defined, and so falsifying it is non-trivial.

44. Popper really only gives the criterion for distinguishing between scientific and non-scientific ideas. It does not matter whether falsification is difficult or easy. It just has to be possible in principle.

A statement such as “the sum of two squares is always zero” is thus a scientific theory. It is also wrong, falsified and useless.
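Victor’s toy statement makes the asymmetry of falsification easy to spell out in code. This is a trivial sketch of my own, just to make the single-counterexample logic explicit:

```python
# The (false but falsifiable) claim: "the sum of two squares is always zero".
def claim(x, y):
    return x**2 + y**2 == 0

# A confirming instance does not prove the universal claim...
assert claim(0, 0)

# ...but a single counterexample is enough to falsify it: 1^2 + 0^2 = 1.
counterexample_found = not claim(1, 0)
print("falsified:", counterexample_found)
```

No number of confirming instances could ever establish the universal claim, but one counterexample settles it, which is precisely what makes such a precisely formulated statement scientific in Popper’s sense.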

Other scientific theories such as classical mechanics are also falsified, but still useful.

Whether a theory is falsified or not, is thus not always of importance. Popper would not care.

What kind of theories or models scientists prefer is partially subjective, based on questions such as fitness for purpose and explanatory power. In times of paradigm shifts (Kuhn) you start to notice such subjective aspects; you start to notice that the preference for one model over another is sometimes comparing apples and oranges. When the dust settles, it is typically so clear which model explains the most that the subjective part becomes almost irrelevant. Still, fundamentally, the choice is subjective. Reading “The Structure of Scientific Revolutions” by Kuhn, I was especially amazed at how good the intuition of scientists is, how early they were betting on the right horse. On the other hand, before an established theory goes away fully, often the old scientists who have worked with this theory all their lives have to retire first.

Ensembles are indeed a good way to study the qualities of a climate model. Due to chaos, you would not expect a single realisation to fit the observations, but you would expect the observations to lie within the ensemble spread.

However, even if you look at ensembles, climate models are wrong in some aspects. I am not a modeller, but as far as I know the models have problems with tropical convection (the seasonal cycle of the Intertropical Convergence Zone and the Madden–Julian oscillation), most models do not model the Quasi-Biennial Oscillation right, most models did not predict that the Arctic sea ice would disappear so fast (as is well known in the blogs), and there is an ongoing discussion on the missing tropical hotspot.

Are these deviations a problem? That depends on what you are interested in. If you are interested in precipitation in the tropics, clearly yes. For climate change due to greenhouse gases? Not likely, but also not impossible. Most likely this is like classical mechanics: there are deviations, but for most applications the theory is sufficiently accurate. However, such a judgement is purely subjective interpretation.

Whether it is important we will only know with more certainty once we understand why we do not model these effects well. For the QBO, a first hint is that we need climate models with a higher model top. That is why research is never finished; there are always holes somewhere. The more you study something the more you find, but also the less likely it is that the remaining holes are important, and you only ever know that in retrospect.

Karl Popper, Conjectures and Refutations
Thomas Kuhn, The Structure of Scientific Revolutions
Paul Feyerabend, Against Method
Proofs and Refutations by Imre Lakatos is also supposed to be a classic; I have not read it yet.

45. chris says:

Pretty much agree with all of that Victor. Agree especially that it doesn’t matter if falsification is easy or difficult and that Popper doesn’t care whether a theory is falsified or not.

I read both Kuhn and Feyerabend a long time ago. From memory, Kuhn seems to me more relevant for the sweeping changes in scientific fields, rather than the more incremental progression of a subject, which in my opinion applies to climate science. I don’t think there’s much of a revolution going on in climate science (although perhaps one can only ascertain this in hindsight). As a youngster starting out on a scientific career I found Feyerabend quite exhilarating, since he represents some of the messy aspects of science-at-the-coalface that I think it would be helpful for the public to be aware of. We’re not well-dressed models in lab coats with neat hair and clean fingernails! In fact Feyerabend might recognise some of the appalling behaviour of some of those scientists that wilfully misrepresent the science and consider this nothing particularly untoward. In many respects that view would be correct, since the final arbiter of scientific ideas is the reality of the natural world, and however much some misrepresent or cheat or engage in dodgy “scientific” practices, science gets there in the end.

46. Feyerabend might have a hard time explaining why appalling behaviour is bad. 🙂 However, he did believe in scientific progress, so I guess he was more thinking of taking short-cuts to get to the right answer.

On the topic “we are not well dressed models”, there is a beautiful post on the advocacy discussion by Sophie Lewis at Honeybees&Helium, titled: I have a confession to make.

47. Tom Curtis says:

47. Victor, take the time to read Lakatos. His is by far the most coherent theory of the four. Interestingly, both he and Kuhn agreed that the sole substantive difference between their respective descriptions of science lies in Kuhn’s claim that scientific theories belonging to different paradigms are incommensurable (by which Kuhn meant that modern relativists cannot understand the terms of classical Newtonians, and vice versa, and so on across all differing paradigms). I (and Lakatos) disagree. There is, however, clearly not a one-to-one correspondence in terms. “Mass” in Newtonian mechanics does not mean the same as “mass” in special relativity. It does, however, mean the same as “rest mass”.

Your exposition of Popper is incorrect in that Popper considered neither tautologies such as (x)(∃y)(x^2 + y^2 ≠ 0) nor contradictions such as (x)(y)(x^2 + y^2 = 0) to be scientific. After all, there is no possible universe in which the former could be false, and hence it is not falsifiable. Nor is there any possible universe in which the latter could be true. For Popper, propositions are scientific iff they are falsifiable by empirical observation.

I will note that Popper’s theory cannot handle such instances as the conservation of energy. For a long time, that theory was accepted, and accepted as fundamental, even though appearances were saved by a transparent book-keeping device, i.e., potential energy. Had the Leibnizians and 19th-century thermodynamicists been Popperians, they would have considered the hypothesis falsified by the simple expedient of measuring the velocity of a projectile shot vertically upwards. Talk of gravitational potential energy would have been dismissed as a means of making the theory unfalsifiable (which is what it was).

Of course, the theory of the conservation of energy was nested in a broader theory which was certainly empirical, so it passes both Kuhn’s and Lakatos’ criteria for being scientific. The requirement is not that the individual propositions within the theory be falsifiable, but that the paradigm (Kuhn) or research program (Lakatos) be judged on its ability to extend empirical content.

48. Martin says:

Wait, what? Of course Popper cared whether a theory is falsified or not, in that this is the only thing we can say about a theory: whether it is “wrong” (in the sense of whatever “falsified” means), or whether we don’t yet know. The sentence about truth content is straight out of his later usage, as in “All Life is Problem Solving” (google “Popper truth content”). That he did not say that a falsified theory is useless is nothing I dispute; indeed, I could quote myself saying exactly that, with regard to the exact same example, classical mechanics.

Feyerabend’s “anything goes” is a reductio ad absurdum of Popper’s epistemology, which he strongly opposed (in “Against Method”). Feyerabend complained loudly about how virtually everybody had been getting that one completely wrong (in “Science in a Free Society”). “Anything goes” is not something Feyerabend approved of, or something standing for “creativity”.

Elisabeth Lloyd has the money quote from Feyerabend himself: “‘anything goes’ does not express any conviction of mine, it is a jocular summary of the predicament of the rationalist: if you want universal standards, I say, if you cannot live without principles that hold independently of situation, shape of world, exigencies of research, temperamental peculiarities, then I can give you such a principle. It will be empty, useless, and pretty ridiculous – but it will be a ‘principle’. It will be the ‘principle’ ‘anything goes’.”

http://www.jstor.org/stable/188420

That’s what “anything goes” is, according to Feyerabend: “empty, useless, and pretty ridiculous”.

49. > A climate model (or the theory of which it is a representation) has to be falsifiable else it isn’t science.

As far as the theory part is concerned, this claim presumes that the demarcation problem has been solved. It’s not as clear cut as Popperians would wish:

In a lecture in Darwin College in 1977, Popper retracted his previous view that the theory of natural selection is tautological. He now admitted that it is a testable theory although “difficult to test” (Popper 1978, 344). However, in spite of his well-argued recantation his previous standpoint continues to be propagated in defiance of the accumulating evidence from empirical tests of natural selection.

http://plato.stanford.edu/entries/pseudo-science/

As a guideline, it makes sense. But as a rule, I don’t think it works yet.

***

As far as the models are concerned, I think the simulations do not serve a predictive function, but a projective one:

To me, arguing that climate models have to be falsifiable is like arguing that telescopes should be too. Just as telescopes help us see at a distance, model runs help us see the possibilities that our models say could unfold. They are tools to help improve decisions, and might never be powerful enough to become predictive devices in their own right.

50. I would argue that the equivalent of the telescope is the computer.

I see no fundamental difference between solving equations analytically and numerically, except that analytical work is typically more insightful, when it is possible at all.

51. I would argue that our sense organs, our instruments, and our theoretical apparatus should be considered as a whole, Victor. I’m sure you expected me to say that, as I try to stay in character, at least as far as epistemology is concerned.

52. Maybe I should reread Feyerabend. Until then, “‘Anything goes’ when it comes to coming up with scientific ideas, or to deciding what to do when one idea is not superior to another in all respects; creativity and intuition are very important in this phase” is simply my own opinion.

Robert Test at Open Mind made an interesting comment on Popper:

Popper writes: “no conclusive disproof of a theory can ever be produced … if you insist on strict proof (or strict disproof) in the empirical sciences, you will never benefit from experience” (LoSD, p. 50). The parenthetical comment on disproof was added by Popper in the English edition of the book to resolve an irritating misinterpretation of his view.

53. Pingback: Science – Stoat