If the observations do not fit the theory, then the observations must be wrong, right?

Thanks to the esteemed William Teach, I found this article:

95% of Climate Models Agree: The Observations Must be Wrong
February 7th, 2014 by Roy W. Spencer, Ph.D.

I’m seeing a lot of wrangling over the recent (15+ year) pause in global average warming…when did it start, is it a full pause, shouldn’t we be taking the longer view, etc.

These are all interesting exercises, but they miss the most important point: the climate models that governments base policy decisions on have failed miserably.

I’ve updated our comparison of 90 climate models versus observations for global average surface temperatures through 2013, and we still see that >95% of the models have over-forecast the warming trend since 1979, whether we use their own surface temperature dataset (HadCRUT4), or our satellite dataset of lower tropospheric temperatures (UAH):


Whether humans are the cause of 100% of the observed warming or not, the conclusion is that global warming isn’t as bad as was predicted. That should have major policy implications…assuming policy is still informed by facts more than emotions and political aspirations. And if humans are the cause of only, say, 50% of the warming (e.g. our published paper), then there is even less reason to force expensive and prosperity-destroying energy policies down our throats.

I am growing weary of the variety of emotional, misleading, and policy-useless statements like “most warming since the 1950s is human caused” or “97% of climate scientists agree humans are contributing to warming”, neither of which leads to the conclusion we need to substantially increase energy prices and freeze and starve more poor people to death for the greater good.

Yet, that is the direction we are heading.

More at the link.

Now, if global warming, or climate change, or whatever the in-vogue term is this week, is a hypothesis, and if the scholarly articles purporting to prove that hypothesis are correct, then we should expect to see the predictions made under the hypothesis confirmed by real-world observations. From a very brief section on hypothesis testing for doctoral dissertations:

  1. State the hypothesis in the null form. The null can test for either differences or relationships. If for differences, the null is either non-directional or directional, but you must be aware of which type you are using.
  2. Select your level of significance, or level of probability: either .05 or .01. A level of .05 establishes a 95% confidence level and is more liberal; .01 establishes a 99% confidence level and is more conservative.
  3. Compute your statistical analysis. Determine whether you have a statistically significant result.
    • No statistically significant result: Accept your null as true.
    • Yes, a statistically significant result: Reject your null as false.
  4. Determine the significance of your results. Is the statistical difference meaningful? Or is this a “so what?” finding? Concerning the last step, don’t let your ego overtake common sense.
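The four steps above can be sketched in a few lines of code. This is a minimal illustration only: the trend numbers below are hypothetical placeholders, not real climate data, and a simple two-tailed z-test stands in for whatever analysis a researcher would actually run (with a sample this small, a t-test would be the more rigorous choice).

```python
# A minimal sketch of the four hypothesis-testing steps, using
# hypothetical, illustrative numbers (not real climate data).
from statistics import NormalDist, mean, stdev

# Step 1: state the null (non-directional): the models' mean predicted
# warming trend equals the observed trend.
observed_trend = 0.11          # hypothetical observed trend, deg C / decade
model_trends = [0.20, 0.25, 0.18, 0.22, 0.30, 0.24, 0.19, 0.27,
                0.21, 0.26, 0.23, 0.28]   # hypothetical model trends

# Step 2: select the level of significance (.05, the more liberal choice).
alpha = 0.05

# Step 3: compute the statistical analysis (a one-sample z-test here,
# for simplicity) and check for a statistically significant result.
n = len(model_trends)
z = (mean(model_trends) - observed_trend) / (stdev(model_trends) / n ** 0.5)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-tailed: non-directional null

# Step 4: decide, then ask whether the difference is actually meaningful.
if p_value < alpha:
    print(f"z = {z:.2f}, p = {p_value:.4g}: reject the null")
else:
    print(f"z = {z:.2f}, p = {p_value:.4g}: fail to reject the null")
```

With these made-up numbers the models' mean trend sits far above the observed trend, so the null is rejected; step 4, judging whether a statistically significant gap is a meaningful one, remains a matter of judgment that no script can automate.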

If 95% of the observed conditions fall outside of the predicted patterns, then the researcher has not only failed at the 95% confidence level (not to mention the tighter 99% confidence level), but it can be said, with confidence, that the observations have disproved the statistical model.

Now, the fact that a model, or, in this case, multiple models, has been proven to be an incorrect predictor of events does not necessarily mean that the hypothesis is invalid. The possibility exists that the basic hypothesis is valid, but that the researcher erred in formulating the expected results. Nevertheless, given that so many of the researchers projected results which failed the test, greater weight has to be given to the probability that the basic hypothesis was invalid. But one thing can certainly be said: it is wholly unwise to try to base other actions on a basic hypothesis which has had such poor test results.
