Guest post: Surface and Satellite Discrepancy

This is a guest post by Steven Mosher, whose previous guest post was about skeptics demanding adjustments. This one is about discrepancies between surface temperature and satellite temperature datasets. I don’t really need to say more.


With the publication of a new version of RSS’s data product, the controversy over the accuracy of satellite data is likely to intensify. Prior to the publication of this new data, I took some time to do exploratory data analysis of RSS and the Berkeley Earth (BE) surface product. My tentative conclusion was that two areas merited deeper investigation: the performance of satellite products at high latitudes, and the transition between MSU data and AMSU data. With the publication of the new data my analysis will have to be revisited; however, there are still some things to be learned from looking at the prior version. In a complete analysis the various uncertainties of the data would have to be considered. At this stage I am only looking for fruitful areas to explore based on the known differences between the products.

The most superficial way to compare satellite data with surface data is to compare the global anomalies. Figure 1 illustrates the difference between RSS and BE. The RSS data is for TLT (temperature of the lower troposphere), representing a layer from the surface up to several kilometers, while BE is a combination of air temperature over land and water temperature.

Figure 1

Over the entire period of record for RSS, the difference in trend is roughly 0.05C per decade. However, as we will see when we push deeper into the data, the reasons for this difference are not simple to resolve.

There are several reasons why we should not be surprised about a difference between the two series:

  1. They use different estimation techniques. Satellites estimate the temperature of the bulk atmosphere by inference, translating brightness at the sensor face into the temperature of the air below. Surface datasets use more direct measurements to make their estimates.
  2. They estimate different things. Satellites estimate the temperature of the entire column of air, while the surface data represent a layer of air and a layer of water.
  3. They take measurements at different times. Surface air measurements are made twice a day, at Tmax (whenever that occurs) and Tmin (whenever that occurs). SST measurements are not made twice per day but at random times of day. Satellite measurements are made neither at Tmax nor at Tmin, but whenever the satellite passes over the section of earth being sampled; the overpass time drifts over the life of a satellite and requires adjustment.
  4. They have different coverage. While it is generally assumed that satellites have global coverage, this is not exactly true. Both the RSS and UAH products interpolate over swath gores, and both exclude high-elevation areas where emissions from the earth’s surface corrupt the brightness data.
  5. The satellite record covers a period in which there was a major change in sensor type at a specific date: the shift from MSU to AMSU in May of 1998.

There are other differences between the datasets, but for now I will not focus on them.

Figure 2 illustrates the difference between the BE global and RSS global anomalies. Since they use different baseline periods, the series were shifted to a common baseline; this shift does not change the difference in trends between the series. Two linear regressions were performed on the difference series. A slope of 0 would indicate that the difference between the series is not changing over time.

The green line below represents the slope of the difference between BE and RSS over the MSU period, 1979 through May 1998. The red line represents the slope over the entire series: roughly 0.05C per decade. (A sketch of this calculation follows Figure 2.)

Figure 2
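
For readers who want to reproduce this kind of number, here is a minimal sketch of the two regressions in Python. The synthetic diff series and all names here are illustrative stand-ins of my own, not the actual BE or RSS data:

    import numpy as np

    def trend_per_decade(y):
        """OLS slope of a monthly series, returned in degrees C per decade."""
        x = np.arange(len(y)) / 120.0           # months expressed as decades
        slope, _ = np.polyfit(x, np.asarray(y, float), 1)
        return slope

    # Synthetic stand-in for the BE minus RSS difference series (not real data):
    rng = np.random.default_rng(0)
    months = 444                                # Jan 1979 through Dec 2015
    diff = 0.05 * np.arange(months) / 120.0 + rng.normal(0, 0.05, months)

    msu_months = 233                            # Jan 1979 through May 1998
    print("MSU-period slope :", trend_per_decade(diff[:msu_months]))  # green line
    print("Full-period slope:", trend_per_decade(diff))               # red line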

As the figure suggests, during the MSU period the two records do not differ: they both measured the same amount of warming. The principal difference between the records arises after the inclusion of AMSU data. Unfortunately, the switchover to AMSU happens at an inopportune time, which complicates diagnosing the difference as merely a change-of-instrument issue. However, on its face the data suggests that further investigation of this transitional period is warranted. In other words, the data says dig here.

The difference in coverage and time of observation is more difficult to assess unless we turn to actual temperature fields. UAH does not publish absolute temperatures: they compute them, but don’t publish that data. RSS absolute temperature fields are available; a sample month is shown below to illustrate the spatial coverage of the data. White areas have no data.

Figure 3

The RSS field is not global. There is no data at the poles and none at high land elevations (see the white gores near the Andes and Tibet). RSS temperature fields are produced on a 2.5 degree equal-angle grid, while BE uses a 1 degree equal-angle grid. Both were resampled to a ½ degree grid to maintain the resolution of the data during raster algebra operations, and then the BE field was masked to match the RSS field: where RSS has no data, the data is removed from BE so that an apples-to-apples comparison can be made. (A sketch of this resampling and masking step follows Figure 4.) Also note that the BE fields are generally warmer. The reason is that BE samples a layer at the surface, whereas RSS estimates the temperature of a volume of air from the surface to several kilometers up; the bulk of the weight is around 700 hPa.

Figure 4
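
The resampling and masking step can be sketched as follows. This is a simplified nearest-neighbour version in Python; the upsample helper and the random placeholder grids are my own illustration, not the raster tooling actually used:

    import numpy as np

    def upsample(field, factor):
        """Nearest-neighbour upsampling of a lat-lon grid by an integer factor
        (a crude stand-in for proper raster resampling)."""
        return np.kron(field, np.ones((factor, factor)))

    rng = np.random.default_rng(1)
    rss = rng.normal(280.0, 10.0, (72, 144))    # 2.5 degree grid
    rss[:6, :] = np.nan                         # e.g. missing data near a pole
    be = rng.normal(285.0, 10.0, (180, 360))    # 1 degree grid

    rss_half = upsample(rss, 5)                 # 2.5 deg / 5 = 0.5 deg
    be_half = upsample(be, 2)                   # 1.0 deg / 2 = 0.5 deg

    # Remove BE data wherever RSS has none, for an apples-to-apples comparison.
    be_masked = np.where(np.isnan(rss_half), np.nan, be_half)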

To compare the masked fields, and to determine whether coverage differences between the products have an effect, the masked temperature fields were integrated over space and a monthly anomaly was calculated for both using the base period 1979-2015. (A sketch of this step follows Figure 5.) The RSS curve, of course, does not change, while the trend in the BE curve is diminished slightly as a result of matching the RSS coverage.

Figure 5
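
A sketch of the integration and anomaly step in Python. The cos(latitude) area weighting and the function names are my assumptions about the details, not a description of the actual pipeline:

    import numpy as np

    def global_mean(field, lats):
        """Area-weighted mean of a lat-lon field, ignoring missing (NaN)
        cells. lats: grid-cell center latitudes, length field.shape[0]."""
        w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field)
        good = ~np.isnan(field)
        return np.nansum(field * w) / w[good].sum()

    def monthly_anomalies(series):
        """Anomalies relative to the mean annual cycle over the base period
        (here the whole record, 1979-2015), assuming a January start."""
        s = np.asarray(series, float).copy()
        for m in range(12):
            s[m::12] -= s[m::12].mean()
        return s

    # Usage sketch: series = [global_mean(f, lats) for f in masked_fields]
    # followed by anoms = monthly_anomalies(series).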

The difference between the series does not appear to result from a difference in coverage, and we still see a difference between the MSU and AMSU periods.

Figure 6

Next we turn to comparisons over land and over ocean. The primary reason for doing this comparison is that land temperatures in surface products are measured twice a day, whereas satellite products are adjusted to a single time of day. Also, the satellite approaches make different assumptions about sensor returns over land versus sensor returns over the ocean. We start with the land fields.

Figure 7

Figure 8

Integrating the fields, turning them into anomaly series, and differencing yields the following:

Figure 9

Figure 10

The masked land series shows a trend difference that is more than twice as large: 0.113C per decade versus 0.05C. However, the AMSU effect is still apparently present. Turning to the ocean data, we see the following:

Figure 11

Figure 12

Figure 13

Figure 14

We should recall that the ocean represents over 70% of the surface of the planet. Over that portion of the planet the satellite series shows much better agreement with the surface data. The trends (in C per decade) are summarized below:

                  Global    Ocean     Land
Berkeley trend    0.170C    0.130C    0.278C
RSS trend         0.123C    0.107C    0.165C
Difference trend  0.047C    0.023C    0.113C

Further, we also note that the AMSU effect is still present. Over the 1979-1998 period there is effectively no difference between the rate of ocean warming measured by MSU and the rate of warming in the SST measurements. However, there is a difference once we include the AMSU data in the satellite series.

The analysis above suggests two lines of inquiry: a more comprehensive look at MSU and AMSU differences, and a more comprehensive look at how temperatures over land are estimated.

There are two aspects of land surface measurements that we can look at in this analysis. The first has to do with time of observation. The land data is measured twice a day: once at Tmax and once at Tmin. Satellite data is not measured at any consistent time; consequently, RSS adjusts its data to represent the temperature at local noon. For example, if the satellite overpass was at 8:49 AM, the data is adjusted to represent a “local noon” measurement. RSS does this by applying a diurnal adjustment taken from GCM results (a schematic sketch follows below). For the next series of charts, we will compare the trend in BE Tmin with the RSS “local noon” trend, and the trend in BE Tmax with the same RSS trend. The concern here is as follows: the trends in Tmax and Tmin are not the same. Over this period the BE Tmax trend is 0.3C per decade, while the Tmin trend is 0.22C per decade; averaged, they come out to a trend of roughly 0.27C per decade. RSS, however, only measures the trend at local noon.
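
Schematically, such an adjustment works like this. This is an illustration only: the real RSS adjustment is derived from CCM3 model fields and is considerably more involved, and the diurnal cycle below is invented:

    import numpy as np

    def to_local_noon(t_obs, hour_obs, diurnal_cycle):
        """Shift an observation made at hour_obs to an equivalent local-noon
        value, using a model-derived mean diurnal cycle (deviations from the
        daily mean, indexed by hour)."""
        return t_obs - diurnal_cycle[hour_obs] + diurnal_cycle[12]

    hours = np.arange(24)
    cycle = 4.0 * np.sin(2 * np.pi * (hours - 9) / 24)  # made-up cycle, peaks ~3 PM

    # e.g. a reading of 14.0 C from a 9 AM overpass (close to the 8:49 example):
    print(to_local_noon(14.0, 9, cycle))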

Figure 15

Figure 16

Land only         Tave      Tmax      Tmin
Berkeley trend    0.278C    0.309C    0.225C
RSS trend         0.166C    0.166C    0.166C
Difference trend  0.122C    0.143C    0.059C

One thing that has been suggested is that the difference between RSS and the land records has to do with UHI. Since UHI typically impacts nighttime temperatures rather than daytime temperatures, we might expect to see larger differences in the Tmin comparison. We don’t. In summary, the primary difference between RSS and BE is over the land and not over the ocean. And on land the difference is greater if we look at Tmax rather than Tmin, suggesting that UHI is probably not a sufficient explanation for the difference between the series. The fact that the MSU-era trend differences are small reinforces this point.

In reading through the RSS documentation, one other assumption got my attention: an assumption of constant emissivity. In order to estimate the temperature, the RSS approach depends upon an assumption that the emissivity of the earth is constant over time. However, given the changes in land cover over the period in question (1979-present), we know this is not strictly true. Cities change emissivity. Greening of the planet changes emissivity. And changes in snow cover can change emissivity.

The next question was: is there any spatial pattern to the difference between the two records? To answer this, the monthly difference between the fields was calculated on a spatial basis. The pattern that emerged suggests another avenue for investigation. When we difference the BE fields and the RSS fields we immediately note areas with temperature inversions. RSS is effectively estimating the temperature of the air kilometers above the surface, and in general that air is colder than the surface. However, when we difference RSS with BE over land we find areas where the temperature at altitude is warmer than the surface. As an example I have taken radiosonde data for a single location as an illustration of what an inversion pattern looks like. Below I have plotted the sonde temperature at 00Z (UTC) and 12Z, along with the BE Tmin, Tmax and Tave records. The RSS temperature has been forced to fit the line at around 750 hPa. Since RSS measures the bulk air column and not a specific pressure level this is an illustration only, but it highlights the difficulties inherent in comparing a satellite product that integrates over an entire volume at changing times with a sonde record that captures air temperatures at discrete levels at different times, and with a surface product that captures temperature twice a day.

Figure 17

Globally, we see the following areas where RSS suggests a temperature inversion. Red areas are locations where there was never a temperature inversion over the entire record. Blue depicts regions with at least one month showing a temperature inversion. The crosses depict the sonde stations whose records are complete enough for meaningful comparisons.

Figure 18

Below is a gray-scale version of the percentage of time a particular area has temperature inversions, according to the RSS comparisons with BE. (A sketch of how the inversion mask and this percentage map are built follows Figure 19.)

Figure 19
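
Both maps fall out of a single “brick” of BE minus RSS differences (month x lat x lon). A sketch in Python, where the random diff array is a placeholder standing in for the real difference fields:

    import numpy as np

    rng = np.random.default_rng(2)
    # Placeholder BE minus RSS differences in absolute temperature;
    # negative cells mean RSS (aloft) is warmer than the surface.
    diff = rng.normal(8.0, 6.0, size=(444, 72, 144))

    inverted = diff < 0                            # per-month inversion flags
    ever_inverted = inverted.any(axis=0)           # blue regions in Figure 18
    never_inverted = ~ever_inverted                # red regions in Figure 18
    pct_inverted = 100.0 * inverted.mean(axis=0)   # grey-scale map in Figure 19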

Using the inversion mask, we can then compare the areas where there are no temperature inversions to those where there are. First, the non-inversion areas.

Figure 20

Figure 21

The data here suggests the following: as with the ocean, there is little difference between land temperatures and satellite temperatures in these areas, and the differences that do exist are confined to the AMSU period of coverage. Over the ocean, and over land where there are no temperature inversions, during the MSU period there is no difference between what the satellites estimate and what the surface shows. This comparison hardly argues for a systematic bias in surface products. Turning to areas where RSS indicates a temperature inversion, we see the following.

Figure 22

These results are summarized in the following table.

                  All Land    No Inversion    Inversion
BE trend          0.278C      0.152C          0.48C
RSS trend         0.166C      0.115C          0.257C
Difference trend  0.122C      0.037C          0.223C

And finally:

Figure 23

At this point we can point to a couple of areas that merit further investigation. The first is the transition from MSU to AMSU. The transition occurred at an inopportune time (the middle of 1998), basically in the middle of a temperature spike, and the differences between MSU and AMSU may turn out to be unimportant. Given the new RSS dataset, this will be an interesting metric to recompute. The data also suggests that differences in land temperatures dominate, in particular in those areas where temperature inversions occur. Those are the areas I will dig into in the new RSS data.

Hat tip to Eli and Tamino for inspiring this line of inquiry.


Responses to Guest post: Surface and Satellite Discrepancy

  1. Sam taylor says:

    Could we get an acronym buster in? I’m sure there’s one or two in there I’ve forgotten.

  2. MikeH says:

    Carl Mears now has a blog post up explaining the reasoning behind the changes in v4.0

    “The RSS Middle Tropospheric Temperature Now V4.0”
    http://www.remss.com/blog/RSS-TMT-updated

  3. Interesting. I hope you are doing this for fun. If you find the reason, WUWT will call it an “excuse”.

    Three first thoughts for further work.

    1) Part of the difference will be because the tropospheric temperature responds more strongly to El Nino than the surface temperature does. It might be easier to see other reasons if you adjust the data for ENSO.

    2) Rather than looking at the trend for the MSU period and the full period, I would “decompose” the differences by looking at the trend for MSU, the trend for AMSU and the difference in mean between the two periods. That is also something you could again make spatial maps of.

    3) Tmean is likely more reliable than Tmin and Tmax, especially in this period where we are making the transition to Automatic Weather Stations (AWS). The response time of AWS is often different from observations made in Stevenson screens (this leads to higher Tmax and lower Tmin) and AWS have different radiation errors, depending on design and climate. So Tmax and Tmin are interesting to get an idea about the reasons for the differences, but I would focus the analysis on Tmean.

    I did not understand how your inversion detection works.

  4. I did not understand how your inversion detection works.

    Consider the BE temperatures to be represented by a cube: Lat, Lon, Date.
    Consider the RSS temperatures to also be a cube.

    Then simply compute the difference BE - RSS for every Lat, Lon, Time: basically the difference between the temperature estimated by RSS (aloft) and the temperature given by BE (at the surface). Areas where RSS is warmer than the surface show up as negative numbers.

    You end up with a “cube” or “brick” of differences: Lat, Lon, Time.

    To create the mask, you mark a grid cell as “inversion” if during any month there was an inversion. The percentage map is created the same way: 100% means every month of the record has that grid cell “inverted”, i.e. RSS warmer than the surface.

    I started down the path of doing some comparisons to Sonde data in that region, but got sidetracked.

    One issue, of course, is the accuracy of the RSS temperature estimation, as the “inversion areas” fall out of that directly.

  5. Ethan Allen says:

    Great effort Steve.

    We may not see an updated RSS v4.0 TLT product (don’t ask me why, because I don’t know, but can only hazard a guess or three).

    (1) The ‘misuse’ of the RSS v3.3 TLT product by the likes of Monkers, Lamar Smith, Judith Curry and Ted Cruz, et al. (politicization of data), as well as the UAH group’s attempts to “copy” (I do need to be really careful, but there does appear to be some ‘there’ there) the RSS v3.3 data product via their v6.0 product (a veiled attempt to create a ‘gold standard’).

    (2) The RSS website currently reads as something along the lines of RSS v3.3 will continue alongside the two new RSS v4.0 products or some such. Only time will tell.

    (3) I’m probably reading too much into the title of the new RSS paper, “Sensitivity of satellite-derived tropospheric temperature trends to the diurnal cycle adjustment”, which suggests that these types of satellite measurements are indeed very ‘trend line’ sensitive to various input assumptions.

    You should look into a similar analysis of the UAH v5.6 and v6.0 data products (I sort of expect v6.0 to be much more similar to the RSS v3.3 than the UAH v5.6 product is to the RSS v3.3 product).

    NOAA STAR has a gridded absolute data product, updated monthly, for TMT. RSS v3.3 and v4.0 have TMT products (gridded for both?). UAH v5.6 and v6.0 have TMT products. All groups have TMT products. IMHO, that’s where we should be looking.

    So much time and effort is/has/will be wasted on TLT products. Products never intended to be metrics for Earth’s surface boundary layer.

    Everett F Sargent

  6. “3) Tmean is likely more reliable than Tmin and Tmax, especially in this period where we are making the transition to Automatic Weather Stations (AWS). The response time of AWS is often different from observations made in Stevenson screens (this leads to higher Tmax and lower Tmin) and AWS have different radiation errors, depending on design and climate. So Tmax and Tmin are interesting to get an idea about the reasons for the differences, but I would focus the analysis on Tmean.”

    Ya, what I was hoping to find in Tmax wasn’t there. Had RSS followed the Tmax trend better than the Tmin trend, that would have pointed to some interesting questions. Pretty much a dead end, but I wanted to present all the stuff I was looking at.

    The other problem is that the Tmean trend is different from the Tmin and Tmax trends, and one could argue that RSS is going to be closer to a Tmax trend since they adjust to local noon. That was my thought at least.

  7. “(2) The RSS website currently reads as something along the lines of RSS v3.3 will continue alongside the two new RSS v4.0 products or some such. Only time will tell.”

    Argg. I was hoping they would deprecate it. With GHCN I continually get questions about GHCN v2 and USHCN, despite the deprecated status of both. I can understand keeping an archived version, but at some point I would hope they settle on one.

    “(3) I’m probably reading too much into the title of the new RSS paper, ‘Sensitivity of satellite-derived tropospheric temperature trends to the diurnal cycle adjustment’, which suggests that these types of satellite measurements are indeed very ‘trend line’ sensitive to various input assumptions.”

    A) Yes, see Mears 2011.
    B) Folks are only becoming aware of the structural uncertainty. Back when AR5 was in draft I was really impressed by some of the things Peter Thorne was writing, so I’ve become more sensitive to the issue. RSS are to be applauded for showing sensitivities to analysts’ choices.

    “You should look into a similar analysis of the UAH v5.6 and v6.0 data products (I sort of expect v6.0 to be much more similar to the RSS v3.3 than the UAH v5.6 product is to the RSS v3.3 product).”

    Ya, I just found the UAH annual cycle so I can create temperature fields. Ordinarily I would work in anomalies, but I wanted to impress upon folks (read: skeptics) that these two metrics measure different things. A happy fallout of that was seeing the “inversion” areas.

  8. Could we get an acronym buster in? I’m sure there’s one or two in there I’ve forgotten.

    BE: Berkeley Earth
    RSS: Remote Sensing Systems
    UAH: University of Alabama in Huntsville
    Tmax: maximum temperature
    Tmin: minimum temperature
    MSU: Microwave Sounding Unit
    AMSU: Advanced Microwave Sounding Unit
    UHI: urban heat island

    I think that is all of them

  9. Ethan Allen says:

    Steve,

    What VV said (particularly point 1).

    Also, as to the ‘Inversion’ product, semantics or nomenclature, but most people might assume its classical atmospheric definition.

    I am sort of hoping that latitude-, longitude- and elevation-dependent curves are used for their channel definitions and channel technologies (if not, then oh boy) over the ~30% of Earth that is land. A lot of this stuff is not immediately obvious at first glance (I need to read some prior art, so to speak).

    “1) Part of the difference will be because the tropospheric temperature responds more strongly to El Nino than the surface temperature does. It might be easier to see other reasons if you adjust the data for ENSO.”

    Ya, Eli has something interesting posted up. That’s probably one of the next things for me to look at.

    Nice work, poor timing 🙂 I think many of these issues will greatly decrease with RSS V4. Many of us suspected an RSS revision was in the works and that it would be based on Po-Chedley et al. 2015. Sure enough, Mears acknowledges that they based V4.0’s diurnal adjustments on the work of Stephen Po-Chedley, Tyler J. Thorsen, and Qiang Fu, 2015: Removing Diurnal Cycle Contamination in Satellite-Derived Tropospheric Temperatures: Understanding Tropical Tropospheric Trend Discrepancies. J. Climate, 28, 2274-2290.

    In Mears’ response to criticism of V4.0, I like the little dig at S&C that you can find in the very last section:

    “Datasets used for comparison in this post and the V4.0 paper are available as below:”
    ….
    University of Alabama, Huntsville Data: No relevant paper has been published.”

  12. Ethan Allen says:

    Steven,

    Yes. Eli got me going on the higher frequency similarities. Thanks Eli.

    I misspoke in one of my recent posts over there at RR: I was NOT using IIR filtering (that one becomes very tricky for short-duration data); I was using bog-standard LOESS FIR filtering via commercial-off-the-shelf (or COTS in military lingo) SW (OriginPro 2016 and/or Matlab 2015b), then doing a whole bunch of residual autocorrelations versus LOESS span (% but I prefer using N). Found what I thought was the optimal N versus residual R^2 (surface temperature indices formed one comparison subset, likewise satellite). That lets me see the somewhat higher-frequency structural differences, and it’s on to FFT characterization.

    Just don’t do 1st (slope) and 2nd (acceleration) 9-point FD (finite difference) stencils on the LOESS curves (is that a feature or a bug of the LOESS algorithm? After all, it is only 2nd-order correct in the strictest sense).

    Finally, I did lagged (in both directions, for causality) autocorrelations between the surface and satellite groups. That one showed some promise (R^2 ~ 10% higher for, say, a 2-10 month lag, with satellite following surface on the timeline), but I was thinking, now I’m getting in a bit over my head: I only have monthly resolution, I don’t know if it was real or an analysis artifact, I need a statistical test, and I don’t have one. I’m just an olde tyme OLS stats person.
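
    (For illustration, the lag search described above might be sketched in Python as follows; the synthetic series and the LOESS call are stand-ins of mine for the OriginPro/Matlab workflow actually used:)

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        def lagged_r2(surface, satellite, lag):
            """R^2 with the satellite series shifted lag months later
            (positive lag: satellite follows surface)."""
            a, b = (surface[:-lag], satellite[lag:]) if lag else (surface, satellite)
            return np.corrcoef(a, b)[0, 1] ** 2

        rng = np.random.default_rng(3)
        n = 444
        surface = np.cumsum(rng.normal(0.0, 0.1, n))
        satellite = np.roll(surface, 4) + rng.normal(0.0, 0.1, n)  # built-in 4-month lag

        smooth = lowess(surface, np.arange(n), frac=60.0 / n, return_sorted=False)
        resid = surface - smooth                  # residuals, as in the span-selection step
        best = max(range(12), key=lambda k: lagged_r2(surface, satellite, k))
        print("best lag (months):", best)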

  13. Steve, good job. I applaud your investigation.

    One thing about the difference in areas of strong ( and deep ) inversions – those polar areas are the same areas modeled to have a decrease in warming with height.

    From roughly 90S to 60S and 60N to 90N, less warming with height.
    From roughly 60S to 60N, greater warming with height ( the Hot Spot ).

    The polar regions appear to be behaving as modeled; it’s the mid-latitudes and tropics that are not.

  14. JCH says:

    With respect to the El Nino spike sometimes seen, do people agree or disagree that there is also an occasional La Nina “super dip”? Post 2005, I think ONI is net negative in spite of the 2015-2016 El Nino.

  15. ehak says:

    From Mears 2009:

    http://journals.ametsoc.org/doi/abs/10.1175/2008JTECHA1176.1

    “The diurnal adjustment for AMSU5 is about 40% larger than that for MSU2 for the same crossing time. This is because 1) the surface contribution for AMSU5 is about 35% larger than MSU2 and 2) the AMSU5 weighting function has more weight near the bottom of the troposphere where the diurnal cycle is large over land areas. It is possible that significant errors are present in the CCM3-derived diurnal cycles, since errors have been demonstrated to be present in the diurnal cycle of cloud cover and precipitation, and the diurnal cycle in near-surface air temperature appears to be too small in the model (Dai and Trenberth 2004).”

    That is what RSS4.0 is about. A new diurnal cycle adjustment. Even more necessary to get right with AMSU than MSU. The noise being made for NOAA-14 vs NOAA-15 from Spencer & Christy is rather irrelevant here. Does not change much. RSS keep both in v3.3 and v4.0. Spencer & Christy throw out NOAA-14. And that is probably an unwise decision. Why should NOAA-14 suddenly become unreliable after 1998? AMSU4 and AMSU6 on NOAA-15 both (yes – even AMSU6) have higher trend than AMSU5 on NOAA-15. Why trust AMSU5 on NOAA-15 when that is the outlier compared to MSU2 on NOAA-14 AND AMSU4 and 6 on NOAA-15?

    The big mystery here is how Spencer & Christy first argued, a year ago, that RSS got the diurnal adjustment wrong, but then started to apply a diurnal adjustment similar to the one they opined against. I believe the biggest smoking gun for a wrong diurnal adjustment is the difference between TMT over land and measured temperature at 2 meters. That is where that adjustment matters most. There is good agreement between MSU and land temperature before 1998, not after. Why should land measurements suddenly be wrong after 1998 and not before?

    Very unlikely.

    Does permafrost have a different emissivity from unfrozen ground? Does snow cover alter emissivity? Both things have changed enormously over NH land. Where does the information on the skin temperature of the surface come from?

  17. paulski0 says:

    I’ve been digging into UAH v6beta5, focussing on NH Extratropical land (20N-90N), versus Berkeley Earth with the same parameters. There were seasonal differences in the relationship between surface and troposphere. Summer (JJA) appeared to show quite a close inter-annual variability match, so I felt focussing on that would control somewhat for divergences arising from variability responses. The difference plot for UAH versus Berkeley shows a fairly absurd step change around 2000. Also shown is the equivalent comparison to the previous beta 4 UAH version.

    A difference plot comparing NH Ext monthly anomalies between beta 5 and beta 4 offers a revealing glimpse into the sausage factory.

    Roy Spencer described the reason for and effects of the beta5 update. Essentially they were trying to fix unlikely cooling trends over Greenland and the Himalayas, and some apparent discontinuities, by adjusting the reference Earth incidence angle for AMSU data. While it may have fixed what they were looking at, they seem to have introduced another big discontinuity in NH extratropical land temperatures. Immediately made me think of this: https://www.youtube.com/watch?v=8mdwAkWvWMw (apologies for poor quality).

    I think RSS miss out those high altitude regions in TLT, perhaps because of these calibration issues. But that brings up the question which seems to be an underlying issue here: is it realistic to have a single global calibration for all regions of the planet?

  18. paulski0 says:

    Dammit, this is the UAH – Berkeley difference plot. And this is the difference plot for UAH v6 beta 5 versus beta 4.

  19. JCH says:

    When I add up the ONI numbers from 1979 thru 1998, I get 44.3, which is a dominance of El Nino and El Nino-leaning ENSO neutral over La Nina and La Nina-leaning ENSO neutral.

    The same number for 1999 to present is -16.4, but if I eliminate the current El Nino with some additional cherry picking, it’s -42.3.

    So I still think it is possible the satellite series, which were programmed during a period of El Nino dominance, are not programmed to accurately measure a period of La Nina dominance. I know it’s highly likely I am wrong about this, but I do look forward to you guys getting this sorted out.

  20. they were trying to fix unlikely cooling trends over Greenland and the Himalayas,

    Yeah, I’m wondering if that was wrong to fix – perhaps air over Greenland does have a cooling trend, or at least, less warming than the global mean.

    The majority of Greenland surface is more than 2000 meters MSL, and perhaps a quarter is higher than 3000 meters. That terrain imposes similar temperature profiles to the South Pole, where it is demonstrable that increased CO2 causes more energy to escape to space, not less. That effect is not year round, being greatest in winter, but even in summer, when RF is positive, the RF over the South Pole is less than the global mean. The high terrain of Greenland ( and perhaps also the Himalayas ) may exhibit similar effects. I will run a test and ping here.

    Spencer might do well to ignore preconceived notions of what the temperature should be.

  21. Kevin O'Neill says:

    TE: Central Antarctica is the only place on Earth where surface temperatures are regularly colder than those in the overlying stratosphere. It has been suggested this increases the amount of heat escaping into space; it is unique to central Antarctica. Mangling the science seems to be a habit of yours.

    “Essentially they were trying to fix unlikely cooling trends over Greenland and the Himalayas, and some apparent discontinuities, by adjusting the reference Earth incidence angle for AMSU data.”

    #########################

    I have seen some stuff in their code where they fiddled around with excluding areas above 1000m and 1500m ASL.

    I should probably load up UAH data and look at the same things I looked at in RSS.

  23. geoffmprice says:

    Just want to chip in thanks to Steven for writing up and sharing the analysis, very interesting explorations.

    “Does permafrost have a different emissivity from unfrozen ground? Does snow cover alter emissivity? Both things have changed enormously over NH land. Where does the information on the skin temperature of the surface come from?”

    Permafrost? I’ve seen a couple of articles that suggest it does have a different emissivity.
    Snow: yes, it has a higher emissivity.

    Skin temperature? UAH and RSS don’t use skin temperatures.

    In some cases skin temps (say from MODIS) are inferred from the surface reflection AND an assumed emissivity based on land class. Other sensors (IR) would do a more direct measure (ASTR for example, recalling from memory).

    Here are the sections that got my attention:

    For RSS, the brightness temperature (Tb) is a function of the surface temperature and the air temperature at altitude z, integrated through the entire column (0 to z).
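
    In schematic form (my notation, not the exact RSS formula; assuming a non-scattering atmosphere):

        T_b \approx w_s T_s + \int_0^\infty W(z) T(z) dz

    where w_s is the surface weight and W(z) is the temperature weighting function described in the quotes below.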

    “The surface weight and the temperature weighting functions are dependent on the atmospheric absorption coefficient α(z) as a function of height z, the surface emissivity e_s, and the Earth incidence angle θ (Ulaby et al., 1981). The surface weight is given by the product of e_s and the attenuation from the surface to the top of the atmosphere.”

    “Land surface emissivity was assumed to be 0.9, independent of incidence angle, an approximation which is supported by measurements at 37 GHz and 85 GHz (Prigent et al., 2000). The resulting weighting functions for AMSU5 peak about 500 meters closer to the surface, and the contribution of the surface is increased by about 35% relative to the MSU2 weighting function. Taken together, these changes result in a brightness temperature increase for AMSU5 relative to MSU2 of between 1.0 K and 3.0 K, depending on the surface type and local atmospheric profile. These differences must be removed before the AMSU results can be merged with the previous MSU data.”

    The paper they cite

    Link: 2000_IEEE_emisAMSU.pdf

    A couple of issues here: the paper does support the contention that emissivity is independent of incidence angle, but a constant emissivity of 0.9 for the land surface isn’t correct. I have no idea if it contributes to the problem, but snow is generally going to have a higher emissivity, so if snow cover changes over time that will create something of a false trend in the final product. How big? Dunno yet.

  25. cce says:

    I would think with overlapping MSU/AMSU, radiosondes, meteorological stations, AVHRR SST, ship/buoy SST, and NMAT, there would be enough information to homogenize this data. Even if no single data source is perfect, there should be enough information to tease out break points and sensor/measurement drift with the right algorithm.

    I would think with overlapping MSU/AMSU, radiosondes, meteorological stations, AVHRR SST, ship/buoy SST, and NMAT, there would be enough information to homogenize this data. Even if no single data source is perfect, there should be enough information to tease out break points and sensor/measurement drift with the right algorithm.

    ###############################

    Ya. It’s a fairly complicated problem. I would think it would take a couple of different experts.
    I’d say Thorne, Mears, Kennedy and Hausfather.

    I like to give homework.

  27. cce says:

    That’s the right team.

  28. niclewis says:

    Steven, Thanks for carrying out and writing up this interesting and informative analysis. I was aware of the temperature inversion issue, but I had not seen it quantified.

  29. Franktoo says:

    Steve: Very interesting work. How about a fundamental question: Why do we expect surface temperature (2m over land, SST for ocean) to show the same temperature and temperature trend as the lower troposphere measured by MSU’s?

    1) Obviously we don’t expect the same temperature: they differ by about 10 degC. You are looking at tenths of a degC change in that difference over decades! We only expect a constant temperature difference when we have a constant lapse rate. You are looking into inversions (in lapse rate), but the problem is far more complex than that. For example, the atmosphere warms more than the surface during El Ninos. It isn’t obvious that the trend of UAH/RSS SHOULD AGREE with that of surface measurements to less than a tenth of a degC per decade over several decades. Consider the change over decades caused by the AMO, for example.

    2) You don’t mention anything about the planetary boundary layer, which expands during the day due to convection, shrinks at night, and varies in thickness with the speed of the wind. I don’t have a very clear idea how much of the air RSS/UAH samples is boundary layer and how much is free atmosphere. If I remember correctly, some long-term changes in wind have been reported. Changes in wind are certainly a part of El Nino phenomena. https://en.wikipedia.org/wiki/Planetary_boundary_layer

    3) We are constantly measuring the temperature in thousands of locations. When the environment is well-mixed, they will all tend to produce the same temperature – with a constant lapse rate (and latitudinal gradient and seasonal change). When they aren’t well-mixed and heat capacity is low, you will find extremes. This is the case for land surface temperature when there is little wind. It might be interesting to compare the differences between RSS and surface records in well-mixed regions (the Roaring Forties) and calm regions (the horse latitudes).

    It may well be there is a systematic error in one or more of our methods for measuring temperature: a) In RSS/UAH, as suggested by the MSU/AMSU change or the diurnal corrections (which are essential given the daily change in the planetary boundary layer). b) In integrating the changing sources of SST data. c) Dealing with discontinuities in the surface temperature record. Detecting trends of 0.1 degC/decade is challenging for all three methods. If all the diurnal corrections and sensor change corrections have been made correctly, RSS/UAH has the advantage of sampling the temperature of a large mass of well-mixed air with the same technology over more than three decades.

  30. Franktoo says:

    Steve: I can see how the AMSU/MSU switch could produce a step-function change in temperature, but such step-functions are corrected with each sensor change. If the sensor change were producing a change in trend, then the slope of the signal vs temperature plot must change. If that happened, there would be a massive change in the latitudinal temperature gradients produced by the two types of sensors – there are massive latitudinal temperature gradients on the planet – 100X bigger than trend differences you are analyzing. So there may be a simple test that will eliminate this possibility.

  31. Frank,
    Can you please stop peddling?

  32. Geoff Sherrington says:

    Ethan Allen says: March 9, 2016 at 1:30 am
    Finally, I did lagged (in both directions for causality) autocorrelations between the surface and satellite groups, that one showed some promise (R^2 ~ 10% higher for say 2-10 month (satellite follows surface on the timeline)), …..

    Thanks for that. I was about to post that visually, over ocean, RSS lags BE by about 4 months on this monthly data, particularly after about 2008, but elsewhere, and was about to suggest a look at lagged correlations. At this time I have no idea to offer on a mechanism and am confining this comment to pattern matching.

  33. Geoff Sherrington says:

    Steven Mosher says: March 9, 2016 at 12:48 am re further work

    Thank you for a detailed effort to pull some meaningful signal from the noise. There comes a time when the noise overwhelms the effort.
    Couple of Q if I might:
    1. Fig 12 has different colour reference bars for BE and RSS, which makes comparison harder.
    2. Much of the difference BE v. RSS in the early part of your analysis is dragged around by northern Siberia and Greenland. Later, the inversion mask of fig 19 picks out these areas also. Inversion as you describe is one possibility. Are you concerned about the land data quality in the BE input here?
    3. Is your ultimate objective to produce a once-for-all-time homogenisation? I wish you well, when factors like the change from thermometers to MMTS-type sensors, the full significance of UHI measured by multiple sensors around towns simultaneously, and other diverse goodies seem to pose a limit on final data quality, no matter how much homogenisation you do.
    4. As you know, I’ve studied the Australian record for years and conclude that there is NO WAY I would use the homogenised ACORN-SAT product.
    Cheers Geoff

  34. Greg Goodman says:

    In reading through the RSS documentation there was one other assumption that got my attention: an assumption of constant emissivity. In order to estimate the temperature the RSS approach depends upon an assumption that the emissivity of the earth is constant over time. However, given the changes in landcover over the period in question (1979-Present) we know this is not strictly true. Cities change emissivity. Greening of the planet changes emissivity. And changes in snow cover could change the emissivity.

    But the satellites are not measuring the ground temperature; they are measuring the air mass at considerable height above the ground. You are talking about the emissivity of the wrong thing. There may be something to be said about this assumption, but it is not related to land usage.

  35. Greg,
    Steven can correct me if wrong, but I think the point is that what is measured on the satellite depends on the emissivity of the surface.

  36. Greg Goodman says:

    Further, we also note that the AMSU effect is still present.

    The principle difference between the records arises because of differences that occur after the inclusion of AMSU data.

    You are asserting that this is the cause of the change, not showing objectively that it is the case. As with most trend-obsessed analyses, it all depends upon where you start and finish. You have decided that there is a problem in 1998 and CHOOSE to break your trend analysis at that point.

    I suggest you try again, without leading the eye and your analysis with your preconceived assumptions, and plot the DATA without the trend lines and ask where the divergence starts. It seems to me that the difference plots remain fairly flat to about 2003 or 2005.

    Starting and ending your trend plots at the peak of the largest divergence in the whole data record is a very bad choice. The only reason for doing so is your ASSUMPTION that there is a problem with the change-over. You are starting with that a priori assumption, and the rest of your analysis is based on and biased by that choice.

    This is essentially pre-selection bias.

    Breaking the data down as you have done is a very good idea, to see where the differences are coming from.

    One thing that is obvious from all your difference plots is that the satellite data are more sensitive to change. If you plotted the subtraction the other way around, it would be obvious that the residual is very similar to the time series itself. It may be possible to remove some of this correlation by rescaling the satellite anomalies or applying a very light low-pass filter like a 1-2-1 binomial, or a gaussian with sigma = 2 months. This may help clarify the divergences.

    Your figure 10 shows that the most remarkable divergence is in the general warming that has been building since 2010-2011 where, oddly, the surface record is showing more warming than the satellites.

    This is contrary to the general observation that satellites are more volatile. This is the main feature that deserves more attention if we are seeking to identify differences between the surface and satellite records.

    It would be worth repeating the exercise using UAH extractions since they do not use the GCM based correction but consciously remain with observational data to achieve an equivalent adjustment. Their results are marginally warmer than the traditional RSS you analysed here.

    That the generally more volatile satellite record has shown less warming since 2011 seems worth digging into. This seems to be the main cause of the recent divergence.

    If you plot your fig 21 (inverted) on top of the surface data in fig 20 you will see a strong similarity until the last 5 years or so where they will diverge in opposite directions.
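
    (For reference, the 1-2-1 binomial smoother mentioned above is just a three-point weighted average; a minimal Python sketch:)

        import numpy as np

        def binomial_121(x):
            """One pass of the light 1-2-1 binomial low-pass filter."""
            return np.convolve(np.asarray(x, float), [0.25, 0.5, 0.25], mode="same")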

  37. Greg Goodman says:

    Greg,
    Steven can correct me if wrong, but I think the point is that what is measured on the satellite depends on the emissivity of the surface.

    That is certainly what he is suggesting, but I don’t think that is correct at all. These are tropospheric air temperatures based on microwave emissions from the atmosphere. AFAIK the ground is not part of this calculation at all. That is why they mask out high altitudes where the ground would interfere.

    BTW, could I suggest that you configure WP to allow at least one level of indentation for replies. It avoids parallel comments getting intermingled and the whole thread becoming rather difficult to follow. Usually one or two levels works fairly well.

  38. Greg Goodman says:

    I would think with overlapping MSU/AMSU, radiosondes, meteorological stations, AVHRR SST, ship/buoy SST, and NMAT, there would be enough information to homogenize this data.

    I would think that the overlap has already been carefully studied and calibrated using both the overlap and the radiosonde data. Before suggesting things need “homogenizing” maybe you should look in detail at that overlap and how it was treated.

    IMHO there is far too much “homogenizing” going on in climatology. Sometimes we need to accept that experimental data show different things and stop trying to do post hoc fudging to get nice graphs or agreements that we expect should be there.

    As you quite rightly point out the surface and satellite records are measuring quite different things. The first possibility to consider if they diverge slightly is that this may be information about the system that we should be trying to understand, not “correct”.

    Land and sea are very different systems. Land and air are very different. The volatility of the satellite data is very likely a real physical effect due to the different media, not something that needs “homogenizing”; likewise any divergence may also be physical.

  39. Greg Goodman, you may wish to read the following posts by Tamino:
    Ted Cruz Just Plain Wrong
    El Nino and Satellite Data

    Tamino put this together comparing RSS to balloon observations:

    And there’s this via Nick Stokes:

    The divergence is very clear starting in the 1998-2000 period.

  40. Greg,

    There are tropospheric air temperatures based on microwave emissions from the atmosphere.

    Yes, but I think this somehow depends on surface emissivity. Victor or Steven can clarify.

    BTW, could I suggest that you configure WP to allow at least one level of indentation for replies.

    You can suggest it, but I’m quite happy with it as it is. I used to have nested comments and it was suggested changing it to this, and it’s worked pretty well.

    IMHO there is far too much “homogenizing” going in climatology. Sometimes we need to accept that experimental data show different things and stop trying to do post hoc fudging to get nice graphs or agreements that we expect should be there

    IMHO, this indicates that you don’t really understand the underlying scientific principles. If we had started 150 years ago with a measuring system that was designed to determine climatological changes, maybe we wouldn’t need as much homogenizing. Since we didn’t, we probably do. The idea that we just let data speak for itself is naive and not a good scientific practice. You do need to understand what it is you’re trying to measure and how to get that signal out of the data.

    I wasn’t going to respond to your comment here. I’ll leave this for Steven. Can I suggest, though, that you tone down the condescension a little bit.

  41. Greg Goodman says:

    IMHO, this indicates that you don’t really understand the underlying scientific principles.

    I did not say that there is no need for data to be adjusted for the many changes in data collection which have occurred during the last 150 years. I said that there was too much homogenisation: the expectation that all data should be telling the same story and if they don’t they need “correction”. As I underlined it was Steve Mosher who correctly pointed out that these are different measurements and different media. There is no reason why there should not be some small variations that are REAL. The assumption that all datasets should agree is driven by a desire to have a naive, simplistic explanation for the changes. Objectively they should be expected to vary and we may learn something from that if we spend less effort trying to “correct” the data to fit expectations.

    I have a lot of respect for Mosh; he’s smart and generally quite thorough. That does not mean I will not point out where I think he is wrong. This article would have been better done without the trend analysis. This obsession with “trends” is one of the biggest hold-ups in understanding climate. We would do a lot better if we banned the word and forced people to do more than click on the “fit trend” button in Excel or whatever.

    Mosh is capable of complex analysis; the trends are misleading.

    The idea of comparing and breaking the data down is a good one as I said.

  42. Greg Goodman says:

    And there’s this via Nick Stokes:


    “The divergence is very clear starting in the 1998-2000 period.”

    That very clearly depends upon which eye you want to close first. What I see is one ‘divergence’ starting in 1992 and an opposing ‘divergence’ starting in 2003. They now seem to be very rapidly ‘converging’. None of this seems to have any relation to the MSU/AMSU change-over in 1998.

    At least Nick lets the data speak for itself and does not start forcing the eye by fitting arbitrary straight lines to it. That is how it should be shown.

  43. Greg Goodman says:

    Actually, it looks like Nick’s graph there is a plot of rate-of-change trends, and from the form I’d say it’s trends over an ever-shortening period. You seem happy just to look at a graph and convince yourself it fits your prior expectations and conclusions.

    Nick’s comment under that graph:

    Update. UAH V5.6 still shows 2015 as the third hottest year. The first of my plots shows why. The troposphere really is a different place. Temperatures respond very strongly to an El Nino event, but late, in the year following.

    The troposphere really is a different place. Exactly my point. That does not mean it needs ‘homogenizing’ to agree with the surface data.

  44. Greg Goodman says:

    Oops, messed up the quotes. The last para is mine, not Nick’s:

    The troposphere really is a different place. Exactly my point. That does not mean it needs ‘homogenizing’ to agree with the surface data. [Mod: fixed]

  45. Greg,

    I said that there was too much homogenisation: the expectation that all data should be telling the same story and if they don’t they need “correction”.

    I think this misrepresents the motivation behind homogenisation. However, there are reasons why we might expect some similarities: we are trying to measure climate, not weather.

  46. MartinM says:

    Yes, but I think this somehow depends on surface emissivity.

    Yes, it does. Photons don’t come with little tags identifying their source; we can’t detect only atmospheric emissions and screen out surface emissions. Those MSU/AMSU channels which have the bulk of their weight in the troposphere also have significant contributions from the surface.

