By looking at satellite measurements of temperature changes in the lower layer of Earth’s atmosphere, scientists found that climate models may have overestimated the decade-to-decade natural variability of temperature.
Lawrence Livermore National Laboratory (LLNL) statistician Giuliana Pallotta and climate scientist Benjamin Santer created a statistical framework to comprehensively assess the significance of differences between simulated and observed natural variability in mid- to upper tropospheric temperature (TMT). The troposphere is the lowest region of the atmosphere, extending from the Earth's surface to a height of about 4 to 12 miles, depending on latitude and season.
The team found that in current and earlier generations of climate models, the natural decade-to-decade variability of tropospheric temperature is systematically too large relative to estimates of natural variability obtained from satellites. Such an overestimate of natural “climate noise” would make it more difficult to identify a human-caused tropospheric warming signal. The research appears in the Journal of Climate.
“Our findings enhance confidence in previous claims of detectable human-caused warming of the troposphere and imply that these claims may be conservative,” Pallotta said.
Improved knowledge of this tropospheric warming signal, and a better understanding of uncertainties in satellite temperature observations, have helped to advance detection and attribution studies, which assist in unraveling the causes of recent climate change.
Natural internal variability occurs in the absence of any human-caused changes in atmospheric composition. It constitutes the background noise against which any slowly evolving human-caused warming signal must be detected. The study focuses on the spectrum of internal variability, providing information on the partitioning of temperature variability on timescales ranging from months to decades. Such information is a critical component of anthropogenic signal detection studies.
Pallotta and Santer explored the sensitivity of model-versus-data spectral comparisons to a wide range of subjective decisions. These included the choice of satellite and climate model TMT datasets, the method used for separating warming signals from natural variability noise, the range of frequencies considered and the statistical model used to represent observed natural variability.
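As a rough illustration of the spectral-comparison step (a sketch with synthetic data, not the authors' actual pipeline or datasets), one common approach is to remove an estimated warming signal from a monthly temperature anomaly series and then estimate the power spectrum of the residual noise:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Synthetic 41-year monthly TMT anomaly series (a hypothetical stand-in
# for a satellite dataset): a linear warming trend plus AR(1) noise.
n_months = 41 * 12
trend = (0.2 / 120) * np.arange(n_months)   # assumed 0.2 K/decade trend
noise = np.zeros(n_months)
phi = 0.8                                   # assumed lag-1 persistence
for t in range(1, n_months):
    noise[t] = phi * noise[t - 1] + rng.normal(scale=0.1)
tmt = trend + noise

# Crude signal/noise separation: remove a least-squares linear trend.
residual = signal.detrend(tmt, type="linear")

# Power spectrum of the residual "noise" (frequencies in cycles/month),
# estimated with a Welch periodogram using 10-year segments.
freqs, psd = signal.welch(residual, fs=1.0, nperseg=120)
```

In a study like this one, analogous spectra computed from model simulations would then be compared against the observed spectrum over selected frequency bands, such as the one- to two-decade timescales highlighted below.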
“We find that on timescales of one to two decades, observed TMT variability is on average overestimated by the last two generations of climate models,” Santer said. The models analyzed were drawn from the earlier and most recent phases of the Coupled Model Intercomparison Project (CMIP5 and CMIP6).
One of the challenges faced by the researchers is that real-world tropospheric temperature changes represent only a single realization of a human-caused warming signal combined with natural climate variability, and it is difficult to unambiguously separate the two in that one record. The team explored many different ways of achieving this separation in the satellite TMT data. For each signal-and-noise separation method applied, they investigated many different statistical models of the short-term and long-term “memory” of climate noise.
Pallotta noted: “The statistical modeling allowed us to generate many thousands of different plausible estimates of internal climate variability from the single realization of observed climate change. Without the statistical modeling, it would have been tougher to make reliable inferences about the statistical significance of differences between observed tropospheric temperature variability and the temperature variability in climate models.”
The team intends to apply the statistical framework they developed to other climate variables. An obvious next step is to look at surface temperature records, which are nearly three times longer than the 41-year satellite TMT record.
The research was funded by the Department of Energy's Office of Science Regional and Global Model Analysis Program.