Evaluating the predictions associated with climate change just became easier with the development of new statistical methods designed to assess the performance of models producing many of these predictions.
The U.S. Geological Survey (USGS) research involved collaboration with CEED scientists at The University of Queensland (UQ) and the National Snow and Ice Data Center at the University of Colorado Boulder.
CEED Chief Investigator Dr Eve McDonald-Madden of UQ’s School of Geography, Planning and Environmental Management, said the new methods would help ecologists, managers, and policy makers examine the quality of predictions produced by individual or sets of climate models.
“Management agencies and policy makers often use predictive models to help develop and choose intervention strategies, whether those are wildlife management plans, climate adaptation strategies, or even energy policies,” she said.
“Increasingly, sets of models are being used concurrently to represent the scientific uncertainty in the predictions.”
Dr Michael Runge, a research ecologist at the USGS Patuxent Wildlife Research Center in Maryland who led the study, said the research addressed the question: “Are our model sets working?”
“If observations are falling within the bounds suggested by the model, that’s great,” he said.
“But we needed a way to detect when a whole model set might be failing.”
Dr Runge and his colleagues developed statistical methods for evaluating predictions from a single model or a set of models. These methods provide a way to detect failures of a model set.
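The paper's own statistics are not spelled out in this article, but the core idea Dr Runge describes, checking whether observations fall within the bounds suggested by a model set, can be sketched in a simple form. This is a minimal illustration, not the study's method; the function name, the 95% coverage level, and the sea-ice figures are all invented for the example.

```python
import numpy as np

def within_ensemble_bounds(observation, ensemble_predictions, coverage=0.95):
    """Return True if the observation lies inside the central
    `coverage` interval of the ensemble's predictions."""
    lower = np.quantile(ensemble_predictions, (1 - coverage) / 2)
    upper = np.quantile(ensemble_predictions, 1 - (1 - coverage) / 2)
    return bool(lower <= observation <= upper)

# Hypothetical example: five models predict next year's summer
# sea-ice extent (million km^2).
predictions = np.array([4.1, 4.6, 5.0, 5.3, 5.8])

print(within_ensemble_bounds(4.8, predictions))  # observation inside the spread
print(within_ensemble_bounds(2.0, predictions))  # far below every model: a warning sign
```

An observation well outside the ensemble's spread, as in the second call, is the kind of signal that would flag a possible failure of the whole model set.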
The researchers applied these methods to two data sets: one involving predictions of the breeding distribution of northern pintail ducks, and one involving predictions of Arctic sea ice.
The methods suggest that observations of summer Arctic sea-ice extent are falling within the bounds of the current set of climate models, but increasingly favor those models that predict an ice-free Arctic in summer around 2055.
For northern pintail ducks, had the methods been in use, they would have detected a change in the breeding distribution of pintails in 1985, 20 years before the change was actually detected and incorporated into hunting regulations.
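The shift in support toward particular climate models that the article describes is the sort of result a Bayesian model-weighting scheme produces as observations accumulate. The sketch below shows that general idea only; it is not the study's procedure, and the model predictions, prior weights, and Gaussian error scale are all assumptions made for illustration.

```python
import numpy as np

def update_weights(prior_weights, predictions, observation, obs_sd=0.5):
    """Reweight each model by the Gaussian likelihood of the observation
    under that model's prediction, then renormalize."""
    likelihoods = np.exp(-0.5 * ((observation - predictions) / obs_sd) ** 2)
    posterior = prior_weights * likelihoods
    return posterior / posterior.sum()

# Hypothetical: three models predict September sea-ice extent (million km^2).
predictions = np.array([5.5, 4.5, 3.5])   # models A, B, C
weights = np.array([1/3, 1/3, 1/3])       # equal weights before any data

# A low observation shifts weight away from the high-ice model A.
weights = update_weights(weights, predictions, observation=4.0)
print(weights.round(3))
```

Repeating the update each year with new observations gradually concentrates weight on the models most consistent with the data, which is how a set can come to "favor" certain members.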
Early detection of failure of a model set can trigger the work needed to diagnose the failure, build better models, and ultimately, improve the predictions used as the basis of decisions. In the practice of adaptive management, this process is sometimes called “double-loop learning.”
The article, “Detecting failure of climate predictions” by Dr Runge, Dr Julienne Stroeve, Andrew Barrett, and Dr McDonald-Madden, is available in Nature Climate Change online.
Image: Melt ponds in the Arctic. Credit: NASA Goddard Space Flight Center (CC).