24 May 2014
This is a guest post from Sean Sublette, the Chief Meteorologist for WSET-TV in Lynchburg-Roanoke, Va. It gives you an idea of the issues that forecasters face in attempting to communicate a forecast, and the uncertainty that is always present in any scientific prediction.
I’ve thought about it for a few years now. Greg Fishel, Chief Meteorologist at WRAL in Raleigh, mentioned it at a conference a couple of years ago. More recently, I read a supporting post from the blog of Chuck Doswell (who has probably forgotten more about tornadoes than I ever learned).
As meteorologists, we have promised too much: The Deterministic Forecast.
To be sure, the science has made phenomenal progress over the past 50 years. We know so much more thanks to the computer advances that have allowed numerical weather prediction to be successful. The day-to-day forecasts are good, in spite of public perception. Brad Panovich, Chief Meteorologist at WCNC in Charlotte, is among the many who have recently made this point.
The more you know, the more you realize you don’t know
Once computing resources became sufficient to allow regular ensemble forecasting in the ’90s, it reaffirmed an important, if not obvious, point: we cannot perfectly sample the current, instantaneous state of the atmosphere. It is an important point because, all else being equal, the better the initial representation of the atmosphere in our computer simulations (models), the more accurate those simulations will be in their forecasts.
Because of those imperfections in the initial conditions (to say nothing of our approximations and parameterizations), highly precise deterministic forecasts, especially in the mesoscale, will continue to struggle. No surprise there. But this is especially evident when the public wants to know when precipitation will start and stop, especially convective precipitation.
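Why imperfect initial conditions doom precise deterministic forecasts at long lead times can be illustrated with a toy chaotic system. This is only an illustrative sketch, not a weather model: the logistic map stands in for any system where tiny differences in the starting state grow exponentially.

```python
# Illustrative only: the logistic map is a toy chaotic system, not a
# weather model, but it shows why tiny errors in the initial state wreck
# a deterministic forecast at long lead times.

def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate x -> r*x*(1-x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

truth = logistic_trajectory(0.200000)  # the "real" atmosphere
model = logistic_trajectory(0.200001)  # our imperfect analysis of it

for step in (5, 20, 50):
    err = abs(truth[step] - model[step])
    print(f"step {step:2d}: error = {err:.6f}")
```

The two runs start a millionth apart; within a few dozen steps they are unrelated. An ensemble makes this a feature rather than a failure by running many slightly perturbed initial states and reading confidence off their spread.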
Consequently, the idea of probabilistic forecasting has started to take hold, at least in some circles. I still remember Bob Ryan featuring his “Bob’s Odds” graphics for snowfall forecasts.
And of course, there has always been probability of precipitation in the forecasts, misinterpreted as it is.
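Part of that misinterpretation traces back to how probability of precipitation is defined. Under the standard National Weather Service definition, PoP is the forecaster's confidence that measurable precipitation will occur somewhere in the area, multiplied by the fraction of the area expected to receive it; the sketch below illustrates the definition, not any particular forecast product.

```python
def pop(confidence, areal_coverage):
    """NWS probability of precipitation: confidence that measurable rain
    occurs somewhere in the forecast area, times the fraction of the
    area expected to receive it."""
    return confidence * areal_coverage

# Two very different situations can produce the same 40% PoP:
certain_but_spotty = pop(1.0, 0.4)   # sure it rains, over 40% of the area
iffy_but_widespread = pop(0.4, 1.0)  # 40% confident, area-wide if it happens
print(certain_but_spotty, iffy_but_widespread)
```

A 40% PoP, in other words, does not mean “it will rain 40% of the day” or “40% of forecasters think it will rain,” which are the readings the public most often supplies.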
The perception problem
But in the years, if not decades, before ensemble forecasting, the public became accustomed to deterministic forecasts. That type of messaging has become seared into the consciousness of the public, and it is the way most forecasts are still produced.
Evolving toward widespread use of probabilistic forecasts for public consumption presents an extraordinary challenge, one that may not really be attainable, as a move to probabilistic forecasts is going to be perceived by the public as a step backwards.
One or zero
A deterministic answer is desired… sometimes demanded…
Is it going to rain?
Is it going to be too cloudy to see the eclipse?
Should we call off the baseball game?
Should I evacuate?
Should I cancel the outdoor concert?
All of us have been faced with these questions, some with higher impacts than others. The public wants a yes or no; they are used to the deterministic message. Giving them percentages and odds often yields an uncomprehending scowl.
Big decisions, little time
Admittedly, the case below is seldom my professional problem, but those consulting meteorologists in private firms can probably relate.
Think about that outdoor concert during convective season. Sometimes, it appears clear that a venue will be impacted. Sixty minutes ahead of time, we may have very good confidence that a cell (read: thunderstorm) with damaging winds is going to be close enough to a venue to put 10,000+ people at risk of injury.
Is there enough time 60 minutes before a venue is impacted to call the venue operators or promoters and tell them there is an 80% chance of cloud-to-ground lightning in the next hour, or a 70% chance of damaging winds? Or do you just tell them you expect a bad storm with lightning and wind damage?
And then, do the venue operators roll the dice and sit on that information, hoping you are wrong, preserving the event? Or do they make an effort to get that information to the crowd, sending people rushing for cover… or for the exits?
Assume the storm’s forward speed is a reasonable 40 miles per hour. More often than we might like, somewhere in that storm’s 40-mile path, its trajectory turns just a few degrees from the original heading (or curves a few degrees more than expected). Sixty minutes later, the worst of the storm misses the venue by only 2 or 3 miles.
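The arithmetic behind that miss is simple trigonometry. As a back-of-the-envelope sketch using the numbers above: a storm moving 40 mph covers roughly 40 miles in the 60-minute lead time, and a heading change of only a few degrees displaces it laterally by about path length times the sine of the angle.

```python
import math

def cross_track_miss(path_miles, heading_error_deg):
    """Lateral displacement after travelling path_miles with a small
    heading error, approximated as path * sin(error)."""
    return path_miles * math.sin(math.radians(heading_error_deg))

# A storm moving 40 mph covers 40 miles in the 60-minute lead time.
for deg in (1, 3, 5):
    miss = cross_track_miss(40, deg)
    print(f"{deg} degree turn -> worst of the storm shifts {miss:.1f} miles")
```

A turn of just 3 to 4 degrees, imperceptible on radar until it has already happened, is all it takes to move the storm's core 2 to 3 miles off the venue.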
We knew those people were in danger. Did they?
Most of the time, they go on blissfully unaware. Maybe they saw lightning in the distance, and maybe it got a little breezy. But if they did not get wet, and the lightning was not perceptibly close, from their perspective, the storm missed them.
If the venue operators do not pass along the warning, the crowd is happy, and the operators gain a faulty piece of evidence suggesting we do not know what we are doing.
But if the crowd did get a message that a damaging storm was coming, from their perspective, the forecast was wrong. Period. They don’t care if it was close. They don’t care if it was a tough mesoscale forecast. Now there are 10,000+ people taking home that faulty piece of evidence.
Perhaps we should try this for the venue operators… There is a 70% chance the weather will be bad enough in the next hour that there will be an injury that will get you sued.
During internal staff meetings, I try to draw a probabilistic picture:
I may tell staff that the odds of at least a trace of snow are 80%, 1+ inch of snow are 40%, 4+ inches of snow are 20%. At the end of the meeting, I am invariably asked, “So, what’s going to happen?”
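Those nested odds are exceedance probabilities, and they do pin down the chance of every outcome, including no snow at all. A sketch using the numbers above (the interpretation, not the specific values, is the point): differencing successive thresholds turns the exceedance odds into a complete set of mutually exclusive outcome bands.

```python
# Exceedance probabilities from the staff-meeting example:
# P(snow >= trace) = 0.80, P(snow >= 1") = 0.40, P(snow >= 4") = 0.20.
exceed = {"trace": 0.80, "1 inch": 0.40, "4 inches": 0.20}

# Differencing successive thresholds gives the probability of each band.
bands = {
    "no snow":          1.00 - exceed["trace"],
    "trace to 1 inch":  exceed["trace"] - exceed["1 inch"],
    "1 to 4 inches":    exceed["1 inch"] - exceed["4 inches"],
    "4 inches or more": exceed["4 inches"],
}

for band, p in bands.items():
    print(f"{band:>16}: {p:.0%}")
```

The bands sum to 100%: the forecast is a full answer to “what’s going to happen,” just not a single-number one.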
Discussing such uncertainties and probabilities, which is praiseworthy within the discipline, is often punished in the public forum. To a scientifically illiterate public, uncertainty suggests ignorance. Years of the public hearing that this is going to happen or that is going to happen are stuck in their consciousness. Probabilities? Uncertainties? Sounds wishy-washy.
Of course, the information we get from ensemble forecasts gives us a level of confidence in the forecasts, but does the public understand that? Do they want to hear that the meteorologist has low confidence in a particular forecast? I suspect that information is more palatable in print than in video media. The staff at Capital Weather Gang in Washington handle it as well as anyone else.
But in video media, there is more of a visible competition to be as precise as possible (a worthy goal, to be sure). Unfortunately, in weather patterns where mesoscale changes in temperature and moisture have huge repercussions on sensible weather (e.g. snowstorms), the public often sees wildly different forecasts among outlets. As a result, it furthers a misconception, and I frequently hear, “They don’t have any idea what’s going to happen.”
And that’s from my mother.
If that were not bad enough, observe how forecasts are marketed: Most accurate forecast. Street-level radar mapping. X-degree guarantee. Storm arrival down to the minute. Again, all noble goals, and sometimes there is enough good information to make precise calls on the most important parameters.
I admit, forecasting the precise evolution of numerous ongoing convective cells on the timescale of 30-60 minutes is still a daunting challenge. But that is what we are tasked to do. I still hear the echoes from one of my Penn State professors, Michael Fritsch, “Anyone can forecast the first derivative. Your job is to forecast the second derivative.”
An asymptotic approach
I am not sure where we go from here. I would like to think there is an opportunity to educate the public about probabilistic forecasts, but we still struggle with conveying probability of precipitation and the difference between a Watch and a Warning.
My guess is that for business and industry, these probabilistic forecasts will make sense, helping those groups manage weather risk. Then again, my friends in the energy industry tell me how amazing it is to watch markets fluctuate with each run of the GFS or ECMWF, so even that may be expecting too much.
For now, I will keep trying to demonstrate the value of the probabilistic forecast, even if it is just small talk, one person at a time.
But as much as I would like to be convinced otherwise, I don’t believe the general public will ever embrace probabilistic forecasting. Like it or not, we set the bar very high decades ago, and it is up to us to reach it.