25 November 2013
Weather Forecasting and Social Media; Are We Doing It Wrong?
Posted by Dan Satterfield
My friend Greg Fishel (the longtime chief meteorologist at WRAL in Raleigh, NC) brought up something the other day that has really stuck in my mind. It's been bugging me for a while, and it has to do with posting raw model data on the internet (and showing it on air). You may not even realize you are seeing it, and that itself is part of the problem. It's NOT a forecast; it's just model output produced by manipulating initial data with math and physics.
This is closely related to Nate Johnson's guest post (earlier this week) about warnings and social media, but this is more about forecasting high-impact weather events online, and why I think we meteorologists may be confusing the public. Below are some examples of raw model posts on Google Plus and Facebook. I've done this as well, and I'm NOT criticizing anyone here; I'm just asking if there is a better way of serving the public's need to know. The goal should be to communicate our best forecast, along with an understanding of the uncertainty in that forecast, and both should be easy to interpret.
I could post model data online tonight showing snowfall in TN (and even parts of Alabama) in a few days, but having forecast in that part of the world for quite a few years, I'm very certain there will be little or none. Posting that raw model guidance would tend to confuse people, even IF I say the odds are very slim. A picture is indeed worth a thousand words, and showing an image that says one thing while saying another is a perfect recipe for confusion.
There are times when I think posting model guidance is fine: when you are quite confident that what you are seeing is likely to be basically correct. Tonight, for example, I believe the ensemble average of the European (ECMWF) model, run in Reading, England, is fairly close on the track and timing of this week's coastal storm. This storm will really throw a wrench into the weather on the busiest travel day of the year, and showing this data along with a forecast of what it will do and what folks can expect seems OK to me.
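For readers curious what an "ensemble average" actually buys you over a single model run: each ensemble member starts from slightly perturbed initial conditions, and the spread among members is a rough measure of forecast uncertainty. Here is a minimal sketch of that idea; the member values are invented for illustration and are not real ECMWF output.

```python
# Sketch: summarizing an ensemble instead of showing one raw model run.
# Member snowfall values below are hypothetical, not actual model data.

def summarize_ensemble(members, threshold):
    """Return the ensemble mean, spread (population std dev), and the
    fraction of members at or above `threshold` -- a crude exceedance
    probability, which is what a forecaster actually wants to convey."""
    n = len(members)
    mean = sum(members) / n
    variance = sum((x - mean) ** 2 for x in members) / n
    spread = variance ** 0.5
    prob = sum(1 for x in members if x >= threshold) / n
    return mean, spread, prob

# Hypothetical 10-member snowfall forecasts (inches) for one location.
# One outlier member shows heavy snow; nine show little or none.
members = [0.0, 0.1, 0.0, 0.3, 2.5, 0.0, 0.2, 0.0, 0.1, 0.4]
mean, spread, prob = summarize_ensemble(members, threshold=1.0)
print(f"mean {mean:.2f} in, spread {spread:.2f} in, P(>=1 in) = {prob:.0%}")
```

Posting only the outlier member's snowfall map is exactly the "raw model data" problem described above; the summary (a low mean with one member driving a 10% exceedance chance) tells a very different and more honest story.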
You, the consumer, have a responsibility here as well: look closely at where your forecast is coming from. If it's from someone online you've never heard of, I'd be careful. There are a lot of folks posting online forecasts these days who can read model data but have no background in the science. Look for someone you trust who is doing more than showing you data and saying something might happen. Almost anyone can do that; making a forecast in a tough situation requires a bit more gravitas.
Dr. Jeff Kimpel (my adviser at the University of Oklahoma) gave me some advice many years ago that I've never forgotten, and it's very apropos here:
“Tell them what you know, and don’t tell them what you don’t know.”
You raise some good points here. Personally, I love seeing the model data, but I am a geoscientist by trade and training and know to take the models as evolving, working conjecture, not a solid forecast.
I am always in favor of educating the general public on where and how forecasts of all kinds originate, but you can't go very far with that before overloading people with information they aren't familiar with or equipped to interpret, leading them to erroneous conclusions about what they see and what is forecast.
That last graphic from the NWS is excellent, and I'd like to see more of that kind of visual information presented online: just enough to get you thinking about what you're seeing, without the conflict between words and image that you discussed above.
Dan, I always enjoy your content, and this subject is one that has interested me for some time. The challenge we broadcasters face today is negotiating the delicate balance between the public's access to instant information and maintaining credibility. Moreover, people assume that if information is available, it shouldn't be subject to interpretation and is thus accurate and infallible.
When I worked in Houston, this issue was prevalent during hurricane season. Viewers would see the so-called spaghetti plots and invariably ask me, "which one do you prefer?" I liked to respond jokingly, "the one that's right." In blog posts, I had room to explain the pros and cons of the models behind each strand, but in a two-minute-and-30-second forecast, there was precious little time to go into that depth. Of course, by not showing it, I ran the risk of not seeming as knowledgeable as others who did.
That's really the heart of this issue: competition. If someone posts a long-range projection of an extreme event, viewers wonder what their favorite met thinks about it. Let's face it: to some extent, our sometimes fragile egos may fuel this. After all, what better feeling is there than nailing that hurricane track or snowfall amount? Especially if we saw something others didn't, model-indicated or not. There's a certain promotional value in that, which can have an extended life.
In general, consumers realize there is a certain amount of inherent uncertainty in weather forecasts. We are trying to model extremely complicated physical processes. Improved computing power has increased that ability, but it's far from perfect, especially for extreme events. Nevertheless, we go on TV every day and present a forecast. Isn't a consumer owed at least some level of credibility? Otherwise, what's the point? Sometimes, adding extra explanation about the factors complicating a forecast further confuses viewers, even when it's meant to educate them. At the end of the day, someone tuning into the news just wants a simple answer to the question: "what's going to happen tomorrow?"
Perhaps your professor was right, after all…
As I just posted on Google+, quoting this article, this applies to many datasets. For example, http://mbtaviz.github.io/ is cool, but I wonder what kinds of conclusions will be drawn from it. It's excellent eye candy, but I wonder about the representativeness of the days depicted. Until the public better understands things like risk (see http://thenormchronicles.com/), odds, probability, and the sensitivity of conclusions to input values, it may be, to paraphrase Professor Marvin Minsky, that too much data is far worse than too little.