Tuesday, April 22, 2008

Freakonomics and Statistical Illiteracy

I never actually read Freakonomics, only some of the scathing critiques levied against it (see John DiNardo's extensive review(s) for a lark). Merits of the book aside, like many academicians, I too am insanely jealous that I didn't write a wildly successful best-selling book and get offered a regular column in print and online at the New York Times. So admittedly, valid professional criticisms aside, I was genetically predisposed to not like brand "Freakonomics." But occasionally I click onto the blog to see if there is something interesting going on in the world of pop-economics of which I should be keeping abreast.

So today I clicked on this catchy post headline: "How Valid Are T.V. Weather Forecasts?". In it, the author relays a loyal Freakonomics blog reader's efforts to evaluate local weather men/women's forecasting abilities. Clearly the reader has a bone to pick with them, but he has put together a rather cute and exhaustive study of weather forecasters in his local area of Missouri. The reader is no statistician (unsurprisingly), so it is really not appropriate to criticize his analysis. But I was really taken aback by the blog's promotion of this work as an example of quality arm-chair Freakonomics, given that it was so rife with statistical fallacies. I shot off a quick comment explaining the problems, which were really too egregious to let pass without comment. Then I realized the poster was Stephen Dubner--journalist--rather than Steven Levitt--economist. Clearly an economist--even a microeconomist (who in my experience rarely takes courses in time series econometrics, where forecasting is taught)--should know better. A journalist, though...well...I feel a little regret for saying what I did. But I suppose a journalist posing as an economics expert is sticking his neck out and maybe deserves a smackdown.

Anyhow, my point was that the blog author--in an egregious disservice to his readers and to promoting popular understanding of the world through the economist's lens--perpetuated fundamental misconceptions about the nature of probability and forecasts. Not to mention vilifying television weather forecasters, who are no doubt overpaid for what they do (but, hey, so are tenured economics professors). So if the reader and a pseudo-economics journalist have no clue, I thought it might be worth cluing my loyal readers in to what's going on, so that they can understand these things and not lose face the same way.

The Freakonomics blog reader basically tracks forecasts made by a group of local weather men/women in his area and compares their forecasts to the actual weather. Again, I am not criticizing him. He actually did exhaustive and careful work documenting these things. Unfortunately, though, because of the statistical fallacies, his analysis is meaningless, and his conclusions and criticisms of television weather people are unfounded. (They may ultimately be true, but the analysis does not provide evidence to support them.)

One observation from his analysis is that weather forecasters make very similar weather predictions. The reader concludes from this that most TV weather forecasters are equally full of shit. I don’t know much about weather forecasting methods, but I assume that most are using the same or very similar forecasting models based on the state of available meteorological knowledge. Moreover, most are likely running the same or very similar historical data through said models. It is also likely that local weather men/women outsource to some third-party forecaster (e.g., AccuWeather). Thus, it should not be very surprising that the forecasting results are clustered as such.

A forecasting model basically works like this: based on what is known now, there is some probability distribution (think Bell Curve) of something happening in the future. The farther in the future one looks, the wider this probability distribution gets and thus the less precise the predictions. I hope it is obvious why it gets harder to predict the farther into the future you look.
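To make that widening concrete, here is a toy simulation (not any real weather model--just a random walk with numbers I made up) of how the range of plausible temperatures spreads out as the forecast horizon grows:

```python
import numpy as np

# Toy illustration only: treat tomorrow's temperature as today's plus random
# noise, and simulate many possible futures. All numbers are made up.
rng = np.random.default_rng(0)
today = 50.0            # starting temperature, degrees (hypothetical)
daily_noise_sd = 3.0    # hypothetical day-to-day variability

n_sims, horizon = 10_000, 7
paths = today + np.cumsum(rng.normal(0, daily_noise_sd, (n_sims, horizon)), axis=1)

for day in range(horizon):
    lo, hi = np.percentile(paths[:, day], [2.5, 97.5])
    print(f"day {day + 1}: 95% of simulated temps fall in [{lo:.1f}, {hi:.1f}]")
# The printed interval widens with the horizon: the farther out, the less precise.
```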

So, they run their models, which generate some results (for temperature, rain probability, etc.) with some probabilistic RANGE of possible values (in statistical terms, we get a "point estimate" and a "confidence interval"). The reader also conducted a survey of TV weather forecasters, from which we learn that a standard margin of error is +/- 3 degrees. So, if the forecast is for 50 degrees, then they are predicting a 95% chance the temperature will be in the RANGE of 47 to 53 degrees. Furthermore, the model is predicting that only 5% of the time will the temperature be outside of this range.
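If we take that +/- 3 degrees to be a 95% interval under a normal (Bell Curve) error model--my assumption, not something the survey tells us--the arithmetic looks like this:

```python
from scipy.stats import norm

point_forecast = 50.0     # degrees (the point estimate)
margin_of_error = 3.0     # +/- 3 degrees, per the reader's survey

# Assuming +/- 3 degrees is meant as a 95% interval under a normal error model,
# the implied standard deviation of the forecast error is:
sd = margin_of_error / norm.ppf(0.975)   # about 1.53 degrees
low, high = point_forecast - margin_of_error, point_forecast + margin_of_error
coverage = norm.cdf(high, point_forecast, sd) - norm.cdf(low, point_forecast, sd)
print(f"interval: [{low}, {high}]  implied sd: {sd:.2f}  coverage: {coverage:.3f}")
# coverage is ~0.95: the model says the temperature lands outside 47-53 only 5% of the time.
```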

After generating some forecast of predicted weather, it is then the forecaster’s responsibility to make some subjective decisions about his or her CONFIDENCE in these forecast values. (Just because some number comes out of the end of a mathematical equation doesn't mean I have to believe it is the Truth.) Now, if the forecaster is somehow misleading us about his/her confidence in the forecasts, this is indeed a problem. But just saying that the predictions do not fit well with observed outcomes does not mean that these people are bad forecasters (do you think your local weather person is any better/worse than your average Wall Street forecaster?). This is not a valid assessment of forecast quality.

Instead, this forecast must be compared against the performance of another forecast. The simplest forecast (and the one economists usually employ as a benchmark for evaluating more sophisticated forecast methods) is to assume a forecast of “no change” from the last observed real outcome. In other words, if today it was 50 degrees and rainy, the simple forecast for tomorrow is 50 degrees and rainy. Or, if you want to evaluate the longer-term forecasts, the simple forecast for X days ahead would also be 50 degrees and rainy.
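In code, the “no change” (persistence) benchmark is almost trivially simple. The temperatures below are made up purely for illustration:

```python
# "No change" (persistence) benchmark: the forecast for day t+1 is simply
# whatever was observed on day t. The observed values here are made up.
observed = [50, 52, 49, 55, 53, 48, 51]      # actual daily highs, degrees
naive_forecast = observed[:-1]               # forecast for each following day
actual = observed[1:]

for f, a in zip(naive_forecast, actual):
    print(f"forecast {f}  actual {a}  error {a - f:+d}")
```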

Then we could compare which forecast tracks the true observed weather more closely, and there are a number of statistics available that provide a measure of this. Thank you, Mr. Theil. If it is the case that the simple method outperforms the sophisticated TV weather forecaster, then, YES, we can all bash the local weatherman/woman. But until then, the rest of us can augment our visceral Freakonomics revulsion with the knowledge that they are promoting statistical illiteracy.
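For readers who want to try this at home, here is a quick sketch of one common form of Theil's U statistic (the ratio of the forecaster's root-mean-square error to that of the naive no-change forecast; other normalizations exist), again with made-up numbers:

```python
import numpy as np

def theil_u2(forecast, actual):
    """Ratio of the forecast's RMSE to the RMSE of the naive 'no change' forecast.
    Values below 1 mean the forecast beats the persistence benchmark; above 1, it doesn't.
    (This is one common form of Theil's U2; other normalizations exist.)"""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    rmse_model = np.sqrt(np.mean((forecast[1:] - actual[1:]) ** 2))
    rmse_naive = np.sqrt(np.mean((actual[:-1] - actual[1:]) ** 2))
    return rmse_model / rmse_naive

# Hypothetical numbers: a week of TV forecasts versus what actually happened.
tv_forecast = [51, 53, 50, 54, 52, 49, 50]
actual      = [50, 52, 49, 55, 53, 48, 51]
print(theil_u2(tv_forecast, actual))
```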


1 Comment:

At 1:14 PM, Anonymous Anonymous said...

I never read Freakonomics, because wacky, warmed-over Gary Becker is still warmed-over Gary Becker. Fortunately, Ariel Rubinstein [pdf] did read the book. Daniel Davies did, too, and he wrote up a four-part takedown.

 
