The seven-day outlook... that spread of numbers that every forecast seems to end with. How much stock can you put into it? And which weathercaster did the best in our limited sample?
TMF studied the progressive 7-day forecasts for four consecutive days in January (Jan. 16-19). In other words, if you wanted to know how the weather would be on Jan. 16, how much could you trust what was forecast seven days earlier, on Jan. 9? And how much more accurate might the forecast for Jan. 16 be on Jan. 10, six days in advance of the big day? And so on, until you got within five, four, three, two and one day(s) of Jan. 16. This same 7-day process was extended to three other days (Jan. 17-19) so that four days' extended outlooks could be studied.
The graph below depicts the results. The vertical axis represents the number of degrees of difference between the forecast temperature and the actual temperature. The horizontal axis represents the number of days into the 7-day outlook. As you would expect if the weather prognosticators are doing their jobs, the variance between actual and forecast temperatures should shrink as forecasts are refined and the day being forecast draws closer.
So what does the data mean?
For the period studied, the Star Tribune's Paul Douglas was the most inaccurate from seven days out. On Jan. 9, Douglas predicted the high temperature for Jan. 16 would be 1 above. The actual high temperature for Jan. 16 was 15, a 14-degree variance. The forecast made on Jan. 10 for Jan. 17 was 25 degrees off (he predicted a high of zero; the actual high temperature was 25). Over the four days, Douglas's 7-day advance predictions were off by an average of 15 degrees. In contrast, the best 7-day advance prediction was by KSTP, which was off by an average of 6 degrees.
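For anyone who wants to replicate the arithmetic, here's a minimal sketch of the variance calculation, using only the two Douglas data points quoted above (Jan. 16 and Jan. 17); the other two days aren't listed in this writeup, so they're omitted:

```python
# Forecast-vs-actual variance, as described in the article.
# Each entry: day -> (7-day-advance forecast high, actual high).
pairs = {
    "Jan. 16": (1, 15),   # forecast made Jan. 9
    "Jan. 17": (0, 25),   # forecast made Jan. 10
}

# Absolute difference between forecast and actual for each day.
variances = {day: abs(forecast - actual) for day, (forecast, actual) in pairs.items()}
print(variances)  # {'Jan. 16': 14, 'Jan. 17': 25}

# The article's 15-degree figure averages all four days; with just these
# two points the average would be:
avg_variance = sum(variances.values()) / len(variances)
```

Averaging those absolute differences over all four studied days is what produces the 15-degree figure cited above.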
The data suggests that when it's within four days of the day that interests you, the variance among forecasters is rather small; it wouldn't much matter whose forecast you followed. There was a two-degree spread in the forecaster variance at day 4 of the 7-day outlook. However, this is not to say that forecasters were particularly accurate at predicting the actual temperature at that juncture; only that as a group, the forecasters were forecasting very similarly. Got it? There will be a test on the material.
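The "two-degree spread" is simply the gap between the best and worst forecaster variance at that point in the outlook. A quick sketch, using made-up day-4 variance values (the article reports only the spread, not each outlet's number, so the outlet names and figures below are placeholders):

```python
# Hypothetical day-4 average variances per outlet (illustrative values only;
# the article states the resulting spread was two degrees, not these inputs).
day4_variance = {"Outlet A": 5.0, "Outlet B": 4.0, "Outlet C": 6.0, "Outlet D": 5.5}

# Spread = worst forecaster variance minus best forecaster variance.
spread = max(day4_variance.values()) - min(day4_variance.values())
print(spread)  # 2.0
```

A small spread means the forecasters agree with each other at day 4; as the article notes, it says nothing about whether they agree with the actual temperature.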
Two of the more interesting items to note: First, the vast majority of variances between forecast and actual temperatures were negative, meaning that all forecasters had a strong tendency to predict colder temperatures than what actually occurred. Second, KARE's extended forecasts run only five days rather than the seven days provided by most outlets. The data suggests that this is a prudent philosophy!
If the graph makes you hungry to see the nitty-gritty details that fed the analysis, you may view them here.