TechCast’s recent move from its 6th generation website (www.TechCast.org) to its new 7th gen site (www.TechCastGlobal.com) offered a rare opportunity to test the repeatability of forecast data.
As the move approached, we captured one of the last data sets from the old site on Jan 29, 2014. The extensive background data framing each forecast (research breakthroughs, applications, new ventures, adoption levels, etc., organized into trends) was transferred to the new site, but we decided to drop the old expert estimates and have experts enter new estimates from scratch. Below is a summary of forecasts from the Jan 29 data (Before) and the most recent data (After), captured Oct 13, 2014.
| Forecast | Before | After | Change (years) |
|---|---|---|---|
| Fuel Cell Cars | 2019 | 2019 | 0 |
| Internet of Things | 2020 | 2021 | +1 |
| Next Gen Computing | 2025 | 2027 | +2 |
| Humans On Mars | 2037 | 2033 | -4 |

MEAN CHANGE = +0.38 years
Note: Forecasts are for varying adoption levels.
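As a worked example, the per-forecast change is simply the After year minus the Before year, and the mean change is the average across forecasts. The sketch below computes this for the four sample rows shown in the table; note that the reported mean of +0.38 years presumably reflects the full forecast set, of which this table is only a summary.

```python
# Sketch: mean signed change between Before and After forecast years.
# Only the four sample forecasts from the table are included here;
# the full TechCast data set behind the +0.38-year mean is not shown.
forecasts = {
    "Fuel Cell Cars": (2019, 2019),
    "Internet of Things": (2020, 2021),
    "Next Gen Computing": (2025, 2027),
    "Humans On Mars": (2037, 2033),
}

# Positive change = the forecast moved further into the future.
changes = [after - before for before, after in forecasts.values()]
mean_change = sum(changes) / len(changes)
print(f"Mean change for the sample shown: {mean_change:+.2f} years")
```

A mean signed change near zero across many forecasts is what repeatability looks like in this setup: individual estimates may drift in either direction, but the panel as a whole reproduces its earlier judgments.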
This simple test is a good way to check the repeatability of a research method. It is often thought that such results are "anchored" by the existing forecast data, which is another way of saying experts are "biased" by the present results. The resulting mean change of 0.38 years seems remarkably small, especially considering that the average forecast has a time horizon of at least 10 years.
Repeatability is not the same as accuracy, of course, and that's where our annual accuracy studies come in. We have found from previous studies that TechCast accuracy is on the order of +3/-1 years at about ten years out. That is, experts tend to be overoptimistic by about 3 years and pessimistic by about 1 year. This tendency toward optimism is well documented in the forecasting literature. We call it "forecast creep": the tendency for forecasts to slowly creep into the future by about 3 years over a ten-year horizon.
Since the elapsed time between our Before and After data is about 9 months (Jan to Oct), forecast creep probably accounts for a significant portion of the 0.38-year change. We also note that another form of anchoring likely contributes to this repeatability. In our system of collective intelligence, the background data provides an empirical foundation of knowledge that experts use to make their estimates, thus anchoring the results to an accurate knowledge base.
This simple test demonstrates that the TechCast system is remarkably repeatable and robust. The results also confirm other studies we have done comparing forecasts from two different groups of experts, whose results were likewise remarkably similar. Considering that the expert panel changes over time, and that the conditions affecting forecasts also change constantly, these results support the utility of collective intelligence forecasting. There may be a small zone of error, but pooling knowledge is a great way to get good answers to tough questions.