Whenever subject matter experts make quantitative assessments in a business process, we check those assessments after the fact and report back on precision and bias so that our SMEs can improve their future assessments. This is rather more difficult where the assessments are probabilities, because the verification is necessarily statistical. Traditional methods can reveal optimism and pessimism in predictions, but they are less effective at revealing cases where the impact of positive or negative evidence is over- or under-incorporated into an assessment, so that probabilities are either polarized – i.e. too extreme – or sitting on the fence – i.e. not extreme enough. Such methods also demand substantial numbers of assessments.
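The distinction between polarization and fence-sitting can be illustrated with a small sketch (this is a hypothetical illustration, not the tool described below; the `distort` function and the exponent `k` are my own assumptions for modelling this kind of bias):

```python
def distort(p, k):
    """Apply a systematic bias to a probability p in (0, 1).

    k > 1 polarizes: evidence is over-incorporated, pushing p toward 0 or 1.
    k < 1 fence-sits: evidence is under-incorporated, pulling p toward 0.5.
    k == 1 leaves the assessment unbiased.
    """
    return p**k / (p**k + (1 - p)**k)

honest = [0.1, 0.3, 0.5, 0.7, 0.9]

# A polarizing assessor reports more extreme probabilities than warranted...
polarized = [distort(p, 2.0) for p in honest]

# ...while a fence-sitter reports probabilities squeezed toward 0.5.
fence_sitting = [distort(p, 0.5) for p in honest]

print(polarized)
print(fence_sitting)
```

Note that both distorted assessors can have the same average error in either direction, so a simple optimism/pessimism check would miss the bias entirely; it only shows up when the extremity of the assessments is examined.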
I present an alternative approach to exposing systematic bias, based on a deeper look at how systematic bias affects probabilities. The tool produces a “bias footprint” which, in a single figure, shows not only the optimism or pessimism in a sequence of assessments, but also the tendency to polarize or fence-sit, together with a clear sense of the extent to which the data support those conclusions, which depends on both the number of assessments and the consistency of the bias.
I will demonstrate the method on Hubbard Decision Research’s FrankenSME aggregation methodology, showing how it substantially improves predictions by removing systematic biases.