The success of quantitative risk management rests on our ability to assess probabilities. But what is a probability, anyway? How do you know whether your assessments are any good, and what does it even mean to say there is a 50% chance of rain tomorrow?
A pragmatic approach to these questions quickly takes us into a discussion of what can go wrong with probabilistic assessments, and we will examine systematic biases and how to root them out.
I will also present a methodology that not only reveals bias, but also shows how strongly the data support that conclusion: it attaches an uncertainty range that reflects both the consistency of the bias and the amount of data used to reveal it. Are you insanely optimistic or just plain unlucky?
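The methodology itself is not spelled out here, but one common way to express such an uncertainty range is a binomial confidence interval around the observed hit rate for each stated probability: a wide interval means little data, while a narrow interval that excludes the stated probability points to a consistent bias. A minimal sketch in Python, assuming forecasts as stated probabilities and outcomes as 0/1 events (the Wilson score interval and the grouping scheme are my illustrative choices, not necessarily the author's method):

```python
import math
from collections import defaultdict

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

def calibration_check(forecasts, outcomes):
    """Group events by stated probability and compare it with the
    observed frequency plus an uncertainty range.

    Returns {stated_p: (observed_rate, lower, upper)}; a stated
    probability outside its own interval suggests systematic bias."""
    groups = defaultdict(lambda: [0, 0])  # stated_p -> [hits, total]
    for p, hit in zip(forecasts, outcomes):
        groups[p][0] += hit
        groups[p][1] += 1
    return {
        p: (hits / n, *wilson_interval(hits, n))
        for p, (hits, n) in groups.items()
    }
```

Note how the interval width shrinks as the number of events grows: with 5 hits in 10 events at a stated 50%, the interval is roughly (0.24, 0.76), but with 50 hits in 100 events it tightens to roughly (0.40, 0.60). That is exactly the "optimistic or just unlucky" distinction: a stated probability outside a tight interval is hard to blame on luck.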