First, kudos to the FAIR team for nudging the IT community towards better risk analysis. I was first introduced to the FAIR methodology approximately 2 years ago at the Copenhagen risk management conference. The timing was quite fortunate, since we had just finished a project quantifying intellectual property risks for a major telecom client in Russia. We used a decision tree with Monte Carlo simulation, which coincidentally was very similar to what FAIR is proposing.
FAIR methodology has the right idea and here are 3 steps to make it even better:
Focus on effect of information risk on business decisions, not the information risk itself
The first point is quite fundamental. The FAIR Institute is hardly to blame, because this is exactly the flawed logic regulators have been pushing for years. While it makes sense to quantify market, credit and operational risk in banks to allow them to calculate capital requirements, it makes very little sense to do the same outside of regulatory requirements. Taleb calls this focusing on f(x) instead of x: what matters is not the risk variable itself, but its effect on the payoff.
This is absolutely fundamental and, to my surprise, very difficult to grasp for most risk professionals. Norman Marks even wrote a whole book about it. Let me make the 101st attempt to explain it here:
- the only two people in the organization who care about risk levels are the risk manager and the internal auditor
- risk mitigation is not something executives think about, unless it’s a jailable offense, like fraud, safety violations or similar
- risks and their mitigations are only interesting to risk managers and people who want to justify budgets for risk mitigation
- the rest of the organization is busy achieving objectives by making decisions
- if we want executive attention, we need to convert risk information into decision information or objective information
- risk is the effect of uncertainty on objectives, so stop talking about risks and show how this uncertainty affects the objectives or decisions
- Sam Savage calls it the “chance of whatever”: use risk information to communicate how uncertain or risky budgets, strategies and decisions are, instead of communicating risk levels
- none of this is new, open any textbook on decision science from the last 50+ years
So while the idea to measure information risk is good, it stops short of being useful to anyone beyond the IT department trying to scavenge more budget. FAIR methodology would be so much better if instead of measuring the information risk it measured the effect that information risk has on investment decisions, strategic decisions, budgets, production forecasts, marketing campaigns, etc. And it’s not a matter of calculating a range of potential exposures from information risks and subtracting it from the budget. It’s about adding volatility or event risks inside the budget itself to recalculate budget 2.0.
And this is where it gets interesting. As far as the budget or strategy or decision is concerned, information risk is just another business risk. Just like FX, interest rates, equipment failure and dozens of others. And its effect should be modelled just like we would model FX, interest rates, equipment failure, etc. Here is an example: Forget about risk management. Measure the likelihood of success instead. #ChangingRisk
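As a sketch of what this could look like, the Monte Carlo snippet below embeds an information-risk event (a Poisson frequency with lognormal severities) directly inside a budget model, next to an ordinary business risk like FX volatility, and reports the chance of meeting plan rather than a standalone risk level. All figures and distribution parameters are hypothetical, chosen only to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulation trials

# Baseline annual plan (hypothetical figures, USD millions)
revenue_plan = 120.0
cost_plan = 95.0

# An ordinary business risk, modelled inside the budget: FX volatility on revenue
fx_factor = rng.normal(loc=1.0, scale=0.03, size=N)

# Information risk as just another event risk:
# frequency ~ Poisson, severity per event ~ lognormal
breach_count = rng.poisson(lam=0.4, size=N)
breach_loss = np.array([
    rng.lognormal(mean=0.0, sigma=1.0, size=k).sum() for k in breach_count
])

# "Budget 2.0": the plan recalculated with risks inside it
profit = revenue_plan * fx_factor - cost_plan - breach_loss

# Report decision information, not risk levels:
target = revenue_plan - cost_plan  # the deterministic plan, 25.0
p_meet_plan = (profit >= target).mean()
print(f"P(profit >= plan): {p_meet_plan:.1%}")
print(f"P5..P95 profit range: {np.percentile(profit, [5, 95])}")
```

The output is a statement about the objective ("what is the chance we hit plan, and what does the downside look like"), which is the form of answer an executive can act on.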
PERT is dangerous
Another troubling observation is the general preference for using PERT distributions.
According to David Vose, the PERT distribution should be used exclusively for modelling expert estimates, where one is given the expert’s minimum, most likely and maximum guesses. The PERT distribution came out of the need to describe the uncertainty in tasks during the development of the Polaris missile (Clark, 1962). The project had thousands of tasks, and estimates needed to be made that were intuitive, quick and consistent in approach. For more information, see the Vose Software wiki: https://www.vosesoftware.com/riskwiki/PERTdistribution.php
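For readers who have not worked with it, the PERT distribution is just a rescaled Beta distribution parameterised by the expert's three guesses. A minimal sketch (illustrative loss figures, standard shape parameter λ = 4):

```python
import numpy as np

def sample_pert(low, mode, high, size, rng):
    """Sample a PERT distribution via its Beta reparameterisation (lambda = 4)."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.beta(alpha, beta, size=size)

rng = np.random.default_rng(0)
# Expert's three-point estimate of a single loss event (hypothetical, USD k)
losses = sample_pert(low=10, mode=40, high=200, size=100_000, rng=rng)
print(losses.mean())  # close to the PERT mean (low + 4*mode + high) / 6 ≈ 61.7
```

Note that every sample falls inside [low, high]; that boundedness is exactly the danger discussed next.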
The obvious limitation of using PERT is the heavy reliance on expert opinions. This comes with all the common pitfalls: poor calibration, self-interest bias, lack of information, etc. According to Douglas Hubbard, research by two psychologists, Donald MacGregor and J. Scott Armstrong, investigated how much estimates improve when uncertainties are decomposed. The decomposition had the biggest improvement on estimates when the original estimate was extremely uncertain while the decomposed elements were more easily estimated.
When we created our model for intellectual property risks a few years back, we found it was quite difficult for the business to accurately estimate secondary losses.
Another danger comes from the fact that PERT distributions are bounded. This means that only the scenarios proposed by the expert will be modelled and nothing else. No fat tails, no worst-case scenarios, only whatever the expert chose to include.
Combine the two, expert opinions and bounded distributions, and you have a dangerous mix, since people are generally really bad at estimating worst-case scenarios.
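A quick simulation makes the point concrete: a PERT loss bounded at the expert's stated maximum assigns exactly zero probability beyond that bound, while an unbounded fat-tailed alternative (here a lognormal, with illustrative parameters chosen for a similar body) keeps a real tail. The figures are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Expert-bounded PERT loss: min=10, mode=40, max=200 (via Beta reparameterisation)
alpha = 1 + 4 * (40 - 10) / (200 - 10)
beta = 1 + 4 * (200 - 40) / (200 - 10)
pert = 10 + 190 * rng.beta(alpha, beta, size=N)

# A lognormal with a comparable body but an unbounded right tail
logn = rng.lognormal(mean=np.log(50), sigma=0.9, size=N)

threshold = 200.0  # the expert's stated "maximum"
print((pert > threshold).mean())  # 0.0 by construction: the bound truncates the tail
print((logn > threshold).mean())  # small but nonzero tail probability
```

If the true loss process has a fat tail, the bounded model is not conservative, it is blind: the scenarios that matter most simply cannot occur in it.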
How to make the FAIR methodology better?
This is where I think the greatest opportunity for FAIR lies:
- utilise the membership base to conduct empirical testing on various types of event frequencies and losses, start collecting and maintaining statistics on losses
- just like FAIR developed a taxonomy, develop a library of distributions (bounded and unbounded, discrete and continuous) that better describe the behavior of various events and losses
- make the bridge between information risk model and the overall business plan, budget or investment project.
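To illustrate the second suggestion, such a library could be as simple as a mapping from the kind of quantity being modelled to an appropriate distribution family. The mapping below is my own illustrative sketch, not a FAIR standard, and all parameters are placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative distribution "library" keyed by what is being modelled.
DISTRIBUTION_LIBRARY = {
    # discrete, unbounded count of events per year
    "event_frequency": lambda size: rng.poisson(lam=2.0, size=size),
    # continuous, unbounded right tail for loss severities
    "loss_severity": lambda size: rng.lognormal(mean=np.log(30), sigma=1.2, size=size),
    # continuous, bounded: used only where hard physical bounds genuinely exist
    "recovery_days": lambda size: rng.triangular(left=1, mode=5, right=30, size=size),
}

freq = DISTRIBUTION_LIBRARY["event_frequency"](100_000)
sev = DISTRIBUTION_LIBRARY["loss_severity"](100_000)
print(freq.mean(), sev.mean())
```

The point of such a catalogue is to make the bounded/unbounded and discrete/continuous choice explicit, so that bounded distributions are a deliberate decision rather than a default.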
To be continued…