First, kudos to the FAIR team for nudging the IT community towards better risk analysis. I was first introduced to the FAIR methodology approximately 2 years ago at the Copenhagen risk management conference. The timing was quite fortunate since we had just finished a project quantifying intellectual property risks for a major telecom client in Russia. We used a decision tree with Monte-Carlo simulation, which coincidentally was very similar to what FAIR is proposing.
FAIR methodology has the right idea and here are 3 steps to make it even better:
Focus on effect of information risk on business decisions, not the information risk itself
The first point is quite fundamental. The FAIR Institute is hardly to blame, because this is exactly the flawed logic regulators have been pushing for years. While it makes sense to quantify market, credit and operational risk in banks to allow them to calculate capital requirements, it makes very little sense to do the same outside of regulatory requirements. Taleb calls it f(x), instead of x.
This is absolutely fundamental and to my surprise very difficult to grasp for most risk professionals. Norman Marks even wrote a whole book about it. Let me make 101st attempt to explain it here:
- the only two people in the organization who care about risk levels are the risk manager and the internal auditor
- risk mitigation is not something executives think about, unless it’s a jailable offense, like fraud, safety or similar
- risks and their mitigations are only interesting to risk managers and people who want to justify budgets for risk mitigation
- the rest of the organization is busy achieving objectives by making decisions
- if we want executive attention, we need to convert risk information into decision information or objective information
- risk is the effect of uncertainty on objectives, so stop talking about risks and show how this uncertainty affects the objectives or decisions
- Sam Savage calls it “chance of whatever”: use risk information to communicate how uncertain or risky budgets, strategies and decisions are, instead of communicating risk levels
- none of this is new, open any textbook on decision science for the last 50+ years
So while the idea to measure information risk is good, it stops short of being useful to anyone beyond the IT department trying to scavenge more budget. FAIR methodology would be so much better if instead of measuring the information risk it measured the effect that information risk has on investment decisions, strategic decisions, budgets, production forecasts, marketing campaigns, etc. And it’s not a matter of calculating a range of potential exposures from information risks and subtracting it from the budget. It’s about adding volatility or event risks inside the budget itself to recalculate budget 2.0.
And this is where it gets interesting. As far as the budget or strategy or decision is concerned, information risk is just another business risk, just like FX, interest rates, equipment failure and dozens of others. And its effect should be modelled just like we would model FX, interest rates, equipment failure, etc. Here is an example: Forget about risk management. Measure the likelihood of success instead. #ChangingRisk
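The idea above can be sketched in a short Monte-Carlo simulation. This is only an illustration with made-up numbers: the budget figures, volatilities, breach frequency and loss severity below are all hypothetical, and information risk is simply added to the budget model as one more stochastic driver alongside FX and demand.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulation trials

# Hypothetical baseline budget (illustrative numbers only, in $M)
planned_revenue = 100.0
planned_costs = 80.0

# Ordinary business uncertainty: demand moves revenue, FX moves costs
demand = rng.normal(1.00, 0.08, N)  # +/-8% demand uncertainty
fx = rng.normal(1.00, 0.05, N)      # +/-5% FX volatility on costs

# Information risk modelled as just another event risk inside the budget:
# frequency ~ Poisson, severity ~ lognormal (both assumed parameters)
events = rng.poisson(0.3, N)                            # ~0.3 incidents/year
severity = rng.lognormal(mean=0.0, sigma=1.0, size=N)   # $M per incident
cyber_loss = events * severity

profit = planned_revenue * demand - planned_costs * fx - cyber_loss

# "Budget 2.0" is a distribution, not a point estimate
print(f"Expected profit:  {profit.mean():.1f}")
print(f"P(profit < 15):   {(profit < 15).mean():.2%}")
print(f"5th percentile:   {np.percentile(profit, 5):.1f}")
```

The output a decision maker sees is not an "information risk level" but the probability of missing the profit target and the downside percentile of the budget itself.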
PERT is dangerous
Another troubling observation is the general preference for using PERT distributions.
According to David Vose, the PERT distribution should be used exclusively for modelling expert estimates, where one is given the expert’s minimum, most likely and maximum guesses. The PERT distribution came out of the need to describe the uncertainty in tasks during the development of the Polaris missile (Clark, 1962). The project had thousands of tasks, and estimates needed to be made that were intuitive, quick and consistent in approach. For more information, see the Vose Software wiki: https://www.vosesoftware.com/riskwiki/PERTdistribution.php
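For readers who haven't worked with it, the PERT distribution is just a Beta distribution rescaled to the expert's minimum/most likely/maximum estimates. A minimal sketch (the loss figures are hypothetical):

```python
import numpy as np
from scipy import stats

def pert(low, mode, high, lamb=4.0):
    """Modified PERT as a scaled Beta distribution (lamb=4 gives standard PERT)."""
    r = high - low
    alpha = 1 + lamb * (mode - low) / r
    beta = 1 + lamb * (high - mode) / r
    return stats.beta(alpha, beta, loc=low, scale=r)

# Hypothetical expert estimate of a single-event loss ($k): min / most likely / max
loss = pert(low=10, mode=50, high=400)

print(f"Mean: {loss.mean():.1f}")   # standard PERT mean = (low + 4*mode + high)/6
print(f"P99:  {loss.ppf(0.99):.1f}")
print(f"Hard upper bound: {loss.support()[1]:.0f}")
```

Note how the entire distribution is pinned to the three numbers the expert supplied; nothing outside [10, 400] can ever be sampled.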
The obvious limitation of using PERT is the heavy reliance on expert opinions. This comes with all the common pitfalls: poor calibration, self-interest bias, lack of information, etc. According to Douglas Hubbard, research by two psychologists, Donald MacGregor and J. Scott Armstrong, investigated how much estimates improve when uncertainties are decomposed. Decomposition improved estimates the most when the original estimate was extremely uncertain while the decomposed elements were more easily estimated.
When we created our model for intellectual property risks a few years back, we found it quite difficult for the business to accurately estimate secondary losses.
Another danger comes from the fact that PERT distributions are bounded. This means that only the scenarios proposed by the expert will be modelled and nothing else. No fat tails, no worst-case scenarios, only whatever the expert chose to include.
Combine the two, expert opinions and bounded distributions, and you get a dangerous mix, since people are generally really bad at estimating worst-case scenarios.
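The danger is easy to demonstrate. Below, the same loss is modelled two ways: a bounded PERT-style Beta capped at the expert's $400k maximum, and an unbounded lognormal with a comparable body. All parameters are illustrative, not calibrated to any real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 200_000

# 1) Bounded PERT-style Beta: expert says loss can never exceed $400k
pert_dist = stats.beta(1.41, 4.59, loc=10, scale=390)

# 2) Unbounded lognormal with a similar body but a real tail
logn_dist = stats.lognorm(s=1.2, scale=50)

pert_draws = pert_dist.rvs(N, random_state=rng)
logn_draws = logn_dist.rvs(N, random_state=rng)

# The bounded model literally cannot produce a loss beyond the expert's max
print(f"PERT      P(loss > 400k) = {(pert_draws > 400).mean():.4f}")
print(f"Lognormal P(loss > 400k) = {(logn_draws > 400).mean():.4f}")
print(f"Lognormal 99.9th pct     = {np.percentile(logn_draws, 99.9):,.0f}")
```

The bounded model assigns exactly zero probability to losses the expert didn't imagine; the fat-tailed one keeps a small but real chance of catastrophic outcomes, which is precisely where risk models earn their keep.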
How to make FAIR methodology better? Read the last section for suggestions.
Reduce subjectivity
This is where I think the greatest opportunity for FAIR lies:
- utilise the membership base to conduct empirical testing on various types of event frequencies and losses, start collecting and maintaining statistics on losses
- just like FAIR developed a taxonomy, develop a library of distributions (bounded and unbounded, discrete and continuous) that better describe the behavior of various events and losses
- make the bridge between information risk model and the overall business plan, budget or investment project.
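A "library of distributions" could be as simple as a mapping from model components to candidate distribution families. The sketch below is purely hypothetical; which family actually fits which component is exactly the empirical question the FAIR community could answer with pooled loss statistics.

```python
from scipy import stats

# Hypothetical sketch of a distribution library keyed by model component.
# All parameters are placeholders awaiting empirical calibration.
DISTRIBUTION_LIBRARY = {
    "event_frequency": stats.poisson(mu=0.5),                      # discrete counts
    "loss_magnitude":  stats.lognorm(s=1.0, scale=100),            # unbounded, skewed
    "expert_estimate": stats.beta(1.4, 4.6, loc=10, scale=390),    # bounded (PERT-style)
    "downtime_hours":  stats.gamma(a=2.0, scale=4.0),              # continuous, positive
}

for name, dist in DISTRIBUTION_LIBRARY.items():
    print(f"{name:16s} mean = {dist.mean():8.2f}")
```

The point is that the analyst picks from a vetted menu of bounded and unbounded, discrete and continuous families instead of defaulting to PERT for everything.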
To be continued…
Alexei — First of all, thank you for playing the role of inquisitor in our profession. We need more of them to overcome the dogma that is so pervasive. I’ll write a longer response in the FAIR Institute blog which will link to your post. Briefly, however:
1) Regarding business focus. Although you are right to some degree, this is where it seems your “being familiar” with FAIR leaves you at a disadvantage. In practice, FAIR can be and often is used in a manner similar to what you suggest.
2) Regarding PERT. PERT (like any other measurement method) is only dangerous when misused. Also, you’re mistaken in believing that PERT is the only distribution type that is or can be applied when using FAIR. So to refer to this as a failing of FAIR is inaccurate.
3) Reducing subjectivity. We are working toward better utilization of the FAIR Institute community for data. Unfortunately, being a nonprofit that doesn’t charge membership fees leaves us with very few resources for that kind of effort. We’re beginning to fix this through a number of efforts which I’ll describe in my response blog post.
In the future, I’d be happy to be a sounding board for any suggestions you have regarding FAIR. I’m sure it would be a blast to brainstorm with a fellow inquisitor 😉
Cheers
Jack Jones
Great response Jack, looking forward to your article
Just to be explicit about one thing: never have I suggested that PERT is the only distribution that is or can be applied when using FAIR. What I was suggesting is that FAIR needs to do more work on empirical testing to determine the best distribution types for different components of the model. PERT is a cop-out when we are lazy or need quick answers. We can do better than that.
Thank you. You are correct. That should teach me to not reply to e-mail or comment on blog posts until after I’ve had my 1st (or 2nd) cup of coffee.