OK, the title is obviously a joke: multiple alternatives have been available to anyone willing to learn for over 50 years. But since you clicked, this article may well change your life for the better.
Thank you Damir Ramazanov, Group Project Risk Manager, ERG, for helping with the article and providing a quality review.
Wait, do we even need an alternative?
To me, using risk matrices is a question of ethics and professional skill, and it is entirely up to the individual risk manager. In that sense risk matrices (and most other qualitative techniques) are like horoscopes (more in Douglas Hubbard's book): they are fun, easy to understand and everywhere, but you probably wouldn't use them for any meaningful day-to-day decision. Or, if you did, you would have the decency to realise they are no better than a coin toss and definitely not present them at conferences as best practice.
The flaws are fundamental to the design of risk matrices, and there is nothing a risk manager or business analyst can do to make them reliable. All these flaws have been discussed here https://www.researchgate.net/publication/266666768_The_Risk_of_Using_Risk_Matrices, in this video by Osama Salah https://www.youtube.com/watch?v=7IcRtz7qo2w, in this post by David Vose https://www.linkedin.com/pulse/defence-risk-heat-maps-david-vose/ and in dozens of posts I have published over the years. Additionally, research by Tony Cox and Douglas Hubbard has shown that risk matrices consistently perform worse at measuring and communicating risks than proper quantitative tools. Add to that the flaw of averages (covered in Sam Savage's book), 50+ years of research into risk perception, decision making under uncertainty and risk psychology, as well as empirical testing by NASA, the CIA and others, and the move away from traditional qualitative risk analysis techniques becomes self-evident.
So what are the alternatives? There are plenty, but for a tool to be any better, the following criteria have to be fulfilled:
- risk analysis has to be performed at the time of decision making, not once a quarter
- the results of risk analysis should not be expressed as arbitrary risk levels, but rather as the volatility, range or scenarios of the decision / objective itself (with some exceptions, in HSE for example)
- the output of risk analysis should have a direct and immediate impact on the decision at hand.
It is also very important to distinguish between 2 types of risk analysis techniques:
- techniques to better understand the nature of a risk in order to decide how to manage it. Usually used when a specific risk is known and significant, and management needs to deal with it in a cost-effective manner:
- bow-tie diagrams
- FMEA / FMECA
- HAZID, HAZOP, HAZAN
- 5 whys
- influence diagrams
- ICAM, etc.
- techniques to better understand how uncertainty affects a decision or objective. Used when making a decision, or preparing or approving a strategy, budget, forecast, long-term pricing, etc., and the risks are not obvious.
The application of the techniques above will also depend on the complexity of the decision, its materiality, the level of uncertainty, and the time and resources available to the risk manager:
For simple decisions
By far the easiest and the most common way to assign risk to an entity, project, supplier, business unit or piece of equipment is a scoring methodology. In fact it is so common that hundreds of companies have been using it forever without calling it risk management:
- S&P, Moody's and Fitch rating agencies to assign ratings to companies
- procurement departments to rank existing suppliers (gold, silver, bronze or blacklisting them)
- classifying spare parts or pieces of equipment based on criticality, etc.
- banks and corporations to allocate debtors to risk buckets / categories or to classify bad debtors
- firefighters classifying buildings into fire risk categories, etc.
Basically, any methodology that allows you to grade / categorise items based on predetermined characteristics is a better way to communicate risks and to feed that information into decision making. Sometimes it can look like a very simple checklist. It's kind of obvious, but if you still want me to write a separate piece on scoring methodologies, comment on this article using the word "scoring".
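To make the idea concrete, here is a minimal scoring sketch in Python. The criteria, weights, thresholds and the example supplier are all hypothetical, invented purely for illustration; a real methodology would derive them from your own data and calibration:

```python
# Minimal supplier scoring sketch. Criteria, weights and bucket
# thresholds are hypothetical, chosen only to illustrate the mechanics.

WEIGHTS = {"financial_health": 0.40, "delivery_history": 0.35, "quality_audits": 0.25}

def score(supplier: dict) -> float:
    """Weighted sum of criterion grades, each graded 0-100."""
    return sum(supplier[c] * w for c, w in WEIGHTS.items())

def bucket(total: float) -> str:
    """Map a total score to a gold / silver / bronze category."""
    if total >= 80:
        return "gold"
    if total >= 60:
        return "silver"
    return "bronze"

# Hypothetical supplier graded on the three criteria:
acme = {"financial_health": 90, "delivery_history": 70, "quality_audits": 80}
total = score(acme)          # 0.40*90 + 0.35*70 + 0.25*80 = 80.5
print(bucket(total))         # gold
```

The point is that the grade follows mechanically from predetermined, observable characteristics, not from someone's gut feel about "likelihood times consequence".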
For decisions on how to mitigate a particular risk
If you are in a situation where you need to determine the best way to mitigate a specific kind of risk, then a bow-tie diagram or an influence diagram will be very helpful. There is a whole family of techniques that help visualise a risk by breaking it into components, for example causes and consequences, as is the case with bow-ties.
This is very helpful for switching on system 2 thinking and overcoming at least some of the cognitive biases. Bow-ties are pretty basic and should be in every risk manager's arsenal. FMEA, FMECA, fault trees, 5 whys and ICAM investigation techniques are very similar in principle: their main objective is to write down the possible components of a risk, reminding us not to forget important sources or consequences, even when they are not obvious at first.
I have used bow-ties a lot; once I was even childish enough to present one to the CEO (an ex-deputy Prime Minister of the country). That obviously didn't go down well, so it's probably best to use them as an internal analysis tool rather than a communication tool. My personal secret with bow-ties is to always have at least 7 causes and 7 consequences, and at least 3 second-level causes and consequences on each branch, and then use distributions to turn the bow-tie into a quantitative risk model and a loss exceedance curve. Archer Insight has probably done the best automation of quantitative risk analysis with bow-ties. That way we definitely switch from S1 to S2 and improve our chances of finding a solution.
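To illustrate the "use distributions" step, here is a minimal Monte Carlo sketch that turns a toy bow-tie into loss exceedance figures. The cause probabilities and the consequence range are hypothetical, and treating the causes as independent is an assumption made purely for illustration:

```python
import random

random.seed(42)

# Hypothetical bow-tie inputs: each cause has an annual probability of
# triggering the event; if the event fires, the loss follows a
# triangular distribution (low, mode, high) in $m. Causes are assumed
# independent for simplicity.
cause_probs = [0.05, 0.10, 0.02]
loss_low, loss_mode, loss_high = 1.0, 3.0, 12.0

N = 100_000
losses = []
for _ in range(N):
    # The event occurs this year if any cause fires.
    if any(random.random() < p for p in cause_probs):
        losses.append(random.triangular(loss_low, loss_high, loss_mode))
    else:
        losses.append(0.0)

# Loss exceedance: probability that the annual loss exceeds a threshold.
for threshold in (0, 5, 10):
    p = sum(l > threshold for l in losses) / N
    print(f"P(annual loss > {threshold}$m) = {p:.1%}")
```

Plotting these exceedance probabilities across many thresholds gives the loss exceedance curve; this is the kind of output the bow-tie automation mentioned above produces.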
For any decision involving numbers (wait, that’s most of them)
For the rest of the cases it is actually more important to understand not how significant each individual risk is, but how uncertainty in general affects our decision, KPI or objective. Nassim Taleb calls it f(x); so does operations research. That means we should be more interested in the effect of risk on something than in the level of risk itself.
To my surprise, the message above is actually very difficult, almost impossible, for risk managers to digest. See if you can help me explain it better in the comments.
This is what I call risk management 2 – using risk analysis as a decision-making tool. Since the idea of using risk analysis in decision making is much older than the idea of using risk management as an element of corporate governance, all we need to do is open any good book on decision science or probability theory to find the tools.
Let's repeat. Here are just some of the common techniques, some more than 50 years old, ranked from simple to difficult:
- decision trees or influence diagrams
- scenario analysis
- stress testing
- simulation modelling techniques
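As an example of the simplest tool on the list, here is a decision tree reduced to a few lines of code. All probabilities, payoffs and the pilot cost are hypothetical, invented only to show how the technique works:

```python
# Hypothetical two-option decision tree: launch a project now, or run a
# pilot first. Payoffs in $m; probabilities are illustrative only.

def expected_value(branches):
    """branches: list of (probability, payoff) pairs summing to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

# Option A: launch now with uncertain outcome.
launch_now = expected_value([(0.6, 10.0), (0.4, -8.0)])        # 2.8

# Option B: spend $1m on a pilot that improves the odds.
pilot_first = -1.0 + expected_value([(0.75, 9.0), (0.25, -2.0)])  # 5.25

best = max({"launch now": launch_now, "pilot first": pilot_first}.items(),
           key=lambda kv: kv[1])
print(best)  # ('pilot first', 5.25)
```

Notice that the output is not a "risk level" but a direct comparison of the decision alternatives, which is exactly the criterion set out at the start of the article.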
The irony is that while many risk management departments have been using heatmaps to rank risks, other business units have been using proper risk analysis techniques forever without calling it risk management: doctors use decision trees, investment professionals use sensitivity analysis, finance teams use scenarios, and pharma companies, geologists and weather forecasters have been using simulation modelling for decades.
If you want me to expand on any of the tools, please write in the comments.
For big and important decisions
This one is simple: if the decision is complex and the stakes are high, use simulation modelling or better. What is even better? Write in the comments.
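As a minimal illustration of what simulation modelling adds over a single-point estimate, here is a sketch with hypothetical triangular ranges for a project's revenue and cost. The deterministic base case shows a comfortable margin; the simulation reveals the probability of a loss that the base case completely hides:

```python
import random
import statistics

random.seed(1)

# Hypothetical project: revenue and cost are uncertain, modelled as
# triangular distributions (low, high, mode) in $m. The single-point
# "base case" uses only the most likely values.
N = 50_000
margins = []
for _ in range(N):
    revenue = random.triangular(80, 140, 100)
    cost = random.triangular(70, 120, 85)
    margins.append(revenue - cost)

base_case = 100 - 85                 # deterministic answer: $15m margin
mean_margin = statistics.mean(margins)
p_loss = sum(m < 0 for m in margins) / N

print(f"base case margin: {base_case}$m")
print(f"simulated mean margin: {mean_margin:.1f}$m")
print(f"P(margin < 0) = {p_loss:.0%}")
```

The base case suggests there is nothing to worry about; the simulated distribution shows a material chance of losing money on the very same inputs, which is the information the decision maker actually needs.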
If you found this article useful please like and share.
RISK-ACADEMY guides and templates: