If we accept that there is cognitive bias in decision making, how can we as risk professionals account for it and help our senior executives make better, less biased decisions? Risk Academy’s Alex Sidorenko discusses. Originally published for Knowledge@StrategicRisk https://www.strategicrisk-asiapacific.com/the-knowledge/overcoming-cognitive-bias-in-senior-executives/1428016.article
The earliest psychometric research was performed by psychologists Amos Tversky and Daniel Kahneman. Kahneman later won the Nobel prize in economics, shared with Vernon Smith, “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty” (Kahneman, 2002). They performed a series of gambling experiments to understand how people evaluate probabilities. Their major finding was that people use a number of heuristics to evaluate information. These heuristics are usually useful shortcuts for thinking, but in complex business situations of high uncertainty they can lead to inaccurate judgments; at that point they become cognitive biases.
Fifteen years later, these findings would become hugely significant to risk practitioners across the world. Which raises a question: why did it take so long?
Implications for risk practitioners
The significant role that risk perception and research into cognitive biases play in risk management has finally been acknowledged by both ISO 31000:2018 and COSO ERM 2017. Some of the implications include:
- Decision makers tend to miss significant risks (professional deformation: seeing only familiar risks; overconfidence: refusing to consider negative scenarios; post-purchase rationalisation: refusing to accept new information; confirmation bias: filtering information according to one’s own beliefs; normalcy bias: refusing to consider alternatives; and many others). People miss important risks both individually and as a group, and additional biases like groupthink limit the ability of risk managers to get meaningful risk information during workshops.
- Decision makers significantly overestimate or underestimate the probabilities and potential impact risks may have on a decision or an objective. In fact, cognitive biases, combined with generally low statistical literacy, make people’s estimates of impact and probability borderline useless, if not deceptive. Asking people to rate, rank or otherwise qualitatively assess risks is no better than guessing.
- Decision makers tend to ignore or dismiss risks even once it is established that they have a significant impact on a decision or objective. A whole set of biases prevents people from taking meaningful action. For example, we sometimes prefer risk mitigations that solve the immediate problem but increase the overall risk exposure in the long run, and some people assume that inaction is better than action, which often leads to much larger losses.
- Irrationality and the effect of cognitive biases increase significantly on an empty stomach. Low blood glucose prevents our brain from switching from fast, intuitive system 1 thinking to slower, analytical system 2 thinking, making any kind of risk discussion just before lunch or at the end of the day useless.
Overall, research into cognitive biases suggests that people are often irrational when making decisions under uncertainty, which significantly reduces the value of information risk managers receive from management. If expert opinions, rankings and ratings are the only or main source of information for the risk manager, the results of risk analysis are guaranteed to be inaccurate.
More information about the effect cognitive biases have on risk analysis at work and in our day-to-day lives is available in these recommended risk management books: https://riskacademy.blog/2017/01/14/my-favourite-risk-management-books
Apparently, small doses of electricity applied to Wernicke’s area of the brain significantly reduce the effect of cognitive biases on our decision making. OK, that’s obviously a joke. I mean, the research is real, but it’s highly unlikely we will be allowed to electrocute people before risk workshops, so here are some real solutions:
- Stop using risk management techniques that rely primarily on unaided human input. Ranking risks in terms of likelihood, consequence, velocity, viscosity or whatever else your external auditor comes up with next, and mapping risks on a risk matrix, are guaranteed to produce inaccurate and misleading results, so don’t use them for any significant decision.
- Use mathematical methods for risk analysis that minimise the need for subjective human input. One way to overcome cognitive biases is to use scenario analysis or simulations when performing risk analysis, instead of traditional qualitative assessments. Quantitative risk analysis provides an independent opinion on strategic objectives, the likelihood of achieving them and the impact risks may have on their achievement. More importantly, it helps overcome cognitive biases and significantly reduces subjectivity. Some level of subjectivity remains, as expert opinions may still be required for some range and distribution estimates, but quantitative risk techniques still significantly outperform qualitative risk assessments. Douglas Hubbard quotes an interesting study in his book How to Measure Anything in Cybersecurity Risk: for over 100 unmanned space probe missions, NASA applied both a soft “risk score” and more sophisticated Monte Carlo simulations to assess the risks of cost and schedule overruns and mission failures. The cost and schedule estimates from the Monte Carlo simulations had, on average, less than half the error of the traditional estimates.
- Better still, use mathematical methods that don’t rely on subjective human input at all. Mark Powell, an expert in mathematical risk analysis methods, says: “In maths, we use models for risk analysis, but almost always there are terms or variables for which we just do not know what number to use. Most people guess these numbers and hope for the best. Instead, there are three methods that can be used to develop an uncertainty model for these numbers that maximise objectivity and eliminate subjective human input in our risk analysis: find the uncertainty model that minimises the Fisher information (a measure of how much information the model adds to our risk analysis) (Jeffreys, 1939); find the model that maximises the information entropy (entropy is a measure of disorder, i.e., the amount of disorder added to our risk analysis) (Lindley and Savage, 1971); or find the model that maximises the Expected Value of Perfect Information (the less information the model adds to the risk analysis, the larger the EVPI) (Bernardo and Smith, 1995). Fortunately, all three of these diverse approaches give us the same objective uncertainty model for the same problem. Also fortunately, these objective models have been tabulated in textbooks for many risk problems we are likely to encounter, so we don’t have to do all the math by hand.” I agree with Mark and highly recommend risk managers look into these methods.
- If you ever have to use management input or guesses, calibrate the experts before asking for information, and provide plenty of sugar. More information on calibration for the purposes of risk analysis is provided in Douglas Hubbard’s books; more on the effect sugar has on our ability to make decisions under uncertainty is in Daniel Kahneman’s and Gerd Gigerenzer’s books.
- Probably the hardest recommendation of all: change the decision-making process itself. Consider applying the decision quality framework developed by Professor Howard Raiffa of Harvard University and Professor Ronald A. Howard of Stanford University, and popularised by Carl Spetzler in his book Decision Quality.
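The difference between point estimates and simulation in the NASA study quoted above can be sketched in a few lines. The task names, cost ranges and distributions below are hypothetical illustrations, not data from the study:

```python
# A minimal sketch of quantitative cost-risk analysis with a Monte Carlo
# simulation, contrasted with a single-point "most likely" estimate.
# All task names and ranges are invented for illustration.
import random

random.seed(42)

# Each task: (low, most_likely, high) cost in $k, elicited as ranges
tasks = {
    "design":      (80, 100, 160),
    "procurement": (200, 250, 420),
    "integration": (120, 150, 300),
}

N = 100_000
totals = []
for _ in range(N):
    # Sample each task cost from a triangular distribution and sum them
    totals.append(sum(random.triangular(lo, hi, mode)
                      for lo, mode, hi in tasks.values()))

totals.sort()
point_estimate = sum(mode for _, mode, _ in tasks.values())
p50 = totals[N // 2]
p80 = totals[int(N * 0.8)]

print(f"Sum of most-likely estimates: {point_estimate:.0f}")
print(f"Simulated median (P50):      {p50:.0f}")
print(f"Simulated P80 budget:        {p80:.0f}")
```

Because the ranges are right-skewed (overruns are larger than underruns), the simulated median lands well above the sum of most-likely estimates — exactly the systematic optimism a single-point estimate hides.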
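Of the three approaches Mark describes, maximum entropy is the easiest to illustrate. A toy sketch of the classic dice-with-a-known-mean problem (all numbers invented): given only a mean, the maximum-entropy model is p_i proportional to exp(lam * i), with the multiplier lam chosen to match that mean. Any other model with the same mean has lower entropy, i.e. it smuggles in information we do not actually have:

```python
# Maximum-entropy uncertainty model for a variable taking values 1..6
# when the only thing we know is its mean (here 4.5).
import math

values = [1, 2, 3, 4, 5, 6]
target_mean = 4.5

def dist(lam):
    # p_i proportional to exp(lam * i), normalised to sum to 1
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [x / z for x in w]

def mean(p):
    return sum(v * pi for v, pi in zip(values, p))

# mean(dist(lam)) is increasing in lam, so solve for lam by bisection
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(dist(mid)) < target_mean:
        lo = mid
    else:
        hi = mid
p = dist((lo + hi) / 2)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# An arbitrary alternative model with the same mean 4.5
q = [0, 0, 0, 0.5, 0.5, 0]

print("max-entropy model:", [round(pi, 3) for pi in p])
print("entropy:", round(entropy(p), 3), "vs alternative:", round(entropy(q), 3))
```

The same idea generalises: for a continuous positive quantity with a known mean, the maximum-entropy model is the exponential distribution; textbooks tabulate these results for the common constraint sets, as the quote notes.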
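When expert input is unavoidable, calibration in the Hubbard sense can be checked with a simple hit-rate test before the workshop: if an expert’s 90% confidence intervals are well calibrated, roughly 9 out of 10 should contain the true value. The interval data below is invented for illustration:

```python
# A minimal calibration check: compare the stated confidence level of an
# expert's intervals against the fraction that actually contain the truth.
# All numbers are invented for illustration.

# Each tuple: (expert's lower bound, expert's upper bound, true value)
answers = [
    (8_000, 12_000, 9_300),
    (150, 400, 212),
    (1.5, 3.0, 4.1),      # miss: interval too narrow
    (40, 90, 55),
    (0.2, 0.9, 0.4),
    (300, 700, 820),      # miss
    (10, 25, 18),
    (5, 15, 22),          # miss
    (100, 250, 180),
    (2, 6, 3),
]

hits = sum(lo <= truth <= hi for lo, hi, truth in answers)
hit_rate = hits / len(answers)

print(f"Stated confidence: 90%, actual hit rate: {hit_rate:.0%}")
if hit_rate < 0.9:
    print("Overconfident: widen the ranges before using them in any analysis.")
```

In practice this is run on trivia questions with known answers; experts whose hit rate is far below their stated confidence are trained to widen their ranges until the two match.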
Mark Powell added a few more ideas:
- Risk managers can only deal with cognitive biases in the decisions they make themselves. They cannot overcome the biases of an executive to whom they present.
- A company can train its decision makers to recognise a wide variety of decision pitfalls (cognitive biases are but a subset of all possible decision pitfalls). Decision makers then stand a chance of recognising these pitfalls in their own decisions.
- A company can set up a decision review panel, or have mentors assigned to decision makers.
- Risk managers shouldn’t use methods that encourage bad decision making.
The history of risk perception
The study of risk perception originated from the fact that experts and laypeople often disagreed about the riskiness of various technologies and natural hazards.
The mid-1960s saw the rapid rise of nuclear technologies and the promise of clean and safe energy. However, public perception shifted against this new technology. Fears of both long-term dangers to the environment and immediate disasters creating radioactive wastelands turned the public against it. The scientific and governmental communities asked why public perception opposed the use of nuclear energy when all the scientific experts were declaring how safe it really was. The problem, as perceived by the experts, was a difference between scientific facts and an exaggerated public perception of the dangers (Douglas, 1985).
Researchers tried to understand how people process information and make decisions under uncertainty. Early findings indicated that people use cognitive heuristics in sorting and simplifying information which leads to biases in comprehension. Later findings identified numerous factors responsible for influencing individual perceptions of risk, which included dread, newness, stigma, and other factors (Tversky & Kahneman, 1974).
Research also detected that risk perceptions are influenced by the emotional state of the perceiver (Bodenhausen, 1993). According to valence theory, positive emotions lead to optimistic risk perceptions whereas negative emotions incite a more pessimistic view of risk (Lerner, 2000).
A word of warning about cognitive biases
Besides the cognitive biases inherent in how people think and behave under uncertainty, there are more pragmatic factors that influence the way we make decisions, including poor motivation and remuneration structures, conflicts of interest, ethics, corruption, poor compliance regimes, lack of internal controls and so on. All of this makes any type of significant decision making based purely on expert opinions and perceptions highly subjective and unreliable.
Cognitive biases themselves are not set in stone. When scientists tried to replicate many of the tests performed by researchers in the 1970s, they found inconclusive or even contradictory results, suggesting that some of the cognitive bias findings we know today may be inaccurate or exaggerated.
A recent critical review of loss aversion (one of the most significant contributions of psychology to behavioural economics, according to Kahneman) by D. Gal and D. Rucker of Northwestern University, published in the Journal of Consumer Psychology, argues that loss aversion is potentially a fallacy. According to the authors, there is no general cognitive bias that leads people to avoid losses more vigorously than to pursue gains. Contrary to claims based on loss aversion, price increases (i.e., losses for consumers) do not impact consumer behaviour more than price decreases (i.e., gains for consumers), and messages that frame an appeal in terms of a loss (e.g., “you will lose out by not buying our product”) are no more persuasive than messages framed in terms of a gain (e.g., “you will gain by buying our product”). Is this study the beginning of the end for cognitive biases, or will it itself be found inconclusive in five years’ time? Only time will tell. I can only vouch for myself: understanding and using cognitive biases explained a lot in my role as Head of Risk at one of the large sovereign funds and made my job much easier.
Another famous risk practitioner and author, Nassim Nicholas Taleb, argued when I met him in New York in June 2018 that cognitive biases may explain individual behaviour under sometimes sterile conditions; however, they should not be used to justify or explain the behaviour of complex systems like societies. I tend to agree.
– – – – – – – – – – – – – – – – – – – – –
RISK-ACADEMY offers decision making and risk management training and consulting services. Our corporate risk management training programs are specifically designed to promote risk-based decision making and the integration of risk management into business processes. Risk managers all over the world call us in to help sell the idea of integrating risk analysis into decision making and using quantitative risk analysis techniques. Check out our most popular course for decision makers https://riskacademy.blog/product/risk-based-decision-making-executives/ or our dedicated programs to help risk managers learn the foundations of quantitative risk analysis https://riskacademy.blog/product/risk-managers-training/. We can also help audit risk management effectiveness or develop a roadmap for integrating risk management into decision making https://riskacademy.blog/product/g31000-risk-management-maturity-assessment/
11 thoughts on “If cognitive biases in decision making are a given, how do risk managers overcome them?”
Outstanding paper – thanks. I have written two books on risk management and (industrial) asset management decision-making, and your comments align extremely well with my findings. One area of potential mitigation is the use of cross-disciplinary teams, opinion-capture storyboards, real-time sensitivity testing of uncertainties and something we called the ‘Sherlock Holmes’ method for eliciting tacit knowledge (eliminate the ‘impossible’ and work with what remains, as it ‘must include the truth’). See http://www.SALVOproject.org
Interesting, thanks for sharing, John
Very good reading. One of our authors wrote an article which I would reference here, if you don’t mind, since I believe it adds value and complements your thoughts, introducing the concept of successive approximation methodologies:
Creating Value in Operational Risk Management Through Behavioral Science – https://riskmanagementguru.com/creating-value-in-operational-risk-management-through-behavioral-science.html/
Very nice article Alexei, as usual it is an interesting read, you make a convincing case against all those simple risk management techniques and tools the majority of organisations all over the world use. I’m also convinced that closing the gap between reality and the perception of reality is at the core of managing risk. It means that we need to know more and increase awareness and understanding of what exists and how this can affect our objectives.
However, I’m afraid your proposed method of managing risk in organisations is only applicable to a minority of the risks faced by people in the field. Also, it is very nice that powerful mathematical models and practices exist, but any result of a mathematical approach will still need interpretation, and will then be subject to humans and their cognitive biases when it comes to taking decisions, also in large organisations.
“Better still, use mathematical methods that don’t rely on subjective human input at all.”
Nice. But who will determine the method, who will select which data to take into account. … There will always be a human input and it needs to be based on the best available information.
Also, most organisations that create value are small SMEs that will never have the knowledge, nor the capacity, to use mathematical models, yet they are very successful in creating value, because they are able to gather information from different sources, analyse matters with common sense and take decisions that allow their SME to prosper. The same is valid for the everyday work of teams in larger organisations.
As such, imho, the most important instrument to overcome cognitive bias in managing risks in organisations is not mathematics, but it is dialogue, supported by the often very simplistic tools and techniques that everyone can use.
For those who think this is all bollocks, just listen to Steve Jobs when he explains the success of Apple.
Therefore it is such a pity that the definition of communication and consultation found in ISO 31000:2009 is no longer present in ISO 31000:2018. Probably because most people in the risk management world look at communication and consultation with a cognitive bias rooted in a traditional approach to this crucial (iterative) step in the risk management process, failing to understand the difference between communication and consultation as such, and communication and consultation in the form of dialogue.
So, for me any method, old or new, mathematical or otherwise, is valid and useful. It all depends on which risks you have to manage and how much time and effort you can spend to make your assessment timely and worthwhile, allowing for a decision that creates and protects value with an adequate use of resources.
Should you wish to know more on taking better decisions in organisations?
Mark will show an example of how to do proper mathematical risk analysis with no subjectivity https://2019.riskawarenessweek.com/talks/the-thinking-the-math-the-coding/
Should you wish to know more on taking better decisions in organisations? https://2019.riskawarenessweek.com/schedule/ :)))))))
Very interesting article.
May I also propose that you take into consideration the findings of Bent Flyvbjerg, as presented in this article https://journals.sagepub.com/doi/full/10.1177/87569728211049046
As Bent argues in his research paper: ‘cognitive bias is half the story in behavioral science. Political bias (or strategic misrepresentation) is the other half.’