What makes a successful risk leader and risk strategy?
I have written about this before and am really looking forward to Chris contributing to the discussion. Here is also a short video summary of the article: https://www.youtube.com/watch?v=nqmnycKZwgg
A while back I saw some discussions about the competencies risk managers should have. Some people talked about empathy and emotional intelligence, others about strong communication skills and networking. And I thought to myself, what a load of rubbish. Don’t get me wrong, these are of course important and useful, but there is more to the profession. Risk management is much more science than art. It is usually only the people who ignore the science who fall back on the softer side.
As a result, I have tried to list four key competencies risk managers in non-financial companies should develop to successfully support decision makers and risk takers, and hence add value to their organizations:
A. Understanding how the human brain works and how people make decisions under uncertainty
The study of risk perception originated from the fact that experts and laypeople often disagreed about the riskiness of various technologies and natural hazards. A lot of this information is available at https://en.wikipedia.org/wiki/Risk_perception
The mid-1960s saw the rapid rise of nuclear technologies and the promise of clean and safe energy. However, public perception shifted against this new technology. Fears of both long-term dangers to the environment and immediate disasters creating radioactive wastelands turned the public against it. The scientific and governmental communities asked why public perception was against the use of nuclear energy when all the scientific experts were declaring how safe it really was. The problem, as perceived by the experts, was a difference between scientific facts and an exaggerated public perception of the dangers (Douglas, 1985).
Researchers tried to understand how people process information and make decisions under uncertainty. Early findings indicated that people use cognitive heuristics to sort and simplify information, which leads to biases in comprehension. Later findings identified numerous factors responsible for influencing individual perceptions of risk, including dread, newness, stigma and other factors (Tversky & Kahneman, 1974).
Research also detected that risk perceptions are influenced by the emotional state of the perceiver (Bodenhausen, 1993). According to valence theory, positive emotions lead to optimistic risk perceptions whereas negative emotions incite a more pessimistic view of risk (Lerner, 2000).
The earliest psychometric research was performed by psychologists Daniel Kahneman (who later won a Nobel Prize in economics with Vernon Smith “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty”) (Kahneman, 2003) and Amos Tversky. They performed a series of gambling experiments to understand how people evaluated probabilities. Their major finding was that people use a number of heuristics to evaluate information, which should really make risk professionals question the information they put in the risk reports.
These heuristics are usually useful shortcuts for thinking, but may lead to inaccurate judgments in complex business situations of high uncertainty – in which case they become cognitive biases.
Besides the cognitive biases inherent in how people think and behave under uncertainty, there are more pragmatic factors that influence the way we make decisions, including poor motivation and remuneration structures, conflicts of interest, ethics, corruption, weak compliance regimes, lack of internal controls and so on. All of this makes significant decision-making based purely on expert opinions and perceptions highly subjective and unreliable.
B. Corporate finance, probability, forecasting and risk modeling
The official definition of risk, according to ISO 31000, is the effect of uncertainty on objectives. To me, this implies that risks have to be expressed in the same language, form and shape as the objectives they can potentially affect. Think about it.
That means that if the objective is to grow revenue by 10%, then risks should be expressed as volatility of that revenue target. Expressing risks as high, medium or low, or even as impact (in dollars) multiplied by probability (in percent), is pretty meaningless because it lacks a clear connection with the original revenue target. But more than that, it is dangerous…
In the Economics & Management journal of the Society of Petroleum Engineers, authors Philip Thomas, Reidar Bratvold, and J. Eric Bickel reviewed 30 different papers that described various risk matrices (mostly used in the oil and gas industry). Thomas et al. also estimated a “Lie Factor” for each of several types of risk matrices. The Lie Factor is a measure defined by Edward Tufte and Peter Graves-Morris in 1983 based on how much data is distorted in a chart by misleading features of the chart, intentional or otherwise. This is effectively a variation on the “range compression” that Cox examined in detail. Using a particular method for computing the Lie Factor, they found that the ratio of distortions of data averaged across the various risk matrix designs was in excess of 100. To get a sense of what a Lie Factor of 100 means, consider that when Edward Tufte explained this method he used an example that he classified as a “whopping lie”: it had a Lie Factor of 14.8.
Thomas et al. found that any design of a risk matrix had “gross inconsistencies and arbitrariness” embedded within it. Their conclusion is consistent with the conclusions of everyone who has seriously researched risk matrices:
How can it be argued that a method that distorts the information underlying an engineering decision in non-uniform and uncontrolled ways is an industry best practice? The burden of proof is squarely on the shoulders of those who would recommend the use of such methods to prove that the obvious inconsistencies do not impair decision making, much less improve it, as is often claimed.
Another NASA study showed how Monte Carlo and statistical regression-based methods performed compared to “softer” methods. The softer method referred to was actually NASA’s own version of the 5 × 5 risk matrix. The mission scientists and engineers arguably had a subject-matter advantage over the accountants—and yet, the accountants using Monte Carlo simulations and historical data were better at forecasting than the scientists and engineers using a risk matrix.
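To make the “range compression” problem concrete, here is a minimal Python sketch. The matrix band boundaries and the two example risks are my own illustrative assumptions, not figures from Thomas et al. or Cox; the point is only that a typical 5 × 5 matrix can place two very different risks in the same cell.

```python
def matrix_cell(probability: float, impact_musd: float) -> tuple[int, int]:
    """Map a risk onto a hypothetical 5x5 matrix (1 = lowest band, 5 = highest).

    Assumed bands: probability in 20% steps, impact in order-of-magnitude
    steps ($m) -- a common but entirely illustrative design.
    """
    prob_band = min(int(probability / 0.2) + 1, 5)
    thresholds = [0.1, 1.0, 10.0, 100.0]  # impact band boundaries in $m
    impact_band = sum(impact_musd >= t for t in thresholds) + 1
    return prob_band, impact_band

# Two invented risks with very different expected losses
risk_a = (0.21, 1.1)   # 21% chance of a $1.1m loss -> expected loss ~$0.23m
risk_b = (0.39, 9.5)   # 39% chance of a $9.5m loss -> expected loss ~$3.7m

cell_a = matrix_cell(*risk_a)
cell_b = matrix_cell(*risk_b)
ratio = (risk_b[0] * risk_b[1]) / (risk_a[0] * risk_a[1])

print(f"Risk A cell: {cell_a}, Risk B cell: {cell_b}")
print(f"Expected-loss ratio: {ratio:.0f}x")
```

Both risks land in exactly the same cell even though their expected losses differ by roughly sixteen times, so the matrix treats them as identical and any prioritisation based on it is distorted from the start.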
With the exception of some operational risks that are required by law to be managed in a particular way, most business risks should be assessed and analysed within a financial or other model, a schedule or a plan. Depending on the objective, this may be a strategic, investment or project financial model, a budget or even a project schedule.
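As a minimal sketch of what analysing risk inside the model itself can look like, here is a Monte Carlo simulation of a revenue-growth objective in Python. Every number in it (the baseline revenue, the growth volatility and the customer-loss scenario) is an invented assumption for illustration, not data from any real company.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

BASELINE_REVENUE = 100.0   # assumed current revenue, $m
TARGET_GROWTH = 0.10       # the objective: grow revenue by 10%
N_SIMULATIONS = 10_000

def simulate_revenue() -> float:
    """One trial of next year's revenue with two illustrative risk drivers."""
    # Assumption: planned growth with market volatility (normal, sd = 4 p.p.)
    growth = random.gauss(TARGET_GROWTH, 0.04)
    revenue = BASELINE_REVENUE * (1 + growth)
    # Assumption: 15% chance of losing a key customer, costing 5-10% of revenue
    if random.random() < 0.15:
        revenue -= BASELINE_REVENUE * random.uniform(0.05, 0.10)
    return revenue

trials = [simulate_revenue() for _ in range(N_SIMULATIONS)]
target = BASELINE_REVENUE * (1 + TARGET_GROWTH)
prob_miss = sum(t < target for t in trials) / N_SIMULATIONS

print(f"Mean simulated revenue: {statistics.mean(trials):.1f}")
print(f"Probability of missing the 10% growth target: {prob_miss:.0%}")
```

Instead of a colour on a matrix, the output is expressed in the same language as the objective: a distribution of revenue outcomes and the probability of missing the target, which a decision maker can act on directly.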
All risk managers on the team must have a basic understanding of corporate finance, probability, forecasting and risk modeling, with at least one person on the team having in-depth knowledge of and experience in corporate finance.
Here is a useful resource from Wharton https://www.coursera.org/learn/wharton-finance and another example on statistics and probability from Duke University https://www.coursera.org/learn/bayesian or Stanford https://www.coursera.org/specializations/probabilistic-graphical-models
C. Laws, standards and regulations
Some industries have risk-management-related standards or guidelines, and some countries have specific laws and regulations related to risk management. The risk management team should have in-depth knowledge of these, and any additional guidance should be taken into account when implementing risk management in a given company.
Risk managers also need to know and understand the applicable risk management standards and guidelines. The best choice for a non-financial company is by far ISO 31000:2009. At the time of writing, the standard had been officially translated and adopted in 44 of the 50 largest countries by GDP, making it truly global. The standard is currently being updated by a group of experts from more than 30 countries, and the new version is expected to be published in late 2017 or early 2018. My short video on the new ISO 31000 draft can be watched here: https://www.youtube.com/watch?v=WsSLqJFkHlo
D. How business works
This is the most important point. Whatever the industry, someone on the risk management team should have extensive knowledge of it. Risk management is a decision-making tool, and to be properly integrated into decision making, risk managers need to understand how decisions are made and how risks are normally mitigated in the industry.
E. Computer science (new June 2018)
This is a new addition. It is now becoming more and more apparent that the future economy will be digital. Digital means data, a lot of it, often unstructured. Maybe, just maybe, the future of risk management is artificial intelligence, chatbots and Alexas doing the risk analysis for the decision makers.
Artificial intelligence is not that sexy; it’s hardcore math and programming, a skill most, if not all, non-financial risk managers don’t have. This is the skill of the future. Soon Monte Carlo simulations will not be enough.