Grant Purdy and Roger Estall have recently published a book on decision-making called Deciding. Written to help decision makers (they call them Deciders) make ‘even better decisions’, it goes directly to the two big challenges for every Decider – ensuring that each decision will contribute to (rather than detract from) achieving the purpose of their organisation, and being sufficiently certain that the outcomes that result from the decision are those they intend.
The unmistakable evidence is that most organisations don’t even attempt to adopt any type of ‘risk management’ belief system. This is probably because of the complexity involved and the ill-fit with their own purpose and methods of operating.
However, of the relatively few organisations that either buy in to the belief system, or are forced in by regulators, few if any master its intricacies or fundamentally change the way they operate. As the saying goes, they might ‘talk the talk’ but don’t in fact ‘walk the walk’.
There are several reasons for this which most often include the following:
- The ‘risk management’ paraphernalia is complicated and unnatural. For example, the first ISO ‘risk management’ standard contained 29 labels that relate either to ordinary words given a special meaning, or to contrived expressions involving the word ‘risk’. Even the label ‘risk’ is so ill-defined as to require five accompanying ‘notes’ to its own definition, each of which either contradicts or confuses!
- Much of what comes with ‘risk management’ is illogical and defies common sense – easily recognised as such by most people. For example, over the past decade, some ‘risk management’ proponents have popularised the dichotomy of ‘risk and opportunity’. This is about as logical as pairing bulldozers with cauliflowers – sure, it can be done, but why? The two words are utterly unrelated and are not antonyms.
- Fundamental to the ‘risk management’ belief system is the contention that, to be successful, organisations need to somehow integrate the related paraphernalia into their usual way of operating and making decisions. This is neither realistic nor valid, which is why ‘risk management’ is usually applied (imperfectly) as an ‘add-on’.
- Despite adoption of some of the trappings of ‘risk management’ – such as pronouncement of policies, references in the annual report and sporadic, but inconsistent, use of the jargon – little if any change occurs in the way decisions are made.
- The ‘risk management’ edifice and its constructs don’t make life easier or enhance decision making. This is illustrated in the box below which discusses ‘risk registers’, one of the more common and time-consuming pieces of paraphernalia that Deciders are expected to create or consult. At the human level of the individual Decider, it can seem that ‘risk management’ advocates are saying ‘here’s the answer, now fit your problem into that’. As a result (to borrow a common Australian phrase) ‘risk management’ doesn’t ‘pass the pub test’ in the eyes of most Deciders. It hinders rather than helps them to function and so it is either ignored or paid lip service.
- There is a considerable cost involved in replacing long-standing practices with a new approach across an organisation, and yet the returns are far from obvious or commensurate.
- There is little objective evidence to support the contention that adopting ‘risk management’ equates to enhanced organisational performance.
- An increasingly popular practice (promoted by consultants) of purporting to measure ‘risk management maturity’ or to certify ‘risk management’ compliance suffers fundamental problems. Setting aside the curious use of the word ‘maturity’ as a proxy for competence, the measures used are arbitrary and fuzzy, lack validation, and relate to inputs rather than outcomes. Consequently, there is rarely any explicit consideration of the quality of decisions being made or their actual effect on organisational performance. Although there are some faint correlations between organisations that are successful and those that adopt the ‘risk management’ paraphernalia, correlation is not causation. Any apparent correlations could be explained by already-successful organisations being able to afford to construct a ‘risk management’ edifice or being subjected to regulatory coercion.
‘Risk registers’ are a common example of the type of artificial construct imposed by ‘risk management’ belief systems (even though they were not even mentioned in the ISO or COSO standards). Such registers purport to list and describe the ‘risks’ associated with either an organisation or, say, a project or other substantial decision. Although created at a point in time, few if any registers record the prevailing context, which will inevitably change and thus invalidate the diagnosis. Furthermore, the list of ‘risks’ can only ever be a sample. The practical task of filling out the columns of the register invariably distracts Deciders from achieving sufficient certainty that their decision will deliver the required outcomes. This may explain why it is very rare that the registers are actually used in decision-making, or even accessible to Deciders.
It may seem surprising that sector peak bodies have not successfully pushed back against the regulation of ‘risk management’. There may be two reasons for this: the vagaries of ‘risk management’ mean it is not seen as a core issue (in contrast, say, to product regulation, quality assurance or financial regulation); or there is reliance on the advocacy of internal or external subject matter ‘experts’, without appreciating that they may have a perverse interest in the belief system being mandated.