Many methods of measuring an enterprise's operational and business risk have emerged:
- the proxy, analog or surrogate method
- the earnings volatility method
- the loss modelling method
- the direct estimation method
The first three consistently run into data problems that reduce either their effectiveness or, at the least, their freedom from the influence of subjective judgment. The first two are nonetheless useful as a rough approximation or triangulation of a firm's risk capital requirement.
Each is discussed briefly below.
The proxy, analog or surrogate method
This high-level or "top-down" method is often used by large companies comprising several divisions that operate independent businesses. The stages of the method are broadly as follows:
Undertaking the same approach at the group level and comparing the group result with the sum of the divisional results would allow an estimate of the diversification benefit gained by the group, as sketched below.
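For illustration only, a minimal sketch of that comparison, assuming hypothetical stand-alone capital figures for three divisions and a group-level figure produced on the same basis (none of these numbers come from the source):

```python
# Hypothetical stand-alone risk capital estimates per division ($m),
# each produced by the same top-down proxy/analog approach.
divisional_capital = {"retail": 120.0, "wholesale": 85.0, "funds_mgmt": 40.0}

# Hypothetical result of applying the same approach at the group level.
group_capital = 205.0

sum_of_divisions = sum(divisional_capital.values())
diversification_benefit = sum_of_divisions - group_capital

print(f"Sum of divisional capital: {sum_of_divisions:.0f}")
print(f"Group capital:             {group_capital:.0f}")
print(f"Diversification benefit:   {diversification_benefit:.0f}")
```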
The essential problems with this method are:
There have also been instances where analog selection and data "cleaning" have been undertaken to achieve desired results - results that have subsequently diverged from expected levels as equity markets moved. Does one then choose different analogs to achieve a desired outcome?
The earnings volatility method
This method considers the statistical variability in the earnings of the company, or of its divisions, and either calculates the unexpected loss empirically at a chosen confidence level or, more likely, fits a standard statistical distribution to the available data and calculates the unexpected loss analytically at that confidence level. Before the analysis is undertaken, the earnings data need to be adjusted for risks other than operational risk (the extent of adjustment depending on the definition chosen). Even under the broad definition, this would involve adjusting for the full effect of credit and market risks.
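As a rough sketch of the two calculation routes, the fragment below fits a normal distribution to a hypothetical, already-adjusted quarterly earnings series and compares the empirical and analytical unexpected-loss figures at a 99.9% confidence level. The earnings figures, the normal fit and the confidence level are illustrative assumptions, not taken from the source:

```python
import numpy as np
from scipy import stats

# Hypothetical quarterly earnings (already adjusted for credit and market risk), $m.
earnings = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 29.0, 52.0, 44.0, 58.0, 35.0])

confidence = 0.999  # target confidence level

# Empirical route: unexpected loss as the shortfall of the low quantile
# below mean earnings (crude with so few observations).
empirical_ul = np.mean(earnings) - np.quantile(earnings, 1 - confidence)

# Analytical route: fit a normal distribution and take the same quantile.
mu, sigma = np.mean(earnings), np.std(earnings, ddof=1)
analytical_ul = mu - stats.norm.ppf(1 - confidence, loc=mu, scale=sigma)
# Equivalent to sigma * stats.norm.ppf(confidence).

print(f"Empirical UL:  {empirical_ul:.1f}")
print(f"Analytical UL: {analytical_ul:.1f}")
```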
The same method can be applied to the volatility of asset values, including the market capitalisation of the company, but this is not readily possible for its divisions.
The essential problems with this method are:
The loss modelling method
This method collects actual loss data and uses it to derive empirical distributions for the entity's risks. These empirical risk distributions are then used to calculate an unexpected loss amount that needs to be covered by a capital buffer. The unexpected loss can, in theory, be calculated to any desired confidence level.
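A minimal sketch of the idea, assuming a hypothetical internal loss record and a simple frequency/severity Monte Carlo aggregation (Poisson frequency, severities resampled from the empirical record); the loss figures and the five-year observation window are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical loss events (severity in $m) collected over 5 years.
loss_severities = np.array([0.1, 0.3, 0.2, 1.5, 0.4, 0.7, 0.05, 2.1, 0.9, 0.15,
                            0.6, 0.25, 3.4, 0.5, 0.8])
years_observed = 5
annual_frequency = len(loss_severities) / years_observed

confidence = 0.999
n_sims = 100_000

# Monte Carlo aggregation: draw an annual event count, resample severities
# from the empirical record, and sum to an annual aggregate loss.
annual_losses = np.array([
    rng.choice(loss_severities, size=rng.poisson(annual_frequency)).sum()
    for _ in range(n_sims)
])

expected_loss = annual_losses.mean()
unexpected_loss = np.quantile(annual_losses, confidence) - expected_loss
print(f"Expected loss:   {expected_loss:.2f}")
print(f"Unexpected loss: {unexpected_loss:.2f}")
```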
This is typically thought of as a "bottom-up" method, but it can be applied at any level of detail, with loss types defined narrowly or broadly.
This method is intuitively attractive as it endeavours to anchor itself in objective loss data. It is also promoted by the January 2001 Basel proposals, in conjunction with their narrow definition of operational risk. However, it faces a number of critical problems, particularly if an entity seeks a broad measure of risk:
A method proposed by several to overcome the insufficiency of internal loss data is to collect industry loss data and scale it to the size of the entity. Although it is clearly beneficial to learn from the experience of others, this supplementary method has additional problems:
The judgment layer required to deal with these problems overshadows the apparently objective nature of this method.
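As a sketch of the scaling step mentioned above, the fragment below rescales a hypothetical external (industry) loss to the entity's size using an assumed power-law scaling on gross revenue. The loss amount, the revenue figures and the exponent are all illustrative assumptions - choosing them is exactly the kind of judgment the text refers to:

```python
# Hypothetical external loss observed at a peer institution ($m).
external_loss = 25.0
peer_revenue = 12_000.0    # peer's gross revenue ($m), assumed
entity_revenue = 3_000.0   # our entity's gross revenue ($m), assumed

# Assumed power-law scaling: severity scales with size**alpha, with
# alpha < 1 reflecting less-than-proportional growth of losses with size.
# The exponent is a judgment call, not a given.
alpha = 0.25

scaled_loss = external_loss * (entity_revenue / peer_revenue) ** alpha
print(f"Scaled loss for the entity: {scaled_loss:.1f}")
```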
The direct estimation method
The direct estimation method relies on collaborative line manager judgments to estimate a risk distribution for the risks they run. It explicitly incorporates a layer of subjective judgment based on available loss data and other relevant factors, but these subjective judgments are generally of lower significance than those involved in the other measurement methods.
It also provides a forward-looking quantification of risk, with the effects of changes in business mix, strategy or structure readily included in the direct estimation judgment process.
The direct estimation method is covered in detail in subsequent pages, but basically involves the selection of a risk distribution shape that is appropriate for the risk (generally allocated by default to the risk category) and then anchoring this risk distribution shape with a quantification of the impact of one or more scenarios (which can include actual risk incidents or near misses). This estimated risk distribution can be refined in the light of subsequent experience (or as appropriate loss data becomes available).
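As a sketch of the anchoring step, assuming the chosen shape is lognormal and that line management supplies two anchors - a typical-year (median) loss and a severe 1-in-20-year loss - the two anchor values below being hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical scenario anchors supplied by line management ($m):
median_loss = 0.5   # a "typical" (median) annual loss
severe_loss = 8.0   # a severe 1-in-20-year loss
severe_prob = 0.95  # i.e. the 95th percentile

# For a lognormal distribution the median fixes mu, and the scenario
# quantile then fixes sigma:
#   exp(mu) = median_loss
#   exp(mu + sigma * z_0.95) = severe_loss
mu = np.log(median_loss)
sigma = (np.log(severe_loss) - mu) / stats.norm.ppf(severe_prob)

risk = stats.lognorm(s=sigma, scale=np.exp(mu))

confidence = 0.999
expected_loss = risk.mean()
unexpected_loss = risk.ppf(confidence) - expected_loss
print(f"sigma = {sigma:.2f}, expected = {expected_loss:.2f}, "
      f"unexpected at {confidence:.1%} = {unexpected_loss:.2f}")
```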
The detail at which risks are estimated (or at which losses are grouped in the loss modelling method) is at the organisation's discretion - it depends on the level of detail sought in the subsequent risk/return numbers. This may well be fine-grained but need not be.
Some granularity or level of detail has to be chosen, and at any level some bundling of risk types is inevitable. For example, the granularity might be the risk of flood to a state's branch network, or it might be the risk of all natural disasters to the organisation as a whole - either way the risk encompasses a range of possible risk events. The decision needs to be made on the level of discrimination sought in the results. Does the organisation wish to consider pricing, or a possible contribution to shareholder value, at the level of an individual region? Probably not initially.
Direct estimation does not require everyone to duplicate estimates of the same risks. Common risks can be estimated by the appropriate experts, who can also define which of the available indicators would be the best proxies for the risk's incidence. Staff numbers may be suitable for some risks, the number of certain types of transactions for others, and so on.
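A small sketch of such an indicator-based approach, assuming a centrally estimated capital amount for a common risk is apportioned to divisions in proportion to staff numbers (all figures hypothetical):

```python
# Hypothetical capital estimated centrally for a common risk ($m).
common_risk_capital = 60.0

# Hypothetical indicator chosen by the relevant experts as the best proxy
# of the risk's incidence: staff numbers per division.
staff_numbers = {"retail": 5_000, "wholesale": 1_200, "funds_mgmt": 800}

# Apportion the central estimate in proportion to the indicator.
total_staff = sum(staff_numbers.values())
allocation = {div: common_risk_capital * n / total_staff
              for div, n in staff_numbers.items()}

for div, amount in allocation.items():
    print(f"{div:>10}: {amount:.1f}")
```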
Many broadly defined operational risks do have large "intangible" or non-cash losses that are difficult to estimate. But they do seem able to be quantified after a disaster, particularly for public companies with a sharemarket measure. In most cases it would be appropriate to consider the size of these risks before they hit. Hard? - yes. Precise? - no. But precise to the degree of precision needed - yes. And are such judgments about these risks already explicitly needed for the company's risk management? - most definitely yes.