Which Measurement Method?

Many methods of measuring an enterprise's operational and business risk have emerged:

  1. the proxy, analog or surrogate method
  2. the earnings volatility method
  3. the loss modelling method
  4. the direct estimation method

The first three consistently run into data problems, which reduce either their effectiveness or, at the very least, their freedom from the influence of subjective judgment. The first two are nonetheless useful as a rough approximation or triangulation of a firm's risk capital requirement.

Each is discussed briefly below.

The proxy, analog or surrogate method

This high-level or "top-down" method is often used by large companies comprising several divisions that operate independent businesses. The stages of the method are broadly as follows:

  1. Identify public companies that are similar ("analogs") to the entity's different divisions either as they are or as they "should be".
  2. Identify the capital and other key financial variables of these analog companies and analyse the relationship (including by regression) between the capital and other variables
  3. Use this relationship to calculate the amount of capital the division should hold on a "stand-alone" basis according to its key financial variables

Undertaking the same approach at the group level and comparing the group result with the sum of the divisional results would allow an estimate of the diversification benefit gained by the group.
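As a rough illustration of the mechanics, the sketch below fits a simple linear relationship between capital and a single size variable for a handful of hypothetical analog companies, applies it to each division on a stand-alone basis and compares the sum of the divisional results with the group-level result. The figures, the choice of total assets as the sole explanatory variable and the linear form are illustrative assumptions only.

```python
import numpy as np

# Hypothetical analog data: total assets and reported capital ($bn) for peer
# companies. All figures are invented purely to show the mechanics.
analog_assets  = np.array([120.0, 450.0, 80.0, 300.0, 610.0])
analog_capital = np.array([  9.5,  31.0,  6.8,  22.5,  41.0])

# Fit capital = a + b * assets by ordinary least squares.
X = np.column_stack([np.ones_like(analog_assets), analog_assets])
(a, b), *_ = np.linalg.lstsq(X, analog_capital, rcond=None)

def standalone_capital(assets):
    """Capital a division 'should' hold according to the analog relationship."""
    return a + b * assets

# Apply the fitted relationship to each division, then compare the sum of the
# stand-alone requirements with the group-level estimate to gauge the
# diversification benefit.
division_assets = {"retail": 150.0, "wholesale": 220.0, "wealth": 60.0}
divisional = {name: standalone_capital(v) for name, v in division_assets.items()}
group_estimate = standalone_capital(sum(division_assets.values()))
diversification_benefit = sum(divisional.values()) - group_estimate

print(divisional, group_estimate, diversification_benefit)
```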

The essential problems with this method are:

  1. the level of subjective judgment required in selecting the analog companies
  2. the level of judgment required at the formation of the explanatory algorithm
  4. the shortage of data history available to support this analysis
  4. the difficulty in using this method to project the capital requirements for a future innovative structure where analogs may not exist
  5. the separation between the risk capital measure and the actual risk management activities in the entity, with subsequent lack of incentive to promote risk management

There have also been instances when analog choice and data "cleaning" have been undertaken to achieve desired results - results that have subsequently diverged from expected levels with movements in equity markets. Does one then choose different analogs to achieve a desired outcome?

The earnings volatility method

This method considers the statistical variability in the earnings of the company, or of its divisions, and either empirically calculates the unexpected loss at a certain confidence level or, more commonly, fits a standard statistical distribution to the available data and calculates the unexpected loss analytically at that confidence level. Before the analysis is undertaken, the earnings data needs to be adjusted for risks other than operational risk (the extent depending on the definition chosen). Even under the broad definition, this would involve adjusting for the full effect of credit and market risks.
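A minimal sketch of the analytical variant is shown below: it fits a normal distribution to a short series of (already adjusted) annual earnings and reads off the unexpected loss at a chosen confidence level. The earnings figures, the normal shape and the 99.9% confidence level are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical annual earnings ($m), already adjusted for the effects of credit
# and market risk, inflation, volume growth and changes in company structure.
earnings = np.array([410.0, 385.0, 520.0, 290.0, 455.0, 340.0, 505.0, 265.0])

mu, sigma = earnings.mean(), earnings.std(ddof=1)

confidence = 0.999
# Worst-case earnings outcome at the chosen confidence level under the fitted normal.
worst_case = stats.norm.ppf(1.0 - confidence, loc=mu, scale=sigma)

# Unexpected loss = the shortfall of the worst case below expected earnings;
# this is the buffer that risk capital is intended to absorb.
unexpected_loss = mu - worst_case
print(f"Expected earnings {mu:.0f}, unexpected loss at {confidence:.1%}: {unexpected_loss:.0f}")
```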

The same method can be applied to the volatility of asset values, including the market capitalisation of the company, but this is not readily possible for its divisions.

The essential problems with this method are:

  1. the difficulty in finding sufficient consistent data to perform either the empirical analysis or to fit a standard statistical distribution with confidence
  2. the level of judgment required in cleaning the data for the effects of other risk types, and in adjusting it for inflation, volume growth, changes in company structure, etc.
  3. the judgment required to accept that the relatively short time-span of the data effectively captures the full range of experience for which risk capital provides a buffer
  4. the backward-looking nature of the reliance on historical data, which makes this approach difficult to apply to strategic changes in company structure or business
  5. the separation between the risk capital measure and the actual risk management activities in the entity, with subsequent lack of incentive to promote risk management

The loss modelling method

This method collects actual loss data and uses it to derive empirical distributions for its risks. These empirical risk distributions are then used to calculate an unexpected loss amount needing to be protected by a capital buffer. The unexpected loss can be theoretically calculated to any desired target confidence level.

This is typically thought of as a "bottom-up" method, but can be done at any level of detail with the loss types able to be defined narrowly or broadly.
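The sketch below illustrates one common way of implementing this, assuming a Poisson frequency and a lognormal severity fitted to the collected losses, with the annual aggregate loss distribution built by Monte Carlo simulation. The loss figures, the distributional choices and the 99.9% confidence level are illustrative assumptions rather than a prescription.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical collected operational loss amounts ($'000) over several years.
observed_losses = np.array([12.0, 45.0, 8.0, 230.0, 19.0, 75.0, 5.0, 410.0, 33.0, 60.0])
years_of_data = 5.0

# Fit a simple frequency/severity model: Poisson frequency, lognormal severity.
annual_frequency = len(observed_losses) / years_of_data
log_losses = np.log(observed_losses)
mu, sigma = log_losses.mean(), log_losses.std(ddof=1)

# Simulate the annual aggregate loss distribution.
n_sims = 100_000
counts = rng.poisson(annual_frequency, size=n_sims)
aggregate = np.array([rng.lognormal(mu, sigma, size=c).sum() for c in counts])

expected_loss = aggregate.mean()
confidence = 0.999
total_at_confidence = np.quantile(aggregate, confidence)

# Unexpected loss = the buffer above expected losses that capital must cover.
unexpected_loss = total_at_confidence - expected_loss
print(f"EL {expected_loss:.0f}, UL at {confidence:.1%}: {unexpected_loss:.0f}")
```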

This method is intuitively attractive as it endeavours to anchor itself in objective loss data. It is also promoted by the January 2001 Basel proposals in conjunction with its narrow definition of operational risk. However, it faces a number of critical problems, particularly if an entity has a goal of a broad measure of risk:

  1. the need to collect sufficient data to cover the range of experience that a risk capital buffer would be expected to cover (including for example the range of political events that might threaten an industry or company)
  2. the assumption that the past is a good predictor of the future (particularly with a company that evolves to a different business mix or management style)
  3. the need to capture full economic losses (including opportunity losses) if the company wishes to pursue a holistic broad loss measure
  4. the need to collect loss data on a consistent basis (such as before or after the mitigating impact of insurance)
  5. the need to have a consistent and clear demarcation between different risk types to ensure that losses are appropriately grouped (and avoiding double counting)
  6. the difficulty in using this method to project the capital requirements for a future innovative structure/business strategy where loss data may not exist
  7. the delay between impact on the risk capital measure and the actual risk management activities in the entity, with subsequent reduction of incentive to promote risk management

A method proposed by several parties to overcome the insufficiency of loss data is to collect industry loss data and scale it to the size of the entity (a simple scaling sketch appears at the end of this discussion). Although it is clearly beneficial to learn from the experience of others, this supplementary method has additional problems:

  1. the need to ensure that a consistent treatment and grouping of losses occurs despite their different sources, with many publicly sourced losses relying on news reports
  2. the need to analyse the losses sufficiently to nominate the appropriate scaling factors to be used to make them relevant to other entities
  3. the problem that even total industry experience for a period does not ensure that the data covers the range of experience that a risk capital buffer would be expected to cover (as the industry shared the same or a similar economic and political environment)

The layer of judgment required to deal with these problems overshadows the apparently objective nature of this method.
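As to the scaling step itself, the usual approach is to rescale each external loss by a measure of the entity's size relative to the source firm, as in the sketch below. The revenue-based power law and the 0.5 exponent are assumptions for illustration only; in practice the scaling factors must themselves be estimated, which is part of the judgment layer just described.

```python
import numpy as np

# Hypothetical external (industry) losses ($m) and the revenue of the firms
# that incurred them, versus our own entity's revenue. The power-law exponent
# is an assumption and would itself need to be estimated judgmentally.
external_losses  = np.array([15.0, 120.0, 4.0, 60.0])
source_revenue   = np.array([9000.0, 25000.0, 3000.0, 14000.0])
own_revenue      = 6000.0
scaling_exponent = 0.5

# Scale each external loss by relative size before adding it to the loss set.
scaled_losses = external_losses * (own_revenue / source_revenue) ** scaling_exponent
print(scaled_losses.round(1))
```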

The direct estimation method

The direct estimation method relies on collaborative line manager judgments to estimate a risk distribution for the risks they run. It explicitly incorporates a layer of subjective judgment based on available loss data and other relevant factors, but these subjective judgments are generally at a lower level of significance than the judgments involved in the other measurement methods.

It also provides a forward-looking quantification of risk, with the effects of changes in business mix, strategy or structure readily included in the direct estimation judgment process.

The direct estimation method is covered in detail in subsequent pages, but basically involves the selection of a risk distribution shape that is appropriate for the risk (generally allocated by default to the risk category) and then anchoring this risk distribution shape with a quantification of the impact of one or more scenarios (which can include actual risk incidents or near misses). This estimated risk distribution can be refined based on subsequent experience (or as appropriate loss data becomes available).
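A minimal sketch of the anchoring step is shown below, assuming a lognormal shape for the annual loss from a given risk and two manager-supplied scenario anchors: a "typical year" treated as the median and a roughly one-in-twenty-year event treated as the 95th percentile. The shape, the choice of anchor points and the figures are illustrative assumptions, not part of the method itself.

```python
import numpy as np
from scipy import stats

# Manager-estimated scenario anchors for one risk category ($m).
typical_year_loss = 2.0    # treated as the median of the annual loss
one_in_20yr_loss = 12.0    # treated as the 95th percentile

# Solve for the lognormal parameters implied by the two anchor points:
# median = exp(mu), 95th percentile = exp(mu + z95 * sigma).
mu = np.log(typical_year_loss)
z95 = stats.norm.ppf(0.95)
sigma = (np.log(one_in_20yr_loss) - mu) / z95

# Expected loss and unexpected loss at the target confidence level.
expected_loss = np.exp(mu + 0.5 * sigma ** 2)
confidence = 0.999
loss_at_confidence = np.exp(mu + stats.norm.ppf(confidence) * sigma)
unexpected_loss = loss_at_confidence - expected_loss
print(f"EL {expected_loss:.1f}, UL at {confidence:.1%}: {unexpected_loss:.1f}")
```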

The detail at which risks are estimated (or at which losses are grouped in the loss modelling method) is at the organisation's discretion - it depends on the level of detail sought in the subsequent risk/return numbers. This may well be detailed but need not be.

Some granularity or level of detail has to be chosen, and at any level some bundling of risk types is inevitable. For example, the granularity might be the risk of flood to a state's branch network or it might be the risk of all natural disasters to the organisation as a whole - either way the risk encompasses a range of possible risk events. The decision needs to be made on the level of discrimination sought in the results. Does the organisation wish to consider pricing or a possible contribution to shareholder value at the level of individual regions? Probably not initially.

Direct estimation does not require everyone to duplicate estimates of the same risks. Common risks can be estimated by the appropriate experts who can also define which of the available indicators would be the best proxies of the risk's incidence. Staff numbers may be suitable for some risks, the number of certain types of transactions for others, and so on.

Many broadly defined operational risks do have large "intangible" or non-cash losses that are difficult to estimate. But they do seem to be quantifiable after a disaster, particularly for public companies with a sharemarket measure. In most cases it would be more appropriate to consider the size of these risks before they hit. Hard? - yes. Precise? - no. But precise to the degree of precision needed - yes. And are such judgments about these risks already explicitly needed for the company's risk management? - most definitely yes.