
Which Measurement Method?

http://www.austega.com/risk/meas/opriskmeth.htm

Many methods of measuring an enterprise's operational and business risk have emerged:

  1. the proxy, analog or surrogate method
  2. the earnings volatility method
  3. the loss modelling method
  4. the direct estimation method

The first three consistently run into data problems which reduce either their effectiveness or, at the very least, their freedom from the influence of subjective judgment. The first two are nonetheless useful as a rough approximation or triangulation of a firm's risk capital requirement.

Each is discussed briefly below.

The proxy, analog or surrogate method

This high-level or "top-down" method is often used by large companies comprising several divisions that operate independent businesses. Broadly, each division is matched to publicly listed companies ("analogs" or proxies) running a similar stand-alone business, and the capital levels observed for those analogs are scaled to the division's size to infer the division's risk capital requirement.

Undertaking the same approach at the group level and comparing the group results with the addition of the divisional results would allow an estimate of the diversification benefits gained by the group.

The essential problems with this method centre on finding truly comparable analogs and on the adjustments ("cleaning") needed to make their data usable.

There have also been instances when analog choice and data "cleaning" have been undertaken to achieve desired results - results that have subsequently diverged from expected levels with movements in equity markets. Does one then choose different analogs to achieve the desired outcome?

The earnings volatility method

This method considers the statistical variability in the earnings of the company, or of its divisions, and either empirically calculates the unexpected loss at a certain confidence level or, more likely, fits a standard statistical distribution to the available data and calculates the unexpected loss analytically at that confidence level. Before the analysis is undertaken, the earnings data need to be adjusted for risks other than operational risk (the extent of the adjustment depending on the definition chosen). Even under the broad definition, this would involve adjusting for the full effect of credit and market risks.
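
A minimal sketch of that calculation follows (not from the original article): a normal distribution is fitted to a short, invented series of already-adjusted annual earnings, and the unexpected loss is read off at a 99.9% confidence level both empirically and analytically. The figures, the distribution choice and the confidence level are all illustrative assumptions.

    import numpy as np
    from scipy import stats

    # Invented annual earnings history ($ millions), already adjusted for
    # credit and market risk effects, inflation and volume growth.
    earnings = np.array([410, 385, 440, 290, 455, 120, 470, 430, 360, 445])

    confidence = 0.999  # target confidence level for the capital buffer
    expected = earnings.mean()

    # Empirical: unexpected loss = expected earnings minus the earnings
    # level at the chosen percentile of the observed data.
    empirical_ul = expected - np.percentile(earnings, (1 - confidence) * 100)

    # Analytical: fit a normal distribution and read the same percentile
    # off the fitted distribution instead.
    mu, sigma = stats.norm.fit(earnings)
    analytical_ul = expected - stats.norm.ppf(1 - confidence, loc=mu, scale=sigma)

    print(f"Empirical unexpected loss:  {empirical_ul:.1f}m")
    print(f"Analytical unexpected loss: {analytical_ul:.1f}m")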

The same method can be applied to the volatility of asset values, including the market capitalisation of the company, but this is not readily possible for its divisions.

The essential problems with this method are:

  1. the difficulty of finding sufficient consistent data either to perform the empirical analysis or to fit a standard statistical distribution with confidence
  2. the level of judgment required in cleaning the data for the effects of other risk types, and in adjusting it for inflation, volume growth, changes in company structure, etc.
  3. the judgment required to decide whether the relatively short time-span of the data captures the full range of experience for which risk capital provides a buffer
  4. the backward-looking reliance on historical data, which makes this approach difficult to apply to strategic changes in company structure or business
  5. the separation between the risk capital measure and the actual risk management activities in the entity, with a consequent lack of incentive to promote risk management

The loss modelling method

This method collects actual loss data and uses it to derive empirical distributions for the entity's risks. These empirical risk distributions are then used to calculate an unexpected loss amount that needs to be covered by a capital buffer. In theory, the unexpected loss can be calculated to any desired confidence level.

This is typically thought of as a "bottom-up" method, but it can be done at any level of detail, with the loss types defined either narrowly or broadly.
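
A minimal sketch of how such a loss-modelling calculation might be implemented (an illustration, not the article's prescription): fit a simple Poisson frequency and lognormal severity model to an invented internal loss history, simulate aggregate annual losses, and take the unexpected loss as the difference between a high quantile and the mean. All figures and distribution choices are assumptions.

    import numpy as np

    rng = np.random.default_rng(42)

    # Invented internal loss history for one risk category: annual event
    # counts and individual loss severities ($ thousands).
    annual_counts = np.array([7, 4, 9, 6, 5, 8, 6])
    severities = np.array([12, 35, 8, 150, 22, 9, 300, 18, 45, 11, 60, 25])

    # Fit a simple Poisson frequency and lognormal severity model.
    lam = annual_counts.mean()
    log_sev = np.log(severities)
    mu, sigma = log_sev.mean(), log_sev.std(ddof=1)

    # Monte Carlo: simulate many years of aggregate losses.
    n_years = 50_000
    counts = rng.poisson(lam, n_years)
    aggregate = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])

    # Unexpected loss at the target confidence level: quantile minus mean.
    confidence = 0.999
    unexpected_loss = np.quantile(aggregate, confidence) - aggregate.mean()
    print(f"Unexpected loss at {confidence:.1%}: {unexpected_loss:,.0f}k")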

This method is intuitively attractive as it endeavours to anchor itself in objective loss data. It is also promoted by the January 2001 Basel proposals, in conjunction with their narrow definition of operational risk. However, it faces a number of critical problems, particularly if an entity's goal is a broad measure of risk:

  1. the need to collect sufficient data to cover the range of experience that a risk capital buffer would be expected to cover (including for example the range of political events that might threaten an industry or company)
  2. the assumption that the past is a good predictor of the future (particularly for a company that evolves to a different business mix or management style)
  3. the need to capture full economic losses (including opportunity losses) if the company wishes to pursue a holistic broad loss measure
  4. the need to collect loss data on a consistent basis (such as before or after the mitigating impact of insurance)
  5. the need to have a consistent and clear demarcation between different risk types to ensure that losses are appropriately grouped (and avoiding double counting)
  6. the difficulty in using this method to project the capital requirements for a future innovative structure/business strategy where loss data may not exist
  7. the delay between impact on the risk capital measure and the actual risk management activities in the entity, with subsequent reduction of incentive to promote risk management

A method proposed by several commentators to overcome the insufficiency of internal loss data is to collect industry loss data, scaled to the size of the entity. Although it is clearly beneficial to learn from the experience of others, this supplementary method brings additional problems:

  1. the need to ensure that a consistent treatment and grouping of losses occurs despite their different sources, with many publicly sourced losses relying on news reports
  2. the need to analyse the losses sufficiently to nominate the appropriate scaling factors to be used to make them relevant to other entities
  3. the problem that even total industry experience for a period does not ensure that the data covers the range of experience that a risk capital buffer would be expected to cover (as the industry shared the same or a similar economic and political environment)

The judgment layer required to deal with these problems overshadows the apparent objective nature of this method.
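
The second problem above, nominating appropriate scaling factors, is often handled with a simple size-based adjustment. The sketch below shows one commonly cited form, a power-law scaling on a size indicator such as revenue; the helper name, the formula, the exponent and the figures are illustrative assumptions rather than anything prescribed by the text.

    def scale_external_loss(loss, source_size, target_size, alpha=0.23):
        """Scale an external loss to another entity's size.

        A simple power-law adjustment: an exponent below one reflects the
        common observation that operational losses grow less than
        proportionally with firm size. The formula and the default alpha
        are illustrative assumptions, not prescriptions from the text.
        """
        return loss * (target_size / source_size) ** alpha

    # Example: a $50m loss at a firm with $40bn revenue, scaled to a firm
    # with $5bn revenue.
    print(scale_external_loss(50.0, source_size=40_000, target_size=5_000))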

The direct estimation method

The direct estimation method relies on collaborative line manager judgments to estimate a risk distribution for the risks they run. It explicitly incorporates a layer of subjective judgment based on available loss data and other relevant factors, but these subjective judgments are generally at a lower level of significance than the judgments involved in the other measurement methods.

It also provides a forward-looking quantification of risk, with the effects of changes in business mix, strategy or structure readily included in the direct estimation judgment process.

The direct estimation method is covered in detail in subsequent pages, but basically involves selecting a risk distribution shape appropriate to the risk (generally allocated by default to the risk category) and then anchoring that shape with a quantification of the impact of one or more scenarios (which can include actual risk incidents or near misses). This estimated risk distribution can be refined in the light of subsequent experience (or as appropriate loss data becomes available).
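
A minimal sketch of what anchoring a distribution shape with scenario quantifications might look like (the shape, the scenario figures and the confidence level are illustrative assumptions, not taken from the text): a lognormal severity shape is pinned down by a manager's "typical year" judgment and a 1-in-50-year scenario, and the implied unexpected loss is then read off.

    import math
    from scipy import stats

    # Illustrative scenario anchors supplied by line managers (assumptions):
    #   - a "typical" (median) annual loss of $1m for this risk category
    #   - a 1-in-50-year scenario loss of $20m (the 98th percentile)
    median_loss = 1.0            # $ millions
    scenario_loss = 20.0         # $ millions
    scenario_quantile = 1 - 1 / 50

    # Anchor a lognormal shape to the two judgments:
    # median = exp(mu) and scenario_loss = exp(mu + sigma * z_q).
    mu = math.log(median_loss)
    z_q = stats.norm.ppf(scenario_quantile)
    sigma = (math.log(scenario_loss) - mu) / z_q

    annual_loss = stats.lognorm(s=sigma, scale=math.exp(mu))

    # Unexpected loss at a 99.9% confidence level: quantile minus mean.
    unexpected_loss = annual_loss.ppf(0.999) - annual_loss.mean()
    print(f"Implied 99.9% unexpected loss: {unexpected_loss:.1f}m")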

The detail at which risks are estimated (or losses grouped in the loss modelling method) is at the organisation's discretion - the level of detail required depends on the level of detail sought in the subsequent risk/return numbers. This may well be detailed but need not be.

Some granularity or level of detail has to be chosen, and at any level some bundling of risk types is inevitable. For example, the granularity might be the risk of flood to a state's branch network, or it might be the risk of all natural disasters to the organisation as a whole - either way the risk encompasses a range of possible risk events. The decision needs to be made on the level of discrimination sought in the results. Does the organisation wish to consider pricing, or a possible contribution to shareholder value, at the level of an individual region? Probably not initially.

Direct estimation does not require everyone to duplicate estimates of the same risks. Common risks can be estimated by the appropriate experts who can also define which of the available indicators would be the best proxies of the risk's incidence. Staff numbers may be suitable for some risks, the number of certain types of transactions for others, and so on.
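
A small sketch of the indicator-based allocation this implies (unit names, headcounts and the central estimate are invented for illustration): a risk estimated once by the relevant experts is spread across business units in proportion to the nominated proxy indicator.

    # A risk estimated centrally by the relevant experts, allocated to
    # business units in proportion to a nominated proxy indicator.
    # Unit names, headcounts and the central estimate are invented.
    central_estimate = 45.0  # risk capital for the common risk, $ millions

    staff_numbers = {  # chosen proxy indicator: staff headcount per unit
        "Retail": 5200,
        "Corporate": 1400,
        "Operations": 2400,
    }

    total_staff = sum(staff_numbers.values())
    allocation = {
        unit: central_estimate * n / total_staff
        for unit, n in staff_numbers.items()
    }

    for unit, amount in allocation.items():
        print(f"{unit:<12} {amount:5.1f}m")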

Many broadly defined operational risks do have large "intangible" or non-cash losses that are difficult to estimate. Yet they can seemingly be quantified after a disaster, particularly for public companies with a sharemarket measure. In most cases it would be appropriate to consider the size of the risks before they hit. Hard? - yes. Precise? - no. But precise to the degree of precision needed - yes. And are such judgments about these risks already explicitly needed for the company's risk management? - most definitely yes.

Better judgment techniques

The direct estimation approach explicitly relies on judgments by line managers under the facilitation of independent risk experts. This is unlike most other methods of quantifying operational risk, which tend to rely on implicit high-level assumptions and judgments that are often neither examined nor questioned.

The direct estimation approach encourages the use of improved judgment by allowing the obvious traps of subjective judgments to be analysed and addressed.

These traps have been well described in the literature. One good place to start is the collection of articles in Wright, George & Goodwin, Paul (eds) (1998), Forecasting with Judgment, John Wiley & Sons, Chichester, England.

Projecting extreme risk scenarios (and then quantifying their impacts) requires a carefully constructed mindset that differs from the everyday focus of most business managers. For example, a line manager focussed solely on sales growth and maintaining margins is unlikely to be an effective source of extreme loss scenarios without careful explanation and appropriate scene-setting. Some will even regard the exercise of considering adverse possibilities as "hypothetical", a "waste of time", or at least a distraction from what they should be focussing on.

Tip 1 - Motivation: Ensure that the business line managers clearly understand that their business head wants a serious consideration of possible extreme loss scenarios with the process outcomes and recommendations reported through to him or her.

The structured consideration of possible extreme losses may be a new experience for many line managers. Clear framing of the activities' goals can avoid unnecessary and demotivating time-wasting. The goals should include both better risk management as well as assisting risk quantification. How can early warnings of such extreme loss outcomes be detected and acted on?

Setting, and adhering to, a clear time-frame for the risk identification and scenario analysis is also valuable, given their open-ended nature.

Tip 2 - Clear framing: Ensure the independent risk facilitators clearly frame the activities' goals (and provide an appropriate time-frame) and that these goals include better risk management as well as risk quantification.

Avoiding common judgment traps is achieved primarily by being aware of them. Commonly people predict the future in terms of immediate past experience - the so-called "anchor and adjust" heuristic. Line managers need to be reminded not to see possible adverse outcomes solely as adjustments to the latest outcome (which may be a favourable or unfavourable one) but to step back one stage further and to see the latest outcome as also just one of many.

Tip 3 - Avoid anchoring: Ensure line managers avoid limiting scenarios to those anchored to the most recent outcome where this is not fully reflective of the possible outcomes.

The literature also suggests that people may tend to be over-optimistic in forecasting. The effect of this on projecting extreme loss scenarios is not completely clear, but there appears clear advantage in informing the projections with analysis of similar third-party loss events. This can also encourage a more objective, less emotionally involved, analysis.

Tip 4 - External losses/near-losses: Use analysis of emotionally unconnected external losses and near-losses to inform a more "objective" risk scenario development process.

Tip 5 - Outsider views: Counter defensiveness and insularity by asking line managers to take on an "outsider" role, such as appraising the business as one of a number of possible acquisitions.

A well-noted judgment trap is conjunction error - for example, people tend to rate the proportion of the population represented by men having heart attacks as lower than the proportion represented by older men having heart attacks. The best remedy for this is a structured step-by-step approach (for example by seeking the judgment of the population proportion represented by men, then by men having heart attacks, and finally by older men having heart attacks).

Tip 6 - Divide & conquer: Break up big subjective judgments into smaller ones that can be made separately and that can capture logical connections.
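
The step-by-step remedy can be made concrete with a little arithmetic (the proportions below are invented for illustration): each conditional proportion is elicited separately and then multiplied, so the compound judgment can never exceed its components - the very constraint that the conjunction error violates.

    # Structured elicitation for the heart-attack example: three separate
    # judgments are combined by multiplication. Proportions are invented.
    p_men = 0.49                    # proportion of the population who are men
    p_heart_given_men = 0.06        # proportion of men who have heart attacks
    p_older_given_heart = 0.70      # proportion of those men who are older

    p_men_with_heart_attacks = p_men * p_heart_given_men
    p_older_men_with_heart_attacks = p_men_with_heart_attacks * p_older_given_heart

    print(f"Men with heart attacks:       {p_men_with_heart_attacks:.2%}")
    print(f"Older men with heart attacks: {p_older_men_with_heart_attacks:.2%}")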

In many cases there may be more than one line manager with expertise in the risk and the way it could impact the organisation. In this case judgment biases can be reduced by separately seeking the individual judgments of the expert group. There are a number of different possible techniques.

One that can be undertaken easily as part of a brainstorming session is an iterative record-discuss-revise cycle. Participants first record their responses to the framed question or goal, then each in turn presents an aspect of their initial thinking, followed by group discussion, and finally all revise their initial responses.

A more structured approach, known as the Delphi technique, utilises anonymity, iteration, controlled feedback, and statistical aggregation of responses. Although developed in the 1950s, it is even more useful now that electronic communication simplifies the distribution and aggregation of surveys and questions.
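
A minimal sketch of the statistical-aggregation step of a Delphi round (the estimates are invented for illustration): the facilitator reports the median and interquartile range back to the anonymous panel, which then revises its estimates in the next round.

    import statistics

    # One Delphi round: anonymous estimates of a 1-in-50-year loss ($m).
    # The figures are invented for illustration.
    round_1 = [12, 20, 8, 35, 15, 18]

    median = statistics.median(round_1)
    q1, _, q3 = statistics.quantiles(round_1, n=4)

    # The facilitator feeds this aggregate back without attribution and the
    # panel revises its estimates; iteration continues until they stabilise.
    print(f"Round 1 median: {median}, interquartile range: {q1}-{q3}")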

Tip 7 - Collaborative confirmation: Use multiple risk experts wherever possible both as an internal discipline and also to capture the synergy between their thinking. Use structured approaches to minimise dominance by an individual or "group-think".

Improving the quality of line managers' risk judgments is one of the key roles of the independent risk facilitators. This is critical for both risk management and risk measurement purposes.

There may well be times when, even with the best endeavours, the line manager judgments appear to these independent risk facilitators to be awry. The only sensible solution in this case is for the contention to be escalated to the line managers' business head. This maintains ownership by the business division - critical to its acceptance of responsibility for the risk - and yet also defends the risk quantification process.