Response to consultation on Guidelines PD estimation, LGD estimation and treatment of defaulted assets


Question 1: Do you agree with the proposed requirement with regard to the application of appropriate adjustments and margin of conservatism? Do you have any operational concern with respect to the proposed categorization?

We assume these new requirements (together with the requirements of other EBA papers) will have a very severe impact since virtually all IRB models will need to be comprehensively revised (i.e. the changes will be material in the sense of the requirements of the model change policy). We anticipate that each model will have to go through at least one supervisory approval process. The fundamentally new approach to the treatment of MoCs, in particular, will require large-scale restructuring of models.

Question 2: Do you see any operational limitations with respect to the monitoring requirement proposed in paragraph 53?

A quarterly calculation of default rates does not, in itself, pose an insurmountable challenge. Complying with this requirement would nevertheless generate costs on a scale disproportionate to the associated regulatory benefit. It is also unclear what exactly is meant in par. 53 by “to monitor the appropriateness of the PD estimates”. We see a need to clarify that this is not a requirement to carry out a comprehensive validation of the calibration on a quarterly basis. Only a plausibility check of the default prediction can be required, in our view. Banks should also be permitted to apply a top-down approach focusing primarily on changes in default rates from one quarter to the next. It is not clear what conclusions are supposed to be drawn from the results of these quarterly reports. First, there are few portfolios where it can be assumed that instances of default will be distributed evenly over time. And, second, a recalibration of the system – in whichever direction – would not be possible in the absence of a complete alternative one-year observation period for this purpose.
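To illustrate the kind of top-down plausibility check we consider proportionate, a minimal sketch in Python follows; all names, data structures and the tolerance threshold are our own illustrative assumptions, not part of the guidelines:

```python
from dataclasses import dataclass

@dataclass
class QuarterSnapshot:
    """Illustrative record: obligors rated at the start of a quarter and
    the defaults observed among them during that quarter (hypothetical)."""
    obligors: int
    defaults: int

    @property
    def default_rate(self) -> float:
        return self.defaults / self.obligors

def plausibility_flags(history: list[QuarterSnapshot],
                       tolerance: float = 0.5) -> list[bool]:
    """Top-down check: flag a quarter only if its default rate deviates
    from the previous quarter by more than an assumed relative tolerance.
    This is a plausibility check of the default prediction, not a full
    validation of the calibration."""
    flags = [False]  # nothing to compare the first quarter against
    for prev, curr in zip(history, history[1:]):
        rel_change = abs(curr.default_rate - prev.default_rate) / max(prev.default_rate, 1e-9)
        flags.append(rel_change > tolerance)
    return flags

history = [QuarterSnapshot(10_000, 25), QuarterSnapshot(10_050, 27),
           QuarterSnapshot(10_100, 55)]
print(plausibility_flags(history))  # [False, False, True] - only the jump is flagged
```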

The requirements concerning the calculation of the default rate itself (par. 48-52) are not totally clear or comprehensible. Nor would they serve a useful purpose, in our view.

Par. 48(a) requires the denominator to be limited to the “obligors observed at the beginning of the one-year observation period with any credit obligation.” It is not clear what is meant. The points mentioned in the following sentence (“credit obligation refers to any amount of principal, interest and fees as well as to any off-balance sheet items including guarantees”) could be understood to require the inclusion of all customers for which some kind of exposure exists at the bank, but not, for instance, cases where a client (such as a support provider) is only rated because the rating is needed for the purpose of evaluating a third party.

But this is not totally consistent with the explanation in the “Background and rationale” chapter on p. 10, which explicitly limits the calculation to “obligors with credit facilities” and explicitly requires “obligors whose obligations stem solely from non-credit products” and “obligors or facilities with just committed but undrawn credit lines” to be excluded and assigned to a separate pool.

Aside from these inconsistencies, we consider the entire approach problematic. What problem is the exclusion or separate treatment of customers without exposures intended to solve? Par. 41 sets highly restrictive conditions for excluding customers from the calculation of the default rate: cases may only be excluded where obligors were wrongly recorded as defaulting or wrongly assigned to a rating model although not actually covered by its scope. Par. 48, by contrast, requires the exclusion of a whole section of the data set – namely those customers which have no exposure on the reference date – without there being any indication that the quality of the data involved is deficient.

The two requirements are incompatible with one another. In the wholesale area at least, parties for which there is no current exposure but which are rated in their capacity as a guarantor for another customer, for example, are subject to the same monitoring and default identification processes as all other rated customers. It is therefore only logical to include them in the default-rate calculation.

A further problem with the requirement in par. 48 is the dependency of the criterion on the point in time. On different reference dates, one and the same customer would sometimes be included in the default rate (e.g. if an open credit line had just been drawn on) and sometimes not (if the credit line existed but remained undrawn). Information about whether an exposure exists on the reference date cannot be derived from the IT implementation of the rating model itself since this information is naturally not yet available when the rating is being issued. It would therefore have to be extracted from a downstream system and input into the development/validation data. In our view, the resulting complexity of the process and reduced transparency concerning the source of the underlying data would be out of all proportion to the questionable added value.
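To make this reference-date dependency concrete, a minimal sketch (obligor attributes, figures and function names are hypothetical, and the par. 48 reading shown is only one possible interpretation):

```python
from dataclasses import dataclass

@dataclass
class Obligor:
    """Illustrative obligor record on a given reference date."""
    rated: bool            # covered by the scope of the rating model
    drawn_exposure: float  # drawn amount on the reference date
    committed_line: float  # committed but undrawn credit line

def in_denominator_par48(o: Obligor) -> bool:
    # Reading of par. 48(a): only obligors with a credit obligation
    # (approximated here as drawn exposure) on the reference date count.
    return o.rated and o.drawn_exposure > 0

def in_denominator_uniform(o: Obligor) -> bool:
    # The reading we argue for: every rated obligor subject to the same
    # monitoring and default identification processes counts.
    return o.rated

# A rated guarantor without exposure of its own, and a customer whose
# credit line happens to be undrawn on the reference date:
guarantor = Obligor(rated=True, drawn_exposure=0.0, committed_line=0.0)
undrawn = Obligor(rated=True, drawn_exposure=0.0, committed_line=1_000_000.0)

# Under par. 48 both drop out of the denominator; one day later, once the
# line is drawn, the same customer would re-enter the calculation.
print(in_denominator_par48(undrawn), in_denominator_uniform(undrawn))      # False True
print(in_denominator_par48(guarantor), in_denominator_uniform(guarantor))  # False True
```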

Considered in conjunction with each other, par. 48 and 51 also fail to make it clear how the default rate is supposed to be calculated. The first sentence of par. 51 reads as though customers which have migrated to another rating system or which cease to need a rating within the observation period always have to be included in the denominator and numerator for the calculation of the default rate. The second sentence, on the other hand, sounds more as though banks have to carry out an analysis in addition to the calculation under par. 48 and adjust the default rate thus calculated if this analysis indicates evidence of distortion. The inclusion of the phrase “if relevant” in the first sentence of par. 51 makes the desired procedure even more unclear. While the consultation paper later takes several paragraphs (par. 59-63) to spell out what is meant by an unexplained “if relevant” in Art. 180(1)(h) and (2)(e) of the CRR, the use of the phrase in par. 51 is not explained at all.

Par. 51 requires the inclusion not only of customers migrating to other risk management systems but also of customers whose credit obligations were sold during the observation period. It will frequently not be possible to implement this requirement for practical reasons since the bank will often no longer have comprehensive information about the customer’s default behaviour after the loan has been sold. This is invariably the case if business relations with the customer cease altogether on the sale of the loan.

Last but not least, the requirements are not practicable because they would necessitate a comparison with predictions which change over time. More frequent testing also increases the influence of individual cases, which leads to a greater number of false positive results. There would also be distorting seasonal effects (as a result of information updated only once a year, for instance).

Question 3: Do you agree with the proposed policy for calculating observed average default rates? How do you treat short term contracts in this regard?

We do not agree with the proposed treatment. Par. 113 requires fees and interest which are recognised in the income statement to be added to the realised loss. If all fees and interest have been recognised, this will result in a customer which has paid everything in full failing to achieve an LGD of zero (owing to the discounting which also has to be taken into account). On top of that, this will give rise to different treatment of banks which recognise fees and interest in the income statement on default and banks which do not. For these reasons, we do not believe that the proposed procedure makes good sense.
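A simple numerical illustration (all figures are our own assumptions): suppose an exposure at default of 100, post-default fees and interest of 5 recognised in the income statement, full repayment of 105 after one year and a discount rate of 5%. Adding the recognised fees and interest $F$ to the realised loss gives

$$
\mathrm{LGD} \;=\; \frac{EAD + F - \frac{R}{1+d}}{EAD}
\;=\; \frac{100 + 5 - \frac{105}{1.05}}{100}
\;=\; \frac{105 - 100}{100} \;=\; 5\%,
$$

even though the customer has paid everything in full; without the add-on of $F$, the same cash flows would yield an LGD of exactly zero.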

There is no justification, in our view, for the concern outlined in the explanatory box that excessively high fees and interest might otherwise give rise to a negative LGD. First, par. 139 stipulates that the LGD must not be less than zero for estimation purposes. Second, the LGD has to take account of internal costs, which, as we see it, may cover fees and interest. The guidelines allow external costs to be excluded if they are taken into account elsewhere. The proposed procedure therefore fails to treat internal and external costs equally in this area.

Question 4: Are the requirements on determining the relevant historical observation periods sufficiently clear? Which adjustments (downward or upward), and due to which reasons, are currently applied to the average of observed default rates in order to estimate the long-run average default rate? If possible, please order those adjustments by materiality in terms of RWA.

No, the granularity of the master scale and the assignment of PDs are essentially determined by the granularity and composition of the relevant portfolio and by the bank’s internal processes. We do not believe any additional benefit would be derived from setting benchmarks or maximum PD levels. We are opposed to the idea of such an approach.

Question 5: How do you take economic conditions into account in the design of your rating systems, in particular in terms of: d. definition of risk drivers, e. definition of the number of grades f. definition of the long-run average of default rates?

Re. d:
On the one hand, macroeconomic conditions are often considered as factors when developing a rating system (long list). Owing to their limited differentiating ability, however, they are comparatively seldom incorporated in the final model (short list). This is because these factors are sometimes capable of differentiating between different time slices, but not between different customers. On the other hand, most of the other factors analysed are sensitive to macroeconomic changes, so macroeconomic conditions are certainly considered implicitly. We do not believe it would serve a useful purpose to make the use of macroeconomic risk drivers mandatory because economic downturns could only be mapped with a time lag, which would increase the procyclicality of rating systems.

We have strong reservations about explicitly requiring the inclusion of “geographic location for corporates” and “trend…information” because this would make it mandatory to analyse risk drivers which are not relevant to all portfolios. This is the case, for example, at public and development banks. Where trend information is concerned, moreover, it is sometimes the case that absolute figures are more relevant than changes in these figures alone. The requirement in par. 70(b) that “the weighting in the statistical model should be purely statistically based” cannot be met if an internal or external rating of connected clients is to be incorporated into a statistical model. Even if “purely statistical” incorporation is not possible, however, it will in many cases be sensible and unavoidable from a risk angle to nevertheless consider ratings-based information in the model (e.g. in the form of expert judgement) in order to avoid understating risk.

Re. e:

The number of grades is not influenced by economic conditions but depends more on the granularity of the portfolios in question and the design of internal processes.

Re. f:

The definition of the long-run average of default rates should take account of the cyclical features of the observed portfolio. Ideally, the period used for calculating the long-run default rate should comprise at least one economic and default rate cycle. If the historical default rate data do not cover a complete cycle, it should be ensured by means of benchmarks, external studies, etc. that the long-run default rate adequately reflects the default rate level of a portfolio segment. As regards a sensible reference figure for the long-term appropriateness of the calibration target, see also our reply to question 5.3.
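In formula terms (our notation), with $DR_t$ the observed default rate of annual period $t$:

$$
\overline{DR}_{\mathrm{LRA}} \;=\; \frac{1}{T}\sum_{t=1}^{T} DR_t,
$$

where the $T$ annual observation periods should together span at least one full economic and default-rate cycle.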

Question 6: Do you have processes in place to monitor the rating philosophy over time? If yes, please describe them.

The terms used in the explanation of the calculation (e.g. “significant bias”, “economic adjustment”) are largely unclear. It is also unclear when MoCs or economic adjustments are supposed to be incorporated into the calculation of the observed default rate. The explanatory box is not written clearly.

We would also like to point out that the analysis of seasonal effects on long-term loans is irrelevant if non-overlapping windows are used. It should be possible instead to use a qualitative line of argument based on the bank’s lending policy (no short-term loans/no consumer loans).

Par. 58 should clarify exactly what is meant by counting each default as 1. This paragraph must make clear whether it means equal weighting of the annual default rates or pooling all defaults and obligors across the observation period (i.e. weighting years by their number of defaults) when determining the long-run default rate. It only becomes clear in the explanatory box that equal weighting of the annual default rates is meant. Since the final guidelines will probably not include any explanatory boxes, it is important to have this clarification in the text itself. Par. 58 should actually be deleted and additional wording possibly added to par. 60 explaining that the annual default rates should be weighted equally unless there are good reasons for using another procedure.
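The two readings differ as follows (our notation, with $d_t$ defaults among $n_t$ obligors in year $t$ over $T$ years). Equal weighting of the annual default rates:

$$
\overline{DR} \;=\; \frac{1}{T}\sum_{t=1}^{T}\frac{d_t}{n_t}\,;
$$

cumulative pooling, which weights each default equally and hence weights years with more obligors or defaults more heavily:

$$
\overline{DR}^{\,\mathrm{pooled}} \;=\; \frac{\sum_{t=1}^{T} d_t}{\sum_{t=1}^{T} n_t}.
$$

The two coincide only if $n_t$ is constant over time.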

Question 7: Do you have different rating philosophy approaches to different types of exposures? If yes, please describe them.

According to par. 78, banks should decide on a rating philosophy. We welcome the fact that the EBA leaves this decision to the banks themselves and, in particular, has refrained from stipulating that banks use a through-the-cycle or point-in-time approach.

The structure of the master scale and the probabilities of default assigned to the rating grades are normally retained over a period of years. Regular monitoring and validation of rating systems ensure that PDs continue to be assigned to customers and transactions in an appropriate manner.

Trends in default rates per rating grade and in migration behaviour per rating system are evaluated on an annual basis.

Question 8: Would you expect that benchmarks for number of pools and grades and maximum PD levels (e.g. for exposures that are not sensitive to the economic cycle) could reduce unjustified variability?

N/A

Question 9: Do you agree with the proposed principles for the assessment of the representativeness of data?

Please see additional comments (file upload)

The specification of how to determine the margin of conservatism is only partially helpful (e.g. in the description of possible model deficiencies) and generally far too detailed. We consider the procedure for quantifying the MoC and the categories selected by the EBA to be particularly questionable. This categorisation will have no positive effects on the predictions themselves and will do nothing to reduce the variability of model results. For this reason, we would prefer a principles-based approach which confined itself to essential aspects of the MoC.

In principle, we consider the aspects covered by the MoC, especially those concerning the quality of data, to be important questions. Nevertheless, given how complex models are – a fact which is already a frequent point of criticism – and in view of their suitability as economic risk management tools (pricing, lending decisions), we believe it would make better sense to regard these aspects as governance issues rather than something to be dealt with in the context of estimation methods and results. We recommend requiring banks to consider and regularly monitor these aspects. Attempting to quantify their impact on expected defaults and losses (in economic downturns) and then scaling up these add-ons for RWA purposes to confidence intervals of sometimes 99.9% would not serve a useful role, in our view.

The systematic identification and subsequent allocation to the new categories will be highly onerous in operational terms. Removing the adjustments for conservatism from the model calibration and converting them into an on-top add-on will, moreover, require many banks to radically restructure their current model approaches. Though this will admittedly result in greater horizontal comparability, it will not necessarily improve the quality of models. Nor do we believe the proposed procedure will always prove the most useful: it can also make good sense to incorporate conservative adjustments in the model development phase.

The expectation that margins of conservatism are precisely quantifiable will pose a general difficulty when implementing the requirements. The following statement in the explanatory box on p. 42 is especially problematic: “It is therefore clarified in the draft Guidelines that institutions should be able to calculate and report the exact impact of the MoC at the level of risk parameters…” Potential correlations between the MoC categories are not taken into account: this may cause MoC estimates to be distorted upwards.

The instruction on calibration in par. 81 also reveals an expectation that it is possible to calculate the MoC as an exact measure (“Institutions should conduct the calibration before the application of MoC…”). But in many of the application scenarios for the MoC described in par. 25, such as diminished representativeness, missing data, or inaccurate or outdated information, the idea of a quantitative “measurement” of a corresponding MoC is totally unrealistic. How, for instance, is a bank supposed to precisely quantify the estimation error arising from the unavailability of certain historical data on a certain risk driver or from a change in lending policies? An exact quantification of the associated MoC is simply not possible in such cases.

This goes all the more for weaknesses which have already been taken into account in a conservative manner in the modelling itself (e.g. conservative treatment of missing data items) and for which no explicit additional MoC therefore needs to be added to the model results. We therefore believe it should be clarified that the requirement in par. 30 (“Institutions should quantify the estimation error that results from the identified deficiency in order to justify the level of MoC”) and the reporting requirement in par. 29 should not be interpreted as meaning that every individual affected aspect needs to be exactly quantified. We agree that it makes sense to expect a quantitative estimation of aspects which lend themselves to quantification. But requiring an exact quantification of aspects which by their very nature cannot sensibly be quantified is neither a useful nor a feasible approach, in our view.

The breakdown in par. 25(c) of the aspect “general estimation errors including errors stemming from methodological deficiencies” into two components raises questions. What exactly is to be understood by “rank order estimation error” as opposed to “estimation error in the calibration”? Some kind of error in the sequence in which the model places borrowers is evidently meant. But it is not made clear exactly how this error is supposed to be measured. One possible interpretation is an unsatisfactory degree of differentiation resulting from the fact that the predictive power of the model itself is particularly poor. Another possibility is that it is not poor predictive power per se which is meant, but that the model has been calibrated to differentiate PDs to an excessive degree compared with that which can be accurately measured. We would recommend spelling out in greater detail what is meant by “rank order estimation error”.
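To illustrate the distinction as we understand it, a minimal sketch (all obligor names and PD figures are hypothetical): two models with an identical rank ordering, and hence identical rank-order quality, can nevertheless differ substantially in their calibration level.

```python
def auc(scores_defaulted, scores_performing):
    """Rank-order (discrimination) measure: share of defaulted/performing
    pairs in which the defaulted obligor received the higher PD (ties 0.5)."""
    pairs = [(d, p) for d in scores_defaulted for p in scores_performing]
    return sum(1.0 if d > p else 0.5 if d == p else 0.0 for d, p in pairs) / len(pairs)

pd_model_a = {"obligor1": 0.01, "obligor2": 0.05, "obligor3": 0.20}
pd_model_b = {k: 2 * v for k, v in pd_model_a.items()}  # doubled PDs, same ranking

defaulted, performing = ["obligor3"], ["obligor1", "obligor2"]
for name, model in (("A", pd_model_a), ("B", pd_model_b)):
    disc = auc([model[o] for o in defaulted], [model[o] for o in performing])
    level = sum(model.values()) / len(model)
    # Identical AUC (rank order) but a different average PD (calibration level):
    print(f"model {name}: AUC = {disc:.2f}, average PD = {level:.3f}")
```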

If the individual paragraphs dealing with the margin of conservatism are considered as a whole, certain inconsistencies in approach emerge. The MoC issue is first introduced in par. 23-25 with a clear focus on specific, nameable weaknesses in data and methods. If such weaknesses are identified, the objective should be to tackle them directly and try to eliminate them. Only if they cannot be eliminated completely and/or immediately should an MoC be applied to take account of the resulting uncertainty (par. 26-30). The basic expectation, however, is that these weaknesses can be gradually remedied and that the corresponding MoCs can be reduced over time (par. 34).

Later on, however, the consultation paper mentions specific examples of MoC aspects of a more systematic nature and where it is not clear how they might be avoided or remedied by adjusting the model or the data:

• According to par. 51, a margin of conservatism should be applied to reflect possible distortions as a result of clients migrating to a different rating system during the observation period used to calculate default rates. Yet the fact that clients will for various reasons no longer be rated by a certain rating system after a certain point in time (e.g. because business relations have been terminated) is a perfectly natural phenomenon and cannot be “remedied”. If systematic distortion of this kind exists, it cannot be expected to decrease over time. The MoC would therefore have to be applied permanently.

• According to par. 57, “an economic adjustment and an appropriate MoC” should be applied to reflect any effects on calculating default rates caused by the selected calculation date or, if overlapping time windows are used, by reduced weighting of the first and last time slices. Here, too, it is not clear what kind of economic adjustment should be made. Is the bank supposed to always use the highest of all conceivable calculations? This would certainly be the most conservative approach possible. It is equally certain, however, that it would not be a sensible approach in terms of economic expectations.
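The weighting effect mentioned in the second bullet can be made explicit (our notation): with one-year windows shifted monthly over $M$ months of history, there are $M-11$ windows, and month $k$ is covered by

$$
w(k) \;=\; \min(M-11,\,k) \;-\; \max(1,\,k-11) \;+\; 1
$$

of them. For $M = 36$, a default in the first or last month enters only $w(1) = w(36) = 1$ of the 25 overlapping default rates, whereas a default in month 18 enters $w(18) = 12$ of them.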

It is not clear how these specific requirements concerning MoCs for specific aspects fit with the general requirements in par. 23-35. Par. 51 and 57 do not address concrete deficiencies in data or methods in the sense set out in par. 24, but features which are natural phenomena and to a certain extent unavoidable. For this reason, we see no justification for applying MoCs to reflect these aspects; nor do we believe it would serve a useful purpose to do so.

It is also unclear which business unit (i.e. validation or development) should be responsible for monitoring. Bottom-up quantification (by means of triggers) is frequently not possible; often only a conservative top-down MoC add-on (based on statistical quality and adjusted to reflect expert judgement) is feasible.

It is not clear how category C in par. 24 is to be understood. This category is supposed to capture general estimation errors stemming from methodological deficiencies. Par. 25(c) says that these estimation errors will include rank order estimation errors and estimation errors in the calibration. In our view, errors of this kind invariably arise from statistical model uncertainties as a result of a limited amount of available data. It should be made clear that errors stemming solely from statistical uncertainty do not have to be regarded as methodological deficiencies within the meaning of category C of par. 24 and therefore do not have to be reflected in the MoC.

It should be possible to interpret par. 32 as meaning that the overall MoC can be determined in a holistic, qualitative manner on the basis of the individual components. Requiring a cumulative addition of all components could result in an MoC of an economically implausible size.
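A stylised numerical example of this concern (the component sizes and the alternative aggregation rule are our own assumptions): for three MoC components expressed as PD add-ons of 2%, 3% and 4%, cumulative addition gives

$$
\mathrm{MoC}_{\mathrm{additive}} \;=\; 2\% + 3\% + 4\% \;=\; 9\%,
$$

which implicitly assumes that the underlying estimation errors always materialise simultaneously. If they are instead largely independent, a root-sum-of-squares aggregation yields

$$
\mathrm{MoC}_{\mathrm{RSS}} \;=\; \sqrt{(2\%)^2 + (3\%)^2 + (4\%)^2} \;\approx\; 5.4\%,
$$

illustrating how a purely additive requirement can produce an overall MoC of economically implausible size.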

Name of organisation

Association of German Banks