Response to consultation on supervisory handbook on the validation of rating systems under the Internal Ratings Based approach


1a) How is the split between the first and the subsequent validation implemented in your institution?

When pool models are used, validation activities are carried out regularly and on an ad hoc basis at pool level, both for the model in use and for model changes:
Regular validation of the model in place at pool level:
- Annual validation in accordance with the requirements of the CRR, Delegated Regulation 2022/439 ("assessment methodology") and the ECB Guide to internal models (EGIM) on the topics to be reviewed annually.
- More intensive validation ("extended validation") of the "model in place" at least every 3 years to cover further topics that do not have to be reviewed annually. For these topics, an annual review would add little value because no relevant new findings emerge from year to year. The developers' approach to data preparation (incl. data quality and the default rate calculation) and the treatment of deficiencies, including the margin of conservatism (MoC), are likewise reviewed at least every 3 years.

In the case of model changes:
- Material model changes: Initial validation at pool level before notification of the change in accordance with Art. 11(4) of Delegated Regulation 2022/439 and para. 65 General Topics EGIM.
- Non-material model changes: Review of the changes depending on the materiality and risk potential of the respective change (review of the basic approach, the background or causes, the chosen approach including the alternatives considered and analyses of overfitting effects, the impact on rating grades, the achievement of the change objectives, and the impact on central validation dimensions). This is usually done as part of the annual validation, in addition to the validation of the model in place.

Reasons for the approach chosen:
- Compliance with the supervisory requirements regarding the aspects to be reviewed.
- A risk-adequate design of the approach that is as specific as possible, based on the principle of proportionality:
o Focusing in-depth analyses on focus portfolios.
o Selecting the frequency of analyses in such a way that new findings can be expected from repeating them. For some analyses (e.g. the background of defaults), this cannot be expected at an annual frequency, especially for low-default portfolios (LDP).
o In the case of model changes: Some model changes affect only one particular aspect, e.g. the calibration, without any influence on the rank ordering of the ratings. In such cases, extensive tests of the model's discriminatory power are not necessary. If major model modifications are carried out, or if a change has a major impact and carries a corresponding risk potential for the performance of the model, more extensive analyses and a more in-depth critical challenge are performed. For smaller adjustments, on the other hand, a less elaborate validation review is carried out, in line with the proportionality principle and adjusted to the risk.
o Distinguishing between the annual validation of the "model in place" on the one hand and the examination of model changes on the other ensures that the model currently in productive use is reviewed regularly, even when the supervisory examination and approval processes for model changes run over a longer period.
o If the approval of model changes takes a long time, it may be useful to monitor the performance of the model after change and other selected aspects.

1b) Do you see any constraints in implementing the proposed expectations (i) as described in section 4 for the first validation for a) newly developed models; and b) model changes; and (ii) as described in section 5 for the subsequent validation of unchanged models?

As we understand it, the EBA would like to focus validation activities on matters that have changed, which are to be examined intensively, whereas for unchanged matters an approach based on standard analyses is envisaged. We welcome this basic approach in principle.
However, the approach proposed by the EBA hardly differentiates according to the materiality of models, portfolios or model changes. In our view, an essentially identical review approach for the validation of all unchanged models and all types of model changes is not risk-adequate. A stronger differentiation should be made here:

- More extensive model overhauls as well as material model changes should be examined more intensively than smaller adaptations.

- Also in the case of significant extensions of the scope of application, a complete review of the entire model, as proposed in paragraph 9 lit. c, is not necessarily appropriate in our opinion; this applies in particular to the portfolio already within the scope of application (a significant extension of the scope of application can in individual cases be triggered by just a few additional cases). The validation approach that is appropriate from a risk perspective should be chosen specifically and in a risk-adequate manner for the respective application case.

- Material models and portfolio areas with large numbers of cases should be tested more intensively than non-material models and peripheral areas of portfolios with few cases.

- For some analyses, an annual review cycle is not useful, as the results cannot be expected to change qualitatively from year to year. This applies in particular to low-default portfolios (LDP) (e.g. homogeneity, representativeness of the period used for the long-term default rate). For these reasons, we consider a multi-year cycle to be appropriate and risk-adequate in these cases (see e.g. the ECB's considerations in this regard in para. 65 General Topics EGIM).

- Carrying out a full review by the developers every three years (see Context Box 7) does not make sense, at least for LDPs, even for material models, as no fundamentally new findings about the model design and the basic structure of the model can be expected at this frequency (the factor weights are, however, reviewed every three years on the basis of the latest data). In the case of material model changes, several years often pass in practice (project implementation, supervisory approval process, IT implementation, etc.), so that virtually no data would have been collected under the new model by the time of the next full review. Such a frequency should therefore not be presented as "best practice". Apart from this, in order not to mix up the respective supervisory expectations on individual topics, we believe a handbook on validation should refrain from also commenting on the review of estimates.

We understand para. 88 to express the expectation that the documentation for any notifiable change should be checked by the validation function prior to notification. We do not consider this expectation appropriate, as this task is unrelated to the other tasks of validation, especially for non-material changes. If an independent formal review of the notification documents prior to notification is deemed necessary, this should be specified in general terms, but in any case without assigning this task to validation.
Regarding a substantive review by the validation function, it should be clarified in the last sentence that the validators review the documentation of the CRCU and do not perform a compliance assessment of the application package. Furthermore, it should be clarified that this review only needs to be carried out prior to notification to the supervisory authority for material changes, and that for non-material changes it can be carried out as part of the annual validation.

3a) Do you deem it preferential to split the review of the definition of default between IRB-related topics and other topics?

In our view, a generally applicable answer to this question cannot be meaningfully given. Whether it makes sense to split the review of the definition of default (DoD) between IRB and non-IRB issues depends on the particular IRB procedures of the respective institution, the portfolios concerned and the other processes affected (e.g. accounting, depending on the applicable accounting standard). The determination of which role, if any, the validation function should have in the DoD review and which tasks, if any, should be taken over by other organisational units must therefore be made on a case-by-case basis by the respective institution. General specifications in this regard should consequently be avoided.

Question 4: Which approach factoring in the rating philosophy of a model into the back-testing analyses should be considered as best practices?

Art. 12 point (f) of Delegated Regulation 2022/439 ("assessment methodology") states in general terms that the rating philosophy is to be taken into account in backtesting analyses, among other things. In addition, paragraph 66 point (c) in conjunction with paragraph 67 of EBA GL 2017/16 specifies how this is to be done: based on the respective rating philosophy, the expected responsiveness of PDs to changes in macroeconomic conditions is examined to determine whether the actual behaviour of PDs relative to default rates over time corresponds to these expectations. In our view, these specifications are as specific and concrete as is reasonably possible within the framework of a general regulation.

In our opinion, the validation approach for the specific procedure must be geared specifically to the selected rating philosophy, the characteristics of the respective model and the underlying segment, and must be designed accordingly (e.g. taking into account the cyclicality of the segment and the calibration method of the respective model). For this reason, in our view there is no generally applicable "best practice" approach for the concrete procedure for taking the rating philosophy into account in backtesting analyses. Accordingly, defining or recommending such an approach should also be avoided.
In our view, it would make sense to examine the development of the one-year default rates and mean PDs over time and to treat strong deviations as a trigger for further examination of the rating philosophy.
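A minimal illustrative sketch of such an analysis is given below, assuming yearly cohort snapshots with an assigned one-year PD and an observed default flag per obligor; the column names, data layout and deviation threshold are our own assumptions for illustration and not part of the handbook or of any standard.

```python
# Illustrative sketch only: compares the mean assigned PD with the observed
# one-year default rate per cohort year and flags strong deviations as a
# trigger for a closer look at the rating philosophy. Column names, data
# layout and the deviation threshold are assumptions, not a standard.
import pandas as pd

def pd_vs_default_rate(snapshots: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """snapshots: one row per obligor and cohort year with columns
    'cohort_year', 'pd' (assigned one-year PD) and 'defaulted' (0/1)."""
    by_year = snapshots.groupby("cohort_year").agg(
        mean_pd=("pd", "mean"),
        default_rate=("defaulted", "mean"),
        n_obligors=("pd", "size"),
    )
    # Relative deviation between the realised default rate and the mean PD.
    by_year["rel_deviation"] = (by_year["default_rate"] - by_year["mean_pd"]) / by_year["mean_pd"]
    # Flag cohort years where the deviation exceeds the illustrative threshold.
    by_year["flag"] = by_year["rel_deviation"].abs() > threshold
    return by_year

# Example usage with a hypothetical dataset:
# report = pd_vs_default_rate(pd.read_csv("cohorts.csv"))
# print(report[report["flag"]])
```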

Question 5: What analyses do you consider to be best practice to empirically assess the modelling choices in paragraph [76] and, more generally, the performance of the slotting approach used (i.e. the discriminatory power and homogeneity)?

In our view, the validation procedure for so-called "slotting approaches" must also be designed in a risk-adequate form in the sense of the proportionality principle. In particular, the materiality of the portfolio covered by the respective slotting approach must be taken into account.
In cases where the slotting approach only covers peripheral areas of an institution's SL portfolio, because the core SL portfolios are covered by more elaborate IRB rating systems, the corresponding review of the modelling may be less complex and less in-depth than in cases where the slotting approach is applied to core areas of the SL portfolio. In the simpler case, it would be conceivable, for example, to carry out validation activities only with regard to default risk in a first step.
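As a purely illustrative starting point for such a default-risk check, a simple analysis of observed default rates per slotting category could be used; the sketch below assumes exposure-level data with a slotting category (1 = strongest to 4 = weakest) and a default flag, and these names and the data layout are our own assumptions.

```python
# Illustrative sketch only: coarse default-risk check for a slotting approach.
# Discriminatory power: observed default rates should not decrease from the
# strongest to the weakest slotting category. Column names are assumptions.
import pandas as pd

def slotting_default_rates(exposures: pd.DataFrame) -> pd.DataFrame:
    """exposures: one row per exposure with columns 'category'
    (1 = strongest ... 4 = weakest) and 'defaulted' (0/1)."""
    stats = exposures.groupby("category").agg(
        n_exposures=("defaulted", "size"),
        n_defaults=("defaulted", "sum"),
        default_rate=("defaulted", "mean"),
    ).sort_index()
    return stats

# Example usage with a hypothetical dataset:
# stats = slotting_default_rates(pd.read_csv("slotting_exposures.csv"))
# ordering_ok = stats["default_rate"].is_monotonic_increasing  # coarse check
# print(stats, ordering_ok)
```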

6a) Which of the above mentioned approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

For LDP portfolios or other portfolios with limited data, option 2 will usually be ruled out, as the developers will necessarily exhaust the available data base in order to develop a model that is as accurate and stable as possible. If a certain amount of time has elapsed between development and validation, it may in some circumstances make sense for the validation function to carry out out-of-time (OOT) tests on the basis of more current data (option 1). However, this usually requires at least one additional year of data to be available at the time of validation compared with the dataset used by the developers, which will not always be the case.
Nevertheless, further qualitative analyses (option 3) in the validation should ensure that the development does not result in overfitting to the data basis. To this end, the results of quantitative analyses should be supplemented in particular by economic plausibility checks of the model design and of individual model components. In addition, it may be useful to identify potential overfitting to the development data basis in a quantitative way as well, by means of analyses carried out on the development data, such as cross-validation tests.
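A minimal sketch of such a cross-validation check is given below, assuming a development sample with risk drivers and a default flag; the model class (logistic regression), the AUC metric and the fold count are our own illustrative assumptions and not a recommendation for any specific rating model.

```python
# Illustrative sketch only: compares in-sample discriminatory power with
# cross-validated discriminatory power on the development data in order to
# flag potential overfitting. Model class, metric and fold count are
# assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

def overfitting_check(X: np.ndarray, y: np.ndarray, n_splits: int = 5) -> dict:
    """X: risk drivers of the development sample, y: default flag (0/1)."""
    model = LogisticRegression(max_iter=1000)
    # In-sample AUC: fit and score on the full development sample.
    in_sample_auc = roc_auc_score(y, model.fit(X, y).predict_proba(X)[:, 1])
    # Cross-validated AUC: repeatedly fit on part of the data and score on
    # the held-out folds (stratified to preserve the low default rate).
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    cv_auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
    return {
        "in_sample_auc": in_sample_auc,
        "cross_validated_auc": cv_auc,
        # A large gap may indicate overfitting to the development data.
        "auc_gap": in_sample_auc - cv_auc,
    }
```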
In summary, all three options make sense in principle and should be applied, or at least be possible, depending on the respective (data) situation.

6b) More in general, which validation approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

The validation procedure should always be geared specifically to the framework conditions of the respective model, which, in addition to the properties of the model and of the underlying economic segment, also include in particular the volume of available data. For the quantitative analyses used, metrics and statistical tests should be designed such that the data basis, and in particular a small amount of data, is taken into account appropriately (see the sketch after the list below). The smaller the available data volume, the more other analysis tools should be used in addition to quantitative analyses based on the model results and internal default experience, e.g.:

- Comparisons with external benchmarks such as external ratings, studies (if available), etc.
- Expert assessments of the economic plausibility of the model design, the model components and the model results.
- Analyses of individual cases, e.g. of the reasons for overridden ratings and the economic background of defaults that have occurred.
- If necessary, impulse-response tests to gain additional insights into the stability/responsiveness of the model.
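As an example of a statistical test that remains meaningful for very small samples, an exact binomial test of the realised number of defaults against the calibrated PD could be used; the sketch below is purely illustrative, and its inputs, grade granularity and significance level are our own assumptions.

```python
# Illustrative sketch only: exact one-sided binomial test of the realised
# number of defaults against the calibrated PD; exact tests remain usable
# where asymptotic approximations break down for small samples.
# Inputs and the significance level are assumptions for illustration.
from scipy.stats import binomtest

def pd_backtest_small_sample(n_obligors: int, n_defaults: int,
                             calibrated_pd: float, alpha: float = 0.05) -> dict:
    """Tests whether the observed defaults are consistent with the calibrated
    PD (alternative hypothesis: the true default probability exceeds the PD)."""
    result = binomtest(n_defaults, n_obligors, calibrated_pd, alternative="greater")
    return {
        "p_value": result.pvalue,
        "reject_calibration_at_alpha": result.pvalue < alpha,
    }

# Example usage with hypothetical figures for a low-default grade:
# print(pd_backtest_small_sample(n_obligors=60, n_defaults=2, calibrated_pd=0.01))
```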


Name of the organization

EAPB