Response to consultation on supervisory handbook on the validation of rating systems under the Internal Ratings Based approach

1b) Do you see any constraints in implementing the proposed expectations (i) as described in section 4 for the first validation for a) newly developed models; and b) model changes; and (ii) as described in section 5 for the subsequent validation of unchanged models?

Implementation tests
According to article 80 in the handbook, the validation function is expected to validate the proper IT implementation of the rating system after the core performance assessment. According to article 102, the documentation package submitted to the validation function for the first validation is expected to include sufficient documentation of the IT implementation to allow the validation function to form an opinion on the model implementation. Is it correctly understood that the evaluation of the model implementation can be conducted after the first validation but before the model is taken into use? The institution would like this to be clarified in the handbook. The institution would also like to propose an option to complete the testing of the implementation after the initial validation has been submitted for approval, but before the model is used for calculating capital requirements. This approach would facilitate more conclusive testing of implementations that more closely resemble the final implementation. It would also give the institution greater flexibility to finalize or change the implementation of the models after the application has been submitted and while the supervisory authority assesses the application.

Overrides
The institution has observed a constraint with regard to the overrides analysis mentioned in articles 47 (b) and 94. As the new model might not be implemented at the time of the initial validation, the frequency and accuracy of overrides might not be testable in the initial validation. In focus box 7, two alternatives are presented; however, the institution finds that alternative (a) does not contribute meaningfully to the validation, as an old model’s overrides are not based on the new model’s shortcomings. Alternative (b) would provide a more meaningful validation but is expected to be burdensome in comparison to the value created (for example, due to sample variation and uncertainty). The institution would therefore like to propose that tests for overrides be suggested only as part of the subsequent validation.

Change management
The institution finds a constraint regarding the validation function’s role in change management as mentioned in, for example, articles 21 (b) and 37. Article 11 (4) (d) of Commission Delegated Regulation (EU) 2022/439 states that the validation function should verify all changes related to the internal models and their materiality, whereas the handbook assigns the validation function the task of assessing the materiality of the change. The institution finds that the wording of the validation handbook and CDR 2022/439 does not align. To avoid misunderstandings, the validation handbook should be aligned with CDR 2022/439 (e.g. by clarifying the interpretation of “verify” in comparison to “assess”).

Question 2: For rating systems that are used and validated across different entities, do you have a particular process in place to share the findings of all relevant validation functions? Do you apply a single set of remedial actions across all the entities or are there cases where remedial actions are tailor-made to each level of application?

In relation to articles 3 and 29 in the handbook, the institution finds that the reference to the CRR should be more specific in order to provide further guidance on the basis for the scope of the validation. The institution suggests that the EBA reference a specific article or articles in these sections.

Question 4: Which approach factoring in the rating philosophy of a model into the back-testing analyses should be considered as best practices?

The institution finds that any suggested back-testing approaches for best estimates need to account for models having different structures, and should either provide testing options that work for low-default portfolios or provide specifics on back-testing low-default portfolios in years without defaults (one illustrative option is sketched below).
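For illustration only, and not drawn from the handbook: one option often discussed for low-default portfolios is a one-sided exact binomial test of the calibrated PD against the observed number of defaults, which remains well defined even in a year with zero defaults. The portfolio figures and the function name in the sketch below are hypothetical.

```python
from scipy.stats import binom

def pd_backtest_pvalue(n_obligors: int, n_defaults: int, calibrated_pd: float) -> float:
    """One-sided exact binomial test: probability of observing at most
    n_defaults defaults if the calibrated PD were the true default rate.
    A very small value suggests the calibrated PD is conservative (too high);
    the symmetric test on the upper tail would flag underestimation instead."""
    return binom.cdf(n_defaults, n_obligors, calibrated_pd)

# Hypothetical low-default portfolio: 400 obligors, zero observed defaults.
# Even with no defaults in the observation year, the test is well defined.
p = pd_backtest_pvalue(n_obligors=400, n_defaults=0, calibrated_pd=0.02)
print(f"P(0 or fewer defaults | PD = 2%) = {p:.4f}")  # ≈ 0.0003
```

The choice of 400 obligors and a 2% calibrated PD is arbitrary; the point is only that the test statistic remains informative when no defaults are observed.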

In addition, the institution requests clarification regarding article 48 (a) and the alignment of the best estimate, the observed default rate (DR) and the long-run average default rate (LRADR), as these are calculated over different time periods. Could the EBA please provide further explanation of how the validation function should evaluate the alignment of these estimates?

6b): More in general, which validation approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

The institution would like to request clarification as to which article or guideline requires the CRCU to conduct out-of-sample (OOS) and out-of-time (OOT) tests as part of the model development, given that article 93 (a) specifies that the validation function should verify that these tests have been performed. This implies that there is a requirement for the CRCU to conduct these tests, and it would be beneficial if the article or guideline on which this verification is based were clearly specified.
