Response to consultation on Regulatory Technical Standards that specify material changes and extensions to the Internal Ratings Based approach
Question 1. Do you have any comments on the clarification of the scope of the revised draft regulatory technical standards to specify the conditions for assessing the materiality of the use of an existing rating system for other additional exposures not already covered by that rating system and changes to rating systems under the IRB Approach?
With regard to paragraph 10 (page 7 of the CP), it would be helpful to provide examples illustrating the new recital 4 of the draft RTS. For example, it could be stated that this new rule applies to all types of annuity loans in the retail business, including cases where a new annuity loan product (with different features) is introduced.
Para. 12 clarifies that changes due to regulatory requirements without institution-specific room for maneuver, which are mandatory under CRR III and do not affect the performance of a rating system, do not fall within the scope of the RTS and are therefore not to be reported as changes. This clarification is welcome. However, we would like to point out that most of the requirements of CRR III have already been implemented by institutions, so the proposed regulation has only a minor impact in this respect. We would therefore advocate applying this principle to future regulatory projects of comparable scope, i.e. those following CRR III.
It should be noted that the recitals of the proposed amending Delegated Regulation will not be part of the final consolidated version to be published in the EU's Official Journal. It is therefore difficult for institutions to track the recitals in the first place. Moreover, it is not entirely clear how the new recitals relate to the existing recitals of the original Delegated Regulation (EU) No 529/2014 and the amending Delegated Regulation (EU) 2015/924.
For instance, recital 2 of the proposed amending Delegated Regulation creates confusion when read alongside the existing recital 7, which refers to the "on-going alignment of the models to the calculation dataset used". It would be very helpful if the EBA could explain what exactly "calculation dataset" means and how it differs from the "reference dataset". In this context, it is also unclear what the "data for the application portfolio" mentioned in para. 8 of the consultation paper means. From our point of view, the latter can hardly refer to the actual input used for a rating: it goes without saying that this input will change over time and that ratings will always be prepared using the most up-to-date information, which cannot possibly constitute a change to the rating system. To make this clearer, it would be helpful if the EBA could provide some examples for clarification.
As mentioned in the public hearing on 15 January 2025, for some rating systems it may be necessary to update certain system settings on a regular basis using routinely collected new data. Examples of such data are stock prices, interest rates, inflation rates or rent levels. The existing recital 7 seems to provide a convenient way to perform the necessary updates without classifying them as changes. From our perspective this would make a lot of sense, especially in view of the increasing importance of machine learning approaches. A minimal sketch of such a routine update is given below.
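The sketch assumes a hypothetical rating system whose macroeconomic anchor points are recomputed on a fixed schedule from routinely collected data; all names and the update rule itself are our own illustrations, not taken from the RTS:

```python
# Hypothetical sketch of a routine, data-driven parameter refresh that is
# part of the rating system's normal operation rather than a model change.
# All names and the update rule are illustrative, not taken from the RTS.

from dataclasses import dataclass
from statistics import mean


@dataclass
class MacroInputs:
    """Routinely collected market data feeding the rating system."""
    interest_rates: list[float]   # e.g. monthly swap rates
    inflation_rates: list[float]  # e.g. monthly inflation prints


def refresh_macro_anchors(inputs: MacroInputs, window: int = 12) -> dict[str, float]:
    """Recompute the model's macro anchor points from the latest data.

    The update rule (a trailing average over a fixed window) is itself
    unchanged; only the data it is applied to is new.
    """
    return {
        "rate_anchor": mean(inputs.interest_rates[-window:]),
        "inflation_anchor": mean(inputs.inflation_rates[-window:]),
    }
```

The decisive point is that the update rule is fixed; only the data it operates on changes, so classifying each scheduled run as a change to the rating system would serve no supervisory purpose.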
Question 2. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of changes as described in the Annex I, part II, Section 1 and Annex I, part II, Section 2?
We welcome the clarification that only validation changes that lead to a progressive change in the validation result constitute a material change.
However, any other change to the validation process or the validation methodology still triggers an ex-ante notification and therefore produces costs, even if it is clearly conservative. Examples include:
◼ inclusion of supplementary tests that have no or only a conservative effect on the traffic light system,
◼ setting stricter threshold values for test procedures,
◼ changing the validation process by including additional control steps.
To avoid creating an incentive for institutions to refrain from or delay such sensible changes, these changes should only require ex-post notification instead of ex-ante notification.
The same applies to the model development methodology.
Moreover, it would be helpful if
◼ examples of material changes to the unlikeliness to pay indicator (Annex I, part II, Section 1, point 3(d)) could be added to para. 15 and
◼ examples of changes to the validation method and/or validation process could be added to para. 16.
In addition, we would welcome the retention of the current exemption for the slotting criteria approach: the procedure is used for simplified risk quantification with few institution-specific design options and as such should not be subject to the same rules as a “real” rating system. The assessment criteria of rank correlation and change in distribution are also of limited use due to the limited assessment scale, as illustrated in the sketch below. The provisions in Article 4 should therefore be sufficient as quantitative criteria.
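To illustrate the point on rank correlation, the following sketch computes a Spearman rank correlation between slotting assignments before and after a hypothetical change; all figures are invented and serve only to show how heavily tied the ranks are on a four-grade scale (scipy required):

```python
# Invented slotting assignments before and after a hypothetical change,
# on the four-grade scale (1 = strong ... 4 = weak).

from scipy.stats import spearmanr

before = [1, 1, 2, 2, 2, 3, 3, 3, 3, 4]
after  = [1, 2, 2, 2, 3, 3, 3, 4, 4, 4]

rho, _ = spearmanr(before, after)
print(f"Spearman rank correlation: {rho:.2f}")
# With only four distinct grades the ranks are heavily tied, so the
# statistic is coarse and a few migrated exposures can move it
# substantially; it says little about the materiality of the change.
```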
Finally, in the context of the proposed additional indications of "unlikeliness to pay" in para. 15, we would suggest using the expression "case-by-case assessment" of para. 58 of the Guidelines on the application of the definition of default (EBA/GL/2016/07) instead of "manual default reclassification".
Question 3. Do you have any comments on the clarifications and revisions made to the qualitative criteria for assessing the materiality of extensions and reductions as described in the Annex I, Part I, Section 1 and Annex I, Part I, Section 2?
NA
Question 4. Do you have any comments on the introduced clarification on the implementation of the quantitative threshold described in Article 4(1)(c)(i) and 4(1)(d)(i)?
The quantitative RWA thresholds should differentiate between RWA before and after the effect of synthetic securitizations.
◼ To assess the impact of a change at the level of an individual rating system, the pre-securitization RWA values should be considered, because these relate to the function of the rating system, which is independent of subsequent protection effects from synthetic securitizations.
◼ In contrast, for the overall RWA effect, the effects of synthetic securitizations should be taken into account to ensure that the actual overall effect is measured. Any portfolio effects must also be reflected.
It should be clarified that the overall RWA impact factors in the potential effect of the output floor, as illustrated in the sketch below.
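A purely numerical sketch of the two-level assessment we propose (all figures are invented and the helper names are ours; the 72.5% calibration is the CRR III output floor, applied here in simplified form):

```python
# All figures invented; helper names are ours, not from the RTS.

def relative_change(before: float, after: float) -> float:
    """Relative RWA change; negative values indicate an RWA decrease."""
    return (after - before) / before

# Level 1: impact on the individual rating system, measured on
# pre-securitization RWA, i.e. before synthetic protection effects.
rs_rwa_before, rs_rwa_after = 4_000.0, 3_800.0
print(f"rating-system impact: {relative_change(rs_rwa_before, rs_rwa_after):+.2%}")

# Level 2: overall RWA effect, measured post-securitization and, where
# binding, after the CRR III output floor (72.5% of standardized RWA).
irb_total_before, irb_total_after = 50_000.0, 49_700.0
floor = 0.725 * 70_000.0          # 72.5% of (invented) standardized RWA
overall_before = max(irb_total_before, floor)
overall_after = max(irb_total_after, floor)
print(f"overall impact (floored): {relative_change(overall_before, overall_after):+.2%}")
```

In this example the output floor is binding, so a change that reduces modeled RWA by 5% at rating-system level has no overall RWA effect at all; ignoring the floor would overstate the overall impact.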
According to paragraph 25, changes that relate to several rating systems should not be split up, and the -1.5% materiality threshold relating to the overall RWA effect for credit and dilution risk in accordance with Art. 4(1)(c)(i) should be assessed across all rating systems. It is, however, unclear how the thresholds for the effects on the range of application of the rating systems concerned are to be determined in this case. In our opinion, it should be clarified whether an aggregated assessment of the RWA effect across all affected IRB ranges of application in accordance with Art. 4(1)(c)(ii) is also to be carried out against the -15% threshold, or whether the RWA effects are to be determined individually for each affected range of application. In the latter case, the materiality threshold could quickly be exceeded for small rating systems, as the sketch below illustrates.
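The following numerical sketch (figures invented) shows how the two readings diverge when one change affects a large and a small rating system:

```python
# Invented figures: one change affects a large and a small rating system.

changes = {
    # rating system: (RWA of its range of application before, after)
    "large_corporates": (20_000.0, 19_600.0),   # -2.0%
    "small_specialised": (500.0, 400.0),        # -20.0%
}

THRESHOLD = -0.15  # the -15% threshold of Art. 4(1)(c)(ii)

for name, (before, after) in changes.items():
    impact = (after - before) / before
    print(f"{name}: {impact:+.1%} -> material: {impact <= THRESHOLD}")

agg_before = sum(before for before, _ in changes.values())
agg_after = sum(after for _, after in changes.values())
agg_impact = (agg_after - agg_before) / agg_before
print(f"aggregated: {agg_impact:+.1%} -> material: {agg_impact <= THRESHOLD}")
# Individually assessed, the small rating system breaches -15% at once,
# while the aggregated view stays far below the threshold.
```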
We would like to emphasize that the concepts of "modifications of the same nature" and "one change affecting several rating systems" can be very problematic. For example, suppose a rating model includes a coefficient that can be changed to adjust the overall level of PD estimates (see the sketch below). One might argue that changes to this coefficient are all "similar in nature" and that, if the bank changes it six times in 15 years, the EBA's logic would require all of these changes to be treated as a single change. Likewise, while a change in the definition of default seems to be a plausible example of a change that affects multiple rating systems, there are likely to be other cases where the assessment is more difficult. For example, suppose several rating models include a particular measure of inflation. At some point it may appear reasonable to switch to another measure that is considered more suitable. However, this switch may well be made at very different points in time for the rating models concerned; in fact, the question of whether the new measure is really more suitable may need to be answered separately for each model. At the very least, these issues should be approached with a great deal of pragmatism.
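As a hypothetical illustration of the coefficient example (all numbers invented), treating six small recalibrations as one change means assessing their cumulative effect:

```python
# Invented example: a calibration factor scaling the overall PD level is
# adjusted six times over 15 years; each step is small on its own.

adjustments = [1.05, 0.97, 1.02, 1.04, 0.99, 1.03]

level = 1.0
for i, factor in enumerate(adjustments, start=1):
    level *= factor
    print(f"after change {i}: cumulative PD-level shift {level - 1.0:+.1%}")

# Treated as a single change "of the same nature", the six steps would be
# assessed against the materiality thresholds via their cumulative shift
# (here roughly +10%), long after the earlier steps were already notified.
```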
Question 5. Do you have any comments on the revised 15% threshold described in Article 4(1)(d)(ii) related to the materiality of extensions of the range of application of rating systems?
NA
Question 6. Do you have any comments on the documentation requirement for extensions that require prior notification?
We concur with the view described in consultation box 6. Requiring the entire documentation catalogue (validation report and technical documentation) for extensions that only require prior notification would be disproportionate and would unnecessarily delay sensible model changes. This applies particularly to models that have been developed jointly and are operated by a central servicer (pool models). Where still deemed useful from a methodological perspective in individual cases, institutions may add a validation report voluntarily, but they should not be under a regulatory obligation to do so.
Moreover, it should be clarified that changes to the validation process requiring prior notification do not require a written assessment by internal audit.