Response to consultation on supervisory handbook on the validation of rating systems under the Internal Ratings Based approach


1a) How is the split between the first and the subsequent validation implemented in your institution?

The FBF welcomes the opportunity to respond to this consultation. It would like to share a few comments and reactions on the consultation paper, which do not necessarily refer to the questions asked in the EBA document, hence the choice to list them in a separate note:



 Overall, a high level of expectations is set, without explicit mention of the materiality and proportionality principles


 Some grey areas remain regarding the split of responsibilities between the internal audit (IA) and the validation function in the assessment of the model documentation
In Interaction Box 2, it is specified that: “While the validation function assesses the content of the documentation, in particular during its assessment of the rating system developed by the CRCU, it is not necessarily expected to perform a regulatory compliance check based on the documentation they receive. Instead, this assessment may be performed by the IA.”
This sentence is confusing. Our understanding is that the validation function is expected to assess regulatory compliance with respect to the aspects under its supervision, while topics outside the validation function's supervision have to be reviewed by the IA once they are implemented within the bank.


 The boundary between the responsibilities of the CRCU and of the validation function is sometimes not clear, specifically:
 In paragraph 87 – interaction box 10, it is specified that: “the validation function is expected to perform additional tests whenever it considers that the actions performed by the CRCU do not cover all angles of all the potential deficiencies.” However, should the prerequisite tests expected to be performed by the LoD1 according to internal procedures/protocols/modelling standards be missing, the validation function should request the CRCU to perform the missing analyses and then review them.
 In paragraph 93 b it is specified that: “In particular, where a sufficient amount of more recent data as in model development is available (in the case where a significant period of time occurred between the cut-off date used for model development and the validation date), the validation function is expected to perform at least an OOT-validation using that data.” We would like clarification of what is considered to be a “significant period of time”. Considering CRR Articles 180(1)(h) and 180(2)(e), which specify that model development should use the most recent data where relevant, how does this paragraph apply in the case of a very long model development process, given that the validation function should not be made responsible for model calibration?
 In paragraph 85, on the assessment of the adequacy of the implementation that should be performed by the validation function, no reference is made to the assessment that should first be performed by the CRCU. However, the CRCU should first perform the control of the adequacy of the implementation, which should then be reviewed by the validation function.
 Paragraph 37: Assessment of the materiality of a model change or extension
“In addition to the assessment of the model change in terms of performance improvement, the validation function is expected to assess the materiality of all model changes and extensions and their combined effects.”
Comment: We understand that the validation function is expected to be the second line of defense, between the CRCU and the IA; the CRCU should be at the origin of the materiality assessment of the changes it submits. For this reason, we prefer a process where the materiality of extensions and changes is first identified and assessed by the CRCU, as the first line of defense, and then independently reviewed and challenged by the validation function.


 Objective of the validation and validation policy (paragraphs 32 and 33):
We would like confirmation that the overall assessment can be performed based on the severity of the recommendations (provided that the criticality of each recommendation is duly justified by an impact study whenever feasible, or is consistent with the internal recommendation scaling policy otherwise), rather than on an aggregation of statistical results / marginal impacts (also in paragraph 35 b).


 For an initial validation, it is difficult to systematically perform the described analyses on the overrides, as the model is not yet applied (paragraph 47 a and b).


 Requirement of independent access to the data used for model validation

We would like the authors of the Consultation Paper to specify what is meant by ‘all the relevant IRB data’ in the sentence “The validation function is expected have an independent access of all the relevant IRB data” in paragraph 84 of the CP. Indeed, as ‘all’ could encompass every data source ever used in the data collection for modelling purposes, it could be read as requiring independent access to all the data of the bank, which would be difficult to manage. However, when analysing the regulatory references (copied below), the observed practices of the validation function, and the clarifications provided at the public hearing of 4 October 2022, we understand that the data targeted by the words ‘all the relevant IRB data’ should be understood as the reference data set (RDS) on which the model is built, for risk differentiation, risk quantification or back-testing purposes. As a consequence, we request that this clarification be explicitly mentioned, to avoid possible misunderstandings such as the one illustrated at the beginning of this paragraph.

Consultation paper paragraph 84, page 67: Accessibility of the data for the data maintenance. The validation function is expected have an independent access of all the relevant IRB data and their respective databases, as well as sufficient data extraction and manipulation capabilities. (Footnote 278)

Consultation paper footnote, page 67: 278 To ensure that the validation function is in a position to propose an effective and independent challenge to model development and use. To this end, the data should be available, as referred to in Article 42(1)(b) of the CDR on assessment methodology.

Article 42(1)(b) of the CDR on assessment methodology: “Article 42 – Data requirements – 1. When assessing compliance with the overall requirements for estimation laid down in Article 179 of Regulation (EU) No 575/2013, the data used for the quantification of risk parameters and the quality of that data, competent authorities shall verify: (…) (b) the availability of quantitative data providing a breakdown of the loss experience by the factors which drive the respective risk parameters as referred to in Article 179(1)(b) of Regulation (EU) No 575/2013.”



 Independence of the validation function vis-à-vis the CRCU (paragraph 18)
 (Point a) “The structural independence ensured via the organisational setup (see Interaction box [1]). In this regard, it is expected that large and complex institutions apply the setup which provides the highest level of independence of the validation function (Point 1 of Interaction box [1]).”
Point 1 of Interaction box [1]: “Article 10 of the CDR on assessment methodology provides three different types of setups within the institution’s organisational structure which can be allowed, depending on the nature, size and scale of the institution and the complexity of the risks inherent in its business model: 1. The validation function is in a unit separated from the CRCU and both units report to different members of the senior management;"
 Comment: We believe that the independence of the validation function cannot be evaluated solely on the basis of whether the validation function and the CRCU report to a single senior manager or to different senior managers. Indeed, the regulation is not prescriptive regarding the definition of "senior managers", and very different set-ups have been implemented across banks. Hence, we believe it is necessary to take into consideration the definition and the number of senior managers within a bank, as well as their level of seniority. As an example, when there is only one senior manager in the risk control function (the Chief Risk Officer), we believe it would be detrimental to force the validation function outside of the risk control function in order to fulfil the requirement, as long as the validation function reports at the highest level of the control function (the CRO).


 Scope of the validation (paragraph 29)

 With regard to the intra-group outsourcing of internal validation (paragraphs 30, 31, 141, 142, 150):
We would like to call the authors' attention to the fact that requiring, within a banking group, each supervised and consolidated entity to establish its own validation policy would jeopardize the consistent and homogeneous management of model risk at consolidated level, especially when a rating system is used by several entities of a group. Moreover, although we fully acknowledge that, in the framework of intra-group outsourcing, the outsourcing entity will always retain responsibility for the opinion it should obtain on the rating system, our view is that this opinion shall be part of the group rating system approval process. Hence, we question the appropriateness and applicability of the ban on outsourcing any part of the validation function within the group, and the subsequent limitation of outsourcing to operational tasks, and strongly suggest removing it.

 (Paragraph 3) “To satisfy the requirements of Part III, Title II of the CRR, the validation activities are expected to be performed at all relevant levels where an internal model is used to satisfy the requirements of the CRR. As such, the internal validation should be conducted at each level where a CA has granted an approval for a rating system (or is expected to do so in the context of an initial validation of a new model). Therefore, in the case where a rating system has received the approval on a consolidated as well as sub-consolidated and/or individual basis, the internal validation is expected to be performed at these levels.”
Comment: Let us consider the case of a banking group which is made up of several local banks and is structured into several consolidation/sub-consolidation/individual levels. In this context, some rating systems are developed over very broad scopes that exceed the limits of individual banks or sub-consolidation levels. These rating systems typically involve large corporates, or financing activities with a homogeneous business treatment and obligor behaviour across the modelling perimeter. Naturally, such rating systems are intended to be used across the same wide scope considered for the modelling.
For these rating systems, the validation strategy follows the modelling strategy: validation is performed at the same level as modelling, and the representativeness tests notably make it possible to verify that the reference data set gathers data from the whole scope.
Such models are submitted for approval to the ECB with full transparency on the scope on which they have been modelled and validated, and on which they are intended to be used. Once granted, the approval also reflects this scope, i.e. the individual, sub-consolidated and consolidated levels at which the rating system is approved.
In our view, this guarantees a homogeneous treatment of comparable obligors and exposures, and a single rating of obligors across the Group.

1b) Do you see any constraints in implementing the proposed expectations (i) as described in section 4 for the first validation for a) newly developed models; and b) model changes; and (ii) as described in section 5 for the subsequent validation of unchanged models?

-

Question 2: For rating systems that are used and validated across different entities, do you have a particular process in place to share the findings of all relevant validation functions? Do you apply a singular set of remedial action across all the entities or are there cases where remedial actions are tailor-made to each level of application?

-

3a) Do you deem it preferential to split the review of the definition of default between IRB-related topics and other topics?

-

3b) If you do prefer a split in question 3a, which topics of the definition of default would you consider to be IRB-related, and hence should be covered by the internal validation function?

-

Question 4: Which approach factoring in the rating philosophy of a model into the back-testing analyses should be considered as best practices?

-

Question 5: What analyses do you consider to be best practice to empirically assess the modelling choices in paragraph [76] and, more generally, the performance of the slotting approach used (i.e. the discriminatory power and homogeneity)?

-

6a) Which of the above mentioned approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

-

6b): More in general, which validation approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

-


Name of the organization

French Banking Federation