Response to consultation on supervisory handbook on the validation of rating systems under the Internal Ratings Based approach


1a) How is the split between the first and the subsequent validation implemented in your institution?

The first validation assesses the model as a whole, performing all analyses defined by the validation approach. The subsequent validation differentiates between analyses that need to be performed every year and those that can be conducted at least every three years. For the first validation, the approach depends on whether the cut-off dates of model development and validation differ. If they differ, the analyses are performed independently by the validation function, since more data and thus more information are available. If they are equal, the analyses performed by the CRCU during development are independently challenged.

1b) Do you see any constraints in implementing the proposed expectations (i) as described in section 4 for the first validation for a) newly developed models; and b) model changes; and (ii) as described in section 5 for the subsequent validation of unchanged models?

"(i) P. 88: It ist not clear, how the term ""reviews the documentation that will be submitted to CAs"" is defined. While the model and development documentation is in the scope of the validation from a methodological point of view, the whole Model Change Documentation incl. Internal-Audit-Reports etc. is not. This is in scope of the Internal-Audit Function to assess whether the Model Change Documentation is adequat and complete. Those responsibilities between IA and IVU are clearly documented in the MCP-Policy.
P. 89: It is not clear what is meant by ""check the correct calculation of the IRB metrics"". Regulatory compliance as well as an adequate Internal Control System is in the responsibility of Internal Audit. While we understand, the independently by the IVU calculated IRB-metrics should be compliant to the regulatory requirements the responsibility to check the calculation of the CRCU should not necessary be by the IVU.
P. 102: The propesed opinion of the IVU on"" the integrity of the model implemented in the development environment and to be implemented in the production environment"" will lead to a change of the skill-profile of members of the IVU. Beside the scarce quantitative skills a mixture of quantitative and IT-skills will be needed leading to a more challenging recruitment, while the IT-skillset exist in the IT-department.

(ii) P. 133: It is expected that the IVU checks that any (non-material) model-change has been properly reflected in the business/functional requirements. It is not clear what is meant by business/functional requirements. The check of the model documentation and the adequate documentation of the productive model complemented by development documentations for model changes is in scope of the IVU. "

Question 2: For rating systems that are used and validated across different entities, do you have a particular process in place to share the findings of all relevant validation functions? Do you apply a singular set of remedial action across all the entities or are there cases where remedial actions are tailor-made to each level of application?

n/a - the rating systems are not used or validated across different entities.

3a) Do you deem it preferential to split the review of the definition of default between IRB-related topics and other topics?

We do not deem it preferential.

3b) If you do prefer a split in question 3a, which topics of the definition of default would you consider to be IRB-related, and hence should be covered by the internal validation function?

n/a

Question 4: Which approach factoring in the rating philosophy of a model into the back-testing analyses should be considered as best practices?

In our opinion, the independent validation unit's assessment of the back-testing results should factor in the circumstances. In most cases there is not sufficient data to perform statistically meaningful back-testing on different time slices or different subsets, especially for LGD. Such analyses can be conducted, but a quantitative threshold should only be defined if enough data is available. In all other cases the results should be discussed and assessed qualitatively, as the purely illustrative sketch below indicates.
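As a purely illustrative sketch of this point (the portfolio figures are hypothetical, and a standard one-sided binomial test is used here, not any test prescribed by the handbook), a small portfolio cannot distinguish even a doubled default rate from noise, so a fixed quantitative threshold would be arbitrary:

    # Illustrative only: one-sided binomial back-test of a calibrated PD
    # against the observed default count, for a small and a large portfolio.
    from scipy.stats import binomtest

    calibrated_pd = 0.02  # hypothetical grade-level PD

    for n_obligors, n_defaults in [(50, 2), (5000, 200)]:
        # Observed default rate is 4% in both cases, double the calibrated PD.
        result = binomtest(n_defaults, n_obligors, calibrated_pd,
                           alternative="greater")
        print(f"n={n_obligors:>5}, defaults={n_defaults:>3}, "
              f"p-value={result.pvalue:.4f}")

    # With n=50 the p-value is roughly 0.26: the doubled default rate is not
    # statistically distinguishable from chance. With n=5000 the same relative
    # deviation is highly significant.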

Question 5: What analyses do you consider to be best practice to empirically assess the modelling choices in paragraph [76] and, more generally, the performance of the slotting approach used (i.e. the discriminatory power and homogeneity)?

n/a - no slotting approach

6a) Which of the above mentioned approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

Dismissing some data from the development sample would lead to a loss of information and, in our experience, potentially to bias. Therefore the whole data set should be used for development. Complementing the tests performed by the CRCU with in-sample tests and qualitative analyses (Approach 3) would be a good approach.

6b) More in general, which validation approaches do you consider as best practices to assess the performance of the model in the context of data scarcity?

In the case of data scarcity it is crucial to gain an understanding of the underlying data and their constraints, in order to assess whether statistically insignificant metrics can be evaluated qualitatively. Under data scarcity, the overall picture from all analyses taken together has a much bigger impact on the assessment than any single analysis. Furthermore, it is valuable to assess the development of the analysis results over time, in order to identify any hidden bias in the development data set or any developments in the portfolio or the metrics.


Name of the organization

Aareal Bank AG