With regard to the interpretation of the requirement set out in paragraph 89 of the Guidelines on PD estimation, LGD estimation and the treatment of defaulted exposures (EBA/GL/2017/16), is it correct that, if a ranking method or overrides policy has changed over time, institutions cannot “conduct the calibration after taking into account any overrides applied in the assignment of obligors to grades or pools”, because the relevant data are not available (e.g. in the case of a model change, the ratings produced by the new PD model cannot include the override process)?
With regard to the interpretation of the same requirement in paragraph 89 of the Guidelines, is it correct that, if a ranking method or overrides policy has changed over time, institutions should “analyse the effects of these changes on the frequency and scope of overrides and take them into account appropriately” within the annual review of estimates framework, once the relevant data become available, i.e. once the new rating assignment process has been fully deployed and used by the analysts of the supervised entity?
The EBA states in Chapter 5.3.5 “Calibration to the long-run average default rate”, paragraph 89, that: “Institutions should conduct the calibration after taking into account any overrides applied in the assignment of obligors to grades or pools, and before the application of MoC or floors to PD estimates as referred to in Articles 160(1) and 163(1) of Regulation (EU) No 575/2013. Where a ranking method or overrides policy has changed over time, institutions should analyse the effects of these changes on the frequency and scope of overrides and take them into account appropriately.” In light of the above, overrides should be taken into account during the PD model calibration procedure. Moreover, the same requirement applies to a new PD model whose ranking method or overrides policy has changed with respect to the previous one.
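The calibration step referred to in paragraph 89 can be illustrated with a minimal numerical sketch: assuming the calibration sample already reflects post-override grade assignments, the long-run average default rate is obtained from the one-year default rates of the historical cohorts. The function and field names below are hypothetical, and the simple average of yearly rates is only one possible aggregation.

```python
# Illustrative sketch only, not a prescribed methodology. Each observation
# is an obligor in the calibration sample AFTER overrides have been applied
# to its grade assignment; 'cohort' and 'defaulted' are hypothetical names.
from collections import defaultdict

def long_run_average_dr(observations):
    """Return (one-year default rate per cohort, long-run average).

    observations: iterable of dicts with keys 'cohort' (e.g. a year)
    and 'defaulted' (bool), taken from the post-override sample.
    """
    per_cohort = defaultdict(lambda: [0, 0])  # cohort -> [defaults, obligors]
    for obs in observations:
        stats = per_cohort[obs['cohort']]
        stats[0] += obs['defaulted']
        stats[1] += 1
    yearly = {c: d / n for c, (d, n) in per_cohort.items()}
    lra = sum(yearly.values()) / len(yearly)
    return yearly, lra
```

Under this sketch, the margin of conservatism and any regulatory floors would only be applied after this calibration step, in line with the ordering set out in the quoted paragraph.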
Based on Article 172(3) of Regulation (EU) No 575/2013 (CRR): “For grade and pool assignments institutions shall document the situations in which human judgement may override the inputs or outputs of the assignment process and the personnel responsible for approving these overrides. Institutions shall document these overrides and note down the personnel responsible. Institutions shall analyse the performance of the exposures whose assignments have been overridden. This analysis shall include an assessment of the performance of exposures whose rating has been overridden by a particular person, accounting for all the responsible personnel.” Such an override is carried out on a case-by-case basis and should take into account all the quantitative and qualitative information characterising a specific obligor and the relevant economic, legal and business environment. An override may concern information that is already taken into account by the PD model but that should be given a different weight under specific circumstances; alternatively, it may concern information that is not taken into account by the PD model but that is relevant for correctly defining the risk profile of a given obligor.
Finally, the override process should be governed by specific internal rules, and the motivations should be well documented, analysed and authorised by an independent unit.
On this basis, in the case of a material change to the rating system (including a substantial revision of both the ranking method and the overrides policy), we believe that the process described above cannot be replicated before the model release, because:
• It is very difficult to analyse all the obligors in the calibration sample on a case-by-case basis, taking into account their specific characteristics; moreover, the analysis should be conducted separately for each reference date, as both the obligors’ characteristics and the relevant economic, legal and business environment can change over time.
• If a rating system is substantially or completely new, the rating analysts’ familiarity with the inputs and outputs of the model is still low; it would therefore be difficult to apply overrides before the rating assignment process is officially authorised for actual use in a real-world context.
• Any override simulation performed before the PD model release would rest on questionable assumptions, proxies and asymmetries, potentially introducing a significant risk quantification error in the PD calibration phase; moreover, it could reduce the conservativeness of the PD estimates.
• Applying the previous overrides to the new model would be incorrect, since those overrides were applied on the basis of a specific set of risk drivers and their respective weights in the rating model; consequently, if the model design and the risk drivers (or their weights) differ in the new model, applying an override based on the previous motivation would be incoherent.
• Finally, when a new rating model is developed, the recurring override motivations and all the relevant information not accounted for in the previous rating model should be analysed; as a result, the need for certain overrides may disappear in the new version of the PD model.
As a supervised entity, we believe that, if the “ranking method or override policy has changed over time”, the override process and its impact from a risk quantification point of view should be addressed within the annual review of estimates framework and cannot be applied in the initial calibration phase. Once the actual, unbiased distribution of the overrides is observed, the presence of any PD underestimation risk should be verified by means of annual back-testing analyses. The evidence from the analysis of a sufficiently long time series of default rates observed for each rating grade after the application of overrides can then be used, where necessary, to adjust the risk quantification. In other words, if the observed default rates are lower than the PD estimates associated with the rating grades before overrides but higher than the PD estimates associated with the rating grades after overrides, the override process has introduced a distortion, which should be corrected in order to increase the precision of the risk quantification.
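The distortion check described above can be sketched as follows, purely for illustration: for each grade, the observed default rate is compared with the PD estimates before and after overrides, and grades where the observed rate falls between the post-override and pre-override PDs are flagged. All field and function names are assumptions, not prescribed by the Guidelines.

```python
# Hypothetical back-testing sketch of the override distortion check.
# A grade is flagged when: PD after override < observed DR < PD before
# override, i.e. overrides may have introduced a non-conservative bias.

def override_distortion(grades):
    """grades: list of dicts with keys 'grade', 'pd_pre_override',
    'pd_post_override' and 'observed_dr'. Returns the grades whose
    observed default rate lies strictly between the post-override
    and pre-override PD estimates."""
    flagged = []
    for g in grades:
        if g['pd_post_override'] < g['observed_dr'] < g['pd_pre_override']:
            flagged.append(g['grade'])
    return flagged
```

In practice such a check would be one input among others in the annual review of estimates; a flagged grade would prompt further analysis rather than an automatic recalibration.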
Paragraph 89 of the EBA Guidelines on PD estimation, LGD estimation and the treatment of defaulted exposures (EBA/GL/2017/16) clarifies that, for the purpose of calibrating PD estimates to the long-run average default rate, any overrides applied in the assignment of obligors to grades or pools are taken into account, and that institutions should analyse the effects of changes to the ranking method or overrides policy on the frequency and scope of overrides and take them into account appropriately.
These requirements ensure that the calibration takes into account all obligors assigned to specific grades or pools at the moment of the calibration, including obligors whose assignment to grades or pools was overridden by the institution up to that moment. This avoids undue calibration biases that would result from disregarding overridden assignments triggered, for instance, by information not accounted for by the current rating model.
It follows that the calibration should generally be based on grade assignments taking into account overrides of model inputs and outputs.
For overrides of inputs, the overridden values should be considered to the extent that they are relevant for the ranking method underlying the grades and pools for which the calibration is performed, in order to take into account all relevant information in the assignment of obligors and facilities to grades or pools, in accordance with Article 171(2) CRR. Therefore, if a risk driver, whether used in the ranking method or for defining the calibration segment, is still relevant for the updated model, the overridden value should still be applied in the calibration for the respective historical observation.
Historical overrides of outputs should be considered in the calibration process by relying on historical grade assignments. However, in some cases, historical grade assignments resulting from the application of overrides to outputs may not be available, or may no longer be considered appropriate because they are not based on the rating assignment process used for the application portfolio (e.g. because the structure of the rating grades or pools has changed). In those cases, institutions should identify a deficiency under category A of paragraph 37 of EBA/GL/2017/16, apply an appropriate adjustment to the extent possible, and apply the corresponding MoC to account for the uncertainty associated with the consideration of overrides of model inputs and outputs within the model calibration. Category A includes, in particular, deficiencies stemming from ‘missing, inaccurate or outdated rating assignment used for assessing historical grades or pools for the purpose of calculation of default rates or average realised LGDs per grade or pool’ and ‘missing, inaccurate or outdated data on risk drivers and rating criteria’. The appropriateness of the quantification of this MoC should then be reassessed during the subsequent annual review of estimates, in accordance with paragraph 51 of EBA/GL/2017/16.
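Purely as an illustration of how an appropriate adjustment and the corresponding MoC enter the final estimate, the sketch below adds both components to a best-estimate PD and caps the result at 100%. The quantification of each component is model-specific and governed by paragraphs 36 to 51 of EBA/GL/2017/16; the function and its inputs are hypothetical.

```python
# Illustrative sketch only: the adjustment for an identified category A
# deficiency is applied to the best estimate, and the MoC covering the
# remaining estimation uncertainty is added on top. Names are hypothetical.

def pd_with_moc(best_estimate_pd, category_a_adjustment, moc):
    """Adjusted PD = best estimate + appropriate adjustment for the
    identified deficiency; the MoC is then added, and the result is
    capped at 1.0 (a PD cannot exceed 100%)."""
    adjusted = best_estimate_pd + category_a_adjustment
    return min(adjusted + moc, 1.0)
```

The split between adjustment and MoC matters here: the adjustment corrects the estimate for the identified deficiency, while the MoC covers the residual uncertainty and is reassessed at each annual review.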