Response to consultation on Guidelines PD estimation, LGD estimation and treatment of defaulted assets


Question 1: Do you agree with the proposed requirement with regard to the application of appropriate adjustments and margin of conservatism? Do you have any operational concern with respect to the proposed categorization?

We expect that most IRB systems, with different degrees of priority and with different impacted parameters or model components, would require material changes to fully address the requirements.

Question 2: Do you see any operational limitations with respect to the monitoring requirement proposed in paragraph 53?

No, we do not see any limitation in calculating one-year default rates at least quarterly.

Question 3: Do you agree with the proposed policy for calculating observed average default rates? How do you treat short term contracts in this regard?

We disagree with the proposed treatment (point 116) of interest and fees capitalised after the moment of default, since it is not consistent with the "economic loss" meaning of LGD.
In our view, only cash flows related to effective recoveries or costs should be taken into account. Interest and fees capitalised after the moment of default are economically irrelevant, as no effective cash flow is associated with them and recoveries are already properly discounted. Interest and fees have a similar economic meaning, so both should be excluded, as in the alternative proposal included in the explanatory box, which we consider fully appropriate.
From a more formal point of view, we consider that Art. 181(1)(i) of the CRR, which states that "to the extent that unpaid late fees have been capitalised in the institution's income statement, they shall be added to the institution's measure of exposure and loss", has an unequivocal interpretation: unpaid late fees count only after they have been capitalised and thus become part of the EAD; being part of the EAD means considering them in both "exposure and loss".
The possibility of negative realised LGD as an outcome of the alternative approach, a risk highlighted in the explanatory box, is already accounted for by a general zero floor. Potentially, LGD might find a more prudent floor in the LGD resulting from material costs that, never being capitalised, are not subject to recovery. Compensating the discounting effect in this way is fully economically grounded.
Finally, as far as art. 115 is concerned, we believe that, since the outstanding amount at the date of return to non-defaulted status also includes interest and fees charged during the default period, such amount should be discounted to the default date as any other recovery.
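As a minimal illustration of the realised-LGD computation under the alternative treatment described above (capitalised post-default interest and fees excluded, recoveries and costs discounted to the default date, zero floor applied), a sketch with entirely hypothetical figures:

```python
# Illustrative sketch, not an implementation of the Guidelines: realised
# LGD under the alternative treatment. Capitalised post-default interest
# and fees are simply not included among the cash flows; recoveries and
# costs are discounted to the default date; a zero floor is applied.

def realised_lgd(ead_at_default, cash_flows, annual_rate):
    """cash_flows: list of (years_after_default, amount); positive
    amounts are recoveries, negative amounts are recovery costs."""
    pv = sum(cf / (1 + annual_rate) ** t for t, cf in cash_flows)
    lgd = 1 - pv / ead_at_default
    return max(lgd, 0.0)  # general zero floor on realised LGD

# Hypothetical example: EAD 100, recovery of 60 after 2 years,
# recovery costs of 5 after 1 year, 5% discount rate.
flows = [(2.0, 60.0), (1.0, -5.0)]
print(round(realised_lgd(100.0, flows, 0.05), 4))
```

Adding capitalised interest to the denominator and numerator, as in the treatment we object to, would change the figure without any underlying cash flow.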

As far as the usage of undrawn amounts is concerned, we deem the Guidelines' treatment to be fully in line with CRR provisions for CCF estimates. A treatment within LGD, as allowed for retail, would be equally sensible, but it would require a CRR amendment.

Question 4: Are the requirements on determining the relevant historical observation periods sufficiently clear? Which adjustments (downward or upward), and due to which reasons, are currently applied to the average of observed default rates in order to estimate the long-run average default rate? If possible, please order those adjustments by materiality in terms of RWA.

As the rating scale structure might be relevant both for PD calibration and for backtesting purposes, and as the criteria used by different institutions in rating scale design are quite different, some guidelines are appropriate. A rating scale should generally be designed so that undue concentrations are avoided, but, more importantly, it should ensure that counterparties with the same risk are assigned the same PD and counterparties with different risk are assigned different PDs. For this reason, a rating scale should minimise risk variability within each class and maximise it between classes.
As classes are used to calibrate PDs, the statistical robustness of risk differentiation should be explicitly tested.
Such an optimisation implies that the optimal number of classes is not always the same, as it is strongly related to the distribution of the underlying available risk drivers and thus of the final scores or individual PDs (where estimated). For this reason, a benchmark on the number of classes is not found to be beneficial and might in some cases increase variability. Equally, setting a maximum PD threshold would be inappropriate, as the granularity of the scale at higher PD levels is strictly related to the discriminatory power of different models, even though the proposed approach would most likely help reduce RWA variability for the upper classes of the rating scale (i.e. grades close to default).
For the above-mentioned reasons, scales should generally be designed specifically for each portfolio, and recourse to institution-wide masterscales should be limited to reporting purposes where an aggregate view is required. Even with portfolio-specific scales, in some cases significant concentrations cannot be avoided (especially for retail, e.g. regularly amortising mortgages).
In other cases, the differentiation of PDs among lower-risk classes is not statistically grounded, but a granular scale is required for a reasonable business process. In such cases, a joint PD calibration for regulatory purposes covering more than one class can be most appropriate, and calibration should then be assessed at this aggregate level.
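A hypothetical illustration of such a joint calibration, with invented figures: two low-risk grades are kept separate on the business scale but receive a single regulatory PD calibrated on their pooled default experience.

```python
# Hypothetical illustration of joint PD calibration across grades:
# two low-risk grades whose default counts are too small to support
# statistically distinct PDs receive the same PD, calibrated to their
# pooled default rate. All figures are invented.
grades = {
    "A1": {"obligors": 400, "defaults": 1},
    "A2": {"obligors": 600, "defaults": 2},
}

pooled_defaults = sum(g["defaults"] for g in grades.values())
pooled_obligors = sum(g["obligors"] for g in grades.values())
joint_pd = pooled_defaults / pooled_obligors  # 3 / 1000

# Both grades keep their place on the business rating scale but share
# one regulatory PD; calibration is then assessed at this aggregate level.
calibrated = {name: joint_pd for name in grades}
print(calibrated)
```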

Question 5: How do you take economic conditions into account in the design of your rating systems, in particular in terms of: d. definition of risk drivers, e. definition of the number of grades f. definition of the long-run average of default rates?

Most rating systems do not take economic conditions into account directly, but they include variables correlated with economic conditions (behavioural data, financial information, etc.), so they are hybrid in nature.

Question 6: Do you have processes in place to monitor the rating philosophy over time? If yes, please describe them.

We generally agree with the proposed policy.
As far as short-term contracts are concerned, we acknowledge that some business models or portfolios might be more heavily affected by seasonality effects, which need to be addressed.
From a more general standpoint, however, we do not believe that short-term contract phenomena should be addressed by adjusting one-year default figures for positions missed in the follow-up, as these are part of the one-year default experience of the institution. The use of overlapping default observation windows, for instance, would ensure that all defaults are captured in the one-year default figures even when seasonality effects are relevant.
Provided that all defaults are considered, the exclusion of specific corrections seems to us more in line with the CRR definition of default and more consistent with the overall IRB framework, as maturity has a one-year floor.
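The overlapping-window approach mentioned above can be sketched as follows, on entirely hypothetical portfolio data: the one-year default rate is computed from cohorts formed at quarterly (rather than annual) reference dates, so seasonal default patterns are not missed.

```python
# Illustrative sketch of overlapping one-year default observation
# windows, on hypothetical data: (entry_date, default_date or None).
from datetime import date

obligors = [
    (date(2022, 1, 1), None),
    (date(2022, 1, 1), date(2022, 8, 15)),
    (date(2022, 4, 1), None),
    (date(2022, 4, 1), date(2023, 6, 10)),
]

def one_year_default_rate(cohort_start, obligors):
    """One-year default rate for the cohort of obligors that are
    non-defaulted at cohort_start: share defaulting within one year."""
    end = date(cohort_start.year + 1, cohort_start.month, cohort_start.day)
    cohort = [(e, d) for e, d in obligors
              if e <= cohort_start and (d is None or d > cohort_start)]
    defaults = sum(1 for _, d in cohort if d is not None and d <= end)
    return defaults / len(cohort) if cohort else float("nan")

# Overlapping windows: quarterly reference dates, each observed for a
# full year, so every default falls inside at least one window.
for start in [date(2022, 1, 1), date(2022, 4, 1), date(2022, 7, 1)]:
    print(start, one_year_default_rate(start, obligors))
```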

Question 7: Do you have different rating philosophy approaches to different types of exposures? If yes, please describe them.

A process to monitor the rating philosophy over time would be beneficial to correctly identify PIT/TTC characteristics and thus define targeted backtesting metrics focused on unexpected miscalibration. The same can be extended to the dynamic properties of risk drivers, to properly manage representativeness assessments.

Question 8: Would you expect that benchmarks for number of pools and grades and maximum PD levels (e.g. for exposures that are not sensitive to the economic cycle) could reduce unjustified variability?

The ex-ante definition of a rating philosophy is a non-standard practice. However, models are in practice hybrid to different degrees across portfolios, depending on risk drivers (retail models tend to be more PIT, as behavioural information generally has a higher weight than in other portfolios) and on modelling techniques (for instance, shadow rating models built on CRA ratings tend to be more TTC).

Question 9: Do you agree with the proposed principles for the assessment of the representativeness of data?

From a general standpoint, we believe that the MoC estimation process is too pervasive and might itself generate homogeneity issues across institutions.
As the Guidelines do not promote quantification standards, they might not fulfil the final objective of enhancing comparability.
As assessing every MoC area with quantitative analysis would be unduly burdensome, impact quantifications should be limited to the most significant deficiencies only, and an overall estimation across different areas should be allowed: sources of model risk might be correlated, so in those cases MoCs should not be summed up but jointly assessed. Moreover, application at the level of each risk parameter ignores the interconnectedness of risk parameters; MoCs applied to PD may logically have the opposite effect on LGD.
As MoCs are required both for the model estimation and the model application phase, it should be made clear that this must not result in a double counting of MoCs.
In addition to that, more clarifications should be provided as regards the role of MoCs in the use test area.

Name of organisation

Prometeia