This question should be assessed in the light of the quantitative impact study that the EBA is currently conducting. However, it is expected that, as a result of the guidelines, the majority of models will need to be changed significantly, and approval from the competent authorities will be required. Nevertheless, some indications are the following:
The LGD in-default guidelines would have a material impact for the majority of banks;
The impact on ELBE could be material for many banks, especially if the indirect approach is used;
In terms of PD and LGD, the methodological impact is limited, but the operational cost of enhancements to documentation, justifications and changes to processes is significant; those changes are also likely to require competent authority approval.
No. In general, one-year default rates are already calculated at least on a quarterly basis, so no further operational limitations should arise. It can be argued that quarterly calculation for Low Default Portfolios (LDPs) will provide little information value, considering the very low number of defaults observable within that time frame. Rather than operational limitations, we believe that the new monitoring requirements will have an impact in terms of additional effort.
The EBA should clarify whether paragraph 39, namely the requirement to review the rating assignment no later than every 3 months, relates only to the calculation of one-year default rates or also includes the revision of the rating grade. If the latter is within the scope of paragraph 39, we cannot agree, as rating models may include a qualitative component (especially for corporates) that cannot be reviewed within such a short period.
As far as retail exposures are concerned, we agree with the proposed treatment of additional drawings for both CCF and LGD, and banks’ models are already developed in line with the CRR and the EBA GL.
The difference between unpaid late fees, interest and fees needs further clarification. An example of the different types of flows to be considered and their corresponding treatment would be extremely helpful. The same applies to the terms “fees” and “costs”. As regards costs, all direct costs and relevant indirect costs are considered in the economic loss for LGD calculation purposes. “Fee” can be a misleading concept in the consultation document: it is sometimes assimilated to interest and sometimes to cost, particularly as there is no clear taxonomy.
The Guidelines appear to give rise to confusion between accounting schemes and the concept of economic loss, in particular regarding contractual interest. All fees, as well as all other direct costs, are considered in the economic loss: they are included in the exposure at the denominator of the LGD up to the beginning of the default event (or the beginning of the litigation phase in the case of a multi-stage model), whereas they are not added to the exposure if they are recorded after the default event (or litigation event) but, as stated above, are considered as cash-outs. Interest, on the other hand, can be further divided into two categories:
Contractual interest (interest accrued on capital under the terms and conditions contractually agreed with the client): this interest is included in the exposure at the denominator of the LGD up to the beginning of the default event (or the beginning of the litigation phase in the case of a multi-stage model), but must not be considered in the numerator of the loss-rate computation, since its inclusion would result in double counting with respect to the discounting process (which is treated in a separate section of the GL). Moreover, including this interest, like costs, in the numerator would reflect an accounting scheme, which is a completely different matter from the economic loss: the share of interest that is cashed in will be adequately discounted to take into account the time value of money, and nothing more needs to be added to the LGD formula. The confusion between the accounting scheme and the economic loss should be corrected.
Unpaid late interest (interest accrued on unpaid capital): this interest is included in the exposure at the denominator of the loss rate up to the entry into default status (or into the litigation phase if a multi-stage model is applied). The GL, however, ask that, in the case of recovery of late interest that has not previously been capitalised, the moment of recovery be considered a moment of capitalisation. If this requirement implies disregarding, in the loss-rate computation, cash-ins related to unpaid late interest that exceed the amount included in the EAD, the proposal is not correct: a cash-in is always a cash-in, and the priority rules for cash-in allocation decided by the bank (capital, interest, etc.) should not distort the economic loss estimation. All cash-ins should be considered, without any specific treatment for unpaid late interest.
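To make the distinction concrete, the economic loss can be sketched with the following stylised formulation (our own simplified notation, not taken from the GL), where $EAD$ is the exposure at default, $R_{t_i}$ are the cash-ins (recoveries) at times $t_i$, $C_{t_j}$ are the direct and indirect costs at times $t_j$, and $r$ is the discount rate:

```latex
LGD \;=\; \frac{EAD \;-\; \sum_i \dfrac{R_{t_i}}{(1+r)^{t_i}} \;+\; \sum_j \dfrac{C_{t_j}}{(1+r)^{t_j}}}{EAD}
```

Contractual interest accruing after default appears neither in $EAD$ nor as a separate term in the numerator: the time value of money is already captured by the discount factors, so adding accrued interest on top would count it twice.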
Regarding the representativeness of data, we agree on the importance of the representativeness of the development sample with respect to the more recent portfolio, but with the exceptions highlighted in the proposed amendment to paragraph 99 (see annex).
Within the new GL, an inconsistency is detected between paragraphs 99/110 and paragraph 111 (exactly as occurs between CRR Articles 181(1)(a) and 179(1)(d)): the former require representativeness of the development sample with respect to the application portfolio, while the latter requires the inclusion of all defaults, specifying that “it is not possible to remove the observations that are not fully representative from the estimation sample. However, in this case institutions should apply adequate margin of conservatism to account for the weaknesses in data and, if possible, adjust the data to ensure greater representativeness”.
We disagree with the requirement to include non-representative data, which introduce biases in the estimation, and then apply a MoC to overcome them: this would imply a double inclusion of errors within the estimation. Therefore, we deem that the possibility to exclude non-representative data from the development sample should be allowed (see Annex 1). The same principle should apply not only to non-representative data but also to data with quality issues.
Given that the data used for the purpose of LGD estimation have to be sufficiently representative of the current portfolio covered by the relevant LGD model, it is not clear why this check is not covered by the periodic review of internal models.
Moreover, with regard to paragraph 103(a), we deem that comparing the reference data set (composed of defaulted exposures, over various points in time) with the current portfolio of non-defaulted exposures would lead to undesirable results, since the two samples analysed are inherently different in terms of the characteristics of the relevant risk drivers. In our view, it should be clarified how to handle situations such as this, where the lack of representativeness (i.e. different distributions of risk drivers) is due solely to intrinsic differences between defaulted and performing exposures.
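As an illustration of the kind of distributional comparison at stake, one widely used metric for comparing risk-driver distributions between two samples is the Population Stability Index (PSI). The sketch below uses purely illustrative figures of our own (the GL do not prescribe this metric); it shows how a moderate shift between a defaulted reference data set and a performing portfolio already produces a non-trivial PSI:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    `expected` and `actual` are lists of bin proportions summing to 1.
    A small floor `eps` guards against empty bins in the logarithm.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Loan-to-value distribution of the defaulted reference data set vs the
# current performing portfolio (illustrative figures only):
ref = [0.10, 0.25, 0.40, 0.25]
cur = [0.20, 0.30, 0.35, 0.15]
print(round(psi(ref, cur), 4))
```

Identical distributions give a PSI of zero, so the index isolates exactly the distributional differences that paragraph 103(a) would flag, including those driven purely by the defaulted/performing split.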
No. The number of pools and grades should reflect the ability of entities to discriminate risk. The limitations should rather be linked to individual banks’ structures of eligible exposure classes, collateral types, industries and products. These could be specified in the Guidelines.
We should not forget that a standardised Pillar 2 disclosure template is sufficient for comparative purposes.
It is important that risk drivers are selected to achieve the best possible estimate of obligor default risk, and not to capture economic conditions as such, unless these are deemed to be significant contributors to the obligor’s default risk.
We largely agree with the proposed policy. The overlapping-windows approach is the most widely used manner of calculating observed average default rates. Any distortion from defaults occurring at the very beginning and end of the observation period is considered negligible, so no relevant biases are expected, given that internal data will cover at least 5 years.
Regarding the bias due to the choice of fixed reference dates under the non-overlapping method, the volatility could be considered acceptable if a full economic cycle is included in the long-run average, so that changing the observation point does not substantially move the final average value; considering only a limited part of the cycle, by contrast, can influence the value if a downturn/upturn period is included.
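The two approaches can be sketched as follows. This is an illustrative computation of our own (figures and dates are hypothetical, not taken from the consultation paper): quarterly overlapping one-year windows versus yearly windows anchored at a fixed reference date, both averaged into a long-run default rate.

```python
from datetime import date

# (snapshot date, obligors in scope at snapshot, defaults within the
# following 12 months) -- illustrative figures only.
snapshots = [
    (date(2019, 1, 1), 1000, 12),
    (date(2019, 4, 1), 1010, 15),
    (date(2019, 7, 1), 1020, 11),
    (date(2019, 10, 1), 1030, 14),
    (date(2020, 1, 1), 1040, 13),
]

def one_year_default_rates(obs):
    """One-year default rate per snapshot: defaults / obligors in scope."""
    return [defaults / n for _, n, defaults in obs]

# Overlapping: every quarterly snapshot opens a one-year window.
overlapping = one_year_default_rates(snapshots)

# Non-overlapping: only windows starting at the fixed reference date (1 Jan).
non_overlapping = one_year_default_rates(
    [s for s in snapshots if s[0].month == 1]
)

long_run_avg = sum(overlapping) / len(overlapping)
print(f"{long_run_avg:.4%}")
```

With five years of quarterly snapshots the overlapping average draws on roughly four times as many windows as the fixed-date variant, which is why the choice of reference date matters much less under the overlapping approach.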
Supervisory standards may differ across various points:
Yearly update of estimates;
Monitoring of the rating philosophy (Point In Time vs Through The Cycle) will vary based on the exposure categories in question e.g. retail vs corporate;
Migration matrix analysis to verify rating stability.
The stocktaking of the EBF study on IRB models for residential mortgage portfolios gives indications.
Please note that all responses to specific questions are only part of the EBF full response that is attached to this submission.
We mostly agree with the application of appropriate adjustments and a margin of conservatism, but only in specific cases such as methodological deficiencies, estimation errors that diminish the representativeness of historical observations, and deficiencies due to missing data.
It can be argued that most of the deficiencies set out in the consultation paper are already recognised in institution-specific estimations, and that the appropriate adjustments are already applied. We therefore wonder whether it is necessary to quantify the MoC by applying both an ‘unbiased’ adjustment and a conservative one. In addition, applying several conservative adjustments could have an overlapping effect. This needs to be assessed carefully and taken into account in the application of capital requirements.
In the definition and quantification of the MoC and related adjustments, the Regulator should ask banks to focus only on the most relevant and material ones. A wider application and definition of the MoC will not lead to less variability in RWA: it is modelling practices and banks’ underlying exposure data that cause the differences in RWA, and this will remain the case after the introduction of the MoC; the MoC will only ensure that it is implemented by all banks. A common disclosure practice is rather the important tool that must be developed further to secure comparability.
Further standardisation of the criteria for the identification and quantification of adjustments and the MoC is welcomed and would avoid, or at least reduce, unjustified RWA variability between banks.
Furthermore, the proposed category C seems more relevant to the monitoring process; in model development, the identified errors will be limited to categories A and/or B. Category C appears to relate to the underperformance or optimisation of models, not to methodological deficiencies.
We oppose the idea that changes in business processes or in the economic or legal environment should be subject to the quantification of a MoC: it would not be possible to quantify such external factors, which should rather be treated as an operational risk issue. Such a MoC factor would amount to a new capital requirement, as banks are constantly subject to regulatory changes, capital and liquidity reforms, and changes in the economic environment. In addition, changes in underwriting or recovery standards should not be subject to a MoC if the bank can prove that parts of the historical data set create representativeness problems between the current portfolio and the RDS and should consequently be disregarded. If, instead, banks are obliged to apply a MoC over a non-representative RDS, this will duplicate the complexity and opacity of the information and risk drivers used to estimate the risk parameters. As such, we do not agree with paragraph 41, which substantially limits the number of exclusions that can be performed when setting the reference data set for the default rate calculation, or with paragraph 25, which states that institutions should apply a MoC if there are changes in relevant processes.
We cannot agree with the proposal to evaluate the impact of each MoC in terms of final risk parameters: this approach implies estimating n additional models, one for each MoC applied, whose impacts may not be linear. Comparing the model with all MoCs against the model without any MoC would be a more accurate way to assess the overall adjustment impact.
As regards missing data, we believe that only non-informative missing values (i.e. those due to a lack of information) should be included in this category. Informative missing values (i.e. those with an underlying economic meaning), on the contrary, should be treated differently, without the application of any MoC.
Finally, there is a need to clarify the non-linearity of MoCs and the consequent need to aggregate them in a way that takes their implicit correlation into account (i.e. not simply summing them up).
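The aggregation point can be illustrated with a deliberately simple sketch (our own figures and notation; the GL prescribe no particular aggregation rule). Three MoC components are expressed as additive add-ons to a PD estimate; simple summation implicitly treats the underlying estimation errors as perfectly correlated, whereas, if they were independent, a root-sum-of-squares combination would be the consistent way to aggregate them:

```python
import math

# Three hypothetical MoC components, expressed as PD add-ons.
moc_components = [0.0020, 0.0015, 0.0010]

# Simple summation: assumes perfect correlation between the components
# and therefore yields the most conservative aggregate.
moc_sum = sum(moc_components)

# Root-sum-of-squares: consistent with how independent uncertainties
# combine, and always below or equal to the simple sum.
moc_rss = math.sqrt(sum(m ** 2 for m in moc_components))

print(f"sum: {moc_sum:.4f}, rss: {moc_rss:.4f}")
```

The gap between the two aggregates is exactly the over-conservatism introduced by ignoring correlation, which is the non-linearity the Guidelines should clarify.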
Generally, yes. However, the mandatory use of the whole population of defaults has no statistical justification.
In some cases the distribution of certain risk drivers is expected to change over time, so differences will appear when compared with their distribution in the current portfolio. It is not clear which criteria should be followed to adequately assess these situations.
As regards LGD models, the development team should always consider the significance of the risk drivers used to differentiate loss rate estimates, not only in the sample but also in the actual portfolio, since long historical series can include biases in this respect.
A more complex topic, on the other hand, concerns the inclusion of all defaults in the sample. This principle can potentially conflict with the idea of having historical series that are as broad as possible (indicated by both the CRR and the EBA): a consequence of a broad historical series may be that complete information is not available for all recorded defaults, creating the need to exclude some cases (for example, because the target variable cannot be calculated correctly, or because they follow a different default definition). The current sample-definition process in LGD models, for instance, foresees the exclusion of some defaults for data quality reasons: if all defaults need to be included in the final sample, an LGD will have to be forcibly assigned to these cases. The question is therefore which LGD should be assigned, and homogeneous rules have to be provided in order not to create variability. Moreover, exclusions are not performed for data quality reasons alone: some defaults are excluded, for example, because they are still open and their recovery process is in progress (they are not considered irrecoverable, as incomplete workout cases are). For these situations, also detailed later in the GL, clear guidance on recovery rate estimates has to be provided in order not to create undue variability among banks.
In long historical data series, older data could be assigned a lower weight if it is less representative than recent data. Flexibility should be allowed, provided that any difference in weighting is properly documented. Not allowing such flexibility would impose biases on the LGD that would then have to be corrected by a MoC, which would not be intuitive.
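One way such a weighting could be documented is with an explicit decay scheme. The sketch below is our own illustration (cohort figures and the decay factor are hypothetical assumptions that a bank would have to justify): older LGD cohorts receive exponentially lower weights in the long-run average.

```python
# (cohort year, observed average LGD) -- illustrative figures only.
cohorts = [(2010, 0.45), (2013, 0.40), (2016, 0.35), (2019, 0.30)]
reference_year = 2020
decay = 0.9  # weight multiplier per year of age (hypothetical choice)

# Each cohort's weight shrinks geometrically with its age.
weights = [decay ** (reference_year - year) for year, _ in cohorts]

weighted_lgd = sum(
    w * lgd for w, (_, lgd) in zip(weights, cohorts)
) / sum(weights)
unweighted_lgd = sum(lgd for _, lgd in cohorts) / len(cohorts)

print(f"unweighted: {unweighted_lgd:.4f}, weighted: {weighted_lgd:.4f}")
```

Because the older cohorts here show higher losses, the weighted average sits below the unweighted one; the same mechanism would pull the estimate upwards if recent experience were worse, so the scheme is not inherently anti-conservative.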
With reference to the treatment, proposed by paragraph 90, of an exposure that is classified as defaulted again after returning to non-defaulted status, we think it is not specified what happens in all cases of exposures “returned to non-defaulted status” for which a 12-month observation period after reclassification is not observable. Additionally, it is not clear how this treatment should be combined with the probation period introduced by the RTS on the definition of default, whose implementation is requested by 2021.
In particular, assuming a 3-month probation period is implemented, exposures returned to non-defaulted status will continue to be classified as defaulted for the following 3 months before being properly considered non-defaulted. In the case of a subsequent re-default after the non-default classification, based on the proposed treatment, the 3 months of “imposed” default status deriving from the probation period would not be taken into account when defining the number of months between the re-default and the non-default classification.
See example provided below for further clarification:
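The timeline below is an illustrative reconstruction of the scenario described above, with dates of our own choosing, assuming a 3-month probation period:

```python
from datetime import date

# Illustrative timeline (hypothetical dates).
cured = date(2024, 1, 1)          # exposure meets the cure criteria
probation_months = 3              # probation period per the RTS assumption
reclassified = date(2024, 4, 1)   # actual return to non-defaulted status
re_default = date(2024, 7, 1)     # subsequent re-default

def months_between(d1, d2):
    """Whole calendar months between two dates (month boundaries)."""
    return (d2.year - d1.year) * 12 + (d2.month - d1.month)

# Measured from the end of probation, only 3 months of non-default status
# are observed before the re-default; measured from the cure date, the
# distance is 6 months, because the 3 "imposed" months of default status
# during probation are ignored by the proposed treatment.
print(months_between(reclassified, re_default))  # 3
print(months_between(cured, re_default))         # 6
```

The difference between the two measurements is precisely the probation period, which is why its interaction with the proposed 12-month observation requirement needs to be clarified.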
It is important to highlight that the proposed threshold (1 year) is too conservative; internal data should be used to find the most reasonable threshold, following the introduction of a probation period (minimum length of 3 months) as indicated in the EBA Guidelines on the application of the definition of default under Article 178 of Regulation (EU) No 575/2013.