We expect that the application of the new standards will potentially result in a material model change for all models currently in place, implying significant efforts and costs both for the Group and for Supervisors, which will have to manage all the material model changes by 2021. The most impactful requirements are those on LGD estimation and those on MoC adoption, in light of the comments provided in the relevant sections.
In this respect, we urge the EBA to share with banks the timeline and to discuss how the implementation could be managed in a more efficient way. In this regard, UniCredit deems that, regardless of the final implementation date, banks should be allowed to adopt a staggered approach foreseeing intermediate steps of pre-approval with the Supervisory Authorities.
Calculating one-year default rates at least on a quarterly basis does not present particular operational constraints, since we generally compute them for monitoring purposes. Nevertheless, in some cases, and especially for Low Default Portfolios (LDPs), default rates (DRs) can be very volatile; the outcomes must therefore be carefully assessed, and a change in the DR should not automatically be considered a trigger for further actions.
With regard to the treatment of drawings after default, in our opinion they should be included directly in the LGD computation instead of in the EAD. This would have the following advantages:
• all the information for EAD estimation would be available at the date of default, without the need to make assumptions on possible future drawings after default;
• it is difficult to manage drawings after default in the CCF in the case of multiple defaults, since further drawings could be expected during the time span between the two default events (which could last up to 15 months according to the GL). This might significantly increase the volatility of the CCFs, which is a relevant issue since the stabilization of the target CCF is one of the most onerous activities during the modelling phase. It should also be considered that the Basel Committee pays specific attention to this point;
• additional drawings after default are usually stored in LGD databases, and it is not always easy to distinguish drawings after default from costs and other cash flows. Treatment in the EAD implies higher operational complexity; due to that operational burden, if the allocation of drawings after default into the EAD is not carefully managed, we could incur double counting of exposures;
• additional drawings after default can derive from a regular workout process and should thus be counted in the LGD. This is particularly true for restructuring cases;
• Expected Loss calculation would remain unchanged.
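The last point, that the Expected Loss calculation would remain unchanged, can be illustrated with a small numeric sketch (all figures below are hypothetical, chosen purely for illustration): whether drawings after default are added to the EAD or kept in the LGD numerator, the product PD × LGD × EAD is identical.

```python
# Illustrative sketch: Expected Loss (EL = PD * LGD * EAD) is invariant to
# whether drawings after default are placed in the EAD or in the LGD.
# All figures are hypothetical.

pd_est = 0.02           # probability of default
ead_at_default = 100.0  # exposure at the date of default
drawings = 10.0         # additional drawings after default
total_loss = 44.0       # unrecovered amount, including unrecovered drawings

# Option A: drawings included in the EAD (e.g. via a CCF)
ead_a = ead_at_default + drawings
lgd_a = total_loss / ead_a
el_a = pd_est * lgd_a * ead_a

# Option B: drawings included in the LGD numerator only
ead_b = ead_at_default
lgd_b = total_loss / ead_b
el_b = pd_est * lgd_b * ead_b

assert abs(el_a - el_b) < 1e-12  # identical EL under both treatments
```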
In case it is confirmed that drawings after default have to be included in the EAD, it would be advisable to allow fixing a limited period for the computation of drawings after default, considering that most drawings after default occur close to the default event. This limited period should be defined to ensure a balance between the value added to the estimation and the computational effort. Allowing for this would be particularly relevant in the case of long recovery processes.

For interest in arrears capitalized after the default, we see a risk of potential double counting with the discounting rate, and we deem it appropriate from an economic point of view that discounting effects might be compensated by recoveries over capitalized interest. Indeed, the interest in arrears charged to the client is based on the reference market interest rate as a sort of cost, having a financial connotation also linked to the cost of time deriving from the missed payments. However, the discount rate, which includes the risk-free component based on market interest rates, also includes the financial effect of time. We therefore deem appropriate the alternative solution outlined in the explanatory box, according to which the economic loss would not be increased by the amount of fees or interest charged after default. We acknowledge that differences in rates might potentially lead to negative LGDs; however, we deem that this risk is already addressed by fixing a floor of zero on the observed LGD.
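As a purely illustrative sketch of the mechanics just described (hypothetical cash flows and rates, not actual recovery data): discounted recoveries, including recoveries over capitalized interest, can exceed the defaulted exposure, producing a negative raw LGD that the zero floor addresses.

```python
# Minimal sketch (hypothetical figures): observed LGD with discounted
# recoveries. Recoveries over interest capitalized after default can push
# the present value of recoveries above the defaulted exposure, giving a
# negative raw LGD; the floor of zero on the observed LGD addresses this.

ead = 100.0
discount_rate = 0.03
# (time in years since default, cash flow): recoveries net of costs,
# including recoveries over interest capitalized after default
cash_flows = [(0.5, 60.0), (1.5, 45.0)]

pv_recoveries = sum(cf / (1 + discount_rate) ** t for t, cf in cash_flows)
raw_lgd = 1.0 - pv_recoveries / ead   # can be negative
observed_lgd = max(0.0, raw_lgd)      # floor at zero
```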
In cases where punctual PDs are assigned, a benchmark for the number of pools and grades would not be relevant if the aim is to reduce unjustifiable RWA variability. Rating grades are in fact used only for reporting as well as for business purposes in specific fields where the individual PD cannot be used.
In contrast, in cases where average PDs are adopted, we deem that the definition of a common number of pools and grades could be too strict a requirement, since it would not allow taking into account specific portfolio compositions and rating distributions/concentrations, which represent the main drivers for the definition of rating grades. In conclusion, we do not believe that the number of rating grades is one of the major sources of RWA variability, and hence we do not deem that the proposal of a fixed number of rating classes adds value in a context where the usage of both punctual and rating-class PDs is allowed.

As far as the opportunity to fix a maximum PD level is concerned, we agree that, should the setup of a limit be required, a common benchmark value would be recommendable in order to reduce model variability.
Economic conditions are not a direct input to the rating models, but changes in the economic environment affect the obligors' information considered in the models (e.g. obligors’ financial statements, behavioural and qualitative information). On the other hand, estimates are calibrated to long run average default rates, which tend to stabilize the average portfolio risk level. In light of these considerations, it can be inferred that our models follow a hybrid philosophy.
We agree with the general approach proposed for the calculation of the observed average default rates. However, we have a specific concern about the formula reported in the explanatory box on page 48: the simple average of one-year default rates seems to be the only approach available for the computation of the long run average. We deem that in certain cases, where justified, different methodologies should be allowed. Some examples are reported below:
• a weighted average with weights defined on the number of clients in each year of observation, which in our opinion could be a better option for portfolios such as LDPs, characterized by low obligor numbers;
• a weighted average with higher weights for recent periods, consistently with Art. 180(2)(e) of the CRR, which clarifies that “an institution need not give equal importance to historic data if more recent data is a better predictor of loss rates”.
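As a purely illustrative sketch (hypothetical yearly figures), the alternatives above can be contrasted with the simple average implied by the explanatory box:

```python
# Sketch (hypothetical data): three ways to compute the long run average
# default rate - the simple average of one-year default rates, a
# count-weighted average (relevant for LDPs with few obligors per year),
# and a recency-weighted average in the spirit of Art. 180(2)(e) CRR.

defaults = [1, 0, 3, 1, 2]        # defaulted obligors per yearly cohort
obligors = [40, 55, 70, 90, 120]  # non-defaulted obligors at cohort start

one_year_drs = [d / n for d, n in zip(defaults, obligors)]

# Simple average of the one-year default rates
simple_avg = sum(one_year_drs) / len(one_year_drs)

# Weighted by obligor counts (equals total defaults / total obligors)
count_weighted = sum(defaults) / sum(obligors)

# Higher weights on recent periods (the weights here are illustrative only)
weights = [1, 1, 2, 3, 4]
recency_weighted = (
    sum(w * dr for w, dr in zip(weights, one_year_drs)) / sum(weights)
)
```

On this toy portfolio the three averages differ, which is the point of the comment: for small, uneven cohorts the choice of weighting scheme materially affects the long run average.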

Furthermore, we would argue for the non-overlapping method, which in our opinion is simpler than the overlapping method while ensuring at least the same level of reliability, and which allows a more efficient management of multiple defaults within each cohort (refer also to the comments provided on paragraph 90 of Chapter 6). Nonetheless, we agree that a default rate computed at a fixed date could be affected by seasonal effects. In this respect, we deem it would be appropriate to perform a preliminary analysis to understand whether significant seasonal effects exist and to take them into consideration when the relevant reference date is chosen.
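The two windowing schemes can be contrasted with a minimal sketch (dates and counts are hypothetical, and the cohort construction is our simplified reading of the two methods):

```python
# Sketch of the two windowing schemes for one-year default rates.
# Non-overlapping: cohorts at a fixed yearly reference date; each
# obligor-year enters one cohort, simplifying multiple-default handling.
# Overlapping: a new one-year window starts every quarter, so the same
# default event can be counted in up to four windows.
# All dates and counts below are hypothetical.

# (reference date, defaults within 1 year, obligors at reference date)
non_overlapping = [("2017-01-01", 4, 200), ("2018-01-01", 6, 210)]

overlapping = [
    ("2017-01-01", 4, 200), ("2017-04-01", 5, 202),
    ("2017-07-01", 5, 205), ("2017-10-01", 6, 208),
    ("2018-01-01", 6, 210),
]

def average_dr(cohorts):
    """Simple average of one-year default rates across cohorts."""
    return sum(d / n for _, d, n in cohorts) / len(cohorts)

dr_non_overlapping = average_dr(non_overlapping)
dr_overlapping = average_dr(overlapping)
```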

As far as short term/terminated contracts are concerned, we do not usually apply any particular treatment for customers who have only short term/terminated contracts in our models. Indeed, we deem that such contracts should not be considered a source of bias for the yearly default rate computation, because they represent the actual observed default rate of the institution, consistently with its portfolio composition. Moreover, the inclusion of short term contracts without any specific treatment is consistent with the 1-year floor on maturity in the supervisory formulas.
No processes are currently in place to monitor rating philosophy, since the assessment of the level of Through-The-Cycle (TTC)/Point-In-Time (PIT)-ness is made on a qualitative basis.
Nevertheless, we deem that rating philosophy has a direct impact on RWA variability. It should be clarified to what extent this variability is deemed justifiable, or whether a harmonization in this respect should be pursued (e.g. through a clearer definition of calibration sample characteristics in terms of the time series to be considered). In this respect, we would recommend that the EBA clarify in paragraph 80(d) that the sample should be aligned as much as possible to the current portfolio, ideally exactly equal to it; combined with a Central Tendency defined as a long run default rate, this would lead to a more stable approach, achieving higher stability in estimates and RWA requirements. We indeed see as potentially inconsistent the two requirements of having a calibration sample “comparable to the current portfolio” and at the same time “representative of the likely range of variability”.

In any case, rating philosophy is a fundamental driver for model back-testing and maintenance, both in terms of the tests to be performed and the interpretation of outcomes. For this reason, we deem that, in order to reduce unjustified variability, at least common criteria to assess the TTC/PIT-ness of each model should be identified.
All our authorized IRB models follow a hybrid philosophy, with a different weight of PIT components depending on the importance of more cycle-sensitive information.
Indeed, we deem that a hybrid nature of models is unavoidable given the current regulatory requirements. On one hand, they require stability in estimates and RWA together with the adoption of a long run perspective in the credit risk parameter quantification (i.e. calibration). On the other hand, several articles of the CRR, on both the rating assignment process and model estimates, require that all relevant information be considered with timely updates; this implies that in a downturn (upturn) period the financial statement, behavioural and qualitative components all tend to worsen (improve), introducing a systemic risk component in addition to the purely idiosyncratic one.
Overall we deem that the requirements on the application of appropriate adjustments and margin of conservatism (MoC) are reasonable. However, there are some remarks that in our opinion would require clarification or should be more detailed, to make the application of MoC more effective.

Firstly, it should be considered that not all model deficiencies potentially requiring a MoC according to the 4 categories (A, B, C and D) mentioned in the GL are quantifiable or have real impacts on the quality of model estimates. Therefore, we deem that a mandatory MoC application as per paragraph 30 is not suitable for adequate risk estimation. This being said, UniCredit deems that the occurrence of one or more of the relevant triggers (as defined in the four categories) should instead result in a transparent and adequately documented assessment of the necessity of a MoC application. In other words, only relevant and material gaps should be addressed through a MoC.
Furthermore, the same deficiency could impact more than one MoC category (e.g. data quality issues could trigger higher uncertainty in the estimates); a proper treatment should be defined to avoid double counting of conservative effects, taking into account the possible correlation among MoCs when applying them on top of risk parameters.
This would prevent error propagation due to the fact that the MoC embeds ‘estimates into the estimates’, increasing model risk and consequently lowering the quality of the final risk parameters. This also seems to us consistent with paragraph 32, according to which “Institutions should consider the overall impact of the identified deficiencies and the resulting MoC on the soundness of the model and ensure that capital requirements are not distorted due to the necessity for excessive adjustments”.

Secondly, it should be considered that no guidance has been provided on MoC quantification. In principle we agree with this, since we deem that a certain margin of autonomy should be left to Institutions, which should define the MoC based on ad hoc assessments and analyses of real data. Nonetheless, UniCredit is concerned that different supervisors and institutions might have diverging opinions on the adequacy of the estimated MoC, weakening the effectiveness of the GL in terms of the reduction of unjustifiable RWA variability. In this context, it would be useful to add in the GL some methodological examples of MoC quantification. In doing this, we deem it fundamental that all the considerations above are taken into account and that the illustrative MoCs are defined to ensure a balance between appropriate conservativeness and the need to limit biases in estimates. In this regard, we deem, for instance, that the example reported at page 8 of the GL (i.e. asking for a MoC equal to the 90% confidence interval around the average of the new default rates) is too strict.
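Our reading of that example can be sketched as follows (hypothetical default rates; the normal approximation is our own assumption, not something the GL prescribes):

```python
import math
import statistics

# Sketch of our reading of the example on page 8 of the GL: a MoC set at
# the upper bound of a 90% confidence interval around the average of the
# observed one-year default rates. Normal approximation; hypothetical data.

default_rates = [0.018, 0.025, 0.031, 0.022, 0.027]

mean_dr = statistics.mean(default_rates)
std_err = statistics.stdev(default_rates) / math.sqrt(len(default_rates))

z_90 = 1.645  # normal quantile for a two-sided 90% confidence level
moc = z_90 * std_err               # add-on over the long run average
calibration_target = mean_dr + moc
```

Even on this small sample the add-on raises the calibration target well above the long run average, which illustrates why we consider a mandatory 90% confidence interval too strict a benchmark.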

In light of these considerations, in order to avoid the MoC becoming a source of additional variability, we deem that the MoC should be applied as a last resort measure, only when strictly necessary. Cases where the application of a MoC is deemed consistent are, for instance, a change in the default definition that cannot be fully rebuilt in the past and for which it cannot reasonably be demonstrated that the impact would be immaterial, or a significant reduction of data representativeness over the life of the model as detected during the model monitoring described in Chapter 9. Hence, where an alternative treatment could be adopted, this should be done. Some examples are reported below.

We deem, for instance, that the inclusion of non-representative data or data with quality issues in the development sample should be avoided, to limit both model risk and unjustifiable variability of the estimates among banks driven by the different approaches adopted in defining the relevant adjustments. We would thus recommend that the exclusion of non-representative/erroneous data be allowed, considering the exclusion as an appropriate adjustment whose objective is “to achieve the possibly most accurate estimates”. In any case, banks should clearly document the materiality of the exclusions, justifying the underlying reasons, and assess whether a MoC is needed in light of the reasoning reported above.

A further example is represented by outdated data (e.g. old financial statements relevant for PD estimation). Indeed, we agree that such data should be treated cautiously in the application phase (as stated by paragraph 186), but during the development phase they should be excluded so as not to introduce distortions.

Finally, we would appreciate receiving some clarification/additional information on the following points, which could imply the inclusion of a MoC:
• with specific reference to missing data, it should be specified that only non-informative missing values (i.e. those due to a lack of information) should imply the application of a MoC. Informative missing values (i.e. those having an underlying economic meaning, e.g. no return from the credit bureau for clients operating with just one bank) should be properly treated based on their economic and risk meaning, without the application of any MoC;
• regarding points 25(c)(i) and (ii), it should be clarified what the “rank order estimation error” and the “estimation error in the calibration” stand for. While an estimation error always exists, even though in some cases it could be small, according to paragraph 34(b) “the MoC stemming from Categories C as referred to in paragraph 24 is eliminated after the error is rectified in all parts of the rating system that were affected”. Based on these considerations:
- we interpret the two estimation errors mentioned above as, respectively, a strong misclassification between good and bad obligors and a strong discrepancy between DR and PD, to be offset as an interim solution with a MoC until more structured interventions are implemented;
- we suggest specifying that only significant rank ordering and calibration estimation errors should imply the introduction of a MoC.