Response to consultation on Guidelines PD estimation, LGD estimation and treatment of defaulted assets


Question 1: Do you agree with the proposed requirement with regard to the application of appropriate adjustments and margin of conservatism? Do you have any operational concern with respect to the proposed categorization?

A: Generally speaking, since our internal models are already validated and mostly consistent with the CRR regulatory framework, it would be reasonable not to expect a significant impact in terms of RWA and regulatory capital. If this were not the case, the consequence would be that current RWA levels are both biased and underestimated, and therefore trust in validated internal models could decrease further; at the same time, the reputational risk of both banks and supervisors could increase.


Quantitative simulations of how the new EBA Guidelines will impact banks are not available, due to the complexity of the proposed modifications and the unclear technical aspects of some of them.
However, ISP certainly expects a very significant impact of the proposed EBA Guidelines in terms of resources and effort (e.g. PD/LGD model design, roll-out plans (will new application packages be necessary for every model estimation?), calibration, back-testing, monitoring, periodic review, processes and IT systems).

The LGD parameter is undoubtedly the most affected by these GLs: many aspects of both Performing and Defaulted Asset models are subject to revision, and therefore the entire framework has to be carefully evaluated together with the implementation timing.

Finally, we highlight the need for an appropriate discussion with the JST on these topics, since we expect that these GLs, once approved, will require many model changes with substantial modifications to the current framework.

In general terms, the EBA and the Competent Authority should also consider the following aspects when quantifying the impact of the full adoption of these EBA GLs (2020/2021). In detail:

a) Impacts on model changes in progress;
b) Review of the roll-out plan and potentially numerous new model change applications due to alignment with the new EBA GLs;
c) Adoption of the new definition of default (with impacts on processes, IT systems and models) and its relation to the rules of this GL;
d) The TRIM exercise and related expectations;
e) Impacts on current models due to the Basel Committee's decisions on the general IRB approach (for example on LDPs, which are currently included within these GLs but differ in many respects from HDP portfolios);
f) Impacts on the supervisory authorities' activities.

Considering the expected significant impact on banks (aligning currently validated models, based on CRR regulatory provisions, with the new EBA methodological framework) and on supervisory activities (e.g. on-site inspections, model change applications, ex-ante and ex-post notifications, …), a reasonably long transitional period should be provided.

Question 2: Do you see any operational limitations with respect to the monitoring requirement proposed in paragraph 53?

No operational limitations should be present: we already have an automatic three-month Default Rate calculation engine for each segment. On the other hand, the impact on time and effort should be evaluated given that, to be consistent with the calculation adopted for the Central Tendency, the same exclusions/data treatments would have to be repeated.

Question 3: Do you agree with the proposed policy for calculating observed average default rates? How do you treat short term contracts in this regard?

As far as Retail exposures are concerned, we agree with the proposed treatment of additional drawings for both CCF and LGD, and our models are already developed in line with the CRR and the EBA GL.

As far as Corporate exposures are concerned, our current approach is based on a prudential CCF of 100% applied to every undrawn amount of defaulted counterparties in the application portfolio.
Is our current approach for Corporate exposures considered compliant with the CRR?
We have considered the default date as a breaking point between EAD and LGD estimation and, in general, we think that this kind of approach, confirmed only for the Retail segment in the EBA GL, is simpler and more intuitive. It is not clear why, for the Corporate segment, it is preferable to choose a different approach that is more complicated than the Retail one, and we are concerned that the EBA prescriptions simply stem from a flawed provision already included in the CRR. If we had to include additional drawings in the Corporate CCF estimation as required by the EBA GL, we would need a few clarifications, in particular:
1) what we have to consider as additional drawings (are they only capital drawdowns after the default date?);
2) whether our estimates have to be based on closed defaults only, or whether we also have to make projections on still-open defaults and include them in the CCF estimation sample;
3) how to treat different degrees of seriousness of the default status (e.g. hard/soft collection).

In this regard, and considering the above-mentioned points, the EBA should provide more methodological detail on CCF estimation (for example through a dedicated guideline on CCF estimation). If there is no reason to manage additional drawings differently between the Corporate and Retail segments, then the CRR should be modified coherently, exploiting the current level-one revision in progress.
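For clarity, the following minimal sketch illustrates the current prudential Corporate treatment described above; the figures are illustrative.

```python
# Minimal sketch of the prudential Corporate treatment described above:
# a 100% CCF applied to the whole undrawn amount of defaulted counterparties,
# so that additional drawings after default need not be modelled separately.
# Figures are illustrative.

def ead_prudential(drawn: float, undrawn: float, ccf: float = 1.0) -> float:
    """Exposure at default: drawn amount plus CCF times the undrawn amount."""
    return drawn + ccf * undrawn

# A defaulted counterparty with 80 drawn and 20 undrawn on a 100 limit:
print(ead_prudential(drawn=80.0, undrawn=20.0))  # 100.0 -> full limit assumed drawn
```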


The approach concerning fees and interest is very important for an appropriate LGD computation. The Guidelines seem to introduce confusion between accounting schemes and the concept of economic loss, in particular on the issue of contractual interest. All fees are considered in the economic loss, as are all other direct costs: they are included in the exposure at the denominator of the LGD up to the beginning of the default event (or the beginning of the litigation phase in the case of a multi-stage model), while they are not added to the exposure if they are recorded after the default event (or litigation event) but, as stated above, are considered as cash-outs. Interest, on the other hand, can be further divided into two categories:
• Contractual interest: this interest is included in the exposure at the denominator of the LGD up to the beginning of the default event (or the beginning of the litigation phase in the case of a multi-stage model), but must not be considered as a cash-out in the numerator of the loss rate computation, since its inclusion would result in double counting with respect to the discounting process (treated in a separate section of the GL). Moreover, including this interest as a cost in the numerator would reproduce an accounting scheme, which is a completely different object from the economic loss: the share of interest that is cashed in will be adequately discounted to take into account the time value of money, and nothing more has to be added in the LGD formula. The following example clearly illustrates the distortion in the LGD computation deriving from the inclusion of contractual interest in the numerator of the formula together with the discounting of the cash flows, as indicated in the Explanatory box for consultation purposes at pages 65-66 (please refer to Table 1, page 8 of the position paper; a numerical sketch is also provided after this list).
As is immediately noticeable, this position ends its recovery process with a full recovery of both capital and interest. Since the contractual interest rate of this facility is equal to the rate applied in the discounting process, the resulting LGD is correctly equal to 0. Nevertheless, if the contractual interest is also added as a cash-out in the numerator of the LGD formula, the resulting loss rate becomes 4.4%, a value that is undoubtedly wrong.
The confusion between accounting schemes and economic loss has to be corrected. Given this principle, regarding negative LGDs it has to be underlined that, in the case of perfect alignment between the contractual interest rate and the discounting rate, the discounting process would not introduce any distortion in this sense (see the example above). Nevertheless, banks tend to apply a current-rates approach for the discounting process, as also suggested by BCBS Working Paper No. 14 (2): "their use allows the consideration of all available information and facilitates the comparison between LGD estimates from different portfolios"; this approach can produce negative loss rates because of the different treatment of the time value of money. The subsequent 0% floor appropriately resolves this misalignment without creating distortions in the sample. More specific comments on this topic will be provided in the dedicated section of the GL;
• Unpaid late interest: this interest is included in the exposure at the denominator of the loss rate up to entry into default status (or entry into the litigation phase if a multi-stage model is applied), but the GL requires that, in the case of recovery of late interest that has not been previously capitalised, the moment of recovery be considered a moment of capitalisation. If this requirement means that cash-ins related to unpaid late interest exceeding the amount included in the EAD must not be considered in the loss rate computation, we do not agree with the proposal: in our opinion a cash-in is always a cash-in, and the priority rules for allocating cash-ins decided by the bank (capital, interest, etc.) should not distort the economic loss estimation. All cash-ins should be considered, without any specific treatment for the case of unpaid late interest.
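The following minimal numerical sketch reproduces the distortion described in the first bullet; the figures are illustrative and are not those of Table 1 of the position paper.

```python
# Illustrative sketch of the double-counting distortion: when the discount
# rate equals the contractual rate and the position fully recovers capital
# plus contractual interest, the economic-loss LGD is 0; adding the
# contractual interest again as a cash-out in the numerator produces a
# spuriously positive LGD. Figures are illustrative, not those of Table 1.

EAD = 100.0   # capital outstanding at default
rate = 0.05   # contractual interest rate, assumed equal to the discount rate
years = 2     # single recovery after two years

recovery = EAD * (1 + rate) ** years          # capital plus accrued contractual interest
pv_recovery = recovery / (1 + rate) ** years  # discounted at the contractual rate

lgd_economic = 1 - pv_recovery / EAD
print(f"LGD (economic loss):           {lgd_economic:.1%}")   # 0.0%

# Distorted version: the contractual interest accrued in default is also
# booked as a cash-out in the numerator, on top of the discounting process.
interest = recovery - EAD
pv_interest = interest / (1 + rate) ** years
lgd_distorted = 1 - (pv_recovery - pv_interest) / EAD
print(f"LGD (interest double-counted): {lgd_distorted:.1%}")  # ~9.3%
```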

We deem it more appropriate to treat the two topics separately and to pay attention to the terminology (which is critical on this technical issue; for example, the terms "unpaid" or "late" are sometimes omitted in the text). We propose to rectify these issues by properly applying the economic loss concept, without inappropriately including accounting topics. The double-counting effect with the discounting process has to be considered and avoided.
For unpaid late interest, we propose to always consider a cash-in as a cash-in, without any rectification for the capitalisation concept: the allocation of cash-ins decided by each bank could distort the economic loss estimates if a share of this unpaid late interest had to be rectified, and could therefore increase the variability among banks.

Question for discussion: Article 115 states that an additional recovery cash flow has to be added to the calculation at the date of return to non-defaulted status, in the amount outstanding at that date, and that this additional recovery must not be discounted: this means that the recovery is not discounted analogously to the other cash flows because it is not considered a cash flow. Can the EBA provide a definitive clarification on this point?
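The following minimal sketch shows the reading on which we seek confirmation; the figures are illustrative.

```python
# Sketch of our reading of the provision above: for a cured default, the
# amount outstanding at the date of return to non-defaulted status is added
# as an additional recovery and, unlike the ordinary cash flows, it is NOT
# discounted. Figures are illustrative.

EAD = 100.0
d = 0.05                     # discount rate for ordinary cash flows
cash_flows = [(1.0, 10.0)]   # (years from default, amount) actually recovered
outstanding_at_cure = 90.0   # amount outstanding when the facility cures

pv_recoveries = sum(cf / (1 + d) ** t for t, cf in cash_flows)
lgd = 1 - (pv_recoveries + outstanding_at_cure) / EAD  # cure amount undiscounted
print(f"LGD of the cured facility: {lgd:.2%}")  # ~0.48%
```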

(2) "Studies on the Validation of Internal Rating Systems", Basel Committee on Banking Supervision, Working Paper No. 14, February 2005.

Question 4: Are the requirements on determining the relevant historical observation periods sufficiently clear? Which adjustments (downward or upward), and due to which reasons, are currently applied to the average of observed default rates in order to estimate the long-run average default rate? If possible, please order those adjustments by materiality in terms of RWA.

The choice of the number of grades influences the stability of the ratings themselves. The use of a fixed number of grades and a maximum PD level allows better (though not complete) comparability with other banks. Nevertheless, it is necessary to evaluate each bank's portfolio and its specificities, and therefore, for example, the concentration of the population among grades: it could be necessary to join or split grades to ensure default rate monotonicity and a better discriminatory power of the model.
The same number of pools and grades could be insufficient to improve comparability between banks, because the ranges of the grades could be very different. Furthermore, banks could introduce managerial master scales for business purposes if the compulsory number of grades did not fit the portfolio well.
A possible approach could be to define a standard common number of pools and grades just for transparency and comparability purposes (Pillar III). However, to define the portfolio/bank-specific master scale, in our view it is important to use a statistical optimization method able to reflect the specificities of a model/exposure class and the relationship between rating grade and observed default rate.
A different approach could lead banks (as mentioned above) to define specific (optimized) master scales, at least for managerial and business purposes.
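By way of illustration only, the following sketch shows how adjacent grades could be merged until the observed default rates are monotonic; the data and the simple merging rule are hypothetical and do not represent our production methodology, which also weighs population concentration and discriminatory power.

```python
# Hypothetical sketch: merge adjacent grades of a candidate master scale
# until observed default rates increase monotonically from best to worst.

def merge_until_monotone(grades):
    """grades: list of (n_obligors, n_defaults) ordered from best to worst."""
    pools = [list(g) for g in grades]
    i = 0
    while i < len(pools) - 1:
        dr_i = pools[i][1] / pools[i][0]
        dr_next = pools[i + 1][1] / pools[i + 1][0]
        if dr_i > dr_next:                  # monotonicity violated
            pools[i][0] += pools[i + 1][0]  # merge the two grades
            pools[i][1] += pools[i + 1][1]
            del pools[i + 1]
            i = max(i - 1, 0)               # re-check against the previous pool
        else:
            i += 1
    return [(n, d, d / n) for n, d in pools]

# Candidate scale with a non-monotonic default rate in the middle:
scale = [(5000, 25), (4000, 40), (3000, 24), (2000, 60), (1000, 80)]
for n, d, dr in merge_until_monotone(scale):
    print(f"pool: {n:5d} obligors, {d:3d} defaults, DR = {dr:.2%}")
```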

Guidelines on master scale definition would be appreciated. We would be keen to contribute by describing our methodological approach on this issue.

Question 5: How do you take economic conditions into account in the design of your rating systems, in particular in terms of: d. definition of risk drivers, e. definition of the number of grades f. definition of the long-run average of default rates?

ISP's approach to PD modelling is TTC-oriented.
Historical series as long as possible are considered in building the estimation sample and in defining the long list of variables.
Moreover, the CT used for PD calibration purposes is the long-run average observed default rate (with an adjustment to make the default definition homogeneous over time). As a consequence, macroeconomic conditions are generally reflected in the behavior of the independent variables and in the evolution of the default rate.

In detail:

d) Macroeconomic conditions (e.g. GDP, unemployment rate, inflation rate) are not considered as risk drivers for model development. The criteria for defining the long list of variables potentially predictive of default are "Through-The-Cycle" oriented.
However, risk drivers could be implicitly affected by economic conditions in different ways depending on the portfolio analyzed.
e) Macroeconomic conditions are not considered in the Master Scale construction. The current Master Scale construction methodology aims to minimize migrations between grades due to economic conditions and to ensure a good distribution of the population among grades. The number of grades is defined in relation to the observed default rates, in order to obtain a monotonic trend with significant variability between grades and no concentration effects.
f) The long-run average of default rates is defined by including a full economic cycle in the observation period, embedding both downturn and upturn periods, thus ensuring a historical observation period representative of the likely range of variability of one-year default rates. Macroeconomic conditions (e.g. GDP trend, BOI default rates) are used for benchmarking purposes.
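As a purely illustrative sketch of the calibration step in point f), assuming a simple logit-shift calibration to the Central Tendency (one common technique, not necessarily the exact one used in our models), the portfolio-average PD can be aligned to the long-run average default rate as follows:

```python
# Hypothetical sketch: shift model scores on the log-odds scale by a single
# scalar so that the average calibrated PD matches the long-run average
# observed default rate (Central Tendency). Figures are illustrative.
import math

def calibrate_to_ct(pds, ct, tol=1e-10):
    """Find the log-odds shift c such that mean(sigmoid(logit(pd) + c)) = ct."""
    logits = [math.log(p / (1 - p)) for p in pds]
    lo, hi = -20.0, 20.0  # bisection: mean PD is increasing in the shift c
    while hi - lo > tol:
        c = (lo + hi) / 2
        mean_pd = sum(1 / (1 + math.exp(-(l + c))) for l in logits) / len(logits)
        if mean_pd < ct:
            lo = c
        else:
            hi = c
    return c

raw_pds = [0.002, 0.005, 0.01, 0.03, 0.08]  # model output on a sample portfolio
central_tendency = 0.02                     # long-run average observed default rate
shift = calibrate_to_ct(raw_pds, central_tendency)
calibrated = [1 / (1 + math.exp(-(math.log(p / (1 - p)) + shift))) for p in raw_pds]
print(f"shift = {shift:+.4f}, mean calibrated PD = {sum(calibrated) / len(calibrated):.4%}")
```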

Question 6: Do you have processes in place to monitor the rating philosophy over time? If yes, please describe them.

We usually adopt a non-overlapping windows method when possible. In fact, unlike the overlapping method, defaults are counted just once, without inevitably incurring overweighting/underweighting that increases/decreases the Central Tendency value. Regarding the bias due to the choice of fixed reference dates in the non-overlapping method, the volatility can be considered acceptable if a full economic cycle is included in the long-run average, so that changing the observation point does not substantially move the final average value; considering only a limited part of the cycle, on the other hand, can influence the value if a downturn/upturn period is included. The effect of using an overlapping windows method is simply an increase/decrease of the CT value depending on a high/low presence of defaults in the middle of the historical series, given that there, unlike at the tails, defaults/performing positions are counted 12 times. In addition, in the case of multiple defaults in the same period, the overlapping method takes into account returns to performing status, since the initial performing perimeter changes each time, with the result of increasing the default rate, unlike the non-overlapping method, where the initial performing perimeter is set once at the beginning of the year.
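The counting asymmetry described above can be made concrete with a small sketch on synthetic dates (the figures are illustrative):

```python
# Illustrative sketch: a single default event is contained in up to 12
# overlapping one-year windows, but in exactly one non-overlapping window,
# so mid-series defaults are overweighted by the overlapping approach
# relative to events near the tails of the historical series.

months = list(range(36))  # three years of monthly window start dates
default_month = 18        # one default event in the middle of the series

# A window starting in month s covers defaults in months (s, s + 12].
overlapping = [s for s in months if s < default_month <= s + 12]
non_overlapping = [s for s in months[::12] if s < default_month <= s + 12]
print(f"overlapping windows containing the event:     {len(overlapping)}")      # 12
print(f"non-overlapping windows containing the event: {len(non_overlapping)}")  # 1

# The same event observed in month 1, near the start of the series:
early = [s for s in months if s < 1 <= s + 12]
print(f"overlapping windows for an event in month 1:  {len(early)}")            # 1
```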

Short-term contracts are included in the population, so that a customer with an active short-term contract at the reference date is considered both in the numerator (in case of default) and in the denominator (in case of performing status at the reference date), independently of the contract's expiration date; no reasoning about maturity is conducted.
It is worth noting that ISP is a long-term relationship bank. As a consequence, the weight of short-term contracts on the whole portfolio is marginal and the default rate is not biased.
The EBA's GL should clarify better how to deal with short-term contracts and provide more details.

Question 7: Do you have different rating philosophy approaches to different types of exposures? If yes, please describe them.

The monitoring of the rating philosophy (Point-In-Time vs Through-The-Cycle) is not a formalized process, but migration matrix analyses are performed annually to verify rating stability.

ISP really appreciates the attempt to analyze this topic in depth. The definition of PIT vs TTC models is now clear, but it should also be made clear that the majority of models can be considered "hybrid" between PIT and TTC, with a score based more on PIT indicators and a TTC calibration.

In our opinion, further clarification should be provided in order to define technical standards and homogeneous metrics across all banks for quantifying a model's degree of PIT-ness vs TTC-ness.
Without a common interpretation within the industry and among supervisory authorities, performance evaluations will continue to be non-homogeneous.

ISP considers all aspects related to backtesting very important.
We completely agree that backtesting analysis needs to be adapted in order to take into account the cyclicality and the dynamic properties of a model.

In our view, a possible direction could be defined through a multidimensional framework for validation purposes (1). The idea is to develop a set of tests, each aiming at measuring the performance of the models and their compliance with different (even conflicting) regulatory requirements. The tests are based on different hypotheses (correlations, degree-of-PIT-ness measures, …). The outcome would be a simultaneous assessment of the several dimensions in a traffic-light approach involving both quantitative thresholds and qualitative judgment. The traffic-light approach could be implemented with specific thresholds for each test, differentiated by the degree of PIT-ness of the model, the target segment (corporate, retail, HDP, LDP) and the number of rating classes of the system.
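Purely by way of illustration, such a traffic-light mapping could look like the following; the test names, thresholds and PIT-ness adjustment are hypothetical placeholders, not the calibrations of the AIFIRM paper:

```python
# Hypothetical sketch of a traffic-light aggregation: each test statistic is
# mapped to green/amber/red using thresholds differentiated by the model's
# degree of PIT-ness, and the outcome is the full vector of colours, to be
# read jointly with qualitative judgment. All names and numbers are placeholders.

THRESHOLDS = {  # (green_upper, amber_upper) per test statistic
    "calibration_error": (0.10, 0.25),
    "migration_volatility": (0.15, 0.30),
}

def colour(test: str, value: float, pitness: float) -> str:
    """More PIT-like models (pitness -> 1) tolerate higher migration volatility."""
    green, amber = THRESHOLDS[test]
    if test == "migration_volatility":
        green, amber = green * (1 + pitness), amber * (1 + pitness)
    return "green" if value <= green else "amber" if value <= amber else "red"

results = {"calibration_error": 0.08, "migration_volatility": 0.28}
assessment = {t: colour(t, v, pitness=0.7) for t, v in results.items()}
print(assessment)  # {'calibration_error': 'green', 'migration_volatility': 'amber'}
```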

(1) For further details see the AIFIRM position paper "Validation of rating models' calibration" by S. Cuneo, G. De Laurentis, F. Salis, F. Salvucci, January 2016.

Question 8: Would you expect that benchmarks for number of pools and grades and maximum PD levels (e.g. for exposures that are not sensitive to the economic cycle) could reduce unjustified variability?

Internal models are developed according to a TTC philosophy.
However, the weight of the behavioral component can differ for each model: models with a high contribution of behavioral information are more Point-In-Time than models with a high contribution of other, more stable information (Retail vs Corporate exposures).

Question 9: Do you agree with the proposed principles for the assessment of the representativeness of data?

ISP agrees with the application of appropriate adjustments and margins of conservatism, but only in specific cases such as methodological deficiencies, estimation errors that diminish the representativeness of historical observations, and deficiencies due to missing data.
In defining the MOC-related categories, the EBA should consider that correlations between MOCs could arise, with problems in the quantification and evaluation of the overall impact.
The MOC is not the solution, but a temporary prudential method to address certain difficulties. The multiplication and correlation of MOCs could lead to an undesired increase in "unjustified" variability and, in some cases, also to an over-estimation of risk, with impacts on the real economy.
It is absolutely necessary that, in the definition and quantification of MOCs and adjustments, the Regulator require banks to focus only on the most relevant and material items, preventing the error propagation due to "MOC estimates" from generating estimates that are even less comparable among banks, or unrealistic, exponentially increasing so-called model risk: the unjustified variability among banks could in fact simply be shifted from modelling techniques and data to MOC definition and quantification, without in the end achieving the desired improvement in comparability and in creating a level playing field.
It should be better explained when a MOC should be applied and at what level. Furthermore, where the Regulator suggests applying MOCs in the different phases of model development, it would be reasonable to expect clear guidelines to be defined in order to standardize the criteria for identifying and quantifying adjustments/MOCs, with the aim of avoiding or reducing "unjustified" RWA variability among banks.
It should also be underlined that it is difficult to evaluate the impact in terms of PD (as well as RWA) of adjustments introduced in the early stages of model development (before the Master Scale PD definition), or of MOCs not applied directly to the final PD.
Moreover, the EBA's GL should better clarify how to manage the MOC correlations that could arise when margins of conservatism are applied at several stages of the model estimation process.

In ISP's view, Art. 32 of the EBA's GL is crucial: it contains a strong indication aimed at ensuring that capital requirements are not distorted by the need for excessive adjustments, and its coherence with the whole MOC section must be ensured.
As a consequence, and in order to prevent differing interpretations of the quantification and application of adjustments and margins of conservatism among banks, countries and national supervisory authorities, the EBA should better define:

• the categories and methodological aspects of MOC estimation;
• the temporary character of the MOC (refer to Art. 34);
• a form of standardization (clear quantification?) to make the meaning of the sentence "… not distorted due to the necessity for excessive adjustments" homogeneous.


In conclusion, ISP completely disagrees with evaluating the impact of each MOC in terms of final risk parameters: this approach implies estimating n additional models, one for each MOC applied, while the impact of a MOC may not be linear. Comparing the model with all MOCs applied against the model without any MOC is considered more appropriate for assessing the overall adjustment impact.
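A stylized numerical sketch, with hypothetical figures, of why one-at-a-time MOC impacts do not add up to the joint impact when MOC components compound multiplicatively:

```python
# Hypothetical sketch: the sum of the one-at-a-time impacts of several
# multiplicative MOC components differs from their joint impact, which is why
# we consider the all-MOC versus no-MOC comparison the appropriate assessment.

base_pd = 0.020
mocs = [0.10, 0.15, 0.05]  # three multiplicative MOC components (placeholders)

# One-at-a-time: impact of each MOC applied in isolation, then summed.
one_at_a_time = sum(base_pd * m for m in mocs)

# Joint application: the MOC components compound multiplicatively.
final_pd = base_pd
for m in mocs:
    final_pd *= (1 + m)
joint = final_pd - base_pd

print(f"sum of isolated impacts: {one_at_a_time:.5f}")  # 0.00600
print(f"joint impact:            {joint:.5f}")          # 0.00657 -> not additive
```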

Name of organisation

Intesa Sanpaolo S.p.A