French Banking Federation

We are in favour of including the additional paragraph, which observes that the in-default LGD appropriate for an economic downturn could be estimated on the basis of the downturn estimation methodology applied to the LGD estimates for non-defaulted exposures. The EBA could provide some additional clarification. For example:
• If we observe a downturn period in 2009, do we have to estimate the in-default LGD as the 2009 LGD for exposures in default for one year, or as the LGD observed in 2010?
• Which defaults must be used when calculating the downturn margin for the in-default LGD (exposures that defaulted during the downturn period, exposures already in default during the downturn period, changes in cash flows during the downturn period, etc.)?
First of all, we share the concern that the proposed policy in paragraph 15 could create an undue burden if applied to every downturn period identified. The analysis should be performed consistently with the available historical series, taking into account potential structural breaks. Such breaks could occur in the historical series of economic factors (e.g. the introduction of the Eurozone) or in the LGD series (e.g. a heterogeneous definition of default).
We also think that an exemption from the identification of downturn periods should apply in the following additional cases:
• When a major macroeconomic crisis is already captured in the observed or estimated impact (and the level of the final LGDs is not understated);
• When the link between LGDs and economic factors cannot be evidenced using statistical models, especially where such links are not supported by economic reasoning or where the data are heterogeneous (e.g. the definition of default may not have been homogeneous over the historical period);
• When the observed LGDs belong mainly to downturn periods, in which case the LRA LGD and the downturn LGD are equal.
These exemptions could be documented and approved by the supervisor.
The downturn period should be identified over a recent period: macroeconomic factors are not always available for 20 years, and observed LGDs are not available over such a long period either. It is therefore not realistic to quantify a statistical link between LGD and macroeconomic variables over 20 years.
Furthermore, under the extrapolation approach we may be able to find a link between LGD and economic factors, but perhaps not the factors proposed in Article 2(1) of the RTS. For instance, GDP (and inflation) was almost flat in France over the last 10 years, so it will be difficult to establish a link between these factors and the observed LGDs (with the data available for the last 10 years). Institutions should be given flexibility in how they estimate LGD as a function of economic factors.
On the same point, it would be consistent to use the models developed within the IFRS 9 framework, where LGDs are already modelled with macroeconomic factors. If most of the principles comply with the GL, why not use the same model "backward" instead of "forward" to estimate the downturn LGD over a downturn period (consistent with the extrapolation method)? This would reinforce the estimates, ensure coherence, make the models more robust, and bring efficiency gains.
Finally, we wonder whether the 20-year approach of the guidelines is compliant with Articles 145 and 176 CRR. At the least, the penalising treatment of banks that cannot collect 20 years of data is unfair.
Yes, everything seems clear and well explained in paragraph 14. However, we would like to point out the following issues:
• For specific segments, the link between LGDs and economic factors may not be significant under statistical models. This is specifically the case for household loans covered by a guarantee such as "Credit Logement" in France, where the LGD is stable whatever the economic conditions; no link or dependence between LGD and economic factors can be found.
• While paragraph 14 is clear, estimating the downturn LGD of each segment in a different manner (for instance, the haircut approach for one period and segment and the extrapolation approach for another) could add a great deal of work and complexity.
• Moreover, the requirement to analyse a 20-year period to determine the downturn period is not consistent with our 10 years of data availability for LGD modelling.
• A clear definition of the "calibration segment" is needed. Is it the exposure class (Corporate, Bank, etc.) or, for instance, a sub-class of LGD in the model?
• For Specialised Lending models, the downturn can be modelled transaction by transaction on the basis of each transaction's own characteristics, namely the asset type and the loan characteristics.
• Furthermore, the new requirements for economic downturn estimation seem to overlap with the remit of macro-prudential authorities. What does it mean to estimate an economic downturn when there is already a countercyclical buffer to comply with?
The examples provided for the haircut and extrapolation approaches are sufficiently clear. We understand that the GL provides examples of haircut and extrapolation methodologies and that institutions may use the methodologies best fitted to their modelling assumptions.
However, as noted in our answer to question 2, would it be possible, within the extrapolation approach, to use the model developed for IFRS 9, which could meet the need to estimate the downturn LGD for a specific downturn period?
When basing our downturn LGD estimation on observed impact, we think a more pragmatic approach should be applied. In terms of proportionality, we consider that all the required analyses are operationally burdensome, while it is not certain they can be used for LGD calibration. We therefore suggest that these analyses be lightened and performed according to a cost/benefit approach in line with good risk management practices.
In addition, the context in which the margins of conservatism apply is not clearly described.
Other approaches should be considered, notably for Specialised Lending models, which can be based on asset values or future cash-flow generation and on loan characteristics. Please see our answer to question 9.
The first approach seems clearly suited to high default portfolios (HDP). On a very low default portfolio, it seems quite unlikely that a downturn effect can be quantified over a specific downturn period. In order to use all available information, we propose to quantify the downturn effect by comparing the LRA with LRA', where LRA' is simply the LRA computed without the downturn periods.
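The proposed LRA versus LRA' comparison can be sketched as follows; the yearly LGD figures and the downturn-year labels are purely illustrative assumptions, not actual data:

```python
# Hypothetical yearly average observed LGDs and assumed downturn years.
yearly_lgd = {2006: 0.22, 2007: 0.24, 2008: 0.35, 2009: 0.38, 2010: 0.26, 2011: 0.23}
downturn_years = {2008, 2009}

# Long-run average over all years.
lra = sum(yearly_lgd.values()) / len(yearly_lgd)

# LRA' excludes observations from the identified downturn years.
non_downturn = [v for year, v in yearly_lgd.items() if year not in downturn_years]
lra_prime = sum(non_downturn) / len(non_downturn)

# The downturn effect is quantified as the gap between the two averages.
downturn_margin = lra - lra_prime
```

On these assumed figures the margin is simply the uplift contributed by the downturn years, which can then be applied on top of LRA' (or compared with LRA) without isolating a single downturn period.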
We understand the need to limit the number of approaches. However, it may be too restrictive at this stage to confine the GL to the two main methods described (haircut and extrapolation), since alternative approaches such as model components may perform better in terms of robustness and ease of monitoring.
Overall, we consider that the rationale of downturn LGD estimation amounts to a rule of picking "the worst of the worst". While we support the clear distinction between MoC and downturn estimation, we still question the approach of adding so many layers of conservatism.
Applying the worst average LGD measured over different downturn periods, segment by segment, will lead to a strong overestimation of EL and UL at the bank portfolio level, as segments are not necessarily fully correlated and do not have the same sensitivity to downturn periods. For example, for Reserve Based Lending on oil extraction projects, a downturn period could be one in which oil prices are depressed, whereas such low prices would be quite favourable for airlines and thus for observed losses on aircraft finance.
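The overestimation effect can be illustrated with a hypothetical two-segment example (all figures are assumed for illustration): because each segment's worst period is different, taking each segment's own worst LGD yields a higher aggregate loss than the portfolio would suffer in any single downturn period:

```python
# Assumed average LGDs per segment and per period; the worst period differs by segment.
lgd = {
    "oil_projects":     {"2009": 0.45, "2015": 0.20},  # oil-price downturn assumed in 2009
    "aircraft_finance": {"2009": 0.15, "2015": 0.40},  # airline downturn assumed in 2015
}
ead = {"oil_projects": 100.0, "aircraft_finance": 100.0}  # equal exposures for simplicity

def portfolio_loss(period: str) -> float:
    """Aggregate loss if the whole portfolio experienced the given period."""
    return sum(ead[s] * lgd[s][period] for s in ead)

# Worst loss over any single common downturn period.
worst_single_period = max(portfolio_loss(p) for p in ("2009", "2015"))

# "Worst of the worst": each segment takes its own worst period's LGD.
worst_of_worst = sum(ead[s] * max(lgd[s].values()) for s in ead)
```

With these assumed figures, each single period produces the same aggregate loss, while the segment-by-segment "worst of the worst" is markedly higher, since no single period was actually bad for both segments at once.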
We consider it important that the total exposure amount or share treated under Section 7 should remain immaterial, so as to avoid competent authorities misinterpreting it as a default case and applying Section 7 over-systematically.
Generally, we view the 20% add-on as far too conservative, unjustified, and carrying a risk of largely overestimating LGDs. Although we understand the intention is to provide a strong incentive for an internal estimation of the downturn adjustment, we think that in cases such as LDPs the lack of data availability is a crucial issue, and this incentive should not lead institutions to "invent" a downturn impact where there is no proven impact on LGDs simply to avoid the application of add-ons. It would also be useful for the EBA to provide further rationale for the choice of the add-on level (20%).
Our proposal is to replace the 20% add-on with the reference value approach, computed coherently with Sections 5 and 6. As it relies on internal loss data, the reference value is more adequate than an arbitrary flat value. The reference value approach could then be dropped as a benchmark option.
At this stage, it is still too early to identify the segments or pools that could fall under the policy proposed in Section 7. However, low default portfolios can be an example, as there may be very few observations and the model may be based on expert judgement (Bank or Sovereign exposures).
Applying a fixed add-on is simplistic, as it is a one-size-fits-all approach. It will penalise the best transactions.
An alternative proposal would be to replace this 20% add-on with the reference value approach (see our answer to question 6). Long-run average LGDs differ significantly across exposure types, so the add-on should vary with the level of the long-run average LGD.
Given the difference in granularity between LDP and HDP portfolios, the regulator should allow institutions to choose between:
• an add-on per segment or pool (bottom-up);
• a global add-on (top-down), with the downturn margin split between the different segments.
The reference value approach would then be dropped as a benchmark option and computed coherently with Sections 5 and 6. Used in this manner, the reference value approach would allow the floor to be differentiated across exposure classes.
Some Specialised Lending models are quite risk-sensitive and therefore make it possible to differentiate the best transactions from the riskiest ones. Applying a fixed add-on would heavily penalise the best transactions and push banks to take on the riskiest ones, which would be the only transactions with a sufficient margin to cover a high LGD.
No, this minimum MoC appears far too conservative. It would need to be differentiated according to the segments or types of exposure. We would appreciate feedback on how the +20% was defined. See also our answers to questions 6 and 8.
The guidelines seem to be designed for statistical models and do not, at this stage, envisage the theoretical models that can be used for Specialised Lending.
These models are based on asset values or on cash flows drawn from deep historical data, such as aircraft values or electricity prices in the case of a power plant. They combine statistical data on asset values or cash flows with expert and experience-based assessment of the risk drivers used.
LGD can be modelled via simulations of asset values or cash flows, applying haircuts and/or volatilities. These haircuts or volatilities can themselves be calibrated to take downturn periods into account.
Like the extrapolation approach, which is based on simulated historical loss data and estimates "realised" historical LGDs, these theoretical models simulate future possible "realised" LGDs (those that would materialise in case of default, i.e. predictive LGDs) on the basis of simulated asset values or future cash flows.
In such models, it is not necessarily an add-on that is applied; rather, the simulation process itself is calibrated so that the simulated LGDs address downturn considerations.
Such models are quite risk-sensitive, as they take into account each transaction's characteristics (asset type, duration of future cash flows, debt repayment characteristics) and incorporate downturn aspects in the modelling, which implies an increase in LGD due to the downturn. The downturn impact is thus generated at the level of each transaction.
In the case of an asset-based model such as an aircraft-financing LGD model, downturn studies can also be performed, showing that the Specialised Lending model simulates haircuts that take downturn-period haircuts into account.
In the case of cash-flow models, for Project Finance for example, the downturn aspect can be incorporated in the simulation process.
For Specialised Lending, back-testing should be taken into account when assessing whether the model adequately addresses the downturn requirements. The purpose of back-testing is to compare, across all defaulted transactions, the historical LGDs on defaulted deals with the model's LGD forecasts one year before default. As long as such back-testing is satisfactory, the downturn aspect is adequately taken into account.
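The back-testing comparison described above can be sketched as follows; the deal figures are hypothetical, and the pass criterion (forecasts not understated on average) is our own assumption rather than the guidelines' wording:

```python
# Hypothetical defaulted deals: the model's LGD forecast one year before default
# versus the LGD ultimately realised on the deal.
deals = [
    {"forecast_lgd": 0.30, "realised_lgd": 0.28},
    {"forecast_lgd": 0.25, "realised_lgd": 0.27},
    {"forecast_lgd": 0.40, "realised_lgd": 0.35},
]

avg_forecast = sum(d["forecast_lgd"] for d in deals) / len(deals)
avg_realised = sum(d["realised_lgd"] for d in deals) / len(deals)

# Assumed pass criterion: the model does not understate losses on average,
# which is read as evidence that downturn effects are adequately captured.
satisfactory = avg_forecast >= avg_realised
```

In practice the comparison would run over the full defaulted population and could include per-deal error statistics, but the principle is the same: forecasts made before default are confronted with realised outcomes.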
"How to use" the reference value is not clear. Is it only a comparison point, or could it become the downturn LGD if the downturn LGD estimated via Sections 5 or 6 is lower?
The reference value should not be considered a "hard floor", as extreme values might be encountered when using only two years, and some years with high loss rates do not stem from "downturn" effects.
When determining the reference value, the guidelines suggest comparing an LGD average weighted by the number of defaults with an LGD average weighted by exposure at default. This comparison does not seem meaningful, as the two measures are not comparable: one focuses on default occurrences, the other on the institution's overall exposure amounts.
We therefore suggest removing this comparison from the guidelines.
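The gap between the two weighting schemes can be illustrated with a hypothetical set of defaulted facilities (all figures assumed); a few small, high-loss defaults alongside one large, well-collateralised default make the two averages diverge materially:

```python
# Hypothetical defaulted facilities with their EAD and realised LGD.
facilities = [
    {"ead": 10.0,  "lgd": 0.60},  # small default, high loss
    {"ead": 10.0,  "lgd": 0.55},  # small default, high loss
    {"ead": 500.0, "lgd": 0.10},  # one large, well-collateralised default
]

# Default-count-weighted average: every default counts equally.
count_weighted = sum(f["lgd"] for f in facilities) / len(facilities)

# EAD-weighted average: dominated by the large exposure.
total_ead = sum(f["ead"] for f in facilities)
ead_weighted = sum(f["ead"] * f["lgd"] for f in facilities) / total_ead
```

Here the count-weighted average sits far above the EAD-weighted one, which shows why the two figures answer different questions (frequency of loss per default versus loss per unit of exposure) and are hard to compare directly.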