ESBG finds it reasonable to have separate analyses for different types of exposures, though we would welcome further clarification on the definition of jurisdiction and how granular it should be. In Germany and the US, for example, there are several legal jurisdictions, and if the RTS indeed refers to individual legal jurisdictions, our concern is that too granular an approach may leave too few observations for meaningful analysis.
In general, we see data availability as an issue, for both internal and external data, when looking back further than 20 years and needing long time series of observed model components that use, for example, the same definition of default as the current regulation.
Furthermore, in ESBG’s view, the model components should be defined more clearly, as the current definitions are difficult to understand and leave substantial room for interpretation.
ESBG would note that managerial options in a downturn are potentially more effective for CF than for LGD: at the first signs of stress, credit line management can be adjusted quickly to stabilise or even reduce observed CFs. Observed LGDs, however, cannot be influenced so easily. Expected cash flows will suffer from the higher number of defaults and the general stress in the environment, which makes it much more likely to actually observe increased downturn LGDs.
In ESBG’s view, the definition of model components is not very clear; in fact, it can be interpreted in various ways. It is also very complex, and it greatly complicates the calculation, since not one but multiple downturn add-on values must be calculated. Moreover, many of the proposed macro variables, especially the market indices, can be difficult to obtain over a long time span: a 20-year horizon is long relative to the data available to financial institutions. If the requirement is not fulfilled, an additional margin of conservatism (MoC) must be applied to the downturn add-on, which is a further complication.
In this regard, ESBG would like to ask the EBA to clarify how this MoC should be determined.
We also see challenges in deciding what constitutes a strong enough dependency for an economic factor to be selected, and this evaluation may vary between authorities, thereby imposing different MoCs on similar cases.
If several factors are included in one economic factor model, financial institutions might face statistical issues due to the lack of long time series. As mentioned in Q3, ESBG sees a challenge in deciding what constitutes a strong enough dependency for an economic factor, and especially in getting this evaluation harmonised across different regulators.
In ESBG’s view, it is unclear how to sort the observed losses according to the realisation of the model component.
Are banks supposed to discount the cash flows of each default back to the time of default, and then assign the observation to the year in which the “main” cash flow is realised? We have found a paper from Global Credit Data stating that this yields a higher correlation with macroeconomic factors, although the discounting effect introduces a difference.
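For illustration only, the interpretation described above could be sketched as follows. This is a minimal example under our own assumptions; the function names, dates, discount rate and the choice of the largest cash flow as the “main” one are hypothetical and not prescribed by the RTS.

```python
from datetime import date

def discounted_lgd(ead, cash_flows, default_date, annual_rate):
    """Discount each recovery cash flow back to the default date and
    compute the realised LGD as 1 - PV(recoveries) / EAD.
    cash_flows: list of (payment_date, amount) tuples."""
    pv = 0.0
    for pay_date, amount in cash_flows:
        years = (pay_date - default_date).days / 365.25
        pv += amount / (1.0 + annual_rate) ** years
    return 1.0 - pv / ead

def realisation_year(cash_flows):
    """Assign the loss observation to the calendar year in which the
    largest ("main") cash flow was received (one possible reading)."""
    main_date, _ = max(cash_flows, key=lambda cf: cf[1])
    return main_date.year

# Hypothetical default: EAD 100, two recoveries, 5% discount rate
cfs = [(date(2009, 6, 30), 20.0), (date(2010, 3, 31), 60.0)]
lgd = discounted_lgd(100.0, cfs, date(2008, 12, 31), 0.05)  # ≈ 0.24
year = realisation_year(cfs)  # 2010, year of the main cash flow
```

Under this reading, the LGD is measured economically at the default date but correlated with the macroeconomic factor of the realisation year, which is precisely where the discounting effect mentioned above creates a mismatch.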
We would like to ask the EBA to provide further examples of how to map different model components to different points in time during the workout period (i.e. cure, liquidation, distressed restructuring, etc.). Is there a risk of aggregating losses that are not comparable where banks have a long time to resolution? And is it possible to find a solution that provides a proper method for calibrating the estimates for capital purposes?
In ESBG’s view, a one-year duration can impose calibration levels that differ substantially between institutions with similar portfolios. The reason is that low-default portfolios or other small portfolios (modelled separately due to proven differences from other portfolios) might have their calibration level set on a very limited number of observations. Another issue is that structural breaks, such as regulatory requirements imposed in a crisis, new management with a changed strategy, or new financial accounting standards in the bank, might give extreme results in a single year that are not representative of the future. It is very hard to argue to regulators that a structural break of similar magnitude will not happen again, and some of these events are more likely to happen during a crisis. Therefore, we advise the EBA to look at the average of a model component over the period viewed as one downturn (e.g. three years) to ensure a more reliable estimate.
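The multi-year averaging we suggest could be sketched as follows. This is purely illustrative under our own assumptions: the function name, the example figures and the use of a moving average are hypothetical, not a prescribed method.

```python
def downturn_level(annual_values, window=3):
    """Set the downturn level of a model component as the worst
    moving average over a multi-year window (here 3 years), rather
    than the single worst annual observation.
    annual_values: yearly realisations, ordered in time."""
    averages = [
        sum(annual_values[i:i + window]) / window
        for i in range(len(annual_values) - window + 1)
    ]
    return max(averages)  # "worst" = highest for a loss-type component

# Hypothetical yearly observed LGDs; 0.55 reflects a one-off
# structural break (e.g. a crisis-year accounting change)
lgds = [0.20, 0.22, 0.55, 0.25, 0.21, 0.23]
single_worst = max(lgds)         # 0.55, dominated by one outlier year
smoothed = downturn_level(lgds)  # ≈ 0.34, worst 3-year average
```

The averaged level still reflects the downturn but is less sensitive to a single unrepresentative year, which is the point of our recommendation.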
Moreover, since it is up to each institution to define a year (Q1 to Q4, Q3 to Q2, etc.), results may differ between institutions. It could therefore be relevant to disclose the choices each institution makes, e.g. in the Pillar 3 report.
ESBG is concerned that different authorities will apply different requirements as to which situations are classified as severe, and thereby impose different MoCs on similar portfolios. Another concern relates to the availability of data for the past 20+ years, both internally and for model components (where the model component and/or the economic factor does not have a long time series).
ESBG believes that more details in Article 2(3) are needed in order to ensure harmonisation.
ESBG strongly recommends the definition of such steps in Article 6.
ESBG would like to highlight that it is important to have consistency in the chosen downturn period across PD, LGD and EAD estimates. The mission of the IRB models is to estimate the expected and unexpected loss over time and in a downturn scenario, and banks cannot produce these estimates in complete isolation from one another.
We believe it can be challenging to decide how many years to combine into one crisis when two crises occur consecutively: should they be viewed as one period of crisis or two? Expert judgement might increase the robustness of this decision, but it might also lead to different judgements across banks and regulators.
We would also like to restate our concern about how to evaluate which and how many economic factors are sufficient.
Requiring financial institutions to compile data from many years back is a challenge, especially when looking at components at a more detailed level than overall LGD and CF, and additionally when taking the new default definition into account.
In this regard, see Q2. As noted in the RTS, many banks have quite simple CF models compared to their LGD models; ESBG therefore thinks that these RTS might be too extensive for CF estimates. We might see a situation where a single year’s observation sets the CF (unless the long-run average is higher), which might bias the CF estimate.
We would also like to highlight that it is crucial that the same final downturn period is selected for CF and LGD estimation. This is because the two estimates are strongly linked (and typically based on the same dataset), therefore consistency is of utmost importance.
In ESBG’s view, yes. The MoC provisions could also be elaborated further so that they are interpreted consistently across banks and regulators. We would like to highlight that it is important to also evaluate the total impact of the estimates, and we advise the EBA to include this in the guideline on modelling as well. If separate MoCs are applied to many components and steps, the overall estimate is likely to become too high and of little relevance. It is important to keep a relevant scenario in place for the IRB models, so that banks continue to improve their risk management models and obtain results that remain plausible and that users can relate to.
ESBG welcomes simpler approaches. Greater detail might not lead to more harmonisation, because banks and regulators still need to make several assumptions to arrive at the overall downturn level.
ESBG finds the model component approach preferred by the EBA too complex, as it combines strict methodological prescription with subjective input from a panel of experts. Such an approach requires verification of significant underlying assumptions and might increase model risk and uncertainty. This increased complexity could even contradict the EBA’s current aim of reducing model risk through consultation and methodological harmonisation. Consequently, we believe that the reference value approach is more adequate, as it defines a common estimation framework while giving flexibility in estimating the final LGD. This approach also allows banks to use model-component downturn estimation where appropriate. ESBG prefers approaches that define a common estimation framework but still allow banks to select a methodology tailored to their environment.
Generally, all three methods present challenges; for example, all of them are vulnerable to data limitations. We favour guidelines that preserve risk sensitivity; therefore, as stated above, the reference value approach seems somewhat more relevant than the supervisory add-on approach. However, elements of the alternative approaches still need to be clarified, such as the definition of the reference value.
One of our members has based its downturn estimate on observed losses during a severe crisis. This bank looked at losses over more than one year, as the severe crisis in its case occurred in the early 1990s (as also set out by the FSA). During those years, the accounting principles were different from today’s, and the accounting practice in the commercial banks was additionally influenced by the public refinancing of the banks. This resulted in banks taking huge losses in a single year, which in turn resulted in high write-back rates after the crisis. Since data from that long ago is scarce and subject to quality issues, especially at a more detailed level, we consider it necessary to look at more than a single year when selecting the downturn level. Please note that, due to data limitations, our member had to evaluate information from financial statements and various benchmarks when deciding the downturn level of LGDs.