Response to consultation on RTS on the specification of the nature, severity and duration of an economic downturn


Question 1: Do you have any concerns around the workability of the suggested approach (e.g. data availability issues)?

Identifying the nature of the economic downturn requires a long data history of internal defaults and losses (at least 20 years), broken down by model component and exposure type. This is a concern, given that such a long internal data history is generally not yet available.

The same is true for external macroeconomic time series. It is almost impossible to identify sufficiently long, appropriate and representative external time series, especially for specialised internal portfolios (specific retail segments, specialised lending, etc.). Using proxy data series instead carries the risk of a lack of representativeness. In addition, determining adequate margins of conservatism is not trivial and will introduce a significant amount of undesired volatility into the results.

Furthermore, it is not clear whether including a large number of macroeconomic factors yields better results than a simpler approach.

The implementation costs of such an approach are extremely high. This holds not only for the initial development phase but also for ongoing model maintenance. It is due to the requirements concerning an extremely long internal and external data history, to the need to assign observations to a time of realisation (which requires adjustments to the underlying databases) and to the role of the panel of experts. Furthermore, implementing this approach requires the development of a number of independent macroeconomic models, which will increase model complexity significantly. It is not clear whether such modelling leads to better predictions. The expected added value of this extremely large effort is low and does not justify it.

Question 2: Do you see any significant differences between LGD and CF estimates which should be reflected in the approach used for the economic downturn identification?

The term “type of exposure” is not clear. Does it refer to regulatory exposure classes, or to the portfolio segmentation used for the estimation of LGD and CCF?

Question 3: Is the concept of model components sufficiently clear from the RTS? Do you have operational concerns around the proposed model components approach?

In our opinion, the concept is clearly described in the draft RTS.

On the other hand, the proposed alternative approach for determining the model components (using a list of predefined potential drivers for multimodal shapes, such as cure rate, time in default, recovery rate relating to outstanding amount, etc.) is not clear. How is the term “relating to” in expressions such as “recovery rate relating to collateral value” to be interpreted? Does this refer to a segmentation by collateral value and the corresponding recovery rate, or to a ratio? This should be made clear.

Furthermore, an approach using pre-defined model components would not be universally applicable and would be very complex to implement. In our opinion, consistency between the general model set-up, the definition of model components and the availability of internal data must be ensured. This is not always possible with pre-defined model components: an adequate definition of components depends heavily on the general model set-up and on the availability of the related internal data. In order to identify pre-defined model components, it could be necessary to artificially build sub-segments along alternative risk drivers that might not even be included in the internal model or data framework. As an example, in a model of the type LGD = w_cure*LGD_cure + (1-w_cure)*LGD_Liquidations, an additional sub-segmentation along the risk driver “collateral value” would have to be introduced into the model set-up in order to identify the component “recovery rate relating to collateral value”.
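The two-component structure referred to above can be sketched as follows. This is a minimal illustration with hypothetical figures, not a prescribed implementation:

```python
# Minimal sketch of the two-component LGD model cited above:
# LGD = w_cure * LGD_cure + (1 - w_cure) * LGD_liquidation.
# All numbers are purely illustrative.

def expected_lgd(w_cure: float, lgd_cure: float, lgd_liquidation: float) -> float:
    """Blend cure and liquidation outcomes by the probability of cure."""
    if not 0.0 <= w_cure <= 1.0:
        raise ValueError("w_cure must be a probability in [0, 1]")
    return w_cure * lgd_cure + (1.0 - w_cure) * lgd_liquidation

# Example: 60% cure rate, 2% loss on cured exposures, 45% on liquidations.
print(expected_lgd(0.60, 0.02, 0.45))  # 0.192
```

Identifying a pre-defined component such as “recovery rate relating to collateral value” in this set-up would require splitting the liquidation component further by collateral value, a sub-segmentation that may simply not exist in the internal model or data framework.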

Question 4: Do you have any concerns about the complexity around the dependency approach proposed for the identification of the nature of an economic downturn? Is it sufficiently operational?

As set out in our answer to Question 1, the implementation costs of such an approach are extremely high, both for the initial development phase and for ongoing model maintenance, and the expected added value does not justify the effort. The alternative reference value approach seems more appropriate here.

Question 5: Do you agree with the proposed approach for computing the time series of the realised model component referring to the realisation of the model component rather than to the year of default?

This approach, too, is very complex, and the additional insight it can be expected to yield is often minimal.

There are cases in which the point in time of realisation is not clear. For the component “probability of cure”, for example, the realisation date is the completion date of the recovery process without losses (if the exposure cures) or the time of writing off losses (if it does not cure). Here, using either the time of default or the completion date of the recovery process might be the most appropriate choice.
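The database adjustment implied by the realisation-based approach can be illustrated with a small sketch (the records and field layout are hypothetical assumptions):

```python
# Sketch: assigning each recovery cash flow to its year of realisation
# rather than the year of default, as the proposed approach requires.
# Records are purely illustrative.
from collections import defaultdict

recoveries = [
    # (default_year, realisation_year, recovered_amount)
    (2007, 2008, 40.0),
    (2007, 2010, 25.0),
    (2008, 2010, 60.0),
]

by_default = defaultdict(float)
by_realisation = defaultdict(float)
for default_year, realisation_year, amount in recoveries:
    by_default[default_year] += amount
    by_realisation[realisation_year] += amount

# Under the realisation-based view, 2010 carries 85.0 of recoveries,
# even though the underlying defaults occurred in 2007 and 2008.
```

Producing the realisation-based series thus requires every observation in the underlying database to carry a reliable realisation date, which is exactly the adjustment effort criticised above.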

Question 6: Do you envisage any situation where a one year duration is not suitable for capturing the economic downturn at the economic factor level?

No.

Question 7: Do you have any concerns about the approach proposed for the identification of the severity of an economic downturn? Is it sufficiently operational?

The Basel framework has been in place only since 2006/2007. The major problem of this approach is therefore the availability of sufficiently long and representative internal and external time series. The identification and availability of suitable external data sources for such a long period is particularly problematic, given that the external macroeconomic data history should be adequate and representative for the institution’s specific portfolio. This holds especially for specific portfolios such as certain retail portfolios or specialised lending.

Question 8: Do you think that more details should be included in Article 2(3) for the purposes of evaluating whether sufficiently severe conditions were observed in the past?

It is unclear on the basis of which indicators one can assess whether a sufficiently severe economic downturn has occurred within the 20-year data history required.

Moreover, it should be defined more clearly what is meant by the term “for the future”; the time period covered by “the future” is especially relevant. Is a one-year horizon meant?

Question 9: Do you think Article 6 should pin down the steps for the joint impact analysis described in this text box?

No. Freedom of methodological choices should be granted.

In our view, the practical application procedure described is far too complex. Specifying the application in more detail would only complicate implementation further.

Question 10: Do you have any concern around the proposed approach about the identification of the final downturn scenario?

Yes. Using the long-run average values in a given downturn scenario for those components for which none of the explanatory macroeconomic variables assumes its historical worst value in that scenario might not be an appropriate choice.

Taking the example from the explanatory box of Article 6: in Downturn Scenario A (2001), only GDP growth assumes its historical worst realisation. Nevertheless, it is possible that interest rates, for example, also assumed a “stressed value” in that year, even though it was not their historical worst. In this situation, the long-run average of model component 2 might be insufficiently conservative, given that model component 2 was also “stressed” in crisis A. Calculating LGD_A with the long-run average for component 2 might therefore not be appropriate.
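The concern can be made concrete with a stylised numerical sketch. All values, and the additive aggregation, are assumptions chosen purely for illustration; the actual functional form is model-specific:

```python
# Component 1 (e.g. liquidation loss share) is at its historical worst in
# scenario A; component 2 was also stressed in A, though not at its worst.
worst_comp1 = 0.45           # historical worst of component 1
lra_comp2 = 0.30             # long-run average of component 2
stressed_comp2_in_A = 0.40   # realised (stressed) value of component 2 in A

def combine(comp1: float, comp2: float) -> float:
    # Additive aggregation chosen purely for illustration.
    return comp1 + comp2

lgd_a_with_lra = combine(worst_comp1, lra_comp2)
lgd_a_with_realised = combine(worst_comp1, stressed_comp2_in_A)

# Substituting the long-run average for the realised stressed value
# understates the downturn LGD for scenario A in this stylised example.
```

In this stylised case the long-run-average variant falls 0.10 short of the variant using the realised stressed value, which is the insufficiency of conservatism described above.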

In this sense, it is important not to include a readily specified approach in the Article. Methodological freedom should be respected here too. A benchmarking requirement could then help to validate the methodology chosen by the institution.

Question 11: Do you see any issue with the estimation of the model components for downturn periods which are not in the data base of the institution (e.g. in step 3 the case where the estimation of cure rate for 2001 is performed on the basis of the dependency assessment described in Article 3(2)(e) and (f))?

Extrapolating internal time series using external data sources and estimating the corresponding relationship is a standard methodological approach. Nevertheless, the problem of finding adequate and representative external time series, especially for more sophisticated or specialised portfolios, will always remain. Internal effects, such as the individual portfolio composition, market decisions or modifications to the recovery process, cannot be captured by such an extrapolation.
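Such an extrapolation might be sketched as a simple regression of the short internal series on a longer external macro factor. All series below are synthetic, and the choice of GDP growth as the factor is an assumption for illustration:

```python
# Sketch: backcasting a short internal loss-rate series into a downturn
# year outside the internal data via a simple linear regression on an
# external macro factor. All figures are synthetic illustrations.

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x, returning (a, b)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Overlap period: internal loss rates observed alongside the macro factor.
gdp_growth = [2.1, 1.5, -0.8, 0.4, 2.5]             # external, long history
loss_rate  = [0.020, 0.024, 0.041, 0.030, 0.018]    # internal, short history

a, b = fit_ols(gdp_growth, loss_rate)

# Backcast into a year with no internal data, e.g. the 2001 downturn with
# an assumed GDP growth of -1.2%; the backcast exceeds every internally
# observed loss rate in this synthetic example.
estimated_2001_loss = a + b * (-1.2)
```

The sketch also shows the limitation noted above: the fitted relationship only transports the macro factor into the past and cannot reflect internal effects such as portfolio composition or changes to the recovery process.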

Furthermore, the development of such models will multiply the complexity of the LGD model set-up.

The inclusion of additional margins of conservatism, because of additional requirements that are very difficult to fulfil, does not improve the quality of the downturn estimations.

Question 12: Do you think the same approach for the identification of the final downturn scenario proposed in this text box for LGD could be adopted also for the purpose of downturn CF estimation?

For CF estimations, simpler approaches are particularly important because, as a rule, they are based on more volatile data and shorter data histories, which are strongly influenced by changes in business strategies and processes.

Question 13: Do you think the draft GLs should describe in more detail the downturn adjustment methodology?

No. Freedom of methodological choices should be allowed.

Question 14: Do you think simpler alternative approaches for downturn adjustment should be considered in the spirit of proportionality?

We agree with the rationale behind the alternative reference value approach (see also, for example, our answers to Questions 1, 3 and 10). It is essential that reference values can be defined at institution level, given that institutions’ portfolios are often composed very differently. This may concern the product mix or regional concentration, or may be driven by the institution’s channels of customer acquisition.

The alternative distribution approach is simple and works well. It considers the institution’s specific portfolio and recovery processes. A significant amount of effort and costs can be saved.

It is not clear how the proposed add-ons to the discount rates are justified.

Question 15: What is your view on the alternative approaches? Please provide your rationale.

In general, simplifications are absolutely essential. In addition, exceptions from strict procedures should be possible in duly justified cases, since the prescribed methodology cannot be applied appropriately to all portfolios (see also our answer to Question 14).

Question 16: Which approach are you currently using for estimating downturn LGDs?

Bausparkassen are using different approaches.

Name of organisation

European Federation of Building Societies