
Dutch Banking Association (NVB)

We support the goal of the EBA to reduce the unwarranted variation in risk weights via more detailed methodology, modelling and data standards. We believe that in current modelling the downturn component is a subjective step in the process that could well be a significant contributor to unwarranted risk weight variation.
Unwarranted risk weight variation could be reduced by adding standardization to the regulations or guidelines. However, standardization often reduces risk sensitivity. Hence, it should be clear that any form of additional standardization truly reduces the unwarranted risk weight variation and does not reduce risk sensitivity too much.
We also believe that a future downturn can better be estimated by supplementing historic internal experience with a forward-looking view. Hence, formulating a view on the next downturn substantiated by internal data makes more sense than only replicating the previous downturn. In this respect it seems a sound process to formulate the economic downturn conditions whose impact on the models needs to be estimated.
Nevertheless, formulating the next downturn scenario can contribute to unwarranted risk weight variation instead of reducing it. We are of the view that it is important to ensure proper alignment with the current IFRS 9 and stress testing methodologies, where institutions also calculate the impact of future states of the economy.
Finally, we propose to investigate whether the scenario setting should be determined by the ESRB in order to prevent additional variation.
It should be prevented that some steps in the process of arriving at a model output become too prescriptive while other steps remain fairly subjective. In that case, the overly detailed and prescriptive measures can become an operational burden without a significant reduction of the unwarranted risk weight variation. That is undesirable from a cost-benefit perspective.
In practice we are able to evidence a relationship between loss-given-default and an economic downturn, whereas we have thus far not been able to evidence that the conversion factor (CF) estimate changes following an economic downturn.
We recognize that for certain portfolios the cure rate and LGD_liquidation model components, for example, react differently to macro-economic circumstances.
In the spirit of proportionality, the number of model components should match the materiality of the model in question. Based on experience we expect around two model components for the larger models, but only if model performance benefits from adding these components. For less significant models, the cure rate could be estimated directly (as a fixed rate).
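To make the model component idea concrete, a minimal sketch of how an LGD could be built from two components, a cure rate and a liquidation loss, is shown below. The simplifying assumption that cured defaults carry no loss, and all figures, are our own illustration, not part of the draft GLs.

```python
# Illustrative sketch (our assumption, not prescribed by the draft GLs):
# an LGD composed of a cure-rate component and a liquidation-loss
# component, where cured defaults are assumed to carry no loss.

def lgd_from_components(p_cure: float, lgd_liquidation: float) -> float:
    """Combine a cure-rate component and a liquidation-loss component."""
    return (1.0 - p_cure) * lgd_liquidation

# Hypothetical figures: 40% of defaults cure; non-cured cases lose 50%.
print(lgd_from_components(0.40, 0.50))  # 0.3
```

A less significant model would skip the decomposition and estimate the cure rate as a fixed value, as suggested above.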
Also, what is defined as a model component leaves room for interpretation, introducing different practices between banks and, as a consequence, RWA variability.
• The proposed approach is rather complex and will lead to many operational issues. There is a potential misalignment with the IFRS 9 and stress testing methodologies. For both the IFRS 9 and the downturn methodology, a relationship between economic factors and LGD (components) is investigated and determined. Yet, because the analysis is prescribed in such detail for the downturn LGD, it may result in a very different relationship with economic factors compared to the relationship used under IFRS 9 for a given portfolio. This not only means a duplication of work and operational activity, but also comes across as inconsistent and counter-intuitive to model stakeholders. Instead, it is preferable if the basis of the scenario analysis for IFRS 9 can be re-used as the starting point for the downturn LGD methodology.

• The steps of the statistical correlation analysis are prescribed in detail. However, the final determination of the nature of the economic downturn also depends on the role of the expert panel and on qualitative criteria. As such, different financial institutions may still arrive at different assessments and hence at different downturn scenarios. It is therefore questionable whether the complex, detailed prescribed analysis contributes to the aim of harmonisation. A simpler approach not only reduces time-consuming complexity, but also leaves less room for subjective interpretation.
We prefer taking the default moment as starting point.
In order to model Loss Given Default, information is generally brought back to the moment of default. Using a different moment, the moment of model component realization, for the downturn methodology instead increases the complexity and the operational activities during model development. The analysis is essentially doubled, since both the default moment and the realization moment would need to be investigated.

Furthermore, in our view the model component realization moment may also introduce subjectivity into the modelling process. The default moment is clear-cut given defined default triggers and leaves little room for interpretation. By contrast, the realization moment depends on the chosen modelling approach, in particular on which model components are chosen for a given model. For example, when multiple cashflows occur at different moments throughout the recovery process, what is the realization moment for the LGD_liquidation model component? Or when the local business process does not have a clearly defined moment at which it is formally decided that the customer can no longer cure, what is the realization moment of a non-cured default case for the probability of cure (w_cure)?
As such, different financial institutions may come up with different definitions of the model component realization moment. As a result, using the realization moment not only increases complexity compared to using the default moment, but may also increase the room for interpretation and thus potentially reduce the desired harmonisation.
Example:
We are in favor of modelling to the moment of default. Nevertheless, as an example, within residential mortgage portfolio modelling the moment of liquidation is very relevant too. The liquidation can occur under different economic circumstances than those at the moment of default. Hence, even when we model to the moment of default, the average time to liquidation is factored in and the moment of realisation is thereby taken into account.
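The mortgage example above can be sketched as follows: even when modelling at the moment of default, the economic conditions at liquidation can be reflected by shifting the macro factor by the average time to liquidation. The two-year lag and the house price index values are hypothetical.

```python
# Sketch of the idea above: shift the macro factor by the average time to
# liquidation so that conditions at the realisation moment are reflected,
# while still modelling to the moment of default. Figures are hypothetical.

AVG_TIME_TO_LIQUIDATION = 2  # years, hypothetical average

house_price_index = {2008: 100, 2009: 92, 2010: 85, 2011: 88}

def index_at_liquidation(default_year: int) -> int:
    """Macro factor value at the expected liquidation moment."""
    return house_price_index[default_year + AVG_TIME_TO_LIQUIDATION]

print(index_at_liquidation(2008))  # 85 (the 2010 index, not the 2008 one)
```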
Although it is relevant to know how long an economic downturn lasts and in which period of it the lowest point is reached, we think that for the sake of simplicity and better comparability between institutions a one-year duration is indeed a good idea. We could not envisage obvious situations in which a one-year duration is not suitable.
Estimating the relationship between macro-factors and model components will show weak relationships for some model components; this could lead to different macro-factors being selected within countries for the same portfolios.
It could be valuable from a harmonization effort (reducing unwarranted risk weight variation) and from an operational point of view to articulate which market data for macro factors should be used.
For internal loss data, requiring 20 years of data seems excessive. Most institutions do not have that many years of past data and would therefore have to make estimates instead. Different institutions will make different estimates, contributing to unwarranted risk weight variation.
To accommodate the build-up of data, the requirement could start at 10 years. From then on, an additional year of data should be added to the dataset every year.
We identified three possible solutions:
1) Every institution builds a downturn scenario model in which it also identifies the downturn state of various macro-economic factors (e.g. house prices, GDP decline and the unemployment rate). The LGD corresponding to the downturn state is estimated directly.
2) Every institution builds a downturn scenario model in which it also identifies the downturn state of various macro-economic factors (e.g. house prices, GDP decline and the unemployment rate). Every year the distance from the actual state (e.g. the current unemployment rate) to the downturn state (the unemployment rate in the downturn scenario) is calculated. This distance is the input for determining the downturn factor. Every model should be approved by the CA, while it does not need to be approved every year.
3) An alternative solution might be that the European Systemic Risk Board (ESRB) provides the downturn scenarios for all individual jurisdictions, as they also do for the European stress tests. This will truly lower the unwarranted risk weight variation. Every year the institutions calculate the distance to the downturn scenario (the distance from the actual economic factors to the downturn factors). This yearly update on its own should not be seen as a model change requiring CA model approval. Furthermore, the RWA for specific portfolios becomes less predictable long-term due to the external dependency. This can be a challenge when adhering to the use test.
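The yearly distance calculation in solutions 2) and 3) could be sketched as below. The macro factors, their downturn and current values, the per-factor sensitivities and the linear mapping to an LGD add-on are all hypothetical assumptions for illustration; an actual implementation would follow the institution's approved model.

```python
# Illustrative sketch of the distance-to-downturn-state idea (solutions 2/3).
# All factor values and sensitivities are hypothetical assumptions.

downturn_state = {"unemployment": 9.0, "gdp_growth": -3.0}
current_state = {"unemployment": 5.5, "gdp_growth": 1.8}

# Hypothetical sensitivity of the LGD add-on to each factor's distance.
sensitivity = {"unemployment": 0.02, "gdp_growth": 0.01}

def downturn_factor(current: dict, downturn: dict) -> float:
    """Map the distance to the downturn state to an additive LGD add-on."""
    addon = 0.0
    for factor, downturn_value in downturn.items():
        distance = abs(downturn_value - current[factor])
        addon += sensitivity[factor] * distance
    return addon

print(round(downturn_factor(current_state, downturn_state), 3))  # 0.118
```

Under solution 3), only `downturn_state` would come from the ESRB; the yearly recalculation of the distance would then not constitute a model change.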
We certainly think that Article 6 should pin down the steps for the joint impact analysis as described in the text box. To reach harmonisation between institutions, the proposed model component approach should be fairly prescriptive; otherwise there will be too much room for institutions to come to different interpretations of this approach and thus to different outcomes.
Nevertheless, despite a high degree of prescriptiveness, we wonder whether the model component approach will lead to the desired high level of harmonisation between institutions, because sufficient sources of subjectivity and divergence remain. For example, it is required to use 20 years of data. Since few institutions will have realized values of model components for such a long period, the use of estimated values of model components is needed. Since the relationship between model components and economic factors is often weak, it will be hard to build a quantitative model that produces precise estimates. Another possible source of divergence is the heavy reliance on experts in the assessment of the dependency between model components and economic factors, and in the determination of the MoC.

The approach can be seen as counter-intuitive, as it partly uses the long-run average value of a model component instead of a value driven by a downturn scenario. Furthermore, different model components may have their worst observation at different moments in time, because they react to an economic crisis in different (time-lagged) ways. Simply taking the worst value of each model component from different moments in time leads to an artificially and overly conservative downturn LGD value that does not occur in practice, because the compensating effects between model components are ignored.
• The current phrasing of Article 6 leaves largely open how the overall, joint impact should be determined. This means that after the complex, time-consuming individual analysis steps described in the previous articles, different financial institutions may still combine them into different overall downturn scenarios. Thus, the subjectivity and RWA variance caused by the downturn LGD are not necessarily reduced by this approach, and the additional effort spent on the increased complexity of the earlier steps may be in vain.
• If Article 6 is specified further, with the steps pinned down, then the complexity of the overall downturn methodology increases further. Yet the outcome can be seen as counter-intuitive, as it partly uses the long-run average value of a model component instead of a value driven by a downturn scenario. Furthermore, different model components may have their worst observation at different moments in time, because they react to an economic crisis in different (time-lagged) ways. Simply taking the worst value of each model component from different moments in time leads to an artificially and overly conservative downturn LGD value that does not occur in practice, because the compensating effects between model components are ignored.
Although the relationship between economic factors and the portfolio could be estimated on a timeframe for which data are available and then calculated for the downturn values, we still expect some issues. For example, complexity can be expected depending on portfolio size, growth and the number of defaults. We fear that this would typically result in less exact estimates, which would introduce more subjectivity and would clearly be a step backwards.
No. The impact of a downturn on the CF could be limited, and for some models even non-existent, depending on the product.
We suggest that per model, via a year-to-year test, it is checked whether there actually is a significant effect of a downturn on the CFs. If so, the institution should model it. If not, the institution should not be forced to take additional modelling steps; the downturn CF then equals its long-run average equivalent.
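The per-model check suggested above could look like the sketch below: compare realised CFs in downturn years against the other years and only model a downturn CF when the uplift is material. The CF observations, the downturn window and the materiality threshold are all hypothetical.

```python
# Sketch of the suggested per-model check: is the downturn effect on the
# CF material? Data, downturn years and threshold are hypothetical.

cf_by_year = {2006: 0.71, 2007: 0.72, 2008: 0.73, 2009: 0.74, 2010: 0.72}
downturn_years = {2008, 2009}
materiality_threshold = 0.05  # minimum CF uplift that warrants modelling

def downturn_effect_material(cf_by_year, downturn_years, threshold):
    """Compare the average CF in downturn years with the other years."""
    downturn = [v for y, v in cf_by_year.items() if y in downturn_years]
    other = [v for y, v in cf_by_year.items() if y not in downturn_years]
    uplift = sum(downturn) / len(downturn) - sum(other) / len(other)
    return uplift > threshold

# No material effect here: the downturn CF equals the long-run average.
print(downturn_effect_material(cf_by_year, downturn_years,
                               materiality_threshold))  # False
```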
Regarding which document should describe the downturn, we have no preference on whether the draft GLs describe the downturn methodology in detail or only refer to the RTS. We would not favour similar text in two different documents, as this might lead to differences in future updates.
As stated in the answer to question 1, the steps in the process of arriving at PD, LGD and EAD parameters that contribute most to the unwarranted risk weight variation should be addressed most. These steps could benefit from additional detailed rules. However, if additional detail is required in certain steps while a fair amount of subjectivity is retained in others, one could argue that the additional detailed rules only add to the operational burden without harvesting the benefits.
A step in the process that is still quite subjective is the determination of what a downturn actually is, or how severe it is, while for institutions present in the same markets the downturn should be quite similar (although the impact of such a downturn could differ).
So it makes sense to take less detailed, but clear and objective steps in determining the economic factors in a downturn scenario. This would lower the complexity and reduce the unwarranted risk weight variation.
The Reference Value approach allows alignment with the IFRS 9 and stress testing methodologies because it is less prescriptive in its details. At the same time, a simple, clear and objective reference value (such as the worst 2-year average) reduces the unwanted RWA variance. From a cost-benefit perspective, this is preferable over the model component approach.
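The simplicity of such a reference value can be illustrated with a short sketch: the worst average LGD over any two consecutive years in the observation period. The yearly LGD series is hypothetical.

```python
# Sketch of a worst 2-year average reference value. The yearly realised
# LGD series is hypothetical.

yearly_lgd = [0.20, 0.22, 0.31, 0.35, 0.24, 0.21]  # e.g. 2005-2010

def worst_two_year_average(series):
    """Highest average LGD over any two consecutive years."""
    return max((a + b) / 2 for a, b in zip(series, series[1:]))

print(round(worst_two_year_average(yearly_lgd), 2))  # 0.33
```

Because the calculation is objective and the same for every institution, it leaves little room for interpretation, which is exactly the property argued for above.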
The Supervisory Add-on approach is seen as oversimplified, to the point where it loses the risk sensitivity of the model.
As this question was answered by individual institutions, some answers partly overlap:
• The downturn period is identified based on a combination of internal and external (macro-economic) data. If a downturn is observed in sufficient internal data, the downturn adjustment is determined based on the worst average observed values in the downturn period. In case of insufficient internal data, benchmarking is used based on, among other things, external data.
• Note that the downturn adjustment is analysed per model component, as there can be different drivers of the cure rate and the recovery rate. However, the average observed overall LGD is also considered in case the worst observation per model component occurs at different moments in time. This is to avoid overly conservative downturn LGDs.
• For some models we apply a multiplication factor to translate the average LGD into a downturn LGD.
• For some models we identify the downturn years for cure rate and loss-given-loss and stress the model component or its underlying risk drivers.
• Regarding the model approach: the model components or underlying risk drivers are stressed. Stress factors are based on historical data or input of experts.
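One of the practices listed above, the multiplication factor, could be sketched as follows. The factor of 1.3 and the cap at 100% are illustrative assumptions, not values any institution reported.

```python
# Sketch of the multiplication-factor practice mentioned above: scale the
# long-run average LGD into a downturn LGD. Factor and cap are hypothetical.

def downturn_lgd(average_lgd: float, factor: float = 1.3) -> float:
    """Scale the long-run average LGD; cap the result at 100%."""
    return min(average_lgd * factor, 1.0)

print(downturn_lgd(0.25))  # 0.325
```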
Otto ter Haar