We expect a material impact on our models from the proposed Guidelines, but generally welcome this as providing a more consistent interpretation of CRR rules across the industry and supervisors. The number of material change requests depends on the interpretation of the RTS on the materiality of changes to IRB rating systems: simply introducing the MoC framework without changes to the level of conservatism could be interpreted as a material change requiring prior approval by Competent Authorities. We do, however, consider the proposed changes sufficiently extensive that the implementation timeline is challenging.
We do not see any operational constraints on calculating the one-year default rate quarterly. However, we would appreciate clarity from the EBA as to the purpose this would serve. Is the intention to monitor changes in default risk at the respective level (portfolio, calibration segment or grade)? Or is it to monitor the appropriateness of the calibration? The latter would require quarterly calculation or updating of long-run average default rates. Clarification would help us ensure we can best meet the EBA's expectation.
One motivation of the CRR requirement to reflect drawings after default in the Credit Conversion Factors (CCFs) might have been to capture the higher potential for additional drawings by clients with higher free limits. It should be noted, however, that this can lead to distortions and increased uncertainty in the conversion factor and LGD estimates:
- The calibration of LGD parameters suffers from uncertainties around non-finalised cases. If drawings after default are reflected in conversion factor estimates, the same applies to LGD parameters. Consequently, a model-based approach to simulate additional drawings after default will also be required for non-finalised cases in the estimation of conversion factors. For LGD such an approach cannot be avoided, but we feel that the impact of such a simulation on risk parameters should be minimised as far as possible to limit the potential bias resulting from the associated uncertainties.
- The text of the Guidelines seems to specify that use of a credit line after default should be handled in an asymmetric way for drawings (i.e. increases of the outstanding) and repayments (i.e. decreases of the outstanding). There are, however, common situations where the client is formally classified as defaulted but makes normal use of its credit line. In such cases any drawing would increase the denominator, while repayments would only be counted in the numerator, effectively inflating the denominator with every drawing and thus leading to inappropriate LGD estimates. Such repeated drawing-repayment-drawing cycles could not be appropriately balanced in the conversion factor, or could result in distorted parameter settings. This becomes especially evident in the case of restructured credit lines, where such behaviour would be the norm. In addition, the conversion factor before default may not be a good reflection of the limits available after restructuring, so an appropriate balance between the conversion factor and the LGD cannot be assured.
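The effect described in the bullet above can be illustrated with a stylised calculation (a sketch under our own assumptions, not a prescribed treatment): each post-default drawing is added to the realised-LGD denominator, while each repayment is counted only as a recovery in the numerator. The same economic loss then produces an ever lower realised LGD as the number of draw/repay cycles grows:

```python
def realised_lgd_asymmetric(ead, final_loss, cycle_amount, n_cycles):
    """Stylised realised LGD when post-default drawings inflate the
    denominator and repayments count only as recoveries (numerator).

    Hypothetical asymmetric treatment, for illustration only.
    """
    # Exposure base: EAD plus every additional drawing after default.
    denominator = ead + cycle_amount * n_cycles
    # Recoveries: each repayment, plus whatever is repaid at workout end.
    recoveries = cycle_amount * n_cycles + (ead - final_loss)
    loss = denominator - recoveries  # equals final_loss by construction
    return loss / denominator

# Same economic loss of 20 on an EAD of 100, with 0 vs 10 cycles of
# drawing and repaying 50 on a normally used credit line after default.
lgd_no_cycles = realised_lgd_asymmetric(100, 20, 50, 0)    # 0.20
lgd_ten_cycles = realised_lgd_asymmetric(100, 20, 50, 10)  # 20 / 600, about 0.033
```

An identical loss thus yields a realised LGD roughly six times lower once the client has cycled the line ten times, which is the distortion the bullet describes.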
Based on this reasoning, we feel that reflecting additional drawings after default in the LGD, instead of in the conversion factors, would provide a more stable and plausible framework. We encourage the EBA to investigate these issues in the estimation of conversion factors further and potentially to develop additional guidance.
The granularity of a rating scale should be sufficient to allow for a meaningful risk differentiation for portfolios with different risks. We do not believe that a more granular rating scale reduces RWAs due to the concavity of the risk weight formula: the concavity is only present when risk weights are plotted against a PD scale, whereas rating scales are usually on a log(PD) scale and rating distributions tend to be normal on that scale. Risk weight curves plotted against a log(PD) scale are close to linear (see the function of corporate risk weights versus log(PD) below); therefore the granularity of the rating scale should not have a significant impact on RWA levels.
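This can be checked numerically. The sketch below implements the CRR Article 153 corporate risk-weight function (with LGD fixed at 45%, maturity at 2.5 years, and the 1.06 scaling factor; the grid and parameter choices are ours, for illustration only) and evaluates it on a grid that is equidistant in log(PD), as rating scales typically are:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf is Phi, N.inv_cdf is Phi^-1

def corporate_risk_weight(pd_, lgd=0.45):
    """CRR Art. 153 corporate risk weight, maturity M fixed at 2.5 years."""
    # Asset correlation R interpolates between 0.24 and 0.12 as PD rises.
    w = (1 - exp(-50 * pd_)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Conditional expected loss at the 99.9th percentile, minus the PD.
    k = lgd * (N.cdf((N.inv_cdf(pd_) + sqrt(r) * N.inv_cdf(0.999))
                     / sqrt(1 - r)) - pd_)
    # Maturity adjustment; with M = 2.5 it collapses to 1 / (1 - 1.5 b).
    b = (0.11852 - 0.05478 * log(pd_)) ** 2
    k /= 1 - 1.5 * b
    return k * 12.5 * 1.06

# PD grid equidistant in log(PD): 0.03% (the CRR floor) doubling to ~15%.
pds = [0.0003 * 2 ** i for i in range(10)]
rws = [corporate_risk_weight(p) for p in pds]
```

Plotting `rws` against `log(pds)` reproduces the near-linear curve referred to above; this is a numerical illustration of the argument, not a supervisory calculation.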
a. definition of risk drivers
The discriminatory power of risk drivers (i.e. the ability of the model to distinguish outcomes) should be assessed independently of the economic conditions, in order to obtain a model that is stable throughout the economic cycle. Since risk drivers are typically identified in a point-in-time approach, using economic factors would not yield good discriminatory power in this approach. Economic factors are instead expected to vary over time, i.e. different modelling approaches are needed to reflect them in the rating (e.g. the use of overlays). In addition, economic factors are usually available only at portfolio level and are thus not suitable for predicting differences between customers, but rather (or only) for adjusting overall PD levels; this is a further reason to decouple them from the idiosyncratic risk drivers.
b. definition of the number of grades
Generally, a sufficiently broad and granular rating scale would cover all relevant economic conditions.
c. definition of the long-run average of default rates
Long-run average default rates are determined based on historical experience, without adjustment for economic conditions.
Para 56 (a): The treatment of short-duration or terminated contracts is undefined. Currently only the CRR definition of the default rate exists, and it provides no guidance on how to treat short-term contracts. It is recognised that a significant portion of short-term contracts, or of contracts that mature before the end of the 12-month observation period, could result in an underestimation of losses. We therefore encourage the EBA to provide a common definition of how short-term contracts should be treated in the default rate calculation. A proposal that would reasonably reflect this is as follows:
a) The denominator should consist of the number of all non-defaulted obligors observed at the beginning of the one-year observation period (with any credit obligation…)
• counted in full if they remained non-defaulted for the whole observation period or defaulted during it;
• counted pro rata for the time they were present if they 'left' the population during the observation period (i.e. as the number of days they remained in the population divided by 365).
b) The numerator should consist of the number of obligors considered in the denominator with at least one default during the observation period.
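The proposal in (a) and (b) can be written out as a short calculation. A minimal sketch, assuming each obligor that was non-defaulted at the start of the window is represented by the number of days it remained in the window and a default flag (names and data layout are illustrative):

```python
def one_year_default_rate(obligors):
    """Default rate under the pro-rata proposal above.

    `obligors` is a list of (days_in_window, defaulted) tuples for
    obligors that were non-defaulted at the start of the window.
    """
    denominator = 0.0
    numerator = 0
    for days, defaulted in obligors:
        if defaulted or days >= 365:
            # Full weight: stayed the whole period, or defaulted in it.
            denominator += 1.0
        else:
            # Left the population early: pro-rata weight.
            denominator += days / 365.0
        if defaulted:
            numerator += 1
    return numerator / denominator

# 2 defaults, 1 full-period survivor, 1 obligor leaving after 73 days:
# denominator = 2 + 1 + 73/365 = 3.2, rate = 2 / 3.2 = 0.625
rate = one_year_default_rate([(120, True), (365, True), (365, False), (73, False)])
```

Without the pro-rata term, the early leaver would either inflate the denominator with a full weight or drop out entirely; the weighting keeps the rate consistent with the exposure time actually observed.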
No, we currently do not have such processes in place.
Yes, different approaches are in use. They are described in detail in the corresponding questionnaire submitted to the EBA.
We welcome the increased transparency expected from the Margin of Conservatism (MoC) framework. It should be recognised that the quantification of the MoC will usually include an element of judgement (including statistical errors). We would therefore recommend that the EBA allow materiality considerations to be taken into account when assessing the MoC, based on the proposal below.
In this regard, another important element to note is that assessing the size of the MoC at risk parameter level would require very extensive sensitivity studies in the case of more complex models. These may involve many risk drivers that are subject to some conservative treatment (e.g. to account for missing information or outliers). While the overall effect of this conservatism may be comparatively low at risk parameter level, the operational burden of quantifying it relative to a most likely value would be significant. We would therefore propose introducing a materiality threshold defined as follows:
If the effect of an applied conservatism can reasonably be assessed to be lower than (e.g.) 5% at risk parameter level, reporting that conservatism under the MoC framework should not be mandatory.
Such conservatism would still be documented but be excluded from the reporting requirements.
This approach would, in our view, significantly reduce the operational burden of the MoC quantification, while still providing a meaningful increase in transparency around applied conservatism.
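As an illustration of how such a threshold could operate (the 5% figure comes from the proposal above; the use of a relative effect measure and the function name are our own assumptions):

```python
def moc_reporting_required(best_estimate, conservative_estimate,
                           threshold=0.05):
    """Return True if the conservatism applied to a risk parameter is
    material enough to require reporting under the MoC framework.

    Illustrative only: the relative effect measure and the 5% default
    threshold are hypothetical choices, not EBA requirements.
    """
    effect = abs(conservative_estimate - best_estimate) / best_estimate
    return effect >= threshold

# A PD raised from 1.00% to 1.03% (3% relative effect): documented
# but, under the proposal, excluded from MoC reporting.
print(moc_reporting_required(0.0100, 0.0103))  # False
# A PD raised from 1.0% to 1.2% (20% relative effect): reported.
print(moc_reporting_required(0.010, 0.012))    # True
```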
We generally welcome more clarity and explanation regarding the assessment of the representativeness of data. However, we believe the Guidelines should allow for expert judgement and qualitative assessment when deciding on data representativeness, for the following reasons:
- Recent or planned future changes in collection practices or the default definition may not yet have materialised in sufficient empirical data to perform statistical tests;
- The effect of a process change may be difficult to quantify due to possible time lags and correlation with other factors, such as portfolio dynamics and macroeconomic changes.
We therefore believe more flexibility should be given in the assessment of representativeness, defined as a combination of quantitative and qualitative assessment.