The operational impact will be very significant. We also expect a capital impact (increased risk weights), as the better-quality assets will be affected the most due to increased conservatism.
Use Test compliance becomes more difficult: a wider gap may emerge between own estimates (Pillar 2) and the model output used for regulatory capital calculations.
We expect that many models (possibly all) will be impacted because of the changes in the default definition, the methodological changes in the margin of conservatism, and the expected downturn guidance.
We do see operational impact, given that most monitoring is currently done on an annual basis. We suggest considering limiting the increased requirement to large models only (based on size, risk, strategic and other considerations).
Adding missed interest payments after default to the numerator of the realised LGD [*] while also discounting all cash flows leads to double counting of losses.
It also seems to create a discontinuity in the measured exposure value (between EAD and exposure in default) upon a status change to default, which is methodologically unsound. LGD and LiD will be hard to reconcile because of changes in the denominator [**].
Graph illustrating example [**] (see next page)
Example at [*]:
Suppose that a client has an outstanding loan of 100K at 1-7-2015, to be repaid at that date; the interest rate equals 6% and the discount rate 5%. Suppose that the client defaults at 1-7-2015 through an unlikely-to-pay signal and repays in full at 1-7-2016. This means that the bank accrued interest during default of 6% x 100K = 6K.
Assuming that we capitalised interest of 6K at 1-8-2015 under the accounting framework, because we expect the interest to be repaid in the end: the present value of the cash flow of 100K at 1-7-2016 is about 95K (100K/1.05), resulting in an economic loss of 5K (100K − 95K). The missed interest results in a loss of 6K. We would then calculate the total loss as 5K + 6K = 11K, resulting in an LGD of 11K/100K = 11%. This is roughly twice the actual loss.
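The arithmetic of this example can be sketched as follows (a hypothetical Python illustration; all figures are taken from the example above, and the variable names are ours):

```python
# Sketch of the double-counting effect in example [*].
outstanding = 100_000   # loan outstanding at default (1-7-2015)
interest_rate = 0.06    # contractual interest rate
discount_rate = 0.05    # discount rate for economic loss

# The client repays the full principal one year after default;
# discounting that cash flow gives the economic loss.
pv_repayment = outstanding / (1 + discount_rate)   # about 95,238
economic_loss = outstanding - pv_repayment         # about 4,762 (~5K)

# Interest missed during the year in default.
missed_interest = interest_rate * outstanding      # 6,000

# Adding the missed interest on top of the discounting effect
# counts the cost of the delayed repayment twice.
double_counted_loss = economic_loss + missed_interest  # about 10,762 (~11K)
lgd = double_counted_loss / outstanding                # about 10.8% (~11%)

print(f"economic loss: {economic_loss:,.0f}")
print(f"LGD with double counting: {lgd:.1%}")
```

The discounting of the repayment already captures the time value lost during default, so adding the 6K of missed interest on top roughly doubles the measured loss.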
Example at [**]
Take a loan of 100 outstanding, with default in month 8 (outstanding 105), loss financing of 25 in month 10 (outstanding 105 + 25 = 130), and a write-off in month 12 (with 80 net cash-in in month 12), giving a loss of 130 − 80 = 50. Suppose for the simplicity of the example that costs are nil.
The LGD is 50/130 = 38%, while the LiD in months 8 and 9 is 48% and from month 10 onwards 38% again (see the green line). There is a substantial operational impact / IT burden in determining where the double counting originates.
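The denominator jump can be sketched as follows (a hypothetical Python illustration; the monthly amounts come from the example above, and the dictionary layout is our own):

```python
# Sketch of the LGD vs LiD reconciliation issue in example [**].
loss = 130 - 80  # write-off of 130 against 80 net cash-in => loss of 50

# Outstanding exposure per month after default (default in month 8;
# loss financing of 25 added in month 10).
outstanding = {8: 105, 9: 105, 10: 130, 11: 130, 12: 130}

# Final LGD is measured against the exposure including loss financing.
lgd = loss / 130

# LiD re-measures the same loss against the current outstanding, so the
# ratio jumps down when the denominator grows in month 10.
lid = {month: loss / exp for month, exp in outstanding.items()}

for month, ratio in lid.items():
    print(f"month {month}: LiD = {ratio:.0%}")
print(f"final LGD = {lgd:.0%}")
```

The same realised loss of 50 thus yields 48% in months 8 and 9 but 38% from month 10 onwards, purely because the denominator changes.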
We interpreted the term “benchmarks” as standards. Please indicate if our understanding is correct.
If our understanding is correct, then:
Banks operate in a selection of the total market. Predicted PD depends on the rank order within each selected pool, so by definition comparability is limited and hard to interpret. Only name-by-name benchmarking mitigates this selection bias. Benchmarking could therefore reduce variability, on the condition that it is operated on a name-by-name basis (e.g. similar to a credit bureau).
The introduction of the probation period and conditions for a cure already reduced artificial cures. This seems to introduce a prolonged probation period for the purpose of LGD estimation.
Conceptual: this creates methodological issues because the reference data for PD estimation no longer correspond with the reference data for LGD and EAD estimation, which leads to conceptually inconsistent results.
Capital impact: it seems a conservative approach, but it makes it less transparent which part of the LGD is the best estimate and which part is the margin of conservatism.
Operational impact: all systems need to be adjusted, and all models need re-calibration.
[a] Risk drivers for TTC PD models are predominantly micro-economic in nature and aim at rank-ordering obligors/customers/products that face the same economic conditions, depending on the type of portfolio. Risk drivers, such as financials, move with the macro-economic situation. Macro-economic conditions may impact the model calibration level, but not its design (see 5.4.C).
[b] The number of grades is a function of the rank order created by the PD model. The number of grades depends on how statistically significant the differences between the various grades are, not on economic conditions.
[a] We believe that both approaches are workable, as long as the time horizon is long enough to calculate the long-run average. This is therefore not considered a large source of unintended risk weight variation. Moreover, adding weighting schemes may actually add to unintended variation.
[b] We believe a sample of the portfolio should reflect its full diversity, including short-term contracts. A performance window of 12 months is always used for the determination of into-default rates. Months on book is very often a driver in PD models, capturing potential changes in risk profile during the lifecycle of a loan.
The current approach is that for regulatory models the long-run average is used. Point-in-time (PIT) and through-the-cycle (TTC) monitoring of the calibration level are both performed on a yearly basis.
We agree that the different philosophies (TTC, PIT and every hybrid form in between) can create unintended risk weight variation. We therefore believe it is important that the EBA formulates a clear and unambiguous definition of these different flavours. The individual institutions can then indicate which philosophy (or philosophies) they use. Please note that Dutch institutions already have methodology standards, stating how TTC and PIT are defined within the institution, that were required and approved by the Dutch supervisor DNB.
Secondly, it is relevant to investigate what level of unintended risk weight variation the different philosophies create: if it is significant it should be addressed; if not, they could co-exist.
For retail portfolios, the models generally have a more hybrid nature, given that behavioural risk drivers such as payment-overdue information are typically included in the model prediction.
For non-retail portfolios, the models show more TTC features, although drivers like profitability co-move with economic conditions and thus influence rating levels.
For future provisioning purposes as from January 2018, the IFRS 9 methodology will complement the Basel TTC level at the PIT (calibration) level.
[a] We agree. The level of risk weight variation – after implementing these proposed guidelines – will also be determined by the role of the ECB.
[b] Considering the challenging nature of estimating model risk, banks should be explicitly allowed to assess each of the four categories mentioned in a uniform way for less material models.
As an example, under article 105 the implementation of policy changes is a process without a clear cut-off. Tying the applicable policy to segments or sectors should be possible; for individual cases this matching is not reliable.
Could you confirm our reading of the article that major policy adjustments (such as entering or leaving a certain market segment) should be taken into account in calibration?
Could you confirm that article 109 should be read as applying at pool level and as intended to prevent the inclusion of future expectations when adjusting estimates?