We expect that most, if not all, models will require a rebuild or at least enhanced documentation and justification. This is driven by the proposed changes to the LGD calculation (discount factor) and the specific assessment of the Margin of Conservatism (MoC). On this basis, we recommend a proportionate approach to rolling out the requirements, accounting for the materiality of models and the materiality of non-compliance. Given the resource constraints we currently observe at the competent authority, we also recommend re-assessing the implementation timeline after finalisation of the guidelines, potentially allowing a gradual timeline for compliance.
No, we don’t see any operational limitations in this respect.
We generally agree with the proposed treatment.
We are unsure whether such benchmarks would reduce variability. If introduced at all, they should be limited to the level of regulatory reporting (e.g. Pillar 2), as changes within institutions’ rating systems would impose a material operational burden with the potential for unintended consequences.
Economic conditions are accounted for in the selection of risk drivers by considering a sufficiently long history as well as the historical evolution of macro factors, loss rates and observed default rates. The long-run default rate may be augmented depending on whether a downturn period is captured by the data. Economic conditions do not impact the number of grades, which has remained constant since the bank obtained IRB status.
We agree with the proposed policy for calculating average observed default rates (ODR). Short-term contracts are currently excluded from the calculation of the ODR central tendency.
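For illustration only, the calculation described above can be sketched as the average of yearly one-year default rates, with short-term contracts removed from both numerator and denominator. The cohort structure, field names and exclusion rule below are our own simplifying assumptions, not a prescribed methodology.

```python
# Illustrative sketch of a long-run average one-year ODR (central tendency).
# Field names and the short-term exclusion mechanics are assumptions.

def central_tendency(cohorts, exclude_short_term=True):
    """Average the one-year default rates across yearly cohorts.

    Each cohort is a dict with keys:
      'defaults'            - obligors defaulting within the one-year window
      'obligors'            - non-defaulted obligors at cohort start
      'short_term_defaults' - defaults attributable to short-term contracts
      'short_term_obligors' - obligor count for short-term contracts
    """
    rates = []
    for c in cohorts:
        defaults = c["defaults"]
        obligors = c["obligors"]
        if exclude_short_term:
            # Remove short-term contracts from numerator and denominator alike.
            defaults -= c.get("short_term_defaults", 0)
            obligors -= c.get("short_term_obligors", 0)
        rates.append(defaults / obligors)
    # Simple (unweighted) average of yearly default rates.
    return sum(rates) / len(rates)
```

In practice the central tendency may instead be obligor-weighted; the unweighted average is used here purely to keep the sketch short.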
We currently use migration analysis to understand the model’s rating philosophy, assessing the volatility of credit grade migrations over time. For hybrid through-the-cycle (TTC) models, we expect this volatility to lie within a pre-determined threshold.
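The monitoring step described above can be sketched as building a one-year migration matrix and comparing a volatility proxy against a tolerance. Both the off-diagonal-mass metric and the threshold value below are illustrative assumptions on our part, not the institution's actual methodology.

```python
# Illustrative sketch of rating-philosophy monitoring via grade migrations.
# The volatility metric and threshold are assumptions for demonstration.

def migration_matrix(transitions, n_grades):
    """Row-normalised one-year migration matrix from (from_grade, to_grade) pairs."""
    counts = [[0] * n_grades for _ in range(n_grades)]
    for frm, to in transitions:
        counts[frm][to] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

def off_diagonal_mass(matrix):
    """Average probability of migrating away from a grade; a simple volatility proxy."""
    n = len(matrix)
    return sum(matrix[i][j] for i in range(n) for j in range(n) if i != j) / n

THRESHOLD = 0.20  # illustrative tolerance for a hybrid TTC model

def within_tolerance(transitions, n_grades):
    """True if observed migration volatility stays below the pre-set threshold."""
    return off_diagonal_mass(migration_matrix(transitions, n_grades)) <= THRESHOLD
```

A more TTC-leaning model would concentrate mass on the diagonal (low migration volatility), whereas a PIT-leaning model would show larger off-diagonal mass as grades track the economic cycle.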
Retail models for the home regulator follow an approach leaning towards point-in-time (PIT), while all other models (including retail models for host regulators) follow an approach leaning towards through-the-cycle (TTC).
We generally agree that model deficiencies, including data errors and other sources of estimation uncertainty, should be identified. We also consider the proposed categories to be comprehensive, with the exception of category C, which should be handled at the model development stage. With respect to the application of a Margin of Conservatism (MoC), we note that it may not always be possible to quantify model deficiencies precisely, and that in any case a materiality threshold should be introduced. On this basis, the introduction of a MoC will contribute little to the comparability or accuracy of risk measures, as different institutions will apply different approaches to its calculation. For example, even for the same portfolio, institutions may compute different MoC depending on their respective historical data availability, resulting in different risk measures. Finally, with respect to the monitoring of MoC, operationally this could only be performed as part of the annual model review, where the applicability and quantum of the adjustment should be re-assessed. In most cases it will not be possible to monitor the evolution of the MoC on an ongoing basis as part of the regular model performance monitoring process.
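The interaction between quantified deficiencies and a materiality threshold can be sketched as follows. The additive add-on form, the threshold value and the capping at 100% are all hypothetical choices made for illustration; they are not drawn from the guidelines or from any institution's framework.

```python
# Illustrative sketch: applying a Margin of Conservatism (MoC) to a PD
# estimate as an additive add-on, ignoring immaterial deficiencies.
# The threshold and add-on form are assumptions for demonstration only.

MATERIALITY_THRESHOLD = 0.001  # illustrative: ignore add-ons below 10 bps

def apply_moc(pd_estimate, deficiency_addons):
    """Add only those quantified deficiency add-ons deemed material.

    pd_estimate:       best-estimate PD before conservatism
    deficiency_addons: quantified add-on per identified deficiency
    """
    material = [a for a in deficiency_addons if a >= MATERIALITY_THRESHOLD]
    # Cap the adjusted PD at 1.0, since a probability cannot exceed 100%.
    return min(pd_estimate + sum(material), 1.0)
```

The sketch also illustrates the comparability concern raised above: two institutions quantifying the same deficiency differently (or setting different thresholds) will report different adjusted PDs for identical portfolios.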
We support the principle of representativeness, but prescriptive requirements may conflict with the historical nature of the data and render such an assessment highly judgmental. Instead, concerns about the representativeness of data should be addressed through an appropriate MoC.