Deutsche Bank

While the current prioritisation of work seems correct, it should be reviewed once negotiations on the implementation of the Level 1 text for SA-CCR and FRTB are finalised. This includes the transposition of the updated Basel standard on FRTB.
We support the EBA's proposal to assign products to risk categories. Most vanilla products will be assigned based on step 1. If this approach is followed, banks should map their internal product classifications based on the descriptions provided. As banks already assign products to asset classes in the uncleared derivatives IM calculation using ISDA-SIMM, this should not be an issue. Nor should there be any material deviation in assignment between banks, based on daily reconciliations. We therefore agree with the product-list-based approach in step 1, followed by a qualitative and quantitative approach in step 2 (where needed).
From our perspective, Bond Forwards on IG bonds and forward-starting Cross Currency Swaps primarily carry interest rate risk, and should therefore be classified under the IR risk category. Cross Currency Swaps whose start date is in the past (i.e. the near-leg notional exchange has already taken place) should be classified under the FX risk category. In addition, Inflation Swaps, Muni Swaps and Caps/Floors could also be explicitly included under the IR category.
We would not support a list of strict criteria. Banks should be allowed to identify vanilla products based on internal product types and classify them into risk categories. Banks already classify products for the CEM and UM calculations, and the UM reconciliations have shown that banks are consistent in categorising products by risk category.
A qualitative approach of identifying the primary risk driver and assigning products to a risk category should work in practice. As stated above, banks already use similar approaches to classify products for CEM and UM. Where needed, banks can use the quantitative approach.
Option 3 would be the most appropriate, as it uses volatility × sensitivity. Alternatively, for volatility, banks could use their internal vols, FRTB risk weights, or SA-CCR supervisory factors/vols. The analysis would be performed annually on several trades of a product type (covering multiple risk factors) and would deliver the primary asset class for every trade; if more than Z% of trades belong to a particular asset class, the product type is assigned to that risk class (or classes).
For option 3, Z% could be 40%. Anything below 40% would mean that the asset class is not a primary driver. Normally this would lead to a single asset class being above 40% and being the primary driver. If there are two primary drivers (each driving more than 40% of trades), those capture most of the risk of the product type; in this case, 50% of the notional could be assigned to each of the two risk classes to avoid double-counting. In some cases it is possible that no asset class is above 40%, in which case the product could be assigned to the “other” category, which already has a punitive supervisory factor. A sketch of this logic is given below.
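For illustration only, the following is a minimal sketch of how the volatility × sensitivity measure and the 40% threshold could be operationalised. The data structures, function names and figures are our own assumptions, not part of the EBA proposal.

```python
from collections import Counter

Z = 0.40  # threshold share of trades, as proposed above

def primary_asset_class(trade):
    """Return the asset class with the largest volatility x sensitivity."""
    scores = {}
    for rf in trade["risk_factors"]:
        measure = rf["volatility"] * abs(rf["sensitivity"])
        scores[rf["asset_class"]] = scores.get(rf["asset_class"], 0.0) + measure
    return max(scores, key=scores.get)

def classify_product_type(trades):
    """Assign a product type to one or two risk categories, else 'other'."""
    counts = Counter(primary_asset_class(t) for t in trades)
    n = len(trades)
    drivers = [ac for ac, c in counts.items() if c / n > Z]
    if len(drivers) == 2:
        return drivers  # assign 50% of notional to each, avoiding double-counting
    if len(drivers) == 1:
        return drivers
    return ["other"]    # no primary driver above 40% -> punitive category

# Example: ten bond-forward trades whose IR measure dominates (toy figures)
trades = [
    {"risk_factors": [
        {"asset_class": "IR", "volatility": 0.008, "sensitivity": 95_000},
        {"asset_class": "Credit", "volatility": 0.004, "sensitivity": 30_000},
    ]}
] * 10
print(classify_product_type(trades))  # -> ['IR']
```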
As stated above, anything with more than two primary drivers (or with no primary driver, i.e. all risk classes below 40%) should go into the “other” category, which already has a punitive supervisory factor.
Following on from our previous answers, the number of risk categories should be capped at two. We do not believe there is added value in a category for products with three primary risk drivers. Any products with more than two drivers may be better suited to the “other” category, which already has a punitive supervisory factor.
It is our understanding that the quantitative analysis should be done once per product type, with an annual review. Performing such analysis at trade level would add unnecessary complexity and could classify different trades of the same product type into different risk categories (leading to issues with netting, etc.). Moreover, running the analysis more frequently could cause a product type to switch risk categories back and forth, requiring investigation and justification and introducing instability in the calculation and volatility in capital requirements.
The suggested approach should work in a negative interest rate environment. However, for other products such as binary options, digital options and target profit forwards, such a modified Black-Scholes formula will not generate an appropriate delta. Banks should therefore be allowed to continue to use internal models for the delta calculation. Such models are already approved and used in end-of-day market risk VaR calculations, and more recently in the ISDA-SIMM uncleared derivatives IM calculation. The daily IM reconciliations show that even though models may differ between banks, the overall effect on IM (or, in the case of SA-CCR, on PFE) is not material. For banks not using internal models, the suggested approach seems the most appropriate of the available options.
Banks should be allowed to select a lambda per currency, based on market rates, such that the formula works for all trades in that currency. We would suggest refraining from introducing a requirement for a formal industry-wide benchmark for lambda, as each bank may have a different approach, which would make aggregation challenging. Similar to other market data calibrations, for example the existing stressed EPE window calibration, regulatory guidelines could be provided that allow each bank to implement its own solution. A sketch of such a shifted formula is given below.
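The following is a minimal sketch of a lambda-shifted Black-Scholes delta of the kind referred to above, assuming the shift enters as F + λ and K + λ (the displaced-lognormal convention); the parameter values are purely illustrative.

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def shifted_black_call_delta(F, K, sigma, T, lam):
    """Forward delta of a call (cap-like payoff) under a lambda-shifted
    Black model. lam is the per-currency shift, chosen so that F + lam > 0
    and K + lam > 0 for all trades in that currency."""
    d1 = (log((F + lam) / (K + lam)) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    return norm_cdf(d1)

# Illustrative only: a caplet struck at -0.25% on a -0.40% forward, lam = 1%
print(shifted_black_call_delta(F=-0.0040, K=-0.0025, sigma=0.30, T=2.0, lam=0.01))
```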
We agree that the definition of long and short positions in the CRR is sufficiently clear.
We agree, but we also note that, since a change in an instrument's circumstances would require a move in order to remain compliant with the regulations, it could be argued that there should be an implicit approval process that does not require full reclassification approval from the Competent Authority (CA) and the associated capital surcharge. For example, banks could notify the CA of a move related to a change in an instrument's circumstances and provide the associated documentation; if nothing further is requested within a certain timeframe, banks could assume the move to be approved.
Yes, as illiquidity is not a reason for re-classification.
The following are examples of exceptional circumstances requiring approval for reclassification:
- Bank restructuring that would result in the permanent closure of a trading desk.
- Market conditions that result in serious long-term disruptive impacts on the ability to trade.
- Changing nature of an instrument as described in this discussion paper.
It is possible that in certain situations there may be residual FX positions at legal entity level that are not captured in market risk. This could be due to unsold currency P&L for the reporting period across multiple currencies and legal entities that individually fall below the sell-off requirements for the entity. Any exposure from this scenario is therefore likely to be immaterial at a consolidated level.
No comment.
There is no capacity or benefit for banks to create a daily P&L process for non-trading books in order to revalue FX positions daily. For many accrual banking book positions it is difficult to obtain intra-month valuations. Revaluing only the FX positions on a daily basis is therefore not representative of the actual risk and would still show significant month-end jumps. An artificial risk construction tested against an artificial P&L therefore does not represent a reasonable test of the model.
It does not have an impact on the frequency, as IFRS 13 does not prescribe the frequency of accounting valuation.
In general any factor that leads to adjusting the carrying value can affect the valuation, e.g. loan loss provisions.
Restricting the number of notional trading desks to only one per risk type (FX, commodities) may prevent any application for the IMA if one part of a bank's business is unable to fulfil all IMA requirements.
In our view, notional trading desks should not be required to meet any of the qualitative requirements.
Consider, for example, a single notional desk for banking book FX; it would be either impossible or meaningless to meet the requirements:
- The business strategy and risk management structure would be for the entire bank – there is no meaning to having to meet this at the desk level
- Many “dealers” would be involved in this notional trading desk
- Limits would not be set at this level as it is an artificial construct with no business owner
- Reporting in isolation has no added value for the notional desk
- Lastly, the notional desk would not have a defined business plan
As such, to prevent regulatory arbitrage, notional desks should be limited to banking book positions, while trading book positions should not be allowed to sit on notional desks.
Backtesting and P&L attribution of positions which are not fair-valued are of limited value. Requiring a separate, artificial accounting approach for the purposes of the own funds calculation increases overheads and lacks any use test to control its accuracy.
If this artificial approach is not abandoned then two elements need to be taken into account:
- Non-FV positions from IMA notional trading desks need to be excluded; this would mean that there would need to be at least separate FV and non-FV notional desks.
- The quantitative requirements for IMA for the notional desks also need to be removed
We do not see a benefit in a daily P&L attribution process for non-trading book positions as they are not revalued daily, similar to our response to question 19.
The definition provided is sufficiently clear. Nevertheless, we are of the opinion that products such as variance swaps (future volatility exposure) should not be included in the exotic risk category, since an SBM vega charge can be computed and any residual risk would be capitalised via a 0.1% charge.
We do not see the necessity of complementing those definitions with a non-exhaustive list. Nevertheless, we do see that certain products, by virtue of the way they are hedged in the market, should not attract an RRAO charge and can therefore be excluded. This could be the case for standard CMS-related hedging mechanisms for CMS spread structures, which should not themselves attract additional RRAO charges. To provide an example, consider a multi-look CMS spread trade (assuming a 20Y maturity), which can be hedged with 20 one-look CMS spreads. Under the current rules all of these trades will attract RRAO, i.e. 21 times the individual notional, while in reality such a package shows an overall flat risk.
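Purely for illustration, the arithmetic of this example can be sketched as follows; the notional and the 0.1% weight are assumed figures for the sketch, not values taken from the rules.

```python
# Assumed figures, for illustration of the example above.
notional = 100e6     # notional per trade
rrao_weight = 0.001  # 0.1% RRAO weight applied to each trade's notional
n_hedges = 20        # one-look CMS spreads hedging one 20Y multi-look trade

# The package is risk-flat, yet RRAO accrues on every leg of the hedge:
rrao_charge = rrao_weight * notional * (1 + n_hedges)
print(f"RRAO on the flat package: {rrao_charge:,.0f}")  # -> 2,100,000
```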
As mentioned above, some instruments such as variance swaps or CMS spread options should be excluded or should use a more accurate risk weight to avoid penalising market-making activities.
No comment.
Distressed products which trade far from par, and products already in default, are generally price-based and are not appropriately captured within the standardised approach. While for subordinated exposures the DRC SA already provides adequate capitalisation (which would cover any residual risk and losses), for senior exposures it could be argued that some residual risk exists due to recovery risk. For this reason, banks would expect such exposures to be charged a residual risk add-on of 0.1% to cover this residual recovery risk.
For securitised products, it would be preferable to have further clarity on the current RRAO FRTB classifications. Embedded prepayment risk (embedded optionality) is generally present in many securitised products (except CMBS), which are generally booked as linear instruments. The bank suggests that such linear instruments with prepayment risk (even non-retail instruments), where an uneconomic exercise of the embedded optionality leads to a loss, should be in scope for RRAO, as those positions will not be charged vega/curvature but do exhibit convexity from prepayment behaviour. Therefore, where positions have prepayment risk (i.e. the duration factors in a non-zero prepayment rate/expected call date), it would be preferable to use a behavioural add-on in addition to the standardised Default Risk Charge (DRC) and Credit Spread Risk (CSR).
No comment.
Variance swaps may fall into this category, tagged as exotic risk while in fact bearing residual risk. This issue should be fixed by moving future realised volatility outside the category of exotic risks.
We are of the opinion that the Liquidity Horizons (LH) framework does not need further granularity. Some discretion would be required by banks in mapping more complex products to liquidity horizon buckets. For example, in the case of multi-underlying trades, the approach listed in paragraph 148 seems sensible, but there are other situations where some discretion should be applied based on expert judgement for more complex exposures, e.g. when dealing with VIX indices. For such corner cases the bank should be allowed to apply internal methodologies, once approved by regulators, that identify the most relevant liquidity horizon.
No additional risk factors come to mind.
We advise amending Table 2 of Article 325be to include the additional risk factor categories and liquidity horizons listed in paragraph 2.2 of the Basel FAQ published in January 2017.
Q&As would be sufficient to clarify open uncertainties without compromising flexibility. The following elements would, however, benefit from additional guidance via RTSs:
- Liquidity horizon recalibration, especially to account for the de-risking profile and mean-reversion effect for liquidity horizons above 40 days.
- Reducing the cliff effect between LH buckets, as per the point raised in question 45 (equity small-cap context).
Quantifying the concept of liquidity via a single attribute should be done using a broad market definition. Setting a liquidity level based only on OTC market data, e.g. by using the BIS OTC derivatives statistics, would be limiting and misleading. Both cash and derivative products should be considered, as well as OTC and exchange-traded markets. Furthermore, we would like to highlight that the use of different liquidity horizons for specified and non-specified currencies could have an unintended impact on liquidity by penalising emerging market jurisdictions and introducing an uneven playing field. We would therefore suggest the use of a single liquidity horizon set at 10 days.
We do not consider that the BCBS calculation captures enough of the market to be considered a true measure of liquidity, and a three-year revision cycle is not appropriate given the dynamic nature of the market.
It is worth noting that, in relation to FRTB NMRF, many data providers, such as Markit or Bloomberg, are currently developing initiatives to provide an overview of liquidity by product. Regulators could leverage such information to better calibrate the FRTB liquidity horizons.
Although we support the idea of using the triennial central bank survey on FX as a source for volumes, we are of the opinion that other sources such as Bloomberg and Reuters should also be utilised to achieve a more holistic view of FX market liquidity. Furthermore, if banks were to estimate the required liquidity horizon, they would also take into consideration elements such as the bank's market share, the risk sensitivity to each FX risk factor, and the internal limits which reflect the bank's risk appetite.
No comment.
As mentioned above, in order to define liquidity horizons, banks have far more information available than a defined turnover level from the BIS report. From internal analysis, our view is that for the FX spot market there should be no distinction between currencies, since all of them (currently classified in the FRTB text as liquid or illiquid) would qualify for a liquidity horizon well below 10 days. Recent analysis of a material bank's risk shows that 2 days is a sensible indication of the liquidity horizon. Although a 2-day LH might not always be applicable, an FRTB 10-day LH would be a conservative assumption.

Similar considerations apply to FX volatility risk factors, for which the same analysis shows a maximum 5-day liquidity horizon (even for very concentrated exposures), leading to the suggestion of a 10-day FRTB liquidity horizon as opposed to the 40-day LH currently presented in the FRTB standards.
As mentioned above, only one LH should be used across all the currency pairs. Nevertheless the concept of triangulation would also be an option if it is effectively adopted into regular risk management practice.
In principle, we support the EBA's suggestion on how to assign small and large capitalisation liquidity horizons to equity prices and vols. Nevertheless, we are of the opinion that banks should have discretion over the methodology for assigning liquidity horizons, subject to internal validation. Such methodologies may consider the relative size of the exposure and should aim to avoid cliff effects and fluctuations in capital requirements. Against this background, we note that:
- Trades on names smaller than EUR 2bn can be considered small in size and manageable relative to the liquidity of that market. This refers to the size of a bank's position relative to the market, which is slightly different from paragraph 166-2, which refers to choosing instruments that are liquid relative to the markets in which the bank operates.
- In addition, if the market cap of a particular stock keeps fluctuating around the 2bn mark, the capitalisation process might pick up a lot of noise, since the liquidity horizon for the equity small-cap risk factor would fluctuate between 120 days and 20 days on a frequent basis.
From a risk management point of view, we are concerned that following ESMA's main indices and using their components, as suggested in paragraph 166-2, might include highly illiquid stocks and count them as large-cap names. This is not sensible, as they genuinely should be classed as small cap. For this reason, banks should have the discretion to classify exposures as small cap.
Yes, we agree. The proposed exclusion criteria provide enough information to identify and exclude the VAs which, in line with industry feedback, should be excluded from HPL.
Yes. Valuation adjustments are taken at portfolio level and applied consistently across the parameters appropriate to it, with the understanding that individual adjustments may be required at a more granular level (e.g. product-model or trade-specific). When considering the appropriate netting level for the close-out cost calculation, it is important to use a level which reflects how business units would economically unwind the risk in practice. In most cases this is considered to be the region/business unit level rather than the trading desk level. Calculating such valuation adjustments at desk level would be possible, but is not deemed suitable for the same reason.
Yes, we agree. This is because the inclusion criteria revolve around the following two arguments:
- Only VAs which are updated daily and which are not on the exclusion list should be included in HPL
- If an adjustment is considered in the daily VaR the same should be also included in the HPL
If an adjustment can be performed on a daily basis, it is likely that it is part of correct marking practice and contributes to the daily volatility which the VaR is meant to capture.
No, we are of the opinion that the criteria for exclusion and inclusion as proposed in the EBA DP are sufficient. A definition of specific valuation adjustments related to market risk would be restrictive and difficult to apply in practice, considering that each institution names and defines valuation adjustments differently. In principle, all valuation adjustments can be somehow linked to the market risk concept; however, only if they are updated on a daily basis can they be effectively captured by the bank's risk model, since such daily adjustments will contribute to the daily volatility captured in the risk model. Therefore, the key criterion we propose for inclusion is the daily update frequency.
Valuation adjustments are not a large driver of overshootings to the extent that they impact desk eligibility.
Yes, we agree. The EBA DP proposes to exclude the VAs which are charged under a separate capital treatment and excluded from CET1, while proposing to include the adjustments which are not captured daily. This is sensible, since actual P&L will only be used in back-testing; hence a P&L event due to remarking, e.g. an IPV valuation adjustment (usually taken monthly), will not have any adverse effect on the P&L attribution process, but will rightly affect only back-testing.
Yes, we agree as per the answer above.
See answer to 51.
Yes, it is part of the time effect.
We do not see the need to define NII as "the cash flow related component of the passage of time on the value of the portfolio. It measures the paid or received interest cash flows and the interest cash flow related effect on the fair value.", nor to take it up in the RTSs as per the EBA proposal. We support the more generic definition of P&L due to the passage of time.
We do not see the need to define "time effect" and to capture it in regulation as per the EBA proposal. We support the more generic definition of P&L due to the passage of time.
No comment.
Yes, we agree as per answer above.
1) Definition of the observation period
Approach 1c) is operationally efficient and should be sufficiently conservative. In case no data from the stress period is available, current-period data could be scaled up based on the factors used for ES. Options 1b) and 1d) are computationally complex, especially when done at P&L level, and are therefore not our preferred options.

2) Types of data acceptable for the observations
Option 2b) should be the default case, with 2c) used where necessary; option 2a) would contradict the BCBS "best data" principle.

3) Additional conditions on the data observed for the NMRF
It is sensible to allow only one risk factor level per day. However, truly stale risk factor observations should not have to be filtered out: this would be an additional requirement beyond the IMA requirements and would lead to significant operational hurdles. Rather than falling back to the fallback shocks, the "gauge data" mentioned in 2) could provide sufficient information to derive shocks.

4) Definition of the liquidity horizon LH(j) for an NMRF
Effective liquidity horizons would be a significant operational burden and very complex to apply, e.g. due to the monitoring of broken hedges.

5) Calibration of parameter C_L,sigma
No comment

6) Calibration of parameter C_ES,equiv
Paragraph 262b) provides a reasonable approach. A floor of 3 seems rather high given the large number of NMRFs and the conservative aggregation. A regular (e.g. monthly) calibration of a single value should be sufficient.

7) Calibration of kappa
Option E: The main driver of conservativeness for NMRF is the conservative aggregation scheme, given the large number of risk factors. Setting a single value of kappa for all risk factors would be too simplistic, while defining an individual kappa for every risk factor would be too complex.

8) Calibration of the parameters to achieve the target calibration 'at least as high as an expected shortfall'
No comment."
In general, we have some concerns about the level of conservativeness. Various layers of conservativeness are stacked upon each other (volatility calculation, C_ES factor, kappa, correction factor to avoid underestimating small samples), which will lead to overly conservative stand-alone numbers. The conservativeness of the NMRF charge is largely driven by the conservative aggregation scheme. Ensuring that every NMRF is at least as conservative as a stand-alone ES calculation, and then adding these up, will lead to even more conservative NMRF impacts.

Kappa calculation: Properly calculating kappa is computationally expensive (every NMRF will have a bespoke kappa), and the correct value of kappa would constantly change over time. Selecting an unbiased value would be necessary to avoid overly conservative results, given the large number of NMRFs and the conservative aggregation scheme.

Return calculation: Non-equidistant returns are scaled to large liquidity horizons. This will lead to significant complexity, as every return could come from a different time period. A more pragmatic approach would be to calculate returns over 10 days, in line with ES, and scale them to larger liquidity horizons. Scaling short returns to very long holding periods using the sqrt-of-time rule will easily lead to excessive shocks, in particular if the shock is calibrated for a basis risk factor (in cases where banks decompose NMRFs into a modellable proxy and a non-modellable basis; see footnote 40 of the BCBS text). The effect is illustrated below.
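The following is a minimal numerical sketch of the scaling effect; the return sizes and the 120-day horizon are assumed figures chosen only to show the relative magnitudes.

```python
from math import sqrt

lh = 120  # assumed liquidity horizon in days for a basis NMRF

ret_1d = 0.02   # assumed 1-day return on the basis risk factor
ret_10d = 0.05  # assumed 10-day return, in line with the ES horizon

shock_from_1d = ret_1d * sqrt(lh / 1)    # ~0.219: short return blown up
shock_from_10d = ret_10d * sqrt(lh / 10) # ~0.173: more moderate scaling

print(shock_from_1d, shock_from_10d)
```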

Paragraph 247: For all non-linear risk factors, an optimisation over the range of possible risk factor values is necessary. While this is easily achievable for sensitivity-based calculations, a grid-based approach is required for solutions based on full revaluation (a sketch is given below). This will significantly increase overall model complexity and is likely to lead to RWA variability.
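To illustrate what such a grid-based search involves, the following is a minimal sketch under our own assumptions; the function names, the pricer and the toy payoff are hypothetical, not taken from the DP.

```python
def worst_loss_on_grid(revalue, rf_value, shock, n_points=11):
    """Scan shocked values in [rf_value - shock, rf_value + shock] and
    return the largest portfolio PV drop. `revalue` is a full-revaluation
    pricer mapping a risk factor value to portfolio PV (assumed given)."""
    base_pv = revalue(rf_value)
    step = 2.0 * shock / (n_points - 1)
    grid = [rf_value - shock + i * step for i in range(n_points)]
    return max(base_pv - revalue(x) for x in grid)

# Toy non-linear portfolio: PV = -100 * x^2. A delta at x = 0 is zero, so a
# sensitivity-based shock would miss the loss; the grid search finds 0.04.
print(worst_loss_on_grid(lambda x: -100.0 * x**2, rf_value=0.0, shock=0.02))
```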
We propose, as an alternative, to allow the use of stale data to calculate the standard deviation; it is very complex to cleanly differentiate between genuinely stale and non-stale data. Every shock will have a bespoke liquidity horizon, making the overall calculation significantly more complex. In addition, return calculations over longer time horizons should also be allowed, to mitigate the impact of sqrt-of-time scaling to long horizons, which is a particular issue for basis risk factors.

Rather than selecting parameters for kappa, C_ES and the sample-size correction factors to ensure that every NMRF is at least as high as a stand-alone calculation, the focus should be on the overall level of capital, which will mostly be driven by the conservative aggregation approach.
If the goal is to ensure that the stress scenarios lead to an ES-equivalent number in all cases, a direct loss-based approach seems more logical than the risk-factor-based stress approach described here. The P&L approach is very similar to traditional risk metrics like ES and VaR, which naturally raises the question of why those risk factors should not be included in the ES model in the first place.
For solutions based on full revaluation, this approach will quickly become computationally complex due to the multitude of NMRFs that would require a stand-alone ES calculation.
We do not have any additional views on these points.
Given the status of discussions at the Basel level and the uncertainty on the final eligibility criteria, it is at this stage not possible to provide detailed feedback.
Given the status of discussions at the Basel level and the uncertainty on the final eligibility criteria, it is at this stage not possible to provide detailed feedback.
No detailed comments at this stage.
No detailed comments at this stage.
Option 2 is more suitable. Option 1 is generally not viable, as a maximum loss cannot be determined for the vast majority of risk factors.
No comments at this stage.
In the event that the RTS on assessment methodology is adopted, we agree that it makes sense for the EBA to propose a revised set of rules for application to the FRTB. Many of the articles, however, will not be applicable under the new FRTB framework.

Given the non-final status of both the RTS on assessment methodology and CRR2, it is difficult to agree that some of the articles from one should apply to the other. In principle, it is useful to establish that they can be used as guidance for banks and Competent Authorities; however, we would not recommend that any formal requirement or standard be established ahead of a revised version.

With respect to the RTS on Model Changes, this should only become relevant after go-live of the FRTB framework. As such, we believe that there is sufficient time for a revised version to be published.
There are also a number of elements of the current RTS which will need to be explicitly addressed ahead of the implementation of FRTB, such as:
- Definition of extensions, in particular to new desks.
- Ensuring that changes in market risk factors are appropriately considered by quantitative tests at desk level.
No comment as this is related to banks with smaller trading books.
No additional comments.
Koen Holdtgrefe
Deutsche Bank