Both SA-CCR related RTS should be prioritised, which seems appropriate:
a. The RTS on mapping transactions with more than one risk driver will require some complex implementation, which takes time.
b. The delta topic is high in terms of impact, as it might apply to a huge class of underlyings.
For these reasons it is appropriate that both are high priority.
The Industry view on the allocation of products is that, in general, these allocations are appropriate. Mapping these transactions to risk categories is a well-known issue, which banks already faced and solved when setting up the calculation for the Mark-to-Market Method (also known as the Current Exposure Method, CEM) and when implementing initial margin models (e.g. ISDA SIMM). The classification used for CEM did not raise major issues or result in a significant number of Q&As to the EBA.

Many derivative transactions have a single risk driver (disregarding interest rates for the purpose of discounting and any Valuation Adjustment (XVA, such as CVA or DVA)) or several drivers referring to the same category.

For the sake of stability of the SA-CCR measure, the RTS should avoid switching between asset classes as much as possible. Step 1 should cover most of the portfolio.

However, some exceptions can easily be identified (e.g. a forward-starting FX swap has almost no FX sensitivity before the initial exchange).

It is the view across the Industry that it is challenging to agree on a universal product category list/taxonomy. The list provided is therefore welcome, but it should be only an indicative list, summarising ‘principles’ for mapping transactions to a single risk driver category.

The approach should, as much as possible, allow a simple and stable allocation of most transactions to one main risk driver.

The approach to be followed for complex financial instruments, derivative strategies (combinations of puts and calls), hybrids (for example FX + equity), caps, etc. should be clarified. In particular, it should be clarified whether a dynamic determination of the primary risk driver, and hence the associated long/short position, has to be carried out where the primary risk driver may be path-dependent.
As expressed in the response to question 2, the list should only be indicative, therefore it doesn’t need to be complete.
The list should be only an indicative list, and it should not be an issue for banks to complete it, given that this was already done for the CEM implementation.
Step 2 applies only to transactions whose material risk driver is identified in more than one category. Such an approach would lead to potential double counting, contrary to the overall simplicity of the SA-CCR approach.

Within a risk category, SA-CCR distinguishes three cases:
• one or more risk factors in a single category: the notional is counted once;
• a difference (basis) of risk factors within one category: the supervisory factor is halved (Article 280);
• a risk factor that is market-implied or realised volatility: the supervisory factor is multiplied by 5.
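These three cases can be sketched as a simple adjustment rule (an illustrative helper, not text from the RTS; the function name and flag names are assumptions):

```python
def effective_supervisory_factor(sf, is_basis=False, is_volatility=False):
    """Adjust a supervisory factor (SF) per the three SA-CCR cases above.

    - default: one or more risk factors in a single category -> SF unchanged
      (the notional is counted once);
    - basis transaction (difference of risk factors within one category)
      -> SF halved, per Article 280;
    - volatility transaction (market-implied or realised volatility)
      -> SF multiplied by 5.
    Illustrative sketch only.
    """
    if is_basis:
        return sf / 2.0
    if is_volatility:
        return sf * 5.0
    return sf
```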

The Industry sees no consistency in double counting the notional when risk categories differ, and it is not clear whether this outcome is indeed the intention of the EBA.

The Industry supports the approach proposed by the EBA of comparing risk drivers by comparing risk sensitivities across all risk categories. The view is that this comparison should include all sensitivities, while excluding sensitivities arising from Valuation Adjustments (XVA, such as CVA, DVA, etc.).

The Industry believes that the calculation frequency should be explicitly defined. It is believed that the assignment should be done on a dynamic basis, so that when the nature of the transaction changes (e.g. a forward-starting FX swap before or after the first notional exchange, or a convertible going from deep in the money to out of the money), the allocation is reviewed consistently with the new risks of the transaction.

If double (or even multiple) counting is not the intention of the EBA, the Industry views assignment to one single risk class (based on comparing risk drivers via risk sensitivities across all risk categories) as the favoured solution.
Of the proposed options expressed by the EBA, Option 3 is considered to be the most appropriate.

Ideally, the best option would be to use historical volatilities of each underlying. The volatilities used to calibrate the FRTB risk weights would be another option. In any case, the FRTB SBA risk weights of Option 3 should be preferred to the PFE approach of Option 4.

Liquidity horizon could be captured by multiplying the FRTB weights by the square root of the liquidity horizon.
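As a minimal sketch of this scaling (the 10-day reference horizon is an assumption; the RTS would need to specify the normalisation):

```python
import math

def scaled_weight(frtb_risk_weight, liquidity_horizon_days, base_horizon_days=10):
    """Scale an FRTB risk weight by the square root of the liquidity horizon,
    relative to an assumed base horizon (10 days here)."""
    return frtb_risk_weight * math.sqrt(liquidity_horizon_days / base_horizon_days)
```

For example, a 5% weight at a 40-day horizon would scale to 10%, since sqrt(40/10) = 2.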
The Industry supports the idea that, besides sensitivities, the volatility of the different underlying instruments must be taken into account. As stated in the EBA document, sensitivities can only be compared after being expressed as ‘average moves’ of value, in order to capture the differences in volatility of the different asset classes and to properly assess the risk category associated with the highest exposure.

The RTS should make reference to the FRTB SBA, so that consistency is maintained once it is recalibrated, as announced by the BCBS.
The Industry view is that there is no single reasonable threshold for X and Y which would be universally acceptable.

In order to be consistent with the single risk driver categorisation of Step 1, a transaction whose risk drivers are mostly classified in a single category should be categorised in that category. In both Options 1 and 2, if the threshold is too low, contradictions with Step 1 could arise; for example, the interest rates used for discounting could push towards inclusion of the interest rate risk category for a product which, according to Step 1, would be Credit only. If the threshold is too high, some material risk drivers could be missed.

In both Options 1 and 2, whatever the threshold, crossing it would create instability (jumps) in the exposure profile and counter-intuitive results.

In both Option 1 and Option 2, the final assignment will depend on the granularity at which the sensitivities are modelled:
• For an equity product on a basket of tens of names, the rate/discounting sensitivity might be comparable to, or even higher than, each single-stock sensitivity, leading to the product being classified as mostly interest rate when it is mostly equity.
Such behaviour would create both instability and unpredictability of the SA-CCR measure, which is not desirable.

The Industry view is that any classification should be based on the total net risk-weighted sensitivity by risk class. As stated in question 5, Industry participants are of the view that double counting (or even multiple counting), as proposed by either the X or the Y approach, is not consistent with the general SA-CCR approach. Consequently, the Industry strongly believes that each notional should be assigned once, to the risk class associated with the highest sensitivity.

An alternative, smooth solution, considered less favourable than the aforementioned proposal, would be to allocate weighted notional values to each risk category in proportion to the share of the transaction’s total sensitivities attributable to the risk factors of that category. This would make the allocation stable (moving from a 49/51 to a 51/49 Credit/Interest rate split would only cause small changes in the exposure).

A proposed solution for Option 3 is to use a ‘modified Option 2’ approach which maintains Steps 1 and 2 as they are (with sensitivities weighted by FRTB risk weights as in Option 3) and assigns the weighted notional N·Si/(∑j Sj) to the risk category related to risk factor i.
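A minimal sketch of this weighted-notional allocation (the function name and dict-based interface are illustrative, not from the RTS):

```python
def weighted_notionals(notional, sensitivities):
    """Allocate a trade's notional across risk categories in proportion to
    absolute risk-weighted sensitivities, i.e. notional * S_i / sum_j S_j.

    sensitivities: mapping of risk category -> risk-weighted sensitivity.
    """
    total = sum(abs(s) for s in sensitivities.values())
    return {cat: notional * abs(s) / total for cat, s in sensitivities.items()}
```

With this rule, a 49/51 Credit/Interest rate split of sensitivities allocates 49/51 of the notional, so small sensitivity moves cause only small exposure changes.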
A fallback approach is required for banks which have no capacity either to calculate or to collect FRTB sensitivities in the context of EAD calculations (as the IT infrastructures used for the two may be different, and sometimes use different technologies), or for which the cost would be too high to justify. In such cases the approach must be more conservative than the one in Step 2.

Based on the Industry proposal described in the answer to question 8, a more prudent and consistent approach is to map the notional to the category with the highest supervisory factor (SF) in a separate hedging set.

If a proposal different from the one put forward by the Industry in Q7 is retained, the Industry agrees with the principle that this approach must be simple and more conservative than the results of Steps 1 and 2.
A solution which requires no cap would be to assign weighted notionals to the different categories.
The understanding from Industry participants is that the quantitative analysis should be done once per product type, to classify it, and reviewed annually. It would be a significant overhead to perform such analysis at trade level and to classify different trades of the same product type into different risk categories (leading to issues with netting, etc.). Also, doing this analysis more frequently may lead to a product type switching risk categories back and forth; that would require investigation to identify the cause of each change and would result in instability in the calculation.
The most appropriate solution, as identified by Industry contributors, would be to use the FRTB SBA sensitivities, consistent with the EBA proposal for Option 3. In addition, an extension to this solution is to also calculate a separate supervisory delta for rates instruments.
For the sake of simplicity and stability, the use of the proposed formula should be limited to the transactions for which the current formula cannot be applied, i.e. only to trades where P/K is equal to or lower than 0 (so that ln(P/K) cannot be computed).

Based on the proposal described in paragraph 88, a lambda must be determined such that the embedded ln() function can be computed:
P + lambda must be positive, so lambda must be greater than the absolute value of any (negative) forward;
K + lambda must be positive, so lambda must be greater than the absolute value of any (negative) strike.

Hence, lambda depends both on currency and portfolio.

A single lambda per currency would make the whole portfolio of the bank depend on the lowest option strike:
• any new option with a strike smaller than previous ones in currency i would trigger a recalibration of the lambda value for that currency, changing the risk measure of the whole portfolio;
• such behaviour would create both instability and unpredictability of the SA-CCR measure, which is not desirable.

A single lambda per currency, or worse, a single lambda per currency shared by the whole Industry, would lead to instability and a lack of predictability of the SA-CCR measure: a new trade by any Industry participant with a low negative strike could either impact all EU institutions (if lambda is shifted to a lower value) or be impossible for a bank to process (if lambda is not shifted to a low enough value).

To avoid such undesirable outcomes, the proposed formula should be used with lambda set at sub-portfolio or even trade level by each bank, such that P + lambda and K + lambda are positive, and only for the trades where it is required.
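A sketch of the shifted supervisory delta under this proposal, assuming the standard Black-style supervisory delta formula with P and K shifted by a per-trade lambda (the 0.1 buffer above the minimum lambda is an assumption, not from paragraph 88):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def shifted_supervisory_delta(P, K, sigma, T, is_call=True, bought=True):
    """Supervisory delta with a per-trade lambda shift.

    Lambda is 0 when both P and K are already positive (the unshifted
    formula applies); otherwise it is set just large enough to make
    P + lambda and K + lambda positive (the 0.1 buffer is an assumption).
    P: forward, K: strike, sigma: supervisory volatility, T: years to exercise.
    """
    lam = 0.0 if min(P, K) > 0 else -min(P, K) + 0.1
    d = (math.log((P + lam) / (K + lam)) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    sign = 1.0 if bought else -1.0
    return sign * norm_cdf(d) if is_call else -sign * norm_cdf(-d)
```

Because lambda is set per trade, a new option with a lower negative strike changes only its own delta, not the risk measure of the rest of the portfolio.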
Industry considers the definition provided by the CRR2 to be sufficiently clear.
Industry would argue that the use in the text of ‘exceptional circumstances’ seems to suggest that only limited approval may be granted for deviations from the presumptive ‘trading book’ (TB) list. However, we note that a number of items on the list are subject to ongoing Basel-level discussion/resolution as part of the ongoing FAQ process, due to limitations caused by the current wording. Moreover, “exceptional circumstances” are not mentioned in the Basel text for such deviations.
In addition, the option to move from the banking book (BB) to the trading book should be preserved (for instance when there is trading activity and price availability together with a change of intent), controlled by an implicit approval process without requiring the full reclassification approval from the Competent Authority (CA) and the associated capital surcharge. For example, the CA could be advised of the move related to a change in an instrument's circumstances and receive the associated documentation. In the absence of further requests from the CA, it should then be assumed that the change was approved.
There are further examples, in addition to those noted in paragraph 102, which include:
• changes in accounting rules that require instruments booked at amortized cost to be fair valued, or vice versa; and
• changes in the intent and ability to trade due to liquidity and other market conditions (e.g. instruments designated for securitization warehousing changed to trading).
We recommend consistency with the Basel Accord on the above points.

It is also proposed that regular buy and sell activity between a trading desk and another distinct banking book unit of the bank which acts as a client should be exempted from the rule on movement between regulatory books provided transactions are conducted on an arm’s length basis.

Under CRR2, financial assets or liabilities measured at fair value are one of the items on the presumptive trading book list. This compares to “instruments held as accounting trading assets or liabilities” in the Basel text. The wider scope would be problematic, as not all fair-valued instruments are of a trading nature, and banks would be required to obtain the regulator's approval to exclude such instruments from the trading book. We actively advocate aligning the CRR2 Level 1 text with the Basel Accord for AFS portfolios to avoid such issues arising, but if such a revision is not included in the final L1 text, then further flexibility will need to be ensured through the Level 2 process.

The Industry asks for a more flexible framework removing the absolute prevalence of presumptive lists, and would recommend that the “trading intent” of instruments remain a deciding factor for the classification of TB versus BB.
i. For instance, the EBA should consider that in some cases securities are listed on or de-listed from indices: a flexible framework is necessary so that banks are not obliged to reallocate instruments from the banking book to the trading book (and vice versa).
ii. The part of primary market syndicated loans intended for distribution should be assigned to the banking book, as there is no trading intent on such loans, which are not traded on an active two-way market.
iii. Alternative investments activity (including holdings of hedge funds) currently capitalised using market methodologies with explicit trading intent should be assigned to the trading book, even without daily look-through or daily real prices.

There should be no supervisory approval or Risk Weighted Asset (RWA) surcharge due to changes in the circumstances of instruments beyond the control of banks: for example, changes in accounting treatment, the listing of an equity, or look-through being obtained on a Collective Investment Undertaking (CIU), as such changes are neither subjective nor within the control of the firm. The firm would only be acting to ensure ongoing compliance with the regulation. If a capital surcharge is applied, the overall capital held for a position should not exceed a reasonable estimated maximum loss; the Industry would welcome a clear definition of the latter.

More broadly, regarding the treatment of CIUs, we note that there is some ambiguity regarding the proposed definition of CIUs in CRR2 and whether third country equivalent funds continue to receive an equivalent treatment to EU CIUs.
Industry participants agree that correlation trading portfolio (CTP) instruments should remain in the trading book (TB) notwithstanding a loss of liquidity. In fact, the Basel text does not seem to allow that it could be otherwise. More generally, and as mentioned in paragraph 27 of BCBS’s FRTB standard, illiquidity should not be a reason for re-classification, for any instrument.
Since CTP instruments are capitalised under the Standardised Approach (SA), a loss of liquidity should not lead to an increase in Risk Weighted Assets (RWA): either the SA does not cover it, or it can be seen as already accounting for a loss of liquidity through conservatively calibrated supervisory weights.
However, participants do not agree with the potential solution suggested by the EBA in paragraph 109, which is to exclude the tranche in question from the CTP but maintain it inside the TB. This would create a divergent treatment from the hedging products, would therefore disincentivise hedging, and would be particularly punitive in cases where only one or two reference entities out of a traded index full of reference entities have become illiquid. As per the Basel FAQ submitted (please refer to Figure 1 in the attachment), this can be solved by fixing the problematic part of paragraph 61(b).
The following circumstances may warrant the approval of reclassification:
i. A significant shift in the liquidity conditions of a large portion of financial instruments, which may lead management to reconsider the intent on such instruments.
ii. A modification of accounting standards that implies the need to change the accounting valuation of certain instruments (the implementation of IFRS 9 being a recent illustration).
iii. A change in the business model of some activity affecting financial assets.
iv. A position is held in the trading book (TB), and the purpose for which it is held is consistent with the current trading book vs banking book (BB) classification as per Article 104. Subsequently, a decision is made to change the trading strategy, so the position will be held to maturity – a purpose that is no longer consistent with Article 104. If a change in trading intent alone is not sufficient to justify re-designating an instrument, keeping the position in the TB would also no longer comply with the regulatory texts.
Industry participants propose that regulators allow banks the discretion to re-assign instruments between regulatory books in exceptional circumstances, for reasons that are beyond banks’ control, by notifying the supervisors rather than having to seek pre-approval from regulators for each re-assignment.

Exceptional circumstances should encompass accounting rule changes and changes of circumstance beyond a bank’s control, as well as transactions between trading desks and another distinct banking book unit acting as a client, as mentioned in Q14.
The terminology in this and the following questions is not accurate: it should be non-trading FX and commodity risk, not positions.

The identification would be the same as under the current Basel 2.5 process; however, Industry participants do not consider it appropriate to apply the FRTB trading book requirements in full to such notional desks (such as a single head trader, P&L attribution, etc.), as risk is not actively managed in the same way as for “true” trading desks. Therefore, these controls would not be meaningful (see the response to Q23 for further detail). The positions are mainly in non-trading businesses where instruments are denominated in non-reporting currencies.

In general, risk-weighted assets (RWAs) from the FX and commodity risk of non-trading positions are not considered material.
Typically, only a small portion of non-trading book instruments are elected to be fair valued. Non-trading book instruments at amortized cost are fair valued for disclosure purposes, as required under IFRS and US GAAP, and hence the frequency of revaluation is generally driven by the frequency of issuance of the financial statements containing such disclosure (quarterly or annually). However, the fair value for disclosure purposes is not computed using the same infrastructure as for financial instruments held at fair value; it is therefore neither possible nor appropriate to increase the valuation frequency to daily.
Non-monetary items may include:
• equity investments in unconsolidated subsidiaries or other investees,
• fixed assets,
• intangible assets,
• minority interests, and
• equities.

Non-monetary assets and liabilities are re-measured at the historic spot rate at the point of origination, and therefore should not introduce any FX exposure in the standalone reporting entity. (It is not clear why the question refers only to non-monetary positions for FX.)

However, when consolidating subsidiaries whose functional currency is not the reporting currency, there will be gains and losses when translating capital balances into the reporting currency due to FX rate moves.

Net investment in subsidiaries is not marked to market. However, the periodic capital translation process effectively ‘marks to market’ the FX component. This is performed on a monthly basis.

The question did not address the FX risk of monetary non-fair-valued instruments. Monetary assets and liabilities are re-measured at month-end spot rates, effectively ‘marking to market’ the FX component; this is performed on a daily basis.
Yes. Non-trading book instruments at amortized cost are required to be fair valued for disclosure purposes under IFRS and US GAAP. Hence, the frequency of revaluation is generally driven by the frequency of issuance of the financial statements containing such disclosure (quarterly or annually).
Any value adjustment applied to the instruments would affect non-trading book FX positions.
Industry proposes not to expand further on the requirements for notional desks beyond the generic description in the BCBS text. Paragraph 26 of the Basel text (“for regulatory capital purposes, these positions will be treated as if they were held on notional trading desks within the trading book”) implies that the designation of notional trading desks is mainly for risk-weighted asset (RWA) calculation and allocation purposes. This is no different from Basel 2.5, under which there is no requirement regarding notional trading desks. What is more important is to ensure that all banking book FX exposures are identified and captured; having a notional desk is not a necessary condition to achieve that goal.
Industry participants would recommend not setting a minimum number of notional trading desks, as this depends on a firm’s systems and infrastructure across functions/departments and geographical locations, and is not something that can be prescribed by regulators.
A mandated consolidation of all banking book FX/commodity positions under one desk would be an artificial construct, especially since FX/commodity risk is not always synonymous with FX/commodity positions. The ability to keep such positions in their entirety within their originating businesses encourages more transparency and better risk management practices.
Industry participants propose that FX and commodity risks arising from banking book exposure should not be subject to any of the qualitative and quantitative requirements for trading desks, as these requirements are not meaningful or applicable to banking book businesses.
In general, banking book businesses with FX or commodity exposure do not have a trader; risk management is generally carried out by traders from the FX or commodity trading businesses, possibly on a centralised or macro basis for a number of desks. Enforcing the one trader/head trader per desk rule under these circumstances would not be meaningful from a governance point of view, as it would entail artificially designating traders/head traders from the trading business to cover each notional trading desk. This would not be proportionate to the scale and nature of the risks being run in the banking book.
Business strategy, annual plan and regular management information systems (MIS) would not be meaningful for FX and commodity exposure in isolation, whilst risk limits would not be set at this level; therefore, risk reporting for banking book FX and commodity exposure would also not be meaningful.
As highlighted in the responses in Q18 – Q21, FX and commodity risk in the banking book can arise from positions that may not be fair valued, or may not be marked to market on a daily basis. Hence, it may not be appropriate to calculate a daily P&L for only the FX/commodity exposure of such positions.
In general, FX/commodity risk arising from banking book positions comprises simple, linear risk factors, and the risk models for such exposures will already be tested by all of the modellability and model performance requirements applied to the market-making FX and commodity trading desks. Hence, imposing these quantitative requirements on banking book FX and commodity exposure has little incremental value.
Backtesting and P&L attribution of positions which are not subject to fair value (FV) is of limited value. Forcing separate “artificial” accounting for the purposes of own funds calculation increases overheads and lacks any use test to control its accuracy (refer to Q18).

If this is not pursued, there are two possibilities:
• exclude non-FV positions; or
• remove the quantitative requirements of the internal model approach (IMA) for notional trading desks.

In the latter case, there would need to be at least separate FV and non-FV notional desks.
Please see answer to Q24.
The definition provided is clear. Nevertheless, the Industry is of the opinion that products such as variance swaps (future volatility exposure) and equity dividend swaps should not be included in the exotic risk category, since a sensitivities-based method (SBM) vega charge can be computed and any residual risk can be capitalised via a 0.1% residual risk add-on.
The Industry does not see the necessity of complementing the definitions with a list of instruments. If such a list is provided, it should be for purposes of clarification only and not be used to define the scope of the residual risk add-on (RRAO). The absence of a fully standardised industry-wide product taxonomy (particularly for the more exotic products affected by the RRAO) means that a definitive list of products would increase rather than reduce regulatory uncertainty. Nevertheless, Industry members are of the opinion that certain products, by virtue of the way they are hedged in the market, should not attract an RRAO charge. This could be the case for interest rate yield curve options, which are widely used instruments for hedging yield curve exposures, and whose pay-outs are typically based on two constant maturity swap (CMS) rates. To provide an example, consider a multi-look CMS spread trade (assuming a 20-year maturity); this can be hedged with 20 one-look CMS spreads. Under the current rules, all of these trades would attract the RRAO, i.e. 21 times the individual notional, while in reality such a package shows an overall flat risk.
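The arithmetic of this example can be illustrated with a simplified sketch of the RRAO as a sum of weighted gross notionals (the 0.1% weight for ‘other residual risks’ is the FRTB figure; the function is illustrative):

```python
def rrao_charge(notionals, rrao_weight=0.001):
    """RRAO approximated as the sum of absolute gross notionals times the
    0.1% 'other residual risks' weight. A multi-look CMS spread trade
    hedged with 20 one-look trades contributes 21 notionals, so the
    package is charged 21x a single trade despite being economically flat.
    """
    return sum(abs(n) for n in notionals) * rrao_weight
```

For example, `rrao_charge([100.0] * 21)` charges the full 2,100 of gross notional even though the hedged package carries an overall flat risk.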
The Industry also considers that Asian options (options on average) should not be considered as bearing “gap risk”, or any other residual risk. Indeed, they behave smoothly (smoother than plain vanilla options), and do not present additional risk compared to those caught by the sensitivities-based method (SBM). In this respect, even if they cannot be perfectly replicated as a finite linear combination of vanilla options, they should not be subject to the RRAO. This is particularly penalizing for the commodity business.
Another case to consider is vanilla options on a commodity differential (options on the spread between two oil qualities, for example): it should be made clear that they can be seen as options on one underlying, and that they are excluded from the RRAO.
Regarding digital risk, there is an issue with the RRAO computation which, if not addressed, would disproportionately penalise the FX business. The problem is that, with respect to the RRAO, the appropriate notional of a (vanilla) barrier option is the “size” of the digit, not the full notional of the option. This would make capital charges homogeneous between a pure digit and the digit embedded in a barrier option. Hence, for “vanilla digits” (meaning payoffs having no exotic features other than the digit), Industry participants propose to retain the size of the digit and not the notional of the option:

• Pure digit paying €100 if Stock A > Strike: notional = €100.
• Up and in call, strike K, barrier K’>K, for N stocks A: notional = Nx(K’-K).

This would impact mostly the FX and equity businesses.
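The two cases above can be sketched as follows (the function name and signature are illustrative, not from the RTS):

```python
def digital_rrao_notional(payout=None, n_shares=None, strike=None, barrier=None):
    """RRAO notional for 'vanilla digits' under the Industry proposal.

    - pure digit: the notional is the fixed payout;
    - up-and-in call (barrier K' > strike K) on N shares: the notional is
      the size of the embedded digit, N * (K' - K), not the option notional.
    """
    if payout is not None:
        return payout
    return n_shares * (barrier - strike)
```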
As mentioned above, some instruments such as variance swaps, equity dividend swaps, Asian options, options on commodities differential or yield curve/constant maturity swap (CMS) spread option should be excluded or use a more accurate risk weight to avoid penalization of market making activities.
The Industry believes that there are no option types missing and that the options in the list meet the general criteria.
Distressed products which trade far from par, and products already in default, are generally price-based and are not appropriately captured within the standardised approach. For subordinated exposures, the standardised approach (SA) default risk charge (DRC) already provides adequate capitalisation (which would cover any residual risk and losses); for low-priced senior exposures, however, it could be argued that some residual risk exists due to recovery risk.
For this reason, we suggest that distressed products be excluded from the sensitivity-based method and instead capitalised via the SA-DRC only (for subordinated exposures) or a combination of the SA-DRC and a residual risk add-on charge of 0.1% of market value (for senior exposures with prices under 40% of par).
In addition to distressed products, there are certain other price-based products, such as certain Interest Only Strips, municipal Variable Rate Demand Obligations, and pre-crisis Commercial Mortgage Backed Securities/Residential Mortgage Backed Securities, whose credit risk is poorly captured by the sensitivities-based method (SBM). Hence, these types of instruments should also be excluded from the SBM, and the Industry would welcome the opportunity to further discuss RRAO-based capitalisation for them with the EBA.
The residual risk add-on (RRAO) should only apply where an uneconomic exercise of the option can increase duration or result in a loss.

For securitized products, it would be preferable to have further clarity on the current FRTB classifications. Prepayment risk (embedded optionality) is generally present in many securitized products (except CMBS), which are generally booked as linear instruments. Industry members would therefore suggest that such linear instruments with prepayment risk (even non-retail instruments), where an uneconomic exercise of the embedded optionality leads to a loss, should be in scope for the RRAO, as those positions will not be charged vega/curvature but do exhibit convexity from prepayment behaviour.

Therefore, where positions have prepayment risk (i.e. duration factor in a non-zero prepayment rate / expected call date) it would be preferable to use a behavioural add-on in addition to standardized Default Risk Charge (DRC) and Credit Spread Risk (CSR).

We also propose to exclude the following US agency mortgage products from the behavioural list:

• To-Be-Announceds (TBAs), Agency Mortgage pools and Collateralised Mortgage Obligations (CMOs) with minimal exposure to optionality, due to the market’s high volume, large gross notional and liquidity, and because the general interest rate risk (GIRR) delta, vega and curvature and the CSR delta in the sensitivities-based method (SBM) would adequately cover the risk.
• Furthermore, clarification is sought as to whether TBAs and deliverable pools (pools eligible to be delivered into a TBA contract) can be deemed products “eligible for central clearing” per paragraph 58(f).
The Industry believes that the list is sensible, and brings clarity on the residual risk add-on (RRAO) scope (it will be particularly useful for avoiding confusion with internal and external inspection missions). Given the broad conceptual scope of the RRAO, an explicit exclusion of the products covered by paragraph 58(h) is needed in order to avoid inappropriately penalising vanilla products such as equity index options or government bond futures.
At the current stage, the Industry believes that variance swaps may fall into this category, being tagged as bearing exotic risk when they rather bear residual risk. This is an issue that should be fixed by moving future realised volatility outside the category of exotic risks.
Note: there are some instruments bearing more than one “other residual risk” (exotic instruments with both correlation and digital risk for example). In this case it should be made clear that the residual risk add-on (RRAO) charge applies only once.
The Industry believes that the liquidity horizon framework does not need further granularity and some discretion would be required by banks in mapping more complex products to liquidity horizon buckets.

The approach listed in paragraph 148 seems sensible for example in the case of multi-underlying trades, but there are other situations where some discretion should be applied based on expert judgment for more complex exposures (e.g. when dealing with volatility indices (VIX)). For such cases a bank should be allowed to apply internal methodologies that would identify the most relevant liquidity horizon once approved by regulators.
Furthermore, there are additional concerns regarding paragraph 148, which appears to relax the mapping rules at the risk factor level. It can be interpreted as permitting an instrument-level and/or relevant-risk-factor-level mapping procedure. Industry participants are uncertain that moving from the risk factor level to the instrument level would ensure the same level of capital for the same portfolio. This would mean that economically equivalent positions could attract different capital charges and in some cases could inappropriately penalise a well-hedged portfolio.
Paragraph 148 describes the possibility of attributing the whole instrument to only one liquidity horizon, the one relevant for the risk factor with the largest sensitivity, rather than capturing all the risk factors and their respective liquidity horizons. These different liquidity horizon mapping settings, if permitted, may generate non-homogeneous capital impacts across banks using different approaches (instrument versus single risk factor). The Industry believes that this is contrary to the intent of the internal model approach (IMA) as proposed by Basel.
Additional guidance would be helpful but should be outside the RTS space and better placed in the Q&A space so that flexibility is left to institutions.
In our view, Table 2 of Article 325be should be amended to include the additional risk factor categories and liquidity horizons listed in paragraph 2.2 of the FAQ published in January 2017.
As described in the response to Q35, we believe that Q&As would be sufficient to clarify open uncertainties without compromising flexibility. However, the Industry is also of the opinion that RTSs should focus on:
• Liquidity horizon (LH) recalibration, especially in order to account for de-risking profile and mean reversion effect for liquidity horizon above 40 days.
• Reducing the cliff effect between LH buckets, as per the point raised in Q45 (i.e. the small-cap equity context).
While it is challenging to quantify the concept of liquidity via a single attribute, the key aspect is the use of a broad market definition. Setting a liquidity level based only on over-the-counter (OTC) market data (e.g. by using the Bank for International Settlements (BIS) OTC derivative statistics) would be limiting and misleading. Industry members are of the opinion that both cash and derivative products should be considered, as well as both OTC and exchange-traded markets.
Further, Industry participants have highlighted how the use of different liquidity horizons for specified and non-specified currencies can lead to unintended impacts on liquidity, penalising emerging market jurisdictions and introducing an uneven playing field. We therefore suggest using a unique liquidity horizon set at 10 days.
The Industry is of the view that the BCBS calculation does not capture enough of the market to be considered a true measure of liquidity. In addition, a 3-year revision cycle does not seem satisfactory given the dynamic nature of the market; the Industry would recommend an annual review.
It is worth noting that, in relation to the FRTB non-modellable risk factor (NMRF) framework, many data providers are currently setting up initiatives so that liquidity by product can be studied. Ideally, regulators could leverage such information to inform a better calibration of the FRTB liquidity horizons.
The Industry strongly supports permitting the use of triangulation for preserving the consistency of capital charge and is of the view that the threshold should therefore be assessed per currency and not currency pair, which is a more suitable measure of liquidity.
Setting the threshold at a daily turnover above USD 25 billion would then include the 24 currencies explicitly listed in Table 5 of the BIS Triennial survey, except for HUF; this corresponds to a turnover above USD 250 billion for the 10-day liquidity horizon. All currency pair combinations consisting of currencies above the threshold are then defined to be liquid.
For example, if EUR, NOK and SEK are all liquid currencies, then EUR/NOK, EUR/SEK and NOK/SEK will all be classified as liquid currency pairs.
Conversely, if EUR and USD are liquid but CZK is not, then EUR/CZK and USD/CZK will not be classified as liquid currency pairs.
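To make the one-step triangulation rule concrete, the classification can be sketched as below. The turnover figures are illustrative placeholders, not BIS survey values:

```python
# Illustrative sketch (not the regulatory rule itself): a currency pair is
# liquid when both legs individually clear a per-currency turnover threshold,
# so liquidity propagates by "one-step" triangulation.
# The threshold and turnover figures are hypothetical placeholders.

LIQUID_TURNOVER_USD_BN = 25  # assumed per-currency daily turnover threshold

# Hypothetical per-currency daily turnover in USD bn
turnover = {"EUR": 2129, "USD": 5824, "NOK": 48, "SEK": 112, "CZK": 21}

liquid_currencies = {ccy for ccy, t in turnover.items()
                     if t >= LIQUID_TURNOVER_USD_BN}

def pair_is_liquid(ccy1: str, ccy2: str) -> bool:
    """A pair is liquid iff both currencies clear the threshold."""
    return ccy1 in liquid_currencies and ccy2 in liquid_currencies

print(pair_is_liquid("EUR", "NOK"))  # True: both above threshold
print(pair_is_liquid("EUR", "CZK"))  # False: CZK below threshold
```

Defining liquidity at the currency level thus classifies every pair of liquid currencies at once, with no need for a pair-by-pair threshold.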
Although we broadly support the idea of using the triennial central bank survey on FX as a good source for volumes, other market data sources should be utilised to have a more holistic view of the FX market liquidity.
Further, if a bank were to estimate the required liquidity horizon, it would also take into consideration elements such as the bank’s market share, the risk sensitivity to each FX risk factor and the internal limits which reflect the bank’s risk appetite.
Please see Industry response for Q41.
As mentioned above, the approach that a bank would adopt to define liquidity horizons is more complex than just looking at a defined turnover level from the Bank for International Settlements report. There should be no distinction between currencies for the FX spot market, since all of them (currently classified in the FRTB text as liquid or illiquid) would qualify for a liquidity horizon well below 10 days.
A 2-day period could be used as a sensible indication for the liquidity horizon (LH). Although a 2-day LH might not always be applicable, the FRTB 10-day LH could well be considered a conservative assumption for all FX currency pairs.
The Industry supports the concept of triangulation for preserving the consistency of the capital charge; the list of liquid currency pairs under the Accord text should be expandable via “triangulation” to improve the risk sensitivity of the rule requirements. With triangulation allowed, it is no longer necessary to define the turnover threshold by currency pair.
Therefore, a proposed suggestion is to define the threshold by currency, not currency pair, which also better reflects the flows in a currency.
The concept of triangulation is effectively adopted in the regular risk management practices.
Allowing a simple “one-step” triangulation logic is reflective of how the FX market typically functions and will result in correctly aligning the capital with economic risk. There is no added risk by allowing triangulation and thus no prudential reason not to allow triangulation.
The view of Industry participants is broadly in line with EBA proposed approach. The threshold for equity is reasonable and we appreciate the possibility of taking into account the national dimension of a given equity.
Nevertheless, Industry participants are of the opinion that banks should have discretion in the methodology for assigning liquidity horizons, subject to internal validation. Such methodologies may consider the relative size of the exposure and should aim to avoid cliff effects and fluctuations in capital requirements, without adding further complexity to the FRTB framework.
It was also pointed out that equity index volatilities are generally modelled separately from single-name volatilities, and that market capitalisation is not a good way of determining which indices are most liquid (in any case, equity index options are generally more liquid than even large-cap single-name options). The RTS should make clear that the appropriate liquidity horizon for index volatility is 20 business days for liquid indices.
In support of the above points, Industry participants would like to stress that:
• Trades on names smaller than USD 2 billion are manageable, as the size is relatively small compared to the liquidity of that market. This consideration refers to the size of the bank’s position relative to the markets, which is slightly different from 166(2), which refers to choosing instruments that are liquid relative to the markets they are operating in.
Further, if the market capitalisation for a particular stock keeps fluctuating around the USD 2 billion threshold, the capitalisation process will be affected by the moves between 60-day and 20-day returns. To avoid such drawbacks, banks should, for example, have the flexibility to factor in some stickiness around the prescribed thresholds, i.e. maintaining the liquidity horizon unchanged for a specified period of time when liquidity thresholds are breached.
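The stickiness idea can be sketched as follows; the threshold, horizons and persistence window below are illustrative assumptions, not prescribed values:

```python
# Sketch of the "stickiness" idea: the liquidity horizon switches only after
# the market cap has stayed on the other side of the USD 2bn threshold for
# `persistence` consecutive observations. Threshold, horizons and the
# persistence window are illustrative assumptions.

THRESHOLD_BN = 2.0
LH_LARGE_CAP, LH_SMALL_CAP = 20, 60  # liquidity horizons in business days

def sticky_liquidity_horizons(caps_bn, persistence=3):
    """Assign a liquidity horizon per date, switching only on persistent breaches."""
    lh = LH_LARGE_CAP if caps_bn[0] >= THRESHOLD_BN else LH_SMALL_CAP
    horizons, run = [], 0
    for cap in caps_bn:
        # a "breach" is being on the wrong side of the threshold for the current LH
        breached = (cap < THRESHOLD_BN) if lh == LH_LARGE_CAP else (cap >= THRESHOLD_BN)
        run = run + 1 if breached else 0
        if run >= persistence:  # only a persistent breach triggers a switch
            lh = LH_SMALL_CAP if lh == LH_LARGE_CAP else LH_LARGE_CAP
            run = 0
        horizons.append(lh)
    return horizons

# A market cap oscillating around USD 2bn no longer flips the horizon daily:
caps = [2.1, 1.9, 2.1, 1.9, 1.8, 1.7, 1.9, 2.2]
print(sticky_liquidity_horizons(caps))  # [20, 20, 20, 20, 20, 60, 60, 60]
```

The one-off dips below the threshold leave the 20-day horizon unchanged; only the sustained fall triggers the move to 60 days, avoiding the capital fluctuation described above.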
In general, Industry participants consider it sensible to use the work of the European Securities and Markets Authority (ESMA) to identify large-capitalisation equities based on equity indices as a complement to, but not instead of, the USD 2 billion threshold. However, the methodology should include all classes of shares (e.g. Class A and Class B shares) even if only one class is included in a specified index. If this is not permitted, broken hedges may arise where not all share classes of a given issuer are treated consistently. Banks should have the discretion to apply the USD 2 billion threshold as a complement, e.g. for newly issued equities.
The Industry considers that the inclusion/exclusion of valuation adjustments is one of the key items in the EBA discussion paper, but the EBA proposals are not necessarily consistent with what the Industry had put forward to the Market Risk Group (MRG) as part of its PLA recommendations.
The discussion relates to the proposed exclusion criteria, which list that:
i. Valuation adjustments (VAs) for which a separate regulatory capital treatment has been specified as part of the rules would be excluded from HPL (e.g. CVA)
ii. VAs which are excluded from CET1 would be excluded from HPL (e.g. PVA)
iii. VAs that are updated at a less-than-daily frequency in the measure of P&L would be excluded from HPL
The EBA DP states that:
i. If a VA is calculated daily and at the desk level, does not have a separate capital treatment, and is not explicitly deducted from CET1, then it needs to be part of HPL (Paragraph 182 and Paragraph 186)
ii. HPL should be the same for Backtesting and PLA (Paragraph 175)
iii. If a VA is included in HPL, then it should also be included in RTPL (Paragraph 226)

However, the criteria above are considered problematic due to the following:
i. As the industry has previously argued, VAs are a measure of potential inadequacies in Front Office pricers and hence, they should not be included in evaluating the desk level performance of risk models.
ii. In addition, even if certain VAs are calculated daily, these daily points do not necessarily constitute a “time series” that can be on-boarded into VaR/ES.
iii. Paragraph 226 can be interpreted to simply mean:

RTPL(incl. VAs) = RTPL(excl. VAs) + VAs.

However, it could potentially generate issues with the PLA test, since the test metrics will be statistical measures that are not invariant to such shifts, e.g. Variance((a+c)/(b+c)) ≠ Variance(a/b).

Hence, even if the same VAs are included in both RTPL and HPL, they can still be a cause of PLA failures which will be inconsistent with the scope of the PLA test as outlined in paragraph 223.
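The non-invariance point can be checked numerically; the values of a, b and c below are arbitrary stand-ins:

```python
# Minimal numerical check, with arbitrary values: adding the same constant c
# (a common VA) to both series changes the variance of the ratio, i.e.
# Variance((a+c)/(b+c)) != Variance(a/b).
from statistics import pvariance

a = [1.0, 2.0, 3.0]   # stand-in for RTPL excluding VAs
b = [2.0, 1.0, 4.0]   # stand-in for HPL excluding VAs
c = 1.0               # the common VA added to both

v_excl = pvariance([x / y for x, y in zip(a, b)])
v_incl = pvariance([(x + c) / (y + c) for x, y in zip(a, b)])

print(v_excl != v_incl)  # True: the shift changes the test statistic
```

Even though the same VA term enters both P&Ls, any ratio-based PLA metric computed on the two series moves, which is the source of the spurious failures described above.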

If the regulatory concern is focused on sufficiently conservative capitalisation, then the VAs can be included in firm-level/“top of the house” backtesting against actual P&L only, since that is focused on capital adequacy rather than model performance, which is the purpose of backtesting against hypothetical P&L as described in paragraph 177.

Even for inclusion in the firm-level backtesting against actual P&L, it would be preferable for regulators to specify the type of XVA which are deemed in scope of Market Risk capital rules. Current language leaves open the potential for inclusion of certain VAs in backtesting despite unclear guidance with respect to their inclusion in VaR (e.g. FVA).
Industry participants can respond to this question through bilateral engagement.
As mentioned in the response to Q47, the Industry does not agree with including VAs in the HPL.
Please refer to the response to Q49.
Industry participants can respond to this question through bilateral engagement.
The EBA DP proposes to exclude the VAs which are charged under a separate capital treatment or excluded from CET1, while including in actual P&L the adjustments which are not captured daily. This is a sensible approach, since actual P&L will only be used in backtesting.
As mentioned previously, the Industry does not agree with including VAs for desk level model performance tests, but for firm-level/top of the house backtesting, Industry members agree with the criteria defined for the inclusion of a VA.

However, if VAs were to be included in the firm-level HPL to assess capital adequacy, we agree in part with the proposal to provide only the criteria for inclusion/exclusion:
• The text implies that if, in the future, Funding VA were to be subject to a separate capital treatment, it would then be excluded from the hypothetical and actual P&L. However, given the current lack of regulatory guidance on the capital treatment of FVA, the text does not clarify what the treatment of FVA in backtesting would be in the future. Further guidance would be welcomed on the FVA capital and hedge treatment, as well as on the treatment for backtesting purposes.
• Including non-desk level VAs at the ‘top of the house’ could mean that a mixture of IMA approved and non-IMA approved desks would be included in actual P&L. The Industry seeks further clarification on addressing these situations.
• Further clarity would be welcomed on bid-offer VAs and on their inclusion being required only in actual P&L, as well as additional guidance on XVA and the treatment of hedges.
Industry participants can respond to this question through bilateral engagement.
Yes, net interest income (NII) is considered to be a part of the time effect.
Industry participants do not see the need to define NII and to mention it in the regulation as per the EBA proposal. The more generic definition of ‘P&L due to passage of time’ is more broadly supported.
Industry participants do not see the need to define “time effect” and to mention it in the regulation as per the EBA proposal. The more generic definition of ‘P&L due to passage of time’ is supported.
Industry members do not completely agree that proposal 2 would achieve the best outcome.
While, theoretically, it aligns with the overall theme of acting as a ‘reality check’ (the hypothetical P&L being a truer test of the model), in practice this will likely lead to P&L that is deterministic in nature being added to the actual measure. In line with the treatment of other P&L elements which are independent of market risk factors (e.g. fees and commissions), Industry participants do not feel it is appropriate to capture this type of P&L in backtesting unless the corresponding effects are captured in VaR.
A proposed alternative would be to retain ‘passage of time’ P&L in both actual and hypothetical P&Ls to the extent that the associated effects are captured in VaR. In other words, if the ‘passage of time’ effects are not captured by VaR, they should not be included in either hypothetical or actual P&L. This maintains hypothetical P&L as a true test of aligned P&L and VaR while avoiding distortion of the actual-P&L test by the inclusion of deterministic P&L.
As mentioned in the response to Q47, the inclusion of VAs in the desk-level HPL is not supported by Industry participants. To illustrate further: while the exact PLA test metrics will not be known until the Basel Consultation Paper is published, using the existing BCBS variance ratio as an example (Figure 2 in the attachment), including VAs is equivalent to adding a new random variable to the calculation, which will influence the outcome.
Industry participants have identified several preferences amongst options 1-8 as described in the EBA RTS. These are described in more detail below.

1. Definition of the observation period.
Of the four options identified for the observation period, the Industry view for each is described as follows:
Approach (c) is operationally efficient and should be sufficiently conservative. Where insufficient data from the stress period is available, the current period could be scaled up based on the factors used for ES, i.e. as in Approach (a).
Approach (a) can be considered for most risk factors as a fallback, using the reduced set ES ratio as factors.
Approaches b) and d) are considered computationally complex especially when done at P&L level.

2. Types of data acceptable for the observations
Option b) should be the default case, with option c) used where necessary; option a) would contradict the BCBS “best data” principle.

3. Additional conditions on the data observed for the NMRF
There is no direct link between the data used to assess whether a risk factor is observable and the data that is available and used as the historical data set to calculate the SES. Industry participants answer this question assuming it concerns the historical data for SES.
It is reasonable to specify that historical data should not have a frequency higher than daily. Beyond that, no prescription should be given on a minimum time period.
Genuinely stale risk factor observations should not be filtered out; this would be an additional requirement beyond the IMA requirements and would lead to significant operational hurdles.
Rather than using the fallback shocks, the “gauge data” mentioned in 2) could provide sufficient information to derive shocks. Automatic conditions based on data to move a factor to the fallback approach should be avoided.

4. Definition of the liquidity horizon LH(i) for an NMRF
Effective liquidity horizons would be a significant operational burden and complexity, e.g. due to monitoring of broken hedges. Therefore, Industry participants have previously asked regulators for flexibility to not apply the maturity cap provision detailed in the Basel text (paragraph 181 (k) ‘Furthermore, liquidity horizons should be capped at the maturity of the related instrument’) or in CRR2 (Article 325be) and strongly suggest EBA to discard it in the context of NMRF.

5. Calibration of parameter CLsigma
A large number of NMRFs is expected based on current observability assumptions. Since NMRF charges are calculated separately and added in absolute terms, the correction will lead to an overly conservative estimate at portfolio level, as the majority of the estimation uncertainty would diversify away.
The graph below shows the results of a simulation with multiple NMRFs and small data samples (10 data points for each NMRF, resulting in 500 approximately equal-sized SES contributions in this example), and shows the distribution of portfolio-level results from a large number of simulations for different values of CLsigma. The unbiased result is 20%. The adjusted standard deviation, i.e. applying sqrt((n-1)/(n-1.5)), is good at removing the small-sample bias, but computing CLsigma with a conservative percentile (e.g. 90%) for each SES creates an overall bias in the results. Either CLsigma should not be applied, as the effects net out at portfolio level, or a lower percentile could be chosen. A percentile of 60% would give reasonable comfort at portfolio level without creating too much bias (Figure 3, please refer to the attachment).

6. Calibration of parameters CES equiv
A methodology that relied less on scaling parameters where there is sufficient data would be more desirable.
In practice, many risk factors that will be deemed non-modellable will:
(a) be a decomposed basis risk via footnote 40 in the Basel text, and
(b) have daily data available. For examples of these types of risk factors we have calculated the empirical CES ratio (Figure 4, please refer to the attachment); given the nature of the data, we find lower CES ratios from this empirical data than can be found using theoretical data approaches. This highlights the model risk of a scale-up approach, which should be mitigated by allowing a full ES approach where sufficient data is available.
Justifying individual ratios per risk factor would create a significant burden for banks and, in turn, for regulators to monitor. There is a risk, however, that setting this centrally could be over-conservative and inappropriate in some cases. The Industry would favour setting it at some reasonable common level that only exceptionally had to be lowered, e.g. setting a maximum multiplier of 2.3 with firms able to lower it if they can justify doing so.
Figure 5 (please refer to the attachment) shows the ratio of the 97.5% ES from 10-day overlapping periods divided by the EBA standard deviation formula for different data gaps. The results are the average of 1,000 simulations and do not take into account the small-sample-size adjustment factor CLsigma.
A ratio of 2.25 looks sufficient to achieve a reasonable overall outcome from this theoretical distribution data. Further work should be done using empirically distributed data.
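As an analytic cross-check, assuming normally distributed returns, the theoretical ES-to-standard-deviation ratio at 97.5% can be computed directly:

```python
# Analytic cross-check assuming normally distributed returns: for a normal
# distribution, ES at 97.5% divided by the standard deviation equals
# phi(z) / 0.025, where phi is the standard normal density and z the 97.5%
# quantile. The result (~2.34) sits near the 2.25 quoted above.
from math import exp, pi, sqrt

Z_975 = 1.959964                          # 97.5% quantile of N(0, 1)
density = exp(-Z_975 ** 2 / 2) / sqrt(2 * pi)
es_over_sigma = density / 0.025           # ES_97.5 / sigma for a normal

print(round(es_over_sigma, 2))            # 2.34
```

The small gap between the theoretical 2.34 and the simulated 2.25 reflects the overlapping-period construction and finite-sample effects, which is why testing on empirically distributed data is still warranted.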

7. Calibration of Kj
Option E: The main driver of conservativeness for NMRF is the conservative aggregation scheme given the large number of risk factors. Setting a single value of K for all risk factors would be too simplistic and defining an individual k for every risk factor is too complex.
Moreover, there seems to be confusion between FS, the argmax of the loss function, and ES(rj), the expected shortfall of possible shocked market data.
Kappa calibration also imposes unwarranted complexity on the framework, as it requires revaluation computation for tail risks, which can impose unbounded complexity and heavy computation. Furthermore, computation failures can lead to bespoke fallback approaches based on implementation choices, which would be very hard to standardise across banks.
It is also noted that non-linearity is not considered to be systematically material at bank portfolio level.
Therefore, the Industry view is that the Kappa adjustment should be set to 1.

8. Calibration of kjt, CLsigma and CES equiv to achieve the target calibration ‘at least as high as an expected shortfall’
The solution should not lead to a significant increase in model complexity without clear benefits. A large number of NMRFs is expected. As long as the calculation is unbiased, no major capital underestimation is expected.
It is correct that the interaction of these factors should be considered. If the CLsigma factor is applied before calculating the SES ratios, then we obtain the following ratios from the theoretical distribution.
The results illustrated in Figure 6 (please refer to the attachment) are with CLsigma = 90%.
The results illustrated in Figure 7 (please refer to the attachment) use a less conservative percentile for CLsigma of 60%.
Industry members have identified general observations on the proposed methodology, along with specific methodological challenges associated with the non-linearity adjustment Kappa, observation returns, the Argmax approach for ‘future shock’ FS and the overall computation. These are elaborated further below.

General Observations
The rules set out in the CRR for the stressed expected shortfall (SES) calculation are more prescriptive and conservative than the main Basel text. The Industry is concerned that, as the EBA is using the CRR as its starting point, the initial assumptions used for the analysis are too rigid and not in line with the main Basel text.
Various levels of conservativeness are layered upon each other (volatility calculation, C_ES factor, kappa, correction factor to not underestimate small samples) which will lead to overly conservative stand-alone numbers.
The conservativeness of the non-modellable risk factor (NMRF) charge is largely driven by the conservative aggregation scheme. Ensuring that every NMRF is at least as conservative as a stand-alone expected shortfall (ES) calculation and adding these will lead to even more conservative NMRF impacts.
It targets an ES equivalence but at the same time it does not seem to allow flexibility to use a fuller ES calculation for the stressed expected shortfall (SES). This is a significant loss of ability to take a fuller risk management approach where required.
It is very targeted at the single risk factor case and may not fit well with other changes that the industry has requested, such as (a) applying observability to segments of curves and surfaces and (b) calculating the SES component charge for whole curve or surface objects.
There is heavy use of scale-up factors where this could potentially be avoided; scale-up factors should ideally be used only where there are small amounts of data.
While the goal of a standardized NMRF capital charge calculation has the benefits of both consistency and efficiency (in terms of both internal validation and regulatory review), attempts to create a generalized, one-size-fits-all approach for all types of risk factors also leads to several issues:
• Complexity: The proposed framework is mathematically complex, and certain aspects of the calculation are computationally resource intensive.
• Over-conservatism: When viewed in conjunction with the fact that the rules for NMRF identification and aggregation are already conservative, the added conservatism to compensate for distributional and parameter uncertainties may lead to an aggregate NMRF capital charge that is overly punitive.
• Model risk: The proposed approach may not guarantee sensible answers in all cases, and the lack of diversification in the aggregation formula does not help. While there is value in standardisation that covers most situations, Industry participants are worried that taking away all flexibility might not be the best approach. Banks should be allowed the flexibility to justify a deviation from the proposed approach where deemed necessary.
• Stability: The proposed approach effectively uses a mechanistic extrapolation for cases where there is little data input. It is unclear how stable the numbers will be, especially when new observations are added to, or old observations dropped from, the sample.

Kappa calculation
Properly calculating kappa is computationally intensive (every NMRF will have a bespoke kappa) and the correct value for kappa would constantly change over time. Selecting an unbiased value would be necessary to avoid overly conservative calculations due to the large number of NMRFs and the conservative aggregation scheme.
Computational challenges in the evaluation of argmax
For all non-linear risk factors an optimization over the range of possible risk factor values is necessary. While this is easily achievable for sensitivity-based calculations, a grid-based approach is required for solutions based on full revaluation. This will significantly increase the overall model complexity and likely lead to RWA variability (see answers to Q63).
Return calculation
EBA proposes to compute returns scaled up to the NMRF liquidity horizon (possibly to the effective liquidity horizon, i.e. capped at the maturity of the instrument) (paragraphs 240 and 257). The maximum liquidity horizon as defined in the CRR2 draft text can go up to 120 days (Table 1 – Article 325bd).
As a consequence, banks would potentially have to feed up to 120-day returns into their risk valuation models. Given potential NMRF composition (cf. points on yield curve, credit curve, volatility surface), feeding shocks of this size is likely to cause pricing model issues.
Scaling short returns to very long holding periods using the square-root-of-time rule will easily lead to excessive shocks in particular if the shock is calibrated for a basis risk factor (in cases where NMRFs are decomposed into modellable proxy and non-modellable basis, footnote 40 of the BCBS text).
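A back-of-envelope illustration of the shock sizes involved (the 1% daily move is an assumed example):

```python
# Back-of-envelope illustration: scaling a 1% daily move on a risk factor to
# longer horizons with the square-root-of-time rule multiplies the shock by
# sqrt(LH), reaching roughly 11x at the 120-day maximum. The 1% daily move
# is an assumed example, not a calibrated figure.
import math

daily_return = 0.01  # assumed 1% one-day move on the risk factor
shocks = {lh: daily_return * math.sqrt(lh) for lh in (10, 20, 60, 120)}
for lh, shock in sorted(shocks.items()):
    print(f"{lh:>3}-day shock: {shock:.1%}")  # grows to ~11% at 120 days
```

A shock of this size applied instantaneously to a single point of a curve or surface is exactly the kind of input that can push pricing models outside their calibrated range.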
Issues with stressing single risk factors of a curve/surface object
The NMRF concept of stressing individual risk factors in term structure models can violate the intrinsic assumptions on the model dynamics that real world market data follows as illustrated in Figure 8. Additional constraints and approximations might be necessary to avoid spurious hedge breaks and spurious results in general (sensitivity based stresses, adjoint algorithmic differentiation (AAD) and analytic calibration). The concept of stressing single risk factors might need to be adjusted.
A full revaluation P&L computation will consequently exhibit numerical errors, and (where it does not fail outright) the result incorporates a de-arbitrage algorithm: as such, it is not interpretable as a P&L resulting from the shock, missing the actual objective.
It is therefore suggested that the proposal of stressing single risk factors and computing the induced P&L with full revaluation should be reconsidered.

Figure 8
Many NMRFs are the basis between the long-term (non-modellable) part of a curve and the short- or medium-term (modellable) part, or between the (non-modellable) wings of volatility smiles and the (modellable) centre of the surface.
In this context, the scenario “this particular point of the curve/surface experiences a large shock while the rest of the curve/surface is unchanged” has little economic meaning: this theoretical deformation induces arbitrages/inconsistencies and would not occur in the real world.
Industry members have also identified concerns where computational efforts would present inherent challenges.
Industry participants have carefully considered the proposals put forward by the EBA and identified the following areas where alternatives could be considered and are presented below.

Proposals on scaling and liquidity horizons
EBA proposes to compute returns scaled up to the non-modellable risk factor (NMRF) liquidity horizon (possibly to the effective liquidity horizon, i.e. capped at the maturity of the instrument) (paragraphs 240 and 257). The maximum liquidity horizon as defined in the CRR2 draft text can go up to 120 days (Table 1 – Article 325bd).
As a consequence, banks would potentially have to feed up to 120-day returns into their risk valuation models. Given potential NMRF composition (cf. points on yield curve, credit curve, volatility surface), feeding shocks of this size is likely to cause pricing model issues.
An alternative set of proposals to consider to mitigate against this problem is:
• to compute 10-day returns instead (either observed directly, or scaled up/down - subject to further discussion / discretion)
• then rescale the expected shortfall (ES) figure, obtained as part of the NMRF charge calculation, to the correct liquidity horizon (LH) using the square-root-of-time rule
• Refrain from using the Effective Liquidity Horizon here (in particular, ignore the instrument maturity cap),
• Eliminate the kappa non-linearity adjustment for the 10-day returns based calculation, as shock sizes for 10-day are lower
Using 120-day (or at a minimum 20-day) returns can cause:
• Pricing failures, owing to shocks of such size being applied instantaneously
• Curves/surfaces becoming even more unrealistic and no longer arbitrage-free, when shocks are applied to some points on the object but not others
Rescaling the ES figure instead (Figure 9, please refer to the attachment) would be in line with the approach adopted for modellable risk factors (i.e. ES, CRR2 draft Article 325bd, paragraph 1), which is also consistent with the current thinking of this RTS, where an equivalent ES-like calculation is sought for the NMRF stress charge.
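A minimal sketch of the proposed rescaling, with hypothetical figures:

```python
# Sketch of the alternative proposed above, with hypothetical inputs:
# calculate the expected shortfall on 10-day returns, then rescale the ES
# *figure* to the NMRF liquidity horizon with the square-root-of-time rule,
# instead of feeding 120-day shocks into the pricers.
import math

def scale_es_to_horizon(es_10d: float, lh_days: float) -> float:
    """Rescale a 10-day ES figure to the target liquidity horizon."""
    return es_10d * math.sqrt(lh_days / 10.0)

es_10d = 1_000_000.0  # hypothetical 10-day ES for an NMRF position
print(scale_es_to_horizon(es_10d, 120))  # ~3.46m for a 120-day horizon
```

Because the square-root-of-time scaling is applied to the resulting ES number rather than to the market data inputs, the pricers only ever see realistic 10-day shocks.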

Proposal for stressed expected shortfall (SES) component calculation to address data sparsity and non-sparse data
• While it is understood that there is an objective to create standardisation, it is difficult to create a single method that will work well in all situations. It is therefore suggested that different approaches should be allowed for the SES calculation. The different approaches would be well defined and would all strive towards a common standard. This would create a reasonable level of consistency without removing all flexibility to take the most appropriate approach.
• The proposed approach described in Figure 10 (please refer to attachment) is not a new proposal as such but a proposal to evolve the ideas presented in the EBA discussion paper into a potential framework that would be applicable across the broad array of NMRF types/circumstances introducing simplifications where possible.

1. Any of the three approaches above can be taken to calculate the component SES charge. There is a broad range of different NMRF types and the framework needs to cope with all of them.
2. Generally, footnote 40 will be applied where possible, so that the NMRF will often be a residual basis
3. Use of scaled sensitivities is allowed in any of these approaches if (a) the risk factor has low gamma in that portfolio and (b) the main convexity is captured in the ES model
4. No kappa, or kappa = 1, in the risk factor approach
5. STDEV is the small-sample adjusted standard deviation, i.e. the sample standard deviation multiplied by sqrt((n-1)/(n-1.5))
6. CLsigma uses a 60% confidence interval
7. The CES factor has a maximum value and can be lowered if justified
8. All methods first target a stressed 10-day P&L that is then scaled up to a P&L for the liquidity horizon
9. The direct methods (i.e. 1 & 2) will work more naturally with a calculation of the NMRF charge across a series of granular risk factors. Note the Industry requests to the BCBS Market Risk Group (MRG) on the granularity of observation and the granularity of the calculation of the SES component.
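The small-sample adjustment to the standard deviation mentioned in the list above can be sketched as follows (illustrative helper, assuming the sample standard deviation uses the usual n-1 denominator):

```python
import math
import statistics

def adjusted_stdev(returns):
    """Sample standard deviation with the small-sample adjustment
    sqrt((n - 1) / (n - 1.5)) applied, inflating the estimate
    slightly for small n."""
    n = len(returns)
    if n < 2:
        raise ValueError("need at least two observations")
    return statistics.stdev(returns) * math.sqrt((n - 1) / (n - 1.5))
```

The adjustment factor tends to 1 as n grows, so it only materially affects sparse samples.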

Importance of testing:
It will be important to test a new approach using real portfolios and to check that there are no unexpected outcomes. Part of that testing should include whether a more robust measure of scale/spread (e.g. the inter-decile range (IDR) or inter-quartile range (IQR)*) instead of the standard deviation gives better, e.g. more stable, results.

Proposal on the data usage for calculating Standard Deviation
Stale data should be allowed in the calculation of the standard deviation, as it is very complex to cleanly differentiate genuine stale data from non-stale data.

(*) Care has to be taken when defining the IDR or IQR in small samples, e.g. by using the general percentile (plotting point) formula, where the percentile position equals (k - a)/(n + 1 - 2a), with k the rank order, n the sample size and a the plotting point factor, often set to 1/2 or 1/3.
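An IQR estimate based on the plotting-position formula above could be sketched as follows (the default a = 1/2 and the helper names are illustrative choices, not prescribed by the paper):

```python
import numpy as np

def plotting_position_quantile(sample, p, a=0.5):
    """Quantile estimate via the plotting-position formula
    p_k = (k - a) / (n + 1 - 2a), interpolating linearly
    between order statistics."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    k = np.arange(1, n + 1)
    positions = (k - a) / (n + 1 - 2 * a)
    return float(np.interp(p, positions, x))

def iqr(sample, a=0.5):
    """Inter-quartile range using the plotting-position quantiles."""
    return (plotting_position_quantile(sample, 0.75, a)
            - plotting_position_quantile(sample, 0.25, a))
```

The IDR is obtained the same way with p = 0.10 and p = 0.90.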
Industry members have formulated views on the risk-based approach and direct loss based approach and present the following observations.
In case the objective is to ensure that the stress scenarios lead to an expected shortfall (ES) equivalent number in all cases, a direct loss-based approach seems more logical than the risk-factor-based stress approach described here. The P&L approach is very similar to traditional risk metrics like ES and VaR, which naturally raises the question why those risk factors should not be included in the ES model in the first place. This is why, when there is sufficient data, a direct ES calculation should be allowed. This is not out of line with creating standardisation and meeting the Basel standards. Note this is not the same as allowing the non-modellable risk factor (NMRF) to flow through the main ES model, as there is a significant penalty from (a) not allowing aggregation with the main ES and (b) conservative aggregation assumptions between stressed ES components.
Computational effort and operational complexity are significant concerns, in particular for banks planning to use full revaluation, due to the large number of NMRFs (possibly many thousands). The formula in paragraph 271a) does not consider the additional revaluations necessary to solve the optimization problem in paragraph 247, and the overall effort will therefore be significantly higher. This will especially lead to problems for banks that plan to calculate the NMRF charge based on full revaluation.

Limiting the calculations for the Risk Factor approach - monotonous loss functions
The EBA proposal to compute the maximum P&L impact over a given range for each risk factor could be burdensome depending on how many scenarios are considered sufficient to explore the range and find the maximum loss; in theory this could require an infinite number of scenarios. In many cases, even when there is some non-linearity, the P&L with respect to the risk factor will be monotonic when looked at over a wide range of shifts, meaning that the maximum loss will be at one extreme end of the risk factor range. For risk factors where the bank's positions are in linear instruments this will be the case by definition. So in practice most risk factors will have this monotonic feature. In other cases, where the loss function is not monotonic and the maximum loss is in the middle of the range, the calculation burden can be controlled by specifying a sensible number of scenarios to consider, e.g. 10 scenarios per risk factor.
In order to mitigate the computational and implementation burden in the ‘Risk Factor based approach’, it should be satisfactory to calculate the P&L only for the maximum and minimum risk factor shifts where the loss function is monotonic or reasonably expected to be monotonic. This would reduce the calculation burden to only two scenarios in these cases. In order to use this simplified approach and not calculate more scenarios, the bank would have to demonstrate periodically that the monotonicity assumption is reasonable. Alternatively, institutions can search over a limited number of scenarios, e.g. 10, in which case there is no extra requirement to establish monotonicity.
The Industry view is that this indeed should be an option.
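The two-scenario simplification versus the limited grid search could be sketched as follows (function and parameter names are illustrative; the loss function would in practice be a full repricing of the portfolio under the shifted risk factor):

```python
def max_loss(loss_fn, rf_min, rf_max, monotonic=False, n_scenarios=10):
    """Maximum loss over the calibrated risk factor range.

    If the loss function is (demonstrably) monotonic, only the two
    endpoint shifts need to be evaluated; otherwise a limited grid
    of scenarios (e.g. 10) is searched.
    """
    if monotonic:
        shifts = [rf_min, rf_max]
    else:
        step = (rf_max - rf_min) / (n_scenarios - 1)
        shifts = [rf_min + i * step for i in range(n_scenarios)]
    return max(loss_fn(s) for s in shifts)
```

For a linear position the endpoint evaluation is exact; for a non-monotonic profile the grid search bounds the number of revaluations per risk factor.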
The direct approach can be modified to be equivalent to the full expected shortfall (ES) approach if there is enough data. Industry members believe it is important to allow the direct approach, as there are situations where a simpler risk factor approach may not be able to give a good answer, e.g. if modellability is assessed by segments.
For solutions based on full revaluation this approach will quickly become computationally expensive due to the multitude of non-modellable risk factors (NMRFs) that would require a stand-alone ES calculation. Therefore, a simplification should be allowed to use scaled-up sensitivities if the main risk is captured by the risk decomposition into MRF and NMRF basis.
a) How to define the frequency of review of the extreme scenarios of future shocks. In accordance with the monthly assessment of modellability of risk factors in Article 325bf(1) of the CRR2 proposal, a monthly review frequency could be considered a natural choice.
A monthly review should be sufficient in combination with the ability to define on-demand shocks for new risk factors.
b) Conditions in which supervisors may be dissatisfied with the calculation (leading to the application of the regulatory fallback).
If no suitable data or proxy data is available to estimate stress scenarios, the fallback scenarios should be considered. Generally, using proxy data for a non-modellable risk factor (NMRF) is less problematic, as no basis risks are ignored.
f) Care must be taken that the method is sufficiently conservative in comparison with the IMA ES risk measure to avoid incentivising banks to include NMRF unduly.
The overall conservativeness is mostly driven by the aggregation scheme and therefore the overall capital impact will be substantially higher in the NMRF framework compared to a scenario where the risk factor would be included in the expected shortfall (ES) model.
Generally, it is possible that a small number of risk factors might be more conservative in ES when considered in isolation. This is driven by various facts, such as the usage of different regulatory multipliers in the NMRF and ES calculation or cross-effects between risk factors that are not captured in the NMRF framework.
These idiosyncratic effects are likely small compared to the overall conservativeness of the approach though.
The understanding of the most relevant non-modellable risk factor (NMRF) for institutions is still evolving as:
(a) bucketing rules and assumptions are being further clarified
(b) central data suppliers evolve their services.
Even with broad bucketing, a significant number of risk factors is expected to be non-modellable, including many risk factors that are currently routinely included in VaR and SVaR models with full data histories.
The following NMRFs have been identified as amongst the most relevant for Industry members:
• Non-G10 rates risk factors
• Interest rate (IR) volatilities other than EUR and USD
• IR out-of-the-money (OTM) volatility for all currencies
• G10 FX Volatilities > 3y
• Non G10 FX volatilities
• Single name equity risk factors other than spot for developed markets (i.e. implied volatility, repo rates and dividends)
• Most non-US credit risk factors
Providing a precise answer to this question is difficult given the current uncertainty as to the final eligibility criteria at Basel level and given the large variety of risk factors (RFs) that may become non-modellable (i.e. what is fit for certain factors may be quite approximate for others).
Also, the answer would ultimately depend on each firm’s choice whether or not to proxy the non-modellable risk factor (NMRF) as per footnote 40 of the Basel text (i.e. a residual basis is unlikely to have the same distribution as the outright factor itself).
In the framework proposed by the EBA, we identify mainly 2 areas where a statistical assumption on the NMRF distribution could be employed:
• Calibration of ‘calibrated shock’ CS, when a direct estimation on empirical data is not possible or deemed not sufficiently robust
• Computation of the kappa adjustment, when the loss profile is locally convex around CS

Regarding the first area, the EBA provides in Annex 4 several values for theoretical skewed generalized t-distributions (SGT). Such distributions require the calibration of five parameters, one of which (lambda) drives the distribution's asymmetry. Figure 6 in Annex 4 of the EBA discussion paper gives evidence that the asymmetry has a direct impact on the estimation of the CES parameter (the scaling factor from standard deviation to expected shortfall ES), with increases of up to 33% between positive asymmetry (lambda = 0.30) and no asymmetry (lambda = 0).
Indeed, an asymmetrical distribution usually implies that one tail is fatter than the other. For illustration purposes, we have plotted in Figure 11 (please refer to attachment) a skewed normal distribution function (‘Snorm’ in amber). Whereas the kurtosis equals 3 for the whole distribution, the kurtosis restricted to the left tail only and to the right tail only equals 3.8 and 2.2 respectively. Similarly, the ratio ES>97.5 / σ is above 5 while ES<2.5 / (-σ) is below 1. In other words, the fatter one tail, the thinner the other.
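The effect can be reproduced with a short Monte Carlo sketch (using a centred exponential sample as a stand-in right-skewed distribution; the distribution choice, sample size and seed are arbitrary illustrations, not taken from Figure 11):

```python
import numpy as np

rng = np.random.default_rng(0)
# A right-skewed sample (exponential with unit scale), centred on
# its mean so the two tails can be compared directly
x = rng.exponential(scale=1.0, size=200_000) - 1.0
sigma = x.std()

# Empirical expected shortfall (mean exceedance) in each tail,
# expressed as a multiple of the standard deviation
es_right = x[x > np.quantile(x, 0.975)].mean() / sigma
es_left = -x[x < np.quantile(x, 0.025)].mean() / sigma

# For this sample es_right comes out around 3.7 while es_left stays
# below 1: the fatter one tail, the thinner the other
```

The same asymmetry between the two ES/σ ratios is what drives the 33% sensitivity of CES to lambda noted above.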

Although it is recognized that asymmetry exists at the RF level, it would be quite a punitive and unrealistic assumption that, for each and every NMRF, the ES of the loss function would systematically coincide with the fatter tail. Most likely, the stressed ES framework will cover a large number of risk factors and a sound base scenario should remain agnostic to which tail realizes the ES.
Owing to the above considerations, it is strongly recommended that the EBA avoid any statistical asymmetry at the RF level and consider only symmetrical distributions in its effort to capture potential “fat-tailedness”.
It should be noted that for many NMRFs, daily quotes are available from brokers or market data providers. For these risk factors, any assumption on the nature of the theoretical distribution to use can be avoided by the direct use of historical returns.
On the other hand, there will be a smaller number of NMRFs, similar to the current risks not in VaR (RNIV), e.g. certain correlations or inflation seasonality, for which data is indeed sparse. For such risk factors no general statistical distribution can be assumed. Instead, banks need to decide on a case-by-case basis which distribution assumption to use to model the stress shocks.
This question relates to the computation of the kappa adjustment, which is only necessary when the argmax ‘future shock’ FS of the loss profile is an endpoint of the search interval ‘calibrated stress scenario risk factor range’ CSSRFR. Indeed, when FS equals CS, the loss profile likely keeps increasing beyond CS, hence the EBA’s proposal to compute a positive adjustment if this increase is “super-linear”.
As detailed in previous answers, the Industry participants reiterate their preference for kappa=1.
That being said, should such an option not be retained, it is noteworthy that quadratic approximations of the loss profile may enable the use of closed-form formulas for the adjustment.
For instance, the usual second-order Taylor-Young expansion enables a stressed expected shortfall (SES) estimation of the form displayed in Figure 12 (please refer to attachment).
Although this type of simplified approach could be considered, it would still mean additional model risk (distributional assumptions for ϕ, extrapolation of loss function) and complexity (estimation of the local convexity) compared to the Industry proposal to set kappa at 1.
The above formula also shows that the non-linearity adjustment comes into play regardless of the sign of Γ. In particular, when the argmax FS is not an endpoint of ‘calibrated stress scenario risk factor range’ CSSRFR, Γ is negative and the adjustment should reflect how much lower the ES of losses is compared to the max loss f(FS). Since there is in general no particular reason why Γ should be positive or negative, kappa=1 (i.e. no adjustment) is certainly the fairest assumption.
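For reference, the quadratic approximation underlying such closed-form adjustments is the generic second-order expansion of the loss profile around the calibrated shock CS (this is the standard Taylor form with an assumed first-order sensitivity δ; it is not a reconstruction of the Figure 12 formula):

```latex
f(FS) \approx f(CS) + \delta\,(FS - CS) + \tfrac{1}{2}\,\Gamma\,(FS - CS)^{2}
```

The sign of Γ then determines whether the tail average of losses lies above or below the linear extrapolation, which is why kappa = 1 (no adjustment) is the neutral assumption.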
Industry participants can respond to this question through bilateral engagement.
Option 2 is preferred, as Option 1 is generally not viable: a maximum loss cannot be determined for the vast majority of risk factors.
The Industry views the use of the fallback case as a scenario that will occur very rarely. In almost all cases it will be more appropriate to use one of the options described under Q62 and, if required, to use calibrated stress shifts from gauge data rather than a default table of shifts.
The sensitivities-based method (SBM) shifts are extremely conservative when applied to individual risk factors.
The paradigms of the SBM and of the non-modellable risk factor (NMRF) framework are completely different. The SBM effectively applies very conservative factors to the main risk components but in turn has missing risk embedded in the model.
The conservatism is even more severe as many NMRFs are basis risks or risk parameters whose volatility is of a different, lower magnitude than that of the main underlying markets.
However, given that a fallback approach has to be specified, the use of SBM weights is perhaps the easiest to set out. Modifications could include a general reduction in the SBM factors, or the use of X% of the standardised charge if the risk factor has been decomposed into MRF and NMRF. But this may be an unnecessary complication to introduce, given that, in the Industry view, the fallback approach should hardly ever be applied.
In the event that the RTS on assessment methodology is adopted, the Industry agrees that it makes sense for the EBA to propose a revised set of the rules for application to the FRTB.
Many of the articles, however, will not be applicable under the new FRTB regulations. As such a new version should remove, amend and add requirements to the current version in order to be applicable to the FRTB framework.
Given the non-final status of both the RTS on assessment methodology and CRR2, it is difficult to agree that some of the articles from one should apply to the other. In principle it is useful to establish that they can be used as guidance for banks and Competent Authorities; however, we would not recommend that any formal requirement or standard be established ahead of a revised version.
With respect to the RTS on model changes, this should only become relevant post go-live of the FRTB framework. As such, it allows sufficient time for a revised version to be published.
There are also a number of elements of the current RTS, which will need to be explicitly addressed ahead of the implementation of FRTB, such as:
• Definition of extensions, in particular to new desks.
• Changes in market risk factors to be appropriately considered by quantitative tests at the desk level
Industry participants can respond to this question through bilateral engagement.
Whilst it is fully appreciated that RTS mandates are out of EBA’s control, Industry members would like to express concern regarding the scope of application of the EBA RTS on DRC in Article 325 bq(12) of CRR2 Commission draft.

EBA shall develop draft regulatory technical standards to specify the requirements that have to be fulfilled by an institution's internal methodology or external sources for estimating default probabilities and loss given default in accordance with point (e) of paragraph 5 and point (d) of paragraph 6.

As currently drafted, such RTS only applies to institutions with no internal ratings-based approach (IRBA) approval to estimate internal probability of default (PD)/ loss given default (LGD). Industry participants consider such RTS should also be applicable to IRBA-validated institutions for those issuers in default risk charge (DRC) scope which are not covered by internal credit methodologies.

In paragraph 289 of the discussion paper, the EBA recommends that DRC guidelines are directly addressed in the revised RTS on assessment methodology. The Industry would like to go one step further and suggest that the above-mentioned DRC RTS is tackled together with the guidelines in the revised RTS on assessment methodology.

The revised RTS on assessment methodology should then clarify that the flexibility offered to non-IRBA institutions to use alternative approaches (external ratings, simplified approaches) should also apply to IRBA-validated institutions for issuers with no internal PD/LGD. If such flexibility is not granted, IRBA-validated institutions will potentially have to rate thousands of issuers internally, in most cases collecting comprehensive information (e.g. various balance sheet ratios) on each single issuer, which is intractable.