Response to consultation on proposed RTS in the context of the EBA’s response to the European Commission’s Call for advice on new AMLA mandates
Question 1: Do you have any comments on the approach proposed by the EBA to assess and classify the risk profile of obliged entities?
The following responses have been carefully structured to comply with the limitations of the EBA’s consultation submission form, which does not allow for attachments. Consequently, while each response addresses the corresponding question directly and self-sufficiently, additional explanations, technical clarifications, and illustrative examples related to the proposed approach will be provided in a supplementary document submitted via email. This supporting material offers further insights into the operational mechanics and supervisory benefits of the proposed method, as well as its relevance to each specific question in the consultation.
Returning to the question: yes. While the EBA's intent to provide a harmonized, objective mechanism for risk profiling is welcome, we believe that the proposed approach, which relies heavily on static, self-reported, and qualitative assessments, may lack the precision and auditability necessary for a supervisory framework of this magnitude. Specifically:
- It may be subject to inconsistent interpretation across jurisdictions and institutions.
- It may not provide supervisors with adequate means to measure the accumulation of risk exposures over time or to monitor adherence to stated risk appetites.
We propose the integration of a quantified, standardized, and entity-specific risk measurement framework. This approach measures accumulated non-financial risk, including money laundering and terrorist financing (ML/TF) risks, through transaction-based attribution. Rather than relying solely on static risk profiles, this method captures residual risk at the point of origin (i.e., the transaction level) and accumulates it as it flows through the organization's operational layers.
However, it is essential to clarify that while the method is based on transactional attribution, it does not entail monitoring each transaction individually for ML/TF. Instead, it quantifies and accumulates residual risk based on standardized metrics and the effectiveness of enterprise-wide controls. Anti-ML/TF controls are incorporated into the organization’s integrated risk mitigation index. Where these controls are ineffective, the method reflects higher residual ML/TF risk accumulation through the aggregate of accepted transactions.
This supports a realistic and integrated perspective: weak controls at operational levels (e.g., poor onboarding checks or inadequate transaction monitoring) result in proportionally higher accumulated residual risk in ML/TF-related categories, even without requiring transaction-by-transaction scrutiny. In contrast, robust enterprise-wide controls lower residual risk scores systematically and transparently.
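The accumulation mechanism described above can be sketched in a few lines of code. This is a purely illustrative toy model: the risk weights, the single enterprise-wide mitigation index, and all names are hypothetical assumptions introduced for this sketch, not elements of the proposed RTS or of our formal methodology.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    inherent_weight: float  # hypothetical inherent ML/TF risk weight in [0, 1]

def residual_risk(tx: Transaction, mitigation_index: float) -> float:
    """Residual risk attributed to one accepted transaction.

    mitigation_index is an enterprise-wide control-effectiveness score
    in [0, 1]: 1.0 means fully effective controls, 0.0 means none.
    """
    inherent = tx.amount * tx.inherent_weight
    return inherent * (1.0 - mitigation_index)

def accumulated_residual(transactions, mitigation_index):
    """Sum residual risk over all accepted transactions."""
    return sum(residual_risk(t, mitigation_index) for t in transactions)

# Two accepted transactions with hypothetical inherent weights.
txs = [Transaction(10_000, 0.02), Transaction(250_000, 0.05)]
weak = accumulated_residual(txs, mitigation_index=0.3)    # weak controls
strong = accumulated_residual(txs, mitigation_index=0.9)  # strong controls
assert strong < weak  # stronger controls -> lower accumulated residual risk
```

The point of the sketch is only the direction of the relationship: the same flow of accepted transactions accumulates proportionally more residual risk when the control environment is weaker, without any transaction-by-transaction ML/TF scrutiny.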
In our view, this approach:
- Reduces ambiguity and improves comparability across institutions;
- Provides a transparent audit trail from inherent to residual risk;
- Enables early supervisory intervention and ongoing assurance;
- Aligns closely with the nature of ML/TF threats, which are often emergent and embedded in operational scale rather than in isolated events;
- Offers a consistent framework to integrate all non-financial risks under a single, auditable, and comparable measurement model, with benefits for regulators, governments, and industry stakeholders alike.
Question 2: Do you agree with the proposed relationship between inherent risk and residual risk, whereby residual risk can be lower, but never be higher, than inherent risk? Would you favour another approach instead, whereby the obliged entity’s residual risk score can be worse than its inherent risk score? If so, please set out your rationale and provide evidence of the impact the EBA’s proposal would have.
We agree in principle with the assertion that residual risk should not be higher than inherent risk. However, we believe it is critical to distinguish between rule-based limitation and measurement-based validation.
The proposed approach appears to treat the relationship between inherent and residual risk as a static hierarchy based on assumptions about control effectiveness. In contrast, the alternative framework we suggest treats this relationship as a quantifiable and observable outcome of transaction-level activity and institutional control performance.
In this model:
- Residual risk equals inherent risk only when no effective mitigation is evidenced.
- Residual risk decreases as evidence of control actions accumulates.
- Residual risk cannot exceed inherent risk, not because of an imposed rule, but because both are computed from the same foundational dataset.
Importantly, control effectiveness, including that which pertains to ML/TF, is embedded in an overall risk mitigation index. This index is built from observable and auditable best-practice benchmarks implemented at the level of each control element within operational processes. Thus, if anti-ML/TF controls weaken, a corresponding and measurable increase in accumulated residual ML/TF risk will be registered, making this deterioration visible to supervisors on a daily basis.
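The "computed from the same foundational dataset" point can be made concrete with a minimal sketch. The function and variable names are hypothetical assumptions for illustration only: because each residual term is the corresponding inherent term scaled by a factor in [0, 1], residual risk cannot exceed inherent risk, with no external capping rule needed.

```python
def risk_scores(exposures, control_scores):
    """Compute inherent and residual scores from the same transaction data.

    exposures: per-transaction inherent exposures (non-negative)
    control_scores: per-transaction control-effectiveness values in [0, 1]
    Residual <= inherent holds by construction, because every residual
    term is an inherent term scaled by a factor in [0, 1].
    """
    inherent = sum(exposures)
    residual = sum(
        e * (1.0 - min(max(c, 0.0), 1.0))  # clamp effectiveness into [0, 1]
        for e, c in zip(exposures, control_scores)
    )
    return inherent, residual

# No effective mitigation evidenced -> residual equals inherent.
inh, res = risk_scores([100.0, 400.0], [0.0, 0.0])
assert res == inh

# Evidence of control actions accumulates -> residual decreases.
inh, res = risk_scores([100.0, 400.0], [0.0, 0.8])
assert res < inh
```

This is the measurement-based validation we refer to: the bound is an arithmetic consequence of how both scores are derived, not a rule imposed on top of separate estimates.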
This method avoids the pitfalls of subjective residual risk estimates and provides a consistent foundation for supervisory oversight. It enables:
- Early detection of ineffective or deteriorating controls through the accumulation of residual risk;
- Comparable measurement of ML/TF residual risk across entities using a standardized approach;
- More precise prioritization of supervisory actions and allocation of oversight resources.
To conclude, we believe that the proposed rule-based relationship may be theoretically sound but operationally weak. It appears to adopt an externally oriented perspective, focusing on how an institution's risk profile is perceived from a supervisory standpoint, rather than how risk is internally generated, accumulated, and mitigated within the institution.
In contrast, our proposed approach is internally oriented. It captures the build-up of risk at the transactional level and reflects the institution’s control effectiveness over time, allowing risk to be understood as a dynamic and measurable element of internal operations. This distinction is critical, as it allows for earlier intervention, tailored remediation, and a more objective calibration of compliance expectations.
A measurement-based approach offers greater conceptual coherence and stronger practical enforceability, while also enabling the integration of all non-financial risks within a single, unified supervisory framework.
Question 3a: What will be the impact, in terms of cost, for credit and financial institutions to provide this new set of data in the short, medium and long term?
We believe that the costs associated with implementing the data requirements outlined in Annex I will vary significantly depending on the maturity of each institution’s risk data infrastructure. In the short term, compliance may incur substantial costs for institutions that lack integrated, real-time risk data capabilities. These costs could include:
- Upgrades to data collection and aggregation systems;
- Integration of fragmented operational risk data sources;
- Additional staffing and data governance protocols;
- Enhancements to reporting and validation tools.
However, over the medium and long term, the adoption of a standardized, integrated risk quantification framework could offset these costs by improving risk visibility, automating reporting processes, and enabling more efficient supervision. Institutions that implement data architectures which support risk attribution at the source (e.g., at transaction or control level) will be positioned to meet regulatory expectations consistently and sustainably.
This approach, by integrating all non-financial risks into a common operational framework, also reduces redundancy and creates synergies across compliance, audit, and risk functions—potentially generating substantial efficiency gains over time.
Furthermore, institutions benefit not only from compliance alignment but also from enhanced internal control. The proposed model allows organizations to independently detect risk accumulations and control deteriorations in real-time, empowering them to implement timely remedial actions without awaiting external regulatory intervention. This improves operational resilience, reduces the likelihood of enforcement actions, and supports a more constructive supervisory relationship based on transparency and proactive management.
Question 3b: Among the data points listed in the Annex I to this consultation paper, what are those that are not currently available to most credit and financial institutions?
Based on industry observations, many of the data points listed in Annex I may be partially available, but few institutions possess a fully integrated infrastructure capable of generating them consistently, accurately, and in a format suited for risk-based supervision. In particular, the following categories of data are often unavailable or fragmented:
- Granular operational risk controls and effectiveness indicators, especially those linked to specific business processes or control owners;
- Timely data on control failures and corresponding remediation efforts;
- Dynamic linkages between inherent and residual risk at the process or business unit level;
- Standardized and forward-looking risk metrics aligned with supervisory objectives.
These gaps are largely due to legacy systems, siloed data environments, and a lack of standardized risk measurement frameworks across institutions. The framework we propose would facilitate the consolidation and structuring of such data by embedding risk measurement into core business processes and linking it to quantifiable controls. This alignment could improve data quality and availability without placing undue burden on institutions.
Question 3c: To what extent could the data points listed in Annex I to this Consultation Paper be provided by the non-financial sector?
We believe that a portion of the data listed in Annex I could be reported by larger non-financial entities, particularly those operating in highly regulated sectors or with advanced compliance frameworks (e.g., energy, telecommunications, and critical infrastructure providers). However, many non-financial entities—especially SMEs—may lack the systems, expertise, or incentives to systematically generate and report this type of data.
For the non-financial sector, a scalable and proportional framework would be essential. Embedding risk quantification at the operational level, as suggested in our approach, could enable even less sophisticated entities to generate standardized indicators of residual risk by focusing on control implementation and transactional exposure. This could increase regulatory inclusion without imposing excessive compliance burdens.
Additionally, such a framework could encourage non-financial entities, particularly those that are partners or service providers to financial institutions, to adopt this approach voluntarily. By doing so, they would be able to supply relevant, structured information that aligns with supervisory expectations and integrates seamlessly into the reporting systems of regulated financial institutions. This would enhance the overall integrity and transparency of the financial ecosystem and improve collaborative risk management across sectors.
Question 4: Do you have any comments on the proposed frequency at which risk profiles would be reviewed (once per year for the normal frequency and once every three years for the reduced frequency)? What would be the difference in the cost of compliance between the normal and reduced frequency? Please provide evidence.
We understand the rationale for establishing a regular review cycle to ensure that risk profiles remain current. However, we believe that the proposed fixed frequency may not fully align with the dynamic nature of non-financial risks, particularly ML/TF risks, which can escalate rapidly based on geopolitical, organizational, or behavioral triggers.
Under the approach we propose, the accumulation of residual risk is monitored continuously through operational data. This enables institutions to identify and respond to material shifts in their risk profiles in near real-time, rather than relying on annual or triennial reassessments. As a result, the need for fixed periodic reviews may become secondary to an adaptive, evidence-based monitoring process.
In terms of compliance costs:
- Institutions using static or manual risk assessment processes will likely find annual reviews resource-intensive, with efforts that duplicate work or lag behind actual risk developments.
- A real-time, integrated framework would allow institutions to generate up-to-date risk profile data as a by-product of normal operations, significantly reducing the marginal cost of compliance.
Moreover, such a model supports a more forward-looking and preventative supervisory posture: issues are flagged before they escalate, minimizing supervisory burdens and enforcement interventions. This continuous observability allows even entities eligible for a reduced frequency to maintain up-to-date insights, ensuring that longer review intervals do not become blind spots.
In conclusion, while we appreciate the intent behind the proposed review frequencies, we believe that transitioning to a dynamic, continuous monitoring model would provide a more accurate and cost-effective foundation for supervision and self-governance. Furthermore, reliance on backward-looking periodic reviews inherently limits responsiveness: if breaches are identified during these assessments, institutions may be required to retrospectively analyze and possibly recalculate previously accumulated exposures, an exercise that is both resource-intensive and operationally complex. A continuous model would significantly mitigate this burden by identifying deviations as they occur, thereby avoiding retroactive corrections and enabling timely corrective actions.
It is also important to note that a periodic review process, by its nature, requires regulators to assess each institution's risk profile in its own specific context. While this ensures tailored oversight, it can substantially hinder the comparability of risk data across institutions, particularly when institutions vary widely in size, complexity, and reporting practices. A standardized, dynamic model that attributes risk based on operational data provides a common foundation for consistent interpretation, enabling regulators to draw more meaningful comparisons across the supervised population and to identify emerging systemic trends more efficiently.
Question 5: Do you agree with the proposed criteria for the application of the reduced frequency? What alternative criteria would you propose? Please provide evidence.
We recognize the EBA’s effort to establish criteria that justify a reduced review frequency for lower-risk institutions. However, we believe that the current criteria, based largely on backward-looking indicators and subjective assessments, may not fully support a reliable or sustainable supervisory model over time.
The proposed criteria focus on general indicators of “lower risk,” such as low inherent risk scores and an absence of recent supervisory findings. While intuitively reasonable, these indicators may not reflect the current or emerging risk posture of an institution. They may overlook deteriorations in control effectiveness or shifts in business exposure that could go undetected between review cycles.
We suggest a shift toward dynamic, data-driven eligibility for reduced frequency, based on real-time metrics derived from ongoing operational activity. Institutions could qualify for reduced frequency if they consistently demonstrate:
- Low and stable levels of accumulated residual risk in specific categories (e.g., ML/TF, fraud, conduct);
- Strong control effectiveness evidenced by consistently high mitigation index scores;
- Timely identification and remediation of emerging risk exposures without external prompting;
- Transparent and auditable risk metrics continuously aligned with supervisory expectations.
Such an approach ensures that eligibility for reduced review is not simply a designation based on past performance, but a result of continued demonstrable risk governance maturity. It also promotes ongoing investment in effective risk infrastructure and self-governance, rather than treating reduced frequency as a static entitlement.
Finally, by anchoring eligibility in standardized, operationally derived risk metrics, this model facilitates comparability across institutions and gives supervisors a consistent and objective basis for granting or revoking the privilege of reduced review frequency.
Question 6: When assessing the geographical risks to which obliged entities are exposed, should cross-border transactions linked with EEA jurisdictions be assessed differently than transactions linked with third countries? Please set out your rationale and provide evidence.
While it is understandable that cross-border transactions involving third countries may introduce heightened risk due to variations in legal frameworks, regulatory oversight, and AML/CTF standards, we believe that assessing geographic risk strictly on the basis of jurisdictional classification (EEA vs. non-EEA) may oversimplify the actual exposure landscape.
A more robust approach would involve dynamically measuring geographic risk as it accumulates through transactions, based on the actual operational exposure and the demonstrated effectiveness of applicable controls. Under this model:
- Risk levels are determined by the volume, nature, and risk profile of cross-border transactions rather than jurisdiction alone;
- Jurisdictional classification is one of several contributing factors to inherent risk, but not the sole determinant;
- Residual geographic risk is quantified by incorporating controls in place for KYC, transaction monitoring, correspondent banking, and data lineage across borders.
This approach supports a more precise and evidence-based distinction between exposures. For example, an institution with a high volume of transactions with a third country that has effective bilateral controls and real-time monitoring may exhibit lower residual geographic risk than an institution with limited controls engaging in higher-risk intra-EEA transactions.
Furthermore, dynamically measured geographic risk allows supervisors to assess institutional exposure based on actual behaviors and governance performance, rather than generalized assumptions. It also facilitates the early identification of specific risk concentrations that may not be visible through static jurisdiction-based categorization.
In summary, while third-country status is relevant, we believe that cross-border risk assessment should be grounded in observed transactional behaviors, contextual control effectiveness, and measurable accumulation of residual exposure. This enables both greater supervisory precision and more equitable treatment of institutions operating in diverse international contexts.
Question 1: Do you agree with the thresholds provided in Article 1 of the draft RTS and their value? If you do not agree, which thresholds to assess the materiality of the activities exercised under the freedom to provide services should the EBA propose instead? Please explain your rationale and provide evidence of the impact the EBA’s proposal and your proposal would have.
We acknowledge the intention behind establishing thresholds in Article 1 of the draft RTS to identify material cross-border service activity. However, we believe that using fixed numerical thresholds (e.g., turnover, transaction volume, or customer counts) may only partially capture the actual risk exposure presented by an institution's activities under the freedom to provide services.
Fixed thresholds offer clarity but risk creating perverse incentives (such as fragmentation of business structures) or overlooking high-risk but low-volume activities (e.g., niche services with elevated ML/TF exposure). Furthermore, fixed metrics may disproportionately impact newer or smaller entities that operate cross-border under rigorous controls but fall near arbitrary thresholds.
An alternative approach would be to complement quantitative thresholds with qualitative and operational risk indicators that reflect the nature and effectiveness of an institution's control environment. Specifically, thresholds for materiality could be assessed dynamically by considering:
- The accumulation of residual risk attributed to cross-border transactions;
- The effectiveness of controls in the jurisdictions where services are offered, as evidenced by real-time mitigation metrics;
- The diversity and complexity of the products or services provided;
- The observed behavioral risk patterns associated with the client base in those jurisdictions.
Incorporating a mechanism to calculate and observe these parameters dynamically would allow regulators to identify genuinely material risk profiles without relying solely on static measures. This model supports early supervisory visibility into emerging threats while also reducing compliance burdens for institutions that maintain effective, transparent, and well-documented risk control mechanisms.
By integrating both operational and exposure-based indicators into the materiality assessment, the resulting supervisory landscape would be more proportional, equitable, and risk-sensitive, benefiting both regulated entities and supervisory authorities.
Importantly, this would also enable regulators to manage by exception rather than engage in resource-intensive micro-management of compliance levels. By focusing supervisory attention on outliers and abnormal accumulations of risk, supervisory efficiency and effectiveness would be significantly enhanced.
Question 2: What is your view on the possibility to lower the value of the thresholds that are set in article 1 of the draft RTS? What would be the possible impact of doing so? Please provide evidence.
We believe that lowering the thresholds as currently proposed may not necessarily improve the identification of material cross-border activity, and may instead lead to disproportionate compliance obligations and supervisory inefficiencies.
Lower thresholds could result in a wider range of institutions being captured under the scope of heightened scrutiny, including those whose activities present limited risk in terms of ML/TF exposure. This could divert supervisory focus away from genuinely high-risk cases and create resource burdens for both supervisors and obliged entities, particularly smaller or more specialized firms.
Instead, we propose that materiality be defined through a combination of thresholds and operationally derived indicators that measure actual exposure and control effectiveness. If thresholds are to be lowered, such a change should be accompanied by mechanisms that differentiate entities based on their demonstrated ability to manage and mitigate risk effectively. For example, entities with consistently low residual risk accumulation—evidenced through real-time operational data—should remain eligible for reduced oversight despite falling within lower threshold brackets.
In this way, the focus shifts from a purely volumetric understanding of risk to one that is proportional, evidence-based, and better aligned with the institution’s actual risk posture. This model reduces the likelihood of over-regulation and supports the strategic allocation of supervisory resources to areas where intervention is most warranted.
Question 3: Do you agree on having a single threshold on the number of customers, irrespective of whether they are retail or institutional customers? Alternatively, do you think a distinction should be made between these two categories? Please explain the rationale and provide evidence to support your view.
While the simplicity of applying a single threshold across all customer types may offer administrative ease, we believe that this approach risks overlooking meaningful differences in the risk profiles, behaviors, and monitoring challenges posed by retail versus institutional clients.
Retail customers typically present a higher volume of low-value transactions, are more heterogeneous, and may vary widely in terms of geographic location, financial literacy, and risk sensitivity. Conversely, institutional clients often represent fewer entities but with more complex operations, larger transaction sizes, and often more opaque ownership or organizational structures.
A single threshold fails to capture these nuances. It could either underestimate the significance of institutional risk concentrations or overstate the systemic exposure of large volumes of well-controlled retail relationships. To maintain both risk sensitivity and fairness, we believe that differentiated thresholds should be considered, accounting for:
- The average transaction value and frequency per customer segment;
- The complexity and risk associated with onboarding, due diligence, and ongoing monitoring requirements;
- The degree of inherent risk tied to the client’s sector, structure, and jurisdiction;
- The institution's ability to quantify and track residual risks by segment in operational terms.
By adopting segmented thresholds—aligned with operational risk data and observed behaviors—supervisors can more effectively allocate oversight resources and institutions can avoid disproportionate compliance efforts.
Ultimately, a flexible, data-driven approach to customer segmentation better supports supervisory objectives and reflects the diversity of risk exposures across the financial sector.
Question 4: Do you agree that the methodology for selection provided in this RTS builds on the methodology laid down in the RTS under article 40(2)? If you do not agree, please provide your rationale and evidence of the impact the EBA’s proposal and your proposal would have.
We understand that the intent of the selection methodology under Article 12(7) is to build upon the framework developed under Article 40(2). However, we believe that this continuation, as currently framed, may miss an opportunity to address structural limitations present in the original methodology, particularly its static and compliance-centered orientation.
The Article 40(2) approach establishes a rule-based link between inherent and residual risk, anchored largely in assumptions about mitigation effectiveness. While this provides conceptual clarity, it does not account for the actual behaviors, control degradation, or emerging exposures that occur dynamically across institutions' operations.
A more robust and operationally grounded methodology, one that captures accumulated risk based on transactional and control-level data, would enhance the integrity and precision of the selection process under Article 12(7). In this alternative model:
- The exposure arising from activities under the freedom to provide services is quantified as it occurs, with residual risk accumulating only where controls are demonstrably ineffective or absent;
- The selection of entities for enhanced oversight would thus reflect actual, observed deterioration in risk posture, rather than assumptions based on static scoring models;
- Risk attribution and mitigation efforts are assessed in real time, reducing reliance on periodic reviews or historical performance.
Such a system enables the supervisory process to focus on those institutions where actual operational indicators justify intervention, rather than broadly applying criteria based on generalized classifications. This not only improves supervisory resource allocation, but also allows institutions to better understand the rationale behind their classification and engage in self-corrective behavior.
In summary, while the intent to build upon Article 40(2) is understood, we believe that adopting a more dynamic, data-driven methodology would provide a more effective foundation for selection and risk-based prioritization.
Question 5: Do you agree that the selection methodology should not allow the adjustment of the inherent risk score provided in Article 2 of the draft RTS under Article 40(2) AMLD6? If you do not agree, please provide the rationale and evidence of the impact the EBA’s proposal would have.
We believe that not allowing any adjustment to the inherent risk score—irrespective of institution-specific operational data—limits the accuracy, fairness, and responsiveness of the selection methodology.
Inherent risk scores are typically derived from generalized exposure categories and may not account for institution-specific factors such as geography, business model, client base, and control maturity. By making these scores static and non-adjustable, the methodology risks classifying institutions in a manner that does not reflect their actual risk posture, operational resilience, or risk governance performance.
A more effective approach would be to permit risk-sensitive calibration of inherent risk scores, supported by operational metrics that demonstrate how certain structural factors or control configurations influence the effective exposure of the institution. For example, if an institution operates in a high-risk sector but has successfully implemented risk-specific onboarding protocols, automated monitoring systems, and real-time risk visibility mechanisms, its effective inherent risk may justifiably be lower than what would be inferred from sectoral classification alone.
Allowing justified adjustments based on verifiable operational performance would:
- Improve the precision of supervisory prioritization;
- Enhance transparency and engagement between institutions and supervisors;
- Encourage proactive governance and continuous improvement among obliged entities;
- Avoid the penalization of entities that actively invest in structural risk mitigation.
While safeguards should be in place to ensure consistency and prevent manipulation, a degree of flexibility to adjust inherent risk scores in line with operational data would make the overall framework more adaptive, risk-aligned, and equitable.
Under the proposed method, such adjustments can be operationalized through the calibration of parameters related to product risk profiles and transaction volume bands. This allows obliged institutions to tailor their measurement of accepted risk exposures to reflect the realities of their specific business models and client interactions. Over time, these calibrations may be shared among institutions as emerging best practices or could evolve into a centralized reference framework, benefiting the entire sector.
This would support a more uniform and evidence-based approach to inherent risk measurement, enhancing comparability, promoting industry-wide learning, and strengthening the overall alignment of supervisory metrics with operational realities.
Question 6: Do you agree with the methodology for the calculation of the group-wide score that is laid down in article 5 of the RTS? If you do not agree, please provide the rationale for it and provide evidence of the impact the EBA’s proposal and your proposal would have.
While we recognize the value of calculating a group-wide score to ensure consolidated supervision and coordinated oversight, we believe that the current methodology may not fully account for the risk dynamics that emerge from the diversity and autonomy of group entities. A simple aggregation or weighted average approach, as may be implied by the current draft, could potentially obscure risk concentrations or emerging vulnerabilities in specific business units or jurisdictions.
We propose that a more operationally grounded calculation method be considered—one that:
- Quantifies residual risk at the transaction level across all entities, based on actual exposure and demonstrated control effectiveness;
- Aggregates group-wide risk by accounting for interdependencies, risk transmission channels, and shared services or infrastructure that may amplify vulnerabilities;
- Enables the isolation of outlier entities within a group that materially alter the group’s risk profile, allowing for more targeted interventions.
Such an approach would provide greater supervisory clarity and foster more accurate internal group-level governance. Additionally, it would incentivize each entity within the group to maintain its own operational discipline, as its performance directly affects the overall group risk posture.
Moreover, using this model, institutions would be able to proactively identify and address group-wide risks before they aggregate into systemically significant exposures. It would also allow for benchmarking across groups, enhancing regulatory comparability and supervisory transparency.
In summary, while the group-wide score is a valuable concept, we believe that a methodology anchored in dynamic, transaction-based risk measurement and disaggregated control performance would produce more reliable and actionable insights for both supervisors and group-level governance.
The method we propose allows for the aggregation of data across the group in a structured and meaningful way. By attributing residual risk to transactions and applying standardized parameters for product risk profiles and volume bands, each entity within the group contributes risk data in a consistent format. This harmonized structure enables group-level consolidation without distorting individual risk contributions. The aggregated risk profile thus reflects the true operational exposure of the group, highlighting areas of vulnerability or exemplary performance across subsidiaries. It also allows for comparative assessments within and across groups, ultimately supporting a more integrated and evidence-driven supervisory process.
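For illustration, the consolidation logic described above can be sketched in a few lines of code. This is a minimal sketch under stated assumptions: the entity names, the normalized risk and volume figures, and the volume-weighted combination rule are all illustrative choices for this example, not values prescribed by the proposed method or the RTS.

```python
from dataclasses import dataclass

@dataclass
class EntityRiskReport:
    entity: str
    residual_risk: float   # accumulated residual ML/TF risk, normalized to 0-1 (assumed scale)
    volume_weight: float   # entity's share of group transaction volume (shares sum to 1)

def group_risk_profile(reports):
    """Volume-weighted group score plus each entity's contribution,
    so that individual risk contributions are not distorted by consolidation."""
    contributions = {r.entity: r.residual_risk * r.volume_weight for r in reports}
    return sum(contributions.values()), contributions

# Hypothetical group: the parent is administratively large but operationally small,
# while subsidiary_b carries high residual risk on material transaction volume.
reports = [
    EntityRiskReport("parent", 0.10, 0.05),
    EntityRiskReport("subsidiary_a", 0.40, 0.60),
    EntityRiskReport("subsidiary_b", 0.70, 0.35),
]
score, contributions = group_risk_profile(reports)
```

Because each entity reports in the same harmonized format, the per-entity contributions remain visible alongside the consolidated score, which is what allows outlier entities to be isolated for targeted intervention.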
Question 7: Do you have any concern with the identification of the group-wide perimeter? Please provide the rationale and the evidence to support your view on this.
Yes, we believe that the identification of the group-wide perimeter requires further refinement to ensure that risk measurement and supervisory assessments are both accurate and proportionate.
A key concern is that a formalistic or static definition of the group-wide perimeter may not reflect the actual risk exposure or control structure within diversified financial groups. For example, entities that are legally within the same group may vary significantly in terms of the nature, complexity, and inherent risk of their activities. In such cases, a perimeter that includes all group entities without consideration of their materiality or contribution to group-wide risk could lead to distorted or diluted supervisory conclusions.
The methodology we propose allows for the identification of a dynamic and meaningful group-wide perimeter by anchoring it in the actual accumulation of risk through transactions and the structure of inter-entity controls. Risk is attributed to the specific entities generating it, and then aggregated into the group profile using harmonized data structures. This ensures that the perimeter reflects not only legal or organizational connections but also real risk transmission pathways and shared operational infrastructure.
Furthermore, the model facilitates the clear identification of high-impact entities within the group—those that either contribute disproportionately to residual risk or serve as control hubs for the wider organization. This enables supervisors to tailor their oversight based on real exposure and influence, rather than simply relying on legal form or group affiliation.
A dynamic perimeter definition also supports fairer and more efficient supervision by distinguishing between core and peripheral entities, allowing for proportional regulation and targeted remediation where needed.
Therefore, while group-wide perimeter identification is essential, we recommend a more nuanced, data-driven approach that focuses on operational interdependencies and actual risk flows rather than static organizational charts.
Question 8: Do you agree to give the same consideration to the parent company and the other entities of the group for the determination of the group-wide risk profile? Do you agree this would reliably assess the group-wide controls effectiveness even if the parent company has a low-relevant activity compared to the other entities?
We believe that giving the same consideration to the parent company and all other entities within the group, without differentiation, may lead to an inaccurate reflection of the group-wide risk profile and control effectiveness. While uniform treatment ensures formal equality, it may obscure the actual dynamics of risk accumulation and control application across the group.
Parent companies may at times serve primarily a strategic, financial, or administrative function and may have limited direct exposure to transactional activity or operational risk. Conversely, certain subsidiaries or business units may bear the bulk of risk-generating activities, and may also be responsible for implementing key elements of the group’s risk control infrastructure.
The method we propose addresses this by measuring residual risk and control effectiveness at the entity level, based on transaction-level data and calibrated parameters. This allows each entity’s contribution to group-wide risk to be assessed proportionally, based on real operational inputs. Where the parent company is primarily responsible for governance and shared risk functions, its effectiveness is still captured through the observed mitigation performance across the group. However, its weight in the risk profile is commensurate with its actual operational impact.
Such an approach:
- Reflects the true distribution of risk-generating activity and control execution across the group;
- Allows for accurate identification of structural weaknesses or excellence at the entity level;
- Encourages responsibility and accountability aligned with operational roles, rather than legal form;
- Enhances the fairness and precision of supervisory assessments.
To conclude, while the parent company plays a critical role in setting standards and ensuring compliance, the evaluation of risk should reflect the relative operational relevance of each group entity. This ensures a more faithful representation of group-wide risk and a more actionable basis for both supervision and internal risk management.
Question 9: Do you agree with the transitional rules set out in Article 6 of this RTS? In case you don’t, please provide the rationale for it and provide evidence of the impact the EBA’s proposal and your proposal would have.
We acknowledge the importance of transitional arrangements to ensure that institutions and supervisors can adapt effectively to new regulatory frameworks. However, we believe that the proposed transitional rules may benefit from further clarification and refinement to ensure alignment with implementation realities and to avoid unintended compliance burdens or supervisory gaps.
Specifically, if the implementation timeline is too short, institutions may resort to compliance shortcuts or minimal-effort implementations that replicate existing risk assessment methodologies rather than adopt innovative, operationally grounded frameworks. On the other hand, overly extended timelines may create a regulatory vacuum or delay the realization of the intended risk-sensitivity and supervisory improvements.
We recommend that the transitional arrangements be explicitly linked to demonstrable operational progress milestones, such as:
- The institution’s readiness to report on residual risk accumulations across cross-border transactions;
- The internal alignment of product-level and volume-based risk parameters;
- The institution’s ability to disaggregate group-wide risk and identify material contributors.
A phased approach that encourages early voluntary adoption, while supporting institutions in developing their capabilities, would yield a more meaningful transition. It would also create opportunities for regulators to observe and iterate on practical implementation issues in real time.
From a supervisory perspective, this would allow for early calibration of expectations and more informed guidance, reducing the risk of post-transition enforcement surges. Importantly, the proposed method's reliance on harmonized data structures and real-time observability facilitates both accelerated onboarding and long-term integration.
In our view, transitional rules should balance ambition with feasibility, enabling both industry and supervisors to prepare for a more dynamic, evidence-driven risk assessment framework without compromising its integrity or diluting its objectives.
Question 1: Do you agree with the proposals as set out in Section 1 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We agree with the overall objective of the proposals in Section 1, which aim to establish a clearer and more standardized framework for customer due diligence (CDD) and risk assessment practices. However, we believe that their effectiveness could be significantly enhanced by shifting the methodological foundation from compliance-checklist adherence toward dynamic, operationally grounded indicators.
The proposals, as currently drafted, emphasize documentation and rule-based thresholds, which, although administratively clear, may not fully reflect the true risk landscape of institutions or their clients. A rule-based approach risks incentivizing form-over-substance compliance, with institutions potentially focusing more on procedural fulfillment than on active risk detection and mitigation.
We propose an alternative or complementary approach centered around the quantification of accepted residual risks through transactions. Under this model, institutions dynamically measure the accumulation of residual ML/TF risk by assessing each transaction against calibrated risk parameters (such as product type, jurisdiction, and counterparty attributes) and real-time control effectiveness.
This would:
- Provide ongoing visibility into actual operational risk rather than relying on periodic documentation reviews;
- Allow for early identification of control failures or blind spots;
- Promote proactive remediation and self-correction by institutions before regulatory intervention is necessary;
- Reduce reliance on rigid, one-size-fits-all compliance activities that may not correspond to risk realities.
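The per-transaction measurement described above can be sketched as follows. The parameter tables, the equal-weight averaging of risk factors, and the linear control-effectiveness adjustment are all assumptions made for this illustration; in practice these calibrations would be institution-specific and evidence-backed, as discussed elsewhere in this response.

```python
# Hypothetical calibration tables (illustrative values only).
PRODUCT_RISK = {"basic_savings": 0.1, "cross_border_payment": 0.6}
JURISDICTION_RISK = {"domestic": 0.1, "high_risk_third_country": 0.8}
VOLUME_BANDS = [(1_000, 0.1), (10_000, 0.3), (float("inf"), 0.6)]  # (band ceiling, weight)

def volume_band_weight(amount):
    """Map a transaction amount to the weight of its volume band."""
    for ceiling, weight in VOLUME_BANDS:
        if amount <= ceiling:
            return weight

def residual_risk(product, jurisdiction, amount, control_effectiveness):
    """Inherent risk from calibrated parameters, reduced by demonstrated
    control effectiveness (0 = no mitigation, 1 = full mitigation)."""
    inherent = (PRODUCT_RISK[product]
                + JURISDICTION_RISK[jurisdiction]
                + volume_band_weight(amount)) / 3
    return inherent * (1 - control_effectiveness)

# A 5,000-unit cross-border payment to a high-risk jurisdiction, with
# controls demonstrated to be 70% effective, retains a residual exposure:
r = residual_risk("cross_border_payment", "high_risk_third_country", 5_000, 0.7)
```

The point of the sketch is structural rather than numerical: residual risk is captured at the point of origin, per transaction, and can therefore be accumulated across any dimension (customer, product, entity) without further interpretation.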
From a cost perspective, while transitioning to this model may involve an initial investment in system alignment and risk modeling, it could lead to long-term efficiency gains. Institutions would be able to streamline compliance tasks by automating operational risk assessment and focusing human resources where the risk is material. Moreover, regulators would be empowered to manage by exception, focusing oversight where data indicates actual, not assumed, vulnerability.
We are confident that, while the proposals in Section 1 move in the right direction by establishing a coherent framework, they would benefit from a shift toward a more real-time, transaction-sensitive model. Such an evolution would better align institutional behavior with supervisory intent and would support a more efficient and risk-proportionate compliance architecture.
Question 2: Do you have any comments regarding Article 6 on the verification of the customer in a non face-to-face context? Do you think that the remote solutions, as described under Article 6 paragraphs 2-6 would provide the same level of protection against identity fraud as the electronic identification means described under Article 6 paragraph 1 (i.e. e-IDAS compliant solutions)? Do you think that the use of such remote solutions should be considered only temporary, until such time when e-IDAS-compliant solutions are made available? Please explain your reasoning.
We believe that the provisions in Article 6 rightly acknowledge the growing reliance on remote customer onboarding and verification solutions, particularly in the context of digital financial services. However, we caution against assuming equivalency between all remote solutions and those that are e-IDAS compliant, without sufficient consideration for actual effectiveness in mitigating ML/TF risks.
Remote verification technologies—such as biometric authentication, live video calls, or digital document validation—can be highly effective if properly implemented and monitored. Yet their effectiveness is dependent on control robustness, user experience, and the broader risk environment in which they operate. e-IDAS-compliant solutions offer a structured, harmonized framework with cross-border validity and strong authentication assurances. Nevertheless, their availability remains uneven across jurisdictions and market segments.
We therefore recommend a pragmatic and risk-sensitive approach. Remote verification methods under paragraphs 2-6 should be permitted not merely as a temporary fallback, but as part of a broader risk-based model that allows institutions to calibrate controls in real time. Their continued use should be conditioned on demonstrable effectiveness in preventing identity fraud and controlling risk exposures, not just on whether e-IDAS compliance has been achieved.
Institutions could monitor the residual risk accumulated through onboarding processes involving these remote solutions. If consistently low residual risks are demonstrated, continued use should be supported. Conversely, if particular technologies correlate with elevated risk accumulation, their application should be subject to enhanced oversight or phased out.
Ultimately, flexibility in technology choice, coupled with real-time performance tracking and risk measurement, would allow institutions to innovate responsibly while maintaining supervisory confidence.
We would also caution against overreliance on e-IDAS-compliant solutions under the assumption that such systems are infallible. While e-IDAS offers a standardized and trusted identification mechanism, any digital system is subject to evolving threats, including data breaches, impersonation attacks, or systemic failures. A risk-based approach would allow institutions to monitor residual risk accumulation even when e-IDAS systems are used, ensuring that reliance on any single solution does not create hidden vulnerabilities.
This vigilance is especially important in complex or high-risk onboarding scenarios, where additional safeguards may be necessary to complement e-IDAS authentication. Supervisory frameworks should therefore encourage validation of system performance in practice, rather than relying solely on certification status or formal compliance. This approach also encourages a competitive technology landscape without compromising integrity or uniformity.
Question 3: Do you have any comments regarding Article 8 on virtual IBANS? If so, please explain your reasoning.
We recognize the benefits that virtual IBANs (vIBANs) can bring in terms of payment facilitation, operational efficiency, and customer experience. However, from an AML/CFT perspective, the use of vIBANs requires careful scrutiny, particularly due to their potential to obscure the true origin or destination of funds.
The principal concern lies in the possible masking of transactional flows through layering, where vIBANs are used across complex institutional structures or among third-party providers. Without granular visibility into the identity and activity of the underlying account holders, the traceability and auditability of financial transactions could be impaired. This could create blind spots in transaction monitoring and increase the challenge of detecting suspicious behaviors.
To mitigate these risks while preserving the legitimate benefits of vIBANs, we recommend that the framework under Article 8 include:
- A requirement for clear attribution of each vIBAN to its respective ultimate beneficiary or customer, with traceable records accessible in real time by both the obliged entity and the competent authority;
- A mechanism to quantify the residual risk associated with transactions involving vIBANs based on product, jurisdiction, client profile, transaction purpose, and transaction volume band;
- Integration of vIBAN-related data into institution-wide residual risk accumulation metrics, enabling better assessment of systemic exposure.
Under the proposed approach, risk from vIBAN transactions would be monitored as part of the broader framework for residual ML/TF risk accumulation. This ensures that even where multiple virtual accounts are used for legitimate purposes, any erosion in control effectiveness, such as misattribution, lack of clarity, or unusual concentration, would be detected in real time.
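The attribution requirement can be illustrated with a short sketch: each vIBAN maps back to an ultimate beneficiary so that virtual-account activity feeds the same residual-risk accumulation as ordinary accounts, and any vIBAN that cannot be attributed is itself flagged as a control failure. The registry structure, field names, and figures are hypothetical.

```python
# Hypothetical mapping of vIBANs to ultimate beneficiaries.
VIBAN_REGISTRY = {
    "VI-001": "customer_42",
    "VI-002": "customer_42",
    "VI-003": "customer_77",
}

def attribute(transactions):
    """Fold per-vIBAN residual risk into per-customer totals.
    Unattributed vIBANs are returned separately for escalation,
    since misattribution is itself an erosion of control effectiveness."""
    per_customer, unattributed = {}, []
    for viban, risk in transactions:
        customer = VIBAN_REGISTRY.get(viban)
        if customer is None:
            unattributed.append(viban)
            continue
        per_customer[customer] = per_customer.get(customer, 0.0) + risk
    return per_customer, unattributed

# Two attributable transactions and one orphaned vIBAN:
totals, flagged = attribute([("VI-001", 0.2), ("VI-002", 0.3), ("VI-999", 0.5)])
```

A design of this shape makes layering visible: multiple virtual accounts used by one customer accumulate into a single exposure figure rather than fragmenting across accounts.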
Although we support the use of vIBANs, we recommend that Article 8 establish more rigorous transparency, attribution, and risk monitoring standards to safeguard against their potential misuse.
Question 4: Do you agree with the proposals as set out in Section 2 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We broadly agree with the intent of Section 2, which seeks to establish consistency and robustness in the identification and verification processes for obliged entities. However, we believe that its implementation should be augmented by a dynamic and operationally responsive layer of risk monitoring to ensure that compliance activities correspond to real-world exposures.
While the proposed obligations offer structural clarity, they may not always distinguish effectively between different levels of residual ML/TF risk among customers or customer types. Institutions that apply a static, rule-based model could inadvertently treat high- and low-risk relationships similarly, missing opportunities for proactive engagement or efficient resource allocation.
By adopting a transaction-level approach to residual risk accumulation, institutions can more precisely monitor changes in customer behavior, emerging typologies, or anomalies that might not be evident during initial onboarding. This would not only strengthen compliance posture but also create a feedback loop that continuously refines and adapts identification protocols based on observed risk patterns.
Moreover, real-time insights into how residual risk develops and clusters across different client segments can support more targeted verification procedures and alert thresholds. Rather than conducting uniform checks at fixed intervals, institutions would calibrate their scrutiny to actual risk accumulation, creating a more proportionate and effective regime.
We also wish to highlight a broader concern: a rigid interpretation of Section 2 may unintentionally incentivize overly cautious or exclusionary practices by institutions. When risk assessments are detached from operational realities and grounded only in rule-based interpretations, institutions may err on the side of over-rejection rather than risk being non-compliant. This could lead to denial of service to certain clients, particularly those from underserved or higher-risk demographics, without a clear path to requalification or recourse. In the long term, such practices could undermine financial inclusion goals, weaken trust in regulated institutions, and even drive riskier behaviors into less visible channels.
By contrast, a dynamic, evidence-based monitoring framework empowers institutions to make proportionate decisions based on measurable control performance and actual risk exposure. In addition, by continuously capturing and analyzing the risk signals reflected in Annex I of the RTS, such as changes in product type, customer behavior, geographic exposure, or transaction patterns, institutions can more accurately identify the emergence of higher-risk profiles and adapt their verification protocols accordingly. This promotes a culture of risk competence rather than defensive overcompliance and helps ensure that regulatory objectives are met in a fair, adaptive, and sustainable manner.
In terms of cost and impact, while the dynamic model introduces analytical overhead at the outset, it would reduce the long-term burden by enabling risk-driven prioritization of compliance tasks and minimizing the inefficiency of one-size-fits-all procedures. This would enhance both compliance effectiveness and operational sustainability.
Therefore, while we support the direction outlined in Section 2, we recommend integrating dynamic risk observability mechanisms that reflect operational realities, customer behaviors, and institutional exposures. This would ensure that the objectives of Section 2 are met not only in form, but in measurable substance.
Question 5: Do you agree with the proposals as set out in Section 3 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We generally support the direction taken in Section 3, which addresses the application of simplified due diligence (SDD) measures for customers, products, and services that are deemed to pose a lower risk of ML/TF. We agree with the principle that such measures should be permitted where justified by a demonstrably low level of risk. However, we believe the effectiveness and integrity of this framework would be improved by incorporating mechanisms that monitor how risk evolves in practice, rather than relying on initial classification alone.
One area of concern is the potential over-reliance on pre-set criteria to determine eligibility for SDD. While these criteria help define a common baseline, they may also become static reference points that fail to capture dynamic shifts in behavior, market conditions, or control effectiveness. This may inadvertently permit continued application of simplified measures even as actual risk rises, or conversely, prevent their use where a transaction profile is operationally well-controlled but administratively misaligned.
We recommend that the SDD framework include a provision for ongoing risk monitoring at the transaction level, whereby the effectiveness of simplified measures can be verified through residual risk accumulation metrics. This would enable institutions to dynamically validate whether SDD remains appropriate, and adjust their treatment based on real-time observations. In cases where risk begins to accumulate at a higher-than-expected rate, standard or enhanced due diligence could be reinstated promptly, ensuring responsiveness and proportionality.
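The regime-switching logic described above reduces to a simple threshold check; the sketch below illustrates it under assumed threshold values. The regime labels and cut-off figures are illustrative only and would in practice be calibrated per institution within evidence-backed parameters.

```python
from enum import Enum

class Regime(Enum):
    SIMPLIFIED = "SDD"
    STANDARD = "CDD"
    ENHANCED = "EDD"

SDD_CEILING = 0.2   # hypothetical: above this, SDD is no longer justified
EDD_FLOOR = 0.6     # hypothetical: above this, enhanced measures are warranted

def due_diligence_regime(accumulated_residual_risk):
    """Select the due diligence regime from observed residual risk accumulation."""
    if accumulated_residual_risk >= EDD_FLOOR:
        return Regime.ENHANCED
    if accumulated_residual_risk > SDD_CEILING:
        return Regime.STANDARD
    return Regime.SIMPLIFIED

# A relationship whose accumulated risk has drifted above the SDD ceiling
# is promptly moved back to standard due diligence:
regime = due_diligence_regime(0.45)
```

The same comparison runs in both directions, so a relationship whose observed risk subsides can also requalify for simplified treatment, which keeps the regime responsive and proportionate.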
Furthermore, our proposed method supports the use of institution-specific calibrations, allowing flexibility to adapt product risk ratings and transaction volume bands within evidence-backed parameters. This approach ensures that SDD is applied only where it truly reflects the risk landscape, and not simply because a product or service has been generically deemed low-risk.
Over time, aggregated results from this model could be used to develop industry-wide benchmarks or centralized references for SDD eligibility. This would improve regulatory comparability, reduce interpretive discrepancies, and support convergence toward shared, evidence-informed standards.
In terms of cost impact, incorporating these dynamic tools into the SDD decision-making process could reduce false positives and unnecessary escalations, making compliance more targeted and efficient. Regulators, in turn, would benefit from being able to monitor exception patterns rather than uniformly scrutinizing low-risk relationships.
In summary, while we agree with the objective and structure of Section 3, we recommend supplementing it with dynamic mechanisms that allow institutions to validate and adjust their application of simplified due diligence over time, based on risk accumulation trends and control effectiveness. In this model, institutions would also be empowered to proactively adjust their mitigation strategies as soon as risk thresholds are exceeded, without needing to await regulatory instructions. This capability supports a more timely and efficient risk response, fosters institutional accountability, and reduces the burden on supervisory authorities by encouraging self-correcting behaviors.
Question 6: Do you agree with the proposals as set out in Section 4 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We recognize the intent of Section 4, which establishes provisions for identifying and responding to changes in a customer's ML/TF risk profile. This effort to formalize escalation and reassessment triggers is important for maintaining the integrity of risk-based due diligence. However, we believe the approach can be significantly strengthened by incorporating continuous, data-driven monitoring rather than relying primarily on periodic reviews or predefined triggers.
The reliance on predefined escalation triggers or periodic reviews, as outlined in the draft, may cause institutions to miss gradual yet meaningful shifts in customer behavior or risk exposure. A model that measures residual risk accumulation through transactions, factoring in product types, transaction volumes, geographic patterns, and customer behaviors, would enable real-time insight into whether a customer's profile is evolving toward a higher risk classification.
This capability would empower institutions to intervene proactively, adjusting due diligence or mitigation measures as soon as predefined thresholds are breached, rather than waiting for retrospective reassessments. In cases where a customer's risk profile begins to deviate from initial expectations, real-time detection enables swift recalibration of controls to avoid compliance gaps or exposures.
Furthermore, dynamically adjusting due diligence measures based on real-time data fosters a more adaptive and evidence-based risk governance model. Institutions would be positioned to demonstrate objective and proportionate escalation decisions to supervisors, reducing the need for micro-management and enabling supervisory authorities to focus on exceptional cases and systemic trends.
To conclude, while we agree with the structural intent of Section 4, we recommend enhancing it by incorporating mechanisms that track residual risk accumulation and escalation readiness in real time. This approach ensures that the reassessment process is not only reactive but also anticipatory, strengthening the overall effectiveness and credibility of the risk-based compliance framework.
Question 7: What are the specific sectors or financial products or services which, because they are associated with lower ML/TF risks, should benefit from specific sectoral simplified due diligence measures to be explicitly spelled out under Section 4 of the draft RTS? Please explain your rationale and provide evidence.
Certain sectors and products demonstrate risk characteristics that make them suitable candidates for simplified due diligence (SDD) measures, provided that ongoing monitoring supports their continued low-risk profile. Examples include:
- Basic savings accounts with strict usage limits and no international transfer functionality;
- Regulated pension fund accounts where disbursement conditions are predefined and restrictive;
- Low-value payment instruments with capped transaction thresholds and full transparency of fund origin;
- Government-issued benefit accounts managed under national identity verification schemes;
- Municipal utilities billing accounts limited to domestic transactions.
These examples are not inherently risk-free, but their structures often limit the opportunity for misuse, particularly when supported by clear source-of-funds documentation, established customer relationships, and narrow transaction purposes.
Rather than exhaustively codifying every eligible category in advance, it would be more effective to enable institutions to designate specific products or services for SDD based on real-time measurement of accumulated residual ML/TF risk. This allows for greater flexibility while maintaining supervisory confidence.
The approach we advocate supports such designations by aggregating risk data at the product and customer segment level. If a product consistently demonstrates minimal residual risk, institutions could apply SDD measures within defined parameters, and exit that regime should observed risks increase.
Moreover, this model encourages data sharing across institutions and sectors. A non-financial entity could, for instance, provide risk-relevant indicators to its banking partner, such as usage controls or identity verification details, that would support the bank’s risk classification. Over time, this may lead to voluntary convergence on simplified due diligence protocols for certain low-risk sectors, benefiting both the financial and non-financial industries.
Such a framework provides the dual advantage of protecting the financial system from misuse while promoting access and efficiency in areas where the risk is structurally constrained. Regulatory clarity on examples is helpful, but dynamic risk observability and proportional controls will remain essential to ensure that simplified measures do not lead to control erosion over time.
Question 8: Do you agree with the proposals as set out in Section 5 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We agree with the principle behind Section 5, which requires obliged entities to maintain ongoing monitoring of business relationships to ensure that the information held remains up to date and that risks are adequately managed. Nonetheless, the effectiveness of this provision can be improved by shifting from static reviews to dynamic tracking of residual ML/TF risk as it emerges through ongoing customer activity.
Static approaches that rely on periodic review cycles may overlook critical developments that emerge between review dates, such as changes in transaction behavior, geographies of counterparties, or patterns of control circumvention. A more responsive model would incorporate real-time indicators, enabling institutions to capture and respond to these developments before they lead to breaches or systemic risk accumulation.
We propose a monitoring model that quantifies residual risk at the transaction level, providing institutions with a cumulative view of how customer-specific exposures evolve. This allows for real-time alerts when thresholds are approached or exceeded and provides a data-backed foundation for when and how to update customer information and risk assessments.
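A cumulative monitor of this kind can be sketched as a running accumulator with a warning band below the alert threshold. The decay factor (so that older exposure slowly ages out), the threshold values, and the status labels are assumptions made for this illustration, not prescribed parameters.

```python
class RelationshipMonitor:
    """Running accumulator of per-transaction residual risk for one relationship."""

    def __init__(self, alert_threshold=1.0, warn_ratio=0.8, decay=0.99):
        self.alert_threshold = alert_threshold  # hypothetical cut-off
        self.warn_ratio = warn_ratio            # alert issued early, at 80% of the cut-off
        self.decay = decay                      # older exposure gradually ages out
        self.accumulated = 0.0

    def record(self, transaction_residual_risk):
        """Add one transaction's residual risk; return 'ok', 'warn', or 'alert'."""
        self.accumulated = self.accumulated * self.decay + transaction_residual_risk
        if self.accumulated >= self.alert_threshold:
            return "alert"
        if self.accumulated >= self.warn_ratio * self.alert_threshold:
            return "warn"
        return "ok"

# A gradual drift in behavior crosses the threshold on the fourth transaction:
monitor = RelationshipMonitor()
statuses = [monitor.record(r) for r in [0.1, 0.1, 0.5, 0.4]]
```

The warning band is what makes the model anticipatory rather than merely reactive: institutions can adjust mitigation as thresholds are approached, before a breach occurs.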
An additional advantage of this approach lies in how it encourages proactive internal compliance cultures. Rather than passively waiting for periodic cycles or regulatory audits, institutions are empowered to continuously refine their understanding of customer risk. This facilitates early intervention, limits compliance gaps, and strengthens the traceability of decisions and actions over time.
From a regulatory standpoint, this continuous monitoring model supports more efficient supervision. Supervisory authorities can focus their attention on high-risk entities or cases where institutions fail to adjust controls in response to observable risk signals, rather than expending resources uniformly across the entire financial sector.
Furthermore, the operationalization of real-time monitoring is increasingly feasible given advances in data integration, analytics, and risk modeling. Initial implementation may entail a learning curve, but it offers long-term cost savings by reducing the need for duplicative manual reviews and improving the targeting of compliance interventions.
We therefore support the objectives of Section 5, but recommend reinforcing its application by requiring institutions to adopt monitoring practices that reflect actual and evolving customer risks. This approach would result in more effective due diligence, reduced exposure to undetected risk accumulations, and a more focused and sustainable compliance environment for both institutions and regulators.
Question 9: Do you agree with the proposals as set out in Section 6 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We support the objective of Section 6, which aims to define clear conditions for the termination of business relationships when the required customer due diligence (CDD) measures cannot be fulfilled. Such clarity is essential to mitigate ML/TF risk and ensure that institutions maintain only those relationships for which they can confidently manage the associated risk.
However, the application of this section would benefit from greater operational nuance. Termination should be grounded in a measurable, cumulative assessment of risk exposure, rather than being triggered solely by formal documentation gaps or procedural failings. A residual risk-based approach enables institutions to determine, based on observed transaction behavior and control performance, whether the relationship continues to meet acceptable thresholds, even if some static compliance checks are incomplete.
For example, if a client’s residual risk score remains consistently low based on actual transaction monitoring and behavioral patterns, institutions may consider alternate controls to mitigate the documentation gap. Conversely, if transaction data shows emerging risk signals, even where documentation is formally complete, termination or escalation may be warranted. This allows institutions to make proportionate decisions that align with the spirit of the regulation rather than applying inflexible thresholds.
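The proportionate decision logic described in this example could be sketched as a simple rule set. The function name, thresholds, and action labels are hypothetical illustrations of the principle, not a proposed standard.

```python
def relationship_action(residual_scores, docs_complete, low=10.0, high=50.0):
    """Sketch of a proportionate offboarding decision (thresholds hypothetical).

    residual_scores: recent residual risk scores for the client, oldest first.
    docs_complete:   whether static CDD documentation is formally complete.
    """
    latest = residual_scores[-1]
    rising = len(residual_scores) >= 2 and residual_scores[-1] > residual_scores[0]
    if latest >= high:
        # Exposure exceeds tolerance regardless of the paperwork position.
        return "terminate_or_escalate"
    if latest <= low and not docs_complete:
        # Consistently low observed risk: mitigate the documentation gap instead.
        return "apply_alternate_controls"
    if rising and docs_complete:
        # Formally complete file, but emerging risk signals warrant review.
        return "escalate_review"
    return "maintain"
```

The point of the sketch is that both branches of the paragraph above are representable: a low-risk client with incomplete documentation is handled with alternate controls, while a documented client with rising residual risk is escalated.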
Moreover, this approach empowers institutions to proactively resolve issues that might otherwise lead to relationship termination. If risk accumulation is detected early, institutions can engage with the client to clarify or update records, apply additional controls, or modify transaction parameters. This reduces unnecessary offboarding and avoids creating barriers to access for clients who may pose no meaningful risk.
From a regulatory perspective, this model encourages more efficient oversight. Supervisors would be better positioned to focus on termination decisions that occur in the context of measurable risk, rather than purely formal deficiencies. It would also improve consistency in how institutions interpret their obligations across sectors and jurisdictions.
Considering the above, while we support the intent of Section 6, we recommend enhancing its implementation through a residual risk tracking approach that balances regulatory integrity with practical adaptability. This would allow institutions to safeguard their operations without overextending termination actions, and regulators to monitor the sector more strategically.
Question 10: Do you agree with the proposals as set out in Section 7 of the draft RTS? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We agree with the principle of Section 7, which aims to clarify the obligations of obliged entities in documenting and justifying their decisions related to due diligence measures and risk-based adjustments. However, we see an opportunity to reinforce its effectiveness by anchoring documentation requirements to measurable, operational outcomes rather than merely formal records.
In current practice, documentation often becomes an administrative artifact, decoupled from clients' actual behavior or the effectiveness of internal controls. By shifting the emphasis toward the traceability of risk-related decisions, as reflected in transaction-level data and accumulated residual risk metrics, institutions can more accurately demonstrate the rationale for their actions.
This includes justifying the application of simplified, standard, or enhanced due diligence; triggering reviews or escalations; or modifying controls in response to observed risk accumulation. If such decisions are captured in real time alongside their underlying data inputs, institutions can build audit trails that are both transparent and meaningful.
In operational terms, this approach reduces the need for duplicative narrative explanations or excessive paper trails. Instead, the same system that calculates risk can also provide just-in-time documentation outputs for internal review or supervisory inspection.
From a cost perspective, linking documentation to actual monitoring systems improves efficiency and consistency and helps avoid regulatory friction caused by gaps between stated policies and operational behavior.
Moreover, it encourages institutions to be more deliberate and accountable in their decision-making, knowing that all adjustments to risk treatment can be retrospectively verified against observed data. This reinforces both internal governance and external trust.
We support the intention of Section 7 but recommend strengthening its operational value by promoting data-driven documentation that reflects real-time risk conditions, rather than post hoc or checklist-based compliance statements. This would result in a more credible, traceable, and scalable compliance documentation framework.
Question 11: Do you agree with the proposals as set out in Section 8 of the draft RTS (and in Annex I linked to it)? If you do not agree, please explain your rationale and provide evidence of the impact this section would have, including the cost of compliance, if adopted as such?
We support the effort to consolidate and standardize the obligations for obliged entities to retain, update, and make available to competent authorities the data necessary to assess ML/TF risks. The emphasis placed in Section 8 and Annex I on ensuring that data is complete, accurate, and updated is essential for achieving transparency and traceability across the AML/CFT framework.
However, we believe the current framing could benefit from enhancements that would improve both operational feasibility and analytical value. In particular, we recommend that institutions be allowed to structure and maintain this data as part of an ongoing residual risk measurement process rather than as a static compliance ledger.
By embedding data obligations within a dynamic, transaction-level risk accumulation model, institutions can ensure that each data point directly contributes to the real-time evaluation of ML/TF exposure. This would:
- Reduce duplication and administrative burdens, as data maintenance would be integrated into ongoing operational processes rather than treated as a separate regulatory task;
- Enhance the relevance and timeliness of the data submitted to authorities, as it would reflect current operational realities and control performance;
- Provide regulators with more insightful and actionable information, especially when aggregated across institutions to identify sectoral trends or emerging threats.
The model we advocate allows institutions to store and link each data point, such as product attributes, counterparty jurisdiction, transaction volume, and control effectiveness, to corresponding residual risk calculations. These data points can be configured to trigger alerts or recalibrate thresholds, providing a responsive feedback mechanism that improves risk governance while also satisfying regulatory retention and auditability requirements.
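The linkage between retained data points and residual risk calculations described above might take the following shape. All field names and values are illustrative assumptions; the draft RTS and Annex I define the actual data obligations.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class RiskDataPoint:
    """Hypothetical retention record linking transaction attributes to the
    residual risk calculated from them (field names are illustrative)."""
    transaction_id: str
    product: str
    counterparty_jurisdiction: str
    amount: float
    control_effectiveness: float   # share of inherent risk mitigated, 0..1
    inherent_risk: float
    residual_risk: float           # inherent_risk * (1 - control_effectiveness)

def make_data_point(txn_id, product, jurisdiction, amount,
                    inherent_risk, control_effectiveness):
    # The residual risk is computed from, and stored alongside, the attributes
    # that produced it, so retention and risk measurement are one process.
    residual = inherent_risk * (1.0 - control_effectiveness)
    return RiskDataPoint(txn_id, product, jurisdiction, amount,
                         control_effectiveness, inherent_risk, residual)

# Each record is self-describing and serialisable for supervisory audit trails.
point = make_data_point("T-42", "wire", "XX", 25_000.0,
                        inherent_risk=8.0, control_effectiveness=0.75)
audit_json = json.dumps(asdict(point))
```

Because every record carries both its inputs and its derived risk figure, the same store satisfies retention requirements and feeds the alerting and threshold recalibration described above.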
Additionally, embedding data in a risk-sensitive framework enables more meaningful comparisons across entities and timeframes. Authorities would be better positioned to manage by exception, focusing reviews and inquiries on outliers or institutions with patterns indicating weak control application or disproportionate risk exposure.
In terms of cost, we recognize that the initial effort to structure data systems around this model may be non-trivial. However, the resulting efficiencies, through better targeting of compliance activities, reduced need for post-hoc reconciliations, and automated preparation of supervisory reporting, would produce significant long-term savings.
As such, while we support the goals of Section 8 and Annex I, we recommend enhancing their implementation through integration into a dynamic and risk-responsive data management framework. This approach would ensure that compliance obligations are met not only in letter but also in substance, delivering a more robust and agile AML/CFT architecture.
Question 1: Do you have any comments or suggestions regarding the proposed list of indicators to classify the level of gravity of breaches set out in Article 1 of the draft RTS? If so, please explain your reasoning.
We welcome the effort to establish a structured framework for assessing the gravity of breaches under the AMLD6. The proposed list of indicators under Article 1 reflects an attempt to achieve greater consistency and transparency in enforcement decisions. However, we believe that certain refinements could improve the framework’s relevance and operational value.
The current list appears to emphasize the formal aspects of breaches, such as duration, recurrence, and type of obligation violated, which are important, but may not always reflect the true magnitude or systemic implications of the breach. We suggest supplementing these with indicators that capture the actual residual ML/TF risk exposure caused by the breach, particularly when such exposure is objectively measurable.
For instance, a breach that leads to demonstrable residual risk accumulation across multiple transactions or clients, even over a short period, could pose a greater systemic threat than a longer-lasting breach that occurs in a low-risk or well-controlled environment. Integrating this dimension into the assessment would allow for a more proportionate and risk-sensitive classification of severity.
We also recommend that the framework include an assessment of whether the breach occurred in the presence of functioning risk monitoring mechanisms. Where institutions have actively monitored, reported, and sought to remediate a risk in good faith, the gravity of a breach should be seen in that operational context. Penalizing institutions equally regardless of their detection and remediation efforts could discourage transparency and undermine the development of strong internal compliance cultures.
Additionally, the impact on customer trust and sectoral reputation could be considered, especially in cases where the breach has a broader public or systemic effect. Metrics such as affected customer segments, geographical spread, or links to emerging typologies may help qualify the severity more holistically.
As explained, while we agree with the intent and structure of Article 1, we recommend complementing the proposed indicators with metrics that reflect operational risk outcomes, systemic relevance, and institutional governance behaviors. This would allow the framework to function not just as a punitive tool, but also as a mechanism that incentivizes genuine risk control, proactive reporting, and continuous improvement in AML/CFT practices.
Question 2: Do you have any comments or suggestions on the proposed classification of the level of gravity of breaches set out in Article 2 of the draft RTS? If so, please explain your reasoning.
We support the establishment of a tiered classification system in Article 2, which seeks to distinguish between minor, significant, and very significant breaches. However, we believe the framework would benefit from more precise alignment with measurable indicators of actual risk impact, institutional responsiveness, and system-wide implications.
In its current form, the proposed classification tends to rely on the nature of the obligation breached and the degree of deviation from compliance. While these are relevant factors, they may not sufficiently capture the full context of the breach, particularly its operational risk consequences or the extent to which it was actively monitored and mitigated.
A more refined classification would consider the following dimensions:
- Residual risk exposure: How much risk was actually accumulated due to the breach? Was there a quantifiable increase in residual ML/TF risk across transactions or accounts that can be traced to the failure?
- Institutional detection and remediation: Did the institution identify the issue proactively and act swiftly to remediate it? Was the breach disclosed to the regulator transparently and in a timely manner?
- Systemic and reputational impact: Did the breach have cascading effects beyond the institution, affecting confidence in the financial system or exposing vulnerabilities in broader AML/CFT processes?
Incorporating these operationally grounded dimensions would allow the gravity classification to better reflect the real-world implications of non-compliance. For example, a "significant" breach in a low-risk context that was promptly self-identified and mitigated might reasonably warrant a lower classification than a "minor" formal breach that leads to material risk accumulation.
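The interaction of the three dimensions listed above could be illustrated as follows. The tier labels mirror Article 2, but the combination logic and the exposure threshold are hypothetical, offered only to show how operational context can reorder formal classifications.

```python
def classify_gravity(residual_exposure, self_identified, promptly_remediated,
                     systemic_impact, high=50.0):
    """Illustrative gravity classification combining residual exposure,
    institutional responsiveness, and systemic impact (threshold hypothetical)."""
    mitigated = self_identified and promptly_remediated
    if systemic_impact or (residual_exposure >= high and not mitigated):
        # Cascading effects, or large unaddressed exposure, weigh heaviest.
        return "very significant"
    if residual_exposure >= high or not mitigated:
        return "significant"
    # Low measurable exposure, self-identified and promptly remediated.
    return "minor"
```

Under this sketch, a breach with negligible exposure that was self-identified and remediated lands in the lowest tier even if formally serious, while a formally minor breach producing large unmitigated exposure is escalated, which is the proportionality argument made in the paragraph above.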
Moreover, we suggest that regulators adopt a proportional and transparent methodology for assigning severity levels, ensuring consistency across institutions and cases. This could be aided by institution-wide metrics and benchmarking systems that indicate how effectively different entities are managing similar risks.
Ultimately, we believe the classification framework should incentivize the right behaviors: continuous monitoring, transparent reporting, timely remediation, and evidence-based risk control. This will ensure that the classification of breach gravity remains meaningful, fair, and aligned with both regulatory objectives and operational realities.
Question 3: Do you have any comments or suggestions regarding the proposed list of criteria to be taken into account when setting up the level of pecuniary sanctions of Article 4 of the draft RTS? If so, please explain your reasoning.
Yes. We support the intention to define clear and proportional criteria for setting pecuniary sanctions, but believe the proposed list could be further refined to reflect not only the outcome of the breach but also the institution's capacity and behavior in relation to it. A static list of factors, if not grounded in observable performance indicators, may lead to inconsistent enforcement or diminished incentives for institutions to improve internal controls proactively.
We suggest incorporating criteria that include:
- The magnitude of residual risk associated with the breach, as objectively measurable within the institution’s risk measurement system;
- The speed and effectiveness of remediation efforts, including whether risk levels demonstrably decreased following control interventions;
- The degree of institutional control awareness, i.e., whether failures resulted from gross negligence, control gaps, or unforeseeable system breakdowns;
- The institution’s prior risk trajectory, where persistent or worsening residual risk may signal systemic management weaknesses.
Moreover, we recommend embedding an evaluation of whether the institution had a framework in place capable of identifying the issue internally before external detection. Institutions that operate with such proactive mechanisms should receive moderated penalties, as they contribute to the broader goal of systemic risk reduction and lessen the supervisory burden.
Incorporating such dynamic and evidence-based criteria will ensure that sanctions serve not only as deterrents but also as tools to promote sustainable improvements in governance and internal risk management.
Question 4: Do you have any comments or suggestions of addition regarding what needs to be taken into account as regards the financial strength of the legal or natural person held responsible (Article 4(5) and Article 4(6) of the draft RTS)? If so, please explain.
We acknowledge the importance of the criteria outlined in Article 4 for setting pecuniary sanctions. These criteria aim to reflect the severity of the breach and its surrounding circumstances. We suggest further refinement to ensure that penalties are not only fair but also serve as effective deterrents and reinforcements of sound risk governance practices.
In particular, we welcome the inclusion of elements that allow for consideration of an institution’s cooperation, remediation efforts, and ability to detect and address breaches through its own internal monitoring systems. This recognition creates a necessary incentive structure where institutions are encouraged to invest in risk-aware practices rather than simply avoiding detection.
However, we believe the framework should more explicitly incorporate the actual residual ML/TF risk generated by the breach as a central factor in sanction determination. A purely formalistic breach that produces negligible residual risk should not carry the same penalty weight as one that leads to significant, quantifiable exposure, even if both fall under the same technical classification.
Regarding the sanctioning of natural persons, we recommend particular care in applying proportionality. The draft appropriately considers the financial strength of individuals, but should also weigh the degree of responsibility, influence, and autonomy the individual had in the specific breach. Sanctioning should be differentiated between willful misconduct and failures resulting from unclear mandates, poor organizational governance, or system-wide deficiencies beyond an individual’s control.
Where senior management is shown to have been actively involved in suppressing internal risk reporting or ignoring credible risk indicators, individual sanctions can justifiably be applied more severely. However, in cases where the individual acted in good faith within a flawed system, sanctions should reflect that nuance to avoid discouraging future transparency or internal whistleblowing.
We therefore propose enhancing the Article 4 criteria with an emphasis on actual risk consequences and governance behavior, both institutional and individual. This approach would not only improve the effectiveness and legitimacy of sanctions but also strengthen the incentive for entities and individuals to actively contribute to a robust and forward-looking AML/CFT culture.
We also wish to highlight a potential unintended consequence of the framework’s current emphasis on individual responsibility. If applied without careful calibration, it may lead to a growing perception that senior compliance officers or risk managers are de facto representatives of regulatory authorities within institutions, accountable more to supervisors than to the institution’s strategic governance. This perception can erode internal collaboration, alienate control staff, and ultimately discourage proactive engagement with the business.
In such an environment, individuals may begin to operate defensively, making decisions aimed at personal protection rather than risk mitigation. This could include withholding internal disclosures, delaying escalations, or taking overly conservative stances that impede legitimate operations. There is also the broader risk that competent professionals may be disincentivized from accepting control roles if the legal and financial risks of doing so are seen as excessive or poorly defined.
We recommend that the RTS clarify that individual accountability should be applied with proportionality, context, and fairness, reinforcing, rather than undermining, the effectiveness of control functions within obliged entities.
Question 5a: restrict or limit the business, operations or network of institutions comprising the obliged entity, or to require the divestment of activities as referred to in Article 56 (2) (e) of Directive (EU) 2024/1640?
We believe the use of these administrative measures should be based on clear, evidence-driven criteria that establish a direct relationship between the identified risk and the proposed limitation. Without such a basis, restrictions may result in disproportionate business disruption or strategic distortions.
These measures, while intended to mitigate systemic exposure, may unintentionally penalize segments of the business that are not directly related to the identified breach, undermining otherwise sound operations and jeopardizing the institution’s ability to deliver services or sustain its core strategy.
Supervisors should demonstrate that such actions are necessary because the residual risk exceeds tolerable thresholds and cannot be mitigated through less intrusive means. Importantly, institutions should be afforded a transparent remediation path and the opportunity to show how their own internal controls are working to bring residual risk within an acceptable range.
Question 5b: withdrawal or suspension of an authorisation as referred to in Article 56 (2) (f) of Directive (EU) 2024/1640?
Authorisation withdrawal should be treated as a measure of last resort, used only when the breach reflects a fundamental and unremedied failure of governance, risk management, or willingness to cooperate. We believe it is essential to factor in whether the institution has demonstrated meaningful progress in risk mitigation and control improvement.
Supervisors should assess not just the severity of the breach but also the institution’s posture, whether there is a genuine commitment to correcting deficiencies and restoring compliance.
Question 5c: require changes in governance structure as referred to in Article 56 (2) (g) of Directive (EU) 2024/1640?
We agree that governance changes can be an effective and proportionate response to systemic or repeated compliance failures. However, the criteria for requiring such changes should include an analysis of whether the current structure materially contributed to the breach, and whether it has proven unable to support remediation. Care should also be taken to avoid inadvertently alienating compliance leadership by creating an atmosphere of personal liability rather than shared responsibility.
Governance measures should encourage accountable, empowered, and well-resourced compliance leadership, not discourage capable individuals from stepping into those roles.
Question 6: Which of these indicators and criteria could apply also to the non-financial sector? Which ones should not apply? Please explain your reasoning.
We believe that many of the indicators and criteria proposed under the draft RTS are conceptually transferable to the non-financial sector, particularly those related to residual risk exposure, recurrence of breaches, and the presence (or absence) of internal detection and remediation efforts. However, the operational realities of non-financial entities vary significantly, and a one-size-fits-all application may undermine the effectiveness of the framework.
Non-financial entities often lack the sophisticated compliance infrastructure of financial institutions. As such, criteria based on the formality or complexity of internal systems (e.g., structured risk models, governance reporting hierarchies) may not be directly applicable or may unfairly penalize firms with simpler but proportionate controls. Instead, evaluations in the non-financial sector should emphasize:
- The impact of the breach in terms of facilitating or failing to prevent ML/TF activity,
- The traceability and accountability of internal decisions that led to the breach,
- The degree to which the entity acted in good faith to address issues once identified,
- And the presence of sector-relevant risk mitigation standards, even if less formalized.
Certain criteria, such as those relating to the management of cross-border risk exposures or the use of transaction monitoring technologies, may also need adaptation, given that non-financial entities may not handle comparable transaction volumes or operate across jurisdictions in the same way as financial institutions.
In contrast, criteria related to the financial strength of the entity, degree of intentional misconduct, and risk outcomes of the breach (e.g., whether it enabled criminal behavior) remain universally relevant. These should be retained but assessed with consideration of the size, structure, and resources available to non-financial firms.
We recommend that the final RTS include specific interpretive guidance on how key criteria should be adapted for use in the non-financial sector. This would help preserve fairness while reinforcing the same core objective: the reduction of ML/TF risk across all obliged entities.
Question 7: Do you think that the indicators and criteria set out in the draft RTS should be more detailed as regards the natural persons that are not themselves obliged entities and in particular as regards the senior management as defined in AMLR? If so, please provide your suggestions.
We support the principle of individual accountability where senior management or natural persons materially contribute to compliance failures. However, we believe that the indicators and criteria applicable to such individuals could be further refined to distinguish between types of involvement and the nature of responsibilities held.
The draft RTS could benefit from clearer guidance on how to differentiate between:
- Active misconduct (e.g., knowingly ignoring warnings or bypassing controls),
- Negligence (e.g., failure to oversee or question risk reports), and
- Systemic or organizational failure (e.g., absence of clear delegation, ambiguous reporting lines).
Not all senior managers hold equal influence or visibility over AML/CFT systems. A more granular classification of roles and their corresponding obligations would enhance fairness. For example, defining a baseline set of expectations for risk owners, compliance leads, and executive directors separately would better align enforcement with actual accountability.
We also encourage the incorporation of criteria that account for an individual's contribution to remediation efforts. Where a senior officer has taken documented, good-faith action to raise concerns, promote improvements, or escalate issues, especially in the face of internal resistance, such efforts should be considered in any assessment of personal culpability.
Lastly, the RTS could clarify the threshold at which individual responsibility is invoked. This would help avoid the perception that senior personnel are held strictly liable for all institutional shortcomings, which could undermine morale and discourage capable professionals from assuming these critical roles.
Greater specificity would enhance both the deterrence and the legitimacy of enforcement, ensuring individuals are sanctioned in a way that reflects their true influence on the compliance environment.
Question 8: Do you think that the draft RTS should be more granular and develop more specific rules on factors and on the calculation of the amount of the periodic penalty payments and if yes, which factors should be included into the EU legislation and why?
We believe there is merit in developing more granular guidance on the calculation of periodic penalty payments, as this would enhance both predictability and fairness in supervisory enforcement. More specific rules would also reinforce the principle of proportionality, which is central to effective compliance regimes.
Currently, the lack of detailed criteria on how penalty payments are to be calibrated risks creating inconsistency across jurisdictions and institutions. Institutions may face materially different penalties for similar breaches depending on the interpretation of supervisory authorities. This variability could undermine the perception of fairness and reduce the incentive for firms to proactively improve their AML/CFT frameworks.
We recommend that the RTS include structured parameters such as:
- The degree of residual risk resulting from the breach, especially when such exposure is traceable to specific business lines, products, or transactions;
- Duration of the breach, but weighted by the institution’s demonstrated efforts to detect and remediate the issue;
- Reputational and systemic impact, including whether the breach contributed to broader market vulnerabilities or public concern;
- Capacity and size of the institution, to ensure that penalties are meaningful but not disproportionate;
- Cooperation and transparency, including whether the institution voluntarily disclosed the breach and cooperated fully with authorities;
- Previous record, giving due consideration to past enforcement actions, remediation efforts, and whether repeat patterns are evident.
Furthermore, we suggest allowing for reductions in the penalty calculation where institutions employ demonstrably effective early-warning or risk measurement systems that detect and mitigate risk exposures before they materialize into harm. This would incentivize the development and use of advanced compliance and monitoring systems.
To further reinforce this approach, we propose that periodic penalties should be dynamically adjusted based on evidence of residual risk reduction during the remediation period. If an institution demonstrates that its mitigation efforts are actively lowering the residual risk associated with the original breach, measured through verifiable data or indicators, then the financial penalties should proportionally decrease in response. This would not only maintain the deterrent effect but also serve as a constructive incentive for institutions to accelerate and sustain their remediation efforts.
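The dynamic adjustment proposed above could work along the following lines. The linear scaling and the minimum-payment floor are hypothetical design choices introduced for this sketch; they are not figures taken from the draft RTS.

```python
def adjusted_penalty(base_penalty, initial_residual, current_residual, floor=0.1):
    """Sketch of a periodic penalty that decreases with verified residual risk
    reduction during remediation (scaling and 10% floor are hypothetical)."""
    if initial_residual <= 0:
        return base_penalty * floor
    # Share of the original residual risk that remains, clamped to [0, 1].
    remaining = max(0.0, min(1.0, current_residual / initial_residual))
    # A floor keeps a minimum payment so the deterrent effect is not fully eroded.
    return base_penalty * max(floor, remaining)
```

Halving the measured residual risk halves the periodic payment, while eliminating the risk entirely still leaves the floor amount, preserving deterrence while rewarding demonstrable remediation.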
Ultimately, the objective of periodic penalty payments should not only be deterrence but also the promotion of sustainable, self-correcting compliance behavior. More explicit rules would help align penalties with actual risk impact and compliance performance, improving regulatory coherence and reinforcing trust across the financial system.
Question 9: Do you think that the draft RTS should create a more harmonised set of administrative rules for the imposition of periodic penalty payments, and if yes, which provisions of administrative rules would you prefer to be included into EU legislation compared to national legislation and why?
We believe that establishing a more harmonised set of administrative rules for the imposition of periodic penalty payments at the EU level would enhance consistency, transparency, and fairness in enforcement across Member States. A uniform baseline would help eliminate divergence in supervisory practices, ensuring that institutions operating in multiple jurisdictions are subject to comparable expectations and penalties.
We recommend that the harmonised rules incorporate the following provisions:
- A core methodology for calculating penalty amounts, reflecting residual risk exposure, duration of breach, size of institution, and degree of cooperation, to ensure predictable and proportionate outcomes;
- Clear criteria for adjusting penalties dynamically during remediation, based on measurable reductions in residual risk or demonstrated progress in control effectiveness;
- Defined thresholds for triggering supervisory actions, such as when residual risk exceeds a certain benchmark or when remedial actions stall;
- Rights of representation and appeal, ensuring that institutions can engage meaningfully with regulators throughout the enforcement process;
- A shared taxonomy for breach categories, so that infractions are consistently interpreted and graded across Member States;
- Publication standards for penalties, with discretion applied in a way that balances public interest and reputational fairness. We believe that when penalties are made public, accompanying context should be provided—especially where the institution has demonstrated progress in reducing residual risk or implementing remediation measures. Such contextualisation helps avoid disproportionate reputational harm and provides a clearer signal to stakeholders about the institution’s risk posture and governance trajectory.
EU-level harmonisation would also support supervisory convergence, enabling the development of centralised benchmarks, shared best practices, and coordinated peer reviews. This would reduce the regulatory burden for cross-border institutions and reinforce the credibility of the EU's AML/CFT framework.
While flexibility for national regulators to tailor supervision to local conditions remains important, minimum harmonised rules would serve as an anchor for fairness and coherence across the Single Market.