2 Paragraph 9 The current definition of Digital Identity Issuer risks complicating the single market for remote onboarding service providers in the EU. There is a distinction between an issuer of a digital identity (i.e. eID service providers under eIDAS) and a business providing verification/authentication services for the purposes of onboarding. This distinction should be reflected in the EBA guidelines to avoid confusion. European legislative tools (including Regulations, Directives, Delegated Acts and Guidelines) should use consistent definitions to avoid fragmentation in the application of rules and to provide legal certainty to market participants (e.g. third-party onboarding service providers, financial institutions and consumers).
4.1 Paragraph 16 As well as directly referencing qualified trust service providers, the guidelines should also allow financial service providers to treat service providers that have been certified by a national conformity assessment body under the eIDAS Regulation (even where they are not qualified trust service providers) as appropriately meeting the criteria in paragraph 15.
Service providers that have received equivalent certification under the eIDAS Regulation or the AML Directive uphold the same standards, and should therefore not have to demonstrate compliance with their process requirements against yet another set of criteria. If this is not accepted, paragraph 15 should at least stipulate that once one Member State recognises that a service provider meets all the criteria set out in the paragraph, this recognition should be mutual across the Union, so that service providers are not required to provide the same proof each time they expand their service to a new market within the EU.
4.3 Paragraph 33, on the authenticity of document copies, begins by stating that steps to verify the authenticity of such copies “may include.” The main motivation behind these guidelines is to provide increased harmonisation across the continent. Phrases such as “may include” are exactly what leads to differing interpretations of the rules by national competent authorities (e.g. one authority may require all steps under section 4.3 paragraph 33 for a copy to be considered reliable, whilst others may require only one). We would argue for a risk-based distinction between the various steps, setting out which steps are required depending on the risks involved.
4.4 paragraph 42 on insufficient quality of evidence being provided. Onfido believes that requiring the customer to move to a physical location should be a last resort. The guidelines should allow for repeated attempts at remote customer onboarding with other pieces of evidence, as well as face-to-face verification over videoconference. Only after repeated attempts and the virtual face-to-face have proven unable to alleviate the ‘uncertainty and ambiguity’ should the customer be required to attend a face-to-face meeting in the same physical location.
4.4 paragraph 43(a) requires “absolute clarity” in photographs taken as a means of verifying one’s identity. We would suggest making this language more precise, in line with paragraph 44(a): “the required properties are captured with the necessary clarity to allow the proper verification of the customer’s identity.”
4.4 paragraph 43(c) appears to require liveness detection as part of any instance of photograph-based onboarding. This is not appropriate: it does not take a technology-neutral approach, fails to take into account solutions that are widely used in the market and work effectively, and is disproportionate to what it seeks to achieve. The type of technology used to authenticate does not directly correlate with the effectiveness of the authentication itself. Services that do not use liveness detection can still mitigate the risks involved in photograph-based onboarding. Under a technologically neutral approach, such services should be allowed to operate in the European single market.
4.4 paragraph 43(d) Onfido strongly welcomes the allowance for strong and reliable algorithms to complete verification in place of human verification. Nevertheless, we still require clarity on what the wording “in the absence of human verification” means. Does this mean that operators can freely choose AI over humans to verify, or is this only allowed where humans cannot do so for particular reasons? We assume the former is intended, and if so, we would recommend that this be clarified in the text to avoid misinterpretation at the national level.
4.4 paragraph 44(b/c) Onfido believes that staff knowledge may not be required in each and every videoconference if a trained AI oversees physical and psychological reactions during the videoconference, provided that the employee conducting the interview is also given the guide.
4.4 paragraph 45 again does not appear to take a technology-neutral approach. As with our comments on paragraph 43(c), it fails to take into account solutions that are widely used in the European market and effectively uphold existing KYC requirements. In essence, paragraph 45 is disproportionate to what it seeks to achieve. There are already a number of solutions on the market that provide high levels of security to financial services operators and effectively support them in meeting their KYC requirements without the need for randomness (these include, but are not limited to, document and biometric checks).
Paragraph 50(a) To ensure consistency throughout the guidance, the paragraph should specify that the level of assurance to be determined should be “in relation to reducing substantially the risk of impersonation, misuse or alteration of the identity,” as set out in paragraph 48. The subparagraph should therefore read: “determine the level of assurance in relation to reducing substantially the risk of impersonation, misuse or alteration of the identity based on the elements of technical specifications and procedures outlined in the Annex to Regulation (EU) 2015/1502”.
Service providers that are “regulated, recognised, approved or accepted by the relevant national authorities”, should be recognised as such throughout the EU.
4.6.2 paragraph 59(b) The phrasing “access to the data is strictly limited and registered;” may prevent the development of future data consortia for the purposes of AML, which would in fact improve existing systems and reduce fraud. Moreover, paragraph 59 is not needed, as it duplicates existing horizontal legislation, in particular the GDPR. Such duplication is unnecessary and adds complexity and burden for all parties without adding any value; it should therefore be removed.
Furthermore, section 4.6 would also benefit from providing guidance at the national level with regard to sub-outsourcing. Overall, we consider that Article 40 of the upcoming AML Regulation will improve the current landscape and lead to a more harmonised application of outsourcing rules. Nevertheless, neither Article 40 nor these guidelines cover sub-outsourcing. To effectively harmonise the EU outsourcing regime and create further certainty for CDD service providers, a common approach to sub-outsourcing should also be included in the guidelines.
This is especially important as tech businesses/startups in the AML space tend to specialise in specific elements of the CDD process (e.g. remote customer onboarding). To compete effectively with Big Tech and established players, such businesses need to be able to cooperate with other businesses to provide one joint solution. In the area of eKYC and CDD this is achieved in many instances through sub-outsourcing of the services in which an outsourced company is not specialised. This allows businesses to remain specialised and to develop higher-end solutions, whilst remaining competitive in the wider CDD marketplace through sub-outsourcing arrangements.
We would therefore recommend the inclusion of a paragraph clarifying that sub-outsourcing should be allowed as long as certain criteria are met (e.g. it has been agreed between the outsourced party and the operator, and regular reviews/monitoring can still take place).