Response to consultation on draft Guidelines on the use of remote customer onboarding solutions
1. Do you have any comments on the section ‘Subject matter, scope and definitions’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
No comments – strong agreement.

2. Do you have any comments on Guideline 4.1 ‘Internal policies and procedures’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
Clause 16: we agree, but request that an important ambiguity in this clause be clarified: where components have been issued with module certification to Qualified level in accordance with this regulation, these should also be included in this derogation. Hence the wording should be amended to read: “Financial sector operators should consider the assessment criteria in paragraph 15 to be appropriately met to the extent that the solution includes qualified trust services, or component Trust Services or modules certified to this level, in accordance with Regulation (EU) 910/2014”. Without this clarity, financial sector operators will be uncertain about the degree to which Module Certification does or does not absolve them of the need to repeat the obligations of clause 15 in respect of these modules, and could incur redundant costs in doing so.

Clause 23: we agree strongly and request added clarity. Our experience is that it is vital to conduct ongoing monitoring of the reliability and adequacy of the solution. Unfortunately, most automated face verification and liveness systems do not support such monitoring, with the result that systematic, repeatable and large-scale compromises can occur undetected and persist for extended periods of time. To prevent this, we suggest that this clause be strengthened to make clear that it covers any automated biometric elements of the solution.
3. Do you have any comments on the Guideline 4.2 ‘Acquisition of Information’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
No comments – strong agreement.

4. Do you have any comments on the Guideline 4.3 ‘Document Authenticity & Integrity’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
No comments – strong agreement.

5. Do you have any comments on the Guideline 4.4 ‘Authenticity Checks’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
Clauses 39 & 40: we disagree strongly with the decision to require liveness detection only for high-risk transactions. Clause 39 allows many financial onboarding processes to be conducted without any liveness check, and we think this represents a serious threat to the financial stability of the European Union. Our reasoning is based on logic, experience and standards.

The purpose of gathering biometric data is to bind a natural person to a set of identity attributes provided on an identity document. The face on the identity document is always available to an attacker, who can in all cases use it to create a forgery, either physical or digital. The cost of creating such an attack is low, tending to zero for simple attacks. Such attacks include: printed still images; still images replayed on a PC or tablet screen; images animated using freeware and displayed on a screen; and images animated using freeware and injected via a virtual webcam. None of these methods involves significant cost or effort, and without liveness checks they will all succeed, 100% of the time. Hence a biometric check is completely valueless unless liveness is also tested. Indeed it is dangerous, since it creates an illusion of security for the financial services operator where none is justified, while remaining in full compliance with EBA regulations.
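To illustrate the point, the sketch below (our own illustrative code; all names and thresholds are hypothetical and not drawn from the draft Guidelines) shows why a face-match score alone cannot reject these attacks: a replayed photograph contains the genuine face and therefore matches well, so only a separate liveness (PAD) gate stops it.

```python
from dataclasses import dataclass

@dataclass
class CaptureResult:
    """Hypothetical scores from a face-matching engine and a PAD engine, in [0, 1]."""
    face_match_score: float  # similarity between the capture and the ID-document face
    liveness_score: float    # confidence that a live person was actually present

MATCH_THRESHOLD = 0.80      # illustrative values only
LIVENESS_THRESHOLD = 0.90

def verify(capture: CaptureResult) -> bool:
    """Accept only if the face matches AND liveness is confirmed.

    A printed photo or screen replay of the document face scores highly on
    face_match_score (it IS the right face), so without the liveness gate
    every such attack would be accepted.
    """
    if capture.liveness_score < LIVENESS_THRESHOLD:
        return False  # presentation attack suspected: reject regardless of the match
    return capture.face_match_score >= MATCH_THRESHOLD

# A replayed still image of the genuine document face: high match, no liveness.
replay_attack = CaptureResult(face_match_score=0.97, liveness_score=0.05)
assert verify(replay_attack) is False
```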
Our experience is that such attacks are mounted with very great intensity and in high volume. When we introduced liveness testing into a biometric flow which previously had none, we were able to observe the previously unrestricted level of fraud: presentation attacks accounted for nearly 10% of all transactions, and all of them succeeded. Once attackers learnt that liveness detection defeated these attacks, attempts fell to under 1% of transactions and none succeeded.
We would like to make the EBA aware that any customer onboarding relying on biometric verification without liveness will suffer a high level of ongoing fraud, regardless of the risk associated with the transaction.
We believe the cost-benefit analysis in clause 3 of section 5.1 is incorrect – in particular the view that “Implementation of liveness detection may be costly, but, by itself, it is not the unique key safeguard for the verification process.” Within the cost of onboarding (today €0.25–€2.50), the cost of liveness is normally an acceptable fraction (€0.01–€1), according to the level of security required. Contrary to the cost-benefit analysis, our experience is that liveness is indeed the unique key safeguard for the biometric component of the verification process.
We further note that current best practice for onboarding identity assurance is embodied in ETSI Standard TS 119 461 v1.1.1 (July 2021). In Section 8.4, liveness (presentation attack detection, PAD) is made mandatory by requirements BIN-8.4.2-02/04/05/06. In particular, requirements BIN-8.4.2-05 and BIN-8.4.2-06 state:
“[CONDITIONAL] BIN-8.4.2-05: If face biometrics is used for binding to applicant, at least one image of sufficient quality for binding to applicant shall be extracted from the video stream.
BIN-8.4.2-06: The video stream capture shall apply PAD measures in compliance with ISO/IEC 30107-3 [3].”
We would suggest that a strong evidential basis would be necessary for the EBA to allow a more relaxed approach than that specified in relevant and applicable ETSI standards, given the high level of risk involved.
To conclude: if biometric verification is used, it must ALWAYS be protected by liveness detection, or it should not be used at all.
Clause 43: this is in direct contradiction with clause 39, because 43(c) requires liveness whether or not biometric verification is used. Furthermore, clause 43 lists only two of the three methods of liveness detection; it should include the third method in widespread use, to avoid embedding a technology preference in the regulation and to align it with (amended) clause 45. The language also needs correction. Clause 43(c) should therefore be amended to read:
“c) perform liveness detection verifications, which may include procedures where a specific action from the customer is used to verify that he/she is present in the communication session and/or it can be based on the introduction of variable elements into the procedure that do not depend on actions by the customer and/or on the analysis of the received data without reliance on an action by the customer;”
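Purely for illustration (a hypothetical sketch of ours, not part of the proposed wording), the three families of liveness detection referred to in the amended clause can be summarised as:

```python
from enum import Enum, auto

class LivenessMethod(Enum):
    """The three families of liveness detection in the amended clause 43(c)."""
    ACTIVE_CHALLENGE = auto()    # a specific action is requested from the customer
    INJECTED_VARIATION = auto()  # variable elements introduced, independent of the customer
    PASSIVE_ANALYSIS = auto()    # received data analysed without any customer action

# Illustrative (hypothetical) techniques within each family.
EXAMPLES = {
    LivenessMethod.ACTIVE_CHALLENGE: ["blink on request", "turn head left"],
    LivenessMethod.INJECTED_VARIATION: ["randomised screen-colour illumination during capture"],
    LivenessMethod.PASSIVE_ANALYSIS: ["texture/moiré analysis", "depth-cue analysis"],
}
```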
Clause 44: the BSI in Germany, amongst others, has for many years pointed out that modern deepfake technology makes a procedure based on the current conditions of clause 44 extremely insecure. At the current state of the art, very little skill and only zero-cost freeware are needed for an impostor to bypass the requirements of clause 44(b), and no human operator can be trained to detect a real-time deepfake. For clarity: this attack involves an impostor superimposing the face of the victim (or of the synthetic identity) shown on the ID document onto their own face, in real time, during the video interview. If the EBA is interested, a demonstration of this attack can be provided. Hence automated liveness checking, resistant to deepfakes, must be included in the procedure. We suggest an additional sub-bullet in clause 44 with the following wording:
“(c) implement means to detect deception techniques based on computerised image manipulation used to alter or replace the face of the remote customer”
Clause 45: this correctly addresses the need for liveness confirmation in a remote interaction. An alternative method – proven by attack testing to be effective and resilient – is to introduce random elements into the interaction which are independent of the customer’s actions. To preserve technological neutrality, this clause should be amended to read:
“45. Where possible, financial sector operators should use remote customer onboarding solutions that include randomness in the sequence of actions to be performed by the customer for verification purposes, and/or random elements introduced into the interaction independent of actions performed by the customer.”
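As a minimal sketch of how such randomness might work end to end (all function names, challenge pools and parameters are our own assumptions): the server draws an unpredictable, single-use challenge and accepts the session only if the observed responses match that specific challenge within a short window, so a pre-recorded or replayed video cannot satisfy it.

```python
import secrets
import time

# Hypothetical challenge pools: actions requested of the customer, and
# variable elements injected by the system independent of the customer.
ACTIONS = ["blink", "turn_left", "turn_right", "smile", "look_up"]
SCREEN_COLOURS = ["red", "green", "blue", "white", "magenta"]

def issue_challenge(n_actions: int = 3, n_colours: int = 4) -> dict:
    """Server side: draw an unpredictable, single-use challenge."""
    return {
        "id": secrets.token_hex(16),
        "issued_at": time.time(),
        "actions": [secrets.choice(ACTIONS) for _ in range(n_actions)],
        "colours": [secrets.choice(SCREEN_COLOURS) for _ in range(n_colours)],
    }

def verify_response(challenge: dict, observed_actions: list[str],
                    observed_colours: list[str], max_age_s: float = 60.0) -> bool:
    """Server side: accept only a fresh response matching THIS challenge.

    A pre-recorded or replayed video was produced before the challenge
    existed, so it cannot contain the correct random sequence.
    """
    if time.time() - challenge["issued_at"] > max_age_s:
        return False  # stale session: reject to prevent replay
    return (observed_actions == challenge["actions"]
            and observed_colours == challenge["colours"])
```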
6. Do you have any comments on the Guideline 4.5 ‘Digital Identities’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
Clause 48: we disagree that eIDAS Level Substantial is sufficient to protect financial operators and the financial system against the level of attack to be expected. Level Substantial requires only the weakest of biometric verifications, if any (“the document appears to relate to the person presenting it”). This renders it extremely vulnerable to impersonation attacks by the kind of highly motivated adversary the EBA measures are intended to deter. We suggest that Level High is the minimum acceptable.

7. Do you have any comments on the Guideline 4.6 ‘Reliance on third parties and outsourcing’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
No comments – strong agreement.

8. Do you have any comments on the Guideline 4.7 ‘ICT and security risk management’? If you do not agree, please set out why you do not agree and if possible, provide evidence of the adverse impact provisions in this section would have.
Clause 64: we agree with this clause. However, the implication is that this requirement is not only necessary but also sufficient for security-sensitive processes. Our experience, and the advice we have received, is that security mechanisms used to ensure the integrity of the software code and the confidentiality and authenticity of data on a multi-purpose device are desirable but insufficient: well-known methods, including reverse engineering and memory injection attacks, will effectively bypass them. It should therefore also be a requirement that mechanisms protecting the authenticity of sensitive data include methods that do not rely on the integrity of the device or its software. We therefore propose that clause 64 be amended to read:

“64. Where a multi-purpose device is used to perform the remote customer onboarding process, a secure environment should be provided for the deployment and usage of the software code on the side of the customer, where applicable. A security mechanism should be used to ensure the integrity of the software code and the confidentiality and authenticity of sensitive data, and this mechanism should include methods that do not rely on the integrity of the device or its software.”
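One family of such device-independent methods can be sketched as follows (an illustrative sketch under our own assumptions; names and thresholds are hypothetical): the server chooses a random illumination sequence for the capture and then verifies, from the raw pixels on the server side, that the scene was actually lit in that order. Because the evidence is extracted server-side from the captured signal itself, the check does not depend on any attestation from a possibly compromised device.

```python
import secrets
import numpy as np

COLOURS = ["red", "green", "blue"]  # maps directly onto the RGB channels

def choose_sequence(length: int = 4) -> list[str]:
    """Server side: pick the random illumination sequence for this session."""
    return [secrets.choice(COLOURS) for _ in range(length)]

def dominant_tint(frame: np.ndarray) -> str:
    """Return which colour channel dominates an RGB frame (H x W x 3)."""
    channel_means = frame.reshape(-1, 3).mean(axis=0)
    return COLOURS[int(np.argmax(channel_means))]

def frames_match_sequence(frame_groups: list[np.ndarray],
                          expected: list[str]) -> bool:
    """Server side: verify from the pixels alone that the capture was lit in
    the server-chosen order. No trust in the client device is required."""
    if len(frame_groups) != len(expected):
        return False
    return all(dominant_tint(f) == colour
               for f, colour in zip(frame_groups, expected))

# Usage sketch: frame_groups would hold frames sampled from the capture while
# each colour was (supposedly) being displayed on the customer's screen.
```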