1. With respect to Article 97(1) (c), are there any additional examples of transactions or actions implying a risk of payment fraud or other abuses that would need to be considered for the RTS? If so, please give details and explain the risks involved.
2. Which examples of possession elements do you consider as appropriate to be used in the context of strong customer authentication, must these have a physical form or can they be data? If so, can you provide details on how it can be ensured that these data can only be controlled by the PSU?
3. Do you consider that in the context of “inherence” elements, behaviour-based characteristics are appropriate to be used in the context of strong customer authentication? If so, can you specify under which conditions?
4. Which challenges do you identify for fulfilling the objectives of strong customer authentication with respect to the independence of the authentication elements used (e.g. for mobile devices)?
The purpose of this submission is to confirm and to extend the validity of the following problem stated as remark 32 in the discussion paper:
“32. EBA has identified that a potential complication for compliance with the requirements of Article 97(2) might be related to the independence of the authentication elements, for example when the PSU makes a purchase or accesses his account on a mobile device which at the same time contains the credential (e.g. as part of the hardware and/or software layer) or is used to receive or retrieve the credential (e.g. via SMS or downloading codes from the cloud). Indeed, in that case a potential compromise of the mobile device itself compromises the reliability of the two authentication elements.”
The problem stated above will be confirmed from a Computer Science point of view. Moreover, it will be extended to Secure Elements and to biometry (=inherence).
The authentication scenario to be discussed is the following: The customer (PSU) accesses his account via password (knowledge element) on the same mobile device which holds a credential (possession element), so that only this device is able to access the account. In the case of SMS this credential is the SIM card/phone number; otherwise it is some credential (data) stored on the mobile device. Note that even if the smartphone contacts the cloud in order to check the possession element, some kind of credential (key, password, device ID) must be stored on this mobile device in order to gain access to the cloud – otherwise any other mobile device could access the cloud for the possession element.
Malware which has infiltrated the operating system (OS) of the mobile device has very powerful rights – unlike an app it can access every process and it can read all data the apps are able to read. Therefore, such powerful malware (“OS trojan”) can tap the knowledge element (password) when it is typed by the PSU into the mobile device, and it can also read the possession element (secret data) which is stored in the storage space of the mobile device.
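The dependence of both elements on the integrity of the OS layer can be illustrated with a minimal sketch (illustrative Python; the class and key names are hypothetical, not any real mobile OS API). Because all keyboard input and all storage reads pass through the OS, a trojan sitting in the OS observes both elements:

```python
# Minimal model of an OS-level trojan: it sees everything the OS mediates,
# so both factors kept on / entered into the device fall together.

class MobileOS:
    def __init__(self):
        # Possession element: secret data stored on the device (hypothetical).
        self.storage = {"possession_credential": "device-secret-123"}
        self.trojan_log = []          # what the OS trojan has tapped so far

    def keyboard_input(self, text: str) -> str:
        # The OS mediates all input: the trojan taps the keystrokes.
        self.trojan_log.append(("knowledge", text))
        return text

    def read_storage(self, key: str) -> str:
        # The OS mediates all storage access: the trojan copies the credential.
        value = self.storage[key]
        self.trojan_log.append(("possession", value))
        return value

os_ = MobileOS()
password = os_.keyboard_input("psu-password")            # PSU types the password
credential = os_.read_storage("possession_credential")   # bank app reads credential

# The trojan now holds BOTH authentication elements:
print(os_.trojan_log)
```

The sketch makes the structural point explicit: neither element reaches the bank app without first passing through the compromised OS layer.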
This means: The knowledge element and the possession element are not independent because if one is broken by an OS trojan, the other is broken, too. But the definition of “strong customer authentication” requires independence: “... the two elements are independent, in that the breach of one does not compromise the reliability of the others”. Therefore, this method does not represent a strong customer authentication. The problem stated in remark 32 is valid.
The existence of such powerful OS trojans for the main smartphone operating systems Android, iOS, and Windows is well-known and is not doubted. Nevertheless, some people claim
“(A) Such powerful malware cannot enter a regular smartphone”, or
“(B) Malware on the smartphone can always be detected by the bank app”.
Claim (A) is contradicted by experience valid to date: Soon after each new version of one of the operating systems named above was published, there has been a way of writing a regular app for that OS version which – when downloaded by the user to his regular smartphone in the regular way – contains a backdoor that lets such a powerful OS trojan sneak into the smartphone OS (“OS infection via regular app”). Via the smartphone's Internet connection the trojan can later secretly contact its master – be it a human or a server – and load more code, communicate, etc. Claim (B) is refuted by the fact that an app on a smartphone has only very limited rights, while the OS, and therefore the OS trojan, has unlimited rights. The powerful OS trojan can fully control and fool the bank app; it does not even have to change the bank app's code. For example, when the running bank app – directly or indirectly – asks the OS “Are you infected?”, the infected OS will simply answer “No, I'm not”.
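The futility of claim (B) can be sketched in a few lines (illustrative Python; `integrity_check` and the message strings are hypothetical names, not a real OS API). Any self-check an app performs is itself answered by the compromised layer:

```python
# Hypothetical model: the bank app asks the OS whether it is infected.
# Since the trojan controls the OS, it also controls the answer.

class InfectedOS:
    def integrity_check(self) -> str:
        # The check runs on the compromised layer: the trojan lies.
        return "not infected"

def bank_app_believes_os_is_clean(mobile_os) -> bool:
    # The app has no vantage point outside the OS; it must trust the reply.
    return mobile_os.integrity_check() == "not infected"

print(bank_app_believes_os_is_clean(InfectedOS()))  # True – the trojan stays hidden
```

The design point is that the app has no trusted path around the OS, so no query routed through the OS can detect an OS-level compromise.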
Fake apps, i.e., malicious apps pretending to be the bank app and which are downloaded by the customer to his smartphone instead of the true bank app, are not only a serious danger in terms of attacks, but their existence also shows that the 2-factor bank app method is not PSD2-compliant: The fake app has access to both elements knowledge and possession and can therefore break both. Again, this means: the two elements are not independent.
The above “OS trojan/dependence” argument extends to the case where the possession element is kept within a Secure Element (SE) on the smartphone, such as a SIM card or a built-in SE.
The SE (for example, SIM-card) stores the secret credential and moreover executes the computation for the confirmation of the possession element, which is usually a challenge/response protocol so that the credential is kept secret. In other words, the OS trojan cannot read/copy the stored credential because the SE will never reveal it. But the OS trojan is able to “secretly abuse the credential” - this means the following:
The OS trojan first taps the password for the bank server when the PSU enters it during some session. Some time later, the trojan secretly initiates an authentication session at the bank server via the smartphone's Internet connection. The trojan pretends to be the PSU and supplies username and password, which are accepted by the server. For the second element, possession, the trojan now executes the “abuse attack”: In order to check the possession element, the server provides a challenge, which the trojan forwards to the SE, asking for a response to the challenge. The SE internally generates the response for the challenge and returns it to the trojan, which forwards the response to the server. The server accepts the response as proof that the possession element is owned by the PSU. Result: The trojan has broken both authentication elements and has secretly entered the PSU's account at the server. Note: (1) Neither the server nor the SE can distinguish the OS trojan from the human PSU; (2) the authentication by the trojan is executed secretly in the background – the human PSU does not notice anything.
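The abuse attack can be sketched as a simple relay (illustrative Python; `SecureElement`, `BankServer`, and the HMAC-based challenge/response are assumptions for the sketch – real SE protocols differ in detail, but the relay structure is the same):

```python
import hmac, hashlib, os

# Hypothetical Secure Element: it never reveals its secret, but it
# answers any challenge/response request coming from the device.
class SecureElement:
    def __init__(self, secret: bytes):
        self._secret = secret                 # stays inside the SE
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# Hypothetical bank server sharing the SE's secret.
class BankServer:
    def __init__(self, password: str, secret: bytes):
        self._password, self._secret = password, secret
    def start_session(self, password: str) -> bytes:
        assert password == self._password     # knowledge element: tapped earlier
        self._challenge = os.urandom(32)      # challenge for the possession element
        return self._challenge
    def verify(self, response: bytes) -> bool:
        expected = hmac.new(self._secret, self._challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

secret = os.urandom(32)
se = SecureElement(secret)
server = BankServer("tapped-password", secret)

# The OS trojan relays: it supplies the tapped password, forwards the
# server's challenge to the SE, and returns the SE's response.
challenge = server.start_session("tapped-password")
response = se.respond(challenge)
print(server.verify(response))   # True: the server accepts the trojan as the PSU
```

Note that the sketch contains no cryptographic weakness: the credential never leaves the SE, yet the trojan succeeds purely by relaying, which is exactly the “abuse” described above.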
This shows that the possession element can be broken by an OS trojan even if it is protected within a Secure Element. Note that “broken” here means “being abused”, not “being stolen” – nevertheless, this “abuse” represents a “breach” in the sense of the definition of “strong customer authentication”.
Both authentication elements knowledge and possession can be broken by the trojan, i.e. they are not independent: if one is broken, the other can be broken, too. Summarizing: Even a Secure Element does not help to solve the problem stated in remark 32, the method is still not a “strong customer authentication”.
There are attempts to mimic the functionality of a Secure Element on the smartphone in software or in hardware-supported software, usually called a Trusted Execution Environment (TEE). But TEEs are not more secure than hardware SEs. This means that at least the same abuse attack as for SEs can be executed against a TEE by an OS trojan (or maybe even a direct attack, in case the OS trojan is able to infiltrate the TEE). In other words, a TEE on the smartphone does not solve the problem stated in remark 32 – the method does not represent a “strong customer authentication”.
Biometry (=inherence) on the smartphone, such as a fingerprint, is easily broken by an OS trojan: In case the biometry – say, the fingerprint – is checked at the server, the trojan keeps an electronic copy of the fingerprint when the PSU authenticates with it for the first time. From then on the trojan can pretend to be the PSU at the server via that copy. In case the fingerprint is checked locally on the smartphone it is even easier: the trojan can pretend to be the PSU by simply falsely telling the server that the fingerprint check was ok. These simple trojan attacks apply analogously to other kinds of biometry on the smartphone, such as self-portraits via camera. Even reactive biometry, such as voice recognition of spoken texts that differ for each single authentication, is potentially breakable by OS trojans (“voice cloning”, known since 2001).
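The local-check case can be illustrated with a minimal sketch (illustrative Python; the message field and function names are hypothetical, not any real banking protocol). The point is structural: when the biometric match happens on the device, all the server ever receives is a claim it cannot verify:

```python
# Hypothetical protocol in which the fingerprint is matched locally
# and the device merely reports the result to the server.

def honest_device(scan_matches_enrolled: bool) -> dict:
    # Legitimate flow: report the real result of the local match.
    return {"biometric_ok": scan_matches_enrolled}

def os_trojan_device() -> dict:
    # Trojan flow: no fingerprint was ever scanned; the trojan just lies.
    return {"biometric_ok": True}

def server_accepts(message: dict) -> bool:
    # The server cannot re-check the fingerprint; it can only trust the device.
    return message["biometric_ok"]

print(server_accepts(honest_device(False)))  # False: the fingerprint did not match
print(server_accepts(os_trojan_device()))    # True: the trojan forged the verdict
```

A locally checked inherence element therefore reduces, from the server's perspective, to a single boolean controlled by whatever software runs on the device.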
Therefore, if one of the two elements knowledge and possession is replaced by an inherence element (biometry) such as a fingerprint, the problem stated in remark 32 remains valid: The method does not represent a “strong customer authentication”, because the two elements are not independent – if one is broken by the OS trojan, the other can be broken, too.
Even if all three elements knowledge, possession, and inherence are put together to form a 3-factor authentication on the smartphone, and moreover the possession element is protected within a Secure Element: This does not represent a “strong customer authentication” method, because no two of the three elements are independent.
The argumentation above holds in particular for SMS-TAN for Mobile Banking (the SMS is sent to the same device): this authentication does not represent a strong customer authentication. Note that in Germany, for example, SMS-TAN for Mobile Banking has not been allowed since 2008.
Appendix. Definition of “strong customer authentication” from the PSD2 document
‘strong customer authentication’ means an authentication based on the use of two or more elements categorised as knowledge (something only the user knows), possession (something only the user possesses) and inherence (something the user is) that are independent, in that the breach of one does not compromise the reliability of the others, and is designed in such a way as to protect the confidentiality of the authentication data;
5. Which challenges do you identify for fulfilling the objectives of strong customer authentication with respect to dynamic linking?
6. In your view, which solutions for mobile devices fulfil both the objective of independence and dynamic linking already today?
7. Do you consider the clarifications suggested regarding the potential exemptions to strong customer authentication, to be useful?
8. Are there any other factors the EBA should consider when deciding on the exemptions applicable to the forthcoming regulatory technical standards?
9. Are there any other criteria or circumstances which the EBA should consider with respect to transaction risks analysis as a complement or alternative to the criteria identified in paragraph 45?
10. Do you consider the clarification suggested regarding the protection of users personalised security credentials to be useful?
11. What other risks with regard to the protection of users’ personalised security credentials do you identify?
12. Have you identified innovative solutions for the enrolment process that the EBA should consider which guarantee the confidentiality, integrity and secure transmission (e.g. physical or electronic delivery) of the users’ personalised security credentials?
13. Can you identify alternatives to certification or evaluation by third parties of technical components or devices hosting payment solutions, to ensure that communication channels and technical components hosting, providing access to or transmitting the personalised security credential are sufficiently resistant to tampering and unauthorized access?
14. Can you indicate the segment of the payment chain in which risks to the confidentiality, integrity of users’ personalised security credentials are most likely to occur at present and in the foreseeable future?
15. For each of the topics identified under paragraph 63 above (a to f), do you consider the clarifications provided to be comprehensive and suitable? If not, why not?
16. For each agreed clarification suggested above on which you agree, what should they contain in your view in order to achieve an appropriate balance between harmonisation, innovation while preventing too divergent practical implementations by ASPSPs of the future requirements?
17. In your opinion, is there any standards (existing or in development) outlining aspects that could be common and open, which would be especially suitable for the purpose of ensuring secure communications as well as for the appropriate identification of PSPs taking into consideration the privacy dimension?
18. How would these requirement for common and open standards need to be designed and maintained to ensure that these are able to securely integrate other innovative business models than the one explicitly mentioned under article 66 and 67 (e.g. issuing of own credentials by the AIS/PIS)?
19. Do you agree that the e-IDAS regulation could be considered as a possible solution for facilitating the strong customer authentication, protecting the confidentiality and the integrity of the payment service users’ personalised security credentials as well as for common and secure open standards of communication for the purpose of identification, authentication, notification, and information? If yes, please explain how. If no, please explain why.
20. Do you think in particular that the use of “qualified trust services” under e-IDAS regulation could address the risks related to the confidentiality, integrity and availability of PSCs between AIS, PIS providers and ASPSPs? If yes, please identify which services and explain how. If no, please explain why.