
We agree in general terms, but we must raise points on the following details:

It is stated that “comparison of availability and performance should be at channel level i.e. the channel chosen by the client” (Recital 24) and that API availability and performance are to be compared with the best performing PSU interface. We do not consider this an appropriate comparison, as different interfaces may provide different access scopes (e.g. mobile app vs. online banking: the former might allow only limited access to the account and to a limited set of transaction types considered more appropriate for that user, compared with the full online banking interface). Users can access account information or initiate payments through a web user interface, a banking app and/or a wallet app, each having different performance metrics. In addition, the availability of interfaces may differ according to customer target groups and to specific service levels defined even on an individual basis. The comparison should therefore be made with the equivalent customer interface.

Also, some modifications are to be considered in order to improve the practical implementation of the Guidelines:
• There should be some margin, even a small one, such as 0.01 percentage points, for differences in availability and performance between the best performing PSU interface and the dedicated interface;

• The service level should be measured as an average over a longer time period, such as a month, rather than over each 24-hour period. The reference time period is important to ensure that the dedicated interface is held to a high standard while any “underperformance” is not triggered inappropriately or unnecessarily, particularly taking into account the reality of unexpected downtime. Unexpected downtime could affect the dedicated and user interfaces at different moments (i.e. in different 24-hour periods). In such a case the dedicated interface could register “underperforming” metrics in a single 24-hour period even though it might significantly exceed the user interface service levels over a longer time horizon. The use of a monthly average in Article 2.4(a) of the Guidelines (instead of 24 hours) would maintain a high threshold for the dedicated interface while reducing the number of “false positives”.
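To make the “false positive” point concrete, the following is a minimal, purely illustrative Python sketch (all figures and variable names are invented, not taken from the Guidelines): a single day of unexpected downtime flags the dedicated interface as underperforming on a 24-hour basis, even though its monthly average remains above the user interface's service level.

```python
from statistics import mean

# Hypothetical daily availability figures (%) for a 30-day month; the
# dip on day 10 represents a single period of unexpected downtime.
dedicated = [99.9] * 30
dedicated[9] = 97.0          # unexpected outage on the dedicated interface
user_iface = [99.5] * 30     # best performing PSU interface, for comparison

# Per-24-hour comparison: the one bad day triggers an
# "underperformance" flag ...
daily_flags = [d < u for d, u in zip(dedicated, user_iface)]
print(sum(daily_flags))      # -> 1 day flagged

# ... even though the monthly average of the dedicated interface
# still exceeds the user interface service level.
print(mean(dedicated) > mean(user_iface))  # -> True
```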

Furthermore, we do not agree with the publication of service level information for all other ASPSPs’ user interfaces. This information is commercially sensitive and should not be publicly available; from our point of view, it is outside the scope of PSD2. What is more, “down” is not clearly defined for the user interface, and there are likely to be substantial differences in interpretation among ASPSPs, which could generate confusion and inappropriate comparisons. For example, some ASPSPs might consider their user interface “down” if one particular functionality is not working, while others may only define their interface as down if the entire system is offline.
As a result, ASPSPs should only be obliged to provide service level information on their dedicated interface to their Competent Authority, together with the reporting of the other relevant service levels, permitting Authorities to confirm their compliance with the RTS and the Guidelines.

Concerning the calculation of KPIs, a clear distinction between planned and unplanned downtime is welcome. The former should not be considered in the performance comparison.
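As an illustration of this distinction, the following hypothetical sketch (the function name and figures are our own, not drawn from the Guidelines) computes an availability KPI with planned maintenance excluded from the measurement base, so that only unplanned downtime affects the comparison:

```python
def availability_pct(total_minutes: int,
                     unplanned_down: int,
                     planned_down: int) -> float:
    """Uptime as a share of the period, with planned maintenance
    removed from both the downtime and the measurement base."""
    measurable = total_minutes - planned_down
    return 100.0 * (measurable - unplanned_down) / measurable

# A 30-day month: 43,200 minutes in total, 120 minutes of planned
# maintenance, 30 minutes of unplanned outage.
print(round(availability_pct(43_200, 30, 120), 3))  # -> 99.93
```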
Yes, including the decision not to explicitly require testing, such as security and penetration testing, that is already part of an IT assessment.
Yes. Nevertheless, the Guidelines should include references to the level of market activity, market intelligence and user complaints to be used for the supervisory activity of CAs.
Yes, we broadly agree with the EBA’s proposal. However, we suggest that the authentication terminology in the Guidelines be reviewed in order to improve clarity and ensure consistency with the EBA Opinion and, more importantly, with PSD2 and the RTS on SCA and CSC.

Guideline 5.1 establishes that ASPSPs should provide Competent Authorities with “(a) a summary of the methods of access chosen by the ASPSP” and, “(b) where the ASPSP has put in place only one method of access, an explanation of the reasons why this method of access is not an obstacle as referred to in Article 32(3) of the RTS and how this method of access supports all authentication methods provided by the ASPSP to its PSU.”

When read in conjunction with the EBA Opinion (paragraphs 48 to 50), we understand that “methods of access” in part (a) refers to “redirection, embedded approaches and decoupled approaches (or a combination thereof)”. We would suggest using similar language in the Guidelines as in the Opinion, e.g. referring to “methods for carrying out the authentication procedure”.
Also, for consistency purposes, we suggest using in Guideline 5.1(b) the same language as in the RTS on SCA and CSC when referring to obstacles (Article 32(3)), as follows:
“(b) where the ASPSP has put in place only one method of access, an explanation of the reasons why this method of access is not an obstacle as referred to in Article 32(3) of the RTS and how this method allows the use by payment service providers referred to in Article 30(1) of the credentials issued by account servicing payment service providers to their customers.”
We agree with the EBA’s assessments for design and testing. However, we consider that it should be explicitly stated that testing should focus on the functionalities and connectivity that TPPs need to test their own solutions, and not on performance, since testing is performed in a dedicated test environment, which differs from the live environment in terms of service level and performance.

In addition, we understand the need for flexibility to ensure that firms with a pending authorisation are able to gain access to the technical specifications, provided that they can prove that an application has been received by the relevant CA. However, we do not think this should imply that ASPSPs must make the testing facilities available before authorisation is granted, considering the resources that testing requires on both sides; these could be lost if the authorisation is refused. Even the availability of the technical specifications before authorisation should be reconsidered, in order to avoid unnecessary dissemination of information that could facilitate fraudulent behaviour or attempts by non-authorised PSPs to access customers’ payment accounts.
We completely agree on the need to streamline the process during the transition period and on the need to inform the EBA of negative responses (i.e. decisions by a Competent Authority not to grant the exemption). Nevertheless, from our point of view, the mechanism for appealing a negative decision should also be described and harmonised in the Guidelines.

In addition, it would be helpful to clarify whether a PSP that is present in more than one country should only request an exemption from its home country authority (which would then be valid for all countries) or should request an exemption from each host country authority where it provides APIs.
Our only concern relates to the process and the envisaged timelines for ASPSPs to meet the requirements and be granted an exemption, even prior to the deadline, if PSD2 has not been transposed in their jurisdiction and no Competent Authority has been designated.
Concerning monitoring, we consider that the Guidelines should include references to the level of market activity, market intelligence and user complaints to be used for the supervisory activity of CAs (see question nº 3).