We agree that uptime and downtime are the right KPIs for measuring the interface’s availability and performance, as required under Guideline 2, and we further agree that the proposal in Guideline 3 relating to the publication of indicators is appropriate.
With respect to specific targets for the KPIs, we note and understand paragraph 23 in the Background and Rationale, which explains why no specific numeric targets have been set. In the absence of such targets, it remains important that firms understand the levels of performance that will be considered sufficient for the purposes of meeting the objectives of Guideline 2, not least so that, as we collect this data, we can identify and proactively remediate any issues. In this respect, we also note and agree with the statement in paragraph 2.1 of the draft Guidelines that an ASPSP should have in place the same service level objectives for the dedicated interface as for the PSU interface. In this context, we propose that the KPIs for the PSU interface should also constitute the benchmark targets for the dedicated interface, and that results that are materially similar across the two interfaces should be considered to have met the KPI test. This would create an internal target for firms that is consistent with the aims of PSD2.
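To illustrate the "materially similar" benchmark we propose, the following is a minimal sketch of how such a comparison could be automated. The KPI names, figures, and the 5% relative tolerance are all hypothetical assumptions for illustration; nothing here is prescribed by the draft Guidelines.

```python
# Hypothetical materiality tolerance: a 5% relative difference between the
# dedicated-interface result and the PSU-interface benchmark. This figure is
# an assumption for illustration, not a value from the Guidelines.
MATERIALITY_TOLERANCE = 0.05

def materially_similar(dedicated: float, psu: float,
                       tolerance: float = MATERIALITY_TOLERANCE) -> bool:
    """Return True if the dedicated-interface result falls within the
    tolerance of the PSU-interface benchmark."""
    if psu == 0:
        return dedicated == 0
    return abs(dedicated - psu) / abs(psu) <= tolerance

# Example KPI results (entirely made-up figures).
kpis = {
    "uptime_pct":      {"dedicated": 99.2, "psu": 99.5},
    "avg_response_ms": {"dedicated": 310.0, "psu": 300.0},
}

for name, results in kpis.items():
    ok = materially_similar(results["dedicated"], results["psu"])
    print(f"{name}: {'meets' if ok else 'misses'} the PSU benchmark")
```

A firm could run such a check on each reporting cycle, treating the PSU interface's own results as the moving benchmark rather than a fixed numeric target.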
It is important that the requirements for calculating and representing the KPIs in Guidelines 2.3 and 2.4 take account of two additional variables.
First, as with any interface where there are upper bounds on capacity, there are likely to be differences between peak and off-peak periods on the key metric of response time. In order to capture the measure of performance that is most valuable to consumers and authorities – i.e., performance when the interface is in greatest use – ASPSPs should report their performance at peak periods only. As expressed above, we believe these peak-period KPIs should be materially similar across both the dedicated and PSU interfaces.
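Peak-period-only reporting could be implemented by filtering raw measurements against a defined peak window before computing the KPI. In the sketch below, the peak window (weekdays, 08:00–20:00) and the sample data are hypothetical assumptions chosen purely for illustration.

```python
from datetime import datetime

def is_peak(ts: datetime) -> bool:
    """Hypothetical peak definition: weekdays between 08:00 and 20:00."""
    return ts.weekday() < 5 and 8 <= ts.hour < 20

def peak_average_response_ms(samples):
    """samples: iterable of (timestamp, response_time_ms) pairs.
    Only measurements taken during the peak window contribute to the KPI."""
    peak = [ms for ts, ms in samples if is_peak(ts)]
    return sum(peak) / len(peak) if peak else None

# Made-up measurements: only the weekday-morning sample falls in the window.
samples = [
    (datetime(2018, 6, 4, 9, 30), 320.0),   # Monday morning (peak)
    (datetime(2018, 6, 4, 23, 10), 150.0),  # Monday night (off-peak)
    (datetime(2018, 6, 9, 12, 0), 280.0),   # Saturday (off-peak)
]
print(peak_average_response_ms(samples))  # → 320.0
```

The point of the filter is that fast off-peak responses cannot mask slow performance at the times when the interface is in greatest use.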
Second, we note that some downtime may be caused by external internet and related connectivity infrastructure, and is therefore outside an ASPSP’s direct control. If this is reported as “unplanned downtime”, it could give a misleading impression that the ASPSP interface is not sufficiently robust. Similarly, the user of an interface could be located in a jurisdiction or location with lower connectivity performance or requirements, which could have an impact on response times. To address these concerns about the potential negative skewing of performance results, we recommend that downtime related to the performance of external systems should not be captured as “unplanned” downtime. This may require further guidance as to what reasonably constitutes an external issue outside the control of the ASPSP. We further recommend that ASPSPs should measure and report their KPIs on the basis of their own performance, in a way that does not take into account any given user’s connectivity or IT constraints. This would entail measuring the “time taken” in Guidelines 2.3(a), (b), and (c) as the amount of time it takes an ASPSP to send the information out, excluding the time it takes for that information to be received.
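The server-side measurement we describe can be sketched as follows: the clock starts when the request is received and stops when the response is handed to the network stack, so a user's own connectivity cannot affect the reported figure. The `handle_request` function below is a hypothetical placeholder for an ASPSP's actual processing logic.

```python
import time

def handle_request(payload: str) -> str:
    # Placeholder for actual account-information processing.
    return payload.upper()

def timed_handle(payload: str):
    """Measure 'time taken' server-side only: from receipt of the request
    to the moment the response is passed onward for transmission."""
    start = time.perf_counter()
    response = handle_request(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    # The clock stops here; network transit to the user is deliberately
    # excluded from the KPI, per the approach described above.
    return response, elapsed_ms

response, elapsed_ms = timed_handle("balance query")
print(f"sent response in {elapsed_ms:.2f} ms (server-side only)")
```

A monotonic clock such as `time.perf_counter` is used because wall-clock adjustments would otherwise distort short intervals.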
A standardised template would be helpful for the purposes of organising and submitting this information.
We agree with the EBA’s assessments in paragraphs 26-30 in the Background and Rationale that stress testing should only be in relation to the dedicated interface, and we further agree that those tests should be with respect to heavy loads, high concurrency, and requests for large volumes of data.
Guideline 4.2 asks firms to provide a summary of the results of several discrete stress tests, but as currently drafted the Guideline uses generalised and imprecise language to describe what those tests should be demonstrating. For instance, our stress tests must demonstrate an ability to support access by ‘multiple firms’; to deal with ‘unusually high numbers of requests’; to support the use of an ‘extremely high number of concurrent sessions’; and to manage requests for ‘large volumes of data’. While our systems will be built to accommodate each of these scenarios in a qualitative sense, we are unsure as to the meaning of implicit thresholds such as ‘multiple’, ‘unusually high’, etc. That said, we also recognise that it may be difficult or inappropriate to allocate specific numeric thresholds for each of these tests, as what is necessary is very likely to vary across firms and/or markets.
For that reason, and with reference to our response to Q1, we believe it would be appropriate to benchmark our stress test results for the dedicated interface against the capacity of our existing PSU interface. This would allow firms to plan for, and build towards, usage and capacity demands that are consistent with their expectations of the market and their experience.
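By way of illustration, a concurrency stress test of the kind contemplated by Guideline 4.2 can be sketched with a thread pool issuing simultaneous requests. The session count and the `fake_request` stand-in are assumptions for illustration only; a real test would target the dedicated interface's testing facility, with the concurrency level benchmarked against observed PSU-interface load as we propose above.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fake_request(i: int) -> float:
    """Placeholder for one API call; returns simulated latency in ms."""
    start = time.perf_counter()
    time.sleep(0.001)  # stand-in for network and processing time
    return (time.perf_counter() - start) * 1000.0

# Hypothetical "extremely high number of concurrent sessions".
CONCURRENT_SESSIONS = 50

# Issue all requests simultaneously and collect per-request latencies.
with ThreadPoolExecutor(max_workers=CONCURRENT_SESSIONS) as pool:
    latencies = list(pool.map(fake_request, range(CONCURRENT_SESSIONS)))

print(f"max latency under load: {max(latencies):.1f} ms")
```

The pass criterion would then be relative rather than absolute: latency under load on the dedicated interface should remain materially similar to that observed on the PSU interface at comparable concurrency.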
As per our response to Q1, a standardised template would be helpful for the purposes of organising and submitting this information.
We generally agree with the EBA’s assessment on obstacles, and with the requirement that ASPSPs describe to their competent authority the steps they have taken to avoid the imposition of obstacles for third parties; however, we have concerns about the drafting of Guidelines 5.2(b) and 5.2(d).
Guideline 5.2(b) requires ASPSPs to confirm that they do not require payment initiation service providers (PISPs), account information service providers (AISPs), and card-based payment instrument issuers (CBPIIs) to comply with any additional requirements ‘that are not equally imposed on all other types of PSPs’. We believe this requirement to treat all third parties identically is inconsistent with the intent of PSD2. Access to payment systems must be offered on objective, non-discriminatory, and proportionate grounds, per Article 35 of the Directive. This is an important principle, often referred to as the ‘POND’ principle. It is reflected in the approach that is being implemented by the UK authorities, particularly the requirement that entire categories of PSPs are not treated in a uniform way and that the terms of access for individual PSPs are assessed on a case-by-case basis. We emphasise, however, that the POND principle does not require identical treatment of all third-party PSPs. Article 35 allows terms of access that take account of settlement risk, operational risk, and business risk, and protect overall system stability. We expect to be able to make those assessments on a case-by-case basis, in a manner consistent with the POND principle, but also recognising that certain entities may a) present different levels of risk; and b) be domiciled in jurisdictions where certain additional requirements are necessary to meet local legal and compliance obligations. As drafted, Guideline 5.2(b) would not allow ASPSPs to apply reasonable differences across the terms of access for different PSPs that reflect the specific operational or business models, risk profiles, or local requirements relevant to each PSP. Furthermore, we would not be in a position to make the confirmation requested without having first performed extensive jurisdiction-by-jurisdiction due diligence.
We therefore recommend that 5.2(b) is redrafted to reflect the POND principle as contained in Article 35 of PSD2.
More generally, we strongly prefer that ASPSPs be required to demonstrate that they have “taken all reasonable steps” to ensure no obstacles are put in place, rather than “provide a confirmation” as is currently required by Guideline 5.2. While a confirmation is not particularly problematic for the information required under Guidelines 5.2(a), (b), and (c), it is difficult for the purposes of 5.2(d) to “confirm” the absence of unnecessary delays, frictions, or other attributes that could dissuade users from using PISPs, AISPs, and CBPIIs. We certainly do not intend to impose any obstacles of this kind, but we are concerned that a user might perceive certain processes, steps, or design features of the interface that are permitted under PSD2 as delays or frictions, without our knowledge or intent. As drafted, 5.2(d) depends on a third party’s subjective assessment that an ASPSP cannot reasonably attest to. In fact, 5.2(d) can be read as a requirement to build an interface that is perfect from a commercial perspective, i.e., one that achieves complete customer satisfaction. This would make it difficult for us to attest to 5.2(d) in the manner described in Guideline 5. Conversely, it would not be problematic for us to describe the reasonable steps we have taken to avoid such delays or frictions. Another solution to this challenge would be to delete 5.2(d) entirely.
As per our response to Q1 and Q2, a standardised template would be helpful for the purposes of organising and submitting the information required in Guidelines 5.1 and 5.2.
As per our response to Q1, Q2, and Q4, a standardised template would be helpful for the purposes of organising and submitting the information required in Guidelines 6.3, 6.4, and 6.5.
With reference to paragraphs 54-59 in the Background and Rationale, we appreciate the EBA’s effort to arrive at a practical interpretation of the requirement that the interface is being ‘widely used’ for the purposes of Guideline 7. Nevertheless, we are concerned that the test in Guideline 7 continues to rely on an unspecified numeric threshold which, where it is not met, must be compensated for with evidence that the ASPSP attempted to secure wide use.
As the EBA notes in paragraph 58 of the Background and Rationale, there is no obligation on AISPs, PISPs, or CBPIIs to undertake testing, and it is not clear at this juncture how many firms will seek to participate in the testing facility that we make available. It is therefore difficult to see how ASPSPs can rely on meeting the numeric threshold test in Guideline 7.1.
In recognition of that challenge, the EBA has proposed a second test. However, with respect to the requirement that ASPSPs demonstrate the availability of the testing facility and related efforts to broadcast that availability to the market, the EBA appears to be of the view that ASPSPs should a) seek out and b) accommodate a large number of firms within the testing facility. In our view, it is more appropriate to select a small number of reliable and engaged partners for the testing phase. This not only reduces the overall administrative and operational burden that ASPSPs will face while planning and conducting the testing, but also reduces the level of operational and security risk during testing. More important still, we anticipate that a small, focused group of participants in the testing facility will lead to better dialogue and engagement, i.e., a better test. In our view, this optimal number of testing partners is between 5 and 7.
As a result, we would endorse an alternative interpretation of ‘widely used’ that is more appropriate in the context of a systems test, namely, a wide cross-section of use cases. It is important that the interface is tested according to different types of use, i.e., different kinds of entities connecting to it, for different purposes, with different commercial goals and operational models. This will ensure that the interface is relevant and robust to a significant cross-section of market activities and potential uses. Such a criterion – that we have tested across a wide range of use cases – would also complement rather than mirror the requirement in Guideline 4 relating to stress testing. As drafted, Guidelines 4 and 7 both implicitly require an ASPSP to demonstrate high levels of activity, or at least an attempt to secure high levels of activity. Introducing more use cases into the testing phase will provide additional information that complements rather than repeats Guideline 4’s requirement that the system is robust to high levels of activity: namely, information showing that the interface is widely relevant, designed for a wide range of uses, and likely to be widely used going forward.
We agree that the information identified in Guidelines 8.1(a) and (b) should be shared with the competent authority as part of the exemption application process. On an ongoing basis, we will report any material breaches if and when they occur.
As per our response to Q1, Q2, Q4 and Q5, a standardised template would be helpful for the purposes of organising and submitting this information.
We support the use of templates in this process as a general matter, although we do not have specific views on whether this Assessment Form is appropriate for use by competent authorities.
The EBA notes in the Background and Rationale that it is important to avoid undue delay in this assessment process, given the compressed timelines faced by firms and authorities alike. We agree. Per our responses to Q1, Q2, Q4, and Q7, we believe that it would be helpful for firms to be provided with templates such as the one contained in the Annex, so that we can organise and submit the required information in a way that will allow for efficient processing, and help avoid time-consuming reformatting and resubmission efforts.
It is important that ASPSPs have transparency and certainty on the status of their exemptions. Guideline 9.1 allows competent authorities to make a decision one month after they have submitted the Assessment Form to the EBA for consultation, without necessarily having received a formal response from the EBA. We strongly support this dispensation for competent authorities to move forward to grant exemptions, and similarly support the time-limited derogation contained in Guideline 9.3. That said, we would appreciate confirmation from the EBA about how and whether it intends to subsequently respond to competent authority notifications, i.e., after exemptions have been granted. We are particularly concerned that the EBA might give feedback resulting in the exemption being removed without any change in the ASPSP’s performance against the Guidelines. We do not believe that an exemption should be revoked after the one-month window has expired and a competent authority has granted it, in the absence of a material change in the ASPSP’s performance against the Guidelines.
It is clear that the EBA is aware of the compressed timelines that both ASPSPs and competent authorities are working towards, and that it is important for both applicants and competent authorities that these guidelines are finalised as quickly as practically possible. Per our response to Q8, we believe that templates and standardised forms will improve the efficiency of this process.
We believe more details or clarification with respect to certain elements are necessary to avoid incomplete or unsatisfactory applications, and we have flagged those in our response to Questions 1 through 9.
The EBA has noted that it will provide further guidance on the issue of revocations of exemptions where conditions are not met “for more than two consecutive calendar weeks”. This is an important planning consideration for ASPSPs, and we look forward to receiving timely guidance on this matter.