Special topic – Artificial intelligence

Broad and diverse adoption of AI in the EU/EEA banking sector

Over the past decade, the EU/EEA banking sector has undergone a profound digital transformation, embracing a broad spectrum of advanced technologies to enhance operational efficiency and customer experience. Among these technologies, AI, cloud computing, digital wallets, big data analytics, and biometrics have become increasingly prevalent, with most banks integrating them into their operations for at least the past 5 years. These technologies have played a pivotal role in reshaping banking processes, enabling more personalised services, optimising risk management, and improving decision-making processes. However, while their adoption is widespread, other innovations, such as DLT, smart contracts, and those underpinning tokenisation projects, have experienced a slower uptake (Figure 73).

Source: EBA Risk Assessment Questionnaire


Considering the provisions of the EU AI Act[1], references to AI encompass a broad range of machine-based systems designed to operate with varying levels of autonomy. AI systems are generally assumed to exhibit adaptiveness after deployment and to generate outputs from input data. The adoption of AI has consolidated significantly within the EU banking sector. Over the past 5 years, there has been a consistent upward trend in the deployment of AI, driven by its potential to impact various aspects of banking. According to the EBA’s RAQ (spring 2024), most EU banks are using AI methods such as regression analysis, decision trees, natural language processing and neural networks.

Regarding use cases, AI is most frequently employed in areas such as client and transaction profiling (for commercial purposes) and customer support. These use cases are consistent with trends observed in previous years' RAQs, highlighting AI's role in enhancing the accuracy and efficiency of customer segmentation and improving the responsiveness of customer service channels. Additionally, AI plays a role in fraud and AML/CFT efforts, where it is used to analyse vast amounts of data and detect patterns indicative of illicit activities. Beyond these prevalent applications, AI is also increasingly used to optimise internal processes within banks, in credit scoring and creditworthiness assessments, and in regulatory credit risk modelling. While most EU banks are leveraging AI in these areas, there are other use cases where AI adoption is less widespread, such as supervisory reporting, monitoring conduct risk, real estate valuation or carbon footprint estimation (Figure 73).

Despite the benefits AI may bring to the EU banking sector, its adoption is accompanied by a range of challenges and risks that demand careful management and rigorous testing. Banks face competitive pressures from global financial institutions and technology companies to innovate rapidly around AI. However, many have taken a more measured approach, gradually developing their AI systems over recent years. This cautious approach is likely driven by the need to comply with existing horizontal and sectoral legislation, and by a commitment to developing AI ethically and responsibly, in line with regulatory standards and public expectations. Additionally, by developing AI systems in-house or retaining control over key components, banks can reduce their business and technical dependencies on third-party providers. This strategy not only enhances control over AI systems but also addresses some of the core challenges AI poses to the EU banking sector, such as data privacy, cybersecurity, and compliance with regulatory frameworks. In that sense, responses to the EBA’s RAQ indicate that the recent surge in AI adoption across EU banks may reflect a shift out of the development and testing phases: by 2024, only a small number of banks remained in the pilot testing phase of their AI initiatives, compared to 2022. Most initiatives have moved beyond pilot stages, and AI is now being fully integrated into banks’ IT infrastructure (Figure 74).

Source: EBA Risk Assessment Questionnaire

GPAI: rising interest, experimentation and early adoption

One of the most recent breakthroughs in AI is the rapidly increasing interest in and experimentation with GPAI, which is also being observed in the EU banking sector. According to the EU AI Act, GPAI is AI that has the capability to serve a variety of purposes. One of the main components of GPAI systems is GPAI models (or foundation models), which are trained on large amounts of data using self-supervision at scale. One of the most popular applications of GPAI is Generative AI, a subset of GPAI referring specifically to systems that can create new content such as text, images, audio or video.

Box 11: The EBA’s role in GPAI

The EBA has a statutory duty to monitor and assess market developments, including financial innovation, and to achieve a coordinated approach to the regulatory and supervisory treatment of new or innovative financial activities. In accordance with this mandate and the EBA’s priorities on innovative applications for 2024/2025[2], which include AI/machine learning, including GPAI, the EBA is deepening its monitoring and analysis of the uses of GPAI by EU banks. Since September 2023, the EBA has collected data on the testing and use of GPAI by EU banks via the EBA’s RAQ. The results have provided the EBA with detailed insights into the levels of adoption and uses of GPAI in the EU banking sector.

Additionally, in April 2024 the EBA organised a Workshop on GPAI in the banking sector, which included the participation of a range of EU stakeholders representing banks, consumer organisations and technology providers, in addition to relevant EU and national competent authorities. The workshop aimed to promote a common understanding of the uses, risks and opportunities associated with GPAI in the banking sector. During the workshop, the EBA found that the use of GPAI is still rather limited, with activity mainly focused on testing a small number of use cases. The EBA also found that the banking sector has limited tools to fully mitigate consumer protection concerns associated with GPAI, which justifies the cautious and gradual approach to the adoption of this technology.

While GPAI activity is still largely limited to testing, experimenting and piloting, the EBA RAQ data shows an increase in its use by banks since 2023. Approximately one-third of respondent EU banks have already implemented GPAI in at least one use case (Figure 75), highlighting the sector's growing recognition of its potential to drive efficiencies, enhance customer interactions, and generate new business insights. This rapid adoption reflects the banking sector's focus on leveraging cutting-edge technologies to remain competitive in a digital landscape. However, the increasing reliance on AI, and especially GPAI, may augment existing challenges and risks and bring new ones. This merits careful consideration and robust risk management strategies.

Consequently, the EBA is actively monitoring and assessing the vulnerabilities associated with a potential wider adoption of GPAI in the EU banking sector (see Box 11 above for more on the EBA’s role and activities on GPAI). The EBA’s objectives are to ensure that consumers continue to benefit from high-level standards of protection and that the supervisory and regulatory framework remains fit for purpose and harmonised across the EU, also taking account of the EU’s new AI Act.

Adoption of GPAI in the EU banking sector

According to the EBA’s RAQ responses, around 40% of EU banks are already using GPAI, with adoption reaching significant levels mainly in the areas of customer support and the optimisation of internal processes (Figure 75). For instance, banks are engaging in the following non-exhaustive set of use cases:

  • Customer service (internal and external): GPAI can help a bank improve the resolution of customer queries via chatbots, including for employees’ questions about internal policies, procedures or allowances.
  • Call centres: GPAI can improve the transcription and summarisation of contact centre audio calls into text and the assessment of the quality and outcome of interactions.
  • Programming and coding: GPAI can improve a bank’s technology and operations units by generating new code from natural language, detecting code errors, converting code from one programming language into another or helping migrate legacy code.
  • Legal analysis: GPAI can help a bank’s legal unit improve the monitoring of legal and regulatory changes, the summarisation of court rulings, the analysis of the impact of contractual clauses and the proposal of new clauses and conditions.
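
The internal customer service use case above can be illustrated with a minimal retrieval-grounded sketch. All names below are hypothetical, and `generate_answer` is a stub standing in for a call to a GPAI model; a production assistant would retrieve policy passages from a document store (e.g. via embedding search) and pass them to the model as grounding context.

```python
# Minimal sketch of a retrieval-grounded internal-policy assistant.
# Hypothetical example: the policy base and function names are
# illustrative, and the "generator" is a stub, not a real GPAI model.

POLICY_SNIPPETS = {
    "remote work": "Employees may work remotely up to three days per week.",
    "travel allowance": "Travel is reimbursed against receipts within 30 days.",
}

def retrieve(question: str) -> list[str]:
    """Keyword retrieval over the policy base; a real system would use
    embedding search over an indexed document store."""
    q = question.lower()
    return [text for topic, text in POLICY_SNIPPETS.items() if topic in q]

def generate_answer(question: str, context: list[str]) -> str:
    """Stub generator: a production system would pass the retrieved
    context to a GPAI model and return its grounded completion."""
    if not context:
        return "No matching policy found; please contact HR."
    return " ".join(context)

question = "What is the remote work policy?"
answer = generate_answer(question, retrieve(question))
print(answer)
```

Grounding answers in retrieved internal documents, rather than relying on the model's training data alone, is one way banks can keep chatbot responses anchored to current policies.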
Figure 75: Proportion of EU banks using GPAI, per use case, autumn 2024

Source: EBA Risk Assessment Questionnaire

Overall, the adoption of GPAI in the EU banking sector is still at an early stage, with banks mostly testing and experimenting with GPAI via proofs of concept or a sandbox approach. According to the EBA’s RAQ, around 10% of EU banks are already testing the use of GPAI for many other use cases, such as those related to AML/CFT and to the profiling and clustering of clients and transactions. Even in the domain of customer support, many banks are still testing and experimenting with GPAI before actively using it (Figure 76).

Figure 76: Proportion of banks testing GPAI, but still not using it in production, per use case, autumn 2024

Source: EBA Risk Assessment Questionnaire

Diverse approaches for the integration of GPAI systems and models

Banks are exploring and testing various deployment approaches to integrate AI and GPAI systems or models into their technical infrastructure. Notably, banks are assessing the potential of GPAI by testing use cases, or already integrating GPAI, following different approaches, such as developing the models or systems themselves, outsourcing their development, or using models or systems developed by third parties and deploying them either via Cloud application programming interfaces (APIs)[3] or ‘on-premises’[4]. The choice of approach depends on factors such as scalability, cost, control over data security and compliance, the need for internal resources, and in-house skills and expertise. When choosing the approach, banks also vary between relying on a single provider or multiple providers, and between proprietary and open-source models. Finally, regardless of the approach selected, to ensure and improve the performance and applicability of GPAI models for specific applications, banks are resorting to techniques such as RAG[5] or ‘fine-tuning’[6].
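
The design choice described above can be sketched as a thin abstraction over the deployment approach, so that a use case does not depend on whether the model is reached via a Cloud API or hosted on-premises. This is a hypothetical illustration: no real vendor API is used, and both backends are stubs.

```python
# Hypothetical sketch: abstracting GPAI deployment approaches behind
# one interface so a use case can switch between a Cloud API and an
# on-premises model without code changes. Both backends are stubs.
from abc import ABC, abstractmethod

class GPAIBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""

class CloudAPIBackend(GPAIBackend):
    """Would call a third-party model over HTTPS; stubbed here."""
    def complete(self, prompt: str) -> str:
        return f"[cloud] response to: {prompt}"

class OnPremBackend(GPAIBackend):
    """Would run a locally hosted open-source model; stubbed here."""
    def complete(self, prompt: str) -> str:
        return f"[on-prem] response to: {prompt}"

def summarise_document(backend: GPAIBackend, text: str) -> str:
    # The use case depends only on the interface, not the deployment.
    return backend.complete(f"Summarise: {text}")

print(summarise_document(CloudAPIBackend(), "Court ruling ..."))
print(summarise_document(OnPremBackend(), "Court ruling ..."))
```

Isolating the deployment choice behind one interface is one way a bank could test several of the approaches in parallel, consistent with the case-by-case experimentation the RAQ data suggests.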

According to the EBA RAQ, EU banks most commonly deploy models and systems using between one and three different approaches. Notably, banks appear to be combining the use of third-party open-source models or systems with other methods, such as deploying third-party models via Cloud APIs or on-premises solutions and developing models in-house or outsourcing development to third parties. Generally, regardless of the combination of approaches followed, banks appear to be resorting to third parties to deploy GPAI. Most respondent EU banks are already using third-party services, mainly via Cloud APIs offered by large model developers, due to the flexibility and scalability of doing so. A lower proportion of banks are integrating third-party GPAI systems or models ‘on-premises’, attracted by the higher degree of control this places in the hands of banks (which could benefit the scope of resources and costs dedicated to GPAI, and ultimately the cost-effectiveness of deploying it). However, many banks may face difficulties in following this deployment approach due to limited in-house technical skills and resources. Finally, only a small number of respondent EU banks are developing proprietary GPAI models and systems, citing the financial and technical resources required as the main challenge (Figure 77).

Figure 77a: Diversity of deployment approaches by banks for GPAI, autumn 2024

Source: EBA Risk Assessment Questionnaire

Figure 77b: Deployment approaches adopted by banks to integrate or adopt AI (in general) and GPAI, autumn 2024

Source: EBA Risk Assessment Questionnaire

Hence, EU banks seem to be pursuing a strategy of multiple deployment approaches, while assessing the effectiveness of each on a case-by-case basis. This may be indicative of the nascent stage of GPAI technology within the banking sector. Each approach entails different opportunities and risks, and decisions are influenced by cost, security, the availability of skills in-house (and via third parties), the quality of available data and data privacy considerations. As banks are still primarily in the testing and piloting phase, experimentation allows them to assess the strengths and limitations of the various approaches.

Drivers and obstacles to GPAI adoption in the EU banking sector

To date, the EBA has observed that GPAI applications currently being tested and experimented with by banks focus on areas where the technology can bring operational efficiencies, either via increasing productivity, lowering costs, or supporting and improving the speed of back-office processes (such as coding, programming, auditing or fraud detection). Banks appear to be attracted to GPAI adoption by efficiency gains in areas such as content intelligence, content generation, customer engagement or code generation.

In addition, some EU banks are engaging with GPAI as part of wider initiatives to foster an innovative culture and position their entity as innovation-friendly towards customers and other businesses. In this sense, banks across the EU are showing a considerable interest in exploring the potential of GPAI in financial terms as well. According to the EBA RAQ data, while more than half of the respondent banks plan to invest between 0% and 0.25% of their equity in GPAI, around 21% of respondent EU banks have indicated plans to invest above 0.25% of equity, with one bank in the sample planning a substantial investment exceeding 4% of equity into GPAI (Figure 78). These investment figures may suggest that EU banks are interested in the potential of GPAI but are approaching it with a measured level of financial commitment, likely reflecting the early stages of its adoption and the need to carefully assess the return on investment[7].

Figure 78: Proportion of banks investing or planning to invest in GPAI (investment levels as % of equity), autumn 2024

Source: EBA Risk Assessment Questionnaire

Despite the abovementioned drivers, the EBA observes that EU banks do not currently consider GPAI sufficiently robust to be used in other use cases. However, the EBA notes that some are researching the potential of GPAI in other areas, such as AML/CFT, the profiling and clustering of clients or risk modelling, and some EU banks are experimenting with GPAI in those use cases. Consequently, the EBA will keep monitoring those uses via surveys, workshops and desk-based research, as the banking sector’s research on and experience with GPAI matures.

Potential risks associated with the use of GPAI in the banking sector

In terms of the risks associated with GPAI, as compared to ‘other AI’ or ‘traditional AI’, based on engagement with competent authorities and industry, the EBA has identified potential risks regarding the following:

  • Explainability. GPAI models are trained using a very large number of parameters, and randomness is a key feature of their outputs. As a result, GPAI models and systems can be highly complex and opaque. Consequently, explainability techniques used for ‘traditional AI’, which are themselves of limited effectiveness, are not necessarily valid for GPAI. Besides, GPAI-specific explainability techniques (such as RAG techniques or asking a GPAI model to point back to the source of an output) are not sufficient to comply with explainability expectations for consumer-facing interfaces. The explainability challenges therefore appear difficult to tackle given the current state of research and technical development.
  • Reliability. In addition to the abovementioned explainability challenges, most state-of-the-art GPAI models suffer from ‘hallucinations’, i.e. output generated by GPAI which contains incorrect or misleading information presented as fact to the end-user, for which researchers have not yet found a solution. Consequently, the outputs of GPAI systems may face reliability issues, which may justify the cautious approach that banks are taking towards GPAI.
  • Transparency. Transparency around the capabilities and limitations of GPAI models is particularly important for the banking sector, considering the requirements that banks face regarding model validation, privacy, security, accessibility or consumer protection. While GPAI model providers often develop GPAI-specific technical documentation (e.g. ‘model cards’), there are still some concerns observed related to privacy and intellectual property issues.
  • ICT risks. GPAI models are often developed and provided by third-party technical providers. Therefore, the integration of GPAI into a credit institution’s technical architecture may enhance challenges to operational resilience, as well as security and privacy concerns. While banks face similar challenges when integrating other technical service providers, the complexity of GPAI applications and the limited explainability of their outputs may increase ICT risks.
  • Data governance. GPAI models rely on training data that may have limitations in quality, reliability and privacy. Due to the extensive size of datasets used, GPAI could present challenges in ensuring sound data governance for banks.
  • Access to necessary skills. The use of GPAI introduces challenges that require banks to adopt a ‘human-in-the-loop’ approach, involving human intervention throughout the application’s lifecycle. However, the rapid development in GPAI research and the limited availability of expertise in this area can make it difficult for banks to access the necessary skills and talent. This has led to upskilling and re-skilling efforts, especially among larger banks. Smaller banks, however, might face difficulties competing for the required skills and talent.
  • Other concerns. GPAI also presents challenges such as the environmental impact of developing GPAI models (i.e. water and electricity consumption) and concerns around competition and market concentration, due to the substantial investments needed to compete in the development of state-of-the-art GPAI models and the limited number of technical service providers in the market.

Box 12: Consumer protection challenges associated with GPAI in the EU banking sector

Based on engagement with competent authorities, market stakeholders and consumer organisations, the EBA has identified that the following consumer protection issues could arise if banks adopt GPAI in consumer-facing use cases:

Accountability. The complexity and limited transparency associated with GPAI could raise challenges in terms of ensuring that providers of GPAI systems and models to banks are accountable for inaccurate or inappropriate outcomes, such as inaccurate, misleading or false information or advice to bank customers. However, as banks are accountable for inaccurate, misleading, or false information provided to consumers according to general financial protection legislation, consumers may seek compensation for harms through banks.

Bias, discrimination and financial exclusion. Training data used by GPAI model developers can, without mitigation, exacerbate discrimination and bias against minority or underrepresented groups, leading to risks of financial exclusion. Due to the randomness associated with GPAI outputs, banks deploying GPAI in customer-facing applications may have limited capability to understand and explain the logic behind biased outputs. Additionally, for instance, consumer support for speakers of minority languages may be impacted if such support gradually shifts from human to GPAI-powered virtual assistants, since GPAI models are mainly trained on the most widely spoken languages.

Transparency. The opacity of GPAI applications implies that informing consumers about the use of GPAI in consumer-facing interfaces may not be sufficient for raising their awareness. Therefore, banks might need to explore the use of other means (e.g. application-specific disclaimers) to provide more complete information to consumers.

Data security. The improved technical abilities of attackers (including new types of cyber attacks, prompt-related attacks, misinformation campaigns, data poisoning attacks, and data exfiltration techniques) can raise data security concerns for consumers. Use of GPAI can also contribute to data security risks for consumers via information leakage or inappropriate or non-factual responses. However, GPAI also offers potential benefits to defenders, in areas such as programming script analysis or malware detection and investigation, and existing research[8] suggests that there is no substantial evidence yet that GPAI can automate sophisticated cybersecurity tasks in a way that would tip the balance between cyberattackers and defenders in favour of attackers.

Other concerns. GPAI can potentially change the threat landscape for banking consumers by boosting cyberattackers' technical skills, aiding social engineering or phishing attacks, and speeding up manipulation techniques through consumer impersonation and hyper-realistic scams, which could introduce new challenges for consumer experiences.

In view of these potential risks, EU banks appear to be adopting a risk-based and graduated approach to GPAI, focusing on building guardrails and controls and on ensuring human intervention during the early adoption of GPAI. As a consequence, higher-risk use cases are being tested by banks only after they have reached an advanced understanding of GPAI models, deployment methods, their potential effects and the necessary mitigants.



[1] The EU AI Act entered into force in August 2024. However, different elements of the Regulation will be applicable in a phased approach, with full application expected by 2 August 2026. See: http://data.europa.eu/eli/reg/2024/1689/oj

[3] The use of cloud-based APIs enables banks to access and integrate GPAI models into their systems without the need for extensive in-house infrastructure.

[4] The ‘on-premises’ solution involves installing and running GPAI models directly on a bank's own servers and infrastructure, providing greater control over data security and compliance, but requiring significant internal resources for maintenance and scalability.

[5] RAG techniques aim to optimise the output of GPAI models with facts retrieved from authoritative sources distinct from training datasets, thus extending their capabilities to specific domains or an organisation's internal knowledge base, without the need to retrain the model.

[6] Fine-tuning is a technique in which pre-trained GPAI models are customised to perform specific tasks or behaviours by using examples and instructions.

[7] However, the EBA notes that a quarter of the respondent banks have indicated plans to increase their investments in GPAI over the coming years. Nevertheless, most banks either do not plan to increase their investments in GPAI or were unable or unwilling to provide a response to this question. This may reflect ongoing uncertainties about the long-term strategic value of GPAI or signal that banks are still in the process of determining the role this technology will play in their broader digital transformation strategies.

[8] See the Interim International Scientific Report on the Safety of Advanced AI, published in May 2024: https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai