INFORMATION TECHNOLOGY

European Banking Authority (EBA): working document published on the impact of the EU Artificial Intelligence Regulation on the banking sector.

With the entry into force of the AI Act (EU Reg. 2024/1689), the banking and payments sector must also come to terms with specific rules on the use of artificial intelligence systems. The European Banking Authority (EBA) has published a summary document that maps the obligations of the AI Act against the existing sector legislation (CRR/CRD, DORA, PSD, CCD, MCD and the related EBA Guidelines), focusing in particular on the use of AI for creditworthiness assessment and credit scoring of natural persons, which the AI Act classifies as "high risk".

The AI Act defines AI systems as machine-based systems, with a certain degree of autonomy, that process input data to generate outputs – predictions, content, recommendations or decisions – capable of influencing physical or virtual environments. In the banking sector, the European legislator's attention focuses mainly on the algorithms that decide whether to grant a loan, on what conditions and with what risk assessment: precisely because these systems can directly affect customers' fundamental rights, they are subject to reinforced requirements.

In 2025, the EBA carried out a 'mapping' exercise to understand how the requirements for high-risk AI systems overlap with or complement the rules already applicable to banks and other intermediaries under its supervision. The main result is reassuring: no significant contradictions emerge between the AI Act and banking and payments regulation; on the contrary, the AI Act is read as a framework complementary to a regulatory system that already contains well-developed safeguards on risk governance, internal controls, data management and consumer protection. Some adjustments will still be necessary, especially at the operational level, to integrate the two sets of rules coherently.

The summary table on page 2 of the EBA document shows, for each key obligation of the AI Act (quality management systems, post-market monitoring, risk management, transparency towards users, human oversight, robustness and cybersecurity, data governance, AI literacy), whether EU banking law contains a fully aligned, partially aligned, complementary or no corresponding obligation. The resulting picture is mixed: in some areas the alignment is substantial, in others the AI Act adds an "extra dimension" – typically linked to fundamental rights and algorithmic impact – compared with traditional prudential rules.

To avoid unnecessary duplication, the AI Act expressly provides for "regulatory synergy" mechanisms: in certain cases, sector legislation may replace (by way of derogation) or supplement (by way of combination) some of the AI Act's obligations. The table on page 3 of the EBA document identifies, for example, the areas in which the risk management systems, technical documentation, registers and monitoring plans required by banking law can serve as a basis for satisfying, at least in part, the analogous obligations of the AI Act. Certain requirements typical of the AI Act, on the other hand, remain fully autonomous and are not "covered" by financial regulation, such as the obligation to guarantee the right to an explanation, the fundamental rights impact assessment or specific AI literacy measures.

Another element highlighted by the mapping is the coexistence of several competent authorities: on the one hand, the prudential and conduct authorities of the financial sector; on the other, the market surveillance authorities provided for by the AI Act. This mosaic of competences makes supervisory cooperation a central issue, in order to avoid overlaps, gaps or inconsistent messages to intermediaries. It is no coincidence that the "next steps" the EBA indicates for 2026-2027 include the promotion of a common supervisory approach among national authorities and structured collaboration with the AI Office and the AI Board's subgroup on AI in financial services.

Importantly, the document is neither a binding guideline nor an official legal position of the EBA. Rather, it is preparatory work, intended to provide input to the European Commission for future guidelines on the interaction between the AI Act and sectoral law and to offer soft guidance to banks and payment institutions in the initial phase of implementation of the new rules. Looking ahead, however, the message for operators is clear: the use of high-impact AI systems, in particular for credit and payments, can no longer be managed merely as a "risk model" or technological innovation issue, but will require integrated governance that brings together prudential requirements, protection of fundamental rights, algorithmic accountability and dialogue with an increasingly interconnected ecosystem of supervisors.
