INFORMATION TECHNOLOGY
EU Commission publishes study on the spread of Artificial Intelligence in the healthcare sector.
The European Commission has published a study dedicated to analysing the impact and deployment of artificial intelligence (AI) in the healthcare sector.
The document highlights how AI is now considered a strategic asset in addressing the challenges facing the healthcare sector, thanks to its potential to:
improve early diagnosis;
allow the development of personalized care plans;
optimize the operational and administrative efficiency of healthcare facilities.
However, the study points out that the adoption of AI still faces significant obstacles, which require a systematic approach to overcome technological, regulatory, social and organisational barriers.
The document draws on the AI experience of all EU Member States, as well as that of non-EU countries with advanced health systems, including the United States, Israel and Japan.
The study identifies a number of structural and operational challenges, including:
absence of uniform standards for structuring data;
poor interoperability between AI solutions and existing IT systems;
obsolete IT infrastructures;
lack of validation protocols for available AI solutions;
lack of transparency and explainability of AI tools;
uneven quality of use by end users;
variability of service levels across health facilities and reference populations.
From a regulatory perspective, the report highlights additional obstacles, such as:
complex and slow authorisation processes for placing AI-based products on the market;
privacy and personal data protection issues;
cybersecurity vulnerabilities and data breach risk;
lack of clear guidelines on the alignment of AI solutions with existing legislation;
uncertainty over civil liability and its allocation for errors attributable to AI.
The study pays particular attention to the concerns raised by generative AI, with regard to:
reliability, transparency and ethical implications;
phenomenon of AI-generated "hallucinations";
difficulties in managing intellectual property (IP) rights related to content creation.
The study highlights how several stakeholders, both in the health and technology sectors, have expressed concerns about the increase in organizational and compliance burdens resulting from Regulation (EU) 2024/1689 (AI Act).
While the legislation aims to ensure better patient protection and greater safety of AI solutions, it entails the need to:
strengthen risk management protocols;
increase staff training activities;
allocate more financial and operational resources for adaptation.
Healthcare professionals suggested adopting short, accessible training programmes complemented by peer-to-peer support networks, while hospital representatives advocated introducing government-accredited audit bodies and expanding access to training resources.
Among the most pressing concerns:
difficulty in accessing the data necessary for the training of AI systems;
uncertainties about validation requirements and informed consent of patients;
economic constraints and compliance problems similar to those already encountered when the GDPR entered into force.
AI developers have highlighted further issues, including:
intellectual property and data rights;
the need to ensure greater transparency;
friction in aligning the AI Act with the Medical Devices Regulation, resulting in additional administrative burdens.