
DATA PROTECTION

Italian Data Protection Authority: alarm over the risks to sensitive data contained in medical reports uploaded to Artificial Intelligence platforms.


The practice of uploading clinical test results, X-rays and other medical reports to generative AI platforms and asking them for interpretations and diagnoses is increasingly widespread.


This phenomenon is alarming for two reasons: the risk of losing control over health data of extraordinary importance to individuals, and the risk of receiving incorrect indications from artificial intelligence solutions that were not specifically designed to provide the requested information and have not been made available to the public as medical devices following the tests and controls required by sector regulations.


The Italian Data Protection Authority is sounding the alarm and invites users of these platforms to carefully consider whether to share health data with providers of generative artificial intelligence services, and whether to rely on the answers these services automatically generate, answers that should always be verified with a medical professional.


On the first point in particular, the Authority draws attention to the advisability of reading the privacy notices that platform operators are obliged to publish, in order to verify whether the health data contained in the clinical examinations uploaded online for interpretation and/or diagnosis will be deleted once the request has been fulfilled or at a later time, or will instead be stored by the service operator for the purpose of training its algorithms.


Many of the best-known generative artificial intelligence services do in fact allow users to decide what happens to the data and documents they upload while using the service.


The Authority therefore draws attention to the importance, recognized by both the European Regulation on AI and the Superior Council of Health, of always ensuring qualified human intermediation in the processing of health data through AI systems.


Human intervention is essential to prevent risks that could directly affect the health of the person (see art. 14 AI Regulation and opinion "Artificial intelligence systems as a diagnostic support tool", Superior Council of Health, Sec. V, 9 November 2021).


"Qualified human supervision", among other things, must be ensured at all stages of the AI system's life cycle: from development and training to testing and validation, before it is placed on the market and throughout its use.


The issue had already been anticipated by the Authority in the Decalogue for the implementation of national health services through AI systems, adopted in October 2023 (web doc. no. 9938038), which also highlighted additional personal data protection requirements that must be ensured when implementing such systems to support the understanding of diagnostic reports: a suitable legal basis ensuring lawfulness; a necessary and prior impact assessment; and transparency and security obligations.


Finally, the Authority reminds developers of AI systems and healthcare professionals of the risks involved in the massive collection of personal data from the web for the purpose of training generative artificial intelligence models, risks highlighted in the document on web scraping published in May 2024 (web doc. no. 10020334).
