DATA PROTECTION
Italian Data Protection Authority: decalogue for the implementation of healthcare services at national level through artificial intelligence (AI) systems.
The Italian Data Protection Authority has issued a decalogue for the implementation of healthcare services at national level through artificial intelligence (AI) systems. Transparency of decision-making processes, human-supervised automated decisions and algorithmic non-discrimination: these are the three cardinal principles identified by the Authority on the basis of the GDPR and in light of the case law of the Council of State.
According to the Authority's indications, the patient must have the right to know, including through communication campaigns, whether decision-making processes (e.g. in the clinical or health-policy sphere) based on automated processing carried out with AI tools exist and what they are, and to receive clear information on the logic used to arrive at those decisions.
The decision-making process should include human supervision, allowing healthcare professionals to verify, validate or refute the processing performed by AI tools. The Garante advises that the data controller use reliable AI systems that reduce errors due to technological or human causes, and that it periodically verify their effectiveness by putting in place appropriate technical and organisational measures. This is also with a view to mitigating the potential discriminatory effects that the processing of inaccurate or incomplete data could have on a person's health.