INFORMATION TECHNOLOGY
Luxembourg Data Protection Authority publishes a guide on how to comply with the artificial intelligence literacy obligation.
The Luxembourg Data Protection Authority (Commission nationale pour la protection des données – CNPD) has published a guide for public and private organisations on how to ensure an adequate level of artificial intelligence (AI) literacy, in accordance with Article 4 of Regulation (EU) 2024/1689 (AI Act), which has applied in all EU Member States since 2 February 2025.
According to the CNPD, the obligation to ensure adequate AI literacy for personnel involved in the management and use of AI systems is a specific requirement under the AI Act and must be incorporated into the regulatory compliance processes already being implemented. The Luxembourg authority's guidance is based on the guidelines adopted at EU level, in particular the European Commission's frequently asked questions (FAQs) on AI literacy.
Article 4 of the Regulation requires providers and deployers of AI systems to take appropriate measures to ensure, "to their best extent", a sufficient level of AI literacy among their staff and other persons acting on their behalf. These measures must take into account:
the technical knowledge, experience, training and level of education of the individuals concerned;
the context of use of AI systems;
the characteristics of the persons or groups of persons whom such systems are intended to affect.
In its document, the CNPD clarifies that organisations must take a holistic approach to AI literacy, concretely assessing:
the degree of familiarity of employees with the concepts and mechanisms of AI;
the sector of use and the purpose of the systems used;
the categories of stakeholders, with a particular focus on vulnerable populations, workers and users of public services.
With reference to staff competence, the guide recommends that organisations adapt training programmes to actual needs, distinguishing between individuals with advanced technical knowledge and those with only basic skills, for whom introductory training on fundamental concepts such as machine learning and algorithms is appropriate. The CNPD also suggests taking seniority and onboarding paths into account when developing training plans.
As for the context of use, literacy measures should be calibrated according to the level of risk associated with the specific use of AI systems in the relevant sector (e.g. healthcare, finance, the public sector).
Finally, the CNPD considers the identification of the persons and entities affected by AI systems to be a crucial step in raising awareness of risks and in correctly setting up supervision and control mechanisms, in line with Article 3 of the Regulation.
In the document, the CNPD also provides an overview of the practices already adopted by some organisations in the field of AI literacy, offering useful examples for setting up effective, proportionate training strategies that comply with the new regulatory framework.