INFORMATION TECHNOLOGY
European Insurance and Occupational Pensions Authority (EIOPA) publishes opinion on the governance of artificial intelligence risks.
The European Insurance and Occupational Pensions Authority (EIOPA) has published an opinion on AI governance and risk management. The opinion addresses how AI systems, as defined under the EU AI Act (Regulation (EU) 2024/1689), interact with sector-specific insurance legislation. The insurance legislation explicitly mentioned in the opinion includes:
the Insurance Distribution Directive (IDD);
the Solvency II Directive; and
the Digital Operational Resilience Act (DORA).
The opinion also touches on high-risk AI systems, as the AI Act classifies AI systems used for risk assessment and pricing in life and health insurance as high-risk. Since those systems are already governed by the AI Act's own requirements, the opinion instead clarifies the fundamental principles and requirements in insurance-sector legislation that should be taken into account for insurance AI systems that are not considered prohibited or high-risk AI practices under the AI Act.
As a first step, for AI systems falling within the scope of the opinion, organisations are recommended to assess the risks of the different AI systems they use, and to develop governance and risk management measures that are appropriate and proportionate to the characteristics and risks of the specific use of the AI system at hand.
In a second step, taking into account the impact assessment referred to above, the opinion provides that organisations should develop a set of proportionate measures to ensure the responsible use of the AI system.
In line with Article 41 of the Solvency II Directive, Article 25 of the IDD and Articles 4, 5 and 6 of DORA, organisations should develop proportionate and risk-based governance and risk management systems covering the following areas:
fairness and ethics;
data governance;
documentation and record keeping;
transparency and explainability;
human oversight; and
accuracy, robustness and cybersecurity.
Regarding the principle of fairness under the IDD, the opinion highlights that adequate redress mechanisms, such as a complaints mechanism, should also be put in place to allow customers to seek redress and compensation when they have been harmed by an AI system.
Similarly, under the principle of data governance as set out in the Solvency II Directive, robust data governance should be applied throughout the AI system's entire lifecycle, covering data collection, processing and post-processing.
As regards transparency obligations under the IDD and the AI Act, the opinion stresses that any explanation needs to be tailored to the specific uses of AI systems and the needs of different stakeholders.
As regards human oversight, under Article 46 of the Solvency II Directive, insurance undertakings must have effective internal control systems in place at all levels of the organisation. In line with this legislative obligation, human oversight by competent personnel should support the identification and mitigation of potential biases, in line with the organisation's policies. Appropriate guardrails should be put in place to ensure that the AI system works as intended, respects customer rights and maintains a high level of safety.