EIOPA published a report from its Consultative Expert Group on Digital Ethics that sets out governance principles for ethical and trustworthy artificial intelligence in the insurance sector in the EU. The report builds on recent international and EU developments in digitalization and artificial intelligence. Taking into account the opportunities and challenges of artificial intelligence, the Consultative Expert Group developed a governance framework for ethical and trustworthy artificial intelligence that seeks to enable stakeholders in the insurance sector to harness the benefits, and address the challenges, arising from artificial intelligence.
The proposed framework recognizes the freedom of insurance firms to select a combination of governance measures suited to their respective business models and to the concrete artificial intelligence use cases they aim to implement. The proposed framework also highlights the areas that require special consideration to promote trust in the use of artificial intelligence by insurance firms. The report details the following governance principles for ethical and trustworthy artificial intelligence in the insurance sector:
- Principle of proportionality. Insurance firms should conduct an artificial intelligence use case impact assessment to determine the governance measures required for a specific artificial intelligence use case. The artificial intelligence use case impact assessment and the governance measures should be proportionate to the potential impact of a specific artificial intelligence use case on consumers and/or insurance firms. Insurance firms should then assess the combination of measures put in place to ensure an ethical and trustworthy use of artificial intelligence.
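The proportionality assessment described above could, for example, take the form of a simple scoring exercise that maps a use case's rated impact to a governance tier. The criteria, ratings, and thresholds below are illustrative assumptions, not values prescribed by EIOPA or the report:

```python
# Hypothetical sketch of a proportionality-based impact assessment:
# rate a use case on a few illustrative criteria (1 = low, 3 = high)
# and map the total score to a governance tier.

def assess_use_case(consumer_impact: int, data_sensitivity: int,
                    autonomy: int) -> str:
    """Return a governance tier for an AI use case."""
    for rating in (consumer_impact, data_sensitivity, autonomy):
        if not 1 <= rating <= 3:
            raise ValueError("ratings must be between 1 and 3")
    total = consumer_impact + data_sensitivity + autonomy
    if total >= 8:
        return "high-impact: full set of governance measures"
    if total >= 5:
        return "medium-impact: proportionate subset of measures"
    return "low-impact: baseline measures"

# A fully automated pricing model using sensitive data rates high:
print(assess_use_case(consumer_impact=3, data_sensitivity=3, autonomy=3))
```

The combination of measures a firm then selects would follow from the tier, in line with the principle that governance effort scales with potential consumer impact.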
- Principle of fairness and non-discrimination. Insurance firms should adhere to principles of fairness and non-discrimination when using artificial intelligence as well as consider the outcomes of artificial intelligence systems, while balancing the interests of all the stakeholders involved. Insurance firms should consider financial inclusion issues, assess and develop measures to mitigate the impact of rating factors such as credit scores, and avoid the use of certain types of price and claims optimization practices, such as those aiming to maximize consumers’ “willingness to pay” or “willingness to accept.”
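One way firms commonly operationalize outcome monitoring of this kind is a demographic parity check: comparing approval rates across consumer groups and flagging large gaps for review. The group labels, outcomes, and 10% tolerance below are hypothetical, and this is only one of many possible fairness metrics:

```python
# Illustrative (non-prescriptive) fairness check: compare the rate of
# positive outcomes across two consumer groups and flag the gap if it
# exceeds a chosen tolerance.

def demographic_parity_gap(outcomes, groups):
    """outcomes: 0/1 decisions; groups: group label per decision."""
    rates = {}
    for g in set(groups):
        decisions = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    return max(rates.values()) - min(rates.values())

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)
print(f"approval-rate gap: {gap:.2f}")
if gap > 0.10:  # hypothetical tolerance
    print("gap exceeds tolerance: review rating factors")
```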
- Principle of transparency and explainability. Insurance firms should strive to use explainable artificial intelligence models, in particular in high-impact artificial intelligence use cases, although, in certain cases, they may combine model explainability with other governance measures insofar as these ensure the accountability of firms, including enabling access to adequate redress mechanisms. Explanations should be meaningful and easy to understand to help stakeholders make informed decisions. Insurance firms should transparently communicate to consumers the data used in artificial intelligence models, ensure that consumers are aware they are interacting with an artificial intelligence system, and explain its limitations.
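For simple model classes, a meaningful explanation can be generated directly. The sketch below assumes a hypothetical additive (linear) premium model, where each feature's contribution to an individual quote can be listed for the consumer; the feature names and weights are illustrative assumptions:

```python
# Minimal explainability sketch for a hypothetical linear premium
# model: per-feature additive contributions explain one quote.

weights = {"driver_age": -2.0, "vehicle_power": 1.5, "claims_history": 4.0}
base_premium = 300.0

def explain_quote(features: dict) -> dict:
    """Return each feature's additive contribution to the premium."""
    return {name: weights[name] * value for name, value in features.items()}

features = {"driver_age": 10, "vehicle_power": 20, "claims_history": 2}
contributions = explain_quote(features)
premium = base_premium + sum(contributions.values())
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"premium: {premium:.2f}")
```

For opaque model classes, firms would instead rely on post-hoc explanation techniques combined with the other accountability measures the report mentions.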
- Principle of human oversight. Insurance firms should establish adequate levels of human oversight throughout the lifecycle of an artificial intelligence system. The organizational structure of insurance firms should assign and document clear roles and responsibilities for the staff involved in artificial intelligence processes, fully embedded in their governance system. The roles and responsibilities of staff members may vary from one artificial intelligence use case to another. Insurance firms must also assess the impact of artificial intelligence on the work of employees and provide staff with adequate training.
- Principle of data governance and record keeping. The provisions included in national and European data protection laws should be the basis for the implementation of sound data governance throughout the artificial intelligence system lifecycle, adapted to specific artificial intelligence use cases. Insurance firms should ensure that data used in artificial intelligence systems is accurate, complete, and appropriate, and should apply the same data governance standards regardless of whether data is obtained from internal or external sources. Data should be stored in a safe and secure environment; for high-impact use cases, insurance firms should keep appropriate records of the data management processes and modeling methodologies to enable their traceability and auditability.
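Traceability records of the kind described above can be as simple as storing a cryptographic hash of the training data alongside modeling metadata, so a model version can later be tied back to the exact data used. The record fields below are illustrative assumptions:

```python
# Sketch of record keeping for traceability: hash a training dataset
# and store it with modelling metadata for later audit.

import hashlib
import json

def data_record(rows, model_version: str, source: str) -> dict:
    """Build an audit record tying a model version to its data."""
    payload = json.dumps(rows, sort_keys=True).encode()
    return {
        "model_version": model_version,
        "data_source": source,  # e.g. internal or external
        "n_rows": len(rows),
        "data_sha256": hashlib.sha256(payload).hexdigest(),
    }

rows = [{"policy": 1, "claim": 0}, {"policy": 2, "claim": 1}]
record = data_record(rows, model_version="pricing-v1", source="internal")
print(record["data_sha256"][:12])
```

Because the hash is deterministic, an auditor can later verify that a stored dataset is the one actually used to train the recorded model version.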
- Principle of robustness and performance. Insurance firms should use robust artificial intelligence systems, whether developed in-house or outsourced to third parties, taking into account their intended use and their potential to cause harm. Artificial intelligence systems should be fit for purpose and their performance should be assessed and monitored on an ongoing basis, including through the development of relevant performance metrics. It is important that the calibration, validation, and reproducibility of artificial intelligence systems is sound and ensures that the outcomes of the artificial intelligence system are stable over time. Artificial intelligence systems should be deployed in resilient and secure IT infrastructures.
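The ongoing monitoring the report calls for might, in its simplest form, track a model's accuracy per review period and flag periods that fall more than a chosen tolerance below the validation baseline. The baseline, observed values, and tolerance below are illustrative assumptions:

```python
# Ongoing performance monitoring sketch: flag review periods where
# accuracy drops more than a tolerance below the validation baseline.

def check_performance(baseline: float, observed: list,
                      tolerance: float = 0.05) -> list:
    """Return indices of periods whose accuracy breaches the tolerance."""
    return [i for i, acc in enumerate(observed) if baseline - acc > tolerance]

baseline = 0.90                       # accuracy at validation
observed = [0.89, 0.88, 0.82, 0.90]   # accuracy per monitoring period
breaches = check_performance(baseline, observed)
print(f"periods needing recalibration: {breaches}")
```

A breach would then trigger the recalibration and revalidation steps implied by the principle, rather than leaving a degraded model in production.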
EIOPA welcomes these principles and findings from the Consultative Expert Group and believes that they provide a highly valuable starting point for better establishing the boundaries for appropriate use of artificial intelligence in insurance. EIOPA will use these findings to identify possible supervisory initiatives in this area while taking into account the ongoing EU-level developments with respect to digitalization and artificial intelligence.
Keywords: Europe, EU, Insurance, Artificial Intelligence, Governance, Proportionality, Insurtech, Data Governance, Cyber Risk, Digital Ethics, Big Data, Regtech, EIOPA