EIOPA published a report from its Consultative Expert Group on Digital Ethics that sets out governance principles for ethical and trustworthy artificial intelligence in the insurance sector in the EU. The report builds on recent international and EU developments in the areas of digitalization and artificial intelligence. Taking into account the opportunities and challenges of artificial intelligence, the Consultative Expert Group on Digital Ethics has developed an ethical and trustworthy artificial intelligence governance framework that seeks to enable stakeholders in the insurance sector to harness the benefits, and address the challenges, arising from artificial intelligence.
The proposed framework recognizes the freedom of insurance firms to select a combination of governance measures suited to their respective business models and to the concrete artificial intelligence use cases that they aim to implement. The proposed framework also highlights the areas that require special consideration with respect to promoting trust in the use of artificial intelligence by insurance firms. The report details the following governance principles for ethical and trustworthy artificial intelligence in the insurance sector:
- Principle of proportionality. Insurance firms should conduct an artificial intelligence use case impact assessment to determine the governance measures required for a specific artificial intelligence use case. The artificial intelligence use case impact assessment and the governance measures should be proportionate to the potential impact of a specific artificial intelligence use case on consumers and/or insurance firms. Insurance firms should then assess the combination of measures put in place to ensure an ethical and trustworthy use of artificial intelligence.
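To illustrate the proportionality principle, the impact assessment described above could be operationalized as a simple scoring rubric. The sketch below is a hypothetical illustration only; the dimensions, weights, and thresholds are assumptions and are not prescribed by EIOPA or the report.

```python
# Hypothetical sketch of a proportionality-based AI use case impact
# assessment. Dimensions, weights, and tier thresholds are illustrative
# assumptions, not EIOPA prescriptions.
from dataclasses import dataclass


@dataclass
class UseCaseAssessment:
    consumer_impact: int   # 1 (low) .. 5 (high) potential effect on consumers
    firm_impact: int       # 1 .. 5 potential effect on the insurance firm
    automation_level: int  # 1 (human decides) .. 5 (fully automated)

    def score(self) -> int:
        # Simple additive score; a real assessment would weigh many more factors.
        return self.consumer_impact + self.firm_impact + self.automation_level

    def governance_tier(self) -> str:
        # Higher-impact use cases attract proportionately stronger measures.
        s = self.score()
        if s >= 12:
            return "high: full documentation, human oversight, audit trail"
        if s >= 7:
            return "medium: documented controls and periodic review"
        return "low: standard controls"


# A highly automated pricing model with strong consumer impact lands
# in the top governance tier.
pricing_model = UseCaseAssessment(consumer_impact=5, firm_impact=4, automation_level=4)
print(pricing_model.governance_tier())
```

The key design point, consistent with the principle, is that the assessment output drives the stringency of the governance measures rather than applying a uniform regime to every use case.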
- Principle of fairness and non-discrimination. Insurance firms should adhere to principles of fairness and non-discrimination when using artificial intelligence as well as consider the outcomes of artificial intelligence systems, while balancing the interests of all the stakeholders involved. Insurance firms should consider financial inclusion issues, assess and develop measures to mitigate the impact of rating factors such as credit scores, and avoid the use of certain types of price and claims optimization practices, such as those aiming to maximize consumers’ “willingness to pay” or “willingness to accept.”
- Principle of transparency and explainability. Insurance firms should strive to use explainable artificial intelligence models, in particular in high-impact artificial intelligence use cases, although, in certain cases, they may combine model explainability with other governance measures insofar as these ensure the accountability of firms, including enabling access to adequate redress mechanisms. Explanations should be meaningful and easy to understand to help stakeholders make informed decisions. Insurance firms should transparently communicate to consumers the data used in artificial intelligence models and ensure that consumers are aware that they are interacting with an artificial intelligence system and understand its limitations.
- Principle of human oversight. Insurance firms should establish adequate levels of human oversight throughout the lifecycle of an artificial intelligence system. The organizational structure of insurance firms should assign and document clear roles and responsibilities for the staff involved in artificial intelligence processes, fully embedded in their governance system. The roles and responsibilities of staff members may vary from one artificial intelligence use case to another. Insurance firms must also assess the impact of artificial intelligence on the work of employees and provide staff with adequate training.
- Principle of data governance and record keeping. The provisions included in national and European data protection laws should be the basis for the implementation of sound data governance throughout the artificial intelligence system lifecycle, adapted to specific artificial intelligence use cases. Insurance firms should ensure that data used in artificial intelligence systems is accurate, complete, and appropriate and should apply the same data governance standards regardless of whether data is obtained from internal or external sources. Data should be stored in a safe and secure environment; for high-impact use cases, insurance firms should keep appropriate records of the data management processes and modeling methodologies to enable their traceability and auditability.
- Principle of robustness and performance. Insurance firms should use robust artificial intelligence systems, whether developed in-house or outsourced to third parties, taking into account their intended use and the potential to cause harm. Artificial intelligence systems should be fit for purpose and their performance should be assessed and monitored on an ongoing basis, including through the development of relevant performance metrics. It is important that the calibration, validation, and reproducibility of artificial intelligence systems is sound and ensures that the outcomes of the artificial intelligence system are stable over time. Artificial intelligence systems should be deployed in resilient and secure IT infrastructures.
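The ongoing monitoring of performance metrics called for under the robustness and performance principle can be sketched as a simple drift check against a validated baseline. The metric, tolerance, and figures below are illustrative assumptions, not values from the report.

```python
# Illustrative sketch of ongoing performance monitoring for an AI
# system, in the spirit of the robustness and performance principle.
# The metric (accuracy), tolerance, and monthly figures are assumptions.

def within_tolerance(baseline: float, observed: float, tolerance: float = 0.05) -> bool:
    """Return True if an observed performance metric (e.g. accuracy of a
    hypothetical claims-triage model) stays within `tolerance` of the
    baseline established at validation time."""
    return abs(observed - baseline) <= tolerance


# Validated baseline accuracy and a series of monthly observations.
baseline = 0.90
monthly_accuracy = [0.91, 0.89, 0.88, 0.84]

# Months whose drift exceeds tolerance would trigger investigation
# and possible recalibration of the model.
alerts = [m for m in monthly_accuracy if not within_tolerance(baseline, m)]
print(alerts)
```

In this sketch only the final month breaches the tolerance band, flagging the kind of outcome instability over time that the principle asks firms to detect and act on.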
EIOPA welcomes these principles and findings from the Consultative Expert Group and believes that they provide a highly valuable starting point for better establishing the boundaries for appropriate use of artificial intelligence in insurance. EIOPA will use these findings to identify possible supervisory initiatives in this area while taking into account the ongoing EU-level developments with respect to digitalization and artificial intelligence.
Keywords: Europe, EU, Insurance, Artificial Intelligence, Governance, Proportionality, Insurtech, Data Governance, Cyber Risk, Digital Ethics, Big Data, Regtech, EIOPA