EIOPA published a report from the Consultative Expert Group on Digital Ethics that sets out governance principles for ethical and trustworthy artificial intelligence in the EU insurance sector. The report builds on recent international and EU developments in the area of digitalization and artificial intelligence. Taking into account the opportunities and challenges of artificial intelligence, the Consultative Expert Group on Digital Ethics has developed an ethical and trustworthy artificial intelligence governance framework that seeks to enable stakeholders in the insurance sector to harness the benefits, and address the challenges, arising from artificial intelligence.
The proposed framework recognizes the freedom of insurance firms to select a combination of governance measures that best fits their respective business models and the concrete artificial intelligence use cases that they aim to implement. The proposed framework also highlights the areas that require special consideration with respect to promoting trust in the use of artificial intelligence by insurance firms. The report details the following governance principles for ethical and trustworthy artificial intelligence in the insurance sector:
- Principle of proportionality. Insurance firms should conduct an artificial intelligence use case impact assessment to determine the governance measures required for a specific artificial intelligence use case. The artificial intelligence use case impact assessment and the governance measures should be proportionate to the potential impact of a specific artificial intelligence use case on consumers and/or insurance firms. Insurance firms should then assess the combination of measures put in place to ensure an ethical and trustworthy use of artificial intelligence.
- Principle of fairness and non-discrimination. Insurance firms should adhere to principles of fairness and non-discrimination when using artificial intelligence as well as consider the outcomes of artificial intelligence systems, while balancing the interests of all the stakeholders involved. Insurance firms should consider financial inclusion issues, assess and develop measures to mitigate the impact of rating factors such as credit scores, and avoid the use of certain types of price and claims optimization practices, such as those aiming to maximize consumers’ “willingness to pay” or “willingness to accept.”
- Principle of transparency and explainability. Insurance firms should strive to use explainable artificial intelligence models, in particular in high-impact artificial intelligence use cases, although, in certain cases, they may combine model explainability with other governance measures insofar as these ensure the accountability of firms, including enabling access to adequate redress mechanisms. Explanations should be meaningful and easy to understand to help stakeholders make informed decisions. Insurance firms should transparently communicate to consumers the data used in artificial intelligence models and ensure that consumers are aware they are interacting with an artificial intelligence system and understand its limitations.
- Principle of human oversight. Insurance firms should establish adequate levels of human oversight throughout the lifecycle of an artificial intelligence system. The organizational structure of insurance firms should assign and document clear roles and responsibilities for the staff involved in artificial intelligence processes, fully embedded in their governance system. The roles and responsibilities of staff members may vary from one artificial intelligence use case to another. Insurance firms must also assess the impact of artificial intelligence on the work of employees and provide staff with adequate training.
- Principle of data governance and record keeping. The provisions included in national and European data protection laws should be the basis for the implementation of sound data governance throughout the artificial intelligence system lifecycle, adapted to specific artificial intelligence use cases. Insurance firms should ensure that data used in artificial intelligence systems is accurate, complete, and appropriate and should apply the same data governance standards regardless of whether data is obtained from internal or external sources. Data should be stored in a safe and secure environment; for high-impact use cases, insurance firms should keep appropriate records of the data management processes and modeling methodologies to enable their traceability and auditability.
- Principle of robustness and performance. Insurance firms should use robust artificial intelligence systems, whether developed in-house or outsourced to third parties, taking into account their intended use and the potential to cause harm. Artificial intelligence systems should be fit for purpose and their performance should be assessed and monitored on an ongoing basis, including through the development of relevant performance metrics. It is important that the calibration, validation, and reproducibility of artificial intelligence systems are sound and ensure that the outcomes of the artificial intelligence system are stable over time. Artificial intelligence systems should be deployed in resilient and secure IT infrastructures.
EIOPA welcomes these principles and findings from the Consultative Expert Group and believes that they provide a highly valuable starting point for better establishing the boundaries for appropriate use of artificial intelligence in insurance. EIOPA will use these findings to identify possible supervisory initiatives in this area while taking into account the ongoing EU-level developments with respect to digitalization and artificial intelligence.
Keywords: Europe, EU, Insurance, Artificial Intelligence, Governance, Proportionality, Insurtech, Data Governance, Cyber Risk, Digital Ethics, Big Data, Regtech, EIOPA