ACPR published a discussion paper on the governance of artificial intelligence in finance. In this paper, ACPR proposes four principles for evaluating artificial intelligence algorithms and tools: data management, performance, stability, and explainability. ACPR also identifies governance concerns that need to be taken into account as early as the design phase of an algorithm. These concerns involve the integration of artificial intelligence into traditional business processes, the impact of this integration on internal controls, the relevance of outsourcing (partially or fully) the design or maintenance phases, and the internal and external audit functions. The comment period for this discussion paper ends on September 04, 2020.
The governance of artificial intelligence algorithms requires careful consideration of the validation of each decision-making process. The regulatory compliance and performance objectives of these algorithms are achievable only through a certain level of explainability and traceability. In the discussion paper, ACPR recommends focusing on the following governance concerns:
- Integration of artificial intelligence into business processes. This involves ascertaining whether the artificial intelligence component fulfills a critical function (owing to its operational role or the associated compliance risk) and whether the engineering process follows a well-defined methodology throughout the machine learning lifecycle (from algorithmic design to monitoring in production), with respect to reproducibility, quality assurance, architectural design, auditability, and automation.
- Human-algorithm interactions. These interactions can require a particular kind of explainability, intended either for internal operators who need to confirm or reject an algorithm’s output or for customers who are entitled to understand the decisions impacting them or the commercial offers made to them. In addition, processes involving artificial intelligence often leave room for human intervention, which is beneficial or even necessary but also bears new risks. Such risks include the introduction of biases into the explanation of an algorithm’s output, or a stronger sense of engaging one’s responsibility when contradicting the algorithm than when confirming its decisions.
- Security and outsourcing. Machine learning models are exposed to new kinds of attacks. Furthermore, strategies such as development outsourcing, skills outsourcing, and external hosting should undergo careful risk assessment. More generally, third-party risks should be evaluated.
- Initial and continuous validation process. The initial validation process must often be re-examined when designing an artificial intelligence algorithm intended to augment or alter an existing process. For instance, the governance framework applicable to a business line may in some cases be maintained while, in other cases, it will have to be updated before the artificial intelligence component is put into production. Continuous validation, in turn, requires technical expertise and machine-learning-specific tools to monitor the algorithm over time and ensure the aforementioned principles remain satisfied (appropriate data management, predictive accuracy, stability, and availability of valid explanations).
- Audit. For internal and external audits of artificial-intelligence-based systems in finance, exploratory work led by ACPR suggests adopting a dual approach. The first facet combines analysis of the source code and data with methods for documenting artificial intelligence algorithms, predictive models, and datasets. The second facet leverages methods providing explanations for an individual decision or for the overall behavior of the algorithm; it also relies on two techniques for testing an algorithm as a black box: challenger models (to compare against the model under test) and benchmarking datasets, both curated by the auditor.
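To make the continuous-validation concern above concrete, the following is a minimal monitoring sketch. It is illustrative only: the class name, window size, and accuracy threshold are assumptions for the example, not values prescribed in the ACPR discussion paper. It tracks a deployed model's rolling predictive accuracy and flags degradation over time.

```python
from collections import deque

class AccuracyMonitor:
    """Tracks rolling accuracy of a deployed model and flags degradation.

    Illustrative sketch: the window size and alert threshold are
    hypothetical, not values prescribed by the ACPR discussion paper.
    """

    def __init__(self, window=100, min_accuracy=0.90):
        self.window = deque(maxlen=window)   # keeps only the most recent outcomes
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        """Record one production decision once the ground truth is known."""
        self.window.append(prediction == actual)

    def accuracy(self):
        """Rolling accuracy over the current window, or None if empty."""
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def is_degraded(self):
        """True when rolling accuracy has fallen below the threshold."""
        acc = self.accuracy()
        return acc is not None and acc < self.min_accuracy

# Example: accuracy drops after a run of mispredictions
monitor = AccuracyMonitor(window=10, min_accuracy=0.8)
for _ in range(7):
    monitor.record(1, 1)   # correct predictions
for _ in range(3):
    monitor.record(1, 0)   # mispredictions
print(monitor.accuracy())     # 0.7
print(monitor.is_degraded())  # True
```

In practice, similar checks would cover the other principles as well (data quality, stability of outputs, availability of explanations), not accuracy alone.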
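The black-box audit techniques mentioned above can likewise be sketched in a few lines. In this hypothetical example (the function names and tolerance are assumptions, not part of the ACPR paper), the auditor evaluates a challenger model against the model under test on a curated benchmark dataset and flags a material performance gap.

```python
def evaluate(model, dataset):
    """Fraction of benchmark cases where the model's output matches the label."""
    correct = sum(1 for features, label in dataset if model(features) == label)
    return correct / len(dataset)

def challenger_audit(model_under_test, challenger, benchmark, tolerance=0.05):
    """Black-box comparison: flag the audited model if it underperforms
    the challenger by more than `tolerance` on the auditor's benchmark."""
    gap = evaluate(challenger, benchmark) - evaluate(model_under_test, benchmark)
    return {"gap": gap, "flagged": gap > tolerance}

# Toy benchmark curated by the auditor: label is 1 when the feature exceeds 5
benchmark = [((x,), int(x > 5)) for x in range(10)]
audited = lambda f: int(f[0] > 7)      # misclassifies x in {6, 7}
challenger = lambda f: int(f[0] > 5)   # matches the benchmark labels exactly

result = challenger_audit(audited, challenger, benchmark)
print(result)  # gap of about 0.2, so the audited model is flagged
```

Neither model's internals are inspected here, which is the point of the black-box facet: the comparison rests entirely on the auditor-curated challenger and benchmark.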
Comment Due Date: September 04, 2020
Keywords: Europe, France, Banking, Insurance, Governance, Artificial Intelligence, Fintech, Machine Learning, Regtech, Outsourcing Arrangements, ACPR