EC launched a pilot phase to ensure that ethical guidelines for the development and use of artificial intelligence, or AI, can be implemented in practice. EC is taking a three-step approach: setting out the key requirements for trustworthy artificial intelligence, launching a large-scale pilot phase to gather feedback from stakeholders, and working on international consensus-building for human-centric artificial intelligence. EC also presented the next steps for building trust in artificial intelligence by taking forward the work of the High-Level Expert Group, which was appointed in June 2018.
Under the EC approach, trustworthy artificial intelligence should respect all applicable laws and regulations as well as a set of seven key requirements; specific assessment lists are intended to help verify that each of these requirements is applied:
- Human agency and oversight. Artificial intelligence systems should enable equitable societies by supporting human agency and fundamental rights and not decrease, limit, or misguide human autonomy.
- Robustness and safety. Trustworthy artificial intelligence requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of artificial intelligence systems.
- Privacy and data governance. Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency. The traceability of artificial intelligence systems should be ensured.
- Diversity, non-discrimination, and fairness. Artificial intelligence systems should consider the whole range of human abilities, skills, and requirements and ensure accessibility.
- Societal and environmental well-being. Artificial intelligence systems should be used to enhance positive social change and to promote sustainability and ecological responsibility.
- Accountability. Mechanisms should be put in place to ensure responsibility and accountability for artificial intelligence systems and their outcomes.
In terms of building international consensus for human-centric artificial intelligence, EC wants to bring this approach to artificial intelligence ethics to the global stage because technologies, data, and algorithms know no borders. To this end, EC will strengthen cooperation with like-minded partners such as Japan, Canada, and Singapore and continue to play an active role in international discussions and initiatives, including the G7 and G20. The pilot phase will also involve companies from other countries and international organizations. EC is inviting industry, research institutes, and public authorities to test the detailed assessment list drawn up by the High-Level Expert Group, which complements the guidelines.
Following the pilot phase, in early 2020, the Artificial Intelligence Expert Group will review the assessment lists for the key requirements, building on the feedback received. Based on this review, EC will evaluate the outcome and propose any next steps. Furthermore, to ensure the ethical development of artificial intelligence, EC will, by Autumn 2019, launch a set of networks of artificial intelligence research excellence centers, begin setting up networks of digital innovation hubs, and, together with member states and stakeholders, start discussions to develop and implement a model for data sharing and for making best use of common data spaces.
Keywords: Europe, EU, Banking, Insurance, Securities, PMI, Regtech, Artificial Intelligence, Guidelines, EC