EC Launches Pilot Phase on Implementation of Ethical Guidelines for AI
EC launched a pilot phase to ensure that the ethical guidelines for the development and use of artificial intelligence, or AI, can be implemented in practice. EC is taking a three-step approach: setting out the key requirements for trustworthy artificial intelligence, launching a large-scale pilot phase to gather feedback from stakeholders, and working on international consensus-building for human-centric artificial intelligence. EC also presented the next steps for building trust in artificial intelligence by taking forward the work of the High-Level Expert Group, which was appointed in June 2018.
Under the EC approach, trustworthy artificial intelligence should respect all applicable laws and regulations as well as a series of key requirements, with specific assessment lists intended to help verify the application of each requirement (an illustrative sketch of such a checklist follows the list below):
- Human agency and oversight. Artificial intelligence systems should enable equitable societies by supporting human agency and fundamental rights and not decrease, limit, or misguide human autonomy.
- Robustness and safety. Trustworthy artificial intelligence requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of artificial intelligence systems.
- Privacy and data governance. Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency. The traceability of artificial intelligence systems should be ensured.
- Diversity, non-discrimination, and fairness. Artificial intelligence systems should consider the whole range of human abilities, skills, and requirements and ensure accessibility.
- Societal and environmental well-being. Artificial intelligence systems should be used to foster positive social change and to enhance sustainability and ecological responsibility.
- Accountability. Mechanisms should be put in place to ensure responsibility and accountability for artificial intelligence systems and their outcomes.
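The assessment list itself is a policy checklist rather than a technical specification, but an organization taking part in the pilot might choose to track its self-assessment in a structured form. The following Python sketch is purely illustrative: the requirement names are taken from the list above, while the AssessmentEntry class, its fields, and the new_assessment helper are hypothetical and are not part of the EC materials.

```python
# Purely illustrative sketch of a self-assessment checklist covering the
# seven key requirements named above. The data structure and field names
# are hypothetical, not part of the EC assessment list itself.
from dataclasses import dataclass

KEY_REQUIREMENTS = [
    "Human agency and oversight",
    "Robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination, and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class AssessmentEntry:
    requirement: str
    addressed: bool = False  # whether the organization judges the requirement met
    evidence: str = ""       # e.g. a pointer to documentation or test results

def new_assessment() -> list:
    """Return an empty checklist with one entry per key requirement."""
    return [AssessmentEntry(requirement=r) for r in KEY_REQUIREMENTS]

if __name__ == "__main__":
    checklist = new_assessment()
    checklist[3].addressed = True  # mark "Transparency" as addressed
    checklist[3].evidence = "System decisions are logged and traceable."
    for entry in checklist:
        print(f"[{'met' if entry.addressed else 'open'}] {entry.requirement}")
```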
In terms of building international consensus for human-centric artificial intelligence, EC wants to bring this approach to artificial intelligence ethics to the global stage because technologies, data, and algorithms know no borders. To this end, EC will strengthen cooperation with like-minded partners such as Japan, Canada, and Singapore and will continue to play an active role in international discussions and initiatives, including the G7 and G20. The pilot phase will also involve companies from other countries as well as international organizations. EC is inviting industry, research institutes, and public authorities to test the detailed assessment list drawn up by the High-Level Expert Group, which complements the guidelines.
Following the pilot phase, in early 2020, the Artificial Intelligence Expert Group will review the assessment lists for the key requirements, building on the feedback received. Drawing on this review, EC will evaluate the outcome and propose any next steps. Furthermore, to ensure the ethical development of artificial intelligence, EC will, by Autumn 2019, launch a set of networks of artificial intelligence research excellence centers, begin setting up networks of digital innovation hubs, and, together with member states and stakeholders, start discussions to develop and implement a model for data-sharing and for making best use of common data spaces.
Keywords: Europe, EU, Banking, Insurance, Securities, PMI, Regtech, Artificial Intelligence, Guidelines, EC