EC launched a pilot phase to ensure that ethical guidelines for the development and use of artificial intelligence, or AI, can be implemented in practice. EC is taking a three-step approach: setting out the key requirements for trustworthy artificial intelligence, launching a large-scale pilot phase to gather feedback from stakeholders, and working toward international consensus on human-centric artificial intelligence. EC also presented the next steps for building trust in artificial intelligence by taking forward the work of the High-Level Expert Group, which was appointed in June 2018.
Under the EC approach, trustworthy artificial intelligence should respect all applicable laws and regulations as well as the following seven key requirements; specific assessment lists aim to help verify the application of each requirement:
- Human agency and oversight. Artificial intelligence systems should enable equitable societies by supporting human agency and fundamental rights and not decrease, limit, or misguide human autonomy.
- Robustness and safety. Trustworthy artificial intelligence requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of artificial intelligence systems.
- Privacy and data governance. Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency. The traceability of artificial intelligence systems should be ensured.
- Diversity, non-discrimination, and fairness. Artificial intelligence systems should consider the whole range of human abilities, skills, and requirements and ensure accessibility.
- Societal and environmental well-being. Artificial intelligence systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
- Accountability. Mechanisms should be put in place to ensure responsibility and accountability for artificial intelligence systems and their outcomes.
In terms of building international consensus for human-centric artificial intelligence, EC wants to bring this approach to artificial intelligence ethics to the global stage because technologies, data, and algorithms know no borders. To this end, EC will strengthen cooperation with like-minded partners such as Japan, Canada, and Singapore and continue to play an active role in international discussions and initiatives, including the G7 and G20. The pilot phase will also involve companies from other countries and international organizations. EC is inviting industry, research institutes, and public authorities to test the detailed assessment list drawn up by the High-Level Expert Group, which complements the guidelines.
Following the pilot phase, in early 2020, the High-Level Expert Group will review the assessment lists for the key requirements, building on the feedback received. Based on this review, EC will evaluate the outcome and propose any next steps. Furthermore, to ensure the ethical development of artificial intelligence, EC will, by Autumn 2019, launch a set of networks of artificial intelligence research excellence centers, begin setting up networks of digital innovation hubs, and, together with member states and stakeholders, start discussions to develop and implement a model for data-sharing and for making best use of common data spaces.
Keywords: Europe, EU, Banking, Insurance, Securities, PMI, Regtech, Artificial Intelligence, Guidelines, EC