The European Commission (EC) launched a pilot phase to ensure that the ethical guidelines for the development and use of artificial intelligence, or AI, can be implemented in practice. EC is taking a three-step approach, which involves setting out the key requirements for trustworthy artificial intelligence, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric artificial intelligence. EC also presented the next steps for building trust in artificial intelligence by taking forward the work of the High-Level Expert Group, which was appointed in June 2018.
Under the EC approach, trustworthy artificial intelligence should respect all applicable laws and regulations as well as a series of key requirements; specific assessment lists aim to help verify the application of each of these requirements:
- Human agency and oversight. Artificial intelligence systems should enable equitable societies by supporting human agency and fundamental rights and not decrease, limit, or misguide human autonomy.
- Robustness and safety. Trustworthy artificial intelligence requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of artificial intelligence systems.
- Privacy and data governance. Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
- Transparency. The traceability of artificial intelligence systems should be ensured.
- Diversity, non-discrimination, and fairness. Artificial intelligence systems should consider the whole range of human abilities, skills, and requirements and ensure accessibility.
- Societal and environmental well-being. Artificial intelligence systems should be used to foster positive social change and to enhance sustainability and ecological responsibility.
- Accountability. Mechanisms should be put in place to ensure responsibility and accountability for artificial intelligence systems and their outcomes.
In terms of building international consensus for human-centric artificial intelligence, EC wants to bring this approach to artificial intelligence ethics to the global stage because technologies, data, and algorithms know no borders. To this end, EC will strengthen cooperation with like-minded partners such as Japan, Canada, and Singapore and continue to play an active role in international discussions and initiatives, including the G7 and G20. The pilot phase will also involve companies from other countries and international organizations. EC is inviting industry, research institutes, and public authorities to test the detailed assessment list drawn up by the High-Level Expert Group, which complements the guidelines.
Following the pilot phase, in early 2020, the Artificial Intelligence Expert Group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, EC will evaluate the outcome and propose any next steps. Furthermore, to ensure the ethical development of artificial intelligence, EC will, by Autumn 2019, launch a set of networks of artificial intelligence research excellence centers, begin setting up networks of digital innovation hubs, and, together with member states and stakeholders, start discussions to develop and implement a model for data sharing and for making best use of common data spaces.
Keywords: Europe, EU, Banking, Insurance, Securities, PMI, Regtech, Artificial Intelligence, Guidelines, EC