U.S. President Joe Biden signed an Executive Order, dated October 30, 2023, to ensure the safe, secure, and trustworthy development and use of artificial intelligence (AI). The Order was followed soon after by a draft memorandum from the Office of Management and Budget (OMB), the comment period for which ended on December 05, 2023. The OMB memorandum provides implementation guidance for federal agencies to manage AI risks and mandate accountability while advancing AI innovation. Other policymakers, from Congress to the U.S. states, may well use these documents as a guide for future action on requiring accountability in the use of AI.
In May 2023, the White House had issued a request for input on U.S. national priorities and future actions on AI. The resulting public comments informed the development of the AI Executive Order and other executive actions. Both the Executive Order and the draft OMB memo also build on earlier Biden administration efforts, such as the Blueprint for an AI Bill of Rights (released in October 2022) and the AI Risk Management Framework from the National Institute of Standards and Technology, or NIST (released in January 2023). Both documents set out mandates for accountability and for the federal government to serve as a model for accountable AI. The Executive Order directs federal agencies to develop additional guidance, which is likely to emerge over the next year.
The Executive Order sets out several guiding principles and priorities, including standards to promote AI safety and security, consumer and worker protection, data privacy, equity and civil rights, innovation, competition, and responsible government use of AI. The Order defines AI systems broadly and is not limited to generative AI: it covers any machine-based system that supports predictions, recommendations, or decisions. Among many others, the Order addresses policies on the following key aspects:
- Tasks U.S. agencies with creating standards to protect against AI misuse; for example, it mandates NIST to create standards for testing AI models before public release and directs OMB to issue guidance to federal agencies on labeling and authenticating official U.S. government content
- Focuses on “red-teaming” as the testing methodology, requiring private companies to preemptively test their models for specific safety concerns
- Requires developers of large AI models to conduct safety testing and to report the resulting documentation of safety testing practices and results to the federal government
- Directs the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content (meanwhile, AI companies such as OpenAI, Alphabet, and Meta Platforms have voluntarily agreed to watermark AI-generated content)
- Aims to establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software
- Seeks to “strengthen United States leadership of global efforts to unlock AI’s potential and meet its challenges” by focusing on expanding bilateral, multilateral, and multi-stakeholder engagements to collaborate on AI and accelerating development and implementation of vital AI standards with international partners to solve global challenges
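To make the content-authentication idea above concrete, the following is a minimal, purely illustrative sketch of how a publisher could tag content with a cryptographic provenance mark so that recipients can verify its origin. This is not the Department of Commerce's guidance or any company's actual watermarking scheme; the key name and tag format are hypothetical, and real systems (e.g., C2PA-style provenance) are considerably more sophisticated.

```python
import hmac
import hashlib

# Hypothetical signing key held by the content issuer (illustrative only).
SECRET_KEY = b"issuer-provenance-key"

def tag_content(text: str) -> str:
    """Append a provenance tag (an HMAC over the text) to the content."""
    digest = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n--provenance:{digest}"

def verify_content(tagged: str) -> bool:
    """Check that the tag matches the content, i.e., it was issued with the key."""
    text, _, digest = tagged.rpartition("\n--provenance:")
    expected = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, digest)

official = tag_content("Official statement: ...")
print(verify_content(official))   # True: tag matches the original text
print(verify_content(official.replace("Official", "Altered")))  # False: content was modified
```

In practice, public-key signatures rather than a shared secret would let anyone verify provenance without being able to forge it, but the symmetric sketch keeps the example self-contained.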
In general, market views on this Order vary considerably. Some market participants welcomed the Executive Order, while others noted that it is vague, depends on the goodwill of large technology firms, or takes a political stand instead of formulating concrete rules. Commentators have also highlighted that executive orders are not as stable or concrete as legislation, since future administrations may reverse them.
- Executive Order
- Fact Sheet on Executive Order
- Press Release on OMB Guidance
- Draft Policy from OMB
- Blueprint for an AI Bill of Rights
- NIST AI Risk Management Framework
Keywords: Americas, US, Regtech, Fintech, Suptech, Artificial Intelligence, Executive Order, OMB, NIST, Department of Commerce, White House