BoE published a staff working paper on the textual complexity of banking regulations after the financial crisis of 2007-08. The authors interpret regulatory complexity in terms of processing complexity, applying techniques from natural language processing (NLP) and network analysis to the new post-crisis international banking rules. The results suggest that linguistic complexity in banking regulation is concentrated in a relatively small number of provisions and that the post-crisis reforms have accentuated this feature.
The paper is motivated by the ultimate normative question: “How complex does bank regulation have to be?” Before that, however, comes a positive question: “How complex is bank regulation?” The study provides evidence toward answering it by calculating textual complexity indicators on the near universe of UK prudential rules. The dataset comprehensively captures legal sources, allows like-for-like comparison between the pre- and post-crisis frameworks, and records the entire structure of cross-references within the regulatory framework (to facilitate network analysis). It comprises the near universe of prudential legal obligations and supervisory guidance that applied to UK banks in 2007 and 2017, capturing changes both in the scope of what regulators seek to control and in the legal architecture.
In this paper, the authors define complexity in terms of the processing difficulty encountered when comprehending a particular linguistic unit—for example, a single regulatory provision. Dimensions of processing difficulty for a provision include its length, lexical diversity, use of conditional statements, and the overall readability of its sentences (defined as “local” complexity). Some processing difficulties can only be resolved after accessing information outside the immediate context of the provision—for instance, cross-references or regulatory precedents needed to understand a provision’s intent (“global” complexity). The authors use natural language processing and network analysis techniques to measure these dimensions of local and global complexity and apply these measures to the constructed dataset.
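The measures described above can be illustrated with a minimal Python sketch. The function names, the list of conditional cues, and the formulas below are illustrative assumptions, not the paper's exact methodology: length and type-token ratio stand in for the “local” dimensions, and breadth-first traversal of a cross-reference graph stands in for a “global” dimension.

```python
import re
from collections import deque

# Cues counted as a rough proxy for conditional structure.
# This list is an illustrative assumption, not the paper's lexicon.
CONDITIONAL_CUES = ("if", "unless", "except", "subject to", "provided that")

def local_complexity(provision: str) -> dict:
    """Crude 'local' complexity proxies for a single provision."""
    words = re.findall(r"[a-z']+", provision.lower())
    sentences = [s for s in re.split(r"[.!?]+", provision) if s.strip()]
    return {
        "length": len(words),                                        # word count
        "lexical_diversity": len(set(words)) / max(len(words), 1),   # type-token ratio
        "conditionals": sum(
            len(re.findall(r"\b" + re.escape(cue) + r"\b", provision.lower()))
            for cue in CONDITIONAL_CUES
        ),
        "avg_sentence_length": len(words) / max(len(sentences), 1),  # readability proxy
    }

def global_complexity(cross_refs: dict, provision: str) -> int:
    """'Global' complexity proxy: number of other provisions reachable
    by following cross-references (breadth-first search)."""
    seen, queue = {provision}, deque([provision])
    while queue:
        for nxt in cross_refs.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return len(seen) - 1
```

For example, `global_complexity({"Art1": ["Art2"], "Art2": ["Art3"]}, "Art1")` returns 2, reflecting that two further provisions must be consulted to resolve the first.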
The study found that linguistic complexity in banking regulation is concentrated in a relatively small number of provisions. Starting from the simplest provisions, the measures of complexity increase slowly but then pick up rapidly over the final 10% of most complex provisions. The post-crisis reforms have accentuated this stylized fact, giving rise to more highly complex provisions and, in particular, a tightly connected core. The authors recognize that benchmarking these indicators is a necessary next step toward answering the question of how complex bank regulation has to be. Benchmarking against non-financial regulatory frameworks, or against frameworks in other jurisdictions, is challenging given differences in legal systems and policy substance. The authors therefore plan to exploit variation within the dataset to compare changes in complexity measures across different policy standards and to test how these correspond to the expectations of policymakers.
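The concentration pattern described above can be made concrete with a simple tail-share statistic. The function and the synthetic scores below are hypothetical illustrations, not the paper's measure: if complexity is concentrated, the 10% most complex provisions should account for a disproportionate share of total complexity.

```python
def top_decile_share(scores):
    """Share of total complexity accounted for by the 10% most complex provisions."""
    ranked = sorted(scores, reverse=True)
    k = max(1, len(ranked) // 10)  # size of the top decile (at least one provision)
    return sum(ranked[:k]) / sum(ranked)

# Synthetic example: 90 simple provisions and 10 highly complex ones.
share = top_decile_share([1.0] * 90 + [10.0] * 10)  # 100/190, about 0.53
```

In this synthetic case the top decile carries over half the total complexity, the kind of heavy-tailed pattern the study reports.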
The authors stress that these measures do not exhaust all the dimensions of linguistic complexity; in particular, resolving ambiguity in regulation is likely to be important for the information burden. In addition, to understand the economic effect of regulatory complexity, “soft” textual information needs to be combined with traditional “hard” numeric data; for example, textual regulatory complexity could be compared with balance sheet complexity. Over time, natural language processing can help enrich the economic evaluation of rules in terms of the interaction between rules, the impact of linguistic complexity, and the effectiveness of “rules vs standards.” The study contributes to this long-term research agenda by creating a dataset of all provisions applying to UK banks and analyzing how they have changed with the post-crisis reforms.
Related Link: Staff Working Paper
Keywords: Europe, UK, Banking, NLP, Machine Learning, Machine-Readable Regulations, Artificial Intelligence, Regtech, BoE