With regulators questioning the appropriateness of models, implementing robust model governance is of paramount importance for banks. This article delves into governance best practices – including model definitions, inventory, categorization, and risk teams – and regulatory expectations.

Financial organizations use quantitative techniques and models for a variety of purposes – setting the business strategy, managing risk, calculating regulatory capital, monitoring and setting internal limits, calculating exposures, pricing different instruments, performing stress testing, etc. Since the last financial crisis, there has been ever-increasing regulatory pressure over the appropriateness of these models. Regulators are questioning the assumptions and limitations of models, the quality of the data used for their calibration, and the thoroughness and independence of the validation process. Another key area of focus has been the overall governance set around a model’s lifecycle – model development, implementation, calibration, and validation – at financial institutions, with emphasis on the robustness of the policies, processes, and controls, use tests, and the quality of the institutions’ documentation.

Regulatory expectations

In particular, regulators suggest that an institution’s senior management should be in a position to understand the limitations of the models, regardless of where the limitations come from, such as:

  • Methodological underpinning
  • The quality and availability of the data used for the calibration
  • Limitations linked to the models’ implementation (numerical inaccuracies, technological issues, source code bugs, etc.)
  • Limitations imposed by the context in which the model will be used
Figure 1. Model governance lifecycle
Source: Moody's Analytics

Regulators also expect senior management and model users to challenge the assumptions made by the developers and to assess whether the models would be adequate in real-life situations. In particular, it should be clear to model users under what circumstances the assumptions would no longer hold. This is especially critical when using data proxies, which may “break” when market or financial conditions change. Therefore, applying robust model governance is becoming of paramount importance for firms. Furthermore, as models can be a significant source of risk, institutions are setting up dedicated teams to manage and minimize this risk.

Agreeing on a firm-wide model definition

The first thing an institution needs to agree on is a firm-wide definition of what constitutes a model. A good starting point is the definition used by the Fed in SR 11-7:1

“The term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”

While it is also important to note:2

“The definition of model also covers quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature.”

Setting up a model inventory

Once firms agree on a definition, the next step is to set up a model inventory that covers the whole organization: financial planning, accounting and reporting, treasury, risk, front office, etc. Figure 2 illustrates a model inventory and the areas an effective one should span. At a minimum, the inventory should record:

  • Intended and approved use of the model (e.g., which instruments it covers, what types of measures can be calculated using this model, etc.)
  • Last calibration date and calibration frequency
  • Validation status (passed, failed, passed with caveats)
  • Date of model sign-off and level of sign-off
  • Link to relevant documentation and date of latest documentation review
  • Significant manual overrides and their justification
  • Other known issues
  • Dependencies on other models/modules
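The inventory fields above can be sketched as a simple record type. This is a minimal illustration only – the class, field names, and enum values are assumptions for the sketch, not a schema prescribed by SR 11-7 or any regulator:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class ValidationStatus(Enum):
    """Validation outcomes named in the article."""
    PASSED = "passed"
    FAILED = "failed"
    PASSED_WITH_CAVEATS = "passed with caveats"


@dataclass
class ModelInventoryEntry:
    """One record in a firm-wide model inventory (illustrative fields only)."""
    model_id: str
    approved_uses: list[str]               # instruments covered and measures the model may produce
    last_calibration: date
    calibration_frequency_months: int
    validation_status: ValidationStatus
    sign_off_date: date
    sign_off_level: str                    # e.g., "departmental", "business line", "group"
    documentation_link: str
    documentation_review_date: date
    manual_overrides: list[str] = field(default_factory=list)   # significant overrides + justification
    known_issues: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)       # IDs of upstream models/modules
```

A record like this makes the minimum coverage checkable: a missing sign-off date or an empty approved-uses list fails at construction or review time rather than surfacing during an audit.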

Categorizing models

In order to manage the risk associated with the use of models, firms can also separate them into groups by their complexity (both theoretical and implementation-related) and their materiality, measured by the impact that their use (or misuse) may have on the firm. Materiality can also be determined by how the model outputs are used – for example, limits monitoring, calculation of regulatory capital and other regulatory disclosures, or valuations to be included in the official books and records.

Figure 2. Model inventory, covering an entire organization
Source: Moody's Analytics

These categorizations will also inform the frequency of calibration and validation needed for the different models, and whether a change in market or financial conditions warrants recalibration or revalidation. They will also inform banks of the intended and correct use of a model and help them decide whether its use can be extended to other purposes or instruments without a thorough review by the model developers, recalibration, and/or revalidation. The thoroughness of the model, calibration, and implementation documentation, and the frequency with which these documents should be reviewed, will also be determined by categorization.
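One minimal way to operationalize this is to combine complexity and materiality scores into a tier that drives review cadence. The scoring scale, thresholds, and frequencies below are purely illustrative assumptions, not regulatory requirements – each firm defines its own:

```python
def model_tier(complexity: int, materiality: int) -> str:
    """Combine 1-3 scores for complexity and materiality into a governance tier.

    Thresholds are illustrative: the product ranges from 1 to 9.
    """
    score = complexity * materiality
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"


# Illustrative revalidation cadence per tier, in months.
REVALIDATION_MONTHS = {"high": 12, "medium": 24, "low": 36}
```

Under this sketch, a highly complex, highly material model (scores 3 and 3) lands in the "high" tier and is revalidated annually, while a simple, low-impact model is revisited every three years unless market conditions trigger an earlier review.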

Establishing model risk teams

Ideally, model risk teams, together with model validation teams, are independent of model development teams to avoid a conflict of interest. Their mandate should cover all aspects of the development, calibration, validation, and implementation of the models, as well as the quality of the model results and reporting.

They should also be able to challenge the effectiveness of the policies that affect the use of these models, review the robustness of the processes used in data management, model calibration, and implementation, and assess the thoroughness of the controls. They should also determine the level of automation required to minimize human error while still allowing for reasonable judgment overrides.

Model risk teams should establish a process for approvals, including key stakeholders and sign-off levels required before the model can be used (i.e., whether department/regional committee approval is enough or whether business line or even group level approvals are required).

Finally, once the firm has accepted that the use of models carries a specific type of risk, this risk should be included in the firm’s risk appetite statement. In particular, budget approvals for modeling teams and critical investments in model, data, and reporting technology should be part of these discussions.


With regulators questioning the assumptions and limitations of models, the quality of the data used for their calibration, and the thoroughness and independence of the validation process, banks should focus on effective model governance. Beyond responding to regulatory pressure, banks should closely scrutinize the models they employ to protect their business and reputation. Models must be governed and controlled by the key risk and management areas of an institution. As models can be a significant source of risk, the institution’s senior management should be aware of that risk and of the limitations of their models.

Sources

1 Board of Governors of the Federal Reserve System, SR 11-7: Guidance on Model Risk Management, 2011.

2 Board of Governors of the Federal Reserve System, SR 11-7: Guidance on Model Risk Management, Attachment, 2011.
