
This article discusses the role of third-party data and analytics in the stress testing process. Beyond the simple argument that more eyes are better, we outline why some stress testing activities should definitely be conducted by third parties. We also dispel the notion that a bank can, in isolation, fully account for all of its risks. We then consider the incentives of banks, regulators, and third-party entities to engage in research and development related to stress testing.

“But in consequence of the division of labor, the whole of every man’s attention comes naturally to be directed towards some one very simple object. It is naturally to be expected, therefore, that some one or other of those who are employed in each particular branch of labor should soon find out easier and readier methods of performing their own particular work ...” – Adam Smith

Sometimes we encounter a perception among banks that regulators expect them to build all of their risk management tools in-house and use only internal data. Other times we find that banks consider themselves free to buy external data, typically when internal data are scarce, but view models estimated using industry-wide databases as unacceptable for stress testing unless they are heavily customized and calibrated to portfolio-specific data.

Such extreme views are at odds with the stated aim of the stress testing experiment. In the wake of the global financial crisis, legislators around the world instigated reforms designed to force large banks to better understand the risks associated with their books. Regulators envisaged that stress tests, when combined with enhanced regulatory scrutiny, could minimize the potential for future government bank bailouts and thus solve the problems of adverse selection of risks and moral hazard.

We describe this process as an “experiment” because, while hopes are high, no one yet knows whether stress testing will actually reduce overall banking system risk. For the experiment to be a success, a significant period of time needs to pass without a bank-failure-induced recession. For the US, a period of 50 years seems appropriate given that the Great Depression, the Savings and Loan Crisis, and the Great Recession all occurred during the past century.

Truly understanding all the risks a bank takes at a given time is a daunting challenge. If analysis of an external data set, or work by a third-party analyst, can help a bank or regulator understand risk more fully, does it matter that the arrangement involves entities and resources external to the bank? We contend that for the stress testing experiment to succeed, regulators should welcome and encourage research and development, as well as data collection and improvement, by anyone who is willing to engage in such activities. This call to arms extends not only to banks, bank employees, and regulators but equally to academics, data collectors, consultants, students, advisors, and freelance analysts. After all, if an amateur astronomer identifies a comet on a collision course with Earth, should the analysis fail validation because he or she is not employed by NASA?

Must banks consider other banks?

One of the main causes of the US subprime crisis was that a number of major institutions had taken long positions in this risky sector. If subprime had instead remained a niche industry with few players, the crisis might never have materialized. This type of behavior is a common element of historical banking crises. By their very nature, credit-fueled asset price bubbles – the most dangerous phenomena for the survival of banking systems – are characterized by widespread irrational exuberance among many borrowers and many lenders. Banks see their peers making excess profits lending to a certain group of borrowers and rush to join the party. As each new bank enters the market, the risk the initial entrants face rises even if their risk appetite and underwriting standards do not change. A safe, profitable activity for a few banks becomes gravely dangerous when many engage in the same behavior.

The level of risk in a bank’s book depends critically on how the book aligns with those of other banks. In other words, it was important for a hypothetical subprime lender circa 2006 to know and consider the implications of so many other similar lenders being active during the critical time. Furthermore, conservative mortgage lenders that were not engaged in subprime needed to know about and account for the effects of distortions to their industry created by the growth of lending to borrowers at the opposite end of the credit quality spectrum.

To gain a full understanding of risks, therefore, banks must explicitly reference data collected from beyond their own walls. This statement is true for a bank with poor internal data assets, where the external information serves a further purpose of giving modelers something to model. It is also true for banks with abundant internal data at their disposal that can conceivably build any model.

Portfolio alignment across banks seems to be a necessary condition for banking sector stress. Ironically, if lending markets are healthy and thus unlikely to cause problems for large financial institutions, banks probably can safely consider the nature of their portfolios in isolation and gain a largely accurate view of baseline portfolio risk. It is only under stress, when markets are distorted by collective irrational exuberance and its aftermath, that the need for external data becomes truly critical. But it is, after all, the stress events that most interest us here.


Where can banks source the external data needed to accurately gauge stress? Call reports might be one ready source. For some applications, however, these currently public sources may be insufficiently detailed. Regulators could make the data they collect from banks as part of the stress testing process public, though lawmakers or privacy activists might not favor this. In addition, many of the biggest players in the subprime saga were shadow banks, potentially invisible to banking regulators.

For financial institutions to be willing to share their data with their competitors, the data must be suitably anonymized and aggregated. Private-sector companies have historically provided a conduit through which banks can happily share information without giving up sensitive trade secrets. Data-gathering start-ups may already be collecting the data that holds the key to identifying the next crisis. They should be encouraged to continue the search and, when successful, to charge an appropriate price for their products.

Research and development

Having established the case for the use of external data, we turn to the question of who should model it. Stress testing a bank's book, especially against a pre-specified, exogenous macroeconomic scenario, is a relatively young discipline compared with other risk management practices. The reality is that in universities, NGOs, regulatory offices, banks, and consulting companies, dedicated professionals are busy trying to better understand stress testing methodology, to make it easier for banks to implement and more accurate for users.

Regulators and academics will presumably continue to engage in considerable innovative effort. Academics will likely pursue stress testing because it is consequential, yields large amounts of interesting data, and is intellectually stimulating. Regulators, meanwhile, will seek to innovate out of pure necessity. These organizations must seek the best available stress testing tools to confront rogue banks and stay ahead of the next banking crisis.

Banks have a strong incentive to maintain at least a minimum standard of stress test model performance. Shareholders expect banks to pay dividends, and if a failed stress test results in a reduction or suspension of those payments, the incumbent CEO could lose the support of shareholders. Nevertheless, among the bank holding companies that have not suffered a qualitative failure, it is unclear whether the institutions that took the stress test most seriously were rewarded for their efforts relative to those that merely did enough to get over the line. Because a bank's management team represents the interests of shareholders, it is more likely to invest in activities that increase shareholder value than in those that minimize the FDIC's losses should the bank happen to fail.

Regulators, of course, have called on banks to stitch the stress test into their day-to-day operations. For this to become a reality, however, stress test models must yield insights that enable business managers to lower risk for a given return or clearly increase the profitability associated with running the portfolio. If using the model does not yield such insights, banks may pretend to take the models seriously when under the regulatory spotlight but make no actual changes in their banking behavior or operations.

In terms of downside risk management, the incentive for banks to innovate may therefore be thin. True, they must take the stress test seriously enough that the probability of a failure is sufficiently low, but there is little incentive for them to do any more. If the stress test process is improved to the point where managers can rely on the models to make money, the innovation floodgates will open and banks will be motivated to invest in research that will give them an edge over their competitors, on both the upside and downside.

Vendors, meanwhile, invest in innovation with the hope of realizing a financial return. One way a vendor can be rewarded for innovation is by becoming the standard source for a particular analytical tool, such as a credit score, an asset price forecast, a probability of default, or a rating. It can then charge a premium over upstart market entrants. Once a vendor has been established as the source and the tool is being used productively for business decisions, it will be in the vendor's best interest to ensure the quality of the analysis and carefully maintain the infrastructure used to produce the information. If the provider is motivated by something other than profit, continuity of service may be elusive. Consequently, market participants may be reluctant to invest in adopting tools that are not produced by for-profit entities.

A recurring footnote in the Federal Reserve's Dodd-Frank Act Stress Test (DFAST) results provides a sample of approximately 25 vendors whose analytics the Federal Reserve used to conduct its DFAST analysis.1 Of these, only three are not-for-profit organizations; four are financial institutions. The rest are for-profit vendor companies that rely on a combination of financing from investors and revenue from the sale of analytics to fund both their operating costs and whatever research and development they conduct.

Although many of these firms are likely to do some bespoke consulting work, we find it interesting that well-known management consulting firms (such as the Big Four) are not on the list. These firms help banks both build internal models using bank data and validate their use of vendor models. As such, they may be reluctant to sell analytics as it would conflict with their core business.

Sharing of analytical breakthroughs among banks

If a bank develops a promising new technique that it uses to beat the market, it will likely be reluctant to sell such analytical tools to similar institutions; for competitive reasons, other financial institutions may also be reluctant to buy them. If a vendor achieves a breakthrough, however, it will expect to be well compensated for its success, and that compensation will come precisely through the propagation of its innovation throughout the industry.

While vendors will naturally seek to protect their intellectual property, the propagation of soft knowledge through the industry will likely be greater than if the technology were locked in a specific bank's intellectual vault. While all scientific progress is welcome, information externalities are arguably greater if a vendor, as opposed to a bank, is responsible for the breakthrough.

Economies of scale and scope in analytics

Home-grown analytics, those produced within a bank, have the advantage that bank managers and executives retain complete control. Vendor analytics, in contrast, often reflect the experiences of many market participants, offer more features and documentation, and are less expensive to implement.

Because the incremental cost of making analytics available to an additional client declines as the number of clients using the analytics grows, it is generally efficient for one party to produce them and then share them with multiple parties. Our interpretation of what regulators have written about the use of vendor models is that they expect financial institutions to take ownership of whatever analytics and data they use, but this does not imply that institutions must in all cases build their own analytics with their own data.

WHEN DOES DATA BECOME A MODEL?
The line between data and model is vague. For example, when bonds are sold, the invoice is based on the "dirty price," but the vendor is likely to present bond price data in a different format. The vendor may provide the "clean price," the yield, the spread over a reference curve, and finally the option-adjusted spread. These transformations of raw data increase in complexity; an option-adjusted spread clearly involves the use of a model that makes a number of important assumptions that could be qualitatively challenged. There are many other examples: Quarterly GDP growth at the state level is estimated; dealer quotes for CDS spreads are based on the pricing models of the dealers; the exact calculation of the VIX involves fairly complex statistical manipulation; and macroeconomic variables are often seasonally adjusted with an algorithm that is, in reality, a time series econometric model.
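To make the continuum concrete, here is a minimal sketch, assuming an invented semiannual bond and a simplified 30/360-style accrual convention, that walks a single quote from raw datum to model output. Each step, from accrued interest through yield to spread, layers additional assumptions onto the invoice price.

```python
# A minimal sketch of the data-to-model continuum for a bond quote (hypothetical
# numbers, simplified conventions). Each step layers more assumptions onto the
# raw transaction price.

def accrued_interest(coupon_rate, face, days_since_coupon, days_in_period):
    """Simple 30/360-style accrual -- already a convention, i.e., a mild 'model'."""
    return face * coupon_rate / 2 * days_since_coupon / days_in_period

def price_from_yield(y, coupon_rate, face, periods):
    """Clean price of a semiannual bond given a per-period yield."""
    c = face * coupon_rate / 2
    return sum(c / (1 + y) ** t for t in range(1, periods + 1)) + face / (1 + y) ** periods

def yield_from_price(clean, coupon_rate, face, periods, lo=0.0, hi=1.0, tol=1e-8):
    """Invert the pricing formula by bisection -- now we are clearly running a model."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if price_from_yield(mid, coupon_rate, face, periods) > clean:
            lo = mid  # model price above the quote: yield guess is too low
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

dirty = 101.75                                     # raw datum: what the invoice says
ai = accrued_interest(0.05, 100, 60, 180)          # convention-dependent adjustment
clean = dirty - ai                                 # light transformation of the datum
ytm = 2 * yield_from_price(clean, 0.05, 100, 10)   # annualized; clearly model-dependent
spread = ytm - 0.042                               # vs. an assumed 4.2% reference yield
print(f"clean={clean:.3f}  ytm={ytm:.4%}  spread={spread:.4%}")
```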

In the analytics business, producing new products and supporting existing products is work, but the work product is scalable across many users. New products frequently necessitate that a firm invest in collecting data and developing analytics for several years prior to the sale of the product to the first client. There are no guarantees that the new product will be successful.

Supporting data and models requires documentation, validation, and periodic model updates. Users also require guidance from vendors on the use of the model. Much of this effort is reusable: The needs of one client will overlap heavily with the needs of other clients. Nevertheless, because every financial institution is different, there will always be a customization aspect to the provided support. Consequently, the marginal costs of providing analytics will decline as the number of users grows, but the marginal costs of providing support services always remain positive.

Having a set of firms producing data analytics for many banks is more efficient than every bank attempting to replicate all of these products on its own. Further, the more heavily a product is used, the higher its quality. Suppose there is an issue with a particular model. If 10 banks are using the product, the issue is likely to be discovered sooner than if the client base consists of only one institution. If one bank discovers an issue that affects nine others, beneficial externalities accrue to all banks as a result of the actions of the observant institution. Such externalities are not present in a system that relies only on internal modeling.
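The arithmetic behind this externality is straightforward. The toy calculation below, with assumed detection probabilities and independence across users, shows how quickly the chance of catching a given flaw rises with the size of the user base.

```python
# Toy illustration of the externality argument (assumed numbers): if each user
# independently has a 5% chance per quarter of spotting a given model flaw, the
# probability the flaw is caught within a year rises sharply with the user base.

def prob_flaw_found(p_per_user_per_quarter, n_users, quarters=4):
    """P(at least one user finds the flaw), assuming independent detection."""
    return 1 - (1 - p_per_user_per_quarter) ** (n_users * quarters)

for n in (1, 10):
    print(f"{n:>2} user(s): {prob_flaw_found(0.05, n):.1%} chance the flaw surfaces within a year")
# roughly 18.5% with a single user versus about 87% with ten users
```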

One issue often mentioned in the context of scalable analytics is that they can foster potentially dangerous concentration risks. Suppose a particular model becomes an industry standard, to the point where all banks must use the model's predictions to be viewed as competitive by financial markets. If the model has a structural flaw that causes it to under-predict losses in the industry, it could conceivably destabilize every institution using the model instead of just one.

Assume that the vendor model under consideration produces accurate insight into the riskiness of a portfolio that simply cannot be gleaned from any other source. We are not saying that the vendor model produces a complete picture of risk, just that it shades in a particular color in a way that cannot otherwise be captured by risk managers. Forcing banks to exclude such a vendor model will result in an incorrect rendering of the risk picture. Thus, systemic risk could decline if all banks adopted the vendor model, as only the model's users would know that the fig leaf is, in reality, poison ivy.

Analytical concentration risk therefore depends critically on what exactly the concentration is. How the information is used by banks is also critical. If the vendor model contains unique, accurate, and pertinent information, it is not necessarily a bad thing if all banks adopt the model. If the model is flawed, a feature common to every model ever built, the onus shifts to the bank’s risk managers to ensure that the information is correctly harnessed in assessing portfolio risk. We would never advocate blind acceptance of one of our models or, indeed, of any model built by any mortal. This classification certainly extends to any and all of our current and future competitors. Downside model concentration risks tend to be realized only when banks confuse a model’s predictions with gospel truth and take actions based on that “truth.”

The best defense against this risk is the concept of "effective challenge,"2 a regulatory expectation for all models that have a material impact on business decisions. Supervisory guidance defines an effective challenge as a "critical analysis by informed parties that can identify model limitations and assumptions and produce appropriate changes." For challengers to be effective, they must be independent of the model builders, have the appropriate degree of expertise, and have enough influence that their challenges will be appropriately addressed. A well-built third-party model can certainly play this role in the validation process.

External analytics as mitigants to agency issues

Analytics can mitigate problems that result from misaligned incentives: principal-agent conflicts, asymmetric information, and moral hazard. Analytics used for this purpose should be valid, unbiased, and based on objective and verifiable inputs. An analytic produced by a third party is more likely to satisfy these criteria than one produced by a financial institution or a regulator.

To give one example, after graduate school, one of the authors paid a significant commission to a real estate agent to help him lease a rent-stabilized apartment in New York City. The agent used his credit score to verify that he was a person likely to fulfill his financial obligations to potential property owners.

This situation is a very common one; it is instructive to consider the motivations of the parties involved and why the analytical second opinion was sought from a third party. The lessee felt he would be a good tenant but had no way of quickly making his case. The realtor’s position was more tenuous in the sense that he knew little of the potential tenant, but wanted to make the commission and move on to the next deal. The owner of the property, meanwhile, could have made time to interview the potential lessee, check references, and verify income, though this would have provided uncertain signals and would have been relatively expensive and time-consuming to procure.

Though the credit score does not measure tenant soundness per se – it gives no indication of tidiness or proclivity for playing loud music – it is cheap to procure, has no horse in the race, and is sound enough to provide a useful signal to all relevant parties to the transaction. In this case, a third-party model mitigated the principal-agent problem while also helping to overcome significant informational asymmetries the parties to the transaction faced.

Financial institutions use models in very similar ways. For example, credit risk buyers will often ask sellers to use a specific vendor model to indicate the likely future performance of the portfolio. In this case, the third-party model partially mitigates the issue of asymmetric information, in that sellers know more than buyers about the underwriting conditions applied in originating the loan. This situation arises with considerable frequency. In the mortgage industry, banks will often corroborate home appraisals using AVMs – automated valuation models – that are owned and operated by third parties. In auto leasing, residual prices will be set using analytical forecasting tools that are not owned by any of the parties to the transaction. Credit ratings from reputable companies will often be required before institutional investors take positions in certain risky assets. In all of these cases, there are sound reasons why analytics simply must be undertaken by external entities.


But back to stress testing. As we mentioned, regulators want banks to stitch stress testing tools into a business' day-to-day operations. This can be achieved either by incorporating models that are already in place for day-to-day operations into the Comprehensive Capital Analysis and Review (CCAR) stress testing or by using CCAR stress testing models for day-to-day operations, either in addition to or in place of existing practices. In stress testing models, a portfolio's initial risk level is a key determinant of expected losses. Using internal models to determine initial risk levels will benefit banks with more aggressive models. Such a policy could lead to a moral hazard problem, because banks with more aggressive models would have an incentive to make more aggressive loans. An industry model that can be applied to all banks can serve as a check on the banks with more aggressive models. This type of industry model could be developed by regulators – provided that they have the required data. Still, even if the regulators' data is comparable to that of a third party, the third-party model may be more credible should banks push back on the regulator for being too conservative.
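A stylized numerical sketch, with invented exposures and stressed PDs, illustrates the point: the same portfolio produces materially different stressed expected losses under an aggressive internal model versus an industry benchmark, and that gap is exactly what an industry model can expose.

```python
# Hypothetical comparison (all figures invented) of how model choice affects projected
# stress losses on the same portfolio. Expected loss uses the standard identity
# EL = PD x LGD x EAD; the "aggressive" internal model assigns lower stressed PDs
# than an industry benchmark model estimated on pooled data.

portfolio = [
    # (EAD in dollars, LGD, internal stressed PD, industry stressed PD)
    (50_000_000, 0.40, 0.020, 0.035),
    (30_000_000, 0.45, 0.015, 0.030),
    (20_000_000, 0.35, 0.050, 0.060),
]

def expected_loss(book, use_industry_pd=False):
    """Sum EL across the book under either the internal or the industry PD estimates."""
    total = 0.0
    for ead, lgd, pd_internal, pd_industry in book:
        pd = pd_industry if use_industry_pd else pd_internal
        total += ead * lgd * pd
    return total

internal_el = expected_loss(portfolio)
benchmark_el = expected_loss(portfolio, use_industry_pd=True)
print(f"internal model stressed EL:  ${internal_el:,.0f}")
print(f"industry model stressed EL:  ${benchmark_el:,.0f}")
print(f"benchmark uplift:            {benchmark_el / internal_el - 1:.0%}")
```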

In addition to economies of scale and scope, third-party analytics – because they are produced by third parties – can mitigate incentive issues associated with the principal-agent problem, asymmetric information, and moral hazard. In each context, the two parties that are sharing risk can agree to use the third-party analytics to make risk more transparent.

Conclusion

Seven years have now passed since the financial crisis triggered the Great Recession. The first stress test (the 2009 Supervisory Capital Assessment Program) did help restore the market's confidence in the US banking system, and there is little doubt that the US banking system is now better capitalized than it was in August 2008. But it is equally true that the new regulatory environment has yet to be tested by a new banking crisis or, for that matter, a recession of any flavor. Before making a proper assessment of how robust the new system actually is, we would want to see it perform under real stress.

We also would like to see banks use their stress testing infrastructures for their day-to-day business decisions. These infrastructures can be used for tactical decisions – e.g., how to manage a specific exposure – as well as strategic decisions – e.g., whether to expand/contract exposure to an industry, region, or asset class. In either case, the infrastructure would presumably enhance shareholder value.

If banks choose the analytics with the most attractive balance of costs and benefits, they will happily invest shareholder funds in stress testing models, in the knowledge that doing so will increase share prices. If the decision of which analytic to use is constrained, however, a bank is likely to use the analytic only to ensure that it meets regulatory requirements; the bank is unlikely to use the analytic to make business decisions.

For model risk management, the concept of an effective challenge plays a key role. For an effective challenge to be credible, a bank should look to all possible sources of information and knowledge. For a bank to look only internally for answers to these critical questions is anathema to the goals of regulatory stress tests.

Sources

1 Cf. footnote 43 of the 2014 DFAST results.

2 See OCC 2011-12.
