
Banks face the difficult task of building hundreds of forecasting models that disentangle macroeconomic effects from bank-specific decisions. That is impossible when modelers rely solely on internal performance data and the standard set of macroeconomic variables released as part of the CCAR exercise. We propose an alternative approach based on consistently reported industry data that simplifies the modeler’s task and at the same time increases forecast accuracy. Our approach is also useful for strategic planning as it allows one bank to compare its balance sheet and income statement to its peers and the industry and to explore potential mergers and acquisitions.


Responding to the dictates of the Dodd-Frank Act is, perhaps by regulatory design, a highly complex task for large financial institutions. In principle, banks must seek to carefully model every potential cash flow that may stem from the operation of their businesses. These models cover not only credit losses for all asset categories at a granular level – for many banks down to a loan level – but also asset and liability balances, loan origination volumes, deposits, interest and non-interest revenues and expenses, costs of staff and premises, and ultimately the exact future capital position of the institution.

In this article, we propose an alternative simple and coherent methodology that allows us to forecast and stress test the entire balance sheet and profit and loss statement for all of the roughly 6,000 banks in the US in a consistent manner. The output is presented as a bank-level panel database containing forecasts and stress scenarios for (potentially) every item covered by public call report data. We can currently project about 200 individual line items from the call report, with the potential to extend our methodology to more than 1,000 items.

Addressing Stress Test Limitations

Stress tests developed within banks have primarily utilized complex bottom-up modeling techniques. Analysts are tasked with building a model of a specific, narrowly defined cash flow or credit loss measure for the institution. They source data relevant to the line item, primarily from inside the bank, and then build a model that relates the collected data to macroeconomic variables. Once 1,000 modelers have built 1,000 models of 1,000 different variables, the series are projected and then combined to calculate the capital position of the bank under each scenario.

The complexity of this task has major implications for the banking system. First, even those institutions with the most keenly developed stress testing infrastructure cannot run an ad hoc stress test quickly and accurately. For example, suppose that a large, unexpected event – like the UK’s vote to exit the European Union – occurs one weekend and the chief risk officer wants to determine its effect on the bank’s future capital position. At present, it may take weeks or months for the manager to get the answer, by which time the next crisis, and the one after that, will have already come and gone. Ideally, bank executives should be able to conceive of a stress scenario during a morning coffee break, mull over detailed stress projections during a quick working lunch, and devise an appropriate strategy to deal with the potential threat by the close of business. One wonders whether a stress test that takes months to perform can ever have any meaningful strategic or tactical relevance to a bank of any size.

Another problem with the stress testing protocols, as currently implemented, is that banks often cannot compare their projected performance with that of their peers. With each bank building its own idiosyncratic, bottom-up model primarily based on internally sourced data, one bank’s model outcomes may not easily compare with another’s. This holds true even if the underlying portfolios face identical levels of risk. Banks can use an industry-wide model to calculate, say, default probabilities for a specific portfolio, but this will not account for changes in the mix of loans held by the bank or its rival institutions. In contrast, our approach is based on call report data, providing a consistent basis on which to compare banks across the size spectrum.


The fact that we can apply our method consistently for all US banks opens up a plethora of intriguing analytical options. For a specific bank, we can provide a coherent external projection of the complete financial position under baseline or stress circumstances. This can be used as a champion, challenger, or benchmark stress testing formulation to be compared with internal stress testing engines. Scenarios can be deployed within this framework in minutes, bringing the tactical stress testing protocol to a point that is well within reach.

Adding to the strategic possibilities, executives can lay their own bank’s stress position alongside that of their competitors or potential collaborators. A bank considering an acquisition can fold the target’s data into its own legacy data and make projections for the hypothetical merged bank. Banks can gain key insight into which of their competitors are more or less recession-prone than themselves, and can then potentially improve their recession resilience through acquisition. Additionally, a bank can determine whether its own internal managers are outperforming their peers in similar roles at competitor banks or whether they are merely riding industry waves.

Another intriguing element of this work is the breadth of banks that the analysis covers. We began this research to develop benchmarking options for the largest banks. We were pleasantly surprised to find that our methodology worked as effectively for small banks, even those with less than $1 billion in assets, as it did for Comprehensive Capital Analysis and Review (CCAR) giants. For banks in the Dodd-Frank Act Stress Test (DFAST) range – with $10 billion to $50 billion in assets – where available data often fail to deliver valid models, and where subjectivity plays an outsized and unwanted role in stress testing, our approach can be used to provide scientific rigor. For smaller community banks, our approach opens to them the stress testing floodgates. Insofar as larger banks are obtaining competitive benefit from stress testing, small banks will now be able to enjoy similar benefits.

For large banks, the methodology provides a useful, consistent benchmark for a variety of pre-provision net revenue calculations. Large bank executives will also be able to run quick stress tests for both themselves and their competitors, individually or jointly, and they can perform the analysis on potential merger and acquisition (M&A) targets. Mid-size banks may find the methodology suitable as a champion model. These banks have much smaller armies of modelers, and building models using only in-house data is often not practical. Many of the smallest banks do not have sufficient data for modeling, let alone any modelers to make use of those data, so they may benefit by having a source of quantitative, unbiased forecasts that can be compared to their competitors. Banks of all sizes can use our data for peer-group and market analysis.

Later in this paper we study a peer group of small banks to demonstrate our methodology. Specifically, we consider four banks that are all active in the central area of Texas: Extraco, First National Bank Texas, Central National Bank, and First National Bank of Central Texas. Assets for this group total about $8 billion.


When developing a model for strategically analyzing bank portfolios, the ability to distinguish between internal and external forces is critical. For example, suppose that during the housing boom of 2005 and 2006, your bank was making a concerted effort to increase its market share in the prime credit card sector. In analyzing portfolio originations and volumes, regressing observed volume on a range of economic variables will uncover a clear procyclicality; when the economy improves, loan origination growth tends to accelerate. The analyst must then try to identify whether it was the improving economy or the bank’s aggressive marketing activity that was primarily responsible for the outcome. If the marketing strategy was effective, it would tend to falsely magnify the perceived effect of the business cycle on portfolio growth.

A model that does not explicitly account for internal actions cannot accurately forecast what will happen in a renewed stress scenario unless management is assumed to be inert and inflexible. For the models to have strategic applications, they must be capable of simulating a variety of management actions and the manner in which they interact with the external environment the bank faces.

Suppose we have two banks in separate universes, Good Bank and Bad Bank, both of which are subject to DFAST. They have similar overall risk profiles and both have made large loans to a hypothetical HWC Corporation, a maker of widgets. In 2008, a recession kicked off and HWC was in big trouble – there was a speculative boom in widgets, the bursting of which caused the recession, and HWC had massively over-invested in its Albuquerque operations. In both universes, the distribution of manager talent is the same, and industry commercial and industrial (C&I) losses in both realms rose to 6% as a result of the recession.

Both sets of bank managers tried various treatments to keep HWC afloat. The problem for Bad Bank’s shareholders was that their managers were poorly skilled and, as a result, HWC failed; the bank therefore suffered deep losses. Good Bank’s people, consummate professionals, offered HWC a timely refinancing package that staved off disaster for the company and for the bank. The recession was still tough on Good Bank’s bottom line, whose C&I losses rose from 2% to 4%. Bad Bank also survived the subprime widget recession, albeit just barely. Its C&I losses soared from 2% to a whopping 12%.

While the distribution of management talent is the same in both universes, Bad Bank just happened to hire an inordinate proportion of bad managers before the last recession. The good managers were hired elsewhere.

After the recession, Bad Bank methodically fired its entire management team and rebranded itself as Satisfactory Bank.

Now DFAST rolls around again. Our friends at the rebranded SatBank are trying to build C&I models for use in the regulatory exercise. If they build a model of the internal data alone and seek to project under the severely adverse scenario, an event similar to the global widget crisis of 2008, they will project a 12% loss rate. The new CEO of SatBank is dissatisfied with this result, since she is certain that the new management team will do a better job than last time. Even if they hired a group of managers of average quality out of the available pool, they should at least be able to match the 6% result observed for the industry during the crisis.

Many of Good Bank’s managers, meanwhile, have cashed in their options and are busy swinging in hammocks in warm places. The bank has restaffed from the same talent pool as SatBank. Can we not infer, therefore, that the two banks will now experience similar outcomes during a future severe recession?

It is possible to believe that Good Bank and SatBank will enjoy or endure similar results to those they experienced during the last recession. It would be more accurate, however, to assume that both banks will regress to the mean and behave more like the average bank going forward.

A conservative position, meanwhile, would involve assuming that both banks will err in their staffing choices. A tough but reasonable regulator may be justified in forcing Good Bank to capitalize to SatBank’s numbers during its capital adequacy assessment. Moreover, even though Good Bank weathered the previous recession relatively well, it is not exempt from the need to benchmark to consistent external data. If the bank internalizes the view that it is recession-proof, that other banks’ data do not pertain to it because it is above the fray, it is hard to see how the stress testing imperative has made the bank any safer.

How We Do It

Given the large number of banks in the US, we assume perfect competition, so that actions taken by managers at an individual bank do not affect the trajectory of industry-level aggregates. Decisions made by the largest CCAR banks might in fact move aggregate volumes, but for our purposes the assumption is both reasonable and powerful. It allows us to model industry-level aggregate outcomes for each line item on the call report without worrying about the effect of any specific action taken by a manager at an individual bank. As Figure 1 shows, while the number of independent banks has been in steady decline, roughly 6,000 remain today, ensuring a high level of competition.

As a consequence of our assumption, we can model the behavior of the industry against cyclical economic variables and thus isolate the pure effect of the macroeconomy on the series of interest. This basic principle applies, to a greater or lesser extent, to all the series in the call report. Our assumption of perfect competition vastly simplifies our analysis by allowing us to concentrate on pure macroeconomic factors and thus isolate the internal factors for analysis conducted separately.

Figure 1. The number of banks in the US is in secular decline
The number of banks in the US is in secular decline
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

We model the aggregate series using standard macroeconometric techniques that are familiar to most readers. We also ensure that relevant identities hold. For example, our forecasts for total deposits reflect the sum of checking, savings, time, and other forms of deposits. We enforce consistency between the aggregate income statement and balance sheet by making interest income and expenses functions of assets and liabilities, respectively.
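To make the identity-enforcement idea concrete, the sketch below forecasts four hypothetical deposit components independently and then defines total deposits as their sum, so the accounting identity holds by construction. The component names and figures are invented for illustration, not drawn from actual call report data, and the naive drift forecast stands in for our macroeconometric models.

```python
import numpy as np

# Hypothetical quarterly deposit components ($B) -- illustrative numbers only.
rng = np.random.default_rng(0)
components = {
    "checking": 100 + rng.normal(0, 1.0, 8).cumsum(),
    "savings":  250 + rng.normal(0, 2.0, 8).cumsum(),
    "time":     180 + rng.normal(0, 1.5, 8).cumsum(),
    "other":     40 + rng.normal(0, 0.5, 8).cumsum(),
}

def drift_forecast(series, horizon=4):
    """Naive drift forecast: extend the series at its average historical slope."""
    drift = (series[-1] - series[0]) / (len(series) - 1)
    return series[-1] + drift * np.arange(1, horizon + 1)

# Forecast each component separately, then define the total as the sum,
# so "total deposits = checking + savings + time + other" holds at every quarter.
component_forecasts = {k: drift_forecast(v) for k, v in components.items()}
total_forecast = sum(component_forecasts.values())
```

In our framework the same principle ties the income statement to the balance sheet: interest income and expense forecasts are driven by the corresponding asset and liability forecasts rather than modeled in isolation.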

Before we can model an individual bank, we must adjust each existing bank’s data for past M&A activity. For example, when Wells Fargo merged with Wachovia in 2009, the entity labeled “Wells Fargo” in the call reports almost doubled in scale from 2009Q4 to 2010Q1. We construct hypothetical data for Wells Fargo by combining Wells Fargo data with historical data for Wachovia and all other entities that have come under the Wells Fargo moniker over the years. Thus, our data for Wells Fargo represent what it would look like had it always owned Wachovia and the other banks it has acquired. Doing this allows us to control for what are perhaps the most obvious structural breaks in the data, those specifically due to M&A activity. Details of this approach are available on request.
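A minimal sketch of the merger-adjustment idea follows, using invented total-asset histories for a hypothetical acquirer and target. The target’s pre-merger history is added to the acquirer’s, producing a pro-forma series for a combined entity that behaves as if it had always existed; the artificial jump at the merger date largely disappears.

```python
import numpy as np

# Hypothetical total-asset histories ($B); the merger occurs at quarter index 5.
# Names, dates, and figures are illustrative, not actual call report data.
acquirer = np.array([500., 510., 515., 520., 530., 1030., 1040., 1050.])
target   = np.array([490., 495., 500., 505., 495.])  # reported until the merger
merger_quarter = 5

# Pro-forma adjustment: add the target's history to the acquirer's
# for all quarters before the merger.
adjusted = acquirer.copy()
adjusted[:merger_quarter] += target

# The raw series jumps sharply at the merger; the adjusted series does not.
raw_jump = acquirer[merger_quarter] - acquirer[merger_quarter - 1]
adj_jump = adjusted[merger_quarter] - adjusted[merger_quarter - 1]
```

The actual procedure applies this recursively across every entity that has ever come under the surviving bank’s umbrella, using the Chicago Fed’s M&A database to identify the merger dates.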

Even after we adjust the bank-level data for M&A activity, some structural breaks will inevitably remain. Wells Fargo’s managers clearly had different strategic goals than the Wachovia managers they replaced. If, say, Wachovia had a defensive manager in a specific area who was replaced by a more aggressive post-merger overseer, we would see changes in the dynamic behavior of the relevant series after the merger. We use the database of bank M&A activity maintained by the Federal Reserve Bank of Chicago, which does not include acquisitions of non-bank entities. If a bank acquires assets from an insurance company, for example, our data will still show a spike in the bank’s total assets and other series because our adjustment algorithm does not account for that activity. Nevertheless, the adjustment handles the vast majority of M&A-related discontinuities seen in the data.

With the merger-adjusted bank data in hand, we use the industry-level data and forecasts to produce our bank-specific forecasts. When the variable being modeled can take on negative values, as is usually the case with income and expense variables, we use a beta-model approach in which we model the bank-level variable as a function of the industry-level variable and macroeconomic factors. When the variable being modeled can only take on positive values, as is usually true of assets and liabilities, we use a share-based approach in which we model the ratio of the bank-level variable to the industry-level variable as a function of macroeconomic factors.
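The two specifications can be sketched as simple OLS regressions on simulated data. The factor series, coefficients, and noise levels below are invented stand-ins for our estimated principal components and parameters; only the functional forms mirror the text.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 40                                  # quarters of hypothetical history
factors = rng.normal(size=(T, 3))       # stand-ins for the three macro factors
industry = 100 + np.arange(T) + rng.normal(0, 1, T)  # industry-level aggregate

# Hypothetical bank series: an income-type variable (can be negative) and
# an asset-type variable (strictly positive, a slice of the industry total).
bank_income = (0.5 * industry + factors @ np.array([2.0, -1.0, 0.5])
               + rng.normal(0, 1, T))
bank_assets = industry * (0.02 + 0.001 * factors[:, 0]
                          + rng.normal(0, 0.0005, T))

# Beta model: regress the bank variable on the industry variable plus factors.
Xb = np.column_stack([np.ones(T), industry, factors])
beta_coefs, *_ = np.linalg.lstsq(Xb, bank_income, rcond=None)

# Share model: regress the bank's market share on the macro factors only.
share = bank_assets / industry
Xs = np.column_stack([np.ones(T), factors])
share_coefs, *_ = np.linalg.lstsq(Xs, share, rcond=None)
```

Multiplying the projected share by the industry-level forecast then recovers the bank-level forecast under any scenario.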

Rather than attempt to find a small set of macroeconomic variables that can explain each of the 200 bank series we currently forecast, we use principal components analysis (PCA) to identify a few uncorrelated latent variables that account for the bulk of macroeconomic fluctuations. That is, we start with more than 100 macroeconomic and interest-rate variables and extract their principal components. The first three components account for more than 85% of the total variance of all the macroeconomic variables. For small banks located within a particular state, we often replace the third component with one derived from state-specific macroeconomic variables.
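A stylized version of this step is shown below, with a simulated macro panel standing in for the actual 100-plus series: each series is standardized and the components are extracted via a singular value decomposition. The panel is built from three latent factors by construction, so the first three components capture most of the variance, as they do in our real data.

```python
import numpy as np

# Simulated stand-in for the macro panel: T quarters of K series
# driven by 3 latent factors plus idiosyncratic noise.
rng = np.random.default_rng(2)
T, K = 120, 100
latent = rng.normal(size=(T, 3))
loadings = rng.normal(size=(3, K))
panel = latent @ loadings + 0.3 * rng.normal(size=(T, K))

# Standardize each series, then extract principal components via SVD.
z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
U, s, Vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / (s**2).sum()

# Scores of the first three components, usable as regressors downstream.
pcs = U[:, :3] * s[:3]
share_of_variance = explained[:3].sum()
```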

These principal components have intuitive interpretations. See Figure 2. The first principal component, PCA 1, closely follows long-term interest rates. Interest rates have been in secular decline since the early 1990s, but we have nonetheless experienced periods of rising rates. The second principal component reflects the cyclical behavior of the economy, falling during the recessions of 2001-2002 and 2008-2009 and bouncing back along with the economy. The third principal component is a lagging indicator of the economy; it peaked in 2004 and 2010, several quarters after the economy bottomed.

Figure 2. Three principal components of the economy
Three principal components of the economy
Source: Moody's Analytics

Whether we use the beta model or the share model to forecast an individual bank based on industry data, we use these same three principal components to control for economic conditions. We try various combinations of lags of these three variables and pick the model with the best forecast accuracy. Together with the scenario-specific forecasts of the principal components and the industry-level bank data, the resulting model produces the forecast for the individual bank.
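The lag-selection step can be sketched as follows, on simulated data in which a hypothetical bank series responds to a single factor with a two-quarter lag. Each candidate lag is fit by OLS on a training window and scored by out-of-sample RMSE on the holdout quarters; the specification with the lowest error wins.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 80
factor = rng.normal(size=T)

# Hypothetical bank series that responds to the factor with a two-quarter lag.
y = np.zeros(T)
y[2:] = 0.8 * factor[:-2]
y += 0.1 * rng.normal(size=T)

train_end = 60          # quarters used for estimation; the rest are holdout
results = {}
for lag in range(5):
    x = factor[: T - lag]          # lagged regressor, aligned with...
    yy = y[lag:]                   # ...the contemporaneous bank series
    X = np.column_stack([np.ones(len(x)), x])
    coefs, *_ = np.linalg.lstsq(X[: train_end - lag],
                                yy[: train_end - lag], rcond=None)
    pred = X[train_end - lag:] @ coefs
    results[lag] = np.sqrt(np.mean((yy[train_end - lag:] - pred) ** 2))

best_lag = min(results, key=results.get)
```

In practice we search over lag combinations of all three components rather than a single factor, but the selection criterion is the same.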

The cyclical component of market share can be viewed as a measure of risk appetite. Suppose a particular bank has a rising market share in C&I loans during a period of strong growth for the C&I sector. Now compare this institution to one whose share of C&I volume rises during recessions. The bank that gains share during a recession is likely the more conservative bank. Conservative banks will lose market share to more aggressive competitors during upturns and then recover the lost share when competitors falter during tough times. In general, a bank with a high beta has a higher risk appetite. Approaching the problem in this way, we can measure the appetite for risk for all banks line-by-line across the entire call report.
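A minimal illustration of the risk-appetite measure, using simulated growth rates for two hypothetical banks: the cyclical beta is simply the OLS slope of bank growth on industry growth, and a high beta flags the more aggressive institution. The growth rates and bank profiles are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 60
industry_growth = rng.normal(0.01, 0.02, T)   # hypothetical C&I growth rates

# Two hypothetical banks: one amplifies the industry cycle, one dampens it.
aggressive   = 1.8 * industry_growth + rng.normal(0, 0.01, T)
conservative = 0.4 * industry_growth + rng.normal(0, 0.01, T)

def cycle_beta(bank, industry):
    """Slope of bank growth on industry growth: a simple risk-appetite gauge."""
    X = np.column_stack([np.ones(len(industry)), industry])
    coefs, *_ = np.linalg.lstsq(X, bank, rcond=None)
    return coefs[1]

beta_aggressive = cycle_beta(aggressive, industry_growth)
beta_conservative = cycle_beta(conservative, industry_growth)
```

Computed line item by line item across the call report, these betas yield a full risk-appetite profile for every bank.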

Regional economic conditions are most prominent among small banks. Figure 3 shows the total assets for a peer group of four banks located in central Texas as well as for the entire industry, and Figure 4 shows the peer group’s assets as a share of industry assets. During the Great Recession, this peer group was able to increase its share from 0.038% to 0.047% during a nine-quarter period from 2008 to 2010. This increase was mainly due to banks in other parts of the country posting large declines in asset values. Including a component that controls for regional conditions helps us capture these effects. Interestingly, though, our four banks have been able to maintain (and even further increase) the share they achieved during the recession.

Figure 3. Peer group and industry assets
Peer group and industry assets
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics
Figure 4. Peer group assets as a share of industry
Peer group assets as a share of industry
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

Changes in market share not attributable to national and regional economic factors are idiosyncratic in nature. An increase in market share can be achieved by taking greater risk, but it can also be achieved through more effective management. In either case, if the increase in market share cannot be attributed to the business cycle or regional variations, it can instead be chalked up to the good fortune and effectiveness of the bank’s managers.

Figure 5 shows the market share for our four Texas banks, where we also included several smaller banks when defining the size of the market. Extraco has clearly been losing market share over the past two decades. While Central’s share has been steady, First National Bank Texas has seen its peer-group share rise consistently to the point where it has recently surpassed Extraco as the group leader in total assets. Having no data about the internal actions taken by any of the banks, we can nonetheless conclude that First National Bank Texas has been very effective in grabbing market share from its competitors. Extraco may well be pursuing a margin growth strategy and may be highly profitable for its shareholders, though there is no doubt that it is shrinking in scale relative to its fast-growing peer.

Figure 5. First National Bank Texas gaining assets at Extraco’s expense
First National Bank Texas gaining assets at Extraco’s expense
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

We have several options when forecasting market shares under baseline and stress scenarios. The simplest alternative is to assume a constant market share, either at its last historical value or perhaps its mean over a longer recent period. Even with this approach, our forecasts of the underlying bank-level variable of interest will still show different forecast trajectories because our industry-level forecasts do. A second alternative is to use an autoregressive integrated moving average (ARIMA) model to forecast market shares. This extends the flat-line approach by using recent market share momentum to help forecast the share going forward.
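The first two options can be sketched as follows on a simulated share history; for brevity the ARIMA model is replaced by an AR(1) fitted by OLS, which already captures the momentum idea. The share dynamics below are invented for illustration.

```python
import numpy as np

# Simulated market-share history with mild mean-reverting dynamics.
rng = np.random.default_rng(5)
T = 60
share = np.empty(T)
share[0] = 0.040
for t in range(1, T):
    share[t] = 0.004 + 0.9 * share[t - 1] + rng.normal(0, 0.0005)

# Option 1: flat-line forecast at the last observed share.
flat_forecast = np.full(8, share[-1])

# Option 2: fit an AR(1) by OLS (a minimal stand-in for a full ARIMA model)
# and iterate it forward, so recent momentum shapes the forecast path.
X = np.column_stack([np.ones(T - 1), share[:-1]])
(c, phi), *_ = np.linalg.lstsq(X, share[1:], rcond=None)
ar_forecast = np.empty(8)
prev = share[-1]
for h in range(8):
    prev = c + phi * prev
    ar_forecast[h] = prev
```

Under either option, the bank-level forecast is the projected share multiplied by the scenario-specific industry forecast, so even a flat share produces scenario-dependent bank trajectories.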

The third approach, as discussed previously, includes using PCA so that market share forecasts are conditional on the economic environment. When we fit these market share models (and beta models) using principal components as regressors, we do not place any restrictions on the signs of the corresponding parameters. Forming prior views of how these principal components affect a bank’s market share is difficult. Is Extraco’s (or Wells Fargo’s) market share of commercial real estate loan origination pro- or counter-cyclical? In practice, that answer depends on the bank’s strategic plan and tolerance for risk. We allow the data to speak for themselves in determining the dynamics of market share under different scenarios.

Figure 6 shows our forecast peer-group market shares of net loans and leases for First National Bank Texas and Extraco for the CCAR baseline and severely adverse scenarios. The behavior of Extraco’s market share in the severely adverse scenario is interesting. It initially rises but then falls slightly. After 2017, its market share levels off, offering somewhat of a respite from the declines it has experienced for most of the past 20 years. In contrast, First National’s market share growth comes to a halt before resuming in 2019 under the severely adverse scenario. Figures 7 and 8 show our forecasts for net loans and leases under all three regulatory CCAR scenarios. Compare Figure 6 to Figure 8. In the baseline scenario, Extraco’s market share continues to wane. Nevertheless, the industry grows enough so that Extraco’s shrinking slice of the pie still translates to a growing loan and lease portfolio.

In summary, for several line items in the call report, we have demonstrated that industry-level aggregate data are smooth and highly amenable to modeling against macroeconomic variables to produce stress scenarios. Further, we have demonstrated that the derived market share for individual banks (or a peer group of banks) is stable, demonstrating cyclical, regional, and idiosyncratic behavior. We have modeled such shares against principal components to extract the cyclical elements and thus isolated some key idiosyncratic behavior that is unique to the specific banks in our group. We have argued that measures of correlation between individual bank data and related industry-level data provide a valid measure of risk appetite that can be presented for each line item in the call report.

Figure 6. Net loan and lease share forecasts
Net loan and lease share forecasts
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics
Figure 7. First National Bank Texas net loans and leases forecast
First National Bank Texas net loans and leases forecast
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics
Figure 8. Extraco net loans and leases forecast
Extraco net loans and leases forecast
Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

While the approach presented here is clearly top-down in nature, we note that the methodology allows us to dig down to any level of granularity required by end users. Had the alternative bottom-up approach been taken, and data for each bank were considered in turn, it would have been impossible for us to statistically separate the data into their collective and idiosyncratic components. We contend that it is only the collective components that should be stressed during a stress test and that a variety of idiosyncratic management responses should be considered as part of a strategic analysis used by the bank’s managers to deal with stress.


Distinguishing between internal and external drivers is a problem of Gordian complexity. Stress testing, to date, has focused on individual banks building stress testing models based solely or primarily on internal data sources. It is unclear whether it is possible for a bank to understand all the risks it faces without looking for clues outside the castle walls.

There are various ways for banks to do this. One would involve producing detailed benchmark forecasts against which to peg internally derived solutions. There is, however, a distinct lack of suitable approaches that could be used to provide such a comparison. The methodology we present here represents arguably the first credible attempt to provide a universal benchmarking solution. We have used only externally sourced public data and have relied in no way on any information that is specific or proprietary to any individual bank. Despite this, we contend that the stressed and baseline projections produced would compete strongly with internally produced forecasts that rely on a detailed understanding of the inner workings of the bank.

The universality of our approach provides any number of benefits that are external to the core stress testing imperative. Managers of banks of all sizes can look at their own bank and competitor banks through the same lens. This means that strategic analysis can proceed via consideration both of action and competitor reaction to a variety of management plans. The external environment, meanwhile, is truly external in this approach. The stress scenario is therefore truly exogenous if considered in our framework.

Modelers who rely solely on internal data and macroeconomic variables cannot disentangle the effects of the macroeconomy on the one hand and bank-specific actions on the other. Our example of Good Bank and Bad Bank highlights the fact that a model that predicts credit losses in a stress scenario similar to those seen during the Great Recession is not conservative and is unlikely to be accurate. A rigorous stress test or strategic analysis requires the bank to compare itself to industry and peer-group aggregates.

We propose a novel, powerful modeling framework that uses FDIC call report data to develop both industry- and bank-level forecasts. Aggregated call report data do not suffer from idiosyncratic management actions, so identifying the impact of the macroeconomy becomes straightforward. Forecasting an individual bank then becomes a matter of explaining how its market share has evolved over time and how it is likely to behave under various economic scenarios. That task is much easier, and less error-prone, than building a bottom-up model that must capture both macroeconomic effects and internal decisions.

As Published In:
Related Insights

Forecasting Income & Balance Sheet Projections for Compliance

Regulators are placing increased emphasis on the rigor by which banks model their income and balance sheet projections.

July 2017 WebPage Brian Poi

The Effect of Ride-Sharing on the Auto Industry

Many in the auto industry are concerned about the impact of ride-sharing. In this article analyze the impact of ride-share services like Uber and Lyft on the private transportation market.

July 2017 Pdf Dr. Tony Hughes

The Effect of Ride-Sharing on the Auto Industry

In this article, we consider some possible long-term ramifications of ride-sharing for the broader auto indust

July 2017 WebPage Dr. Tony Hughes

"How Will the Increase in Off-Lease Volume Affect Used Car Residuals?" Presentation Slides

Increases in auto lease volumes are nothing new, yet the industry is rife with fear that used car prices are about to collapse. In this talk, we will explore the dynamics behind the trends and the speculation. The abundance of vehicles in the US that are older than 10 years will soon need to be replaced, and together with continuing demand from ex-lessees, this demand will ensure that prices remain supported under baseline macroeconomic conditions.

February 2017 Pdf Dr. Tony HughesMichael Vogan

How Will the Increase in Off-Lease Volume Affect Used Car Residuals?

Increases in auto lease volumes are nothing new, yet the industry is rife with fear that used car prices are about to collapse. In this webinar, we explore the dynamics behind the trends and the speculation. The abundance of vehicles in the US that are older than 10 years will soon need to be replaced, and together with continuing demand from ex-lessees, this demand will ensure that prices remain supported under baseline macroeconomic conditions.

February 2017 WebPage Dr. Tony HughesMichael Vogan

Economic Forecasting & Stress Testing Residual Vehicle Values

To effectively manage risk in your auto portfolios, you need to account for future economic conditions. Relying on models that do not fully account for cyclical economic factors and include subjective overlay, may produce inaccurate, inconsistent or biased estimates of residual values.

December 2016 WebPage Dr. Tony Hughes

The Value of Granular Risk Rating Models for CECL

Granular risk rating models allow creditors to understand the credit risk of individual loans in a portfolio, facilitating underwriting and monitoring activities. In this webinar we will outline the value of granular risk rating models for CECL.

November 2016 WebPage Christian HenkelDr. Tony Hughes

Improved Deposit Modeling: Using Moody's Analytics Forecasts of Bank Financial Statements to Augment Internal Data

We demonstrate how our service can be used to produce more realistic forecasts of income and balance sheet statements.

July 2016 Pdf Dr. Tony Hughes, Brian Poi

Are Deposits Safe Under Negative Interest Rates?

In this article, I take a theoretical look at negative interest rates as a means to stimulate the economy. I identify key factors that may influence the volume of deposits held in the economy. I then empirically describe the unique situation of negative interest rates.

June 2016 WebPage Dr. Tony Hughes

AutoCycle™: Residual Risk Management and Lease Pricing at the VIN Level

We demonstrate the core capabilities of our vehicle residual forecasting model to capture aging and usage effects and illustrate the material implications for car valuation of different macroeconomic scenarios such as recessions and oil price spikes.

May 2016 Pdf Dr. Tony Hughes

Benefits & Applications: AutoCycle - Vehicle Residual Value Forecasting Solution

With auto leasing close to record highs, the need for accurate and transparent used-car price forecasts is paramount. Concerns about the effect of off-lease volume on prices have recently peaked, and those exposed to risks associated with vehicle valuations are seeking new forms of intelligence. With these forces in mind, Moody's Analytics AutoCycle™ has been developed to address these evolving market dynamics.

May 2016 Pdf Dr. Tony Hughes, Dr. Samuel W. Malone, Michael Vogan, Michael Brisson

Alternatives to Long-Term Car Loans?

In this article, our experts focus on two recent developments: how to manage lease-term or model-year concentration risk, and how to find affordable finance options for the subprime and near-prime sectors.

February 2016 Pdf Dr. Tony Hughes

Small Samples and the Overuse of Hypothesis Tests

With powerful computers and statistical packages, modelers can now run an enormous number of tests effortlessly. But should they? This article discusses how bank risk modelers should approach statistical testing when faced with tiny data sets.

December 2015 WebPage Dr. Tony Hughes
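The core statistical point of the article above can be illustrated with basic arithmetic: if each test has a 5% false-positive rate, the chance of at least one spurious "significant" result grows quickly with the number of tests run. A minimal sketch (the figures are standard probability results, not taken from the article):

```python
# Family-wise error rate: the probability of at least one false positive
# when running k independent tests, each at significance level alpha.
alpha = 0.05

for k in (1, 10, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:>3} tests -> P(at least one false positive) = {fwer:.3f}")
```

With 100 tests, a false positive is all but guaranteed, which is why running "an enormous number of tests effortlessly" on small data sets is hazardous without corrections such as Bonferroni adjustment.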

Do Banks Need Third-Party Models?

This article discusses the role of third-party data and analytics in the stress testing process. Beyond the simple argument that more eyes are better, we outline why some stress testing activities should definitely be conducted by third parties.

December 11, 2015 WebPage Dr. Douglas Dwyer, Dr. Tony Hughes

Forecasting Income Statements & Balance Sheets Using Industry Data

In this webinar, Dr. Brian Poi, Director, Economic Research, demonstrates how forecasts based on industry data can be used to generate an objective benchmark for internally generated forecasts.

October 2015 Pdf Brian Poi

Stress Testing Used-Car Prices

In this presentation, we describe a quantitative methodology for incorporating economic factors into car price forecasts.

August 2015 WebPage Dr. Tony Hughes, Michael Vogan

Systemic Risk Monitor 1.0: A Network Approach

In this article, we introduce a new risk management tool focused on network connectivity between financial institutions.

Residual Car Values Forecasting Using AutoCycle™

In this paper we discuss our approach to forecasting residual car values that accounts for cyclical economic factors affecting the automotive industry, under normal and stressed scenarios.

July 2015 Pdf Dr. Tony Hughes

Forecasts and Stress Scenarios of Used-Car Prices

The market for new cars is growing strongly, and lessors need forecasts and associated stress scenarios of future vehicle values to set initial terms, monitor the performance of their book, and stress-test cash flows. This presentation offers insight and tools to help lessors in this pursuit.

May 2015 Pdf Dr. Tony Hughes, Zhou Liu, Pedro Castro

Measuring Systemic Risk in the Southeast Asian Financial System

This article looks back at the Asian financial crisis of 1997-1998 and applies new methods of measuring systemic risk and pinpointing weaknesses, which can be used by today’s financial institutions and regulators.

Multicollinearity and Stress Testing

Multicollinearity, the phenomenon in which the regressors of a model are correlated with each other, apparently causes a lot of confusion among practitioners and users of stress testing models. This article seeks to dispel this confusion.

May 2015 WebPage Dr. Tony Hughes, Brian Poi
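A standard way to quantify the confusion the article above addresses is the variance inflation factor (VIF): for a model with two regressors whose sample correlation is r, the variance of each estimated coefficient is inflated by 1/(1 - r²) relative to the uncorrelated case. A minimal illustrative sketch (textbook formula, not drawn from the article):

```python
# Variance inflation factor for a two-regressor model as a function of
# the correlation r between the regressors.
def vif(r):
    return 1.0 / (1.0 - r * r)

for r in (0.0, 0.5, 0.9, 0.99):
    print(f"corr = {r:.2f} -> VIF = {vif(r):7.2f}")
```

Individual coefficients become unstable as correlation rises, even though the model's joint fit and forecasts can remain perfectly usable, which is the key distinction stress-testing practitioners often miss.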

What if PPNR Research Proves Fruitless?

This article addresses how banks should look to sources of high-quality, industry-level data to ensure that their PPNR modeling is not only reliable and effective, but also better informs their risk management decisions.

May 2015 WebPage Dr. Tony Hughes

Vehicle Equity and Long-Term Car Loans

In this article, we consider the increasing prevalence of long term loans and use the AutoCycle™ wholesale price forecasts to uncover equity held by the borrower under different economic scenarios.

April 2015 Pdf Dr. Tony Hughes

Putting Systemic Stress into the Stress-Testing System

This article shows how banks can significantly improve the effectiveness of their stress-testing exercises by incorporating systemic risk measures.

March 2015 Pdf Dr. Tony Hughes

Modeling the Entire Balance Sheet of a Bank

This article explores the interaction between a bank’s various models and how they may be built into a comprehensive stress testing framework, contributing to the overall performance of a bank.

November 2013 WebPage Dr. Tony Hughes

Is Now the Time for Tough Stress Tests?

The banking industry needs a regulatory framework that is carefully designed to maximize economic outcomes, both in terms of stability and growth, rather than one dictated by past banking sector excesses.

November 2013 WebPage Dr. Tony Hughes

Stressed EDF Credit Measures for Western Europe

In this paper, we describe the modeling methodology behind Moody's Analytics Stressed EDF measures for Western Europe. Stressed EDF measures are one-year default probabilities conditioned on holistic economic scenarios developed in a large-scale, structural macroeconometric model framework.

October 2012 Pdf Danielle Ferry, Dr. Tony Hughes, Min Ding

Stressed EDF™ Credit Measures for North America

In this paper, we describe the modeling methodology behind Moody's Analytics Stressed EDF measures. Stressed EDF measures are one-year default probabilities conditioned on holistic economic scenarios developed in a large-scale, structural macroeconometric model framework. This approach has several advantages over other methods, especially in the context of stress testing. Stress tests or scenario analyses based on macroeconomic drivers lend themselves to highly intuitive interpretation accessible to wide audiences: investors, economists, regulators, and the general public, to name a few.

May 2012 Pdf Danielle Ferry, Dr. Tony Hughes, Min Ding
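The idea of conditioning a default probability on a macroeconomic scenario can be sketched with a toy logit model. The coefficients and the unemployment driver below are entirely hypothetical and chosen for illustration; the actual Stressed EDF measures come from a large-scale, structural macroeconometric framework, not a single-factor logit:

```python
import math

# Toy scenario-conditioned PD: a logit in the unemployment rate.
# intercept and beta are illustrative placeholders, not estimated values.
def stressed_pd(unemployment_rate, intercept=-5.0, beta=0.35):
    z = intercept + beta * unemployment_rate
    return 1.0 / (1.0 + math.exp(-z))

for scenario, u in [("baseline", 4.5), ("adverse", 8.0), ("severely adverse", 12.0)]:
    print(f"{scenario:>16}: PD = {stressed_pd(u):.2%}")
```

The intuition this captures is the one the paper highlights: because the PD is an explicit function of macroeconomic drivers, each scenario maps transparently to a default-risk level that non-specialist audiences can interpret.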

The Moody's CreditCycle Approach to Loan Loss Modeling

This whitepaper goes in-depth into the Moody's CreditCycle approach to loan loss modeling.

Previewing This Year's Stress Tests Using the Bank Call Report Forecasts

Risk modelers at banks often feel pressure to produce conservative, as opposed to strictly accurate, forecasts of a bank’s resilience in times of stress. Regulators typically frown on capital plans that have even the barest whiff of optimism[1].