
    Stress Testing and Strategic Planning Using Peer Analysis

    Banks face the difficult task of building hundreds of forecasting models that disentangle macroeconomic effects from bank-specific decisions. That is impossible when modelers rely solely on internal performance data and the standard set of macroeconomic variables released as part of the CCAR exercise. We propose an alternative approach based on consistently reported industry data that simplifies the modeler’s task and at the same time increases forecast accuracy. Our approach is also useful for strategic planning as it allows one bank to compare its balance sheet and income statement to its peers and the industry and to explore potential mergers and acquisitions.

    Introduction

    Responding to the dictates of the Dodd-Frank Act is, perhaps by regulatory design, a highly complex task for large financial institutions. In principle, banks must seek to carefully model every potential cash flow that may stem from the operation of their businesses. These models cover not only credit losses for all asset categories at a granular level – for many banks down to a loan level – but also asset and liability balances, loan origination volumes, deposits, interest and non-interest revenues and expenses, costs of staff and premises, and ultimately the exact future capital position of the institution.

    In this article, we propose a simple, coherent alternative methodology that allows us to forecast and stress test the entire balance sheet and profit and loss statement for all of the roughly 6,000 banks in the US in a consistent manner. The output is presented as a bank-level panel database containing forecasts and stress scenarios for (potentially) every item covered by public call report data. We can currently project about 200 individual line items from the call report, with the potential to extend our methodology to more than 1,000 items.

    Addressing Stress Test Limitations

    Stress tests developed within banks have primarily utilized complex bottom-up modeling techniques. Analysts are tasked with building a model of a specific, narrowly defined cash flow or credit loss measure for the institution. They source data relevant to the line item, primarily from inside the bank, and then build a model that relates the collected data to macroeconomic variables. Once 1,000 modelers have built 1,000 models of 1,000 different variables, the series are projected and then combined to calculate the capital position of the bank under each scenario.

    The complexity of this task has major implications for the banking system. First, even those institutions with the most keenly developed stress testing infrastructure cannot run an ad hoc stress test quickly and accurately. For example, suppose that a large, unexpected event – like the UK’s vote to exit the European Union – occurs one weekend and the chief risk officer wants to determine its effect on the bank’s future capital position. At present, it may take weeks or months for the manager to get the answer, by which time the next crisis, and the one after that, will have already come and gone. Ideally, bank executives should be able to conceive of a stress scenario during a morning coffee break, mull over detailed stress projections during a quick working lunch, and devise an appropriate strategy to deal with the potential threat by the close of business. One wonders whether a stress test that takes months to perform can ever have any meaningful strategic or tactical relevance to a bank of any size.

    Another problem with the stress testing protocols, as currently implemented, is that banks often cannot compare their projected performance with that of their peers. With each bank building its own idiosyncratic, bottom-up model primarily based on internally sourced data, one bank’s model outcomes may not easily compare with another’s. This holds true even if the underlying portfolios face identical levels of risk. Banks can use an industry-wide model to calculate, say, default probabilities for a specific portfolio, but this will not account for changes in the mix of loans held by the bank or its rival institutions. In contrast, our approach is based on call report data, providing a consistent basis on which to compare banks across the size spectrum.

    The fact that we can apply our method consistently for all US banks opens up a plethora of intriguing analytical options. For a specific bank, we can provide a coherent external projection of the complete financial position under baseline or stress circumstances. This can be used as a champion, challenger, or benchmark stress testing formulation to be compared with internal stress testing engines. Scenarios can be deployed within this framework in minutes, bringing tactical stress testing well within reach.

    Adding to the strategic possibilities, executives can lay their own bank’s stress position alongside that of their competitors or potential collaborators. A bank considering an acquisition can fold the target’s data into its own legacy data and make projections for the hypothetical merged bank. Banks can gain key insight into which of their competitors are more or less recession-prone than themselves, and can then potentially improve their recession resilience through acquisition. Additionally, a bank can determine whether its own internal managers are outperforming their peers in similar roles at competitor banks or whether they are merely riding industry waves.

    Another intriguing element of this work is the breadth of banks that the analysis covers. We began this research to develop benchmarking options for the largest banks. We were pleasantly surprised to find that our methodology worked as effectively for small banks, even those with less than $1 billion in assets, as it did for Comprehensive Capital Analysis and Review (CCAR) giants. For banks in the Dodd-Frank Act Stress Test (DFAST) range – with $10 billion to $50 billion in assets – where available data often fail to deliver valid models, and where subjectivity plays an outsized and unwanted role in stress testing, our approach can provide scientific rigor. For smaller community banks, our approach opens the stress testing floodgates. Insofar as larger banks are obtaining a competitive benefit from stress testing, small banks will now be able to enjoy similar benefits.

    For large banks, the methodology provides a useful, consistent benchmark for a variety of pre-provision net revenue calculations. Large bank executives will also be able to run quick stress tests for both themselves and their competitors, individually or jointly, and they can perform the analysis on potential merger and acquisition (M&A) targets. Mid-size banks may find the methodology suitable as a champion model. These banks have much smaller armies of modelers, and building models using only in-house data is often not practical. Many of the smallest banks do not have sufficient data for modeling, let alone any modelers to make use of those data, so they may benefit by having a source of quantitative, unbiased forecasts that can be compared to their competitors. Banks of all sizes can use our data for peer-group and market analysis.

    Later in this article we study a peer group of small banks to demonstrate our methodology. Specifically, we consider four banks, all active in central Texas: Extraco, First National Bank Texas, Central National Bank, and First National Bank of Central Texas. Assets for this group total about $8 billion.

    Motivation

    When developing a model for strategically analyzing bank portfolios, being able to distinguish between internal and external forces is critical. For example, suppose that during the housing boom of 2005 and 2006, your bank was making a concerted effort to increase its market share in the prime credit card sector. In analyzing portfolio originations and volumes, regressing observed volume on a range of economic variables will uncover a clear procyclicality; when the economy improves, loan origination growth tends to accelerate. The analyst must then try to identify whether it was the improving economy or the bank’s aggressive marketing activity that was primarily responsible for the outcome. If the marketing strategy was effective, it would tend to falsely magnify the perceived effect of the business cycle on growth in the portfolio.

    A model that does not explicitly account for internal actions cannot accurately forecast what will happen in a renewed stress scenario unless management is assumed to be inert and inflexible. For the models to have strategic applications, they must be capable of simulating a variety of management actions and the manner in which they interact with the external environment the bank faces.

    Suppose we have two banks in separate universes, Good Bank and Bad Bank, both of which are subject to DFAST. They have similar overall risk profiles and both have made large loans to a hypothetical HWC Corporation, a maker of widgets. In 2008, a recession kicked off and HWC was in big trouble – there was a speculative boom in widgets, the bursting of which caused the recession, and HWC had massively over-invested in its Albuquerque operations. In both universes, the distribution of manager talent is the same, and industry commercial and industrial (C&I) losses in both realms rose to 6% as a result of the recession.

    Both sets of bank managers tried various treatments to keep HWC afloat. The problem for Bad Bank’s shareholders was that their managers were poorly skilled and, as a result, HWC failed; the bank therefore suffered deep losses. Good Bank’s people, consummate professionals, offered HWC a timely refinancing package that staved off disaster for the company and for the bank. The recession was still tough on Good Bank, whose C&I losses rose from 2% to 4%. Bad Bank also survived the subprime widget recession, albeit just barely. Its C&I losses soared from 2% to a whopping 12%.

    While the distribution of management talent is the same in both universes, Bad Bank just happened to hire an inordinate proportion of bad managers before the last recession. The good managers were hired elsewhere.

    After the recession, Bad Bank methodically fired its entire management team and rebranded itself as Satisfactory Bank.

    Now DFAST rolls around again. Our friends at the rebranded SatBank are trying to build C&I models for use in the regulatory exercise. If they build a model of the internal data alone and seek to project under the severely adverse scenario, an event similar to the global widget crisis of 2008, they will project a 12% loss rate. The new CEO of SatBank is dissatisfied with this result, since she is certain that the new management team will do a better job than last time. Even if they hired a group of managers of average quality out of the available pool, they should at least be able to match the 6% result observed for the industry during the crisis.

    Many of Good Bank’s managers, meanwhile, have cashed in their options and are busy swinging in hammocks in warm places. The bank has restaffed from the same talent pool as SatBank. Can we not infer, therefore, that the two banks will now experience similar outcomes during a future severe recession?

    It is possible to believe that Good Bank and SatBank will enjoy or endure similar results to those they experienced during the last recession. It would be more accurate, however, to assume that both banks will regress to the mean and behave more like the average bank going forward.

    A conservative position, meanwhile, would involve assuming that both banks will err in their staffing choices. A tough but reasonable regulator may be justified in forcing Good Bank to capitalize to SatBank’s numbers during its capital adequacy assessment. Moreover, even though Good Bank weathered the previous recession relatively well, it is not exempt from the need to benchmark to consistent external data. If the bank internalizes the view that it is recession-proof, that other banks’ data do not pertain to it because it is above the fray, it is hard to see how the stress testing imperative has made the bank any safer.

    How We Do It

    Given the large number of banks in the US, we assume perfect competition, so that exogenous actions taken by managers at an individual bank will not affect the trajectory of industry-level aggregates. Of course, decisions made by the large CCAR banks might in fact affect aggregate volumes, but for our purposes the assumption is a powerful simplification. It allows us to model industry-level aggregate outcomes for each line item on the call report without worrying about the effect of any specific action taken by a manager at an individual bank. As Figure 1 shows, while the number of independent banks has been in steady decline, roughly 6,000 remain today, ensuring a high level of competition.

    As a consequence of this assumption, we can model the behavior of the industry against cyclical economic variables and thus isolate the pure effect of the macroeconomy on the series of interest. This basic principle applies, to a greater or lesser extent, to all the series in the call report. The assumption of perfect competition vastly simplifies our analysis by allowing us to concentrate on pure macroeconomic factors, leaving internal factors to be analyzed separately.

    Figure 1. The number of banks in the US is in secular decline
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

    We model the aggregate series using standard macroeconometric techniques that are familiar to most readers. We also ensure that relevant identities hold. For example, our forecasts for total deposits reflect the sum of checking, savings, time, and other forms of deposits. We enforce consistency between the aggregate income statement and balance sheet by making interest income and expenses functions of assets and liabilities, respectively.
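
    As a minimal sketch of how such an identity can be enforced after the component models are run, consider the following Python fragment. The DataFrame and its column names are hypothetical stand-ins, not actual call report mnemonics:

    ```python
    import pandas as pd

    def enforce_deposit_identity(fcst: pd.DataFrame) -> pd.DataFrame:
        """Force the total-deposits forecast to equal the sum of its parts.

        `fcst` holds one column per deposit subcomponent; the names below are
        illustrative placeholders, not actual call report mnemonics.
        """
        parts = ["checking", "savings", "time", "other_deposits"]
        out = fcst.copy()
        out["total_deposits"] = out[parts].sum(axis=1)  # identity holds by construction
        return out
    ```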

    Before we can model an individual bank, we must adjust each existing bank’s data for past M&A activity. For example, when Wells Fargo merged with Wachovia in 2009, the entity labeled “Wells Fargo” in the call reports almost doubled in scale from 2009Q4 to 2010Q1. We construct hypothetical data for Wells Fargo by combining Wells Fargo data with historical data for Wachovia and all other entities that have come under the Wells Fargo moniker over the years. Thus, our data for Wells Fargo represent what it would look like had it always owned Wachovia and the other banks it has acquired. Doing this allows us to control for what are perhaps the most obvious structural breaks in the data, those specifically due to M&A activity. Details of this approach are available on request.
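
    The sketch below shows the basic merger adjustment under simplified assumptions. The `histories` mapping and `family` list are hypothetical inputs; a production version must also handle the lineage and timing details we gloss over here:

    ```python
    import pandas as pd

    def merger_adjust(histories: dict, family: list) -> pd.DataFrame:
        """Build a pro-forma history for a surviving bank by summing each
        call report series over the survivor and all of its predecessors,
        as if they had always been a single entity.

        `histories` maps bank names to quarter-indexed DataFrames; a bank
        simply contributes nothing before its data begin.
        """
        frames = [histories[bank] for bank in family if bank in histories]
        return pd.concat(frames).groupby(level=0).sum()

    # Illustrative usage for the example in the text:
    # wells_pro_forma = merger_adjust(histories, ["Wells Fargo", "Wachovia"])
    ```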

    Even after we adjust the bank-level data for M&A activity, some structural breaks will inevitably remain. Wells Fargo’s managers clearly had different strategic goals than the Wachovia managers they replaced. If, say, Wachovia had a defensive manager in a specific area who was replaced by a more aggressive post-merger overseer, we would see changes in the dynamic behavior of the relevant series after the merger. We use the database of bank M&A activity maintained by the Federal Reserve Bank of Chicago, which does not include acquisitions of non-bank entities. If a bank acquires assets from an insurance company, for example, our data will still show a spike in the bank’s total assets and other series because our M&A adjustment algorithm does not account for that activity. Nevertheless, our adjustment handles the vast majority of M&A-related discontinuities seen in the data.

    With the merger-adjusted bank data in hand, we use the industry-level data and forecasts to produce our bank-specific forecasts. When the variable being modeled can take on negative values, as is usually the case with income and expense variables, we use a beta-model approach in which we model the bank-level variable as a function of the industry-level variable and macroeconomic factors. When the variable being modeled can only take on positive values, as is usually true of assets and liabilities, we use a share-based approach in which we model the ratio of the bank-level variable to the industry-level variable as a function of macroeconomic factors.
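
    The two specifications can be sketched as follows. The linear functional forms, and the logit transform used to keep fitted shares strictly inside (0, 1), are illustrative choices rather than a prescribed part of the method:

    ```python
    import numpy as np
    import statsmodels.api as sm

    def fit_beta_model(bank_y, industry_y, macro_X):
        """Beta model: a bank-level flow (which may be negative) regressed on
        the industry aggregate plus macro factors. The coefficient on the
        industry series is the bank's 'beta'."""
        X = sm.add_constant(np.column_stack([industry_y, macro_X]))
        return sm.OLS(bank_y, X).fit()

    def fit_share_model(bank_y, industry_y, macro_X):
        """Share model: for strictly positive stocks, regress the bank's share
        of the industry total on macro factors. The logit transform is one
        convenient way to keep fitted shares inside (0, 1)."""
        share = bank_y / industry_y
        logit_share = np.log(share / (1.0 - share))
        return sm.OLS(logit_share, sm.add_constant(macro_X)).fit()
    ```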

    Rather than attempt to find a small set of macroeconomic variables that can explain each of the 200 bank series we currently forecast, we instead use principal components analysis (PCA) to identify a few uncorrelated latent variables that account for the bulk of macroeconomic fluctuations. That is, we start with more than 100 macroeconomic and interest-rate variables and then run a PCA. The first three principal components account for more than 85% of the total variance of all the macroeconomic variables. For small banks located within a particular state, we often replace the third principal component with a component derived from state-specific macroeconomic variables.
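
    In code, the factor extraction is a standard PCA on standardized series. A minimal sketch using scikit-learn, where `macro` is a quarters-by-variables array of the 100-plus raw series:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def macro_factors(macro: np.ndarray, n_components: int = 3) -> np.ndarray:
        """Extract the leading principal components of the macro panel."""
        z = StandardScaler().fit_transform(macro)   # PCA is scale-sensitive
        pca = PCA(n_components=n_components).fit(z)
        print("cumulative variance explained:", pca.explained_variance_ratio_.cumsum())
        return pca.transform(z)                     # quarters x n_components scores
    ```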

    These principal components have intuitive interpretations. See Figure 2. The first principal component, PCA 1, closely follows long-term interest rates. Interest rates have been in secular decline since the early 1990s, but we have nonetheless experienced periods of rising rates. The second principal component reflects the cyclical behavior of the economy, falling during the recessions of 2001-2002 and 2008-2009 and bouncing back along with the economy. The third principal component is a lagging indicator of the economy; it peaked in 2004 and 2010, several quarters after the economy bottomed.

    Figure 2. Three principal components of the economy
    Source: Moody's Analytics

    Whether we use the beta model or the share model to forecast an individual bank based on industry data, we use these same three principal components to control for economic conditions. We employ various combinations of lags of these three variables and then pick the model with the best forecast accuracy. Together with the scenario-specific forecasts of the principal components and the industry-level bank data, we use the resulting model to forecast the individual bank.
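
    Lag selection can be implemented as a simple grid search scored on out-of-sample accuracy. In the sketch below, the maximum lag and holdout length are illustrative settings, not fixed parameters of our method:

    ```python
    import itertools
    import numpy as np
    import statsmodels.api as sm

    def best_lag_model(y, factors, max_lag=4, holdout=8):
        """Try every lag combination of the factor columns and keep the one
        with the lowest RMSE over the final `holdout` quarters."""
        best_rmse, best_lags = np.inf, None
        for lags in itertools.product(range(max_lag + 1), repeat=factors.shape[1]):
            lagged = np.column_stack(
                [np.roll(factors[:, j], k) for j, k in enumerate(lags)]
            )
            X = sm.add_constant(lagged[max_lag:])   # drop rows contaminated by roll
            yy = y[max_lag:]
            fit = sm.OLS(yy[:-holdout], X[:-holdout]).fit()
            err = yy[-holdout:] - fit.predict(X[-holdout:])
            rmse = np.sqrt(np.mean(err ** 2))
            if rmse < best_rmse:
                best_rmse, best_lags = rmse, lags
        return best_lags, best_rmse
    ```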

    The cyclical component of market share can be viewed as a measure of risk appetite. Suppose a particular bank has a rising market share in C&I loans during a period of strong growth for the C&I sector. Now compare this institution to one whose share of C&I volume rises during recessions. The bank that gains share during a recession is likely the more conservative bank. Conservative banks will lose market share to more aggressive competitors during upturns and then recover the lost share when competitors falter during tough times. In general, a bank with a high beta has a higher risk appetite. Approaching the problem in this way, we can measure the appetite for risk for all banks line-by-line across the entire call report.

    Regional economic conditions matter most for small banks. Figure 3 shows total assets for a peer group of four banks located in central Texas as well as for the entire industry, and Figure 4 shows the peer group’s assets as a share of industry assets. During the Great Recession, this peer group increased its share from 0.038% to 0.047% over a nine-quarter period from 2008 to 2010. This increase was mainly due to banks in other parts of the country posting large declines in asset values. Including a component that controls for regional conditions helps us capture these effects. Interestingly, though, our four banks have been able to maintain (and even further increase) the share they achieved during the recession.

    Figure 3. Peer group and industry assets
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics
    Figure 4. Peer group assets as a share of industry
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

    Changes in market share not attributable to national and regional economic factors are idiosyncratic in nature. An increase in market share can be achieved by taking greater risk, but it can also be achieved through more effective management. In either case, if the increase in market share cannot be attributed to the business cycle or regional variations, it can instead be chalked up to the good fortune and effectiveness of the bank’s managers.

    Figure 5 shows the market share for our four Texas banks, where we also included several smaller banks when defining the size of the market. Extraco has clearly been losing market share over the past two decades. While Central’s share has been steady, First National Bank Texas has seen its peer-group share rise consistently to the point where it has recently surpassed Extraco as the group leader in total assets. Having no data about the internal actions taken by any of the banks, we can nonetheless conclude that First National Bank Texas has been very effective in grabbing market share from its competitors. Extraco may well be pursuing a margin growth strategy and may be highly profitable for its shareholders, though there is no doubt that it is shrinking in scale relative to its fast-growing peer.

    Figure 5. First National Bank Texas gaining assets at Extraco’s expense
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

    We have several options when forecasting market shares under baseline and stress scenarios. The simplest alternative is to assume a constant market share, either at its last historical value or perhaps its mean over a longer recent period. Even with this approach, our forecasts of the underlying bank-level variable will still differ across scenarios because our industry-level forecasts do. A second alternative is to use an autoregressive integrated moving average (ARIMA) model to forecast market shares. This extends the flat-line approach by using recent market share momentum to help forecast the share going forward.
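
    Both simple alternatives take only a few lines to sketch. The ARIMA order below is illustrative; in practice it would be chosen by the usual information criteria:

    ```python
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    def forecast_share(share: pd.Series, horizon: int = 9):
        """Return flat-line and ARIMA market share forecasts.

        `share` is the bank's historical market share as a quarterly series;
        the nine-quarter default matches a typical CCAR projection window.
        """
        flat = pd.Series(share.iloc[-1], index=range(horizon))  # constant share
        arima = ARIMA(share, order=(1, 1, 1)).fit()             # order is illustrative
        return flat, arima.forecast(steps=horizon)
    ```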

    The third approach, discussed previously, uses the principal components so that market share forecasts are conditional on the economic environment. When we fit these market share models (and beta models) using principal components as regressors, we do not place any restrictions on the signs of the corresponding parameters. Forming prior views of how these components affect a bank’s market share is difficult. Is Extraco’s (or Wells Fargo’s) market share of commercial real estate loan origination pro- or counter-cyclical? In practice, the answer depends on the bank’s strategic plan and tolerance for risk. We allow the data to speak for themselves in determining the dynamics of market share under different scenarios.

    Figure 6 shows our forecast peer-group market shares of net loans and leases for First National Bank Texas and Extraco under the CCAR baseline and severely adverse scenarios. The behavior of Extraco’s market share in the severely adverse scenario is interesting. It initially rises but then falls slightly. After 2017, its market share levels off, offering somewhat of a respite from the declines it has experienced for most of the past 20 years. In contrast, First National’s market share growth comes to a halt before resuming in 2019 under the severely adverse scenario. Figures 7 and 8 show our forecasts for net loans and leases under all three regulatory CCAR scenarios. Compare Figure 6 to Figure 8. In the baseline scenario, Extraco’s market share continues to wane. Nevertheless, the industry grows enough that Extraco’s shrinking slice of the pie still translates to a growing loan and lease portfolio.

    In summary, for several line items in the call report, we have demonstrated that industry-level aggregate data are smooth and highly amenable to modeling against macroeconomic variables to produce stress scenarios. Further, we have demonstrated that the derived market share for individual banks (or a peer group of banks) is stable, exhibiting cyclical, regional, and idiosyncratic behavior. We have modeled such shares against principal components to extract the cyclical elements and thus isolated key idiosyncratic behavior that is unique to the specific banks in our group. Finally, we have argued that measures of correlation between individual bank data and related industry-level data provide a valid measure of risk appetite that can be presented for each line item in the call report.

    Figure 6. Net loan and lease share forecasts
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics
    Figure 7. First National Bank Texas net loans and leases forecast
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics
    Figure 8. Extraco net loans and leases forecast
    Sources: FDIC Statistics on Depository Institutions, Moody's Analytics

    While the approach presented here is clearly top-down in nature, the methodology allows us to dig down to any level of granularity required by end users. Had we instead taken the bottom-up approach, considering each bank’s data in turn, it would have been impossible to statistically separate the data into their collective and idiosyncratic components. We contend that it is only the collective components that should be stressed during a stress test, and that a variety of idiosyncratic management responses should be considered as part of a strategic analysis used by the bank’s managers to deal with stress.

    Conclusion

    Distinguishing between internal and external drivers is a problem of Gordian complexity. Stress testing, to date, has focused on individual banks building stress testing models based solely or primarily on internal data sources. It is unclear whether it is possible for a bank to understand all the risks it faces without looking for clues outside the castle walls.

    There are various ways for banks to do this. One would involve producing detailed benchmark forecasts against which to peg internally derived solutions. There is, however, a distinct lack of suitable approaches that could be used to provide such a comparison. The methodology we present here represents arguably the first credible attempt to provide a universal benchmarking solution. We have used only externally sourced public data and have relied in no way on any information that is specific or proprietary to any individual bank. Despite this, we contend that the stressed and baseline projections produced would compete strongly with internally produced forecasts that rely on a detailed understanding of the inner workings of the bank.

    The universality of our approach provides any number of benefits beyond the core stress testing imperative. Managers of banks of all sizes can look at their own bank and competitor banks through the same lens. Strategic analysis can therefore consider both a bank’s actions and its competitors’ reactions to a variety of management plans. The external environment, meanwhile, is truly external in this approach, so the stress scenario is genuinely exogenous within our framework.

    Modelers who rely solely on internal data and macroeconomic variables cannot disentangle the effects of the macroeconomy on the one hand and bank-specific actions on the other. Our example of Good Bank and Bad Bank highlights the fact that a model that simply projects a bank’s own Great Recession credit losses onto a similar future stress scenario is neither conservative nor likely to be accurate. A rigorous stress test or strategic analysis requires the bank to compare itself to industry and peer-group aggregates.

    We propose a novel, powerful modeling framework that uses FDIC call report data to develop both industry- and bank-level forecasts. Aggregated call report data do not suffer from idiosyncratic management actions, so identifying the impacts of the macroeconomy becomes straightforward. Forecasting an individual bank then becomes a matter of explaining how its market share has evolved over time and how it is likely to behave under various economic scenarios. That task is much easier, and less error-prone, than building a bottom-up model that attempts to capture both macroeconomic effects and internal decisions.
