Banks must be savvy about all the forces at work before trusting their PPNR models. This article addresses how banks should look to sources of high-quality, industry-level data to ensure that their PPNR modeling is not only reliable and effective, but also better informs their risk management decisions.
While most banks can now produce decent stress tests for credit losses, research continues in the important area of pre-provision net revenue (PPNR). Even though PPNR is an important part of a bank’s proactive stress testing regime, researchers must consider all the factors before trusting the accuracy of their models – or expecting bank executives to trust them.
Regulators require banks to produce forecasts of loan and deposit volume, fees collected, and interest rate spreads (both paid and received), thus generating stress predictions of interest and non-interest revenues and expenses. These factors play an important role in determining a bank’s financial position should a dire economic scenario start to unfold.
PPNR should complement stress testing, but the models it produces may not be as trustworthy as they seem. For one thing, many bank portfolios contain either scant or noisy PPNR data. It is not atypical for a bank to be forecasting, say, commercial loan origination volume with only 30 or 40 time-series observations at its disposal.
Within this context, modelers need to account for a number of other key factors that may influence business volume. Though the main aim of PPNR modeling is to identify robust macroeconomic drivers, managers would surely feel slighted if their actions were dismissed as irrelevant to the portfolio's projections. Indeed, if a business experiences strong growth, how do modelers know that the upswing is a result of general economic improvement and not a manager's improved sales procedures?
If the latter explanation has even a grain of truth (and if portfolio-specific factors are excluded from the model), the underlying effect of the economy on volume will be distorted and projections drawn from the model will be dangerously misleading.
Sometimes even diligent, well-designed research finds nothing. With a huge array of macro factors influencing the observed behavior of a portfolio, even focused research may not lead banks to a concrete destination.
Suppose banks diligently and intelligently produce the best possible model given this situation. They try to be parsimonious, using simple but powerful techniques and employing an intuitive behavioral framework. They then carefully consider any statistical issues that arise as they produce their models.
What happens, then, if the model produced by this process – the best possible model – is demonstrably unreliable or fragile?
When quantitative research falls short, the solution is invariably the same: collect more data! But in the case of PPNR modeling, it is often impossible to source more information from within the bank. Origination volume, the example used, is inherently a time-series concept. Stress testers, though highly skilled, have yet to unlock the secrets of time travel.
The only sensible alternative is to look for data from external sources. In the case of commercial loan volume, for instance, the Federal Reserve Board has quarterly data stretching back to the late 1940s. Using such a long series makes it easy to identify macroeconomic relationships through many distinct business cycles.
This data is not specific to any one bank, meaning that modeling the effect of management actions is not possible at this level. Despite this drawback, this method provides the best possible avenue through which a diligent modeler could find appropriate macroeconomic drivers of activity in the commercial lending space. Individual bank actions, under some reasonable assumptions, simply do not impact industry dynamics. This means that banks can focus their attention on identifying pertinent macro factors without having to worry about acquisitions, customers switching banks, staffing shifts, or changes in management strategy.
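To make the idea concrete, here is a minimal sketch of identifying macro drivers from a long industry series. The data here are simulated (the series lengths, variable choices, and coefficients are assumptions for illustration, not actual Federal Reserve figures), but the mechanics mirror what a modeler would do with roughly 300 quarters of real industry data: regress industry lending growth on candidate macro factors and check that the estimated relationships are stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for a long industry series (~300 quarters, roughly
# the span of the Fed's commercial lending data since the late 1940s).
n = 300
gdp_growth = rng.normal(0.6, 0.5, n)   # quarterly real GDP growth, %
spread = rng.normal(2.0, 0.4, n)       # lending spread, %

# Assumed "true" relationship used to generate the illustrative data:
# industry lending growth responds to GDP growth and to spreads.
loan_growth = 0.3 + 1.2 * gdp_growth - 0.5 * spread + rng.normal(0, 0.3, n)

# Ordinary least squares: recover the macro drivers from the long series.
X = np.column_stack([np.ones(n), gdp_growth, spread])
beta, *_ = np.linalg.lstsq(X, loan_growth, rcond=None)

print(beta)  # estimates should sit near the assumed (0.3, 1.2, -0.5)
```

With 30 or 40 observations the same regression would be fragile; with several hundred quarters spanning many business cycles, the coefficient estimates settle close to the underlying relationship, which is the whole point of reaching for industry-level data.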
Modeling 30 or 40 bank-specific observations becomes much easier when stressed industry variables are already in hand and the right macro variables are understood with a high degree of confidence. Now, stress testers can focus almost exclusively on bank-specific drivers of observed portfolio behavior.
A researcher might notice, for example, that his portfolio has been growing at a faster rate than the broader industry and that the bank’s market share is rising as a result. He can then interrogate relevant managers on the business side of the bank to find out why this is happening and whether the trend is likely to continue. More formally, he could seek quantitative drivers that explain the bank’s growth anomaly and thus project the bank’s performance under a number of alternative scenarios. The research is now usable and relevant.
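The two-stage logic above can be sketched in a few lines. Everything below is simulated for illustration (the share level, growth rates, and stress severity are assumptions, not data from any bank): the bank's market share is backed out of its short history against the industry series, a trend is fitted to that share, and a stressed bank projection is formed as the stressed industry path times the projected share.

```python
import numpy as np

rng = np.random.default_rng(1)

# 36 quarters of bank data (the "30 or 40 observations" case) alongside
# a matching slice of the industry series. All values are simulated.
quarters = np.arange(36)
industry = 100.0 * np.exp(0.01 * quarters)  # industry volume index
share = 0.020 * np.exp(0.005 * quarters) * np.exp(rng.normal(0, 0.01, 36))
bank = industry * share                      # bank origination volume

# Stage 1: the bank-specific anomaly. Fit a log-linear trend to the
# bank's market share; the slope is its quarterly share-growth rate.
log_share = np.log(bank / industry)
A = np.column_stack([np.ones(36), quarters])
coef, *_ = np.linalg.lstsq(A, log_share, rcond=None)
quarterly_share_growth = coef[1]

# Stage 2: combine with a stressed industry path. Here the scenario
# (assumed) has industry volume 10% below its last observed level.
stressed_industry = industry[-1] * 0.90
projected_share = np.exp(coef[0] + coef[1] * 40)  # share one year out
projected_bank = stressed_industry * projected_share

print(quarterly_share_growth, projected_bank)
```

The point of the decomposition is that the short bank series is asked only the question it can answer (is the bank gaining or losing share, and how fast?), while the macro sensitivity is inherited from the industry model.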
Banks may wonder whether the current approach to PPNR modeling is very informative. Most banks, relying on scant internal data, have to cut corners or mine the data to find macro linkages that are likely to be spurious or, at best, fragile. The relationships they do find are unlikely to last through the next downturn.
Taking a realistic and holistic approach to PPNR modeling, then, risk modelers should look to the many available sources of high-quality, industry-level data on PPNR components. Only by using these data will PPNR stress testing form the basis of reliable risk management decisions and be taken seriously by bank executives.
Juan M. Licari, PhD, is Chief International Economist with Moody's Analytics. As the Head of Economic and Credit Research in EMEA, APAC and Latin America, Juan and his team specialize in generating alternative macroeconomic forecasts and building econometric tools to model credit risk portfolios.