Alternative macroeconomic scenarios identify economic risks to help banks and firms determine the impact of adverse economic conditions on their balance sheets and investment portfolios.

Although alternative scenarios have been used most extensively by financial institutions to comply with post-crisis regulatory stress-testing exercises, more recent interest has focused on implementation of the IFRS 9 and CECL accounting standards. In this article, we describe the methodology used by Moody’s Analytics to assign probabilities to its regularly produced alternative macroeconomic scenarios and to calibrate these scenarios by taking into consideration recent post-crisis economic conditions.

Both the IFRS 9 and CECL accounting standards move from an incurred loss impairment model to a forward-looking framework that requires banks to calculate lifetime expected credit losses.^{1} Because of this, most banks are opting to run their analyses conditional on a range of future macroeconomic outcomes. These outcomes therefore need to be determined by generating multiple alternative scenarios rather than relying on one baseline projection.

An alternative scenario’s severity needs to be determined quantitatively and with a high degree of rigor so that severity can be compared easily across scenarios. This requires calibrating scenarios by assigning probability weights that reflect each scenario’s severity.

**Moody’s Analytics standard alternative scenarios**

Moody’s Analytics produces a number of alternative macroeconomic scenarios each month, in addition to its baseline forecast, to meet client demand for a range of possible economic outcomes. The baseline scenario is the most likely scenario and is designed to fall in the middle of a distribution of possible outcomes. Since the chances of the economy realizing any specific time path, no matter how reasonable, are small, the baseline is viewed as representing an outcome in which there is a 50% probability that economic conditions will be worse and a corresponding 50% probability that they will be better over the forecast horizon.

Moody’s Analytics also produces nine alternative scenarios each month along with the baseline (see Table 1). The same hypothetical events drive several of these alternative scenarios, but they occur with varying severity. Of these scenarios, the two recession scenarios have 1-in-10 and 1-in-25 probabilities, the two upside scenarios have 1-in-10 and 1-in-25 chances of occurring, and the scenario of a slow recovery has a 1-in-4 chance. In other words, the 1-in-25 probability downside scenario is constructed so that there is a 96% probability that the economy would perform better over the forecast horizon and a 4% probability that it would perform worse. Since many possible events could produce the same downside outcome for the economy, the events included in the scenario are those considered most likely at the time the scenario is constructed.

Moody’s Analytics also produces two other 1-in-10 downside scenarios, but these have different narratives. One scenario is driven by an unanticipated wage-price spiral leading to much higher near-term inflation; the other shows the U.S. economy entering a severe recession over the next five years. In addition, there is a 1-in-25 scenario characterized by economic growth that remains below its prerecession potential rate indefinitely. A 1-in-10 scenario in which oil prices stay low at $35 per barrel over the next three years rounds out the nine alternative scenarios, which are updated monthly.

**Scenario calibration approaches**

With more financial institutions being required to adhere to an expected credit loss impairment framework both globally and in the United States, the construction of alternative time paths for the economy based on hypothetical events remains important, even as supervisory stress-testing regimes are applied more narrowly. Although the forthcoming CECL standard currently does not provide prescriptive guidance on the use of economic scenarios, IFRS 9 explicitly instructs institutions outside of the United States to run their credit loss forecasts using probability-weighted scenarios, highlighting the importance of appropriate scenario calibration.

The most basic approach to calibrating a downside scenario of a given severity is to compare it with episodes in observed history. These episodes could be deep recessions, house-price collapses, or a severe correction in equity markets. One way to assign probabilities under this approach would be to count the number of times the economy has suffered a downturn of similar severity in past business cycles. For example, in the last 50 years, there have been only two recessions in which the unemployment rate reached or exceeded 10%. Therefore, a 4% probability could be assigned to any scenario in which the peak unemployment rate exceeds 10%.
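As a rough sketch, this counting exercise can be scripted. The peak unemployment figures below are approximate values for post-1968 U.S. recessions, included for illustration only:

```python
import numpy as np

# Approximate peak unemployment rates (%) in post-1968 U.S. recessions,
# for illustration only: 1969-70, 1973-75, 1980, 1981-82, 1990-91,
# 2001, and 2007-09.
peak_unemployment = np.array([6.1, 9.0, 7.8, 10.8, 7.8, 6.3, 10.0])

threshold = 10.0  # scenario severity: peak unemployment reaches 10% or more
episodes = int(np.sum(peak_unemployment >= threshold))

# Crude probability over a 50-year window: 2 such episodes in roughly
# 50 years, i.e. about a 1-in-25 (4%) annual chance of a comparably
# severe downturn.
probability = episodes / 50.0
print(f"{episodes} episodes, implied annual probability {probability:.0%}")
```

This is exactly the kind of first-pass historical calibration described above, and it inherits all of the drawbacks discussed next.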

This historical approach is a useful first pass in scenario calibration, but it has several important drawbacks. First, history may not provide a sufficient number of observations to judge the severity of a specific scenario with a high degree of confidence, making the resulting probabilities insufficiently robust. Second, even if long data samples are available for most economic variables in a country, as they are in the United States, economies undergo structural changes over long periods of time, so historical data may not be suitable for evaluating future economic performance. Finally, this approach assigns the same severity regardless of current economic conditions. It ignores the path dependency that can characterize macroeconomic outcomes and may not allow for easy comparison of ostensibly similar scenarios.

Because of these drawbacks, Moody’s Analytics assigns probabilities to scenarios using a simulation-based approach. The advantage of such an approach is that it can generate a large number of alternative time paths for major macroeconomic variables that can be used to evaluate the severity of scenarios.

This is operationalized by first developing an econometric model that describes the behavior of the variables of interest, and then using the model to simulate the alternative paths. In particular, Moody’s Analytics uses a vector autoregression model of the macroeconomy to generate the alternative forecasts through a Monte Carlo procedure. We then calibrate the scenario by calculating the proportion of these time paths in which the unemployment rate exceeds a certain level.

Before illustrating the simulation procedure, it is important to describe VAR models in more detail and to contrast them with the more traditional structural models that are used to generate the Moody’s Analytics macroeconomic baseline and alternative forecasts.

**Moody’s Analytics structural model**

The model used by Moody’s Analytics to build its baseline and alternative forecasts is a 12,000-equation dynamic simultaneous equations, or structural, model of the global economy. The U.S. part of the global model includes more than 2,000 equations. The global model is specified to reflect the interaction between aggregate demand and supply in the economy using equations that are statistical links between the variables based on econometric regressions.^{2} For example, growth in real consumer spending per capita is a function of growth in real disposable income per capita as well as other factors.
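A stylized consumption equation of this type can be sketched as follows. This is an illustrative specification, not the model’s actual equation:

```latex
\Delta \ln c_t = \alpha + \beta \, \Delta \ln y^{d}_t + \gamma' z_t + \varepsilon_t
```

Here $c_t$ is real consumer spending per capita, $y^{d}_t$ is real disposable income per capita, and $z_t$ collects the other drivers, with coefficients estimated by econometric regression.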

In the short run, fluctuations in economic activity are primarily determined by shifts in aggregate demand, with the level of resources and technology available for production taken as given. Prices and wages adjust slowly to equate aggregate demand and supply.

In the longer term, changes in aggregate supply determine the economy’s growth potential. The rate of expansion of the resource and technology base of the economy is the principal determinant of economic growth.

**VAR models**

A VAR model is used to capture linear dependencies among a relatively small set of economic time series and generalizes the univariate autoregressive (AR) model. The reduced-form version of this multi-equation model is specified such that each variable to be determined by the model, or endogenous variable, has a separate equation. Each equation contains explanatory variables that include the dependent variable’s own lagged values, the lagged values of all other dependent variables in the system, and a serially uncorrelated error term. A set of exogenous variables determined outside of the VAR model may also be added to each equation to capture global economic conditions. By construction, all equations in the system have identical right-side variables and differ only in their dependent, or left-side, variable.
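In matrix notation, a reduced-form VAR with $p$ lags and exogenous variables can be written as:

```latex
y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + B x_t + \varepsilon_t,
\qquad \varepsilon_t \sim \mathrm{i.i.d.}(0, \Sigma)
```

where $y_t$ stacks the endogenous variables, the $A_i$ are coefficient matrices on the lags, $x_t$ collects the exogenous variables, and $\varepsilon_t$ is the vector of serially uncorrelated errors with covariance matrix $\Sigma$.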

VARs model only time series relationships between variables and are completely agnostic about structural linkages in the economy. For example, whereas many economists believe that the correlation between consumer spending and disposable personal income should be positive, a VAR model that included these variables but featured one or more negative coefficients would not necessarily be respecified.

The use of VARs in macroeconomic forecasting first began to grow in the early 1980s following research published by Professor Christopher Sims, the 2011 Nobel laureate in economics, that critiqued the strong identifying restrictions and other implausible assumptions of many macro models at the time.^{3} Since then, VARs have become widely used in economic forecasting and empirical analysis on account of their data-driven nature, high degree of modeling flexibility, and ease of use.

**Model limitations for scenario calibration**

Structural models of the macroeconomy are well suited to generating stable long-term forecasts, performing inference and policy analysis, and producing sensible shock responses. Unlike a VAR model, a structural model builds up aggregates by first estimating stochastic equations for granular subcomponents as a function of intuitive economic drivers. The forecasts for these subcomponents are then combined in identity equations for the aggregates that hold by definition.

Although structural country models can be used to simulate and calibrate alternative macroeconomic scenarios, their use has important drawbacks. First, because these models have been designed to capture the many dynamic relationships and theoretical dependencies in an economy, they are less convenient for generating the large number of forecasts needed for Monte Carlo simulation, as the model must be solved repeatedly and quickly. In addition, structural country models that rely on cross-country consistency and whose equations depend strongly on variables outside the model would complicate the process further, requiring multiple models to be simulated at once.

Despite the atheoretical nature of a data-driven, time-series model such as a VAR, it can often provide superior forecasting results compared with a structural model. However, the large number of lagged variables required on the right side of each equation forces the model to remain small and prevents it from incorporating many of the details that a structural model can easily accommodate. This is a limitation of VAR models, but a well-specified VAR with a small number of endogenous variables provides the basis for creating the large number of alternative time paths for major macroeconomic variables in a Monte Carlo simulation procedure.

**Monte Carlo simulations**

Monte Carlo simulations are used to determine how random variation affects the sensitivity, performance, or reliability of a system being modeled. The procedure simulates the various sources of uncertainty inherent in dynamic, complex systems and can generate a distribution of values for the major macroeconomic variables under consideration in scenario calibration. For our purposes, the uncertainty inherent in the parameter estimates and error terms of a VAR model is translated into a range of alternative forecasts through the Monte Carlo procedure. The distribution of these time paths allows Moody’s Analytics to assign probability weights to its alternative scenarios.

The following analysis describes the theory behind this procedure. The regression results for any model, structural or VAR, are statistical estimates of the true but unknown model parameters. Therefore, for a given point estimate, there is a confidence interval around this estimate comprising a range of values that will contain the value of the unknown parameter with a specified level of confidence. To reflect the effect of this coefficient uncertainty in each simulation trial, the estimated parameters in the VAR model are replaced with randomly drawn choices from within the confidence interval.

Once all the replacements have been made, the Monte Carlo procedure also takes into account the uncertainty inherent in the error term. This uncertainty is simulated using a random draw from a normal distribution for each regression residual. Accounting for these dual sources of uncertainty in each simulation trial results in an alternative forecast to the baseline produced by the VAR model. This process can be repeated indefinitely, each time resulting in another alternative time path that is at least slightly different from the baseline.
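The two-step draw — coefficient uncertainty, then residual uncertainty — can be sketched with a minimal numpy example. The bivariate VAR(1) below is entirely hypothetical (it is not the Moody’s Analytics model), and for simplicity the coefficients are drawn entry-wise from their marginal sampling distributions; a full implementation would draw from the joint distribution of the estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bivariate VAR(1): point estimates, standard errors of the
# estimates, and the residual covariance matrix. All numbers are invented.
A_hat = np.array([[0.70, 0.10],
                  [0.05, 0.80]])
se    = np.array([[0.05, 0.03],
                  [0.02, 0.04]])
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
chol  = np.linalg.cholesky(sigma)       # to draw correlated residuals

n_sims, horizon = 10_000, 18            # e.g. 18 quarters ahead
y0 = np.array([1.0, 4.0])               # last observed values

paths = np.empty((n_sims, horizon, 2))
for s in range(n_sims):
    # Step 1, coefficient uncertainty: draw a coefficient matrix from the
    # sampling distribution around the point estimates.
    A = A_hat + se * rng.standard_normal((2, 2))
    # Step 2, residual uncertainty: draw correlated shocks each period.
    y = y0.copy()
    for t in range(horizon):
        y = A @ y + chol @ rng.standard_normal(2)
        paths[s, t] = y

# The cross-simulation distribution at each horizon is what gets used for
# calibration, e.g. the 90th percentile of the second variable at the end.
p90 = np.percentile(paths[:, -1, 1], 90)
```

Each pass through the outer loop produces one alternative time path that differs at least slightly from the baseline, exactly as described above.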

The distribution of these forecasts provides the information needed to assign probabilities to the Moody’s Analytics alternative scenarios. More specifically, the distribution of the unemployment rate—arguably the best overall barometer of an economy’s performance and an important variable for many of the users of the forecasts—is used to calibrate the scenarios.

**VAR model specification**

The VAR model used for the purpose of calibrating alternative scenarios includes real GDP, the GDP price deflator, the difference between the unemployment rate and the non-accelerating inflation rate of unemployment, existing single-family median house prices provided by the National Association of Realtors, the yield on the 10-year U.S. Treasury note, the Standard & Poor’s 500 Composite Price Index, and realized volatility of the S&P 500 (see Table 2). Following testing, a lag length of two quarters was selected for all regression variables. The estimation period was the first quarter of 1980 through the second quarter of 2018. The Federal Housing Finance Agency purchase-only house price index is a tailpipe variable in the model and is driven solely by the median house price.

The VAR is estimated using Bayesian methods in which model parameters are treated as random variables with Litterman/Minnesota prior probabilities attached.^{4} Assigning priors to parameters shrinks the parameter set and produces macroeconomic forecasts with lower standard errors than unrestricted VARs, which place no restrictions on the parameters.
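The shrinkage structure of a Litterman/Minnesota prior can be illustrated with a small helper. The parameterization below (overall tightness `lam`, cross-variable tightness `theta`, harmonic lag decay) is one common textbook variant, not necessarily the exact prior used in the Moody’s Analytics model:

```python
import numpy as np

def minnesota_prior_std(sigmas, n_lags, lam=0.2, theta=0.5):
    """Prior standard deviations for VAR lag coefficients under a
    Litterman/Minnesota prior (illustrative parameterization).

    sigmas : residual std devs from univariate AR fits, one per variable
    lam    : overall tightness (prior std of each variable's own first lag)
    theta  : relative tightness of cross-variable lags (< 1 shrinks harder)
    """
    n = len(sigmas)
    std = np.empty((n_lags, n, n))      # indexed [lag, equation i, regressor j]
    for l in range(1, n_lags + 1):
        for i in range(n):
            for j in range(n):
                if i == j:
                    std[l - 1, i, j] = lam / l                        # own lags
                else:
                    std[l - 1, i, j] = lam * theta / l * sigmas[i] / sigmas[j]
    return std

# Prior means are 1 on each variable's own first lag and 0 elsewhere,
# shrinking the VAR toward a set of independent random walks.
std = minnesota_prior_std(sigmas=np.array([1.0, 2.0]), n_lags=2)
```

Tighter priors (smaller `lam`) pull the estimates harder toward the random-walk prior mean, which is the source of the lower forecast standard errors relative to an unrestricted VAR.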

**Calibration procedure**

The estimated VAR model was used to generate 10,000 Monte Carlo simulations running from the third quarter of 2018 through the fourth quarter of 2022. The annualized cumulative change in real GDP at the 50th percentile of these simulations is 2.3%. This is in line with long-term output growth, suggesting that the VAR simulations are reasonable. The full distribution includes years in which the economy is buoyant and GDP growth is above average, as well as the opposite (see Chart 1). The annualized cumulative change in the median house price at the 50th percentile is 3.3% (see Chart 2).
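Given the simulated distribution, the calibration itself reduces to reading off percentiles of the unemployment rate. The normal draw below is only a hypothetical stand-in for the VAR-simulated peak unemployment rates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the peak unemployment rate (%) across 10,000
# simulated paths; the real distribution comes from the VAR simulations.
peak_unemployment = rng.normal(loc=5.5, scale=1.5, size=10_000)

# A 1-in-10 downside scenario should be at least as severe as the 90th
# percentile of simulated outcomes; a 1-in-25 scenario matches the 96th.
u_90 = np.percentile(peak_unemployment, 90)
u_96 = np.percentile(peak_unemployment, 96)
print(f"1-in-10 target peak unemployment: {u_90:.1f}%")
print(f"1-in-25 target peak unemployment: {u_96:.1f}%")
```

The scenario narrative then determines how the rest of the forecast is shaped around the calibrated unemployment path.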

For example, two of the regularly produced downside alternative scenarios have the same 1-in-10 probability. However, one assumes that the U.S. recession results in a more accommodative monetary policy and interest rates close to zero, while the other assumes a sharp increase in the federal funds rate to counter inflation. Clearly, despite having the same probabilities, the two scenarios will have vastly different interest rate paths.

With the main scenario variables adjusted, the Moody’s Analytics structural model solves for the remaining variables. The result is a scenario where all the variables are stressed consistently. This internal consistency is important to clients, and achieving this is one of the goals of the Moody’s Analytics scenario construction process.

**Alternative ways of calibrating scenarios**

A question that often comes up concerns the choice of the unemployment rate as the calibration variable. In other words, clients have wondered whether Moody’s Analytics could use other business cycle indicators, such as real GDP or house prices, to assign probabilities to these scenarios. The framework described above does indeed allow for that, and this is one reason why Moody’s Analytics decided to include house prices in the VAR model.

The distribution of the start-to-trough declines in real GDP and house prices under the VAR simulations matches up with the severity of the current 1-in-10 recession scenario, but is somewhat lower than in the current 1-in-25 recession scenario (see Table 3). In 4% of the simulations, the start-to-trough decline in real GDP is 4.1%. For 10% of the simulations, this number is 2.3% (see Chart 4). Similarly, under the VAR simulations, the start-to-trough decline in the median existing-home price is 14.5% for 4% of the simulations and 10.3% for 10% of the simulations (see Chart 5).
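Computing these start-to-trough statistics from simulated level paths is straightforward. The GDP-index paths below are synthetic placeholders; the actual calibration uses the VAR-simulated paths:

```python
import numpy as np

rng = np.random.default_rng(2)

def start_to_trough_decline(path):
    """Fractional decline from the starting level to the lowest
    subsequent point of a simulated level path (0 if the path never
    falls below its start)."""
    return (path[0] - path.min()) / path[0]

# Synthetic stand-in for simulated real GDP level paths: index = 100 at
# the start, then 18 quarters of random growth (hypothetical parameters).
growth = rng.normal(0.005, 0.01, size=(10_000, 18))
levels = 100 * np.cumprod(1 + growth, axis=1)
paths = np.concatenate([np.full((10_000, 1), 100.0), levels], axis=1)

declines = np.array([start_to_trough_decline(p) for p in paths])

# Tail severity thresholds, as reported in Table 3.
d_4pct  = np.percentile(declines, 96)   # exceeded in 4% of simulations
d_10pct = np.percentile(declines, 90)   # exceeded in 10% of simulations
```

The same drawdown calculation applies unchanged to the simulated house price paths.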

Although the distribution of house prices paints a consistent picture, the unemployment rate is still deemed the best single indicator of stress in the economy. This is because, in past business cycles, changes in the unemployment rate have been well correlated with changes in real GDP, but the same cannot be said of house prices.^{5} For example, the Great Recession of 2008-2009 resulted in a 4% cumulative decline in real GDP, a 500-basis-point increase in the unemployment rate, and a 21% decline in the median house price relative to December 2007. However, in previous U.S. recessions, while the relationship between the unemployment rate and GDP growth was similar, home values suffered either modest declines or none at all (see Table 4). This is because those recessions were precipitated by factors other than housing. In short, since house price declines are not well correlated with contractions in real output, they cannot be used to measure the severity of a downturn. Financial institutions recognize this fact and use the unemployment rate as the key variable driving delinquencies and defaults in their loss models.

**Conclusion**

Over the past couple of years, the construction and calibration of alternative macroeconomic scenarios in which the economic forecast deviates from the baseline outlook have become increasingly important in the context of post-crisis stress testing, as well as in the introduction of forward-looking accounting standards.

Although a structural model is best suited for Moody’s Analytics to generate stable, long-term forecasts that also exhibit good shock properties, a limited-variable VAR model is used to calibrate these scenarios. The VAR model in this updated calibration generates a reasonable distribution of alternative forecasts, taking into consideration both coefficient and residual uncertainty. Moody’s Analytics considers the distribution of the Monte Carlo simulations for a single variable—the unemployment rate—to calibrate the scenarios. The model equations and the narrative for the particular scenario are then used to build out the forecasts for the other variables. Though this framework also allows the use of house prices to calibrate the scenarios, the unemployment rate is chosen because unlike house prices, the unemployment rate has been closely related to real output in past business cycles.

^{1} “Project Summary: IFRS 9 Financial Instruments,” IFRS, July 2014, retrieved August 21, 2018.

“Financial Instruments—Credit Losses,” FASB, June 2016, retrieved August 21, 2018.

^{2} M. Zandi, The Moody’s Analytics U.S. Macroeconomic Model, January 2011.

^{3} C.A. Sims, “Macroeconomics and Reality,” Econometrica, Econometric Society, Vol. 48(1) (1980): 1-48.

^{4} R.B. Litterman, “Techniques of Forecasting Using Vector Autoregressions,” Working Paper 115, Federal Reserve Bank of Minneapolis, 1979. T. Doan, R.B. Litterman, and C.A. Sims, “Forecasting and Conditional Projection Using Realistic Prior Distributions,” Econometric Reviews Vol. 3 (1984): 1-100.

^{5} Okun’s law is an empirical observation proposed in 1962 that associates a 2% drop in real output with a 1-percentage point increase in the unemployment rate. However, changes in labor force participation, productivity, and capacity utilization have led to much higher unemployment rates in recent U.S. recessions than Okun’s law would predict.