
    Stress Testing of Retail Credit Portfolios

    In this article, we divide the stress testing process for retail portfolios into four steps, highlighting key activities and providing details about how to implement each step.

    Practitioners apply various methods of portfolio analysis to evaluate the credit risk of retail debt.

    It also discusses how Moody’s Analytics leverages panel-data and time-series econometrics to (i) understand the dynamic behaviour of a bank’s risk drivers and their interactions and feedback effects, (ii) quantify their sensitivities to changes in the macro economy, and (iii) produce forward-looking projections that are consistent with one another and with the shape of the future economic cycle.

    Retail credit methodologies – stress testing process steps

    Based on our experience, the stress testing process for retail portfolios can be segmented into four key steps:

    1. Data collection
    2. Model development
    3. Model validation
    4. Model forecasting/stress testing

    Each step should be fully documented, covering any assumptions made and any data manipulation performed. The key estimation and validation results should be fully documented to ensure complete transparency and to achieve a smooth knowledge transfer process.

    Step 1 – Data collection

    Historical data needs to be collected for as many years as possible across asset classes and geographies.

    1. Endogenous variables: The models will need observed performance for the endogenous variables across time and across asset classes, geographies, and industries/sectors.

    • Examples of these risk parameters are: defaults, severity of losses, and prepayments.
    • Additional performance metrics, such as early arrears, can also add value to the modelling effort (these metrics can serve as early warning indicators for defaults). Examples of these are: 30-59-day, 60-89-day, 90-plus-day, etc.

    2. Portfolio characteristics: In order to understand and quantify the quality of the underlying assets in a bank’s portfolio, the modeller needs information about the characteristics of the admission policy profile for loans and lines of credit, interest rate/pricing information for assets and liabilities, term/maturity of the exposures, etc.

    • Examples of these are: region and industry breakdown, interest rates, loan-types, LTVs (loan-to-values), and credit scoring distributions. These will act as right-hand side or control variables.

    3. Macroeconomic data: The second subset of right-hand side variables will consist of macro and sector-specific data. Banks should have an extensive macro data warehouse with historic and forecast variables across countries, sub-national regions, and industries. An analyst typically leverages this extensive dataset to test for the strongest (and most consistent) correlations between the endogenous and macro variables.

    Examples of retail credit data structures are:
    • Segment-vintage-time data (SVTD): SVTD is the most common template for the performance data. The portfolio is grouped by segments (mainly business-driven sub-portfolios), vintages (monthly, quarterly, or even annual cohorts, depending on the size of the portfolio), and time (monthly or quarterly observations on the performance of segments and cohorts). Panel-data and dynamic panel-data techniques are used to model these portfolios (see the data layout sketch after this list).
    • The key performance components become: (a) segment quality (rank-ordering of sub-portfolios, channel distributions, customer groups, or others), (b) life cycle or seasoning of the cohorts (nonlinear relationship between performance and age of the accounts), (c) vintage quality/risk (rank-ordering of cohorts according to acquisition and other policies), and (d) exposure of the accounts to the underlying economic cycle.
    • Vintage-time data: A special case of the previous type is when there is a single segment/group, but the vintage and time dimensions remain relevant. For these portfolios, the modeller can identify the life cycle, vintage quality, and time components.
    • Segment-time data: There are cases in which the vintage decomposition is neither feasible nor desired. Some portfolios are grouped into segments (according to business decisions or risk categories) and the metric to be modelled (for example, delinquency or default rate) is observed over time. This process becomes a standard (balanced) panel-data model. A platform should be equipped with all the standard econometric tools to handle these models. Techniques such as pooled-OLS, random and fixed effects, or Arellano-Bond estimators can be tested on these portfolios.
    • Time data: Time-series tools can be leveraged to handle portfolios whose performance is measured through several time-series variables. Multi-variate time-series techniques such as vector autoregressive estimations can be implemented to capture the dynamic behaviour of credit portfolios. Vector autoregression (VAR) and structural VAR tools are widely used in econometrics for forecasting and simulation purposes. Similarly, AutoRegressive Integrated Moving Average (ARIMA) and Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) methods can also be tested.
    • Account/loan level data: Credit behaviour can be modelled at an individual account level, providing forecasted values for each account/loan. The detailed nature of this data lends itself towards binary outcome models (for instance, a binomial regression on a default indicator). Variables for this approach could include segment, vintage, age, and time variables. The segment splits could be any field dividing the accounts at origination, such as product, region, or risk. The age variable measures the time elapsed since the point at which modelling starts, for example, time since origination or time since charge-off. The vintage and time variables can be either numeric fields modelling these aspects or, alternatively, macroeconomic and business data representing these components. An example of business data that could be utilised for modelling the vintage aspect is an application score.
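
    To make the SVTD structure concrete, below is a minimal sketch of such a layout in Python with pandas; the column names, index choice, and values are illustrative assumptions rather than a prescribed schema.

    import pandas as pd

    # Segment-vintage-time layout: each row is the performance of one
    # segment-vintage cohort observed in one calendar period.
    svtd = pd.DataFrame({
        "segment": ["prime", "prime", "subprime", "subprime"],
        "vintage": ["2006Q1", "2006Q1", "2006Q1", "2006Q2"],
        "period":  ["2006Q2", "2006Q3", "2006Q2", "2006Q3"],
        "age":     [1, 2, 1, 1],          # quarters on book (period - vintage)
        "default_rate": [0.002, 0.004, 0.010, 0.008],
    })

    # Panel-data techniques index the data by cross-section unit and time;
    # here the unit is the segment-vintage cohort and the clock is the period.
    svtd = svtd.set_index(["segment", "vintage", "period"]).sort_index()
    print(svtd)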

    Modellers can run equations using time-series, cross-section, and panel-data techniques. Several estimation methods are available: OLS, MLE, GLM, GMM, pooled-OLS, fixed or random effects, Arellano-Bond, quantile techniques, probit and logit, VAR, ARIMA, GARCH, etc.

    Step 2 – Model development

    The objective in this phase is to explain as much of the variability of risk parameters (endogenous variables) as possible, making use of (i) internal/portfolio drivers and (ii) macroeconomic and other external factors. The specific estimation method (model) will depend on the nature of the historical data collected.

    There are three alternative model structures depending on the depth and aggregation of the historic data:

    1. Fully aggregated model: in case there is only aggregate, market-level data for the asset class as a whole
    2. Segmented data: in case more granular performance data are collected, with dimensions across banks, countries/regions, and/or sectors/industries
    3. Loan-specific data: in case there is performance data at the loan or customer level

    Note that modellers can always aggregate up from a granular segmentation into an aggregate model; that is, go from (3) to (2) or (1) or from (2) to (1).

    Step 2.1 – Modelling, Case 1: Aggregate performance

    If the dataset that is put together contains observed performance for the endogenous variables across time, with no other dimension (no country or industry breakdown, no loan-specific segmentation, etc.), the modeller can make use of multi-variate time-series techniques, such as VAR and S-VAR (structural VAR). This point is illustrated by concentrating on three left-hand-side variables (represented together as the vector y_t): PD (or default metric), LGD (or severity/recovery metric), and prepayment risk. These can be modelled simultaneously, against their lags and against a list of internal and external drivers (x1_t represents a vector of internal drivers or portfolio characteristics, x2_t stands for a vector of macroeconomic factors):
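
    The equation referenced here is not reproduced in this version of the article; a plausible reconstruction from the surrounding description, with the exact notation being an assumption, is:

        y_t = \alpha + \sum_{k=1}^{p} \Phi_k y_{t-k} + B_1 x^1_t + B_2 x^2_t + \varepsilon_t \qquad (A.1)

    where y_t = (PD_t, LGD_t, PP_t)' collects the default, severity, and prepayment metrics.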

    This aggregate model will produce dynamic forecasts for y_t consistent with the assumptions specified for the economic series (embedded in (x2_{t+1}, x2_{t+2}, ..., x2_{t+M})). By changing the future values of the macro series, the model will produce alternative shapes for the vectors. Sensitivity and scenario analyses can be applied to this model by changing the future values of different sets of macro drivers. Standard impulse-response exercises can be carried out, as is common practice in time-series analysis. An important feature of the proposed methodology is that it can allow for interactions between the three vectors; that is, prepayment and credit risk are not modelled independently.
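
    Below is a minimal sketch of this kind of estimation in Python using statsmodels; the series, lag order, and variable names are illustrative assumptions, not the article's actual specification.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Illustrative quarterly data: three endogenous risk metrics (y_t) and
    # two exogenous macro drivers (x2_t).
    rng = np.random.default_rng(0)
    idx = pd.period_range("2000Q1", periods=60, freq="Q").to_timestamp()
    endog = pd.DataFrame(rng.normal(size=(60, 3)), index=idx,
                         columns=["pd", "lgd", "pp"])
    exog = pd.DataFrame(rng.normal(size=(60, 2)), index=idx,
                        columns=["gdp_growth", "unemployment"])

    # Equation (A.1): each endogenous variable regressed on its own lags,
    # the lags of the other endogenous variables, and the macro factors.
    results = VAR(endog, exog=exog).fit(maxlags=2)

    irf = results.irf(8)        # standard impulse-response exercise
    print(results.summary())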

    Note that the economic series were taken as exogenous and the future paths of y_t do not affect the shape of the economic cycle; there are no feedback effects between credit performance and the economy. Models in line with equation (A.1) can work very well in cases where the modeller can isolate the economy from the outcome of the risk parameters, for instance when the sector or industry being modelled is not systemically important. Alternatively, if we are interested in analysing the effects of some macroeconomic shock to an asset class or a sector that is systemically important (key to the underlying performance of the economy), the system (A.1) needs to be generalised to incorporate some macroeconomic variables into the left-hand side of the equation.
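
    A plausible reconstruction of the generalised system, again with assumed notation, stacks the endogenous macro subgroup (defined in the next paragraph) into the left-hand-side vector:

        \tilde{y}_t = (y_t', \tilde{x}_t^{2\,\prime})', \qquad \tilde{y}_t = \alpha + \sum_{k=1}^{p} \Phi_k \tilde{y}_{t-k} + B_1 x^1_t + \bar{B}_2 \bar{x}^2_t + \varepsilon_t \qquad (A.2)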

    The vector of macroeconomic data has been split into endogenous (x̃2_t) and exogenous (x̄2_t) subgroups. For systemically important asset classes or models, the effects of, say, a very high PD outcome will influence the performance of the economy and vice versa. There is now a clear and transparent feedback effect that should be present in a market-wide stress testing framework.

    Step 2.2 – Modelling, Case 2: Segmented data

    If the dataset contains observed performance for the three vectors over time and across banks, industries/sectors, or countries, dynamic panel-data techniques can be used. The estimation equation looks like (A.1) but with an additional dimension coming from the country, industry/sector, and/or bank to which each observation belongs.

    This segment is referred to as ‘j’. Equations (A.1) and (A.2) become:
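
    A plausible reconstruction of the segmented system, with assumed notation, is:

        y_{j,t} = \alpha_j + \sum_{k=1}^{p} \Phi_k y_{j,t-k} + B_1 x^1_{j,t} + B_2 x^2_t + \varepsilon_{j,t} \qquad (B.1)

    with a systemically important variant (B.2) obtained, as in (A.2), by moving the endogenous macro subgroup into the left-hand-side vector.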

    This version of the model will produce values for the endogenous variables across all segments ‘j’, yj,t. Aggregating over all possible segments will provide us with a metric similar to the one described in (A.1). In other words, the aggregate model from (1) is simply a special case of the segmented equation (but with a single segment). As was the case with the previous estimation method, dynamic panel-data tools will allow us to run sensitivity and scenario analyses, perform impulse-response exercises, and test the shape of the three vectors under different economic assumptions.

    Equation (A.2) extends the notion of systemically important segments to allow for interaction between macro factors and portfolio performance. The more granular the specification, the less relevant the systemic dimension of the model becomes. This point is a key weakness of loan-level and other very granular modelling techniques when they are applied to market-wide stress testing exercises: they lack the systemic dimension and take macro factors as exogenously given.

    Step 2.3 – Modelling, Case 3: Account or client-level performance

    If the data that are collected are granular enough to have loan-specific performance, the structure of the equation is that of a discrete-choice model. Several techniques can be tested on these equations (probit, logit, censored regressions) depending on the exact nature of the historic data. To illustrate the structure of a generic loan-specific equation, consider ‘i’ as the index for a specific loan that belongs to industry or country ‘j’ observed at a point in time ‘t’, as represented in equation (C).
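
    A plausible reconstruction of this loan-level equation, with assumed notation where F is a probit or logit link and w_{i,j,t} collects the loan-specific characteristics, is:

        \Pr(D_{i,j,t} = 1) = F(\alpha_j + \beta' w_{i,j,t} + \gamma' x^1_{j,t} + \delta' x^2_t) \qquad (C)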

    The output of a model like (C) can also be aggregated up into segment predictions, in line with (B.1) or even (A.1) if the aggregation is done on all dimensions (but ‘t’).

    Step 3 – Model validation

    The model development phase concludes with strict in- and out-of-sample validation exercises. Forecasting performance and model robustness are tested, and the results are fully documented. Below is a summary of some of the recommended validation tests:

    i. In-sample fit and residual analysis.
    Goodness of fit is studied for all equations to ensure a model presents no estimation bias or asymmetries. Several statistics help a modeller understand the goodness of fit of the model (R2, adjusted R2, RMSE, the likelihood if normality is assumed, etc.). Understanding correlation patterns in the residuals is also important, and residual analysis helps identify and control for outliers.

    ii. Out-of-sample accuracy.
    It is standard to work with holdout samples in order to (a) re-estimate the equations using this subsample and (b) compare the model predictions against actual history. The closer the model predictions are to observed values, the better. This forecasting accuracy is analysed through mean squared errors and mean absolute errors. The holdout sample period can vary from six to 12 months, sometimes even 24 months (if there is a long historic sample with which to re-estimate the model).
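
    Below is a minimal sketch of this holdout exercise in Python with statsmodels; the series, window length, and variable names are illustrative assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, holdout = 96, 12
    x = pd.DataFrame({"unemployment": rng.normal(6, 1, n)})
    y = 0.01 + 0.002 * x["unemployment"] + rng.normal(0, 0.001, n)

    # (a) Re-estimate on the truncated sample...
    fit = sm.OLS(y[:-holdout], sm.add_constant(x[:-holdout])).fit()
    # (b) ...and compare predictions against the held-out history.
    pred = fit.predict(sm.add_constant(x[-holdout:]))

    rmse = np.sqrt(np.mean((y[-holdout:] - pred) ** 2))
    mae = np.mean(np.abs(y[-holdout:] - pred))
    print(f"RMSE={rmse:.5f}  MAE={mae:.5f}")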

    iii. Model robustness.
    With the holdout samples described above, the modeller should also be interested in testing how stable and robust their models are. The modeller needs to compare the estimation outputs of the full vs. holdout samples, looking for cases where some parameters change their signs or lose predictive power.

    Table 1. Example of a Dynamic Panel-Data Equation, in Line with Equation (B). Source: Moody's Analytics

    Step 4 – Generating stressed predictions

    After the model has been developed and validated, the equation can finally be used to produce outputs; that is, predicted vectors for defaults, severities, and prepayments. Alternative assumptions on the future values of the macroeconomic series (reflected in (x2_{t+1}, x2_{t+2}, ..., x2_{t+M})) will produce alternative shapes for the endogenous variables. With the estimated functional form for the model equation, the modeller will be able to shock the macro drivers with alternative assumptions and generate predicted values for all left-hand-side variables.
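
    A minimal sketch of this step, using a single-equation simplification rather than the full system; all series, coefficients, and shock values are illustrative assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 80
    X = pd.DataFrame({"gdp_growth": rng.normal(2, 1, n),
                      "unemployment": rng.normal(6, 1, n)})
    dr = 1 / (1 + np.exp(-(-4 - 0.3 * X["gdp_growth"]
                           + 0.2 * X["unemployment"]
                           + rng.normal(0, 0.1, n))))
    y = np.log(dr / (1 - dr))      # logit transform keeps predictions in (0, 1)
    fit = sm.OLS(y, sm.add_constant(X)).fit()

    # Shock the macro drivers with alternative future assumptions.
    scenarios = pd.DataFrame({"gdp_growth": [2.0, -3.0],
                              "unemployment": [6.0, 10.0]},
                             index=["baseline", "severe"])
    pred = fit.predict(sm.add_constant(scenarios, has_constant="add"))
    print(1 / (1 + np.exp(-pred)))     # predicted default rates per scenario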

    Figure 1. Forecast vectors: examples for mortgages, auto loans, and small business loans. Source: Moody's Analytics

    Stress testing framework for retail portfolios

    Techniques used for the stress testing of retail exposures

    Moody’s Analytics leverages panel-data and time-series econometrics in order to (i) understand the dynamic behaviour of the bank’s risk drivers and their interactions and feedback effects, (ii) quantify their sensitivities to changes in the macro economy, and (iii) produce forward-looking projections that are consistent with one another and with the shape of the future economic cycle.

    The choice of a specific technique depends upon the availability and structure of the data. Datasets can typically be categorised into the following types:

    1. Very granular performance data (e.g., loan-level information)
    2. An intermediate, segmented data set (e.g., sub-portfolio data segmented across countries and across risk levels)
    3. An aggregated set of time-series (for portfolios and sub-portfolios across regions).

    Type 1 datasets allow modellers to develop granular, bottom-up stress testing set-ups. A top-down approach is typically applied to datasets in line with Type 3. In many instances, an intermediate approach such as Type 2 is the optimal choice.

    Example of a mortgage portfolio – segmented by cohorts

    In order to reduce the number of variables in the model while maintaining the explanatory power that age provides, a cubic spline function is applied to capture the nonlinear relationship between defaults and months-in-book. After identifying the life cycle component, the risk heterogeneity of defaults across vintages and the seasonality of default rates over time are modelled.
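
    Below is a minimal sketch of the spline construction in Python, using patsy's natural cubic spline basis; the data, the choice of five degrees of freedom, and the variable names are illustrative assumptions.

    import numpy as np
    import statsmodels.api as sm
    from patsy import dmatrix

    rng = np.random.default_rng(3)
    mob = rng.integers(1, 72, size=500)                 # months-in-book
    hump = 0.02 * np.exp(-((mob - 24) / 18.0) ** 2)     # hump-shaped life cycle
    default_rate = hump + rng.normal(0, 0.002, size=500)

    # A handful of spline degrees of freedom replaces dozens of age dummies
    # while preserving the nonlinear defaults-vs-age relationship.
    basis = dmatrix("cr(mob, df=5)", {"mob": mob}, return_type="dataframe")
    fit = sm.OLS(default_rate, basis).fit()
    life_cycle = fit.predict(basis)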

    Figure 2. Stress Testing Equations – Macro Drivers of a Mortgage Portfolio. Source: Moody's Analytics

    After controlling for these components (age, quality, and seasonality), the modeller should consider the effects of external, macro drivers. With economic factors included, the modeller can run macro stress testing exercises.

    Retail credit methodologies – alternative estimation methods

    The toolkit of estimation methods that can be leveraged for stress testing purposes is quite comprehensive. This section lists examples of alternative methodologies whose applications will depend on the nature and availability of data.

    Standard OLS estimations

    The OLS estimation fits a model of a dependent variable on independent variables using linear regression, estimated by the ordinary least squares method. Note that although the estimation is done on a linear equation, the modeller can still apply nonlinear transformations to dependent and independent variables before running the estimation (for example, logistic or logarithmic mappings).
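
    A minimal sketch with statsmodels, applying one such nonlinear (logistic) mapping before a linear estimation; the data are illustrative.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    x = rng.normal(size=100)
    rate = 1 / (1 + np.exp(-(-2 + 0.8 * x + rng.normal(0, 0.3, size=100))))

    X = sm.add_constant(x)
    y = np.log(rate / (1 - rate))    # logistic mapping of the dependent variable
    print(sm.OLS(y, X).fit().summary())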

    Quantile regressions

    The QREG command fits quantile (including median) regression models, also known as least-absolute-value (LAV or MAD) models and minimum L1-norm models. Quantile regression includes an asymmetry parameter that weights the possibly distinct costs of under-prediction and over-prediction. When this parameter is set to 0.5, positive and negative errors are weighted equally and the best predictor is the median, which does not give as much weight to outliers. When the asymmetry parameter is 0.7, the loss is asymmetric: large positive errors are penalised more heavily than negative errors.
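
    In Python, statsmodels' QuantReg plays a similar role to QREG; in the sketch below (data are illustrative), the quantile argument q corresponds to the asymmetry parameter.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    x = rng.normal(size=200)
    y = 1.0 + 0.5 * x + rng.standard_t(3, size=200)   # heavy-tailed errors
    X = sm.add_constant(x)

    median_fit = sm.QuantReg(y, X).fit(q=0.5)   # symmetric loss: the median
    upper_fit = sm.QuantReg(y, X).fit(q=0.7)    # asymmetric loss
    print(median_fit.params, upper_fit.params)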

    XTREG Regression (FE, RE, and MLE option)

    The XTREG command is a regression technique that allows FE (fixed effects), RE (random effects), and MLE (maximum likelihood estimation) options. The MLE option maximises the log-likelihood function, the FE option uses a fixed effects estimator, and the RE option uses a random effects estimator.
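
    A minimal sketch of the FE and RE options in Python, using the linearmodels package as a stand-in for XTREG; the panel structure and data are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from linearmodels.panel import PanelOLS, RandomEffects

    rng = np.random.default_rng(7)
    idx = pd.MultiIndex.from_product(
        [[f"seg{i}" for i in range(10)], range(20)],
        names=["segment", "period"],
    )
    df = pd.DataFrame({"x": rng.normal(size=200)}, index=idx)
    df["y"] = 0.5 * df["x"] + rng.normal(size=200)

    fe = PanelOLS(df["y"], df[["x"]], entity_effects=True).fit()   # FE option
    re = RandomEffects(df["y"], df[["x"]]).fit()                   # RE option
    print(fe.params, re.params)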

    Discrete choice models (PROBIT and LOGIT commands)

    Discrete choice models are appropriate where there is dichotomy in the required dependent variable. These models can be utilised if loan-specific data are provided (i.e., modelling a default indicator) or, alternatively, a vintage-level indicator (default rate > 3%, for instance).

    • PROBIT: The probit command fits a maximum-likelihood probit model (binary regression).
    • LOGIT: The logit command fits a maximum-likelihood logit model (logistic regression).
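
    A minimal sketch of both fits on a simulated default indicator; the data are illustrative.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    X = sm.add_constant(rng.normal(size=(500, 2)))
    p = 1 / (1 + np.exp(-(X @ np.array([-2.0, 0.8, -0.5]))))
    d = rng.binomial(1, p)                 # 1 = default, 0 = performing

    logit_fit = sm.Logit(d, X).fit(disp=0)
    probit_fit = sm.Probit(d, X).fit(disp=0)
    print(logit_fit.params, probit_fit.params)
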
    Vector AutoRegressive techniques

    The VAR model fits a multivariate time-series regression of each dependent variable on lags of itself and on lags of all the other dependent variables. A variant known as the VARX model also includes exogenous variables.

    AutoRegressive Integrated Moving-Average Modelling

    ARIMA fits a model of a dependent variable on independent variables where the disturbances are allowed to follow a linear autoregressive moving-average (ARMA) specification. The dependent and independent variables may be differenced or seasonally differenced to any degree. When independent variables are included in the specification, such models are often called ARMAX models. When independent variables are not specified, the model reduces to a Box-Jenkins ARIMA model in the dependent variable.
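
    A minimal sketch of an ARMAX-style fit with statsmodels; the series, orders, and names are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(9)
    n = 120
    exog = pd.Series(rng.normal(size=n), name="unemployment")
    y = pd.Series(np.cumsum(rng.normal(size=n)) + 0.3 * exog,
                  name="default_rate")

    # order=(1, 1, 1): one autoregressive lag, first-differencing, one MA lag.
    fit = ARIMA(y, exog=exog, order=(1, 1, 1)).fit()
    print(fit.summary())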

    Arellano-Bond estimation

    Linear dynamic panel-data models include p lags of the dependent variable as covariates and contain unobserved panel-level effects, fixed or random. By construction, the unobserved panel-level effects are correlated with the lagged dependent variables, making standard estimators inconsistent. Arellano and Bond (1991) derived a consistent generalised method of moments, or GMM, estimator for the parameters of this model; XTABOND implements this estimator. This estimator is designed for datasets with many panels and few periods, and it requires that there be no autocorrelation in the idiosyncratic errors.

    Application: technical description of the vintage model

    For a given segment, say i = 1, ..., N, we have aggregate data on a set of sequential vintages observed over subsequent time periods. A vintage indicator is defined as v = 1, ..., V, and the time indicator is the standard t = 1, ..., T.

    The point here is that vintage is a time-series concept: the February 2006 vintage followed the January 2006 vintage. Thus, a time dimension is running in two different directions. In other words, there is a cross section of a time series of time series, as opposed to just the cross section of time series that is in standard panel datasets.

    These types of data are commonly analysed in consumer credit. One has a vintage of loans that all originated at a specific time in a specific segment. We favour modelling the nonlinear life cycle of the loans using spline functions, and then incorporating two additional components, called ‘vintage quality’ and ‘prevailing conditions’, which are constant within vintage and time, respectively.

    The basic model is thus of the form:
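
    The equation is not reproduced in this version of the article; a reconstruction consistent with the description below (the notation is an assumption) is:

        y_{ivt} = \mu_i + f_i(t - v) + \beta' x_t + \gamma' z_{iv} + \delta' r_{ivt} + \varepsilon_{ivt}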

    where x_t is a set of variables that help define the macroeconomic conditions or the internal policy variables that the segment/sector i faces at time t, and z_iv are variables that define the conditions (including macroeconomic factors and internal policy variables) that pertained at the time each vintage was formed. The μ_i variables are region-based fixed effects. One can also have variables that vary over both vintage and time, contained in r_ivt. These variables, which describe how the economy of segment/sector i has altered since origination, often prove to be very useful. The function f_i(t – v) is a nonlinear baseline life cycle that might also be a function of macroeconomic or internal factors; t – v is just the age of the vintage at time t.

    This model is estimated by OLS, GLS, MLE, GMM, or similar approaches suited to unbalanced dynamic panel data (pooled-OLS, fixed effects, random effects, etc.). Careful attention is paid to functional form to ensure that predicted delinquency or default rates remain bounded between zero and unity.
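
    A minimal sketch of such an estimation in Python: segment fixed effects, a natural cubic spline in age = t – v, a macro driver standing in for x_t, and an application score standing in for vintage quality z_iv; a logit transform of the default rate keeps inverted predictions bounded in (0, 1). All data and names are illustrative assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 2000
    df = pd.DataFrame({
        "segment": rng.choice(["north", "south", "west"], size=n),
        "age": rng.integers(1, 60, size=n),         # t - v, months on book
        "unemployment": rng.normal(6, 1, size=n),   # x_t, prevailing conditions
        "app_score": rng.normal(600, 40, size=n),   # z_iv, vintage quality
    })
    dr = 1 / (1 + np.exp(-(-5 + 0.02 * df["age"]
                           + 0.15 * df["unemployment"]
                           - 0.005 * (df["app_score"] - 600)
                           + rng.normal(0, 0.2, size=n))))
    df["logit_dr"] = np.log(dr / (1 - dr))          # bounded once inverted

    fit = smf.ols("logit_dr ~ C(segment) + cr(age, df=4)"
                  " + unemployment + app_score", data=df).fit()
    print(fit.params)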

    Figure 3. Home Prices Assumptions. Source: Moody's Analytics
    Figure 4. Geographic Heterogeneity of Mortgage Defaults. Source: Moody's Analytics
    Figure 5. Mortgage Default Rates – Baseline Forecasts. Source: Moody's Analytics
    Figure 6. Mortgage Default Rates – Severe. Source: Moody's Analytics