    Modeling Techniques in Scenario-Based Risk Appetite Management

    To get senior stakeholders to buy into alternative macroeconomic scenarios, risk management and ALM teams must assemble risk models and risk-adjusted performance measurements in their simulation tools. Institutions must switch from a qualitative to a quantitative approach to analysis so they can effectively define risk appetite. This article addresses these issues, along with building repeatable measurements, resolving data gaps, using data flow automation tools, and implementing processes to enforce and monitor such measurements.

    Introduction

    The regulatory stress testing requirements published in the United States and Europe, and soon in Asia-Pacific, are increasingly guiding financial institutions toward scenario-based governance and risk appetite management. From an internal practice perspective, management information reports are now expected to articulate a consistent set of profitability and risk forecasts for different time horizons. Governance practice is therefore shifting from a qualitative approach to a quantified framework that evaluates, through the cycle, the sustainability of the institution’s compliance and its ability to deliver value to shareholders.

    The overarching goal of a risk appetite framework is to provide senior management with a quantitative assessment of profitability, budget, and dividends under different macroeconomic assumptions. Such a framework outlines a variety of scenarios (e.g., a US liquidity crisis, a euro zone sovereign default, or a recession in China) and provides potential mitigation actions. Institutions can then make an informed decision, weighing the cost of hedging against the likelihood and severity of each scenario. For example, the cost of a sovereign credit default swap may be measured against the probability-weighted losses incurred in a sovereign-default scenario.

    Synchronizing profitability with risk forecasts in a macroeconomic scenario presents a significant organizational challenge. Indeed, aside from combining simulations across risk management and asset and liability management systems, measurements need to account consistently for the effects on market factors, credit transitions, and transaction volumes across the usual organizational silos. This challenge sometimes attracts so much focus that key project risks, such as data gaps, are overlooked.

    Scenario narrative and likelihood calibration

    Scenario narrative and severity have traditionally been expressed in terms of frequency, such as a once-in-seven-years market downturn, a once-in-twenty-five-years commodity crisis, or a once-in-a-hundred-years sovereign default. This practice, however, has reduced stress testing to a repetitive exercise, one that fails to account for how the economy evolves from its current state, both worldwide and locally.

    A more informed approach consists of centering the construction of scenarios around a baseline outlook, representative of the economists’ consensus, which increases the relevance of the stress testing exercise to the current situation. The baseline is revisited monthly or quarterly using the latest macroeconomic data and economists’ opinions; alternative macroeconomic scenarios are then built using stochastic analysis, whereby shocks are propagated through global macroeconomic models and measured against the contours of previous business cycles. As a result, each scenario is calibrated in terms of likelihood against the resulting distribution (Figure 1).

    Figure 1. Calibration of scenario likelihood around a baseline consensus
    Source: Moody's Analytics

    With this method, each scenario deemed relevant by senior stakeholders can be extracted from the distribution, along with the economic narrative explaining the possible causes of the scenario compared with the baseline outlook. Ideally, to allow meaningful regressions to key portfolio indicators, scenario definitions need to be accompanied by observable time-series of macroeconomic and market factors that can be drilled down geographically to country and city levels.
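
    As an illustration of this calibration step, the sketch below (in Python, with purely hypothetical figures) draws stochastic shocks around a baseline outlook for a single macroeconomic factor and reads off the likelihood of a candidate adverse scenario. A production framework would rely on a full multi-country macroeconomic model rather than a single normally distributed shock.

        import numpy as np

        # Minimal sketch: place a candidate scenario on a distribution of simulated
        # outcomes around a baseline forecast for one macro factor. All figures are
        # illustrative assumptions.
        rng = np.random.default_rng(42)

        baseline_gdp_growth = 2.0      # baseline consensus, % year-on-year
        shock_volatility = 1.5         # assumed standard deviation of annual shocks, %
        n_simulations = 100_000

        # Stochastic shocks applied around the baseline
        simulated_growth = baseline_gdp_growth + rng.normal(0.0, shock_volatility, n_simulations)

        # A candidate adverse scenario chosen by senior stakeholders
        scenario_growth = -2.5         # e.g., a recession narrative

        # Likelihood calibration: share of simulated outcomes at least as severe
        likelihood = np.mean(simulated_growth <= scenario_growth)
        print(f"Scenario severity: roughly 1-in-{1 / likelihood:.0f} years "
              f"({likelihood:.2%} of simulated outcomes)")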

    This framework can also be used for regulatory scenarios such as the Federal Reserve’s Comprehensive Capital Analysis and Review (CCAR) scenarios, the Financial Services Authority’s Anchor scenarios in the UK, the European Banking Authority’s stress scenarios, the International Monetary Fund’s scenarios, as well as others proposed by local authorities. With the same approach, banks can benchmark and leverage regulatory scenarios throughout the stress testing exercise within a single framework.

    Resolving data gaps

    A key lesson emerging from the practical application of regulatory stress testing is that risk and performance models need to establish how credit behaviors, liquidity cash flows, market risks, profitability, and budget forecasts relate to macroeconomic time-series, by leveraging the historical time-series observable in the portfolio. While most of this data is readily available, for most institutions the only source of credit data is their internal ratings practice, which is based on quarterly financial statements. With at most one point per quarter, the resulting credit time-series imply very static and insensitive credit behaviors, leading to significant noise both in the elasticity models of credit transitions and in the evaluation of correlations. This affects not only credit forecasts, but also the subsequent liquidity behavioral models, which are based on credit ratings, as well as profitability adjusted for credit losses.

    Analysts often assume that such data gaps and time-series deficiencies are simply inherent in the best data available. However, experience in loss forecasting shows that under-sampled historical time-series have a significant impact on the consistency of model outputs. Practitioners faced similar challenges when modeling forecast losses for economic capital measurement. As a solution to data gaps, time-series augmentation techniques have proved efficient in delivering consistent reports over time. This is all the more relevant considering that regulators have used such techniques to calibrate the current parametric regulatory functions.

    Overall, the project risk posed by data gaps in credit time-series cannot be overstated. Traditional modeling and simulation practices for liquidity and risk-adjusted performance measurement need to be revisited to allow proper scenario-driven forecasts.

    Figure 2. Interpolating internal ratings with market-price driven credit time-series
    Source: Moody's Analytics

    Credit time-series augmentation techniques (Figure 2) that use credit estimates based on market prices can significantly improve credit, liquidity, and profitability models leveraged in the risk appetite framework. These techniques are available not only for publicly listed firms, but also for private firms and small- and medium-sized enterprises, as well as sovereign entities.
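
    The sketch below illustrates one possible blending rule with hypothetical figures: a quarterly internal-ratings-implied PD series is re-anchored at each quarter end, while a monthly market-implied PD series drives the intra-quarter dynamics. It is a minimal illustration of the interpolation idea, not a prescribed augmentation methodology.

        import pandas as pd

        # Minimal sketch of augmenting a quarterly internal-ratings PD series with a
        # monthly market-implied PD series. The blending rule (re-anchor the market
        # series at each quarterly observation) and all figures are illustrative.
        quarterly_internal_pd = pd.Series(
            [0.010, 0.012, 0.011],
            index=pd.to_datetime(["2015-03-31", "2015-06-30", "2015-09-30"]),
        )
        monthly_market_pd = pd.Series(
            [0.009, 0.011, 0.014, 0.013, 0.012, 0.015, 0.016],
            index=pd.to_datetime(["2015-03-31", "2015-04-30", "2015-05-31", "2015-06-30",
                                  "2015-07-31", "2015-08-31", "2015-09-30"]),
        )

        # Scale factor observed at each quarter end, carried forward between quarters
        anchor = quarterly_internal_pd / monthly_market_pd.reindex(quarterly_internal_pd.index)
        scale = anchor.reindex(monthly_market_pd.index).ffill()

        augmented_pd = monthly_market_pd * scale   # matches internal ratings at quarter ends
        print(augmented_pd.round(4))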

    Experience shows that this approach can deliver robust statistical regressions against macroeconomic assumptions, as well as correlations, providing for consistent forecasts over different time horizons. It ensures quality and repeatability, allowing senior stakeholders to understand trends, acquire reference points, and build trust in the numbers and practice.

    Forecasting key portfolio indicators

    Scenario-based risk appetite management leverages multiple measurements according to different time horizons: short-term liquidity compliance, medium-term net revenue, income volatility, dividend sustainability and, in the long term, capital adequacy (Figure 3). These reports require a comprehensive description of risks that examines the relationship between macroeconomic factors and key portfolio indicators.

    Figure 3. Key portfolio indicators at different time horizons
    Source: Moody's Analytics

    In liquidity modeling, the behavior of a counterparty depends heavily on its own credit situation. Forecasting behavioral cash flows therefore demands a precise description of credit transitions. This is illustrated, for instance, in regulatory liquidity coverage ratio (LCR) calculations, in which the eligibility of bond positions for the liquidity reserve is tied to the credit assessment of the issuer, and in which inflows and outflows depend on the past-due status of the contracts. As a result, the accuracy of liquidity-monitoring models depends on the ability to evaluate realistic credit transitions over a time horizon as short as 30 days; relying on quarterly financial statements alone can lead to a significant underestimation of volatility. Overall, credit and behavioral assumptions propagate across time horizons, affecting liquidity first, then profitability, and consequently capital adequacy and dividend sustainability (Figure 4).

    Figure 4. Key portfolio indicators under stress
    Source: Moody's Analytics
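
    The sketch below, with purely illustrative amounts, haircuts, and eligibility rules, shows how a single issuer downgrade within the 30-day window can move a simplified LCR, computed here as eligible HQLA divided by net 30-day outflows.

        # Minimal sketch of a simplified LCR = eligible HQLA / net 30-day outflows,
        # before and after an assumed issuer downgrade removes a bond from eligibility.
        # Amounts, haircuts, and eligibility flags are illustrative assumptions.
        positions = [
            # (issuer, market value, haircut, eligible before, eligible after downgrade)
            ("Sovereign A", 600.0, 0.00, True, True),
            ("Covered bond B", 250.0, 0.15, True, True),
            ("Corporate C", 200.0, 0.50, True, False),   # assumed downgraded below eligibility
        ]
        net_outflows_30d = 800.0

        def lcr(after_downgrade: bool) -> float:
            hqla = sum(
                value * (1.0 - haircut)
                for _, value, haircut, pre, post in positions
                if (post if after_downgrade else pre)
            )
            return hqla / net_outflows_30d

        print(f"LCR before downgrade: {lcr(False):.1%}")
        print(f"LCR after downgrade:  {lcr(True):.1%}")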

    Assessing volatility

    Risk measurements in the current portfolio can generally be described as either an expected value or a value-at-risk within a risk distribution. Such a distribution is typically built on through-the-cycle assumptions, reflecting cyclical or long-run behaviors, while sensitivities are assessed by applying calibrated shifts on market data.

    In stress testing, however, whether key portfolio indicators represent a median or a tail-risk assessment, forecasting models typically provide an expected outcome for each scenario assumption. In a recession scenario, for instance, an institution might forecast a decrease in the LCR to 105% within a year. In this case, a key question risk managers need to answer is how narrowly the outcome will be distributed around this expected value, so as to gauge the likelihood that the liquidity compliance threshold will be breached even though the expected LCR is compliant.
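
    As a minimal illustration, if the scenario LCR forecast is assumed to be roughly normally distributed around its expected value, with a purely hypothetical dispersion, the breach likelihood of the 100% threshold can be read directly from that distribution.

        from statistics import NormalDist

        # Minimal sketch: probability that the LCR breaches the 100% threshold,
        # assuming a normal forecast distribution. The 4-point standard deviation
        # is an illustrative assumption, not an estimated figure.
        expected_lcr = 1.05     # expected LCR under the recession scenario
        forecast_std = 0.04     # assumed dispersion of the forecast
        threshold = 1.00

        breach_probability = NormalDist(expected_lcr, forecast_std).cdf(threshold)
        print(f"Probability of breaching the threshold: {breach_probability:.1%}")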

    Providing this additional distribution through stochastic modeling might seem like a vast undertaking, given the wide range of simulation inputs (from macroeconomic and market factors to creditworthiness and budget figures). However, undertaking a comprehensive Monte Carlo process across risks can lead to excessive or false precision, misaligned with the simulation time horizon and other key stress testing assumptions.

    Because a key purpose of the stress testing framework is to identify and quantify outcomes of drastic but plausible situations, it is relevant to focus on the key contributors to volatility during a crisis. Spikes in market prices and credit downgrades explain a significant part of such volatility, so a stochastic simulation of macroeconomic-driven credit transitions can provide a good understanding of volatility for each risk and time horizon: short-term liquidity, mid-term profitability, and long-term capital adequacy.
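
    One common way to implement this is a one-factor threshold (CreditMetrics-style) simulation in which an adverse shift of the systematic macroeconomic factor changes transition likelihoods, as in the sketch below. The transition probabilities, factor loading, and scenario shift are illustrative assumptions.

        import numpy as np
        from statistics import NormalDist

        # Minimal sketch of macroeconomic-driven credit transitions using a
        # one-factor threshold model. All parameters are illustrative assumptions.
        rng = np.random.default_rng(0)

        # One-year transition probabilities for a BBB-type obligor (illustrative)
        probs = {"upgrade": 0.05, "stay": 0.85, "downgrade": 0.08, "default": 0.02}
        cutoffs = np.cumsum([probs["default"], probs["downgrade"], probs["stay"]])
        thresholds = [NormalDist().inv_cdf(p) for p in cutoffs]   # default < downgrade < stay

        factor_loading = 0.4     # assumed obligor sensitivity to the systematic macro factor
        scenario_shift = -1.0    # adverse scenario: macro factor centred one std dev lower

        n_sims, n_obligors = 5_000, 1_000
        systematic = scenario_shift + rng.standard_normal((n_sims, 1))
        idiosyncratic = rng.standard_normal((n_sims, n_obligors))
        asset_proxy = factor_loading * systematic + np.sqrt(1 - factor_loading**2) * idiosyncratic

        default_rate = (asset_proxy < thresholds[0]).mean(axis=1)
        downgrade_rate = ((asset_proxy >= thresholds[0]) & (asset_proxy < thresholds[1])).mean(axis=1)
        print(f"Default rate:   mean {default_rate.mean():.2%}, 99th pct {np.percentile(default_rate, 99):.2%}")
        print(f"Downgrade rate: mean {downgrade_rate.mean():.2%}, 99th pct {np.percentile(downgrade_rate, 99):.2%}")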

    Short-term forecasts

    Credit transitions in the liquidity reserve explain a significant part of the volatility in the LCR. Even if high-quality liquid asset (HQLA) positions are replaceable, it is worth simulating how risk can build up by monitoring an extended set of positions and possible replacement issuers. For this purpose, running an analysis of credit value-at-risk on the HQLA portfolio (pre-haircut) provides a good gauge of the LCR forecast distribution (Figure 5).

    Figure 5. Impact of credit transitions on liquidity reserves volatility
    Source: Moody's Analytics
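
    Building on the simplified LCR illustration above, the sketch below takes a distributional view: issuer downgrades that remove positions from eligibility are simulated with assumed probabilities, and a low quantile of the resulting LCR distribution serves as a credit value-at-risk style gauge. All figures are hypothetical.

        import numpy as np

        # Minimal sketch of a credit value-at-risk view on the liquidity reserve:
        # simulate positions losing HQLA eligibility over 30 days and read off a
        # low quantile of the resulting LCR. All inputs are illustrative assumptions.
        rng = np.random.default_rng(7)

        values = np.array([600.0, 250.0, 200.0])       # market values per position
        haircuts = np.array([0.00, 0.15, 0.50])        # applied while eligible
        p_ineligible = np.array([0.001, 0.02, 0.10])   # assumed 30-day prob. of losing eligibility
        net_outflows_30d = 800.0

        n_sims = 50_000
        still_eligible = rng.random((n_sims, len(values))) > p_ineligible
        eligible_hqla = (still_eligible * values * (1.0 - haircuts)).sum(axis=1)
        lcr = eligible_hqla / net_outflows_30d

        print(f"Expected LCR: {lcr.mean():.1%}, 1st percentile: {np.percentile(lcr, 1):.1%}, "
              f"breach probability: {(lcr < 1.0).mean():.2%}")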

    Medium-term forecasts

    Analyzing earnings-at-risk traditionally involves gauging the adverse impact of interest rates and exchange rates on net interest income forecasts. By adding the effect of unexpected credit losses to the earnings in each scenario, the simulation provides a comprehensive assessment of the volatility in forecast incomes (Figure 6).

    Figure 6. Impact of credit transitions on risk-adjusted earnings volatility
    Source: Moody's Analytics
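
    The sketch below combines, with purely illustrative figures, a scenario net interest income forecast, an assumed dispersion from rate and exchange-rate moves, and simulated credit losses, and reads an earnings-at-risk measure off the resulting distribution.

        import numpy as np

        # Minimal sketch of risk-adjusted earnings-at-risk: scenario net interest
        # income, plus rate/FX-driven dispersion, less simulated credit losses.
        # All figures and the lognormal loss shape are illustrative assumptions.
        rng = np.random.default_rng(3)

        nii_scenario = 120.0     # scenario net interest income forecast
        nii_volatility = 8.0     # assumed dispersion from rate and FX moves
        expected_losses = 30.0   # expected credit losses in the scenario
        loss_sigma = 0.5         # assumed lognormal sigma of credit losses

        n_sims = 100_000
        nii_simulated = nii_scenario + rng.normal(0.0, nii_volatility, n_sims)
        # Lognormal losses parameterised so their mean equals the expected loss
        mu = np.log(expected_losses) - 0.5 * loss_sigma**2
        credit_losses = rng.lognormal(mu, loss_sigma, n_sims)

        risk_adjusted_earnings = nii_simulated - credit_losses
        earnings_at_risk = risk_adjusted_earnings.mean() - np.percentile(risk_adjusted_earnings, 1)
        print(f"Expected risk-adjusted earnings: {risk_adjusted_earnings.mean():.1f}")
        print(f"Earnings-at-risk (99%): {earnings_at_risk:.1f}")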

    Long-term forecasts

    Capital adequacy is expressed through operational, market, and credit value-at-risk. Running a tail-risk simulation can help provide a credit loss tail distribution, thereby affording a clear understanding of volatility for the forecast of capital requirements.

    Figure 7. Impact of credit transitions on capital requirement volatility
    Source: Moody's Analytics
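
    As a minimal illustration, the sketch below computes a credit loss tail quantile with a Vasicek-type single-factor approximation, the kind of formulation that underlies parametric regulatory capital functions. The PD, LGD, asset correlation, and confidence level are illustrative assumptions.

        from math import sqrt
        from statistics import NormalDist

        # Minimal sketch of a portfolio credit loss tail quantile per unit of exposure,
        # using a Vasicek-type single-factor approximation. Inputs are illustrative.
        N = NormalDist()
        pd_, lgd, rho, confidence = 0.02, 0.45, 0.15, 0.999

        # Default probability conditional on a systematic factor draw at the chosen tail
        conditional_pd = N.cdf((N.inv_cdf(pd_) + sqrt(rho) * N.inv_cdf(confidence)) / sqrt(1 - rho))

        expected_loss = pd_ * lgd
        tail_loss = lgd * conditional_pd
        unexpected_loss = tail_loss - expected_loss   # capital-style measure per unit exposure
        print(f"Expected loss: {expected_loss:.2%}, 99.9% tail loss: {tail_loss:.2%}, "
              f"unexpected loss (capital proxy): {unexpected_loss:.2%}")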

    Conclusion

    Overall, to extend the dialogue with senior stakeholders to alternative macroeconomic scenarios, risk management and asset and liability management teams must work closely together to assemble risk models and risk-adjusted performance measurements in their simulation tools. Key stakeholders and supervisors need repeatable measurements that keep forecast assumptions consistent over time, allowing them to understand trends, acquire reference points, and build trust in the numbers and the practice. This process then allows institutions to switch from a qualitative to a quantitative approach to risk appetite analysis. In the modeling exercise, data gaps have a significant impact, for which best practices in econometric augmentation are a proven solution.

    On the technology side, data flow automation is becoming increasingly necessary. Institutions are streamlining their computation flows for both internal and regulatory purposes, in areas of strategic planning, credit portfolio management, asset and liability management, and liquidity risk management. They are taking advantage of data flow automation tools that support regulatory and internally driven stress testing initiatives by handling scenario libraries, driving inputs for each computation engine, and running parametric regression models that map macroeconomic scenarios into key indicator forecasts.

    The final consideration, related to a quantitative formulation of risk appetite, is the need for processes to enforce and monitor such measurements. Concentration monitoring and risk appetite limit-setting are excellent starting points. Building on a data aggregation initiative, a logical next step is to implement an enterprise-wide, consistent limit-monitoring framework that translates risks into exposure limits across business lines and units, market segments, industries, geographies, and currencies. This allows risk managers and front offices to improve performance and control the build-up of risk in the portfolio at the point of origination.
