This article delves into some of the arguments for and against the decision to build Economic Scenario Generators (ESGs) in house by reviewing the steps required for implementation and highlighting key challenges and considerations for insurers.
An ESG provides the financial, economic, and macroeconomic variables necessary for risk management and for complying with the complex demands of regulations such as Solvency II. An ESG is the cornerstone of a market-consistent valuation of the balance sheet; in particular, it is an appropriate tool for monitoring and managing both market and credit risk from an integrated perspective.
Beyond its relevance as a key prospective element in the context of Solvency II, an ESG is not only an effective tool for an efficient ALM strategy, but also facilitates a better understanding of the market risk drivers embedded in some complex life insurance products (e.g., variable annuities).
For some problems, simple solutions often suffice. And while it is possible to build a simple ESG relatively quickly, the problems faced by financial intermediaries are rarely simple and the behavior of the capital markets is complex.
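To make concrete how quickly a naive ESG can be assembled, and how much it omits, here is a minimal two-factor sketch: a Vasicek short rate correlated with a lognormal equity index. Every parameter value is an illustrative placeholder, not a calibrated assumption.

```python
import numpy as np

def simple_esg(n_scenarios=1000, n_months=120, seed=42):
    """Toy two-factor ESG: Vasicek short rate + lognormal equity index.
    All parameter values are illustrative placeholders, not calibrated."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    # Vasicek parameters: mean-reversion speed, long-run rate, volatility, start
    kappa, theta, sigma_r, r0 = 0.15, 0.03, 0.01, 0.02
    # Equity parameters: drift, volatility, start level; rho couples the drivers
    mu, sigma_s, s0, rho = 0.06, 0.18, 100.0, -0.3

    rates = np.full((n_scenarios, n_months + 1), r0)
    index = np.full((n_scenarios, n_months + 1), s0)
    for t in range(1, n_months + 1):
        z1 = rng.standard_normal(n_scenarios)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_scenarios)
        # Euler step for the Vasicek short rate
        rates[:, t] = (rates[:, t-1] + kappa * (theta - rates[:, t-1]) * dt
                       + sigma_r * np.sqrt(dt) * z1)
        # Exact lognormal step for the equity index
        index[:, t] = index[:, t-1] * np.exp((mu - 0.5 * sigma_s**2) * dt
                                             + sigma_s * np.sqrt(dt) * z2)
    return rates, index

rates, index = simple_esg()
print(rates.shape, index.shape)  # (1000, 121) (1000, 121)
```

Everything this toy leaves out, such as full yield curves, credit, inflation, multiple economies, market-consistent calibration, validation, and an audit trail, is precisely what makes a production ESG hard.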
Similar to other financial institutions, insurance groups face a wide range of valuation, risk, capital management, and communication challenges that require deep expertise and a set of analytical tools that fit a broad array of purposes. Developing a coherent, multi-asset stochastic model that can be regularly calibrated across a range of economies and building the robust, fast, auditable, and fully documented technology to run it is a significant challenge. In practice, such a project may require different models and calibration choices for different purposes, such as market-consistent valuation and real-world projections or analysis over different planning and valuation horizons.
In the ESG design process, some of the tasks required fall unambiguously within the responsibility of the in-house user, whereas others may be outsourced. Figure 1 illustrates an overview of the ESG process.
Systems design and development is key for an effective ESG. Firms must build all the procedures or tools required to initiate runs of the models, calibrations, and validation and reporting analysis on the output. Furthermore, firms must specify the functionality needed from their tools and be comfortable that the tools will meet these requirements. The remainder of the tasks – research and development of models and methods, specification, design, build, testing and documentation – may be readily outsourced.
Alternatively, some institutions opt to develop the entire software system in-house, which requires several skill sets. First, this option requires firms to attract (and retain) the deep quantitative skills needed to understand the modeling and calibration challenges. Second, it involves acquiring the software engineering discipline to ensure code is well designed, controlled, built to required standards, capable of extension, and efficient to run.
Eight tasks to consider in software design and build:
- Functional and non-functional specification for the ESG
- Research and development of mathematical models for the required risk factors
- Preparation of technical specifications and test plans
- Design, build, and testing of software
- Management of software release cycles and version control
- User acceptance testing (e.g., Solvency II statistical quality)
- User documentation creation
- Technical documentation of models and software (this should be suitable for external parties, such as regulators or auditors, as well as internal users)
Projecting future “real-world” possibilities requires a view of the future target distributions. Firms must identify the scope of coverage across territories, asset types, required update frequency, and be able to demonstrate their understanding of the assumptions underlying the models and how they have changed. Generating real-world targets involves a number of tasks that can all potentially be outsourced, including identifying market data sources, defining acceptance criteria, and designing, building, and executing data collection and target updating processes.
The assumptions underlying any risk factor model can have a big impact on the results. In order to be effective, assumptions should be reviewed and updated regularly and documented in a way so that decision-makers can understand the impact of their choices. This requires economic expertise and an ability to analyze and understand information from markets and other data sources.
Six tasks to consider when constructing a house view:
- Identification of scope of coverage across territories, asset types, and required update frequency
- Market data source identification and definition of acceptance criteria and tests
- Design, build, and execution of market data collection process
- Documentation and implementation of process for updating targets
- Documentation of processes and of each pass through them
- Analysis of all assumptions and how they have changed
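As an illustration of what "acceptance criteria and tests" for market data might look like in code, here is a sketch with hypothetical checks and thresholds (the function name and limits are ours, chosen for illustration; a firm would set its own in its data policy):

```python
import numpy as np

def acceptance_checks(prices, max_stale_run=5, outlier_z=6.0):
    """Illustrative acceptance tests for a daily price series: flag long
    runs of unchanged (stale) prices and extreme log-returns. The thresholds
    are placeholders, not industry standards."""
    prices = np.asarray(prices, dtype=float)
    issues = []
    # Stale-price check: find the longest run of identical consecutive prices
    run, longest = 1, 1
    for a, b in zip(prices[:-1], prices[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    if longest > max_stale_run:
        issues.append(f"stale run of {longest} observations")
    # Outlier check: z-score of daily log-returns
    rets = np.diff(np.log(prices))
    if rets.std() > 0:
        z = np.abs((rets - rets.mean()) / rets.std())
        if (z > outlier_z).any():
            issues.append(f"{int((z > outlier_z).sum())} outlier return(s)")
    return issues

print(acceptance_checks([100, 100, 100, 100, 100, 100, 100, 101]))
```

Production data pipelines would add many more tests (missing observations, cross-source reconciliation, unit and currency checks), but the pattern of explicit, documented, automated criteria is the same.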
Market-consistent valuation requires market data. Markets for instruments relevant to valuation work are frequently neither deep nor liquid, and some market prices may simply not exist at all. Methods are required to assess the available data, identify missing prices, and fill any gaps. Other tasks include designing, building, and executing these methods and the data collection process. Firms must be comfortable with these methods, but they do not necessarily need to create or even run them. In fact, firms choosing to perform this fundamentally important step in-house will need to convince themselves and their auditors that their choices are independent of the valuation process itself.
The methods should be clear, auditable, and ultimately independent of the valuation process. It is often unclear how best to build these methods and, in our experience, doing so requires an understanding of how the data is sourced, together with quantitative, statistical, and economic skills and a good measure of common sense.
Five tasks to consider in market price discovery:
- Research and development of price discovery methods
- Outlining of price discovery methods
- Creation of an auditable process showing independence of methods from valuation
- Documentation and implementation of the market data collection process
- Design, build, and execution of methods
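A deliberately naive sketch of gap filling for a yield curve follows, using linear interpolation in yield space. Production-grade price discovery would use richer methods (for example, a fitted parametric curve or a regulatory extrapolation technique), but the shape of the task is the same:

```python
import numpy as np

def fill_curve_gaps(tenors, yields):
    """Fill missing yields (NaN) by linear interpolation in yield space.
    A simple stand-in for production price-discovery methods; it cannot
    extrapolate beyond the last quoted tenor."""
    tenors = np.asarray(tenors, dtype=float)
    yields = np.asarray(yields, dtype=float)
    known = ~np.isnan(yields)
    return np.interp(tenors, tenors[known], yields[known])

tenors = [1, 2, 3, 5, 7, 10]
quoted = [0.010, 0.012, np.nan, 0.018, np.nan, 0.025]  # NaN = no quote
filled = fill_curve_gaps(tenors, quoted)
```

Whatever method is chosen, the key governance point from the text stands: the gap-filling rule must be fixed, documented, and demonstrably independent of the valuation results it feeds.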
Any mathematical model is only as good as its parameterization. Getting the process right for calibrating these parameters is critical to the overall quality of an ESG.
Calibration involves setting up models to match selected targets or prices for specified purposes. This is not a one-off task, but something that needs to be done regularly – even as frequently as daily for some applications. Firms need to clearly identify the purpose and key performance criteria they will use for assessing model performance. They also need to select the models and targets or prices relevant to the task in hand. While there is a need to show a clear understanding of the trade-offs, the data used, and calibration decisions made, analysts do not always need to perform calibration calculations themselves.
The task of calibration requires judgment about the trade-offs that are made to fit a model (i.e., the information included in the calibration exercise and how it is weighted). These processes and material decisions should be open, with an appropriate level of governance and documentation. Given their recurring nature, firms will be required to apply significant skilled resources to these tasks. Further, given the demands of users of financial statements for objectivity, some firms believe that calibration should be performed independently and choose an outsourcing option.
Five tasks to consider in calibration:
- Identification of the purpose of each risk factor model and key performance criteria
- Selection of models and calibration targets / prices
- Execution of regular calibrations
- Determination of calibration choices and acceptance of performance
- Documentation of processes and each pass through the process
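At its core, calibration is an inverse problem: choose model parameters so that model outputs reproduce the chosen targets. A one-parameter illustration recovers a Black-Scholes implied volatility from a quoted option price by bisection; this is a toy stand-in for the multi-parameter, multi-target optimizations a real ESG calibration involves:

```python
from math import erf, exp, log, sqrt

def bs_call(s, k, t, r, vol):
    """Black-Scholes price of a European call (no dividends)."""
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    d1 = (log(s / k) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * cdf(d1) - k * exp(-r * t) * cdf(d2)

def calibrate_vol(target_price, s, k, t, r, lo=1e-4, hi=3.0, tol=1e-8):
    """Bisection: find the volatility that reproduces the quoted price.
    Works because the call price is increasing in volatility."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, t, r, mid) < target_price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical quote: at-the-money one-year call priced at 10.45
vol = calibrate_vol(target_price=10.45, s=100, k=100, t=1.0, r=0.05)
```

Even in this trivial case the judgment calls the text describes appear in miniature: which quote to target, what tolerance to accept, and what to do when no parameter fits the data well.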
Once models are calibrated, producing scenarios requires firms to set up and run models, perform and document validations, and deliver output to the end users. This process requires hardware, which for large problems can be a real constraint on the speed of the entire exercise. Firms need a documented and auditable process, but could choose to outsource scenario production.
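One standard validation for market-consistent scenario sets is the martingale ("1=1") test: discounted asset prices averaged across scenarios should recover today's price. A self-contained sketch follows; the scenarios here are generated internally from a risk-neutral lognormal model, so the test should pass by construction, whereas in production the test is run on the delivered scenario files:

```python
import numpy as np

def martingale_test(n_scenarios=200_000, t=5.0, r=0.03, vol=0.2,
                    s0=100.0, seed=1):
    """Martingale ('1=1') test: under a risk-neutral measure, the
    scenario-average discounted asset price should equal today's price."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_scenarios)
    # Terminal asset price under risk-neutral lognormal dynamics
    s_t = s0 * np.exp((r - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)
    discounted_mean = np.exp(-r * t) * s_t.mean()
    rel_error = abs(discounted_mean / s0 - 1.0)
    return discounted_mean, rel_error

mean_price, err = martingale_test()
```

In practice the relative error is compared against a documented tolerance that reflects Monte Carlo noise at the chosen scenario count, and the result is logged as part of the auditable production process.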
Models are only useful if the results and their sensitivity are well understood. Firms must demonstrate that they can train and retain staff who can communicate these model risks and make informed choices. Moreover, many of the other tasks described above involve hiring, motivating, and retaining staff with deep technical skills, and then managing the key-man dependence that arises.
Five tasks to consider in production and operations:
- Execution of ESG runs, set-up of output formats, execution and documentation of validations, and delivery of output
- Design, configuration, and hardware platform testing
- Documentation of an auditable process
- Internal business process development across different teams
- Hire, train, and retain staff with deep technical skills (and management of key-man risks)
Once all these factors are taken into consideration, the complexity of implementing an ESG becomes clear. Beyond implementation itself, these tasks must be performed in conjunction with a parallel set of tasks required to implement a full internal model of liabilities, normally using separate third-party actuarial modeling software.
It is fair to say that the aggregate resources required, irrespective of how many of the tasks listed above are executed in-house, will be large, specialized, and expensive. These technically complex functions are notoriously difficult to manage. Further, any solution must adapt to the evolving business needs and external demands on the risk management function. In the current environment, firms have to reconsider the option of a simple one-off build. The complexity of market risks and the demands of management and regulators mean that model users are likely to face a sustained period of development and refinement of models, software, assumptions, and related practices.
Even the very largest corporations would not seriously consider building and maintaining all their software in-house. So, why do some large insurance groups choose to take on the task of designing, building, and maintaining Economic Scenario Generators in addition to the considerable challenges of actually using them?
Building and operating an ESG requires the execution of a large number of complex tasks. In reality, some of these business-critical tasks must be retained in-house, not least because regulators and rating agencies now (quite rightly) insist that firms acquire expertise and document their compliance. However, there are a large number of tasks that can be outsourced. Access to a library of models, calibrations, validation tools, technical research, and skilled advisers brings huge efficiency gains and mitigates the significant operational risk that arises when these skills and activities are concentrated in very small teams or inside the head of an individual.
Some of the common objections in the debate of whether to manage an ESG in-house are listed in Table 1 below.
Sohini Chowdhury is a Director and Senior Economist with Moody’s Analytics, specializing in macroeconomic modeling and forecasting, scenario design, and market risk research, with a special focus on stress testing and CECL applications. Previously, she led the global team responsible for the Moody’s Analytics market risk forecasts and modeling services while managing custom scenarios projects for major financial institutions worldwide.