This article discusses the ways in which insurers can use their analytical and operational data to measure risk and performance, and support their business decision-making processes.
The insurance marketplace is more competitive today than ever, owing to the pressure to reduce operating costs, the impetus to provide better returns to shareholders, the emergence of new distribution channels, and the climate of low investment returns.
Consequently, making the right business decisions is crucial, and those decisions depend directly on high-quality data.
Equally, insurers have to contend with an ever-increasing regulatory burden, including Solvency II (SII), International Financial Reporting Standards (IFRS), Dodd-Frank, and the Retail Distribution Review (RDR) in the UK, and equivalent distribution legislation across Europe. Compliance with this legislation is challenging, considering the operating environment. While most of the legislation is primarily concerned with governance, much of the practical management is driven by data – particularly Solvency II and IFRS.
Data is the common theme between decision-making and regulatory compliance. A significant amount of analytical data is required for Solvency II and IFRS, but much of this data is also relevant for making business decisions. While additional data may be required for decision-making, the key to success is developing a common approach and common standards for data management and storage that can meet both regulatory and business needs. A further crossover is that informed business decisions are at the heart of Solvency II, specifically the Own Risk and Solvency Assessment (ORSA) and Use Test processes.
Analytical data, or risk data, refers to the actuarial, finance, investment, and risk data required for SII/IFRS and multi-faceted management and business reporting. Operational data is the day-to-day information insurers use, such as client, claims, customer relationship management (CRM), and distribution data. Traditionally, insurers are more adept at handling operational data than analytical data. Figure 1 illustrates these main differences.
As the fundamental business of an insurer is to underwrite and pool risk, strategic business decisions are often based on risk data (or analytical data). Granular data on all these risk factors is required to support the decision-making process and regulatory reporting. Much of this data is analytical in nature, but some operational data is needed, such as policy and claims data for input into an insurer’s actuarial modeling engines.
While much of the regulatory risk data is defined and can effectively be reused for decision-making purposes, new data may be required, particularly for generating risk-adjusted metrics – for instance, risk-adjusted return on capital (RAROC) or return on risk-adjusted capital (RORAC) – and for forward-looking planning (e.g., multi-year projection of balance sheets). This latter aspect is the most problematic and, as well as base data, it may require new models, methodologies, and macroeconomic scenarios to project into the future.
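As an illustration of how such risk-adjusted metrics are computed, the sketch below implements simple single-period forms of RAROC and RORAC in Python. The figures and the formulas are hypothetical simplifications for illustration, not a prescribed methodology.

```python
# Illustrative sketch of risk-adjusted metrics; the figures and the
# simple single-period formulas are hypothetical, not a prescribed method.

def raroc(revenue, expenses, expected_loss, economic_capital):
    """Risk-adjusted return on capital: the return is adjusted for
    expected losses before dividing by economic capital."""
    return (revenue - expenses - expected_loss) / economic_capital

def rorac(net_income, risk_adjusted_capital):
    """Return on risk-adjusted capital: the unadjusted return divided
    by capital scaled for the riskiness of the business line."""
    return net_income / risk_adjusted_capital

# Example: compare two business lines on a comparable, risk-aware basis.
line_a = raroc(revenue=120.0, expenses=70.0, expected_loss=10.0, economic_capital=200.0)
line_b = raroc(revenue=90.0, expenses=40.0, expected_loss=5.0, economic_capital=250.0)
print(f"Line A RAROC: {line_a:.1%}")  # 40/200 = 20.0%
print(f"Line B RAROC: {line_b:.1%}")  # 45/250 = 18.0%
```

The point of such metrics is that a line with a higher raw return (Line B above) can still consume capital less efficiently once risk is factored in.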
From a business perspective, a plethora of unstructured data stored in a database is of little value. Value is added when that data is structured and made available to users in a format that is readily understandable – “translated” so that business users and senior management can understand it.
A significant amount of data will already exist (particularly data required for Solvency and IFRS reporting); however, new data sets will often be required to generate the information to support the decision-making process. This requires the business to provide IT with the exact information needed. IT can then ensure that the data is first available and then structure the data into Online Analytical Processing (OLAP) cubes.1
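As a minimal illustration of the kind of multidimensional structure an OLAP cube precomputes, the Python sketch below aggregates hypothetical policy records over two dimensions, including "All" roll-ups. The field names and figures are invented for the example.

```python
from collections import defaultdict

# Hypothetical policy-level records; in practice these would be extracted
# from source systems before being structured for analysis.
records = [
    {"lob": "Motor",    "region": "UK", "premium": 1000.0, "claims": 400.0},
    {"lob": "Motor",    "region": "EU", "premium": 1500.0, "claims": 900.0},
    {"lob": "Property", "region": "UK", "premium": 2000.0, "claims": 700.0},
    {"lob": "Property", "region": "EU", "premium": 2500.0, "claims": 1100.0},
]

# Aggregate each measure over every combination of the two dimensions,
# including "All" roll-ups -- the essence of what an OLAP cube precomputes.
cube = defaultdict(float)
for r in records:
    for lob in (r["lob"], "All"):
        for region in (r["region"], "All"):
            cube[(lob, region, "premium")] += r["premium"]
            cube[(lob, region, "claims")] += r["claims"]

# A "slice" answers a specific business question instantly:
print(cube[("Motor", "All", "premium")])  # 2500.0 (Motor premium, all regions)
print(cube[("All", "UK", "claims")])      # 1100.0 (UK claims, all lines)
```

Precomputing the roll-ups is what lets business users drill down from totals to detail without re-querying raw data each time.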
In terms of strategic decision-making, a significant amount of information may be required to answer key questions, such as:
- How much risk capital will be needed to survive the next five years?
- What is the most effective and profitable use of the firm’s capital?
- How can the business grow profitably?
- What should the firm’s product portfolio look like?
- What are the scenarios that might put the firm out of business?
- How would an acquisition impact the firm’s capital requirements?
In order for insurers to remain competitive, they must be able to react quickly to change, which requires instant access to accurate and relevant information. This information is often provided via interactive dashboards that are produced at a set time or when certain events occur.
An insurer’s data, both analytical and operational, is typically scattered across many systems in a series of non-integrated silos. There are usually no common enterprise data models and standards. Instead, there are disparate data architectures, applications, and methodologies. To overcome this problem, some insurers have already built operational data repositories, but few have built analytical repositories. Figure 2 illustrates how an analytical repository would plug into an insurer’s risk architecture with potential links to an operational repository. Some data from existing operational data repositories is part of the analytical process.
At the heart of the architecture is a centralized repository that works for both operational and analytical data. While this section focuses on an analytical repository, the same principles apply to operational and integrated repositories. A major benefit of a centralized repository is that risk and capital data and metrics are available for the whole enterprise to access and analyze. This approach also avoids the duplicated and unnecessary movement of data.
A risk- and capital-literate development team is essential for providing insight into how successfully the repository supports risk-based decision-making. Nonetheless, building the repository can be a complex task because it involves numerous detailed steps:
- Business owners and the users must define the reports, dashboards, and data they require and the drill-down (granularity) capability needed
- IT can then accordingly build a flexible data model and the structure of the repository
- IT will have to build the data extraction and transformation processes required prior to loading the data into the repository
- IT will have to construct the OLAP cubes necessary to support the generation of multidimensional reports and dashboards specified by the business
- Many insurers already have sophisticated actuarial engines, which may have to be extended or supplemented with new tools to provide the required metrics
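The extract-transform-load steps above can be sketched in miniature. In the Python example below, an in-memory SQLite database stands in for the central repository; the source rows, product codes, and code mapping are hypothetical.

```python
import sqlite3

# Hypothetical source extract: raw policy rows with inconsistent codes,
# standing in for feeds from disparate administration systems.
raw_rows = [
    ("P001", "motor", "1000.50"),
    ("P002", "PROP",  "2000.00"),
]

# Transform: apply common standards (consistent product codes, numeric
# amounts) before anything reaches the repository.
code_map = {"MOTOR": "MOT", "PROP": "PRP"}
clean_rows = [
    (policy_id, code_map[product.upper()], float(premium))
    for policy_id, product, premium in raw_rows
]

# Load: an in-memory SQLite database stands in for the central repository.
repo = sqlite3.connect(":memory:")
repo.execute("CREATE TABLE policies (policy_id TEXT, product TEXT, premium REAL)")
repo.executemany("INSERT INTO policies VALUES (?, ?, ?)", clean_rows)

total = repo.execute("SELECT SUM(premium) FROM policies").fetchone()[0]
print(total)  # 3000.5
```

The discipline matters more than the tooling: standardization happens in the transform step, so every downstream report and cube works from the same cleansed data.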
The ORSA and Use Test put risk and capital-based decision-making at the core of the strategic planning process and need a significant amount of analytical data. Insurers can adopt a minimal compliance approach to the ORSA or adopt a proactive risk and capital culture throughout the business by embedding it within the overall strategic planning and decision-making process. The latter is the most advantageous approach.
Figure 3 highlights the ORSA, which comprises, at the center, three key process layers: risk identification and processes, risk and capital calculations, and management controls and action.
The key outputs of the ORSA process are the metrics that the process generates. Analytical data is critical as it feeds the engines that produce the metrics – data from actuarial systems, risk systems, capital projection engines, and finance systems – which ideally will be stored in a central repository. The metrics also have to be stored at a low level of granularity so that the business can not only view the figures, but also drill down into the underlying data to better understand how each metric is made up.
Volatility in the financial markets and historically low yields mean that making the right investment decisions is of paramount importance, but what type of investment information (and hence data) should decision makers and senior management look for? The following provides some areas they could consider:
- Market risk dashboard: Shows capital adequacy across a range of measures based on current market prices and is typically produced daily or weekly
- Optimal asset portfolio: Provides information that compares yield and capital allocated to assets. It indicates the best yields, capital, and risk ratios
- Credit/concentration risk: As insurers seek better yields, they increasingly look to invest in alternative credit assets, such as infrastructure and corporate loans, and credit default swaps (CDS). These provide a better yield than bonds and match the insurer’s long-term liabilities, but more granular credit risk information is required to fully understand the risks and returns
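As one simple way to frame the "optimal asset portfolio" comparison of yield against allocated capital, the sketch below ranks hypothetical asset classes by yield per unit of capital charge. The figures and the ratio are an illustrative screening device, not an allocation method.

```python
# Hypothetical asset classes with illustrative yields and capital charges;
# the simple ratio below is one possible screening measure, not a
# prescribed allocation approach.
assets = [
    {"name": "Government bonds", "yield_pct": 1.5, "capital_charge_pct": 2.0},
    {"name": "Corporate bonds",  "yield_pct": 3.0, "capital_charge_pct": 8.0},
    {"name": "Infrastructure",   "yield_pct": 4.5, "capital_charge_pct": 10.0},
]

# Rank by return per unit of capital consumed -- a first-pass view of
# which assets use the firm's capital most efficiently.
for a in assets:
    a["yield_per_capital"] = a["yield_pct"] / a["capital_charge_pct"]

ranked = sorted(assets, key=lambda a: a["yield_per_capital"], reverse=True)
for a in ranked:
    print(f'{a["name"]}: {a["yield_per_capital"]:.2f}')
```

Even this toy view shows why granular data matters: a higher-yielding asset class can be less capital-efficient once its charge is taken into account.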
It can be a challenge for insurers to obtain granular asset information from investment managers and internal investment systems so they may evaluate returns and assess how differing investment portfolios impact the level of capital required. Such information, however, is essential to supporting the decision-making and the regulatory reporting processes (e.g., the D1-D6 of the EIOPA quantitative reporting templates).
Low yields on bonds and the capital requirements of Solvency II have encouraged insurers to switch from insurance products with inherent guarantees, such as with-profit life contracts, toward unit-linked contracts, where all investment risk is effectively transferred to the policyholder. As a consequence, insurers are reassessing their product portfolios with the aim of reducing capital held in relation to products. To support this, however, insurers need a whole raft of actuarial modeling and capital data at their disposal.
Legal entity structure and diversification
Solvency II is also driving insurers to closely examine their legal entity structures, how those structures impact capital, and where capital is held. For instance, some multinational insurers have restructured their legal entities into different groupings or converted subsidiary companies into branches. To obtain regulatory and capital advantages, others have sought to relocate the group geographically. Tax and fungibility rules clearly come into play, but there is a definite trend toward legal entity simplification. For this to happen, granular analysis of finance, actuarial, asset, and risk data at the entity level is required.
The advantages of seeking diversification benefits are widely endorsed, which has led many insurers to carefully consider what businesses and books of business to acquire or divest. The traditional approach of valuing books of business on an embedded value basis is now influenced by its impact on diversified capital.
The interaction of investment portfolio, product portfolio, legal entity structure, and the potential effect of diversification benefits are heightening interest in conducting what-if analysis. The scope of any what-if analysis might include:
- Economic capital/solvency capital requirement projections under multiple scenarios
- Hedging strategy analysis
- Changes in product portfolios
- Strategic asset allocation
- Acquisitions and sales
- Changes in entity structures
- Interim balance sheet valuation
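A what-if analysis of this kind can be illustrated with a deliberately simplified balance-sheet projection. The Python sketch below applies hypothetical stresses to invented base figures and compares the resulting solvency ratios per scenario.

```python
# Illustrative what-if sketch: apply hypothetical stresses to a simplified
# balance sheet and compare the resulting solvency position per scenario.
# All figures and shock sizes are invented for the example.
base = {"assets": 1000.0, "liabilities": 850.0, "scr": 100.0}

scenarios = {
    "base":        {"asset_shock": 0.00,  "liability_shock": 0.00},
    "equity_fall": {"asset_shock": -0.10, "liability_shock": 0.00},
    "rates_down":  {"asset_shock": 0.02,  "liability_shock": 0.05},
}

results = {}
for name, s in scenarios.items():
    assets = base["assets"] * (1 + s["asset_shock"])
    liabilities = base["liabilities"] * (1 + s["liability_shock"])
    own_funds = assets - liabilities
    # Solvency ratio: eligible own funds over the capital requirement.
    results[name] = own_funds / base["scr"]

for name, ratio in results.items():
    print(name, ratio)
```

A production version would recompute the SCR itself under each scenario rather than holding it fixed, which is precisely where the projection engines and granular data discussed above come in.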
The analysis and performance measures previously discussed require a high volume of analytical data. And, while the data is critical, many insurers will also have to review their actuarial models and develop new ones (e.g., for IFRS figures). They will also need to look for new tools and techniques, such as economic capital calculators and proxy modeling, particularly for large, complex liability portfolios, where full stochastic modeling is required for internal model solvency capital calculations.
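Proxy modeling can be illustrated with a toy example: fit an inexpensive polynomial to a handful of runs of a heavy model, then evaluate the proxy across many scenarios. The "heavy model" below is a hypothetical stand-in, and the quadratic form is chosen purely for illustration.

```python
import numpy as np

# Sketch of proxy modeling: fit a cheap polynomial to a few runs of a
# heavy stochastic model, then evaluate the proxy across many scenarios.
# The "heavy model" here is a hypothetical stand-in function.

def heavy_model(equity_shock):
    # Stand-in for a full stochastic liability valuation run.
    return 850.0 + 300.0 * equity_shock + 500.0 * equity_shock ** 2

# A small number of expensive fitting runs...
fit_points = np.linspace(-0.4, 0.4, 9)
fit_values = np.array([heavy_model(x) for x in fit_points])

# ...produce a quadratic proxy that is essentially free to evaluate.
coeffs = np.polyfit(fit_points, fit_values, deg=2)
proxy = np.poly1d(coeffs)

# The proxy can now be evaluated across thousands of scenarios cheaply,
# e.g. to estimate a tail percentile of the liability value.
scenarios = np.random.default_rng(0).normal(0.0, 0.2, 10_000)
values = proxy(scenarios)
print(float(np.percentile(values, 99.5)))
```

In practice the fitting runs come from the full stochastic engine, and validating the proxy's accuracy across the scenario space is itself a significant data and modeling exercise.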
Generating correct and meaningful information, reports, and dashboards is undoubtedly an important part of the decision-making process, but so too is the willingness of management to take action on the basis of the information provided. In some situations, management actions can be pre-built into certain scenarios, so that in the event of the scenario materializing, a series of pre-planned actions are triggered. In other circumstances, actions will have to be much more reactive.
Complying with regulation such as Solvency II is a major cost to insurers and requires a vast amount of data. That data, however, has tremendous value if it is enhanced and used properly. Deriving benefits from Solvency II programs is a topic on the agenda of most boards. One of those benefits is undoubtedly better decision-making. To support this, insurers need high quality data that is stored in a structured repository to generate the reports, dashboards, and KPIs the business needs. Data is one of the most valuable assets an insurer has – they should make sure they use it to the fullest.
1 Transforming data primarily involves the construction of online analytical processing (OLAP) cubes by IT specialists, which enable users to access specific views (or subsets) of multiple data elements from databases to meet their requirements. This process effectively translates data into information, which can be presented in the format of reports, dashboards, or other forms of graphical output.