
In this article, we address why insurers should view the data collated for Pillar III reporting as an essential information source for all strategic risk and capital decision-making within their organizations.

Although their attention is currently focused on establishing processes for Pillar III reporting compliance, many insurers are already determining how they can extract long-term benefits well beyond the Solvency II deadline of January 1st, 2016. In particular, they seek to use the foundation laid for Pillar III reporting to break the barriers that have traditionally existed between risk and finance – facilitating the adoption of a more holistic and strategic approach to risk and capital management.

Leveraging Quantitative Reporting Template data

The data required for Quantitative Reporting Templates (QRTs) is complex and many parts of an organization need to be involved to aggregate and consolidate the data for capital calculations, asset valuation, technical provisions, etc. However, with an extension to the QRTs' target data perimeter, insurers can reuse the required data to support wider risk and capital management decision-making. For example, it makes sense to extend the use of QRT data to other areas of their business, such as the Own Risk and Solvency Assessment (ORSA), capital budgeting, business intelligence analysis, and stress testing. The data requirements for QRTs, the national specific templates, and European Central Bank (ECB) reports are particularly good starting points for considering the ways in which assets, liabilities, and Pillar II data can be better exploited across all areas of Solvency II.

Tackling QRT data requirements

Completing QRTs requires approximately 10,000 cells of information to be pulled from a broad spectrum of sources. This data is essential for informed decision-making and comprises capital calculations, technical provisions, and asset information. The bulk of the data needed for decision-making sits within the QRT data perimeter itself; only a modest extension is required for further capital calculations, such as economic capital, performance measurement, and risk-adjusted return measures.

Much of the data needed for both QRTs and decision-making will come from the same sources – finance, risk, asset, and actuarial systems. It is therefore essential that insurers adopt a common process for automating and improving the data collection, quality, and validation. Furthermore, the large volumes of analytical data for QRT reporting call for a high level of granularity. Examples of complexity created by a large volume of data include:

  • The asset data for the D1-D6 QRT templates requires not only a large volume of data, but also the granularity to enable “look through” capabilities.
  • The Solvency Capital Requirement (SCR) and Technical Provisions QRTs contain a distillation of results drawn from a number of actuarial models.
  • The Balance Sheet and Own Funds reports include detailed consolidated financial information from both the general ledgers and financial consolidation engines.
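As a minimal illustration of the "look-through" requirement mentioned above, the sketch below expands a hypothetical fund position into its underlying holdings so asset exposures can be handled at full granularity. All asset names, weights, and the record layout are invented for illustration:

```python
def look_through(positions, fund_compositions):
    """Replace each fund position with its underlying asset exposures.

    positions: {asset_id: market_value}
    fund_compositions: {fund_id: {underlying_asset_id: weight}}
    """
    exposures = {}
    for asset_id, market_value in positions.items():
        if asset_id in fund_compositions:
            # Distribute the fund's market value across its underlying assets.
            for underlying, weight in fund_compositions[asset_id].items():
                exposures[underlying] = exposures.get(underlying, 0.0) + market_value * weight
        else:
            # Direct holdings pass through unchanged.
            exposures[asset_id] = exposures.get(asset_id, 0.0) + market_value
    return exposures

# Hypothetical portfolio: one direct bond holding and one fund position.
portfolio = {"GOV_BOND_DE": 500.0, "EQUITY_FUND_A": 300.0}
funds = {"EQUITY_FUND_A": {"STOCK_X": 0.6, "STOCK_Y": 0.4}}
print(look_through(portfolio, funds))
```

In practice the fund compositions would themselves be nested (funds of funds), requiring the expansion to be applied recursively until only directly held assets remain.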

Pillar I and Pillar III data overlaps with asset and liability data

The substantial QRT data demands, relating to both solo and group reporting, capture details such as the insurer’s capital position, high-level financials, assets, liabilities, revenue/expenses, business analysis, claims experience, and reinsurance. This same data can also be reused in other areas of the standard Solvency II setup. However, the benefits insurers derive from their Solvency II programs will depend largely on how effective their processes are for generating granular risk and capital metrics.

The key to producing the necessary reports and eventually complying with Solvency II is to aggregate and consolidate data from a myriad of internal systems and some external sources. Adopting an integrated approach, which concentrates on one central source of truth for data, can provide the input for multiple steps of the Solvency II calculation and reporting requirements.

There is commonality in the data needed for both assets and liabilities:

  • Assets: Much of the data contained in the Asset QRTs can be viewed as a main ingredient of an insurer's entire Solvency II setup, as the granularity of assets is a key component of the SCR’s market risk calculation.1
  • Liabilities: Data such as claims triangles used by actuarial engines to compute best estimates are also included in the QRT TP E3 template.

Insurers can gain significant benefits from using a central data repository. For instance, non-life data may be loaded in high levels of granularity (non-life policies with all the claims) and claims triangles generated by line of business may be used concurrently by the actuarial and reporting engines.
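The concurrent use of claims triangles described above can be sketched as follows: granular claim payments are aggregated into a cumulative run-off triangle that both the actuarial and reporting engines could then consume. The claim records and triangle layout are hypothetical simplifications:

```python
from collections import defaultdict

def build_claims_triangle(claims):
    """Aggregate granular claim payments into a cumulative run-off triangle.

    claims: iterable of (accident_year, payment_year, amount) tuples.
    Returns {(accident_year, development_lag): cumulative_paid}.
    """
    incremental = defaultdict(float)
    for accident_year, payment_year, amount in claims:
        dev = payment_year - accident_year  # development lag in years
        incremental[(accident_year, dev)] += amount

    # Cumulate along each accident year's development periods.
    triangle = {}
    for (ay, dev), amt in sorted(incremental.items()):
        triangle[(ay, dev)] = triangle.get((ay, dev - 1), 0.0) + amt
    return triangle

# Hypothetical granular claims: (accident year, payment year, paid amount).
claims = [(2013, 2013, 100.0), (2013, 2014, 50.0), (2014, 2014, 120.0)]
print(build_claims_triangle(claims))
```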

As input data is required for both risk calculation and reporting, it is also recommended that insurers capitalize on those processes to manage data quality under Solvency II in a centralized way. Data quality processing can be performed in a three-step process outlined in Figure 1.

  1. Validate: Execute data profiling, quality checks, and validations on all the data used for calculation and reporting. The tools typically associated with a central data repository could undertake this step. In addition to improving data quality, it is also important to monitor quality and demonstrate to the regulators that the data is fit for purpose. A key way of doing this is by generating dashboards that allow insurers to easily view and understand the consistency of the data. Data quality improvements are often a combination of automated checks enhanced by manual reviews based on expert judgment.
  2. Reconcile: Ensure one single view of the data. Often the same data element may come from several sources. For example, premium data might come from both the policy administration systems and the general ledger. The reconciliation of these figures can be best achieved in the repository.
  3. Historize: Document the data by accumulating a version of the data for each reporting date. This exercise will enable access to any past data should the supervisor require a report or audit.
Figure 1. Data quality processing three-step process
Source: Moody's Analytics
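The three steps in Figure 1 can be outlined in code. This is a minimal sketch under stated assumptions: the field names, the reconciliation tolerance, and the string-keyed history are all invented for illustration, and a production repository would rely on dedicated data quality tooling:

```python
class DataRepository:
    """Minimal sketch of the validate/reconcile/historize process."""

    def __init__(self, tolerance=0.01):
        self.tolerance = tolerance  # hypothetical relative reconciliation tolerance
        self.history = {}           # reporting_date -> frozen snapshot

    def validate(self, records):
        """Step 1: basic profiling checks; return a list of data quality issues."""
        issues = []
        for r in records:
            if r.get("amount") is None or r["amount"] < 0:
                issues.append(f"invalid amount for {r.get('id')}")
        return issues

    def reconcile(self, policy_total, ledger_total):
        """Step 2: single view of the data - e.g. premium from the policy
        administration system reconciled against the general ledger."""
        return abs(policy_total - ledger_total) <= self.tolerance * max(abs(ledger_total), 1.0)

    def historize(self, reporting_date, records):
        """Step 3: freeze a snapshot per reporting date for audits and
        supervisor requests."""
        self.history[reporting_date] = tuple(dict(r) for r in records)

repo = DataRepository()
print(repo.validate([{"id": "P1", "amount": 100.0}, {"id": "P2", "amount": -5.0}]))
print(repo.reconcile(1000.0, 1005.0))
repo.historize("2015-12-31", [{"id": "P1", "amount": 100.0}])
```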

There are synergies between Pillar III requirements and analytical data management projects – examples include data preparation for calculations and reports, and data quality. These synergies may be exploited by adopting an integrated approach when addressing the Pillar III project and the data management framework for the overall Solvency II program. This approach, in turn, will bring the following advantages:

  • A centralized repository containing a “golden copy” of the data for risk and capital management (i.e., SCR, ORSA)
  • Consistency of data for calculations, dashboards, and reports
  • Accuracy and completeness of data to meet the European Insurance and Occupational Pensions Authority (EIOPA) requirements
  • Access to claims triangles and model points generation
  • Improved management effectiveness of solo and consolidated data
  • The ability to reuse Pillar III data in Pillar I calculations

Synergies between Pillar III data and Pillar I calculations

The data required for Pillar III reporting overlaps substantially with asset and liability data requirements. As such, there are several opportunities to leverage Pillar III data to perform Pillar I calculations, at both the solo and group level, according to the Standard Formula.

The first area involves “calculations with a closed formula structure.” The risk taxonomy described in the Standard Formula contains numerous closed formulas whose data requirements overlap with those of some QRTs. For example, the same data used to generate the Asset D1 QRT can be reused to perform the spread, concentration, and default risk calculations according to the Standard Formula. The assets for each of these activities should come from the same source (a centralized repository) in order to avoid inconsistencies and comply with data quality requirements. Because the underlying data is largely the same, the calculations in Pillar III and Pillar I can be leveraged together.
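To illustrate how the asset data feeding the D1 QRT could also drive a concentration-style check, here is a hedged sketch. The 3% threshold and the record layout are purely illustrative and are not the actual Standard Formula concentration parameters:

```python
def excess_concentration(assets, total_assets, threshold=0.03):
    """Return counterparties whose share of total assets exceeds the threshold.

    assets: list of {"counterparty": str, "market_value": float} records,
    as might be sourced from the same repository that feeds the D1 QRT.
    """
    by_counterparty = {}
    for a in assets:
        cp = a["counterparty"]
        by_counterparty[cp] = by_counterparty.get(cp, 0.0) + a["market_value"]
    # Keep only counterparties above the (illustrative) concentration threshold.
    return {cp: mv / total_assets
            for cp, mv in by_counterparty.items()
            if mv / total_assets > threshold}

# Hypothetical asset records with two counterparties.
assets = [
    {"counterparty": "BANK_A", "market_value": 50.0},
    {"counterparty": "BANK_A", "market_value": 10.0},
    {"counterparty": "CORP_B", "market_value": 20.0},
]
print(excess_concentration(assets, total_assets=1000.0))
```

The point is not the formula itself but the sourcing: because both the QRT and the risk calculation read the same counterparty-level records, the reported exposures and the calculated charges cannot drift apart.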

Further synergies can be found when calculating other market risk modules (such as interest rate, equity, property, spread, currency, concentration, and illiquidity) in the SCR calculation according to the Standard Formula. Another area in which reusing data is possible is “life risk calculations.” Here, the calculations can be performed with the data required for the B3A (market risk) and B3C (life underwriting risk) QRTs. These reports contain fair values and best estimates according to the regulatory stress test.2

Finally, the data associated with the diversification effect and the detection of intra-group transactions (IGT) also shares commonalities. This is important in two areas: group SCR and in reinsurance programs. Group SCR calculations require IGT detection and elimination while group QRT requires IGT reporting. For instance, group internal reinsurance programs are eliminated from the group SCR and reported at a group level in IGT3 and at a solo level in Re-J1. To undertake the necessary consolidations, granular data provided by solo entities needs to be collated and then consolidated. This is typically done in a financial consolidation engine or a centralized repository.
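The IGT detection and elimination step described above can be sketched as a simple partition: a transaction is intra-group when both counterparties belong to the group, in which case it is set aside for IGT reporting and excluded from the group aggregation. The entity names and transaction layout are hypothetical:

```python
def split_igt(transactions, group_entities):
    """Partition transactions into intra-group (reported in IGT templates and
    eliminated from group SCR inputs) and external (kept in the consolidation).

    transactions: list of {"cedant": str, "reinsurer": str, "amount": float}
    group_entities: set of entity identifiers belonging to the group.
    """
    intra_group, external = [], []
    for t in transactions:
        if t["cedant"] in group_entities and t["reinsurer"] in group_entities:
            intra_group.append(t)
        else:
            external.append(t)
    return intra_group, external

# Hypothetical group with an internal reinsurer and one external treaty.
group = {"SOLO_FR", "SOLO_DE", "GROUP_RE"}
treaties = [
    {"cedant": "SOLO_FR", "reinsurer": "GROUP_RE", "amount": 10.0},
    {"cedant": "SOLO_DE", "reinsurer": "EXT_RE", "amount": 5.0},
]
igt, external = split_igt(treaties, group)
print(len(igt), len(external))
```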

Reusing Solvency II data across your firm

Solvency II’s Pillar III has driven insurers to invest in systems and processes to collate and store all the required data. By adopting a prudent and strategic approach to Pillar III reporting programs, much of the Solvency II data can be reused across the wider organization. Indeed, with a relatively small extension to the target data perimeter, insurers can support wider risk and capital management decision-making.

This article discusses ways in which Solvency II data can be reused to support Pillar I calculations, provided it is stored in a common repository. Solvency II data can also be reused as the core data feeding the asset and liability valuation process.

As data is at the heart of any successful Solvency II project, it is crucial that insurers implement an analytical data repository that not only holds all the risk and capital data required, but also provides the lineage, auditability, and tools necessary for extracting and improving the inherent quality of the data. Ultimately, it is the insurance companies that act strategically by stepping beyond the minimum regulatory requirements – and start investing in data today – that will be most successful.


1 Regardless of whether the regulatory capital requirement is calculated with the Standard Formula or with an internal model.

2 The data required to generate the B3A and B3C reports could also be reused to automatically perform delta NAV calculations and aggregations.

Extract long term benefit from Pillar III Reporting Data


May 2015, Karim Ben Ayed