
    Implementing an ERM Program in the North American Life Insurance Industry

    Life insurers, irrespective of company size or asset and liability profile, continue to invest in successful ERM programs. Within these programs, common themes and best practices are emerging. This article discusses some of these best practices and offers ideas on how life insurers may further fine-tune their programs to add greater value to their organization.

    Background and scope

    Enterprise Risk Management (ERM), a common practice in the North American life insurance industry over the past 20 years, has evolved significantly. In 2000, an industry panel conducted the first major survey of risk reporting and methodology for the North American insurance industry. Among the panel’s findings was an emphasis on reporting, including the preparation of a Total Company Risk Exposure Report.

    “In addition to improving operational risk reporting, the industry should look forward to seeing future improvements in the preparation of reports which bring together all of the various risk exposures. Only 10 of the 44 survey participants indicate that they prepare a Total Company Risk Exposure Report. Although relatively few appear to do this type of analysis, clearly this type of overall picture could be of much interest to top management, and this looks like an area where we would anticipate there being major developments in the years to come.” 1

    Since that time, all of the Tier 1 North American institutions – the largest life and property and casualty (P&C) companies – have implemented an enterprise-wide risk management program. Nearly all have a dedicated Chief Risk Officer (or a combined Chief Actuary/Chief Risk Officer role), a role that would have been considered a novelty in the industry 15 years ago. Moreover, the Society of Actuaries, acknowledging the growing importance of risk management among its members, has developed the Certified Enterprise Risk Analyst (CERA) designation. This designation, an internationally recognized credential, may serve as a first step toward a Society of Actuaries Fellowship.

    Despite the growing emphasis placed on risk management, methodologies of implementation vary from institution to institution. However, there are common best practices emerging in North America. This article discusses some of these best practices and offers ideas on how life insurers may further fine-tune their programs to add greater value to their organization.

    The article focuses on the life insurance industry, with references to comparative practices in the P&C industry. Many ERM considerations are global in nature, so we also discuss ERM practices outside of North America – particularly in Europe, where the emerging Solvency II regulatory regime has fast-tracked the development of risk reports and analytics by insurance companies.

    The ERM Continuous Improvement Cycle

    ERM may be thought of as a Continuous Improvement Cycle, comprising seven broad areas of activity as illustrated in Figure 1:

    1. Risk culture: Define or redefine risk culture
    2. Risk/return definition: Define risk (capital), and return (value), and establish risk appetite
    3. Risk reporting: Develop risk reporting requirements
    4. ERM technology: Design, build, and implement ERM technology (to meet risk reporting requirements)
    5. Calculate risk/return: Perform calculations of risk and return
    6. Analysis of change: Analyze risks contributing to capital and the change in capital over time
    7. Strategy implementation: Apply results to management decisions (capital allocation, risk management, product strategy, and asset and liability management)

    At the end of activity 7, firms re-evaluate their risk culture and the cycle begins again.

    The Continuous Improvement Cycle is useful as a framework and contains two activities that can either make or break an ERM program:

    • In activity 2: Defining risk/capital and return/value may not accurately reflect how senior management seeks to run its business, which will quickly render the ERM program ineffective
    • In activity 4: The establishment of ERM technology is fraught with pain points. However, there are three main actions that may make a significant difference:
    1. Enabling ERM information to be processed and disseminated quickly, so it does not lose its relevance
    2. Efficiently handling risk data
    3. Eliminating duplication of process

    Adopting best practices in these two activities will be critical to a successful ERM program. For that reason, the remainder of this article focuses on these areas.

    Figure 1. ERM Continuous Improvement Cycle
    Source: Moody's Analytics

    Definition of risk and return

    For an ERM program to obtain corporate-wide buy-in and genuinely add value, it must be built around analysis and metrics that are truly reflective of the company’s risk culture. In other words, it must be in line with how senior management wants to run the company. While this practice may seem obvious, it is often not applied.

    The program should flow naturally from a company’s risk culture, stated risk appetite, and definitions of risk and return. These elements form the building blocks on which the ERM program can be built.

    Return may be based on statutory or Generally Accepted Accounting Principles (GAAP) earnings, or it may be a more “economic” measure, such as an internally calculated economic value. Similarly, risk may be defined in terms of certain external risk-based metrics, such as statutory or rating agency capital, or again an economic capital measure (i.e., an internally assessed risk-based view of the amount of required capital). Having determined these metrics, the insurer can then focus on maximizing value while managing risk to a level reflective of the organization’s risk appetite. Ultimately, the determination of these risk-and-return measures comes down to a company-specific, philosophical senior management decision.


    In the North American life insurance industry, a lively debate continues about how risk should be defined. In particular, opinions differ on whether or not a life office should take an economic capital approach and how an internal risk-based capital measure should be calculated. In Europe, Solvency II has required companies wishing to use an internal model to become conversant with calculating capital on the basis of shocks to the one-year forward value of the balance sheet. As a result, there has been a natural tendency to use the same “one-year Value at Risk (VaR)” approach when calculating internal risk-based capital. This is in stark contrast to North America, where regulators have focused on evaluating insurers on the basis of what is needed to meet the emerging policyholder liabilities as they become due, which is usually described as a “run-off” approach. Hence, in North America there has been much less of a natural tendency toward the one-year VaR approach for an internal risk-based capital measure.

    The picture is further complicated by the difference between how life and P&C insurers handle the one-year VaR approach. Life companies invariably need to look at the market-consistent values of their balance sheets because of the optionality inherent in their assets and liabilities. Thus, they look at the market to see what the “price” is for such optionality. P&C liabilities generally do not have such optionality, so a one-year VaR calculation for a P&C insurer will typically involve doing stressed one-year roll-forward projections of statutory reserves and comparing them to the market values of assets at that one-year point. Table 1 compares the life versus P&C approaches to economic capital.

    Table 1. Comparison of life versus P&C approaches to economic capital in the North American insurance industry
    Source: Moody's Analytics
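
    To make the P&C-style calculation described above more concrete, the following is a minimal numerical sketch in Python. All dollar figures and stress scenario names are illustrative assumptions, not figures from the article: statutory reserves are rolled forward one year under each stress and compared against the market value of assets at the one-year point.

        # Minimal sketch of a P&C-style one-year roll-forward comparison.
        # All figures and scenario names are illustrative assumptions.

        assets_market_value_1y = 1_150.0  # projected market value of assets at year one ($m)

        # One-year roll-forward of statutory reserves under a base case and two stresses ($m).
        stressed_reserves_1y = {
            "base": 1_000.0,
            "catastrophe stress": 1_180.0,
            "reserve deterioration": 1_090.0,
        }

        base_surplus = assets_market_value_1y - stressed_reserves_1y["base"]
        for name, reserves in stressed_reserves_1y.items():
            surplus = assets_market_value_1y - reserves
            print(f"{name:>22}: surplus {surplus:7.1f}, loss vs base {base_surplus - surplus:7.1f}")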

    This difference in the North American versus European regulatory perspective is reflected in how North American life offices assess capital for internal purposes. A significant number of institutions use a run-off approach for the internal assessment and a large number of medium-sized insurers simply manage statutory capital – which today remains largely formulaic – without adjustment.

    Based on our discussions around the market, Table 2 summarizes the split of economic capital methodologies across the industry. Around 70% of these life insurers take a “stat” or “stat-like” (real-world run-off) approach to managing capital. Of the approximately 30% of life insurers that use a one-year VaR approach, the vast majority are North American subsidiaries of European parents, where the Solvency II approach reigns and is used for reporting group-wide capital. In some instances, the North American subsidiary may report the one-year VaR Solvency II-type number up to the group for consolidation, but locally this number is completely disregarded; US statutory capital is used to manage and run the local business. In such cases, companies develop much more robust processes around measuring US statutory capital at the expense of the Solvency II-type number.

    While the regulatory precedent certainly contributes to the differences in approach to how capital is defined for internal management purposes, there are other fundamental differences that have a significant impact:

    • Life insurers in North America have long been familiar with a “real-world run-off” approach, having used it for many years for traditional asset and liability management.
    • The market-consistent approach to valuation used prior to the 2008 crash has been criticized, particularly regarding how it could lead to highly volatile reserve/capital valuations. The fact that US insurance companies were not regulated on a “mark-to-market” basis when the crisis hit and the industry as a whole performed well on a solvency basis was highlighted by US regulators as a victory for the US framework. These issues were indeed acknowledged by the European regulators and led to the refining of the Solvency II calculations.
    • Many life insurers in North America genuinely manage their assets and liabilities on a buy-and-hold basis and both sides of the balance sheet are expected to be on the books for an extremely long period of time. What sense does it make to hold capital that is reflective of an immediate (or at least one-year forward) liquidation of the assets and liabilities?
    • There are measurement difficulties with both approaches, but the market-consistent approach raises a very fundamental problem – putting a market value on liabilities where there is no deep and liquid market. Simply put, the overwhelming majority of insurance liabilities are not “marked-to-market” and companies are managed accordingly.
    • Many life and annuity writers in the US write what are exclusively “spread products,” such as vanilla fixed deferred annuities and universal life products. On a market-consistent basis, these products show no added value, making it impossible to justify writing new business under a market-consistent valuation approach.

    The importance of establishing risk and return measures that are genuinely reflective of how a life office runs its business cannot be highlighted enough. The exercise is not necessarily a straightforward one, however, and the discussion about how to define capital for internal management purposes is a case in point.

    Table 2. How the top 50 North American life insurance industry companies (by number of companies) define economic capital
    Source: Moody's Analytics

    Establishment of ERM technology

    Technology and data are ever-changing, which is especially complex in the context of an established life insurance company. A life insurer may have old policies on its books with inception dates decades in the past, and legacy system issues where the cost to extract, transform, and load (ETL) data to a new system is deemed to outweigh the benefits. Moreover, whenever the technology improves, another set of actuarial and risk reports becomes de rigueur, usually involving processing requirements many times more demanding than the predecessor analytics. And while all life insurers have processing issues to contend with, certain types of companies deal with particularly onerous requirements. For example, reinsurers can receive data from ceding companies in an incomplete state. A guaranteed living benefits writer is another example, where it is challenging to avoid policy-by-policy processing for virtually any actuarial projection.

    Ultimately, the technology issue may be distilled to three main challenges that need to be addressed in a logical and streamlined way:

    1. Speed: How quickly ERM information can be made available
    2. Data: How to obtain clean, accurate, and efficiently stored data that can be readily analyzed historically
    3. Process duplication: How to use technology processes efficiently and avoid duplicating them

    Speed

    Many of the computations required for ERM are extremely onerous in terms of processing requirements. In many instances, these computations may simply be impossible using current hardware and software technology.

    Figure 2. Complex nested stochastic one-year VaR computation
    Source: Moody's Analytics

    Solvency II’s one-year VaR economic capital calculation for companies adopting an internal model is a case in point. The full version of this calculation involves a complete set of one-year forward real-world scenarios (say 1,000 scenarios). In addition, at each one-year forward point, insurers need to generate a full set of market-consistent scenarios to compute the market-consistent values of assets and liabilities (say another 1,000 scenarios at each one-year point). This “nested stochastic” calculation is illustrated in Figure 2. Firms are potentially looking at processing one million scenarios (1,000 outer x 1,000 inner scenarios). And if insurers want to do this across a large corporation with many complex lines of business potentially on a global basis, they are looking at an extremely burdensome calculation in terms of processing demands.

    Moreover, the above complex calculation covers one-year VaR capital at just a single point in time. By projecting capital at multiple time steps (annually over a three to five year period and possibly across six to seven stress scenarios), insurers will eventually reach a point where processing is simply impossible.
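
    The sheer size of the nested calculation can be seen in a simple sketch. The following Python example uses an entirely stylized one-factor balance sheet – the single risk driver, liability payoff, and discount rate are illustrative assumptions, not any regulatory prescription – to show the 1,000 x 1,000 outer/inner structure that produces one million scenario evaluations.

        # Minimal sketch of the nested stochastic one-year VaR structure described above.
        # The single risk driver, liability payoff, and discount rate are toy assumptions.
        import numpy as np

        rng = np.random.default_rng(42)
        N_OUTER = 1_000   # one-year forward real-world scenarios
        N_INNER = 1_000   # market-consistent scenarios at each one-year point

        def market_consistent_value(one_year_state: float, n_inner: int) -> float:
            """Toy inner valuation: average discounted payoff over market-consistent paths."""
            inner_returns = rng.normal(loc=one_year_state, scale=0.20, size=n_inner)
            payoffs = np.maximum(1.0 + inner_returns, 1.0)   # stylized liability with a guarantee floor
            return payoffs.mean() / 1.02                     # flat 2% one-year discount

        # Outer loop: real-world one-year states of the single risk driver.
        outer_states = rng.normal(loc=0.05, scale=0.15, size=N_OUTER)

        # Inner loop at each outer node: one million scenario evaluations in total.
        one_year_values = np.array([market_consistent_value(s, N_INNER) for s in outer_states])

        base_value = market_consistent_value(0.0, N_INNER)
        losses = one_year_values - base_value                # increases in the liability are losses
        var_99_5 = np.quantile(losses, 0.995)                # 1-in-200 one-year VaR
        print(f"One-year 99.5% VaR of the toy liability: {var_99_5:.4f}")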

    This naturally leads to the use of approximation techniques. Again, similar to the economic capital debate, the experience has been different in North America versus the rest of the world – especially when compared to Europe, where the Solvency II requirements have sent life insurers down a particular path.

    “Proxy modeling” has been commonly used in Europe, particularly in the context of internal models for Solvency II, where the nature of a nested stochastic market-consistent one-year projected real-world calculation necessitates some type of approximation technique, even for a point-in-time calculation. The projection of such a metric, as required for the Own Risk and Solvency Assessment (ORSA) under Solvency II, only compounds the issue. There are a number of ways in which a “proxy model” can be constructed. The aim of proxy modeling is to fit a function to a block of liabilities, and then use that function to value the liabilities, rather than performing a full-blown computation on the actuarial projection platform. The technique is flexible enough to fit assets as well. As such, the full actuarial ALM calculation can be consistently proxied to a high degree of accuracy.

    Of the largest European insurers that have developed an internal model to meet Solvency II – adopting what is referred to as an “Advanced Approach” – all use some form of proxy function methodology to speed up the revaluation. In order to facilitate aggregation and analysis of the calculated risk and capital metrics, companies will typically produce proxy functions along product lines, business lines, different geographical entities, or any other dimensions along which they want to view risk and capital. For example, we are aware of one large European corporation using up to 1,000 functions (300 liability and 700 asset functions) along different product lines and geographical entities within the insurance group.
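
    As a simple illustration of the fitting step, the sketch below fits a degree-two polynomial proxy across two hypothetical risk drivers, using ordinary least squares against a set of stand-in “full model” valuations. The risk drivers, polynomial degree, and liability values are assumptions for illustration only; a production proxy fit would use the insurer’s actuarial platform to produce the fitting points.

        # Minimal sketch of fitting a polynomial proxy function to liability values.
        # Risk drivers, degree, and the stand-in "full model" values are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n_fit = 200

        # Fitting points across two risk drivers (e.g., equity shock and interest-rate shock).
        equity_shock = rng.uniform(-0.4, 0.4, size=n_fit)
        rate_shock = rng.uniform(-0.02, 0.02, size=n_fit)

        # Stand-in for liability values produced by full runs of the actuarial model.
        liability_value = (100 - 60 * equity_shock + 800 * rate_shock
                           + 40 * equity_shock**2 + rng.normal(0, 0.5, size=n_fit))

        # Design matrix for a degree-two polynomial in the two risk drivers.
        X = np.column_stack([
            np.ones(n_fit),
            equity_shock, rate_shock,
            equity_shock**2, rate_shock**2, equity_shock * rate_shock,
        ])
        coeffs, *_ = np.linalg.lstsq(X, liability_value, rcond=None)

        def proxy_liability(eq: float, rt: float) -> float:
            """Near-instant revaluation via the proxy, instead of a full actuarial run."""
            x = np.array([1.0, eq, rt, eq**2, rt**2, eq * rt])
            return float(x @ coeffs)

        print(proxy_liability(-0.25, 0.01))  # liability value under a joint stress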

    In North America, the need for approximation techniques is less pressing. First, the reserve and capital calculations required to meet both the US and Canadian statutory regulations are not as complex as the Solvency II internal model calculation. In the US, for example, the two lines of business where stochastic modeling is currently required – variable annuities and universal life with guarantees – need only a single set of real-world stochastic paths, not a nested calculation. Moreover, under the current regulation, a formulaic approach or deterministic calculation is used for all other business. Thus, the point-in-time calculation for capital under the North American regulations can comfortably be handled as a “brute force” calculation, and any type of proxy approach is neither appropriate nor necessary. Second, as discussed earlier, the majority of life offices in North America use a statutory approach or a stat-like approach to internal risk-based capital, unless they have European affiliates. Finally, for purposes of projecting capital for the US and Canadian ORSAs, the statutory guidelines are, at least presently, laissez-faire, which is quite unusual, particularly for the US regulators. There is strong empirical evidence suggesting that, at least for the first round of ORSA submissions in North America, very approximate techniques will be used for the capital projection (e.g., prorating capital by projected run-off of policies by number and size).
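
    As an example of how approximate those first-round projections can be, the following sketch simply prorates today’s capital by an assumed in-force run-off pattern. The starting capital and run-off fractions are invented for illustration.

        # Minimal sketch of prorating capital by projected run-off of the in-force block.
        # Starting capital and run-off pattern are illustrative assumptions.
        current_capital = 500.0                                # today's required capital ($m)
        inforce_fraction = [1.00, 0.88, 0.77, 0.67, 0.58]      # projected in-force (by count/size) vs today

        for year, fraction in enumerate(inforce_fraction):
            print(f"Year {year}: projected capital ~ {current_capital * fraction:.0f}")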

    There are also other factors evident in North America that have hampered the wider use of proxy models to date: the use of replicating portfolios and the explanation of proxy models.

    Use of replicating portfolios

    A number of Tier 1 companies used a replicating portfolio (RP) approach a few years ago when the technique gained some initial popularity – again, primarily in Europe – as a way to avoid huge liability processing run times. Under RP, the objective is to establish a portfolio of assets that “replicates” the liabilities. It is this replicating portfolio of assets that is then used for valuation of the liabilities. The success of the method relies on the assumption that there is a portfolio of assets available in the market that replicates the behavior of the liabilities, yet is simpler to value than the liability portfolio. In practice, this proved very difficult to achieve, especially in the tails of the distributions – precisely where insurers are most interested for capital calculations. As a result, RP has fallen out of favor with practitioners. This in turn has created a presentational barrier for proponents of proxy modeling, as there has tended to be an initial perception at the senior management level that proxy modeling is just another form of replicating portfolio “trickery” or “actuarial dark arts.”
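
    For context, a replicating portfolio fit is itself a least-squares exercise: choose weights on a set of candidate assets so that their values match the liability values across scenarios. The sketch below uses invented candidate assets, scenarios, and stand-in liability values to show the mechanics, and reports the worst mismatch across scenarios; in practice the problematic mismatches tend to sit in the tail scenarios that matter most for capital.

        # Minimal sketch of a replicating portfolio (RP) fit via least squares.
        # Scenarios, candidate assets, and the stand-in liability values are illustrative.
        import numpy as np

        rng = np.random.default_rng(7)
        n_scenarios = 500

        # Candidate asset values per scenario: a zero-coupon bond, an equity index, and an equity put.
        equity = rng.lognormal(mean=0.03, sigma=0.18, size=n_scenarios)
        bond = np.full(n_scenarios, 0.97)
        put = np.maximum(1.0 - equity, 0.0)
        candidate_assets = np.column_stack([bond, equity, put])

        # Stand-in for liability values from full runs of a guaranteed product model.
        liability_values = 0.9 * bond + 0.2 * equity + 0.8 * put + rng.normal(0, 0.01, n_scenarios)

        weights, *_ = np.linalg.lstsq(candidate_assets, liability_values, rcond=None)
        mismatch = liability_values - candidate_assets @ weights

        print("Replicating weights (bond, equity, put):", np.round(weights, 3))
        print("Worst absolute mismatch across scenarios:", float(np.abs(mismatch).max()))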

    Explanation of proxy models

    Another aspect affecting the adoption of proxy models is that – because of the newness of the technique and the seeming complexity of the resulting functions – they are not always easy to explain to senior management. There is no doubt that it takes skill to adeptly present to senior management what a proxy model is and why it works. Although the math can be set up to avoid “overfitting” a function, some proxy functions can still be incredibly complex. Fitting to a multi-dimensional risk factor space and having to explain the resulting higher order polynomials to senior management can be a daunting task. From what we have seen in the North American life industry, this presentational obstacle has not yet been fully overcome.

    Due to these regulatory and business drivers, the adoption of proxy functions in North America is in its infancy. That said, there certainly have been life companies in the US and Canada that have successfully implemented proxy modeling approaches. This looks like a growth area in the next few years, especially in the context of ORSA, as companies move from the formality of preparing and submitting the initial reports to potentially using more sophisticated methods for projection – for example, using their ORSA reports as a truly effective way to help manage and extract value from their business. Another area where proxy modeling is compelling is hedge effectiveness testing, which is another form of nested calculation, with many stochastically generated real-world scenarios that may require multiple nested market-consistent simulations to calculate the Greeks at future points in time.

    As a parallel development, there is interesting testing work occurring in the market that looks at emerging state-of-the-art hardware and software with a view to continuing to perform “brute force” calculations, but using the best technology available to run them as fast as possible. In particular, some life offices in North America have invested resources in developing test data and code to run on Graphics Processing Unit (GPU) based platforms. There is also greater adoption of cloud-based computing to overcome some of the run-time issues that companies face. These developments hold tremendous promise and could genuinely revolutionize how life insurers perform their valuations.

    While cloud-based and GPU-based platforms hold great promise, there remain several stumbling blocks that firms need to overcome before there is genuine progress, including cost, legacy systems from acquisitions, and system updates.

    Cost

    There will be a short-term cost to developing a new platform using this technology. Thus, it will be tough for companies and vendors of existing platforms to make the financial decision to go down this road and essentially abandon their current platforms – risking the loss of business. From an insurer’s perspective, it is also a difficult decision, as it could entail a lengthy transition from an old to a new system.

    Legacy systems from acquisitions

    When new blocks of business are purchased, insurers have to contend with legacy system issues: how can they transition away from the acquired company’s system? Whenever this question comes up, the answer is often to leave the acquired block on the legacy system. And if a company achieves growth through acquisition, it is not uncommon to see it using multiple systems. In one instance, a large company that had grown through acquisition was running every actuarial system commercially built in the last 25-plus years.

    System updates

    Insurers testing the new GPU technology have been concerned that it is difficult to update the system with the introduction of a new product or product feature.

    Data

    Good data management seems almost too obvious to call out as part of a best practice process. In practice, however, data management is often a pitfall of ERM programs, and poor data can quickly render a program valueless.

    There are three key aspects to risk data management: obtaining clean and accurate data, efficiently storing data, and performing historical trend analysis.

    Getting data that is clean and accurate

    Insurers should seek to ensure that data is accurate as close to the original source as possible. This is an issue that insurers have done a great job of addressing in recent years, with many of the problems of manual entry for loading new business greatly reduced. However, it still remains an issue for many companies, in particular reinsurers, where data often arrives secondhand.

    Many insurers are building intelligent data platforms in which risk data is run through multiple checkpoints and intelligent routines that clean up data where there are obvious errors.
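
    A minimal sketch of what such checkpoints and clean-up routines might look like is shown below. The field names and rules are hypothetical assumptions for illustration, not a specific vendor schema.

        # Minimal sketch of checkpoint and clean-up routines for incoming policy records.
        # Field names and rules are hypothetical assumptions for illustration.
        from datetime import date

        def validate_and_clean(policy: dict) -> tuple:
            """Return a cleaned copy of the record plus a list of issues flagged for review."""
            issues = []
            cleaned = dict(policy)

            # Checkpoint 1: mandatory fields must be present.
            for field in ("policy_id", "issue_date", "face_amount"):
                if cleaned.get(field) in (None, ""):
                    issues.append(f"missing {field}")

            # Checkpoint 2: obvious errors that can be cleaned automatically.
            face = cleaned.get("face_amount")
            if isinstance(face, (int, float)) and face < 0:
                cleaned["face_amount"] = abs(face)
                issues.append("negative face_amount corrected")

            # Checkpoint 3: plausibility checks that require manual follow-up.
            if isinstance(cleaned.get("issue_date"), date) and cleaned["issue_date"] > date.today():
                issues.append("issue_date is in the future")

            return cleaned, issues

        record, flags = validate_and_clean(
            {"policy_id": "A123", "issue_date": date(2031, 1, 1), "face_amount": -250_000}
        )
        print(record)
        print(flags)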

    Efficient storage of data

    Data storage is expensive, and careful planning is needed to establish what information is genuinely required and for what purpose. Efficient data storage also means storing data in a way that allows it to be easily accessed.

    Ease of historical data analysis

    A key requirement of ERM is the ability to perform historical trend analysis. An analysis of change over time is also a great check on the calculations and a useful indicator of what is contributing to risk and return. Meeting this requirement means storing data in a form that readily supports such analysis.

    Process duplication

    While the speed of managing, processing, and disseminating information is critical to an ERM program, it may sometimes be difficult to quantify the cost-benefit of investing in data and speed, and therefore to establish the direct impact they will have on the bottom line. It is much easier to sell the benefit of cutting costs by eliminating duplicative processes.

    Process duplication can be evident in many parts of the ERM process.

    Consider, for instance, the duplication of underlying in-force data and product data across different applications: there may be one system performing asset and liability management and another system for ERM, yet essentially the same underlying in-force and product data is needed for both.

    Alternatively, consider the duplication of assumption setting (i.e., multiple individuals in different areas setting the same targets and assumptions). An example is the setting of targets for economic scenario generation – multiple individuals may be setting the same target, depending on their department.

    Having a common centralized platform and assumption-setting area avoids the diseconomies of scale and the potential errors of having many people around the firm doing the same thing. It is not uncommon for an insurance company to have two or more systems that require duplication of effort in creating actuarial extract files.

    Thus, a best practice process has a single platform for all the company’s data, which flows through from administration (new business) to accounting and actuarial, as well as to investment and risk management. In practice, however, these requirements are typically handled by different data platforms and applications. Even within each broad area, there may be vast numbers of databases and software applications in use. Consider, for example, a global ERM program for a large insurer: there may be many actuarial projection systems in use across the organization, perhaps different systems by territory and by product. Data may also be collected in different places and handled in different ways (e.g., market risk versus credit risk versus operational risk).

    Best practices for managing speed, data, and process duplication challenges

    An integrated ERM platform, as depicted in Figure 3, represents a best practice approach to managing many of the speed, data, and process duplication challenges faced by insurance companies. Such a platform should be modular and flexible enough to readily permit the integration of potentially many different data sources and projection capabilities.

    Figure 3. Integrated ERM platform architecture
    Source: Moody's Analytics

    This platform affords an insurance company the ability to use a common datamart to normalize data across the entire organization and run a deep set of analytics on clean and accurate data.

    Insurers can best extract value from their ERM program investment by defining risk and return in a way that genuinely reflects senior management objectives and by establishing ERM technology that focuses on three main challenges – the speed at which ERM information is processed and disseminated, the efficient handling of risk data, and the elimination of process duplication.

    Insurers that embrace a Continuous Improvement Cycle and establish an ERM process that is built on a best practice foundation will unlock value creation across their organization – helping them overcome many of the speed, data, and process duplication challenges common in insurance ERM today.

    Sources

    1 Stephen Britt, Anthony Dardis, Mary Gilkison, Francois R. Morin, and Mary M. Wilson, Risk Position Reporting, Society of Actuaries, special publication, October 2001.
