
    Data: The Foundation of Risk Management

    Obtaining and storing data is crucial, but it is only the first step. To facilitate effective risk decisions, data must be turned into the right information and delivered to the right people in an understandable format. This article focuses on developing an effective data management framework for the analytical data used for regulatory and business reporting.

    As banks face increasing regulatory scrutiny, risk managers and senior bankers must adapt their current practices – from implementing digital and mobile banking to dealing with low investment yields and strengthening capital. Systematically adapting these practices to meet evolving regulations – such as Basel II and III, FINREP, COREP, International Financial Reporting Standards (IFRS) 7 and 9, Dodd-Frank, the Comprehensive Capital Analysis and Review (CCAR), the European Central Bank’s (ECB) Asset Quality Review (AQR), and stress tests – is proving particularly onerous. Table 1 provides an overview of these regulations and their associated data requirements.

    One core element woven through these regulations is data. In particular, regulators seek greater emphasis on data quality, accuracy, granularity, full auditability (and lineage) of data utilized, and going forward, a central analytical data store.

    Data is not only needed in greater volumes, but also requires much greater levels of granularity than ever before. In addition to regulatory requirements for better data transparency, the business side demands more data-driven information. Efficiently managing data in banks is such a problem that a recent survey estimated that it can consume 7-10% of a bank’s operating income.1

    Data became particularly relevant in the requirements established by the Basel Committee on Banking Supervision (BCBS) and published in January 2013 as BCBS 239, Principles for Effective Risk Data Aggregation and Risk Reporting.2 These requirements consist of fourteen principles, ranging from governance and accuracy to IT structure and delivery, and are designed to help improve a bank’s ability to identify and manage bank-wide risks. The fourteen principles are grouped into four sections:

    • Governance and Architecture
    • Risk Data Aggregation
    • Risk Reporting
    • Supervisory Review

    While these principles apply only to global systemically important banks (G-SIBs), with a target implementation date of January 1, 2016, national supervisors may apply them to domestic systemically important banks (D-SIBs) as well.

    In addition to the BCBS requirements, the Financial Stability Board (FSB) Data Gaps Initiative also requests that systemically important financial institutions (SIFIs) report additional data (large exposures, liquidity, and other balance sheet data). Regardless of the regulatory requirements, these principles provide sound guidance for all banks.

    Table 1. An overview of regulations and data requirements
    Source: Moody's Analytics


    What is analytical data?

    Analytical data is the data a bank uses in its regulatory and business risk reporting and which is the subject of the BCBS principles. What comprises analytical data in a bank? Figure 1 illustrates the four main categories of analytical data – finance, asset and liabilities, risk, and capital planning.


    Analytical data is different from the operational or transactional data that banks traditionally use, in that it:

    • Is sourced from different types of systems – finance, assets, capital modeling, and risk systems – many of which are highly specialized and desktop-based. These systems in turn rely on data from core administration systems.
    • Requires a high degree of granularity to support multi-dimensional reporting (e.g., interactive dashboards).
    • Often has to be aggregated and consolidated.
    • Must be readily available, accurate, and comprehensive (covering all risks) to support monthly, quarterly, and annual reporting cycles – as well as ad hoc and real-time analyses.
    • Is used primarily in regulatory, business, and financial reporting and to support risk and capital decision-making.

    Figure 1. The four main categories of analytical data
    Source: Moody's Analytics

    For the most part, analytical data is used to support monthly, quarterly, and annual reporting cycles, but, increasingly, senior management is looking for more real-time data, such as daily market risk dashboards and continuous solvency monitoring. A high level of granularity is crucial.

    What are the key problems?

    Banks are moving beyond simply assembling models and calculations to comply with Basel II from a technical perspective. Increasingly, they are realizing just how dependent those calculations are on the quality of the underlying data. The fundamental problem is that banks have an extensive amount of analytical data stored in multiple systems, each with its own data models, standards, and technology. This results in seven key problem areas:

    1. Massive amounts of data. Banks have large amounts of different types of data stored in multiple, siloed systems organized by entity, line of business, risk type, etc. Many of these systems are antiquated legacy systems with no standardized data models or data sets. Once the systems that contain analytical data have been identified, extracting, standardizing, and consolidating that data is the next challenge.

    2. Lack of a common data model and standards. Few banks today have an enterprise-wide analytical data model, which makes standardizing data sets and aggregating data difficult.

    3. Reliance on manual data processes. Many banks still employ large numbers of employees to review and validate data, as well as find “gaps.” This manual approach is slow, costly, and almost impossible to analyze and audit.

    4. Low quality data and audit trails. Most banks struggle with the poor quality of data held in their core systems due to input errors, unchecked changes, and the age of the data. This limitation is compounded by the fact that banks often have multiple loan, credit card, asset, administration, and finance systems with no common data (or metadata) models. The lack of sound audit trails and lineage is an issue, particularly with legacy systems and manual processes.

    5. Structured and unstructured data. A significant amount of data within a bank is still unstructured (i.e., information that does not have predefined relationships), such as within portfolios or derivatives, and is not stored in existing centralized databases. Aggregating unstructured data and combining it with structured data is the key challenge.

    6. Accurate counterparty data. A particular problem for banks is getting the deep, accurate, and granular counterparty data essential for credit risk modeling – classification, jurisdiction, entity type, etc. Complexity increases when guarantors and insurers sit between the layers. Counterparty data can also come from multiple sources.

    7. Regulatory compliance. A plethora of regulatory initiatives focus on accurate and correct data at the right level of granularity with full audit trails. Banks not only need governance frameworks, but also IT platforms that actually deliver and aid compliance.

    Figure 2. Data management and governance framework
    Source: Moody's Analytics

    It is also important that banks implement a data governance and IT architecture framework, as well as quality standards that not only meet the requirements of BCBS, but also satisfy the needs of both internal and external auditors. This governance framework must be supported by an IT architecture that manages and automates the data management and reporting processes.

    Data management and reporting architecture

    Technology is a key element in analytical data management and governance, particularly for quality control, auditability, and delivery of information. Figure 2 depicts one possible data management and governance framework using a number of integrated technology components.

    1. Source Systems: All banks have multiple core banking systems (client, loan, credit, etc.), as well as specialist treasury, asset, finance, forecasting, and modeling systems from which analytical data needs to be extracted.

    2/3. ETL Tools: Extract, Transform, and Load (ETL) tools extract data automatically from source systems, transform it into a common format, and load it into data quality tools or directly into a data repository.
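
    To make the ETL step concrete, the sketch below shows a minimal, illustrative version in Python – not a specific vendor tool – that extracts records from an assumed CSV export of a loan system, transforms the source-specific fields into a hypothetical common format, and loads the result into a staging table (SQLite stands in for the analytical data repository; all file, field, and table names are assumptions).

        import csv
        import sqlite3
        from datetime import date

        def extract(path):
            # Extract raw rows from a source-system CSV export.
            with open(path, newline="") as f:
                return list(csv.DictReader(f))

        def transform(rows):
            # Map source-specific field names onto a common, typed format.
            return [{
                "contract_id": r["LoanNo"].strip(),        # assumed source field names
                "exposure": float(r["OutstandingAmt"]),
                "currency": r["Ccy"].upper(),
                "reporting_date": date.today().isoformat(),
            } for r in rows]

        def load(records, conn):
            # Load transformed records into the staging area of the repository.
            conn.execute("CREATE TABLE IF NOT EXISTS staging_exposures "
                         "(contract_id TEXT, exposure REAL, currency TEXT, "
                         "reporting_date TEXT, validated INTEGER DEFAULT 0)")
            conn.executemany("INSERT INTO staging_exposures "
                             "(contract_id, exposure, currency, reporting_date) "
                             "VALUES (:contract_id, :exposure, :currency, :reporting_date)",
                             records)
            conn.commit()

        conn = sqlite3.connect("analytical_repository.db")
        load(transform(extract("loan_system_export.csv")), conn)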

    4. Data Profiling and Quality Tools: Data profiling tools automate the identification of problematic data prior to loading it in the repository, collect statistics about that data, and present them in a usable, report-based format. Data quality tools automatically improve the quality of data based on logic, rules, and algorithms supplemented by expert human analysis.
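
    As a simple illustration of what such tools automate, the sketch below profiles an incoming batch (collecting basic statistics) and applies a few hand-written quality rules to flag records for review. Dedicated profiling and data quality tools provide far richer logic; the field names and rules here are assumptions.

        # Profiling: collect simple statistics about an incoming batch of records.
        records = [
            {"contract_id": "L-001", "exposure": 250000.0, "currency": "EUR"},
            {"contract_id": "", "exposure": -5000.0, "currency": "eur"},
        ]
        profile = {
            "row_count": len(records),
            "missing_contract_ids": sum(1 for r in records if not r["contract_id"]),
            "negative_exposures": sum(1 for r in records if r["exposure"] < 0),
        }
        print(profile)

        # Quality rules: flag records that fail basic logic checks for expert review.
        def quality_issues(record):
            issues = []
            if not record["contract_id"]:
                issues.append("missing contract_id")
            if record["exposure"] < 0:
                issues.append("negative exposure")
            if record["currency"] != record["currency"].upper():
                issues.append("non-standard currency code")
            return issues

        exceptions = [(r, quality_issues(r)) for r in records if quality_issues(r)]
        print(exceptions)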

    5. Analytical Data Repository: This repository is a relational database that stores analytical data in a structured and accessible format for querying and reporting. It typically consists of a staging area where “raw” data can be loaded prior to quality checking and validation and a results area where approved data can be locked-down for reporting purposes. Organizations can have a single repository with multiple datamarts built in or dedicated data repositories for each major type of data.
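
    The split between a staging area and a locked-down results area can be sketched as follows, again using SQLite as a stand-in for the relational repository (table and column names are assumptions): raw data lands in staging, and only validated rows are promoted to the results table used for reporting.

        import sqlite3

        conn = sqlite3.connect("analytical_repository.db")
        conn.executescript("""
            CREATE TABLE IF NOT EXISTS staging_exposures (
                contract_id TEXT, exposure REAL, currency TEXT,
                reporting_date TEXT, validated INTEGER DEFAULT 0);
            CREATE TABLE IF NOT EXISTS results_exposures (
                contract_id TEXT, exposure REAL, currency TEXT,
                reporting_date TEXT, approved_by TEXT, approved_at TEXT);
        """)

        # Promote only validated rows from the staging area to the results area.
        conn.execute("""
            INSERT INTO results_exposures
            SELECT contract_id, exposure, currency, reporting_date,
                   'risk_controller', datetime('now')
            FROM staging_exposures
            WHERE validated = 1
        """)
        conn.commit()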

    6. OLAP Cubes: OLAP cubes are multidimensional views (constructed by IT) on the data tables stored in the repository, which enable data to be loaded into reports and dashboards.
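
    OLAP technology itself is specialized, but the idea of a multidimensional view can be illustrated with a pandas pivot table over assumed dimensions (business line, risk type, currency); the aggregated structure is what reports and dashboards would then be built on.

        import pandas as pd

        exposures = pd.DataFrame([
            {"business_line": "Retail",    "risk_type": "Credit", "currency": "EUR", "exposure": 1200.0},
            {"business_line": "Retail",    "risk_type": "Credit", "currency": "USD", "exposure": 800.0},
            {"business_line": "Corporate", "risk_type": "Market", "currency": "EUR", "exposure": 450.0},
        ])

        # A cube-like view: exposure summed across three dimensions.
        cube = pd.pivot_table(
            exposures,
            values="exposure",
            index=["business_line", "risk_type"],   # row dimensions
            columns="currency",                     # column dimension
            aggfunc="sum",
            fill_value=0.0,
        )
        print(cube)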


    7. Reporting Engine: This engine is technology that interrogates the repository in a structured manner based on OLAP cubes to physically produce and render reports, dashboards, and queries.
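
    As a sketch of that last step – assuming the results table created in the repository sketch above – a reporting layer could query the approved data, aggregate it along the required dimensions, and render it into a shareable format.

        import sqlite3
        import pandas as pd

        conn = sqlite3.connect("analytical_repository.db")
        report = pd.read_sql_query(
            "SELECT currency, SUM(exposure) AS total_exposure "
            "FROM results_exposures GROUP BY currency",
            conn,
        )
        # Render a simple, dashboard-ready HTML table (a stand-in for a reporting engine).
        report.to_html("exposure_report.html", index=False)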

    8. Enterprise-Wide Data Model: Effectively, this is a common map of all the analytical data elements that an organization needs, and which should be used by all risk systems. Without a common data model, no data standards can be imposed to aid user understanding, making ETL and aggregation much harder.
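
    A tiny, illustrative fragment of what such a model might look like in code: a canonical exposure record, plus a per-source mapping that the ETL layer could use to translate one system’s field names onto it (all names are assumptions).

        from dataclasses import dataclass

        @dataclass
        class Exposure:
            # Canonical definition of an exposure record in the enterprise-wide model.
            contract_id: str
            counterparty_id: str
            business_line: str
            currency: str          # ISO 4217 code
            exposure_amount: float

        # Per-source mapping onto the canonical model, used by the ETL layer.
        LOAN_SYSTEM_MAPPING = {
            "LoanNo": "contract_id",
            "CustRef": "counterparty_id",
            "LOB": "business_line",
            "Ccy": "currency",
            "OutstandingAmt": "exposure_amount",
        }

        def to_canonical(source_row: dict) -> Exposure:
            kwargs = {canonical: source_row[src] for src, canonical in LOAN_SYSTEM_MAPPING.items()}
            kwargs["exposure_amount"] = float(kwargs["exposure_amount"])
            return Exposure(**kwargs)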

    9. Workflow Engines: Data management and reporting tasks can be defined, documented, and then executed and controlled by a workflow engine.
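
    A dedicated workflow engine adds scheduling, monitoring, and escalation, but the core idea – tasks defined with dependencies, executed in a controlled order, and logged for audit – can be sketched as follows (task names and logic are placeholders).

        import logging

        logging.basicConfig(filename="data_workflow.log", level=logging.INFO)

        def extract_task():
            logging.info("extracted analytical data from source systems")

        def validate_task():
            logging.info("profiled, cleansed, and validated the data")

        def report_task():
            logging.info("produced regulatory and business reports")

        # Each task is declared together with the tasks it depends on.
        WORKFLOW = [
            ("extract",  extract_task,  []),
            ("validate", validate_task, ["extract"]),
            ("report",   report_task,   ["validate"]),
        ]

        def run(workflow):
            done = set()
            for name, task, dependencies in workflow:
                if not all(dep in done for dep in dependencies):
                    raise RuntimeError(f"dependencies of '{name}' are not satisfied")
                task()
                done.add(name)

        run(WORKFLOW)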

    10/11. Governance and Compliance Framework: This is a data management and reporting governance framework supplemented with internal and external auditing practices.

    Improving data quality

    A key element in the management of data is improving the quality of raw data held in the source and modeling systems. Figure 3 illustrates a detailed process to improve data quality.

    Within the data quality process, there are two factors to consider. First, data quality should not be regarded as a one-off process. New data is always emerging, so there has to be a process to monitor data quality on an ongoing basis. The documentation of this process should be automated as much as possible.

    Second, data continues to diversify. Perhaps one of the most interesting recent developments is the ability to enrich validated data with extra data from external sources, such as the sociographic or demographic data of policyholders or buying habits from supermarket chains. This widening of data types is particularly useful for enhancing the single view of a customer. It does, however, add a layer of complexity into the system.
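
    Returning to the first point, ongoing monitoring can be made largely automated and self-documenting. The sketch below shows one minimal form: a quality check run on every data load (or on a schedule), with its results appended to an audit log rather than recorded by hand (field names and record structure are assumptions).

        import json
        from datetime import datetime, timezone

        def run_quality_checks(records):
            # Summarize the quality of the latest batch of records.
            return {
                "run_at": datetime.now(timezone.utc).isoformat(),
                "row_count": len(records),
                "missing_contract_ids": sum(1 for r in records if not r.get("contract_id")),
                "negative_exposures": sum(1 for r in records if r.get("exposure", 0) < 0),
            }

        def log_results(results, path="data_quality_audit.jsonl"):
            # Append each run's results so the monitoring process documents itself.
            with open(path, "a") as f:
                f.write(json.dumps(results) + "\n")

        batch = [{"contract_id": "L-001", "exposure": 250000.0}]
        log_results(run_quality_checks(batch))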

    Eight critical success factors

    Given the complexity of the types of data, and the effort required to process it for reporting, where do banks go from here? The following is a list of eight critical factors for success in implementing an effective data management framework to support risk management and compliance within a bank.

    1. In relation to data, banks have to think of IT as a profit center rather than just a cost center. Data is key not only for regulatory compliance and reporting, but also for the business decision-making process.

    2. Banks need to understand the value of analytical data to the organization. They should develop an enterprise data model and standardize data sets across the bank. This will break down silos and greatly help aggregation and analysis.

    3. Ensuring the quality of analytical data is absolutely critical – without this, the accuracy of all generated risk and capital numbers becomes questionable.

    4. Data quality is not a one-off exercise – it must be treated as an ongoing process, which is documented and reviewed on a regular basis.

    5. Most banks already have some components that could form the foundation of a data management framework. It is important to leverage existing technologies as a starting point and then enhance them with new components as required.

    6. Regulatory reporting is highly prescribed, but business reporting is less so. It is up to business practitioners to define precisely the information they need in reports and dashboards. Thus, the business must liaise closely with IT to define their reporting requirements in terms of information, drill-through capabilities, frequency, delivery, etc. IT can then consider the data, data structures, source systems, and gaps that need to be filled to meet those needs.

    7. Banks should ensure there are capabilities and processes in place to meet ad hoc reporting requests from supervisors and demands during crises.

    8. While spreadsheets remain an important element of analytical data, they need to be carefully managed and controlled.

    Figure 3. Detailed process for improving data quality
    Source: Moody's Analytics

    Most data projects do not fail because of technology – they fail because the business was not able to define their data requirements. Defining these requirements, however, is no easy task. According to a report by CSC, by 2020 the global pool of data will be 35 zettabytes (one zettabyte equals nearly 1.1 trillion gigabytes) – 44 times greater than it was in 2009.3 Establishing a framework to manage and make sense of the mountains of data and associated complexity is therefore of paramount importance.

    To foster more informed, risk-aware decisions, banks must turn data into the right information, delivered to the right people in an understandable format. Building an effective data management framework for analytical data will enable banks to enhance their regulatory and business reporting and be well positioned to flexibly scale with their data needs.

    Sources

    1 American Banker, 9 Big Data Challenges Banks Face, Penny Crosman, August 2012.

    2 Basel Committee on Banking Supervision, Principles for Effective Risk Data Aggregation and Risk Reporting, January 2013.

    3 CSC, Big Data Universe Beginning to Explode, 2012.
