
DORA: an impact assessment

Due to rapid digitization within the European financial sector, the European Commission has introduced the Digital Operational Resilience Act (DORA) to set the basis for digital resilience at financial entities. Financial entities will be required to improve their digital resilience by enhancing their ICT risk management processes, incident handling and management of third parties, and by sharing cyber-related information and experiences with peer organizations to strengthen the sector as a whole. One distinct feature of DORA is that it also brings new financial segments into its regulatory scope, under the supervision of the European Commission.

Please note that this article has been updated after its initial publication to correctly reflect the latest developments regarding the DORA legislation and its timelines.

Introduction

In the past few years, IT regulatory requirements at the European level have increased, driven by the growing use of IT and the risks it poses. In 2017, the European Banking Authority (EBA) announced its “Guidelines on ICT Risk Assessment under the Supervisory Review and Evaluation Process (SREP)”, soon to be followed by guidelines on PSD2, cloud service providers and ICT & security. The European Insurance and Occupational Pensions Authority (EIOPA) published its guidelines on “information and communication technology security and governance” and “outsourcing to cloud service providers” in 2020, around the same time that the European Securities and Markets Authority (ESMA) published its “Guidelines on outsourcing to cloud service providers”. The titles alone reveal the overlap between these guidelines issued by the European Supervisory Authorities (ESAs). This raises the question of why each segment authority operates in its own silo and reinvents the wheel instead of working together on a European scale. Apart from the supervisory benefits, financial institutions that operate in different segments of the financial sector would save time, costs and effort by reporting against one single set of guidelines.

The European Commission (EC) seems to understand this notion and – in line with this – proposed a new regulation in 2020, the “Digital Operational Resilience Act” (DORA), aimed at harmonizing network and information security and operational resilience across the financial sector as a whole.

What does DORA entail?

DORA, as a proposed regulation, is part of the larger Digital Finance package of the European Commission. Its goal is to propagate, drive and support innovation and competition in the realm of digital finance, while effectively managing the associated ICT risks. Without a doubt, the use of ICT in the financial sector has increased to the extent that ICT risks can no longer be addressed indirectly as a subset of business processes. Moreover, ICT has permeated the different financial services, ranging from payments to clearing and settlement and algorithmic trading.

On top of that, ICT risks form a constant challenge to the operational resilience and stability of the European financial system. Since the financial crisis of 2008, ICT risks have only been addressed indirectly, as part of operational risk, and regulation has not fully addressed digital operational resilience. Existing guidance from the European Supervisory Authorities overlaps considerably, as each authority maintains its own IT framework for its segment; this creates operational challenges, increases the cost of risk management for financial institutions that operate in multiple segments and stands in the way of a level playing field.

DORA aims to improve the alignment between financial institutions’ business strategies and the conduct of ICT risk management. It therefore requires that the management body maintains an active role in managing and steering ICT risk management and pursues an appropriate level of cyber hygiene.

This article dives into the five pillars of DORA, explains them in further detail, and provides a comparative analysis of the different types of entities and the extent to which existing ICT control frameworks cover the contents of the DORA regulation, including the corresponding gaps. The article concludes with a general roadmap of actions financial entities can take to fulfill the requirements set out by DORA.

Pillars of DORA

The Digital Operational Resilience Act (DORA) consists of the following five pillars:

  • ICT risk management requirements. In order to keep up with the quickly evolving cyber threat landscape, financial institutions should set up processes and systems that minimize the impact of ICT risk. ICT risks should be identified on a continuous basis from a wide range of sources and addressed through internal control measures and disaster recovery plans, to safeguard the integrity, safety and resilience of ICT systems as well as the physical infrastructures that support the ICT processes within the business.
  • ICT-related incident reporting. DORA prescribes setting up appropriate processes to ensure consistent and integrated monitoring, handling and follow-up of ICT-related incidents, including the identification and eradication of root causes to prevent the recurrence of such incidents.
  • Digital operational resilience testing (DORT). Capabilities and functions within the ICT risk management framework require periodic assessment to identify weaknesses, deficiencies and gaps, and the implementation of corrective measures to resolve them. Specific attention is given to “Threat-Led Pen Testing” (TLPT), which enables financial entities to perform penetration testing based on the threats they are exposed to.
  • ICT third-party risk. Due to the increasing use of ICT third-party providers, financial entities are required to manage ICT third-party risk throughout the lifecycle (from contracting until the termination and post-contractual stages), based on the minimum requirements prescribed in DORA.
  • Information sharing agreements. In order to raise awareness and learn from peers, the regulation gives financial entities room to exchange cyber threat information and intelligence.

Figure 1. House of DORA.

DORA applies to the financial entities listed in Table 1. Certain types of entities are the more mature and traditional ones that have been in scope of previous European ICT-related regulations, such as the DNB Good Practice Information Security for banks, insurers and pension funds and the EBA guidelines on Outsourcing and ICT & Security Risk Management for banks. At the same time, DORA introduces types of entities that come into the scope of an ICT regulation and are subject to it for the first time, due to their (in)direct involvement in European financial processes. These include administrators of critical benchmarks like Moody’s, insurance intermediaries and ancillary insurance intermediaries (e.g., telecom companies that sell insurance on cell phones as a by-product; see Table 1).

Table 1. Scope of applicability.

DORA passed the proposal phase on July 13, 2022, when the Economic and Monetary Affairs Committee gave its approval for the implementation of DORA. On November 9, the European Parliament will vote on this legislation. The expected planning is that DORA will be finalized by the end of 2022, which kicks off the two-year implementation period during which financial institutions are expected to take measures to implement DORA. Compliance with DORA is required by the end of 2024. One exception to this timeline is the implementation of “Threat-Led Pen Testing” (the “Digital operational resilience testing (DORT)” pillar in Figure 1), which has a deadline at the end of 2025, as these requirements are more technical in nature.

Figure 2. Timelines DORA.

Following this introduction of DORA, its scope and its implementation timelines, the next sections discuss the pillars in more detail, starting with an explanation of the requirements for ICT risk management.

ICT risk management

The ICT organization is subject to fundamental ICT risk management. The aim of DORA is to establish ICT risk management that realizes the (permanent) identification of risks and their sources, proper follow-up, and protection mechanisms that minimize the impact of ICT risks. Realizing this requires ICT governance and a risk management framework, which the EC describes as principle- and risk-based ([ECFC20]).

ICT governance & standards

The overall responsibility for ICT risk management lies with the management body, which is also required to receive regular ICT training. Management is required to play a critical and active role in setting the guardrails for ICT risk management.

DORA does not propose specific standards to meet the requirements as part of the ICT risk management. However, DORA does aim for a harmonized guideline subject to a European supervisory system. ICT Governance lies at the base of realizing this.

The purpose of the ICT governance function is to design the accountability and processes for the development and maintenance of an ICT risk management framework, as well as the approvals, controls and reviews that complement it, such as ICT audit plans. Most important is the definition of clear roles and responsibilities for all ICT-related functions, including their risk tolerance levels.

The periodicity of testing, the identification of weaknesses or gaps and the potential mitigating measures are also subject to this governance. It has not yet been determined which standard or which controls should be tested, other than that incident handling and ICT third-party risk management require explicit follow-up. Hence, the scoping and implementation of the right controls requires attention.

Scoping and applying ICT risk management with DORA

For the scoping and implementation of an ICT risk management framework we will elaborate on the scope of assets and the proportionality in relation to existing controls.

For scoping, DORA refers to ICT risk management in a broad sense, covering aspects such as business functions and system accounts. However, supporting information assets should also be taken into consideration. This means that IT support tooling used for the execution of a control should also be marked as in scope for ICT risk management. The scope therefore extends beyond core business applications. An example is an identity and access management (IAM) tool used for the automatic granting of authorizations to users. In this case, the IAM tool should itself be subject to ICT risk management, to ensure that the risk of unauthorized access to the core business application is mitigated.
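As a simplified illustration (not a structure prescribed by DORA), the sketch below shows how such an asset scope could be recorded so that supporting tools are explicitly in scope; all names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    name: str
    asset_type: str                                     # "core application" or "supporting tool"
    supports: List[str] = field(default_factory=list)   # core assets this asset supports

# Hypothetical inventory: the IAM tool is in scope because it supports a core application.
inventory = [
    Asset("Payment Engine", "core application"),
    Asset("IAM Tool", "supporting tool", supports=["Payment Engine"]),
]

def in_scope(assets: List[Asset]) -> List[Asset]:
    """Core applications and every asset supporting one are in scope for ICT risk management."""
    core = {a.name for a in assets if a.asset_type == "core application"}
    return [a for a in assets
            if a.asset_type == "core application" or core.intersection(a.supports)]

print([a.name for a in in_scope(inventory)])  # ['Payment Engine', 'IAM Tool']
```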

Besides the scoping of assets, the regulation emphasizes certain requirements, such as ICT incident handling and ICT third-party risk management. On top of this emphasis, there are already many other regulations to comply with. This triggers the proportionality discussion.

With regard to proportionality, the Dutch Central Bank (DNB) already noted in earlier publications that regulation and supervision should be aligned with the size and complexity, but foremost the risks, of financial institutions ([DCB18a], [DCB18b]). Under DORA, microenterprises already benefit from more flexibility. DORA’s proposal also describes that tailored risks and needs depend on the size and business profile of the respective financial institution ([EuCo20]). Based on our experience, we already see many supervisory requirements for the Dutch financial services sector. This may mean that work does not need to be redone for certain areas. Financial institutions that already apply DNB’s Good Practice Information Security might already have the correct measures in place. This good practice is not a standard designated by the EC; however, we see that the aspects relevant to DORA are covered. The only remark is that the good practice is principle-based rather than risk-based ([DCB19]).

We therefore propose the following steps to establish proper ICT risk management in relation to DORA (a minimal scoring sketch follows the list):

  1. Determine your scope of IT assets
  2. Identify the risks related to DORA
  3. Identify the impact based on confidentiality, integrity and availability
  4. Identify the source of the risk based on whether the risk is driven by human, process, technology or compliance
  5. Determine the likelihood and impact based on low, medium or high for each risk
  6. Link the risk to existing/implemented controls and determine your residual risk for follow-up
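The following minimal sketch illustrates how steps 3 to 6 could be recorded in a simple risk register; the scoring logic, and the assumption that each linked control lowers the score by one point, is purely illustrative and not prescribed by DORA.

```python
from dataclasses import dataclass
from typing import List

LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Risk:
    description: str
    cia_impact: str        # confidentiality, integrity or availability
    source: str            # human, process, technology or compliance
    likelihood: str        # low / medium / high
    impact: str            # low / medium / high
    controls: List[str]    # existing/implemented controls linked to this risk

    def inherent_score(self) -> int:
        return LEVELS[self.likelihood] * LEVELS[self.impact]

    def residual_score(self) -> int:
        # Illustrative assumption: each linked control reduces the score by one point (floor of 1).
        return max(1, self.inherent_score() - len(self.controls))

risk = Risk(
    description="Unauthorized access to the core payment application",
    cia_impact="confidentiality",
    source="technology",
    likelihood="medium",
    impact="high",
    controls=["IAM provisioning control", "Periodic access review"],
)
print(risk.inherent_score(), risk.residual_score())  # 6 4 -> follow up if above risk appetite
```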

The incident handling process can facilitate adequate identification of risks on a continuous basis. Having the right data and reporting structure in place enables the organization to perform analyses and identify which new risks arise from IT.

ICT-related incidents

ICT-related incident management process

Many organizations are used to having an incident management process in place. The goal of this reactive process is to mitigate (remove or reduce) the impact of ICT-related disruptions and to ensure that ICT services become operational and secure in a timely manner.

Under DORA, financial entities must establish appropriate processes to ensure consistent and integrated monitoring, handling and follow-up of ICT-related incidents. This includes the identification and eradication of root causes to prevent the recurrence of such incidents. Financial entities may therefore need to enhance their incident management process to align with these minimum requirements.

The incident management process should at least consist of elements shown in Figure 3.

Figure 3. ICT-related incident management process.

The level of formalization of the ICT-related incident management process differs per financial entity. The more the process is formalized, the more likely an incident ticketing system is in place. An incident ticketing system enables the organization to record and track incidents and to monitor timely response by authorized staff, as sketched below.
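As a sketch of what such an incident record could look like (field names and the SLA are hypothetical and not DORA requirements):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class IncidentTicket:
    ticket_id: str
    description: str
    priority: str                      # e.g. "major", "high", "medium", "low"
    registered_at: datetime
    assigned_to: str
    resolved_at: Optional[datetime] = None
    root_cause: Optional[str] = None   # recorded so that recurrence can be prevented

    def response_overdue(self, now: datetime, sla: timedelta) -> bool:
        """True when the ticket is still open and the (hypothetical) response SLA has lapsed."""
        return self.resolved_at is None and now - self.registered_at > sla

ticket = IncidentTicket("INC-001", "Payment batch interface down", "major",
                        datetime(2022, 10, 3, 9, 0), "Service Desk")
print(ticket.response_overdue(datetime(2022, 10, 3, 13, 0), sla=timedelta(hours=2)))  # True
```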

Figure 4 gives an example of roles and responsibilities involved in the incident management process. It can be noted that the roles involved in the incident management process cover the full range of the organization, as incidents originate at different points in the organization and the resolution takes place at different points.

Figure 4. Roles in the incident management process.

Based on our experience, most financial entities already have a similar incident management process in place. As such, we expect the effort needed to adjust to the DORA requirements for this part to be limited.

Classification of ICT-related incidents

When there are multiple ICT-related incidents, priorities must be determined. The priority is based on the impact the incident might have on operations and on its urgency (the extent to which a delay in resolution is acceptable to users or the organization). Many organizations already use criteria to prioritize incidents.

DORA describes that financial entities will classify ICT-related incidents and determine their impact based on the criteria in Figure 5.

Figure 5. Impact assessment criteria.

The above-mentioned criteria will be further specified by the Joint Committee of the European Supervisory Authorities, including materiality thresholds for determining major ICT-related incidents which will be subject to the reporting obligation (see Figure 6). Additionally, this committee will develop criteria to be applied by competent authorities for the purpose of assessing the relevance of major ICT-related incidents to other Member States’ jurisdictions.

Based on our experience, most financial entities already apply a base set of criteria to prioritize ICT-related incidents. We expect that all financial entities will need to enhance this set of criteria to align with the DORA requirements for this part; however, this should be feasible.

Reporting of major ICT-related incidents

When a major ICT-related incident occurs, financial entities are required to report it to the relevant competent authority within the time limits shown in Figure 6.

Figure 6. Reporting timeline incidents.

Major ICT-related incidents may also have an impact on the financial interests of service users and clients of the financial entity. In that case, the financial entity must, without undue delay, inform its service users and clients about the incident and inform them as soon as possible of all measures taken to mitigate the adverse effects of the incident.
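A minimal sketch of how an entity could track its reporting milestones is given below; the time limits used here are placeholders, as the actual limits are those in Figure 6 and the regulatory technical standards still to be developed.

```python
from datetime import datetime, timedelta

# Hypothetical reporting milestones, counted from the moment an incident is classified as major.
REPORTING_MILESTONES = {
    "initial notification": timedelta(hours=24),
    "intermediate report": timedelta(hours=72),
    "final report": timedelta(days=30),
}

def reporting_deadlines(classified_at: datetime) -> dict:
    """Derive the deadline for each report type from the classification timestamp."""
    return {report: classified_at + delta for report, delta in REPORTING_MILESTONES.items()}

for report, deadline in reporting_deadlines(datetime(2022, 10, 3, 14, 0)).items():
    print(report, deadline.isoformat())
```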

At this moment, we note that the reporting of major ICT-related incidents to the relevant competent authorities and to service users and clients is generally not formalized. As such, KPMG expects that implementing a formalized reporting process for major ICT-related incidents will take effort for every financial entity. The next section explains the requirements on Threat-Led Pen Testing.

Threat-Led Pen Testing

One of the key pillars of DORA is to perform digital operational resilience testing on a periodic basis. The requirements stated in the Act support and complement the overall digital resilience framework and provide guidelines to financial institutions for the scoping, testing and tracking of ICT risks. These testing requirements are explained below.

Financial entities should follow a risk-based approach to establish, maintain and review a comprehensive digital operational resilience testing program, in line with their business and risk profiles. The resilience tests of this program can be conducted by a third party or an internal function and should at least include the following tests, performed at least on a yearly basis:

  • Vulnerability assessments
  • Open-source analysis
  • Network security assessments
  • Physical security reviews
  • Source code reviews
  • Penetration testing

Apart from the testing program, entities also have to perform vulnerability assessments before any new deployment or redeployment (or major changes) of critical functions, applications and infrastructure components. The sketch below illustrates how both obligations could be tracked.
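As a hedged illustration (asset names, helper functions and the one-year window applied per asset are assumptions, not text from the regulation), a simple tracker for the yearly testing program and the pre-deployment vulnerability assessment could look like this:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

YEARLY_TESTS = ["vulnerability assessment", "open-source analysis", "network security assessment",
                "physical security review", "source code review", "penetration testing"]

@dataclass
class TestRecord:
    test_type: str
    asset: str
    performed_on: date

def overdue_tests(records: List[TestRecord], asset: str, today: date) -> List[str]:
    """Return the yearly tests not performed for this asset in the last 365 days."""
    recent = {r.test_type for r in records
              if r.asset == asset and (today - r.performed_on).days <= 365}
    return [t for t in YEARLY_TESTS if t not in recent]

def predeployment_gate(records: List[TestRecord], asset: str, deployment_date: date) -> bool:
    """A (re)deployment of a critical asset may only proceed after a vulnerability assessment."""
    return any(r.asset == asset and r.test_type == "vulnerability assessment"
               and r.performed_on <= deployment_date for r in records)

history = [TestRecord("penetration testing", "payment platform", date(2022, 3, 1))]
print(overdue_tests(history, "payment platform", date(2022, 10, 1)))   # all yearly tests except penetration testing
print(predeployment_gate(history, "payment platform", date(2022, 10, 15)))  # False: no vulnerability assessment recorded
```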

In addition to the general tests mentioned above, DORA states that advanced penetration tests, such as Threat-Led Penetration Tests (TLPT) – penetration testing tailored to the threats the financial entity faces (e.g., a payment organization should perform penetration testing on its payment platform, as the threats to it are high) – should be performed at least every three years on the critical functions and services of a financial entity. The following points should be considered when performing these tests:

  • The scope of the TLPT is determined by the financial entity itself and validated with the competent authority. The scope should contain all critical functions and services, including those provided by third parties.
  • TLPT performed should be proportionate to the size, scale, activity and overall risk profile of the financial entity.
  • EBA, ESMA and EIOPA will develop draft regulatory technical standards after consulting the ECB and taking into account relevant frameworks in the Union which apply to intelligence-based penetration tests.
  • Financial entities should apply effective risk management controls to reduce any type of disruptive risks which affect the confidentiality, integrity or availability of data and assets.
  • Reports and remediation plans should be submitted to the competent authority, which shall verify and issue an attestation.

DORA also places specific demands on the testers performing the Threat-Led Pen Testing. Reputation and suitability are key, combined with the required expertise and level of skill. Moreover, testers must be certified by an accreditation body (e.g., ISACA) that is valid in the member states. If testers from external parties are used, the same requirements apply. In addition, when using external parties, professional indemnity insurance should be in place to cover risks of misconduct, and an audit or independent assurance is needed on the sound protection of confidential information used as part of the testing.

Apart from internal control, emphasis is placed on managing third parties too. This will be explained in the next section.

ICT third-party risk

The use of ICT third-party providers is prevalent in the financial sector. This ranges from limited outsourcing for data hosting services at external data centers to the more extensive outsourcing where use of IT systems and software is cloud-based or based on the Software-as-a-Service (SaaS) model, with different types of outsourcing between the two extremes.

Figure 7. Spectrum of outsourcing.

DORA’s approach towards ICT third-party risk is based on the perspective of financial entities managing ICT third-party providers throughout the entire lifecycle from the contracting to post-termination stage. This means a more holistic process than just monitoring the achievement of service level agreements and assurance reports received from the ICT third-party providers. This perspective is similar to that of the Outsourcing Guidelines of the European Banking Authority (EBA) ([EBA19b]).

Figure 8. Lifecycle ICT third-party service provider management.

At the same time, DORA propagates the principle of proportionality when implementing measures to comply with it.

DORA defines proportionality for outsourcing as follows ([EuCo20]):

  1. “scale, complexity and importance of ICT-related dependencies” and;
  2. “the risks arising from contractual arrangements on the use of ICT services concluded with ICT third-party service providers, taking into account the criticality or importance of the respective service, process or function, and to the potential impact on the continuity and quality of financial services and activities, at individual and at group level.”

The main changes lie in the processes around pre-contracting, contracting and termination.

General requirements

Like other regulations on outsourcing, DORA places the responsibility for the outcomes of the business process, whether impacted by outsourcing or not, with the financial entity, regardless of the extent of outsourcing. Financial entities are also expected to have proper insight into their ICT third-party providers and the services delivered, by properly maintaining this information in a so-called “Register of Information”. The level of detail in this register should distinguish between ICT third-party providers that deliver services covering critical or important functions and those that do not. Where needed, the national competent authority (e.g., AFM or DNB) may request (parts of) the register of information to fulfill its supervisory role. A minimal sketch of such a register is given below.
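The field set in this sketch is illustrative only; DORA and the accompanying technical standards determine the actual required content of the Register of Information.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RegisterEntry:
    provider: str
    service: str
    critical_or_important: bool   # drives the required level of detail
    contract_start: str
    contract_end: str

register = [
    RegisterEntry("CloudHost B.V.", "Hosting of payment platform", True, "2021-01-01", "2024-01-01"),
    RegisterEntry("PrintCo", "Printing of marketing material", False, "2022-05-01", "2023-05-01"),
]

def extract_for_authority(entries: List[RegisterEntry], critical_only: bool = True) -> List[RegisterEntry]:
    """Produce the subset of the register a competent authority might request."""
    return [e for e in entries if e.critical_or_important or not critical_only]

print(extract_for_authority(register))  # only the entry covering a critical or important function
```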

(Pre-)contracting requirements

In the context of DORA, the requirements that need to be taken into account when selecting an ICT third-party provider increase significantly compared to the situation now (see Figure 8).

The reporting process as mentioned in Figure 8 is the same as the existing policy of the Dutch Central Bank (DNB) that requires financial entities to notify DNB in case the entity is planning to enter a contractual agreement with a third-party service provider for any critical activities or with a cloud-provider ([DCB18a], [DCB18b]).

In addition to the list in Figure 8, and to guide financial entities, the European Supervisory Authorities (ESAs) will jointly designate and annually update the list of ICT third-party service providers that they view as critical for financial entities. The designation of “critical ICT third-party service providers” is based, among other things, on:

  • the systemic impact on financial entities in case of failure of the ICT third-party service provider;
  • the number of financial entities (global or other systemically important institutions) relying on a certain ICT third-party service provider;
  • the degree of substitutability of the ICT third-party service provider;
  • the number of countries in which the ICT third-party service provider provides services to financial entities;
  • the number of countries in which financial entities operate using a specific ICT third-party provider.

As part of pre-contracting, the following assessments and checks need to be made with regard to the ICT third-party provider before entering into the contractual agreement:

  • Whether the contractual agreement concerns the outsourcing of a critical or important function;
  • Supervisory conditions for contracting are met;
  • Proper risk assessment is performed, with attention to ICT concentration risk;
  • Proper due diligence as part of the selection and assessment process;
  • Potential conflicts of interest the contractual agreement may cause;
  • Whether the ICT third-party service provider complies with appropriate and up-to-date information security standards;
  • Audit rights and the frequency of audits at the ICT third-party provider need to be determined based on the financial entity’s risk approach;
  • For contractual agreements entailing a service with a high level of technological complexity (for instance software using algorithms), the financial entity should make sure it has auditors (internal or external) available that have the appropriate skills and knowledge to perform relevant audits and assessments;
  • In case a financial entity is planning to outsource to a third-party service provider that is located in a third country, the entity needs to make sure that the third country has sufficient laws in place regarding data protection and insolvency and that these are properly enforced.

DORA places significant emphasis on ICT concentration risk, which it defines as follows ([EuCo20]); a simple detection sketch follows the definition:

  • Contracting with an ICT third-party service provider that is not easy to substitute for another provider; or
  • Having multiple contractual agreements with the same ICT third-party service provider or a tightly knit group of ICT third-party service providers.
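As a simple illustration of the definition above, the following sketch flags providers that are hard to substitute or that hold many contractual agreements; the contract data and the threshold are hypothetical.

```python
from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical contract list: (provider, substitutable?)
contracts: List[Tuple[str, bool]] = [
    ("CloudHost B.V.", False),
    ("CloudHost B.V.", False),
    ("CloudHost B.V.", False),
    ("PrintCo", True),
]

def concentration_flags(contracts: List[Tuple[str, bool]],
                        max_contracts_per_provider: int = 2) -> Dict[str, dict]:
    """Flag providers that are hard to substitute or hold many contractual agreements."""
    counts = Counter(provider for provider, _ in contracts)
    flags = {}
    for provider, substitutable in contracts:
        many = counts[provider] > max_contracts_per_provider
        if not substitutable or many:
            flags[provider] = {"hard_to_substitute": not substitutable, "many_contracts": many}
    return flags

print(concentration_flags(contracts))  # {'CloudHost B.V.': {'hard_to_substitute': True, 'many_contracts': True}}
```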

Termination requirements

DORA requires financial entities to terminate contractual agreements with ICT third-party providers under certain circumstances:

  • The ICT third-party provider breaches applicable laws, regulations or contractual terms;
  • Circumstances or material changes arise that can potentially alter the performance of the delivered services and thereby impact the financial entity;
  • Weaknesses in the overall ICT risk management of the ICT third-party provider are identified that can impact the security and integrity of confidential, personal, or sensitive data;
  • Circumstances arise that result in the national competent authority not being able to effectively supervise the financial entity as a result of the contractual agreement.

Proper analysis of alternative solutions in the pre-contracting stage and development of transition plans are needed to be able to sustain business operations after terminating the contract with the ICT third-party provider.

Future outlook

Overall, DORA results in a significant increase of the requirements for managing ICT third-party providers, as it requires management of ICT third-party service providers throughout the lifecycle, from pre-contracting until the post-exit stage.

The current state of management of ICT third-party service providers at financial entities is focused on due diligence procedures, service level management and the analysis of assurance reports received from these providers (the traditional way of “managing ICT third-party providers”). Moreover, financial entities experience difficulties in providing insight into all relevant ICT third-party service providers at the level of detail that DORA requires. Most of the time, recording information on ICT third-party service providers is limited to the larger and more critical ones. KPMG’s view is that, of all the DORA pillars, compliance with the requirements to manage ICT third-party providers will require the most effort from financial entities, due to the wide gap between the current and required future states. Financial entities have to review their processes per lifecycle phase and expand current or implement new procedures and controls to ensure the inclusion of the DORA requirements. A large effort lies in the creation of the “Register of Information”, as within most financial entities contracts with ICT third-party providers are dispersed over the organization and managed in a decentralized manner. Getting all this information together in one overview is an arduous task.

Information sharing agreements

All operating financial entities experience information and cybersecurity threats one way or another. Most of the time, the threats are also similar in form and nature, such as common network and system vulnerabilities, hacks and malware. Overall, each financial entity battles the same threats, some more quickly or adequately than others because of differences in size, experience or other factors.

Reasoning from this situation, DORA prescribes that financial entities form communities and exchange cyber threat information and intelligence among themselves. This includes indicators of compromise as well as tactics, techniques and procedures on how to prevent and/or recover from threats.

However, there are certain conditions to forming such information sharing agreements. Forming such groups should be focused on enhancing the digital operational resilience and increasing awareness of the cyber threats and how these can be identified and resolved. At the same time, conditions for participation should be set, and data and information exchanged should be protected. Lastly, the national competent authorities should be notified when such information agreements are formed.

In the current landscape, we note that there are working groups between financial entities in the banking and insurance segments, but these are broader in nature and directed at exchanging information in general rather than at cyber threat information specifically.

The current state of DORA

As mentioned in the introduction, DORA scopes in many segments that are new to any IT regulation and have little to no experience in translating and implementing IT requirements in their organizations. At the same time, a number of segments have had their fair share of experience with IT regulations through the national competent authority (the Dutch Central Bank ([DCB19])) and the European Supervisory Authorities ([EBA19a], [EBA19b], [EIOP20]), and are more mature in governing IT in their respective organizations; these include banks, insurers and pension funds.

This dynamic creates a split in terms of the effort needed to comply. More mature organizations have the capabilities to bridge the gap based on past experience, whereas less mature organizations first have to build the capabilities to translate IT requirements into their organization.

The analysis in Figure 9 provides a detailed view of the situation. The five pillars of DORA are plotted against the different segments in scope. Per segment, it is indicated whether there are existing IT regulations/frameworks that overlap with DORA, to give an indication of the effort required to comply with DORA. A value of 1 means that there is one existing IT regulation or framework that overlaps with the requirements in the respective DORA pillar, whereas 2 means there is an overlap with two existing IT regulations or frameworks.

Figure 9. Analysis of mapping of DORA pillars vs. segments.

Based on the analysis above, three specific observations can be made:

  1. What becomes immediately apparent from this analysis is that certain sectors (e.g., credit rating agencies, benchmark administrators, crowdfunding organizations) lack IT frameworks to govern IT, are being subjected to IT regulations for the first time and therefore have no experience in translating IT regulations into controls within their organizations. The expectation is that financial entities in these segments will have to undertake considerable effort to comply with the DORA requirements.
  2. At the same time, we note that none of the segments have good practices or controls in place through existing regulations that address information sharing agreements. However, the requirements for “information sharing agreements” are not the hardest set of DORA requirements to comply with. As mentioned earlier, some informal working groups among banks, insurers and pension funds already exist, and adding the requirements from DORA would most probably require little effort.
  3. Lastly, segments that already have experience in implementing IT regulatory requirements, such as credit institutions, insurers and pension funds, did so through frameworks such as the DNB Good Practice Information Security, the EBA Guidelines on ICT & Security Risk Management, the EBA Guidelines on Outsourcing and the EIOPA ICT Guidelines. However, these financial entities still have to undertake some effort to comply with DORA. For ICT risk management, for example, there are additional requirements not covered by the existing regulations/frameworks, and the EIOPA guidelines for insurers and pension funds limit outsourcing requirements to contract and service level management only. In the same manner, the reporting of ICT-related incidents and Threat-Led Pen Testing are not common practice yet and will therefore also require effort for proper implementation – although to a lesser extent compared to newly regulated financial entities.

Roadmap to compliance

If we sum up all the requirements discussed in the previous sections, we note that while some elements are entirely new, others represent an add-on to existing practices. The requirements for managing ICT third-party risk and information sharing agreements are entirely new, whereas ICT risk management and ICT-related incidents represent add-ons to existing topics. All in all, there is quite a lot to comply with under DORA. Figure 10 summarizes the requirements for each of the DORA pillars that are new to financial entities.

Figure 10. Compliance roadmap.

Conclusion

DORA increases the attention on ICT used by financial institutions. As discussed in this article, DORA focuses on five pillars: ICT risk management, ICT-related incident reporting, digital operational resilience testing, ICT third-party risk and information sharing agreements. The scope of financial institutions to which it applies has been broadened. Besides the traditional financial institutions, such as banks and insurance companies, crypto-asset service providers are also required to comply, and they will need more formalization since no standards have been published yet for crypto-asset service providers. Hence, the current state of maturity regarding compliance with the five pillars may vary between the types of organizations. This also triggers the proportionality discussion and requires revisiting the financial entity’s current state to determine to what extent new or extra measures should be taken.

KPMG’s view is that DORA will increase the regulatory pressure and require compliance with new, additional requirements. This has multiple reasons. First of all, DORA is a European IT regulation that will bring extra pressure and impact, as it will apply as a law and brings financial entities under the supervision of the European Commission. Financial entities will therefore have to comply with a law, and failing to comply will be viewed as Non-Compliance with Laws and Regulations (NOCLAR), with potential legal implications.

Secondly, DORA introduces entirely new requirements that will require additional effort to implement within the organization, including the redefinition of internal processes (ICT third-party management) and the formation of new processes (information sharing agreements). For financial entities that do not have much experience with complying with IT regulations, this will be an arduous task.

Thirdly, a large part of the financial entities already has to comply with many different IT regulations/guidelines. Financial entities may therefore experience so-called “regulatory fatigue”, which may impact their overall level of compliance.

KPMG is of the opinion that financial entities should start assessing the impact of DORA on their organization as soon as they can, in order to effectively utilize the two-year implementation timeframe and achieve compliance with DORA by the end of 2024.

References

[DCB18a] Dutch Central Bank (2018, June 25). Good practices beheersing risico’s bij uitbesteding. Retrieved from: https://www.dnb.nl/voor-de-sector/open-boek-toezicht-fasen/lopend-toezicht/prudentieel-toezicht/governance/good-practices-beheersing-risico-s-bij-uitbesteding/

[DCB18b] Dutch Central Bank (2018). Proportioneel en effectief toezicht. Retrieved from: https://www.dnb.nl/media/yojpc5a5/dnb-studie-proportioneel-en-effectief-toezicht.pdf

[DCB19] Dutch Central Bank (2019). Good Practice Informatiebeveiliging. Amsterdam: DNB.

[EBA19a] European Banking Authority (2019). EBA Guidelines on ICT and security risk management. Paris: European Banking Authority.

[EBA19b] European Banking Authority (2019). EBA Guidelines on outsourcing arrangements. Paris: European Banking Authority.

[ECFC20] European Commission First Council Working Party (2020, September 30). Digital Operational Resilience Act. Retrieved from: https://ec.europa.eu/info/sites/default/files/business_economy_euro/banking_and_finance/200924-presentation-proposal-digital-operational-resilience_en.pdf

[EIOP20] European Insurance and Occupational Pensions Authority (2020). Guidelines on information and communication technology security and governance. European Insurance and Occupational Pensions Authority.

[EuCo20] European Commission (2020). Regulation of the European Parliament and of the Council on digital operational resilience for the financial sector and amending Regulations (EC). Brussels: European Commission.

[EuPa22] European Parliament (2022, March 24). Legislative Train Schedule: Digital Operational Resilience for the Financial Sector. Retrieved from: https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-cross-sectoral-financial-services-act-1

ESG is here to stay: is your policy management framework ready?

Introduction

The world has experienced continuous change over the last few years, and it has sometimes been difficult to know where the focus should be placed. The newest change facing the world has been brought about by generational shifts and increasing climate concerns: environmental, social and governance (ESG). From the introduction of a standard EU taxonomy for ESG to ESG data challenges (see [Zhig22] and [Delf22] respectively), it has become the buzzword of 2021/2022 for organizations and governments globally. However, those in ethics and compliance functions understand that ESG is not a new concept. In reality, this is more of a resurgence of concepts that have been combined due to their interdependence and have grown from “nice to have” into regulatory obligations.

The ESG challenge

The United Nations Principles for Responsible Investment define ESG as shown in Figure 1 ([UNPR18]).

Figure 1. United Nations Principles for Responsible Investment definition of ESG.

The breadth of these definitions may be daunting for those functions tasked with developing successful ESG strategies in their organizations. What’s more, it challenges – and largely prohibits – the traditional approach by organizations to delegate an emerging risk or legislative change to a single function in accordance with their risk framework. Implementing and managing ESG successfully will require an integrated approach that stretches across borders and areas of expertise.

Regulators driving change

ESG-centered regulatory guidance and obligations have steadily grown over the past few years, and there are expectations that these regulations will come with teeth ([Roge19]); a key factor in driving real change. To date, regulations have largely targeted sustainable investing and financial reporting obligations, supply chain or third-party risk, and diversity requirements. For example, the EU Taxonomy was released to establish a common language for discussing and reporting on sustainability topics and metrics ([Link22]). The Dutch Central Bank (de Nederlandsche Bank (DNB)) has also taken steps to drive change by monitoring the level of ESG commitment in the financial sector. As of January 2022, “Climate-related risks are now also part of the fit and proper assessments of (co)policymakers of banks, insurers and pension funds. The financial undertaking in question must include in its screening application the candidate’s knowledge and experience with regard to such risks. DNB amended its suitability matrices to explicitly include this.” ([Link22])

Apart from new requirements, regulators like the United States Department of Justice (DoJ) have also chosen to reiterate existing obligations that remain relevant for the success of governance, risk, and compliance (GRC) frameworks. The updated DoJ guidance on evaluating corporate compliance programs is one such example that would also support a sound ESG strategy. As noted by [Bell20], “the adequacy of compliance programs is frequently relevant in civil enforcement brought by federal agencies such as the United States Environmental Protection Agency (EPA) and state environmental enforcers … and are generally recognized as foundations for effective environmental risk management”. This suggests that while the onset of new regulations will require change, organizations should also utilize their compliance frameworks to approach ESG needs in an integrated manner.

Where should you start?

With the increase in regulations and societal demand, organizations are seeking solutions to implement ESG into their organization. As a first step, conducting a materiality assessment of ESG topics will support the focus on the areas which are most relevant and impactful to build the ESG strategy ([KPMG20]). Through existing frameworks, organizations can bring their strategy to life by tailoring their ESG approach to what works for their organization without causing significant business disruptions in the process.

A policy management framework is one such framework that is both foundational and a connector between topics. Policies and procedures are the resources that organizations use to set common standards across the organization and to support the realization of its mission, vision, values and strategy ([Nave21]). The policy management framework is the resource that ensures that those standards are communicated, that the roles and responsibilities concerning the standards are understood, and that the designated metrics are monitored and reported accordingly; all crucial elements for the success of ESG ([KPMG20]). Traditionally siloed topics also naturally converge within the policy management framework. This supports a cross-functional approach to interdependent risks – which ESG has in abundance.

Successful policy management frameworks should include at least the areas mentioned in Figure 2 to be effective and efficient.

Figure 2. KPMG Policies and Procedures Management Framework.

The policy management framework should build upon the fundamentals already in place in the organization. When bringing the policy management framework to life, organizations should ensure consistency among policies, accuracy with respect to relevant laws and concerns, clear relations between policies and concepts, and the application of a risk/value-driven approach. Moreover, multinational organizations should ensure that the global framework accounts for local regulatory requirements and their association with the global policies, as this is often where misalignment occurs.

Reinvent or refresh?

Once an organization completes the materiality assessment and sets its ESG strategy, it needs to build a solid governance structure and process to maintain it. Having a mature policy management framework provides a standard template for incorporating ESG into the organization like other emerging risks. Leveraging regulatory monitoring and change management within the policy management framework enables swift mapping of existing topics and functional areas to ESG, thereby identifying alignment opportunities and in-house expertise. For example, it may be the case that the organization has already established policies on the focus areas of its strategy. These could be refreshed to specifically tie in the ESG strategy, rather than creating a new set of ESG policies and procedures.

However, if an organization has treated policy management as an administrative necessity, further work will be required to be successful with ESG. As noted by [Doct21], “without effective policies in place, organizations will struggle to follow through with their ESG values as well as fail to effectively report.” Apart from an unrealized strategy, ineffective policy management can also result in increased legal costs and regulatory scrutiny. Therefore, organizations wishing to implement their ESG strategy should first review their policy management framework to ensure that the foundation is solid.

We have supported a variety of organizations in strengthening their Policy Houses and associated policy and procedure management frameworks. In one such case, we assisted a large financial services organization in establishing a meta-policy which detailed the overall framework approach, including governance, the policy lifecycle, training and communications, as well as ongoing monitoring and effectiveness reviews. The benefit for that organization was a structured, well-documented and tool-enabled framework that ensures consistency and that all core laws and topics are covered based on its risk appetite and strategy. The organization successfully moved from a rule-based approach to a value-driven approach. This supports the overall understanding of and adherence to policies and procedures and fosters the desired culture.

Conclusion

Strong policy management frameworks lay the foundation for risk management. Organizations without them are likely to experience an ESG implementation that is siloed and overlaps with existing risk areas, as well as a lack of structured monitoring to support compliance with extensive ESG regulations. So, from stakeholders and CEOs to compliance officers and general counsels, the decision makers and responsible persons across any organization should take stock of their policy management frameworks to prepare for ESG. A few questions to consider:

  • Have you invested in your framework recently?
  • Is your framework currently effective?
  • Do resourcing constraints point towards the opportunity to automate?
  • Is your framework sufficiently integrated to manage the multi-faceted risks that ESG brings?

If these cannot be answered “yes” with certainty, now is the time for proactive change; before it’s too late.

See also the other ESG article on Risk Management in this edition.

References

[Bell20] Bell, C.L. (2020, June 3). U.S. Department of Justice Revises its Guidance on Evaluating Corporate Compliance Programs. GreenbergTraurig E2 Law Blog. Retrieved from: https://www.natlawreview.com/article/us-department-justice-revises-its-guidance-evaluating-corporate-compliance-programs

[Delf22] van Delft, M., Hoffman, C., Verhaar, E., & Pieroen, P. (2022). Mastering the ESG Reporting and Data Challenges. Compact, 2022(1). Retrieved from: https://www.compact.nl/en/articles/mastering-the-esg-reporting-and-data-challenges/

[Doct21] DocTract (2021, December 13). Why ESG Demands a Strong Policy Framework. Retrieved from: https://www.linkedin.com/pulse/why-esg-demands-strong-policy-management-framework-doctract?trk=organization-update-content_share-article

[KPMG20] KPMG China (2020). Integrating ESG into your business. A step-by-step ESG guide for Hong Kong-listed issuers. Retrieved from: https://assets.kpmg/content/dam/kpmg/cn/pdf/en/2020/01/integrating-esg-into-your-business.pdf

[Link22] LinkLaters (2022). ESG Outlook in the Netherlands. Retrieved from: https://www.linklaters.com/en/insights/publications/2021/august/esg-outlook-in-the-netherlands

[Nave21] Navex Global (N.D.). Definitive Guide to Policy & Procedure Management, second edition. Retrieved from: https://www.navexglobal.com/en-us/resources/definitive-guides/definitive-guide-policy-and-procedure-management?RCAssetNumber=152

[Roge19] Rogers, J. & Richardson, S. (2019, December). ESG investing: The sharpening teeth of disclosure. How to stay ahead of the curve, minimize future costs of compliance and feed the growing demand from investors for responsible products and services. White & Case Financial Regulatory Observer. Retrieved from: https://www.whitecase.com/publications/insight/financial-regulatory-observer-december-2019/esg-investing-sharpening-teeth-disclosure

[UNPR18] United Nations Principles for Responsible Investment (2018). PRI Reporting Framework Main Definitions. Retrieved from: https://www.unpri.org/Uploads/i/m/n/maindefinitionstoprireportingframework_127272_949397.pdf

[Zhig22] Zhigalov, A. & de Graaff, G. (2022). Emerging Global and European Sustainability Reporting Requirements. Navigating the complexity and getting ready. Compact, 2022(1). Retrieved from: https://www.compact.nl/en/articles/emerging-global-and-european-sustainability-reporting-requirements/

Continuous control monitoring: the trend and how to get on board

How does the market think?

In a survey about Governance, Risk and Compliance ([KPMG19]), 57% of participants stated that only 10% of their internal control framework consisted of automated controls. However, 72% of participants identified control automation as a top priority. During the International SAP Conference on Internal Controls, Compliance and Risk Management in 2021 ([TAC22]), participants were asked several questions related to internal controls and their automation.

Figure 1 shows that 57% of the respondents would like to improve automated testing of their internal controls; 50% of respondents indicated that automated control testing and risk monitoring would be the highest priority on their GRC digitalization roadmap. However, 56% of respondents also stated that there are no technologies leveraged (yet) to automate their control testing.

Figure 1. Poll results from the International SAP Conference on Internal Controls, Compliance and Risk Management ([TAC22]).

Why automation, and what can we automate?

Organizations are aiming to automate the testing of controls, but why? Because automation of controls leads to increased assurance while spending less effort on manually performing or testing the control. This is also described with practical examples in [Klei16], in which the cost savings, assurance increase and quality increase were calculated for an example control (possible duplicate vendor invoices). Once control testing is automated, the frequency of testing can be increased and become continuous. When the automated testing or monitoring of these controls indeed becomes continuous, there are additional benefits. A publication from The Institute of Internal Auditors ([Code05]) states about continuous auditing: “The power of continuous auditing lies in the intelligent and efficient continuous testing of controls and risks that results in timely notification of gaps and weaknesses to allow immediate follow-up and remediation.” While continuous monitoring or testing is more the responsibility of the 2nd line of defense and continuous auditing lies with the 3rd line of defense, the statement applies to both: continuous monitoring or testing leads to timely notification of gaps and weaknesses and enables immediate follow-up and remediation.

The similarities and differences between continuous auditing and continuous monitoring are shown in Table 1.

Table 1. Continuous auditing versus continuous monitoring.

In summary, continuous automated testing or monitoring of controls is interesting for organizations as it is cost efficient, has a high level of reliability and allows for timely notifications and follow-up.

While the testing or monitoring of almost any control can be automated to some extent through periodic data analytics, robotics, small scripts in Python or even macros in Excel, [Gies20] describes that this is easiest for configuration and authorization controls, which are automated in nature as they are programmed or configured directly in the application. IT-dependent controls (e.g. controls based on a report) have slightly less potential for automation, followed by completely manual controls, for which automation is less straightforward, and procedural controls (e.g. both the CFO and CEO need to physically sign a document while in the same room), for which it is hardly feasible.
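As an illustration of such a small script, mirroring the duplicate vendor invoice example from [Klei16] (the invoice data and the matching criteria are hypothetical):

```python
from collections import defaultdict

# Hypothetical invoice records: (vendor, invoice_number, amount)
invoices = [
    ("Vendor A", "INV-1001", 5_000.00),
    ("Vendor A", "INV-1001", 5_000.00),   # possible duplicate
    ("Vendor B", "INV-2001", 1_250.50),
]

def possible_duplicates(records):
    """Group invoices on vendor, invoice number and amount; any group larger than one is suspect."""
    groups = defaultdict(list)
    for vendor, number, amount in records:
        groups[(vendor, number, round(amount, 2))].append((vendor, number, amount))
    return [items for items in groups.values() if len(items) > 1]

for group in possible_duplicates(invoices):
    print("Review:", group)
```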

While both continuous auditing and continuous monitoring are relevant and interesting topics, the remainder of this article will focus more on the continuous monitoring capabilities of selected tooling.

Systems and tools for automation

There are different systems and tools that have capabilities for continuous control monitoring. Some examples are MetricStream, SAI360, ServiceNow and SAP. Some might even say that these capabilities can also be met with Robotic Process Automation (RPA) and low-code platforms. While this is probably theoretically correct, the costs of setting up and maintaining such RPA or low-code solutions are not always considered in the business case. An example is the cost of developing an RPA robot: this often requires a specialized developer or team to gather requirements and to develop, test and deploy the robot. If the process changes after the RPA solution is live, the robot needs to be adjusted accordingly, which again takes time from the specialized development team. Other tools, such as GRC tools, are often owned by the internal control function and usually require less effort from IT or specialized teams.

Organizations that use SAP as their main ERP or financial system often use an SAP solution for continuous monitoring. Nowadays, SAP offers two solutions that can be leveraged for automated testing of controls and continuous monitoring thereof: SAP Process Control (part of SAP GRC) and SAP Financial Compliance Management.

SAP Process Control

SAP Process Control is part of the SAP GRC application. It offers, among other things, capabilities to document controls, send out workflows for control assessment and testing, report, and monitor controls automatically. A detailed overview of this system is provided in [Kimb17]. In this article the focus is on the automated control monitoring capabilities of SAP Process Control. SAP offers multiple integration scenarios for control monitoring, as highlighted in Figure 2.

Figure 2. Integration scenarios in SAP Process Control.

While there are ten possible scenarios, the four scenarios highlighted in green in Figure 2 are most commonly used. These are further explained in Table 2.

Table 2. Commonly used integration scenarios in SAP Process Control explained.

Once the integration with the target SAP ECC or SAP S/4 system is in place, Data Sources (essentially a table, view, or set of tables and views) and Business Rules (rules that determine which records in the retrieved data source are “right” and “wrong”) can be set up in SAP Process Control to determine whether the automated control in the target system is configured correctly. If the control is configured correctly, the business rule returns a “passed” result and the control is automatically reported as effective in SAP Process Control. If the control is not configured correctly, SAP Process Control automatically creates an issue workflow and sends it, accompanied by the results of the business rule, to the person responsible for the control for further follow-up. An example of such a workflow task in SAP Process Control is shown in Figure 3.

Figure 3. SAP Process Control Automated Monitoring workflow.

On top of this check, SAP Process Control also offers change log check functionality. This functionality can read and analyze the full change history of a table (e.g. the configuration table for the 3-way-match control), provided the table is flagged for change logging. By combining the “regular” configuration check and the change log check in SAP Process Control, 100% coverage can be achieved, meaning that the configuration settings of a target SAP system are monitored completely and continuously.
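As a conceptual illustration of how the point-in-time configuration check and the change log check together cover the full period, consider the sketch below. It is a minimal Python sketch under assumed, simplified data structures; in SAP Process Control this logic is configured as data sources and business rules within the application rather than written as an external script, and the tolerance field and threshold used here are hypothetical.

```python
# Conceptual sketch: a point-in-time configuration check combined with a
# change-log walk, so the setting is evaluated for the whole period.
# Field names and the tolerance threshold are hypothetical examples.
from datetime import date

MAX_TOLERANCE_PERCENT = 10.0  # the value the control expects


def current_setting_compliant(tolerance_percent: float) -> bool:
    """The 'regular' configuration check on the value as it stands today."""
    return tolerance_percent <= MAX_TOLERANCE_PERCENT


def compliant_throughout_period(initial_value: float, change_log: list) -> bool:
    """Replay the logged changes to confirm the setting never breached the rule."""
    value = initial_value
    for change in sorted(change_log, key=lambda c: c["changed_on"]):
        if value > MAX_TOLERANCE_PERCENT:
            return False
        value = change["new_value"]
    return value <= MAX_TOLERANCE_PERCENT


change_log = [
    {"changed_on": date(2022, 2, 1), "new_value": 12.0},  # temporary breach
    {"changed_on": date(2022, 2, 3), "new_value": 8.0},
]
print(current_setting_compliant(8.0))               # True: compliant today
print(compliant_throughout_period(5.0, change_log))  # False: breached in February
```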

SAP Financial Compliance Management

SAP Financial Compliance Management is a relatively new solution from SAP. Its aim is to provide a system that can be used to comply with SOx at a low total cost of ownership, leveraging a set of existing, pre-defined monitoring content.

As part of SAP Financial Compliance Management, SAP currently provides 60 Core Data Services (CDS) views in SAP S/4 which can be leveraged. These 60 CDS views are provided out of the box. It is also possible to create additional CDS views which can be read by SAP Financial Compliance Management.

The CDS views are read using so-called “automated procedures” in SAP Financial Compliance Management. These procedures are run to determine whether a control linked to the procedure is effective or ineffective. If the result of a procedure is ineffective, an issue is created for follow-up by the responsible user. An example of such a workflow task in SAP Financial Compliance Management is shown in Figure 4.

Figure 4. SAP Financial Compliance Management procedure results.

SAP Process Control and SAP Financial Compliance Management side by side

Both solutions from SAP can be used for continuous control monitoring of automated controls in SAP target systems. While they are largely similar, there are also some differences. Table 3 shows a comparison.

Table 3. Comparison between SAP Process Control and SAP Financial Compliance Management.

While SAP Process Control has been around for several years, contains a broad range of functionalities and could be considered more heavy-duty, SAP Financial Compliance Management is a newer solution from SAP, positioned more as a quick and easy introduction to control automation and SOx compliance. Both solutions provide the tools that are needed to perform continuous control monitoring.

Looking at the roadmap for the remainder of 2022, there is a clear focus on the further development of SAP Financial Compliance Management, with seven planned activities; for SAP Process Control, only one development is planned on the roadmap. On the one hand, this might mean SAP Process Control is a stable solution, as it has been around for many years. On the other hand, it also shows SAP’s ambition to enhance the new SAP Financial Compliance Management system. Both systems are, and remain, compatible with the SAP S/4 system. This gives customers a choice and the opportunity to assess which solution best fits their requirements.

Conclusion

Control automation and continuous control monitoring are still trending topics in the market. There are different applications and tools that provide functionality for continuous control monitoring. The applications delivered by SAP – SAP Process Control and SAP Financial Compliance Management – have their differences, but both deliver the functionalities needed to make the next step in the continuous control monitoring efforts of the internal control or internal audit function.

References

[Code05] Coderre, D. (2005). Global Technology Audit Guide: Continuous Auditing: Implications for Assurance, Monitoring, and Risk Assessment. The Institute of Internal Auditors. Retrieved from: https://www.iia.nl/SiteFiles/IIA_leden/Praktijkgidsen/GTAG3.pdf

[Gies20] van der Giesen, S. & Speelman, V. (2020). Exploring digital: Empowering the internal control function. Compact, 2020(3). Retrieved from: https://www.compact.nl/articles/exploring-digital-empowering-the-internal-control-function/

[Kimb17] Kimball, D.A. & van der Giesen, S. (2017). A practical view on SAP Process Control. Compact, 2017(4). Retrieved from: https://www.compact.nl/articles/a-practical-view-on-sap-process-control

[Klei16] Klein Tank, K. & van Hillo, R. (2016). It’s time to embrace continuous monitoring. Compact, 2016(4). Retrieved from: https://www.compact.nl/articles/its-time-to-embrace-continuous-monitoring/

[KPMG19] KPMG (2019, May). Survey – Governance, Risk and Compliance. Retrieved from: https://assets.kpmg/content/dam/kpmg/ch/pdf/results-grc-survey-2019.pdf

[TAC22] TAC Events (2022, March). Poll results – International SAP Conference on Internal Controls, Compliance and Risk Management 2021. Retrieved from: https://www.linkedin.com/posts/tac-events_sapccr-sapgrc-grc-activity-6902553579547426816-Q7A6

Incorporating ESG in risk management

Introduction

As a Risk & Controls professional, you sometimes find yourself in the following situation: you have just finished the year-end in-control statement and celebrated another successful end-of-year cycle with your team, when you receive an email from the CFO asking: “Do we have an internal controls framework for ESG reporting?” You are familiar with the term ESG. In fact, you just bought an electric vehicle to show your personal commitment to the topic. However, an internal controls framework for ESG reporting is completely new to you and you don’t even know where to begin.

Following this scenario, questions that naturally surface are: “What information is required to report, and how do I ensure the completeness, accuracy and compliance of the information being reported? Are appropriate internal controls in place within the different processes to ensure transparency, accuracy and consistency of the data being disclosed and reported? How do I assess whether I am doing enough to comply with the regulatory requirements in their true essence, rather than making it a box-ticking exercise? How does my role in this journey differ from what the sustainability department is responsible for?”

If the questions above sound at all familiar to you, you are not alone. For many organizations, ESG and ESG reporting have moved out of the office of the Chief Sustainability Officer (CSO) into the purview of the CFO, as the topic is becoming a focal point and climbing its way up the agendas of boardroom and C-suite discussions. Regulators across the globe have been driving the inclusion of ESG in reporting, as described in [Zhig22].

Understanding the need of the hour, we suggest some simple, albeit not easy, steps to consider when commencing the ESG reporting journey.

The ESG Reporting Journey

There are some considerations ([Schm22]) to be kept in mind by a Risk & Controls professional like yourself while starting and continuing on this journey:

  • Define the strategy for the risk function. The ESG risk profile should be underpinned with risk appetite statements, a robust framework and taxonomy as well as clear metrics to allow the management to monitor the amount of risk it is willing to accept in pursuit of the organizational objectives.
    For instance, consider a statement: “We have a low risk appetite for non-compliance of ESG reporting regulations either out of ignorance or willfulness; therefore, we focus on education, training, awareness and accountability of actions and disclosures.”
  • Self-assessment of skills and capabilities. Ensure your risk function is credible and well-positioned to add to the dialogue concerning strategic change. This implies a need for action on several fronts, such as hiring, training and career development of talent with the competencies to identify ESG-related risks and to put an internal controls framework in place. The risk function should stay up to date with regulatory changes in the ESG space, such as the introduction of the EU Taxonomy and the reporting requirements proposed by the SEC, and be quick to analyze the impact of non-compliance on the reputation of the organization. Risk professionals should also be able to assess the robustness of existing processes and controls, for instance by assessing how the HR department collects the employee-related numbers to be disclosed and whether the controls are appropriate for complete and accurate reporting.
  • Define roles and responsibilities. Define and agree on the role of the risk function within the business planning cycle – set it out chronologically and map check points for risk management-facilitated discussions on key strategic initiatives. ESG internal control specialists should be allocated the responsibility to perform risk assessments and double materiality assessments. Additionally, the risk function should play a role in defining the organization’s policies and procedures for ESG-related disclosure risks and controls.
  • Enhance risk management technologies. Make better use of available technologies, visualization tools and dashboarding to support senior management decisions on strategic risk. Invest in emerging risks, horizon scanning and stress testing capabilities to support better conversations on long-term implications of strategic decisions.
    For example, KPMG’s Sofy platform is often used for ESG regulations compliance tracking, carbon emissions monitoring, providing assurance over supporting data collection & analytics, ESG project impact tracking and performing maturity assessments.

Evolving your risk function towards the future ambition of the organization can be a complex undertaking. The following key steps are the core for a successful transformation:

  1. Look at establishing a governance structure with clear roles and responsibilities. The organization should set up adequate sustainability governance with clear roles and responsibilities in order to define policies, oversee the end-to-end ESG process from the definition of strategy through to the disclosures being made, and ensure there are appropriate controls throughout the process.
    In conjunction with management, it is important to understand the ESG topics of investor focus. You should focus on gathering existing documentation (e.g. baseline data, reporting strategy documents, output of process reviews) and review existing stakeholder materiality assessments, ERM results, internal board presentations, and analyst reports.
  2. Assess the as-is state for ESG reporting within the organization. While you start assessing the as-is state, give some thought to the below questions/points for a holistic overview:
    1. Is the ESG theme part of your organization’s values? Is the S(ocial) element included in the ethics & integrity employee training sessions?
    2. Is there sufficient knowledge of the G(overnance) aspects amongst oversight bodies to enable them to carry out their role appropriately?
    3. Are there clear well-established reporting lines, authorities and responsibilities for the E(nvironment) theme activities which also enable the organization to hold people accountable for their actions like waste disposal, carbon emissions, energy efficiency?
    4. How can you include fraud risks into ESG risk assessment activity to avoid greenwashing activities?
    5. Select and develop entity level governance controls like development of policies and procedures for ESG reporting. Develop process level controls for ESG disclosure activities like reporting of numbers under the gender and diversity KPI, number of accidents, along with technology driven controls for the IT systems used to generate the quantifiable figures.
    6. Can you already leverage on the existing lines of information and communication to use and communicate control information with respect to internal and external ESG reporting?
    7. Is there relevant and sufficient capability within your function to perform ESG risk assessments and to regularly evaluate the designed ESG reporting internal controls framework?

    As a Risk & Controls professional, start by assessing the maturity of the internal controls framework for the relevant ESG metrics and prepare a list of the gaps that need to be remediated to reach the end state. Focus on the Responsible, Accountable, Consulted, and Informed (RACI) matrix for an appropriate allocation of tasks across the organization. Also perform a data readiness assessment to understand how efficiently the data can be used for disclosures and what remediations would be required along the way.
    For instance, for reporting Greenhouse Gas (GHG) emissions under Scope 1 and 2 (see note 1), assess the process of collecting the data and calculating the numbers that are required to be reported. Assess the key risks and validate which controls are present or would be required in the process to mitigate those risks (a minimal calculation sketch is shown after this list).

  3. Design the internal controls framework for relevant ESG metrics. Based on the new governance structure and the as-is assessment, a new ESG internal controls framework including process, controls, reporting, technology, and data improvement recommendations for a future state Target Operating Model (TOM) should be prepared. Also include a Change Management and transformation plan for an efficient implementation process. For example:
    1. At an entity level, the risk function should design management controls for regular materiality assessments to monitor sustainability goals. Additionally, it should also consider cut-off procedures to ensure data is presented and calculated for the correct period.
    2. Another operational example – for reporting of GHG emissions under Scope 1 and 2 – the internal controls will have to be designed at a process level to ensure:
      • Completeness and accuracy of source data being used for calculation of GHG emissions in the organization
      • Complete and accurate calculation of GHG emissions
      • Transparency, consistency and relevance of GHG emissions data
  4. Implementation of the internal controls framework. Close the gaps identified and support the execution of the designed ESG reporting program and controls. This includes introducing system implementations, on-the-job training of staff and deployment of the roadmap towards ESG reporting.
  5. Sustain the framework. The new framework must be tested by your team over time and will require some overhauling whenever the ESG metrics change as a result of the materiality assessment. With appropriate internal status reporting, including the testing results, deficiencies can be remediated in a timely and complete manner and accurate reporting targets can be achieved.
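To make the GHG example from step 2 concrete, the sketch below illustrates the general calculation logic (activity data multiplied by an emission factor) together with a simple completeness check of the type the controls above should cover. The activity types, emission factors and quantities are illustrative assumptions only, not actual reporting factors.

```python
# Minimal sketch of a GHG calculation with a completeness control: activity
# data multiplied by an emission factor, following the general GHG Protocol
# logic. Activity types, factors, units and quantities are illustrative only.
EMISSION_FACTORS_KG_CO2E = {
    "natural_gas_m3": 1.9,             # Scope 1: fuel combusted on site (hypothetical factor)
    "purchased_electricity_kwh": 0.3,  # Scope 2: purchased energy (hypothetical factor)
}


def calculate_emissions(activity_data: dict) -> dict:
    """Convert activity quantities to kg CO2e per activity type."""
    missing = set(activity_data) - set(EMISSION_FACTORS_KG_CO2E)
    if missing:
        # Completeness control: unknown activity types must be investigated,
        # not silently dropped from the disclosure.
        raise ValueError(f"No emission factor available for: {missing}")
    return {a: qty * EMISSION_FACTORS_KG_CO2E[a] for a, qty in activity_data.items()}


print(calculate_emissions({"natural_gas_m3": 120_000, "purchased_electricity_kwh": 450_000}))
```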

Figure 1. Your road to reporting in the ESG journey.

Conclusion

Risk & Controls professionals can help organizations establish a long-term vision rather than only managing short-term risks. This presents a unique opportunity for risk professionals to take a prominent role and drive the transformation within the organization towards a better future.

After carefully considering these five steps and your company’s current situation, you can confidently respond to your CFO: “No, we currently do not have an internal controls framework for ESG reporting, but I know what to do. I will arrange a meeting to get started.”

See also the other ESG article on Risk Management in this edition.

Notes

  1. Scope 1 emissions: direct emissions from owned or controlled sources; Scope 2 emissions: indirect emissions from purchased energy; and Scope 3 emissions: indirect emissions, other than the ones under Scope 2, that occur in the value chain of an organization.

References

[Schm22] Schmucki, P. (2022, February 1). ESG and the evolving risk management function. KPMG Switzerland Blog. Retrieved from: https://home.kpmg/ch/en/blogs/home/posts/2022/01/esg-and-the-evolving-risk-management-function.html

[Zhig22] Zhigalov, A. & de Graaff, G. (2022). Emerging global and European sustainability reporting requirements. Compact, 2022(1). Retrieved from: https://www.compact.nl/en/articles/emerging-global-and-european-sustainability-reporting-requirements/

Emerging global and European sustainability reporting requirements

This article looks at new developments in sustainability reporting on a global and European level. There is a broad, multi-stakeholder desire for coherence and consistency in sustainability reporting, and major standard setters are collaborating and prototyping what may later become a unified solution. In this paper we share what we know about the proposals for the EU CSRD, the EU Taxonomy and the IFRS ISSB and indicate how companies can prepare for these new global and European developments.

Introduction

Regardless of regulation and domicile, companies – both public and private – are under pressure from regulators, investors, lenders, customers and others to improve their sustainability credentials and related reporting. Companies often report using multiple standards, metrics or frameworks with limited effectiveness and impact, a high risk of complexity and ever-increasing cost. Moreover, it can be daunting to keep track of the ever-changing reporting frameworks and new regulations.

As a result, there is a global demand for the major stakeholders involved in sustainability reporting standard setting to collectively come up with a set of comparable and consistent standards ([IFR20]). This would allow companies to ease reporting fatigue and prepare for compliance with transparent and common requirements. Greater consistency would reduce complexity and help build public trust through greater transparency of corporate sustainability reporting. Investors, in turn, would benefit from increased comparability of reported information.

However, the demand for global coherence and greater consistency in sustainability reporting is yet to be met. This paper provides an overview of the current state of affairs and highlights the most prominent collaborative attempts to set standards, through the IFRS Foundation Sustainability Standards Board, EU Corporate Sustainability Reporting Directive and EU Taxonomy.

Global sustainability reporting developments: IFRS International Sustainability Standards Board (ISSB) in focus

The new International Sustainability Standards Board (ISSB) aims to develop sustainability disclosure standards that are focused on enterprise value. The goal is to stimulate globally consistent, comparable and reliable sustainability reporting using a building block approach. With strong support from The International Organization of Securities Commissions (IOSCO), a rapid route to adoption is expected in a number of jurisdictions. In some jurisdictions, the standards will provide a baseline either to influence or to be incorporated into local requirements. Others are likely to adopt the standards in their entirety. Companies need to monitor their jurisdictions’ response to the standards issued by the ISSB and prepare for their implementation.

There is considerable investor support behind the ISSB initiative, and the Glasgow Financial Alliance for Net Zero (GFANZ) announced at COP26 that over $130 trillion of private capital is committed to transforming the global economy towards net zero ([GFAN21]). Investors expect the ISSB to bring the same focus, comparability and rigor to sustainability reporting as the International Accounting Standards Board (IASB Board) has done for financial reporting. This could mean that public and private organizations will adopt the standards in response to investor or social pressure.

The ISSB has provided prototype standards on climate-related disclosures and general requirements for sustainability disclosures, which are based on existing frameworks and standards, including the Task Force on Climate-Related Financial Disclosures (TCFD) and the Sustainability Accounting Standards Board (SASB). As of now, the prototype standards have been released for discussion purposes only. The prototypes cover climate-related disclosures and general requirements for disclosures that should form the basis for future standard setting on other sustainability matters.

Figure 1. What contributes to the ISSB and IFRS Sustainability Disclosure Standards.

The prototypes are based on the latest insight into existing frameworks and standards. They follow the four pillars of the TCFD’s recommended disclosures – governance, strategy, risk management, and metrics and targets – enhanced by climate-related industry-specific metrics derived from the SASB’s 77 industry-specific standards. Additionally, the prototypes embrace input from other frameworks and stakeholders, including input from the IASB Board’s management commentary proposals. The ISSB builds the prototypes using an approach similar to that of the IFRS Accounting Standards: the general disclosure requirements prototype was inspired by IAS 1 Presentation of Financial Statements, which sets out the overall requirements for presentation under IFRS Accounting Standards.

Companies that previously adopted TCFD should consider identifying and presenting information on topics other than climate and focus on sector-specific metrics, while those companies that previously adopted SASB should focus on strategic and process-related requirements related to governance, strategy and risk management.

Figure 2. How Sustainability Disclosure Standards are supposed to look.

The prototypes shed light on the proposed disclosure requirements. Material information should be disclosed across the presentation standard, thematic standards and industry-specific standards. Material information is supposed to:

  1. provide a complete and balanced explanation of significant sustainability risks and opportunities;
  2. cover governance, strategy, risk management and metrics and targets;
  3. focus on the needs of investors and creditors, and drivers of enterprise value;
  4. be consistent, comparable and connected;
  5. be relevant to the sector and the industry;
  6. be present across time horizons: short-, medium- and long-term.

Material metrics should be based on measurement requirements in the climate prototype or other frameworks such as the Greenhouse Gas Protocol.

The climate prototype has a prominent reference to scenario analysis. Such analysis can help investors assess possible exposures under a range of hypothetical circumstances and can be a helpful tool for a company’s management in assessing the resilience of the company’s business model and strategy to climate-related risks.

What is scenario analysis?

Scenario analysis is a structured way to consider how climate-related risks and opportunities could impact a company’s governance framework, business model and strategy. Scenario analysis is used to answer ‘what if’ questions. It does not aim to forecast or predict what will happen.

A climate scenario is a set of assumptions on how the world will react to different degrees of global warming, for example the carbon prices and other factors needed to limit global warming to 1.5 °C. By their nature, scenarios may differ from the assumptions underlying the financial statements. However, careful consideration needs to be given to the extent to which linkage between the scenario analysis and these assumptions is appropriate.

The prototypes do not specify a single location where the information should be disclosed. The prototypes allow for cross referencing to information presented elsewhere, but only if it is released at the same time as the general-purpose financial report. For example, the MD&A (management discussion & analysis) or management commentary may be the most appropriate place to provide information required by future ISSB standards.

Figure 3. Examples of potential places for ISSB-standards disclosure.

As for an audit of such disclosure, audit requirements are not within the ISSB’s remit. Regardless of local assurance requirements, companies will need to ensure they have the processes and controls in place to produce robust and timely information. Regulators may choose to require assurance when adopting the standards.

How the policy context of the EU shapes the reporting requirements

In line with the Sustainable Finance Action Plan of the European Commission, the EU has taken a number of measures to ensure that the financial sector plays a significant part in achieving the objectives of the European Green Deal ([EUR18]). The European policy maker states that better data from companies about the sustainability risks they are exposed to, and their own impact on people and the environment, is essential for the successful implementation of the European Green Deal and the Sustainable Finance Action Plan.

Figure 4. The interplay of EU sustainable finance regulations.

The following trends build up a greater demand for transparency and uptake of corporate sustainability information in investment decision making:

  1. Increased awareness that climate change will have severe consequences when not actively addressed
  2. Social stability requires more focus on equal treatment of people, including a more equal distribution of income and financial capital
  3. Allocating capital to companies with successful long-term value creation requires more comprehensive insights in non-financial value factors
  4. Recognition that large corporate institutions have a much broader role than primarily serving shareholders

As a policy maker, the European Commission addresses these trends through comprehensive legislation that tackles issues directly, as well as indirectly through corporate disclosures that support investors’ decision-making.

In terms of the interplay between the European and global standard setters, it is interesting to note that collaboration is highly endorsed. The EU Commission clearly states that EU sustainability reporting standards need to be globally aligned and aims to incorporate the essential elements of globally accepted standards currently being developed. The proposals of the International Financial Reporting Standards (IFRS) Foundation to create a new Sustainability Standards Board are considered relevant in this context ([EUR21d]).

Proposal for a Corporate Sustainability Reporting Directive

On April 21, 2021, the EU Commission announced the adoption of its proposal for the Corporate Sustainability Reporting Directive (CSRD), in line with the commitment made under the European Green Deal. The CSRD will amend the existing Non-Financial Reporting Directive (NFRD) and will substantially increase the reporting requirements on the companies falling within its scope in order to expand the sustainability information available to users.

Figure 5. European sustainability reporting standards timeline.

The proposed directive will entail a significant increase in the number of companies subject to the EU sustainability reporting requirements. The NFRD, currently in place for reporting on sustainability information, covers approximately 11,700 companies and groups across the EU; the CSRD is expected to increase the number of firms subject to EU sustainability reporting requirements to approximately 49,000. Small and medium-sized listed companies get an extra three years to comply. The applicability of the CSRD to companies (listed or non-listed) is determined by three criteria, of which at least two should be met (a simple applicability check is sketched after the list below):

  • more than 250 employees;
  • more than EUR 40 mln turnover;
  • more than EUR 20 mln total assets.
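As referenced above, the scoping rule can be illustrated with a small check that applies the “at least two of the three criteria” logic. This is a simplified sketch based on the criteria listed here, not legal scoping advice; threshold interpretation and consolidation rules would need to be confirmed against the final directive.

```python
# Minimal sketch of the "at least two out of three criteria" scoping rule
# described above. Thresholds follow the criteria listed in the article;
# this is an illustration only, not legal scoping advice.
def csrd_in_scope(employees: int, turnover_eur_mln: float, assets_eur_mln: float) -> bool:
    """Return True if at least two of the three size criteria are met."""
    criteria_met = sum([
        employees > 250,
        turnover_eur_mln > 40,
        assets_eur_mln > 20,
    ])
    return criteria_met >= 2


print(csrd_in_scope(employees=300, turnover_eur_mln=35, assets_eur_mln=25))  # True
print(csrd_in_scope(employees=120, turnover_eur_mln=50, assets_eur_mln=10))  # False
```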

New developments will come with significant changes and potential challenges for companies in scope. The proposed Directive has additional requirements that will affect the sustainability reporting of those affected ([EUR21a]):

  1. The Directive aims to clarify the principle of double materiality and to remove any ambiguity about the fact that companies should report information necessary to understand how sustainability matters affect them, and information necessary to understand the impact they have on people and the environment.
  2. The Directive introduces new requirements for companies to provide information about their strategy, targets, the role of the board and management, the principal adverse impacts connected to the company and its value chain, intangibles, and how they have identified the information they report.
  3. The Directive specifies that companies should report qualitative and quantitative as well as forward-looking and retrospective information, and information that covers short-, medium- and long-term time horizons as appropriate.
  4. The Directive removes the possibility for Member States to allow companies to report the required information in a separate report that is not part of the management report.
  5. The Directive requires exempted subsidiary companies to publish the consolidated management report of the parent company reporting at group level, and to include a reference in its legal-entity (individual) management report to the fact that the company in question is exempted from the requirements of the Directive.
  6. The Directive requires companies in scope to prepare their financial statements and their management report in XHTML format and to mark-up sustainability information.

Figure 6. Nature of double materiality concept.

The CSRD has overall requirements on how to report, general disclosure requirements on how the company has organized and managed itself and topic specific disclosure requirements in the field of sustainability. It should be noted that the company sustainability reporting requirements are much broader than climate risk, e.g., environmental, social, governance and diversity are the topics addressed by the CSRD.

Figure 7. Overview of the reporting requirements of the CSRD.

Extended reporting requirements that come with the CSRD may require companies in scope of this regulation to start preparing now. Here is an illustrative timeline for companies to become CSRD ready.

Figure 8. A potential way forward to become CSRD ready.

EU Taxonomy – new financial language for corporates

The EU Taxonomy and the delegated regulation are the first formal steps of the EU to require sustainability reporting in an effort to achieve the green objectives.

Over the financial year 2021, so-called large (more than 500 employees) listed entities have to disclose, in their non-financial statement as part of the management report, how their turnover, CapEx and OpEx are split between Taxonomy-eligible activities (%) and Taxonomy-non-eligible activities (%), including further qualitative information.

Over the financial year 2022, these activities need to be aligned with the sustainability criteria: they must contribute substantially to the environmental objectives, do no significant harm to the other objectives and comply with minimum safeguards. Alignment should then be reported as the proportion of turnover, CapEx and OpEx related to assets or processes associated with economic activities that qualify as environmentally sustainable. For financial institutions, this translates into the requirement to report the green asset ratio, which in principle is the ratio of Taxonomy-eligible or Taxonomy-aligned assets as a percentage of total assets.

Figure 9. EU Taxonomy timeline.

The “delegated act” under the Taxonomy Regulation sets out the technical screening criteria for economic activities that can make a “substantial contribution” to climate change mitigation and climate change adaptation. In order to gain political agreement at this stage texts relating to crops and livestock production were deleted, and those relating to electricity generation from gaseous and liquid fuels only relate to renewable, non-fossil sources. On the other hand, texts on the manufacture of batteries and plastics in primary form have been added, and the sections on information and communications technology, and professional, scientific and technical activities have been augmented.

With further updates of the technical screening criteria for the environmental objective of climate mitigation, we will also see the development of the technical screening criteria for transitional activities. Those transitional economic activities should qualify as contributing substantially to climate change mitigation if their greenhouse gas emissions are substantially lower than the sector or industry average, if they do not hamper the development and deployment of low-carbon alternatives and if they do not lead to a lock-in of assets incompatible with the objective of climate neutrality, considering the economic lifetime of those assets.

Moreover, those economic activities that qualify as contributing substantially to one or more of the environmental objectives by directly enabling other activities to make a substantial contribution to one or more of those objectives are to be reported as enabling activities.

The EU Commission estimates that the first delegated act covers the economic activities of about 40% of EU-domiciled listed companies, in sectors which are responsible for almost 80% of direct greenhouse gas emissions in Europe. A complementary delegated act, expected in early 2022, will include criteria for the agricultural and energy sector activities that were excluded this time around. The four remaining environmental objectives — sustainable use of water and marine resources, transition to a circular economy, pollution prevention and control, and protection and restoration of biodiversity and ecosystems — will be addressed in a further delegated act scheduled for Q1 of this year.

Figure 10. EU Taxonomy conceptual illustration.

Companies shall disclose the proportion of their economic activities that aligns with the EU Taxonomy criteria. The European Commission ([EUR21c]) views that translating environmental performance into financial variables (turnover, CapEx and OpEx KPIs) gives investors and financial institutions clear and comparable data to support their investment and financing decisions. The main KPIs for non-financial companies are the following (a simplified calculation sketch follows the list below):

  • The turnover KPI represents the proportion of the net turnover derived from products or services that are Taxonomy aligned. The turnover KPI gives a static view of the companies’ contribution to environmental goals.
  • The CapEx KPI represents the proportion of the capital expenditure of an activity that is either already Taxonomy aligned or part of a credible plan to extend or reach Taxonomy alignment. CapEx provides a dynamic and forward-looking view of companies’ plans to transform their business activities.
  • The OpEx KPI represents the proportion of the operating expenditure associated with Taxonomy-aligned activities or to the CapEx plan. The operating expenditure covers direct non-capitalized costs relating to research and development, renovation measures, short-term lease, maintenance and other direct expenditures relating to the day-to-day servicing of assets of property, plant and equipment that are necessary to ensure the continued and effective use of such assets.
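As referenced above, the three KPIs are essentially proportions of Taxonomy-aligned amounts over the corresponding totals. The sketch below illustrates that calculation with purely fictitious figures; actual Article 8 reporting follows the templates and definitions in the delegated act.

```python
# Simplified illustration of the Article 8 KPIs described above: each KPI is
# the Taxonomy-aligned amount as a share of the corresponding total. All
# figures and the rounding choice are illustrative assumptions only.
def taxonomy_kpi(aligned_eur_mln: float, total_eur_mln: float) -> float:
    """Return the aligned share as a percentage of the total (0.0 if no total)."""
    return round(100 * aligned_eur_mln / total_eur_mln, 1) if total_eur_mln else 0.0


financials = {
    "turnover": {"aligned": 120.0, "total": 400.0},  # EUR mln, fictitious
    "capex": {"aligned": 45.0, "total": 90.0},
    "opex": {"aligned": 10.0, "total": 60.0},
}

for kpi, amounts in financials.items():
    print(f"{kpi} KPI: {taxonomy_kpi(amounts['aligned'], amounts['total'])}%")
```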

The plan that accompanies both the CapEx and OpEx KPIs shall be disclosed at the economic activity aggregated level and meet the following conditions:

  • It shall aim to extend the scope of Taxonomy-aligned economic activities or it shall aim for economic activities to become Taxonomy aligned within a period of maximum 10 years.
  • It shall be approved by the management board of non-financial undertakings or another body to which this task has been delegated.

In addition, non-financial companies should provide for a breakdown of the KPIs based on the economic activity pursued, including transitional and enabling activities, and the environmental objective reached.

Figure 11. EU Taxonomy disclosure requirements.

As for challenges companies face when preparing for EU Taxonomy disclosure, the following key implementation challenges are observed in our practice:

  1. administrative burden and systems readiness;
  2. alignment with other reporting frameworks and regulations;
  3. data availability;
  4. definitions alignment across all forms of management reporting;
  5. integration of EU Taxonomy reporting into strategic decision making.

Furthermore, the Platform on Sustainable Finance is consulting ([EUR21b]) on extending the Taxonomy to cover “brown” activities and on a new Social Taxonomy. The current Taxonomy covers only activities that are definitely “green”, indicating a binary classification. The Platform notes the importance of encouraging non-green activities to transition and suggests two new concepts: “significantly harmful” and “no significant harm”. The aim of a Social Taxonomy would be to identify economic activities that contribute to advancing social objectives. A follow-up report by the Commission is expected soon. The eventual outcome will be a mandatory Social Taxonomy, which will add further to the corporate reporting requirements mentioned above, to company processes, and to company-level and product disclosures for the buy-side. It will also form the basis for a Social Bond Standard.

Conclusion

The evolution of sustainability reporting is happening at a fast pace. Collective efforts on a global and European level are helping to develop disclosure requirements that are more coherent and consistent, and therefore comparable and reliable. The prototype standards released so far give reason for optimism, as the European and global standard setters have prioritized leveraging existing reporting frameworks and guidance rather than designing something entirely new for the wider audience. Sustainability reporting standardization is, after all, not only a much-awaited activity but also very much a dynamic and multi-centred challenge. The EU CSRD, EU Taxonomy and IFRS ISSB will all ultimately contribute to the availability of high-quality information about sustainability risks and opportunities, including the impact companies have on people and the environment. This in turn will improve the allocation of financial capital to companies and activities that address social, health and environmental problems and ultimately build trust between those companies and society. This is a pivotal moment for corporate sustainability reporting; more updates on these developments will most certainly follow!

Read more on this subject in “Mastering the ESG reporting and data challenges“.

References

[EUR18] European Commission (2018). Communication from the Commission. Action Plan: Financing Sustainable growth. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52018DC0097

[EUR21a] European Commission (2021). Proposal for a Directive of the European Parliament and of the Council amending Directive 2013/34/EU, Directive 2004/109/EC, Directive 2006/43/EC and Regulation (EU) No537/2014, as regards corporate sustainability reporting

[EUR21b] European Commission (2021). Call for feedback on the draft reports by the Platform on Sustainable Finance on a Social Taxonomy and on an extended Taxonomy to support economic transition. Retrieved from: https://ec.europa.eu/info/publications/210712-sustainable-finance-platform-draft-reports_en

[EUR21c] European Commission (2021). FAQ: What is the EU Taxonomy Article 8 delegated act and how will it work in practice? Retrieved from: https://ec.europa.eu/info/sites/default/files/business_economy_euro/banking_and_finance/documents/sustainable-finance-taxonomy-article-8-faq_en.pdf

[EUR21d] European Commission (2021). Questions and Answers: Corporate Sustainability Reporting Directive proposal. Retrieved from: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1806

[GFAN21] GFANZ, Glasgow Financial Alliance for Net Zero (2021). Amount of finance committed to achieving 1.5º C now at scale needed to deliver the transition. Retrieved from: https://www.gfanzero.com/press/amount-of-finance-committed-to-achieving-1-5c-now-at-scale-needed-to-deliver-the-transition/

[IFR20] IFRS Foundation (2020). Consultation Paper on Sustainability Reporting. Retrieved from: https://www.ifrs.org/content/dam/ifrs/project/sustainability-reporting/consultation-paper-on-sustainability-reporting.pdf

[IOSC21] IOSCO (2021). Media Release IOSCO/MR/16/2021. Retrieved from: https://www.iosco.org/news/pdf/IOSCONEWS608.pdf

Trust by Design: rethinking technology risk

In society, there is a growing call for trust in technology. Just think about the data leaks and privacy issues that hit the news almost daily. Organizations tackle these issues through risk management practices and by implementing controls and measures to stay within the organization’s risk appetite. But the implications go further: last year the Dutch privacy watchdog stated that organizations should be very careful in using Google cloud services due to their poor privacy practices. This is challenging not only for the vendor but also for the clients using the services. Another example is Apple, which is doubling down on privacy in its iCloud and iOS offerings so that users trust it more as a brand, which increases its market share.

This raises several questions: What is trust? When do we decide to trust someone or something? And how can you become trusted? Do we overlook what trust really means, or do we have an innate sense for “trust”? But how does that work for organizations consisting of hundreds or thousands of people and complex business-to-business structures?

Introduction

The questions above seem to be easily overlooked when someone says, “trust me on this one”, or “have some trust in this”. For example, imagine you are buying a used car and the salesperson says “trust me, it is in perfect condition”: we first want to look under the hood, open all doors, ask for a maintenance log, and of course do a test drive. Now imagine a colleague with whom you have been working for years tells you to trust them on some work-related topic; you tend to trust that person in an instant. That is looking at trust from a personal perspective. For business to business, the easiest direction to point at is contracts and formal agreements. But these only go so far and do not protect organizations or, perhaps to a greater extent, individuals against every risk. It is important to not only look at whether a solution works well, but also whether it meets your trustworthiness requirements across the wider value and supply chains. In our hyper-connected world, we rarely see stand-alone technology solutions anymore; we see systems of systems interacting in real time. The ability to govern technology comprehensively can help you avoid potential ecosystem risks while fostering stronger alliances based on shared governance principles.

The concept of trust

In the audit, we use the saying “tell me, show me, prove it” (or something similar) where we list three ways to support a claim in order of lowest to the highest level of assurance. This implies that trust is the lowest level of assurance which is, strictly speaking, of course true. However, despite this, humanity has built many constructs of trust which we rely on, on a daily basis: money, legal structures, law, and governments, just to name a few. In the book Guns, Germs, and Steel: The Fates of Human Societies by Jared Diamond ([Diam97]), the concept of creating these “fantasy” constructs in which we put a lot of trust, is posited as a cornerstone of human progress.

In the risk management world, an example of trust we often come across is assurance reports. Frameworks such as ISAE, SOC, and ISO are trusted to be relevant and properly applied. These are all tools or constructs that we trust to keep our data, investments, or processes safe. These constructs are used as ways of trusting each other in the B2B world. These types of trust concepts are, to an extent, widely accepted, and rightfully so. However, isn’t it strange that we put so much trust in the authors of the frameworks or the independent auditors that validate these frameworks? Is this a case of blind faith or is it the trust we have and put in these types of constructs based on something more that we might take for granted?

The concept of trust is hard to define. You can look it up in a dictionary and, depending on the one you use, the definitions vary. However, you can leave it up to academia to relentlessly structure things. In a meta-analysis, a well-founded concept of trust has been derived ([McKn96]). The act of trusting, or trusting behavior, has five different precursors, where trusting intention (1) directly causes the trusting behavior. Trusting intention is the extent to which one party is willing to depend on the other party even though negative consequences are possible. In general, people tend to form trusting intentions based on their beliefs, or trusting beliefs (2). These are formed from current and prior experiences (dispositional trust (3)). In addition to that, there is a component that is caused by the system (4), and trust in that system (5). The system in this context can be a wide variety of systems given the situation, for example an IT system or a management system.

Another concept of trust is that individuals want to have an amount of certainty that positive events unfold, and do not like risks that might reduce the certainty of said events. Trust could therefore also be considered as a feeling that there is low exposure to risks. This concept of risk exposure is also used in research to understand technology adoption, and trusting behavior ([Lipp06]). This research mentions predictability and reliability as two core features that can be used to evaluate trust in technology.

Most of these conceptions of trust are based on personal trust, or the trust behaviors of an individual ([Zahe98]). There is, however, a distinction between personal trust and organizational trust. The latter is considered to be a function of the combined personal trust of the employees of an organization. This seems to indicate that the predictability, reliability, and usability of technology can increase trust in a technology through the reduction of risks relative to the potential benefits of using said technology. This, however, does not explain how organizations trust technology or other organizations. On the other hand, organizations consist of individuals that work together, so there is a clear connection between personal and organizational trust ([Zahe98]). There are different views on how this works, and the debate on how trust complements or replaces formal agreements and contracts seems to be ongoing. A lot of research has been done into various facets of trust and how it works between two actors (e.g. [Noot97]). What the literature does agree on is that trust can work as a less formal agreement between organizations and allows for less complex and costly safeguards compared to contracts or vertical integration ([Gula08]).

Based on this, we broadly derive that trust will always play an important and, above all, positive role in organizational and interpersonal relationships (although the exact implications might not be completely understood at this point). It does, however, show us that trust can complement governance models and that operationalizing this concept can be beneficial to organizations on various levels, bringing efficiency gains and maybe even competitive advantage ([Sydo06]). Trust in technology can be achieved by the demonstration of predictable and reliable positive outcomes, and a high degree of usability.

Achieving trust in practice

Now that the theoretical concept has been explored to a degree, we can look at the practical aspect and at a framework capable of governing how trust should operate. First, we should set some conditions. As the concept of trust is very broad, we cannot cover the entire topic; we will therefore first look at the internal organization and the way organizations adopt technology and perform innovation and change projects.

Usually, these types of changes are governed by risk management processes that try to optimize the benefits while at the same time reducing the risks of non-compliance, security deficiencies, or reduced process effectiveness.

“Risk management” is the term used in most cases, but we also see that “risk management and a lot of other stuff” is sometimes a better description of reality. With “other stuff” we mean a lot of discussions on risks and mitigation, and creating a lot of controls for things that might not really need controls in the first place. Then we come to testing these, sometimes overcomplete, control frameworks, to the degree that some organizations are testing controls simply for the sake of testing them. Testing is followed by reporting and additional procedures to close the gaps that are identified. Usually, this has to be done on top of regular work, instead of having embedded or automated controls that simply operate within the processes themselves. In various industries, regulators impose more and more expectations regarding the way risks are managed. In addition, there are increasing expectations from society on how (personal) data is protected and on the way organizations deliver their services. This includes far-reaching digitization and data-driven processes, required to support customer requirements. These expectations, technological advancements, and the ever-increasing competitiveness in the market create a gap between often agile-driven product delivery and risk management. Unfortunately, we also see that, as a natural reflex, organizations tend to impose even more controls on processes, which further inflates the “other stuff” of risk management.

From a classical risk management standpoint, risks are mostly managed through the execution of controls and periodic testing of said controls. These controls are usually following frameworks such as ITIL or COSO. In many organizations, this type of work is divided between the so-called first, second, and third lines of defense. Recently we have seen that especially the first and second lines of defense are positioned closer to each other ([IIA20]). In practice, this results in more collaborative efforts between the first and second lines. Regardless of how exactly this structure should be implemented, the interaction between the first two lines of defense is increasingly important: organizations’ risk management practices often struggle to keep up with the momentum of the product teams that are releasing changes, sometimes multiple times a day. These releases can introduce risks such as privacy or contractual exposures that can be overlooked by a delivery-focused first line.

Innovations and technology adoptions are performed by the collective intelligence of teams that have a high risk acceptance and focus on getting the highest benefit. Collective intelligence can be broadly defined as the enhanced capacity for thought and decision-making through the combination of people, data, and technology. This not only helps collective intelligence to understand problems and identify solutions – and therefore to decide on the action to take – it also implies constant improvement. The experiences of all involved in the collective are combined to make the next solution or decision better than the last. However, risk management practices need to be embedded within the innovation process to ensure that the risk acceptance of the organization as a whole is not breached. Take, for example, the processing of personal data by a staffing organization. This can be extremely beneficial and lead to competitive advantages if done properly. However, it is not necessarily allowed in the context of European legislation. This is where risk management plays a significant role, for example in limiting the usage of personal data. In innovative teams, this is not always perceived as beneficial. Risk management can therefore be seen as a limiting factor that slows down the organization and makes processes murky and bureaucratic. Unfortunately, this compliance pressure is real and present in a lot of organizations, see Figure 1. There is, however, another perspective that we want to highlight.

Figure 1. The issue at hand.

A good analogy is a racing car, whose purpose is to achieve the fastest track times. This is achieved by a lot of different parts all working together. As racing tracks usually consist of higher- and lower-speed corners, a strong engine and a fast gearbox are not enough to be the fastest. Good control, with suspension and brakes that can continue to operate under a lot of stress, is just as important as the engine. It is no different with business teams: they need a powerful engine and gearbox to go forward quickly, but the best cars in motorsports have the best brakes. These roles are performed by an organization’s risk management practice. Data leaks, hacks, and reputational damage can be even more costly than a slow time to market. However, there is an undeniable yearning from business teams to become more risk-based rather than compliance-based.

In an agile environment this is just as true, or maybe even more so. To achieve risk management in an agile world, risk management itself needs to become agile. With ITIL- or COSO-focused periodic testing of controls, the outcomes will lag behind. Imagine that changes are tested every quarter: before the tests are completed and the reports written, numerous new changes will already have been released. With a constantly changing IT landscape, the risks identified through these periodic controls will no longer be an accurate representation of the actual risk exposure. This is called the digital risk gap, which is growing in a lot of organizations.

To close, or at least decrease the gap, the focus should be on the first line process; the business teams that implement changes and carry forward the innovations. It is most efficient to inject risk management as early in the innovation process as possible. In every step of the ideation, refinement, and planning processes, risk management should at least be in the back of the minds of product owners and product teams.

To achieve this risk awareness and to close the digital risk gap, a framework has been developed that incorporates concepts from agile, software development, and risk management to provide an end-to-end, proactive and business-focused approach for creating trust by reducing risks. This is what we call Trust by Design; it takes the concepts of the integrated risk management lifecycle (see Figure 2) into practice.

Figure 2. The integrated risk management lifecycle.

The goal of Trust by Design is to achieve risk management practices in an agile world, where trust is built by design into the applications and solutions by the people who are building and creating them. Because first-line teams are highly iterative and fast-paced, coming up with new ideas almost weekly, the second line struggles to keep up. To change this, we should allow first-line teams to take control of their own risks and build a system of trust. The second line can digitize the policies into guardrails and blueprints that the first line can use to take all the risk that is needed, as long as the risk appetite of the organization is not breached.

Looking at how trust is achieved, there are three main principles we want to incorporate into the framework. The first is predictability. This can be achieved by standardizing the way risks are managed, because a highly standardized system functions in a highly predictable way. We strongly believe that 80% of the risk procedures performed within organizations, which seem to be one-off or custom procedures, can in fact be standardized. This is not achieved overnight and can be seen as an insurmountable task. The Trust by Design framework takes this transition into account by allowing processes to continue as they are at first, while standardizing on the go. Slowly, standardization will be achieved, and trust will grow because the procedures are much more manageable and can easily be adapted to new legislation or technological advances.

Secondly, there is reliability. A standardized system allows for much better transparency across the board, both on an operational and a strategic level. But determining whether processes are functioning reliably calls for transparency and well-articulated insights into the functioning of these processes. By using powerful dashboards and reporting tools, pockets of high-risk developments can be made visible and even used as steering information. Imagine that an organization is undertaking 100 projects, of which 50 are processing highly sensitive personal data. Is that in line with the risk appetite, or should it be reconsidered? By adopting the Trust by Design framework, these types of insights become available.

Lastly, there is a usability component. This is how the business teams perceive the guardrails and blueprints that are imposed. The Trust by Design approach is meant to take risks into account at the start of developing applications or undertaking changes. To achieve this, there are three basic building blocks that need to be defined. The first is the process itself, which also defines the governance, the responsibilities of the various functions, and the ownership of the building blocks themselves. The second building block is the content, consisting of the risks, control objectives, and controls. The third is where this content is kept: the technology to tap into the content and enable the process.

Following the three components of trust, the Trust by Design framework aims to reduce the complex compliance burden for business teams and increase transparency for decision-makers and the risk management function within organizations. The framework aims to reduce the two most time-consuming phases in risk management for innovations and developments, determining the scope of the measures to be taken and implementing those measures, while at the same time creating trust within the organization to leverage the benefits.

In practice, the framework is meant to be embedded within the development lifecycle of innovation, development, and change. Figure 3 shows the high-level framework overview, which consists of four major stages.

The first is the assessment stage, where, through standardization, business teams use business impact assessments and the subsequent deeper-level assessments to determine the risk profile of an innovation or development. These are standardized questionnaires that can be applied to a technology, product, or asset throughout the development journey. The results of the questionnaires are used to funnel risk assessments and lead to a set of well-articulated risks, for which control objectives are defined. These control objectives can then be weighed against the risk appetite that the business is allowed, or willing, to take, resulting in controls/measures. In the third stage, these are implemented into the product at the right points of the development cycle. These controls/measures can be seen as a set of functional/technical requirements that are added from a risk-based perspective. By applying this approach, by the time a development is completed or migrated into production, the major risks have already been mitigated in the development process itself. By design.

Lastly, there is the outcome stage, where products are “certified”, as the assessments, the risks, the associated measures and their implementation can be transparently monitored.

Figure 3. High-level Trust by Design framework.

These stages are constantly intertwined with the development processes. As circumstances change, so can the assessments or controls. Moreover, in environments where stringent controlling of certain risks is not necessary, guidance or blueprints can be part of the operation stage, helping innovation teams or developers with certain best practices based on the technology being applied, or the platform being used.
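
To make this flow more tangible, the sketch below models the chain from assessment answer to risk, control objective and measure in Python. It is a minimal illustration under our own assumptions: the object names, topics, risks and measures are invented examples, not part of any specific Trust by Design tooling.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical, simplified model of the Trust by Design objects: an assessment
# answer leads to risks, each risk to control objectives, and each control
# objective to concrete measures (requirements for the backlog).

@dataclass
class Measure:
    description: str      # the functional/technical requirement added from a risk perspective
    implemented: bool = False

@dataclass
class ControlObjective:
    objective: str
    measures: list[Measure] = field(default_factory=list)

@dataclass
class Risk:
    name: str
    objectives: list[ControlObjective] = field(default_factory=list)

def derive_risks(answers: dict[str, bool]) -> list[Risk]:
    """Map business impact assessment answers to well-articulated risks (illustrative only)."""
    risks: list[Risk] = []
    if answers.get("processes_personal_data"):
        risks.append(Risk(
            name="Unlawful processing of personal data",
            objectives=[ControlObjective(
                objective="Personal data is only processed with a valid legal basis",
                measures=[Measure("Add a purpose and legal-basis check to the intake flow"),
                          Measure("Pseudonymize personal data in non-production environments")],
            )],
        ))
    if answers.get("exposed_to_internet"):
        risks.append(Risk(
            name="Unauthorized external access",
            objectives=[ControlObjective(
                objective="Only authenticated users can reach the application",
                measures=[Measure("Enforce single sign-on and MFA on all endpoints")],
            )],
        ))
    return risks

if __name__ == "__main__":
    for risk in derive_risks({"processes_personal_data": True, "exposed_to_internet": False}):
        print(risk.name, "->", [m.description for o in risk.objectives for m in o.measures])
```

In practice, the measures produced by such a mapping end up on the team backlog as (non-)functional requirements, which is where the implementation stage picks them up.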

A case study

At a major insurance company, this approach has been adapted and implemented to enable the first line to take control of its risks. The approach proposed at this organization is based on four steps:

  1. a high-level scorecard for a light-touch initial impression of the risks at hand,
  2. a deep dive into those topics that actually add risk,
  3. transparently implementing the measures to mitigate risks, and
  4. monitoring the process via dashboards and reports.

By using a refined scorecard on fifteen topics that cover the most important risk areas, product teams understand which risks they should take into account during the ideation and refinement processes, but also which risks are not important to the feature being developed. This prevents teams from being surprised when promoting a feature to production, or worse, once it is already out in the open. By applying risk-mitigating measures as close as possible to the process step where the risk materializes, a precise risk response is possible, preventing over- or under-control. This requires specific control designs and blueprints that allow teams to implement measures that fit their efforts. The more precise these blueprints are, the less over-controlled teams will be. It is important to note that for some subjects organizations might decide that over-controlling is preferred to the risk of under-controlling, depending on the risk appetite.

Based on the initial impressions, the scorecard is used to perform a more specific assessment of the risks and the required measures. This deep-level assessment sometimes requires input from experts on topics such as legal or security. In several organizations, a language gap exists between the first and second lines. One of the product owners we spoke to said: “I do not care about risk management, I just want to move forward as quickly as possible. For me this is only paperwork that does not help me accomplish my objectives.” Risk management consultants are also often guilty of speaking too much in a second-line vocabulary. It is important that we understand the first line and their objectives in order to design an effective scorecard. In a way, this type of scorecard can be seen as the “Google Translate” between the first and second lines. By asking the right questions in the right language, the risks become more explicit, and the required measures to mitigate them can be more specific. This reduces over-controlling and leads to lower costs and more acceptance from the product teams. The communication between the first and second lines is imperative to a successful implementation of a Trust by Design approach. This is also in line with the earlier mentioned IIA paper, in which the second line becomes a partner that advises the first line, instead of an independent risk management department.

Since true agile is characterized by fast iterations and does not plan ahead too far, using a scorecard with an underlying deep level assessment helps product teams to quickly adapt to changes in the inherent risk of the change at hand. This “switchboard” approach allows much more agility, and still allows organizations to mitigate risks.

Developing this type of “switchboard”, which leads users from high-level risks to more specific risks and the required standard measures, should be done iteratively. We also learned during our implementation that there is no way to make such a system exhaustive. At best, we expect that 80% of the risks can be covered by these standard measures. The remainder will require custom risk mitigation and involvement of the second line.
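
As an illustration only, the sketch below shows what such a switchboard could look like in code: a mapping from scorecard topics to deeper assessments and standard measures, with anything unknown routed to the second line. The topic names, measures and scoring threshold are hypothetical choices made for this example.

```python
# Hypothetical "switchboard": scorecard topics that score above a threshold trigger a
# deeper assessment and a set of standard measures; unknown topics are routed to the
# second line. Topic names, measures and the threshold are examples only.

SWITCHBOARD = {
    "personal_data":   {"deep_dive": "Privacy impact assessment",
                        "standard_measures": ["Data minimization", "Retention schedule"]},
    "external_facing": {"deep_dive": "Security assessment",
                        "standard_measures": ["Penetration test", "MFA on all endpoints"]},
    "outsourcing":     {"deep_dive": "Third-party risk assessment",
                        "standard_measures": ["Exit clause in contract", "Assurance report review"]},
}

def route(scorecard: dict[str, int], threshold: int = 3) -> list[dict]:
    """Return the deep dives and standard measures triggered by a scorecard (scores 0-5)."""
    triggered = []
    for topic, score in scorecard.items():
        if score < threshold:
            continue  # this topic adds no material risk to the feature at hand
        entry = SWITCHBOARD.get(topic)
        if entry is None:
            # not covered by the standard content: custom mitigation with the second line
            triggered.append({"topic": topic, "deep_dive": "Escalate to second line",
                              "standard_measures": []})
        else:
            triggered.append({"topic": topic, **entry})
    return triggered

print(route({"personal_data": 4, "external_facing": 1, "novel_ai_model": 5}))
```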

Implement measures, measure the implementation

Once the specific risks are identified, the agile or scrum process can be followed to take measures into account as if they are (non) functional requirements. This way, the regular development or change process can be followed, and the development teams can work on these in a way that is familiar to them.

The technology used by our insurance client to manage projects is Azure DevOps. It is used both for development and for more “classic” project management. This tooling allowed us to integrate seamlessly with the process teams use in their daily routines. In addition, by structuring the data from the scorecard, risks were made transparent to all lines of defense. Through structured data, it is possible to create aggregations or to slice and dice data specifically for different levels of management and stakeholders. Using PowerBI or the standard Azure DevOps dashboarding, decisions regarding risk mitigation and risk acceptance are open for all to see. In addition, the Power Platform can be considered to further automate the processes and use powerful workflows to digitize the risk policies and inject them directly into the change machine of the first line.
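
As a sketch of what this injection into the daily routine could look like, the snippet below creates a backlog item for a standard measure via the Azure DevOps work item REST API. The organization, project, tag names and token handling are placeholders, and the fields used will differ per setup; treat it as an illustration rather than our client's actual integration.

```python
import base64
import json
import urllib.request

# Illustrative only: push a risk-mitigating standard measure into a team's backlog as an
# Azure DevOps work item, tagged so that dashboards can slice on risk-related items.
# Organization, project, tag names and token handling are placeholders.

ORGANIZATION = "my-org"          # placeholder
PROJECT = "my-project"           # placeholder
PAT = "<personal-access-token>"  # placeholder; store securely in practice

def create_measure_work_item(title: str, description: str) -> dict:
    url = (f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
           f"/_apis/wit/workitems/$Task?api-version=7.0")
    # The work item API expects a JSON Patch document describing the fields to set.
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
        {"op": "add", "path": "/fields/System.Tags", "value": "TrustByDesign; RiskMeasure"},
    ]
    auth = base64.b64encode(f":{PAT}".encode()).decode()
    request = urllib.request.Request(
        url,
        data=json.dumps(patch).encode(),
        headers={"Content-Type": "application/json-patch+json",
                 "Authorization": f"Basic {auth}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    item = create_measure_work_item(
        "Pseudonymize personal data in test environments",
        "Standard measure triggered by the privacy deep-dive assessment.",
    )
    print(item["id"], item["fields"]["System.Title"])
```

A Power Automate flow or a pipeline step could trigger such a call whenever a deep-dive assessment produces new measures, so that the risk content flows into the backlog without manual rekeying.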

How about the controls?

This leaves us with one more question: how do we connect these measures to the controls in the often exceptionally large and complex control frameworks? Especially since the ITIL/COSO worlds look backwards, periodically (weekly, monthly, etc.) testing controls using data and information from events that have already passed. Based on this testing, the current, or even future, situation is inferred. Agile is more responsive, in the moment and per occurrence, so this inference can no longer be easily applied. Of course, large organizations cannot simply change their risk universes or control frameworks. So how do we connect these measures to controls?

This is a difficult question to answer and, counterintuitively, one to ignore at first. Once the first line starts to work with the standard measures, gaps between the operational risk management and the control testing world will become apparent. These can then be fixed relatively quickly by adapting the measures to better align with the controls. In other cases, we expect that controls will also need to be reassessed. Especially given the huge control frameworks of highly regulated organizations, this will also be an opportunity to rationalize and cull the sometimes over-controlled nature of organizations. In addition, it presents the opportunity to further explore options for control automation that can lead to further savings and efficiency.

Measure or control?

We make the distinction between control and measure. In the first stages of implementing the approach, this will be apparent. Controls are periodically tested, whereas measures will be an explicit part of the development process, just like any (non)functional requirement. But we expect that this distinction will fade as the change process itself will eventually become the only control to mitigate risks in the future.

Conclusion

In our vision, a major part of risk procedures will migrate into the realm of change rather than remain control testing procedures that are static in nature (see Figure 4). As organizations become more software-development oriented (in some shape or form), this allows us to reconsider the approach to testing controls and mitigating risks. Imagine a bank changing the conditions under which someone can apply for a loan: nowadays this involves a lot of manual checks, balances and testing before all kinds of changes are applied, and manual interventions are needed. Since the risks of these changes are not holistically considered during the development process, controls are needed in the production environment to ensure the proper functioning of the systems after the change. The digitized organizations of the future will converge their risk processes into the release cadence and will never worry about testing controls other than access controls and security. They know the as-is state, and they know the delta: that which is being added, changed, or removed is done according to the appropriate risk appetite, by design. Finally, Trust by Design will also provide the foundation to develop digital trust metrics. Digital trust captures the confidence that citizens have in the ability of digital technology, service, or information providers to protect their interests. This allows internal organizations, their clients, and society to trust the processes and technology.

Figure 4. The release process as a pivotal function in managing risks.

References

[Diam05] Diamond, J. M. (2005). Guns, Germs, and Steel: The Fates of Human Societies. New York: Norton.

[Gula08] Gulati, R. & Nickerson, J. A. (2008). Interorganizational Trust, Governance Choice, and Exchange Performance. Organization Science 19(5), 688-708.

[IIA20] Institute of Internal Auditors (IIA) (2020). The IIA’s Three Lines Model: An update of the Three Lines of Defense. Retrieved from: https://global.theiia.org/about/about-internal-auditing/Public%20Documents/Three-Lines-Model-Updated.pdf

[Lipp06] Lippert, S. K., & Davis, M. (2006). A conceptual model integrating trust into planned change activities to enhance technology adoption behavior. Journal of Information Science, 32(5), 434-448.

[McKn96] McKnight, D. H., & Chervany, N. L. (1996). The Meanings of Trust. Minneapolis, Minn.: Carlson School of Management, Univ. of Minnesota.

[Noot97] Nooteboom, B., Berger, H., & Noorderhaven, N. G. (1997). Effects of Trust and Governance on Relational Risk. Academy of Management Journal, 40(2), 308-338.

[Sydo06] Sydow, J. (2006). How can systems trust systems? A structuration perspective on trust-building in inter-organizational relations. In R. Bachmann & A. Zaheer (Eds.), Handbook of Trust Research (pp. 377-392). Cheltenham/Northampton: Edward Elgar.

[Zahe98] Zaheer, A., McEvily, B., & Perrone, V. (1998). Does trust matter? Exploring the effects of interorganizational and interpersonal trust on performance. Organization Science, 9(2), 141-159.

An Internal Control Framework in a complex organization

Complex organizations need insight into their system of internal control. Based on this insight, the operation of that system can be tested periodically and continually adjusted to changing circumstances. More and more, there is a call for compiling a so-called Internal Control Framework. In this article we discuss an example from the unruly practice of the largest and most complex business management service provider of the Netherlands: the Politiedienstencentrum (PDC, Police Services Centre).

The Politiedienstencentrum (PDC, Police Services Centre)

The assignment of the Politiedienstencentrum (PDC, Police Services Centre) is to facilitate the entire Dutch police force (Regional units, National Unit, Police Academy and Landelijke Meldkamer Samenwerking [LMS, National Control Room Cooperation]) with high-quality business management. The position of the PDC in the force is depicted in Figure 1. This is how the PDC contributes to sound police work. The PDC takes care of Payroll, Purchasing, Housing, IT, Vehicles & Vessels, Weapons & Ammunition, Clothing & Equipment and External Communication, including the production of TV programs. To this end, the PDC houses seven departments: Purchasing, Finance, Facility Management, Human Resource Management, Information Management, IT and Communication. Over 7,000 colleagues work at the PDC on a daily basis. The PDC has an annual budget of nearly EUR 6 billion. The organization is relatively young, established in 2012 through the merger of the regional police forces into a single national police force.

Figure 1. Main structure of the police organization.

Two approaches of an Internal Control Framework

An Internal Control Framework (ICF) is a generally applicable framework in which all types of controls are displayed in interdependence. The most well-known model is the COSO framework that focuses on the entire internal control system, known as COSO II or Enterprise Risk Management (ERM) Framework.

But what does an ICF mean for a complex organization such as the PDC? At the beginning of 2021, PDC management gave the Planning & Control unit the assignment to find out. We asked ourselves a practical question: what does the ICF look like? Is it a document and, if so, how thin or thick is it? And how is it structured? To answer all these questions, we had many conversations, within and especially outside the police organization. Because why would we want to reinvent the wheel? A surprising result: a tour of a number of large private and public organizations did not deliver a single useful ICF example that the police could use. However, there appeared to be two trends:

  1. an ICF is the sum of the description of all controls in the processes, or
  2. an ICF is a document in outlines with a description of the way in which Internal Control within the organization has been designed.

The PDC has made a choice: a pragmatic approach that fits the current level of control within the PDC. The ICF has become a document in outlines which describes how Internal Control is designed for the entire organization. It provides an overview of and insight into its coherence, creates a common language and terminology, and refers to sources within our organization for more details. With that, the ICF document is also suitable for discussion at executive level. The ICF contains the leading principles of Internal Control within the PDC and sets the frame for all types of business processes: whether it concerns the development of real estate, the management of clothing and equipment or the production of TV programs. The ICF focuses primarily on business management, which is responsible for Internal Control, and secondarily on the controllers that support business management. Finally, the ICF is of value to our internal and external auditors.

Objective of control

The central theme of the ICF is control. Control is primarily about achieving the set objectives within legal and budgetary restrictions, despite the risks that could obstruct or prevent this. These can be operational goals, compliance and regulatory goals, financial objectives and accountability goals or policy objectives and development goals. Control does not solely serve the reliability of the financial accountability; it also serves a sound balance between the going concern activities and the renewal of services.

The framework

It took us a little over six months with a small group of professionals to prepare the ICF for the PDC, to align it with a representative group of colleagues (business management, controllers, staff) and to take a well-considered decision at board level. De facto, the alignment process was the first step in bringing the document to life within our organization. After formal decision-making, many discussions followed at all management levels within our organization. The fact that the ICF is mainly a description of what has already been arranged organization-wide for Internal Control, but had never been mapped in conjunction, helped us tremendously. Only a few new parts need to be implemented.

All in all, after one year the PDC has a working, living ICF for its entire shared services organization. Our aim was to deliver an accessible document of up to 20 pages. Ultimately, we have shaped the description of Internal Control of our organization in 7 chapters and 30 pages ([sJac21]). Figure 2 shows a rough sketch of each of these components/chapters. The framework is above all a description of how things are organized at the moment and only to a very limited extent a target image. It is emphatically not a model that needs to be implemented. It brings things together, consolidates, makes transparent and gives direction.

Figure 2. Major components of the ICF.

1. Typology of the PDC

It may sound a bit old-fashioned, but Starreveld has helped us again after all those years ([Berg20], [Leeu14]). The typology of the business functions of the PDC is diverse and the nature of the control is therefore also diverse: from the processing of salaries to fleet management, the logistics of weapons, the development and management of police-specific IT systems and the production of TV programs. We sketch the PDC in its context, the typology of the various business functions, the culture, the tension between going concern and renewal, the strategic developments, and the developments in the area of IT.

2. Planning & Control

For the PDC, the Planning & Control cycle is the core of Internal Control, from budget to annual report. Our cycle is anchored in that of the force as a whole, and the annual reports connect to the further development of our strategic vision. The preparation of the PDC annual plan in particular is extremely complex, as all portfolio plans of the operational portfolios come together with the needs of going concern. Renewal of the police force works along the lines of portfolio management, and practically every renewal in our operational processes impacts the business processes of the PDC. Insight into that impact prior to the annual plan process is essential. Over the past years, the Planning & Control organization of the PDC has been busy catching up on a relatively large backlog in the area of professionalization, organizational design, and staffing.

3. Risk management

Risk-based working is anchored in the genes of operational police work. That’s what we do. But risk management in the supporting business has been left behind corps-wide. That is why in our framework explicit attention has been given to the risk management process, the risk-based working, the typology of risks, the method of assessing risks, our risk appetite, and the different roles in the risk management.

4. Process control

From the start in 2012, the PDC has been organized in business operation columns. Each department (HRM, Finance, Facility Management, etc.) had the primary task of getting its own processes in order. However, many processes no longer take place in only one department these days. We distinguish well-known chains such as Procurement to Pay (P2P) and also police-specific chains such as Competent to Skilled and Equipped when it comes to means of violence (standard weapon, baton, pepper spray, etc.). Managing the improvement of the PDC-wide workflows is becoming more and more important. In our ICF we pay attention to the way in which we do so, for which workflows, and what the roles of the workflow owner and work process owner are.

5. Controls in the processes

Especially for our services, this is the most important part of the framework. We describe our system of controls in the processes that underlie the products and services of the PDC. We record our processes under architecture in BizzDesign. The aim is also to record all key controls of the main processes under architecture. We have started by recording all key controls focused on financial accountability and are now expanding that to all processes that are primarily directed at the delivery of our products and services. Regretfully, BizzDesign does not contain a Governance, Risk & Compliance (GRC) module. We are still struggling with that; the way in which we need to model the key controls is not optimal yet. In the ICF we explain the coherence between process control, quality assurance, quality measurement and service, because there is a lot of overlap there. Both worlds contain methods and techniques for the controlled delivery of products and services that comply with predefined KPIs. We strive to bring them together and to integrate them in the organization, if only by creating one language. Finally, using the models developed by Quinn and Cameron ([Came11]) and by prof. Muel Kaptein ([Kapt03]; see also [Bast15]), we outline the influence of culture and behavior on our internal control. Quinn’s model helped us to design controls and implement them in a way that matches our dominant culture. The soft controls model of Kaptein helped us to gain insight into the potential effect of soft control instruments on the desired behavior of employees.

6. Complying with laws and regulations

The PDC has to deal with a large diversity of laws and regulations. Whether it concerns regulations in the area of working conditions, financial management, privacy or the Weapons and Ammunition Act, the ICF provides general tools on how to deal with them. The ICF does not contain a concrete integration of all laws and regulations; it is too generic for that.

7. Improvement cycles

The police invests heavily in being a learning organization. This is necessary to achieve its goals. To this end, multiple improvement cycles have been designed that are closely related and coordinated. Development on a personal level and within the team or the network structure is the basis of the improvement cycles. At sector or service level, we work with a quality management system to test, measure and optimize the quality of the servicing. And our system of regular Control Self-Assessments (CSAs) is an important improvement tool (see Figure 3). As part of the two key questions of the CSA (“Are we doing the right things?” and “Are we doing things right?”), the following questions are addressed:

  • Are all critical risks controlled sufficiently?
  • Are we adhering to criteria and norms?
  • Do we steer on the desired behavior-risk ratio?
  • Do we use the right management and accountability reports?

The CSA system is the basis for the annual In-Control Statement of the police, as recorded in our annual report. At the level of the PDC as a whole, the improvement of the servicing is secured in the Planning & Control cycle.

Figure 3. Objects of the Control Self-Assessment.

Necessary in-depth improvement of the IT organization

The police is increasingly becoming an IT-driven organization. The share of IT in the overall business and in police work as a whole increases every year. That justifies separate attention for control in the area of IT. Our ICF is too generic in nature to be directly applicable to our IT organization. That is why we have developed a specific framework for the development of our police systems and why we have started implementing one framework for IT management processes. The goal is to identify and control the risks of the IT organization across the entire IT landscape (and the associated IT layers) with appropriate controls, taking applicable laws and regulations into account. As the framework for the management of the IT processes, the police has adopted the Government Information Security Baseline (Dutch: BIO). The BIO ([BIO20]) is a standardized framework for the Dutch government, based on the international ISO standards NEN-ISO/IEC 27001:2017 and NEN-ISO/IEC 27002:2017, to protect all its information (systems). The BIO provides direction for translating the ISO standards into concrete control measures.

KPMG best practices for BIO implementations at governmental organizations show that the first step of the implementation roadmap includes the formulation of the objective by top management and the need to translate the degree of control into the annual internal control statement in the annual report, specifically for our IT processes. The desired scope of that internal control statement determines the scope of the implementation process, which is usually carried out in phases. After the scope has been determined, a link is made per IT process to the BIO security framework in order to take inventory of which BIO security controls minimally need to be implemented.

Subsequently, a gap analysis is used to assess to what extent the current controls are in line with the desired controls from the BIO Security standard. At organizations with a limited degree of maturity in the area of IT control, it is often a major challenge to map existing controls to the BIO Security standard in practice. In addition, we often see that the ownership of the controls and the responsibility for testing them are not unambiguously designed and implemented. To considerably reduce the complexity of the mapping between existing controls and compliance with the BIO Security standard, a GRC tool can offer a solution. A precondition is to maintain a simple structure in the setup of this tool for recording and documenting the control measures, including a link to the relevant BIO and ISO standards on which the controls are based. This mapping is important because one control measure can affect multiple BIO and ISO standards.
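
As a minimal sketch of what such a simple structure could look like, the example below keeps each control as a record with its BIO/ISO references and derives a basic gap report from it. The control names, reference numbers and fields are invented for the illustration; any real mapping obviously has to follow the actual BIO control set and the GRC tool in use.

```python
# Illustrative mapping of existing controls to BIO/ISO references, plus a basic gap
# report: which required BIO controls are not covered by any existing control, and
# which covering controls are not tested periodically. All names and numbers are
# made up for the example.

existing_controls = [
    {"id": "AC-01", "name": "Quarterly user access review",
     "references": ["BIO 9.2.5", "ISO 27002 9.2.5"], "tested_periodically": True},
    {"id": "CM-03", "name": "Change approval before deployment",
     "references": ["BIO 12.1.2", "ISO 27002 12.1.2"], "tested_periodically": False},
]

required_bio_controls = {"BIO 9.2.5", "BIO 12.1.2", "BIO 12.4.1", "BIO 18.2.3"}

def gap_report(controls, required):
    covered = {ref for control in controls for ref in control["references"] if ref in required}
    untested = [control["id"] for control in controls if not control["tested_periodically"]]
    return {"not_covered": sorted(required - covered), "covered_but_untested": untested}

print(gap_report(existing_controls, required_bio_controls))
```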

Figure 4 outlines the KPMG approach to the BIO implementation, subdivided into five phases and five themes, of which the “control framework” theme, in effect the actual ICF for the IT management environment, is the most important.

Figure 4. Phased implementation method of the BIO.

The implementation of the BIO is a complex process, the duration of which is influenced by for example:

  • The desired scope of the roadmap (entire PDC, entire IT organization, specific IT parts, etc.)
  • The maturity of the IT environment and the quality of the design and implementation of the IT processes with the associated controls
  • The degree to which the existing controls already comply with the BIO Security standard and whether they are periodically tested
  • The knowledge of and experience with the design, implementation and testing of controls in general within the organization, and with the BIO Security standard in particular
  • The degree to which a contribution is made to the implementation roadmap, also outside the IT organization
  • The available resources (staffing and budget)

In addition to the BIO, it is necessary to control specific IT risks. In strictly regulated organizations we see that specific internal control frameworks are being developed and implemented for certain IT processes and/or parts of the IT infrastructure in order to control these risks. For example, for the process of transferring changes to the production environment with deployment tooling such as Azure DevOps or AWS, or for the part of the IT network through which inbound and outbound traffic is regulated.

Conclusion: what is the contribution of an ICF to the Dutch police?

For the police, the ICF is an instrument to take Internal Control to the next level. The Planning & Control organization offers the ICF as a menu. There are large differences between, and also within, the departments of the PDC, both in the typology of the business functions and in the degree of maturity in the area of internal control. Every PDC department chooses, also based on its own Control Self-Assessment, one or more subjects from the ICF with which it can further improve its internal control.

The ICF is a police-specific framework. Each of its parts needs further deepening. For example, the control of our processes only truly gains meaning when the key controls have been accessibly recorded per business process. For our IT environment it is important that we further develop the ICF based on the BIO Security standard and make the link to the various layers in the IT infrastructure and to the specifically high-risk IT processes. In that way we keep controlling the risks adequately.

For the executive management of the PDC, the ICF offers overview and insight, and it is an aid in steering towards the improvement of Internal Control. And we now have a common language where internal control is concerned. This creates calm, regularity and (administrative) cleanliness.

References

[Bast15] Basten, A.R.J., Bekkum, E. van & Kuilman, S.A. (2015). Soft controls: IT General Controls 2.0. Compact 2015/1. Retrieved from: https://www.compact.nl/articles/soft-controls-it-general-controls-2-0/

[Berg20] Bergsma, J. & Leeuwen, O.C. van (2020). Bestuurlijke informatieverzorging: Typologieën (Management Information Systems: Typologies). Groningen: Noordhoff Uitgevers.

[BIO20] BIO (2020, 17 June). Baseline Informatiebeveiliging Overheid, versie 1.04zv (BIO Baseline Information Security Government version 1.04zv). Retrieved from: www.bio-overheid.nl.

[Came11] Cameron, K.S. & Quinn, R.E. (2011). Diagnosing and changing organizational culture: Based on the competing values framework. Jossey-Bass.

[Kapt03] Kaptein, M. & Kerklaan, V. (2003). Controlling the ‘soft controls’. Management Control & Accounting, 7(6), 8-13.

[Leeu14] Leeuwen, O.C. van & Bergsma, J. (2014). Bestuurlijke informatieverzorging: Algemene grondslagen Starreveld (Management Information Systems: General Principles). Groningen: Noordhoff.

[sJac21] s’Jacob, R.A. (2021). The Internal Control Framework PDC. [The ICF document is public and available upon request, only in Dutch.]

Implementing a new GRC solution

Managing risks, controls and compliance has become an integral part of the business operations of any organization. The intent to be demonstrably in control is in most cases on the agenda of the Board of Management. Depending on the business or market sector, pressure to comply or demonstrate control comes from internal stakeholders as well as external stakeholders such as regulators, shareholders or external auditors ([Lamb17]). At the same time, there is the need to be cost-efficient, to curb the increase in effort required for risk management and compliance, or even to reduce the cost of control. In this context, GRC (Governance, Risk & Compliance) tooling and platforms are relevant: these revolve around the automation of managing internal control and risks and complying with regulations. Implementing these and achieving the intended benefits can be a challenge. This article gives an overview of the lessons we learned during years of implementing GRC solutions.

Introduction to GRC

To start off, it’s important to understand Governance, Risk & Compliance terminology, why there is a need to automate and in what way the solutions on the market support this need. The GRC concept aims to support:

  • Governance of the organization: the management and monitoring of policies, procedures and measures to enable the organization to function in accordance with its objectives.
  • Risk management: the methodologies and procedures aimed at identifying and qualifying risks and implementing and monitoring measures to mitigate these risks.
  • Compliance: working in compliance with applicable laws and regulations.

There may be multiple reasons to start an implementation project. From practical experience, we know that the following arguments are important drivers:

  • The playing field of GRC expands as a result of increasing regulations, requiring (additional) IT support. Think of the examples in the area of privacy and information security.
  • The execution of control, risk or compliance activities takes place in silos as a result of the organizational structure. This can lead to fragmented, ineffective or duplicated control or compliance measures and difficulty in pinpointing the weak spots in the GRC area within the organization.
  • The current way of conducting GRC activities is supported by an obsolete GRC solution or (worst case) by spreadsheets and email, making it labor-intensive to perform activities and a nightmare to report on.
  • The (future) effort that is spent on GRC activities is mainly related to the hours that employees spend on managing GRC activities to identify issues instead of resolving these. This usually is a reason to look for automation to replace expensive labor.

Functionality offered by GRC solutions

In its simplest form, a GRC solution is a database or document archive connected to a workflow engine and reporting capabilities, as a cloud application or on-premise. In the most extensive form, the required functionality is delivered as part of a platform solution that provides capabilities for all processes concerning (supplier) risk management, implementation of control measures and compliance activities. Mobile integration and out-of-the-box data integration capabilities can be included. Many providers offer IT solutions that support the various use cases in the GRC area. These can be grouped in the following categories:

  • Policy and regulations management: maintaining and periodically reviewing internal or external policies or regulations, managing deviations, and identifying whether new regulations might be applicable. Some providers offer (connections to) regulatory change identification.
  • Enterprise, operational or IT risk management: identifying and managing risks and, as a result, the reported issues and actions. These risks can arise at enterprise level, be non-financial (operational risk) or be focused on IT topics (IT risk).
  • Vendor risk management: this discipline focuses on identifying and mitigating risks for processes outsourced to suppliers and third parties, trying to prevent, for example, that the use of (IT) service providers creates unacceptable risks for the business.
  • Privacy risk management: focused on the risks of processing data and the protection of privacy. The registration of this type of risk can require additional measures, as these risks and possible associated issues can be sensitive in nature, requiring restricted access for risk officers.
  • Access risk management: managing risks of granting (critical) access to applications and data. Setting up baselines of critical functionality and segregation of duties and workflows to support the day-to-day addition or removal of users is usually part of this solution.
  • Continuous monitoring: using structured data (e.g. ERP transactions) to analyze transactional or configuration settings to identify and follow up control exceptions (see the sketch after this list).
  • Audit management: Planning, staffing and documenting the internal audit engagements that are conducted within an organization. Integrated GRC tooling often offers functionality that reuses information stored elsewhere in the GRC solution or platform, enabling efficient and focused execution of audits.
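
To give an impression of what such a monitoring rule can look like, the sketch below runs a simple duplicate-invoice check over ERP-style transaction records. The field names, sample data and the rule itself are illustrative; real continuous control monitoring content depends on the ERP system and the control being monitored.

```python
from collections import defaultdict

# Illustrative continuous-monitoring rule: flag possible duplicate vendor invoices in
# structured ERP-style data. Field names, sample records and the rule itself are
# examples; real CCM content depends on the ERP system and the control being monitored.

invoices = [
    {"doc": "5100001", "vendor": "V-0042", "invoice_no": "INV-881", "amount": 12500.00},
    {"doc": "5100002", "vendor": "V-0042", "invoice_no": "INV-881", "amount": 12500.00},
    {"doc": "5100003", "vendor": "V-0077", "invoice_no": "INV-102", "amount": 450.00},
]

def duplicate_invoice_exceptions(records):
    """Group postings by (vendor, invoice number, amount) and report groups with more than one document."""
    groups = defaultdict(list)
    for record in records:
        groups[(record["vendor"], record["invoice_no"], record["amount"])].append(record["doc"])
    return [{"vendor": vendor, "invoice_no": number, "documents": docs}
            for (vendor, number, _amount), docs in groups.items() if len(docs) > 1]

for exception in duplicate_invoice_exceptions(invoices):
    print("Follow up:", exception)
```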

All these topics may require a different way of managing risks: input data can differ and, as a result, the execution of measures can be automated to a greater or lesser extent. What they have in common is that most users of the solution require workflows to support the activities and reporting/dashboarding to enable adequate monitoring of status and results.

Lessons learned (the eight most valuable tips)

When a suitable GRC solution has been selected based on the requirements from the users, it has to be implemented and adopted by the organization to enable the benefits as desired. The technical design and implementation of the solution are important parts of these projects, but there’s more to it than just that …

There are many lessons to be learned from GRC implementation projects which apply to the system integrator, the business integrator and the customer. In the remainder of this article we will describe some of the key lessons (pitfalls) of GRC projects we have observed.

Key lessons:

  1. Well-defined GRC roadmap & phased approach
  2. Try to stick to standard out-of-the-box functionality
  3. A common language
  4. The importance of a design authority
  5. Garbage in is garbage out
  6. Importance of business involvement
  7. More than a technology implementation
  8. Business implementation as the key success factor

Lesson 1: GRC roadmap & phased approach

GRC applications often provide a broad range of functionalities that are interesting to different parts of the organization. Think of functionalities for risk & controls testing, audit management, IT controls, third-party risk management and policy management. Different departments may also start to show an interest in the functionalities that a GRC solution provides. When planning an implementation of GRC software, it is recommended that the organization first develops a GRC strategy and GRC roadmap. These are often initiated by a second-line function, and it is of course recommended that the various functions in the organization are involved in their development, for example compliance, information security, risk management and internal audit.

GRC solutions provide functionality for many use cases. Develop a roadmap to implement these functionalities one by one based on requirements.

The GRC roadmap can be used to prioritize requests from the organization and determine when a specific capability will be implemented in the GRC solution. Furthermore, it is recommended to define a very clear scope for the GRC project and not to try to implement all functionalities simultaneously. The different functionalities to be implemented will have an impact on the data objects (like risk, control or issue) in the system. Implementing too many different functionalities simultaneously can paralyze the design of these objects, with each waiting for the other to be finished. A more phased (agile) approach will allow an organization to reach a steadier state sooner, which can then be extended with additional functionalities.

Lesson 2: Stick to the standard

Most GRC solutions provide out-of-the-box functionalities for GRC. This standard out-of-the-box functionality is clustered in use cases like SOX, policy management, audit management and third-party risk management. The out-of-the-box functionality consists of predefined data objects for risks, control objectives, controls, entities and so on. In these data objects, standard fields and field attributes are available, which an organization can use in its solution. Additionally, the GRC vendors provide preconfigured workflows that can often be easily adjusted by activating or deactivating a review step. These standard out-of-the-box functionalities should be used as a reference, where minor tweaks to the standard are allowed; this will accelerate the implementation of the GRC solution.

Most GRC solutions provide out-of-the-box functionalities. Finetuning to meet the organizations requirements will speed up an implementation project. Do not start from scratch.

Organizations should limit customization of the standard configuration of the application. If customers decide to make a lot of changes to the standard functionality, this has an immediate impact on the overall project timeline and the required implementation budget. More time will be needed to prepare the design of the application, to configure and customize the application, and to test the application. Additionally, and depending on the GRC solution, a possible future upgrade of the system might be more complex and therefore more time-consuming, and might not always fit in the roadmap of the GRC vendor, which could result in additional future effort. Therefore, it is always recommended to stay close to the functionality provided by the off-the-shelf software or SaaS, and to avoid custom development (custom coding) as much as possible.

Lesson 3: A common language

In addition to the lessons above, it is important to mention that everyone involved in a GRC solution implementation project should have a common and shared understanding of the functionality and scope that will be implemented. It might sound obvious, but in too many cases projects fail due to a lack of shared GRC terminology, such as risk, event, control and issue, and of how these are connected.

Different departments or functions within an organization might have a different understanding of a risk, a risk event or an issue (which could be a risk). A common and shared terminology, and a shared definition of how to document these (data quality), will improve the language used within an organization.

Develop standard definitions for the key data objects in GRC. This will facilitate a common language of GRC in your organization.

It goes without saying that communication in such projects is key. From the very first step of the project, everyone should be on the same page to eliminate any ambiguities regarding the terminology used. Is there a shared foundation for the risk function? When each risk function within the organization manages risks in its own manner, using stand-alone solutions and creating analytical insights from different data sources, it is very difficult to share a common risk insight, as none of the risk functions speak the same language.

To prevent this from causing a complete project failure, a common risk taxonomy can help everyone to think, prioritize and communicate about risks in the same way. If this is not in place, key risk indicators could be interpreted in different ways, causing confusion about the required follow-up or about the actual risk a company is facing.

The fact that the organization is already considering the implementation of a GRC solution of course helps to get everyone to the same level of understanding. One of the objectives of the risk function is to at least align with the corporate-wide digital transformation goals of the organization. The risk function needs to define an ambition that supports the business and yet maintains the objectives and KPIs of a risk function.

Lesson 4: The importance of a design authority

As mentioned before, a GRC application can be used by various departments or functions within an organization. And all the stakeholders of these departments will have a different view on risks, controls, issues and actions, as mentioned in [Beug10]. They might be afraid to lose decision-making power and autonomy if they need to make use of an integrated risk management solution.

For a project team implementing the GRC solution, it can be very difficult to navigate and get alignment across all these departments and functions in an efficient way as all will have their own view and opinion on how the GRC application should be configured. Getting alignment on how the system should be designed and configured can become cumbersome and time-consuming which will have impact on project timelines. Furthermore, previously made decisions might get questioned over and over by other departments or functions.

A design authority empowered to make the design decisions on behalf of the organization will have a positive impact on designing the application.

Therefore, it is recommended to have an overall design authority in the project that is empowered to take the decisions regarding the roadmap of the project and the design and configuration of the GRC application. This person, often a senior stakeholder in a compliance or risk management function, should have a view of the overall requirements of the various departments and should be authorized to make the overall design decisions for the project. This will result in swift decision making and will have a positive impact on project timelines.

Lesson 5: Garbage in is garbage out

One of the use cases frequently used by organizations is “management of internal controls” (which can, for example, cover the IT, SOX or financial controls). In this use case a business entity hierarchy is created in the GRC application. As a second step, the business processes, risks and controls (and possibly other data) are uploaded into the GRC application and assigned to the entities for which these processes, risks and controls are applicable.

The master data to be uploaded into the GRC application is one of the key components of the GRC system implementation ([Kimb17]), but also an activity that can be very complex and time-consuming due to the number of risks and controls as well as the possible localization effort involved.
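
For illustration, the sketch below shows one possible way to structure this master data before upload: an entity hierarchy, a set of risks and controls, and the assignment of controls to entities with their local owners, plus a basic completeness check. Entity names, IDs and the field layout are invented; the actual upload format (spreadsheet template or import API) depends entirely on the GRC product.

```python
# Illustrative master data structure for a GRC upload: a business entity hierarchy with
# risks/controls and their assignment per entity. IDs, names and owners are invented;
# the actual upload format (spreadsheet template or import API) depends on the product.

entities = [
    {"id": "E-100", "name": "Group",            "parent": None},
    {"id": "E-110", "name": "Business unit NL", "parent": "E-100"},
    {"id": "E-120", "name": "Business unit BE", "parent": "E-100"},
]

controls = [
    {"id": "C-P2P-01", "risk": "R-P2P-01", "process": "Purchase to Pay",
     "description": "Three-way match of order, receipt and invoice"},
    {"id": "C-HR-04",  "risk": "R-HR-02",  "process": "Payroll",
     "description": "Payroll changes are approved by a second employee"},
]

# Assignment of controls to the entities where they apply, including the local owner.
assignments = [
    {"entity": "E-110", "control": "C-P2P-01", "owner": "p2p.owner.nl@example.com"},
    {"entity": "E-120", "control": "C-P2P-01", "owner": "p2p.owner.be@example.com"},
    {"entity": "E-100", "control": "C-HR-04",  "owner": "payroll.owner@example.com"},
]

# Basic completeness check before uploading: every assignment must refer to known master data.
known_entities = {entity["id"] for entity in entities}
known_controls = {control["id"] for control in controls}
for assignment in assignments:
    assert assignment["entity"] in known_entities and assignment["control"] in known_controls
print(f"{len(assignments)} assignments ready for upload")
```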

When (master) data management is not well defined or not set up correctly and according to the company’s needs, there can be an impact on reporting and on the efficiency of the functionalities that are used. If framework integration is not performed properly, this could even lead to duplicate controls being tested.

One of the key objectives of implementing a GRC solution is often to make risk & compliance processes more efficient by taking out inefficiencies or manual steps in, for example, the Risk & Control Self-Assessment (RCSA) process or the control testing processes. Often, quite some inefficiencies lie in the risk and control frameworks that are uploaded into the GRC environment. These risk and control frameworks might have been developed quite a few years ago and could include duplicate risks and controls, localized controls or primarily manual controls, or might be missing important risks and controls due to a changed (regulatory) environment. Issues with reporting on risks and controls might also be caused by the existing risk and control framework when no standard naming conventions are applied or when a central standardized risk and control library is not available. If these existing risk and control frameworks are implemented like-for-like in the GRC application, the inefficiencies remain.

Improve the quality of your risk and control framework before implementing a GRC solution.

When organizations are considering implementing a new GRC platform, it might be worthwhile to also reconsider the existing internal control framework for a couple of reasons:

  1. Control framework integration: often different departments or functions within an organization will be interested in making use of the GRC application. The shared internal control framework might therefore contain duplications or overlaps. It is important to harmonize control frameworks and to remove any duplicate risks and controls. The recommended starting point here is a risk assessment that focuses on key risks in processes, for example.
  2. Control framework transformation: Some risk and control frameworks might be somewhat older and would only have a focus on manual controls. The integrated control framework would allow the possibility for organizations to identify controls which are embedded within applications like segregation of duty controls or configuration controls.
  3. Automation: GRC applications often provide Continuous Control monitoring (CCM) functionality or will have this functionality on their short-term roadmap. Therefore it would be possible to identify controls in the control framework which have the potential to be (partly) automated (assessment, testing) via continuous control monitoring functionality. Especially when an organization has multiple ERP applications this might become relevant as the business case for CCM becomes more interesting.

It is recommended to perform these improvement activities on the risk and control framework before the actual implementation of the GRC application, as the framework becomes important input for the GRC application. This prevents duplication of work, since uploading the risk and control framework into the GRC application and assigning the risks and controls to the relevant business units and control owners can be a time-consuming task (especially when many business entities are involved and some control localization work is required).
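
One simple way to start the harmonization is a text-similarity pass over control descriptions to surface candidate duplicates for review, as sketched below. The sample controls and the similarity threshold are arbitrary choices for this example, and any merge decision still requires human judgment.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative first pass at framework harmonization: flag pairs of controls whose
# descriptions are very similar so they can be reviewed and possibly merged. The sample
# controls and the similarity threshold are arbitrary choices for this example.

controls = [
    {"id": "FIN-012", "description": "User access to the payment module is reviewed quarterly"},
    {"id": "IT-045",  "description": "Access of users to the payment module is reviewed every quarter"},
    {"id": "HR-003",  "description": "New joiners sign the code of conduct before their start date"},
]

def candidate_duplicates(items, threshold=0.7):
    """Return pairs of control IDs whose descriptions exceed the similarity threshold."""
    pairs = []
    for first, second in combinations(items, 2):
        score = SequenceMatcher(None, first["description"].lower(),
                                second["description"].lower()).ratio()
        if score >= threshold:
            pairs.append((first["id"], second["id"], round(score, 2)))
    return pairs

print(candidate_duplicates(controls))  # candidate pairs for the second line to review
```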

Lesson 6: Not enough senior management involvement

The lack of senior management involvement and sponsorship has proven fatal for many GRC implementation projects. Without their sponsorship, the end-user community might not feel committed to the new system and the new way of working, and many may even be hostile towards it. It is therefore paramount that management and end users are involved when the GRC project commences. At the start of the project, or even before the project kicks off, stakeholders should be informed about the introduction of the GRC solution. The best way to do so is to show them the solution and how it will positively impact their way of working. Once the solution has been shown, they should be allowed to raise all their questions and remarks, which can be addressed directly. The business stakeholders can then leave that very first meeting on a positive note and spread the word to the rest of the organization.

Make sure the business understands the importance of a GRC project to meet its strategic objectives. Senior management involvement is key to the successful implementation of GRC.

Also throughout the duration of the project, the business should be kept involved with activities regarding the design principles, testing and training. Management should continuously and openly support the GRC implementation to emphasize its advantages and the priority of the project. There is also the risk of losing project resources if the priority of the project is not emphasized enough by senior management.

To increase the level of involvement communication about the project is essential. The project manager should create a clear communication plan, announce the project to the business and clarify what it will mean for them with a focus on the advantages of the GRC solution, and report the project status periodically to the stakeholders.

KPMG’s Five Steps for tackling culture ([KPMG16]) framework could also help with the approach of GRC solution implementations as it focuses on the organizational and culture changes as well.

Lesson 7. More than a technology implementation – the target operating model

Many organizations still consider the implementation of a GRC application as an implementation of a tool. These organizations completely focus on the design and implementation of the application itself: the technology component. Often these projects are not successful because a standalone solution was implemented.

Figure 1. Target operating model.

When implementing a GRC solution, it is recommended to focus on the following components of the target operating model for GRC (or Risk). The components of the target operating model are:

  • The functional processes: an overview of the functional processes in and around the GRC application. These are processes covered by the GRC application (like performing a risk assessment) but could also be processes outside a GRC solution (for example, establishing a risk appetite). It is recommended to document the broader picture of GRC with a focus on the different GRC capabilities in an organization (like ERM, policy management, internal control, audit management and, for example, third-party risk). This process overview will provide the organization with detailed information on the existing risk processes, which may be included in the GRC solution (and are input to the GRC roadmap).
  • People: when the processes have been elaborated, it is possible to assign the relevant roles to processes and process steps. This is valuable information for the change management workstream. Based on the identified roles, different training and reporting lines can be described.

Developing a comprehensive target operating model for GRC will make sure that all requirements regarding processes that are relevant for GRC are documented, defined and implemented in the organization, not only the processes supported by the GRC application.

  • The same process model can be used to describe which activities are performed where in the organization (service delivery model). Certain processes might be performed at a central level in an expertise center (like maintaining the central organizational hierarchy and the risk and control frameworks) and other processes might be performed locally (like the assessment of a control by a local control owner). Documenting the service delivery model provides interesting information, especially when parts of the organization perform or test controls on behalf of other parts of the organization.
  • The technology part of the model of course is related to the implementation of the GRC application. It is possible to make a link with the process model to identify the processes which are not or not yet supported by the GRC tool.
  • Performance and insight: often forgotten during an implementation of a GRC tool. But is very important to think upfront about the information that the organization would like to get out of the solution. If this is not taken into consideration when designing the application and the data to be uploaded and assigned in the system, there is a reasonable chance that not all relevant reporting requirements can be met via the solution (slice and dice).
  • Governance: it is important to define the governance model for the solution. An example that we often see is that the solution is configured, but there is no process in place regarding possible changes to the tool or to request possible enhancements. The controls concerning the GRC tool are also not documented and performed. We have seen too many systems where workflows are put in place to control owners to assess controls, but there is no real monitoring if workflows are actually followed up and closed in the system, or that workflows have been planned accurately (and completely).

Lesson 8: Last but not least: business implementation as the key success factor

Where lesson 6 focused on the more top-down involvement of senior management and sponsorship, it is also important to focus on the business and end users, as they will be working with the newly implemented GRC solution. The implementation of a GRC solution contains a technical aspect in which an IT system is to be designed and implemented, a repository of risks and controls is to be set up and reports are to be developed. However, the technical implementation alone does not make the GRC solution a success. The solution is to be used by the business, and if they are not on board with the project it can be considered a failure, especially because executing control activities is often seen as a burden by people in the business (1st line).

A key component of a GRC implementation is business implementation. Make sure that the business accepts the solution and feels comfortable to identify new risks, raise issues with controls or proactively raise other issues or deficiencies. Only then will the organization reap the benefits of GRC.

The introduction of a GRC solution also means a new way of working. Often, in the rush to get a GRC implementation project going, management jumps straight into the technical implementation without thinking of the organizational changes that need to take place as well. There is no cutting corners in this; eventually the business needs to be on board with the GRC solution in order to make it a success. Make sure end users are involved at every phase of the project, especially with the design, testing and training. When end users are involved in the design phase, they will feel a sense of ownership as they are asked about the features and functionalities of the solution. During testing, the business will be responsible for accepting the developed solution against the design that they have helped set up. Through training, they will become knowledgeable about the solution and the new way of working. Enabling key users to facilitate end user training sessions (train the trainer) will increase the sense of ownership even more within the organization, lowering the barrier for end users to reach out with questions on how to use the solution and its new process.

Besides training users on the application, the organization should also explain why the application is being implemented (the importance of compliance, of being in control, of doing business with the right third parties and so on) and what is expected of the people making use of the GRC solution. Business users should be encouraged to do the right thing. Stating that the design, implementation or operating effectiveness of a control is NOT adequate should be acceptable. This will allow the organization to further improve its internal control environment. Raising an issue in a GRC solution will also allow a company to further strengthen its control environment. If users just close workflows to get them off their worklist, there is limited benefit to the GRC tool. As auditors we have seen many examples in GRC tooling where controls were only rated as done or completed or "risk identified but mitigated". These kinds of comments just raise more questions, and the GRC application will have limited benefit to the organization. If control owners just enter the rating effective because they are afraid that a control cannot be rated as ineffective, then the GRC solution also has limited benefit.

Besides the technical component, users should be trained on what is expected of them. What is the expected evidence of a control execution or test, and what is the rationale behind the answers in an RCSA questionnaire? What does good test evidence look like? If the users of a GRC solution understand why they are using the software and what kind of input is expected in the GRC tool, the organization will benefit from the solution. The business implementation component is therefore the key success factor in implementing a GRC solution.

Conclusion

GRC projects can become very complex and long-running endeavors for organizations, but there are lessons from other projects that have a positive impact on them. The lessons learned, which if applied during an implementation will allow a GRC project to run more smoothly, are not very different from the lessons learned from other IT projects. Business implementation and business involvement in a GRC project are the key success factor of implementing a GRC solution. This is the workstream that will make sure that business users adopt the GRC solution and use it as intended: as a key component of the internal control environment of the organization.

References

[Beug10] Beugelaar, B. et al. (2010). Geslaagd GRC binnen Handbereik. Compact 2010/1. Retrieved from: https://www.compact.nl/articles/geslaagd-grc-binnen-handbereik/

[Kimb17] Kimball, D.A. et al. (2017). A practical view on SAP Process Control. Compact 2017/4. Retrieved from: https://www.compact.nl/articles/a-practical-view-on-sap-process-control

[KPMG16] KPMG (2016). Five steps to tackling culture. Retrieved from: https://assets.kpmg/content/dam/kpmg/co/pdf/co-17-01-09-hc-five-steps-to-tackling-culture.pdf

[Lamb17] Lamberiks, G.J.L. et al. (2017). Trending Topics in GRC tooling. Compact 2017/3. Retrieved from: https://www.compact.nl/articles/trending-topics-in-grc-tooling

Mastering the ESG reporting and data challenges

Companies are struggling with how to measure and report on their Environmental, Social, and Governance (ESG) performance. How well a company performs on ESG aspects is becoming more important for investors, consumers, employees, and business partners, and therefore for management. This article sheds light on how companies can overcome ESG reporting (data) challenges. A nine-step structured approach is introduced to give companies guidance on how to tackle these challenges.

Introduction

Environmental, Social and Governance (ESG) aspects of organizations are important non-financial reporting topics. Organizations struggle with how to measure ESG metrics and how to report on their ESG performance and priorities. Many organizations haven't yet defined a corporate-wide reporting strategy for ESG as part of their overall strategy. Other organizations are already committed to ESG reporting and are struggling to put programs into place to measure ESG metrics and to steer their business, as it is not yet part of their overall heartbeat. Currently, most CEOs are weathering the COVID storm and are managing their organization's performance by trying to outperform their financial targets. From a sustainability perspective, however, the waves are becoming higher and the storm is intensifying rapidly, as ESG is becoming the new standard to evaluate an organization's performance.

How well a company performs on ESG aspects is becoming an increasingly important performance metric for investors, consumers, employees, business partners and therefore management. Next to performance, information about an organization's ESG metrics is also requested by regulators.

Investors are demanding ESG performance insights. They believe that organizations with a strong ESG program perform better and are more stable. On the other hand, poor ESG performance poses environmental, social, and reputational risks that can damage the company’s performance.

Consumers increasingly want to buy from organizations that are environmentally sustainable, demonstrate good governance, and take a stand on social justice issues. They are even willing to pay a premium to support organizations with a better ESG score.

Globally, we are seeing a war for talent, with new recruits and young professionals looking for organizations that have a positive impact on ESG aspects, because that is what most appeals to them and what they would like to contribute to. Companies that take ESG seriously will be ranked at the top of the best places to work and will find it easier to retain and hire the best employees.

Across the value chain, organizations will select business partners that are, for example, most sustainable and reduce the overall carbon footprint of the entire value chain. Business partners solely focused on creating value at the lowest cost will be re-evaluated because of ESG. Organizations that do not contribute to a sustainable value chain may find it difficult to continue their business in the future.

The ESG KPIs are only the tip of the iceberg

The actual reporting on ESG key performance indicators (KPIs) is often only a small step in an extensive process. All facets can be compared to an iceberg, where only certain things are visible to stakeholders – the “tip of the iceberg”: the ESG KPIs or report in this case. What is underneath the water, however, is where the challenges arise. The real challenge of ESG reporting is a complex variety of people, processes, data and systems aspects which need to be taken into account.

Figure 1. Overview of aspects related to ESG reporting.

In this article, we will first further introduce ESG reporting, including the insights required by ESG stakeholders. After this, we will elaborate on the ESG data challenges, and we will conclude with a nine-step structured approach on how to master the reporting and data challenges, covering the "below the waterline" aspects related to ESG reporting.

ESG is at the forefront of the CFO agenda

The rise in the recognition of ESG as a major factor within regulation, capital markets and media discourse has led CFOs to rethink how they measure and report on ESG aspects for their organization.

Finance is ideally positioned in the organization to track the data needed for ESG strategies and reporting. Finance also works across functions and business units, and is in a position to lead an organization's ESG reporting and data management program. The (financial) business planning and analysis organization can connect ESG information, drive insights, and report on progress. Finance has the existing discipline, governance and controls to leverage for the required collation, analysis and reporting of ESG data. Therefore, we generally see ESG as an addition to the CFO agenda.

ESG as part of the “heartbeat” of an organization

Embedding ESG is not solely focused on the production of a new non-financial report. It is also about understanding the drivers of value creation within the organization, enabling business insights and managing sustainable growth over time. Embedding ESG within an organization should impact decision-making and, for example, capital allocation.

The following aspects are therefore essential to secure ESG as part of the company's heartbeat:

  • Alignment of an organization’s purpose, strategy, KPIs and outcomes across financial and non-financial performance.
  • Ability to set ESG targets and financial performance and track yearly/monthly performance with drill-downs, target versus actual and comparisons across dimensions (e.g. region, site, product type).
  • Automated integration of performance data to complete full narrative disclosures for internal and external reporting and short-term managerial dashboards.

Embedding ESG into core performance management practices is about integrating ESG across the end-to-end process, from target setting and budgeting through to internal and external reporting, to ensure alignment between financial and non-financial performance management.

An important first step is to articulate the strategy: translating the strategic vision of the organization into clear measures and targets in order to focus on executing the strategy and achieving business outcomes. ESG should be part of the purpose of the organization and integrated into its overall strategy. In order to achieve this, organizations need to understand ESG and the impact of the broad ESG agenda on their business and environment. They need to investigate which ESG elements are most important for them, and these should be incorporated into the overall strategic vision.

Many organizations still run their business using legacy KPIs, or "industry standard" KPIs, which allow them to run the business in a controlled manner. However, this does not necessarily contribute to the strategic position that the organization is aiming for. These KPI measures are not just financial but look at the organization as a whole. Although the strategy is generally focused on growing shareholder value and profits, the non-financial and ESG measures underpin these goals, from customer through to operations and people/culture to relevant ESG topics.

The definition of the KPIs is critical to ensure linkage to the underlying drivers of value and to ensure that business units are able to focus on strategically aligned core targets to drive business outcomes. When an organization has (re-)articulated its strategy and included ESG strategic objectives, the next step is to embed these into its planning and control cycle to deliver decision support.

In addition to defining the right ESG metrics to evaluate the organizational performance, organizations struggle with unlocking the ESG relevant data.

Data is at the base of all reports

With a clear view of the ESG reporting and KPIs, it is time to highlight the raw material required, which lies deep below sea level: data. Data is sometimes referred to as the new oil, or an organization's most valuable asset. But most organizations do not manage data as if it were an asset; not in the way they would do for their buildings, cash, customers and, for example, their employees.

ESG reporting is even more complex than “classic” non-financial reporting

A first challenge with regard to ESG data is the lack of a standardized approach to ESG reporting. Frameworks and standards have been established to report on ESG topics like sustainability, for example the Global Reporting Initiative (GRI) and the Sustainability Accounting Standards Board (SASB) standards, the latter being widely used in financial services organizations. However, these standards are self-regulatory and lack universal reporting metrics, and therefore a universal data need.

Even if there were one global standard in place, companies would still face challenges when it comes to finding the right data, as data originates from various parts of the organization like the supply chain and human resources, but also from external vendors and customers ([Capl21]). The absence of standard approaches leads to a lack of comparability among companies' reports and confusion among companies about which standard to choose. The KPI definition must be clear in order to define the data needed.

In April 2021, the European Commission adopted a proposal for a Corporate Sustainability Reporting Directive (CSRD) which radically improves the existing reporting requirements of the EU’s Non-Financial Reporting Directive (NFRD).

Besides a lack of a standardized approach, more data challenges on ESG reporting arise:

  • ESG KPIs often require data that has not been managed until now. Financial data is documented, has an owner, and has data lifecycle management processes and tooling, but ESG data mostly doesn't. This affects the overall data quality, for example.
  • Required data is not available. As a consequence, the required data needs to be recorded, if possible reconstructed or sourced from a data provider.
  • The outputs of data collectors and providers are unverified and inconsistent, which could affect the data quality.
  • Processing the data and producing the ESG output is relatively new compared to financial reporting and is on many occasions based on end-user computing tools like Access and Excel, which could lead to inconsistent handling of data and errors.
  • The ESG topic is not only about the environment. The challenge is that a company may need different solutions for different data sources (e.g. CR360 or Enablon for site-based reporting (HSE) and another for HR data, etc.).

Requirements like the CSRD make it clearer for organizations what to report on, but at the same time it is sometimes not clear to companies how the road from data to report is laid out. Looking at the data challenges mentioned above, it is also important for organizations to structure a solid approach to tackling the ESG challenges, which will be introduced in the next section.

A structured approach to deal with ESG reporting challenges

The required "below the waterline" activities can be summarized in nine sequential steps to structurally approach these ESG challenges. Using a designed approach does not cater for everything, but it will be a basis for developing the right capabilities and moving in the right direction.

Figure 2. ESG "below the waterline" steps.

This approach consists of nine sequential steps or questions covering the People, Processes, Data and Source systems & tooling facets of the introduced iceberg concept. The “tip of the iceberg” aspects with regard to defining and reporting the required KPI were discussed in the previous paragraphs. Let’s go through the steps one by one.

  1. Who is the ESG KPI owner? Ownership is one of the most important measures in managing assets. Targets and related KPIs are generally assigned to specific departments, and progress is measured using a set of KPIs. When we look at ESG reporting, this assignment is often less clear. Having a clear understanding of which department or role is responsible for a target also leads to a KPI owner. It is often challenging to identify the KPI owner, since it can be unclear who is responsible for the KPI. A KPI owner has multiple responsibilities. First and foremost, the owner is responsible for defining the KPI. Second, the KPI owner plays an important role in the change management process. Guarding consistency is a key aspect, as reports often look at multiple moments in time. It is important that when two timeframes are compared, the same measurement is used to say something about a trend.
  2. How is the KPI calculated? Once it is known who is responsible for a KPI, a clear definition of how the KPI is calculated should be formulated and approved by the owner. This demands a good understanding of what is measured, but more importantly how it is measured. Setting definitions should follow a structured process including logging the KPI and managing changes to the KPI, for example in a KPI rulebook.
  3. Which data is required for the calculation? A calculation consists of multiple parts that all have their own data sources and data types. An example calculation of CO2 emission per employee needs to look at emission data as well as HR data. More often than not, these data sources all have a different update frequency and many ways of registering. In addition to the difference in data types, data quality is always a challenge. This also starts with ownership. All important data should have an owner who is, again, responsible for setting the data definition and for improving and upholding the data quality. Without proper data management measures in place ([Jonk11]), the data quality cannot be measured and improved, which has a direct impact on the quality of the KPI outcome.
  4. Is the data available and where is it located? Knowing which data is needed brings an organization to the next challenge: is the data actually available? Next to availability, the granularity of the data is an important aspect to keep in mind. Is the right level of detail of the data available, for example per department or product, to provide the required insights? A strict data definition is essential in this quest.
  5. Can the data be sourced? If the data is not available, it should be built or sourced. An organization can start registering the data itself or the data can be sourced to third parties. Having KPI and data definitions available is essential in order to set the right requirements when sourcing the data. Creating (custom) tooling or purchasing third-party tooling to register own or sourced data is a related challenge. It is expected that more and more ESG specific data solutions will enter the market in the coming years.
  6. Can the data connection be built? Nowadays, many (ERP) systems offer integrated connectivity as standard; this is not a given for many systems, however. It is therefore relevant to investigate how the data can be retrieved. Data connections can have many forms and frequencies, like streaming, batch, or ad hoc. Depending on the type of connection, structured access to the data should be arranged.
  7. Is the data of proper quality? If the right data is available, its quality can be determined, with the data definition as the basis. Based on data quality rules, for example for the required syntax (should the value be a number or a date?), the data quality can be measured and improved (see the sketch after this list). Data quality standards and other measures should be made available within the organization in a consistent way, in which the data owner again plays an important part.
  8. Can the logic be built? Building reports and dashboards requires a structured process in which the requirements are defined, the logic is built in a consistent and maintainable way and the right tooling is deployed. In this step the available data is combined in order to calculate the KPI based on the KPI definition, with the KPI owner giving final approval of the outcome.
  9. Is the user knowledgeable enough to use the KPI? Reporting the KPI is not a goal in itself. It is important that the user of the KPI is knowledgeable enough to interpret the KPI, in conjunction with other information and its development over time, to define actions and adjust the course of the organization if needed.
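
To make steps 2, 7 and 8 more concrete, the sketch below illustrates in Python how a hypothetical "CO2 emission per employee" KPI could be calculated from emission and HR data, including a simple syntax-based data quality check. All column names, values and the quality rule are illustrative assumptions rather than a prescribed implementation.

    # Minimal, illustrative sketch: calculating a hypothetical "CO2 emission per
    # employee" KPI from two assumed data sources, with a basic data quality check.
    # Column names (site, co2_tonnes, fte) are assumptions for illustration only.
    import pandas as pd

    emissions = pd.DataFrame({
        "site": ["Plant A", "Plant B", "Plant C"],
        "co2_tonnes": [1200.5, 830.0, None],   # missing value: a data quality issue
    })
    hr = pd.DataFrame({
        "site": ["Plant A", "Plant B", "Plant C"],
        "fte": [310, 145, 95],
    })

    # Data quality rule (step 7): co2_tonnes must be present and non-negative.
    valid = emissions["co2_tonnes"].notna() & (emissions["co2_tonnes"] >= 0)
    issues = emissions[~valid]
    if not issues.empty:
        print("Data quality issues, to be reported to the data owner:")
        print(issues)

    # KPI calculation (step 8): combine the valid data according to the KPI definition.
    kpi = emissions[valid].merge(hr, on="site")
    kpi["co2_per_employee"] = kpi["co2_tonnes"] / kpi["fte"]
    print(kpi[["site", "co2_per_employee"]])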

Based on this nine-step approach, the company will have a clear view of all the challenges of the iceberg and of the steps that need to be taken to be able to report and steer on ESG. The challenges can be diverse, ranging from defining the KPIs to the tooling, the sourcing of the data or data management. Structuring the approach helps the organization now and going forward, as the general consensus is that the reporting and therefore the data requirements will only grow.

Conclusion

The demand to report on ESG aspects is diverse and growing. Governments, investors, consumers, employees, business partners and therefore management are all requesting insights into an organization's ESG metrics. It seems like the topic is on the agenda of every board meeting, as it should be. To be able to report on ESG-related topics, it is important to know what you want to measure, how/where/whether the necessary data is registered, and to have a feasible approach towards reporting. ESG KPIs cannot be a one-off, as the scope of ESG reporting will only grow; the ESG journey has only just begun. It is a journey that invites organizations to dig deeper into the subject and to mature further, for which a consistent approach is key.

The D&A Factory approach of KPMG ([Duij21]) provides a blueprint architecture to utilize the company’s data. KPMG’s proven Data & Analytics Factory combines all key elements of data & analytics (i.e., data strategy, data governance, (master) data management, data lakes, analytics and algorithms, visualizations, and reporting) to generate integrated insights like a streamlined factory. Insights that can be used in all layers of your organization: from small-scale optimizations to strategic decision-making. The modular and flexible structure of the factory also ensures optimum scalability and agility in response to changing organizational needs and market developments. In this way, KPMG ensures that organizations can industrialize their existing working methods and extract maximum business value from the available data.

References

[Capl21] Caplain, J. et al. (2021). Closing the disconnect in ESG data. KPMG International. Retrieved from: https://assets.kpmg/content/dam/kpmg/xx/pdf/2021/10/closing-the-disconnect-in-esg-data.pdf

[Duij21] Duijkers, R., Iersel, J. van, & Dvortsova, O. (2021). How to future proof your corporate tax compliance. Compact 2021/2. Retrieved from: https://www.compact.nl/articles/how-to-future-proof-your-corporate-tax-compliance/

[Jonk11] Jonker, R.A., Kooistra, F.T., Cepariu, D., Etten, J. van, & Swartjes, S. (2011). Effective Master Data Management. Compact 2011/0. Retrieved from: https://www.compact.nl/articles/effective-master-data-management/

Privacy audits

The importance of data privacy has increased significantly in the last couple of years, and the introduction of the General Data Protection Regulation (GDPR) has increased it even more. Data privacy is an important management aspect and contributes to sustainable investments. It should therefore take a prominent role in GRC efforts and ESG reporting. This article discusses the options for performing privacy audits and the relevance of their outcomes.

Introduction

In recent years, there have been various developments with regard to data privacy. These developments, and especially the introduction of the General Data Protection Regulation (GDPR), forced organizations to become more aware of the way they process personal data. However, not just organizations have been confronted with these developments, individuals who entrust organizations with their data have also become more aware of the way their personal data is processed. Therefore, the need to demonstrate compliance with data privacy laws, regulations and other data privacy requirements has increased among organizations.

Since data privacy is an important management aspect and contributes to sustainable investments, it has taken a prominent role in Governance, Risk management & Compliance (GRC) efforts and Environmental, Social & Governance (ESG) reporting. GRC and ESG challenge organizations to approach the way they deal with personal data from different angles and to report on their efforts. However, because of the complexity of privacy laws and regulations and a lack of awareness, it seems to be quite a challenging task for organizations to demonstrate the adequacy of their privacy implementation. A lot can be gained when determining whether controls are suitably applied in this regard, since there are no straightforward methods that can be applied to provide insight. The poor state of awareness and knowledge on this topic makes this even more complicated.

This article explains the criticality of GDPR in obtaining compliance, followed by a description of the various ways in which privacy compliance reporting can be performed. In addition, the role of privacy audits, their value, and the relationship of privacy audits to GRC & ESG is explained, prior to providing some closing thoughts on the development of the sector. The key question in this article is whether privacy audits are relevant for GRC & ESG.

Criticality of the GDPR in obtaining compliance

Although the GDPR was already implemented in May 2018, it is still a huge challenge for organizations to cope with. This privacy regulation has not only resulted in organizations being required to prove their level of compliance, but it has also increased the interest of individuals in how their personal data is processed by organizations. The most important principles of the GDPR, as listed in Article 5, are:

  1. Lawfulness, Fairness, and Transparency
  2. Limitations on Purposes of Collection, Processing & Storage
  3. Data Minimization
  4. Accuracy of Data
  5. Data Storage Limits, and
  6. Integrity and Confidentiality

The rights that individuals have as data subjects are listed in Chapter 3 of the GDPR and are translated into requirements that should be met by organizations, such as:

  1. The right to be informed – organizations should be able to inform data subjects about how their data is collected, processed, stored (incl. for how long) and whether data is shared with other (third) parties.
  2. The right to access – organizations should be able to provide data subjects access to their data and give them insight into what personal data is processed by the organizations in question.
  3. The right to rectification – organizations must rectify personal data of subjects in case it is incorrect.
  4. The right to erasure/the right to be forgotten – in certain cases, such as when the data is processed unlawfully, the individual has the right to be forgotten, which means that all personal data of the individual must be deleted by the organization processing it.
  5. The right to restrict processing – under certain circumstances, for example, when doubts arise about the accuracy of the data, the processing of personal data could be restricted by the data subject.

A starting point for any organization to determine whether and which privacy requirements are applicable is a clear view of the incoming and outgoing flows of data and the way the data is processed within and outside the organization. In case personal data is processed, an organization should have a processing register, personal data being defined as any data that can be related to natural persons. In addition, the organization should perform Data Privacy Impact Assessments (DPIAs) for projects that implement new information systems that process sensitive personal data and where a high degree of privacy protection is needed.

The obligation to keep a data processing register and the obligation to carry out DPIAs ensure that the basic principles required by the privacy regulation for the processing of personal data (elaborated in Chapter 2 of the GDPR) and privacy control have the right scope. Furthermore, these principles ensure that the processing of personal data by an organization is done in a legitimate, fair and transparent way. Organizations should hereby bear in mind that processing personal data is limited to the purpose for which the data has been obtained. All personal data that is requested should be linkable to the initial purpose. The latter has to do with data minimization, which is also one of the basic principles of the GDPR. Regarding the storage of data, organizations should ensure that data is not stored longer than necessary. The personal data itself should also be accurate and must be handled with integrity and confidentiality.

Organizations are held accountable by the GDPR for demonstrating their compliance with applicable privacy regulations. The role of the Data Protection Officer (DPO) has increased considerably in this regard. The DPO is often seen as the first point of contact for data privacy within an organization. It is even mandatory to appoint a DPO in case the organization is a public authority or body. DPOs are appointed to fulfill several tasks, such as informing and advising management and employees about data privacy regulations, monitoring compliance with the GDPR and increasing awareness with regard to data privacy by, for example, introducing mandatory privacy awareness training programs.

Demonstrating compliance with privacy regulations can be quite challenging for organizations, and especially for DPOs. A certification mechanism for demonstrating compliance is outlined in Article 42 of the GDPR. However, practice has shown that demonstrating compliance is more complex than this article suggests. At this moment the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), the Dutch accreditation council (Raad voor Accreditatie) and other regulators have not yet arrived at a practical approach for issuing certificates to organizations that meet the requirements, due to the elusive nature of the law article. Besides the certification approach foreseen in the GDPR, there are different approaches in the market which organizations can use to report on their privacy compliance. In the next section some of these reporting approaches are elaborated on.

Reporting on privacy compliance

There are different ways in which organizations can report on privacy. Of course, there are self-assessments and advisory-based privacy reporting. These ways of reporting on privacy are mostly unstructured and their conclusions subjective, however, which makes it difficult to benchmark organizations against each other. To make privacy compliance more comparable and the results less questionable, there are, broadly speaking, two ways of more structured reporting in the Netherlands: reporting based on privacy assurance and reporting based on privacy certification. They are further explained in the following paragraphs of this section.

A. Reporting based on privacy assurance

Assurance engagements can be defined as assignments in which auditors give an independent third-party statement ("opinion") on objects by testing them against suitable criteria. Assurance engagements are meant to instill confidence in the intended users. These engagements originate in the financial audit sector. How they should be performed and reported is predefined by internationally accepted "Standaarden" (standards) and "Richtlijnen" (guidelines) promulgated by the NBA and NOREA.1 As part of assurance engagements, controls are tested using auditing techniques consisting of the Test of Design (ToD) and/or Test of operating Effectiveness (ToE). Based on the results of the controls testing, an opinion is given on the research objects. This opinion can be unqualified, qualified (with limitation), adverse, or a disclaimer of opinion. The most commonly used assurance "Standaarden" and "Richtlijnen" in the Netherlands to report on privacy are ISAE 3000, SOC 1 and SOC 2. ISAE 3000 is a generic standard for assurance on non-financial information. SOC 1 is meant to report relevant non-financial control information for financial statement analysis purposes, and SOC 2 is set up for IT organizations that require assurance regarding security, availability, processing integrity, confidentiality and privacy related controls. Assignments based on ISAE 3000, SOC 1 and SOC 2 can lead to opinions on privacy control. The criteria in scope of an ISAE 3000 or SOC 1 engagement can be selected freely, as long as the selection leads to a cohesive, clear and usable result. The criteria for SOC 2 are prescribed, although extension is possible.

NOREA gives organizations the possibility to obtain a Privacy Audit Proof quality mark for individual or multiple processing activities of personal data, or for an entire organization ([NORE21]). This mark can be obtained on the basis of an ISAE 3000 or SOC 2 privacy assurance report with an unqualified opinion. The NOREA Taskforce Privacy has drawn up guidelines for performing privacy assurance engagements and obtaining the Privacy Audit Proof quality mark. One of the conditions for this quality mark is the use of the NOREA Privacy Control Framework (PCF) as the set of criteria in the case of an ISAE 3000 engagement, or the use of the criteria elaborated in the privacy section of a SOC 2 assurance report. The Privacy Audit Proof quality mark can be obtained by either controllers or processors. After the audited organization has handed over an unqualified assurance report and the relevant information, NOREA grants it permission to use the mark for one year, under certain conditions.

The extent to which an opinion on privacy control resulting from an assurance engagement equals an opinion on privacy compliance depends on the criteria in scope of the assurance engagement. An opinion on privacy controls, although a good indicator, can never be seen as an all-encompassing compliance statement. Because the GDPR is ambiguous and the selection of controls in scope requires interpretation, an objective opinion on compliance by financial or IT auditors is not possible.

B. Reporting based on privacy certification

Certification originates from quality control purposes. To be eligible for certification, an independent, accredited party should assess whether the management system of the organization concerned meets all requirements of the standard. Certification audits are meant to make products and services comparable. In addition, the drive for continuous improvement is an important part of these audits.

In general, the most commonly used certifications in the Netherlands are those originating from the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) ([ISO21]). Examples of ISO/IEC standards are ISO/IEC 27001 (information security management), ISO/IEC 27002 (information security controls), ISO/IEC 27018 (protection of personal data in public clouds) and ISO/IEC 29100 (privacy framework). In addition, ISO 27701 was introduced in August 2019 as an extension of ISO 27001. This standard focuses on the privacy information management system (PIMS). It assists organizations in establishing systems to support compliance with the European Union General Data Protection Regulation (GDPR) and other data privacy requirements, but as a global standard it is not GDPR specific.

Other privacy certification standards are, for example, BS 10012, the European Privacy Seal (also named EuroPrise), the Europrivacy GDPR certification and private initiatives, like certification against the Data Pro Code ([NLdi21]). BS 10012, as the British Standard on PIMS, has mostly been replaced by ISO 27701. EuroPrise provides certifications that demonstrate that, for example, IT products and IT-based services comply with European data protection laws ([EuPr21]). The Europrivacy GDPR certification, as stated on its website, "provides a state-of-the-art methodology to certify the conformity of all sorts of data processing with the GDPR" ([Euro21]). In the Netherlands, NLdigital, an association of ICT companies, has developed the Data Pro Code. This Code specifies the requirements of the GDPR for data processors. Due to their specific nature, the Europrivacy GDPR certification and certification against the Data Pro Code are less commonly used in the Netherlands.

C. Privacy assurance versus privacy certification

The main difference between privacy assurance and certification is that assurance is more assignment-specific and in-depth. This is illustrated in Figure 1. In this figure, the main differences between privacy assurance based on ISAE/COS or Directive 3000 and Certification according to ISO 27701 are summarized.

Figure 1. Comparison of privacy assurance versus privacy certification (based on [Zwin21]).

Since the privacy reporting business hasn't matured yet, privacy assurance and privacy certification can coexist and have their own benefits. Organizations that want to report on privacy should choose the way that suits their needs, which depends, for instance, on their level of maturity.

Privacy audits

Although a lot of knowledge and experience is available, performing an audit is an intensive process. This is especially the case for privacy audits. Since personal data cuts across the organization, constitutes a separate area of expertise and is not tangible, privacy audits are considered to be even more difficult.

This section describes typical aspects of privacy audits. As a model for describing these aspects, a privacy audit is considered to follow the phases shown in Figure 2.

Figure 2. Privacy audit phases.

In general, the privacy audit phases look like the phases of "regular" audits. There are a few differences, however. One of the most important differences between regular and privacy audits is the determination of the scope, which is more difficult for privacy audits. A clear view of the incoming and outgoing flows of data and the way the data is processed within and outside the organization is a good starting point for privacy-related efforts, and therefore also for scope determination. The processing register and DPIAs are other useful "anchors". Data flows and the processing register list what data is processed in which system and which part can be considered personal data. DPIAs can provide further insight into the division of responsibilities, the sensitivity of the data, applicable laws and relevant threats and vulnerabilities. Although all of the aforementioned can help, there are still a few problems to be solved. The most important of these are the existence of unstructured data and the effects of working in information supply chains.

  • Unstructured personal data is data which is not stored in dedicated information systems. Examples of this type of data are personal data stored in Word or Excel files on hard disks or in server folders used in office automation, personal data in e-mail messages in mailboxes, or personal data in physical files. Due to its unstructured character, scope determination is difficult by nature. Possible solutions for these situations can be found in tools which scan for files containing keywords that indicate personal data, like "Mr.", "Mrs." or "street" (a simplified sketch of such a scan follows this list). A more structural solution can be found in data cleansing as part of archiving routines and in the "privacy by design" and "privacy by default" aspects of system adjustments or implementations. Whereas scanning remains a point solution, archiving and system adjustments or implementations can help to find a more structural solution.
  • Working in information supply chains leads to the problem that the division of responsibilities among involved parties is not always clear. In case of outsourcing relations, processing agreements can help clarify the relationships between what can be considered the processor and the controller. Whereas the relationships in these relatively simple chains are mostly straightforward, less simple chains like in Dutch healthcare or justice systems lead to more difficult puzzles. Although some clarification can be given in the public sector due to the existence of “national key registers” (in Dutch: “Basisregisters”), most of the involved relationships can best be considered as co-processor relationships, in which there are joint responsibilities. These relationships should be clarified one by one. In addition to co-processor relationships, there are those relationships in which many processor tasks lead to what can be considered controller tasks, due to the unique collection of personal data. This situation leads to a whole new view on the scoping discussion, with accompanying challenges.
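
To make the idea of keyword-based scanning more tangible, the sketch below shows a deliberately naive Python scan of a file share for terms that may indicate personal data. The keyword list, the file types and the share path are illustrative assumptions; production data discovery tools parse many more formats (Word, Excel, PDF, mailboxes) and use far richer detection patterns.

    # Illustrative sketch only: a naive keyword scan for files that may contain
    # unstructured personal data. Keywords, file types and the root path are
    # assumptions for illustration, not a reference implementation.
    import os

    KEYWORDS = ["mr.", "mrs.", "street", "date of birth"]

    def scan_directory(root):
        hits = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                # Binary formats (Word, Excel, PDF) would need dedicated parsers.
                if not name.lower().endswith((".txt", ".csv", ".log")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        text = f.read().lower()
                except OSError:
                    continue
                found = [kw for kw in KEYWORDS if kw in text]
                if found:
                    hits.append((path, found))
        return hits

    for path, found in scan_directory("/data/shared"):  # hypothetical file share
        print(f"{path}: possible personal data indicators {found}")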

Other difficulties in performing a privacy audit arise from the Schrems II ruling. As a result of this ruling, processing of personal data of European citizens in the United States under the so-called Privacy Shield agreement is considered illegal. Since data access is also data processing, the use of US cloud providers is to be considered illegal as well. Although solutions are being specified, like new contractual clauses and data location indicators, there is no entirely privacy-compliant solution available yet. Considering that the US intelligence services are not bound by any privacy clauses, and that European citizens are not allowed on the American privacy oversight board, there is still a leak.

Testing privacy controls is not simple either. Of course there are standard privacy control frameworks, and the largest part of these frameworks consists of security and PIMS controls, with which there is a lot of testing experience. Testing controls that guard the rights of the data subjects, like the rights to be informed, to access and to rectification, is more difficult, however. This difficulty arises from the fact that these controls are not always straightforward, and testing them requires interpretation of policies and legal knowledge. These difficulties can of course be overcome by making it explicit that an audit on privacy cannot be considered a legal assessment. This disclaimer is, however, not helpful in gaining the intended trust.

To improve the chance of successfully testing controls, most privacy audits are preceded by a privacy assessment advisory engagement. These advisory engagements enable the suggestion of improvements to help organizations, whereas audits, especially those devoted to assurance, leave less room to do so.

Reports resulting from privacy audits are mainly dictated by the assurance or certification standards, as described in the preceding section. The standard and resulting report should suit the level of maturity of the object and the trust needed so that maximum effect can be reached.

Added value of privacy audits

Privacy audits lead to several benefits and provide added value. In this section the most important ones are listed.

Building or restoring confidence – Like any audit performed for an assurance or certification assignment, a privacy audit is devoted to help build or restore confidence. This is even more so if the privacy audit leads to a quality mark.

Increasing awareness – Whether an audit leads to a qualified opinion or not, any audit leads to awareness. The questions raised and evidence gathered make employees aware. Since the relevance of privacy has increased over the past years, a privacy audit can help with prioritizing the subject within the organization as the outcomes could eventually lead to necessary follow-up actions that require the engagement of several employees/departments within the organization.

Providing an independent perspective – As mentioned before, privacy is not an easy subject. Therefore, subjectivity and self-interest are common pitfalls. Auditors can help avoid risks related to these pitfalls by independently rationalizing situations.

Giving advice on better practices – Auditors are educated to give their opinion based on the latest regulations and standards. Therefore, the auditors' advice is based on better practices. Since privacy is an evolving and immature business, advising on better practices has taken a prominent role in their job and the services they provide.

Facilitating compliance discussions – Last but not least, although auditors do not give an opinion on compliance, they facilitate compliance discussions inside and outside client organizations, due to their opinion on relevant criteria and controls. In this respect, the auditor can also help in discussions with supervisory boards. Assurance, certification and quality marks are proven assets in relationships with these organizations.

Client case: Privacy audits at RDW

A good example of how privacy reporting can be helpful is provided by the privacy audits performed for RDW, the Dutch public sector agency that administers motor vehicles and driving licenses.

RDW is responsible for the licensing of vehicles and vehicle parts, supervision and enforcement, registration, information provision and issuing documents. RDW maintains the “national key registers” (“Basisregisters”) of the Dutch government with regard to license plate registration in the “Basis Kentekenregister” (BKR) and the registration of driving licenses in the “Centraal Rijbewijzenregister” (CRB). In addition, RDW is processor of on-street parking data in the “Nationaal Parkeerregister” (NPR) for many Dutch municipalities.

Since there are many interests and there is a lot of personal data being processed, RDW is keen on being transparent on privacy control. KPMG takes care of privacy audits with respect to the abovementioned key registers, BKR, CRB and NPR, as RDW’s assurance provider.

In performing these audits, the aforementioned challenges with regard to scope arise. They are dealt with by, amongst others, restricting the scope to the lawfully and contractually confirmed tasks and to the descriptions in the processing registers and PIAs. Furthermore, because RDW has a three lines of defense model, with quality control and the resulting reports as the second line, it has managed to implement the privacy controls listed in the NOREA privacy control framework.

According to RDW, privacy reports and quality marks are helpful in, for example, communication with partners in automotive and governmental information supply chains and with supervisory boards. Although there is a lot of innovation in conjunction with, for example, connected and autonomous vehicles, RDW states that it is able to manage the accompanying challenges with regard to, amongst others, privacy protection. If something unintended happens, like a data breach, RDW is in a good position to provide an explanation, supported by audit results.

Position of privacy audits in GRC & ESG

ESG measures the sustainable and ethical impact of investments in an organization based on Environmental, Social and Governance related criteria. Previous events – such as the Facebook privacy scandal, in which user data could be accessed without the explicit consent of these users ([RTLN19]) – have shown that data breaches can raise a lot of questions from investors or even result in decreasing share prices. Insufficient GRC efforts regarding data privacy could even lead to doubts about the social responsibility of an organization.

As mentioned in previous sections, there are various ways for organizations to demonstrate their compliance with data privacy regulations. The importance of presenting the way an organization is dealing with data privacy is further emphasized with the introduction of ESG, since it demands privacy to be implemented from an Environmental, Social and Governance point of view as well.

The outcomes of privacy audits can be used as a basis for one of the ESG areas. Privacy audits can provide insights into the extent to which measures are effective and offer a means to monitor privacy controls. Also, findings identified in a privacy audit can help in ESG, as they make organizations aware of the improvements they have to make to prevent such events in the future and to (re)gain the trust of all relevant stakeholders, including (potential) investors.

Conclusion and final thoughts

Although privacy audits cannot provide the ultimate answer as to whether organizations comply with all applicable data privacy regulations, they do offer added value. Therefore, the answer to the earlier question of whether privacy audits are relevant for GRC and ESG is, according to us, undoubtedly: "yes, they are!"

Using privacy audits, organizations obtain insights into the current state of affairs regarding data privacy management. The outcomes of a privacy audit can also further increase awareness within the organization, as they emphasize the shortcomings that have to be followed up or investigated by the relevant parties within the organization. Next to the benefits that the organization itself will gain from the performance of a privacy audit, it facilitates discussions with third parties and supervisory boards when it comes to demonstrating compliance with data privacy regulations, especially when the privacy audit has resulted in a report provided by an independent external privacy auditor. Another advantage of having privacy audits performed is that it lays the foundation for further ESG reporting, in which an organization can describe the measures taken to ensure data privacy and the way progress is monitored. This can substantiate why investments in the organization in question qualify as sustainable. Privacy audits remain difficult, however, since personal data cuts across the organization, constitutes a separate area of expertise and is not tangible.

Outsourcing and working in information supply chains are growing trends. These trends will offer a lot of opportunities for those who want to make a profit. To gain maximum benefit, the focus of the organizations involved should not only be on offering reliable services; they should also have a clear vision on GRC and ESG aspects. Privacy should be one of these aspects, and balanced reporting on all of the aforementioned is the challenge for the future.

Notes

  1. The NBA and NOREA are the Dutch professional bodies for financial and IT auditing, respectively.

References

[EU16] European Union (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council. Official Journal of the European Union. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&qid=1635868498095&from=EN

[EuPr21] EuroPrise (2021). EuroPrise – the European Privacy Seal for IT Products and IT-Based Services. Retrieved from: https://www.euprivacyseal.com/EPS-en/Home

[Euro21] Europrivacy (2021). Europrivacy Certification. Retrieved from: https://www.europrivacy.org

[ISO21] ISO (2021). Standards. Retrieved from: https://www.iso.org/standards.html

[Koor13] Koorn, R., & Stoof, S. (2013). IT-assurance versus IT-certificering. Compact 2013/2. Retrieved from: https://www.compact.nl/articles/it-assurance-versus-it-certificering/

[NLdi21] NLdigital (2021). Data Pro Code. Retrieved from: https://www.nldigital.nl/data-pro-code/

[NORE21] NOREA (2021). Privacy Audit Proof: Logo voor de betrouwbare verwerking van persoonsgegevens. Retrieved from: https://www.privacy-audit-proof.nl

[RTLN19] RTL Nieuws (2019, July 24). Recordboete voor Facebook van 5 miljard dollar om privacyschandaal. Retrieved from: https://www.rtlnieuws.nl/economie/bedrijven/artikel/4791371/recordboete-facebook-vijf-miljard-toezichthouder

[Zwin21] Zwinkels, S., & Koorn, R. (2021). SOC 2 assurance becomes critical for cloud & IT service providers. Compact 2021/1. Retrieved from: https://www.compact.nl/articles/soc-2-assurance-becomes-critical-for-cloud-it-service-providers/

No AI risk if you don’t use AI? Think again!

AI-related risks regularly make news headlines and have led to a number of legislative initiatives in the areas of privacy, fair and equal treatment, and fair competition. This may cause organizations to shy away from using AI technology. AI risks, as commonly understood, are however caused largely by the degree of autonomy and the increasing social impact of data processing rather than just by new algorithms. These risks should be understood holistically as threats to entire IT infrastructures rather than to individual AI components. A broad, comprehensive and ongoing AI-related risk assessment process is essential for any organization that wants to be ready for the future.

Introduction

Computers don’t always do what you want, but they do what they were instructed to do. This clearly separates the computer as an agent performing a task from a human being doing the same. Computers as components in a business process are in essence predictable: their behavior follows a design specification, and the same input will generate the same output. People, on the other hand, are the unpredictable components of a business process. In practice, they often do not fully follow instructions. They deviate from the business process specification, for bad and for good reasons. People are autonomous.

On the one hand, people are a weak point and therefore form a major risk. They may be sloppy, slow, commit frauds, extract confidential data for their own purposes, be influenced by unconscious biases, etc. On the other hand, people often take the rough edges out of a business process. People use their own common sense, see new patterns in data, spontaneously remedy injustices they see, diagnose problems in the business process, are aware of changes in society that may affect business because they follow the news, and generally generate valuable feedback for adapting and continually improving business processes. People make processes more resilient.

Blackboxness

Popularly, AI technology is positioned somewhere between humans and computers. It has, in essence, a blackboxness problem. It may have some capacity to adapt to changes in its environment. It sometimes surprises us by finding predictive patterns in data we did not see. But its design specification does not lend itself to simulation of its behavior in our mind: the relation between input and output data is discovered by the AI technology itself. It is not predictable. Not to us. And it does make mistakes that humans will never make. Mistakes that are hard to explain. Sometimes the mistakes are even hard to notice.

Because blackboxness is bad English, we will call it a complexity problem instead, keeping in mind that we do not mean an objective measure of topological complexity, but rather our inability to simulate what the technology does. AI technology is, therefore, complex.

AI-related risks regularly make news headlines, may cause significant reputation damage, and have led to a number of legislative initiatives and ethical frameworks in the areas of privacy, fair and equal treatment, and fair competition. The associated cost of introducing effective control measures may cause organizations to shy away from using AI technology, or to pick traditional, well-established techniques for data analysis over more complex and more experimental ones. We see a preference for linear regression techniques in many organizations for exactly this reason. This is not a solution. While shying away from AI technology may be a valid choice in certain circumstances, it neither addresses the inherent risks nor necessarily exempts one from special legal responsibilities.

In this article we address the origin of some of these inherent risks and the role that AI and data play in them, and we conclude that a broad, comprehensive and ongoing AI-related risk assessment process is essential for any organization that wants to be ready for the future.

Complexity essentially deals with how easy it is to simulate the behavior of a system in our mind, at the level of abstraction we care about. What we require of this simulation largely depends on our needs for explainability. For instance, a facial recognition application is, objectively from an information-theoretic perspective, more complex than a simple risk-scoring model based on socio-economic parameters. Since we usually do not wonder how we recognize faces, we tend to take its behavior at a functional level for granted, until we discover that it makes mistakes we would not make. Only then do we face a complexity problem.

Is it AI?

A first problem becomes apparent if we look at European legislative initiatives that create potentially expensive compliance requirements. There is no overarching agreement about the kinds of systems that create AI-related risks. This is not surprising because the risks are diverse and take many forms.

Let us quickly run through some examples. Art. 22 of the GDPR is already in effect and targets automated decision making using personal data, regardless of the technology used. Besides the limitation to personal data, there is a clear concern regarding the degree of autonomy of systems. The recently proposed Artificial Intelligence Act ([Euro21a]) prohibits and regulates certain functions of AI based on risk categories, instead of starting from a restrictive definition of technology. For the proposed civil liability regime for AI ([Euro20]) it is too early to tell how it will work out in practice, but it is likely to adopt a classification by function as well.

The Ethics Guidelines for Trustworthy AI ([Euro19]), on the other hand, target technology with a certain adaptive (learning) capacity, without direct reference to a risk-based classification based on function. This is a restrictive technology-based definition, but one that leaves big grey areas for those who try to apply it. The proposed Dutch guideline for government agencies ([Rijk19]) targets data-driven applications, without a functional classification and without reference to learning capacity.

This already creates a complicated scoping problem, as organizations need to determine which classifications apply to them and which do not. Beyond that, there is legislation that directly impacts AI but does not directly address it as a topic. Existing restrictions on financial risk modeling in the financial sector obviously impact AI applications that make financial predictions, regardless of the technology used. New restrictions on self-preferencing ([Euro21b]) will, for instance, impact the use of active learning technology in recommender algorithms, but they will be technology-agnostic in their approach.

AI-related risk may attach to software that you already use and never classified as AI. It may attach to descriptive analytics that you use for policy making, which you never considered to be software and have no registration for. Your first task is therefore to review what is present in the organization and whether and how it is impacted by compliance requirements related to AI. Beyond that, seemingly conflicting compliance requirements will create interpretation problems and ethical dilemmas, for instance when you have to choose between the privacy protections of the GDPR on the one hand and measurable non-discrimination as suggested by the Artificial Intelligence Act on the other, and both cannot be fully honored.

Three dimensions of AI risk

All in all, we can plot the risk profile of AI technology on three different dimensions. Although the risks take diverse forms, the underlying dimensions are usually clear. The first one is the one we already identified as complexity.

But complexity is not the major source of risk. AI risk is predominantly caused by the degree of autonomy and the increasing social impact of data processing rather than just by new algorithms. Risks are often grounded in the task to be performed, regardless of whether it is automated or not. If how well the task is executed matters significantly to stakeholders, risk always exists; this is risk based on social impact. If the automated system functions without effective oversight by human operators, it is autonomous; autonomy is the third source of risk. We also regard a system as de facto autonomous if human operators are not able to perform its function themselves, either because they cannot come to the same output based on the available input data, or because they cannot do so within a reasonable time frame.

If an automated system scores on any of these three dimensions (see Figure 1), it may carry AI-related risk when we look at it within its data ecosystem. This is not because one single dimension creates the risk, but because a source of risk on a second dimension may be found nearby in the IT infrastructure, and we need to check for that.

Figure 1. Three dimensions of AI risk.

Data ecosystems

Most AI-related risks may also surface in traditional technology as decision-making ecosystems are increasingly automated. Growing dependence on automation within whole task chains pushes human decision makers out of the loop, and the decision points at which problems could be noticed by humans in the task chain become few and far between. The risks are caused by the increasing autonomy of automated decision-making systems as human oversight is reduced. If things go wrong, they may really go wrong.

These risks should be understood holistically as threats to entire IT infrastructures rather than to individual AI components. We can take any task as our starting point (see Figure 2). When determining risk, there are basically three directions in which to search for risk factors that we need to take into account.

Upstream task dependencies

If the task uses information produced by AI technology, it is essential to gain insight into the value of the information produced by the technology and the resilience of that information source, and to take precautions if needed. The AI technology on which you depend need not be a part of your IT infrastructure. If you depend on a spam filter for instance, you risk losing important emails and you need to consider precautions.

Downstream task dependencies

If a task shares information with AI technology downstream, it is essential to understand all direct and indirect outcomes of that information sharing. Moreover, you may introduce specific risks, such as reidentification of anonymized information, or inductive bias that develops downstream from misunderstanding of the data you create, and you may be responsible for those risks.

Ecological task interdependencies

If you both take information from and share information with an AI component, fielding a simple task agent may increase the risk of being harmed by the AI component's failure or of being exploited by it. You should take strict precautions against misbehavior of AI components that interact with your IT systems in a non-cooperative setting. Interaction between agents through communication protocols may break down in unexpected ways.

Figure 2. Where does the risk come from?

Ecologies of task agents are mainly found in infrastructures where predictive models representing different parties function as task agents in a competitive setting, for instance online markets and auctions for ad targeting. A systemic risk in such settings is that predictive models may cause a flash crash or collude to limit open competition. Fielding a simple technological solution in a setting like that is usually not better than fielding a smart one from a risk point of view.

When we think about computers, we usually equate information with data, but make sure to keep an eye on information that is shared in ways other than data. If a computer opens doors, the opening of the door is observable to third parties and carries information about the functioning of the computer. If you open doors based on facial recognition, discrimination is going to be measurable purely by observation.

Data is not history

Nearly all avoidable failures to successfully apply AI-based learning from data find their origin either in inductive bias (systematic error caused by the data used to train or test the system) or in underspecification (mainly caused by not carefully thinking through what you want the system to do) ([Amou20]). Besides that, there are unavoidable failures when the relationship between the input data and the desired output simply does not exist; these are mainly caused by uncritical enthusiasm for AI and Big Data.

If you are a Data Scientist, it is easy to jump to the conclusion that biases in models are merely a reflection of biased ways of working in the past because historical data is used. That conclusion is, however, too simple and conflates the meaning of information and data. Not all information is stored as data, and not all data that is stored was used as information for decision-making in the past.

The information we use to make decisions is changing, and even without AI technology this creates new risks. When we remove humans from decision making, we lose information that was never turned into data. Decisions are no longer based on information gleaned from conversations in face-to-face interactions between the decision maker and stakeholders. Even if we train models on historical data we may miss patterns in information that was implicitly present when that historical decision was taken.

Big data

At the same time, we are also tapping into fundamentally new sources of information and trying to make predictions based on them. Data sharing between organizations has become more prevalent, and various kinds of data traces previously unavailable are increasingly mined for new predictive patterns. It is easy to make mistakes:

  • Wrongly assuming predictive patterns are invariant over time and precisely characterize the task, and will (therefore) reliably generalize from training and testing to operational use ([Lipt18]).
  • Overlooking or misinterpreting the origin of inductive biases in task dependencies, leading to an unfounded belief in predictive patterns.

Inductive bias may lead to discrimination against protected groups, besides other performance-related problems. To properly label measurable inequalities ([Verm18]) as discrimination, you have to understand the underlying causal mechanisms and the level of control you have over them. A lack of diversity in the workplace may, for instance, be directly traceable to the output of the education system. As a company you can only solve that lack of diversity at the expense of your competitors on the job market.
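
To make "measurable inequality" concrete, the sketch below computes two common indicators from the fairness definitions surveyed in [Verm18]: the selection-rate difference and the disparate-impact ratio between two groups. The data and group labels are purely hypothetical and only illustrate how such a measurement could be set up; it is a minimal sketch, not a complete fairness assessment.

# Minimal sketch (hypothetical data): quantifying measurable inequality in
# the outcomes of an automated selection step, per protected group.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs; returns the selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, is_selected in decisions:
        totals[group] += 1
        selected[group] += int(is_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical outcomes of an automated risk filter for two groups A and B
decisions = [("A", True)] * 40 + [("A", False)] * 60 + [("B", True)] * 25 + [("B", False)] * 75
rates = selection_rates(decisions)
parity_gap = abs(rates["A"] - rates["B"])                 # demographic-parity difference
impact_ratio = min(rates.values()) / max(rates.values())  # disparate-impact ratio
print(rates, parity_gap, round(impact_ratio, 2))

Whether such a gap actually constitutes discrimination still requires the causal analysis described above; the numbers only make the inequality visible.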

Simple rules

Big Data is not just used by AI technology. Insights from Big Data may end up in automated decision-making indirectly, through new business rules formulated by policymakers based on a belief in patterns deduced from descriptive statistics on large sets of data with low information density. In essence we are doing the same thing as the machine learning algorithm, with one big difference: there is a human in the loop who confirms that the pattern is valid and may be used operationally. The statistical pattern is translated into a simple rule as part of a simple and predictable form of automation. Does it therefore carry no AI risks? In reality we run the same data-related risks as before: our simple rule may turn out to be less invariant than we thought, and it may be grounded in inductive biases that we overlooked.

AI as a mitigator of risk

The use of AI technology instead of something else could add to existing risk, but it might mitigate existing risks too. One important business case for predictive models is risk scoring, which differentiates between high-risk and low-risk cases to determine whether they may be processed automatically by a fragile rule-based system or should be handled by a human decision maker. Another important application of AI technology is detecting changes in the input patterns of other systems, to make sure warning bells start ringing when a sudden change is detected. In these cases, the application of AI technology is the risk mitigation measure. It is unfortunate if these options are discarded because AI technology is perceived as too risky.

Risk scoring models are increasingly used in government, insurance and the financial sector. These models essentially work as a filter for the rule-based system, which, because of its relative simplicity, is vulnerable to being gamed. The application of AI technology is intended to reduce that risk. KPMG Trusted Analytics has looked at risk-mitigating measures taken at some government agencies to protect risk scoring models against biases. Any shortcomings we found thus far relate to the whole business process of which the risk scoring model is a part. The model itself hardly adds to the risk. Simple human-made selection rules used in those same processes were, in our view, considerably riskier.
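
As a hedged illustration of this filtering pattern, the sketch below routes low-risk cases to straight-through rule-based processing and sends high-risk cases to a human reviewer. The scoring function and threshold are placeholders and do not represent any particular agency's model.

# Minimal sketch (placeholder model and threshold): a risk score acting as a
# filter in front of a simple, and therefore gameable, rule-based system.
def risk_score(case: dict) -> float:
    # Placeholder: in practice this would be a trained predictive model.
    return 0.9 if case.get("anomalous_history") else 0.1

def route(case: dict, threshold: float = 0.5) -> str:
    """High-risk cases go to a human decision maker; the rest are processed automatically."""
    return "human_review" if risk_score(case) >= threshold else "automated_rules"

print(route({"anomalous_history": True}))   # -> human_review
print(route({"anomalous_history": False}))  # -> automated_rules

The risk assessment should then cover the whole process around this filter (thresholds, overrides, feedback loops), not just the scoring model itself.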

A broad perspective on AI risk

While AI-related compliance responsibilities may focus on the technology itself, insight into risk requires looking at the environment in which the technology is fielded. Few risks are inherent in the technology itself. To determine your risk profile in terms of autonomy and social impact, it is necessary to look at the whole business process and its business value to the organization and other stakeholders.

Besides that, understanding data lineage is of critical importance. In a modern data-driven organization, the same type of data may be used and produced by various applications, and the same application may be used for different purposes in different activities. This complexity can be managed to some extent by clearly splitting accountability for uses of data between data management teams, application development teams, and business users.

Responsibility for understanding the environment you work in does not stop at the boundaries of the organization, however. Third-party sourcing plays a key role, just like understanding your performance in competitive settings. In certain cases, setting up network arrangements or trusted third parties for keeping control over AI risk may turn out to be a solution for preventing unnecessary duplication of work.

Best practices regarding the privacy impact assessment (PIA) may be used as an analogy for a comprehensive AI risk assessment. In practice, many data-driven organizations have organized privacy impact assessments regarding:

  • datasets,
  • data-driven applications, and
  • data-processing activities.

This way of working reflects an important insight about data ethics. Ethical principles about the use of personal data usually relate to either:

  • reasons for collecting and storing data about people, and arrangements for providing information about, modifying and deleting that data,
  • reasons for making such data available for use by an application, and the privacy safeguards built into that application, or
  • specific purposes that such an application is put to in data-processing activities throughout the organization, and process-based privacy safeguards in those environments.

The relation between personal data and the uses to which it is put may therefore be complex and hard to trace. As noted above, this complexity is managed by splitting accountability for the data between data management teams, application development teams, and business users.

Conclusion

A broad, comprehensive and ongoing AI-related risk assessment process is essential for data-driven organizations that want to be ready for the future, regardless of whether they aim to use AI. Local absence of AI technology does not absolve you from responsibilities for AI-related risk. The big question is how to organize this ongoing risk assessment process. One element of the solution is organizing accountability for uses of data between data management teams, application development teams, and business users. Another common element may be the formation of network arrangements with other parties to reduce the cost of control. An element that is always needed, and one that the KPMG Trusted Analytics team aims to provide for its customers, is a long list of known AI-related risk factors, and another long list of associated controls that can be used to address those risks from a variety of perspectives within an organization or a network of organizations. The first step for an organization is taking the strategic decision to take a good look at what its AI-related risks are and where they come from.

References

[Amou20] D’Amour, A., Heller, K., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., … & Sculley, D. (2020). Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395.

[Angw16] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[Eise11] Eisen, M. (2011, April 22). Amazon’s $23,698,655.93 book about flies. Retrieved from: https://www.michaeleisen.org/blog/?p=358

[Euro19] European Commission (2019, April 8). Ethics guidelines for trustworthy AI. Retrieved from: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

[Euro20] European Parliament (2020, October 20). Recommendations to the Commission on a civil liability regime for artificial intelligence. Retrieved from: https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html

[Euro21a] European Commission (2021, March 17). Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Retrieved from: https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence-artificial-intelligence

[Euro21b] European Commission (2021). The Digital Services Act Package. Retrieved 2021, May 10, from: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

[Feis21] Feis, A. (2021, April 11). Google’s ‘Project Bernanke’ gave titan unfair ad-buying edge, lawsuit claims. New York Post. Retrieved from: https://nypost.com/2021/04/11/googles-project-bernanke-gave-titan-unfair-ad-buying-edge-lawsuit/

[Geig21] Geiger, G. (2021, January 5). Court Rules Deliveroo Used ‘Discriminatory’ Algorithm. Vice. Retrieved from: https://www.vice.com/en/article/7k9e4e/court-rules-deliveroo-used-discriminatory-algorithm

[Lipt18] Lipton, Z. C. (2018). The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31-57.

[Quac20] Quach, K. (2020, July 1). MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs. The Register. Retrieved from: https://www.theregister.com/2020/07/01/mit_dataset_removed/

[Rijk19] Rijksoverheid (2019, October 8). Richtlijnen voor het toepassen van algoritmes door overheden. Retrieved from: https://www.rijksoverheid.nl/documenten/rapporten/2019/10/08/tk-bijlage-over-waarborgen-tegen-risico-s-van-data-analyses-door-de-overheid

[Verm18] Verma, S., & Rubin, J. (2018, May). Fairness definitions explained. In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare) (pp. 1-7). IEEE.

Handling data transfers in a changing landscape

Although privacy has long been a discussion point within technology, its role in the use of Cloud services has not always demanded close attention. This changed in 2020, when the Schrems II ruling invalidated Privacy Shield. As a result, companies who relied on Privacy Shield for data transfers to the U.S., including the use of Cloud services, are now non-compliant and must take action. In this article, we will take a closer look at the impact of the ruling, and steps that organizations can take to manage the consequences.

Introduction

Organizations that are based in the EU/EEA and that exchange data with companies outside of the EU/EEA have to meet new EU requirements: revising contracts, performing additional jurisdiction analyses and implementing measures to mitigate the gaps.

What?

Stricter requirements for companies engaging in data exchanges with third parties or recipients outside of the EU/EEA, following from the Schrems II judgement.

Impact

Contract revisions and remediating actions are required.

Timeline

The ruling of the Court of Justice of the European Union (CJEU) took place on 16 July 2020, invalidating Privacy Shield with immediate, and retroactive, effect.

Fines

Non-compliance with these requirements constitutes non-compliance with the GDPR, so fines of up to 4% of annual worldwide turnover or 20 million euros, whichever is higher, are possible.

Scope

EU-US data transfers (including access to data) which were reliant on Privacy Shield as their transfer mechanism.

In the Schrems II case of July 2020, the European Court of Justice ruled that the Privacy Shield is no longer a valid means of transferring personal data to the U.S. The major players in the cloud services domain, like Amazon, Microsoft, Google and IBM, are, however, based in the U.S. In most cases it is not a realistic option to look for alternative cloud services outside of the U.S. That does not mean it ends there. For example, it is important to consider the level of encryption and the existence of model contracts. In this article we have gathered important considerations that every organization should take into account when using a US-based cloud provider where data is transferred to or accessed from the US.

A brief overview of the context

What was the Privacy Shield?

In some countries outside the European Union (EU) there are no or less stringent privacy laws and regulations in comparison to those of the EU. In order to enable the same level of protection for EU citizens, the General Data Protection Regulation (GDPR) rules that personal data cannot be transferred to persons or organizations outside of the EU, for example the US, unless there are adequate measures in place. In this manner, the GDPR ensures that personal data of EU citizens are also protected outside the EU. Organizations can only transfer personal data outside of the EU to so-called ‘third countries’ when there is an adequate level of protection, comparable to that of the EU.

The US does not offer a comparable level of protection, because there is no general privacy law. Because organizations in the EU transfer personal data on a large scale and on a daily basis to the US, a new data treaty was adopted in 2016 – the Privacy Shield (successor of Safe Harbour). Under the Privacy Shield, US-based organizations could certify themselves, claiming they complied with all privacy requirements deriving from GDPR.

What happened in the Schrems II case?

The Schrems II case owes its name to Max Schrems, an Austrian lawyer and privacy activist who brought the case forward. He was already known from the Schrems I case in 2015, in which the European Court of Justice declared that Safe Harbour (the predecessor of Privacy Shield) was no longer valid. The same fate has now hit the Privacy Shield.

In the Schrems II case, Max Schrems filed a complaint against Facebook Ireland (EU), because they transferred his personal data to servers of Facebook Inc., which are located in the US. Facebook transferred this data on the basis of the Privacy Shield. Schrems’ complaint was, however, that the Privacy Shield offered insufficient protection. According to American law, Facebook Inc. is obliged to make personal data from the EU available to the American authorities, such as the National Security Agency (NSA) and the Federal Bureau of Investigation (FBI). 

In the Schrems II case, the Court investigated the level of protection in the US. Important criteria are the existence of 'adequate safeguards' and whether the privacy rights of EU citizens are 'effective and enforceable'. The Court concluded that under American law it cannot be prevented that intelligence agencies use personal data of EU citizens, even when this is not strictly necessary. The only legal safeguard that the US offers is that intelligence activities need to be 'as tailored as feasible'. The Court ruled that the US is processing personal data of EU citizens on a large scale without offering an adequate level of protection. The Court also ruled that European citizens do not have the same legal access as American citizens: the activities of the NSA are not subject to judicial supervision, and there is no means of appeal. The Privacy Shield ombudsperson for EU citizens is not a court and does not offer adequate enforceable protection. In short: Privacy Shield is now invalid.

This ruling has far-reaching consequences, given that a large number of EU-based companies using cloud providers use a US-based provider. It is important to note that the liability rests on the organization that "owns" the data and exports it, not on the cloud provider. Therefore, it is critical that measures are taken so that running business as usual is not jeopardized. There are a number of steps that organizations can take to minimize the impact of this ruling and ensure continued compliance with the GDPR. We have outlined these for you, to help you on your cloud compliance journey.

Working towards privacy conscious Cloud Compliance

Changing from US-based cloud providers to EU-based ones will in many cases not be desirable or feasible, even though it would be the most compliant approach for handling EU data in the cloud post-Schrems II. Thankfully, there are alternatives. There are three key elements to consider when beginning the journey towards compliance:

  • Data mapping – understanding where data transfers exist within the organization
  • Contractual measures – using legal instruments in managing transfers with third parties
  • Supplementary measures – reducing risks through enhanced protection

Each of these items is explored in greater depth in the following sections, bringing together recommendations from the European Data Protection Board, and best practices.

Figure 1. Do not wait to take action; start taking steps towards remediation.

1. Know thy transfers – data mapping is key

It is a bit of a no-brainer, although no less crucial: the first step is knowing to which locations your data is transferred. It is essential to be aware of where the personal data goes, in order to ensure that an equivalent level of protection is afforded wherever it is processed. However, mapping all transfers of personal data to third countries can be a difficult exercise. A good starting point would be to use the record of processing activities, which organizations are already obliged to maintain under the GDPR. There are also dedicated software vendors in the market, such as OneTrust, RSA Archer and MetricStream, that have proven to be very helpful in gathering all this (decentralized) information. Keep in mind that, next to storage in a cloud situated outside the EEA, remote access from a third country (for example in support situations) is also considered a transfer. More specifically, if you are using an international cloud infrastructure, you must assess if your data will be transferred to third countries and where, unless the cloud provider clearly states in its contract that the data will not be processed at all in third countries. The following step is verifying that the data you transfer is adequate, relevant and limited to what is necessary in relation to the purposes for which it is transferred.

2. What about standard contractual clauses?

Once you have a list of all transfers to a third country, the next step is to verify the transfer tool, as listed in Chapter V of the GDPR, on which your transfers rely. In this article, we will not elaborate on all the transfer tools. We will instead focus on what is relevant for the use of cloud services in the US. That means that we assume that the transfers qualify as 'regular and repetitive', occurring at frequent and recurring intervals, e.g. through direct access to a database. Therefore, no use can be made of the exception for 'occasional and non-repetitive transfers', which would only cover transfers taking place outside the regular course of business and under unknown circumstances, such as an emergency.

An option that exists for internal transfers within your organization is to incorporate Binding Corporate Rules. However, most organizations have their cloud services outsourced, and therefore the most logical transfer tool to address in this article is that of standard contractual clauses (SCCs), also sometimes referred to as model contracts. SCCs, however, do not operate in a vacuum. In its Schrems II ruling, the Court reiterates that organizations are responsible for verifying on a case-by-case basis whether the law or practice of the third country impinges on the effectiveness of the appropriate safeguards.

Relevant factors to consider in this regard are:

  • the purposes for which the data are transferred;
  • the type of entities involved (public/private; controller/processor);
  • the sector (e.g. telecommunication, financial);
  • the categories of personal data transferred;
  • whether the data will be stored in the third country or only remotely accessed; and
  • the format (plain text, pseudonymized and/or encrypted).

Lastly, you will need to assess if the applicable laws impinge on the commitments contained in the SCCs. Because of Schrems II, it is likely that U.S. law impinges on the effectiveness of the appropriate safeguards in the SCCs. Does that mean it ends there, and we cannot make use of US-based cloud services anymore? It does not. In those cases, the Court leaves open the possibility to implement supplementary measures in addition to the SCCs that fill the gaps in protection and bring it up to the level required by EU law. In the next section we uncover what this entails in practice.

3. Supplementary measures

In its recommendations 01/2020, the European Data Protection Board (EDPB) included a non-exhaustive list of examples of supplementary measures, including the conditions they would require to be effective. The measures are aimed at reducing the risk that public authorities in third countries endeavor to access transferred data, either in transit by accessing the lines of communication used to convey the data to the recipient country, or while in custody by an intended recipient of the data. These supplementary measures can have a contractual, technical or organizational nature. Combining diverse measures in a way that they support and build on each other can enhance the level of protection. However, combining contractual and organizational measures alone will generally not overcome access to personal data by public authorities of the third country. Therefore, it can happen that only technical measures are effective in preventing such access. In these instances, the contractual and/or organizational measures are complementary, for example by creating obstacles for attempts from public authorities to access data in a manner not compliant with EU standards. We will highlight two technical supplementary measures you may want to consider.

Technical measure: using strong encryption

If your organization uses a hosting service provider in a third country like the US to store personal data, this should be done using strong encryption before transmission. This means that the encryption algorithm and its parameterization (e.g., key length, operating mode, if applicable) conform to the state-of-the-art and can be considered robust against cryptanalysis performed by the public authorities in the recipient country taking into account the resources and technical capabilities (e.g., computing power for brute-force attacks) available to them. Next, the strength of the encryption should take into account the specific time period during which the confidentiality of the encrypted personal data must be preserved. It is advised to have the algorithm verified, for example by certification. Also, the keys should be reliably managed (generated, administered, stored, if relevant, linked to the identity). Lastly, it is advised that the keys are retained solely under the control of an entity within the EEA. The main US-based cloud providers like Amazon Web Services, IBM Cloud Services, Google Cloud Platform and Microsoft Cloud Services will most likely comply with the strong encryption rules.
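
As a purely illustrative sketch of what "encrypt before transmission, keep the key in the EEA" can look like in practice, the snippet below encrypts a file locally with AES-GCM using the Python cryptography package before it is handed to a third-country hosting provider. The file names are hypothetical, and real deployments would add key management (an EEA-controlled KMS or HSM, rotation, certified algorithms) that is out of scope here.

# Minimal sketch: encrypt personal data locally before transferring it to a
# hosting provider in a third country; the key never leaves EEA control.
# Requires the "cryptography" package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice: generate and store in an EEA-controlled KMS/HSM
aesgcm = AESGCM(key)

with open("customer_export.csv", "rb") as f:   # hypothetical export file
    plaintext = f.read()

nonce = os.urandom(12)                         # unique per encryption operation
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only the nonce and ciphertext are uploaded; the key stays with the EEA data exporter.
with open("customer_export.csv.enc", "wb") as f:
    f.write(nonce + ciphertext)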

Technical measure: transferring pseudonymized data

Another measure is pseudonymizing data before transfer to the US. This measure is effective under the following circumstances: firstly, the personal data must be processed in such a manner that the personal data can no longer be attributed to a specific data subject, nor be used to single out the data subject in a larger group, without the use of additional information. Secondly, that additional information is held exclusively by the data exporter and kept separately in the EEA. Thirdly, disclosure or unauthorized use of that additional information is prevented by appropriate technical and organizational safeguards, and it is ensured that the data exporter retains sole control of the algorithm or repository that enables re-identification using the additional information. Lastly, by means of a thorough analysis of the data in question – taking into account any information that the public authorities of the recipient country may possess – the controller established that the pseudonymized personal data cannot be attributed to an identified or identifiable natural person even if cross-referenced with such information.
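
A hedged sketch of the pseudonymization conditions described above: direct identifiers are replaced by keyed pseudonyms (here an HMAC) before transfer, while the secret key, which is the "additional information" enabling re-identification, remains solely with the data exporter in the EEA. The field names and key handling are assumptions made for the example.

# Minimal sketch (hypothetical record layout): pseudonymize identifiers before
# transfer; the HMAC key must remain exclusively with the EEA data exporter.
import hmac
import hashlib

SECRET_KEY = b"kept-only-in-the-EEA"  # hypothetical; store in an EEA-controlled secret manager

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "NL-123456", "email": "jan@example.com", "contract_value": 25000}
export_record = {
    "customer_ref": pseudonymize(record["customer_id"]),  # pseudonym travels to the third country
    "contract_value": record["contract_value"],           # non-identifying attributes may remain
}
print(export_record)

Whether this is sufficient still depends on the analysis described above: the exporter must establish that the pseudonymized data cannot be attributed to a person even when cross-referenced with information the recipient country's authorities may possess.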

Conclusion

In summary, it is important to begin remediation action in light of Schrems II. Good hygiene is important, so start with data mapping and knowing in which processing activities the transfers to third countries happen. Next, assess on which transfer tool (e.g. Privacy Shield) these international transfers are based. For now, SCCs appear to be the way forward when transferring to the US, supported by technical and organizational supplementary measures. To determine which supplementary measures to apply, you should assess the risk of each transfer through a Transfer Impact Assessment, based on at least the following criteria:

  • Format of the data to be transferred (plain text/pseudonymized or encrypted);
  • Nature of the data;
  • Length and complexity of data processing workflow, number of actors involved in the processing, and the relationship between them;
  • Possibility that the data may be subject to onward transfers, within the same third country or outside.

Based on this risk, decide which supplementary technical, contractual and organizational measures are appropriate. Make sure you work together with your legal and privacy departments throughout the process. Do not wait to take action. Schrems II took immediate effect, and non-compliance as a data exporter (i.e. the party contracting the cloud provider) has the potential for high financial and reputational damage.

References

[AWP17] Article 29 Data Protection Working Party (2017). Adequacy Referential. Retrieved from: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=614108

[ECJ20] European Court of Justice (2020). Data Protection Commissioner v Facebook Ireland and Maximillian Schrems. Case C-311/18. Retrieved from: http://curia.europa.eu/juris/document/document.jsf;jsessionid=6CD30D2590A68BE18984F3C86A55271E?text=&docid=228677&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=11656651

[EDPB20a] European Data Protection Board (2020). Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. Retrieved from: https://edpb.europa.eu/sites/edpb/files/consultation/edpb_recommendations_202001_supplementarymeasurestransferstools_en.pdf

[EDPB20b] European Data Protection Board (2020). Recommendations 02/2020 on the European Essential Guarantees for surveillance measures. Retrieved from: https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_recommendations_202002_europeanessentialguaranteessurveillance_en.pdf

[EuPa16] European Parliament and Council of European Union (2016). Regulation (EU) 2016/679. Retrieved from: https://eur-lex.europa.eu/eli/reg/2016/679/oj

Cross-system segregation of duties analysis in a complex IT landscape

This article explains the importance of access controls and segregation of duties in complex IT landscapes and elaborates on performing segregation of duties (SoD) analyses across multiple application systems. Practical tips for performing SoD analyses are outlined based on the lessons learned from a SoD project at a multinational financial services company. In this project, the Sofy Access Control platform solution was implemented to automate the SoD analysis and to overcome the challenges with SoD conflicts in an effective manner.

The importance of access controls and segregation of duties

In a world where (digital) knowledge is power and the vast majority of all businesses work digitally to a large extent, security is an important element of the IT environment. Given that IT is more connected than ever, forming digital platforms, the need for a holistic security view over multiple platforms grows. Within security, the domain of access controls is tasked with the management of permissions, determining who can do what in an IT system. Setting the right level of permissions within a system is always a balancing act. If you set permissions too narrow, the system will become unworkable, but if you set permissions too broad, there will be an increased risk of security breaches. With employees switching functions, adding or dropping responsibilities and corresponding functions in applications, access management should be seen as an ongoing process.

Typically, the domain of access management consists of multiple safeguards or controls within the processes, to ensure that the permissions handed out remain within boundaries that keep the system workable while preventing security issues. The most common safeguards are the following:

  1. User management procedures
  2. Authorization (concept) reviews
  3. Segregation of Duties (SoD) monitoring

Of these areas, SoD monitoring is considered the most challenging, for a number of reasons. Firstly, applications and their permission structures can be complex. Depending on the (type of) system, permissions can be determined and granted either in a structured way, for example through roles or profiles with (multiple layers of) underlying menus, functions, permissions or privileges, or in a less structured way, by assigning individual permissions to a user. An example of an application with multiple levels is displayed in Figure 1.

Figure 1. Example of layered access levels (based on Microsoft Dynamics).

Secondly, combinations of assigned permissions or roles within the application need to be taken into account. It is insufficient to review the structure and the individual assignments to a user (e.g. a user has a role and can therefore execute a specific task in the application), as this will not detect the ability of a user to perform a task by means of a combination of permissions stemming from multiple roles.

Lastly, depending heavily on the context, SoD conflicts might be inevitable. Whether it is an employee who needs a (temporary) backup colleague or a team that is simply too small to be split up for performing multiple tasks, sometimes it is just undesirable from a business efficiency perspective to enforce SoDs at the permission level in the application.

Businesses that strive to implement solid SoD monitoring and overcome the challenges, should take a structured approach. KPMG has developed a standardized method to put SoD monitoring and follow-up in place. This is a generic approach, not focused on specific technologies or (ERP) applications. The method consists of six steps that are required for control with respect to SoDs (see Figure 2):

  1. Risk Identification. Identify risks and the related SoD rules in business processes.
  2. Technical Translation. Translate critical tasks into technical definitions of user permissions based on data extracted from the applicable application.
  3. Risk Analysis. Use the data from your application(s) to analyze if users have possible combinations of critical tasks that are not allowed. These are called SoD conflicts.
  4. Risk Remediation. Remediate the risks and fix the SoD conflicts by changing or revoking access rights from users.
  5. Risk Mitigation. In case remediation is not possible or undesirable, mitigate the risks by implementing (automated) controls.
  6. Continuous Compliance. Implement measures and tools to structurally monitor SoD conflicts, follow up on conflicts and demonstrate compliance to business owners, regulators and other stakeholders.

Later (see the section “Ten lessons learned for cross-system SoD monitoring”), we will elaborate on this method with practical examples of how these steps were used in the SoD project at a global financial services company which is primarily focused on leasing products.

Figure 2. The SoD Monitoring Model.

The need for cross-system SoD analysis

Stemming from a period in which organizations had only one main application system supporting all key processes, the typical SoD analysis is focused on one single application. This would usually be either the Enterprise Resource Planning (ERP) system or the financial or General Ledger (GL) system. Recent trends, such as the movement of applications towards the cloud, digitization and platform thinking, put an emphasis on the (inter-)connectivity of applications within the IT environment.

As a result, more and more organizations are abandoning the idea of one single application in which all activities are performed, reintroducing point solutions.

Figure 3 shows a schematic image of business processes within a single ERP application, in comparison to the scattered landscape of a company that uses multiple applications, shown in Figure 4.

Figure 3. Overview of an ERP system: example of a company using the modules within a single SAP ERP system.

Figure 4. Overview of a scattered landscape: example of a financial services company using separate systems for sets of processes.

When an organization decides to spread its processes over multiple applications, access management will, as a consequence, have to be maintained for and synchronized across all of these applications. Correspondingly, the risk function will have to follow suit and make sure that the controls to prevent security breaches or SoD conflicts are in place for multiple applications. As such, SoDs should be monitored across multiple applications.

A specific example that is considered crucial is access to the banking or payment environment of an organization. Often fed by either the GL or ERP system, payment orders are sent to the bank for further approval and execution. A good illustration of the criticality of cross-system SoD conflicts is the situation in which the GL or ERP system defines which parties should be paid what amount, while the actual payment (or its approval) is performed within the banking application. Imagine a user being able to administer bank accounts in the GL or ERP system and having the ability to approve payments within the banking system, enabling personal enrichment. In conclusion, as organizations have multiple applications within their IT environment covering their main business processes, user and access management should also be performed with a holistic view, overseeing all relevant applications.
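
To make the cross-system check concrete, the hedged sketch below combines per-application assignments of critical tasks, maps application accounts to one employee ID, and flags employees whose combined tasks violate a conflict rule such as the bank-account/payment example above. The data model and names are invented for illustration and do not reflect any particular tool.

# Minimal sketch (hypothetical data): cross-system SoD conflict detection.
from collections import defaultdict

# Which application account belongs to which employee.
account_to_employee = {
    ("ERP", "jdoe"): "E1001",
    ("BANKING", "j.doe"): "E1001",
    ("ERP", "asmith"): "E1002",
}

# Per application account: which critical tasks it can perform (the "technical translation").
account_tasks = {
    ("ERP", "jdoe"): {"maintain_bank_accounts"},
    ("BANKING", "j.doe"): {"approve_payments"},
    ("ERP", "asmith"): {"maintain_bank_accounts"},
}

# SoD policy: combinations of critical tasks that one employee may not hold.
sod_rules = [("maintain_bank_accounts", "approve_payments")]

# Aggregate tasks per employee across all applications, then evaluate the rules.
employee_tasks = defaultdict(set)
for account, tasks in account_tasks.items():
    employee_tasks[account_to_employee[account]] |= tasks

conflicts = [
    (employee, rule)
    for employee, tasks in employee_tasks.items()
    for rule in sod_rules
    if set(rule) <= tasks
]
print(conflicts)  # -> [('E1001', ('maintain_bank_accounts', 'approve_payments'))]

The single-application SoD analysis is simply the special case in which all accounts belong to one application; the cross-system view only adds the account-to-employee mapping.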

Ten lessons learned for cross-system SoD monitoring

Introduction to the financial services company case

The financial services company (hereafter: the client) has identified challenges with respect to user access, user rights and Segregation of Duties (SoDs). Resolving the SoD issues has proven complex due to several root causes, such as unclear roles and responsibilities, knowledge gaps and limitations to application information. A more centrally coordinated approach – coupled with local (business) responsibility and accountability – was required to successfully resolve these SoD issues and to design and implement a process to prevent similar issues going forward. The client embarked on a project – in cooperation with KPMG – in which it analyzed and followed up on possible cross-system SoD conflicts. The KPMG platform SOFY was implemented to support this project and is still being used as a tool to continuously demonstrate compliance. More than 40 applications used in more than 15 countries (local offices) were onboarded onto the platform, through which the client was able to measure, analyze and mitigate cross-system SoD conflicts.

1 Starting with a well-defined risk-based policy

As a starting point for the SoD analysis, it is important that the content of the SoD policy is carefully drafted. The client has developed a "Global Policy on Segregation of Duties" covering mandatory SoD principles. The SoD principles are applicable to the client's core transactional systems covering the core lease initiation and contract management processes. These primarily relate to the front-office, back-office, general ledger and pay-out/e-banking systems. An example of such a principle is: "The person that activates contracts cannot be involved in payment activities."

As a minimum, the policy should describe the combinations of critical activities and application functionality that are not allowed, and the related risks to be avoided (see Figure 5). Where discussions on authorizations between business and IT might become complex, confusing or overwhelming, a sharply defined policy helps to easily conclude whether these combinations are accepted or lead to a SoD conflict and, if so, which risks should be mitigated.

Figure 5. Example of a SoD policy configured in SOFY, including a set of SoD rules and related risks.

2 Defining critical activities using raw data and low level of detail

In order to start the SoD analysis, the SoD policy has to be translated into local permissions for each application in scope of the analysis. It is important to have the correct starting point for this translation: an overview of the local permissions of the application. It is recommended to use raw data and the lowest level of detail at which authorizations are configured, as this can be beneficial to prove completeness to other stakeholders (such as external auditors). In the client's project, raw data was used (e.g. unedited dumps without any filtering, restrictions or other logic), so that any filtering and logic are applied within the analysis itself. IT staff, key users and external IT partners/vendors were involved to determine the relevant tables and to extract them.

The reason to define critical activities at the lowest level of detail at which they are configured within the application is to make the analysis more robust (see Figure 6). Take a sample application in which authorizations are handed out at screen/menu level as the lowest level of detail: if the menu needed to execute a task is later removed from a role, a definition at role level will no longer be valid, whereas a definition at menu level (combined with a derivation of the authorization structure from role down to screen level) will be updated automatically.

Figure 6. Critical activity definitions as defined in SOFY. The "access levels" refer to the actual permissions required to perform the activity.
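
A hedged sketch of that derivation: critical activities are defined at menu level, and the effective user-to-menu permissions are derived from the extracted user-to-role and role-to-menu tables, so the analysis keeps working when role contents change. The table layout is invented for illustration.

# Minimal sketch (hypothetical extracts): derive user -> lowest-level permission
# from the layered authorization structure (user -> role -> menu).
user_roles = {"jdoe": {"AP_CLERK", "TREASURY"}, "asmith": {"AP_CLERK"}}
role_menus = {
    "AP_CLERK": {"MENU_ENTER_INVOICE", "MENU_MAINTAIN_VENDOR"},
    "TREASURY": {"MENU_MAINTAIN_BANK_ACCOUNT"},
}

# Critical activities are defined at menu level, not at role level.
critical_activities = {"maintain_bank_accounts": {"MENU_MAINTAIN_BANK_ACCOUNT"}}

def user_menus(user: str) -> set:
    """Effective menu-level permissions, derived from the role structure."""
    roles = user_roles.get(user, set())
    return set().union(*(role_menus[r] for r in roles)) if roles else set()

def can_perform(user: str, activity: str) -> bool:
    return bool(critical_activities[activity] & user_menus(user))

print(can_perform("jdoe", "maintain_bank_accounts"))    # -> True
print(can_perform("asmith", "maintain_bank_accounts"))  # -> False

If the TREASURY role later loses the bank account menu, the derived result changes automatically, without the critical activity definition having to be redefined.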

3 Setting up the analysis through intensive collaboration with both business and IT

When using raw data extracts as described previously, there is a risk that the key user will not recognize the technical names of the menus (or other levels of permissions) when translating critical activities into technical names in the application. In order to prevent that from happening, it is recommended to organize sessions with both IT and the business (primarily key users). During these sessions in the SoD project at the client, the business provided (practical) input by showing how tasks are performed within the application, whereas IT assisted in making the translation to the underlying technical permissions and extracting that data. We have called this activity the "technical translation", shown as step 2 of the SoD Monitoring Model discussed earlier (see Figure 2). Walkthroughs have proven to be an effective and efficient way of discovering all tasks that can be performed within an application. Alternatively, which users can execute which tasks can be determined by looking at historical transactional data (and, depending on the information stored, which permissions they used).

The outcome of this translation should be a conclusion at the same level for each application (e.g. user X is able to execute task 1). Even though the underlying permission structure can be different for each application, it is important to conclude at the same level as input for the SoD analysis itself. In that analysis, the combinations of critical tasks at user level are calculated and reported as a "hit" when the SoD policy defines those tasks as conflicting.

4 Linking users within an application to an employee ID

In order to be able to analyze permissions of the same employee over multiple applications, it needs to be identified which user accounts within the applications belong to the same employee. Within the client's company, some employees were linked to different usernames over multiple applications due to application-specific constraints or naming conventions. As such, it is recommended to determine a unique employee ID (e.g. personnel number) and link each of the application users to that employee ID. Linking these accounts is preferably automated, in order to prevent manual, repetitious and error-prone efforts (see Figure 7). Prerequisites for automation are a logical naming convention and sufficient user details (e.g. e-mail address or personnel number) stored within the application to create the links.

It is recommended to periodically review the list of users for whom automated linking to the employee ID could not be performed, as complete linking is a prerequisite for a cross-system SoD analysis. The SOFY solution provides functionality to maintain the user mappings.

Figure 7. User maintenance function as implemented at the client, automatically mapping application user IDs to an employee.
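
A hedged sketch of such automated linking: application accounts are matched to an employee ID via the e-mail address where available, and anything that cannot be matched lands on a review list for manual follow-up. The field names and HR source are assumptions for the example.

# Minimal sketch (hypothetical HR and application extracts): map application
# accounts to an employee ID, collecting unmatched accounts for periodic review.
hr_by_email = {"j.doe@example.com": "E1001", "a.smith@example.com": "E1002"}

app_users = [
    {"app": "ERP", "user_id": "jdoe", "email": "j.doe@example.com"},
    {"app": "BANKING", "user_id": "j.doe", "email": "j.doe@example.com"},
    {"app": "CRM", "user_id": "tmp_consultant", "email": None},  # no usable identifier
]

mapping, review_list = {}, []
for user in app_users:
    employee = hr_by_email.get(user["email"]) if user["email"] else None
    if employee:
        mapping[(user["app"], user["user_id"])] = employee
    else:
        review_list.append(user)  # to be linked manually and reviewed periodically

print(mapping)
print(review_list)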

5 Using tooling to validate the analysis in quick iterations

When discussing application permissions with key users and IT to define the critical tasks (as described in lesson 3), the discussion can become quite abstract. In order to make the effects of the agreed-upon definitions tangible, we recommend working with tooling. At the client, the SOFY platform was used to demonstrate the effects of including or omitting single permissions from the definition for an application by simulating the results (when applying the current definition). SOFY Access Control is a tailored tool with dashboards, KPIs and functions to analyze and dive into SoD conflicts and underlying user permissions (see Figure 8).

In the client's SOFY dashboards, the choice was made to focus on user- and role-level analysis. This means that the results include numbers and details of the users and roles with SoD conflicts, so root causes of conflicts can be analyzed at either of these levels. This set-up of the analysis in the tool facilitates completeness in the analysis and follow-up: if no users and no user roles within applications contain SoD conflicts (including cross-system), there will no longer be any SoD risks.

Once the SoD analysis has been set up initially, and raw data extracts are used as described, the effort needed to automate the extraction process and conduct a cross-system SoD analysis is usually limited. As authorizations change over time, due to employees joining, leaving or moving within the organization, controls related to authorizations and SoDs are typically executed periodically. This contributes to (the demonstration of) more control over the access management domain.

Figure 8. SoD Conflicts Overview dashboard, including timelines and functionalities to filter and "dive" into the results.

6 Following up on results in a phased manner

Once the analysis is done, it is time to start working with the results. The first step should always be to validate the outcome of the analysis. If the analysis turns out to be incorrect, despite the efforts of key users and IT, it should be corrected first; this avoids follow-up work on false conflicts and is therefore the most efficient option. When the analysis is validated and correct, follow-up should be done in a phased manner. We recommend starting the clean-up with the "low-hanging fruit" to get familiar with the way of thinking and to gain some momentum within your organization as a result of the improvement experienced. The following categories are considered "easy" categories:

  1. Inactive users having SoD conflicts
  2. Employees having multiple user accounts
  3. Super users having (all) SoD conflicts
  4. Roles with inherent SoD conflicts

These follow-up activities have taken place in a structured manner in the client’s company, based on the details provided by the analysis and automated results in the SOFY application. For each of the abovementioned follow-up categories, clear dashboards, KPIs and overviews were created.

7 Assigning responsibilities at a local, proper level

In order to address SoDs and resolve possible SoD conflicts, it is critical to have good operational governance. The client started with a project in which creating urgency and establishing local responsibility at each of the company locations involved were important goals. The right tone at the top and proper post-project day-to-day governance made sure the organization kept paying attention to SoDs. To stay in control, the responsibilities of the three lines of defense were described as follows:

  • 1st line (business and IT): Review and resolve SoD conflicts, provide functional sign-off on the translation of roles/permissions into critical tasks (yearly process), advise within key IT projects (e.g. a risk/authorization work stream), etc.;
  • 2nd line: Maintain the SoD Policy, authorize possible exceptions, validate business controls (e.g. mitigating controls) of locations/countries, etc.;
  • 3rd line: Conduct periodic reviews on implementation and effectiveness of SoD controls, perform reviews on mitigating controls, etc.

As already addressed in lesson 5, the use of tooling is recommended to make this governance structure and its processes feasible. Especially with the tasks outlined above, those involved should have access to proper tooling. For instance, the second line at the client has access to a KPI summary dashboard for monthly monitoring (targets) and managing the results (see Figure 9).

Figure 9. Summary Dashboard of SOFY Access Control.

8 Only accept mitigation after exploring remediation options

Once responsibilities are assigned and the organization operates according to the designated lines of defense (see previous lesson), there are two main ways to respond to SoD conflicts: remediation of the conflict by resolving the root cause, or mitigation by minimizing or removing the resulting risk of the SoD conflict. Before handing out targets on reducing the number of conflicts, it is beneficial to first reflect on the desired solution to the SoD conflicts. Whereas mitigation of SoD conflicts is most often quickly arranged by implementing an additional check or another control (e.g. periodically taking a sample of transactions and checking their validity), this might not be the preferred approach for an overall SoD solution. Especially when performing a cross-system SoD analysis, there might be several different routes to a SoD conflict (when a task can be executed in multiple applications, this results in multiple routes to a SoD conflict), each of which might require its own specific additional control.

Remediation is often slightly more difficult. Adjusting the roles within the application, removing permissions from users (which can be accompanied by challenging discussions on whether they really need the access) or adding a control within the application all require more effort than coming up with an additional check. However, these solutions do tend to provide a more permanent resolution to the SoD conflict, saving the periodic effort of having to perform the additional control. The SOFY platform was leveraged in the project to bring down the effort needed for remediation as well, for example by simulating the effects of removing permissions from roles.
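
As an illustration of such a what-if simulation, the following sketch recomputes an employee's critical tasks after removing a permission from a role; the role, permission and task names are hypothetical and the logic is heavily simplified compared to an actual platform.

```python
# Minimal what-if sketch: recompute an employee's critical tasks after removing
# a permission from a role, to estimate the remediation effect before changing
# the system. Role, permission and task names are illustrative assumptions.

role_permissions = {
    "PURCHASER": {"ME21N", "FK02"},   # create purchase orders, change vendor master
    "AP_CLERK": {"MIRO"},             # post supplier invoices
}
permission_to_task = {
    "ME21N": "Enter purchase orders",
    "FK02": "Maintain vendor master data",
    "MIRO": "Post supplier invoices",
}
employee_roles = {"E1001": {"PURCHASER"}, "E1002": {"AP_CLERK"}}

def tasks_for(employee, roles=role_permissions):
    # Collect all permissions via the employee's roles and map them to critical tasks.
    perms = set().union(*(roles[r] for r in employee_roles[employee]))
    return {permission_to_task[p] for p in perms}

print("Before:", tasks_for("E1001"))

# Simulate removing FK02 (vendor master maintenance) from the PURCHASER role.
simulated = {role: set(perms) for role, perms in role_permissions.items()}
simulated["PURCHASER"].discard("FK02")
print("After: ", tasks_for("E1001", roles=simulated))
```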

9 Utilizing the available data for additional insights

Gathering the authorization data of multiple applications in a single location, in combination with increased awareness and momentum on access management, can result in improvements beyond SoDs. For example: the same data needed to analyze the permissions of a user or a role within an application can also be used to perform the regular user/role reviews as part of the access management controls. Even better, instead of reviewing all individual permissions contained in a role or assigned to a user, the review can also be performed at the critical-activity level, making it more efficient.

Secondly, due to the links between application users and employees, it is very easy to detect whether any employees hold multiple user accounts for a single application. This also helps to clean up the system as part of user management activities. Lastly, when combined with sources such as the Active Directory, interesting new insights can be generated. Employees that have left the organization (and are marked as such in the Active Directory) but are still linked to active users in applications can easily be listed. These exception reports help keep application authorizations clean and reduce SoD conflicts.
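
The sketch below illustrates, with hypothetical data, how two of these exception reports could be derived from the same user-to-employee mapping; it is a simplified example, not the actual implementation.

```python
# Minimal sketch: exception reports on top of the same authorization data.
# Assumes an Active Directory export with an employment status and the
# user-to-employee mapping built earlier; field names are illustrative.

from collections import Counter

active_directory = {"E1001": "active", "E1002": "left_organization"}

app_users = [
    {"application": "SAP_PRD", "user_id": "JJANSEN", "employee_id": "E1001", "status": "active"},
    {"application": "SAP_PRD", "user_id": "JJANSEN2", "employee_id": "E1001", "status": "active"},
    {"application": "CRM", "user_id": "p.devries", "employee_id": "E1002", "status": "active"},
]

# 1. Leavers that still have active application accounts.
leavers_with_access = [
    u for u in app_users
    if u["status"] == "active" and active_directory.get(u["employee_id"]) == "left_organization"
]

# 2. Employees with multiple accounts in the same application.
counts = Counter((u["application"], u["employee_id"]) for u in app_users)
multiple_accounts = [key for key, n in counts.items() if n > 1]

print("Leavers with active accounts:", leavers_with_access)
print("Multiple accounts per application:", multiple_accounts)
```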

These examples were implemented as KPIs in a remediation dashboard in the SOFY tool of a financial services company. Figure 10 shows an overview of what such a dashboard would look like. The left-hand blue-colored column highlights the KPI actuals, whereas the middle column indicates the target value, and the right-hand column defines the KPI and advises on the required remediation action.

Figure 10. Remediation Dashboard of SOFY Access Control.

10 Aiming for a sustainable solution

Finally, when the analysis has been set up at the right level of detail, has been successfully automated and follow-up has been operationalized, one last aspect has to be taken into consideration. The object of analysis (i.e. the IT environment analyzed for SoD conflicts) is subject to constant change. New functionality might be added, entire application systems might be phased out or introduced; all of this should be reflected within the SoD analysis.

We recognize three different mechanisms to keep the cross-system analysis aligned with reality:

  1. The ongoing process of change management applicable to IT environments. Typically, changes are generated by well-structured processes or projects (depending on the size of the change) and should be evaluated for SoD relevance. If relevant, these changes should be communicated from the change process so they can be included in the analysis.
  2. The second mechanism functions as a backup for the first and consists of a periodic review of all definitions applied in the SoD analysis. By periodically (for example, yearly) distributing the current definitions to be confirmed per application, it is ensured that any changes missed by the first mechanism are identified and that the definitions stay up to date.
  3. As definitions are not the only critical input to a successful SoD analysis, other elements such as the linkages between application users and employees need to be maintained as well. When a new employee joins the organization and obtains a new user account, it needs to be (automatically) linked to the correct person. By including these prerequisites in the reported SoD KPIs, it is enforced that these critical inputs are maintained throughout the year.

Conclusion

If an organization with a complex IT environment encounters challenges relating to access rights and SoDs, it is advisable to use a platform to support the analysis. Moreover, the right platform will provide an organization with the tools to maintain the policies and (technical) rules on which the SoD analyses are based. The analysis should reveal conflicts on SoD and critical access and include information on the related users, applications, roles and access rights. This information can be used to either remediate or mitigate conflicts. To monitor SoDs in a structured manner, it is key to automate the analyses and follow up in time. The 10 lessons learned described above provide organizations with practical tips for a head start in their approach to SoDs and help them effectively demonstrate compliance.

We would like to thank Dennis Hallemeesch and Nick Jenniskens for their contribution to this article.


Exploring digital: empowering the Internal Control Function

The Internal Control Function, or second line of defense, is a vital part of the organization, tasked with devising and improving measures to prevent fraud, helping the company adhere to laws and regulations and improving the quality of internal financial reporting. The world is watching: companies, especially the larger ones, have to meet global and local laws and the expectations of the general public, shareholders, auditors, employees, the supply chain and other stakeholders. This internal and external pressure is causing the Internal Control Function to feel the urge to improve its way of operating. This article provides insight into different digitalization options to help improve the way the Internal Control Function operates, with the purpose of inspiring you to digitize your Internal Control Function too.

Introduction

The Internal Control Function (ICF) uses a wide set of controls to make sure business and compliance risks are prevented or dealt with, to the benefit of the company's well-being. Many of these controls are manual. Relying largely on manual controls costs a great deal of effort, time and money. Not only are these controls more time-consuming and costly, they also cannot absorb the increasing complexity of today's business environments in time, possibly leaving the company exposed to risks.

Let's see what is happening in the ICF market domain. The recently published Governance, Risk & Compliance (GRC) survey by KPMG ([KPMG19]) was initiated to get better insight into the maturity of GRC, the level of internal controls and the adoption of HANA among organizations running SAP within the EMA region. More than 40 large organizations running SAP were asked to participate in this survey. Relevant conclusions for the ICF:

  • Approximately 20% of these companies don’t have a centralized internal control repository
  • Approximately 50% of these companies have less than 10% of their controls automated
  • Approximately 70% of these companies identify control automation as a top priority
  • Approximately 50% of these companies want to reduce their control deficiencies

Following this survey, it seems that while the top priorities of companies include further automation and reducing control deficiencies, the actual number of companies relying heavily on automation and digital solutions is low. While the relevance of digitizing seems evident, it is difficult to start, given the large landscape of applications and the different control options available to an organization. A logical question to ask is: how can we start to digitize our ICF?

This article shares client stories of ICFs that used digital options, simply put: tools, to improve the way they operate, across a variety of industries. We will outline how digital options can be used to lower the cost of control and improve the level of assurance for four different control types, and which pitfalls should be avoided. We will also share relevant lessons learned and next steps based on our own experiences.

Digitalization options for the Internal Control Function

Digitalization of the ICF can be achieved in different forms. Some organizations start by implementing a tool or system to centrally manage and govern their risk and control framework. Others choose to go for an end-to-end transformation, where various tools and systems are integrated with each other, controls are automated and manual activities are supported by robotic process automation and low-code platforms. In the end, all companies try to achieve the same goal: increase their level of assurance while decreasing their cost of control. To help reach this goal, we analyze a number of digitalization options using the CARP model. This model helps to categorize the different control types that will effectively reduce risks within a process or process step. CARP stands for Configuration, Authorization, Reporting and Procedural (Manual), which represent different types of controls (see Figure 1).

Figure 1. CARP model.

In Figure 1, the left side (C+A) of the model represents more technical controls, which can often be implemented directly in the ERP system, whereas the right side (R+P) of the model represents organizational controls which are embedded in daily business activities. Furthermore, configuration and authorization controls are preventive in nature while reporting and procedural controls are detective in nature. For each of the control types indicated in the model, there are digitalization opportunities. In the next section, some examples and use cases are provided for each category to provide a peek into the different options.

Configuration controls

Configuration controls relate to the system settings of an ERP system that can help prevent undesirable actions or force desirable actions in the system. These kinds of configurations exist for the business processes handled in the ERP system as well as for the IT processes related to the ERP system. Therefore, a distinction can be made between "business configuration controls" (also known as application controls) and "system configuration controls".

Examples of business configuration controls

  • Mandatory fields, such as a bank account number; when creating a new vendor in the system, these settings make sure no critical information is missing.
  • Three-way match; this enforces that the purchase order, goods receipt and invoice document postings are matched (within the tolerance limits) with regard to quantity and amount.

Examples of system configuration controls

  • Password settings such as the SAP parameters "login/min_password_lng" or "login/min_password_letters" determine system behavior, such as the length of a password or the minimum number of letters used.
  • More general system settings, such as the number of rows that can be exported in a Microsoft Dynamics D365 environment, determine part of the system's stability and are governed to make sure system performance is not impacted by frequent large exports.
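
To make the idea of checking configuration controls tangible, the following simplified sketch compares extracted parameter values against a baseline and reports exceptions; the baseline values are illustrative assumptions, not a recommended standard.

```python
# Minimal sketch of a configuration-control check: compare extracted system
# parameters against a security baseline and report exceptions. The baseline
# values below are illustrative, not a recommended standard.

baseline = {
    "login/min_password_lng": lambda v: int(v) >= 8,
    "login/min_password_letters": lambda v: int(v) >= 1,
    "rec/client": lambda v: v == "ALL",
}

extracted_parameters = {
    "login/min_password_lng": "6",
    "login/min_password_letters": "1",
    "rec/client": "OFF",
}

exceptions = [
    (name, value)
    for name, value in extracted_parameters.items()
    if name in baseline and not baseline[name](value)
]

for name, value in exceptions:
    print(f"Deficient: {name} = {value}")   # to be escalated to the system owner
```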

In short, configuration controls are automated and preventive in nature, helping organizations stay in control without requiring FTEs to execute the controls. Therefore, this type of control can be used to reduce the cost of control and increase assurance levels. However, there is a catch: how can the organization prove its automated configuration controls are set up correctly? And how does it prove that this has been the case over time? To show how digital solutions can help solve this question, we present a use case of a large multinational where SAP Process Control was implemented to monitor the system configuration controls of 20 SAP systems.

Use case: using tools to go from quarterly parameter checks to continuous monitoring

Context

A large multinational with over 10 billion euros in revenue. The company has over 20 centralized SAP systems. For each of these SAP systems, the (security) parameter settings, such as client logging (rec/client) or password settings (login/min_password_lng), needed to be monitored in order to adhere to the company's SAP security baseline. This baseline covers over 100 (security) parameter settings, which resulted in a lot of pressure on the testing resources.

State before Process Control

Before SAP Process Control was used, the 100+ security settings for each centralized SAP system were reviewed once per quarter. The review was performed manually and documented by creating screenshots of each relevant system setting; these documents were over 100 pages per system. The follow-up on findings of these reviews was limited and rarely documented. If changes had occurred during the quarter (e.g. a setting was changed to an incorrect value and changed back to the correct value just before the review), there was no way to detect them.

State after Process Control

By using continuous monitoring via SAP Process Control, the system (security) parameters are now monitored on a weekly or monthly basis (depending on the risk profile) and, on top of that, all changes made to parameters are reported. Furthermore, the monitoring is now exception-based. This means that parameters which are set to the correct values pass and are reported as effective, whereas parameters that are set to an incorrect value are marked as deficient and escalated through a workflow. The workflow requires a follow-up action from the system owner, which is then captured in SAP Process Control.

Key benefits

By shifting the monitoring to SAP Process Control, the cost of control decreased while the assurance over the controls increased. By automating the parameter monitoring, the focus shifted towards exceptions and the follow-up thereof. In the new situation, all results are also better auditable and more useful for the external auditor.

In this specific case, the client used SAP Process Control to perform continuous monitoring on their system configuration controls.

System authorization controls

Authorization controls are preventive measures taken to control the content of technical "roles" and users' access to those roles, with the intent of making sure the right people can execute the right actions. In their efforts to manage access, companies generally make use of the authorization controls below:

  • Segregation of duty controls. The ability to change vendor bank accounts is limited to a technical system role related to master data management. That role is only assigned to personnel in the master data management department, to people who are not directly processing transactions. Another role is limited to creating purchase orders. This limits the risk that one person changes a vendor bank account into a private account and creates a purchase order against it for a fraudulent pay-out. The outcome is that certain activities or "duties" are segregated. Violations like the example above are called Segregation of Duties (SoD) conflicts.

    [Vree06] zooms in on the relevance of Segregation of Duties and its impact, along with multiple improvement suggestions, and [Zon13] dives into solutions for managing access controls.
  • Sensitive Access controls. Updating credit management settings often falls under the Sensitive Access controls. These controls are essentially lists of actions that can have a major impact on the business, and access to them should therefore be limited and closely monitored. Unlike SoDs, this concerns a single specific action. In the example of credit management, access to the transaction code and authorization object in SAP, or the permission in Microsoft D365, is normally monitored periodically, whereby any users or roles having this access are screened and adjusted where needed.

In short, authorization controls are very similar to configuration controls because they are part of the system and, once created and assigned, they automatically do their job. These controls are strong preventive controls if set up correctly. As in the case of configuration controls, there is a catch: how can the organization prove that the authorization controls are set up correctly, and how can it prove that this has been the case over time? To show how digital solutions can help solve this question, we present a use case where a company used Access Control tooling to assist in managing their access controls and making sure their roles are SoD-free or SoD-mitigated in a way that increases assurance and lowers the cost of control.
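
As a minimal illustration of a sensitive-access review, the sketch below lists which users hold a critical permission; the permission codes and user data are hypothetical.

```python
# Minimal sketch of a sensitive-access review: list which users can execute a
# critical single action (e.g. changing credit management settings).
# Permission codes and user data are illustrative assumptions.

sensitive_permissions = {"FD32"}   # e.g. change customer credit management in SAP

user_permissions = {
    "JJANSEN": {"ME21N", "FD32"},
    "PDEVRIES": {"MIRO"},
}

flagged = {
    user: perms & sensitive_permissions
    for user, perms in user_permissions.items()
    if perms & sensitive_permissions
}

print(flagged)   # periodically screened: is this access still required?
```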

Use case

Background Sofy

The Sofy platform is a KPMG proprietary SaaS platform, hosting solutions in areas where KPMG built expertise over the course of years. Solutions on the Sofy platform aim to provide relevant insights into business-critical processes as well as triggering relevant follow-up actions by end users with the help of workflows, tasks and notifications.

Context

A large multinational operates in more than 150 countries and has annual global revenues of over 50 billion euros. The core application landscape of the customer consists of 9 SAP production systems. The access of all users to these systems is to be monitored to limit extensive conflicting access rights and to trigger quick resolution of access-rights-related issues by the appropriate end users.

State before KPMG Sofy Access Control

The organization struggled to get a reliable view of access risks within and between their business-critical applications. Their previous solution only looked at the SAP landscape and analyzed their production systems in an isolated way, without taking into account that users may have conflicting access rights across multiple systems. There was a strong desire, driven by Internal Control and findings from the external auditor, to get better insights into conflicting authorizations within the full SAP development stack as well as other business-critical applications. Issues often existed with users that had access to multiple business-critical applications and as such could perform conflicting activities in multiple systems. With the existing solution, the company was unable to detect these issues.

State after KPMG Sofy Access Control

By implementing the Sofy Access Control solution:

  • transparency has been created within the complete SAP landscape;
  • a preventive SoD check is now running continuously for every access request;
  • conflicting user authorizations resulting from role assignments are being reviewed and approved before they are actually assigned in the underlying system, to prevent unauthorized access for end users;
  • conflicting user authorizations are being reviewed continuously to ensure accurate follow-up takes place in terms of risk acceptance, mitigation or remediation.

Key benefits

The solution helped the client gain control over their authorization controls because:

  • it increased transparency in conflicting access across the full SAP stack;
  • continuous monitoring on each of these systems ensures quick resolution and remediation of access-related risks;
  • preventive SoD checks make sure unauthorized access in the system is avoided, as the impact of role changes or role assignments is clear upfront.

The implementation of this digital solution has shifted mindsets from taking remedial actions reactively to proactively avoiding and mitigating access-related issues.

Reporting controls

Reporting controls are pieces of information combined into a format from which a user can draw conclusions about, for example, the effectiveness of a process or the state of the financials. They are used to detect anomalies so that action can be initiated. Examples are:

  • Related to the SoD example mentioned under authorization controls, a manager in the internal control department wants to know how many SoD conflicts were reported last month, the actions that were taken to fix them, the status of those actions and the actual risk the business is now facing. A dashboard, for example in BI tooling such as Microsoft Power BI, Tableau or Qlik Sense, or as part of a risk platform such as the earlier mentioned Sofy or SAP Process Control, can be a great tool to visualize and report on the status of the authorization controls. Especially on this topic, many lessons can be learned, and we highly recommend reading the 10 most valuable tips for analyzing actual SoD violations ([Günt19]).
  • A manager in finance running a report that checks the systems for duplicate invoices, so that double payments to vendors can be prevented or payment of duplicated invoices can be retrieved from vendors that were paid twice.

In short, reporting controls are generally detective in nature as they present information about something that already has occurred. While the Configuration and Authorization controls try to prevent risks, there is always a residual risk and this is where reporting controls come in, to detect any mistakes or fraudulent behavior that got past the preventative controls.

These reporting controls can be very strong when the executers of those controls are supported by strong dashboards and analytics. In the Compact special of January 2019, [Zuij19] presented a case on advanced duplicate invoice analysis. The article explains how smart digital tooling was implemented to create a duplicate invoice analysis at a major oil company; a simplified sketch of such a check is shown after the list below. We advise reading this in-depth case, because it provides helpful guidance on how reporting controls can be digitized to unlock conclusions that were invisible or inaccessible before. This directly increases the assurance level of this type of control, because insight is provided that wasn't there before, and at the same time it decreases the cost of control, opening up the possibility to recover any invoices that were paid twice. Additional examples are:

  • Automating the running and sending of reports using SAP Process Control
  • Creating an analysis to identify the use of discounts in the sales process using SAP HANA or SQL
  • Unlocking faster decision-making by providing the organization with real-time overviews of the state of internal controls. This can be achieved with a live dashboard using Microsoft Power BI in combination with OutSystems and SAP Process Control, or KPMG SOFY.
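
As a simplified illustration of the duplicate-invoice check referred to above, the following sketch groups invoices by vendor, normalized reference and amount; the matching logic is deliberately basic and only meant to convey the idea.

```python
# Simplified sketch of a duplicate-invoice reporting control: flag invoices
# from the same vendor with the same amount and a near-identical reference.
# Field names and the matching logic are illustrative assumptions.

from collections import defaultdict

invoices = [
    {"vendor": "V100", "reference": "INV-2020-001", "amount": 1250.00},
    {"vendor": "V100", "reference": "INV2020001", "amount": 1250.00},
    {"vendor": "V200", "reference": "2020-77", "amount": 480.50},
]

def normalize(reference: str) -> str:
    # Strip separators so "INV-2020-001" and "INV2020001" compare equal.
    return "".join(ch for ch in reference.upper() if ch.isalnum())

groups = defaultdict(list)
for invoice in invoices:
    key = (invoice["vendor"], normalize(invoice["reference"]), invoice["amount"])
    groups[key].append(invoice)

potential_duplicates = [group for group in groups.values() if len(group) > 1]
print(potential_duplicates)   # candidates for follow-up before (or after) payment
```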

Procedural (manual) controls

Procedural controls are manual actions performed by a human, initiated to prevent or detect anomalies in processes. They help companies cover residual risks that are not easily covered by Configuration, Authorization or Reporting controls. Examples of manual controls are:

  • Signing off documents such as contracts, large orders, etc.;
  • Manual reconciliation of received payments against invoices, with or without the use of a system;
  • Manual data alignment between the sales system and invoice system.

Controls executed by humans, such as Reporting and Procedural controls, have an inherent risk that machine-executed controls do not: the human operator making mistakes, because making mistakes is human, especially when the complexity and repetitiveness of a control increase. As business complexity and the volume of data are increasing, companies are now looking into solutions that can replace or enhance human-operated controls with automated, digitized ones.

To show how manual controls can be improved using digital solutions, we present a use case focused on reducing manual actions, or at least reducing the effort and increasing the quality of their output. In this case, Robotic Process Automation (RPA) tooling was used to automate manual journal entries, resulting in fewer manual control actions. This reduces the cost of control because fewer FTEs are required to operate the control. Secondly, the level of assurance increases, as a robot will not make mistakes even when the repetitive task is executed hundreds of thousands of times.

Use case: automating manual journal entries at a large telecommunication organization

Context

During a large finance transformation at one of the biggest Dutch telecom companies, KPMG was asked to help identify opportunities for automation within the finance department. During an assessment at the client, the processing of manual journal entries was identified as a suitable candidate for automation with the use of Robotic Process Automation (RPA), because of the highly repetitive nature of the process and the high business case value, as the process is time-consuming and error-prone. To show the viability of RPA within the organization and the potential benefits for the client, a Proof of Concept was initiated.

State before using RPA tooling

  • Large finance team performing manual repetitive tasks daily
  • Low first-time-right percentage for manual journal entries, which leads to rework
  • Inefficient input template for the creation of manual journal entries
  • Multiple human control steps embedded in the process to check journal entries before recording, which is time-consuming
  • No clear visibility for management of previously recorded manual journal entries

State after the implementation

  • Standardized a manual journal entry template for the usage of RPA
  • Automated the booking of manual journal entries using RPA software
  • Eliminated unnecessary steps within the manual journal entry process

Key benefits

  • Higher first-time-right percentage due to fewer errors made in the process as a result of automating it using RPA
  • One fully automated process, which resulted in FTE reduction
  • Less human intervention necessary due to higher data quality caused by robotic input, which is more stable and less prone to error
  • Automatically generated reports can be used for a better audit trail and management reporting

Lessons learned: digitizing the right way

Like other projects, digitalization projects can be challenging and will have pitfalls. In this section, we will provide examples of the pitfalls we encountered, and explore how they can be prevented.

Determine the baseline

In several cases, the goal of a project is set without first analyzing the starting point. This can result in unachievable goals, which will cause the project to fail. For example, if the goal of a digitization project is "we would like to fully automate the testing for 50% of the controls in our central control framework using tool XYZ", there are several prerequisites to achieve that goal:

  1. Tool XYZ should be capable of automating the testing for these controls.
  2. The feasibility of automating the testing of controls in the central control framework should be determined beforehand.
  3. The end users should be part of the digitization journey to make sure they understand the tool and understand how it can be embedded in their process

In this example, setting the baseline would consist of analyzing the controls for their general feasibility of automation, then checking whether tool XYZ is capable of facilitating the intended automation, and then engaging the business users before the project starts.

Once the baseline is determined, an achievable goal can be set, and the project will have a higher chance of success.

A fool with a tool is still a fool

The market is full of tools and technology solutions, some with a very broad range of services, some with a specific focus. Each of these tools has strengths and weaknesses. These tools are often sold using accompanying success stories. However, even the best tool, if used in the wrong way, won't be successful.

As an example, consider SAP Access Control, a tool which can be used to monitor potential Segregation of Duties conflicts in an ERP system. When the tool reports SoD conflicts, an end user should follow up on the conflicts. In the case of SAP Access Control, a user has the ability to assign a control to an access risk to let the system know the risk has been mitigated. In reality, many users assign a control in the system merely to hide the results of SAP Access Control by showing them as "mitigated", while the actual risk in the ERP system is still there because the control is not really executed. In this case, the tool is only as good as the way the end user decides to use it. Good examples of how to use this tool properly can be found in the article by van der Zon, Spruit and Schutte ([Zon13]).

To ensure that a tool or solution is used in the right way, make sure that the end users are involved and properly trained. If they see the benefits, adoption will be easier and the tool or solution can be used to its full extent, moving towards a more digitized organization.

Who controls the robot?

Governance is an important topic in relation to technology solutions. In a landscape where controls are tested automatically, reports are generated by a data analytics platform and manual tasks are performed by robots, there is still manual intervention by humans.

The automated testing of controls needs to be configured in the tool or solution. As part of the implementation project this is probably tested and reviewed, but what happens after that? How do organizations make sure that nobody changes the rule-based setup for the automated testing of the controls? This question is relevant for every tool or technology solution used for the digitization of processes. If there are separate robots to perform conflicting activities within a process, but both robots can be accessed by the same person, the conflict and underlying risks still exist. To resolve this, proper governance of the technology solution or tool should be put in place.

Working together

In larger corporations, each department might have their own budget and their own wishes and requirements. However, if each department is working on digitizing individually, a lot of effort is wasted. Re-inventing the wheel is costly and will slow down overall progress.

When forces are combined, requirements are bundled and effort is centralized, digitizing the processes will make more sense and implementation can be faster and cheaper. It's about connecting individual digitizing efforts to achieve the next level.

Conclusion

The digitalization of internal control entails more than selecting and implementing a new tool and learning how to use it; it is the use of digital technologies to change the way the business or a department works and to provide new value-producing opportunities for the company. Onboarding new tooling will therefore require enhancements to your operating model to be set up for success.

Different aspects of the operating model need to be considered. Think of the potential impact on the ICF when automation changes the division of roles between the business and internal control. People and skills are impacted when internal control takes a role in the configuration or maintenance of automation rules, requiring certain technical capabilities and skillsets. In today's digitalization, a more agile way of working is usually better, potentially impacting the required capabilities of the internal controllers. From a technology perspective, automation has a major impact because of the integration within, or connection with, the existing IT landscape. This is even more so the case when RPA is used, impacting aspects such as governance, maintainability and security. Finally, automation within the internal control realm will have an impact on the current way of reporting, also considering auditability.

Taking the time and effort to define the impact on the operating model of your ICF and to devise a detailed plan on how to use digital control options is the key to success.

Acknowledgements

The authors would like to thank Sebastiaan Tiemens, Martin Boon, Robert Sweijen and Geert Dekker for their support in providing use cases, feedback and co-reading the article.

References

[Cool18] Coolen, J., Bos, V., de Koning, T. & Koot, W. (2018). Agile transformation of the (IT) Operating Model. Compact 2018/1. Retrieved from: https://www.compact.nl/articles/agile-transformation-of-the-it-operating-model/

[Günt19] Günthardt, D., Hallemeesch, D. & van der Giesen, S. (2019). The lessons learned from did-do analytics on SAP. Compact 2019/1. Retrieved from: https://www.compact.nl/articles/the-lessons-learned-from-did-do-analytics-on-sap/?highlight=The%20lessons

[KPMG19] KPMG (2019, May). Survey – Governance, Risk and Compliance. Retrieved from: https://assets.kpmg/content/dam/kpmg/ch/pdf/results-grc-survey-2019.pdf

[Vree06] Vreeke, A. & Hallemeesch, D. (2006). ‘Zoveel functiescheidingsconflicten in SAP – dat kan nooit’, en waarom is dat eigenlijk een risico? Compact 2006/2. Retrieved from: https://www.compact.nl/articles/zoveel-functiescheidingsconflicten-in-sap-dat-kan-nooit-en-waarom-is-dat-eigenlijk-een-risico/?highlight=hallemeesch

[Zon13] Van der Zon, A., Spruit, I. & Schutte, J. (2013). Access Control applicaties voor SAP. Compact 2013/3. Retrieved from: https://www.compact.nl/articles/access-control-applicaties-voor-sap/

[Zuij19] Zuijderwijk, S. & van der Giesen, S. (2019). Advanced duplicate invoice analysis case. Compact 2019/1. Retrieved from: https://www.compact.nl/articles/advanced-duplicate-invoice-analysis-case/?highlight=Advanced

Transaction monitoring model validation

The bar for transaction monitoring by financial institutions has been raised during the past decade. Recently, several banks have been confronted with high fines relating to insufficient and ineffective transaction monitoring. There is an increasing number of regulators that expect (mainly) banks to perform self-attestations with respect to their transaction monitoring models. This is, however, a complex exercise with many challenges and pitfalls. This article aims to provide some guidance regarding the approach and methods.

Introduction

Many people consider financial crime, such as money laundering, to be a crime without real victims. Perhaps a large company loses money, or the government receives less tax, but nobody really suffers true harm. Sadly, this is far from the truth. From human trafficking to drug wars and child labor, the human cost of financial crime is very real and substantial. Financial crime is therefore considered a major problem by governments around the world. As a consequence, increasingly strict regulations regarding transaction monitoring have been imposed on the financial industry since the beginning of the financial crisis, as financial institutions are the gatekeepers to the financial system. These regulations have predominantly, although not exclusively, an effect on banks. Financial institutions are increasingly confronted with complex compliance-related challenges and struggle to keep up with the development of regulatory requirements. This especially applies to financial institutions that operate on a global level and that are using legacy systems. The penalties of non-compliance are severe, as demonstrated by, amongst others, UBS with a fine of 5.1 billion USD and a case in the Netherlands where ING Bank settled for €775 million with the public prosecutor. As time progresses, the bar for financial institutions is being raised even higher.

In 2017, the New York State Department of Financial Services (NYDFS) part 504 rule became effective. The NYDFS part 504 requires – starting in 2018 – that the board of directors or senior officers annually sign off on the effectiveness of the transaction monitoring and filtering processes, and a remediation program for deficiencies regarding internal controls. The nature of the NYDFS part 504 rule is similar to that of the SOx act. This seems to be a next step in transaction monitoring regulatory compliance requirements. For example, the Monetary Authority of Singapore has increased its focus both on anti-money laundering compliance as well as independent validation of models. In the Netherlands, De Nederlandsche Bank (DNB) as supervisory authority has issued a guideline in December 2019 ([DNB19]) regarding, for now, voluntarily model validation with respect to transaction monitoring.

Given the increased attention for transaction monitoring and model validation (self-attestation), this article zooms in on the way model validations for transaction monitoring can be approached. The next section contains an overview of the compliance framework for transaction monitoring, after which the common pitfalls and challenges for model validations are discussed. KPMG's five-pillar approach, which enables financial institutions to cope with these pitfalls and challenges, is then explained, followed by an outlook on the near future of transaction monitoring and technologies for model validation. Finally, a conclusion is provided.

High-level transaction monitoring process

Figure 1. High-level overview of the transaction monitoring process ([DNB17]).

Before discussing model validation in more detail, it is helpful to provide a high-level overview of the transaction monitoring process, as an example of a compliance model (see Figure 1). The SIRA (Systematic Integrity Risk Analysis) and the transaction monitoring governance are at the basis of the process. When transactions are initiated, pre-transaction monitoring activities are triggered based on business rules (e.g. with respect to physical contact with the client, trade finance or sanctions). This might result in alerts, which are followed up in accordance with the governance and escalation procedures.

Inbound and outbound transactions ("R.C. Mutations") are processed, after which post-transaction monitoring activities are triggered based on business rules, again resulting in potential alerts which are followed up and, if required, reported to the FIU (Financial Intelligence Unit, the authority to which unusual transactions relating to money laundering and terrorism financing are reported).

Parallel to the daily activities, a data-driven learning and improvement cycle is in place in order to decrease false positive and false negative alerts and to increase efficiency.

Compliance framework

Banks and other financial institutions use a multitude of models to perform quantitative analyses like credit risk modeling. As a response to the increased reliance on such models, different regulators as well as other (inter)national organizations have issued regulations and guidance in relation to sound model governance and model validation.

Within the compliance domain we see an increasing reliance on compliance models, like transaction monitoring systems, client risk scoring models or sanction screening solutions. These models are used to ensure compliance with laws and regulations related to, among others, money laundering, terrorism financing and sanctions. While these models are intended to mitigate specific integrity-related risks, like the facilitation of payments related to terrorism, the usage of such models introduces model risk and, if not handled well, can result in unjustified reliance on the model. Therefore, model-related guidance, either specifically related to the compliance domain or more general, is equally relevant for compliance models. Examples include Bulletin OCC 11-12 from the Federal Reserve and the Office of the Comptroller of the Currency, or the Guidance for effective AML/CFT Transaction Monitoring Controls by the Monetary Authority of Singapore. The DNB has presented guidance on the post-event transaction monitoring process for banks on how to set up an adequate transaction monitoring model and related processes, including a solid Three Lines of Defense.

Internationally, different regulators have not only issued guidance in relation to model risk and sound model governance. They have additionally introduced, or are requesting, reviews, examinations and even mandatory periodic attestations by the board of directors or senior officers to ensure that compliance models are working as intended and that financial institutions are in control of these models. For example, the New York Department of Financial Services (NYDFS) requires senior officers to file an annual certification attesting to compliance with the NYDFS regulations that describe the minimal requirements for transaction monitoring and filtering programs. The DNB, for instance, has stated in the updated guidance on the Wwft and SW that both the quality and effectiveness of e.g. a transaction monitoring system must be demonstrated, and that by carrying out a model validation or (internal) audit, an institution can adequately demonstrate the quality and effectiveness of such a model.

In our experience, banks are increasingly considering compliance models to be in scope of the regular internal model validation processes that are already being performed for more financially oriented models, requiring sign-off by internal model validation departments prior to implementation and/or as part of the ongoing validation of e.g. transaction monitoring systems. Additionally, compliance departments as well as internal audit departments are paying more attention to the internal mechanics of compliance models rather than looking merely at the output of the model (e.g. generated alerts and the subsequent handling). Especially due to recent regulatory enforcements within the EU, and specifically the Netherlands, we have seen the topic of compliance model validation become more and more part of the agenda of senior management and the board, and banks allocating more resources to compliance models.

Given the increased awareness among both external parties, such as regulators, and internal parties at financial institutions, these models introduce new risk management challenges. Simply put: how do we know and show that these compliance models are functioning as intended?

Issues, pitfalls and challenges

Effectively managing model risk comes with several issues, pitfalls and challenges. Some of these are part of the overall model risk management (MRM) process and others relate more specifically to compliance models. We have also seen recurring observations, findings or deficiencies in models that can impact both the efficiency and effectiveness of the models. This section describes some of these challenges and deficiencies so that, when designing, implementing and operating compliance models or when validating or reviewing such models, these can be considered upfront.

KPMG has conducted a study to identify key issues facing model developers and model risk managers. This study, which is not specifically focused on compliance models, shows that key issues include the definition of a model being subjective or even obscure, and the dividing line between a model and a simpler computational tool, like an extensive spreadsheet, shifting towards including more and more tools as a model. In addition, creating a consistent risk rating to apply to both models and model-related findings is considered difficult, making it difficult, if not impossible, to quantify individual model risk as well as the organization's aggregate model risk. Other key issues include not having a consistent IT platform, inefficient processes and the difficulty of fostering an MRM culture.

More specifically, compliance models may have certain challenges that can be extra time-consuming or painful. For many financial institutions, the current validation of e.g. a transaction monitoring system is a first-time exercise. This means that the setup and overhead costs are high when organizations discover that certain crucial elements are not adequately documented or are dispersed across the organization, making the start of a full-scale validation difficult. Perhaps there isn't even sufficient insight into all the relevant source systems and data flows that impact the models.

From an ownership perspective, more and more activities related to compliance models, which historically have been managed by compliance departments, are being introduced to the first line of defense. This means that certain historical choices are unknown to the current system owners when such choices have not been documented at the time.

For financial institutions with activities in multiple countries, the lack of a uniform regulatory framework means that incorporating all relevant global and local requirements can be challenging. Even within the EU, although minimal requirements are similar, certain requirements, like what constitutes sufficient verification or which sanctions lists are mandatory, may differ per country. Outside the EU, even more distinct requirements might be relevant. What is sufficient in one jurisdiction might be insufficient or even unacceptable in another.

Because compliance models, due to regulatory pressure, are getting more resources to improve and upscale current activities, models are less static than before and become a moving target, with frequent changes to data feeds, scenario logic, system functionality or even complete migrations or overhauls of current models. In addition, increased staffing in relation to compliance models means many new employees lack historical knowledge of the models, and we also see difficulties in the market when recruiting and retaining sufficient numbers of skilled people.

An inherent element of such compliance models, similar to e.g. fraud-related models, is the lack of an extrinsic performance metric to determine the success or sufficient working of the model. Transaction monitoring systems or sanction screening solutions currently have a high "false positive" rate of alerts, sometimes as high as 99%. When banks report unusual or suspicious transactions, they generally lack feedback from regulatory organizations to determine whether what they are reporting is indeed valuable (i.e. a true positive). Furthermore, for all transactions that are not reported, banks do not know whether these are indeed true negatives or whether they perhaps still relate to money laundering. This uncertainty makes it very difficult to objectively score model performance compared to more quantitative models that are used to e.g. estimate the risk of defaulting on a loan.
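
A small, made-up calculation illustrates the point: only the outcomes of generated alerts are known, so a false positive rate can be computed, but recall (the share of missed cases) cannot.

```python
# Illustration of why a high false-positive rate complicates performance
# measurement: only alert outcomes are known, true negatives are not.
# All numbers are made up for the example.

alerts_generated = 10_000
alerts_closed_as_suspicious = 100   # reported as unusual/suspicious (true positives)

false_positive_rate = 1 - alerts_closed_as_suspicious / alerts_generated
print(f"False positive rate among alerts: {false_positive_rate:.0%}")   # 99%

# What cannot be computed from the alerts alone: how many suspicious
# transactions never triggered an alert (false negatives), so recall is unknown.
```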

All these elements make the validation of these compliance models a major challenge that financial institutions are confronted with.

When financial institutions actually conduct a model validation, or when internal or external reviews or examinations are conducted, this can result in findings such as model deficiencies. Based on public sources and supplemented with KPMG's experience, recurring or common compliance model deficiencies resulting from validations or examinations include ([Al-Ra15], [OM18]):

  • Monitoring not being applied at the correct level of granularity. E.g. monitoring individual accounts instead of the aggregate behavior of customers, entities or ultimate beneficial owners or monitoring being done across various separated systems;
  • Application of different character encodings that are not fully compatible, or inadvertently applying case-sensitive matching of terms and names (see the sketch after this list);
  • Applying capacity-based tuning and system configurations instead of a setup commensurate with the risk appetite of the organization;
  • Programming errors or fundamental logic errors resulting in unintended results;
  • A lack of detailed documentation, appropriate resources and expertise and/or unclear roles and responsibilities to effectively manage and support model risk management activities;
  • A conceptual design that is inconsistent with the unique integrity risks of an organization and minimal regulatory expectations;
  • Insufficient risk-based model controls to ensure consistent workings of the system;
  • Issues related to data quality like the incomplete, inaccurate or untimely transfer of transactional or client data between systems that feed into the compliance model.
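
The following sketch illustrates the character-encoding and case-sensitivity deficiency mentioned above: a naive exact match misses a name that differs only in case or diacritics, whereas a normalized comparison does not. The normalization shown is a simplified example, not a complete screening solution.

```python
# Illustration of the encoding / case-sensitivity deficiency: a naive exact
# match misses names that differ only in case or diacritics.

import unicodedata

def normalize(name: str) -> str:
    # Decompose accented characters, drop the accents, compare case-insensitively.
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch)).casefold()

sanctioned = ["Ali MÜLLER"]                    # hypothetical list entry
payment_beneficiary = "ali muller"

naive_hit = payment_beneficiary in sanctioned
normalized_hit = normalize(payment_beneficiary) in {normalize(n) for n in sanctioned}

print(naive_hit, normalized_hit)   # False True
```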

Model risk management is a process wherein institutions should be able to demonstrate to, among others, regulators that their compliance models work as expected and that the model risk aligns with the risk appetite of the bank. Therefore, both the challenges and the common model deficiencies mentioned in this section are relevant to consider when commencing a model validation.

Five-pillar approach: an approach for transaction monitoring model validation

When discussing model validation, it is helpful to elaborate on the foundations first. For more statistical models, the task of model validation is to confirm whether the output of a model is within an acceptable range of real-world values to fit the intended purpose. Looking at compliance models, model validation is intended to verify that models are performing as expected and in line with the intended purpose, design objectives and business uses. The validation is also used to identify potential limitations, test assumptions and assess their potential impact.

During the validation it is substantiated that the model, within its domain of applicability, possesses a satisfactory range of accuracy consistent with the intended application of the model, and that the assumptions underlying it are valid.

To validate models, an approach is required. Whereas for certain statistical or predictive models there are a lot of well-established techniques, for compliance models this is less the case; the validation approach is highly dependent on the model, type of model and system being used and the validation of compliance models is a relatively new domain. KPMG has developed an approach consisting of five interrelated pillars. The approach has been successfully used globally for both international banks as well as smaller institutions and has evolved based on global and local practice experience.

KPMG’s global model validation approach

The KPMG approach for transaction monitoring model validation consists of five pillars:

  1. Governance
  2. Conceptual Soundness
  3. Data, System & Process Validation
  4. Ongoing & Effective Challenge
  5. Outcomes Analysis & Reporting

Governance

For an effective model, not only the technical model itself but also its governance is a prerequisite for success. The governance framework related to the model needs to be reviewed. This review should include policies and procedures, roles and responsibilities, resources and training, compared against existing authoritative standards for compliance and controls programs as well as industry-leading practices and experiences with comparable institutions. This is predominantly done by conducting interviews with stakeholders based on structured questionnaires, and by documentation review.

Conceptual Soundness

The foundation of any compliance model is its conceptual design. Therefore, an assessment is required of the quality of the model design and development, in order to ensure that the design criteria follow sound regulatory requirements and industry practice. In addition, key actions include a review of the risk evaluation, a rules/settings assessment and the assessment of developmental evidence and supporting analysis.

Data, System & Process Validation

A (conceptual) design of a model generally gets implemented into an information system which requires (input) data to function and has processes that govern aspects of the system regarding, for example, change management. This pillar of the validation approach has three main types of activities that differ depending on the exact model and system being used:

  • The first type of activity involves performing a number of tests to assess whether data feeds and information from ancillary systems are appropriately integrated into the models. Preferably this is done from an end-to-end perspective (from data creation to processing to retention).
  • The second activity involves testing the system to assess whether its core functionality is working as intended. For example, for a transaction monitoring system, rules may be (independently) replicated based on documentation to determine whether they are implemented and working as designed (see the sketch after this list). Additional or alternative tests, depending on the model, can be considered, such as control structure testing or code review.
  • The third and final component involves reviewing the processes that govern the use of the system.
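
As a minimal illustration of independently replicating a rule, the sketch below re-implements a hypothetical documented rule and compares its outcome with the alerts produced by the system; all data, thresholds and rule logic are made up for the example.

```python
# Minimal sketch of independently replicating a transaction monitoring rule
# from its documentation and comparing the outcome with the system's alerts.
# The rule ("flag customers with more than EUR 10,000 in cash deposits within
# a calendar month") and all data are illustrative assumptions.

from collections import defaultdict

transactions = [
    {"customer": "C1", "month": "2020-01", "type": "cash_deposit", "amount": 6000},
    {"customer": "C1", "month": "2020-01", "type": "cash_deposit", "amount": 5000},
    {"customer": "C2", "month": "2020-01", "type": "cash_deposit", "amount": 4000},
]
system_alerts = {("C2", "2020-01")}   # alerts produced by the monitoring system (example)

totals = defaultdict(float)
for t in transactions:
    if t["type"] == "cash_deposit":
        totals[(t["customer"], t["month"])] += t["amount"]

replicated_alerts = {key for key, total in totals.items() if total > 10_000}

missing_in_system = replicated_alerts - system_alerts     # rule not working as designed?
unexpected_in_system = system_alerts - replicated_alerts  # documentation incomplete?
print(missing_in_system, unexpected_in_system)
```
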
Ongoing & Effective Challenge

A model's effectiveness needs to be assessed on an ongoing basis to determine whether changes in products, customers, risks, data inputs/outputs and the regulatory environment necessitate adjustment, redevelopment or replacement of the model, and whether model limitations and assumptions are still appropriate. Key activities of this pillar include ongoing verification, sensitivity testing (safeguarding the balance between the number of alerts and missing false negatives), performance tuning, and quantitative and qualitative benchmarking with peer organizations.

Outcomes Analysis & Reporting

Outcomes analysis compares and evaluates prospective scenario or rule changes, scoring, or alerting process changes against historical outcomes. This way, opportunities for efficiency improvements or substantial parameter changes that may exist are identified. The key activities of this component include outcomes analysis and reporting.

As the validation of compliance models is a relatively new domain, validation teams struggle with the level of depth needed to do an adequate validation without going beyond what is required. The use of a global methodology allows for a consistent and structured approach and way of working, with the benefit of consistency over time, locations and different institutions, as well as an approach that maps back to regulatory guidance. This methodology needs to be able to cope with regulatory compliance differences per jurisdiction.

Transaction monitoring outlook

Enhanced compliance frameworks, digitalization and globalization cause transaction monitoring to become more and more intricate. In addition, due to growing polarization between certain countries, sanctions regimes are also increasingly complex. How can organizations tackle these issues?

As a consequence of digitalization, the availability of unstructured data has also increased significantly over recent years. It is therefore no surprise that the application of artificial intelligence (AI) and machine learning (ML) in models is advancing rapidly. Financial institutions are also gaining their first experience in using AI/ML, both to reduce false positives (increasing efficiency) and to detect previously unknown false negatives (increasing effectiveness), while simultaneously trying to reduce, or at least control, the costs of monitoring.

From a validation perspective, however, there are some points of attention when using AI and ML for compliance models. The first is the knowledge and experience of the model developers with AI and ML. Due to their complexity, these techniques are hard to master, which makes it harder to achieve conceptual soundness when they are used. In addition, there is the risk that the model becomes a black box that is only understood by a few staff members, which makes the model less transparent and creates key person risk around the model. Furthermore, the complexity of applying AI and ML to large volumes of data makes it hard to ensure the integrity and unbiasedness of that data: the validation rules developers use to select and clean data can themselves introduce bias into the model.

In the author’s opinion, the challenges mentioned above should not keep financial institutions from selectively applying AI and ML. They do, however, require extra attention during regular model validation, the development of AI and ML capabilities within the organization, and an enhanced risk culture. For financial institutions that are still at the beginning of their AI and ML journey, it may be interesting to start by applying these techniques to the challenger model in the validation process of the current transaction monitoring model.

Another interesting development in the field of transaction monitoring is that on 19 September 2019 the Dutch Banking Association announced that five Dutch banks (ABN AMRO, ING, Rabobank, Triodos and Volksbank) will join forces and set up a joint entity for transaction monitoring: Transaction Monitoring Netherlands (TMNL). Other banks can join at a later stage. This does, however, require a change to existing (competition-related) legislation. It will be interesting to follow this development and to see whether new entrants may also join this initiative. In addition, it remains to be seen whether the business case can be realized, since TMNL only monitors domestic payment traffic. It will also be interesting to see whether similar initiatives will be launched elsewhere in the EU.

Conclusion

Regulatory requirements regarding financial crime are making it increasingly complex for financial institutions to become and stay compliant with respect to transaction monitoring. Having a model for transaction monitoring is no longer sufficient. Regulators increasingly expect financial institutions to be able to demonstrate the effectiveness of transaction monitoring and, in the process of doing so, to validate their models. Certainly for financial institutions that operate internationally, this has proven to be quite a (costly) challenge. The best way to validate a model is to start from a broad perspective and include the processes and activities that surround the model as well. The five pillars cover the required areas for model validation. However, there is no single way of validating a model; the focus within the five pillars depends on the nature of the model. AI and ML can be utilized both in the model itself and as a challenger model. In practice, however, the application of AI and ML also creates challenges and potential issues. Collaborating with FinTechs or joining forces with other financial institutions might be key to ensuring compliance while keeping the cost base at an acceptable level.

Notes

  1. An example of a logical error, or undocumented limitation, is a system configured to detect specific behavior or transactional activity within a calendar week instead of a rolling 7-day period. When a certain combination of transactions occurs from Monday to Wednesday, an alert is generated, whereas when exactly the same behavior occurs from Saturday to Monday, nothing is detected, due to the system setup rather than a deliberate design of the logic. The difference is illustrated in the sketch below.
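
The following sketch illustrates the difference described in this note, assuming a hypothetical rule that requires three transactions within seven days; the dates are chosen so that the pattern straddles a week boundary.

```python
# Calendar-week detection (as a misconfigured system might count) versus a
# rolling 7-day window (as the rule intends). The 3-transaction pattern and the
# dates are illustrative assumptions.
from collections import Counter
from datetime import date

pattern = [date(2020, 3, 7), date(2020, 3, 8), date(2020, 3, 9)]  # Sat, Sun, Mon

def calendar_week_alert(dates, min_count=3):
    """Counts per ISO calendar week (Monday-Sunday)."""
    per_week = Counter(d.isocalendar()[:2] for d in dates)
    return any(count >= min_count for count in per_week.values())

def rolling_window_alert(dates, min_count=3, window_days=7):
    """Counts within any rolling 7-day window."""
    dates = sorted(dates)
    return any(sum(1 for d in dates if 0 <= (d - start).days < window_days) >= min_count
               for start in dates)

print(calendar_week_alert(pattern))   # False: Saturday/Sunday fall in a different ISO week than Monday
print(rolling_window_alert(pattern))  # True: all three dates fall within one 7-day window
```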

References

[Al-Ra15] Al-Rabea, R. (2015, September). The Challenge of AML Models Validation. Retrieved from: http://files.acams.org/pdfs/2016/The_Challenge_of_AML_Model_Validations_R_Al_Rabea_Updated.pdf

[DNB17] De Nederlandsche Bank (2017). Post-event transaction monitoring process for banks.

[DNB19] De Nederlandsche Bank (2019, December). Leidraad Wwft en SW.

[OM18] Openbaar Ministerie (2018). Onderzoek Houston: het strafrechtelijk onderzoek naar ING Bank N.V. Feitenrelaas en Beoordeling Openbaar Ministerie. Retrieved from: https://www.fiod.nl/wp-content/uploads/2018/09/feitenrelaas_houston.pdf

PSD2 risks and IT controls to mitigate

With the introduction of the second Payment Services Directive (PSD2), new IT risks have evolved from the regulation that directly impact all payment service providers, such as banks, payment gateways and acquirers, and payment service users, such as individuals, organizations and governmental bodies. When looking at the risks and controls for the legislation, we saw organizations struggle to find the right information about the IT-related risks and the necessary steps to mitigate them. This article provides an overview of the IT controls that payment service providers and users should have in place in order to keep these risks at a tolerable level.

Introduction

PSD2 is the new European directive on consumer and business payments. With the introduction of PSD2, providers of new payment and account information services will enter the market. They will act as an online third party between you and your bank. These third parties – also known as Third Party Payment Providers – can be other banks, for example, or FinTech companies. PSD2 brings two major changes to the payments industry: it mandates stronger security requirements for online transactions through multi-factor authentication, and it forces banks and other financial institutions to give third-party payment service providers access to customer bank accounts if account holders give their consent.

The second Payment Services Directive (PSD2) introduces many opportunities and advantages, such as increased protection of payment service users through stronger security requirements and the opportunity for new services based on account information and payment initiation capabilities. Unfortunately, along with the new regulation, opportunities and advantages, new risks are also introduced. These include both operational and third-party risks and must be managed effectively. Banks and Third Party Payment Providers (TPPs) will experience significant growth in the volume of their business-to-business (B2B) network connections and traffic, and a growth in the exposure of core banking functions, driving up enterprise risk. In addition, because banks are mandated to do business with TPPs, they will soon face the challenge of how to aggregate and understand risk from potentially dozens to hundreds of TPPs. The question is: is this practice safe for your security and compliance program? And if it is not, which controls could your product team apply to mitigate the risks? ([Blea18])

In this article, we will present the related risks arising from PSD2 for four parties: banks, customers, TPPs and supervisors, with a focus on IT. We will also explain how to mitigate these risks, followed by some controls and best practices.

Background

PSD2 forces banks in the Netherlands and the rest of Europe to share data with licensed organizations and to execute payments initiated through payment initiation services. Transaction data can be shared – this includes how consumers spend their money, whether in cash or on credit, including information on customers’ loans. One of the key elements of PSD2 is the introduction of Access to Accounts (XS2A) via TPPs. Banks and other financial institutions must give certain licensed third parties access to account information and cannot treat payments that go through third-party service providers differently. Once a customer has given explicit consent to have their data shared, access is most commonly provided through a trusted API that requires strong customer authentication ([Mant18]). Open banking is the use of open APIs that share financial information with providers in a secure way. An open banking API means that the customers’ information stored at banks will no longer be “proprietary” and will finally belong to the account owners, not to the banks keeping those accounts. A minimal illustration of consent-scoped data sharing is given below.
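
As a minimal illustration of consent-scoped data sharing (not an implementation of any specific open banking standard), the following sketch checks whether a TPP’s request falls within the scope and validity period of the customer’s consent before any data is returned; all identifiers and data structures are hypothetical.

```python
# Minimal sketch of consent-scoped data sharing: before returning account data
# to a TPP, the ASPSP checks that the customer's consent covers the requesting
# TPP, the account and the data type, and has not expired. All identifiers and
# structures are illustrative assumptions.
from datetime import date

consents = {
    "consent-1234": {
        "customer": "C1",
        "tpp": "AISP-042",
        "accounts": {"NL91ABNA0417164300"},
        "scopes": {"balances", "transactions"},
        "valid_until": date(2020, 12, 31),
    }
}

def data_allowed(consent_id, tpp, iban, scope, today):
    """Return True only if the request is fully covered by a valid consent."""
    c = consents.get(consent_id)
    return bool(
        c
        and c["tpp"] == tpp
        and iban in c["accounts"]
        and scope in c["scopes"]
        and today <= c["valid_until"]
    )

print(data_allowed("consent-1234", "AISP-042", "NL91ABNA0417164300", "balances", date(2020, 6, 1)))  # True
print(data_allowed("consent-1234", "AISP-042", "NL91ABNA0417164300", "loans", date(2020, 6, 1)))     # False: not in scope
```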

Figure 1. PSD2 stakeholder relationships.

PSD2 is the second Payment Services Directive and is applicable to the countries of the European Economic Area (EEA). The directive aims to establish legal clarity and create a level playing field in the payments sector in order to promote competition, efficiency and innovation in the payments market. Furthermore, higher security standards will be introduced to protect consumers and customers making online payments.

The scope of PSD2 covers any party that offers payment services, for example FinTechs and other innovative new providers (also referred to as TPPs) and tech giants such as the GAFAMs (Google, Apple, Facebook, Amazon, Microsoft). Payment services include account information services and payment initiation services. TPPs include payment initiation service providers (PISPs), which initiate payments on behalf of customers, and aggregators and account information service providers (AISPs), which give an overview of customer accounts and balances ([EuCo17]).

In order to comply with PSD2, certain Regulatory Technical Standards (RTS) have to be complied with, specifically on Strong Customer Authentication (SCA) and Secure Communication. SCA makes payments more secure through the enhanced levels of authentication required when completing a transaction. There are, however, some exceptions to this rule, including but not limited to low-value and recurring transactions ([Adye18]). Due to the complexity of the requirements, the European Banking Authority (EBA) issued an opinion that the deadline for SCA for online card transactions should be postponed to 31 December 2020 ([EBA19a]), and National Competent Authorities adhered to this, as service providers – mainly retailers and PSPs – experienced implementation challenges across Europe. Riskified, a global payment service provider facilitator, performed a survey among participants from the UK, Germany, France and Spain and noted that nine out of ten retailers (88%) believe consumers are ‘somewhat’ or ‘very aware’ of PSD2, whereas more than three-quarters (76%) of consumers report that they have not even heard of PSD2, showing an imbalance between retailers’ and online consumers’ awareness ([Sing19]). Over the last few years, EU Member States have integrated PSD2 into local legislation in order to, among other things, issue TPP licenses. The total number of TPPs in the EEA is 326, with the UK leading with 158.

Figure 2. TPPs in the EEA in 2020 ([OBE20]).

In order to ensure that payment providers adhere to these regulations, supervisors have been tasked with monitoring compliance with PSD2, in certain cases in conjunction with one another. In the Netherlands, supervision is shared between four supervisors: De Nederlandsche Bank (DNB), which focuses on authorizations and licensing of payment service providers and acts as the prudential supervisor; the Dutch Data Protection Authority (AP), which focuses on the protection of personal data under PSD2; the Dutch Authority for the Financial Markets (AFM), which focuses on behavioral or conduct supervision of payment service providers; and the Dutch Authority for Consumers and Markets (ACM), which focuses on competition between payment service providers ([McIn20]).

The current payment landscape brings certain risks with it. In the following sections, we present the privacy, security and transaction fraud risks arising from PSD2 from the perspective of four parties: banks, customers, payment service providers and supervisors, with a focus on IT.

Risks

Privacy

PSD2 will allow customers to share their banking information and “sensitive personal data” with parties other than their bank. This raises questions about the privacy of customer data: can the movement of customer data be traced, and is it clear who has access to which data? Banks and PSPs will accumulate customer data and eventually process it, and should therefore be aware of the risks relating to the retention and processing of data, as well as of the need to comply with legislation such as the General Data Protection Regulation (GDPR).

Companies licensed as a PSP can access payment data from account holders. However, once customer information is obtained, these payment institutions need to protect it; otherwise they risk major fines of up to €20 million or 4% of global annual turnover ([Hoor17]). For a new PSP entering the market, the reputational damage suffered if customers are not comfortable with how their data is being used, or feel that their data has been “stolen” through unclear agreements, could be detrimental to its success.

Banks need to consider the impact of the interplay between PSD2 and the GDPR, as requirements may conflict. Banks will share banking information with relevant TPPs that hold a PSD2 license; however, under the GDPR, which came into force on 25 May 2018, banks also remain responsible for protecting the customer data they are obliged to share. If banks do not share data, competition authorities may intervene. Furthermore, data should only be shared for which explicit and current consent has been provided, to avoid unauthorized data sharing and the reputational damage that would follow ([Benn18]).

Customers need to give explicit permission before the relevant financial institution or PSP is allowed to access their payment information. The risk for the user is that they give permission too easily and too quickly, without considering the consequences ([Hoor17]). Customers cannot limit the amount of information that is shared with the PSP: all payment account information is shared, whether it is relevant to the payment service or not. This is a risk if the TPP uses customer data beyond its primary purpose or if data is stolen, because the banking history of customers could reveal information about other parties through combined customer information and buying trends, such as spending rates at specific retail institutions. Customers may therefore consent to sharing both their banking information and “sensitive personal data”, such as purchasing history revealing habits or perhaps sensitive purchases, which may not even be relevant to the PSP and risks ending up in the wrong hands ([PrFi19]).

Supervisors are expected to monitor the players in the payment landscape to ensure a safe and fair payment environment. The compliance requirements set out by regulators need to be adhered to in order to ensure this (see also the KPMG Point of View paper [KPMG20], which focuses on potential regulatory compliance risks arising from payments innovation). The major risk faced by regulators and supervisors is losing visibility of the different players, so that players are no longer held accountable for their actions, their customers and their payment information ([McIn20]).

Security

PSD2 brings potential threats, such as security risks in sharing data with third-party payment providers, the risk of fraud in the case of dishonorable third-party payment providers or hacked customers, and requests made via TPPs that may be susceptible to third-party fraud powered by malware or social engineering techniques; fraudsters could use TPPs as an obfuscation layer to confuse the banks’ fraud defenses. It is therefore important that TPPs are able to cope with these security threats and mitigate such risks. By nature, FinTech firms – new PISPs and AISPs – have little reputation at stake. This means they may be inclined to take riskier business decisions, or even involve themselves in misleading business activities.

First, banks must prepare their IT systems to cope with potential cyberattacks. It’s helpful to think of an API as a set of doorways, and behind every door is a different set of data: account balances, transaction history, the ability to make deposits/withdrawals, along with other customer information. In an ideal system, these APIs (or doorways) would only be accessible to trusted parties, with your knowledge of their access. However, banks have always been a target of criminal activity, and it’s not hard to imagine that there are those out there waiting to abuse these new access points to bank data ([EBA19b]). To prevent cyberattacks by hackers, robust authentication barriers need to be in place. For the banks, there needs to be concrete verification that the PISP is who it says it is. The way banks manage this will be crucial for investors as well as depositors. If banks are unable to develop a sound API infrastructure to become reliable Account Servicing Payment Service Providers (ASPSPs), their market share will be lost to FinTech firms.

Customers making use of the services offered by TPPs under PSD2 need to be aware of the security risks. Whereas customers used to place their faith in decades-old institutions with a long history of security, they will now be transferring that same trust to lesser-known third-party providers that do not have a long track record of combating fraud. Banks’ anti-fraud systems will have less data input to train computer models and spot fraud in real time, as their customers’ financial data will be spread across multiple companies. While customers are now more aware of the phishing techniques that cybercriminals used in the past, malicious actors will get new opportunities to trick banking customers. Cybercriminals could pretend to be the FinTech companies working with banks, and new phishing schemes are expected to emerge.

Additionally, the regulators are yet to establish effective methods of monitoring for the increasing number of smaller but significant players. This could reduce overall levels of compliance and make the market vulnerable to money launderers and fraudsters ([Hask]).

Transaction fraud risk

The market changes that we anticipate as a result of PSD2 will likely create new opportunities for fraud because banks will be required to open up their infrastructure and allow third-party providers access to customer account information. This will impact the visibility of banks when it comes to end-to-end transaction monitoring and will inevitably affect their ability to prevent and detect fraudulent transactions.

While the objective is to allow innovation and development of the payment services industry, the growing concern is that this provides criminals with possibilities to commit fraud and to launder money.

Three key fraud risks, as highlighted by the Anti Money Laundering Center (AMLC) ([Lamm17]), are:

  • potentially unreliable and criminal TPPs,
  • reduced fraud detection, and
  • misuse and phishing of data.

The first risk to consider is that of potentially unreliable and criminal TPPs. The entry into force of the PSD2 may lead to an increase in both local and foreign TPPs that are active on the Dutch payment market. If direct access is allowed, the ASPSP is also unable to verify if the TPP actually executes the transaction in accordance with the wishes of the payment service user. Furthermore, malicious persons who aim to commit large-scale (identity) fraud, can set up a TPP themselves to facilitate fraudulent payments. Customers may interact with these fraudulent TPPs e.g. by entering their details on fake websites or mobile payment apps. The criminal can then use this information to access information about the customer and/or make payments in the name of this customer.

The second key risk to consider is that of reduced fraud detection. PSD2 opens the payment market to new entrants who may not yet have gained any experience with compliance and fraud detection. There is a growing trend to accelerate payment transactions via instant payments, which also makes an accelerated cash-out possible. As the risk that fraudulent transactions are conducted successfully increases, so does the importance of adequate fraud detection at ASPSPs. Traditional financial organizations have so far enjoyed a bilateral relationship with their customers, which will change as TPPs enter the market with new services. PSD2 is bringing higher transaction volumes for banks, and more demand from customers for mobile payments and quicker transactions. Those increases result in more pressure being put on fraud detection systems, which, in turn, provides an obvious opportunity for businesses that sell fraud prevention technology. The window for investigations will be significantly reduced and banks will need to rely on automation and advanced analytics to mitigate the increased fraud risk ([PYMN18]).

The third risk to consider is the misuse and phishing of data. As outlined above, TPPs may be used as a way to unethically obtain confidential information, which could then be used to facilitate fraudulent transactions. For example, with PSD2 and the dynamic linking of authentication codes to the payment transaction details for remote transactions, phishing of authentication codes may become redundant, while the phishing of activation codes for mobile payment/authentication apps could become the new target ([EPC19]). A sketch of how dynamic linking binds an authentication code to a specific transaction is given below.
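
A minimal sketch of the dynamic linking concept follows: the authentication code is computed over the amount and payee of the specific transaction, so a code obtained for one payment cannot be reused for another. The key handling, message format and code length are illustrative assumptions, not the prescribed mechanism of any particular scheme.

```python
# Minimal sketch of dynamic linking: the authentication code is
# cryptographically bound to the amount and payee of the transaction.
# Key handling and message format are illustrative assumptions.
import hashlib
import hmac

shared_key = b"per-device-secret-provisioned-during-enrolment"  # illustrative

def authentication_code(amount_cents, payee_iban, challenge):
    """Compute a code over the exact transaction details plus a one-time challenge."""
    message = f"{amount_cents}|{payee_iban}|{challenge}".encode()
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()[:8]

# The code the customer approves for the genuine payment...
code = authentication_code(12_500, "NL91ABNA0417164300", "challenge-001")
# ...does not match the code the bank recomputes for a tampered payment.
tampered = authentication_code(912_500, "DE89370400440532013000", "challenge-001")
print(code, tampered, code == tampered)  # codes differ, so the tampered payment is rejected
```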

While the introduction of PSD2 facilitates the innovation of the payments sector, it poses key privacy, security and transaction fraud risks. The next section explores the considerations concerning mitigating these risks.

Mitigation of risks

While PSD2 is a directive brought into effect to stimulate innovation and development within the Payments sector, a number of risks arising as a result have also been identified. The following should be considered in order to reduce the risk of transacting under PSD2 regulation to an acceptable level.

To protect customers, the identified risks need to be appropriately mitigated through sound operational risk management practices by all the players involved (i.e. banks and third parties) that address the security, business continuity and robustness of operations, both in the internal systems of the different parties as well as in the transmission or communication between them. This is particularly challenging in the case of third-party players rather than regulated financial institutions, who often lack the risk management frameworks that are common practice in the banking sector, with detailed policies, procedures and internal and external controls. Financial institutions should develop and document an information security policy that should define the high-level principles and rules to protect the confidentiality, integrity and availability of financial institutions’ and their customers’ data and information ([Carr18]).

This policy is identified for PSPs in the security policy document to be adopted in accordance with Article 5(1)(j) of Directive (EU) 2015/2366. Based on the security policy information, financial institutions should establish and implement security measures to mitigate the information and communication technology and security risks that they are exposed to. These measures should include policies and controls in place over change management, logical security, physical security, operations security, security monitoring, information security reviews, assessment and testing, information security training and awareness and incident and problem management ([JTFP19]).

PSPs are expected to develop a security policy that thoroughly describes the measures and processes they have in place to protect customers against fraud. PSPs are expected to implement SCA processes for customers accessing their accounts online, initiating electronic payments or carrying out transactions through remote channels. As these activities carry a high degree of risk, PSD2 mandates PSPs to implement appropriate security processes to reduce the incidence of risk. Adopting appropriate SCA processes will promote the confidentiality of users and assure the integrity of communication between participants regarding the transactions taking place on any particular platform ([Adey19]).

The implementation of PSD2 will contribute to building new relationships and data partnerships between financial institutions, which helps protect customers’ interests and improve transactional oversight. To capitalize on the vast amounts of data being channeled through PISPs and AISPs, banks must, however, invest in technology that finds the patterns that indicate crime. PSPs need to share transaction data and intelligence through a central hub that is underpinned by the necessary legal permissions and security to ensure compliance with the GDPR. The risk of attack can be mitigated by following a sound API architectural approach, one that integrates security requirements and tools into the API itself. By adding more layers of fraud protection and authentication to APIs, banks could potentially integrate features such as access control and threat detection directly into their data-sharing offerings, allowing them to be proactive, rather than reactive, when it comes to securing APIs. A minimal sketch of this idea is given below.
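
A minimal sketch of this idea, under the assumption of a simple scope model and a basic rate limit as a stand-in for threat detection, could look as follows; the roles, limits and identifiers are illustrative only and not part of any open banking specification.

```python
# Minimal sketch of building access control and basic threat detection into
# the API layer itself. Roles, limits and identifiers are illustrative assumptions.
import time
from collections import defaultdict, deque

TPP_SCOPES = {"AISP-042": {"read:accounts"}, "PISP-007": {"initiate:payments"}}
RATE_LIMIT = 100       # maximum number of requests ...
RATE_WINDOW = 60.0     # ... per 60-second window
_recent_requests = defaultdict(deque)

def authorize(tpp_id, required_scope, now=None):
    """Return True only if the TPP holds the scope and stays within the rate limit."""
    now = time.time() if now is None else now
    # Access control: the TPP must have been granted the scope for this endpoint.
    if required_scope not in TPP_SCOPES.get(tpp_id, set()):
        return False
    # Very simplified threat detection: reject bursts above the rate limit.
    window = _recent_requests[tpp_id]
    while window and now - window[0] > RATE_WINDOW:
        window.popleft()
    if len(window) >= RATE_LIMIT:
        return False
    window.append(now)
    return True

print(authorize("AISP-042", "read:accounts"))      # True: scope granted, within limit
print(authorize("AISP-042", "initiate:payments"))  # False: scope not granted to this TPP
```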

All the parties involved, such as banks and TPPs, need to work on creating a risk-tolerant control framework, implementing the control objectives from the RTS guidelines and establishing specific payment-related control activities. Banks and TPPs should coordinate with each other, work together to standardize the approach and methodology, and discuss with their market competitors how to smoothen the overall process for consumers.

Conclusion

PSD2 has been put in place to stimulate the payments industry, creating innovation and broadening the market for payment service providers. As the services of TPPs rely on the use of sensitive personal and financial data, open the market to a greater number of competitors and depend heavily on the IT infrastructure of several parties, a number of risks have been identified. At the same time, a number of mitigating measures have been identified to reduce the overall operational risks around privacy, security and transaction fraud to an acceptable level. Careful consideration should, however, be given by all parties involved in payment services. Furthermore, regulators will take an active role in ensuring a safe and secure payment landscape as part of mitigating the risks identified in the market, by requiring certain controls to be in place at licensed organizations. Due to the dynamic nature of this industry and the rapid development of technology, we can expect the landscape of services, and therefore the associated risks, to develop at an equally rapid pace. With robust risk management strategies in place, there is an opportunity for the payment services community to revolutionize the industry and provide a wide range of innovative payment products and services.

References

[Adey19] Adeyemi, A. (2019, January 21). A New Phase of Payments in Europe: the Impact of PSD2 on the Payments Industry. Computer and Telecommunications Law Review, 25(2), pp. 47-53.

[Adye18] Adyen (2018, August 28). PSD2: Understanding Strong Customer Authentication. Retrieved April 30, 2020, from: https://www.adyen.com/blog/psd2-understanding-strong-customer-authentication

[Benn18] Bennett, B. et al. (2018, March 16). Overlap Between the GDPR and PSD2. Inside Privacy. Retrieved from: https://www.insideprivacy.com/financial-institutions/overlap-between-the-gdpr-and-psd2/

[Blea18] Bleau, H. (2018, October 3). Prepare for PSD2: Understanding the Opportunities and Digital Risks. RSA. Retrieved from: https://www.rsa.com/en-us/blog/2018-10/prepare-for-psd2-understanding-the-opportunities-and-digital-risks

[Carr18] Carr, B., Urbiola, P. & Delle, A. (2018). Liability and Consumer Protection in Open Banking. IIF.

[Craw17] Crawford, G. (2017). The Ethics and Financial Issues of PSD2: Demise of Banks and Other Risks. Moral Cents, 6(1), pp. 48-57.

[EBA18] European Banking Authority (EBA) (2018, July 18). Final Report on Fraud Reporting Guidelines under PSD2. Retrieved May 5, 2020, from: https://eba.europa.eu/sites/default/documents/files/document_library//Final%20Report%20on%20EBA%20Guidelines%20on%20fraud%20reporting%20-%20Consolidated%20version.pdf

[EBA19a] European Banking Authority (EBA) (2019, October 16). EBA publishes Opinion on the deadline and process for completing the migration to strong customer authentication (SCA) for e-commerce card-based payment transactions. Retrieved from: https://eba.europa.eu/eba-publishes-opinion-on-the-deadline-and-process-for-completing-the-migration-to-strong-customer-authentication-sca-for-e-commerce-card-based-payment

[EBA19b] European Banking Authority (EBA) (2019, November 29). Final Report on Guidelines on ICT and Security Risk Management.

[EPC19] European Payments Council (EPC) (2019, December 9). 2019 Payment Threats and Fraud Trends Report. Retrieved from: https://www.europeanpaymentscouncil.eu/sites/default/files/kb/file/2019-12/EPC302-19%20v1.0%202019%20Payments%20Threats%20and%20Fraud%20Trends%20Report.pdf

[EuCo17] European Commission (2017, November 27). Payment Services Directive (PSD2): Regulatory Technical Standards (RTS) enabling consumers to benefit from safer and more innovative electronic payments. Retrieved April 30, 2020, from: https://ec.europa.eu/commission/presscorner/detail/en/MEMO_17_4961

[Gruh19] Gruhn, D. (2019, September 30). 5 Things You Need to Know Right Now About Secure Communications for PSD2. Entrust Datacard. Retrieved April 30, 2020, from: https://www.entrustdatacard.com/blog/2019/september/five-things-to-know-about-secure-communications-for-psd2

[Hask] Haskins, S. (n.d.). PSD2: Let’s open up about anti-money laundering and open banking. Retrieved from: https://www.paconsulting.com/insights/psd2-lets-open-up-about-anti-money-laundering-and-open-banking/

[Hoor17] Hoorn, S. van der (2017, July 19). Betekent PSD2 een inbreuk op de privacy? Retrieved from: https://www.banken.nl/nieuws/20354/betekent-psd2-een-inbreuk-op-de-privacy

[JTFP19] JT FPS (2019, September 10). What are the new risks that PSD2 will bring and how to cope with them? JT International Blog. Retrieved from: https://blog.international.jtglobal.com/what-are-the-new-risks-that-psd2-will-bring-and-how-to-cope-with-them

[KeBe18] Kennisgroep Betalingsverkeer, NOREA (2018). PSD2.

[KPMG20] KPMG (2020). Sustainable compliance amidst payments modernization. Retrieved from: https://advisory.kpmg.us/articles/2020/sustainable-compliance.html

[Lamm17] Lammerts, I. et al. (2017). The Second European Payment Services Directive (PSD2) and the Risks of Fraud and Money Laundering. Retrieved May 5, 2020, from: https://www.amlc.eu/wp-content/uploads/2019/04/The-PSD2-and-the-Risks-of-Fraud-and-Money-Laundering.pdf

[Mant18] Manthorpe, R. (2018, April 17). What is Open Banking and PSD2? WIRED explains. Wired. Retrieved April 30, 2020, from: https://www.wired.co.uk/article/open-banking-cma-psd2-explained

[McIn20] McInnes, S. et al. (2020). Dutch Data Protection Authority investigates account information service providers. Retrieved May 1, 2020, from: https://www.twobirds.com/en/news/articles/2020/netherlands/dutch-data-protection-authority-investigates-account-information-service-providers

[Meni19] Menikgama, D. (2019, May 12). A Deep Dive of Transaction Risk Analysis for Open Banking and PSD2. Retrieved May 5, 2020, from: https://wso2.com/articles/2019/05/a-deep-dive-of-transaction-risk-analysis-for-open-banking-and-psd2/

[OBE20] Open Banking Europe (2020). Infographic on TPPs. Retrieved from: https://www.openbankingeurope.eu/resources/open-banking-resources/

[PrFi19] Privacy First (2019, January 7). European PSD2 legislation puts privacy under pressure. Privacy First demands PSD2 opt-out register. Retrieved from: https://www.privacyfirst.eu/focus-areas/financial-privacy/672-privacy-first-demands-psd2-opt-out-register.html

[PYMN18] PYMNTS (2018). As PSD2 Gets Off the Ground, Fraudsters Gear Up. Retrieved from: https://www.pymnts.com/fraud-prevention/2018/psd2-fraud-attacks-digital-payments-unbundled-banking/

[Sing19] Singer, A. (2019, December 24). Infographic: What Europe really thinks about PSD2. Retrieved from: https://www.riskified.com/blog/psd2-survey-infographic/

[Zepe19] Zepeda, R. (2019, October 27). PSD2: Regulation, Strategy, and Innovation. Finextra. Retrieved from: https://www.finextra.com/blogposting/18057/psd2-regulation-strategy-and-innovation
