
Dynamic Risk Assessment: KPMG meets VTTI

KPMG and VTTI joined forces to challenge traditional risk assessment models with KPMG's Dynamic Risk Assessment (DRA) approach. Traditional models, which assess risks based on their individual impact or likelihood, have been widely applied by many organizations. These models, however, often overlook the interconnectivity between risks, which could uncover additional assessment dimensions and more pertinent risk-mitigating strategies.

Introduction

In the rapidly evolving business landscape, organizations often face external challenges such as geopolitical developments, complex stakeholder landscapes, and the energy transition. Having a robust risk management strategy therefore becomes crucial. This article introduces a dialogue between Jennifer (JF), the Head of Governance, Risk & Assurance at VTTI B.V., and Ara (AH), from Governance, Risk & Compliance Services at KPMG.

The conversation explores the implementation and implications of KPMG's Dynamic Risk Assessment (DRA) at VTTI to manage strategic risks more efficiently. Unlike traditional risk assessment models, DRA creates an interconnected view of risks, allowing organizations to develop more effective risk-mitigating measures. The discussion covers the DRA process, challenges in execution, and how VTTI used the DRA report for risk mitigation and for improving its risk management capabilities, offering key insights for IA and ERM professionals and a unique viewpoint on the implementation and benefits of DRA. For further background on the Dynamic Risk Assessment methodology, see [Kris18].

The DRA is designed to identify and quantify the interconnectivity between risks, providing a more comprehensive evaluation of potential threats. VTTI performed the DRA to obtain an integrated understanding of its risk ecosystem, including strategic, external, and operational risks, enhancing its risk management capabilities. The process consists of four steps: risk identification, expert consultations, risk assessment, and reporting. Success factors include C-suite involvement, clear goal formulation, employing the right professionals, and effective expectation management. Challenges included aligning busy schedules and ensuring a common risk language. The process culminated in a report detailing risk impact, interconnectivity, likelihood, and velocity for strategic decision-making and a continuous risk dialogue.

Ara Hovsepjan (AH), manager Governance, Risk & Compliance Services KPMG: KPMG is the outsourcing partner for VTTI in Internal Audit. In preparation for the 2024 annual audit plan and further professionalization of risk management, we started discussions about conducting a Dynamic Risk Assessment (DRA).

Jennifer Feuerstacke (JF), Head of Governance, Risk & Assurance VTTI B.V.: At VTTI, we have been working with KPMG for a while, and we are always on the lookout for better practices and expert insights. After having heard of DRA, I was curious about the proposition and how it could help our organization to better facilitate the discussion about risk management.

Dynamic Risk Assessment at VTTI


What is KPMG’s Dynamic Risk Assessment?

AH: Traditional risk assessment models, which assess risks based on their individual impact or likelihood, have been widely applied by many organizations. However, these models fail to recognize the interconnections among risks, which may reveal additional assessment dimensions and more relevant risk-mitigating actions. In response, the Dynamic Risk Assessment (DRA) has been developed based on proven scientific modelling, expert elicitation, and advanced data analytics. DRA enables organizations to gain a deeper understanding of how risks impact different parts of the firm and, subsequently, to design more effective and efficient risk-mitigating measures.

Why did VTTI perform a Dynamic Risk Assessment?

JF: VTTI operates in a highly dynamic and constantly evolving business environment, facing a variety of external challenges and often subject to strict regulatory requirements. Examples of external challenges include the energy transition, complex stakeholder landscapes, and geopolitical developments. As the Governance, Risk & Assurance lead, my goal is to contribute to the delivery of VTTI's business objectives and enhance its resilience to risks. To achieve this, it is essential to firmly embed risk management in VTTI's daily activities.

The primary reason for executing DRA is to obtain a concise and integrated representation of VTTI’s risk ecosystem, including strategic and external risks, as well as more operational risks. By gathering insights and creating an interconnected view of risks, we can effectively address distinct risk clusters in the organization. The execution of DRA provides an opportunity to further enhance the professionalization of risk management activities and improve processes, recognizing the importance of evolving risk management tooling and enhancing our risk management capabilities.

The Dynamic Risk Assessment process

What does the Dynamic Risk Assessment (DRA) process look like?

AH: The Dynamic Risk Assessment process consists of four steps, divided into risk identification and risk assessment.

  • Steps 1 and 2: Individual Interviews & Workshop with Experts
    We start by identifying at least six experts for individual interviews to compile an initial risk list; selecting the right experts is key to achieving the best possible result. It is also crucial to determine appropriate risk scales with the client, as these underpin an accurate risk assessment. In step 2, we collaborate with a larger group of experts to validate and narrow down the risks, ultimately identifying a maximum of 20 strategic risks for the organization.
  • Steps 3 and 4: Risk assessment and Reporting
    In step 3, all experts will use the DRA survey tool to assess the identified risks based on probability & impact, connectivity, and velocity. Subsequently, we analyze the results and discuss them in step 4.
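The DRA survey tool and KPMG's analytics are proprietary, but the effect of adding connectivity to a classic likelihood-times-impact score can be sketched. All risk names, scores, and the aggregation formula below are invented for illustration and are not the actual DRA methodology; they only show how expert-elicited connectivity judgments can reorder a risk ranking.

```python
# Hypothetical sketch: ranking risks with and without interconnectivity.
# Scores are illustrative averages of expert-elicited ratings, not real data.

risks = ["Energy transition", "Geopolitical shift", "Cyber incident", "Talent shortage"]

# Expert-elicited scores on a 1-5 scale (averaged across experts).
likelihood = [4, 3, 3, 4]
impact     = [5, 4, 4, 2]

# connectivity[i][j]: assumed strength (0-1) with which risk i triggers risk j.
connectivity = [
    [0.0, 0.2, 0.1, 0.6],
    [0.5, 0.0, 0.4, 0.3],
    [0.1, 0.1, 0.0, 0.2],
    [0.2, 0.0, 0.3, 0.0],
]

def standalone_score(i):
    """Classic likelihood x impact score for risk i."""
    return likelihood[i] * impact[i]

def connected_score(i):
    """Standalone score plus the impact risk i can propagate to connected risks."""
    knock_on = sum(connectivity[i][j] * impact[j] for j in range(len(risks)))
    return standalone_score(i) + knock_on

ranking = sorted(range(len(risks)), key=connected_score, reverse=True)
for i in ranking:
    print(f"{risks[i]:20s} standalone={standalone_score(i):5.1f} "
          f"connected={connected_score(i):5.1f}")
```

In this toy example, a risk with a modest standalone score can climb the ranking once its knock-on effects on other risks are counted, which is the kind of insight the interconnectivity dimension is meant to surface.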


Figure 1. The four steps of the Dynamic Risk Assessment process.

What are the success factors for conducting a Dynamic Risk Assessment?

AH: Over the past few years of conducting DRAs, we have identified several success factors that proved crucial for achieving the desired quality and impact in every assignment. An essential factor in a successful DRA implementation is the involvement of C-suite management. They need to understand and support the importance of risk management processes in order to allocate the appropriate resources and budgets, facilitating effective DRA execution and realization of the outcomes. Coupled with this, clear goal formulation is essential: the objectives identified during the goal-setting phase should be clear, widely understood, and set expectations straight. Finally, navigating this intricate and continuous process requires the right professionals, with the skills and experience to fit seamlessly into the process.

JF: Expectation management plays a crucial role in performing an effective DRA, and this involves appointing a C-level sponsor to discuss and manage expectations around risk management. This level of involvement creates realistic expectations among participants, ensuring they are engaged, aligned, and committed to the expected results and outcomes. VTTI's work culture and behavior effectively support risk management. The importance of the tone at the top and organizational culture in making ERM a success cannot be emphasized often enough.

What were the challenges during execution?

AH: Each time, we find that one of the biggest challenges is freeing up the schedules of C-level executives to carry out the risk assessment process. The struggle intensifies when leadership needs these experts to focus their energy on other priorities. Overcoming this hurdle requires careful planning that balances the available resources against the level of risk the organization is willing to take.

JF: This is very recognizable. We faced a similar challenge with our team. First, selecting the right people to involve in the dynamic risk assessment process is crucial. Getting timely and thorough responses from everyone involved can then take time and effort, as not everyone shares the same priorities, which can lead to misunderstandings about timelines and unnecessary delays. To overcome this challenge, we focused on clear communication with all DRA participants, reminding them of the importance of their input and the criticality of timely responses for the process to be effective.

We also addressed the need for clear common definitions and language for risk assessment. This process required finetuning to ensure that everyone had the same understanding of concepts and alignment on the interpretation of risk scales. As part of the DRA, we made deliberate choices, and given the dynamic nature of our environment, we periodically revisit the process to ensure a common language and consistent alignment on scales.

What were the deliverables and next steps?

AH: After completing step 4 of the process, we handed over the report to VTTI. This report visualizes the entire strategic landscape using the four dimensions of a Risk Assessment: impact, likelihood, connectivity, and velocity. Jennifer, what did you do with the DRA report and what choices did you make after receiving the results?

JF: Our company operates with an open mindset and consequence-conscious decision-making process. We carefully evaluate the implications of our choices. We discuss the DRA key insights regularly, sometimes specific to a risk domain and sometimes broader. We also share essential points with the Audit Committee.

This ensures an ongoing risk dialogue and helps embed the conscious weighing of risk aspects in daily decision-making. As a risk facilitator, it is important to be aware of the overall landscape and how it impacts our business. Building connections that help people see, in their daily work, the interconnectivity of the risks featured in our DRA is a continuous process. It requires refreshing the discussion from time to time and evaluating whether things have changed.

Keeping the dialogue going is the essence. Especially the aspect of interconnectivity represents a mental shift for the organization since there was a tendency to focus on risks from a functional perspective. The insight was not so much on the individual risks, but more on the way they influence each other.

That also means taking the DRA and looking deeper into the risk ecosystem with more detailed risk analysis. For external or strategic risks, the approach differs from that for more preventable operational risks. That is where each risk owner needs to work on a suitable, cross-functional approach that goes deep into the organization and ensures the actions are relevant.

After the DRA, we started taking a fresh look at our existing risk & control matrices (RCM) to see where the interconnectivity aspects play a role and if we need to take a different approach to some topics. While we did the DRA at enterprise level, our team of experts also included operating site representatives, who now take the learnings of the process to their teams, giving a new impulse to the local site dialogues on risk.

Finally, we ensure we have the right assurance activities in place to close the PDCA (plan, do, check, act) circle. Risk-based auditing is part of the approach, along with focused actions aimed at enhancing awareness around specific themes.

Within VTTI, we recognize the importance of better risk dialogues and the necessity of addressing risk in a cross-functional manner. Sometimes, cross-functional communication is challenging, given everyone's full agenda and different perspectives on a topic. Still, dialogue is vital as it can reveal insights that are not visible on paper. It can also help identify the low-hanging fruit and promote a lean approach to processes. Expert elicitation as a concept ensures that participants with relevant knowledge take part in any given dialogue: for any topic, knowledgeable colleagues within the organization should be involved in the discussion.

Reference

[Kris18] Kristamuljana, A., Van Loon, B., Bolt, J., & Terblanché, A. (2018). Dynamic Risk Assessment: Above and Beyond the Hidden Structure of Interconnections between Risks. Compact 2018/2. Retrieved from https://www.compact.nl/articles/dynamic-risk-assessment/

From banking to compliance: what (not) to expect

In this interview, Jules van Damme discusses his recent transition from operational roles in international banking to Financial Economic Crime (FEC) Compliance with Patrick Özer and Jori van Schijndel. Based on his 30-year career in international banking, he addresses challenges and opportunities within FEC, the importance of change management, and the need for enhanced collaboration across banking departments. How can FEC draw lessons from operational banking to strike a balance between compliance and commercial opportunities?

Introduction

Can you explain your experience in banking and your career path?

I have been working in banking since 1991, with the last 30 years within international banking. In that time, I have held various positions: I started in IT, then worked in operations, finance, product control, market risk, and markets and treasury. These were always relatively short assignments of 3 to 4 years.

My role typically involved overseeing changes, such as introducing new products, launching new activities, or leading improvement initiatives. This allowed me to gain a deeper understanding of how a bank operates, providing me with a comprehensive view of the various departments. While my work primarily focused on the bank’s core functions, I also indirectly engaged with compliance matters.

Looking back, my role was a blend of interim management and consultancy. In addition to overseeing change projects, I also managed departments. What I particularly enjoy about change projects is working towards a tangible result. Once that goal is achieved, you can move on to the next challenge.

What can you tell us about your most recent switch?

In October 2023, I switched to Financial Economic Crime (FEC) Compliance. While I had dealt with FEC indirectly in the past, the focus on it was not as intense as it is now, largely due to increasingly stringent laws, regulations, and oversight.

FEC at Rabobank, like other banks and financial institutions, is attentive to developments among our customers. In the Netherlands, for example, online payments are increasingly made through Payment Service Providers (PSPs) such as Adyen and Mollie. Crypto is also a growing trend. In one of my projects, for example, I am working on managing these dynamic risks.

Another project I’m involved in, drawing on my practical experience in international banking, focuses on strengthening FEC controls within the international banking sector. In my current role, I can use my expertise to help detect and prevent financial economic crime, making a meaningful contribution to the client, the bank, and society as a whole.

From banking to compliance: observations and surprises

What struck you about the transition to FEC Compliance in terms of working methods, culture and communication?

The commercial and FEC sectors operate in separate worlds, each with its own terminology and perspectives. For example, when the commercial side refers to a “transaction,” they mean the agreement or contract, while FEC refers to the settlement or the actual execution of the payment. Although the same word is used, it’s often mistakenly assumed they are discussing the same thing, when in fact, they are referring to different aspects of the process.

Another difference is that the operational side often settles for an 80% solution, planning to address the remaining 20% later. In contrast, FEC typically strives for a 100% solution, possibly due to a lower sense of urgency or, more likely, the need to fully mitigate all risks. Any risks left unaddressed continue to pose a problem.

Based on my experience, fostering greater collaboration between FEC and the commercial side of the business is crucial. FEC staff often work in their field for extended periods and may lack in-depth knowledge of the broader banking business. Conversely, it can be challenging to attract individuals with a banking background to FEC positions. Ideally, there should be cross-fertilization and knowledge transfer, with professionals rotating between FEC and business roles to enhance understanding and cooperation on both sides.

FEC primarily requires logical thinking to identify risks, such as money laundering or fraud. If you understand business processes, interpreting these risks becomes straightforward. However, I’m still getting accustomed to the specific terminology used in FEC. While the bank is already heavy on abbreviations, FEC adds another layer, which can complicate internal communication between departments. For instance, those in Commercial may not know what a SIRA is (a Systematic Integrity Risk Analysis required by the Dutch central bank DNB to assess financial-economic crime risks). On the positive side, FEC terminology is standardized across banks, so I can easily discuss topics like SIRA with FEC Compliance departments at other institutions.

In retail banking, a lack of business knowledge is less problematic because the products are simpler and people are generally familiar with them through their personal banking experiences. However, in international banking, this issue is more significant due to the complexity of products, the global nature of operations, and varying regulations or processes across different foreign branches.

Can you give examples of areas of improvement for FEC departments, besides working on the knowledge gap?

I believe there is potential for better collaboration. In my experience, FEC often identifies a risk and seeks to address it independently, rather than consulting with other departments to find the best bank-wide solution. They tend to view FEC risks as solely their responsibility. However, many of these risks can be managed earlier in the value chain, though this can be more challenging to measure and demonstrate. While FEC can address risks at the back end, such as through transaction monitoring, it’s important to evaluate the cost/benefit ratio. Some residual risks may be minimal and could be accepted temporarily, or it might be more effective to implement controls earlier in the process.

One aspect that plays into this is risk tolerance. Who determines risk tolerance?

Risk tolerance is determined in several ways: it is driven by laws and regulations, by bank policy, and by who is (ultimately) responsible for what. At Rabobank, that responsibility is vested at board level, and Rabobank also has a board member solely responsible for FEC. Ultimately, risk tolerance is set by the respective board members (commercial and FEC).

Learning from banking: closing the knowledge gap

You mention that there is too little cooperation and cross-fertilization is needed. How can FEC and operations work better together?

Communication is key. For example, there are instances where the commercial business may assess risks as higher than FEC does. FEC, with its regulatory expertise, can better evaluate the implications from a regulatory standpoint. By sharing perspectives—explaining how each party views the risks, the potential impact on customers, and the regulatory intent behind specific rules—we can develop the most effective approach for both the customer and the bank. Collaborative learning between FEC and the commercial business will enhance our ability to serve both clients and the bank more effectively.

I think it would be good to involve FEC more, and earlier, in commercial consultations. That way, FEC gets a better feel for the business and customers, and the business better understands the rules. Early collaboration prevents later delays: we can explain potential FEC risks and determine together how to mitigate them. Not every theoretical risk requires a separate FEC control; sometimes an existing business control suffices, or the probability is too low. If certain products are offered only to a few blue-chip companies, for example, is it necessary to set up a separate FEC control? Or can the risk be covered as part of the product provision, or included in the periodic customer assessment already in place?

What could be the cause of the mismatch between theory and practice?

FEC issues and solutions vary between retail and international banking. For instance, cash-related risks are more pertinent in retail banking than in international banking. Retail banking deals with high volumes and standardized solutions, while international banking, with its fewer clients and complex products, benefits from personal discussions and tailored approaches. This necessitates a practical, client-specific approach, such as using knowledge of internal controls at particular clients to assess risks related to bribery involving counterparties.

A sense of urgency and commercial awareness

How does the difference in “sense of urgency” between business and FEC affect operations? For example, how is success measured?

Banks are bound by the rules of the law. FEC performs the “gatekeeper role” on behalf of the bank: preventing abuse of the financial system, such as money laundering and terrorist financing. This requires monitoring and enforcement of rules. The business also wants to comply with these rules, but it also sees the commercial opportunities and consequences of not acting on time.

In my view, the difference in approach lies mainly in compliance with rules and guidelines. Do we go for a 10 or is a 6 sufficient? By discussing specific cases with each other, we can bring both parties closer together. With understanding and knowledge of each other’s position and background, it is easier to find a jointly supported solution.

How would you raise awareness that FEC is an extension of the bank?

In my view, FEC activities should be integrated into the bank’s value chain. This involves evaluating each step in both the commercial and administrative processes for FEC risks and collaborating with the business to establish the necessary controls to mitigate these risks. By doing so, we can optimize processes for both the bank and its customers while ensuring compliance with regulatory requirements.

International challenges

How does a complex international domain deal with FEC activities?

The legal structure of an institution—whether it has a banking license, operates as a branch, or functions as a representative office—can significantly affect regulatory oversight. For instance, representative offices may be subject to less stringent supervision or, in some cases, may not be supervised at all. The size of the institution within a specific jurisdiction can also impact regulatory scrutiny.

From an FEC perspective, the risks are generally consistent across countries. Legislative and regulatory frameworks are increasingly harmonized, such as through the EU’s AMLR and AMLD6, which facilitate central management of FEC activities. However, central bodies must avoid the pitfalls of over-centralization. Local expertise remains crucial, especially when local legislation imposes additional requirements or when local regulators have specific expectations. For example, while EU and Dutch money laundering regulations are comprehensive, U.S. regulations, such as FinCEN’s 314(a) legislation, have additional local requirements that necessitate localized implementation due to confidentiality constraints. The challenge is to balance global and local approaches, ensuring compliance without unnecessary duplication.

Conclusion

What would you like to say to your readers?

Firstly, stay practical and avoid purely theoretical approaches to FEC. Secondly, focus on proactive risk management by raising awareness about FEC challenges within the business and collaborating on potential solutions. Thirdly, prioritize automation for large-scale or time-consuming manual controls to keep employees engaged with more complex tasks. For instance, implement E-KYC (Electronic Know Your Customer) to streamline and expedite verification processes. Techniques like text analysis and automated public source screening can help identify risk factors. It’s crucial to minimize friction in these processes and ensure that sharing information is easy for customers. Additionally, AI techniques currently used in retail transaction monitoring can be adapted for international banking, improving efficiency and effectiveness in monitoring.

The IT Reporting Initiative in the Netherlands

In this article, we outline the contours of the NOREA Reporting Initiative (NRI) ([NORE24]). This initiative arose because of the need to report and be able to account for IT controls in a standardized manner. A public consultation on the so-called “IT report” took place in March 2023 and responses are currently being processed ([NORE23b]). We describe the reason for this initiative and the evolution it has undergone over the past two years. Of course, we also discuss the content of the reporting standard. In addition to the “IT report,” an “IT statement” is also being considered. We will also discuss this in more detail.

Drafting a reporting standard is one thing, but its use is obviously something that must be demonstrated in practice. This is why we also outline the experiences that CZ gained during one of the pilots in which the reporting standard was applied.

Background

By now, it is evident that Information Technology (IT) is of paramount significance in virtually all organizations. IT is indispensable for maintaining financial accounts, and in numerous instances it plays a pivotal role in driving operational activities, ranging from administrative aspects of operations, such as the import and distribution of cars, to the control of production lines. IT also plays a crucial role in public tasks: think of the control of our flood defenses or the coordination of emergency services.

When IT plays a role in operations, it is often a means for an organization to achieve strategic goals and is partly what determines an organization’s valuation. In a critical scenario where an organization’s viability is contingent on IT infrastructure, a poorly maintained system reliant on a limited pool of individuals for expertise can substantially diminish the organization’s valuation. Conversely, a high-quality IT organization equipped to adapt swiftly to evolving circumstances would enhance the organization’s overall value.

What is striking is that there are many specific accountability requirements for organizations in IT, but there is still a lack of integrated accountability. A bottleneck emerges as a result of diverse reporting formats varying in depth and scope, resulting in redundancy, increased burdens, incomparability, and ambiguity for stakeholders.

Specific obligations exist in the areas of DigiD, ENSIA and, for example, NEN 7510. Regulators such as DNB and AFM have also instituted specific accountability obligations. Internationally, the SEC recently announced a cybersecurity disclosure obligation. This is the first obligation where public disclosure is expected. In addition, a specific part of IT control – namely IT risks related to the financial reporting process and the management of those risks – is also a regular part of the audit.

Book 2, Title 9, Article 393, Paragraph 4 of the Dutch Civil Code is an important piece of legislation when we talk about IT within the financial statement audit:

The auditor will report on his audit to the supervisory board and the management board. He will at least report his findings with respect to the reliability and continuity of automated data processing.

Traditionally, accountants have conducted audits of financial statements with a substantive approach, employing detailed checks and numerical analysis to ensure that the information aligns with the true and fair view intended by the financial statements. During this audit, the auditor will also gain insight into IT.

Increasingly, we are seeing auditors take a “systems-oriented” approach to the financial statement audit, making use of the internal controls that have been established around IT systems. This usually leads to a combination of a system-oriented and a substantive audit approach. The auditor may report to a limited extent on the reliability and continuity of automated data processing in the report to the board and those charged with governance. The focus is only on those systems that are relevant to the financial statements and to the extent they are in scope for the financial audit. In short, the information about the “quality” – if it can be defined at all – of the automated data processing is retrieved to a limited extent as part of the financial statement audit, while it may be pertinent to conduct this assessment in a broader context for various reasons.

The identification of the gap between the critical importance of IT in a broad sense, on the one hand, and the limited provision of information about IT to supervisory bodies, such as those charged with governance and possibly other stakeholders, on the other, led to the NOREA Reporting Initiative (NRI). The aim of the NRI is to systematically illuminate how an organization has structured its IT framework to ensure that IT actively contributes to the achievement of the organization's strategic objectives. This fits in with NOREA's manifesto "Towards a digitally resilient society" ([NORE23a]), which was presented in April 2023 to State Secretary for Kingdom Relations and Digitalization Alexandra van Huffelen and to Nicole Stolk, board member of De Nederlandsche Bank. The manifesto recommends external accountability for IT control within organizations, which would give a strong boost to accountability for IT.

To ensure uniformity, it was decided to develop a reporting standard. NOREA has taken the initiative and produced a first draft and incorporated feedback received. It is important to note that this reporting standard is still under development and has no formal status yet. It is also recognized that the responsibility for and management of such a reporting standard should not lie with the professional group of IT auditors, but with an organization more appropriate for this purpose. This has not been further concretized at this stage. This reporting standard provides guidance and also identifies topics to be described that, if explained, contribute to the purpose of the IT report.

The current NRI has gone through several developments in its inception, which makes sense considering the complexity resulting from:

  • a diverse landscape of types of organizations (large, small, national, international, IT-driven or not etc.);
  • pre-existing standards and standards frameworks;
  • the link to the financial statement audit;
  • public or private distinction (public organizations should be more transparent);
  • whether an organization is publicly traded (publicly traded organizations should be more transparent);
  • the sector in which an organization operates (external accountability plays more of a role in highly regulated sectors such as banking and healthcare);
  • different information needs of various stakeholders. Examples include understanding different aspects of IT, degree of depth, focus on past accountability or future-proofing et cetera.

One of the first issues was whether the reporting standard should include a standards framework with minimum desired internal controls and/or control objectives. It quickly became clear that such a uniform standards framework could not be established because organizations differ too much. In addition, several standards frameworks already exist on the market, and overlapping with them did not seem sensible. The NRI therefore deliberately does not include minimum required internal control objectives or internal control measures.

As development progressed, it became evident that the primary goal is not necessarily to generate an IT report for the general public. That idea soon ran into the understandable objection that an organization does not want to reveal confidential aspects of its IT organization. The NRI therefore now aims to produce an IT report primarily for the supervisory body, leaving it to that body to decide whether the report should be made public. The NRI does not include any obligation to publish an IT report; it primarily provides a reporting framework to help organizations understand the state of their IT.

Figure 1. NOREA Reporting Initiative development timeline.

What does an IT report in accordance with the NRI look like?

As described earlier, the IT report is not a standards framework with internal control objectives and/or measures. That does not mean it is unstructured. A design and structure have been chosen in line with the GRI Sustainability Reporting Standards ([GRI]). On the one hand, this provides a modular structure, built around six IT themes, to which other (optional) IT themes can be added later. On the other hand, the NRI outlines what needs to be reported for each theme, without requiring an explicit assessment of whether the current IT environment meets a particular standard or requirement such as DORA, the GDPR or the Cyber Resilience Act. As an illustration, GRI 418: Customer Privacy 2016 ([GRI16]) includes one reporting requirement without setting a standard: “report the total number of substantiated complaints received about customer privacy breaches, and the total number of identified leaks, thefts or losses of customer data”. By contrast, NOREA’s Privacy Control Framework (PCF), which contains 95 control measures, can be used by entities to determine whether privacy protection measures are adequate in relation to, for example, the GDPR, and can lead to a Privacy Audit Proof statement.

The IT report takes a broad look at the organization of IT. The NRI identifies two main sections: the first deals with general themes regarding the organization of IT, its governance and risk management; the second deals with specific themes that may be relevant to an organization.

Figure 2. Coherence of generic and specific themes ([NORE23b]).

In this process, the organization assesses six key IT themes by examining the existing level of IT control and juxtaposing it with the organization’s ambitions for each theme. The reporting standard pays specific attention to elements that are critical to an organization and can impact its stakeholders, including customers, suppliers, employees and other workers, regulators, investors and society. The standard currently identifies six themes:

  • Digital Innovation & Transformation;
  • Data Governance & Ethics;
  • Outsourcing;
  • Cybersecurity;
  • IT Continuity Management;
  • Privacy.

The reporting standard provides guidance by clarifying the scope of each theme and by describing specific reporting requirements, with associated specifications to substantiate them. It gives readers a cohesive and standardized approach through a common framework for IT reporting. Consequently, the report enables a consistent depiction of various organizations, fostering clarity and uniformity in presenting a comprehensive picture.

Outsourcing

To paint a picture of the elaboration according to the NRI, we describe the theme of “Outsourcing” below.

According to the standard, the management of outsourcing is generally addressed in Chapter 1 of the report, “Management of IT”, which outlines both the organization of outsourcing and the associated risk management under disclosure “MGT-1.1: IT organization and governance”.

There are also two more specific disclosures related to managing outsourcing:

MGT-OUTS-1.1 – The reporting organization shall report how it manages outsourcing using requirements and the context and scope of the outsourcing, in addition to “Management of IT topics”.

MGT-OUTS-1.2 – The reporting organization shall describe how it manages risks related to its outsourcing of processes and services in addition to “Management of IT topics”.

An organization that has determined that the outsourcing of processes and services is material is required by the standard to report how it is handled. The organization describes the impact of outsourcing on its own organization and on the supply and demand chain in which it operates.

An organization describes the establishment of outsourcing along three relevant disclosures as described in the standard:

  • OUTS-1 Outsourcing is governed and managed and the value and other overall objectives of outsourcing are monitored and evaluated.
  • OUTS-2 Candidate providers for outsourcing of processes and services are selected, evaluated (to determine preferred candidate) and services are contracted, implemented and (eventually) terminated based on identified requirements.
  • OUTS-3 The delivery of services is managed based on identified requirements, including the connections (interfaces and handovers) with the rest of the organization, and service management.

To ensure uniform reporting, the following requirements are embedded in the NRI:

  • OUTS-1.1 The organization shall report how it conducts ongoing oversight over its outsourcing portfolio, including the ongoing evaluation of the overall outsourcing performance against objectives.
  • OUTS-2.1 The reporting organization shall report on its processes, policies and procedures for the initiation, implementation and termination of outsourcing.
  • OUTS-3.1 The reporting organization shall report on its policies and procedures on the ongoing monitoring of the performance of outsourced processes and services. This includes responding to occurrences (e.g. incidents) and other service management aspects.

The NRI then provides further guidance for each disclosure to achieve a proper description. By way of illustration, below is an example of guidance in relation to the second disclosure:

OUTS-2.1e The organization could describe how it handles the following topics:

  • the (re-)transfer of assets and data;
  • documentation and archiving of the results of the termination efforts;
  • fulfillment of contractual, compliance and regulatory obligations.
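The hierarchy sketched above (theme, disclosures OUTS-1 to OUTS-3, reporting requirements OUTS-1.1 to OUTS-3.1) lends itself to simple tooling. The following is a hypothetical sketch, not part of the NRI itself: the requirement identifiers come from the standard as quoted above, but the data structure and helper function are our own illustration of how an organization might track which requirements a draft IT report still leaves open.

```python
# Hypothetical sketch: track NRI "Outsourcing" reporting requirements
# and list which ones a draft IT report has not yet addressed.
# Identifiers follow the NRI text; the helper names are illustrative.

OUTSOURCING_REQUIREMENTS = {
    "OUTS-1.1": "Ongoing oversight of the outsourcing portfolio",
    "OUTS-2.1": "Initiation, implementation and termination of outsourcing",
    "OUTS-3.1": "Ongoing monitoring of outsourced processes and services",
}

def missing_requirements(covered: set[str]) -> list[str]:
    """Return the requirement IDs not yet addressed in the draft report."""
    return sorted(set(OUTSOURCING_REQUIREMENTS) - covered)

# Example: a draft that so far only addresses OUTS-1.1
draft_covers = {"OUTS-1.1"}
print(missing_requirements(draft_covers))  # ['OUTS-2.1', 'OUTS-3.1']
```

A simple checklist like this mirrors how CZ, in the pilot described later, derived workshop questions from the disclosures per theme.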

At what level is reporting performed?

The IT report is prepared by the organization itself and is emphatically not an audit or assurance report, which, as is well known, is prepared by an independent external auditor. The organization describes its current situation at a single point in time, looking back 18 months and ahead 18 months. As a result, choices and ambitions are explained in the report.

The report, as mentioned earlier, does not describe the design, implementation and operational effectiveness [1] of controls, but aims to provide insight into the organization of IT to relevant stakeholders. There is an explicit distinction between the IT report and standards frameworks such as NIST or ISO 27001, and between the IT report and assurance reporting standards such as SOC 1, 2 and 3 and NOREA Guideline 3000. In addition, confidential details about, for example, cyber incidents have no place in the report.

The organization describes its IT organization and IT controls

The purpose of the NRI, as mentioned, is to provide insight in a standardized uniform manner into how an organization has organized IT and how IT contributes to the organization’s strategic goals. The organization’s management is the appropriate body to report based on the NRI. Of course, the organization can also use external parties for this purpose, but the basic principle is that the organization itself is responsible for preparing the IT report.

It is relevant to circle back to the earlier assertion that the NRI is not a standards framework. For example, the NRI does not require an organization to comply with NIST or ISO 27001/2. What the NRI does ask is to describe whether, and if so, what information security standard the organization meets or intends to meet. If there are specific accountability requirements within the realm of IT, such as DORA, BIO, or NEN 7510, they will be explicitly addressed. At the same time, if the organization does not have a formal information security standard and reports it as such, this is appropriate in the spirit of the IT report.

Providing assurance about the IT report

Drawing a parallel to financial statements, wherein an organization compiles financial information adhering to specific reporting standards and an auditor subsequently reviews these statements in accordance with auditing standards, a similar process can be applied to IT reports. Herein lies the opportunity for the internal auditor or an external accountant/IT auditor to scrutinize the IT report. In this context it is possible to issue an assurance statement on the IT report: an IT statement. An IT statement is an assurance report based on NOREA Directive 3000A and includes an opinion on whether the content of the IT report gives a true and fair view, providing additional assurance for third-party users. The fact that the reporting criteria were initially created by NOREA need not pose a problem for using them in an assurance engagement, provided the IT auditor establishes agreement with the responsible party that the criteria are appropriate.

An IT report could be included in the organization’s annual report. This is then complementary to other aspects, such as descriptions of various developments within or around the organization. Once more, we can draw parallels to CSRD/ESG/sustainability reporting, wherein the organization has the option to incorporate the IT report into the annual report.

Nevertheless, there are some snags to consider. First, the role of the auditor with respect to the statements in the IT report must be determined (will a separate opinion be prepared, or should the report be considered “other information”?). Another, more stubborn, aspect is the potential contradiction between the description of IT imperfections in the IT report on the one hand and an unqualified opinion on the other. There will be situations in which questions are raised about how certain statements in the IT report relate to an unqualified opinion on the financial statements, and the auditor may not always be able to offer the reader a compelling explanation for such apparent inconsistencies.

The NRI does not aim to make the IT report a part of the annual report. We believe it is too early for that at this time and that broader experience with the IT report should first be gained so that such considerations can be evaluated.

Pilot at CZ: creating an IT report

Over the past year, CZ has gained experience in preparing an IT report. CZ is part of NOREA’s NRI working group and, in that role, initiated an internal pilot, reporting its findings back to the working group. CZ was the first organization in the Netherlands to pilot the process of drawing an integrated picture of the themes of Digital Innovation & Transformation, Data Governance & Ethics, Outsourcing, Cybersecurity, IT Continuity Management and Privacy, and of preparing a related audit report.

Tom Verharen, a senior auditor in CZ’s Internal Audit Department (IAD), and Jurgen Pertijs, the IT audit manager in IAD, both played distinct roles in the preparation of the IT report and audit report.

Background

A confluence of circumstances led to CZ’s need for an integrated picture in the field of IT and the IT report. At the time, the CIO was relatively new to his position, and gaining insights into the organization’s IT environment was highly valued from his perspective. CZ also has an IAD with REs and a strong relationship with NOREA’s specialist working and knowledge groups, which gave CZ an early introduction to the NRI initiative. In addition, the report provided an opportunity to form a coherent picture of IT management, growth and ambitions.

Preparation and approach

The CZ Board of Directors commissioned a study and an IT report. The owner and person ultimately responsible for the report was the CIO. He helped determine the approach for arriving at the report and made an initial selection of the staff members who needed to be involved in producing it. CZ adopted a project-based approach in which information for the IT report was gathered through one workshop per theme. The senior auditor of the IAD supervised the entire project as process supervisor, providing substantive professional knowledge in the field of reporting.

Each workshop took two hours and was prepared in terms of content by the senior auditor from the IAD. Each workshop included pertinent stakeholders aligned with the specific theme. In the case of the “Outsourcing” theme, as discussed earlier in this article, key participants encompassed the procurement manager, supplier managers, and the infrastructure manager. During the workshop, all disclosures of the NRI were discussed. This included looking back and discussing ambition in that area. The workshops were supported by the secretariat to ensure proper recording.

“Based on the disclosures in the NRI, we formulated questions for each theme to provide a good format for the workshops.”

– Tom Verharen (CZ)

After the information was gathered in the workshops, the CIO department created a summary for each theme that was coordinated with relevant officials. These summaries together resulted in the IT report. The IAD provided support in this process to ensure that the report was issued in accordance with the NRI standard.

The entire project required about thirteen days of effort from the CIO department and about eleven days from the IAD. The project had a lead time of eight weeks and resulted in a comprehensive report that proved very valuable from multiple perspectives.

Audit on the IT report

Already during the design of the project, the IT audit manager of the IAD planned to audit the IT report as well, to provide the Board of Directors with more assurance on its content. To make this audit effective and efficient, CZ chose to conduct it during the project. An audit file was created, and during the workshops the IAD asked questions and requested additional documentation to determine the reliability of statements. The IAD confirmed observations from the IT report and supplemented them with observations from its own previous audits. The IAD issued an audit report on the IT report, and this, together with the report, was presented to the Board of Directors, Supervisory Board and Audit-Risk Committee. Conducting the audit required about six days of commitment from the IAD.

“Performing an audit on the drafting of the IT report was new to us. CZ’s IAD issued a report of factual findings.”

– Jurgen Pertijs (CZ)

The 2021-2022-2023 IT Report

The IT Report is structured along the six themes mentioned earlier. CZ has worked through all of these themes, looking back 18 months and ahead 18 months, in accordance with the reporting standard. All themes are described in the report based on the requirements in the standard, although in certain cases a choice was made regarding the depth of the description. In the case of cybersecurity, for example, certain details were deliberately left out of the report.

In addition to the six specific themes, CZ also chose to describe a number of general chapters, because in practice it turned out that there were a number of topics that were repeated in each theme. These include the strategy description, the general organization and a description of the design and operation of CZ’s internal risk management and control systems.

User experiences

The then newly appointed CIO responded positively to the report, appreciating the fact that it provided a comprehensive overview of the IT status at CZ in a relatively short timeframe. Furthermore, the report furnished him with a solid baseline measurement and an effective means of communication with stakeholders pertinent to his role. An additional advantage is that the IAD independently reviewed the content of the report.

It has been of added value to the Board of Directors and Supervisory Board that they could obtain an overall picture of IT in a single report written in clear language. While a significant portion of the information already existed in isolation, the IT report has consolidated it, allowing for a unified perspective. Through structured analysis of the IT themes, patterns have become discernible; for example, it is clearly noticeable that CZ’s role as an IT employer will be important to CZ in the future.

“The IT report really resonated well with the Supervisory Board; they appreciated the integrated picture.”

– Jurgen Pertijs (CZ)

The IT report is perceived by users as an enhancement to the test results communicated periodically in terms of the operation of general IT controls (GITCs). Whereas GITCs are more operational in nature, this report is much more tactical and strategic in nature because of the way the disclosures are prepared.

Experiences and lessons learned from the project

CZ reflects favorably on the “IT report” project. The NRI was not experienced as a straitjacket and enabled the CIO to present his story in a structured way. The CIO has chosen to continue issuing a report periodically; the specific form this will take has yet to be determined. CZ plans to produce an abbreviated version of the IT report in 2024.

Because of the positive experience, the IAD has chosen to shape its audit programs in the future along the report’s six themes.

In a subsequent report, in addition to the entire CIO office, business stakeholders as well as the risk management department will be further involved in the workshops. Business stakeholders own the primary process and are therefore also responsible for the use of IT in it. Risk management manages the risk management process, which also pays broad attention to IT-related risks. Experience has shown that these actors also play an important role in drawing up an integral picture of IT control.

The pilot at CZ has brought NOREA new insights. For example, the general chapters as defined by CZ have now become a permanent part of the NRI standard.

Conclusion

The NOREA Reporting Initiative is an initiative created to report and account for IT governance in an integrated and standardized manner. The initiative was developed because of the growing importance of IT in almost all organizations and the need to account for it internally or externally.

The NRI is a reporting standard that also allows for the provision of additional assurance on the fairness of such a report by an independent auditor (“IT statement”). The NRI is not a standards framework for minimum internal control measures, but it does require organizations to describe, for example, whether, and if so, what information security standards they comply with. The NRI aims to provide insight in a standardized way into how an organization has organized IT and how IT contributes to the organization’s strategic goals.

We believe that the NRI gives substance to the growing importance of IT in the functioning and future-proofing of organizations. The structure outlined by the NRI offers guidance in identifying the pertinent themes accurately, ensuring recognition by stakeholders when multiple IT reports are compared side by side. Organizations are enabled to report periodically in accordance with this standard, and because of the standardization, comparisons can easily be made between reporting periods. We can also imagine this standard being used in due diligence investigations; the standardization and recognizability would be a big plus for investors. In a broader sense, the standard can be used to perform a baseline measurement and, based on this, to define actions to achieve the desired level of ambition. Internal audit departments can leverage the standard to, for instance, systematically explore the aforementioned topics over a three-year cycle. This approach facilitates comprehensive discussions with management, utilizing the gathered information to explore how IT can effectively contribute to the organization’s overarching goals. Because of the standardization and the well-thought-out reporting standard, the application possibilities are, in our opinion, numerous.

When it comes to reporting requirements from various regulators, the pressure on organizations is high. Many organizations have to comply with specific laws and regulations, which takes up the available time of not only the second line but also the first line. In that light, ESG regulations will also require a lot of time from organizations in the coming period. In our opinion, the NRI will not further increase the compliance-related workload in the first line. Preparing an IT report in accordance with the NRI obviously takes time, but it reflects the existing situation, not prescribing which requirements an organization must meet. Certainly, the formulation of the IT report might prompt an organization to consider addressing specific facets related to IT control in a different or enhanced manner. However, such considerations arise from an internally driven impetus for change aimed at enhancing the overall organization.

We therefore see the NRI as a sound tool to provide insight to supervisory bodies and as a means for organizations to improve themselves. We believe it is currently too early for mandatory application. More experience should be gained with the standard in the coming period. Anticipating a heightened demand from supervisory bodies and investors for reporting on the IT environment, we recognize that the outlined standard serves as a robust foundation. However, its efficacy is greatly enhanced when complemented by an assurance statement, affirming the accuracy and integrity of the report. This step propels it beyond a mere self-assessment, amplifying its overall value.

NOREA’s NRI working group consists of approximately twenty people representing various organizations, including EY, KPMG, Deloitte, PwC, BDO, Mazars, ACS, TOPP Audit, Verdonck Klooster and Associates, ABN Amro, UWV and CZ. A full list is included in the version released for public consultation.

Notes

  1. The design of a control measure refers to the extent to which it covers an identified risk. Implementation refers to the actual functioning of the control measure at any given time, while operational effectiveness refers to the actual functioning of the control measure over a longer, often specified, period.

References

[NORE23a] NOREA. (2023, March 30). NOREA-Manifest Op naar een digitaal weerbare samenleving. Retrieved from https://www.norea.nl/nieuws/norea-manifest-op-naar-een-digitaal-weerbare-samenleving

[NORE23b] NOREA. (2023, March 31). Norea Reporting Initiative v0.11. Retrieved from https://www.norea.nl/uploads/bfile/6357a197-6fd2-4904-b43e-7a85e123cb59

[NORE24] NOREA. (2024). Werkgroep Reporting Initiative. Retrieved from https://www.norea.nl/organisatie/werkgroepen/werkgroep-norea-reporting-initiative

[Fijn23] Fijneman, R. (2023, January). IT governance report: food for thought and next steps. Board Leadership News KPMG.

[GRI] Global Reporting Initiative. (n.d.). Standards. Retrieved January 23, 2024, from https://www.globalreporting.org/standards/

[GRI16] Global Reporting Initiative. (2016). GRI 418: Customer Privacy 2016. Retrieved from https://www.globalreporting.org/standards/media/1033/gri-418-customer-privacy-2016.pdf

From regulation to reality: the DSA’s early impact on trust and online safety

Designated very large online platforms (including social media, marketplaces and classifieds) and online search engines were required to publish their first audit reports by 28 November 2024. The results show that a wealth of work has been done over the last few years to comply with the Digital Services Act (DSA), but online platforms cannot rest: we see “negative” assurance reports (with adverse and qualified opinions) for 18 of the 19 online platforms subject to the DSA requirements in the first audit year. This article discusses what the DSA has achieved since its introduction and how it will further shape online trust, safety, and the protection of users of these platforms.

Introduction

The European Commission (EC) has enacted several digital regulations over the last couple of years; one of the most influential is the Digital Services Act (DSA). The DSA is a frontrunner in global legislation aiming to provide a safer and fairer digital environment for online users, and is part of a broader legislative package (more than 120 laws and regulations) introduced by the EC in relation to the digital single market. The regulation introduces clear rules for online platforms, aiming to protect users from illegal content, misinformation, and harmful practices, while also ensuring fundamental rights are respected across all EU member states.

The DSA imposes cumulative obligations on (major) online intermediaries. The smallest set of obligations applies to all intermediaries, with gradually more requirements for hosting services, online platforms and, finally, very large online platforms (VLOPs) and very large online search engines (VLOSEs), which carry the largest scope. The latter are distinguished by the number of monthly users in the EU, with a threshold of 45 million. The EC initially designated 19 VLOPs and VLOSEs in May 2023, among them Amazon, Zalando, Facebook, TikTok, Bing, and Google Search. Over the last year, six more services have hit the VLOP/VLOSE threshold, bringing the total to 25 as of 31 October 2024.
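The cumulative scoping described above can be sketched as a small classification function. This is a hypothetical simplification: the 45-million figure is the user threshold named in the DSA, but actual VLOP/VLOSE designation is a formal decision by the European Commission rather than a pure user-count test, and the function and tier labels here are our own.

```python
# Illustrative sketch of the DSA's cumulative obligation tiers.
# Simplified: real designation is a formal EC decision, not a
# mechanical user-count check.

VLOP_THRESHOLD = 45_000_000  # monthly EU users, per the DSA

def dsa_tier(service_type: str, monthly_eu_users: int) -> str:
    """Roughly classify which cumulative DSA obligation tier applies."""
    if service_type in ("online platform", "online search engine") \
            and monthly_eu_users >= VLOP_THRESHOLD:
        return "VLOP/VLOSE obligations (largest scope)"
    if service_type == "online platform":
        return "online platform obligations"
    if service_type == "hosting service":
        return "hosting service obligations"
    return "baseline intermediary obligations"

print(dsa_tier("online platform", 50_000_000))
# prints "VLOP/VLOSE obligations (largest scope)"
```

The point of the sketch is the cumulative structure: each tier carries all obligations of the tiers below it plus its own.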

At this largest scope of requirements, VLOPs and VLOSEs must perform annual, thorough risk assessments to identify and address systemic risks associated with illegal content, fundamental rights, electoral processes, and user protection. They must also

  1. publish transparency reports detailing their content moderation activities and the implementation of measures to safeguard minors and vulnerable people;
  2. consider the design of their algorithms and recommendation systems when evaluating these risks; and
  3. contract an independent auditor to perform a yearly audit and publish the audit reports on their websites.

We can now see the results of the first round of audit reports that online platforms had to publish by 28 November 2024.

Where are we today?

While no fines have been imposed yet, the results of the audit reports show that significant work remains for the online platforms: for several obligations, the audit firms concluded that the measures in place are not yet sufficient.

In particular, we see online platforms struggling to implement all the controls necessary to meet the stringent standards set by the DSA. Platforms had limited time to prepare for the DSA audit, as the final version of the Delegated Act on performing audits was released during the first audit period. Moreover, platforms often had limited experience in implementing extensive control frameworks for the areas in scope of the audit, let alone in being audited on them. Most notably, the audit reports reveal that platforms find it difficult to demonstrate to their auditors how they have implemented measures around transparency reporting. Other areas of contention include recommender systems, notice-and-action mechanisms for content moderation, and the online protection of minors.

Where is the DSA already effective?

With the audit reports primarily accentuating the areas that can be improved, we also want to highlight areas where the DSA has already shown its effectiveness. Perhaps, as a user of online platforms, you have received emails about upcoming changes in terms and conditions, or you may have noticed that the top results of your search query are marked as sponsored advertisements. Both are demonstrations of increased public transparency, a direct result of the DSA’s transparency requirements. Online platforms have also been publishing transparency reports, directly accessible on their websites, in which they report on illegal content, content moderation activities, and the number of users on their platform.

One of the biggest changes that the DSA, amongst other regulations in the online trust and safety space, has brought about so far is a notable shift toward more compliance initiatives. For example, we see an increased priority for compliance within the board of management as well as in more operational teams, including engineering. The DSA mandates the establishment of a dedicated compliance function, which has led to the hiring of compliance officers to ensure adherence to the law. Consequently, second-line-of-defense teams are growing within these online platform providers. There is now more emphasis on risk management, with risk mitigation measures being prioritized and compliance becoming more embedded in processes, for example in design and development.

Conversely, a major challenge is to minimize delays in launching new or updated features and products. It becomes a careful balancing act: continuing to drive innovation while ensuring that risks are carefully considered. We have observed that the introduction of new online products or product features subject to EU acts such as the DSA, the DMA or the upcoming AI Act has recently been slowed down or postponed. This shift towards cautious and deliberate implementation indicates a maturing digital landscape that prioritizes online safety and trust.

A positive beginning with room for improvement

While risk management and compliance have moved up the priority ladder, complying with the DSA so soon after its adoption has proved challenging. Unlike in the financial industry, few prior national regulations existed for online safety. Moreover, the DSA can be interpreted in various ways; for example, online platforms can define for themselves how they interpret terms like “easy to access” and “user friendly”. So not only is the DSA new and difficult to adhere to on short notice, it is also not clear-cut in particular areas.

Additional guidance, the closing of EC investigations, new Delegated Acts, and future case law will all help bring the desired clarity to the more ambiguous areas. The EC is also expected to encourage online platforms to voluntarily become signatories to upcoming Codes of Conduct, for example on age verification to protect minors, safe online advertising, countering illegal hate speech, and combating the spread of disinformation. The Codes of Conduct are a practical tool the EC can leverage to pressure online platforms into implementing measures against the systemic risks posed by their services.

Another area we hope to see results in over the coming years is the requirement to provide researchers with access to data on the platform. This could, for example, help researchers contextualize DSA requirements by highlighting the social impact of the risk management decisions of online platforms.

All of these steps should lead to a more comprehensive framework for online safety and trust within the EU. 

The DSA is not the only legislation in this field

The DSA is not the only legislation on online safety within the EU or globally. The United Kingdom and Australia have both recently published online safety acts, and Ireland has introduced the Online Safety Code. We have also seen a global uptick in recent years of laws and regulations contributing to online safety in the privacy, competition, and artificial intelligence (AI) spaces.

On the privacy and data protection side, the EU General Data Protection Regulation has been in force since 2018, and the United States (U.S.) continues to see state-by-state privacy laws enacted (as the first U.S. federal privacy bill is still in the legislative process). For online platforms that act as gatekeepers, the Digital Markets Act (2022) is key in regulating digital platforms to ensure fair competition. Furthermore, the EU’s AI Act (2024) will be an important contributor to a more robust online safety legislative framework and, again, we see the U.S. introducing AI regulations of its own. Adding to this challenge, we see contradictory trends globally, where requirements that are mandatory in one jurisdiction, such as content moderation in the EU, could be relaxed or even forbidden in others, such as the U.S.

What will the future bring?

Drawing from our experience with the majority of the designated online platforms, we anticipate that over time, the maturity of compliance processes will increase and transition to a more routine mode of operation.

As the EC develops more guidance, implements more Delegated Acts, and concludes its investigations, and as Codes of Conduct are developed and converted under the DSA, case law is formed, and research on online safety is published, the specific requirements of the DSA will become clearer and the regulatory framework more defined. As a result, compliance scrutiny for online platforms will heighten, leaving less flexibility in the interpretation of the requirements in the law.

Conclusion

The DSA is a solid steppingstone towards a safer and more transparent digital online environment for users. However, it will take several years for the dust to settle and before we know what the DSA – and other online trust and safety regulation within and outside the EU – has achieved. So, while online users may experience a first wave of online safety and protection benefits, it remains to be seen whether the DSA will ultimately provide a significant overall benefit for internet users and society at large.

Shaping the synthetic society: What’s your role in safeguarding society against the systemic risks of AI?

In this article we explore the systemic risks of AI, identifying the various threats it poses to society at different levels of criticality. We complement this analysis with an overview of the interventions at society’s disposal to combat these threats. Reflecting on the (legislative) actions already taking place in the European Union, we identify which threats require most vigilance going forward. We conclude with recommendations on the roles that governments, organizations and citizens can play in further managing the systemic risks of AI.

Introduction

Microtargeted fake news around elections, incorrect data-driven fraud detection, and an epidemic of social media-induced anxiety amongst our youth. These are just a few of the examples that make it increasingly apparent that the risks of artificial intelligence1 (AI) go beyond mere incidents affecting a few unlucky individuals. These risks are not about to go away naturally. On the contrary, we are moving towards what we could call a “synthetic society” ([Sloo24]) in which the use of AI permeates more and more aspects of our daily lives. The road ahead holds many promises, but there are also clear and imminent dangers. Unfortunately, the current debate on AI risk often gravitates towards incidents instead of root causes and broader societal impact. Even when discussions turn towards “systemic risks”, they typically result in generic calls pro or contra AI usage2.

In this article we analyze in which ways the use of AI triggers significant societal harms, often through strengthening ongoing trends that are not new to society. In an effort towards gaining a deeper understanding, we propose a use case-based perspective on systemic risk, which discerns the different threats to society that are introduced or increased by using AI.

We will also provide an overview of possible interventions that may fully or partially address the threats, and compare them to the actions taking place in the European Union, particularly in the area of AI-related regulation.

Finally, we reflect on the roles that governments, organizations and citizens3 have to play in shaping the development and use of AI in a way that upholds human rights and dignity while fostering innovation.

Systemic risk defined

AI may create and exacerbate risks at different levels. We could visualize this as a pyramid of risk (Figure 1). At the bottom, the operational level, AI applications may prove to be unexplainable, biased or simply inaccurate. These are the concrete issues that affect individuals, groups and businesses and provide the juicy headlines about AI we often see in the news. At the top, AI may threaten our very survival as a species. This is the existential risk that is – so far – confined to the realm of spectacular science fiction. Somewhere in-between these categories lies the systemic risk of AI usage: the risk that AI undermines the functioning of society itself and impacts fundamental human rights. The demarcation between operational risk and systemic risk is admittedly gradual and blurry. This is especially true for big tech firms and governments. For organizations providing applications used in critical infrastructures or by large portions of the population, the shortcomings of an individual application may also constitute a systemic risk to society as a whole. Social media platforms such as Facebook or X (formerly Twitter) are concrete examples.

Figure 1. The pyramid of AI risk (source: authors).

While it may sound intuitively appealing, using the concept of systemic risk raises the question of what part of the “system” we call society is actually at stake. If we look at the recent legislation around the use of AI in the European Union, we find ourselves short of an answer. Systemic risk is only mentioned in the Digital Services Act (DSA)4, although interestingly enough it is not explicitly defined there. However, the texts in both the DSA and AI Act (AIA) provide sufficient basis for a definition in the European context. For the purposes of this article, we define systemic risk as all threats of large-scale infringement on the fundamental rights of citizens, either directly or by undermining the institutions and democratic processes that aim to guarantee these rights5.

AI: a systems technology that warrants intervention

The systemic risk of AI matters because AI is a systems technology6 on steroids. Due to its versatility and power, it combines characteristics of various previous major inventions: like nuclear power it can be very destructive, like electricity and the computer it can fundamentally change the way we produce goods and services, like alcohol and drugs it can be incredibly addictive, and like printing, telecommunications and the internet it can change the way we interact. The impact of AI on society could be immense and current developments in society give serious reason for concern. This means that we cannot simply leave the development of AI to self-regulation.

A further argument to try and actively guide the development of AI lies in the astonishing speed at which development takes place. Just think of the way in which the introduction of ChatGPT and other large language models shook the world in a matter of months7. This should urge us not to wait and see how history unfolds. By the time we are confronted with the outcomes, the societal mechanisms needed to keep AI development in check may already be broken beyond repair.

At the same time, it should be recognized that the development and use of AI cannot easily be controlled as it is virtual in nature. In essence, the only resources required are data, computing power and data science skills. Aside from complicating the question of control, this also implies that a complete ban of AI from society, or parts of it, is not a realistic solution to address systemic risk. This is without even considering the potential benefits that society may miss out on by choosing not to invest in the development of AI. In other words: AI is here to stay, it has a huge positive and negative potential, and as society we will have to find a way to deal with it.

Systemic risk unpacked: AI-reinforced threats at three levels

The systemic risks of AI do not exist in a vacuum. So far we have been talking about the use of AI as if it poses radically new challenges to society. This is not the complete picture. In fact, being a versatile systems technology, AI often impacts society by strengthening structural trends that are already ongoing due to other social, economic and technological factors. For instance, disinformation has a long history, and the emergence of deepfake technology adds a new and troubling dimension to it. We therefore approach the systemic risk of AI by looking at what we will call AI-reinforced threats8. The connection to broader sociotechnical challenges helps to see AI risk as more than a technical problem which requires a technical fix. It also helps to be realistic and realize that remedies to the risks of AI on their own are not likely to solve social challenges inherently tied to the state of society and human nature. To continue the abovementioned example: even if we are able to properly address the risks of deepfakes, disinformation will never be completely eradicated from society.

Each of these threats highlights a different risk to society, but they are explicitly not mutually exclusive. In fact, in practice they work simultaneously and may strengthen each other.

Figure 2. Overview of AI-reinforced threats (source: authors).

From a conceptual point of view the AI-reinforced threats to society work at three levels, as highlighted in Figure 2. First and most fundamentally, the use of AI can undermine our shared view of reality. If we are no longer able to discern facts from fiction, it not only makes us vulnerable in our personal lives, it also erodes the agreement within society about basic facts that underpin the social contract. The effects can be profound and may carry over to the other two levels discussed next. At this most fundamental level of shared reality, we discern two key threats.

Data-driven deception (disinformation and impersonation). Our belief in what is true and what isn’t has been used as a tool and a weapon since the dawn of humanity. In this arena, generative AI9 is a potential game-changer that enables low-cost generation of realistic, synthetic text, audio and video. This may severely impact the human ability to discern facts from fiction and fakes. The motives behind its use can be criminal, political, or part of hybrid warfare, targeting either individuals or society. To date, the impact has been largely focused on individuals, as seen in cases of advanced voice cloning scams or deepfake revenge porn. However, it is easy to see the disruptive potential of political or military uses of generative AI. For example, automatically generated images, audio or video could be used to incite racial conflict and violence or disrupt the military chain of command.

Machine-in-the-middle (digitally mediated society). The use of AI further increases manipulation risks at the digital “interfaces” between people and organizations. If our interactions primarily take place virtually, the operators of our digital communication channels and platforms essentially have the power to determine what we see and don’t see. This also means we are not necessarily looking at the same reality anymore. For example, targeted pricing on online commercial platforms based on profiling undercuts the basic principle of markets having a single price that coordinates actions within that market. AI allows such manipulation to take place on a personalized level, at scale. Other concrete examples include the manipulation of search engine results or the creation of filter bubbles in social media.

At the second level, the use of AI has the potential to shift the balance of power between and within societies. The ability to automate intelligent actions reduces the need for human resources to exert control over others. At the same time, AI introduces new opportunities for surveillance and the use of force, enabling a further concentration of power in the hands of a few. This increases the risk of exploitation. At this level, two distinct threats emerge.

The modern panopticon10 (mass surveillance). Like misinformation, mass surveillance and other privacy threats far precede the invention of AI. However, AI provides “Big Brother” with some powerful new tools. Through technologies such as pattern recognition and, more specifically, face recognition, AI enables mass surveillance to grow in both scale and scope. Ubiquitous profiling and monitoring may result in the ultimate surveillance society where everyone is being watched all the time. China’s social scoring system shows this is not just dystopian fiction. While mass surveillance is often associated with the state versus its citizens, it has equivalents in the context of workforce management and customer management. Both employees and customers of organizations are also at risk of extensive and invasive monitoring. Concrete examples include Amazon’s approach to worker surveillance ([Gru24]) and the Cambridge Analytica microtargeting scandal.

Autonomous armament (AI-powered weapons). AI enables far-reaching autonomy of both digital and physical weapons. One effect is a decreasing threshold to use violence, since the aggressor faces limited risk to incur the loss of human lives. Secondly, the opportunities of using AI in a military context may spark an arms race between nations or alliances. Domestic applications in the domain of policing, targeting the government’s own citizens, are also not unthinkable. While the image of an autonomous armed drone might be most closely associated with this threat, the development of virtual weapons such as highly automated hacking applications could also wreak havoc in digitalized societies.

At the third level, the use of AI impacts the fundamental human rights and well-being of large groups within society. The threats at this level may not directly affect the way we perceive the world or the power balance between actors in society, but their impact can be serious nevertheless. We identify five different threats at work here.

Human-off-the-loop (automated decision-making). The trend towards automated decision-making is as old as the invention of the computer. Where decision rules were traditionally defined manually at design time, AI takes automated decision-making a few steps further. By basing decisions on inferences from (large) sets of data, the possibilities of autonomous automated decision-making are greatly enhanced. This is not inherently problematic: when done right, automated decision-making can in many cases be an improvement over a fully manual process, as existing biases can be identified and accounted for. However, meaningful and proper means for human intervention, appeal and redress are not naturally guaranteed ([Eck18]). Combined with a continuous pressure for increased efficiency in most societies, we could end up in a world where there is nothing to be done after the “computer says no”. Concrete examples include the automated suspension of accounts at e.g. Microsoft ([Huls24]), automated reporting of child pornography by social media platforms, and the social benefits scandal in the Netherlands.

Statistical straitjacket (marginalization of all who deviate). Modern applications of AI are based on advanced statistics. As such, they inherently have the propensity to reproduce and even consolidate the status quo, including any undesirable or illegal biases. Secondly, they may not perform as well on subgroups with traits or behaviors that deviate from the norm. When insufficiently accounted for, this threat results in marginalization of statistical outliers. The statistical straitjacket takes many forms. We can see it at work in maltreatment of citizens with deviating backgrounds by government agencies ([PwC24] and [KPMG20]), but also in the difference in voice and face recognition accuracy between men and women.
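The subgroup effect described above can be made tangible with a small numerical sketch. The scores, labels and the 0.5 threshold below are entirely hypothetical, invented for illustration only: they show how a decision rule that is perfectly accurate on a majority group can misclassify half of a minority group whose scores are distributed differently.

```python
# Hypothetical illustration of the "statistical straitjacket":
# a threshold tuned on the majority group fails on a subgroup
# whose scores are distributed differently.

# (score, true_label) pairs; label 1 = should be approved
majority = [(0.2, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.7, 1), (0.8, 1)]
minority = [(0.1, 0), (0.2, 0), (0.3, 1), (0.4, 1), (0.45, 1), (0.6, 1)]

def accuracy(group, threshold):
    """Share of cases the rule 'approve if score >= threshold' gets right."""
    correct = sum(1 for score, label in group
                  if (score >= threshold) == (label == 1))
    return correct / len(group)

threshold = 0.5  # tuned on the majority group
print(accuracy(majority, threshold))  # 1.0: flawless on the majority
print(accuracy(minority, threshold))  # 0.5: half the minority misjudged
```

Note that the aggregate accuracy over both groups still looks acceptable (0.75 in this toy case), which is precisely why overall performance figures can hide the marginalization of statistical outliers, and why per-subgroup bias assessments matter.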

Digital dependence (dependence on technology). We already live in a society almost completely dependent on technology. The occasional incidents with vital digital or physical infrastructures prove this point time and again, with the global CrowdStrike outage as the most recent example ([West24]). The use of AI is likely to push this dependency even further as more complex mental tasks can and will be offloaded to machines. This could lead to deskilling beyond the critical level within the population, perhaps even for mental capabilities such as critical thinking. The 2009 crash of Air France flight AF447, caused by human error after an autopilot malfunction, is an example of the possible impact when technology fails us ([Wiki24]). The same type of continuity risk applies in cases where AI allows us to do things that are literally not humanly possible. A concrete example is provided by the securities market flash crashes resulting from errors in algorithmic trading systems ([Sorn11]).

Intellectual expropriation (threats to value creation from intellectual property). The concept of intellectual property (IP) and the definition of its scope and boundaries have always been a complex discussion. Even more than physical property, intellectual property is a social construct up for debate. The ongoing discussions around patent rights in the pharmaceutical industry provide a clear example. The remarkable capabilities of generative AI have reframed discussions around intellectual property, offering a fresh perspective on the issue. By ingesting very large amounts of data and using these to recreate or mimic original works of thought or art, generative AI enables the appropriation of value from other parties’ IP. Another example is search engines providing information to users without generating revenue for the source websites, a significant concern for media companies.

Digital addiction (threats to mental and social well-being). As with many of the previous threats, issues around addiction are not new to mankind. However, in the domain of digital services, regulation to prevent addictive effects – or even awareness of such risks – has largely been lacking.11 The use of AI opens up new avenues for digital providers to get users hooked on their services. AI enables hyper personalization of digital content, allowing the exploitation of human psychological weaknesses, tailored to the individual. Perhaps the most vivid example of this threat is the excessive smartphone and social media usage by children (and adults).

Mitigating AI-reinforced threats: a toolbox of interventions

While the breadth and scope of the societal threats posed by AI may seem daunting, society has several mitigating actions at its disposal. In the toolbox available we discern four modes of regulation that can be deployed to address the systemic risks of AI12, shown in Figure 3. Each of these modes of regulation acts on a different part of the AI market and lifecycle.

Cultural interventions. This category contains interventions that aim to affect values, norms and knowledge within society. It’s the domain of active citizens, opinion makers and influencers, NGOs and lobby organizations. It’s also the only category that does not rely on government action, although the government can play a facilitating role. The primary intervention here is the public debate itself, which is needed to establish the norms and values around AI use. In a sense this is an intervention preceding all others. It provides a common ground to set the political agenda for legislation and determine what kind of behavior is socially acceptable in AI. Secondly, education of the public can play an important role in reducing some of the AI-reinforced threats. An example is the National AI Course in the Netherlands. Knowledge of the strengths and weaknesses of AI helps to bolster resilience against exploitative use cases. Again, efforts to raise awareness and instruct the general public may be organized within society itself or can be facilitated by the government. Next, voluntary agreements, e.g. via covenants between stakeholders within society, can also play a role in regulating the use of AI without taking legislative action. A clear example is the growing number of schools that ban the use of smartphones during school hours. Finally, society itself can play a role in monitoring the behavior of governments and organizations alike via independent investigations and research by parties such as NGOs or labor unions.

Governance interventions. This category includes interventions affecting the market model for AI and the position of actors in these markets. The most direct intervention is completely prohibiting certain business models. For example, to combat excessive profiling of customers by organizations, the government could forbid any business model that revolves around paying with your privacy. Other interventions work by affecting the position and power of actors within society. The least intrusive option here is to strengthen the regulatory bodies charged with market oversight as a countervailing power to large organizations. Similarly, for publicly provided services democratic control over the executive branches of government could be strengthened by shifting power to institutions that have a supervisory task. A more drastic step would be to set up state-sponsored alternatives to the existing commercial offerings – such as the GPT-NL language model – leaving aside the question of feasibility for now. Finally, as an ultimate measure the power of large players in the market can be curbed through nationalization or forcing the divestiture of parts of these organizations.

Engineering interventions. This category comprises the interventions that act on the development process of AI applications. The goal here is to set rules that prevent flawed or unethical design decisions. First, a compulsory systemic risk assessment forces organizations to explicitly consider and address risk as part of the development and maintenance process. Setting mandatory design principles for AI applications could have a similar effect. Such interventions leave the details of the design itself to the developing party. More prescriptive interventions include setting specific technical standards, or minimal requirements regarding transparency and the assessment of bias13. Such standards can be imposed by the government or developed by market parties as a form of self-regulation.

Outcome interventions. This category contains interventions aimed at regulating which use cases for AI can be developed and managing the consequences of the use of such applications. The most fundamental intervention within this category is updating the legal framework itself, to account for the new societal challenges resulting from the rise of AI. Provided that an adequate framework is in place, the government may choose to put limitations on specific use cases in certain domains. The AI practices prohibited under the EU AI Act, such as certain facial recognition practices and social scoring systems, provide a clear example of this approach. Such limitations can be focused on specific vital areas in society, such as critical infrastructures and electoral processes. From a cross-border perspective, international treaties can be negotiated to control specific AI developments. However, the question of treaty enforcement will remain an issue.

Figure 3. Overview of interventions to address AI-reinforced threats (source: authors).

It goes without saying that substantial consequences for non-compliance are a prerequisite for any of the interventions that work through legislation to be effective. This includes both fines imposed by regulatory bodies and liability for damages caused to third parties due to reckless AI usage. Without such penalties, bad actors may cynically and rationally decide that non-compliance is the most profitable course of action. This is what happened, for example, in the domain of data privacy before the EU General Data Protection Regulation (GDPR) came into effect.

Choosing interventions: precaution versus non-intervention

Given the toolbox at our disposal, the next question is how to combine the interventions into an effective intervention strategy. This strategy may differ per threat, because of differences in the nature and severity of each threat. We need to recognize that both too much and too little intervention can be harmful to society. The threats at all three levels – shared reality, the balance of power, and fundamental human rights – must be addressed, while also considering the risk of stifling innovation and lagging behind other nations in the development of critical systems technology. At the very least we need some kind of criterion to determine in which cases the principle of precaution or non-intervention should prevail. While the principle of non-intervention fits the relatively free market economy in Europe, we have already argued that the risks to society might be too great to leave developments entirely to the market, and in some domains the EU has indeed adopted the precautionary principle.

We address the debate between precaution and non-intervention by essentially flipping the question on its head. To do this, we must first acknowledge that uncertainty and (moral) ambiguity are at the core of the debate around the use of AI. No one can reliably predict how AI technology will advance, no one can make an exhaustive list of the possible use cases, and no one knows how these use cases will play out in practice. More importantly, in the end many discussions around the use of AI are not clear-cut problems with a single right answer, but political and moral dilemmas that need to be agreed upon within society and translated into action, for example in the form of legislation. As the development of AI is taking shape, we need a continuous public debate and ongoing refinement of our interventions. In our society, the driving force behind the moral, political, and legislative processes is the liberal democratic constitutional state, which ensures safeguards for inclusive, pluralistic public debate and effective lawmaking. The answer to the question of intervention is therefore as follows: society should intervene swiftly and strongly in all AI-related developments that directly threaten the functioning of the democratic state itself.14

This approach does not provide all answers. Much can be debated about the definition of democracy and the point at which it ceases to function effectively. Nevertheless, our approach provides a clear direction. The threats at the level of shared reality and the balance of power clearly pose the most direct risk, given their fundamental and pervasive nature. They have the potential to end democracy internally via the election process or externally through war. For these threats the application of the precautionary principle is justified, including the use of more prescriptive interventions such as the prohibition of harmful use cases. Generally, for the other threats, we can afford to be somewhat more lenient, allowing for a degree of trial and error through market mechanisms.

Where do we currently stand in addressing AI-reinforced threats?

While we noted a lack of clarity and structure in the debate around the systemic risks of AI, this does not mean that mitigating actions are completely lacking. In the EU the AI Act, Digital Services Act (DSA) and Digital Markets Act (DMA) provide the clearest examples. These laws have been drawn up with the clear goal of contributing to the responsible use of AI in the interest of society. In Figure 4, we provide an overview per AI-reinforced threat that maps the criticality of each threat to the strength of the interventions we currently see taking place. We emphasize this is a high-level assessment only, with the sole purpose of identifying which threats could be prioritized for further action. The appendix contains a more detailed overview to support our analysis.

Figure 4. High-level assessment of the current interventions in place per AI-reinforced threat (source: authors).

Based on our observations, we can conclude that for most of the AI-reinforced threats we still face concerns regarding the effectiveness of the current interventions in place. For a number of threats, we even lack the consensus and norms to take proper action. Combining this with our previous discussion on the desirability of using the precautionary principle, we can see that the most concerning threats are those of disinformation and impersonation (data-driven deception), and autonomous weapons (autonomous armament). In these cases, fundamental risk combines with a lack of effective interventions. Mass surveillance (the modern panopticon) comes in third, as for this threat more safeguards are in place, especially regarding the role of the government. From a societal perspective these three AI-reinforced threats can be seen as the most urgent to address.

What to do next? Action points for government, organizations and citizens

Our analysis has provided a framework for thinking about the systemic risks of AI, the toolbox of interventions available to society, and a view on the priorities on the road ahead. The pivotal remaining question is: who should act? As primary stakeholders the government, organizations and citizens all have an important role to play.

Government

The government occupies a precarious position regarding the systemic risks of AI. It not only holds the legal authority to regulate AI usage but is also one of the most powerful entities capable of causing significant harm. Of course, the government is not a single entity, but a collection of institutions. Considering our topic, we distinguish between the parts of government involved in legislation and regulation and the executive branch of government. In the area of legislation and regulation, we suggest the following action points:15

  • Continually reassess the bans or moratoria already in place (e.g. via the AIA) in light of new AI developments. Insofar as a ban is not deemed feasible, consider developing specific standards and requirements for critical domains that are exempted from current legislation, such as national security.
  • Apply the precautionary principle to all developments related to AI-powered disinformation and impersonation and autonomous weapons. Ensure democratic institutions and processes are as robust to destructive forces as possible.
  • At the same time, specifically for autonomous weapons, ensure developments beyond national borders are closely monitored and acted upon. Treat AI capabilities as a key asset for strategic autonomy.
  • Update or clarify any legislation affected or rendered outdated by the advent of AI.
  • Curtail the power of market players when they become too dominant in one of the key (social) infrastructures and prove not to be amenable to regulation.
  • Require organizations to perform a thorough risk assessment throughout the AI lifecycle and include clear guidance on what is expected from such assessments.
  • Set and enforce standards for critical uses of AI. This can take the form of mandatory design principles or technical requirements.
  • Set up or support AI literacy programs to adequately inform citizens about AI.
  • Set up or strengthen the regulatory bodies to monitor compliance with AI-related regulation, and ensure fines and penalties are sufficiently high to have a deterrent effect.

The executive branch of government is where AI is being used in the services toward citizens. We suggest the following action points:

  • Ensure the effectiveness of AI use is proven before deployment, or at least require a sunset clause.
  • Be prepared to provide transparency over the use of AI.

Organizations

Organizations are fundamentally incentivized to act in the interest of their most important stakeholders, such as shareholders or supervisory bodies. However, this does not mean they have a passive role and can only be expected to act under pressure of laws and regulations. Proactively addressing the systemic risks of AI can make sense from a strategic perspective. Acting in the interest of society helps to pre-empt stringent and costly regulation. Furthermore, it can help to remain attractive to employees and customers. We suggest the following action points, both for commercial and non-commercial organizations:

  • Invest in co-developing industry standards and practices that both improve the overall quality of AI applications and pre-empt further legislative actions.
  • Obtain insight into the portfolio of AI applications in development and operation as a basis for ascertaining compliance with AI-related regulation.
  • Implement a lean but effective risk assessment process as part of AI application development, to ensure alignment with organizational goals and prevent blowback on ethical or legal issues later on. This process should cover technical, legal and ethical risks and dilemmas. With regard to systemic risk, the trends and threats described in this article may be considered as part of the assessment.
  • Establish fit-for-purpose AI governance practices, aligned to the risk profile of the organization’s AI portfolio. This includes topics such as data management, AI literacy, and monitoring of application performance.

Citizens

Citizens (and consumers) are often on the receiving end of AI mishaps and the harsh truth is that individually they may not be able to stand up against the state or a large organization. However, this does not justify a passive approach. In the end, it is the collective of citizens that defines public values and norms and (indirectly) guides the direction of government. It is a matter of getting involved and organized. Our suggested actions:

  • Get informed and contribute to informing others about AI, its applications, risks and possible mitigations at the level of the individual.
  • Get organized – be it via NGOs, unions, political parties or otherwise – to influence the public debate, government and organizations.
  • Take active part in the social, economic and ethical discussions that are needed to shape the values and norms that will determine what our AI-infused society will look like now and in the future.

Conclusion

In our analysis, we observed that AI has the potential to disrupt and destabilize our society in many ways. The systemic risk of AI is a multifaceted challenge that can best be understood in terms of the broader societal threats that are aggravated by the introduction of AI. These threats relate to basic human rights, the balance of power within society and even our shared concept of reality. However, we are not powerless against these threats. We have discussed the toolbox of interventions that can be deployed to counter the systemic risk of AI and we see that – in the European context – action is already being taken via legislation aimed at AI and digital services and markets. Not all threats are mitigated yet and we can reasonably expect many more AI innovations to introduce new societal challenges. Being able to respond to such challenges effectively and in line with human rights is of paramount importance. We should therefore be extra vigilant regarding those AI-reinforced threats that directly undermine our liberal democratic institutions. Shaping the future development of AI is a collective responsibility, and everyone has a role to play in this important endeavor.

Appendix – Observations on the current interventions per AI-reinforced threat

Table. Observations on the current interventions per AI-reinforced threat.

Notes

  1. “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment” ([OECD24]).
  2. The open letter in which a number of prominent AI experts called for a complete pause in the development of powerful AI systems ([Futu23]) is another well-known example of such generic reasoning combined with rather simplistic solutions.
  3. Throughout this article, we will not distinguish explicitly between people in their capacity as citizens within society and as consumers of goods and services.
  4. The DSA requires very large online platforms and search engines to perform a “systemic risk assessment” on their services.
  5. We note that this definition is only valid in the context of liberal democratic states. For a society under authoritarian rule, systemic risks will also be different.
  6. A versatile and unpredictable technology with a major impact on society ([WRR21]).
  7. ChatGPT reached 100 million monthly active users within 2 months ([Hu23]).
  8. The threats presented in this article are by no means a complete overview of all systemic risks to society; they are just an overview of those risks that are introduced or significantly amplified by the advent of AI.
  9. Simply put: generative AI models produce “content” such as text or images instead of predictions, although from a technical point of view this content is also a complex form of prediction based on the input of the model.
  10. In its original conception by Jeremy Bentham, the panopticon was an architectural design principle that induced self-regulation through the possibility of being supervised at any time. In the AI-powered version everyone is being supervised all the time.
  11. One exception is the recent EU Digital Services Act which requires very large online platforms to perform a systemic risk assessment specifically taking into account “negative consequences to […] physical and mental well-being”.
  12. Based loosely on Lessig’s “pathetic dot” theory ([Less06]) and adapted to be used on AI-usage as the object of regulation. Lessig discerned interventions via the market (here: governance), legislation (here: outcomes), architecture or “code” (here: engineering), and culture (here: culture).
  13. Examples of standardization in the domain of AI include the NIST AI Risk Management Framework and ISO/IEC 42001, although neither standard prescribes specific technical requirements.
  14. These arguments echo Karl Popper’s reasoning on protecting the open society ([Popp94]): the question is not who should rule, but how to ensure that unfit rulers can be peacefully deposed.
  15. Based on [Sloo24].

References

[Eck18] Van Eck, M. (2018). Geautomatiseerde ketenbesluiten & rechtsbescherming. Een onderzoek naar de praktijk van geautomatiseerde ketenbesluiten over een financieel belang in relatie tot rechtsbescherming. Retrieved from: https://pure.uvt.nl/ws/portalfiles/portal/20399771/Van_Eck_Geautomatiseerde_ketenbesluiten.pdf

[Eck24] Van Eck, M. (2024, February 16). Profilering en geautomatiseerde besluiten: een te groot risico? (in Dutch). Hooghiemstra & Partners. Retrieved from: https://hooghiemstra-en-partners.nl/profilering-en-geautomatiseerde-besluiten-een-te-groot-risico/

[EDPB24] European Data Protection Board (2024, April 17). Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms. Retrieved from: https://www.edpb.europa.eu/system/files/2024-04/edpb_opinion_202408_consentorpay_en.pdf

[Futu23] Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. Retrieved from: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[Gru24] Gruet, S. (2024, January 23). Amazon fined for ‘excessive’ surveillance of workers. BBC. Retrieved from: https://www.bbc.com/news/business-68067022

[Hu23] Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base – analyst note. Reuters. Retrieved from: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

[Huls24] Hulsen, S. (2024, April 2). Microsoft blijft accounts blokkeren zonder uitleg, ondanks nieuwe regels (in Dutch). RTL Nieuws. Retrieved from: https://www.rtl.nl/nieuws/artikel/5441925/microsoft-blokkeert-zonder-uitleg-account-experts-overtreding-dsa

[KPMG20] KPMG Advisory (2020, July 10). Rapportage verwerking van risicosignalen voor toezicht Belastingdienst (in Dutch). Retrieved from: https://www.rijksoverheid.nl/documenten/kamerstukken/2020/07/10/kpmg-rapport-fsv-onderzoek-belastingdienst

[Less06] Lessig, L. (2006). Code version 2.0. Basic Books, New York. Retrieved from: https://commons.wikimedia.org/wiki/File:Code_v2.pdf

[OECD24] OECD. (2024, March). Explanatory Memorandum on the Updated OECD Definition of an AI System. OECD Artificial Intelligence Papers, No. 8, OECD Publishing, Paris. Retrieved from: https://doi.org/10.1787/623da898-en

[Pete24] Peters, J. (2024, August 22). How the EU’s DMA is changing Big Tech: all of the news and updates. The Verge. Retrieved from: https://www.theverge.com/24040543/eu-dma-digital-markets-act-big-tech-antitrust

[Popp94] Popper, K., & Gombrich, E. H. (1994). The Open Society and Its Enemies: New One-Volume Edition (NED-New edition). Princeton University Press. Retrieved from: https://doi.org/10.2307/j.ctt24hqxs

[PwC24] PwC Advisory N.V. (2024, January). Onderzoek misbruik uitwonendenbeurs (in Dutch). Retrieved from: https://open.overheid.nl/documenten/dpc-97a155051e66b292ef3cc5799cb4aef61dcbf46b/pdf

[Sawe24] Sawers, P. (2024, June 14). Meta pauses plans to train AI using European users’ data, bowing to regulatory pressure. TechCrunch. Retrieved from: https://techcrunch.com/2024/06/14/meta-pauses-plans-to-train-ai-using-european-users-data-bowing-to-regulatory-pressure/

[Sloo24] Van der Sloot, B. (2024). Regulating the Synthetic Society. Hart Publishing, Oxford. Retrieved from: https://www.bloomsburycollections.com/monograph?docid=b-9781509974979

[Sorn11] Sornette, D., & Von der Becke, S. (2011, August). The Future of Computer Trading in Financial Markets – Foresight Driver Review – DR 7. Government Office for Science. Retrieved from: https://assets.publishing.service.gov.uk/media/5a7c284240f0b61a825d6d18/11-1226-dr7-crashes-and-high-frequency-trading.pdf

[West24] Weston, D. (2024, July 20). Helping our customers through the CrowdStrike outage. Official Microsoft Blog. Retrieved from: https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/

[Wiki24] Wikipedia. Air France Flight 447. Retrieved August 30, 2024, from: https://en.wikipedia.org/wiki/Air_France_Flight_447

[WRR21] Wetenschappelijke Raad voor het Regeringsbeleid. (2021). Opgave AI. De nieuwe systeemtechnologie. WRR-Rapport 105, Den Haag. Retrieved from: https://www.wrr.nl/binaries/wrr/documenten/rapporten/2021/11/11/opgave-ai-de-nieuwe-systeemtechnologie/WRRRapport_+Opgave+AI_De+nieuwe+systeemtechnologie_NR105WRR.pdf

A smart contract taxonomy

This study posits the existence of four distinct variations within smart contract technology and proposes a taxonomy to organize and categorize these types. We will discuss three practical applications of this technology, showcasing how these examples illustrate the categories outlined in the proposed taxonomy. This taxonomy can serve as the foundational basis for constructing a risk analysis framework.

Introduction

This contribution focuses on smart contracts and explores one central question: which types of smart contracts must be distinguished? While the views presented here are based on academic research done in a legal context ([Vers23]), the definition of ‘smart contracts’ that this contribution maintains is purely technical: smart contracts are immutable computer programs that run deterministically in the context of a blockchain platform (see [Anto19]). The legal aspects of such technology are relevant nonetheless, as smart contracts might be used in a manner that creates a considerable legal impact. Proposed practical applications of this technology concern transactions, transfers, or administrations of rights, interests, or entitlements that users rely on. To create an environment in which such reliance is justified and protected, potential users of this technology ought to evaluate whether blockchain and smart contract technology can indeed produce the legal effect essential for their specific business case. Even for those business cases in which the technology might not perform a legal function, there might still be a legal risk. If the technology is used by an organization, it replaces software applications that might fulfill the same function but operate in a fundamentally different manner. This could, as is often touted, be cheaper, faster, or more reliable, but might also expose the organization to novel legal risks. An understanding of how blockchain and smart contract technology functions, how it is used in a specific organization, how it differs from the more traditional solutions that it replaces, and any interactions it may have with the relevant organizational context, will help mitigate any such future risks. A framework that outlines the effects, impact, and risks of this technology provides the guidance necessary for this: a smart contract taxonomy could form the basis of such a framework.

One important preliminary observation must be made: smart contracts are, despite their rather unfortunate name, not legal concepts. They are technological concepts. Therefore, any analysis of such concepts must, at the very least, pay due attention to their technological underpinnings and practical applications. Considering the above, this contribution will take four steps. First, a general overview of blockchains and smart contracts will be given. Secondly, the different types of smart contracts will be outlined. In this section, we will pay attention to types of smart contracts that might enjoy legal relevance. This is pivotal for those wishing to use this technology in a context where transactions are made in a manner that is enforceable and provides legal certainty for themselves, their partners, or their clients. Subsequently, in the third part, the practical impact of this taxonomy will be illustrated. This illustration will provide insights into the extent to which this technology is sufficiently mature and provides sufficient added value for organizations. Lastly, in the final paragraph, we will present evolutions and applications of this technology in the context of which this taxonomy might be used. The overarching purpose of this contribution is to provide an overview of the types and uses of smart contracts and to provide guidance on how a taxonomy based on those types could be used by those considering adopting this technology.

Background and technology

Smart contract technology is rooted in a rather radical context. The initial proposal for smart contracts was published in Extropy, a journal that describes itself as a ‘Journal for Transhumanist Thought’ ([Szab96]). The decision to publish in this journal suggests a particular ideology, that of transhumanism. Central values of this ideology are ‘boundless expansion, self-transformation, dynamic optimism, intelligent technology, and spontaneous order’ ([More93]). These values suggest that the underlying ideology is effectively a rather extreme variation of techno-liberalism. The principle of ‘spontaneous order’ especially makes this clear. Some have described this as ‘[an idea] distilled from the work of Friedrich Hayek and Ayn Rand, that an anarchistic market creates free and dynamic order whilst the state and its life-stealing authoritarianism is entropic’ ([Thwe20]). Such concepts were popular in the community that laid the groundwork for the technology that is in focus here. Known also as ‘crypto-anarchists’ or ‘cypherpunks’, the goal of this community was to develop technology that would enable economic and social conduct in a privacy-conscious manner and outside the reach of governmental authorities ([Ande22]). The efforts of this community have played a pivotal role in the technological developments that have ultimately culminated in blockchain-based smart contract platforms. As a result of this, the principles adhered to by this community are ingrained in the technology to this very day.

The extent to which this is the case becomes clear when blockchain-based smart contract platforms are compared to more classic technological solutions that might be supplanted or supplemented by this technology. Such technology might include, for example, online marketplaces, supply chain management tools, and payment solutions (see [Thol19] and [Reve19]). Blockchain and smart contract technology distinguishes itself from these classic solutions through five key aspects: the first three are a result of blockchain technology, whilst the last two result from the smart contract capability that some platforms might have.

First, blockchain platforms are, in principle and up to a certain extent, immutable. This means that no single party or group of parties can alter the state of information on the platform. This immutability applies on both a transaction and a recordation level. The former is a result of the public-key cryptography that is foundational to the platform, whilst the latter results from the way distributed consensus regarding the state of information is reached among the parties on the platform ([Anto17]). Secondly, the platform is transparent. A certain degree of transparency is necessary as the state of information on the platform is maintained by the parties collectively. This means that, rather than relying on a single centralized party charged with maintaining the state of information, the parties do so collectively. To perform the task necessary for this, certain information contained within and regarding the transactions needs to be available to the parties. A certain degree of transparency is therefore inherent to the system. This transparency, however, is not absolute. These platforms are built on a system based on public-key cryptography, which means that parties operate on these platforms using their public key. This public key therefore functions as a pseudonym. Examining the transparent platform can yield a wealth of information regarding transactions, including details such as the sender, recipient, value, and time. However, the cryptographic foundations of the platform do shield the identity of the natural persons behind the public key. Therefore, the third key aspect is pseudonymity.
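
The recordation-level immutability described above can be illustrated with a toy hash chain. This is a deliberately minimal sketch, not a real blockchain implementation; the function and field names are our own, and signatures and consensus are omitted entirely.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents, including the previous block's hash, so that
    # altering any earlier block invalidates every later link in the chain.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {"prev_hash": block["prev_hash"],
                "transactions": block["transactions"]}
        if block["prev_hash"] != expected_prev or block["hash"] != block_hash(body):
            return False
    return True
```

Tampering with a transaction in an early block changes that block's hash, so validation of the chain fails from that point onward, which is the sense in which the record is immutable. Note also that the parties appear only as public-key-style identifiers, illustrating pseudonymity.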

Some blockchain platforms provide features that go beyond merely maintaining a record of past transactions. Such platforms provide the option for persons to program on the platform. If the programming that such a platform enables is sufficiently flexible and allows for sufficient complexity, it becomes possible to create entire software applications on that platform. Compare, for example, the Bitcoin blockchain with the Ethereum blockchain: where the Bitcoin blockchain is designed to transact with a cryptocurrency and, in light of this purpose, enjoys very limited programming capabilities on the platform itself, the Ethereum platform is designed from the ground up to enable the creation of decentralized applications. The Ethereum platform therefore incorporates a Turing-complete programming language that enables the creation of full software applications ([Bute13]). The term ‘smart contracts’ precisely denotes these software applications. This illustrates why smart contracts are technical concepts and not legal concepts (see on technology also [Weer19]).


Figure 1. Technology overview.

Smart contracts are, in other words, code that exists on a blockchain platform: if the platform allows for sufficient complexity and flexibility, it becomes possible to program that smart contract code into software applications, also referred to as smart contracts (see Figure 1). Smart contracts are therefore pieces of software rather than legal agreements. As a result of their software character, the conditions contained within their code are executed automatically and independently of any human action. Moreover, smart contracts exist on the same platform as the assets transacted with, and the records modified through, the smart contract. This means that the smart contract can interact directly and immediately with those assets or records. No (third) party is required to give effect to the predefined consequences stipulated in the smart contract. Consequences stipulated in the smart contract are, in other words, automatically enforced when the conditions are fulfilled. Hence, automatic execution and automatic enforcement are the final two characteristics introduced by smart contract technology.
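
Automatic execution and enforcement can be mimicked in a few lines of Python. This is a toy model under our own assumptions, not any real platform's API: the key point is that the contract code holds the very balances it modifies, so fulfilling the condition enforces the consequence without a third party.

```python
# Toy model of a smart contract: the contract holds the ledger entries it
# governs, so payment is enforced the moment the condition is fulfilled.
class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.balances = {buyer: amount, seller: 0}  # assets live "on-chain"
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False

    def confirm_delivery(self) -> None:
        # Condition fulfilled: the consequence executes immediately and
        # automatically; no intermediary effects the transfer.
        self.delivered = True
        self.balances[self.buyer] -= self.amount
        self.balances[self.seller] += self.amount
```

In a traditional setup, the software would merely record that payment is due and a bank or clerk would effect it; here, recording and enforcing coincide.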

A smart contract taxonomy

The purpose of a smart contract taxonomy is to organize the different variations of the technology that are currently being developed. Doing so provides a structure that can be used as the foundation of a more elaborate framework on the basis of which the legal risks created by this technology can be mapped out. The taxonomy distinguishes four types of smart contracts. It should be noted that a very similar taxonomy has been adopted by the European Law Institute as well ([ELI23]).

Type-1 smart contracts: software as a self-executory agreement

The first variation of smart contracts describes a piece of software in which the offer and acceptance coalesce. The relationship between the parties who transact by way of the smart contract is therefore governed by the smart contract ([Werb21]). It has been suggested that, in such a situation, the code might effectively ‘be’ the legal agreement as it constitutes the externalization of the parties’ consensus and proof of the content of the rights and obligations between the parties ([Tjon22]). Situations where this might be the case could, for example, be found in the context of decentralized finance (or ‘DeFi’). Think of, for example, platforms that enable parties to provide digital assets as security for a loan. Such platforms require smart contracts that stipulate and enforce the rights and obligations that the loans and securities require. If that smart contract is the sole instantiation of the agreement between the parties, that smart contract must be treated as defining the legal relationship. In such a case, the smart contract could be equated to the legal agreement.

Type-2 smart contracts: mere code

At their very core, smart contracts are nothing more than software. They are technological concepts rather than legal concepts. The great majority of smart contracts are just that: mere code. If such smart contracts do not fulfill any function that has legal relevance, they are just software. This could be the case, for example, when a smart contract determines when a container leaves a ship that has entered a certain port. Such smart contracts might fulfill a pivotal function in a software suite but are of no legal relevance. These smart contracts are referred to as the second type of smart contract. Most smart contracts fall in this category.

Type-3 smart contracts: executory tools

The third variation in the taxonomy describes a situation in which a smart contract is distinct from a legal agreement, yet remains potentially legally relevant. In these situations, the smart contracts exist on-chain and parallel to a legal agreement that exists off-chain. In this case, the smart contract is used to give effect to the rights and obligations outlined in the legal agreement. Such a smart contract is therefore a tool that executes (part of) the legal agreement. Allen shows that smart contracts are ideally suited to be used as such executory tools ([Alle22]). If, for example, a soda machine, by way of a smart contract, orders a new batch of soda cans from the manufacturer, this smart contract is used to execute part of the overarching framework agreement that exists between the operator of the soda machine and the manufacturer of the soda cans ([Nave18]).
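
The soda machine example can be sketched as follows. The names, threshold, and batch size are hypothetical; the point is that the code does not contain the agreement but merely performs one obligation defined in the off-chain framework agreement.

```python
# Type-3 sketch: an executory tool. The framework agreement (threshold,
# batch size, prices) exists off-chain; the code only executes one clause.
REORDER_THRESHOLD = 20   # stock level agreed in the framework agreement
AGREED_BATCH_SIZE = 100  # batch size, also fixed off-chain

def check_stock_and_order(stock_level: int, order_fn) -> bool:
    """Place an order via order_fn when stock drops below the agreed
    threshold. Returns True if an order was placed."""
    if stock_level < REORDER_THRESHOLD:
        order_fn(quantity=AGREED_BATCH_SIZE)
        return True
    return False
```

Because parties increasingly rely on this executory code, its behavior becomes evidence of how the underlying agreement was understood, which is exactly the legal relevance the taxonomy attaches to type-3 smart contracts.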

The smart contract stands in a hierarchical relationship with the legal agreement, in which the smart contract is subservient to the legal agreement. However, the fact that the smart contract is subservient does not mean it is irrelevant or unimportant. After all, determining the content and validity of a legal agreement is done by assessing all relevant facts and circumstances, and the meaning that the parties to the agreement could reasonably have attributed to the agreement in light of those relevant facts and circumstances ([Kran20]). The technology is designed for contexts where parties transact remotely with minimal knowledge of each other’s identity. Consequently, the more parties apply this technology in a pseudonymous environment and rely on the smart contract as the executory mechanism, the fewer relevant facts and circumstances remain available for determining the meaning and validity of the underlying legal agreement. In other words, the more parties rely on the smart contract as a tool to execute the separate legal agreement, the more important the smart contract becomes in giving meaning to that legal agreement and determining its validity.

Type-4 smart contracts: merger agreements

Lastly, there are smart contracts that exist in a form that makes them both machine-readable and human-readable. In the context of the taxonomy, this is the type-4 smart contract. An example of this is the Ricardian contract ([Grig22]). The ability to create a single artifact that is both machine-readable and human-readable makes it possible to draft a legal agreement and transform it into a type-4 smart contract. Such a smart contract exists simultaneously on a blockchain platform in code, and therefore enjoys the benefits offered by the platform, while remaining susceptible to human comprehension. This fourth variation of smart contracts therefore describes an amalgamation that consists of two parts but exists as a single entity and, provided it meets the legal requirements, might be capable of producing legal effect. It must be noted that this final variation of smart contract technology is, at least to this day, largely theoretical.
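
The dual machine-readable/human-readable character of a type-4 smart contract can be illustrated with a toy Ricardian-style record. This is a strong simplification of the idea in [Grig22]; the field names and structure are our own.

```python
import hashlib
import json

def make_ricardian_record(prose: str, parameters: dict) -> dict:
    """One artifact readable by humans (the prose terms) and by machines
    (the structured parameters), bound together by a hash that could be
    referenced on-chain to anchor the document to the executing code."""
    body = {"prose": prose, "parameters": parameters}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}
```

The same record thus serves as the text a court can read and as the parameter set a program can execute, with the hash guaranteeing that the two cannot silently diverge.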

Practical application

The preceding sections of this contribution have detailed the differentiating elements of the technology and how such technology might be categorized in a taxonomy that could be used to clarify the legal risks caused by the implementation of smart contract technology. See Table 1 for an overview of the taxonomy and examples of potential legal risks that might surface in the context of the different types of smart contracts. This final section aims to showcase three groundbreaking applications of the technology – currently being explored, tested, or even deployed – and to apply the taxonomic framework to these examples. Applying the taxonomy to these real-world examples will provide a general overview of the legal risks that exist and insights into the severity of such risks.


Table 1. Overview of taxonomy including potential legal risks.

Applications of smart contracting technology and use of the taxonomy

Blockchain technology has been used as a foundation upon which different applications have been developed. The most well-known and most successful of such applications are the cryptocurrencies. Revolutionary as they might have been, in their core these cryptocurrencies offer relatively limited application. Cryptocurrencies use the underlying technology to enable the exchange of value in a distributed environment. This means that transactions between persons are now possible without any centralized party charged with tasks that would commonly be performed by such a party. Such tasks include, for example, determining whether a party has the right to make a transaction, whether the party is who they claim to be, or whether the units the party is attempting to transfer have not been transferred previously. Solutions based on this technology are gradually being adopted by more established financial institutions. The Hong Kong Stock Exchange, for example, has been testing this technology since 2016 to enable a more seamless trade between Hong Kong and Mainland China ([HKMA17]). Launched in October of 2023, the final product is built through smart contracts, optionally available to users, and is presented as providing a more connected and more transparent settlement platform ([HKEX23]). The smart contracts used in the context of this example are predominantly type-2 smart contracts, meaning that they are mere code and have no legal relevance. The smart contracts employed in the context of settlement might have some legal relevance, but since the code is unavailable it is impossible to determine whether and to what extent this is the case.

Additionally, an application of this technology that relies on smart contracts other than type-2 smart contracts can be found within supply chain operations ([Thol19]). According to an IBM survey, there is extensive experimentation with this technology in the realm of supply chains, especially concerning operational and supply chain management ([IBM20]). In such contexts, it becomes crucial to accord due consideration to the legal risks involved. Smart contracts might be used in the context of a supply chain to confirm receipt of goods, record performance, and trigger payments. Such aspects are not only relevant from an operational perspective, but they might also be pivotal from a legal point of view in case a disagreement arises between parties regarding events that happened in the context of the services provided. Some of the smart contracts used in the context of supply chains are likely to be qualified as type-3 smart contracts. They are executory tools that are used to give effect to the legal agreement or part thereof. As such, there are considerable legal risks that must be taken into account. Consider a scenario where goods are lost in transit, yet the smart contract records the arrival of the container carrying the goods in the harbor, subsequently triggering a payment. Designers and operators should take such eventualities into account. Important questions in this context concern striking a balance between the relatively immutable nature of the platform and the automatic enforcement of the smart contracts. From a legal point of view, such questions could emerge in the context of, for example, mistake, fraud, or disagreements about the content of the legal agreement.
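
One way designers could account for the lost-in-transit scenario is to avoid making payment an immediate consequence of the arrival event, for instance by inserting a dispute window between the triggering event and the release of funds. The following is a design sketch under our own assumptions, not an established pattern from any specific platform; names and the window mechanism are hypothetical.

```python
import time

class ConditionalPayment:
    """Payment is scheduled by the arrival event but only released after a
    dispute window, balancing automatic enforcement against error correction."""

    def __init__(self, dispute_window_s: float):
        self.dispute_window_s = dispute_window_s
        self.arrival_time = None
        self.disputed = False

    def record_arrival(self) -> None:
        # The on-chain event (e.g. container scanned in the harbor).
        self.arrival_time = time.monotonic()

    def dispute(self) -> None:
        # A party contesting the event within the window blocks release.
        if (self.arrival_time is not None and
                time.monotonic() - self.arrival_time < self.dispute_window_s):
            self.disputed = True

    def payment_releasable(self) -> bool:
        return (self.arrival_time is not None
                and not self.disputed
                and time.monotonic() - self.arrival_time >= self.dispute_window_s)
```

The trade-off is explicit: a longer window weakens the immediacy that makes automatic enforcement attractive, while a shorter window leaves less room to correct a wrongly recorded event.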

Finally, one particularly interesting application of this technology is the creation and transfer of tokens on blockchain-based tokenization platforms. Such tokens function as units on a platform that represent an asset ([Kona20]). The trade in non-fungible tokens in particular has garnered a great deal of attention over the last few years. Whilst for some it might be very exciting to hold a token that represents a cute picture of a cat or a monkey, the technology allows for much more consequential applications. It is, for example, technically possible to have a token represent a claim or a classic financial instrument (see for example [ABN23]).

ABN AMRO was the first bank in Europe to register a digital bond for a Midcorp client on the public blockchain ([ABN23]):

‘The entire process of preparing, placing and documenting the bond was digital. Ownership was recorded on the blockchain in the form of tokens that the investors acquired after they had paid for the bond. To ensure custody and security of the investors’ unique keys, ABN AMRO uses a wallet for accessing the digital bond.’

This final example of the implementation of the technology in question potentially introduces type-1 smart contracts in addition to type-2 and type-3 smart contracts. If a platform creates the option to effectively securitize claims or traditional financial instruments by way of a token, and any acquisition or trade of such tokens is limited to the platform alone, it is likely that the smart contract is the sole instantiation of the agreement between the parties. As such, the smart contract should be equated to the legal agreement. This means that all classic legal risks regarding formation, interpretation, and potential vitiation exist on-chain.
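As a minimal sketch of this idea, consider the following hypothetical Python ledger. The names and numbers are invented; the point is that when tokens can only be acquired and traded on the platform itself, these state transitions are the sole record of who holds what, so classic questions of formation, interpretation, and vitiation attach directly to the on-chain state.

```python
class BondTokenLedger:
    """Hypothetical sketch of a tokenization platform's ledger.
    If the ledger is the sole instantiation of the agreement, the
    smart contract is a type-1 smart contract in this taxonomy."""

    def __init__(self, issuer: str, total_units: int):
        # The issuer initially holds the entire bond issue as tokens.
        self.balances = {issuer: total_units}

    def transfer(self, sender: str, receiver: str, units: int) -> None:
        # Legal risks (mistake, fraud, vitiation) now live in these
        # state transitions, since no off-chain contract document exists.
        if self.balances.get(sender, 0) < units:
            raise ValueError("insufficient token balance")
        self.balances[sender] -= units
        self.balances[receiver] = self.balances.get(receiver, 0) + units

# Illustrative use: an issuer places part of a digital bond with an investor.
ledger = BondTokenLedger("issuer", 1_000)
ledger.transfer("issuer", "investor_1", 250)
```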

Conclusion

Smart contracts have been central to several hypes over the last few years. Those hypes have come and gone, but the development of smart contract technology and its potential applications has continued. Such developments are slowly giving rise to credible applications that are generating actual business opportunities. The technology at the root of such applications is fundamentally different from the technology it might supplant, and as such it will generate novel risks. Considering the way in which the technology is being applied, the legal risks should not be underestimated. Due to the highly technical nature of these risks, their integration with the organization, and their potential severity, businesses should prioritize preventing these risks proactively rather than mitigating them reactively after they have materialized. The taxonomy presented here provides a clear overview of the different types of smart contracts that exist. Such an overview could help an advisory practice do exactly that: the taxonomy can be used to leverage technological know-how and risk management expertise to assist businesses in navigating the novel risks that designing and implementing products based on this technology might create.

References

[ABN23] ABN. (2023). ABN AMRO registered first digital bond on public blockchain. Retrieved from: https://www.abnamro.com/en/news/abn-amro-registers-first-digital-bond-on-public-blockchain

[Alle22] Allen, J. G. (2022). ‘Smart Contracts’ and the Interaction of Natural and Formal Language. In J. G. Allen & P. Hunn (Eds.), Smart Legal Contracts: Computable law in theory and practice. Oxford University Press.

[Ande22] Anderson, P. D. (2022). Cypherpunk Ethics: Radical Ethics for the Digital Age (1st ed.). Routledge. https://doi.org/10.4324/9781003220534

[Anto17] Antonopoulos, A. (2017). Mastering Bitcoin: Programming the open blockchain (2nd ed.). O’Reilly.

[Anto19] Antonopoulos, A. & Wood, G. (2019). Mastering Ethereum: Building Smart Contracts and Dapps. O’Reilly.

[Bute13] Buterin, V. (2013). Ethereum Whitepaper. Retrieved from: https://ethereum.org/en/whitepaper/

[ELI23] European Law Institute. (2023). ELI Principles on Blockchain Technology, Smart Contracts and Consumer Protection. Retrieved from: https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Principles_on_Blockchain_Technology__Smart_Contracts_and_Consumer_Protection.pdf

[Grig22] Grigg, I. (2022). Why the Ricardian Contract Came About: A Retrospective Dialogue with Lawyers. In J. Allen, Smart Legal Contracts (pp. 88-106). Oxford University Press. https://doi.org/10.1093/oso/9780192858467.003.0006

[HKEX23] Hong Kong Securities Clearing Company Limited. (2023). Synapse Platform Launch. Retrieved from: https://www.hkma.gov.hk/media/eng/doc/key-functions/financial-infrastructure/infrastructure/20171025e1.pdf

[HKMA17] Hong Kong Monetary Authority. (2017). Whitepaper 2.0 on Distributed Ledger Technology. Retrieved from: https://www.hkma.gov.hk/media/eng/doc/key-functions/financial-infrastructure/infrastructure/20171025e1.pdf

[IBM20] IBM. (2020). Advancing global trade with blockchain. Retrieved from: https://www.ibm.com/downloads/cas/WVDE0MXG

[Kona20] Konashevych, O. (2020). Constraints and benefits of the blockchain use for real estate and property rights. Journal of Property, Planning, and Environmental Law, 12(2), 109-127.

[Kran20] Van Kranenburg-Hanspians, K. & Derk, M. T. (2020). De kansen van blockchain technologie voor het contractenrecht. Overeenkomst in de rechtspraktijk, 1, 16-21.

[More93] More, M. (1993). Technological self-transformation: Expanding personal extropy. Extropy: Journal of Transhumanist Thought, 4(2), 15-24.

[Nave18] Naves, J. (2018). Smart contracts: voer voor juristen? Onderneming en Financiering, 26(4), 57-67.

[Reve19] Revet, K. & Simons, E. (2019). Start small, think big: Blockchain technology – the business case using the SAP Cloud Platform. Compact, 2019(1), Retrieved from: https://www.compact.nl/articles/start-small-think-big

[Szab96] Szabo, N. (1996). Smart Contracts: Building Blocks for Digital Free Markets. Extropy: Journal of Transhumanist Thought, 8(16), 50-53.

[Thol19] Tholen, J., De Vries, D., Van Brug, W., Daluz, A. & Antonovici, C. (2019). Enhancing due diligence in supply chain management: is there a role for blockchain in supply chain due diligence? Compact, 2019(4). Retrieved from: https://www.compact.nl/articles/enhancing-due-diligence-in-supply-chain-management/

[Thwe20] Thweatt-Bates, J. (2020). Cyborg selves: A theological anthropology of the posthuman (3rd ed.). Routledge.

[Tjon22] Tjong Tjin Tai, E. (2022). Smart Contracts as Execution Instead of Expression. In J. Allen, Smart Legal Contracts (pp. 205-224). Oxford University Press. https://doi.org/10.1093/oso/9780192858467.003.0010

[Vers23] Verstappen, J. (2023). Legal Agreements on Smart Contract Platforms in European Systems of Private Law. Springer.

[Weer19] Van der Weerd, S. (2019). How will blockchain impact an information risk management approach? Compact, 2019(4). Retrieved from: https://www.compact.nl/articles/how-will-blockchain-impact-an-information-risk-management-approach

[Werb21] Werbach, K. & Cornell, N. (2021). Contracts: Ex Machina. In M. Corrales Compagnucci, M. Fenwick, & S. Wrbka (Eds.), Smart contracts: Technological, business and legal perspectives. Hart.

Securing the quality of digital applications: challenges for the IT auditor

Since the advent of digital solutions, questions about their reliability and security have been a central concern. An increasing number of individuals and companies are asking for assurance, and there is a need for standards to report on the quality of the use of digital solutions. Primary responsibility rests with an organization’s management; however, involving an independent IT auditor can provide additional value.

Introduction

Digital developments are happening at lightning speed. We are all aware of the many digital applications and possibilities in both our business and personal lives. Often, however, we know and use only 10 to 20 percent of the capabilities of current solutions, and yet we are constantly looking for something new. Or is this all happening to us through an ever-accelerating “technology push”? The COVID-19 pandemic that started in 2020 showed us once again that digital tools are indispensable: they enabled us to remain connected and operational, facilitating ongoing communication among us.

How do we know whether digital applications and solutions are sufficiently secure? Do the answers generated by algorithms, for example, reflect integrity and fairness? Are we sufficiently resilient to cyber-attacks, and are we spending our money on the right digital solutions? These questions are highly relevant for directors and supervisors of organizations, as they must be able to account for their choices. Externally, the board report provides the basis for policy accountability. It is primarily retrospective in nature and follows an annual cycle. The board report could explicitly discuss the digital agenda. The professional association of IT auditors (NOREA) is investigating whether an (external) IT audit ‘statement’ ([NORE21]) could also be added (see also this article on the new IT audit statement). Accountability for the quality of digital applications, and for whether everything is done securely, with integrity, and effectively, takes on new dimensions now that developments are happening at lightning speed and everyone is connected to everyone else. Administrators and regulators, as well as end users and consumers, are looking for assurance that digital applications and the resulting data are correct. Validation through assurance by an IT auditor serves as an effective tool for this purpose. A confirmation of quality on the digital highway must and can be found.

These issues are at play not only within organizations, but also in broader society. Privacy protection is firmly under pressure: numerous digital solutions are continuously building personal profiles. There are also painful examples of the use of algorithms in the public domain ([AR21]) that have seriously harmed a number of citizens. Responsible development toward more complex automated applications requires better oversight and quality control, according to the Netherlands Court of Audit in its 2021 report on algorithms ([AR21]). Issues of digital integrity, fairness, reasonableness, and security have taken on social significance.

With the introduction of the Computer Crime Act (WCC I) in the 1980s, an explicit link to accountability for computerized data processing emerged for the first time. Meanwhile, the Computer Crime Act III (WCC III) ([Rijk19]) has been in force since 2019, which takes into account many developments in the field of the Internet and privacy. As the final piece in the chain of control and accountability that began with the WCC I, the auditor must explicitly express an opinion on the reliability and continuity of automated data processing insofar as it is relevant for financial reporting, according to Book 2, Article 393(4) of the Dutch Civil Code. Over four decades have passed, and we now grapple with an expanding array of legislation governing the control of digital solutions. These solutions extend beyond administrative processes to impact all core business functions, bringing with them a shift in the perspective on associated risks.

In short, it’s time to consider how quality on the digital highway (such as security, integrity, honesty, efficiency, effectiveness) can be assured. How can accountabilities be formed, what role do managers and supervisors play in this, and how can IT auditing add value? As indicated, these questions play a role not only at the individual organizational level, but also at the societal level. For example, how can the government restore or regain the trust of citizens by explicitly accounting for the deployment of its digital solutions?

IT auditing concerns the independent assessment of the quality of information technology (processes, governance, infrastructure). Quality has many partial aspects; not only does it involve integrity, availability and security, it also involves fairness and honesty. The degree of effectiveness and efficiency can also be assessed. To date, the interpretation of IT auditing is still mostly focused on individual digital applications and still too limited when it comes to the entire coherence of digital applications that fit within the IT governance of an organization. IT auditing can be an important tool in confirming the quality or identifying risks in the development and application of digital solutions if it is used more integrally. This establishes a harmonious interplay between the organization’s responsibility for its IT governance and the validation of its quality by an IT auditor.

Technology developments

The COVID crisis has undeniably brought remote work to the forefront and has heightened the significance of adaptable IT. Several emerging trends underscore the landscape of digital solutions and advancements.

What’s noteworthy is that a considerable number of organizations exhibit an intricate blend of technology solutions, incorporating both legacy systems and contemporary online (front-office) solutions. Ensuring data integrity, keeping all solutions functioning in continuity, being able to make the right investments and paying for maintenance of legacy solutions, and planning for all of that is certainly not an easy task.

Let’s briefly highlight a few trends commonly cited by multiple authors ([KPMG20]; [Wilr20]):

  • Flexible work is becoming the norm. Last year, the cloud workplace – more than predicted – grew in popularity. Employees had to work from home, which requires a flexible and secure IT workplace.
  • Distributed cloud offers new opportunities for automation. The cloud will also continue to evolve, continuously creating new opportunities that support business growth. According to Gartner analysts ([Gart20]), one of these is the distributed cloud. It can speed up data transfer and reduce its costs. Storing data within specific geographic boundaries – often required by law or for compliance reasons – is also an important reason for choosing the distributed cloud. The provider of the cloud services remains responsible for monitoring and managing it.
  • The business use of artificial intelligence (AI) is increasing. Consider, for example, the use of chatbots and navigation apps. This technology will be increasingly prominent in business in the near future. The reason? Computer power and software are becoming cheaper and more widely available. AI will increasingly be used to analyze patterns from all kinds of data.
  • Internet of Behaviors. Data is now the linchpin of many business processes. Data provides insight and therefore plays an increasingly important role in strategic decision-making. This data-driven approach is also applied to changing human behavior, which we call the Internet of Behaviors. Based on these analyses, suggestions or autonomous actions can be developed that contribute to issues such as human safety and health. An example is the smartwatch that tracks blood pressure and oxygen levels and provides health tips based on those data.
  • Maturity of 5G in practice. In 2020, providers in the Netherlands rolled out their first 5G networks. With 5G, you can seamlessly stay connected on the move or in any location without relying on Wi-Fi. Apart from higher data upload and download speeds, the big changes are mainly in new applications, especially in the field of the Internet of Things. Examples include self-driving cars and a surgeon operating on his patient a thousand kilometers away via an operating robot. Such applications are promising.

Management responsibilities

Driving and overseeing digital solutions is not a given. The adage “unknown, unloved” still plays tricks here. The complexity of technology deters, the mix of legacy systems and new digital solutions does not make things very transparent, many parties manage parts of the technology chain, and the quality requirements are not always explicit.

Still, some form of “good governance” is needed. Antwerp professor Steven De Haes ([DeHa20]) has gained many insights in his studies on IT governance. In his view, governance needs to address two issues concerning digital solutions. The first is whether digital risks are managed, which requires a standard to test against. In line with the COSO framework (COSO: Committee of Sponsoring Organizations) often used in governance issues, (parts of) the international COBIT framework (COBIT: Control Objectives for Information and Related Technologies) ([ISAC19]) can be chosen. Management explicitly identifies the applicable management standards for digital solutions, ensuring the clear establishment of both their design and operational processes.

The second question is strategic in nature: are the digital developments correct? Is the strategy concerning the deployment of digital solutions correct and are the investments required correct? Answering this requires a good analysis of the organizational objectives and the digital solutions needed to achieve them. As indicated earlier, the main issues are effectiveness and efficiency.

Establishing a robust organizational foundation begins with a well-structured organizational setup. This often involves using a “layer model” to arrange the various responsibilities. The primary responsibility for ensuring the proper use of digital solutions rests with first-line management, which can be assisted by a “risk & control” function acting as a “second line” to help set up the right controls and perform risk assessments. The second line can also set up forms of monitoring of the correct implementation and use of the digital solutions. Then, as a third line, an internal audit function can assess whether the controls in and around the digital solutions are set up and working properly; if desired, the external auditor can confirm this as well. In short, a layered model emerges to collectively ensure the quality of digital solutions.

Given the tremendous speed of digital change, continuous new knowledge of technology is needed. Effectively coordinating this effort while maintaining a focus on the quality of solutions and acknowledging their inherent limitations is the key to successful governance. Governance is not a static entity: changes in the chain have to be continuously evaluated and adjusted where necessary. Conceivably, the IT function (the CIO or IT management) could organize a structural technology dialogue that starts with knowledge sessions addressing the quality of digital applications. End users and management share the responsibility of clearly defining quality requirements, overseeing them through change processes, and ensuring the ongoing monitoring, or delegation of monitoring, to guarantee the quality of digital applications and data.

The suppliers of digital solutions also play an important role. They have to be good stewards and provide better and safer solutions. This does not happen automatically; all too often, the focus is more on functional innovation than on good management and security. The buyers of solutions also still question the providers too little about a “secure by design” offering. Proper controls can, and in fact should, already be built in during solution design.

Are the new digital solutions becoming so complex that no one can determine the correctness of the content? From a management perspective, we cannot take such a “black box” approach. We cannot accept, for example, deploying a digital application without knowing whether it works safely. Management should pause and prioritize organizing knowledge or acquiring information about the quality before justifying further deployment.

Challenges for the IT auditor

These quality issues can be answered by IT auditors. In the Netherlands, this field has been organized for more than thirty years, partly through the professional organization NOREA (Dutch Association of EDP Auditors)1 and university IT audit programs.

The IT auditor has a toolbox to assess digital solutions on various quality aspects. An increasing number of auditing and reporting standards have been developed to provide clients with assurance or a correct risk picture.

On the positive side, current IT auditing standards can already answer many questions from clients about digital solutions. The key is for IT auditors to adequately disclose what they can do and to work with regulators to enrich the tools. The IT auditor has to use simpler language to clarify what is really going on. Clients can and should sharpen their questioning and take responsibility themselves, such as establishing the right level of control.

IT auditors are currently still mainly looking for technically correct answers and methodologies, while a dialogue is needed about the relevant management questions concerning IT governance. What dilemmas do managers and regulators experience when determining the quality level of digital applications and what uncertainties exist? This is what the IT auditor should focus on. Starting from a clear management question, the IT auditor’s already available tools listed below can be used in a much more focused way.

From an auditing perspective, the ISAE 3402 standard (ISAE: International Standards on Assurance Engagements)2 was developed for outsourcing situations, to inform both the client organization and its auditor about the quality of the controls operated by the service organization. The emphasis lies on ensuring the reliability and continuity of financial data processing. The resulting report is called a SOC 1 report (SOC: Service Organization Control).

An ISAE 3402 audit requires proper coordination on the scope of work and the controls to be tested (both in design and in operating effectiveness). The performing IT auditor consults with both the service organization and the receiving customer organization to arrange everything properly. This also involves specific attention to the “Complementary User Entity Controls” (CUECs), the additional internal control measures that the customer organization must implement, and the “Complementary Subservice Organization Controls” (CSOCs), the control measures that any IT service providers it deploys must implement. Frequent consultations occur with the client organization’s auditor, who incorporates the ISAE 3402 report as an integral part of the audit process.

The scope of an ISAE 3402 audit can be significant and already provide a solid basis for quality assurance of digital applications. An example from IT audit practice involves a sold division of a company that is now part of another international group. The sold division has plants in over 30 countries, all of which still use the original group’s IT services. A test plan has been set up to test the relevant general computer controls (such as logical access security, change control and operations management, also known as “general IT controls”), and all relevant programmed financial controls in the selected financial systems. In this example, this yields a testing of over eighty general computer controls and over two hundred programmed controls by a central group audit team and audit teams in the various countries.

Another assurance report is an ISAE 3000 report, which is prepared to demonstrate that the internal management processes an organization has in place are actually being carried out as described. Basically, this standard was developed for assurances about non-financial information. This may take the form of an ISAE 3000 attestation (3000A), wherein the organization internally defines and reviews standards and controls, with the IT auditor subsequently confirming their effectiveness. Alternatively, it can manifest as a 3000D (“direct reporting”), involving collaborative definition of review standards and controls by both the organization and the IT auditor.

The ISAE 3000 report (also referred to as SOC 2; see note 3) can focus on many issues and has multiple quality aspects as angles, such as confidentiality and privacy. Standard frameworks have since been established, for example for conducting privacy audits ([NORE23])4 based on ISAE 3000. The North American accounting organizations, including AICPA, CPA Canada, and CIMA5, have collaboratively developed comprehensive standard frameworks, such as the SOC 2 modules on Security, Availability, Processing Integrity, and Confidentiality6. These are readily applicable to IT and SaaS services and are increasingly being embraced by IT service providers in Europe. For specific IT audit objects, such as specifically delivered online services/functionalities, these can be further focused or expanded with IT (application) controls relevant to the customer organization.

As a final variant, agreed-upon specific work can be chosen, referred to as an ISAE 4400 report. Users of the report then have to form their own opinion about the activities and (factual) findings that are presented by the IT auditor in the report.

In recent years, there has been plenty of innovation within the field of IT auditing to also assess algorithms, for example, and make a statement about them. Consider the issue of fairness and non-biased data. An interplay between multiple disciplines unfolds to comprehend the risk landscape of intricate digital solutions and offer assurances. IT auditors are partnering with data specialists and legal experts to ensure the reliability of algorithms.

Over the past 18 months, there has been a growing discourse regarding the potential inclusion of an IT audit statement within, or as an addition to, a company’s annual report. Specifically, the company would need to articulate its stance on digital solutions, their management, and, for instance, the associated change agenda. An IT auditor could then issue a statement in this regard. The professional association of IT auditors has developed a plan of action to actively develop this IT report and the communication about it in the coming year. There is ongoing consideration of the level of assurance achievable through the opinion; the current assurance framework recognizes both reasonable and limited assurance. Clients naturally seek maximum or, perhaps better, optimal assurance. In other words, the assurance they seek is not always found in an IT audit statement. Even better would be if the communication also provided assurance into the future, an area still untrodden by IT auditors.

Conclusion

As indicated earlier, tools already exist for the IT auditor to confirm the quality of digital applications. Clients must take responsibility to better understand digital applications and set up the corresponding IT governance. IT auditors can improve their communication, can empathize even more with management’s (their clients’) questions, and also provide understandable reports.

Addressing pertinent social concerns related to the implementation of digital solutions involves conducting a comprehensive risk inventory and evaluating the effectiveness of the existing controls. In addition to the traditional concerns focused on reliability and security, issues of effectiveness, efficiency, privacy, and fairness come into play. The resilience of digital solutions is also an urgent issue. In the EU, the Network and Information Security Directive (NIS2 Directive)7 and the Digital Operational Resilience Act (DORA)8 for financial institutions have been established to strengthen digital resilience. The regulator of publicly traded companies in the United States (SEC) has also issued guidelines for annual reporting on cybersecurity (risk management, governance) and interim reporting of serious incidents ([SEC23]).

The concept of secure by design is anticipated to become increasingly prevalent, as technology vendors recognize the necessity of implementing robust controls during solution deployment. Some suppliers also provide mechanisms to set up continuous monitoring, where the controls put in place are assessed for continuous correct operation and exceptions are reported. Management also plays an important role in this regard; embrace the principles described above. Remember that it is more effective and efficient to design controls during the change of digital solutions than to fix them afterwards.

If more and more continuous monitoring is provided, the IT auditor can move toward a form of continuous auditing, providing assurance about the deployment of the digital solution at any time. The “anytime, anyplace, anywhere” principle then becomes a reality in IT auditing. A reassuring prospect amid all this digital speed.

Notes

  1. See www.norea.nl.
  2. See www.iaasb.org, ‘Standards and resources’.
  3. SOC 2 deals primarily with security (mandatory), availability, integrity, confidentiality and/or privacy, as outlined in the SOC 2 guidelines issued by the Assurance Services Executive Committee (ASEC) of the AICPA.
  4. There is a Dutch and an English version of the Privacy Control Framework.
  5. AICPA: American Institute of Certified Public Accountants; CIMA: Chartered Institute of Management Accountants.
  6. See [Zwin21] for an article on SOC 2 and [AICP23] for AICPA and CIMA standards.
  7. See [NCSC23].
  8. See [Alam22] for an article on DORA.

References

[AICP23] AICPA & CIMA (2023). SOC 2® – SOC for Service Organizations: Trust Services Criteria. Consulted at: https://www.aicpa-cima.com/topic/audit-assurance/audit-and-assurance-greater-than-soc-2

[Alam22] Alam, A., Kroese, A., Fakirou, M., & Chandra, I. (2022). DORA: an impact assessment. Compact 2022/3. Consulted at: https://www.compact.nl/articles/dora-an-impact-assessment/

[AR21] Algemene Rekenkamer (2021, 26 January). Aandacht voor algoritmes. Consulted at: https://www.rekenkamer.nl/publicaties/rapporten/2021/01/26/aandacht-voor-algoritmes

[DeHa20] De Haes, S., Van Grembergen, W., Joshi, A., & Huygh, T. (2020). Enterprise Governance of Information Technology (3rd ed.). Springer.

[Gart20] Gartner (2020, 12 August). The CIO’s Guide to Distributed Cloud. Consulted at: https://www.gartner.com/smarterwithgartner/the-cios-guide-to-distributed-cloud

[ISAC19] ISACA (2019). COBIT 2019 or COBIT 5. Consulted at: www.isaca.org

[KPMG20] KPMG (2020). Harvey Nash / KPMG CIO Survey 2020: Everything changed. Or did it? Consulted at: https://kpmg.com/dp/en/home/insights/2020/11/harvey-nash-kpmg-cio-survey-2020.html

[NCSC23] Nationaal Cyber Security Centrum (2023). Summary of the NIS2 guideline. Consulted at: https://www.ncsc.nl/over-ncsc/wettelijke-taak/wat-gaat-de-nis2-richtlijn-betekenen-voor-uw-organisatie/samenvatting-richtlijn

[NORE21] NOREA (2021). Nieuwe IT check: NOREA ontwikkelt IT-verslag en -verklaring als basis voor verantwoording. Consulted at: www.norea.nl

[NORE23] NOREA (2023). Kennisgroep Privacy. Consulted at: https://www.norea.nl/organisatie/kennis-en-werkgroepen/kennisgroep-privacy

[Rijk19] Rijksoverheid (2019, 28 February). Nieuwe wet versterkt bestrijding computercriminaliteit. Consulted at: https://www.rijksoverheid.nl/actueel/nieuws/2019/02/28/nieuwe-wet-versterkt-bestrijding-computercriminaliteit

[SEC23] SEC (2023, 26 July). SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies [Press Release]. Consulted at: https://www.sec.gov/news/press-release/2023-139

[Wilr20] WilroffReitsma (2020). ICT Trends 2021: dit zijn de 10 belangrijkste. https://wilroffreitsma.nl/nieuws/ict-trends-2021/

[Zwin21] Zwinkels, S. & Koorn, R. (2021). SOC 2 assurance becomes critical for cloud & IT service providers. Compact 2021/1. Consulted at: https://www.compact.nl/articles/soc-2-assurance-becomes-critical-for-cloud-it-service-providers/

Fifty years of IT auditing

About fifty years ago, IT audit made its appearance in auditing, which was also the reason for exchanging professional technical developments in a new journal called Compact. Of course, a lot has changed since then, but certain activities – albeit in a new look – have not changed all that much. As has been said so often in those fifty years, quite a lot is going to change, not only because the approach to auditing itself is constantly changing, but also because IT and audit techniques are constantly evolving, such as the emerging AI. What does this mean for the profession? Enough reason to take you on a fifty-year journey back in time and twenty years ahead in the development of IT auditing.

For the Dutch version of this article, see: Vijftig jaar IT-audit in de accountantscontrole

It started with the substantive audit

The history of IT audit (then called EDP audit [EDP: Electronic Data Processing]) begins some fifty years ago – hence the 50th anniversary of Compact. From the outset, IT audit is entirely dominated by the financial statement audit, because it is developed by the major accounting firms. At that time, the approach to auditing is still almost entirely centered on substantive1 auditing.

Figure 1. Data-oriented (substantive) prevails.

The quality of the audited company’s IT is not yet relevant, because the auditor takes extensive samples and performs a lot of detailed checking; system-oriented auditing is not yet an option. The samples obviously have to be mathematically justified, and determining the sample and the items to be considered turns out to be quite difficult, given the various types of sampling routines and choices: stratification, whether or not to include negative items, periods, sorting and, of course, true randomness, et cetera. This is where the IT auditor first appears on the scene. The IT auditor is then primarily a programmer, because with some theoretical sampling knowledge and knowledge of the client’s files, the IT auditor can provide excellent support: advance insight into the items in the file, allowing the auditor’s selection to be more effective and efficient. Good knowledge of and experience with programming is essential, because standard audit software does not yet exist. Programming is often still done in COBOL (Common Business Oriented Language, conceptually almost incomparable with today’s programming languages), at that time the standard for administrative applications. In addition, you had to be good at (re)typing, because each program line had to be punched individually onto a punch card. The financial auditor and IT auditor do not yet have a computer of their own, so the processing has to be done on the client’s computer or, in exceptional cases, on the computer of a friendly business relation, for example an insurance company, because service bureaus are still rare. Everything is mainframe-oriented! The IT auditor does learn something new, however: the role of system software, as well as its risks, for example when access security and logging are not properly set up. Finally, the IT auditor must of course ensure that his programs and data have not been tampered with. The first generation of IT auditors is relatively technically savvy.
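The sampling mechanics described above – stratification, the treatment of negative items, reproducible random selection – can be sketched in a modern language. The sketch below is purely illustrative: the field names and strata are hypothetical, and the historical routines were of course bespoke COBOL programs, not Python.

```python
import random

def stratified_sample(items, strata_bounds, per_stratum, seed=42):
    """Draw a reproducible stratified sample of ledger items.

    items: dicts with an 'amount' field (hypothetical record layout).
    strata_bounds: ascending upper bounds defining the value strata.
    per_stratum: number of items to draw from each stratum.
    """
    rng = random.Random(seed)  # fixed seed: the selection must be justifiable later
    # Negative items are typically set aside for separate examination.
    negatives = [it for it in items if it["amount"] < 0]
    positives = [it for it in items if it["amount"] >= 0]

    # Assign each positive item to the first stratum whose bound covers it.
    strata = [[] for _ in strata_bounds]
    for it in positives:
        for i, bound in enumerate(strata_bounds):
            if it["amount"] <= bound:
                strata[i].append(it)
                break

    sample = []
    for stratum in strata:
        sample.extend(rng.sample(stratum, min(per_stratum, len(stratum))))
    return sample, negatives

# Usage: three value strata, at most two picks per stratum
items = [{"id": i, "amount": a}
         for i, a in enumerate([-50, 10, 90, 450, 800, 9000, 120, 60])]
picked, negatives = stratified_sample(items, [100, 1000, float("inf")], per_stratum=2)
```

The fixed seed mirrors the requirement that the selection remain mathematically justifiable after the fact.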

Fortunately, the IT market is also beginning to see the financial auditor and IT auditor as a target audience, and the first standard audit software packages cautiously appear on the market. Best known from that time is CARS, a large COBOL program with sampling routines and counts, in which the IT auditor can add their own COBOL statements to make it organization-specific. Since the laptop hasn’t been invented yet, the average IT auditor walks around with hefty suitcases full of punch cards (having them mixed up would be a total drag …). But it’s still relatively easy, as the file structure is sequential.

Introduction of the database

Shortly after, the database makes its appearance. COBOL is not well suited to it, and databases get their own supporting software. The IT auditor soon learns that this database software is easier to use than CARS, although in the beginning it is not audit software but mainly query software. Integration obviously does not take long, and audit software appears for several database technologies, such as the independent package Culprit for IBM mainframes. The problems at that time mainly involve accessing the file carriers (usually large tapes or very sensitive disk packs), which again are very specific to a certain type of machine. In short, these tools are only applicable to large, well-known computer systems and large customers, and are therefore quite specialized. In the sixties and seventies, a large accounting firm like (what is now) KPMG had as many as thirty programmers who programmed solely in the context of the annual audit.

PC becomes widely available

The big break comes in the early 80s. The PC makes its appearance, and so does the floppy drive (still 8-inch format). This brings medium-sized organizations into the picture for supporting the financial auditor. Again, audit software lags behind, because the existing mainframe packages do not run on those PCs and the floppies do not fit into a mainframe. KPMG even creates its own software package, again focused on determining and taking samples and on making all kinds of calculations to reconcile with the client’s financial records. There is even an additional module that allows multiple files to be compared – a feat in those early days of the PC. When computerized financial accounting becomes commonplace, standard packages also become widely available: ACL and, a little later, IDEA are a few examples.

Need for greater understanding of security

In the 80s, and certainly in the years that followed, the realization dawns that all this IT carries risks, first in terms of security and then in terms of reliability. The financial auditor’s clients also feel a greater need to understand this. As a result, IT auditors increasingly take on the role of specialists who also visit the client on behalf of the financial auditor, not just to retrieve data files, but to assess the quality of the IT and advise on it. At first the survey covers the physical security of the IT environment, later also logical access security. Tooling for this is still virtually unavailable on the market, which means that the IT auditor has to examine many specific operating systems and databases and learn how their security is organized.

Although PC use increases the number of data analyses performed, the number of programmers decreases considerably, because creating the analyses takes much less time thanks to standard applications and the relatively small-scale environment in which the software can operate.

An end to file analysis?

In the 90s, system-oriented auditing is strongly on the rise and the “traditional” use of audit software declines rapidly. The previously mentioned group of thirty programmers at a large accounting firm disappears entirely, although some of them advance to become ‘regular’ IT auditors. Yet this does not mean the end of file analysis. There are quite a few standardization attempts, especially around the widely used SAP package. However, because of its many configuration options, SAP turns out not to be as standard as perhaps thought. The idea arises to create a front-end part for extracting data from the SAP databases that can be made customer-specific or version-specific. The data is collected in a “meta database” for analysis and for producing reports that closely match the auditor’s needs; this back-end part has to be highly standardized. Of course, practice proves unruly: the front end always needs adjustments after new SAP versions or implementations, and the demands of the financial auditor keep changing as more information and exceptions are obtained from the data, which in turn need to be explained. The financial auditor has their hands full, because the cost-benefit picture is constantly under scrutiny. The benefits for the insights and assurance of the audit approach do not always outweigh the effort required to constantly adapt the SAP analyses.
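The front-end/back-end split described above can be pictured as a minimal pattern sketch. The function names and record layout are hypothetical; the real solutions were large bespoke systems, but the architectural idea – a variable extraction layer feeding a standardized analysis layer – is the same.

```python
# Front-end/back-end split: extraction varies per client and SAP version,
# analysis runs unchanged over a common "meta database" layout.

def load_sap_orders(version: str) -> list[dict]:
    """Front end: customer/version-specific extraction into a common layout."""
    # Each SAP version or implementation needs its own mapping here;
    # this stand-in returns a tiny hard-coded extract for illustration.
    raw = [("4711", 1200.0), ("4712", -80.0)]
    return [{"doc": d, "amount": a} for d, a in raw]

def analyze(records: list[dict]) -> dict:
    """Back end: standardized analysis over the common record layout."""
    return {
        "count": len(records),
        "total": sum(r["amount"] for r in records),
        "credits": [r["doc"] for r in records if r["amount"] < 0],
    }

report = analyze(load_sap_orders(version="R/3 4.6"))
```

Only the front end has to be rewritten when a new SAP version or client appears; the back end, which embodies the auditor’s requirements, stays standardized.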

Nevertheless, the seed has been planted, and attempts are made to revive data analysis in more industries, such as finance, where mostly self-developed systems predominate at institutions. The front-end part (data extraction) will always be variable here, but the back-end part (analysis and reporting) can then fit well with the auditor’s audit approach. Because of the cost of developing such solutions, the approach is primarily international. However, this brings a much wider range of financial systems among auditees (a front-end complication) and a wider range of auditors’ requirements (a back-end complication). Partly for this reason, only a few solutions were developed, and they were not long-lived either.

The transition to system-based auditing

The IT auditor is already involved in examining processes and systems in the 80s. KPMG’s IT auditors develop the CASA method (Course Approach to System Audits), which is adopted by the NBA (the professional body for financial auditors in the Netherlands, ed.), then called NIVRA, in the publication FASA (Factual Approach to System Audits; see [Koed85] and [NIVR88]). The objective is still mainly ‘understanding the business’. In the 90s, system-based auditing emerges and there is more need for concrete insight into the processes and control measures. The IT auditor adapts the FASA method, and Business Process Analysis (BPA) is born, in which automated and manual internal control measures are explicitly recognized separately, per process step and risk. This distinction is important because the controls are tested differently. For the IT auditor, this approach means a serious new object of investigation: assessing the automated controls together with the general IT controls, especially change and test management and logical access security. Once again, evidence of the proper functioning of the automated (application) controls must come from a system-based audit approach, entirely in line with the financial auditor’s audit approach.

Figure 2. The balance has tipped toward system-based.

With the introduction of the Sarbanes-Oxley (SOx) Act in 2002, much emphasis is placed on internal controls at companies. Under pressure from the PCAOB regulator and the requirements of SOx 404, the field of system-based auditing develops rapidly. The regulators’ question, “How can I be sure that no one has been able to manipulate the data in question or modify application controls?”, has caused headaches for many an auditor in PCAOB inspections and internal quality reviews. In recent years, more guidance has emerged on how to deal with IPE (Information Produced by the Entity – in other words, how does the auditor determine that the auditee’s information is reliable?), the various layers in IT environments, interfaces, assurance reports in the audit, and cybersecurity. So, what has this yielded in recent years?

Financial auditors and IT auditors are working together better and have a better understanding of each other’s fields. The audit methodologies of the various firms make the role of the IT auditor increasingly clear. The new ISA 315 standard (“Identifying and Assessing the Risks of Material Misstatement”) has also contributed to this: it includes extensive guidance on gaining insight into information technology and general IT controls. Consultation on deficiencies in the system of internal control, on the risk assessment of those deficiencies and on any compensating controls has improved. The work to gain assurance on the effective operation of IT controls also seems to be increasing. This makes sense in our view, because IT is becoming more complex and because there is always someone in the IT environment who can (or sometimes must be able to) circumvent controls. Although the probability of occurrence is low, the impact can be significant. The challenge is to assess these risks and determine their impact. Not many organizations are mature enough in terms of risk management to adequately mitigate these risks, nor are they all able or willing to make the investments to do so. Only a select number of organizations remain where the IT auditor or financial auditor can perform an exclusively system-based audit within the IT domain. In some cases, this realization leads back to substantive audits by the IT auditor or financial auditor, which completes the circle between substantive and system-based auditing.

What can we expect in the (near) future?

Data-oriented

More data analysis is taking place right now, and this will develop much further as data becomes relatively easier to access. Consider the development of centralized “data lakes”, which contain much of the organization’s data (operational, financial, etc.), making analysis relatively easy. For large organizations these data lakes are becoming too large and complex, and there is a trend towards “data meshes”, a form of decentralized, smaller data lakes that reduce complexity (also in management and responsibility). Of course there are tools that can link and analyze several of these data meshes. In short, a great field for the data analyst (commonly called a data scientist these days), both within an organization and at the financial auditor. The financial auditor’s wish to use data analysis to gain insight into the flow of money and goods, and (automated) analyses of the peculiarities in that flow, could finally become a reality.

The question naturally arises if and when the complexity becomes so great that the financial auditor and IT auditor will need other tools to still gain insight into the large amount of data available, both within the organization being audited and beyond. In other words, how long will it be before the financial auditor and IT auditor together start using AI applications themselves? It would be ideal if AI software could perform the analyses, especially the aforementioned analyses of anomalies in the flow of money and goods. We expect that AI software can be a great help, especially in gaining a good understanding of the nature and cause of deviations and their impact on the financial flows. This is particularly true in the current situation, where data analytics produces quite a bit of “fallout” and the financial auditor and/or IT auditor still has to incur significant costs to study that fallout and determine its impact. A current example is MindBridge Ai Auditor, with which KPMG has an alliance. MindBridge Ai Auditor supports data analytics with modern technologies and – using statistical analysis and machine learning on a wide variety of data sets – identifies the risks per individual general ledger account or income statement item. This is needed to identify potential anomalies and deficiencies in financial records.
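As a toy illustration of such statistical risk-flagging – explicitly not MindBridge’s proprietary method, which combines many detectors with machine learning – a robust outlier test over journal-entry amounts might look like this:

```python
import statistics

def flag_anomalies(entries, threshold=3.5):
    """Flag journal-entry amounts far from the account's typical value.

    Uses the robust median/MAD test (the Iglewicz-Hoaglin modified z-score),
    which is not distorted by the outliers it is trying to find.
    """
    amounts = [e["amount"] for e in entries]
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # all amounts (nearly) identical: nothing to flag
    return [e for e in entries
            if 0.6745 * abs(e["amount"] - med) / mad > threshold]

# Usage: six routine postings and one conspicuous one
entries = [{"doc": i, "amount": a}
           for i, a in enumerate([100, 105, 98, 102, 101, 99, 5000])]
suspicious = flag_anomalies(entries)  # flags the 5000 posting only
```

Each flagged entry still has to be explained by the auditor – the “fallout” mentioned above – which is exactly where AI support in interpreting the nature and cause of deviations would pay off.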

System-based

As indicated above, we see a bright future for substantive auditing. The question is whether there will still be a need to assess the system of internal control. The balance between the substantive audit with extensive data analyses on the one hand and the system-based audit with (limited) tests of details on the other may well shift. We believe that a certain degree of system-based auditing will still be necessary to determine whether the organization has a certain minimum level of control measures in place. A substantive audit approach without the organization having that minimum level of internal control produces greater uncertainty, in particular about the quality (including completeness) of the data – something substantive procedures cannot determine, or only to a limited extent. Consider, for example, whether all transactions are actually in the records: the completeness of the “chiffre d’affaires,” as financial auditors so eloquently call it.

In addition, regulators want to maintain continuous pressure on organizations and their auditors to ensure that the system of internal control remains adequate and that the risk of discontinuity and fraud remains limited. The SOx legislation, and the (mandatory) role of the financial auditor in it, is a good example and is not expected to disappear any time soon. For the IT auditor, this means that system-based audits of generic IT (support) processes and specific information systems will still have to take place at least at a select number of large organizations, and in a less formal way at smaller organizations as well.

By now, we see organizations using AI applications in practice (e.g., insurance companies). The internal control of AI applications will require the IT auditor to have a better understanding of the design and operation of such applications. That things are moving fast is evident from the various audit frameworks that have been published, including by the Association of Insurers, the IIA (Institute of Internal Auditors Netherlands) and various other organizations. The Algorithm & Assurance Knowledge Group of NOREA (the Dutch professional association of IT auditors) has already published several frameworks.

Figure 3. Approach to the financial statement audit.

Broader role of the IT auditor

Recent years have shown that more is expected from the financial auditor than a “bare” financial statement audit. In particular, laws and regulations beyond those governing the financial statement audit are forcing organizations to include other information in the annual report, for example on privacy, information security/cybersecurity and ESG.

Although the European AI Act is not yet in force, it is already clear that audits of products and services equipped with AI will fit into the existing quality management systems of sectors such as logistics and healthcare. The Corporate Sustainability Reporting Directive (CSRD) will also broaden the role of the IT auditor. Starting in 2024, the first organizations must comply with these requirements. For now, only “limited assurance” is required, but by the end of this decade “reasonable assurance” is expected to be required for sustainability figures as well. Organizations are investing heavily to generate these figures, and reliability requirements play a role here. The challenges may not differ fundamentally from normal financial reporting chains, but there are specific areas of focus for ESG, partly because of the specialized knowledge involved, but also because the reporting chains are still new and have never been subject to audit before. Moreover, employees in non-financial departments are less accustomed to the strict compliance needed to be “in control,” with the risk of incomplete information and limited auditability.

Conclusion

The IT audit profession grew up doing data file reviews and data analysis during the period when auditors primarily followed the substantive audit approach. When system-based auditing emerged in the 90s, later reinforced by SOx regulations, the IT auditor’s focus became less data-oriented and concentrated primarily on assessing programmed controls in financial reporting processes and the underlying generic IT management processes, such as change management and logical access.

Although the audit orientation is still system-based, there is clearly a revival of file interrogation and data analysis. Data are more accessible and data analysis tools are more powerful. System-based auditing is no longer seen as the holy grail.

We expect the balance to tip slightly back toward data analysis, with more attention paid, on the one hand, to encompassing overall controls (think of the overall movement of cash and goods) and, on the other, to the (automated) analysis and risk assessment of anomalies. AI-supported tools are already a small dot shining on the horizon.

System-based auditing will not disappear because, on the one hand, it provides a good understanding of the organization and its processes and, on the other, it underpins the quality of the data captured in those processes. Just as the quality of IT processes is essential for internal controls in financial processes, process quality is essential for data analysis. It is therefore not system-based or substantive, but the best of both worlds. And those worlds are expanding, as more and more topics beyond the pure financial statements fall within the annual report and the auditor’s scope. Most notable is ESG reporting, bringing new processes and data into scope.

In the 80s, a presentation by the Canadian Institute of Chartered Accountants (CICA) was frequently shown in the Netherlands. The gist was that in the magical year 2000, auditor Gene would perform the annual audit by linking his “audit” computer to that of the auditee, and the audit program would do the rest. Miss Jane brought coffee (that was the way things were done in those days) and in the afternoon the results were discussed with the director of the audited organization.

In short, the IT auditor of the future still needs a solid toolbox for the “financial statement” audit – though hopefully not like the punch card boxes and the first luggable desktop computers that processed the data analyses. In the bottom two layers of the approach to the financial statement audit described earlier, a dual role combining financial audit knowledge and IT knowledge seems desirable, perhaps in an integrated profile of financial auditor and IT auditor. Although, given the example of Gene above, that will take longer than desired.

Notes

  1. Substantive versus system-based: in a substantive audit approach, the auditor obtains as much audit evidence as possible by selecting data and comparing it with external sources or by comparing it with other data already audited. This is often done on a sample basis. In a system-based audit approach, the auditor obtains audit evidence by assessing the adequacy of the system of internal controls in the processes and systems (design) and testing the operation of internal controls.

References

[Koed85] A.H.C. Koedijk (1985). Beoordeling betrouwbaarheid van een (geautomatiseerd) informatiesysteem: De CASA methode. Compact, 1985(4).

[NIVR88] NIVRA (1988). NIVRA-geschrift 44, Automatisering en Controle: Feitelijke Aanpak Systems Audit.

Celebrating fifty years of Compact and Digital Trust

On 7 June 2023, KPMG hosted an event in Amstelveen to celebrate 50 years of Compact. Over 120 participants gathered to explore the challenges and opportunities surrounding Digital Trust. Together with Alexander Klöpping, journalist and tech entrepreneur, the event offered four interactive workshops – on ESG, AI algorithms, Digital Trust, and the upcoming EU Data Acts – giving participants from various industries and organizations insights and take-aways for dealing with their digital challenges.

Introduction

As Compact celebrated its fiftieth anniversary, the technology environment had seen evolutions that people could never have imagined fifty years ago. Despite the countless possibilities, the question of trust and data privacy has become more critical than ever. ChatGPT represents a significant advancement in “understanding” and generating human-like text and programming code, and nobody can predict what AI algorithms will make possible in the next fifty years. We need to act on the ethical considerations and controversies. With rapidly advancing technologies, how can people and organizations protect their own interests and privacy in terms of Digital Trust?

Together with Alexander Klöpping, journalist and tech entrepreneur, the participants had an opportunity to embark on a journey to evaluate the past, improve the present and learn how to embrace the future of Digital Trust.

In this event recap, we will guide you through the event and workshop topics to share important take-aways from ESG, AI Algorithms, Digital Trust, and upcoming EU Data Acts workshops.


Foreseeing the Future of Digital Trust

Soon, a personally written article like this could become a rarity, as most texts might be AI-generated. That is one of the predictions about AI development shared by Alexander Klöpping during his session “Future of Digital Trust”. Over the past few years, generative AI has seen significant advancements, leading to revolutionary opportunities in creating and processing text, images, code, and other types of data. However, such rapid development is – besides all kinds of innovative opportunities – also associated with high risks when it comes to the reliability of AI-generated outputs and the security of sensitive data. Although many guardrails around Digital Trust need to be put in place before we can adopt AI-generated outputs, Alexander’s talk suggested a possible advanced future of Artificial General Intelligence (AGI), which can learn, think, and produce output with human-level intelligence.

Digital Trust is a crucial topic for the short-term future, becoming a recurring theme in all areas, from sustainability to upcoming EU regulations on data, platforms and AI. Anticipated challenges and best practices were discussed during the interactive workshops with more than a hundred participants, including C-level management, board members and senior management.


Workshop “Are you already in control of your ESG data?”

Together with the KPMG speakers, guest speaker Jurian Duijvestijn, Finance Director Sustainability at FrieslandCampina, shared FrieslandCampina’s ESG journey in preparation for the Corporate Sustainability Reporting Directive (CSRD).

Sustainability reporting is moving from a scattered EU landscape to new mandatory European reporting standards. As shown in Figure 1, the European Sustainability Reporting Standards (ESRS) consist of twelve standards, including ten topical standards covering the Environment, Social and Governance areas.

Figure 1. CSRD Standards.

CSRD requires companies to report on the impact of corporate activities on the environment and society, as well as the financial impact of sustainability matters on the company, resulting in an extensive set of financial and non-financial metrics. The CSRD implementation will take place in phases, starting with the large companies already covered by the Non-Financial Reporting Directive and continuing with other large companies (FY25), SMEs (FY26) and non-EU parent companies (FY28). The required changes to corporate reporting should be implemented rapidly to ensure timely compliance, as companies in scope of the first phase must publish their reports in 2025 based on 2024 data. The integration of sustainability at all levels of the organization is essential for a smooth transition. As pointed out by the KPMG speakers, Vera Moll, Maurice op het Veld and Eelco Lambers, a sustainability framework should be incorporated into all critical business decisions, going beyond corporate reporting and transforming business operations.

The interactive breakout activities confirmed that adopting sustainability reporting is a challenging task for many organizations, due to new KPIs, changes to calculation methodologies, low ESG data quality, and tooling not fit for purpose. In line with the theme of the Compact celebration, the development of the required data flows depends on a trustworthy network of suppliers and the development of strategic partnerships at an early stage of adoption.


CSRD is a reporting framework that companies can use to shape their strategy to become sustainable at all organizational and process levels. Most companies have already started to prepare for CSRD reporting, but anticipate a challenging project, both internally (data accessibility and quality) and externally (supply chains). While a lot of effort is required to ensure timely readiness, the transition period also provides a unique opportunity to measure organizational performance from an ESG perspective and to transform, so that sustainability becomes an integral part of the brand story.

Workshop “Can your organization apply data analytics and AI safely and ethically?”

The quick rise of ChatGPT has sparked a major change: every organization now needs to figure out how AI fits in, where it is useful, and how to use it well. But using AI also raises some major questions, for example in the field of AI ethics. How much should you tell your customers if you used ChatGPT to help write a contract?

During the Responsible AI workshop, facilitators Marc van Meel and Frank van Praat, both from KPMG’s Responsible AI unit, presented real-life examples illustrating the challenges encountered when implementing AI. They introduced five important principles in which ethical dilemmas can surface: the Reliability, Resilience, Explainability, Accountability, and Fairness of AI systems (see Figure 2). Following the introduction of these principles, the workshop participants engaged in animated discussions, exploring the benefits and drawbacks associated with AI.

Figure 2. Unique challenges of AI.

To quantify those challenges of AI, organizations can use three axes: Complexity, Autonomy, and Impact (see Figure 3).

Figure 3. Three axes of quantifying AI risks.
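As an illustration only – the rating scale and the multiplicative form are our own assumptions, not KPMG’s model – the three axes could be combined into a single indicative score:

```python
def ai_risk_score(complexity, autonomy, impact):
    """Combine the three axes (each rated 1-5) into one indicative score.

    The multiplicative form reflects the intuition that a highly autonomous
    system with high impact is riskier than the sum of its parts; the
    1-125 scale is purely illustrative.
    """
    for axis, value in (("complexity", complexity),
                        ("autonomy", autonomy),
                        ("impact", impact)):
        if not 1 <= value <= 5:
            raise ValueError(f"{axis} must be rated 1-5")
    return complexity * autonomy * impact  # 1 (minimal) .. 125 (maximal)

# Example: a complex, semi-autonomous model with high business impact
score = ai_risk_score(complexity=4, autonomy=3, impact=5)
```

Such a score is only a conversation starter: the point of the three axes is to make the discussion of an AI system’s risk explicit, not to reduce it to a single number.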

Because ChatGPT was quite new when the workshop took place (and still is today), it was top of mind for everyone in the session. One issue that received substantial attention was how ChatGPT might affect privacy and company-sensitive information. It is like being caught between two sides: on the one hand, you want to use this powerful technology and give your staff the freedom to use it too; on the other hand, you have to adhere to privacy rules and make sure your important company data remains confidential.

The discussion concluded by stressing the importance of the so-called “human in the loop”: it is crucial that employees understand the risks of AI systems such as ChatGPT when using them, and some level of human intervention should be mandatory. This automatically led to another dilemma: how to find the right balance between humans and machines (e.g. AI). Everyone agreed that how humans and AI should work together depends on the specific AI context. One thing was clear: the challenges with AI are not just about the technology itself. The rules (e.g. privacy laws) and practical aspects (what is the AI actually doing?) also matter significantly when we talk about AI and ethics.

There are upsides as well as downsides to working with AI. How do you deal with privacy-related documents that are uploaded to a (public) cloud platform with a Large Language Model? What if you create a PowerPoint presentation with ChatGPT and decide not to tell your audience? There are many ethical dilemmas, such as the lack of transparency of AI tools, discrimination due to misuse of AI, or Generative AI-specific concerns such as intellectual property infringement.

However, ethical dilemmas are not the sole considerations. As shown in Figure 4, practical and legal considerations can also give rise to dilemmas in various ways.

Figure 4. Dilemmas in AI: balancing efficiency, compliance, and ethics.

The KPMG experts and participants agreed that it would be impossible simply to block the use of this type of technology; it is better to prepare employees, for instance by providing privacy training and encouraging critical thinking, so that Generative AI is used in a responsible manner. The key is to consider what type of AI provides added value and what the associated cost of control is.

After addressing the dilemmas, the workshop leaders concluded with some final questions and thoughts about responsible AI. Participants were interested in the biggest risks tied to AI, which align with the five principles discussed earlier (see Figure 3). But the key lesson from the workshop was slightly different: using AI indeed involves balancing achievements and challenges, but opportunities should take priority over risks.

Workshop “How to achieve Digital Trust in practice?”

This workshop was based on KPMG’s recent work with the World Economic Forum (WEF) on Digital Trust and was presented by Professor Lam Kwok Yan (Executive Director, National Centre for Research in Digital Trust of the Nanyang Technological University, Singapore), Caroline Louveaux (Chief Privacy Officer of Mastercard) and Augustinus Mohn (KPMG). The workshop provided the background and elements of Digital Trust, trust technologies, and digital trust in practice followed by group discussions.

Figure 5. Framework for Digital Trust ([WEF22]).

The WEF Digital Trust decision-making framework can boost trust in the digital economy by enabling decision-makers to apply so-called Trust Technologies in practice. Organizations are expected to consider security, reliability, accountability, oversight, and the ethical and responsible use of technology. A group of major private and public sector organizations around the WEF (incl. Mastercard) is planning to operationalize the framework in order to achieve Digital Trust (see also [Mohn23]).

Professor Lam introduced how Singapore has been working to advance the scientific research capabilities of Trust Technology. The Singapore government recognized the importance of Digital Trust and provided $50 million in funding for the Digital Trust Centre, the national center of research in trust technology. While digitalization of the economy is important, data protection is an immediate concern, and concerns about distrust are creating opportunities to develop Trust Technologies. Trust Technology aims not only to identify which technologies can be used to enhance people’s trust, but also to define concrete, implementable functionality for the areas shown in Figures 6 and 7 as presented during the workshop.

Figure 6. Areas of opportunity in Trust Technology (source: Professor Lam Kwok Yan).

Figure 7. Examples of types of Trust Technologies (source: Professor Lam Kwok Yan).

Presentation by Professor Lam Kwok Yan (Nanyang Technological University), Helena Koning (Mastercard) and Augustinus Mohn (KPMG).

Helena Koning from Mastercard shared how Digital Trust is put into practice at Mastercard. One example was data analytics for fraud prevention. While designing this AI-based technology, Mastercard needed to consider several aspects of Digital Trust: they applied privacy guidelines, performed bias testing for data accuracy, and addressed the auditability and transparency of the AI tools. Another example was helping society with anonymized data while complying with data protection rules. When many refugees arrived from Ukraine, Poland needed to analyze how many Ukrainians were currently in Warsaw. Mastercard supported this effort by anonymizing and analyzing the data. Neither example could have been achieved without suitable Trust Technologies.
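The aggregation step in such an analysis can be sketched in a few lines. The following is a minimal illustration of the general idea, not Mastercard’s actual method: individual-level records (the field names and values are invented for this sketch) are reduced to per-city counts, and any group smaller than a threshold k is suppressed, a simple k-anonymity-style rule.

```python
from collections import Counter

def aggregate_with_suppression(records, k=5):
    """Aggregate individual-level records into per-city counts,
    suppressing any group smaller than k (a k-anonymity-style rule)."""
    counts = Counter(r["city"] for r in records)
    return {city: n for city, n in counts.items() if n >= k}

# Hypothetical individual-level records (no real data).
records = [{"city": "Warsaw"}] * 8 + [{"city": "Krakow"}] * 3

print(aggregate_with_suppression(records, k=5))
# → {'Warsaw': 8}  (Krakow is suppressed: only 3 records, below k)
```

Published figures then only ever describe groups of at least k people, so no individual can be singled out from the output.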

In the discussion at the end of the workshop, further use cases for Trust Technology were explored. Many participants had questions on how to utilize (personal) data while securing privacy. Technology alone cannot always solve such a problem entirely; policies and/or processes also need to be reviewed and addressed. For example, in the case of pandemic modeling for healthcare organizations, the modeling was enabled without using actual data in order to comply with privacy legislation. In another case, in advertising, cross-platform data analysis was enabled to satisfy customers, while the solution ensured that the data was not shared among competitors. The workshop also stressed the importance of content labeling to identify original data and prevent fake information from spreading.

For organizations, it is important to build Digital Trust by identifying suitable technologies and ensuring good governance of the chosen technologies to realize their potential for themselves and society.

Workshop “How to anticipate upcoming EU Data regulations?”

KPMG specialists Manon van Rietschoten (IT Assurance & Advisory), Peter Kits (Tech Law) and Alette Horjus (Tech Law) discussed the upcoming data-related EU regulations. This interactive workshop explored the impact of upcoming EU Digital Single Market regulations on business processes, systems and controls.

The EU Data Strategy was introduced in 2020 to unlock the potential of data and establish a single European data-driven society. Building on the principles of the Treaty on the Functioning of the European Union (TFEU), the Charter of Fundamental Rights of the EU (CFREU) and the General Data Protection Regulation (GDPR), the EU Data Strategy encompasses several key initiatives that collectively work towards its overarching goals. Such initiatives include entering into partnerships, investing in infrastructure and education, and increased regulatory oversight resulting in new EU laws and regulations pertaining to data. During the workshop, the focus was on the latter, and the following regulations were highlighted:

  • The Data Act
  • The Data Governance Act
  • The ePrivacy Regulation
  • The Digital Markets Act
  • The Digital Services Act
  • The AI Act

Figure 8. Formation of the EU Data Economy.

During the workshop participants also explored the innovative concept of EU data spaces. A data space, in the context of the EU Data Strategy, refers to a virtual environment or ecosystem that is designed to facilitate the sharing, exchange, and utilization of data within a specific industry such as healthcare, mobility, finance and agriculture. It is essentially a framework that brings together various stakeholders, including businesses, research institutions, governments, and other relevant entities, to collaborate and share data for mutual benefit while ensuring compliance with key regulations such as the GDPR.

The first EU Data Space – European Health Data Space (EHDS) – is expected to be operable in 2025. The impact of the introduction of the EU Data Spaces is significant and should not be underestimated – each Data Space has a separate regulation for sharing and using data.

Figure 9. European Data Spaces.

The changes required of organizations to ensure compliance with the new regulations pose a great challenge, but will also create data-driven opportunities and stimulate data sharing. This workshop provided a platform for stakeholders to delve into the intricacies of the newly introduced regulations and discuss the potential impact on data sharing, cross-sector collaboration, and innovation. There was ample discussion scrutinizing how the EU Data Strategy and the resulting regulations could and will reshape the data landscape, foster responsible AI, and bolster international data partnerships while safeguarding individual privacy and security.

Key questions raised by the workshop participants concerned the necessity of trust and the availability of technical standards to substantiate the requirements of the Data Act. Combined with the regulatory pressure, the anticipated challenges create a risk that companies become compliant on paper only. The discussions confirmed that trust is essential, as security and privacy concerns were also voiced by the participants: “If data is out in the open, how do we inspire trust? Companies are already looking into ways not to have to share their data.”

In conclusion, the adoption of new digital EU Acts is an inevitable but interesting endeavor; however, companies should also focus on the opportunities. The new regulations require a change in vision, a strong partnership between organizations and a solid Risk & Control program.

In the next Compact edition, the workshop facilitators will dive deeper into the upcoming EU Acts.

Conclusion

The workshop sessions were followed by a panel discussion between the workshop leaders. The audience united in the view that adopting the latest developments in the area of Digital Trust requires a significant effort from organizations. To embrace the opportunities, they need to keep an open mind while being proactive in mitigating the risks that may arise with technological advancements.

The successful event was concluded with a warm “thank you” to the three previous Editors-in-Chief of Compact who oversaw the magazine for half a century, highlighting how far Compact has come. Starting as an internal publication in the early seventies, Compact has become a leading magazine covering IT strategy, innovation, auditing, security/privacy/compliance and (digital) transformation topics, with the ambition to continue for another fifty years.

Maurice op het Veld (ESG), Marc van Meel (AI), Augustinus Mohn (Digital Trust) and Manon van Rietschoten (EU Data Acts).

Editors-in-Chief (from left to right): Hans Donkers, Ronald Koorn and Dries Neisingh (Dick Steeman not included).

References

[Mohn23] Mohn, A. & Zielstra, A. (2023). A global framework for digital trust: KPMG and World Economic Forum team up to strengthen digital trust globally. Compact 2023(1). Retrieved from: https://www.compact.nl/en/articles/a-global-framework-for-digital-trust-kpmg-and-world-economic-forum-team-up-to-strengthen-digital-trust-globally/

[WEF22] World Economic Forum (2022). Earning Digital Trust: Decision-Making for Trustworthy Technologies. Retrieved from: https://www.weforum.org/reports/earning-digital-trust-decision-making-for-trustworthy-technologies/

How does new ESG regulation impact your control framework?

Clear and transparent disclosure of companies’ ESG commitments is becoming ever more important. Asset managers are increasingly aware of ESG, and there is an opportunity to show how practices and policies are implemented that lead to a better environment and society. Furthermore, stakeholders (e.g., pension funds) are looking for accurate information in order to make meaningful decisions and to comply with relevant laws and regulations themselves. Reporting on ESG is no longer voluntary, as new and upcoming laws and regulations demand that asset managers report more extensively and in more depth on ESG. Based on our yearly KPMG benchmark of Service Organization Control (hereinafter: “SOC”) Reports of asset managers, we are surprised that, given the growing interest in and importance of ESG, only 7 out of 12 Dutch asset managers report on ESG, and still on a limited scope and scale.

Introduction

Before we get into the benchmark, we will give some background on the upcoming ESG reporting requirements for the asset management sector. These reporting requirements are mainly related to the financial statements. However, we are convinced that clear policies and procedures, as well as a functioning ESG control framework, are needed to achieve compliance with these new regulations. We therefore benchmark to what extent asset managers are (already) reporting on ESG as part of their annual SOC reports (i.e., ISAE 3402 or Standard 3402). We end with a conclusion and a future outlook.

Reporting on ESG

In this section we provide an overview of the most important and relevant ESG regulations for the asset management sector. Most ESG regulation is initiated by the European Parliament and Commission. We therefore start with the basis, the EU Taxonomy, which we discuss at a high level, followed by more detailed regulations such as the Sustainable Finance Disclosure Regulation (hereinafter: “SFDR”) and the Corporate Sustainability Reporting Directive (hereinafter: “CSRD”).

EU Taxonomy

In order to meet the EU’s overall climate and energy targets and the objectives of the European Green Deal by 2030, there is an increasing need for a common language among the EU countries and a clear definition of “sustainable” ([EC23]). The European Commission has recognized this need and has taken a significant step by introducing the EU Taxonomy. This classification system, operational since July 12th, 2022, is designed to address six environmental objectives and plays a crucial role in advancing the EU’s sustainability agenda:

  1. Climate change mitigation
  2. Climate change adaptation
  3. The sustainable use and protection of water and marine resources
  4. The transition to a circular economy
  5. Pollution prevention and control
  6. The protection and restoration of biodiversity and ecosystems

The EU Taxonomy is a tool that helps companies disclose their sustainable economic activities and helps (potential) investors understand whether a company’s economic activities are sustainable from an environmental, social, and governance perspective.

According to EU regulations, companies with over 500 employees during the financial year and operating within the EU are required to file an annual report on their compliance with all six environmental objectives on 1 January of each year, starting from 1 January 2023. The EU ESG taxonomy report serves as a tool for companies to demonstrate their commitment to sustainable practices and to provide transparency on their environmental and social impacts. The annual filing deadline is intended to ensure that companies are regularly assessing and updating their sustainable practices in order to meet the criteria outlined in the EU’s ESG taxonomy. Failure to file the report in a timely manner may result in penalties and non-compliance with EU regulations. It is important for companies to stay informed and up-to-date on the EU’s ESG taxonomy requirements to ensure compliance and maintain a commitment to sustainability.

SFDR

The SFDR was introduced by the European Commission alongside the EU Taxonomy and requires asset managers to disclose how sustainability risks are assessed as part of the investment process. The EU’s SFDR regulatory technical standards (RTS) came into effect on 1 January 2023. These standards aim to promote transparency and accountability in sustainable finance by requiring companies to disclose information on the sustainability risks and opportunities associated with their products and services. The SFDR RTS also establish criteria for determining which products and services can be considered as sustainable investments.

There are several key dates that companies operating within the EU need to be aware of in relation to the SFDR RTS. Firstly, the RTS officially applies as of 1 January 2023. Secondly, companies are required to disclose information on their products and services in accordance with the RTS as of 30 June 2023. Lastly, companies will be required to disclose this information in their annual financial reports as of 30 June 2024.

Compliance with the SFDR RTS and adherence to the specified deadlines is crucial; failure to comply may again result in penalties and non-compliance with EU regulations. Companies should also keep up with the SFDR RTS requirements to ensure that they provide accurate and relevant information to investors and other stakeholders on the sustainability of their products and services, as these stakeholders are required to disclose part of this information as well.

CSRD

The CSRD entered into force on 5 January 2023. This new directive strengthens the rules and guidelines regarding the social and environmental information that companies have to disclose. In time, these rules will ensure that stakeholders and (potential) investors have access to validated (complete and accurate) ESG information across the entire chain (see Figure 1). In addition, the new rules will positively influence companies’ environmental activities and drive competitive advantage.

Figure 1. Data flow aggregation.

Most of the EU’s largest (listed) companies have to apply the new CSRD rules in FY2024, for reports published in 2025. The CSRD will make it mandatory for companies to have their non-financial (sustainability) information audited. The European Commission has proposed to start with limited assurance on the CSRD requirements in 2024. This represents a significant advantage for companies, as limited assurance is less time-consuming and costly and will give good insights into current maturity levels. In addition, the Type I assurance report (i.e., on the design and implementation of controls) can be used as a guideline to improve and extend the current measures in order to ultimately comply with the CSRD rules. We expect that the European Commission will demand a reasonable assurance report as of 2026. Currently, the European Commission is assessing which audit standard will be used as the reporting guideline.

Specific requirement for the asset management sector

In 2023, the European Sustainability Reporting Standards (ESRS) will be published in draft by the European Financial Reporting Advisory Group (hereinafter: “EFRAG”) Project Task Force for the sectors Coal and Mining, Oil and Gas, Listed Small and Medium Enterprises, Agriculture, Farming and Fishing, and Road Transport ([KPMG23]). The classification of the different sectors is based on the European Classification of Economic Activities. The sector-specific standards for financial institutions, which will be applicable to asset managers, are expected to be released in 2024, although the European Central Bank and the European Banking Authority both argue that specific standards for financial institutions are a matter of top priority, given the sector’s driving role in the transition of the other sectors to a sustainable economy ([ICAE23]). We therefore propose that financial institutions start analyzing the mandatory and voluntary CSRD reporting requirements, determine through a gap analysis which information they already have and what is missing, and start working on the gaps.

Reporting on internal controls

European ESG regulation focuses on ESG information in external reporting. However, no formal requirements have been set (yet) for the underlying ESG information and data processes themselves. In order to achieve high-quality external reporting, control over internal processes is required. Furthermore, asset managers are also responsible for the processes performed by third parties, e.g., the data input received from third parties. It is therefore important for an asset manager to gain insight into the level of maturity of the controls on these processes as well.

Controls should cover the main risks of an asset manager, which can be categorized as follows:

  • Inaccurate data
  • Incomplete data
  • Fraud (greenwashing)
  • Subjective/inaccurate information
  • Different/unaligned definitions for KPIs
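Several of these risks can be backed by automated data-quality controls on incoming ESG data. The sketch below is illustrative only; the field names, required fields and plausibility rule are our own assumptions, not a prescribed standard. It checks one third-party record for completeness and plausibility and returns a list of control findings:

```python
def check_esg_record(record, required_fields=("entity", "scope1_emissions", "reporting_period")):
    """Run simple completeness and plausibility checks on one ESG record,
    returning a list of control findings (empty list = record passes)."""
    findings = []
    # Completeness: every required field must be present and non-empty.
    for field in required_fields:
        if record.get(field) in (None, ""):
            findings.append(f"missing field: {field}")
    # Plausibility (accuracy proxy): emissions must be a non-negative number.
    value = record.get("scope1_emissions")
    if value is not None and (not isinstance(value, (int, float)) or value < 0):
        findings.append("implausible scope1_emissions value")
    return findings

# Hypothetical record from a third-party data feed.
record = {"entity": "Fund A", "scope1_emissions": -10, "reporting_period": ""}
print(check_esg_record(record))
# → ['missing field: reporting_period', 'implausible scope1_emissions value']
```

In practice, the findings of such checks would feed into the exception handling and monitoring controls of the SOC framework discussed below.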

In order to comply with the regulations outlined in Figure 1, we recommend including the full scope of ESG processes in the current SOC reports of asset managers. The SOC report was originally designed to provide assurance on processes related to financial reporting over historical data. Nowadays, more and more attention is paid to non-financial processes, and users of SOC reports are requesting and requiring assurance over more and more non-financial reporting processes. We observe that some asset managers are including processes such as Compliance (more relevant for ISAE3000A), Complaints and ESG in their SOC reports. KPMG performed a benchmark of the processes currently included in the SOC reports of asset managers. We discuss the results in the next section.

Benchmark

By comparing 12 asset management SOC reports for 2022, KPMG observed that 6 out of 12 asset managers are including ESG in their system descriptions (description of the organization), and 7 out of 12 asset managers have implemented some ESG controls in the following processes:

  • Trade restrictions (7 out of 12 asset managers)
  • Voting policy (4 out of 12 asset managers)
  • Explicit control on external managers (4 out of 12 asset managers)
  • Emission goals / ESG scores (1 out of 12 asset managers)
  • Outsourcing (0 out of 12 asset managers)

We observe that reporting is currently mostly related to governance components. There is little to no reporting on environmental and social components. In addition, we observe that none of the twelve asset managers report on or mention third party ESG data in their SOC reports.

We conclude that ESG information is not (yet) structurally included in the assurance reports. This does not mean that ESG processes are not controlled; companies can have internal controls in place that are not part of a SOC report. In our discussions with users of the assurance reports (e.g. pension funds), we received feedback that external reporting on ESG-related controls is perceived as valuable, given the importance of sustainable investing and upcoming (EU) regulations. Based on our combined insights from both an ESG assurance and an advisory perspective, we share our vision on how to report on ESG in the next section.

Conclusion and future outlook

We conclude that only 7 out of 12 asset managers currently report on ESG-related controls in their SOC reports, and still on a limited scope and scale. This is not in line with the risks and opportunities associated with ESG data, nor with active and upcoming laws and regulations. We therefore recommend that asset managers enhance control on ESG by:

  • implementing ESG controls as part of their internal control framework (internal reporting);
  • implementing ESG controls as part of their SOC framework (external reporting);
  • assessing and analyzing, together with external (data) service providers and relevant third parties, which ESG controls are missing.

The design of a proper ESG control framework starts with a risk assessment and the identification of opportunities. Secondly, policies, procedures and controls should be put in place to cover the identified material risks. These risks need to be mitigated across the entire chain, which means that transparency within the chain and frequent contact among the stakeholders are required. The COSO model (commonly used within the financial sector) could be used as a starting point for a first risk assessment, in which we identify inaccurate data, incomplete data, fraud, inaccurate information and unaligned definitions of KPIs as key risks. Lastly, the risks and controls should be incorporated into the organization’s annual risk cycle to ensure quality, relevance, and completeness. Please refer to Figure 2 for an example.
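The outcome of such a risk assessment can be kept as a simple risk register linking each key risk to a control and an owner in the stakeholder data chain. The entries below are illustrative assumptions only, not a prescribed mapping; the register itself can be subjected to a basic completeness check:

```python
# Illustrative risk register: each key risk mapped to an example control
# and a chain stakeholder as owner (all entries are assumptions).
risk_register = [
    {"risk": "Inaccurate data", "control": "Reconciliation against source reports", "owner": "Data provider"},
    {"risk": "Incomplete data", "control": "Completeness check on the monthly ESG data feed", "owner": "Asset manager"},
    {"risk": "Fraud (greenwashing)", "control": "Independent review of sustainability claims", "owner": "Internal audit"},
    {"risk": "Unaligned KPI definitions", "control": "Central KPI definition catalogue, reviewed annually", "owner": "Risk & Control"},
]

def risks_without_owner(register):
    """Return the risks with no assigned owner -- a basic
    completeness check on the register itself."""
    return [r["risk"] for r in register if not r.get("owner")]

print(risks_without_owner(risk_register))
# → []  (every risk in this illustrative register has an owner)
```

Embedding such a register in the annual risk cycle makes it straightforward to verify, each cycle, that every identified risk still has a control and an owner.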

Figure 2. Example: top risks x COSO x stakeholder data chain.

References

[EC23] European Commission (2023, January 23). EU taxonomy for sustainable activities. Retrieved from: https://finance.ec.europa.eu/sustainable-finance/tools-and-standards/eu-taxonomy-sustainable-activities_en

[ICAE23] ICAEW Insights (2023, May 3). ECB urges priority introduction of ESRS for financial sector. Retrieved from: https://www.icaew.com/insights/viewpoints-on-the-news/2023/mar-2023/ecb-urges-priority-introduction-of-esrs-for-financial-sector

[KPMG23] KPMG (2023, April). Get ready for the Corporate Sustainability Reporting Directive. Retrieved from: https://assets.kpmg.com/content/dam/kpmg/nl/pdf/2023/services/faq-csrd-2023.pdf

Automation and IT Audit

The introduction of IT in organizations was rather turbulent. Under the motto “First there was nothing, then the bicycle came”, everyone had to learn to deal with the effects of IT on people and processes. This article describes the development of IT and the role of auditors in examining the quality of internal control for the purpose of the financial audit. IT-related laws and regulations are discussed, as well as KPMG’s involvement with professional organizations and university study programs.

For the Dutch version of this article, see: Automatisering en IT-audit

The beginning of IT

From 1965 onwards, computers were introduced in organizations, mostly for simple administrative applications. In this period the accountancy profession had a wake-up call when, during audit work at clients, (assistant) auditors reported in despair: “They bought a computer.” At that time, there was no IT organization yet whatsoever.

The start of automation with batch processing

Simple processes were automated and processed by computers (that meanwhile had been introduced by IBM, BULL, and UNIVAC) that could only perform one process at a time. The responsibility for automation lay almost always with the administrative function of the organization.

The required programs were written in programming languages such as assembler or COBOL. The required functionality was elaborated on pre-printed forms, after which the programmers themselves had to record the instructions on punch cards. These large quantities of punch cards were read in the computer center and recorded on magnetic tapes, after which processing took place. The output was printed on paper. The same process was used for the processing of mostly administrative data. The computer was controlled through the so-called Job Control Language (JCL), with which computer operators could initiate operations.

In time, complexity grew and an expert in the area of computer control programs, the systems programmer, entered the scene. Both the quality and the effectiveness of internal control measures within organizations came under pressure, as this new systems programming function could manipulate processing results out of sight of the internal organization.

The accountancy profession quickly acknowledged that automation could influence the quality of the system of internal control measures within organizations. As early as 1970, the Netherlands Institute of Register Accountants (NIVRA1) issued Publication number 1, entitled Influence of the administrative automation on the internal control. That same year, the Canadian Institute of Chartered Accountants issued the book Computer Control Guidelines, followed in 1974 by Computer Audit Guidelines. In 1975, NIvRA Publication 13 followed: The influence of automated data processing on the audit.

Use of the computer in the audit

It was a logical step for auditors to use the client’s computer or an in-house computer to obtain the information required for the audit. Standard packages such as AudiTape were marketed. Within KPMG, a department entitled Automation & Control Group was created in 1971, with programmers who ensured that the audit practice was fully equipped. Next to the much-used statistical “currency ranking” sampling method, a better method was developed, called the sieve method.

Needless to say, it was stressed that the audit client needed to attend the processing runs of the software developed by the auditors, or of the standard audit software used.

The development of the COMBI tool (Cobol Oriented Missing Branch Indicator) within KPMG offered the possibility of using test cases to identify the “untouched branches” in a program, which could be applied efficiently during the development phase of programs.

Foundation of KPMG EDP Auditors

After a short start-up phase of a few years, in which specialized accountants used audit software on computers at audit clients, the Automation & Control (AC) group was established in the period 1971-1973. This group consisted of financial auditors with IT affinity (who were trained and rotated every three years) and programmers for the development of queries and audit software, such as the abovementioned COMBI.

In 1974, it was decided to establish a separate organizational unit entitled KPMG EDP Auditors (KEA, hereinafter KPMG). The attention of the auditor moved to engaging (IT Audit) experts, who had to establish whether the organization’s embedded system of internal control measures was also anchored in the configuration of the information system, and to obtain assurance that application programs developed by, or under the responsibility of, the user organization would be processed unchanged and in continuity.

Specialized knowledge within the auditor’s organization was required due to the complexity arising from the introduction of large computer systems with online/real-time facilities, database management systems and standard access control software (COTS, Commercial-Off-The-Shelf). After all, the organization had to be able to identify the impact that this new technology would have on internal controls and to recognize the implications for the auditor’s work.

It is in that context that in 1974 it was decided to issue a professional journal entitled Compact (COMPuter and ACcounTant). Initially, it was primarily intended to inform the (financial) audit practice, but it became increasingly appreciated by other organizations as well, mainly audit clients.

Introduction of complex computer systems

From 1974 onwards, the application and use of computers accelerated, as the new computers could perform multiple tasks simultaneously. In addition, an infrastructure was created that allowed the user organization to collect data directly. The IT organization therefore became an independent organizational unit, usually positioned within the Finance hierarchy.

IBM introduced the operating system MVS (Multiple Virtual Storage), and shortly after (1975) software under the collective name of Database Management Systems (DBMS) was marketed. The emphasis of IT applications was placed on online/real-time functionality. Other computer suppliers introduced similar systems.

The efforts of auditors aimed at assessing the quality aspects of automation initially focused mainly on assessing the measures of physical security of the computer center and the availability of a tested emergency plan.

As the development from batch environments to online/real-time environments continued, the importance of logical security, as well as the quality of procedures, directives and measures in the automation organization, came to the fore. Examples include the arrangement of access control; backup and recovery procedures; test, acceptance and transfer procedures for application software; and library management.

The introduction of complex computer systems not only meant a migration of classically organized data to a new IT environment, but also a migration of control measures to higher software layers (access control systems, subschemas within DBMSs). The entire data conversion project from the classical IT environment to online/real-time necessitated sound planning of the conversion, defined phases, procedures for data cleansing, determination of the completeness and correctness of the data collection, and security measures during the entire project.

Many Compact issues have discussed the complexity of these profound developments and their impact on internal and financial audits.

Minicomputer systems

More or less simultaneously with the introduction of large, complex mainframe systems, smaller computer systems were introduced. As these grew larger, they came to be called mid-range computers.

For the KPMG organization this meant further specialization, as the introduction of minicomputer systems in SME organizations usually had different consequences for the design of the system of internal controls and for the security measures to be taken in these organizations.

KPMG authors published successive Compact articles on subjects such as the reliability of administrative data processing with minicomputers, the decentralized use of small computers as a new problem area, and the influence of small-scale automation on the audit.

Newer mid-range computer systems had a more complex architecture, which offered better possibilities for realizing a securely operating IT organization. The security options of the popular IBM AS/400 system, in particular, were covered extensively.

In addition, the security of PC systems and end-user computing was addressed. A Compact Special was devoted entirely to manageability and security in view of the increasing use of PC networks (including risks, methods, audit programs, and tooling).

Auditing of information systems

Characteristic of automation is that hardware and its supporting software develop (almost) simultaneously. Since the beginning of the seventies, auditors, and more specifically the experts specialized in IT auditing, have therefore focused on auditing information systems to verify that internal control measures were correctly configured not only in application programs but also in the underlying operating system software.

A research methodology entitled “System Assessment approach & financial audit” was developed early on in the Netherlands and periodically updated in response to frequent use. In 1998, this methodology was succeeded by the internationally adopted Business Process Analysis (BPA) method.

The rapid increase of electronic payments also deserves mention, along with the mapping of its consequences for manageability, as do the discussions about the use of encryption and the possible consequences of legislation.

The quality of the system development organization was also investigated. This development ultimately led to Enterprise Resource Planning (ERP) systems as applications became further integrated; subsequently, Quality Assurance measures were introduced to improve the control of ERP projects. The literature discussed both the underexposed management aspects of ERP implementations and the complexity of defining and implementing approvals.

E-business advanced rapidly as well. Electronic payments on the Internet became more or less self-evident, and e-mail conventions were developed. Assessing critical infrastructural security mechanisms, such as Public Key Infrastructure (PKI), for which a national framework of audit standards and procedures needed to be developed, became important to IT auditors. The KPMG PKI standards framework was later adopted internationally in the WebTrust standard. Above all, KPMG focused on the assessment of risk management in e-business environments.

Information Security Policy

Information Security has been in the spotlight ever since the start of the automation of data processing. Since the early eighties, the subjects of organizational security, logical security, and physical security (see Figure 1), as well as back-up, restart and recovery and fallback options were considered in conjunction.


Figure 1. Class about Information Security (Dries Neisingh at the University of Groningen).

In the 90s, the Information Security policy was highlighted as a sound foundation for information protection. Since then, many KPMG authors have shared their knowledge and experience of Information Security in almost all Compact volumes.

Artificial Intelligence and Knowledge Based Systems

At the beginning of the eighties, an investigation was started within KPMG into the possibilities of using Artificial Intelligence (AI) in the audit. In 1985, “Knowledge-Based Systems (KBS): a step forward into the controllability of administrative processes” was introduced as a result of, among other things, the developments in AI, higher processing speeds and larger memories. The KBS software itself does not contain human-readable knowledge, but merely algorithms that perform processes based on knowledge stored externally (in the rule base).

In the following years there were new developments, as evidenced by publications on Structured Knowledge Engineering (SKE), developed by Bolesian Systems. KPMG also published on “Control software and numerical analysis” and on “Testing as a control technique”.

Microcomputer in the audit

After the successful growth of the use of computers in the (financial) audit, attention partly shifted to the use of the microcomputer in the audit. In 1983, an automated planning system became operational. Subsequently, a self-developed audit package was demonstrated with which file examinations could be executed.

The use of the micro in organizations to support administrative information processing was covered extensively, as was its use by the auditor as an audit tool. The micro was used both stand-alone and as part of a network.

Within KPMG, two projects were started: the development of software for connecting the audit client's computer with the auditor's micro, and the development of control programs for processing on the auditor's micro. KPMG's Software Engineering department researched software engineering, operating systems (e.g. UNIX), computer viruses, electronic payment, and the debit card.

IT outsourcing

Organizational scale and/or financial capacity sometimes meant that automated data processing was outsourced to computer/IT service organizations, which usually make use of standard packages available on the market. IT outsourcing in particular grew rapidly in the nineties and the early years of this century.

Jointly founded IT organizations – as shared service centers – arose as well. An example is the founding of six computer centers, spread across the country, on behalf of the administrations of healthcare insurers. Each healthcare insurer used the same functionality and was connected on-line/real-time to one of the regional computer centers. From the start of this special cooperation, KPMG was involved as IT auditor for overall quality assurance. Several opinions were issued on the quality aspects of the newly established IT organization, on its effective operation in continuity, and on the automated business rules and controls of the software. After all, the health insurance funds themselves carried out the user controls.

NBA publication 26, entitled Communications by the auditor related to the reliability and continuity of automated data processing, paid attention to these problems. Later, Publication 53 was issued regarding quality opinions on information services. In practice, these were named Third Party Statements (TPMs).

IT-related laws and regulations

Inspection and certification of IT

Since the beginning of the 80s, the subject of IT inspections and certifications regularly popped up on the agenda. The foundation “Institute to Promote Inspection and Certification in the Information Technology”2 was established. The Netherlands Standardization Institute (NNI3) was already working on a standard for the quality system for software. Within KPMG, much attention was paid to the possibility of issuing opinions on software quality systems, but also to the certification of software and development systems. Compact, for instance, published widely on the issues at hand.

Finally, the foundation KPMG Certification was established. In January 1998, it officially received the charter “Accreditation of BS 7799 Certification”4, after ICS (International Card Services, now part of ABN AMRO Bank) had received the first Dutch certificate for this international Information Security standard by the end of 1997.

In November 2002, the above accreditation of KPMG Certification was followed by the first accreditation and certification of PinkRoccade Megaplex (now part of telecommunications company KPN) for the TTP.nl certification scheme “Framework for certification of Certification Authorities against ETSI TS 101456”. This concerns the issuing of digital certificates for the use of digital IDs and signatures. Today, this is comparable to eIDAS certification.

Memorandum DNB

The Memorandum “Reliability and continuity of automated data processing in banking” published by the Dutch Central Bank (DNB) in 1988 was in itself no revelation. Since the start of KPMG’s IT Audit department, specialized IT Auditors were deployed in the audit practice of financial institutions, related to the assessment of internal controls in and around critical application software, and measures taken in the IT organization.

Various Compact issues show that the IT audit involvement was profound and varied. My oration at the University of Groningen in 1991, entitled “There are banks and banks”, reflected critically on this Memorandum.

It is worth noting that in April 2001, the DNB presented the Regulation Organization and Control (ROB), in which previous memoranda such as the one regarding outsourcing of automated data processing were incorporated.

Computer crime

Since the mid 70s, publications under the heading “Computer Abuse” increasingly appeared. Several “abuse types” were subsequently described in Compact. The subject remains current.

In November 1985, the Minister for Justice installed the Commission “Information technology and Criminal Law” under the presidency of Prof. H. Franken. KPMG was assigned by this commission to set up and perform a national survey among business and government to acquire insight (anonymously) into the quality and adequacy of internal controls and of security controls in IT organizations.

In the report that appeared in 1987, the picture sketched of the quality and effectiveness of the measures taken was far from reassuring, in both small and large IT environments. The committee therefore concluded (all things considered) that computer-related crime should be criminalized, taking into account the findings presented, where there were “breaches in secured work”.

Privacy

The creation of laws and regulations regarding privacy (the protection of personal data) has a long history. At the end of 1979, the public was informed at a symposium called “Information and automation”, which focused on the imminent national and international legislation regarding data protection, privacy protection, and international information transport.

Subsequently, Compact was used as an effective medium to inform employees and, especially, clients and relations about developments. In cooperation with the then Data Protection Authority5, a “New brochure on privacy protection” was issued by KPMG following the enactment of the Data Protection Act (Wpr) in 1988. Especially since 1991, many publications on privacy have been authored by KPMG employees. KPMG also conducted the first formal privacy audit in the Netherlands together with this privacy regulator.

In 2001, the new Dutch Data Protection Act (Wbp) replaced the Wpr – due to the EU Data Protection Directive 95/46/EC. At that time, an updated Privacy Audit Framework was also introduced by a partnership of the privacy regulator with representatives from some public and private IT auditing practices, including KPMG.

Compact 2002/4 published an interview with the chairman of the Board entitled “Auditor as logical performer of Privacy audits. Personal Data Protection Board drives Certification”.

Transborder data flow

Already in 1984, an investigation was performed into the nature and scope of cross-border data traffic, and especially into the problems and legislation in several countries.

In 1987, KPMG and the Free University of Brussels published the book entitled Transborder flow of personal data; a survey of some legal restrictions on the free flow of data across national borders. The document consisted of an extensive description per country of the relevant national legislation, from Australia to Switzerland and the OECD Guideline. It discussed the legal protection of personal data, the need for privacy principles and the impact of national and international data protection laws on private organizations.

Encryption

The use of encryption increased rapidly, partly because of the introduction of (international) payment systems. Encryption was also applied elsewhere, such as in the external storage of data media. In 1984, the Ministry of Justice considered introducing a licensing system for the use of encryption in data communication. Such licenses would also apply to the use of PCs, whether or not included in a network.

Partly as a result of the outcome of KPMG's investigation “Business Impact Assessment cryptography”, and under pressure from the business community, any form of encryption regulation was ultimately refrained from.

Legal services

Expanding the KPMG product portfolio with legal IT services was a logical consequence of the above developments; from 1990 onward, lawyers specializing in IT and information law were recruited. The legal services concerned not only the above-mentioned regulatory subjects, but also the assessment of, and advice on, contracts for the purchase of hardware, software packages, and software development, as well as escrow (source code deposit), dispute resolution, the probative value of computer materials, copyright, etc.

The Compact Special 1990/4 was already devoted entirely to the legal aspects of automation. In 1993, prompted by the developments in IT and law, KPMG published a book entitled 20 on Information Technology and law, with contributions from KPMG authors and leading external authors. In 1998, one of the KPMG IT lawyers obtained her doctorate with the PhD thesis Rightfully a TTP! An investigation into legal models for a Trusted Third Party. The legal issues had, and still have, many faces and remain an important part of servicing clients.

IT Audit and Financial Audit

The relationship between the IT audit and the financial audit practice has strengthened over the years. As organizations started using IT more intensively in (all) business processes, anchoring the internal control and security measures in the IT environment became inevitable. Determining that an organization's system of internal controls was anchored, in continuity, in the IT environment required IT audit expertise. Initially, the IT audit supported the financial audit; however, the influence and significance of IT use in organizations became so immense that seemingly only employees holding both an RE and an RA qualification would be capable of performing such an audit.

The publication of the Compact article 2001/3 entitled “Undivided responsibility RA for discussion: IT-auditor (finally) recognized” dropped a bombshell within the accountancy profession. Many KPMG authors published leading articles on the problem. The subject had already been considered in 1977 with the publication of the article “Management, Electronic information processing and EIV – auditor”. “Auditor – Continuity – Automation and risk analysis” was covered extensively in 1981. From 1983 onwards, articles on audit and ICT (Information and Communication Technology) were published quite regularly.

In recent years, Compact has explored this decades-long relationship between IT auditing and financial auditing in several articles, such as in Compact Special 2019/4 “Digital Auditing & Beyond”. In this Compact issue, the article by Peter van Toledo and Herman van Gils addresses this decades-long relationship.

The broadened field of IT Audit(or)

Over the years, it became clear that the quality of the general internal control, security, and continuity measures significantly affected the ability to control the IT organization, and with it the entire organization and its financial audit. Subsequently, the effectiveness and efficiency of the general IT controls system attracted in-depth attention.

From the 80s onwards, KPMG's Board decided to broaden the service offering by employing (system) programmers alongside auditors with IT expertise, as well as administrative information experts, computer engineers, and the like, and finally even (IT) lawyers. Consequently, a wide range of services arose. The KPMG organization's pioneering role within the industry also served as a model for the creation of the professional body NOREA.

As the integration of ICT continued to take shape, knowledge and services expanded further in that direction. Some employees obtained their PhD (i.e., promotion to dr.) or an additional legal degree (LL.M), and some even became (associate) professors.

The Chartered Accountants associated with KPMG were all members of NIVRA, now the NBA. The activities undertaken in this organization on behalf of the audit practice were mentioned before. It took quite some time before, in addition to NIVRA, a professional association of EDP auditors was established (1992). The admission requirement for membership of the Netherlands Order of Chartered EDP Auditors (NOREA) was completion of a two- or three-year EDP Audit study at one of the three universities that offered this new degree. Of course, there were transitional arrangements for those with proven knowledge, expertise, and experience. As at NIVRA, a Board of Discipline was installed at NOREA.

Within NIVRA there was much interest in the development of IT and its consequences for the financial audit. However, IT expertise was initially concentrated mostly in the Netherlands Society for Computer Science of IT professionals (NGI6), in which KPMG played an important role in various working groups, such as “Policy and risk analysis”, “Physical security and fall-back”, “Security supported by application software”, “Architecture”, and “Privacy protection and EDP Audit”.

University studies

A prominent practice like KPMG has a mission to also provide a stimulus to research and education in the field. KPMG has therefore made an important contribution over the years to university education, both in EDP auditing and on the influence of ICT use on the control of organizations and on the financial audit.

This meant, on the one hand, developing an EDP Audit study program and, on the other, setting up new university chairs/professorships in the area of IT audit and administrative organization.

  • Already in 1977, Dick Steeman was appointed at the Erasmus University Rotterdam. Steeman took office as extraordinary professor, delivering the public lecture “Management, Electronic information processing and EIV-auditor”.
  • In 1990, Dries Neisingh was appointed professor at the University of Groningen, department of Accountancy, with the chair “reliability aspects of automated information systems”. His inaugural speech addressed “the Memorandum regarding reliability and continuity of automated data processing in banking (Memorandum DNB): there are banks and banks”.
  • At the beginning of 1991, the appointments followed of Cor Kocks as professor of EDP Auditing at the Erasmus University and Hans Moonen at Tilburg University.
  • In 1994, professor Ronald Paans joined KPMG. He was already professor of EDP Auditing at the VU Amsterdam (Free University).
  • In 2002, dr. Edo Roos Lindgreen was appointed professor “IT in Auditing” at the University of Amsterdam. In 2017 he was appointed professor “Data Science in Auditing”.
  • In 2004, dr. Rob Fijneman became professor IT Auditing at Tilburg University.

Figure 2 shows the management of KPMG’s IT Audit practice in 1987 with some of the above-mentioned people.


Figure 2. Management of KPMG’s IT Audit practice upon retirement of Han Urbanus (in 1986). From left to right: Dick Steeman, Dries Neisingh, Hans Moonen, Tony Leach, Han Urbanus and his wife, Jaap Hekkelman (chairman of NGI Security), Cor Kocks and Herman Roos. Han Urbanus and Dick Steeman jointly founded KPMG EDP Auditors and started Compact magazine.

Compact

The introduction of Compact in April 1974 was an important initiative of KPMG's IT Audit Board. The intention was to publish an internal publication on IT subjects on a regular basis. The standard layout came to consist primarily of one or a few technical articles, IT developments, ABC News (Automation, Security, and Control), new books and articles added to the library, and finally “readers' comments”. In the first years, ABC News articles were frequently drawn from the EDPACS7 magazine and other international publications.

The first issue started with the article “the organization of testing” and a contemplative article about “software for the benefit of the financial audit: an evaluation”. In the second issue, the first article was continued with subjects such as test monitoring, acceptance tests and system implementation.

Over the years, Compact became increasingly widespread: both clients and potential clients appeared highly satisfied with the quality of the articles and the variety of subjects. Compact developed into a professional technical magazine! The authors were KPMG employees, with occasional contributions from external authors.

Since 1983, articles regularly addressed the relationship between audit and IT. In Compact Specials, the editorial staff explained the purpose of such a special issue: “as usual every year a Special appears on audit and IT Audit. In the meantime, it has become habitual to confront CPAs and Register EDP Auditors (RAs, REs and RE RAs) with the often-necessary deployment of EDP Auditors in the financial audit practice after the completion of the audit of the financial statements and after the cleaning of files and prior to the layout of files for the new year”.

On the occasion of 12.5 years of Compact on Automation & Control, the book 24 about EDP Auditing was published in 1986. The book contained a bundle of updated articles from Compact, written by 24 authors. The preface started with a quote by Benjamin Disraeli: “the best way to become familiar with a subject is to write a book about it”.

Compact Special issues were published increasingly often. In 1989, a Special appeared on “Security” and in 1990 on “The meaning of EDP Auditing for the Financial auditor”. Five external authors from the business community also contributed to this Special, as well as a public prosecutor and Prof. mr. H. Franken.

In the run-up to the 21st century, it rapidly became clear to many organizations, and more especially to the IT sector and to EDP auditors, that the millennium change would definitely cause problems in the processing of data by application software. Compact became an important medium to draw attention to this, both internally and externally. Compact 2000/1 looked back with the article “Across the threshold of the year 2000, beyond the millennium problem?”.

The anniversary issue 25 years of Compact appeared in 1999/2000. Of the 57 authors, 50 were employed by KPMG in various functions; the remaining seven were external authors (among them a few former employees). It was a dashing exploit: a publication of 336 pages with 44 articles. The introductory article was called “From automation and control to IT Audit”. The article “essential assurance over IT” largely walks through the clusters of articles.

Barely had organizations recovered from the millennium problem when the introduction of the euro presented itself. Compact 2000/1 paid attention to the introduction of the euro with the article “and now (again) it is time for the euro”. The Compact issues 2000/5 and 2000/6 were devoted entirely to all aspects of the conversion to the euro. Articles were published under headers such as “Euro conversion: a sound preparation is not stopping at half measures”, “Implement the euro step by step: drawing up a roadmap for the transition”, “Validating the euro conversion”, “Euro emergency scenarios”, and “Review of euro projects”.

Conclusion

In the thirty years briefly reflected on in this article, a lot has happened in the development and application of IT in business and government. For (financial) auditors, it was not easy to operate in this rapidly changing environment. Training courses were not available, and knowledge was sparsely present within or outside the organization.

KPMG took the lead in making these problems accessible to accountants by creating KPMG EDP Auditors and simultaneously launching Compact magazine. In addition, different types of IT professionals were recruited alongside auditors. Many are to be thanked (the promoters and the successive generations) for the fact that, with the start of KPMG EDP Auditors and the broadening of knowledge areas, the emerging market demand could be served adequately. KPMG ensured in a timely manner that sufficient time and investment could be devoted to education and product development; this is why KPMG EDP Auditors could lead the way in the market.

The thirty years (1971-2002) have flown by. It was a period to which many have contributed and on which they can look back with satisfaction. This is especially true for the author of this article, who has summarized an earlier article of almost sixty pages.

Notes

  1. Currently, the Royal Netherlands Institute of Chartered Accountants (NBA).
  2. Original name: “Stichting Instituut ter bevordering van de keuring en Certificatie in de Informatie Technologie (ICIT)”.
  3. Currently the Netherlands Standardization Institute is named NEN: NEtherlands Norm.
  4. Currently known as ISO 27001 accreditation.
  5. The Data Protection Authority has had different names, aligned with the prevailing privacy act. Currently it is named the Authority for Personal Data (in Dutch: “Autoriteit Persoonsgegevens”); before that, the Personal Data Protection Board (in Dutch: “College Bescherming Persoonsgegevens”); and initially, the Registration Office (in Dutch: “Registratiekamer”).
  6. Currently the KNVI, the Royal Netherlands Society of Information Professionals
  7. EDPACS was issued by the EDPAA (EDP Auditors Association); currently, the ISACA Journal is published by ISACA, the Information Systems Audit and Control Association.

Spanning fifty years of IT & IT Audit with only four Editors-in-Chief

To commemorate the fifty-year milestone of Compact, the acting Editor-in-Chief interviewed his three predecessors. The early years and history of fifty years of Compact are covered, as well as their expectations for the future of Compact as disseminator of knowledge and good practices.

Editors-in-Chief of Compact magazine

Steeman

Dick Steeman, retired, Editor-in-Chief 1974 – 1994
Neisingh

Dries Neisingh, retired, Editor-in-Chief 1994 – 2002
Donkers

Hans Donkers, ex-partner KPMG, founder WeDoTrust, Editor-in-Chief 2002 – 2015
Koorn

Ronald Koorn, partner KPMG, Editor-in-Chief 2015 – current

What were remarkable developments in your Compact era?

We started with Compact when punch cards were still around, while financial institutions and multinationals began to use new IBM systems with keypunch and programming capabilities (S/360, S/3) that were far more efficient at automating their massive administrative processes. Initially, the accountants used their own computer for “auditing around the computer”. In the early days, the audit focus was on data centers and the segregation of duties within IT organizations.

Accounting firms lacked programming knowledge in the seventies, so we first wrote articles on programming, testing, and data analytics for our Financial Audit colleagues. Clients such as Heineken, KLM and ABN AMRO were keen on obtaining Compact as well. That's how the magazine expanded. Due to the influence of Herman Roos and KPMG's Software Engineering unit, Compact articles also addressed more technical subjects. So, the target group broadened beyond financial/IT auditors to IT specialists, IT directors and CFOs/COOs.

A nice anecdote: in the first few years, when we issued Compact editions internally within KPMG, we were even proactively approached by the publishing company Samsom (now Wolters Kluwer), which offered its services for publication and distribution. We were contractually obliged to issue four editions annually, which was challenging to accomplish in some years, especially on top of all the regular work. In other years, we completed four regular editions as well as a Compact Special on upcoming IT-related developments, such as the Euro migration, Y2K (Millennium), ERP systems or new legislation (e.g., Privacy and Computer Criminality).

In 2001, we issued our first international Compact edition (coordinated by the interviewer), as we wanted to address international variations and best practices. It was distributed to 25 major KPMG countries for their clients. Admittedly, several non-native English authors overestimated their English writing proficiency.

Compact has always been focused on exchanging good practices and organizations are quite keen on learning from leading companies and their peers. Therefore, we changed the model from a – partly paid – subscription model, where authors were paid as well, via a controlled circulation model to a publicly available magazine. Writing articles was also an excellent way for less experienced staff to dive into a subject and structure it well for (novice) readers to understand. Of course, we’ve also been in situations where we had to hunt for (original) copy and actively entice colleagues to showcase their knowledge and experience in order to adhere to our quarterly publishing schedule. Several authors never completed their epistle, but luckily we always managed to publish a full edition.

We’re all pleasantly surprised by the current depth and quality and that Compact survived this long!

The name Compact was derived from COMPuter & ACcounTant. What do you see as the future target audience?

Besides the traditional target groups of IT auditors, IT consultants and IT-savvy financial auditors, it is also very useful for students. They can supplement their theoretical knowledge with practical examples of how technology can be applied in a well-controlled manner in a business context. There still are very few magazines highlighting the subjects that Compact addresses, such as IT Governance and IT Quality.

Accountants (CPAs), at least, need to know about IT due to the criticality of their financial audits; they cannot entirely outsource that to IT auditors. They should also address in their Management Letter whether “IT is in Control”. Of course, Compact is and should remain a good medium for communicating good practices to CFOs, CIOs and CEOs. Sometimes this knowledge sharing can be achieved indirectly via an IT-savvy accountant.

A brief history of IT & IT Auditing

As the past fifty years have been addressed in multiple articles in this edition, we have tried to consolidate the main trends in a summary table. We have aligned this summary with the model in the article “Those who know their past, understand their future: Fifty years of information technology: from the basement to the board” elsewhere in this Compact edition.

Several developments spanned multiple decades; we have only indicated the phase in which their main genesis took place.

C-2023-2-Interview-t1-klein

How can the Editorial Board further improve Compact?

Compact has survived where other magazines were terminated or simply faded out. For commercial IT magazines it is challenging to sustain a viable revenue model. It is therefore recommended to keep Compact free-of-charge and objective, and to emphasize the thoroughness of IT Audit and IT Advisory work based on a foundation of factual findings. That is a real asset in this ever-changing IT domain, where several suppliers promise you a “cloud cuckoo land” and where ISO certifications are skin-deep. Furthermore, it is recommended to include articles written with clients as well as photographs to make it more personal.

More authors could showcase their deep expertise with articles, which also guarantees the inflow of articles and the continuity of Compact. Furthermore, you can leverage the network of all internal and external authors and their constituents to market the expertise of authors. For instance, besides informing C Level, accountants, IT consultants and IT auditors of relevant IT themes, you could also inform a broader group in society. In the past, Compact authors were interviewed for newspapers, TV, industry associations, etc.

About the Editors-in-Chief

Dick Steeman is a retired KPMG IT Audit partner in the Netherlands. Together with Han Urbanus, he established KPMG EDP Auditors and launched Compact. He was the Editor-in-Chief of Compact from 1974 until 1994.

Dries Neisingh is a retired KPMG IT Audit partner in the Netherlands. During his working life he was a Chartered Accountant, a chartered EDP Auditor and professor of auditing reliability and security aspects of IT at the University of Groningen. He was involved with Compact right from the first issue in 1974 and was the Editor-in-Chief from 1994 until 2002.

Hans Donkers used to be a partner at KPMG and is one of the founders of WeDoTrust. He was the Editor-in-Chief of Compact from 2002 until 2015.

Ronald Koorn is an active partner at KPMG in the Netherlands and has been the Editor-in-Chief of Compact since 2015.

Compact editors

Besides the Editors-in-Chief, we also wish to specifically thank the following editors with their Editorial Board tenure of at least ten years:

  • Aad Koedijk
  • Piet Veltman
  • Rob Fijneman
  • Brigitte Beugelaar
  • Deborah Hofland
  • Pieter de Meijer
  • Peter Paul Brouwers
  • Maurice op het Veld
  • Jaap van Beek

And the Compact support staff over the decades: Henk Schaaf (editor), Sylvia Kruk, Gemma van Diemen, Marloes Jansen, Peter ten Hoor (publisher at Uitgeverij kleine Uil and owner of LINE UP boek en media), Annelies Gallagher (editor/translator), Minke Sikkema (editor), Mirjam Kroondijk and Riëtte van Zwol (desktop publishers).

Five years of GDPR supervision at a glance

Ever since the General Data Protection Regulation (GDPR) came into effect, privacy has become a prominent issue. Apart from the ongoing debates on the precise interpretation of legal provisions, there have been notable developments in the enforcement actions undertaken by the Dutch Data Protection Authority. In this article, we reflect upon the fines that have been imposed by the Dutch Data Protection Authority in recent years, which have drawn significant attention. As an organization, what measures should you take to avoid being subjected to similar enforcement actions?

Introduction

The General Data Protection Regulation (hereinafter referred to as “GDPR”) entered into force in May 2016, and organizations were granted a two-year transition period, until May 2018, to align their business operations with it. After this period, the Data Protection Authorities were authorized to enforce the GDPR, including the imposition of a maximum fine of 20 million euros or 4% of an organization’s annual global turnover, whichever is higher. Even so, the Dutch Data Protection Authority (hereinafter referred to as “the Dutch DPA”) was initially hesitant to impose fines, even after the transition period expired. According to its annual reports, only a few fines were issued in the first years after 2018. The reasons cited were the authority’s limited capacity and the decision to allocate that capacity primarily to significant, high-impact investigations, such as the childcare benefits scandal (“toeslagenaffaire”) and issues related to the coronavirus. It was not until 2021 that the Dutch DPA began to step up its enforcement efforts, resulting in more organizations being fined, and for larger amounts. This trend was also observed among other European Data Protection Authorities – see Figure 1.1


Figure 1. Overview of the number and sum of fines from European privacy regulators ([CMS23]).

Given the Dutch Data Protection Authority’s recent implementation of regular fines, it is essential to reflect on the measures that organizations must undertake to ensure GDPR compliance and avoid facing a fine. This article examines one or more administrative fine decisions for each fine category as defined by the Dutch DPA.2 We provide a comprehensive discussion of the following categories for which fines have been imposed by the Dutch DPA:

  • inadequate basis for data processing;
  • insufficient fulfilment of information obligations;
  • insufficient implementation of data subjects’ rights;
  • non-compliance with general data processing principles;
  • inadequate technical and organizational measures;
  • insufficient compliance with data breach notification requirements.


Figure 2. Overview of the number and sum of fines by fine category ([CMS23]).

Fine guidelines from the DPA

The Dutch DPA’s fine system is divided into various categories, which reflect the severity of the infringement. Each category is linked to a particular fine range, within which the Dutch DPA determines the final amount of the fine, taking into account the circumstances of the infringement. These factors include the nature, duration and gravity of the breach, the extent of the damage incurred, and the number of data subjects affected. Furthermore, if the Dutch DPA deems the fine range inappropriate for the breach, it may impose a fine beyond the set limit, subject to an absolute maximum of 20 million euros or 4% of the annual global turnover, whichever is higher. For a comprehensive overview of the classification by category, refer to the Dutch DPA’s published policy rules ([AP19a]).
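The “whichever is higher” rule for the absolute statutory maximum can be made concrete with a small sketch (the function name is ours; the thresholds follow Article 83(5) GDPR):

```python
def gdpr_fine_ceiling(annual_global_turnover_eur: float) -> float:
    """Absolute statutory maximum fine under Art. 83(5) GDPR:
    EUR 20 million or 4% of annual global turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# For EUR 300 million turnover, 4% (EUR 12 million) is below the floor,
# so the ceiling remains EUR 20 million; for EUR 1 billion turnover,
# 4% (EUR 40 million) exceeds the floor and becomes the ceiling.
print(gdpr_fine_ceiling(300_000_000))    # 20000000.0
print(gdpr_fine_ceiling(1_000_000_000))  # 40000000.0
```

Note that this is only the outer ceiling; as described above, the Dutch DPA normally determines the actual amount within the fine range of the applicable category.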

Administrative fine decisions by the DPA

Inadequate basis for data processing

Using fingerprints for employee time clocks based on consent

Personal data can be divided into two categories: regular and special categories of personal data. Regular personal data includes information such as name, address and telephone number, whereas special categories of personal data comprise sensitive information such as health data or political views. Due to the sensitive nature of the latter, the processing of special categories of personal data is generally prohibited.

In April 2020, the Dutch DPA imposed a fine on a company for the unlawful processing of special categories of personal data ([AP19d]). The company used fingerprint scanners for employee timekeeping purposes. Fingerprints are classified as biometric data and fall under the special categories of personal data. While Article 29 of the Dutch GDPR Implementation Act (UAVG) permits the processing of such data where necessary for authentication or security purposes, in this case the fingerprints were only used for attendance and timekeeping, which does not fall under this exception. Employee consent could also be an exception, but consent is generally not presumed to be freely given in a dependent relationship such as that between employer and employee. Furthermore, obtaining consent is not enough; the company must also be able to prove it. In this case, the company was unable to do so and was therefore found to be in violation of the processing prohibition of Article 9 of the GDPR. The Dutch DPA imposed a fine of €725,000.

The DPA’s investigation re-emphasizes the conditions attached to a data subject’s consent. Consent is legally valid only when it is freely given, specific, informed and unambiguous. Refusing consent must not have adverse consequences of any kind, and consent must be demonstrable.

Wi-Fi tracking on an overly general legal basis

The processing of personal data, including regular personal data, must be based on one of the legal bases provided in Article 6 of the GDPR. The municipality of Enschede claimed that it was allowed to process personal data for the purpose of measuring crowds in the city center on the basis of performing a public task. To this end, eleven sensors continuously captured Wi-Fi signals from passing citizens, which were then stored under a pseudonym. However, the public task that serves as the basis for the processing of personal data must be set out in a statutory provision. The municipality relied on Article 160 of the Municipalities Act, but the Dutch DPA deemed this provision too broadly formulated and stated that citizens could not infer from it that their personal data was being processed. Moreover, the basis of legitimate interest did not apply in this situation either. As a rule, a public body cannot rely on legitimate interest as a basis, as its tasks must be defined in a statutory provision. An exception exists when a public body acts as a private party, but that was not the case here.

In addition to the absence of a specific legal basis for Wi-Fi tracking, the necessity requirement was not met, as crowds can be measured in a far less intrusive way. Furthermore, the data was stored for a long period, which could allow citizens to be tracked and patterns of daily life to be identified; for instance, it was possible to determine where someone worked. Given these multiple violations, the processing by the municipality of Enschede was unlawful, and the Dutch DPA imposed a fine of €600,000 ([AP21a]).

The DPA’s investigation emphasizes that government organizations should be careful not to base processing operations on overly general provisions. In addition, a thorough assessment of the necessity requirement should also be made.

Using legitimate interest for purely commercial purposes

Article 6 of the GDPR mentions pursuit of a legitimate interest as the last possible basis for processing personal data. It is generally known that a public authority cannot rely on this, but there is still uncertainty as to whether a private party with exclusively commercial interests can do so.

In this regard, the Dutch tennis association “De Koninklijke Nederlandse Lawn Tennis Bond” (hereinafter referred to as KNLTB) provided personal data of its members to two sponsors for promotional purposes. One of the sponsors used members’ addresses to distribute discount flyers, and the other sponsor approached members by phone with an offer. The KNLTB argued that the data was provided on the basis of a legitimate interest. According to the Dutch DPA, however, this could not be considered a legitimate interest. For a successful appeal to a legitimate interest, the processing must be necessary to serve the interest, the interest of the data subject must not outweigh it, and the interest must itself be legitimate. According to the Dutch DPA, this last requirement means that the interest must be named as a legitimate interest in (general) legislation or elsewhere in the law: it must be an interest that is protected and enforceable in law, and the (written or unwritten) rule of law must be sufficiently clear and precise. The rule of law to which the KNLTB attached the processing is the freedom of enterprise, which the Dutch DPA considered insufficiently concrete to qualify as a legitimate interest. Consequently, a fine of €525,000 was imposed on the tennis association ([AP19e]).

The KNLTB contested the fine imposed by the Dutch DPA and appealed the decision. The national court, facing uncertainties about the interpretation of the concept of “legitimate interest,” referred preliminary questions to the European Court of Justice (hereinafter referred to as the ECJ). A preliminary question is a query that a national court can ask the ECJ to interpret European law. The position taken by the Dutch DPA has been previously contradicted by the European Commission and by the court in the VoetbalTV case, where the Dutch DPA took a similar stance on legitimate interest. It remains to be seen whether the Court of Justice will concur with the Dutch DPA’s interpretation.

Whether a private party can process personal data based on a legitimate interest with exclusively commercial interests is not sufficiently clear from the DPA’s fine decision. It is advisable to use this basis as restrictively as possible.

Insufficient fulfilment of information obligations

A privacy statement that does not match the target audience

In 2021, the widely used social media platform TikTok was fined €750,000 by the Dutch DPA for violating the requirements of the first paragraph of Article 12 of the GDPR ([AP21b]). This article stipulates that organizations must provide data subjects with information about the processing of their personal data in a concise, transparent, easily accessible, and understandable form using clear and simple language. Typically, this information is presented in the form of a privacy statement. However, TikTok’s privacy statement was only available in English to its Dutch users, who primarily consist of young people under the age of 16. Given this demographic, TikTok could not assume that their users were proficient in English.

It is therefore important for organizations to determine the target audience in advance. Based on this, a comprehensible privacy statement can be drafted using an average member of the intended target group as a benchmark. It is also important that a translation of the privacy statement is available if the target group speaks a different language. If there is a target group consisting of young people, who enjoy specific protection under the GDPR, a privacy statement that is also understandable for younger target audiences will have to be drafted.

Insufficient implementation of data subjects’ rights

An access request in line with Article 12 GDPR

Article 12 of the GDPR sets out specific regulations regarding the exercise of data subjects’ rights, including the right to access. This right requires that the provision of data be free of charge, unless the requests made by the data subject are unfounded or excessive, particularly in cases of repetitiveness. The assessment of what constitutes repetitiveness must be done on an individual basis. The Bureau Krediet Registratie (hereinafter referred to as BKR) found this out first-hand. The BKR provided two options for submitting a right of access request: either electronically (which required payment) or once a year by post, free of charge. The Dutch DPA deemed the default requirement of electronic payment for a right of access request to be incompatible with Article 12 of the GDPR and penalized the BKR with a fine of €830,000 ([AP19c]).

According to the Dutch DPA, the option of a free annual request for access by post did not alter BKR’s violation of Article 12 of the GDPR. Similarly, limiting free access to personal data to once per year via post was also found to be in violation of this provision. Whether a request for access is excessive or unfounded should be determined on a case-by-case basis, and the fact that a data subject requests access more than once per year does not necessarily make the request excessive.

It is important to establish the identity of the data subject when responding to a request for access. However, DPG Media was fined by the Dutch DPA for requesting a copy of proof of identity from data subjects in order to establish their identity ([AP22a]). The DPA considered this too intrusive, especially because of the sensitive nature of identification documents. The DPA stated that the least intrusive way to identify data subjects should be used, for example by combining information already held by the controller. This could include a customer number combined with an address.

It is therefore important to ensure that access requests are free of charge and that any seemingly excessive request is assessed on an individual basis. For the identification process, the least intrusive means of identification should be chosen; in any case, requesting a copy of an identification document is considered too intrusive.

Non-compliance with general data processing principles

A European representative for organizations outside Europe

The GDPR applies both to organizations based in the European Union and to those based outside the EU that process personal data of individuals in the EU, for instance by offering them goods or services. Such was the experience of LocateFamily.com. The website did not comply with the requirement of Article 27 of the GDPR to appoint an EU representative in writing. It was under the impression that, because it was not based in the EU, it did not have to comply with the GDPR. This was not the case, however, and it resulted in a fine of €525,000 ([AP20d]).

Due to the international nature of the internet, organizations are more than likely to process personal data of EU citizens at some point. If that is the case (for example, if your website is available in the EU and accepts the euro for transactions), you will probably have to comply with the obligations of the GDPR, including appointing an EU representative.

Inadequate technical and organizational measures

Inadequate security of internal systems

One of the first fines imposed by the Dutch DPA since the GDPR came into effect was against the HagaZiekenhuis. The hospital was fined because its medical patient records were not adequately secured, resulting in numerous employees accessing the files of a Dutch celebrity without any legitimate reason to do so. The hospital was obligated to monitor access, according to the Dutch DPA. Moreover, the security measures were found to be inadequate because multi-factor authentication was not implemented. As a result of the insufficient security measures, the HagaZiekenhuis was fined €460,000 ([AP19b]).

Two years later, a similar situation occurred at another hospital, Amsterdam’s OLVG. Inadequate monitoring of accessed records and insufficient security resulted in a fine of €440,000 imposed by the Dutch DPA ([AP20c]). Inadequate security of internal systems has been seen in several organizations. For example, maintenance company CP&A was fined €15,000 for inadequately securing its absence registration system ([AP20a]), the Ministry of Foreign Affairs was fined €565,000 for inadequate security of the National Visa Information System (NVIS) ([AP22b]), and the UWV had taken insufficient technical measures to secure the process for sending group messages, which resulted in a fine of €450,000 ([AP21c]).

Just like hospitals, health insurers deal with medical data of data subjects, and therefore, authorization should be established to restrict access to sensitive personal data to include only those employees who need it to perform their duties. However, the Dutch DPA conducted an investigation and found that marketing staff at health insurer Menzis had access to sensitive personal data. It is important to note that accessing personal data is also considered processing under the GDPR. Apart from inadequate access rights, Menzis also failed to maintain log files. Although there was no evidence that the marketing staff accessed this personal data, the mere possibility of such access was enough for the Dutch DPA to impose an order subject to fines for noncompliance on Menzis ([AP18]).

Viewing personal data also qualifies as processing under the GDPR. It is advisable to grant access to such data only to employees who need it to perform their duties. It is also important that systems log who has viewed personal data, so that unauthorized access can be detected.
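The need-to-know restriction and logging measures discussed above can be sketched as follows. This is a minimal illustration, not the systems used in the cases described; the role names, permission mapping and function are hypothetical:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("access-audit")

# Hypothetical role-to-permission mapping: only roles that need medical
# data for their duties may read it (need-to-know principle); marketing
# staff get no access to sensitive personal data.
ROLE_PERMISSIONS = {
    "physician": {"read_medical_record"},
    "claims_handler": {"read_claim"},
    "marketing": set(),
}

def read_medical_record(user: str, role: str, record_id: str) -> bool:
    """Check authorization and write an audit log entry either way,
    so that (attempted) unauthorized access can be detected later."""
    allowed = "read_medical_record" in ROLE_PERMISSIONS.get(role, set())
    logger.info(
        "%s user=%s role=%s record=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, record_id, allowed,
    )
    return allowed
```

With this structure, `read_medical_record("bob", "marketing", "r-123")` is refused but still leaves an audit trail, which is exactly what the Menzis case showed was missing.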

Insufficient password requirements

In addition to multi-factor authentication, it is important to establish password requirements to prevent data breaches. In September 2019, Transavia’s systems were hacked through two accounts belonging to the company’s IT department. The hackers were able to access these accounts easily, as they did not require multi-factor authentication and the passwords were easily crackable, such as “12345” or “Welcome.” Additionally, these accounts provided sufficient access for the hackers to breach the larger systems without further security thresholds in place. Despite Transavia’s timely reporting of the data breach, the Dutch DPA imposed a fine of €400,000 ([AP21d]) due to its seriousness.

The level of security referred to in Article 32 GDPR that should be strived for depends on the risk associated with the processing. An adequate security level is determined based on various factors, such as the nature and scope of the personal data being processed.
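As an illustration of the kind of password requirements at issue in the Transavia case, a basic policy check might look like the sketch below. The function, minimum length and blocklist are our own illustrative choices, not DPA-mandated values:

```python
import re

# Hypothetical blocklist of trivially guessable passwords, including the
# kind of passwords found in the Transavia case ("12345", "Welcome").
WEAK_PASSWORDS = frozenset({"12345", "welcome", "password", "qwerty"})

def password_acceptable(password: str, min_length: int = 12) -> bool:
    """Reject short, blocklisted, or low-complexity passwords."""
    if len(password) < min_length:
        return False
    if password.lower() in WEAK_PASSWORDS:
        return False
    # Require at least three of four character classes:
    # lowercase, uppercase, digits, other symbols.
    classes = sum(bool(re.search(p, password))
                  for p in (r"[a-z]", r"[A-Z]", r"\d", r"[^A-Za-z0-9]"))
    return classes >= 3
```

Checks like this complement, but do not replace, multi-factor authentication and internal access thresholds, which were the other gaps in that case.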

Insufficient compliance with data breach notification requirements

Failure to report data breaches (on time)

The final category of fines pertains to the issue of data breaches, which unfortunately is a common occurrence in many organizations. Unauthorized persons may gain access to personal data, or such data may be inadvertently released or destroyed. Such an occurrence is referred to as a data leak, which must be reported to the Dutch DPA within 72 hours if there is a potential risk to the data subject(s). For instance, PVV Overijssel experienced a data leak when an email was sent to 101 recipients, making all the email addresses visible to everyone. As a result of failure to comply with the notification requirement, PVV Overijssel was fined €7,500 ([AP20b]). Booking.com was also fined for a data breach in which an unknown third party gained access to the personal data of data subjects. Because Booking.com did not report the data breach to the Dutch DPA within 72 hours of discovery, this ultimately resulted in a fine of €475,000 ([AP20e]).

Ideally, of course, you would prevent a data leak altogether, for instance by taking appropriate technical and organizational measures, but no set of measures is completely watertight. In the event of a data leak, it is essential to report it in good time in order to limit the damage to the data subjects and your organization as much as possible. Swift action should be taken to plug the leak, and by tightening security a recurrence can be prevented.
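The 72-hour window of Article 33(1) GDPR runs from the moment the breach is discovered, so incident-response tooling often computes a hard deadline at intake. A minimal sketch (the function name is ours):

```python
from datetime import datetime, timedelta, timezone

# Art. 33(1) GDPR: notify the supervisory authority without undue delay
# and, where feasible, within 72 hours of becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest moment to notify the DPA, counted from discovery."""
    return discovered_at + NOTIFICATION_WINDOW

discovered = datetime(2023, 3, 1, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(discovered))  # 2023-03-04 09:30:00+00:00
```

Note that the clock starts at awareness of the breach, not at the moment of the breach itself, which is why prompt internal detection and escalation procedures matter.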

Conclusion

Although the Dutch DPA has only issued 22 public fines in recent years, this should not lead organizations to believe that they are exempt from Dutch DPA investigations and potential fines. It is a misconception that only large organizations are targeted by the Dutch DPA, as was demonstrated by the fine imposed on PVV Overijssel.

It is important to note that the Dutch DPA has significant discretion in terms of the sanctions it can impose. The range of enforcement options includes fines, orders subject to fines for noncompliance, or a combination of both. The Dutch DPA can also issue reprimands or formal warnings, although the latter appears to be used less frequently. In fact, the last formal warning issued by the Dutch DPA was in 2020 ([AP20f]).

Organizations should strive to avoid sanctions by drawing lessons from the Dutch DPA’s overview of fines. One key takeaway is the importance of having a lawful basis for processing personal data. For example, a company was fined for unlawfully processing special personal data in the form of fingerprints, while a municipality was fined for collecting location data of citizens in a disproportionate manner. The Dutch DPA has also provided guidance on the meaning of “legitimate interest” in the context of the Dutch tennis association’s fining decision, although this should not be taken as the final word on the matter.

Another crucial aspect is complying with information obligations, ensuring that the target audience is taken into account. Organizations should also implement data subjects’ rights effectively and employ appropriate technical and organizational measures, such as access restrictions, logging and monitoring, multi-factor authentication, and password requirements. Lastly, organizations should comply with the notification obligation towards the Dutch DPA in the event of a data breach.

What’s next?

Historically, we have seen that (published) fines were often complaint-initiated. We expect this trend of the “beep system” to largely continue. It is therefore important for an organization to set up a good privacy complaints procedure, so that complaints can, as far as possible, be resolved internally.

The preliminary questions raised in response to the fine decision on the Dutch tennis association could have major implications. Currently, the Dutch DPA differs from other Data Protection Authorities in holding that a mere profit motive cannot be considered a legitimate interest. If the Court confirms this position, it will have major implications for the many organizations that rely on this basis.

Looking ahead, we also anticipate that the Dutch DPA will continue to pay close attention to new developments in artificial intelligence (AI), algorithms, data trading and profiling in the coming years. These topics, while not as clearly reflected in the published fines, have been focal points of the DPA in recent years. Given their increasing significance in modern society and the rapid developments in these areas, they are likely to remain a priority for the Dutch DPA. For example, since January 2023 there has been a new organizational unit within the Dutch DPA, the Algorithms Coordination Directorate, which specifically oversees the use of algorithms.

Although the draft budget of the Ministry of Justice and Security includes a budget increase for the Dutch DPA, for instance for the establishment and work of an algorithm supervisor, the Dutch DPA states that its budget is insufficient to properly perform all of its supervisory tasks ([AP22c]). It has to work with only a quarter of the budget of comparable Dutch supervisory authorities such as the AFM and ACM, which have budgets of around €100 million. We expect continued yet steady growth towards a sufficient budget over the next decade.

Notes

  1. Note that these numbers reflect only the fines disclosed and do not reflect the full number. In addition, these numbers reflect only actual fines and do not include cases where correct follow-up was given after a warning or order under fine. See also [DPA].
  2. Based on the different fine categories, a selection has been made from the published fines.

References

[AP] Autoriteit Persoonsgegevens (n.d.). Boetes en andere sancties. Retrieved from: https://www.autoriteitpersoonsgegevens.nl/nl/publicaties/boetes-en-sancties

[AP18] Autoriteit Persoonsgegevens (2018, February 15). Last onder dwangsom en definitieve bevindingen. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/besluit_last_onder_dwangsom_menzis.pdf

[AP19a] Autoriteit Persoonsgegevens (2019, February 19). Boetebeleidsregels Autoriteit Persoonsgegevens 2019. Retrieved from: https://www.autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/stcrt-2019-14586_0.pdf

[AP19b] Autoriteit Persoonsgegevens (2019, June 18). Besluit tot het opleggen van een bestuurlijke boete en een last onder dwangsom. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/besluit_haga_-_ter_openbaarmaking.pdf

[AP19c] Autoriteit Persoonsgegevens (2019, July 30). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/besluit_bkr_30_juli_2019.pdf

[AP19d] Autoriteit Persoonsgegevens (2019, December 4). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boetebesluit_vingerafdrukken_personeel.pdf

[AP19e] Autoriteit Persoonsgegevens (2019, December 20). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boetebesluit_knltb.pdf

[AP20a] Autoriteit Persoonsgegevens (2020, March 24). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boete_cpa_verzuimregistratie.pdf

[AP20b] Autoriteit Persoonsgegevens (2020, June 16). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boete_pvv_overijssel.pdf

[AP20c] Autoriteit Persoonsgegevens (2020, November 26). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boetebesluit_olvg.pdf

[AP20d] Autoriteit Persoonsgegevens (2020, December 10). Besluit tot het opleggen van een bestuurlijke boete en een last onder dwangsom. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/20210512_boetebesluit_ap_locatefamily.pdf

[AP20e] Autoriteit Persoonsgegevens (2020, December 10). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/besluit_boete_booking.pdf

[AP20f] Autoriteit Persoonsgegevens (2020, December 15). Formele waarschuwing AP aan supermarkt om gezichtsherkenning. Retrieved from: https://www.autoriteitpersoonsgegevens.nl/nl/nieuws/formele-waarschuwing-ap-aan-supermarkt-om-gezichtsherkenning

[AP21a] Autoriteit Persoonsgegevens (2021, March 11). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boetebesluit_ap_gemeente_enschede.pdf

[AP21b] Autoriteit Persoonsgegevens (2021, April 9). Besluit tot het opleggen van een bestuurlijke boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boete_tiktok.pdf

[AP21c] Autoriteit Persoonsgegevens (2021, May 31). Besluit tot het opleggen van een boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boete_uwv_beveiliging_groepsberichten.pdf

[AP21d] Autoriteit Persoonsgegevens (2021, September 23). Besluit tot het opleggen van een boete. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boete_transavia.pdf

[AP22a] Autoriteit Persoonsgegevens (2022, January 14). Besluit tot het opleggen van een boete. Retrieved from: https://www.autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/boetebesluit_dpg.pdf

[AP22b] Autoriteit Persoonsgegevens (2022, February 24). Besluit tot het opleggen van een boete en een last onder dwangsom. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/besluit_bz_24_februari_2022_openbare_versie_definitief.pdf

[AP22c] Autoriteit Persoonsgegevens (2022, October 24). Informatievoorziening voor de beantwoording van feitelijke vragen door de minister voor Rechtsbescherming inzake de vaststelling van de begrotingsstaten van het Ministerie van Justitie en Veiligheid voor het jaar 2023 [Official message]. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/beantwoording_feitelijke_vragen_door_de_minister_voor_rechtsbescherming_inzake_vaststelling_van_de_begrotingsstaten_van_het_ministerie_van_justitie_en_veiligheid_voor_2023_-_2.pdf

[CMS23] CMS.Law (2023). GDPR Enforcement Tracker – list of GDPR fines. Retrieved on February 17, 2023, from: https://www.enforcementtracker.com/?insights

Data ethics and privacy: should, can, will?

When using technology such as artificial intelligence (AI), ethical considerations play a major role in our society. There is a reason for this: as we increasingly face public scandals related to the misuse of personal data, the call for responsible policies concerning ethics and privacy is growing. The trust that customers, employees and citizens have in both public and private organizations is at stake. The critical question for organizations is: how do we get the most out of what data and technology have to offer while simultaneously addressing ethical and privacy concerns?

This article takes a closer look at data ethics and the intersections with privacy, discusses the legal developments and provides practical tips on how to get started setting up and strengthening data ethics in organizations.

Introduction

May 25, 2023, marks the fifth anniversary of the European privacy law, the General Data Protection Regulation (GDPR). For many organizations, privacy protection is now an integral part of their business operations. However, there is still more that can be done.

Even with the introduction of the GDPR, Dutch people’s confidence that companies and government organizations handle their privacy well remains low. A 2021 privacy survey of a sample of the Dutch population ([KPMG21]) showed that a quarter of Dutch people harbor considerable distrust of the government, and their trust in technology companies is even lower. This manifests itself in growing concerns about their own privacy.

Figure 1. The trust Dutch citizens have in their government is not very high, but the trust they have in large technology companies appears to be even lower ([KPMG21]).

Whereas trust in government agencies and companies is declining, interest in privacy is increasing. An overwhelming majority of the Dutch (86 percent) think it is good that there is a lot of focus on privacy ([KPMG21]). This is substantially more than at the beginning of 2018, when the KPMG survey “A little privacy please” showed that 69 percent considered privacy an important issue ([KPMG18]; see also [ICTM18]). In addition, this interest is confirmed by the fact that the Netherlands is one of the leaders within the European Union in terms of reporting data breaches ([DLAP21]). One explanation for this increasing attention to privacy is the continuing developments in the digital transformation that society is undergoing. As a result, this question is now at the forefront of privacy and data ethics debates: how responsible or ethical are all the technical developments that succeed one another in a relatively short period of time?

The Dutch Data Protection Authority, the Autoriteit Persoonsgegevens (AP), concluded in its annual report for the year 2021 that society has reached the point where digitization can no longer take place without ethical value considerations ([AP22b]). In their private lives, as consumers and citizens, people are constantly confronted with new technologies, although they may not always realize it. Technologies such as algorithms have taken a structural place in our daily lives. Whether it is making a purchase in an online store and paying afterwards, taking out a loan from a bank, or applying for a grant from the government, there is a good chance that the application will be assessed using technology.

New technologies bring tremendous opportunities for developing new products and services, ensuring a better customer experience, and improving efficiency in the workplace. However, to ultimately continue the successful integration of new technologies, organizations must use them responsibly. Data ethics and privacy play an essential role in this regard. The GDPR provides guidance on the ethical use of personal data, but data ethics is broader than just the collection, use, retention and deletion of personal data.

What is data ethics?

Ethics is about right and wrong: about what society, citizens or consumers consider fair, just and acceptable ([Meel]). Viewed from a privacy perspective, data ethics is not so much about whether an organization may process personal data and whether the processing meets the requirements of the GDPR; it is about a more fundamental question. Even if organizations can do something (from a legal or a technological perspective) or want to, they must continually ask themselves whether they should do it from an ethical perspective. In other words: it is allowed, but is it the right thing to do?

Data ethics requires a different way of thinking within an organization that focuses on the impact a data operation has on people and society. Data ethics revolves around this question: from an ethical perspective, is what we want to do or have the capabilities for, the right thing to do? A privacy professional may see common ground with conducting a Data Protection Impact Assessment (DPIA), which identifies the privacy risks of a personal data processing activity to data subjects. However, data ethics is much broader than privacy. Data ethics is about non-discrimination, avoiding bias and acting transparently and responsibly toward people affected by the use of technology. The following example illustrates this ([Ober19]).

A hospital in the United States used a commercial algorithm to determine which group of patients might need more care than average. Algorithms are widely used in the United States by hospitals, government agencies, and other healthcare providers. It is estimated that about 200 million people are assessed annually using such algorithms.

In this case study, an algorithm ranked patients based on the extent to which they used medical care. Patients in the 97th percentile and above were marked as “high risk” by the algorithm and were automatically enrolled in a health program. Wanting to improve the health of high-risk patients is a noble goal, but in retrospect a racial bias was found to be present in the algorithm. The algorithm concluded that patients of color were healthier than white patients, which turned out to be wrong.

The reason for this bias could be traced to the input data. People of color are less likely to use healthcare services than white people and spend an average of $1,800 less per year on health care. The algorithm inferred that people of color must be healthier, since they use fewer healthcare services. However, this assumption was incorrect: the dataset on which the algorithm was based consisted of 44,000 white patients and only 6,000 patients of color. Because of this skewed input data, the algorithm made incorrect assumptions that had a negative impact on healthcare access for a certain group of people.
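The mechanism behind this case study, cost used as a proxy for health, can be illustrated with a small synthetic simulation. This is a sketch with invented numbers and a uniform "need" distribution, not the actual commercial algorithm: when one group spends less on care at the same level of need, a cost-based cutoff at the 97th percentile all but excludes that group from the care program.

```python
import random

random.seed(0)

# Synthetic population: both groups have the same distribution of medical
# need, but group B spends less per unit of need (unequal access to care).
patients = []
for _ in range(44_000):
    need = random.uniform(0, 10)
    patients.append({"group": "A", "need": need, "cost": need * 1_000})
for _ in range(6_000):
    need = random.uniform(0, 10)
    patients.append({"group": "B", "need": need, "cost": need * 700})

# The proxy risk score: predicted cost. Enroll the top 3% of spenders.
costs = sorted(p["cost"] for p in patients)
threshold = costs[int(0.97 * len(costs))]
high_risk = [p for p in patients if p["cost"] >= threshold]

share_b = sum(p["group"] == "B" for p in high_risk) / len(high_risk)
print(f"Group B share of population:  {6_000 / 50_000:.0%}")   # 12%
print(f"Group B share of 'high risk': {share_b:.0%}")          # 0%
```

Scoring on measured health need rather than on cost would remove this particular bias, which is essentially the remedy the researchers proposed.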

Legal developments in data ethics: the AI Act

When it comes to data ethics in the public debate, the discussion is regularly about the reliability and the increasing use of data and algorithms in private and public organizations, and about how to use algorithms in a controlled, ethical way. The European Commission has taken a global lead in regulating the use of artificial intelligence. This has resulted in a legislative proposal called the Artificial Intelligence Act (AI Act).

The AI Act aims to establish a framework for responsible AI use. According to the Act, AI systems must be legally, ethically and technically robust and must respect democratic values, human rights and the rule of law. That the AI Act regulates the use of AI from a technical and legal perspective is not surprising; what is unique to this Act is its strong emphasis on data ethics. The aim is to reach a final agreement on the AI Act this year (2023), but there is no concrete deadline. Once a final agreement is reached, there will be a grace period of around two years to allow affected parties to comply with the regulations.

The AI Act introduces new obligations for companies and governments, as well as a supervisory authority and a penalty system. These are detailed in the sections below. It is important to emphasize that no final agreement has been reached on the exact content of the AI Act; legal developments (and proposed amendments to the AI Act1) are rapidly following one another. For example, adjustments to the AI Act are currently being considered to deal with developments around ChatGPT and similar generative AI models. In short, the legal and technical developments that may affect the AI Act are worth keeping an eye on.

Conformity assessment for high-risk AI systems

The AI Act introduces a so-called conformity assessment conducted by an outside body. If an AI system could pose a high risk to the health, safety or fundamental rights of people, its providers must have an assessment conducted by an independent third party to identify and mitigate those risks. These assessments help ensure compliance with the AI Act. For AI systems with limited or minimal risk, less onerous requirements apply: a self-assessment or transparency requirement is sufficient.

The legislative proposal for the AI Act currently states that the European Commission is the body that determines what constitutes a high-risk system and when a mandatory conformity assessment must be conducted. AI systems that will potentially qualify as high risk include systems for migration and asylum, critical infrastructure, law enforcement, and product safety. In addition, it is currently being examined whether generative AI models such as ChatGPT should also be regarded as high risk.

Based on the proposed AI Act, it is also possible that an AI system is classified as high risk but a conformity assessment is not required. In such cases, a self-assessment is sufficient. Currently, the AI Act states that the European Commission will determine for which (high-risk) AI systems a self-assessment suffices.

High-risk AI systems must meet strict requirements under the AI Act before they can be marketed. Measures to be implemented under the proposed AI Act include: establishing a risk management process that specifically oversees the AI application, setting high data quality requirements to prevent discrimination, maintaining logs, establishing documentation around accountability, ensuring transparency, establishing a system in which people oversee the AI applications, and ensuring security and accuracy standards.
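The tiered logic described in this section can be sketched as a toy decision helper. The domain names and return strings below are paraphrases of the draft proposal, not a legal classification tool:

```python
# Illustrative sketch of the draft AI Act's tiered assessment logic.
HIGH_RISK_DOMAINS = {
    "migration and asylum",
    "critical infrastructure",
    "law enforcement",
    "product safety",
}

def required_assessment(domain: str, self_assessment_allowed: bool = False) -> str:
    """Return the kind of assessment the draft AI Act would call for."""
    if domain in HIGH_RISK_DOMAINS:
        # The European Commission may permit a self-assessment for some
        # high-risk systems; by default, an independent third party assesses.
        if self_assessment_allowed:
            return "self-assessment"
        return "third-party conformity assessment"
    # Limited or minimal risk: lighter obligations apply.
    return "self-assessment or transparency requirement"

print(required_assessment("law enforcement"))
print(required_assessment("spam filtering"))
```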

AI database for high-risk systems

Another new aspect of the AI Act relates to the creation of an AI database in which high-risk AI systems are to be registered. The AI Act currently states that the database will be managed by the European Commission and aims to increase transparency and facilitate oversight by regulators.

Introduction of national AI supervisor

The proposed AI Act currently contains an obligation for each member state to form or designate an authority to supervise compliance with the AI Act. This national supervisory authority will participate in the European AI Board (EAIB), which will be chaired by the European Commission and will also include the European Data Protection Supervisor (EDPS). Recently, the Dutch Data Protection Authority, the AP, was appointed as algorithm supervisor in the Netherlands. With this appointment, the Netherlands is already fulfilling its future obligation under the AI Act.

Fines for failure to comply with AI Act

Like the GDPR, the AI Act will include a penalty system. The highest fine that can be imposed under the Act is up to 30 million euros or 6 percent of annual global turnover, whichever is higher. That percentage is two points higher than the highest fine category under the GDPR (4 percent). Aside from the moral obligation for companies to take data ethics and privacy seriously, there will thus be a financial incentive to set up AI systems in accordance with the upcoming AI Act.
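The fine ceiling described above is simple arithmetic: the greater of a flat cap and a share of turnover. A one-line sketch:

```python
def max_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of the highest fine category in the proposed AI Act:
    30 million euros or 6% of annual global turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)

# Below 500 million euros of turnover the flat cap dominates;
# above it, the percentage does.
print(max_ai_act_fine(200_000_000))    # 30000000.0
print(max_ai_act_fine(2_000_000_000))  # 120000000.0
```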

How to put data ethics into practice

It is clear that the AI Act is on its way. Nevertheless, it is important to realize that legislation is only the basis and that acting ethically requires more than complying with new legislation. What can organizations do today to promote the ethical handling of data and raise awareness in their organization?

Contact the Privacy Officer

The concerns that exist about AI systems are in many cases about the use of personal data. Although privacy and data ethics are two different topics, they often overlap. This means that if an organization has appointed a Privacy Officer, that officer is in all likelihood already working on data ethics where personal data is involved.

The GDPR contains an obligation to conduct a DPIA on personal data processing activities that may result in a high privacy risk. In many cases, this obligation will also apply to AI systems that process personal data. Even though the AI Act focuses on the quality of AI systems while the GDPR focuses on the use of personal data, the two laws converge when personal data is used in AI systems. Privacy Officers can therefore be a good starting point to prepare the organization for the upcoming AI Act. They can help identify which systems in the organization use AI and whether these systems may pose a high risk.

Establish an ethical framework

The first step to securing data ethics in an organization is to establish what ethical handling of data specifically means for the organization. This can be done by formulating values and principles around the topic of data ethics, for example an ethical framework or compass. It is important that the ethical principles align well with the culture and core values of the organization and are recognizable to employees from all levels of the organization ([Meel]).

Organize independent oversight

Data ethics is an abstract topic, but it needs a very concrete interpretation. Most organizations are not (yet) equipped to deal with the ethical dilemmas that arise when new technologies, such as algorithms, are deployed. Furthermore, there is often no monitoring of the integration of ethical principles into business operations. A powerful tool in both establishing ethical principles and closing the gap between principles and practice, is the establishment of effective and independent oversight. This can be done by an independent committee, internal audit teams, or an independent third party ([Meel]).

Conduct a Fundamental Rights and Algorithm Assessment

When an organization works with algorithms, it is wise not to wait for the introduction of the AI Act and to already start identifying risks when using algorithms. This can be done by conducting a Fundamental Rights and Algorithm Impact Assessment (FRAIA). FRAIA is the English translation of the Dutch “Impact Assessment Mensenrechten en Algoritmes” (IAMA). The FRAIA was developed by the Utrecht Data School and helps to make careful decisions about the use of algorithms. The FRAIA is mandatory for government agencies and can also help other organizations gain a better understanding of the considerations and risks involved in the decision-making process concerning the deployment of algorithms. It is also a way to “practice” the impending assessments that the AI Act will most likely introduce.

According to FRAIA, the decision-making process regarding algorithms can be divided into three main stages:

  • Stage 1: preparation. This stage is about deciding why an algorithm will be used and what its effects will be.
  • Stage 2: input and throughput. This stage is about the development of an algorithmic system. In this stage, it is decided what the algorithm must look like, and which data is being used to feed the algorithm. Within this stage, the FRAIA further distinguishes between:
    • Stage 2a: data, or input. This involves asking questions that pivot on the use of specific data and data sources.
    • Stage 2b: algorithm, or throughput. This involves questions regarding the algorithm, and its operation and transparency.
  • Stage 3: output, implementation and supervision. This stage is about how the algorithm is used: which output it generates, how that output may play a role in policy or decision-making, and how this can be supervised.

Source: [MBZK21]
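The stages above can be captured as a minimal checklist structure. The question texts are paraphrases of the summary above, not the official FRAIA wording:

```python
# The three FRAIA stages as a minimal checklist data structure.
FRAIA_STAGES = {
    "1 preparation": [
        "Why will an algorithm be used?",
        "What effects will it have?",
    ],
    "2a data (input)": [
        "Which data and data sources are used?",
    ],
    "2b algorithm (throughput)": [
        "How does the algorithm operate, and is it transparent?",
    ],
    "3 output, implementation and supervision": [
        "Which output does the algorithm generate?",
        "What role does the output play in policy or decision-making?",
        "How is the algorithm supervised?",
    ],
}

def open_questions(answers: dict) -> list:
    """All checklist questions not yet answered."""
    return [q for qs in FRAIA_STAGES.values() for q in qs if q not in answers]

print(len(open_questions({})))  # 7
```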

Conclusion

There is currently no clear body of standards, laws, or case law in the field of data ethics. While the AI Act aims to fill this gap, ethical handling of data requires more than following the letter of the law. Take the example of the GDPR, Europe’s data privacy law. The GDPR gives us more control over our personal data, but the ethical principle of privacy is a much broader and more abstract issue than simply protecting data. An organization that sees its customers’ privacy as its responsibility will therefore have to think beyond merely avoiding a GDPR fine and, soon, an AI Act fine ([Meel]).

Notes

  1. The final content of the AI Act is currently still being negotiated in Europe. This means that this article provides an insight into the developments concerning the AI Act but cannot provide certainty on the final content of the AI Act.

References

[AP22a] Autoriteit Persoonsgegevens (2022, March 15). AP Inzet Artificial Intelligence Act. Retrieved from: https://autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/ap_inzet_ai_act.pdf

[AP22b] Autoriteit Persoonsgegevens (2021). Jaarverslag 2021. Retrieved from: https://www.autoriteitpersoonsgegevens.nl/sites/default/files/atoms/files/jaarverslag_ap_2021.pdf

[DLAP21] DLA Piper (2021, January 19). Nederland tweede van Europa in aantal gemelde datalekken sinds invoering AVG. Retrieved from: https://www.dlapiper.com/en-nl/news/2021/01/nederland-tweede-van-europa-in-aantal-gemelde-datalekken-sinds-invoering-avg

[EC21] European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

[ICTM18] ICT-Magazine (2018, October 16). Onderzoek KPMG: Nederlander nauwelijks bekend met nieuwe privacyrechten. Retrieved from: https://www.ictmagazine.nl/onderzoek-kpmg-nederlander-nauwelijks-bekend-met-nieuwe-privacyrechten/

[Meel] Van Meel, M. & Remmits, Y. (n.d.). Risico’s van algoritmes en toenemende vraag naar ethiek: Deel 4 – De burger en klant centraal bij het gebruik van algoritmes [KPMG Blog]. Retrieved from: https://home.kpmg/nl/nl/home/topics/artificial-intelligence/vertrouwen-in-algoritmes/risicos-van-algoritmes-en-toenemende-vraag-naar-ethiek.html

[KPMG18] KPMG (2018). Een beetje privacy graag. [Report can be requested at KPMG.]

[KPMG21] KPMG (2021, October). Meer zorgen over privacy: Het resultaat van ons privacy onderzoek onder consumenten. Retrieved from: https://assets.kpmg/content/dam/kpmg/nl/pdf/2021/services/meer-zorgen-over-privacy-whitepaper.pdf

[MBZK21] Ministerie van Binnenlandse Zaken en Koninkrijksrelaties (2021, July). Impact Assessment Mensenrechten en Algoritmes. Retrieved from: https://open.overheid.nl/documenten/ronl-c3d7fe94-9c62-493f-b858-f56b5e246a94/pdf

[Ober19] Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019, October 25). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. Retrieved from: https://www.science.org/doi/10.1126/science.aax2342

The sustainability reporting journey of Swiss luxury watchmakers

The theme of sustainability has gained momentum in recent years as companies are pushed by regulatory pressure to step up their sustainability initiatives and transparency on all three pillars of ESG (environmental, social, and governance). In 2018, the WWF analyzed the maturity of the 15 biggest Swiss watch companies in relation to sustainable processes, governance, and initiatives. The results showed that most of the reviewed companies communicated very little about their ambition to become more sustainable, or did not communicate at all. In October 2022, KPMG watch and luxury experts reviewed the progress of sustainability initiatives, social media sentiment and news coverage related to the sustainability activities of the 15 brands from the 2018 WWF report. The review showed that most of the 15 brands have online content dedicated to sustainability on their brand websites, but not all of them are sufficiently transparent about the environmental impact of their supply chain. Further, the social media sentiment analysis suggests that a consumer’s desire to purchase a luxury watch might not be triggered by the sustainability efforts of the brand; however, the consumer might forgo the purchase when the brand is perceived as inactive in sustainability.

Introduction

In its 2018 report, A precious transition – demanding more transparency and responsibility in the watch and jewelry sector, WWF Switzerland ([Grun18]) analyzed the ecological and societal impacts of the 15 leading Swiss watch manufacturers and presented examples of how they could improve their environmental footprint. The results were clear-cut: the majority of corporations in the watch and jewelry industry were either non-transparent or did not appear to have serious aspirations to improve their sustainability. For this industry to catch up, WWF Switzerland made the following recommendations for manufacturers and underlined the power that consumers have to initiate change in this industry:

  • Improve the value chain’s transparency
  • Source responsible raw materials
  • Embrace sustainability in the company’s practices
  • Report on pertinent sustainability issues in public
  • Work together with other players in the industry
  • Innovate for circularity

Quantifying the actual benefits for companies that are committed to sustainability is in many cases not straightforward. Although the impact on brand value and consumer opinions can be immediate, the financial benefit might only come in the long term. Our watch and luxury industry experts assessed the progress of sustainability initiatives, social media sentiment and news coverage related to the ESG (environmental, social, and governance) activities of these brands. Further, the article highlights areas of focus that can enable the luxury watch brands to drive their sustainability agendas. We analyzed primary and secondary data for the purpose of this article.

Was there any progress since 2018?

In October 2022, the (international) websites of each of the 15 watch brands that were part of the original scope of the 2018 WWF Report (Figure 1) were visited and analyzed. The 15 brands in scope of the WWF report include:

  • Richemont Group: Cartier, IWC, Piaget, Vacheron Constantin and Jaeger LeCoultre
  • Swatch Group: Omega, Longines, Tissot, Swatch and Breguet
  • LVMH Group: Tag Heuer
  • Independent brands: Rolex, Audemars Piguet, Patek Philippe and Chopard

The purpose of the online review was to assess the progress made by these brands in the transparency of their supply chain, their use of responsible materials and the integration of sustainability into their business practices. This approach was designed to mirror the behavior of an environmentally conscious consumer whose decision to buy a watch from one of the 15 brands is influenced by whether the brand meets good environmental standards. What we learned from the brands’ websites shows mixed signals of improvement since 2018. Where some brands openly communicate the governance, policies and processes they have in place to reduce their climate impact and increase transparency on the sourcing of materials, others continue to lack transparency and appear to have little ambition to improve their sustainability. Only 8 of the 15 brands we looked at have established a dedicated sustainability page on their websites.

Figure 1. WWF assessment of 15 leading Swiss luxury watch makers, 2018.

Communication

The websites of Richemont, Swatch and LVMH have a dedicated page for sustainability, and the companies also publish a sustainability report in accordance with the Global Reporting Initiative (GRI) standards. Most of the information in the groups’ reports applies to the brands the groups own and is thorough and very informative. However, although the websites of the individual Richemont brands have dedicated sustainability pages with comprehensive information, most of the relevant data is consolidated at group level, with few sustainability use cases illustrated for the individual brands. IWC is the only brand we observed that has published its own sustainability report, separate from, although in alignment with, the group (Richemont). Omega reports some information on the Responsible Jewelry Council and the Kimberley Process, whereas Audemars Piguet reports on its commitments to sustainability. Chopard also has a dedicated webpage concerning sustainability.

Business practices

For all but a few of the 15 brands, sustainability governance, policies, clear roles and responsibilities, and a reporting line to the Board are presented on the brands’ websites or can be inferred from the groups’ sustainability reports. Governance would typically include the establishment of a sustainability steering committee or equivalent, the figure of a Chief Sustainability Officer or equivalent, as well as sustainability teams and officers at different levels of the organization. Governance and risk assessment frameworks are well illustrated on the sustainability websites of Chopard, IWC and Audemars Piguet.

Further, two-thirds of the 15 brands are members of the Responsible Jewelry Council (RJC) and are certified against the Council’s Code of Practices (CoP). Achieving certification on the Code of Practices demonstrates a brand’s commitment to responsible sourcing and promotes transparency and traceability in a brand’s supply chain. These brands have in fact established supply chain policies, sourcing policies in compliance with the OECD Due Diligence Guidance for Responsible Business Conduct for precious metals, as well as supplier codes of conduct to which their suppliers are required to adhere. Richemont, Swatch and LVMH have also listed the Sustainable Development Goals (SDGs)1 they have committed to and their associated timeline for implementation. They also mention their performance of Life Cycle Assessments (see Box 1) in their sustainability reports.

Following the global RJC CoP certification, the next major milestone in the sustainability reporting journey of a watch or jewelry brand is the RJC Chain of Custody (CoC) certification. This certification provides assurance to consumers on how a brand’s products and materials have been sourced, traced and processed through the supply chain. Adherence to CoC standards is ensured through ongoing independent audits. Across the 15 brands in scope, we found mentions of a CoC certification at IWC, Vacheron Constantin, Omega and Audemars Piguet.

Climate neutrality

Greenhouse gas (GHG) emissions measurement (Scope 1, 2 and 3) and targets for carbon footprint reduction are comprehensively reported in the sustainability reports of Richemont, the Swatch Group and LVMH. The measurements reflect a varying degree of maturity, from established emission targets for Scope 1 and 2 to targets still being defined and implemented for Scope 3 emissions. Most of this information is reported on a consolidated basis and breakdowns for the individual brands are not available, except for some examples of initiatives and efforts at brand level. Richemont, the Swatch Group and LVMH are also committed to the Science Based Targets initiative (SBTi), which provides target-setting methods and guidance to companies to set science-based targets in line with the latest climate science (wri.org). Brands like Cartier, Piaget, Vacheron Constantin and IWC claim to reach carbon neutrality through offsetting, by funding environmental projects, while other brands like Jaeger LeCoultre and Chopard have reported a 40% reduction in their carbon footprint.
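The scope structure used in such group-level reporting can be made concrete with a small aggregation sketch. All figures below are invented for illustration; the point is that individual emission sources roll up into Scope 1, 2 and 3 totals, and the value-chain Scope 3 bucket typically dwarfs the rest, which is why its targets are the hardest to set:

```python
# Aggregating a carbon footprint by GHG Protocol scope. Figures invented.
emissions_t_co2e = [
    ("scope 1", "company vehicles", 1_200),
    ("scope 1", "on-site heating", 800),
    ("scope 2", "purchased electricity", 3_500),
    ("scope 3", "purchased goods and services", 42_000),
    ("scope 3", "business travel", 2_500),
]

totals: dict = {}
for scope, _source, tonnes in emissions_t_co2e:
    totals[scope] = totals.get(scope, 0) + tonnes

print(totals)  # {'scope 1': 2000, 'scope 2': 3500, 'scope 3': 44500}
```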

The environment

Most of the brands are fairly vocal about their contribution to preserving the environment and promoting biodiversity. Efforts in this area focus on the brands’ headquarters, production sites and boutiques and vary in degree: the use of 100% renewable energy, solar panels and circular water systems; chemical wastewater and scrap metals disposed of by independent third parties; the elimination of single-use plastic bottles; the availability of sustainable transport for employees; and boutiques with LED lighting and LEED status (Leadership in Energy and Environmental Design, a green building certification program used worldwide).

Figure 2. The Audemars Piguet Manufacture des Forges building, which opened in Le Brassus in 2008, was the first Minergie-ECO® certified industrial site in Switzerland (source: Audemars Piguet).

The majority of the brands have also changed or are in the process of changing their packaging to more sustainable materials. Examples include packaging made of paper foam that is compostable and recyclable, packaging that is compliant with the Forest Stewardship Council (FSC)2 and the Programme for the Endorsement of Forest Certification (PEFC)3, and other sustainable packaging solutions that follow the OEKO-TEX Standard 1004.

Sourcing materials

The Richemont brands, the Swatch Group, Tag Heuer and Chopard confirmed in their sustainability reports that the diamonds they purchase are compliant with the Kimberley Process Certification5; these brands thus communicate their commitment to removing conflict diamonds from their supply chains. For the Richemont brands and Chopard, we also found evidence of their adherence to the System of Warranties (SoW) from the World Diamond Council, which ensures that all diamonds traded are Kimberley Process compliant and have also been handled in accordance with universal principles of human rights, labor rights, anti-corruption and anti-money laundering. The SoW is applied each time the ownership of any natural diamond changes hands within the industry, both when exported or imported and when being sold in the same country.

As for the sourcing of gold, 99.6% of the gold purchased by the Richemont brands is CoC certified. Audemars Piguet reports 100% of its gold purchases as certified by an independent party, and Chopard reports the use of 100% ethically produced gold (i.e. from RJC CoC certified suppliers and from artisanally mined gold produced in a responsible way).

With regard to the sourcing of materials for their watch straps, both Richemont and the Swatch Group communicated their adherence, and that of their brands, to the International Crocodilian Farmers’ Association (ICFA) as well as to the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), an international agreement between governments aiming to ensure that international trade in specimens of wild animals and plants does not threaten the survival of the species. Several brands are experimenting with, or already offering, straps made of vegan, recycled, recyclable, compostable or bio-based materials.

Box 1. Life Cycle Assessment (LCA) ([Swat22])

Choosing a sustainable design strategy is an essential part of product development. Based on the results of a life cycle assessment, a comparison can be made between the environmental impacts of different materials, products or processes that perform the same function, and those with the lowest environmental impact throughout their life cycle are selected. The LCA is also used to identify opportunities for improving the environmental performance of the company’s products, including packaging, at different stages of their life cycle. This means that informed decisions can be made in new developments with regard to the procurement of raw materials, the selection of processes, end-of-life treatment, etc.
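The comparison described in Box 1 can be sketched as a toy calculation: sum an impact score per life-cycle stage and select the option with the lowest total. The materials and numbers below are fabricated purely for illustration:

```python
# Toy life-cycle comparison: total impact across stages, lowest wins.
impacts = {  # kg CO2e per unit, per stage (fabricated example values)
    "material X": {"raw materials": 4.0, "production": 2.5, "end of life": 0.5},
    "material Y": {"raw materials": 2.0, "production": 2.8, "end of life": 0.3},
}

def total_impact(material: str) -> float:
    """Sum a material's impact over all life-cycle stages."""
    return sum(impacts[material].values())

best = min(impacts, key=total_impact)
print(best)  # material Y
```

A real LCA tracks many impact categories (not just CO2e) and follows standards such as ISO 14040, but the selection logic is the same idea in miniature.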

How sustainability is driving innovation 

Research ([Llor21]) shows that innovation and sustainability are highly connected topics. Innovation can be a key enabler for companies to achieve their sustainability goals, and this equally applies to the watch industry, where watch companies work to integrate sustainability into their business practices ([Yum21]).

In 2022, Swatch launched a new watch collection called the MoonSwatch, which combined the iconic Omega Speedmaster Moonwatch design elements with “Bioceramic”, an innovative material. Bioceramic is made up of two-thirds ceramic and one-third castor oil extract, a vegetable oil pressed from castor seeds. The MoonSwatch is one of several innovative approaches that Swiss brands have come up with in recent years to meet the demand from younger generations of consumers for more sustainable products, a trend that has led to some fascinating innovations (see Box 2).

Figure 3. The MoonSwatch in the “Earth” variant (source: Swatch).

For example, nowadays, watch cases can be made of titanium produced from scrapped aircraft parts, of recycled precious metals, or from recycled or upcycled plastic bottles collected from the ocean. There are also brands that offer watch movements assembled using restored Swiss movements. And dials are being offered with transparent ceramic glass instead of sapphire crystals, which requires a smaller carbon footprint to produce.

Several brands propose watch straps made from recycled or upcycled fishing nets, plastic bottles, or mixing ceramic and bio-sourced plastic. Other companies have specialized in creating alternative leathers obtained by recycling leather scraps from the leather manufacturing production process, or vegan leather obtained from the bark of mulberry, cork and apple. In some cases, straps are entirely made of green waste and are fully compostable at the end of their lifecycle. 

In 2021, IWC launched its TimberTex straps, produced in Italy and made primarily of paper from responsible sources. Luxurious in look and feel, the TimberTex straps have a soft and supple texture. Unlike synthetic leathers, which are often petroleum-based plastics, the material used in TimberTex straps is composed of 80% natural plant fibers. The cellulose comes from trees grown in responsibly managed European forests certified by the Forest Stewardship Council (FSC), a non-profit organization. The material is manufactured in Italy using traditional papermaking techniques and colored with natural plant-based dyes. In 2022, the maison launched the MiraTex straps, made of plants and minerals, including FSC-certified natural rubber and fillers such as cork powder and mineral colorants.

Box 2. Production of sustainable raw materials by Panatere ([EPHJ21])

A Swiss company based in Saignelégier, Panatere is a pioneer in the production of sustainable raw materials. The company specializes in 100% recycled and recyclable stainless steel, locally sourced from scrap steel from watchmaking and medical companies operating in the Swiss Jura region. The carbon footprint of Panatere’s recycling process is 10 times smaller than that of the standard process for producing stainless steel. The company is also working on setting up a local solar oven in the Jura region, in the middle of the Swiss Watch Valley, with the objective of increasing its output of recycled and recyclable stainless steel from 50 to 200 tons per year and of producing solar materials leveraging a network of partners within 50 km of the company. Since 2021, Panatere has been finalizing its process for producing solar raw materials using a solar oven in the French Pyrenees. The use of the solar oven would further reduce the carbon footprint of stainless steel production to 165 times smaller than standard production, i.e. an almost neutral carbon footprint.

Figure 4. The Solar Oven in the French Pyrenees (source: Panatere SA).

ESG Social Media analysis

To provide deeper insights into the impact of the sustainable initiatives in luxury watchmaking, we have performed an ESG sentiment analysis and news coverage analysis for each of the 15 brands that were in scope of the WWF report in 2018. Although the WWF report focuses largely on the Environment dimension, we have taken all three ESG dimensions into account to obtain a holistic view for each of the brands.

First, our research identified the “ESG Share of Voice”, that is, the number of times a brand is mentioned on social media in general versus how many of those mentions relate to ESG. Next, we classified both categories of social media mentions as either positive, neutral or negative. This approach allowed us to assess both the relevance of ESG for each brand and whether ESG creates or supports a positive perception. Further, we conducted an analysis of spikes in ESG mentions to estimate the difference between overall ESG consumer sentiment and the social media reaction to specific world events or particular business actions.
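As an illustration, the Share-of-Voice and sentiment-share calculations described above can be sketched in a few lines of Python; the `Mention` structure and its fields are hypothetical simplifications of the actual social media dataset, not the tooling used in the analysis:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    sentiment: str      # "positive" | "neutral" | "negative"
    esg_related: bool   # whether the mention touches an ESG topic

def share_of_voice(mentions: list[Mention]) -> float:
    """Fraction of all mentions that are ESG-related (the 'ESG Share of Voice')."""
    if not mentions:
        return 0.0
    esg = sum(1 for m in mentions if m.esg_related)
    return esg / len(mentions)

def sentiment_share(mentions: list[Mention], sentiment: str,
                    esg_only: bool = False) -> float:
    """Share of mentions with the given sentiment, optionally restricted to ESG mentions."""
    pool = [m for m in mentions if m.esg_related] if esg_only else mentions
    if not pool:
        return 0.0
    return sum(1 for m in pool if m.sentiment == sentiment) / len(pool)
```

Comparing `sentiment_share(..., "positive")` with and without `esg_only=True` yields the ESG sentiment delta discussed below, e.g. a brand with 31% positive mentions overall but 42% positive among ESG mentions.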

Overall insights

Regardless of the different brands and their ESG sentiments, we can summarize our research in three key observations:

  • The ESG Share of Voice is relatively low, ranging between 1% and 7%. This may suggest that customers show interest in sustainability, but that it is not a prevailing topic
  • For eleven of the 15 brands, consumers have a more positive sentiment toward the brand as a whole than toward the brand in relation to ESG
  • Social media spikes are strongly connected to specific ESG-related events or company activities. The spikes are outliers in the dataset of social media mentions and impact the average ESG sentiment values

Figure 5. The results of the social media data analysis (KPMG analysis, 2022).

Category 1: High performers

Chopard, IWC and Omega have the highest share of positive ESG mentions. These brands also show a positive delta when comparing ESG sentiment to overall brand perception. For example, 31% of all social media mentions of Chopard are positive. When we filter the mentions to include only those related to ESG, this number increases to 42%.

For these brands with a positive ESG performance, the impact of that performance is not always equally high. Continuing with Chopard as an example, its ESG Share of Voice is only 3%, which means the largest part of its social media coverage is not related to ESG. Comparing this to IWC, for which 40% of all ESG mentions are positive, we can assume that sustainability influences brand perception more strongly there, as IWC’s ESG Share of Voice sits at 7%.

Although many parameters contribute to a successful ESG brand perception, the ESG sentiment high performers Chopard and IWC are also high performers in communicating through dedicated ESG reports, as mentioned earlier in this report. As such, it is likely that more transparency and communication have a direct and positive effect on ESG sentiment.

Category 2: Underperformers

The twelve remaining watch brands on the list perform lower on ESG sentiment. As mentioned earlier, eleven of these underperformers have a less positive consumer sentiment regarding ESG compared to the overall brand sentiment.

Within this group, brands such as Piaget and Vacheron Constantin score a relatively high ESG Share of Voice, which means that the impact of their relatively low positive ESG sentiment is greater than for the other brands.

Swatch shows a relatively high number of negative ESG mentions, a significant portion of which relate to sustainability concerns around the buying process of the MoonSwatch.

Impact of spikes in ESG mentions

When performing an analysis of ESG mentions across time, we see that the average number of ESG mentions per month is low. However, we also observe clear spikes in social media coverage when specific ESG-related events or business updates occur. This suggests that the overall ESG sentiment of consumers is largely determined by these events, rather than by a continuous sustainability effort. In the sample in this analysis, social media spikes occur every three to four months and their impact typically lasts between three and five days.
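The spike analysis described above can be approximated with a simple outlier rule. A minimal sketch, assuming daily mention counts and a mean-plus-standard-deviations threshold; the actual analysis may well use a different statistical method:

```python
import statistics

def find_spikes(daily_mentions: list[int], threshold_sigmas: float = 2.0) -> list[int]:
    """Return indices of days whose mention count is an outlier,
    i.e. more than `threshold_sigmas` standard deviations above the mean."""
    if len(daily_mentions) < 2:
        return []
    mean = statistics.fmean(daily_mentions)
    stdev = statistics.pstdev(daily_mentions)
    if stdev == 0:
        return []  # perfectly flat series has no outliers
    return [i for i, n in enumerate(daily_mentions)
            if n > mean + threshold_sigmas * stdev]
```

For a baseline of a handful of mentions per day, a 300-mention event day is flagged as a spike, matching the pattern described for IWC below.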

Third parties can also influence ESG brand perception through social media. In some cases, potentially without knowledge of the watch company, watches are part of an ESG-related activity such as charitable giveaways. In these cases, the impact on ESG sentiment can be significant as these third parties often have a large social reach.

Figure 6. Example of ESG spike analysis (Meltwater and KPMG analysis, 2022).

For IWC, the average number of ESG mentions lies between zero and 25 per month. However, IWC organized two charity events in May 2022, causing significant social media spikes. IWC auctioned off watches that had been worn by celebrities at a previous charity event during the Big Pilot Charity Auction on May 6th. This event caused a social media spike of over 300 mentions on a single day, of which 90% were classified as positive. As such, it is likely that IWC’s high performance on ESG sentiment, as described earlier, is largely thanks to these events.

Figure 7. Example of ESG spikes caused by third parties.

The effect of third parties on ESG sentiment can be illustrated by Rolex. As shown earlier, the average number of social media mentions is relatively low and stable. In this case, however, the spikes are caused by social media activity from third parties such as celebrities or other companies.

Sustainability: do customers of luxury watchmaking care?

Our analysis suggests a noticeable shift in consumer behavior: sustainability is becoming increasingly important in consumers’ decision making. However, there is mixed evidence regarding consumer demand for sustainably made timepieces. Some groups consider sustainability an absolute must: since a luxury watch is not a necessity, they argue, its production process should place great emphasis on sustainability. Other groups remain more skeptical about the true impact of sustainably created watches, given that production quantities are mostly very low and the watches are often passed on from generation to generation, making them sustainable by nature.

The quest for quality, durability and accessible high-quality service appears to be as important to consumers as the sustainability of the product. The study suggests increased traction for the secondary watch market and for renting, co-owning and sharing luxury watches. However, while these findings might indicate a shift of consumer preferences toward sustainability, recent shifts in demand and the scarcity of product availability might dilute the causation.

Nevertheless, sustainability is becoming part of the consumer journey. According to research, consumers seem to consider sustainability during the information search and at the point of the purchase decision. Even though sustainability is taken into account during the purchase process, a brand’s efforts to become more sustainable still appear to be a hygiene factor. The sentiment analysis suggests that the desire to purchase might not be triggered by a brand’s sustainability efforts, although consumers might change their minds if the brand is not actively pursuing sustainability goals. Furthermore, the pandemic years seem to have made consumers more conscious of environmental as well as social impacts, such as child labor and gender equality.

Sustainability plays a bigger role when the purchase is emotion-driven and the consumer wants to feel good about their new product. However, consumers are also looking for a high-quality product that retains value over a longer period of time, and when this consideration prevails over the emotional reasoning, the importance of the product’s sustainability decreases.

Figure 8. Overview of news coverage for luxury watches (KPMG analysis, 2022).

The news coverage of luxury watches is generally positive. Neutral and negative mentions were related to the geopolitical unrest in Ukraine; this topic, however, accounted for less than 6% of all mentions. The main topic, driving 45% of the news coverage, is “new collections”. The second strongest driver, at a significant offset of 33 percentage points, is “watchmaking exhibitions”. Sustainability-related topics account for 9% of the mentions in the news.

Conclusion

The theme of sustainability has gained momentum in recent years. Rising regulatory pressure urges companies to strengthen their sustainability efforts as well as the reporting on their results. The stakeholders driving this change include not only regulators but also activist investors and consumers themselves. This report reviews the efforts and communication of the 15 biggest Swiss watch companies originally assessed by the WWF in 2018. The WWF maturity analysis flagged that the majority of brands had either very little or no active communication about their ambition to become more sustainable. We reviewed the progress of sustainability initiatives for these 15 companies in 2022 and combined the findings with a social media and news coverage analysis.

Most brands have websites with pages dedicated to sustainability, and some of them are very active in ESG initiatives and communication. Some brands report on ESG according to international standards and are part of international sustainability initiatives, while others are not sufficiently transparent about how their supply chain affects the environment. The social media analysis shows that the ESG Share of Voice is relatively low. Although not necessarily correlated, it is likely that active ESG communication and transparency positively influence consumers’ ESG perception. Furthermore, a brand’s environmental initiatives may not be what prompts customers to buy a premium watch. Nevertheless, if the brand is perceived to be inactive in sustainability, the consumer may hold back on the purchase.

Notes

  1. The United Nations 2030 Agenda for Sustainable Development ([UN15]) includes 17 SDGs, which form the international and universally applicable framework for sustainable development.
  2. FSC certification ([FSC22]) ensures that products come from responsibly managed forests that provide environmental, social and economic benefits.
  3. The PEFC ([PEFC22]) is a leading global alliance of national forest certification systems. As an international non-profit, non-governmental organization, it is dedicated to promoting sustainable forest management through independent third-party certification.
  4. STANDARD 100 by OEKO-TEX® ([OEKO22]) certified products have been tested for substances harmful to health. The label certifies that every component of the product, from the fabric to the thread and accessories, has been rigorously tested against a list of up to 350 toxic chemicals.
  5. The Kimberley Process (KP) ([Kimb02]) is a multilateral trade regime established in 2003 with the goal of preventing the flow of conflict diamonds. The core of this regime is the Kimberley Process Certification Scheme (KPCS) under which States implement safeguards on shipments of rough diamonds and certify them as “conflict free”.

References

[CITE22] CITES (2022). What is CITES? Retrieved from: https://cites.org/eng/disc/what.php

[EPHJ21] Environnement Professionnel Horlogerie Joaillerie (2021, May 7). Panatere veut installer un four solaire dans la Watch Valley. Retrieved from: https://ephj.ch/panatere-veut-installer-un-four-solaire-dans-la-watch-valley/

[FSC22] Forest Stewardship Council (2022). FSC Standards. Retrieved from: https://fsc.org/en/fsc-standards

[Grun18] Grünenfelder, D., Starmanns, M., Manríquez Roa, T. & Sommerau, C. (2018). A precious transition: Demanding more transparency and responsibility in the watch and jewellery sector. Environmental rating and industry report. World Wildlife Fund. Retrieved from: https://www.wwf.ch/sites/default/files/doc-2018-12/2018_12_07_WWF%20Watch%20and%20Jewellery%20Report%202018_final_e_0.pdf

[Kimb02] Kimberley Process (2002). Kimberley Process Certification Scheme. Retrieved from: https://www.kimberleyprocess.com/en/system/files/documents/KPCS%20Core%20Document.pdf

[Llor21] Llorca-Ponce, A., Rius-Sorolla, G. & Ferreiro-Seoane, F.J. (2021, August 18). Is Innovation a Driver of Sustainability? An Analysis from a Spanish Region. Sustainability, 13(16), 9286. Retrieved from: https://doi.org/10.3390/su13169286

[OEKO22] OEKO-TEX® (2022). OEKO-TEX® Standard 100. Retrieved from: https://www.oeko-tex.com/en/our-standards/standard-100-by-oeko-tex

[PEFC22] Programme for Endorsement of Forest Certification (2022). Standards and Guides. Retrieved from: https://www.pefc.org/standards-implementation/standards-and-guides

[Swat22] The Swatch Group AG (2022, March 17). Sustainability Report 2021. Retrieved from: https://www.swatchgroup.com/sites/default/files/media-files/swa_sr21_en_web_0.pdf

[UN15] United Nations (2015). Transforming Our World: The 2030 Agenda for Sustainable Development. Retrieved from: https://sustainabledevelopment.un.org/post2015/transformingourworld/publication

[WDC22] World Diamond Council (2022). System of Warranties. Retrieved from: https://www.worlddiamondcouncil.org/about-sow/

[Yum21] Yum, A. (2021, April 19). How Sustainability is Driving Innovation in the Watch Industry. Luxuo. Retrieved from: https://www.luxuo.com/homepage-slider/sustainability-innovation-watch-industry.html

Control by Design: risk-free processes as the holy grail

Risk management is gaining an increasingly prominent role within organizations. In a rapidly changing environment, with increasing digitalization and more stringent regulations regarding service delivery, good risk management is a challenge. For more automated risk management, the term Control by Design is used regularly, not only within financial institutions but also as an important risk trajectory for the future at other organizations. But what does this term mean? And why should it be necessary to embrace and apply this way of thinking? This article explains the background and opportunities of Control by Design. It also looks at its application and possible barriers, and how to deal with them to make the concept more concrete.

This article is also a call to other organizations to exchange views. Please send an email to info@compact.nl if you want to share your ideas.

Introduction

Business processes change continuously. Optimization takes place, (sub)processes or IT systems are outsourced, new products or services are developed, and old products or services are discontinued but still need to be managed. Laws, regulations and (internal) policies are introduced or modified, new risks appear, and existing risks are weighed differently. In addition, reorganizations take place and responsibilities and priorities shift. This translates into more complex processes and the implementation of (manual) workarounds to meet new requirements. All this often happens faster than many IT departments can manage. This is also reflected in the controls, where manual checks on the workarounds and exceptions continually drive up costs and carry the risk that those manual checks are not carried out adequately.

Complex, constantly changing and increasingly burdensome regulations mean that important risks can no longer be mitigated by manual control measures alone. Furthermore, in addition to the increasing costs of the process itself, there is growing pressure on monitoring. Assurance on the operation of the control framework is sought through increasing first-, second- and third-line controls, driven by the Three Lines of Defense (3-LoD) model. The cost of the manual work involved in executing the control measures, plus the cost of manually monitoring their operating effectiveness, leads to an ever-increasing cost of control.

Three Lines of Defense

The Three Lines of Defense model consists of three lines that together oversee the management of risk. The first line consists of managers and employees who are responsible for identifying and managing risks as part of their daily work. The second line provides support and guidance by offering guidelines, policies and control frameworks. In addition, the second line also takes care of monitoring to determine that the risks are correctly managed. Finally, the third line focuses on an independent review or audit of the control framework as a whole or parts thereof, including the activities of the first and second line. Often this role is fulfilled by an internal audit department ([CIIA21]).

Besides the complexity mentioned above and the increasing cost of control, we see increasing digitization. Financial institutions are increasingly serving customers online, adapting processes and IT systems. Customer journeys are designed and adapted, and new systems are purchased and/or developed. Redesigning and re-implementing processes provides an opportunity to manage risks differently or, better yet, to prevent them. By including the (process) risks as early as possible in the design, the control of these risks can be organized much more efficiently. In other words: Control by Design! A groundbreaking new idea? Well, no, but it is one that needs to be put into practice in order to actually reduce the cost of control. To achieve this, we will first consider the definition of Control by Design and offer thoughts on how to embed it in the organization’s change processes. We will subsequently explore scenarios in case optimal Control by Design is not feasible, and we will conclude with a number of obstacles and pitfalls one may need to overcome during implementation.

Control by Design: risk management as a fixed part of the (IT) development process

The term Control by Design is not new. And so-called Application Controls have also been used and implemented for quite some time. The benefits are clear. A well-programmed IT system will do the same thing every time, even on a Monday morning or Friday afternoon. In addition, in terms of monitoring, you don’t need to do labor-intensive customer document monitoring, but instead the Application Control can be tested during the implementation or system change. For the Application Control to continue to function, you can rely on well-designed IT processes (General IT Controls) to ensure that the system continues to do what it is supposed to do. Adequate General IT Controls guarantee a controlled system change process, effective authorization management, and assured system continuity ([ISAC11]). These elements are a prerequisite for determining that an automated control (Application Control) continues to do what it is supposed to do.

Yet within organizations we see that such automated control measures are not always used to their full potential. Several things can stand in the way of broad automation. One example is that the implementation of automated control measures can be complex, expensive and vulnerable to change. It may also be that these measures are not given sufficient priority in change processes, because such processes generally focus on realizing business value. For example, organizations may choose to automate only the mitigation of key risks.

The difference between Control by Design and reactively implementing Application Controls (or automating existing manual controls) where risks become manifest is that Control by Design is about setting up a process in such a way that certain risks are controlled (prevented or mitigated) directly from the process design. This means that the process and the associated risks are the starting point of the risk mitigation, instead of the automation of already existing control measures. It is important to ensure good interaction between the process owner (who knows how their process is structured), the risk manager (who knows where the risks and controls manifest themselves in the process) and the IT specialist (who knows which systems and data are used in the process). By aligning the risk management process with the development process of a product and/or IT system (modification), it is ensured that identifying the root cause of the most important risks, and automating the associated controls, becomes part of the organization’s standard change mechanism. When prioritizing the change calendar, make sure that it is clear which risk-related changes (e.g. implementing a hard input control) can be included in planned changes (e.g. modifying input screens). After all, it is cheaper to replace the sewer pipe if the street is going to be opened up anyway to install the fiber optic network.

The idea here is, as Elon Musk for example mentions in his First Principles approach: go back to basics. When you set up the process from scratch instead of adapting an existing one, you are more likely to come up with a different and possibly better-suited design. This works best in a greenfield situation, where design choices can still be made and fewer restrictions arise from an existing system landscape. The reality is that such situations are rare. You should therefore strive for a situation where change processes take the objectives of Control by Design into account by default. This article focuses on that challenge.

Example

An example is offering a discount on a customer rate. Of course, this can be done by configuring manual discount authorization/approval levels in the system. A more efficient and less error-prone approach is to let the system determine which customers are eligible for standardized discounts and to apply them automatically. And if the business can also work with fixed rates, then the process should be set up so that discounting is not possible at all. The prevention of incorrect or unjustified discounts is then enforced from within the process. Going back to basics: the (re)design of process and IT system.
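A minimal sketch of how such a discount rule could be enforced from within the system design; the segment names and rates are purely illustrative:

```python
# Standardized discount tiers: the system, not an employee, decides eligibility.
STANDARD_DISCOUNTS = {      # hypothetical customer segment -> discount rate
    "retail": 0.00,
    "loyalty": 0.05,
    "corporate": 0.10,
}

def invoice_amount(list_price: float, segment: str) -> float:
    """Apply only the standardized discount for the customer's segment.
    Ad-hoc manual discounts are impossible by construction: there is no
    parameter through which an arbitrary percentage could be passed in."""
    rate = STANDARD_DISCOUNTS.get(segment, 0.00)  # unknown segments get no discount
    return round(list_price * (1 - rate), 2)
```

The design choice is the point: because the function signature offers no free-form discount input, the risk of unjustified discounts is prevented rather than detected afterwards.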

Applying Control by Design in practice

To apply Control by Design, traditional risk management or process models remain in place. Indeed, for broad acceptance and proper operation, it is important to embed Control by Design into the models that are already used by the organization.

It is important to bring different disciplines together as much as possible: the process owner, risk management and the IT delivery partner. This is where two processes come together: the risk management cycle and the (IT) development process. It is in these processes where the Control by Design philosophy needs to be applied.

We recognize four important preconditions for the success of Control by Design. The first precondition is to apply Control by Design during the implementation of new IT systems and the digitization and/or adaptation of processes. The development process goes through the various phases of intake, analysis and determination of the requirements, in order to then build and implement these requirements. Whether you work according to a waterfall, agile or other development methodology, it always comes down to the fact that during the development process several steps of the risk management cycle are integrated, from identifying risks to mitigating and determining the monitoring strategy. In Control by Design, you want to align these steps and look specifically at where IT systems can be adapted to reduce certain risks or, better still, to eliminate them.

To do that, it must be clear which part of the end-to-end process is planned to be changed. To mitigate the risk, it is important to focus on the root cause of the risk. The BowTie and Five Times Why methodologies can be used to identify these root causes. The BowTie method breaks down the risk description into cause, event and effect ([Culw16]), after which the cause is elaborated by repeatedly asking why the risk arises. This is how you arrive at the final root cause ([Serr17]). If this root cause occurs in the part of the end-to-end process where a change is planned, Control by Design becomes particularly important. In order to identify a risk, perform the root cause analysis and come up with the best approach to eliminate or (automatically) mitigate a risk in the process, the broad expertise of business, risk management and IT needs to be brought together at the right time during the change process.
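The combination of a BowTie decomposition and repeated why-questions can be sketched as simple data structures; the names and example strings are illustrative, not taken from an actual risk register:

```python
from dataclasses import dataclass, field

@dataclass
class BowTie:
    """BowTie decomposition: causes lead to a central risk event,
    which in turn leads to effects."""
    event: str
    causes: list[str] = field(default_factory=list)
    effects: list[str] = field(default_factory=list)

def root_cause(why_chain: list[str]) -> str:
    """Five-times-why: each entry answers 'why?' for the previous one;
    the last answer in the chain is treated as the root cause."""
    if not why_chain:
        raise ValueError("empty why-chain")
    return why_chain[-1]
```

Starting from a BowTie cause, the why-chain drills down until a cause is reached that the planned process change can actually eliminate.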

This brings us to the second precondition: make sure that during the design you know where the key risks are located across the entire width of the business process. The end-to-end insight based on broad expertise is needed at that moment, because the actual root cause of a risk can occur in a completely different part of the process than where the focus lies at the moment of change. An example is when clients provide incomplete documents when requesting a product, which may result in incorrect advice or product approval. This risk can be mitigated in the closing phase by asking the client to submit these documents to complete the request, but this carries the risk that the whole assessment and advice process needs to be reperformed to take the information in this documentation into account. Ideally, the cause of the risk is eliminated in the intake phase, prior to the assessment and advice processes. With the end-to-end process approach, risks are identified across the process and system chain and control measures can be implemented at (or as close as possible to) the place where they arise. This prevents the duplicate implementation of control measures that mitigate the same risk and thus benefits efficiency. From the traditional risk analysis perspective, this step is of additional importance for Control by Design, to shape the design in the right place and in a timely manner. You can replace the sewer pipe where the street opens up, but if the real problem is that far too much water needs to be drained, you’re better off replacing the pavement with urban gardens.

The third precondition is to standardize before you digitize. For Control by Design, the principle is that the more a process is standardized, the simpler the process and the easier it becomes to avoid a risk. This is not a new concept but it is an important basis, although it is not always possible. An indication of a lack of standardization is there being too many deviations/workarounds in the process. We will discuss this in more detail later in the article.

The fourth precondition is to have accurate and reliable data in order to use a properly functioning automated control measure (Application Control). It needs to be clear which data is needed at what point in the process. This data must be accurate in order for the control to function properly. After all, garbage in = garbage out. Data needs to be collected from reliable sources, after which the accuracy, completeness and timeliness of the data need to be determined before it is used as the basis for an application control.
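A hypothetical sketch of such a data quality gate, checking completeness and timeliness before an application control consumes the data; the field names and thresholds are assumptions, not a prescribed standard:

```python
from datetime import datetime, timedelta, timezone

def data_fit_for_control(records: list[dict], required_fields: list[str],
                         max_age: timedelta) -> bool:
    """Gate an application control on data quality: every record must be
    complete (all required fields present and non-empty) and timely
    (its 'updated_at' timestamp not older than max_age)."""
    now = datetime.now(timezone.utc)
    for rec in records:
        if any(not rec.get(f) for f in required_fields):
            return False                      # incomplete record
        if now - rec["updated_at"] > max_age:
            return False                      # stale record
    return True
```

Only when the gate passes would the downstream application control be allowed to act on the data; otherwise the process falls back to an exception flow.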

Figure 1. Four preconditions for Control by Design.

Example

Customer relationship management is an important part of overall customer service. Does the customer still have the product that best matches their situation? To conduct customer relationship management properly, it is necessary to schedule customer contact to assess the suitability of financial products, to record the notes of the conversation and to plan the necessary follow-up actions. High workload and operational errors pose risks to this process. Using IT system support, several process risks can be reduced. The CRM software builds in triggers for scheduling the customer appointments. During the appointment, the advisor walks through a workflow in the IT system with the customer, completes the questions, and the choices are automatically recorded in the system. The report cannot be completed in the IT system until the advisor has provided an explanation for any exceptions or specific customer choices. The IT system then automatically saves the report in the customer file and e-mails it to the customer. Many actions are taken over by the IT system, reducing the risk of not engaging in a timely conversation with the customer, not addressing all required questions, not having a record of the conversation, and not actually receiving the relevant information.
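The completion guard in this example (the report cannot be finalized until all required questions are answered and every exception is explained) could be sketched as follows; the class and field names are illustrative assumptions, not the actual CRM software:

```python
# Hypothetical sketch of a preventive completion guard in a CRM workflow.
class AdvisorReport:
    def __init__(self, answers: dict, exceptions: list):
        self.answers = answers        # question id -> recorded choice
        self.exceptions = exceptions  # list of {"id": ..., "explanation": ...}
        self.completed = False

    def try_complete(self, required_questions: set) -> bool:
        """Preventive control: block completion instead of checking afterwards."""
        unanswered = required_questions - self.answers.keys()
        unexplained = [e for e in self.exceptions if not e.get("explanation")]
        self.completed = not unanswered and not unexplained
        return self.completed
```

The point of the design is that the risky state (an incomplete report in the customer file) simply cannot occur, so no detective control on completeness is needed afterwards.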

It looks simple on paper, and the idea finds many supporters who recognize the benefits, not least from a cost savings perspective. Who wouldn’t want to make more use of automated control measures to prevent manual work, or to make mistakes impossible in the first place? The reality, however, is different, especially in a more complex organization with a complicated IT landscape that has grown evolutionarily. Without specifically taking into account the dilemmas raised by Control by Design, the chances of successful application are greatly diminished.

Some important things to consider in advance:

  1. Control by Design is not necessarily (just) automating the existing manual controls
    Manual controls in the process are performed in a different way than Application Controls or IT Dependent Manual Controls. For example, there may be more professional judgment involved, information needed to perform the control may have to reach the reviewer in different ways through different IT applications, information may be recorded in documents instead of structured data, and so on. Automating the action performed by the controller is not the goal of Control by Design: ideally, the step should become redundant (e.g. through a preventive control at the right place in the process). This difference must be clear in order to avoid disappointment in the application of Control by Design that would hinder its success.
  2. Control by Design is ineffective when there are too many deviations in the process
    A complex process is more difficult to control. When there are many product/process variations, it can be a lot of work to implement an automated, preventive control measure that actually mitigates the risk for every deviation in the process. Professional judgment needed to perform a control and a lot of room for overruling business rules make it difficult to adequately mitigate risks via application controls. Theoretically, everything can be automated, but at irresponsible costs and with the result that the systems themselves become too complex.
    The better the processes are standardized, and the more product rationalization has taken place, the better the systems can be set up for preventive automated controls.

Figure 2. The highway. Control by Design standardizes the primary process and eliminates or monitors possible deviations that can bypass controls.

  3. Control by Design also consumes change capacity and thus requires priority
    Implementing and applying Control by Design requires commitment and investment prior to the actual IT implementation, at the expense of the available change capacity. Agile development teams with overflowing backlogs steer towards realizing as much business value as possible. Consciously prioritizing the requirements of Control by Design is therefore necessary but not popular: the value only becomes apparent in the manual activities that are avoided later, and this value is usually not adequately weighed against the return of other changes prioritized in a sprint. Therefore, when implementing Control by Design, its rules should be enforced: i) deviations from the Control by Design principles and steps in the change process should be made visible; ii) deviations should require formal approval; and iii) temporary acceptance of deviations should be monitored to ensure the right priority on the backlog later on. For example, when an IT system change introduces a manual check instead of removing the root cause, this is a deviation from the Control by Design principles and should thus follow the above-mentioned steps.
  4. Combined insight into the end-to-end process, IT and risk helps to make the right design choices
    A key objective of Control by Design is that risks should be prevented where they arise. But where is that? End-to-end processes are often long and complex, and transcend the responsibility of individual teams – at the functional, infrastructure and IT application levels. Parts of the process or technology may have been outsourced. Other parts may be using legacy IT products. Making changes in such cases is often complicated, costly and not future-proof.
    In practice, it is difficult to bring all the necessary knowledge together to deliver the right insights. Process documentation may be outdated, incomplete or insufficiently detailed. There are few employees who can oversee the entire process and their time is scarce. A (key) risk analysis at process level with a good understanding of the root causes of risks is indispensable. The importance of involvement of the complete “triangle” of process, IT and risk with the aim to strengthen each other and speed up the development process cannot be stressed enough. Additionally, we emphasize the need to ensure enough time to properly map out the risks and their root causes.
  5. The responsibility for implementing an IT change that addresses a root cause may lie elsewhere than where the risk manifests itself
    Even if a solid risk analysis identifies a clear root cause and the (IT) change necessary to prevent or mitigate the risk, the IT change needed does not always fall within the responsibility of the team that feels the impact of the risk.
    Other scrum/development teams have their own responsibilities and priorities. Implementing a fix for a root cause may not score high on their list at that point in time. As a result, quick fixes and workarounds are often implemented, which take the pressure off the necessity to tackle the real root cause and lead to suboptimal solutions (… and go back to item 3 on this list). The parks department doesn’t have time to realize the urban gardens at present, so maybe just replace the sewer pipe for now?
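The three enforcement rules mentioned under item 3 (visibility, formal approval, monitored temporary acceptance) can be made concrete in a simple deviation register. The Python sketch below is illustrative; the names and record structure are assumptions:

```python
from datetime import date

# Hypothetical deviation register enforcing the three rules for
# deviations from Control by Design.
class DeviationRegister:
    def __init__(self):
        self.entries = []  # rule i: every deviation is recorded, hence visible

    def register(self, description: str, approved_by: str, review_by: date):
        if not approved_by:
            raise ValueError("deviation requires formal approval")  # rule ii
        self.entries.append({"description": description,
                             "approved_by": approved_by,
                             "review_by": review_by})

    def due_for_review(self, today: date) -> list:
        # rule iii: temporary acceptance is monitored via a review date,
        # so the structural fix keeps its place on the backlog
        return [e for e in self.entries if e["review_by"] <= today]
```

In practice such a register would live in the existing change or backlog tooling; the point is that a deviation cannot exist without an owner, an approval and a review moment.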

Control by Design Funnel as an alternative

At the beginning of the article, lowering the cost of control was broken down into two main parts. With the (automated) prevention of risk, control costs in the primary process decrease. Another way to reduce the cost of control is more effective monitoring of the operating effectiveness of controls. Manual file checking is the most labor-intensive form of monitoring. Here, the Control by Design Funnel (see Figure 3) can be applied. The funnel indicates that the highest possible level of (automated) risk control lies in the development process. A lower level should only be examined if higher levels are not possible or the benefits do not cover the costs.

Figure 3. Control by Design Funnel.

To apply the funnel properly, it is important not only to assess the control measures during risk analysis, but also to adopt a monitoring strategy. As mentioned in the introduction, we see that more and more assurance is sought within organizations by intensifying the testing of operating effectiveness of controls and by monitoring whether certain risks still occur. Automated control testing (funnel option 2, see Figure 3) and smarter, data-driven control testing (funnel option 3) will in that sense contribute to reducing the cost of control. The requirements that enable this automated or indicator-driven control monitoring need to be provided to the software development team as an outcome of the risk analysis, the subsequent assessment of the Control by Design (im)possibilities and the selection of the alternative according to the funnel.
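Selecting a level in the funnel comes down to a simple decision rule: take the highest level that is both feasible and worth its cost, and only then drop lower. The sketch below is illustrative; the level descriptions paraphrase Figure 3 and are not its exact wording:

```python
# Hypothetical walk down the Control by Design Funnel (paraphrased levels).
FUNNEL = [
    "1: prevent the risk in the design (application control)",
    "2: automated control testing",
    "3: indicator-driven monitoring",
    "4: manual file checking",
]

def select_funnel_level(feasible: set, benefit_exceeds_cost: set) -> str:
    """Pick the highest funnel level that is feasible and cost-effective."""
    for level in FUNNEL:
        n = int(level[0])
        if n in feasible and n in benefit_exceeds_cost:
            return level
    return FUNNEL[-1]  # manual file checking remains the fallback
```

The inputs (which levels are feasible and cost-effective) are exactly the outcome of the risk analysis and the assessment of the Control by Design (im)possibilities described above.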

Example

In an ideal scenario, the risk of not having a record of a customer conversation is prevented by the CRM process described in the previous example. If automating the process to that extent is currently unfeasible, one could consider automated monitoring to determine whether all customer appointments conducted that week have resulted in a record saved in the customer file. If this monitoring cannot be automated either, one can look at the next layer in the funnel, which is based on indicators. Suppose it has been established that the cause of an incorrect record is a lack of time on the part of the advisor writing the report of the conversation with the customer. If the report is prepared within a day of the conversation, errors are almost never found; should it take longer, the chance of a faulty record grows sharply. The time between the appointment and storing the record of the interaction is thus a quality indicator: a means of determining whether the control measure is working adequately. If the indicator shows that less than 95% of the advisor reports are saved within 2 days of the appointment, additional quality checks become necessary. Such monitoring does require that you are able to get the right data from the systems. The method of monitoring must therefore be considered during the development process and introduced as a requirement during development. If these requirements are not included, often the only remaining option to assess whether the process is “in control” is the least favored, labor-intensive level 4: manual file checking.
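The indicator from this example can be computed directly from appointment and storage dates. The following Python sketch assumes a simple data layout (pairs of appointment date and storage date) and uses the 95% / 2-day values from the example:

```python
from datetime import date

# Values taken from the example above; the data layout is an assumption.
THRESHOLD = 0.95
MAX_DAYS = 2

def indicator_triggers_review(reports: list[tuple[date, date]]) -> bool:
    """True when the share of reports saved within MAX_DAYS of the
    appointment drops below THRESHOLD, i.e. when additional quality
    checks become necessary."""
    timely = sum(1 for appt, saved in reports if (saved - appt).days <= MAX_DAYS)
    return timely / len(reports) < THRESHOLD
```

This only works if the appointment and storage timestamps are reliably available, which is exactly why the monitoring method must be specified as a requirement during development.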

Conclusion: Control by Design is an important concept for cost savings and better risk management

There is no universal blueprint for implementing Control by Design. Organizations differ from each other, and the way Control by Design is implemented can therefore vary. This depends, for example, on time, the maturity of the organization and the willingness to embrace the concept. It is therefore important to work towards objectives that are achievable for a specific organization.

Control by Design is an important concept for managing risks better and reducing the cost of control. Implementing the concept sounds simple, but in practice it can be problematic, and several challenges will be encountered. Implementing Control by Design requires priority, an end-to-end process perspective and the right expertise at the table. Adopting Control by Design as an integral part of the IT change process is a long-term effort; rollout comes down to small evolutionary steps rather than radical change. It is important to make the right choices in dealing with scarce IT capacity: make sure you only have to develop control measures once by applying them in the right place. The effort invested during IT development will be more than repaid after implementation: expensive monitoring can be avoided, as well as labor-intensive manual file checks to see whether the process is running smoothly.

This requires good anchoring of Control by Design in the existing IT development and risk processes. Make the steps and tools as concrete as possible, and make them mandatory, measurable and visible.

In addition, there are often other initiatives in the organization that align with Control by Design, such as Security by Design, Privacy by Design, Business Process Management or the adoption of agile software development methods. These initiatives can reinforce each other and accelerate the transition to Control by Design, so take advantage of this and join forces.

Do you recognize the desire to apply this, are you curious about the experiences or do you want to know more? We warmly invite you to exchange views with us.

