
Shaping the synthetic society: What’s your role in safeguarding society against the systemic risks of AI?

In this article we explore the systemic risks of AI, identifying the various threats it poses to society at different levels of criticality. We complement this analysis with an overview of the interventions at society’s disposal to combat these threats. Reflecting on the (legislative) actions already taking place in the European Union, we identify which threats require the most vigilance going forward. We conclude with recommendations on the roles that governments, organizations and citizens can play in further managing the systemic risks of AI.

Introduction

Microtargeted fake news around elections, incorrect data-driven fraud detection, and an epidemic of social media-induced anxiety amongst our youth. These are just a few of the examples that make it increasingly apparent that the risks of artificial intelligence1 (AI) go beyond mere incidents affecting a few unlucky individuals. These risks are not about to go away naturally. On the contrary, we are moving towards what we could call a “synthetic society” ([Sloo24]) in which the use of AI permeates more and more aspects of our daily lives. The road ahead holds many promises, but there are also clear and imminent dangers. Unfortunately, the current debate on AI risk often gravitates towards incidents instead of root causes and broader societal impact. Even when discussions turn towards “systemic risks”, they typically result in generic calls for or against AI usage2.

In this article we analyze the ways in which the use of AI triggers significant societal harms, often by strengthening ongoing trends that are not new to society. To gain a deeper understanding, we propose a use case-based perspective on systemic risk, which discerns the different threats to society that are introduced or amplified by the use of AI.

We will also provide an overview of possible interventions that may fully or partially address the threats, and compare them to the actions taking place in the European Union, particularly in the area of AI-related regulation.

Finally, we reflect on the roles that governments, organizations and citizens3 have to play in shaping the development and use of AI in a way that upholds human rights and dignity while fostering innovation.

Systemic risk defined

AI may create and exacerbate risks at different levels. We could visualize this as a pyramid of risk (Figure 1). At the bottom, the operational level, AI applications may prove to be unexplainable, biased or simply inaccurate. These are the concrete issues that affect individuals, groups and businesses and provide the juicy headlines about AI we often see in the news. At the top, AI may threaten our very survival as a species. This is the existential risk that is – so far – confined to the realm of spectacular science fiction. Somewhere in between these categories lies the systemic risk of AI usage: the risk that AI undermines the functioning of society itself and impacts fundamental human rights. The demarcation between operational risk and systemic risk is admittedly gradual and blurry. This is especially true for big tech firms and governments: for organizations providing applications used in critical infrastructures or by large portions of the population, the shortcomings of an individual application may also constitute a systemic risk to society as a whole. Social media platforms such as Facebook or X (formerly Twitter) are concrete examples.

Figure 1. The pyramid of AI risk (source: authors).

While it may sound intuitively appealing, using the concept of systemic risk raises the question of what part of the “system” we call society is actually at stake. If we look at the recent legislation around the use of AI in the European Union, we find ourselves short of an answer. Systemic risk is only mentioned in the Digital Services Act (DSA)4, although, interestingly enough, it is not explicitly defined there. However, the texts of both the DSA and the AI Act (AIA) provide sufficient basis for a definition in the European context. For the purposes of this article, we define systemic risk as all threats of large-scale infringement on the fundamental rights of citizens, either directly or by undermining the institutions and democratic processes that aim to guarantee these rights5.

AI: a systems technology that warrants intervention

The systemic risk of AI matters because AI is a systems technology6 on steroids. Due to its versatility and power, it combines characteristics of various previous major inventions: like nuclear power it can be very destructive, like electricity and the computer it can fundamentally change the way we produce goods and services, like alcohol and drugs it can be incredibly addictive, and like printing, telecommunications and the internet it can change the way we interact. The impact of AI on society could be immense, and current developments give serious reason for concern. This means that we cannot simply leave the development of AI to self-regulation.

A further argument for actively guiding the development of AI lies in the astonishing speed at which that development takes place. Just think of the way in which the introduction of ChatGPT and other large language models shook the world in a matter of months7. This should urge us not to wait and see how history unfolds. By the time we are confronted with the outcomes, the societal mechanisms needed to keep AI development in check may already be broken beyond repair.

At the same time, it should be recognized that the development and use of AI cannot easily be controlled, as the technology is virtual in nature. In essence, the only resources required are data, computing power and data science skills. Aside from complicating the question of control, this also implies that a complete ban of AI from society, or parts of it, is not a realistic solution to address systemic risk. This is without even considering the potential benefits that society may miss out on by choosing not to invest in the development of AI. In other words: AI is here to stay, it has huge positive and negative potential, and as a society we will have to find a way to deal with it.

Systemic risk unpacked: AI-reinforced threats at three levels

The systemic risks of AI do not exist in a vacuum. So far we have been talking about the use of AI as if it poses radically new challenges to society. This is not the complete picture. In fact, being a versatile systems technology, AI often impacts society by strengthening structural trends that are already ongoing due to other social, economic and technological factors. For instance, disinformation has a long history, and the emergence of deepfake technology adds a new and troubling dimension to it. We therefore approach the systemic risk of AI by looking at what we will call AI-reinforced threats8. The connection to broader sociotechnical challenges helps to see AI risk as more than a technical problem that requires a technical fix. It also helps us to be realistic and realize that remedies to the risks of AI will not, on their own, solve social challenges inherently tied to the state of society and human nature. To continue the abovementioned example: even if we are able to properly address the risks of deepfakes, disinformation will never be completely eradicated from society.

Each of these threats highlights a different risk to society, but they are explicitly not mutually exclusive. In fact, in practice they work simultaneously and may strengthen each other.

Figure 2. Overview of AI-reinforced threats (source: authors).

From a conceptual point of view, the AI-reinforced threats to society work at three levels, as highlighted in Figure 2. First and most fundamentally, the use of AI can undermine our shared view of reality. If we are no longer able to discern facts from fiction, this not only makes us vulnerable in our personal lives, it also erodes the agreement within society about the basic facts that underpin the social contract. The effects can be profound and may carry over to the other two levels discussed next. At this most fundamental level of shared reality, we discern two key threats.

Data-driven deception (disinformation and impersonation). Our belief in what is true and what isn’t has been used as a tool and a weapon since the dawn of humanity. In this arena, generative AI9 is a potential game-changer that enables low-cost generation of realistic, synthetic text, audio and video. This may severely impair the human ability to discern facts from fiction and fakes. The motives behind its use can be criminal, political, or part of hybrid warfare, targeting either individuals or society. To date, the impact has largely been focused on individuals, as seen in cases of advanced voice cloning scams or deepfake revenge porn. However, it is easy to see the disruptive potential of political or military uses of generative AI. For example, automatically generated images, audio or video could be used to incite racial conflict and violence or disrupt the military chain of command.

Machine-in-the-middle (digitally mediated society). The use of AI further increases manipulation risks at the digital “interfaces” between people and organizations. If our interactions primarily take place virtually, the operators of our digital communication channels and platforms essentially have the power to determine what we see and don’t see. This also means we are not necessarily looking at the same reality anymore. For example, targeted pricing on online commercial platforms based on profiling undercuts the basic principle of markets having a single price that coordinates actions within that market. AI allows such manipulation to take place on a personalized level, at scale. Other concrete examples include the manipulation of search engine results or the creation of filter bubbles in social media.

At the second level, the use of AI has the potential to shift the balance of power between and within societies. The ability to automate intelligent actions reduces the need for human resources to exert control over others. At the same time, AI introduces new opportunities for surveillance and the use of force, enabling a further concentration of power in the hands of a few. This increases the risk of exploitation. At this level, two distinct threats emerge.

The modern panopticon10 (mass surveillance). Like disinformation, mass surveillance and other privacy threats far precede the invention of AI. However, AI provides “Big Brother” with some powerful new tools. Through technologies such as pattern recognition and, more specifically, face recognition, AI enables mass surveillance to grow in both scale and scope. Ubiquitous profiling and monitoring may result in the ultimate surveillance society where everyone is being watched all the time. China’s social scoring system shows this is not just dystopian fiction. While mass surveillance is often associated with the state versus its citizens, it has equivalents in the context of workforce management and customer management. Both employees and customers of organizations are also at risk of extensive and invasive monitoring. Concrete examples include Amazon’s approach to worker surveillance ([Gru24]) and the Cambridge Analytica microtargeting scandal.

Autonomous armament (AI-powered weapons). AI enables far-reaching autonomy of both digital and physical weapons. One effect is a lower threshold for using violence, since the aggressor risks losing few human lives of its own. Secondly, the opportunities for using AI in a military context may spark an arms race between nations or alliances. Domestic applications in the domain of policing, targeting the government’s own citizens, are also not unthinkable. While the image of an autonomous armed drone might be most closely associated with this threat, the development of virtual weapons such as highly automated hacking applications could also wreak havoc in digitalized societies.

At the third level, the use of AI impacts the fundamental human rights and well-being of large groups within society. The threats at this level may not directly affect the way we perceive the world or the power balance between actors in society, but their impact can be serious nevertheless. We identify five different threats at work here.

Human-off-the-loop (automated decision-making). The trend towards automated decision-making is as old as the invention of the computer. Whereas traditionally the decision rules were defined manually at design time, AI takes automated decision-making a few steps further. By basing decisions on inferences from (large) sets of data, the possibilities for autonomous automated decision-making are greatly enhanced. This is not inherently problematic. When done right, automated decision-making can in many cases be an improvement over a fully manual process. Existing biases can be identified and accounted for. However, meaningful and proper means for human intervention, appeal and redress are not naturally guaranteed ([Eck18]). Combined with the continuous pressure for increased efficiency in most societies, we could end up in a world where there is nothing to be done after the “computer says no”. Concrete examples include the automated suspension of accounts at e.g. Microsoft ([Huls24]), the automated reporting of child pornography by social media platforms, and the social benefits scandal in the Netherlands.
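
To make the idea of meaningful human intervention, appeal and redress slightly more concrete, the sketch below shows one possible pattern for keeping a human in the loop: low-confidence cases are never decided automatically, and an appeal always routes back to a human caseworker. This is a minimal illustration only; the ToyModel, the threshold and the field names are hypothetical assumptions of ours, not a description of any real system or legal requirement.

```python
# Illustrative sketch only: one way to keep a human meaningfully in the loop.
# ToyModel, the confidence threshold and field names are hypothetical.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str       # "approve", "reject" or "refer_to_human"
    confidence: float  # model confidence in [0, 1]
    rationale: str     # explanation stored for appeal and redress


class ToyModel:
    """Stand-in for a trained model; a real system would use an ML model."""
    def predict(self, application: dict):
        score = min(1.0, application.get("income", 0) / 50_000)
        outcome = "approve" if score >= 0.5 else "reject"
        confidence = abs(score - 0.5) * 2  # crude proxy for certainty
        return outcome, confidence, f"income-based score {score:.2f}"


def decide(application: dict, model, confidence_threshold: float = 0.9) -> Decision:
    """Automate only high-confidence cases; refer everything else to a human."""
    outcome, confidence, rationale = model.predict(application)
    if confidence < confidence_threshold:
        # Uncertain cases are never decided automatically.
        return Decision("refer_to_human", confidence, rationale)
    return Decision(outcome, confidence, rationale)


def appeal(decision: Decision, objection: str) -> Decision:
    """Appeals bypass the model entirely: a human caseworker re-decides."""
    return Decision("refer_to_human", decision.confidence,
                    f"{decision.rationale}; citizen objection: {objection}")


if __name__ == "__main__":
    model = ToyModel()
    print(decide({"income": 60_000}, model))  # confident -> automated approval
    print(decide({"income": 30_000}, model))  # uncertain -> human review
```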

Statistical straitjacket (marginalization of all who deviate). Modern applications of AI are based on advanced statistics. As such, they inherently have the propensity to reproduce and even consolidate the status quo, including any undesirable or illegal biases. In addition, they may not perform as well on subgroups with traits or behaviors that deviate from the norm. When insufficiently accounted for, this threat results in the marginalization of statistical outliers. The statistical straitjacket takes many forms. We can see it at work in the maltreatment of citizens with deviating backgrounds by government agencies ([PwC24] and [KPMG20]), but also in the difference in voice and face recognition accuracy between men and women.
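
The subgroup performance gap described above can be made measurable. The sketch below shows, with made-up evaluation data and a hypothetical tolerance, how per-group accuracy could be computed and a disparity flagged as part of a bias assessment; it is one simple way to quantify the effect, not a prescribed method.

```python
# Illustrative sketch only: quantifying a per-subgroup accuracy gap.
# The evaluation records, group labels and tolerance are hypothetical.
from collections import defaultdict


def accuracy_per_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}


def flag_disparity(per_group_accuracy, max_gap=0.05):
    """Flag when best- and worst-served groups differ by more than max_gap."""
    gap = max(per_group_accuracy.values()) - min(per_group_accuracy.values())
    return gap > max_gap, round(gap, 3)


# Made-up face recognition evaluation results:
records = ([("men", "match", "match")] * 95 + [("men", "no_match", "match")] * 5
           + [("women", "match", "match")] * 82 + [("women", "no_match", "match")] * 18)
per_group = accuracy_per_group(records)   # {'men': 0.95, 'women': 0.82}
print(per_group)
print(flag_disparity(per_group))          # (True, 0.13) -> gap needs attention
```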

Digital dependence (dependence on technology). We already live in a society that is almost completely dependent on technology. The occasional incidents with vital digital or physical infrastructures prove this point time and again, with the global CrowdStrike outage as the most recent example ([West24]). The use of AI is likely to push this dependency even further as more complex mental tasks can and will be offloaded to machines. This could lead to deskilling beyond a critical level within the population, perhaps even for mental capabilities such as critical thinking. The 2009 crash of Air France flight AF447, caused by human error after an autopilot malfunction, is an example of the possible impact when technology fails us ([Wiki24]). The same type of continuity risk applies in cases where AI allows us to do things that are literally not humanly possible. A concrete example is provided by the securities market flash crashes resulting from errors in algorithmic trading systems ([Sorn11]).

Intellectual expropriation (threats to value creation from intellectual property). The concept of intellectual property (IP) and the definition of its scope and boundaries have always been a complex discussion. Even more than physical property, intellectual property is a social construct up for debate. The ongoing discussions around patent rights in the pharmaceutical industry provide a clear example. The remarkable capabilities of generative AI have reframed this discussion. By ingesting very large amounts of data and using them to recreate or mimic original works of thought or art, generative AI enables the appropriation of value from other parties’ IP. Another example is search engines providing information to users without generating revenue for the source websites, a significant concern for media companies.

Digital addiction (threats to mental and social well-being). As with many of the previous threats, issues around addiction are not new to mankind. However, in the domain of digital services, regulation to prevent addictive effects – or even awareness of such risks – has largely been lacking.11 The use of AI opens up new avenues for digital providers to get users hooked on their services. AI enables hyper-personalization of digital content, allowing the exploitation of human psychological weaknesses, tailored to the individual. Perhaps the most vivid example of this threat is the excessive smartphone and social media usage by children (and adults).

Mitigating AI-reinforced threats: a toolbox of interventions

While the breadth and scope of the societal threats posed by AI may seem daunting, society has several mitigating actions at its disposal. In the toolbox available we discern four modes of regulation that can be deployed to address the systemic risks of AI12, shown in Figure 3. Each of these modes of regulation acts on a different part of the AI market and lifecycle.

Cultural interventions. This category contains interventions that aim to affect values, norms and knowledge within society. It’s the domain of active citizens, opinion makers and influencers, NGOs and lobby organizations. It’s also the only category that does not rely on government action, although the government can play a facilitating role. The primary intervention here is the public debate itself, which is needed to establish the norms and values around AI use. In a sense, this is an intervention preceding all others: it provides a common ground to set the political agenda for legislation and to determine what kind of behavior is socially acceptable in AI. Secondly, education of the public can play an important role in reducing some of the AI-reinforced threats. An example is the National AI Course in the Netherlands. Knowledge of the strengths and weaknesses of AI helps to bolster resilience against exploitative use cases. Again, efforts to raise awareness and instruct the general public may be organized within society itself or facilitated by the government. Next, voluntary agreements, e.g. via covenants between stakeholders within society, can also help to regulate the use of AI without taking legislative action. A clear example is the growing number of schools that ban the use of smartphones during school hours. Finally, society itself can play a role in monitoring the behavior of governments and organizations alike, via independent investigations and research by parties such as NGOs or labor unions.

Governance interventions. This category includes interventions affecting the market model for AI and the position of actors in these markets. The most direct intervention is completely prohibiting certain business models. For example, to combat excessive profiling of customers by organizations, the government could forbid any business model that revolves around paying with your privacy. Other interventions work by affecting the position and power of actors within society. The least intrusive option here is to strengthen the regulatory bodies charged with market oversight as a countervailing power to large organizations. Similarly, for publicly provided services, democratic control over the executive branches of government could be strengthened by shifting power to institutions that have a supervisory task. A more drastic step would be to set up state-sponsored alternatives to the existing commercial offerings – such as the GPT-NL language model – leaving aside the question of feasibility for now. Finally, as an ultimate measure, the power of large players in the market can be curbed through nationalization or the forced divestiture of parts of these organizations.

Engineering interventions. This category comprises the interventions that act on the development process of AI applications. The goal here is to set rules that prevent flawed or unethical design decisions. First, a compulsory systemic risk assessment forces organizations to explicitly consider and address risk as part of the development and maintenance process. Setting mandatory design principles for AI applications could have a similar effect. Such interventions leave the details of the design itself to the developing party. More prescriptive interventions include setting specific technical standards, or minimum requirements regarding transparency and the assessment of bias13. Such standards can be imposed by the government or developed by market parties as a form of self-regulation.
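
As an illustration of what a compulsory systemic risk assessment might look like in practice, the sketch below records an assessment per AI-reinforced threat as a structured, reviewable artifact. The field names, severity scale and release rule are hypothetical assumptions on our part; nothing here is prescribed by the AI Act, the NIST AI Risk Management Framework or ISO/IEC 42001.

```python
# Illustrative sketch only: a systemic risk assessment recorded as a
# structured artifact, using this article's threat names. The fields,
# severity scale and release rule are hypothetical assumptions.
from dataclasses import dataclass, field


@dataclass
class ThreatAssessment:
    threat: str                     # e.g. "data-driven deception"
    applicable: bool
    severity: str = "n/a"           # e.g. "low", "medium", "high"
    mitigations: list = field(default_factory=list)


@dataclass
class SystemicRiskAssessment:
    system_name: str
    assessor: str
    assessments: list = field(default_factory=list)

    def unmitigated(self):
        """High-severity applicable threats without any recorded mitigation."""
        return [a.threat for a in self.assessments
                if a.applicable and a.severity == "high" and not a.mitigations]


# Example usage with hypothetical entries for a content recommender system:
sra = SystemicRiskAssessment("content recommender", "risk & compliance team", [
    ThreatAssessment("digital addiction", True, "high",
                     ["usage caps for minors", "no engagement-maximizing dark patterns"]),
    ThreatAssessment("machine-in-the-middle", True, "high"),
    ThreatAssessment("autonomous armament", False),
])
print(sra.unmitigated())  # ['machine-in-the-middle'] -> address before release
```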

Outcome interventions. This category contains interventions aimed at regulating which use cases for AI may be developed and at managing the consequences of the use of such applications. The most fundamental intervention within this category is updating the legal framework itself to account for the new societal challenges resulting from the rise of AI. Given that an adequate framework is in place, the government may choose to put limitations on specific use cases in certain domains. The AI practices prohibited under the EU AI Act, such as social scoring systems and certain uses of facial recognition, provide a clear example of this approach. Such limitations can be focused on specific vital areas in society, such as critical infrastructures and electoral processes. From a cross-border perspective, international treaties can be negotiated to control specific AI developments. However, the question of treaty enforcement will remain an issue.

Figure 3. Overview of interventions to address AI-reinforced threats (source: authors).

It goes without saying that substantial consequences for non-compliance are a prerequisite for any of the interventions that work through legislation to be effective. This includes both fines imposed by regulatory bodies and liability for damages caused to third parties through reckless AI usage. Without such penalties, bad actors may cynically and rationally decide that non-compliance is the most profitable course of action. This is what happened, for example, in the domain of data privacy before the EU General Data Protection Regulation (GDPR) came into effect.

Choosing interventions: precaution versus non-intervention

Given the toolbox at our disposal, the next question is how to combine the interventions into an effective intervention strategy. This strategy may differ per threat, given differences in the nature and severity of each threat. We need to recognize that both too much and too little intervention can be harmful to society. The threats at all three levels – shared reality, the balance of power, and fundamental human rights – must be addressed, while also considering the risk of stifling innovation and lagging behind other nations in the development of a critical systems technology. At the very least, we need some criterion to determine in which cases the principle of precaution or of non-intervention should prevail. While the principle of non-intervention fits with the relatively free market economy in Europe, we have already argued that the risks to society might be too great to leave developments entirely to the market, and in some domains the EU has adopted the precautionary principle.

We address the debate between precaution and non-intervention by essentially flipping the question on its head. To do this, we must first acknowledge that uncertainty and (moral) ambiguity are at the core of the debate around the use of AI. No one can reliably predict how AI technology will advance, no one can make an exhaustive list of the possible use cases, and no one knows how these use cases will play out in practice. More importantly, in the end many discussions around the use of AI are not clear-cut problems with a single right answer, but political and moral dilemmas that need to be agreed upon within society and translated into action, for example in the form of legislation. As the development of AI is taking shape, we need a continuous public debate and ongoing refinement of our interventions. In our society, the driving force behind the moral, political, and legislative processes is the liberal democratic constitutional state, which ensures safeguards for inclusive, pluralistic public debate and effective lawmaking. The answer to the question of intervention is therefore as follows: society should intervene swiftly and strongly in all AI-related developments that directly threaten the functioning of the democratic state itself.14

This approach does not provide all answers. Much can be debated about the definition of democracy and the point at which it ceases to function effectively. Nevertheless, our approach provides a clear direction. The threats at the level of shared reality and the balance of power clearly pose the most direct risk, given their fundamental and pervasive nature. They have the potential to end democracy internally via the election process or externally through war. For these threats the application of the precautionary principle is justified, including the use of more prescriptive interventions such as the prohibition of harmful use cases. Generally, for the other threats, we can afford to be somewhat more lenient, allowing for a degree of trial and error through market mechanisms.

Where do we currently stand in addressing AI-reinforced threats?

While we noted a lack of clarity and structure in the debate around the systemic risks of AI, this does not mean that mitigating actions are completely lacking. In the EU, the AI Act, Digital Services Act (DSA) and Digital Markets Act (DMA) provide the clearest examples. These laws have been drawn up with the clear goal of contributing to the responsible use of AI in the interest of society. In Figure 4, we provide an overview per AI-reinforced threat that maps the criticality of the threat to the strength of the interventions we currently see taking place. We emphasize that this is a high-level assessment only, with the sole purpose of identifying which threats could be prioritized for further action. The appendix contains a more detailed overview to support our analysis.

Figure 4. High-level assessment of the current interventions in place per AI-reinforced threat (source: authors).

Based on our observations, we can conclude that for most of the AI-reinforced threats we still face concerns regarding the effectiveness of the current interventions in place. For a number of threats, we even lack the consensus and norms to take proper action. Combining this with our previous discussion on the desirability of using the precautionary principle, we can see that the most concerning threats are those of disinformation and impersonation (data-driven deception) and autonomous weapons (autonomous armament). In these cases, fundamental risk combines with a lack of effective interventions. Mass surveillance (the modern panopticon) comes in third, as for this threat more safeguards are in place, especially regarding the role of the government. From a societal perspective, these three AI-reinforced threats can be seen as the most urgent to address.

What to do next? Action points for government, organizations and citizens

Our analysis has provided a framework for thinking about the systemic risks of AI, the toolbox of interventions available to society, and a view on the priorities on the road ahead. The pivotal remaining question is: who should act? As primary stakeholders the government, organizations and citizens all have an important role to play.

Government

The government occupies a precarious position regarding the systemic risks of AI. It not only holds the legal authority to regulate AI usage but is also one of the most powerful entities capable of causing significant harm. Of course, the government is not a single entity, but a collection of institutions. Considering our topic, we distinguish between the parts of government involved in legislation and regulation and the executive branch of government. In the area of legislation and regulation, we suggest the following action points:15

  • Continually reassess the bans or moratoria already in place (e.g. via the AIA) in light of new AI developments. Insofar as a ban is not deemed feasible, consider developing specific standards and requirements for critical domains that are exempted from current legislation, such as national security.
  • Apply the precautionary principle to all developments related to AI-powered disinformation and impersonation and autonomous weapons. Ensure democratic institutions and processes are as robust to destructive forces as possible.
  • At the same time, specifically for autonomous weapons, ensure that developments beyond our borders are closely monitored and acted upon. Treat AI capabilities as a key asset for strategic autonomy.
  • Update or clarify any legislation affected or outdated by the advent of AI.
  • Curtail the power of market players when they become too dominant in one of the key (social) infrastructures and prove not to be amenable to regulation.
  • Require organizations to perform a thorough risk assessment throughout the AI lifecycle and include clear guidance on what is expected from such assessments.
  • Set and enforce standards for critical uses of AI. This can take the form of mandatory design principles or technical requirements.
  • Set up or support AI literacy programs to adequately inform citizens about AI.
  • Set up or strengthen the regulatory bodies to monitor compliance with AI-related regulation, and ensure fines and penalties are sufficiently high to have a deterrent effect.

The executive branch of government is where AI is being used in the services toward citizens. We suggest the following action points:

  • Ensure the effectiveness of AI use is proven before deployment, or at least require a sunset clause.
  • Be prepared to provide transparency over the use of AI.

Organizations

Organizations are fundamentally incentivized to act in the interest of their most important stakeholders, such as shareholders or supervisory bodies. However, this does not mean that they have a purely passive role and can only be expected to act under the pressure of laws and regulations. Proactively addressing the systemic risks of AI can make sense from a strategic perspective: acting in the interest of society helps to pre-empt stringent and costly regulation, and it helps organizations remain attractive to employees and customers. We suggest the following action points, both for commercial and non-commercial organizations:

  • Invest in co-developing industry standards and practices that both improve the overall quality of AI applications and pre-empt further legislative actions.
  • Obtain insight into the portfolio of AI applications in development and operation as a basis for ascertaining compliance with AI-related regulation.
  • Implement a lean but effective risk assessment process as part of AI application development, to ensure alignment with organizational goals and prevent ethical or legal blowback later on. This process should cover technical, legal and ethical risks and dilemmas. With regard to systemic risk, the trends and threats described in this article may be considered as part of the assessment.
  • Establish fit-for-purpose AI governance practices, aligned to the risk profile of the organization’s AI portfolio. This includes topics such as data management, AI literacy, and monitoring of application performance.

Citizens

Citizens (and consumers) are often on the receiving end of AI mishaps and the harsh truth is that individually they may not be able to stand up against the state or a large organization. However, this does not justify a passive approach. In the end, it is the collective of citizens that defines public values and norms and (indirectly) guides the direction of government. It is a matter of getting involved and organized. Our suggested actions:

  • Get informed and contribute to informing others about AI, its applications, risks and possible mitigations at the level of the individual.
  • Get organized – be it via NGOs, unions, political parties or otherwise – to influence the public debate, government and organizations.
  • Take active part in the social, economic and ethical discussions that are needed to shape the values and norms that will determine what our AI-infused society will look like now and in the future.

Conclusion

In our analysis, we observed that AI has the potential to disrupt and destabilize our society in many ways. The systemic risk of AI is a multifaceted challenge that can best be understood in terms of the broader societal threats that are aggravated by the introduction of AI. These threats relate to basic human rights, the balance of power within society and even our shared concept of reality. However, we are not powerless against these threats. We have discussed the toolbox of interventions that can be deployed to counter the systemic risk of AI, and we see that – in the European context – action is already being taken via legislation aimed at AI and at digital services and markets. Not all threats have been mitigated yet, and we can reasonably expect many more AI innovations to introduce new societal challenges. Being able to respond to such challenges effectively and in line with human rights is of paramount importance. We should therefore be extra vigilant regarding those AI-reinforced threats that directly undermine our liberal democratic institutions. Shaping the future development of AI is a collective responsibility, and everyone has a role to play in this important endeavor.

Appendix – Observations on the current interventions per AI-reinforced threat

[Table: detailed observations on the current interventions per AI-reinforced threat]

Notes

  1. “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment” ([OECD24]).
  2. The open letter in which a number of prominent AI experts called for a complete pause in the development of powerful AI systems ([Futu23]) is a well-known example of such generic reasoning combined with rather simplistic solutions.
  3. Throughout this article, we will not distinguish explicitly between people in their capacity as citizens within society and as consumers of goods and services.
  4. The DSA requires very large online platforms and search engines to perform a “systemic risk assessment” on their services.
  5. We note that this definition is only valid in the context of liberal democratic states. For a society under authoritarian rule, systemic risks will also be different.
  6. A versatile and unpredictable technology with a major impact on society ([WRR21]).
  7. ChatGPT reached 100 million monthly active users within 2 months ([Hu23]).
  8. The threats presented in this article are by no means a complete overview of all systemic risks to society; they are just an overview of those risks that are introduced or significantly amplified by the advent of AI.
  9. Simply put, generative AI models produce “content” such as text or images instead of predictions, although from a technical point of view this content is also a complex form of prediction based on the model’s input.
  10. In its original conception by Jeremy Bentham the panopticon was an architectural design principle that induced self-regulation through the possibility of being supervised at any time. In the AI-powered version everyone is being supervised all the time.
  11. One exception is the recent EU Digital Services Act which requires very large online platforms to perform a systemic risk assessment specifically taking into account “negative consequences to […] physical and mental well-being”.
  12. Based loosely on Lessig’s “pathetic dot” theory ([Less06]), adapted here with AI usage as the object of regulation. Lessig discerned interventions via the market (here: governance), legislation (here: outcomes), architecture or “code” (here: engineering), and culture (here: culture).
  13. Examples of standardization in the domain of AI include the NIST AI Risk Management Framework and ISO/IEC 42001, although neither standard prescribes specific technical requirements.
  14. These arguments echo Karl Popper’s reasoning on protecting the open society ([Popp94]): the question is not who should rule, but how to ensure that unfit rulers can be peacefully deposed.
  15. Based on [Sloo24].

References

[Eck18] Van Eck, M. (2018). Geautomatiseerde ketenbesluiten & rechtsbescherming. Een onderzoek naar de praktijk van geautomatiseerde ketenbesluiten over een financieel belang in relatie tot rechtsbescherming. Retrieved from: https://pure.uvt.nl/ws/portalfiles/portal/20399771/Van_Eck_Geautomatiseerde_ketenbesluiten.pdf

[Eck24] Van Eck, M. (2024, February 16). Profilering en geautomatiseerde besluiten: een te groot risico? (in Dutch). Hooghiemstra & Partners. Retrieved from: https://hooghiemstra-en-partners.nl/profilering-en-geautomatiseerde-besluiten-een-te-groot-risico/

[EDPB24] European Data Protection Board (2024, April 17). Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms. Retrieved from: https://www.edpb.europa.eu/system/files/2024-04/edpb_opinion_202408_consentorpay_en.pdf

[Futu23] Future of Life Institute. (2023, March 22). Pause Giant AI Experiments: An Open Letter. Retrieved from: https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[Gru24] Gruet, S. (2024, January 23). Amazon fined for ‘excessive’ surveillance of workers. BBC. Retrieved from: https://www.bbc.com/news/business-68067022

[Hu23] Hu, K. (2023, February 2). ChatGPT sets record for fastest-growing user base – analyst note. Reuters. Retrieved from: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

[Huls24] Hulsen, S. (2024, April 2). Microsoft blijft accounts blokkeren zonder uitleg, ondanks nieuwe regels (in Dutch). RTL Nieuws. Retrieved from: https://www.rtl.nl/nieuws/artikel/5441925/microsoft-blokkeert-zonder-uitleg-account-experts-overtreding-dsa

[KPMG20] KPMG Advisory (2020, July 10). Rapportage verwerking van risicosignalen voor toezicht Belastingdienst (in Dutch). Retrieved from: https://www.rijksoverheid.nl/documenten/kamerstukken/2020/07/10/kpmg-rapport-fsv-onderzoek-belastingdienst

[Less06] Lessig, L. (2006). Code version 2.0. Basic Books, New York. Retrieved from: https://commons.wikimedia.org/wiki/File:Code_v2.pdf

[OECD24] OECD. (2024, March). Explanatory Memorandum on the Updated OECD Definition of an AI System. OECD Artificial Intelligence Papers, No. 8, OECD Publishing, Paris. Retrieved from: https://doi.org/10.1787/623da898-en

[Pete24] Peters, J. (2024, August 22). How the EU’s DMA is changing Big Tech: all of the news and updates. The Verge. Retrieved from: https://www.theverge.com/24040543/eu-dma-digital-markets-act-big-tech-antitrust

[Popp94] Popper, K., & Gombrich, E. H. (1994). The Open Society and Its Enemies: New One-Volume Edition (NED-New edition). Princeton University Press. Retrieved from: https://doi.org/10.2307/j.ctt24hqxs

[PwC24] PwC Advisory N.V. (2024, January). Onderzoek misbruik uitwonendenbeurs (in Dutch). Retrieved from: https://open.overheid.nl/documenten/dpc-97a155051e66b292ef3cc5799cb4aef61dcbf46b/pdf

[Sawe24] Sawers, P. (2024, June 14). Meta pauses plans to train AI using European users’ data, bowing to regulatory pressure. TechCrunch. Retrieved from: https://techcrunch.com/2024/06/14/meta-pauses-plans-to-train-ai-using-european-users-data-bowing-to-regulatory-pressure/

[Sloo24] Van der Sloot, B. (2024). Regulating the Synthetic Society. Hart Publishing, Oxford. Retrieved from: https://www.bloomsburycollections.com/monograph?docid=b-9781509974979

[Sorn11] Sornette, D., & Von der Becke, S. (2011, August). The Future of Computer Trading in Financial Markets – Foresight Driver Review – DR 7. Government Office for Science. Retrieved from: https://assets.publishing.service.gov.uk/media/5a7c284240f0b61a825d6d18/11-1226-dr7-crashes-and-high-frequency-trading.pdf

[West24] Weston, D. (2024, July 20). Helping our customers through the CrowdStrike outage. Official Microsoft Blog. Retrieved from: https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/

[Wiki24] Wikipedia. Air France Flight 447. Retrieved August 30, 2024, from: https://en.wikipedia.org/wiki/Air_France_Flight_447

[WRR21] Wetenschappelijke Raad voor het Regeringsbeleid. (2021). Opgave AI. De nieuwe systeemtechnologie. WRR-Rapport 105, Den Haag. Retrieved from: https://www.wrr.nl/binaries/wrr/documenten/rapporten/2021/11/11/opgave-ai-de-nieuwe-systeemtechnologie/WRRRapport_+Opgave+AI_De+nieuwe+systeemtechnologie_NR105WRR.pdf
