Across the world, AI is used as a tool for political manipulation and totalitarian repression. Stories about AI are often stories of polarization, discrimination, surveillance, and oppression. Is democracy in danger? And can we do anything about it?
In this compelling and balanced book, Mark Coeckelbergh reveals the key risks posed by AI for democracy. He argues that AI, as currently used and developed, undermines fundamental principles on which liberal democracies are founded, such as freedom and equality. How can we make democracy more resilient in the face of AI? And, more positively, what can AI do for democracy? Coeckelbergh advocates not only for more democratic technologies, but also for new political institutions and a renewal of education to ensure that AI promotes, rather than hinders, the common good for the twenty-first century.
Why AI Undermines Democracy and What to Do About It is illuminating reading for anyone who is concerned about the fate of democracy.
A specter is haunting today's political world, and it's an ugly and dangerous one: authoritarianism. Everywhere democracies are under threat, including in the West. Whereas the late 1990s still saw a wave of democratization, today there are many anti-democratic tendencies, which sometimes result in a slide towards authoritarianism. A 2021 report shows that dictatorships are on the rise worldwide. Polarization is worsening, threats to freedom of expression have increased dramatically, and 70 percent of the world population now lives in an autocracy, compared to 49 percent in 2011 (Democracy Report 2022). Western democracies are not immune to this trend. Some speak of a new world order, with powerful players trying to destroy the international order set up after World War II and the United States falling prey to polarization and "decay" (Erlanger 2022). There are also autocratization tendencies in Europe in the form of a lurch to the (far) right, for example in Hungary, Poland, and Serbia. In September 2022, a right-wing nationalist Swedish party gained more than 20 percent of the vote. The United Kingdom became politically and financially unstable after populists and later ultra-liberal conservatives rose to power. In the same year, far-right populists won the elections in Italy, and their leader Giorgia Meloni became prime minister. As is well known from history, anti-democratic politicians can come to power through democratic elections and subsequently undermine or even abolish democracy. In some contexts, this is an imminent danger today.
Digital technologies such as artificial intelligence (AI) offer many benefits and opportunities to society. But they also seem to play a role in the erosion of democracies and in the rise and maintenance of authoritarian and totalitarian regimes. Social media are blamed for helping to destabilize democracies by destroying truth and increasing polarization. AI fares no better. Today, stories about AI are often stories of manipulation, polarization, discrimination, surveillance, power, and repression.
Even if authoritarianism might not be immediately on the horizon, the risks for democracy seem very real. Governments and international organizations are concerned. The US Biden administration recently warned of the dangers AI poses to democracy, while complaining that there are limits to what the White House can do to regulate the technology.1 A European Commission website headlined "democracy in peril" in the light of a report on risks posed by current digital technologies: false information, manipulation, surveillance, and the increased power of the commercial entities on which we depend in this area and which set the agenda for our digital future.2 And earlier, the United Nations High Commissioner for Human Rights warned of the impact of AI on human rights, the rule of law, and democracy.3 In other words, AI has come to be seen as a problem, and it is increasingly recognized that it is a problem for democracy.
Consider the Cambridge Analytica case (Cadwalladr and Graham-Harrison 2018), which involved voter manipulation based on the analysis of big data. The Facebook data of millions of people were collected without their consent and used for targeted political advertising in order to support political campaigns in the United Kingdom and the United States. And the use of AI in combination with social media has been said to drive political polarization and to propagate divisions in society, which can then be exploited by groups striving for power (Smith 2019) - groups that are not necessarily democratic. The rise of the far-right QAnon movement in the United States, which led to a violent insurrection at the Capitol, is a case in point. It seems that we risk being locked in our own bubbles and echo chambers, besieged by algorithms that try to influence us and drive us apart.
The program ChatGPT, a large language model that has recently become both very popular and ethically controversial, has also been linked to undermining democracy. Some worry that AI could get out of control and take over political decision making.4 This may seem rather far-fetched and at least a matter for the distant future. But there is also the near-future concern that AI could nevertheless be used to influence political decision making. For a start, it could be a powerful lobbying instrument. For example, it could automatically compose many op-eds and letters to the editor, submit numerous comments on social media posts, and help to target politicians and other relevant actors - all at great speed and worldwide. This could significantly influence policy making (Sanders and Schneier 2023). It could also be used to spread propaganda - thus influencing elections.
Yet AI is not only being used to gain power; it also increasingly plays a role in existing governance institutions. Here, too, AI has been shown in a bad light. Consider the automated welfare surveillance system used by the Dutch government, which a court halted on the grounds that it violated human rights and breached people's privacy: did the use of this system amount to "spying on the poor" (Henley and Booth 2020)? In Austria, there was controversy about the algorithmic profiling of job seekers by the public employment service AMS, which was accused of unjustly discriminating against some categories of job seekers (Allhutter et al. 2020). The use of AI in court decision making has also been criticized for being biased. In the United States, the COMPAS algorithm, used by probation and parole officers to judge the risk of recidivism, has been said to discriminate against black defendants: a report claimed that "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism" (Larson et al. 2016).
In the meantime, AI has also become popular with autocratic governments. Western media have reported that China has been using AI for surveillance and repression. According to the New York Times, its citizens are under constant surveillance: phones are tracked, purchases are monitored, and chats are censored. Predictive policing is used to forecast crime, but also to crack down on ethnic minorities and migrant workers (Mozur, Xiao, and Liu 2022). Human Rights Watch claims that the Chinese government collects large amounts of personal information and uses algorithms to flag people who are seen as potentially threatening. It says that this has led to restrictions on freedom of expression and freedom of movement. Some people are sent to political education camps (China's Algorithms of Repression 2019).
But the use of AI surveillance technology is not restricted to China, or even to authoritarian regimes. Developed and supplied by China, but also by countries such as the United States, France, Germany, Israel, and Japan, it is proliferating around the world, even in democracies. According to Carnegie's AI Global Surveillance (AIGS) index, more than half of the world's advanced democracies use AI surveillance systems (Feldstein 2019a). Even when such technologies are used in political systems that call themselves democratic, there is always the risk that they will be used for repressive purposes. In 2020, Brazil's far-right president Bolsonaro was accused of "techno-authoritarianism" for creating an extensive data collection and surveillance infrastructure, in particular the Citizen's Basic Register, which brings together citizens' data from health records to biometric information (Kemeny 2020). And even in Europe and the United States, the Covid-19 pandemic has been used to mobilize AI and other digital technologies in the service of stricter law enforcement and control of the population. For example, in 2020 Minnesota used contact tracing to track protestors engaged in demonstrations in the wake of the police killing of George Floyd (Meek 2020), and AI has been widely used to assist far-reaching population mobility control measures.
As we seem to be on course for the emergence of new forms of authoritarianism and totalitarianism aided by digital technologies, it is high time we thought about AI and democracy and asked again the question Hannah Arendt posed after World War II in The Origins of Totalitarianism (2017 [1951]), but this time in the context of digital technologies. Notwithstanding differences in specific historical contexts, is democracy in danger, and are conditions for the rise of totalitarianism emerging again today? In what way do digital technologies contribute to these conditions? Does AI undermine democracy and lead to new forms of authoritarianism and totalitarianism - let's call them "digital authoritarianism" and "digital totalitarianism"5 - and, if so, how does that work and what can we do about it? More generally, and beyond the question regarding totalitarianism, what is the impact of AI on democracy? Is AI good for democracy, and if not, what can be done about it? How can we make sure that AI supports democracy?
This book argues that AI, as it is currently developed and used, undermines the fundamental principles and knowledge basis on which our democracies are built and does not contribute to the common good. After putting the question in a historical context and analyzing it, guided by political-philosophical theories of democracy, it offers a guide to some key risks that AI poses for democracy. It shows that AI is not politically neutral but currently shapes our political systems in ways that threaten democracy...