Democracy and Our Digital Future in the Age of AI

Kismet, a robot designed to explore the concept of sociable robots. Photograph by Polimerek, taken at the MIT Museum during Wikimania 2006, with the permission of the museum's authorities.

By Amelia C. Arsenault


The last several years have seen significant advances in artificial intelligence (AI) and related technologies, spurring staggering levels of global investment and a wave of discourse espousing the promises of the ‘AI era’. As states have rushed to develop or acquire these technologies, scholars have expressed concern that AI may undermine democracy by expanding surveillance and by enabling hyper-realistic disinformation that sows discord and informational chaos.

While authoritarian governments have certainly used AI to repress, censor, and mislead, democracies are not immune to using this technology in ways that undermine key democratic rights and principles. Indeed, to advance a more democratic digital future, democracies must develop AI tools with the deliberate goal of preserving accountability, maintaining transparency, and protecting civil liberties.

AI — Supporting Authoritarian Aims?

Conventional wisdom maintains that authoritarian regimes are the most likely to use AI to track citizens’ movements, behavior, and communications, with scholars such as Steven Feldstein arguing that autocrats are “more prone to abuse AI surveillance than governments in liberal democracies”. Indeed, China has avidly embraced AI-based surveillance tools in recent years. Perhaps most notably, China uses facial recognition and predictive analytics to carry out the mass surveillance, censorship, and repression of the Uyghur community. Because China is also a prominent exporter of surveillance technologies, its draconian domestic use of AI has raised concerns about the rise of ‘digital authoritarianism’, whereby other authoritarian regimes emulate China’s model to quash dissent.

AI could also be used to attack key democratic norms and values more directly. For instance, improvements in natural language models have sparked discussion about the potential for AI-fabricated disinformation that mimics real human speech. ‘Deepfakes’, which use neural networks and deep learning to produce realistic video, image, and audio forgeries, have also become increasingly convincing thanks to improvements in computing speed and data availability. While malicious actors have yet to deploy these technologies successfully on a large scale, AI could allow for the automated mass dissemination of falsehoods and misleading information. The challenge for democracies is particularly acute, as democratic public discourse depends on access to reliable information and trust in traditional media.

Even when information is not necessarily false, the realistic nature of AI-generated text may threaten the integrity of democratic discourse. Recently, scholars have suggested that AI tools like ChatGPT could be used to engage in hyper-targeted lobbying of political officials or to dramatically misrepresent public opinion.

Liberal democracies are certainly no strangers to the challenges posed by disinformation, yet they have struggled to develop comprehensive solutions. While some have called for technological fixes to identify AI-fabricated media, such tools inevitably lag behind the newest developments, resulting in a perpetual game of ‘catch-up’.

Democracy’s Domestic Challenges

That autocracies are using AI to further authoritarian goals does not mean that democracies are somehow immune to misuse. Many states with long democratic histories, including the United States and the United Kingdom, have adopted predictive policing programs, which use AI to detect patterns in troves of historical data and then predict which geographical areas, groups, and individuals are most likely to be associated with criminality.

As citizens are largely unable to opt out of predictive policing programs, these initiatives undermine the democratic right to privacy for society as a whole. Yet the most significant harm posed by these projects may stem not from the ubiquitous surveillance of entire populations, but from their tendency to exacerbate existing inequalities within specific communities. Because these models are trained on historical crime data that reflects a long record of disproportionate, discriminatory policing along racial and class lines, they tend to reinforce the over-policing of Black, Brown, and poor communities.
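
To see why this feedback loop is so hard to escape, consider a minimal toy simulation (purely illustrative, with made-up numbers; it is not drawn from any real policing system or from the programs discussed here). Two neighborhoods have identical true crime rates, but one starts with twice as many recorded arrests because it was historically patrolled more heavily. If patrols are then allocated in proportion to past arrest records, the initial bias never washes out:

```python
# Toy model of the predictive policing feedback loop described above.
# All numbers are hypothetical; this is an illustration, not real data.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}  # identical underlying rates

# Historical bias: neighborhood A was patrolled twice as heavily, so
# the record holds twice as many arrests despite equal crime rates.
arrests = {"A": 200, "B": 100}

for year in range(10):
    total = sum(arrests.values())
    # 'Predictive' allocation: patrol share follows past arrest counts.
    patrol_share = {n: arrests[n] / total for n in arrests}
    for n in arrests:
        # Recorded crime scales with patrol presence, not true crime:
        # more officers in a neighborhood means more arrests logged there.
        arrests[n] += round(1000 * patrol_share[n] * TRUE_CRIME_RATE[n])

share_a = arrests["A"] / sum(arrests.values())
print(f"Patrol share sent to neighborhood A after 10 years: {share_a:.0%}")
# Prints 67%: the system never 'learns' that the true rates are equal,
# because the data it sees is itself a product of where it sent patrols.
```

Even in this best-case setting, where the two neighborhoods are genuinely identical, the system simply reproduces the bias baked into its training data; real-world deployments layer many further distortions on top.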

Predictive policing programs often also involve facial recognition technology to aid in the rapid identification of potential suspects. The implications of facial recognition for racial inequality are clear: scholars such as Joy Buolamwini and Timnit Gebru have demonstrated that these technologies are substantially less accurate for individuals with darker skin tones, a failure that has led to wrongful arrests.

Further, these technologies are being incorporated into a host of public and private spaces. Recently, a lawyer involved in an ongoing case against Madison Square Garden was denied entry to the arena, despite holding a valid ticket, after facial recognition software matched her to a list of individuals banned from the venue. Activists in the US are also raising the alarm about the use of AI surveillance tools to identify and monitor protestors. Should democracies continue to deploy these AI applications without comprehensive regulation, such tools will weaken freedom of speech and assembly as well as the right to privacy.

Finally, citizens are often unaware of the extent to which their data is collected and analyzed. Documents uncovered by the Legal Aid Society and the Surveillance Technology Oversight Project in 2021 revealed that the New York Police Department had spent millions of dollars on a variety of AI technologies, including facial recognition and other biometrics, without public awareness. Similarly, the New Orleans Police Department engaged in a secret six-year partnership with the data analytics company Palantir, using its software to predict crime and monitor social media communications while evading public scrutiny. Failing to disclose information about the collection and use of sensitive data effectively eliminates opportunities for democratic accountability.

Intentional secrecy aside, the increasing complexity of algorithms, many of which are proprietary, frustrates efforts to evaluate algorithmic decision-making. As more sectors are subject to algorithmic modeling, including housing, education, and the judicial system, this opacity undermines democratic rights to due process and equal protection.

Conclusion: Rejecting Techno-Determinism

As autocrats gravitate towards repressive uses of AI and democracies implement this technology in ways that run counter to their own values, it may seem that AI is destined to erode democratic rights and norms. But this techno-determinism ignores the role that human decision-making plays in shaping our digital futures. In fact, AI may also offer opportunities to strengthen and uphold democracy through personalized education, enhanced journalistic accuracy, and the potential codification of new rights.

Protecting democracy requires designing AI tools with the express goal of upholding civil liberties, equal access, and transparency. This means identifying which issues should never be subjected to algorithmic modeling, crafting clear safeguards and regulations that enhance algorithmic transparency and accessible ‘explainability’, and addressing the asymmetric power dynamics that prioritize profit over the needs of the communities most likely to be affected by AI. At the international level, democracies must work alongside allies that share a democratic vision of global digital governance.

Democracy is not doomed to falter and fail in the ‘age of AI’. But realizing the best of AI’s opportunities and avoiding its most serious risks requires leadership, action, and the political will to put democratic values at the forefront of technological development.

About the Author

Amelia C. Arsenault is a PhD student at Cornell University’s Department of Government. Her research considers the effects of AI on international politics, with a particular interest in the global proliferation of contemporary surveillance and ‘smart city’ technologies. Additional information and contact details can be found at: https://government.cornell.edu/amelia-arsenault.
