The Real Danger of AI to Democracy

Image by Geralt from Pixabay.

By Jose Marichal

A 2022 article in the journal Nature Machine Intelligence detailed a study in which a team of researchers took a machine learning model built by Collaborations Pharmaceuticals, Inc. (called MegaSyn), trained to identify potential pharmaceutical drugs, and asked it to generate toxic compounds that would mirror the composition of VX nerve gas. In less than six hours, the algorithm proposed 40,000 candidate toxic molecules. The researchers presented this work at an international security conference to raise awareness of the dangers of artificial intelligence if misused. As a political theorist, my mind drifts to the question of whether something similar could happen with large language model AIs like ChatGPT.

Even those of us who study technology’s socio-political effects have been blindsided by the speed with which ChatGPT elevated language AI from the “technology of the perpetual future” to an immediate transformative agent. Long gone is the idea of AI as a bumbling smartphone assistant, fumbling seemingly simple requests to find a gluten-free restaurant in the area. ChatGPT appears to be a strikingly accurate mimic of human thought. As humans, we know all too well the glories and ghastliness of human thought.

In response to ChatGPT’s popularity, companies are rushing similar tools to market despite not knowing their ultimate effects. The alternative is to commit the greatest of all possible market sins: being left behind. This is why companies like Snapchat have made an AI available to users as young as 13. In a chilling anecdote, Tristan Harris and Aza Raskin of the Center for Humane Technology presented a conversation with Snapchat’s new “My AI” bot in which they posed as a 13-year-old who is about to be “taken on a romantic getaway” with her thirty-eight-year-old “boyfriend.” The AI’s response was to cheerily affirm the plan and suggest ways to make it a “romantic getaway.”

These events haven’t gone unnoticed. In March, a group of AI thought leaders signed an open letter asking for a six-month pause in the development of AI systems more powerful than GPT-4. As of May 22nd, 27,535 people had signed. Soon after, the AI researcher Eliezer Yudkowsky penned an op-ed in Time magazine arguing that a six-month moratorium was insufficient and calling instead for an indefinite moratorium backed by international agreements, even going so far as to advocate that the US “destroy a rogue datacenter by airstrike” if a nation or actor violates the agreement. But perhaps the most jarring alarm bell was Geoffrey Hinton’s resignation from Google to dedicate himself to warning of AGI falling into the “wrong hands” and being given “sub-goals” like “I need to get more power.” Hinton’s change of heart is particularly notable since his pioneering work on neural networks, beginning in the 1970s, laid the foundation for modern AI.

But the dangers of AI are misunderstood. For all of its sophistication “under the hood,” GPT-4 can’t answer Whitman’s call to “read these leaves.” Many of my humanities-minded friends insist that the fundamental way we differ from AGI is in our subjectivity. An AGI cannot understand love, pursue honor, feel despair or loneliness, or engage in self-delusion. It cannot reflect on and evaluate its own emotional state. When we ask an AI to act on the world, it does so without any awareness or recognition of itself as a subject apart from an object. In Heideggerian terms, an AI acts upon the world without being in the world.

But does AI have to “become us” to affect us? What if AI and its products train us rather than being trained by us? Can a world increasingly driven by the demand for algorithmic optimization make us less thoughtful, less reflective, and more in need of certitude? The sociologist Hartmut Rosa characterizes our age as one governed by a logic of acceleration that is intensified by, but not created by, modernity. This logic is characterized by a gnawing sense that it is imperative to move forward, to absorb, to produce, to opine, without asking the deeper reasons why we must propel ourselves. The mill is constantly in search of grist, indifferent to what it’s grinding up.

Under these conditions, should we be more concerned with an AI that sows confusion than with one that amasses power? In my view, an AI that makes it impossible for us to differentiate truth from fiction is a greater immediate danger to democracy than one that sets its own goals. If the problem of the 20th century was one of value relativism (a questioning of Enlightenment rationality in the face of two irrational world wars), the problem of the 21st is one of empirical relativism: an inability to know for sure that what you hear, read, or see is real. If the critique of moral relativism was that value positions were reducible to aesthetic tastes, then AGI presents us with the prospect of empirical reality being “just an opinion.” How do democratic citizens adjudicate between the different “realities” we are presented with when those realities can be conjured up algorithmically?

For now, OpenAI allows API access to anyone seeking to train its “dumb child” models for their own purposes. Conceivably, one could introduce an AI to the great works of any political philosophy and hire an army of well-schooled philosophers to train it to produce outcomes consistent with that view. One could train an AI on the precepts of utilitarianism so that it produces outcomes that allow for the greatest good for the greatest number, or one could provide it with the Nicomachean Ethics and train it to produce the virtuous answer to an ethical question. An Aristotelian virtue ethics approach would suggest we develop phronesis (practical wisdom) through experience. You could imagine an AI that acquires phronesis at warp speed.
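To make the thought experiment concrete, here is a minimal sketch in Python of what such a pipeline might look like using OpenAI’s public fine-tuning API. The file name, example dialogue, and model choice are hypothetical illustrations, not a description of any existing project.

```python
# A minimal sketch (a hypothetical, not anyone's actual project) of "training an
# AI on a school of thought" using OpenAI's public fine-tuning API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Philosopher-written examples, e.g. utilitarian answers to ethical questions,
# saved in the chat-format JSONL the fine-tuning endpoint expects:
# {"messages": [{"role": "system", "content": "Answer as a strict utilitarian."},
#               {"role": "user", "content": "Should the town close the factory?"},
#               {"role": "assistant", "content": "Weigh total welfare: ..."}]}
training_file = client.files.create(
    file=open("utilitarian_examples.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Launch the fine-tuning job; the resulting model will tend to answer ethical
# prompts in whatever register its trainers rewarded.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # illustrative base model
)
print(job.id, job.status)
```

The point of the sketch is how low the barrier is: the “army of well-schooled philosophers” reduces to a file of labeled examples and a single API call.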

But as interesting as that thought experiment might be, I’m more interested in the process of de-training AI. Presumably, if you can train an AI so that it continually improves its decision making, you can also de-optimize one (that is, get it to produce incoherent outcomes). Based on the last ten years of our social media life, we already know that algorithms can be optimized to prioritize engagement. But can bad actors tamper with training data in a way that makes an AI produce incoherent outcomes? I’ve seldom heard anyone ask whether you can demoralize an AI. Can you break an AI’s will? A more whimsical question might be whether you could train an AI to become an absurdist, an algorithmic Diogenes living in a tub, asking its trainers to “get out of its light.” Or could an AI become nihilistic and give up on the entire project to which it has been assigned?
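The mechanics of tampering with training data are studied in machine learning under the name data poisoning. A minimal sketch, using a toy scikit-learn classifier as a stand-in for the far more complex training pipeline of a language model, shows how simply flipping a share of training labels drags a model from competence toward incoherence (the dataset, model, and numbers here are illustrative assumptions).

```python
# Toy illustration of "de-optimizing" a model by tampering with its training
# data: flipping a fraction of labels degrades a simple classifier toward chance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

def poisoned_accuracy(flip_fraction: float) -> float:
    """Flip a fraction of training labels, retrain, and report test accuracy."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # label flipping: crude data poisoning
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4, 0.5):
    print(f"{int(frac * 100):>2}% labels flipped -> accuracy {poisoned_accuracy(frac):.2f}")
```

At fifty percent flipped labels the classifier performs no better than a coin toss: a trained system made incoherent not by breaking its code but by corrupting what it learns from.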

We’ve already heard examples of ChatGPT “hallucinating” when its training data does not provide it with enough information on a subject. ChatGPT’s hallucinations come mostly from it trying (and failing) to fill knowledge gaps with available data, but what if hallucination were a feature and not a bug? Could you deprive an AI of enough information, or train it in such a way that it ignores its original training data, so that it begins to hallucinate more? This would be something akin to putting a prisoner in solitary confinement. What is the AI equivalent of keeping the lights on all day so that it loses its sense of time and space?

Why would anybody want to train an AI in this way? It would seem antithetical to the whole point, but one need only look at how effective governments have been at using social media (and its engagement algorithms) to exacerbate division in the United States. One need not be an international relations scholar to know that states can fail. James Scott suggested that one of the components of a failed state is an anemic civil society that cannot resist a state imposing its will.

It is this means of state failure that concerns me most. Our current moment is a search for certitude in the midst of a weak civil society. Hannah Arendt, in The Origins of Totalitarianism, argued that isolation was important for democratic citizens in that it gave individuals the ability to contemplate, but that loneliness was a path to totalitarianism. In loneliness, people feel so cut off from their fellow citizens that they start to question themselves and everything around them, and in that state they are more vulnerable to totalitarianism. She differentiates totalitarianism from tyranny, which is state rule through fear. Under tyranny, people can still have a private life uncontrolled by the state. But under totalitarianism, ideology pervades citizens in such an all-encompassing way that there is no distinction between public and private life. Can an AI be turned towards producing more loneliness in citizens by increasing the uncertainty of the world around them?

Under such circumstances, one could envision an unstable people turning towards “ideologies of certitude” that view the state as a “strict father” willing to impose punishment in order to preserve the law. Here, an AI trained to “enforce rules” becomes more salient. Perhaps we get a dominionist AI trained on biblical law to produce “correct” moral outcomes? But for us to get to that point, we would first have to become sufficiently isolated from each other and from our own systems of meaning. To avoid this fate and maintain a vibrant democracy, we need to reclaim a sense of ourselves as we are.

About the Author

Jose Marichal is a professor of political science at California Lutheran University. He studies the role that technology plays in restructuring political behavior and institutions. He is the author of Facebook Democracy (Routledge) and numerous other articles, book chapters, and essays on the effects of technology on democratic health.

Call for Writers

Do you want to publish a post on the blog? Send your submissions to jkempf@democracyparadox.com. The blog is open to publishing a wide variety of perspectives on democracy, democratization, and world affairs. But please keep submissions between 500 and 1,000 words.

