Samuel Woolley is an assistant professor in the School of Journalism at the University of Texas at Austin and the project director for propaganda research at the Center for Media Engagement. His most recent book is Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity.
One of the things that we see happening online is sort of a democratization of propaganda. I say that in a bit of a tongue-in-cheek way, but what I mean is that the same things that allow anyone to produce news content or to blog online also allow them to build bots or to generate propaganda that’s not created by bots. So, we have a little bit of a many-to-many propaganda system on our hands at the moment.
- Introduction – 0:43
- Background on Technology (including Bots) – 3:00
- Artificial Intelligence – 10:17
- Democratization of Propaganda – 20:44
- The Legitimation of Ideas – 30:48
Democracy depends on space for contentious discussion and debate. This is what made the internet and social media seem like such a perfect complement to democratic politics. Unfortunately, the very attributes that elevated new political voices also empowered more nefarious actors, including propagandists.
Samuel Woolley has studied propaganda in the digital age. Sam has found that digital propaganda shares a lot with older, more traditional forms of propaganda, but it also has some distinct differences. He argues automation has “democratized propaganda.” It’s a fascinating idea that we discuss during our conversation.
Samuel Woolley is an assistant professor in the School of Journalism at the University of Texas at Austin. He is also the project director for propaganda research at the Center for Media Engagement. I reached out to him after reading his new book Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity.
Our conversation touches on some of the same themes as last week’s conversation with Josh Chin. But Sam provides more concrete explanations on some terms we probably know, but don’t quite understand. We talk about bots and artificial intelligence and how they change public discourse for good and bad today.
After listening to today’s conversation, I’d like to know from you how you think artificial intelligence will change democracy. Listeners on Spotify will see the question in the app. But you can also answer the question as a review on Apple Podcasts or reply to me on Twitter or Facebook. You can also email me at firstname.lastname@example.org. But for now… This is my conversation with Samuel Woolley…
Sam Woolley, welcome to the Democracy Paradox.
Thanks for having me, Justin.
Well, Sam, recently, I think it was this past week, I heard you say on another podcast how technology always begins with a lot of optimism before people begin to recognize the costs. Now, most of us began to notice the costs from the internet and social media around the time of the 2016 election or maybe the Brexit referendum. But you’ve worked on this for a very long time. When did you notice the downsides of digital technologies?
So, I began studying, I guess what you might call the downsides of digital technologies when I was doing my master’s around 2010 or 2011. Specifically, during that time I was studying the ways that the Tea Party in the US was leveraging message boards and social media and made attempts to not just coordinate and organize, which is totally normal and natural, but also to coopt messaging as a mechanism to spread propaganda and try to game online systems. It wasn’t until 2013, when I went to do a PhD at the University of Washington, that I really began working on what some people call coordinated inauthentic behavior or influence operations or propaganda online in a big concerted way.
At that time, I started collaborating with Phil Howard, who was my doctoral advisor. Phil and I started working on what we call computational propaganda. So, the use of automation and algorithms over social media in efforts to manipulate public opinion. Phil had been spending some time in the Middle East studying the Arab Spring and he wrote a book at that time called The Digital Origins of Dictatorship and Democracy about the ways in which governments and other groups, including democratic activists throughout the Middle East and North Africa were using social media as tools to mobilize and organize, but simultaneously the ways in which the powerful were leveraging these tools to manipulate folks and to stymie folks.
So, one thing that we noticed in a big way was that bots were being used to try to coordinate spam and other kinds of manipulation. So, we started studying that and now we’re here.
Well, that’s fascinating, because during the Arab Spring, people like Larry Diamond were still calling social media a liberation technology. So, it’s fascinating to think that you were involved in research already exploring how dictators and autocrats and others were using digital technology, including social media, the internet, all of that to manipulate. But why don’t we get into some of the nuts and bolts. A lot of the stuff that you write about involves things like bots and other propaganda tools. How pervasive are these? I hear a lot about them. You hear Elon Musk complain about them. How pervasive are bots really?
Great question. Bots are an infrastructural technology online. They are a core part of the internet and the way that the internet’s built. So, they’ve existed as mechanisms for doing online work since the internet went public and even before that, when it was a military and state technology based at universities. Bots are really any automated software designed to do a task that a human would otherwise have to do. They oftentimes get programmed to do repetitive, routine, boring tasks like scraping the web for data or cleaning databases. But you can automate social media profiles, and this is something that people have understood for a while, even prior to social media, before we called it that. There were chat bots created decades ago that were used to chat with folks.
So, bots of any stripe online make up about half of all web traffic according to some statistics. They’re massively active online. Social bots and chat bots on sites like Facebook, Twitter, YouTube, and all of the others also make up a very significant portion of traffic. There’s constantly this cat and mouse game between social media companies and the people who make and build bots, where the social media companies are trying to catch and delete the more malicious, nefarious, or spammy bots. But simultaneously, there are lots of good bots on social media that get used to automate things like news organizations publishing every headline that comes out. So, rather than having a human do that task, you program a bot to just post all the headlines as they come out.
There are also humor bots online that mash up different words. There’s one called Two Headlines Bot that used to exist, I don’t know if it still does, by Darius Kazemi that was really funny. It would mash two headlines together and sometimes the results would be hilarious. So, all of this to say there are all sorts of bots on the internet and even on social media. We have a tendency to demonize all bots as being bad. But in fact, bots are not in and of themselves bad. It’s just that they can be programmed to do manipulative or nefarious things. Elon Musk in particular is concerned a lot with crypto bots and spam bots.
Versions of crypto bots prior to crypto have existed online for a very long time. Everyone’s familiar with spam email. A lot of that can get sent by bot. So, it’s constantly a battle that’s being waged online against the manipulative uses of automation. It just so happens that in recent years, political organizations have figured out they could automate some of their attempts to sow propaganda. That’s what my research is on.
I got the impression from reading your book that you’ve actually made some bots. That you learned how to create them. How easy is it to make a bot?
It’s really easy to make a bot. So, I have dabbled in making bots. Part of being an ethnographer is participant observation. So, in order to understand the people that I talk to, the bot makers, the people who build sock puppet armies, the propagandists online, over the course of the last several years, I have made some bots with collaborators. There are different kinds of technologies that exist online that allow everyday people, even people that can’t code, to build bots. There is, for instance, If This Then That, IFTTT, that allows people to automate their apps and devices. But, you know, another way of putting that is it allows people to build bots without having much knowledge of code.
Python is a language that’s emerged, I don’t know how many years ago Python was created, but Python’s also quite a simple programming language that’s a little bit easier to pick up than a lot of the ones that have predated it. So, Python allows a lot of people to build bots as well. Suffice it to say that one of the things that we see happening online is sort of a democratization of propaganda. I say that in a bit of a tongue-in-cheek way, but what I mean is that the same things that allow anyone to produce news content or to blog online also allow them to build bots or to generate propaganda that’s not created by bots. So, we have a little bit of a many-to-many propaganda system on our hands at the moment.
Probably the most famous bot right now would be ChatGPT, which is a form of artificial intelligence. Are all bots forms of artificial intelligence?
That’s a great question. I think there’s a perception that that’s the case and it’s not true. All bots are, by definition, automated. So, they are built to do routine rote tasks over and over and over again. But that doesn’t mean that they’re programmed with actual intelligence. That doesn’t mean that they’re given the ability to make decisions or to learn from their surroundings. In recent years, there’s been a lot of innovation in the field of artificial intelligence, specifically in machine learning.
Machine learning is exactly what it sounds like. To simplify it, you teach a bit of code to learn from actions that it does online. That makes it sound smarter than it actually is, but it’s a way of programming a bit of code to learn from the ways in which, say in this instance, people interact with the given automated bit of software that is your bot. So, there are quote unquote artificially intelligent bots. They’re not particularly intelligent most of the time. They don’t understand human nuance in the ways that you or I would, like sarcasm or humor, and they’re still programmed in a routine way. So, they still operate via mathematical formulas. So, even with ChatGPT, which is an incredibly sophisticated and very, very expensive to run bit of software, you have detectable tells.
In fact, there are already people building software that can detect whether or not a term paper for a class was written by ChatGPT. So, I think what I’m trying to get at here is that there’s a lot of fear that somehow people create these bots and then they let them go and then they become intelligent and do all these crazy things online. Bots can do unexpected things online, but usually it’s because of the ways in which people interact with them and figure out how to manipulate them into doing pretty bad things. So, Microsoft’s Tay was a great example of that. People manipulated the bot and taught it basically to say racial slurs and horrible things. But in and of themselves, they’re not truly intelligent, and not all bots are AI per se.
So, what’s really the line between artificial intelligence and simple automation?
I think for me there’s two things. A simple automated bot might be programmed to spread the same corpus of messages over and over and over again. So, basically, you have a spreadsheet and it has messages and the bot chooses from those messages on the spreadsheet and sends them out on a schedule, every 10 seconds or every 15 minutes. But it only has a set amount of messages that it can send. An artificially intelligent bot by definition would have two things that would distinguish it. One is that it would be able to learn from its surroundings, so if people interact with the bot, it would be able to look at the syntax, look at what’s being said to it, and take from that content in order to produce new messages. It wouldn’t be constrained to just a group of a thousand messages.
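The simple automated bot Woolley describes here, a fixed corpus of canned messages sent out on a timer, can be sketched in a few lines of Python. This is a hypothetical illustration only: the `post` function is a stand-in that just prints, not any real platform’s API, and the message list is invented for the example.

```python
import itertools
import time

# The fixed corpus, the "spreadsheet" of messages Woolley describes.
MESSAGES = [
    "Candidate X is surging in the polls!",
    "Everyone is talking about Candidate X.",
    "Candidate X just won the debate.",
]

def post(message: str) -> str:
    """Stand-in for a real platform call; here it just prints and returns the text."""
    print(message)
    return message

def run_simple_bot(cycles: int, interval_seconds: float = 0.0) -> list:
    """Cycle through the fixed corpus on a schedule.

    The bot can never say anything outside MESSAGES, which is the key limit
    separating simple automation from the learning systems discussed next.
    """
    sent = []
    for message in itertools.islice(itertools.cycle(MESSAGES), cycles):
        sent.append(post(message))
        time.sleep(interval_seconds)  # e.g. every 10 seconds in a real deployment
    return sent
```

Calling `run_simple_bot(5)` sends five messages, wrapping back around the three-item corpus, which is exactly the repetitiveness that makes such bots detectable.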
The second thing is that the bot is pre-programmed with some capacity to make its own decisions. There might be fail safes built in to stop the bot from saying certain things. But in a way, the bot in this case, or an artificially intelligent chat bot, would be able to pick and choose the kinds of things that it’s going to do based upon the formulas that do still constrain it. So, with those two things, you get closer towards artificial intelligence. Although the true idea of singularity, of software-oriented sentience, is not something that I believe we’ve found. In all the research I’ve done, including my last book, Bots, which I wrote last year, we don’t see anything like that happening.
So, you’ve described artificial intelligence as the capacity to learn and I feel like sometimes that’s a little bit misleading because it gives human characteristics to machines. From what I can tell, it seems like it’s more like pattern recognition which is inherently mathematical and it makes a lot of sense. But they’re just recognizing patterns that are typically more complex. They might involve syntax. They might involve the way that people say things, but at the end of the day, they’re just recognizing patterns and replicating them. Am I understanding that right when I think about artificial intelligence that way?
Honestly, you’re correct, and it’s a very important distinction to make. I think that we have a tendency to impart human characteristics onto this software in certain ways because of the science fiction that’s out there and the things that we’ve learned in the past from novelizations of the future. The reality is that this technology is routine, it is mathematical, and yes, it has the illusion of being able to make decisions and create complex output. But at the end of the day, it’s still operating via math. So, there are still many things that we can do in order to suss this stuff out, to catch it, to stop the more nefarious forms of it.
As it gets more and more sophisticated, it becomes more and more difficult to track. Because the goal really of the tests that are out there, including the Turing Test which many people know, is to determine whether or not the thing that’s operating online is human enough to be deemed human. But the problem and the perk of the internet is that anonymity is a very real thing. So, you can’t know who’s behind the keyboard a lot of the time.
Well, I think what’s so scary about artificial intelligence is how it’s teaching us how much of human creativity and human ingenuity is really just the creation of different patterns and our own ability to recognize patterns that can be replicated then by machines. But one of the things that really interested me when I heard you talk about different types of bots, whether they’re AI bots or other types of bots, was that you said they always have some kind of imprint from the person who created them. What do you mean by that?
Yeah, so the thing I’m really fond of saying, probably to the annoyance of my readers, is that there’s always a person behind a bot. So, what I mean by that is any piece of code that gets generated is built with the biases, or put another way, the intentions, of its creator. Bots are no different than other software in that regard. The creator, the coder, makes decisions about what to prioritize and how. So, given that this is math and that there’s code underlying these bots, if you have a social media bot, it’s going to be programmed to talk about specific things. So, you might program it to spread a specific type of political messaging in support of one candidate and not another. All of those decisions matter.
So, for instance, Safiya Noble has written very compellingly about how algorithms can be used to perpetuate oppression. Meredith Broussard has also written about how bots can be used to spread both smart things and quote unquote dumb things based upon the decisions of the creator behind them. So, problematically in this case, humans are riddled with biases. In many cases this means that the bots themselves can extend the racism, the hate, of the people that build them. But by proxy, if you think about it, that means they can also be used to do really beautiful things. Tim Hwang and others have talked about the ways in which bots can be used as really useful social prostheses, very useful social scaffolding, that get used to connect people or to generate humor or to do things that are surprising and delightful as well.
The other element to this is that the line between bots and humans, particularly online, is not always clear, because you describe a lot of ways that humans either interject themselves into those accounts or provide input before the account becomes automated. Can you describe sock puppets and the other ways that humans engage in that type of automation even as it’s ongoing?
Yeah, we have a tendency to think and, even in research, to talk about bots in a binary way. We say an account on social media, if we’re talking about a social media bot, is either automated or it’s not automated. The reality is that automation exists on a gradient. So, there’s oftentimes nothing that stops the person that built the bot, or someone they’ve given access to the account, from simply logging on through either the backend or frontend of a given social media site and starting to spread human-based messages.
So, one of the things that social media companies have done is to delete bots based upon this binary assumption, where they can figure out if an account is fully automated based upon the math. Simply put, if it’s tweeting on a schedule on Twitter, or if it’s messaging on Facebook in a network of accounts that look exactly like it and are just boosting one another. Many of the bot makers that I’ve spoken to have figured out that it’s as simple as logging on once a month to certain bot-driven accounts and spreading some human-based messaging to trick the detection algorithms that the social media companies have built. So, in those ways, when a person logs on, functionally taking control of the bot, you’re starting to see more sort of cyborg action.
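The cat-and-mouse dynamic here can be illustrated with a toy detection heuristic. This is my own sketch, not any platform’s actual algorithm: a fully automated account posts at nearly constant intervals, so the variation in its posting gaps is suspiciously low, while an occasional off-schedule human post, the cyborg tactic described above, reintroduces enough irregularity to slip past the check.

```python
from statistics import mean, stdev

def looks_automated(post_times: list, cv_threshold: float = 0.1) -> bool:
    """Flag an account whose inter-post intervals are nearly constant.

    Uses the coefficient of variation (stdev / mean) of the gaps between
    posts. A very low value suggests a timer is posting, not a human.
    """
    if len(post_times) < 3:
        return False  # not enough posts to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    cv = stdev(gaps) / mean(gaps)
    return cv < cv_threshold

# A pure bot posting every 600 seconds exactly.
bot_times = [i * 600.0 for i in range(10)]

# A "cyborg" account: the same bot, but a human logs on once
# and posts off-schedule, breaking the perfect rhythm.
cyborg_times = sorted(bot_times + [1234.5])
```

Here `looks_automated(bot_times)` flags the metronomic account, while the single human post in `cyborg_times` raises the variation enough that the same check passes it, which is exactly the evasion Woolley’s bot makers described.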
But when a person is using an anonymous account, or an account that’s taking someone else’s identity, to message manually, we call that a sock puppet. So, a sock puppet is an account that is driven by a person’s actions. The thing is, any given account that is driven by a bot can at times be driven by a sock puppet. As you pointed out, oftentimes the accounts that become automated are accounts that have been online for a very long time. So, there’s a marketplace for social media accounts in the depths of the web where, if you’ve had accounts for a number of years, you can sell them. Then those accounts will be automated, because they have more legitimacy. They have a history of human usage, and the social media companies are much less likely to delete such an account because of fears about stifling speech.
So, one of the terms that really resonated with me in your book was this idea of democratizing propaganda. In fact, you have a line in the book where you write, “We are witnessing the democratization of propaganda. Perhaps this juxtaposition of democracy and propaganda seems contradictory, but it is deliberate.” You’ve already mentioned it earlier. I want to come back to it right now. Can you explain a little bit about what you mean when you say democratization of propaganda?
Yeah. Our perception of propaganda for the last several hundred years has been as a top-down informational mechanism. Generally speaking, the way that propaganda’s been theorized, for instance, during the 20th century and in recent years, has been that state intelligence services, militaries, corporations, in other words very well-resourced organizations, were the ones wielding it. In fact, propaganda as a term has its history in the Catholic Church with the Office for the Propagation of the Faith. These were the organizations that were able to leverage various media, whether it was books or eventually newspapers, radio, TV, film, in order to spread biased, loaded messaging, attempting to manipulate people’s emotions or to manipulate public opinion.
Partially this was because we existed in this media system in which the powerful, the elite, as Herman and Chomsky discussed in Manufacturing Consent, were able to use the one-to-many media system in a very controlled way. They were able to monopolize ownership of media, especially of TV and newspapers and the like, and in so doing control at least some part of the messaging, if not deliberately or specifically, journalist to journalist, then certainly in a gatekeeping and framing fashion. So, they were making decisions about what ended up getting printed in many circumstances, especially if it was something they didn’t want to get printed.
Now with the internet, that same force has democratized people’s access to all kinds of information. The things that made people really excited about the internet, that they could read anything online, that they could access anything, that they could use it to organize and communicate in countries with limited media systems without authoritarian leaders cracking down on them, these same features have allowed people of all sorts, including small groups, to try to spread their own messages to manipulate public opinion.
So, when I talk about the democratization of propaganda, what I’m saying is that the same features of the internet that allow people to be a journalist, to break news, also allow them to try to control what other people are thinking through the use of tools like bots, but also through sock puppet accounts, coordinated groups of real human accounts that use real people’s names, and also coordinated influencers. People have figured out that they can amplify specific content. They can suppress other kinds of content by harassing people. So, functionally, what we have now is a system wherein propaganda is in the hands of many people.
With the closing off of APIs, recent moves, for instance, by Elon Musk to shut down open access to the Twitter API, you do have a move back to a system where elites are prioritized because they have the money to actually pay to access the APIs on Twitter. But there’s many other places online where this democratization of propaganda still plays out.
I find it fascinating because it conjures the idea of non-state actors proliferating and even creating propaganda and not just non-state actors as if they’re big, huge organizations or corporations, but just everyday people becoming propagandists themselves without necessarily the direction of a government or the state or anybody. It’s just being a propaganda entrepreneur, if you will.
Yeah, and it’s staggering. In fact, some of the earliest examples of coordinated, what I call computational propaganda, so the use of bots to perpetuate propaganda on social media, were actually done via small groups of people that you might call astroturfing groups. A great example was a Tea Party group based in Iowa. Now, the Tea Party famously had a lot of connections to some very wealthy individuals, and so definitionally you might call some elements of the Tea Party grassroots, some elements of it astroturf. It wasn’t a monolithic organization. However, in this circumstance, this group of people in Iowa were able to build a bunch of automated accounts that injected their opinions during the 2010 Massachusetts special election for Ted Kennedy’s seat.
Some researchers, Metaxas and Mustafaraj, wrote a study about how they spread messaging that Martha Coakley, the Democrat in the race, was anti-Catholic which is an incredibly damning accusation in a place like Massachusetts and Coakley ended up losing the race. Not only did she lose the race though, other media organizations picked up that messaging that was initially spread by those Tea Party bot accounts and spread it legitimately. They basically laundered the information so that it spread to even more people. So, examples like that are a dime a dozen at this stage. There are so many different groups that have figured out how to leverage not just bots, but also other tools and techniques online.
Oftentimes, it’s small groups and people that are the trailblazers in this space because they’re operating from a place of tactics. They’re not bogged down by the bureaucracies that governments are bogged down by. So, oftentimes what we see is governments ending up adopting the manipulation strategies of these small groups. Also, it’s worth saying that commercial entities have perfected a lot of these systems as well and so a lot of the spam, propaganda, manipulation techniques that we see being used by governments also come from commercial entities too, like marketers and PR firms and things like that.
Has the democratization of propaganda changed the goals of propaganda in this new digital landscape? Because the way that you just described it a moment ago, it wasn’t just about persuading people, it was about legitimizing ideas, bringing them into the conversation of elites and traditional media sources. So, are the goals different now that we’ve got this digital landscape? Have the objectives actually changed?
Yeah, so whereas before, take for instance World War I and World War II, you had the United States and its allies, and then Germany and its allies, leveraging propaganda in order to concretize their positions as powers and to recruit people to the war machine, whether it was to the front lines or to the factories. Now, we see a change in propaganda. Rather than trying to entice people to contribute to a given state or country or way of being or way of believing, we see propaganda being pushed more towards the active measures model, the model developed by the USSR during the Cold War. Things like hypernormalization. This idea that rather than trying to just persuade people, you can do many other things.
One thing you can do, and this is why my new book’s called Manufacturing Consensus rather than Manufacturing Consent, which is what Herman and Chomsky wrote about, and before them, people like Lippmann and Gramsci, is that you can actually use social media technology to create the illusion of widespread agreement, not just acceptance that something is worth discussing, but agreement that something is important. So, from consent to consensus, which I believe is much more powerful. You create this illusion via bots, via sock puppets, via coordinated accounts, the illusion that something is very popular. Then the goal, and we see this bear out in the research, is that there’s a bandwagon effect and people end up picking that content up.
So, functionally, everyone right now is very concerned with disinformation, for instance, the purposeful spread of false content. What you end up having happen via these mechanisms is that the spread of disinformation is only the beginning. It quickly metastasizes into misinformation, which is content spread accidentally by real people. You see this kind of cascade effect, this situation in which the propaganda becomes so much bigger than it initially was through the scaling power of the internet, through virality. The other thing that can happen oftentimes in these situations is that rather than persuading people to vote for a particular candidate, these operations are designed to engender apathy, to engender anger and polarization, to get people to check out of the political system, to get people to be skeptical of institutions like medicine, education, science.
That is very much a Russian active measures model that has been scaled on the internet in ways that I don’t think even the USSR could have predicted when it was generating these mechanisms back then. Something else that bears repeating is that the bots or the sock puppets aren’t just built to try to have conversations with people to try to get them to change their opinion about a politician and vote for someone else or to get them to be apathetic or angry.
They are also often built to try to have conversations with the algorithms that underlie the social media sites themselves. This is a really critical thing, because oftentimes I’ll talk to people and they’ll say, ‘I would never be tricked by a bot. Not only that, but no one I know would be tricked by a bot or a sock puppet account, because it’s some random account trying to talk to me.’ The reality of many of these accounts is that they are not built to talk to you. They’re built to create the illusion of false traffic, again, to manufacture consensus to the algorithms themselves.
So, the trending algorithms, the recommendation algorithms, these are, as we said earlier, things built upon math. They’re built to look at routines, trends, these sorts of things, and the bots trick them into thinking that the content is popular through sheer quantity and shared traffic. Then basically what ends up happening is the algorithms will re-curate that content via their trending mechanisms, on their front pages or on their sidebars, to tell you: you should look at this. This is what’s happening that you should pay attention to.
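A trending algorithm of the kind described is, at bottom, counting. This toy sketch is entirely hypothetical, since real platforms use far more signals, but it shows how a small coordinated network can outvote organic traffic when popularity is measured by raw post volume.

```python
from collections import Counter

def trending(posts: list, top_n: int = 1) -> list:
    """Rank topics by raw post count, a deliberately naive trending metric.

    posts is a list of (account, topic) pairs. The metric ignores *who*
    is posting, which is exactly what coordinated accounts exploit.
    """
    counts = Counter(topic for _account, topic in posts)
    return [topic for topic, _ in counts.most_common(top_n)]

# 30 organic accounts each mention a real story once...
organic = [("user%d" % i, "school funding") for i in range(30)]

# ...while 5 coordinated accounts post the same fringe claim 10 times each.
coordinated = [("bot%d" % i, "fringe claim") for i in range(5) for _ in range(10)]

# With 50 posts against 30, the fringe claim "trends"
# despite coming from far fewer accounts.
```

Counting unique accounts per topic instead of raw posts would put the organic story back on top, which is one reason platforms weigh account diversity, and one reason bot networks spread activity across many aged, human-looking accounts.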
It sounds like bots don’t even have to try to persuade people, because their goal is to legitimate ideas rather than to convince people of new ideas. I mean, that sounds like an entirely different purpose for propaganda than in the past, because normally you think of propaganda as something that’s trying to convince you of something that you wouldn’t normally believe. This sounds more like it’s trying to legitimate ideas that would’ve normally been outside of the public discourse.
I think that’s right, and I think that’s something that we see a lot in contemporary politics and in the zeitgeist today. We see a lot of things that would otherwise have remained consigned to the shadows, or never entered the public conversation, suddenly burst into social media trends and recommendation systems, but also, honestly, getting laundered through well-meaning journalists who are using social media as a mechanism for their reporting, so they’re actually tracking trends themselves. The rise of data journalism has led to journalists leveraging data online in order to figure out what they should be reporting on based upon popularity.
So, you’ll see that happening as well. It is a different sort of system. It’s a system in which you see a lot of seeding and fertilizing of ideas into the conversation stream and then you just allow people to pick it up and let it grow that way.
There was an example in your book where you mentioned how a single journalist, a television journalist, repeating an idea that bots and sock puppets had been trying to infiltrate into the conversation, just that single journalist repeating it on national news, was an enormous win for them. That’s what they were really looking to accomplish, rather than to convince millions or even just thousands of people of the idea. Just getting the news reporters to report on the idea was their goal, because if they can get to that, then convincing becomes so much easier.
Precisely. So, I’ve used the term information laundering a lot of times and it is exactly that. The goal is to get a more legitimate or well-placed person or organization to, well-meaning or not, spread that information. Sometimes the news organizations will spread it because it simply fits the goals of the story that they’re telling and they don’t care whether or not it’s real or fake. In other circumstances, they are spreading it thinking that it is a real person that says the XYZ thing, so they’re featuring the tweet in a story or that it’s a real set of accounts that have been driving traffic around this trend, so it’s worth repeating.
But in my conversations with bot makers and computational propagandists and other digital propagandists, they’ve made it very clear that the goal is to get well-placed folks to spread the message in order to inject it into the popular consciousness. In an earlier paper that my colleague Doug Guilbeault and I wrote, called “Manufacturing Consensus,” a chapter about the United States in the book Computational Propaganda, we studied the ways that these bot accounts and the builders behind them were able to get very well-placed folks, including politicians, news anchors, and others, to spread this kind of content and, in so doing, grant it legitimacy.
It’s absolutely a problem and it’s absolutely something that we don’t think about intuitively when we think about this stuff. So, one of the things I argue in the book is we’ve got to change the way that we think about propaganda, not just to think about it as a democratized space where anyone can do it, but also to understand what the means and ends are in a much more sophisticated way so that we can combat it.
It’s also got me thinking about influencers differently. You recently wrote a paper in the Journal of Democracy called “Digital Propaganda: The Power of Influencers” and the reason why I’m thinking about influencers is we use the name, the term, as if they’re actually influencing people. But based on our conversation, it sounds like what they’re really doing is legitimizing and normalizing ideas in the public discourse. I mean, does that sound right?
Yeah, and in so doing, eventually influencing people. But one of the things that I find fascinating in today’s political world and commercial world is that influencers, real people on social media, or content creators, if you want to call them that, are able to get paid small amounts of money to promote specific products or specific ideas. So, it began as a commercial enterprise to get influencers who had a large following online to spread or promote a post about a particular company or product. Quickly it merged into a political endeavor as well, where you have coordinated groups of influencers, real people with large followings in many cases, or very targeted followings in others, spread a message that is political in nature, that is supporting a particular idea or candidate.
In so doing, they not only influence the people that follow them, but also influence news reporting, influence the whole information ecosystem, because it creates this idea that a certain thing is important or worthy. Many people respond by saying, ‘Well, you know, in the commercial space, influencers tend to disclose.’ I think that it’s important to understand that disclosure rules online are pretty bad and that, yes, many times influencers do disclose that they’re advocating for a product and they’re paid. But in political circumstances, that’s less than clear. The law is also less than clear, particularly in the United States because of Citizens United.
So, one of the things that I say in that piece and that I’ve continued to think about is the ways in which coordinated groups of influencers are almost the next logical step in computational propaganda. Bots seem, by comparison, very easy to delete and get rid of because you can say that’s an automated thing. It’s not a question of free speech. But when it comes to an influencer, a real person spreading this kind of content, even if it is coordinated, at what stage do you say it’s problematic? Maybe you say it’s inauthentic and you’re able to figure that out, so the social media companies will delete it in that case.
But the distinction between real speech and coordinated speech in this case is very difficult to parse. I think that the social media companies are rightfully extremely worried about this because oftentimes the payments to these folks by political campaigns are happening off of the social media platforms. So, tracking this stuff is very difficult.
So, we’ve posed a problem. You’ve explained how there’s a lot of bad things happening on the internet and social media. Are these problems actually solvable? Like do you have hope for the future in terms of how we interact with the digital landscape and how we can better understand propaganda just as everyday ordinary citizens?
Yeah, I remain optimistic. To be very frank, I think that the internet still has a lot of power to democratize and to bring hope to people who live in controlled environments. Not just hope, but also practical tools for organization and communication. But progress is necessary online right now. We have a set of social media platforms that dominate the landscape that are very much built to optimize for eyeballs on the screen, for the selling of ads, and, frankly, for propaganda. So, we’ve got to build new platforms. A lot of what we see happening right now is an attempt to retrofit platforms like Twitter, Facebook, and YouTube, which have for years been designed to be optimized for the things that I’ve just mentioned, really to be optimized for control of people.
We need to start asking ourselves what it would look like to build the next wave of socially oriented platforms or other kinds of media technologies that are built with democracy in mind, that are built with human rights in mind, and those kinds of questions are powerful ones. There are organizations like New Public, led by Eli Pariser, who is a great thinker in this space, and Talia Stroud here at UT, that are actually doing this kind of ideation to build new platforms. But the fixes are not just in the technology itself. The fixes are also in society. People are talking about these problems more than ever before.
Back in 2013, when we started the Computational Propaganda Project at the University of Washington, like you said, popular thought was that the technology was mostly beneficial for democracy. Now people have a very different view of this stuff, so people are catching up. Society’s catching up. Governments are beginning to create media literacy programs that are actually very meaningful and helpful to citizens. People will oftentimes mention Finland or New Zealand and other countries like that. We need to start doing these sorts of things. However, in the United States, we need to also think on a sociocultural scale and a linguistic scale, and about how we don’t just provide these things to certain groups. We provide them to everyone, wherever they’re at.
Then finally, we have to have policy in this space. We have to have reasonable laws and regulations in the social media space. The EU is driving regulation online. The United States has got to catch up, and my hope is that eventually it will. I don’t have very much hope in the next couple of years that that’s going to happen, but it’s important for people to remember, and I’ve said this before, but I’ll say it again, that Mark Warner and Josh Hawley, a prominent Democrat and a prominent Republican, were up until quite recently trying to co-create laws to prevent digital surveillance.
That tells me, and should tell you, that Republicans, Democrats, and anyone else in this country really have an interest in making sure that the online space is safer and more trustworthy, and that it allows free speech whenever possible. So, I believe this can happen, but I believe change also has to come in order for it to do so.
Well, Sam, thank you so much for this conversation. I want to plug your book one more time. It’s called Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity. And let me plug the article one more time. It was “Digital Propaganda: The Power of Influencers.” So, thank you so much for joining me today. Thank you so much for writing your book. It’s been an excellent conversation.
Thank you for having me, Justin, and thanks for this wonderful podcast.
“Digital Propaganda: The Power of Influencers” in the Journal of Democracy by Samuel Woolley
Democracy Paradox Podcast
Email the show at email@example.com
Democracy Paradox is part of the Amazon Affiliates Program and earns commissions on items purchased from links to the Amazon website. All links are to recommended books discussed in the podcast or referenced in the blog.