Jamie Susskind Explains How to Use Republican Ideals to Govern Technology

Jamie Susskind

Jamie Susskind is an author and barrister. He has held fellowships at Cambridge and Harvard Universities. His work is at the crossroads of technology, politics, and law. His most recent book is The Digital Republic: On Freedom and Democracy in the 21st Century.

Listen on Spotify | Listen on Apple | Listen on Google | Listen on Stitcher

Access Bonus Episodes on Patreon

Make a one-time Donation to Democracy Paradox.

The problem in both cases is not Zuckerberg or Musk, but the idea of a Zuckerberg or Musk. The idea that, simply by virtue of owning and controlling a particular technology, someone wields arbitrary or unaccountable power which can touch every aspect of our liberty and our democracy.

Jamie Susskind

Key Highlights

  • Introduction – 0:44
  • Challenges of Digital Technology – 3:18
  • Artificial Intelligence – 20:09
  • A Digital Republic – 40:27
  • Possible Solutions – 43:42

Podcast Transcript

A few weeks ago I spoke with Samuel Woolley on bots, artificial intelligence, and digital propaganda. Afterwards, I felt like I would wait a few months before returning to the theme of technology’s impact on politics. A few days later I came across Jamie Susskind’s new book The Digital Republic. I had meant to read the book for some time, but had put it off. I was worried it would rehash the same topics so many other books on technology have already discussed. 

I quickly discovered I was wrong. Jamie’s book is different because it deals with the legal challenges digital technology creates. Before I read this book, I thought other authors had explored this topic. But most other authors focus on the politics rather than the legal implications. This means Jamie has a different perspective when he offers insights and recommendations.

Jamie Susskind is a barrister by trade, but has become known as one of the most influential writers on issues of technology and the law. His most recent book is The Digital Republic: On Freedom and Democracy in the 21st Century.

Our conversation goes beyond the problems of AI and social media to reflect on the fundamental challenges technology poses for our legal frameworks. Jamie then provides not just proposals to address them, but an entire philosophical framework to reframe how we think about technology. I found it a genuinely different way to think about the challenges of technology and I think you will too.

Now if you enjoy this episode, please consider supporting the podcast on Patreon or as a premium subscriber on Apple Podcasts. You’ll also get access to a growing library of bonus episodes. The most recent one features Jeffrey Isaac in a conversation about the life and work of Robert Dahl. Click the link in the show notes to become a Patron and gain access to the bonus content. If you have questions feel free to email me at jkempf@democracyparadox.com. But for now here is my conversation with Jamie Susskind…

jmk

Jamie Susskind, welcome to the Democracy Paradox.

Jamie Susskind

Thank you for having me.

jmk

Well, Jamie, I really loved your book, The Digital Republic: On Freedom and Democracy in the 21st Century. It’s a very expansive book. It’s difficult for me to figure out exactly where to begin, and part of the reason is that it’s a topic, technology, that this podcast has tackled a number of different times. So, I’m a little hesitant to start by asking the basic question about where technology goes wrong. But I do think that it helps us to understand how you think about the problems that we face. So, let me start with a quote from your book. You write, “Computer code has a formidable ability to control human activity silently, automatically, precisely, and without tolerating any objection and it is used to enforce a growing number of society’s rules.” Can you tell me about how computer code controls human activity?

Jamie Susskind

In the olden days when you went to a library and borrowed a book, or you went to the video store and borrowed a video, you would be told a deadline by which to return the book or the video and if you failed, you would receive a penalty, perhaps a payment due for every day that you were over. Today, if you borrow a book on a Kindle, you sort of rent it, or if you rent a movie on Amazon Prime, you don’t have a choice about whether and when to return it. After a while, it’s simply removed from your device.

Imagine taking a self-driving car and you want it to get to the hospital because it’s an emergency. You might find that, unlike if you were driving the car, that car might refuse to go over the speed limit even by a mile an hour or two miles an hour. It might refuse to drive on territory where its GPS systems tell it that it would be trespassing. It might refuse to park in certain places. Again, if you were driving the car, you might do these things and accept a sanction later on.

Final example, shortly before the last presidential election there was a great furor because there was a New York Post article about Hunter Biden, which Twitter made impossible to post on its platform. Not just that if you did it, there was some kind of sanction, but that if you actually posted the link and then hit Tweet, the tweet would not send. All of these are examples of the same thing. They’re examples of the way that computer code contains rules. So, when we interact with digital technologies, we are subject to the rules that are coded into them. It’s for this reason that 20 years ago or so scholars like Larry Lessig said that code was like law, but a particular kind of law.

Lessig used the example of a door. He says, we’re used to being in a world of doors which say ‘Do Not Enter,’ but you can enter anyway. You just might suffer the consequences. But code is like a locked door. You can’t get a computer program to do something that it is not otherwise coded to do. So, my philosophy is pretty simple. As more and more of our lives are mediated through technology, our actions, our interactions, our transactions, then we are subject to the rules that are coded into those technologies. Those who write those rules have a growing degree of social power. Software engineers are becoming social engineers. Now my book tries to understand what we might do about that source of power because you’re not going to find it anywhere in the political science textbooks of the 20th century.

jmk

It’s interesting that you compare code to law, because the law is different than code since it applies to everyone the same way. You make a law and everybody’s supposed to follow it in the same fashion. Now, where that changes is that it might be applied to different people differently, but that’s because of the human element in the law. Digital technology fundamentally tries to customize what it’s doing to each individual person. As technology improves and there’s artificial intelligence behind a lot of the technologies, it’s going to further try to customize its technology to each person.

So, we would think that there wouldn’t be a type of bias behind it, because in the law you establish the law and it applies to everyone equally. But in technology, you’ve said that algorithms can have a bias. Can you explain how that’s possible, if it’s similar to something like the law that’s supposed to apply to everybody?

Jamie Susskind

There’s a lot in that question. Let me try to unpack it. Firstly, how is computer code not like the law? Well, one, laws are made in public and we get to see the lawmaking process and we get to see the end result, the written law, as it is there on the page. The same cannot be said of computer code. Largely, it is made in private, behind closed doors, and it is either hidden from view or difficult to understand. A second difference is that while computer code is made by private corporations in pursuit of private profit, the law, at least in a democratic society, acquires some of its legitimacy from having been made by elected officials according to recognized procedures of lawmaking which bear the stamp of legitimacy and authority.

So, what you have with code is essentially a kind of private rulemaking without the checks and balances and without the transparency that one would expect of laws that were made in the public domain. Now you are right that one of the interesting things, and I don’t say whether it’s a good or a bad thing, about code as a rule making system is that it is more flexible depending on the person. So, to use a slightly anachronistic analogy, think about speed limits. We all have to follow the same speed limits when we drive our car because the law makes a broad presumption about the average ability of the average driver.

But there’s no reason in particular why you couldn’t have it so your car could go faster than mine because your record has shown and the data reveals you to be a more cautious and less accident-prone driver than me. So, you might have a different speed limit from me in the future. That’s assuming that we still drive our cars, which is something that I doubt in the long run. But the point you make is right. What code can do is that it can customize rules according to the characteristics of the person who those rules are being applied to, among other factors, because you can also customize the rules, for instance, to improve profits for whoever happens to be writing the code.

So, imagine for instance, that you have a vending machine in a garage. You might have a piece of code which raises the price of a Diet Coke on any day that happens to be particularly hot, a kind of algorithmic pricing method which responds to perceived changes in demand or likely changes in demand. That’s obviously nothing to do with the person buying the Diet Coke. Still, it’s a kind of price sensitivity that the old methods couldn’t allow. How do these systems have bias? Well, that is a very big question and it applies not just to the rules that algorithms make us follow, but also the determinations that they make.
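To make that pricing mechanism concrete, here is a minimal sketch of such a demand-responsive pricing rule. The function name, temperature threshold, and surcharge are illustrative assumptions, not details from the conversation.

```python
def vending_price(base_price: float, temp_celsius: float) -> float:
    """Toy demand-responsive pricing: add a surcharge on hot days.

    The 28-degree threshold and 20% surcharge are made-up numbers,
    chosen only to illustrate the kind of rule described above.
    """
    if temp_celsius >= 28:  # a "particularly hot" day
        return round(base_price * 1.20, 2)
    return base_price

print(vending_price(1.50, 32))  # 1.8 -- hot day, price raised
print(vending_price(1.50, 18))  # 1.5 -- normal day, base price
```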

So, for instance, algorithms are increasingly used to distribute things of importance in society. They might determine our access to credit, to insurance, to housing, to jobs and one of the advantages, it is said, is that these systems aren’t like humans with our fickle prejudices and biases. But, of course, one of the disadvantages is that they can themselves introduce new biases or new injustices or they can amplify old injustices. So, for instance, if you train a face recognition system only on a database of white faces, that system is going to struggle to see faces of people of color. If you train a voice recognition system mostly on the voice of men, then it is likely to struggle to hear the voices of women. There you have an example of where the data sets that are used to train these systems can result in an injustice.

But there are also injustices in the world which these algorithms simply amplify. So, for instance, if you scrape an enormous amount of language off the internet and use it to train a large language model, and this was a few years back, and then you get it to play some word games, you might say something like ‘man is to doctor as woman is to…’ and it won’t reply ‘doctor.’ It will say ‘nurse.’ Or you might say ‘man is to architect as woman is to…’ and it might say ‘interior designer.’ The reason for this is not that the people who designed the system were somehow biased against women or wanted to fit them into more stereotypical roles. It’s because the data itself reflected the gendered use of language of society at large.
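For readers who want to see those word games concretely, here is a minimal sketch of how such analogies are typically computed with word embeddings. It assumes the pretrained Google News word2vec vectors available through gensim’s downloader, as an illustration of the general technique rather than the specific model discussed above; the exact completions depend on the training corpus.

```python
# Requires: pip install gensim  (the vectors download is large, ~1.6 GB)
import gensim.downloader as api

# Pretrained word2vec vectors learned from Google News text.
kv = api.load("word2vec-google-news-300")

# "man is to doctor as woman is to ...?" becomes vector arithmetic:
# doctor - man + woman, then find the nearest words.
print(kv.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))
# Completions like 'nurse' have been widely reported for queries of this
# kind -- not because the engineers were biased, but because the training
# text reflected the gendered use of language in society at large.
```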

So, these machine learning systems, trained on data that is itself derived from society, will often amplify and reproduce biases instead of correcting for them. Perhaps the clearest example was the old Google autofill. So, you’d start typing something into Google. You know, you might type in ‘Why do Jews…’ and it would propose ‘Why do Jews have big noses? Why do Jews control global finance?’ That’s not because engineers at Google were anti-Semitic. It’s because that’s what other people had searched for in the past and the algorithm was trying to offer up a convenient prediction of what you might be likely to search for.

So, when you train algorithms using data from the real world, there is a risk of partial data, as in the examples of the voice recognition and face recognition systems. There’s also the risk of data that is itself problematic simply because it reflects injustice that already exists in the world. Instead of correcting for it, algorithms entrench and amplify it. That’s something that I write a lot about in The Digital Republic.

jmk

So, you started out by saying that code is a lot like law and we oftentimes think that there is very little law when it applies to technology. But one of the ways that we all engage with technology that reminds us of how law affects it is when we agree to accept terms, like whenever we install a new program or decide to use a new type of technology, a new app. You described that as the consent trap. I think that’s really vital to understand how you think about the intersection of digital technology and law. Can you tell us a little bit about that and what it is?

Jamie Susskind

Yeah, so in the book, I reject the premise that digital technology is ungoverned, a kind of wild west. I think what’s more accurate to say is that the laws that govern digital technology, that do exist and there are many of them, are inadequate for the purpose that they are trying to meet. The example that you’ve just cited, the example of the kind of ‘do you agree, yes or no? Do you consent, yes or no? Please sign these terms and conditions,’ is a very good example of that.

I mean, if you really step back and think about it, and I give the statistics in the book, basically no one reads them. Those who do read them, don’t understand them. Those who do understand them find them so vague that they give no indication of what they’re actually signing up to. So, ‘we may share your data with third parties.’ What does that mean? What we essentially have is this pretense, this farce, this kind of jumping through a hoop where we consent to things or agree to things, but we’re not really consenting or agreeing in any meaningful way at all. Yet in many jurisdictions that consent is supposedly the front line of your defense against the arbitrary and enormous power of digital technology.

We’ve inherited this idea that consent is somehow a dignified and liberal way of governing digital technology. That it gives us a kind of autonomy, but it doesn’t track reality. When you are presented with terms and conditions, you don’t have the choice of renegotiating them. It’s take it or leave it. In fact, what terms and conditions therefore do is the opposite of what they’re supposed to. What the law should do is try to rebalance the relationship between those who have power, the tech companies, and those who don’t. But allowing tech companies to lay down the terms and conditions of their product use and basically requiring us to sign them if we want to use them, doesn’t reverse the power that already inheres in the digital technologies, but rather entrenches it.

So, my quibble, my problem, is not so much that there is no law, but that we’ve allowed a legal mechanism to arise, the mechanism of consent, which actually does the opposite of what law and regulation is supposed to do in this context, which is rebalance an imbalance of power.

jmk

That’s really fascinating the way that you describe that the acceptance of terms and conditions is actually shifting the balance of power to the technology companies and away from ourselves, because typically we think of laws and regulations as aiming to protect individuals rather than entrenching power within companies or corporations. You make the case that a lot of the regulatory regime exists to protect or to empower those corporations or the people who control digital technologies. Can you provide some other examples where the law emboldens or empowers those digital technology companies?

Jamie Susskind

Yes. So, just to start with it’s helpful to think about other areas of life in which there is a kind of imbalance of power and how the law treats those relationships. So, the law places duties on parents towards their children, duties of trust, fiduciary duties, which have nothing to do with the contract between them. So, their kids have never signed the terms and conditions that mean that their parents have to look after them in a certain way. The law recognizes the imbalance of power and it imposes requirements on the parents.

Likewise, if a doctor treats you, that doctor owes you a duty of care that is entirely separate from any contract you may have together. So, it’s not by agreement or by consent that the law imposes duties on that doctor towards you, but rather by imposing a duty just because it recognizes the imbalance of power and the social responsibility of the doctor. Likewise, when a banker looks after your money, there are certain duties they owe to you irrespective of what happens or is said in the contract between you.

So, we have all of these other areas of life in which the law recognizes imbalances of power where one group in society happens to have more power or more knowledge or more resources or more responsibility and it imposes higher obligations on them, essentially not to abuse their position and to look after those whose interests might be affected. Technology is a bit of a lacuna. It’s different. So, if you accept the argument that I make in The Digital Republic, that digital technologies carry power and that power is growing, the question is why we do not treat them more like we would treat lawyers or doctors or bankers or teachers or parents or other people of responsibility in society.

So, I think that that is a kind of gap in the law or rather an inadequacy in the law. Yes, I do make the case in the book that actually a lot of our existing laws and regulations not only fail to address the imbalance that exists, but kind of entrench it and make it worse. The consent trap, the problem of terms and conditions, is a very major one.

Another one in the United States is the fabled Section 230 which grants a wide range of immunities to social media platforms when they both do and do not moderate content on them. A breadth of immunity, which does not exist in Europe or in the United Kingdom, but does in the United States. I think it affords an unnecessary degree of protection to those who no longer need it as much as they might have done in the late nineties or early aughts. That’s another example. But there are also ways in which the law has simply failed to catch up. So, old concepts of things like defamation are basically quite ill-suited for the protection of reputation on the internet. They may have worked in the 20th century, but in the 21st century, they don’t.

jmk

Earlier you mentioned how there’s a lack of transparency behind what technology companies do. That oftentimes the code is private and it’s difficult to know how they come to the decisions that they do. Why is it difficult to tell whether an algorithm has broken the law? Why is it that we can’t just require companies to give us access to their code so that we can better understand what it’s doing and whether or not it’s abiding by the different laws that we have established in the past?

Jamie Susskind

There are layers of opacity here. So, the first layer is that a lot of companies use the existing law to keep their code and the data that they use secret as a kind of trade secret. In a sense, that’s understandable. It’s their secret sauce. They don’t want people to game it. They don’t want their rivals to copy it. So, it’s a bit like the protection of trade secrets in other areas. They use the law to stop people having access. But then assuming you could and were allowed to look under the bonnet, computer code is not very easy to parse. It is fiendishly complex, particularly in machine learning systems of the kind that are becoming more prominent.

A lot of the time, engineers within the companies themselves don’t even understand exactly how or why an algorithm has reached a particular determination. Indeed, these systems work in a way that is so different from the human brain and that difference is precisely what makes them valuable and makes them additive in society. They do stuff that we can’t do by operating differently from the way that we operate. But that makes them really difficult to understand. That’s true even for their human creators, still less a regulator or broader society. So, there’s a sort of double problem of transparency there and we have to find new ways of holding these systems to account which don’t necessarily involve transparency of the kind that we may have associated with human decision makers in the past.

Now, just pausing that for a second. Human decision makers are kind of opaque as well. When a judge gives their judgment, they do have to give reasons for it. But those reasons might not actually be the real reasons and we have to sort of take it on trust a lot of the time that the reasons people give for their actions are in fact the real reasons. But at least there are reasons. When it comes to machine learning systems, that is not always true. So, there are various ways that people are thinking of adapting the law and legal procedure in order to improve, if not transparency, at least accountability on the part of technology companies.

So, my basic principle is this: if someone is accusing you of breaking the law, let’s say of discriminating against them, it should never be a defense to say, ‘Well, my system is so opaque that I can’t tell you whether that’s true or not and therefore your case fails.’ Where a credible allegation is raised, the burden should always be on the technology producer or designer company to show that their systems are compliant with the law. So, if you are unable to produce a system which is compliant with the law and which you can show is compliant with the law, the problem is with the system, not with the law.

I don’t want to live in a society where decisions are made all around us every day, a thousand times a day, by systems that we cannot even understand, still less explain, still less hold accountable. So, what I try to imagine in the book is how we can incentivize companies to put as much energy and genius into making their systems slightly more explainable and slightly more accountable as they do into making them more innovative and more powerful in other ways, because that must be our aim, to enjoy the benefit of these technologies without being completely in their thrall.

jmk

Now, that’s something that I think most people would be astonished to learn. The fact that the machine learning systems, the artificial intelligence systems, that exist today oftentimes produce results that their designers cannot explain. That they can’t work backwards and explain how their code produced the decisions that it actually came to.

So, what you’re emphasizing is the fact that because it can’t explain how it came to those decisions, we don’t know whether those decisions that it made broke different laws, for instance, employment laws. We don’t know whether it avoided certain candidates based on race or gender or other protected classes. If we use it in the court system, we don’t know whether it came to certain decisions for sentencing based on characteristics that have nothing to do with the case. I mean, we have no idea how it came to some of those decisions. Why is it that we don’t understand how artificial technologies actually produce the decisions that they make?

Jamie Susskind

Well, we do understand it at a high level. We understand how they work, how they are trained, how they are operated, but when it comes to individual decisions, it’s just incredibly complicated. You’re talking about a system that uses enormous amounts of computing power to process potentially billions or trillions or more data points, countless different variables, each with an incredibly subtle weighting. It’s not something that can be easily explained in a few bullet points or indeed in human language at all. It’s a type of processing that doesn’t lend itself well to logical linguistic explanation and that’s just inherent in these kinds of technologies.

jmk

Now, my understanding of artificial intelligence is that when we describe it as machine learning, it’s a little bit disingenuous because what it’s doing is recognizing patterns that exist. It’s looking for things that you can identify from extraordinarily large data sets that establish different patterns that a human being wouldn’t be able to identify as easily because it’s processing so much information and it’s looking over so much data and applying them in ways that are unlike the way that people would normally think.

But it’s inherently using numbers. It’s using equations to be able to figure out how different things actually relate with one another. Why is it that we can’t figure out a way to work backwards and spit out an explanation that can at least tell us how an individual decision might actually have been established? Why is it that we can’t unravel the different patterns that the machine uses for different examples?

Jamie Susskind

Because it’s incredibly complicated and involves too much data and too many connections and too much processing. I mean, don’t get me wrong, there are very smart people out there who are working on technical solutions to the problems of AI explainability, AI accountability, AI transparency, and so forth. But the long and short of it is we do find it very difficult to explain how, for instance, the brain makes an individual decision in a particular moment to do a particular thing or think a particular thing. The same is true of these artificial intelligence systems. By the way, I agree with you that terms like intelligence and learning and machine learning are slightly unhelpful because they suggest an analogy with the biological world, which doesn’t really equate.

But as I say, the way that we have established communicative norms for the justification and explanation of human decisions is the giving of reasons. So, here are facts. Here is logic based on those facts. Here is how those facts and that logic led to this conclusion. That is not how machine learning systems work. Not only is it not how they work, but you can’t even really use logic of the kind that I’ve just described to explain their outputs after the fact. The reason I say that is that sometimes we use logic as humans to explain decisions that we actually didn’t use logic to make. So, you might make a decision for an emotional or an intuitive reason and then try to find some logic for it. That process of retrospective explanation is itself really hard for machine learning systems. People are devising other solutions.

So, for instance, let’s say you think a recruitment system might have discriminated against you on the grounds of your race. You might not be able to explain how it reached an individual decision vis-à-vis you. But you may have been able to test that system over a long period of time to see whether it does, in fact, generally discriminate on the grounds of race by looking at its outcomes. So, you know how many people apply for jobs, what proportion of them are of particular racial backgrounds, what proportion of them succeed, what proportion of them fail, and how does that compare to test cases and other racial groups?

So, you might be able to give that system a kitemark or a stamp of regulatory approval, which says that this system is fit for purpose and therefore it’s very unlikely, at the very least, that it has discriminated against you on the grounds of race. That’s one way that people are proposing you might regulate these systems: by regulating them at the system level rather than at the individual decision level.
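As a rough sketch of what that kind of system-level audit could look like, the snippet below compares selection rates across groups. It borrows the ‘four-fifths rule’ from US employment-selection guidance as a pass/fail benchmark; that threshold is an illustrative assumption, not a proposal from the book.

```python
from collections import Counter

def adverse_impact_ratio(applicants):
    """Compare selection rates across groups in a list of
    (group, was_selected) records. Returns each group's rate plus
    the ratio of the lowest rate to the highest.

    Under the US 'four-fifths rule', a ratio below 0.8 is often
    treated as prima facie evidence of adverse impact -- used here
    as an illustrative benchmark only.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in applicants:
        totals[group] += 1
        selected[group] += was_selected
    rates = {g: selected[g] / totals[g] for g in totals}
    return rates, min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, 1 if selected else 0)
records = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
rates, ratio = adverse_impact_ratio(records)
print(rates)            # {'A': 0.6, 'B': 0.35}
print(round(ratio, 2))  # 0.58 -- below 0.8, so this system would fail the audit
```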

jmk

Do you feel like to properly regulate artificial intelligence, we need to actually put the brakes on some of the applications that we’re currently using for artificial intelligence?

Jamie Susskind

I don’t want to do that because I recognize and I’m excited about the enormous benefits that digital technology and artificial intelligence are going to bring for humanity. I don’t want to unnecessarily forestall or hold back that progress. For me the problem is on the other side of the ledger. I think if you have inadequately resourced legislatures and governments, inadequately educated political classes, a lack of political will and understanding in relation to this issue, then what you’re going to have is regulation that is slow, that is harmful rather than helpful, that is highly politicized where it shouldn’t be, that is in thrall to industry groups and subject to industry capture. You’re basically going to get bad laws and bad regulation.

So, what I argue for is a better form of regulation, a better form of legislation. A political system that is better able to deal with what I think is one of the great challenges of this century. So, I don’t want to hold back time and I don’t want to hold back progress. I want to bring our political processes and systems into the 21st century. Don’t get me wrong. They’ll always be playing catch up and it’s always going to be difficult. But just now they’re nowhere near where they need to be.

jmk

So, one of the solutions that you provided was to see whether a system at a high level produced results that come across as if they break the law. Like statistically analyzing the system to see if it has racism embedded within it. At least in the United States that would be extremely controversial, because sometimes that approach seems acceptable within the way that we apply the law, but at other times it’s not accepted. People actually contest that. It’s something that we debate a lot: whether or not systemic forms of racism demonstrate examples of racism that break the law. Are the laws that we currently have on the books designed to address the types of discrimination, the types of law breaking, that an artificial intelligence system would actually produce?

Jamie Susskind

No, it’s not what they’re designed to do. Most of these laws predated the technologies that we’re talking about. Can they, with some legal and judicial ingenuity, do not a bad job? In some circumstances, yes, they can. So, lots of discrimination lawyers, for instance, are already writing interesting articles about how existing concepts in discrimination law could be applied to new forms of discrimination that take place in the machine context. But there are invariably difficulties that the law does not yet cater for. One of them, for instance, is the burden and standard of proof. Most of the time, by and large, to generalize, the burden of establishing discrimination is going to be on the person alleging it. But that is very difficult if the system that they are challenging is opaque and hidden from view.

So, what people are proposing, for instance, is a difference in legal procedure whereby if you raise a credible allegation, the burden then shifts to the person who operates the system to justify and explain why there is no discrimination. So, that’s the lawyer’s answer to the problem that opacity raises. But let me give you a different example. A few years ago, Facebook patented a system which would determine the credit worthiness of a person by reference to the credit worthiness of the friends they had on Facebook. The theory was there is some statistical correlation between whether you are likely to pay your debts and whether your friends are people who have paid their debts in the past.

So, if you are friends with a lot of people who have bad credit scores, then you are likely yourself to be just a slightly less reliable person to loan money to. Now you can kind of understand that from a sort of statistical machine learning technological perspective. But from a moral and political perspective, you might ask whether it is right, whether it is fair, whether my access to a loan should truly depend on whether the people I happen to be friends with on Facebook have repaid their loans in the past. My point is existing discrimination laws that are based on protected characteristics like age, race, gender, you know, groups that have been traditionally marginalized in the past, those laws are utterly inadequate to deal with the situation that I’ve just described. They’re not made for that kind of world.

Here’s a different example. People who use Hotmail email addresses are apparently twice as likely to crash their cars. That’s something that machine learning systems have detected. But should that mean that if you go onto a website and apply for car insurance, that platform should be able to take into account the fact that you have a Hotmail rather than a Gmail email address when deciding the terms on which to offer you insurance?

Some people will say, follow the statistical efficiency. Other people will say no, things like your email address shouldn’t morally or politically have any impact on whether you can access something as important as insurance. Again, because it’s nothing to do with age or race or gender or religion or nationality, this is not something that existing discrimination law has anything to say about. Now in the UK there might be some data protection law that has some impact on that question. But in essence, what we’re doing is we’re trying to patch up old laws to match new realities.

jmk

It also encourages different types of behavior because we haven’t fully grasped how surveillance is going to shape and change the way that we interact online yet. I think future generations will probably behave very differently online than current generations. Already when I’m on social media or if I am watching videos on YouTube, I’m interacting with the algorithm in the back of my mind. Whether I like an image or look closely at a post is sometimes based on whether or not I want the algorithm to continue giving me those types of posts and those types of recommendations. I’m afraid to interact with some posts, because I think that the algorithm is going to give me more of that when that’s not really what I want to see on a regular basis.

So, I wonder if the lack of regulation is going to make it so that some of these ideas, some of these things that technology can recognize, are going to be irrelevant in the future if we don’t create some basic guardrails around it. Because people will start acting very differently, sharing significantly less information, if we don’t provide some basic rights, some basic understanding of these rules beforehand.

Jamie Susskind

Yeah. So, I agree with all of that, and one of the questions that I ask, not just in The Digital Republic, but in my first book, Future Politics, is what effect does it have on us just knowing that we are being watched, just knowing that there is data being gathered about us. So, one thesis, which you might trace back to philosophers like Foucault and Bentham, is that when you know that you are being watched, you are less likely to do things that are perceived as sinful or shameful, because you don’t want to be caught or you don’t want to be known as a person who did that thing.

A very common example that I think of is cars, which track where you go. If you shared a family car 30 years ago, you could, if you wanted, drive that car to a place that you didn’t want your other half to know about, whether you were having an affair or going to buy drugs or doing something else that you wouldn’t want your partner to know about. Then you could drive home and no one would be the wiser. These days, if your journey is tracked, whether it’s on your phone or your car or any other device you happen to have on you, that kind of anonymity of movement is gone.

There is a significant risk that someone will be able to point to where you were at a given time and that risk, the argument runs, makes you less likely to do the thing in the first place. So, without anyone telling you to, you’ve changed your behavior. I think it is too early on a kind of empirical level to say exactly how these technologies will change the way that we behave. But one suggestion is that actually our norms might adapt as fast as the technologies.

What do I mean by that? I mean that my generation, let’s say people in their kind of early thirties, did a lot of stuff online in 2009-2010 posting Facebook photos and the like which we later regretted. You know, photos of bad behavior, silly behavior when you were drunk, or sometimes even more serious stuff. The use of phrases that have become out of fashion. The use of terms that are now considered unacceptable, all of which done in good faith, all of which done in the flush of youth. That was a period before people fully appreciated what it would mean for most of their life to be captured and caught and stored in permanent or semi-permanent form.

Now, younger people are going to take one of two courses. They’re going to moderate their behavior and post less of it and seek to record less of it. I think some do. There is also, I think, a relaxing of the norms of shame and at least some of the norms of intimacy that may have appealed to my generation and those who came before. So, younger people are much more likely to post photos of themselves in intimate spaces or comments that reveal intimate thoughts and feelings. There is a kind of increased openness, a personal openness that comes with the social openness of technology. This stuff is already fascinating and we’re really in the early days of it.

I think it is hard to say exactly how human society will change, but what I’m pretty sure of is that a fundamental rearrangement of surveillance in society such that almost all of our actions, interactions, transactions, are going to be caught and recorded in data in some way is very likely to have an effect on the way that we live our lives.

jmk

So, Jamie, your book is called The Digital Republic. What does a republic convey that a democracy does not?

Jamie Susskind

Well, let me explain to your listeners that when I talk about republican ideas and the idea of a republic, I’m not talking about the modern capital-R Republican Party in the US, although obviously that can be traced back to the tradition that I am seeking to place myself in. I’m talking about a set of ideas that reaches back to the Roman Republic and has emerged and been important at many of humanity’s turning points: wars and revolutions, constitutions, declarations, et cetera. The central political idea of republicanism is that it is wrong for us to be at the mercy of the arbitrary power of anyone in society. Take, for instance, the example of kings and monarchs.

Republicans didn’t just want nicer kings or nicer queens. They wanted the abolition of the idea of kingship itself, because they considered themselves unfree so long as a king or queen could, if they wished, treat them in an undesirable way, irrespective of whether they actually did. So, a republican approach to employment and labor law says we don’t just want nicer bosses. We want rights in the workplace. A republican approach to slavery in the past said we don’t just want kinder or more beneficent slave owners. We want the removal of the slave-master relationship altogether. A republican approach to gender norms in marriage didn’t just want nicer husbands, but the abolition of the laws that gave men power over women.

The philosophy of republicanism, in the way that I advance it in the book, says that there is no point or little point moaning about Mark Zuckerberg or Elon Musk and hoping for someone better, or saying I preferred Twitter when it was under its previous control, or I wish Mark Zuckerberg would depart the scene and someone wiser would take over. Because the problem in both cases is not Zuckerberg or Musk, but the idea of a Zuckerberg or Musk. The idea that simply by virtue of owning and controlling a particular technology, someone wields arbitrary or unaccountable power, which can touch every aspect of our liberty and our democracy. So, to be a digital republican is to want to see laws that hold power accountable rather than just hoping that the power is used in a benign and wise way.

jmk

So, what I’m hearing from you is that it’s not just about the ends of the laws. It’s also about how we get there, about the process. It’s not just about what the regulatory regime looks like, but how we design that regime to regulate technology in the future as well.

Jamie Susskind

That’s exactly right. It’s a deep philosophical argument, which says we are not truly free if we are subject to the power of others whose actions we cannot understand, still less control. We live in a state of unfreedom so long as that is the case.

jmk

So, Jamie, we’ve been talking a lot about different laws. We’ve been talking about some of the problems. We’ve also hinted at some of the solutions. But can you offer a big picture idea of what a better regulatory regime for technology would actually look like?

Jamie Susskind

Yeah. So, the book contains a philosophy of regulation and then some applications of that philosophy. I’ve already sketched out what the republican philosophy sounds like and in the book I offer a menu of different solutions to various different regulatory problems. Now, there is no one size fits all answer because the laws that govern social media might be different from the laws that govern robotics or the laws that govern artificial intelligence systems, et cetera. But in the book, I try to lay out a series of different types of law and types of institutions that we might see.

So, I’ll give you some examples. I think it is wrong that senior people within the tech industry are not subject to personal obligations and duties like lawyers, doctors, and pharmacists are. I would place duties on particular people in the tech industry to make sure that they conform to certain standards of integrity and probity and professionalism. That’s one. Two, I think that certain products would benefit from standardization and regulation like we have for food or architecture or engineering or factories. I think that certain products should be subject to testing and then subject to regulatory oversight and licensing and approval. So, a system of rules and standards that more closely approximates how we do it in other parts of the economy.

I would like to see new regulatory powers and new regulatory bodies and institutions that are able to investigate, that are able to oversee, that are able to test the conformity of digital technologies with the law. Again, that shouldn’t be controversial. We have those in every other area of economic life. But we don’t yet have an adequately resourced body of people who are trained to do that. I would like to reform antitrust law to move it away from a narrow economic conception of harm to allow antitrust regulators to take into account much broader social aims as well, like media diversity and concentration of power. Some people will say that’s not antitrust law. Fine. I don’t care what you call it. I think it should be a power that the government or regulators have.

I would like to see some regulation of social media platforms, but at the systemic level. That is to say, I would never want to see the state or the government involved in individual decisions about particular bits of content or particular individuals. The scale and difficulty of regulating social media platforms makes that basically impossible and impracticable. What I would like to do is something along the lines of what the UK government is considering, which is a requirement that social media platforms have in place adequate and proportionate systems for meeting certain social aims that are laid down in Parliament. So that’s just a flavor of some of the things that I propose. You know, all of it’s difficult. All of it’s ambitious. All of it’s new. But this is the stuff we’ve got to start thinking about.

jmk

Well, Jamie, thank you so much for joining me today. I do want to plug the book one more time. It’s called The Digital Republic: On Freedom and Democracy in the 21st Century. I’ve really enjoyed talking to you. I think that it’s such a different perspective to be talking about technology in terms of the types of laws that we should be applying to it, rather than just talking about the general concerns that we have and the challenges that we face about technology. It’s a much more specific book. But at the same time, it’s an incredibly ambitious book. So, thank you so much for writing it. Thank you so much for joining me today.

Jamie Susskind

Justin, thank you so much.

Key Links

Follow Jamie Susskind on Twitter @jamiesusskind

Learn more about Jamie Susskind

Democracy Paradox Podcast

Samuel Woolley on Bots, Artificial Intelligence, and Digital Propaganda

Ronald Deibert from Citizen Lab on Cyber Surveillance, Digital Subversion, and Transnational Repression

More Episodes from the Podcast

More Information

Democracy Group

Apes of the State created all Music

Email the show at jkempf@democracyparadox.com

Follow on Twitter @DemParadox, Facebook, Instagram @democracyparadoxpodcast

100 Books on Democracy

Democracy Paradox is part of the Amazon Affiliates Program and earns commissions on items purchased from links to the Amazon website. All links are to recommended books discussed in the podcast or referenced in the blog.
