Daron Acemoglu on Technology and the Struggle for Shared Prosperity

Daron Acemoglu

Daron Acemoglu is the Elizabeth and James Killian Professor of Economics at MIT. He is coauthor (with James A. Robinson) of The Narrow Corridor, Why Nations Fail, and Economic Origins of Dictatorship and Democracy. His latest book (with Simon Johnson) is Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity.

If you have this model of AI, which is that geniuses design machines, and those machines or algorithms are going to scoop up all the data and make better decisions for you, that’s fundamentally anti-democratic.

Daron Acemoglu

Key Highlights

  • Introduction – 0:33
  • Technology and Progress – 2:06
  • Productivity – 14:01
  • Artificial Intelligence – 24:42
  • Shared Prosperity – 34:31

Podcast Transcript

Daron Acemoglu is among the most recognizable names in the scholarship on democracy. He has authored numerous books and articles, many of which are coauthored with longtime collaborator James Robinson, including The Narrow Corridor, Why Nations Fail, and Economic Origins of Dictatorship and Democracy.

He has a new book out coauthored with Simon Johnson called Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. The ideas in this book are ambitious, weaving together technology, economics, and politics. Our conversation touches on artificial intelligence, economic inequality, and, of course, democracy.

If you like this episode, please consider supporting the podcast as a premium subscriber. For just $5/month you can access a growing catalog of bonus episodes. The most recent discusses the elections in Turkey with Michael Wuthrich. Follow the link in the show notes to become a premium subscriber. As always, if you have questions or comments, I’m available at jkempf@democracyparadox.com. But for now… This is my conversation with Daron Acemoglu….

jmk

Daron Acemoglu, welcome to the Democracy Paradox.

Daron Acemoglu

Thank you, Justin. It’s my pleasure to be here.

jmk

Well, Daron, your book was amazing. I mean, the book is Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity and it tackles a lot of the questions that we talk about on the podcast, including technology. It even gets into a lot of questions about democracy, which, of course, is one of the big questions in your research. But I want to start with a question that really caught me off guard and it’s central to the book itself. Why do we assume that technological progress benefits everybody?

Daron Acemoglu

There is a very good reason for it. In fact, let me give you two very good reasons for it. That still doesn’t make it true, but they are two excellent reasons. One is technological progress means humans expand what they do. They can do more things and, in fact, our visions of technology come partly from our struggle against nature. So, we think of technology as better ways of protecting ourselves from nature, from natural hazards, and controlling our environment. So, it must have some benefits. Second, historically, we are in a much, much better place today than we were, say, 300 years ago. A large part of that is thanks to technology.

So, the industrial revolution that started sometime around the middle of the 18th century in Britain unleashed a set of forces that have made our generation and our parents’ generation much, much more comfortable, much, much healthier, much, much more prosperous, much, much more secure than people who lived, say, in the middle of the 18th century. So, these two together create a natural tendency among social scientists and all kinds of commentators to believe that there must be an arc of technology bending towards good things.

But the key argument of the book is that technology is what you make of it. You can actually do great things with technology and you can do awful things with technology, and the benefits that we are enjoying right now have not been automatic. They were the result of a protracted struggle in which we had to build new institutions, change the balance of power in society, develop new notions of what we wanted, and redirect technological change towards things that were more beneficial for workers.

jmk

So, let’s dig into that a little bit deeper. Why don’t we start with some examples. You note that even the earliest forms of technology created different forms of, I think the best way to put it is, inequality. Can you explain how that happened and how it affected people in their actual lives?

Daron Acemoglu

I think that is just absolutely fundamental and many people have said it in one form or another, but I think one person who nailed it is H.G. Wells in The Time Machine, which we quote at the beginning of the first chapter. You know, he said, ‘Well, you thought technology is about controlling nature, but it’s as much about humans controlling humans.’ That’s the double role of technology. You can do many things with it and once somebody controls technology, what’s there to stop them from putting themselves onto a higher pedestal, giving themselves greater status, greater wealth, greater power? So, it’s always been like that. That’s why it’s a 10,000-year, a 100,000-year struggle, except that we have better data. The issues of automation and political struggle are clearer in the last thousand years.

So, the Industrial Revolution is as much about inequality as progress. Later we turn it into progress, but in the early stages, they had new coal mines, steam power, spinning machines, weaving machines, new factories, and all of those increased productivity, increased what the British economy was producing, but at the same time enriched a small group of people and in fact deepened the hardship of a very large fraction of the population who did not experience much real income growth, who saw their autonomy decline, their working hours lengthen, and their living conditions worsen.

jmk

You’re kind of creating a dichotomy where we either use technology to make things better or we use technology to make things worse. Is it really so either/or? Or is it the case that sometimes technology is used in harmful ways, but it still makes a lot of people’s lives better, even though maybe it produces enormous amounts of inequality?

Daron Acemoglu

Absolutely, 100%. It’s not a dichotomy. It’s much more in shades of gray and it’s not just rich versus poor, capitalists versus workers, elites versus citizens. There are many, many different groupings. So, many technologies make some groups better off, some groups a little bit better off, and some groups worse off. There are many grades of this, but that makes it no less interesting. It really highlights the detailed nature of these changes, which requires deliberation, and hence why democracy is critical. We need deliberation about the big changes happening in society, because they’re going to have complex effects.

So, if you look at the period from the end of World War II to about 1980, you had disruptive technological change during that era as well. Manufacturing became much more mechanized. You have the early digital technologies such as numerically controlled machinery being introduced in many industries. You have new services emerging and these are all creating various types of distributional effects. But if you look at the macro picture during that era, what you see is a remarkable one. The economy is growing very rapidly and it’s a shared type of prosperity. So, for example, the real earnings of low-education workers are growing even faster than those of college graduates. Wages are growing at a slightly faster rate than average productivity or GDP. You have unions and democratic processes much more alive than they are today.

So, it’s a very different type of experience, but it doesn’t mean there weren’t any losers. There were people who lost their jobs. Some industries declined. Some workers couldn’t find jobs and became long-term unemployed. But on the whole, it was a much more shared prosperity experience than what we had been used to before. Today we have the same tensions, but on a much grander scale. So, the next wave of digital technologies, for example, office automation and then robotics-type processes in manufacturing, did lead to bigger displacement. Some of the new jobs, new tasks, that were central for the growth of the 1950s and 60s didn’t materialize. So, you see that workers who used to specialize in manufacturing and blue-collar occupations actually lost out while other groups of workers and managers and bosses actually made more money during this period.

jmk

So, before we get to the idea of shared prosperity, I want to dig even deeper into the ways that technology developed. One of the things on my mind is some of the earliest forms of technological development, like during the agricultural revolution. It’s well documented that hunter-gatherers actually lived longer and had better nutrition. Going deeper into the agricultural revolution was really a step down, and it took many generations, a very long time, before people got back to that same quality of life.

But at the same time, when I think through the process of how the agricultural revolution would’ve happened, I’m thinking of the fact that I don’t know that anybody recognized the downsides when it began. I mean, people were just planting things to begin with and continued to migrate and move around. They saw it as a benefit that they could have a constant source of food available at the same locations and then they found that they could settle down into certain areas and have a consistent source of food. They didn’t think of all of the consequences of that. Is that the same thing that we see happen again and again throughout history, that we don’t recognize the consequences when we develop these different forms of technology?

Daron Acemoglu

Well, I like the way you described the early technological revolutions related to agriculture and settled agriculture, because in doing so, you’ve avoided mistakes that many popular authors make about that era. So, what we know, to the best of my understanding of the literature, is that some groups of hunter-gatherers, which I have no reason to think were unrepresentative, were healthier, had a more balanced diet, were taller, and most probably worked far fewer hours than agricultural workers during the age of big empires such as Egypt and the big Mesopotamian states, 6,000, 7,000, 8,000 years after the process of transitioning to agriculture started. What we also know is that there were many different ways in which that agricultural transition took place. There were probably many factors at play.

So, for instance, two well-known sites in Anatolia, Çatalhöyük and Göbekli Tepe, date from more or less around the same time, give or take a thousand years. But one looks to be a very egalitarian, less socially stratified, early agricultural society. The other one seems to have been very hierarchical and in fact may have become hierarchical even before it became fully settled. So, that sort of shows there were many different ways of doing it and I don’t know what people understood at the time they started the transition. Almost surely one important element was the one you described. It’s been documented from a number of sites that people became semi-sedentary. They started storing food and coming back to the same places where they had some good track record of semi-domestication or where the fertility of the soil was good.

At the same time, there was what anthropologists used to call complexity emerging, meaning that there was greater stratification, like in Göbekli Tepe where there’s a religious caste emerging. There are people who become more like elites for political or religious reasons, such as shamans. So, it’s a complex process. We don’t know exactly what happened during that time period. Undoubtedly, some of it is exactly what you’re saying, which is that people didn’t understand what they were getting into, but some of it is that they were coerced during later periods of technological transitions. There are several examples of state building going hand in hand with changes in production technologies. Sometimes that’s because there is a political revolution and somebody centralizes power in their hands and then induces or coerces others to be the producers while they can be the elites enjoying the fruits.

So, I think both of those are relevant for today. I think new economic relations most of all require some sort of persuasion. You need to convince people that this is what they should do. This will be good for them. It’s the right thing to do. It’s the just thing to do. That’s why conversations about the future of technology, the future of work, are so central to our current moment. There is some amount of coercion. If you don’t have any other options, you’ll have to go with what is being offered to you. There is always the threat of force in the background. Today, we have that threat of force very much in the background in modern industrialized nations. So, persuasion is even more important.

jmk

I find it fascinating that you write so much about topics like politics and democracy, and now technology, because your background is actually as an economist. So, I want to ask you a question from your book that’s about economics. You write, “The notion of marginal productivity is distinct from output or revenue per worker. Output per worker may increase while marginal productivity remains constant or even declines.” That’s a lot of words. But at the end of the day, you’re saying that the idea of productivity is not necessarily linked to output per worker. How are those two concepts actually distinct?

Daron Acemoglu

This is the only part of the book that requires a little bit of pausing and thinking, because I think even the word productivity is open to misinterpretation. So, there are many famous economists, and I will not name them, who make claims on this which I think are incorrect. Those claims are rooted in the fact that they’re using productivity in a number of distinct meanings in the same sentence. The best way I can explain this is actually via an example that we use in the book. It’s sort of the joke that the factory of the future will have two employees, a man and a dog, and machines. The man is there to feed the dog and the dog is there to make sure that the man doesn’t touch the machines. So, that’s like a hyper-automated future.

Now, think of the productivity measure that many people instinctively are drawn to in that factory. It’s output divided by the number of employees. That’s huge. You have one employee and this factory is producing thousands of gadgets, millions of gadgets. So, if you go by productivity, you’ll say this is great. This worker is hugely productive. But that’s an illusion. This worker is not productive. That’s the joke. This worker is completely dispensable. He’s there just to feed the dog. They can easily get rid of the worker and the dog and nothing will change in the machines in the factory. So, his marginal productivity, meaning his contribution, is tiny.

So, in the market economy, unless there are other considerations such as fairness, rent sharing, et cetera, which there are, and we emphasize them in the book, but in the market economy, the purest model that we have in our textbooks, that worker should be paid a tiny wage. His marginal productivity, his incremental contribution to output, is very small. This distinction is very important because people think, ‘Oh, well, we’re going to automate. We’re going to produce the same output or more output, so we need fewer workers. That’s great for workers. They become more productive.’ No, they haven’t become more productive in this incremental, marginal-productivity sense.

This becomes even more important because if you think of technology as a broad concept, not just automation, there are many things that we can do to make workers more productive marginally. We can give that worker better tools. He can do better designs. He can be creative. He can be adaptive. He can do the maintenance of the machines. He can do the repair of the machines. He can do the things that machines cannot conceive or cannot do. But this worker is not doing any of that. His only job is to feed the dog. So, that’s the choice. What do we do with the tools?
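
To put the man-and-dog factory in symbols, here is a minimal sketch in standard textbook notation (the notation is for illustration and is not taken from the book). Write output as $Y$, machines as $K$, and employees as $L$. The two notions of productivity are

$$\text{average product of labor} = \frac{Y}{L}, \qquad \text{marginal product of labor} = \frac{\partial Y}{\partial L}.$$

In the hyper-automated factory, output depends only on the machines, $Y = F(K)$, so with $L = 1$ the average product $F(K)/1$ is millions of gadgets while the marginal product $\partial Y/\partial L = 0$: remove the man (and the dog) and output is unchanged. In the simplest competitive-market model the wage tracks the marginal product, not the average, which is why that worker would be paid almost nothing.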

jmk

That’s fascinating, because you’re distinguishing marginal productivity, the amount of output that a person adds to the productive process, from productivity by itself.

Daron Acemoglu

Yes. When economists talk about productivity, they sometimes mean exactly marginal productivity and they sometimes mean output divided by the number of employees, but the two are very distinct. It so happens that in many of the simplest models and teaching tools that we use in economics, those two are identical or very, very tightly linked. That is one of the reasons why sometimes people, without realizing it, jump to the huge assumption that the two are going to comove in reality as well. Whereas I think a lot of evidence shows that they don’t.

jmk

Yeah. I think the best way to think about marginal productivity would be to ask what happens if we add a second worker to this scenario. How much more output is there?

Daron Acemoglu

Yeah, he’ll look after the dog too. Nothing will happen in this example. Exactly.

jmk

Yeah, so that’s a great example. That helps us understand that if the machines are doing all the work, if the people at the top are the ones that are making everything happen, then adding more workers is going to be not only unnecessary, but just a cost that eventually you need to find a way to get rid of.

Daron Acemoglu

Exactly. But I think it’s also self-fulfilling. So, one of the things that we mentioned, and perhaps we should have spent more time on this, is that there is a different vision that managers can have. As a manager, I can think of labor as a cost. If the main role of labor in my mind is as a cost, and for many companies it’s a major part of the cost, then I’m going to try to cut it. I’m going to try to reduce it. Or I can think of labor, my employees, as a resource.

They are the blood and soul of the company. If I think of them that way, I’m going to try to find ways of making them more productive. Again, the two are not the same. Making that lone worker more productive and hiring many like him would necessitate creating tools for that worker. But if I view labor as a cost, then let me get rid of the dog and the man.

jmk

It reminds me of a story about Elon Musk: at Tesla he had been trying to automate things so heavily, trying to find ways to let go of people and keep people out of the process. But eventually he realized that all of this automation was making things less productive.

Daron Acemoglu

You know, Elon Musk is a very brilliant character. I mean, anybody who writes the history of our age is going to feature him. He’s a brilliant entrepreneur and he is the epitome of the hubris of our age. He’s been present when many mistakes were made and many inventions were made. And he is, in his own bumbling way, kind of honest too. So, yes, Tesla was an exercise in excessive automation, but then he came out and said, ‘We made a mistake. Humans are underrated. We excessively automated.’ He was forced to say that because Tesla was not able to produce the cars that it was promising to people.

If he had learned from history, he would’ve known this. The Japanese made the same choices early on when they started feeling demographic pressures. They were way ahead of everybody else in introducing robots. They are still way ahead of everybody else in robots today. But they soon realized that if you automate and don’t have humans in the loop, many mistakes get made. Quality declines. New products become harder to create. So, they built a very different system where robots and humans work together. Humans remain in the loop. They remain the decision makers.

jmk

Can you tell us a little bit more about what humans bring to the table that robots, machines, computers, technology really can’t?

Daron Acemoglu

Well, I think this is a contentious area, so I will give you my perspective. But some people will disagree and I’ll tell you why they will disagree as well at the end, which is partly about your conceptualization of what the human brain is about. But in my mind humans are completely special. They are unique in their diversity. You cannot reduce humans to saying they do this one thing better. No. They do many things differently and better. So, the skill that a carpenter brings, a gardener brings, an electrician brings, a designer brings, a tailor brings, I think, those are all unique and we should not try to erase them. If you look into it, they are unique in the way that they bring creativity and difference to the table.

So, a tailor is unique, because he or she is trying to find creative solutions and in the process inventing new things and new designs. A carpenter does exactly the same thing. It is a complete problem-solving task. If we try to automate these things, we’re not going to be able to apply the same type of creativity. So, creativity is key, but there’s another aspect that I think is very central for human interaction: social intelligence. We as humans enjoy empathy, enjoy social interaction, communication, the give and take, the counter-arguments, the group decision making, the group interactions. You can say, ‘Oh, those are not relevant. We should erase them.’ But according to which welfare criterion? This is the basis for two key arguments in the book, one of which we develop fully.

The other is that the whole conceptualization in terms of machine intelligence is essentially an ideological choice. We are immediately elevating machines to the level of humans and judging them on the basis of how human-like they are with just that one word, intelligence, in the way that Turing conceptualized it, whereas I think what we should want from machines is usefulness. They should be useful to humans. A calculator is useful. It’s not intelligent. And we shouldn’t say, ‘Oh, well, we can make things more efficient by having workers not talk to each other and not communicate.’ That is just a very dystopian way of thinking about it. If we are trying to improve human conditions, then the fact that people enjoy that camaraderie is actually important. That’s one of the things that we want to maintain, not erase with automation.

So, I think it’s a very different perspective. We want machines to be useful to humans rather than to meet some abstract notion of machine intelligence. This leads into the comments I made about the abstract question of what humans can do and what machines can do. There is one more ideological step here, which again was most clearly taken by Turing and, before him, by Church, which is to think of the human mind as a computer before we had computers. The idea is that everything is just computation: if the human mind is a computer, thinking is just step-by-step computation. That’s where our consciousness comes from. That’s where our creativity comes from. That’s where analytical design comes from. That’s where hand-eye coordination comes from. So, we can also redo all of these things with machines.

That was the Turing way of thinking of computation, the Turing way of thinking of machine intelligence and all of these things. So, once you go down this path, it becomes very difficult to say humans have creativity and machines don’t because if the human brain is a machine, we can build a bigger machine or just replicate the human brain. I think that really misses what’s unique about humans.

jmk

So, what do you think artificial intelligence is? Because that’s something that I’ve been thinking a lot about. I’ve talked to a few different authors about it and I’d love to get your perspective. I mean, what is artificial intelligence?

Daron Acemoglu

Well, in one of the versions of the book, we had a definition of artificial intelligence, but none of the experts could agree with it. They couldn’t agree on what was wrong with it either. Some said it was too narrow. Some said it was too broad. I think artificial intelligence is a very difficult term to define. Russell and Norvig in their textbook start with 12 different definitions of artificial intelligence. I think the one that most resonates with people, and this is the one that we go to, is a very non-specific one: human-level capabilities. But you can see that’s a fraudulent definition. You know, what is that? How does it help you define that?

But I think the difficulty is actually inherent in the mistake that we are making here of emphasizing intelligence. I think there are different types of AI. There are different types of algorithms and they can do different things. Generative AI, for example, is one specific type of algorithm. Leave aside the question of whether GPT-4 is intelligent. You know, it’s very clear what generative AI is doing, except it has hundreds of billions of parameters. So, we can’t describe it and its designers don’t understand what it’s doing. But at an abstract level, it’s clear what it is and that it’s going to be useful for performing a range of tasks. I think if you think about it that way, AI is just a continuation of digital technology. So, I don’t see a sharp demarcation line between digital technologies and AI.

Definitely AI expands, or rather many AI-type things, facial recognition, language recognition, classification, image recognition, expand on things that we could have done or that we did with digital technologies 10 years ago. But it’s just a continuation of digital technologies and digital code and algorithms on data. Still, the question is, ‘Is that what humans do as well?’ To some degree, that is what humans do as well. But I think humans do other things too. They are much more versatile in the types of cognition that they engage in.

jmk

When I think about artificial intelligence, it seems like what it’s doing is pattern recognition based on large data sets. It’s recognizing patterns that people wouldn’t see because we don’t have as much information. Sometimes we might see them, like experts would know those patterns just intuitively. But then it’s replicating those patterns.

Daron Acemoglu

Absolutely. That is what the majority of current AI does. It is pattern recognition and classification on the basis of very, very large training datasets. That’s why AI is data hungry. AI is imperialistic. It is in the nature of AI and all of these big tech companies to take other people’s data, because if they didn’t, they wouldn’t be able to do anything. Now there are other types of AI. AlphaGo and AlphaZero don’t do that. They’re not data driven. So, AlphaZero and AlphaGo, for example, use a huge amount of computational power and some clever algorithmic tricks, but they are not learning how to play go and chess from other people’s data. They have the rules in themselves, and then they’re playing themselves. They’re generating their own data.

So, it’s a very different architecture, but it only works for things that have very, very clear rules, such as a parlor game. You couldn’t use AlphaZero in situations in which there is a human element. You couldn’t use the AlphaZero type of architecture for predictive policing, because predictive policing is against other humans.

jmk

Again, though, even the AlphaGo that you’re describing is still pattern recognition, because it understands what the rules are. So, it’s noticing patterns even when it’s playing against itself. Then it’s replicating those to come up with new strategies.

Daron Acemoglu

Yeah, it’s generating its own data and then it’s learning on the basis of reinforcement learning type of feedback.
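
To make ‘generating its own data’ concrete, here is a minimal, self-contained sketch: a toy tabular learner for tic-tac-toe, written for illustration only (AlphaZero itself combines deep networks with Monte Carlo tree search at vastly greater scale). The program is given nothing but the rules, plays against itself, and nudges its value estimates toward each game’s outcome:

```python
import random
from collections import defaultdict

# The rules: the eight winning lines of tic-tac-toe.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # learned value of each position, from X's perspective

def choose(board, player, eps=0.2):
    """Mostly pick the move the current estimates like best; sometimes explore."""
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < eps:
        return random.choice(moves)
    def score(m):
        nxt = board[:m] + player + board[m + 1:]
        return values[nxt] if player == "X" else -values[nxt]
    return max(moves, key=score)

# Self-play loop: the only training data is what the program generates itself.
for _ in range(20_000):
    board, player, history = " " * 9, "X", []
    while winner(board) is None and " " in board:
        m = choose(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    w = winner(board)
    result = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
    for state in history:  # reinforcement-style update toward the game's outcome
        values[state] += 0.1 * (result - values[state])
```

Nothing here is human game data: the rules plus the outcome signal are enough, which is exactly why this approach only transfers to domains with rules as clean as a parlor game’s.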

jmk

The reason why I’m bringing it up is because I think when we start to think about it that way, we start to understand the limitations of what artificial intelligence does. Because I think there’s almost a myth that artificial intelligence is like magic, that people think it can just do anything we want it to. But at the end of the day, it’s all math-based. Then again, if we think that humans are just thinking based on patterns and based on math, then maybe artificial intelligence can replicate humanity.

Daron Acemoglu

That’s exactly it. Unfortunately, though, and this may be a bit hard to admit, the problem is we don’t understand how humans think. I think there are many layers to human thinking. There are parts of it that may be emergent. That is, parts of it may be outside the rules of simple statistics and deterministic mathematics. There’s a lot of analogy drawing. There’s a lot of over-generalization that gets mixed in and then sorted out over time. Noam Chomsky has been making this point as well. He doesn’t get everything right either, and I think he’s been criticized by some AI specialists and language specialists, but on one point he’s right. The way that a child learns language is fundamentally different from, and in some ways superior to, anything we’ve seen from machines.

jmk

It also reminds me of the work of Dan Kahneman and his co-author. I’m forgetting the name off the top of my head.

Daron Acemoglu

Amos Tversky.

jmk

Yes.

Daron Acemoglu

Who unfortunately died young.

jmk

Yes, and their big insight was the fact that people don’t think mathematically, which would challenge the whole idea that AI can ever completely replicate the way that humans think, because we don’t think in terms of traditional mathematical concepts.

Daron Acemoglu

Right. I think, you know, Kahneman is an amazing scholar from whom I and many others have learned a lot. But there are two lines of argument in that body of literature led by Kahneman and Tversky that cut in opposite directions. One is that human thinking is complex and that complexity may not be what current AI programs do. But on the other hand, they emphasize the systematic mistakes and fallibilities of humans. That’s actually fuel for the AI people, because they say, or they think, ‘Humans are so imperfect. We can build better versions of humans.’ That emphasis on human shortcomings then triggers the next round of let’s-use-machines-instead-of-humans. There is a feedback loop there that could have some unforeseen consequences.

jmk

Yeah, but in some ways, AI is also using heuristics to come up with the patterns that it’s using.

Daron Acemoglu

Absolutely, and it’s also getting some of the human mistakes into its dataset and amplifying them. I mean, look at large language models. Despite the best efforts of their designers, they’ve been trained on Reddit, and once you’ve been trained on Reddit, how do you avoid the worst types of human biases and mistakes?

jmk

So, you actually mentioned in the book that AI produces a lot of so-so technology. Can you explain what you mean by that?

Daron Acemoglu

Yeah, I didn’t mean to say that everything AI will produce is so-so technology. In the book and in my other writing, what I’ve tried to emphasize is that AI is a tool that could be very useful to humans and could be very inspirational in terms of some new things that it can do. On the other hand, I worry that many applications of AI so far have been so-so technologies in the following sense: they do the things that humans do quite well a little bit cheaper, and sometimes a little worse, and as a result, they don’t really improve aggregate productivity that much. But they create a lot of displacement. They create a lot of distributional effects.

So, the best example you can think about is customer service. AI is now used massively in many customer service functions. It just doesn’t work very well. Everybody’s frustrated. I don’t know of anybody who says, ‘I wish I could get the AI rather than the human.’ It transfers the costs to the user. Rather than get your problems solved, you spend more time getting problems not solved. But it’s a cost saving for firms. But how much? None of the firms are becoming massively more profitable because of customer service. They’re becoming a little bit more profitable. They’re cutting costs a little bit. That’s what so-so technology is.

Compare that to something like the Ford Motor Company’s complete revolution of how the factory was organized, how electricity was used, how the interchangeable parts system turned into the modern manufacturing structure, how engineering, repair, maintenance, and design tasks were integrated into the production of machinery and cars. Those were revolutionary because they created a lot of tasks, a lot of new products, a lot of new avenues. We don’t do that with so-so technology.

jmk

At the same time though, some of that technology can actually be a cost, because you give up on something when you set aside human labor. Customer service is the best example, where some companies have actually made it a competitive advantage that you immediately get ahold of a person when you call the line, and they advertise that.

Daron Acemoglu

Absolutely. You lose something. You gain something. I mean, we don’t have the data to do this sort of work, because companies are not sharing that sort of very granular data. But if you had the data, my guess would be that there is some high fraction, 50% or so, of moderately successful applications of AI and another 50% or so of not-so-good or loss-making applications of AI. You increase the costs, but you don’t improve product quality, and we could learn a lot about how to do it better. I think there is a promise of turning AI, especially generative AI, into things that are not so-so. But I worry that’s not the direction we’re going.

jmk

So, let’s shift over to what you described as shared prosperity. At the beginning of the book, you have a term that you call the productivity bandwagon. Can you explain what that is?

Daron Acemoglu

Yeah, the productivity bandwagon is really the essential element of where we started, which is why technological progress is believed to be so beneficial to pretty much everybody in society. The idea in some economic models and some economic thinking is you introduce new machinery that makes companies more productive and as they become more productive, they want to go out and hire more labor. They want to expand their scale and as they go and try to hire more labor, so do their competitors. That bids up wages and that’s the productivity bandwagon. Everybody jumps onto the bandwagon.

But going back to our discussion at the beginning, you see that won’t happen when you increase output per worker but you don’t increase marginal productivity, because firms will say, ‘We can expand our operations by building more machines and all we need is one man and a dog, so we don’t need more labor.’ So, the productivity bandwagon requires that automation isn’t all we do with technology. We’ve always done automation. We’re going to continue to do automation. We can get a lot of good from automation. But if we just do automation, if most of the technologies are just going after automation, eliminating humans, that’s a problem.

That’s why the 50s and 60s are different from the 90s and 2000s. In the 90s and 2000s we went all in on automation, whereas in the 50s and 60s, we automated, but at the same time we created new tasks, new products, new functions.

jmk

One of the big points that you make in the book though is that creating a successful productivity bandwagon involves a number of different policy choices, a number of different choices that we make in society. What are some of those choices that we made in the 50s and 60s that actually produced a successful productivity bandwagon?

Daron Acemoglu

I think there are two aspects to the productivity bandwagon and the choices are related to both of them. One is that new technological capabilities should expand what humans do so that their marginal productivity increases. So, that’s one, not just automation. Second is an institutional and social set of choices so that workers get a fair share of what they produce. You can have a failure of shared prosperity when either of these two pillars breaks down. So, you can have all automation and that’s not going to generate shared prosperity or you can weaken labor so much that even when labor becomes more productive, you don’t need to pay workers more. The choices relate to both of them.

One is how do we use our knowledge, our amazing human understanding of advances in scientific knowledge? Do we use it just for automation or do we find ways of empowering workers and enriching the productive process for them? The second is institutional choices. Does democracy work? Is there an oligopoly of firms that push down wages? Are unions representing worker voice? All of those are institutional choices about countervailing powers and about who is organized in what way, and they are going to be important for how the gains generated by technological advances are shared.

jmk

I think this gets to something that’s much more complex. Because, on the one hand, some people have argued, a lot of people have argued, that if we can just create as much productivity as possible, we can just redistribute those gains and everybody will end up benefiting in the end. You’re not arguing that.

Daron Acemoglu

You know, that argument is so deceptively attractive that it recurs on both the left and the right. On the right, it’s the sort of hyper-individualism of Silicon Valley, for example Peter Thiel’s. You know, let the gales of entrepreneurship work in the age of AI. We’ll generate wonderful machines that may create inequality, but in the end everybody will benefit. On the left, the title of a book that came out in the UK, written by a former advisor to Jeremy Corbyn, summarizes it: Fully Automated Luxury Communism. So, we automate all the work and we create a new type of communism where we generate all of that amazing output and we distribute it among everybody in society. I think both of those are flawed.

jmk

So how would you imagine us doing that right now?

Daron Acemoglu

Give up that dream. That’s the struggle part of our title. It’s always a struggle. If we’re not going to lose that struggle, if shared prosperity is not going to lose that struggle, it has to be by redirecting technological change and creating countervailing powers.

jmk

So, I imagine trying to create laws that define more of what intellectual property people own.

Daron Acemoglu

Absolutely, or define who owns data. Look, we haven’t talked about democracy, but it’s actually very intimately linked. If you have this model of AI, which is that geniuses design machines, and those machines or algorithms are going to scoop up all the data and make better decisions for you, that’s fundamentally anti-democratic. I don’t know how anybody thought – well, I thought it myself, probably 25 years ago – that the internet was going to be a pro-democracy force. But today I think that’s really hard to believe, because the nature of the current approach, the defining approach, is so top down, so centralized. You scoop up data. Algorithms control everything. That cannot be pro-democratic.

So, we need to break that. We need to break that by having a more decentralized, pluralistic, diverse perspective that puts humans at the center. We need a humanist AI.

jmk

There’s a line in the book where you write, “Cacophonous voices may be the greatest strength of democracy.” As we’re thinking about AI and these large data sets, I mean, eventually you’d imagine that all of the different AI models would be working off of the same data sets if we allowed it.

Daron Acemoglu

It’s even worse than that. Again, let’s go back to ChatGPT. How is that created? So, there is a first base model that is trained with unsupervised learning on Wikipedia. That’s lots of free labor that’s been exploited by OpenAI and Microsoft. It’s based on encyclopedias and Google Books. Again, free labor. It’s trained on Facebook, Reddit, newspapers, all sorts of publications that are racist, biased, full of crazy conspiracy theories based on extremist ideologies. All of that goes into the training data. Then if you just look at how generative AI predicts the next word, it’s going to say unbelievably maniacal things.

So, what do they do? Then they engage in a supervised learning phase. They teach the machine, the algorithm, the model, not to say certain things. Don’t say Nazis are good. Don’t say there are inferior races and superior races. Don’t say Donald Trump is a great president or whatever. But who decides what is allowed and what is not allowed? Normally we rely on democracy. It’s the democratic process that reaches a consensus. We say we’re not going to allow child pornography. That’s a consensus. You know, the Germans agreed they’re not going to allow Nazi propaganda on the airwaves after the experience of the Third Reich. But now a centralized company with a few engineers decides that, so it’s incredibly centralized.
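
A deliberately tiny sketch can make the two phases visible (illustrative only: the corpus and blocklist are toy stand-ins, and real systems fine-tune the model’s weights with human-written examples and reinforcement learning from human feedback rather than filtering outputs as this sketch does):

```python
import random
from collections import defaultdict

# Phase 1: "pretraining" -- learn next-word statistics from whatever text was
# scraped. Whatever biases are in the corpus are now in the model.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(word, n=6):
    """Sample a continuation by repeatedly predicting a next word."""
    out = [word]
    for _ in range(n):
        word = random.choice(bigrams.get(word, ["."]))
        out.append(word)
    return " ".join(out)

# Phase 2: "alignment" -- a small, human-curated rule set decides what the
# model may not say. Who writes these rules is the centralized choice at issue.
blocklist = {"dog"}  # toy stand-in for the human-authored filtering layer

def generate_aligned(word, n=6):
    text = generate(word, n)
    while any(bad in text.split() for bad in blocklist):
        text = generate(word, n)  # resample until the output passes the filter
    return text

print(generate_aligned("the"))
```

The point of the sketch is the division of labor: phase one absorbs the scraped text wholesale, while the judgments about acceptable output live entirely in the small, centrally authored second phase.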

jmk

But for me it’s even more than that. The way that we get these different voices in democracy is literally through the different experiences that we have. Meaning that if artificial intelligence all has the same knowledge base, all has the same dataset, there’s only one voice. Whereas if you have carpenters and office workers and lawyers and all kinds of different professions, it’s not just that they know different things, it’s that they literally experience different things.

Daron Acemoglu

Absolutely 100%. That’s not something we explain in the book, but yes, absolutely. I don’t think the science is there yet, but my belief, very much like yours, is that human cognition is very much multi-dimensional and it depends on your experiences as well. You’ll learn something very differently when you talk to somebody face to face than when you do things with your hands or when you fail. But even that is not enough, because that’s just the diversity going in. We can put diversity into the funnel of AI, but what’s going to come out?

Again, we rely on human social intelligence, cognition, democratic processes, civil society, all sorts of other things that have evolved over tens of thousands of years of human evolution, for that diversity to merge into something that’s productive rather than just polarized. Okay, fine. We’ve failed on the ‘not polarized’ part in the last 30 years, but at least there is some hope.

jmk

So, if we’re going to get back to shared prosperity, what are the policy choices that we should be making? What are some of the institutional choices that we should be making? Because one of the reasons why you wrote the book is that we’re not on the route to shared prosperity right now, so how do we get back there?

Daron Acemoglu

Absolutely, and you know, I think that’s a good place to end because it’s our weakest point. But it’s the most important point. It’s our weakest point because nobody has a silver bullet. But the way I like to think about it is, first we want to clarify what our aspirations are. Then we want to change the narrative. Then we want to build the institutional foundations of that. Then we want to talk about specific policies. My aspirations are very clear and I hope I can convince people that what we want from machines is machine usefulness, in particular, algorithms and new widgets and gadgets that make humans more productive and more empowered.

We’re going to create new tasks and new ways of using information for workers of all sorts. We’re going to empower them so that they can become better human agents, better decision makers, and better participants in whatever organizations they are in, and we’re going to empower them in the political domain. That’s the aspiration of what we want from machines, in my opinion. That requires a change in the narrative. So we need to abandon the productivity bandwagon and the naive techno-optimist thinking that the gales of creative destruction will naturally take us to a place where everybody is happy, and stop talking of fully automated luxury communism.

We want to realize that it’s a struggle. We have to work to create these things. We have to build institutions. We have to build ways of redirecting technological change and then we have to actually do the work of building those institutions. How do we build countervailing powers? How do we build institutions that correctly regulate new emergent technologies? How do we create a better democracy? Then we have specific policies to implement some of those things. Now, there are big uncertainties about these specific policies, because they’re not separable from the institutions and we haven’t built the institutions, so there will be a lot of pushback. There’s going to be a lot of backlash. There’s going to be a lot of complications.

Let me give you one example. The European Union’s General Data Protection Regulation, GDPR, was a great idea, but it backfired. Do I blame the European Commission for GDPR? No, I don’t. I think they had the right idea of protecting individuals’ privacy and their control over how their data is used. But in the end, tech companies are wily enough that they found ways around it and it may actually have made things worse. So, we’ll have to experiment with these policies.

But I think the pillars in terms of policies of what I would like to see are clear. One is we have to stop the appropriation of data by big tech companies. You have to do that via regulation and by building new institutions such as, for example, the data unions that Jaron Lanier has called for. I think we need to discourage the most pernicious types of business models, like those based on individualized digital ads that create emotional outrage, get people hooked, and reduce them to passive consumers of news and information from like-minded people, or of outrageous information from the other side, rather than active participants.

So, that requires different business models and that, I think, calls for a digital ads tax. Although I don’t think it’s going to be a full solution, we should consider breaking up the largest tech companies, because they are much bigger than any organizations that humans have experienced in the past. We should find ways of introducing worker voice. That is not easy, because I don’t think the old model of unions is going to work anymore. And very importantly, we should take steps to redirect technological change.

There we suggest two policies in the book. One is there is a bias in our fiscal system that taxes labor more than capital and that creates an artificial reason for automating excessively. So, we should get rid of those asymmetries. Second, we should subsidize more human friendly, worker friendly technologies, although that’s, again, not an easy one because identifying those technologies is not always easy.
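
The tax asymmetry is easy to see with stylized numbers (hypothetical figures for illustration, not the actual rates). Suppose a task can be done by a worker for $100 in wages or by a machine whose annualized cost is $98, so the machine is barely better than break-even. With a 25% effective tax on labor and a rate near 5% on equipment and software, the firm compares

$$\underbrace{100 \times 1.25}_{\text{after-tax cost of the worker}} = 125 \qquad \text{vs.} \qquad \underbrace{98 \times 1.05}_{\text{after-tax cost of the machine}} \approx 103,$$

and automates even though the machine contributes essentially no productivity gain. Equalizing the two rates removes that artificial push.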

jmk

Well, Daron Acemoglu, thank you so much for joining me today. I want to plug the book one more time, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. I do want to emphasize that while we’re talking about some really heavy topics, this is a really easy read. I mean, it’s not a difficult book to get through and it actually has a lot of interesting stories in it. So, thank you so much for joining me today. Thank you so much for writing it.

Daron Acemoglu

Thank you very much, Justin. It was a great conversation. Thanks for inviting me and I look forward to seeing the podcast.

Key Links

Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity by Daron Acemoglu and Simon Johnson

Why Nations Fail: The Origins of Power, Prosperity, and Poverty by Daron Acemoglu and James A. Robinson

Learn more about Daron Acemoglu
