At The Money: Algorithmic Harm with Professor Cass Sunstein, Harvard Law
What is the impact of “algorithms” on the prices you pay for your Uber, what gets fed to you on TikTok, even the prices you pay in the supermarket?
Full transcript below.
~~~
About this week’s guest:
Cass Sunstein, professor at Harvard Law School and co-author of the new book, “Algorithmic Harm: Protecting People in the Age of Artificial Intelligence.” Previously, he co-authored “Nudge” with Nobel laureate Dick Thaler. We discuss whether all this algorithmic impact is helping or harming people.
For more info, see:
~~~
Find all of the previous At the Money episodes here, and in the MiB feed on Apple Podcasts, YouTube, Spotify, and Bloomberg.
And find the entire musical playlist of all the songs I have used on At the Money on Spotify
Transcript:
Barry Ritholtz: Algorithms are everywhere. They determine the price you pay for your Uber, what gets fed to you on TikTok and Instagram, and even the prices you pay in the supermarket. Is all of this algorithmic impact helping or harming people?
To answer that question, let’s bring in Cass Sunstein. He is the author of a new book, “Algorithmic Harm: Protecting People in the Age of Artificial Intelligence” (co-written with Oren Bar-Gill). Cass is a professor at Harvard Law School and is perhaps best known for his books on Star Wars, and for co-authoring “Nudge” with Nobel laureate Dick Thaler.
So Cass, let’s just jump right into this and start by defining what algorithmic harm is.
Cass Sunstein: Let’s use Star Wars. Say the Jedi Knights use algorithms, and they give people things that fit with their tastes and interests and information. If people are interested in books on behavioral economics, that’s what they get, at a price that suits them. If they’re interested in a book on Star Wars, that’s what they get, at a price that suits them.
The Sith, by contrast, use algorithms to take advantage of the fact that some consumers lack information and some consumers suffer from behavioral biases. We’re gonna focus on consumers first. If people don’t know much about, let’s say, healthcare products, an algorithm might know that – that they’re likely not to know much. It might say, “We have a fantastic baldness cure for you, here it goes,” and people will be duped and exploited. So that’s exploitation of an absence of information – that’s algorithmic harm.
If people are super optimistic and they think that some new product is gonna last forever, when it tends to break on first use, then the algorithm can know those are unrealistically optimistic people and exploit their behavioral bias.
Barry Ritholtz: I referenced a few obvious areas where algorithms are at work. Uber pricing is one; the books you see on Amazon are algorithmically driven. Clearly a lot of social media – for better or worse – is algorithmically driven. Even things like the sort of music you hear on Pandora.
What are some of the less obvious examples of how algorithms are affecting consumers and regular people every day?
Cass Sunstein: Let’s start with the straightforward ones and then we’ll get a little subtle.
Straightforwardly, it might be that people are being asked to pay a price that suits their economic situation. So if you have a lot of money, the algorithm knows that, and maybe the price will be twice as much as it would be if you were less wealthy. That, I think, is basically okay. It leads to greater efficiency in the system. Rich people will pay more for the same product than poor people, and the algorithm is aware of that. That’s not that subtle, but it’s important.
Also not that subtle is targeting people based on what’s known about their particular tastes and preferences. (Let’s put wealth to one side.) It’s known that certain people are super interested in dogs and other people are interested in cats, and all of that is happening very straightforwardly. If consumers are sophisticated and knowledgeable, that can be a great thing that makes markets work better. If they aren’t, it can be a terrible thing that gets consumers manipulated and hurt.
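To make the mechanics concrete, here is a minimal sketch in Python of the kind of pricing rule being described: the same product, quoted at different prices depending on what the algorithm estimates about the buyer. The function name, weights, and numbers are all invented for illustration; this is not any real platform’s pricing logic.

```python
# Hypothetical illustration of algorithmic price discrimination:
# one product, different quotes based on estimates about the buyer.
# All names and numbers are invented; not any real platform's logic.

BASE_PRICE = 20.00

def quote_price(estimated_wealth_percentile: float,
                interest_score: float) -> float:
    """Quote a price scaled by estimated ability and willingness to pay.

    estimated_wealth_percentile: 0.0 (poorest) to 1.0 (richest)
    interest_score: 0.0 (indifferent) to 1.0 (keenly interested)
    """
    # Richer buyers can be quoted up to 2x the base price ("the price
    # will be twice as much as it would be if you were less wealthy").
    wealth_multiplier = 1.0 + estimated_wealth_percentile
    # Keen interest (dog people shown dog products) nudges the quote up too.
    interest_multiplier = 1.0 + 0.25 * interest_score
    return round(BASE_PRICE * wealth_multiplier * interest_multiplier, 2)

print(quote_price(0.9, 0.8))  # wealthy, keen buyer -> 45.6
print(quote_price(0.2, 0.8))  # less wealthy, keen buyer -> 28.8
```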
Here’s something a little more subtle. If an algorithm knows, for example, that you like Olivia Rodrigo (and I hope you do, ’cause she’s really good), then there are gonna be a lot of Olivia Rodrigo songs put into your feed. Let’s say no one’s really like Olivia Rodrigo, but suppose there are others who are vaguely like her – you’re gonna hear a lot of that.
Now that might not seem like algorithmic harm; that might seem like a triumph of freedom and markets. But it might mean that people’s tastes will calcify, and we’re going to get very balkanized culturally with respect to what people see and hear.
There are gonna be Olivia Rodrigo people, and then there are gonna be Led Zeppelin people, and there are gonna be Frank Sinatra people. And there was another singer called Bach – I guess I don’t know much about him – but there’s Bach, and there would be Bach people. And that’s culturally damaging, and it’s also damaging for the development of individual tastes and preferences.
Barry Ritholtz: So let’s put this into a little broader context than simply musical tastes. (And I like all of those; I haven’t become balkanized yet.) When we look at consumption of news media, when we look at consumption of information, it really seems like the country has divided itself into these happy little media bubbles that are either far left-leaning or far right-leaning. Which is kind of weird, because I always learned that the bulk of the country sits under the traditional bell curve: most people are somewhere in the middle. Maybe they’re center-right or center-left, but they’re not out on the tails.
How do these algorithms affect our consumption of news and information?
Cass Sunstein: About 15 or 20 years ago, there was a lot of concern that through individual choices, people would create echo chambers in which they would live. That’s a fair concern, and it has created a number of, let’s say, challenges for self-government and learning.
What you’re pointing to is also emphasized in the book, which is that algorithms can echo-chamber you. An algorithm might say, “You’re keenly interested in immigration and you have this point of view, so boy, are we gonna funnel lots of information to you.” ’Cause clicks are money, and you’re gonna be clicking, clicking, clicking.
And that might be a very good thing from the standpoint of the seller, so to speak, or the user of the algorithm. But from your standpoint, it’s not so fantastic. And from the standpoint of our society, it’s even less fantastic, because people will be living in algorithm-driven universes that are very separate from one another, and they can end up not liking each other very much.
Barry Ritholtz: Even worse than not liking each other, their views of the world aren’t based on the same facts or the same reality. Everybody knows about Facebook – and to a lesser degree, TikTok and Instagram – and how they very much balkanize people. We’ve seen that in the world of media: you have Fox News over here and MSNBC over there.
How significant a threat do algorithmic news feeds present to the country as a democracy – a self-regulating, self-determining democracy?
Cass Sunstein: Really significant! There are algorithms, and then there are large language models, and they can both be used to create situations in which, let’s say, the people in some city – let’s call it Los Angeles – are seeing stuff that creates a reality that’s very different from the reality that people are seeing in, let’s say, Boise, Idaho. And that can be a real problem for understanding one another, and also for mutual problem-solving.
Barry Ritholtz: So let’s apply this a little bit more to consumers and markets. You describe two specific types of algorithmic discrimination. One is price discrimination and the other is quality discrimination. Why should we be aware of this distinction? Do they both deserve regulatory attention?
Cass Sunstein: So if there is price discrimination through algorithms in which different people get different offers, depending on what the algorithm knows about their wealth and tastes, that’s one thing.
And it might be okay. People don’t stand up and cheer and say hooray, but if people who have a lot of resources are given an offer that’s not as, let’s say, seductive as the one given to people who don’t have a lot of resources – just because the price is higher for the rich than for the poor – that’s okay. There’s something efficient and market-friendly about that.
If it’s the case that some people don’t care much about whether a tennis racket is gonna break after multiple uses, and other people think the tennis racket really has to be solid – “because I play every day and I’m gonna play for the next five years” – then some people are given, let’s say, the immortal tennis racket and other people are given the one that’s more fragile. That’s also okay, so long as we’re dealing with people who have a level of sophistication: they know what they’re getting and they know what they need.
If it’s the case that, for either pricing or quality, the algorithm is aware that certain consumers are particularly likely not to have relevant information, then everything goes haywire. And if this isn’t frightening enough, note that algorithms are in an increasingly excellent position to know: “This person with whom I’m dealing doesn’t know a lot about whether products are gonna last, and I can exploit that.” Or: “This person is very focused on today and tomorrow, and next year doesn’t really matter – the person’s present-biased – and I can exploit that.”
And that’s something that can damage vulnerable consumers a lot, either with respect to quality or with respect to pricing.
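“Present bias” has a standard formalization in behavioral economics: beta-delta (quasi-hyperbolic) discounting, in which every payoff after today is scaled by an extra factor β < 1. Here is a minimal sketch of why a present-biased buyer is the natural mark for a flashy product that breaks; the product payoff numbers are invented for illustration.

```python
# Beta-delta (quasi-hyperbolic) discounting: a payoff at delay t >= 1
# is weighted by beta * delta**t. beta < 1 means today looms too large.
# The product payoff streams below are invented for illustration.

def discounted_value(flows, beta=1.0, delta=0.95):
    """Value a payoff stream [u_0, u_1, ...] with beta-delta weights."""
    return flows[0] + sum(beta * (delta ** t) * u
                          for t, u in enumerate(flows[1:], start=1))

durable = [1] + [10] * 5   # small joy today, steady payoffs for five years
flashy = [30] + [0] * 5    # big joy today, nothing after it breaks

print(discounted_value(durable, beta=1.0))  # ~44.0: patient buyer prefers durable
print(discounted_value(durable, beta=0.4))  # ~18.2: present-biased buyer...
print(discounted_value(flashy, beta=0.4))   # 30.0: ...picks the flashy one
```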
Barry Ritholtz: Let’s flesh that out a little more. I’m very much aware that when Facebook sells ads – because I’ve been pitched these by Facebook – they can target an audience based on not just their likes and dislikes, but their geography, their search history, their credit score, their purchase history. They know more about you than you know about yourself. It seems like we’ve created an opportunity for some potentially abusive behavior. Where is the line crossed – from “Hey, we know that you like dogs, so we’re gonna market dog food to you” to “We know everything there is to know about you, and we’re gonna exploit your behavioral biases and some of your emotional weaknesses”?
Cass Sunstein: So suppose there’s a population of Facebook users who are, you know, super well-informed about food and really rational about food. They happen to be particularly fond of sushi, and Facebook is going hard at them with offers for sushi and so forth.
Now let’s suppose there’s another population: they know what they like about food, but they have hopes and false beliefs about the effect of food on health. Then you can really market to them things that will lead to poor choices.
And I’ve made a stark distinction between fully rational people – which is kind of economics-speak – and imperfectly informed, behaviorally biased people – also economics-speak – but it’s really intuitive.
There’s a radio show – maybe this will bring it home – that I listen to when I drive into work, and there’s a lot of marketing for a product that is supposed to relieve pain. I don’t want to criticize any producer of any product, but I have reason to believe that the relevant product doesn’t help much. The station marketing this pain-relief product must know that its audience is vulnerable to it, and it must know exactly how to get at them.
And that’s not gonna make America great again.
Barry Ritholtz: To say the very least. So we’ve been talking about algorithms, but obviously the subtext is artificial intelligence, which seems to be the natural extension and further development of algos. Tell us: as AI becomes more sophisticated and pervasive, how is this gonna impact our lives as employees, as consumers, as citizens?
Cass Sunstein: ChatGPT, chances are, knows a lot about everyone who uses it. So I actually asked ChatGPT recently – I use it some, not hugely – to say some things about myself, and it said a few things that were kind of scarily precise about me, based on some number – dozens, not hundreds, I don’t think – of engagements with ChatGPT.
Large language models that track your prompts can know a lot about you, and if they’re also able to know your name, they can instantly learn a ton about you online. We need to have privacy protections that are working there. Still, it’s the case that AI broadly is able to use algorithms – and generative AI can go well beyond the algorithms we’ve gotten familiar with – both for the beauty of algorithmic engagement (that is, “here’s what you like, here’s what you want, we’re gonna help you”) and for the ugliness of algorithms (“here’s how we can exploit you to get you to buy things”). And of course, I’m thinking of investments too.
So in your neck of the woods, it would be child’s play to get people super excited about investments that AI knows the people with whom it’s engaging are particularly susceptible to, even though they’re really dumb investments.
Barry Ritholtz: Since we’re talking about investing, I can’t help but bring up both AI and algorithms trying to increase so-called market efficiency. I always go back to Uber’s surge pricing: as soon as it starts to rain, the prices go up in the city. That’s obviously not an emergency, it’s just an annoyance. However, we do see situations of price gouging after a storm, after a hurricane – people only have so many batteries and so much plywood, and sellers kind of crank up prices.
How do we determine where the line is between something like surge pricing and something like abusive price gouging?
Cass Sunstein: Okay, so you’re in a terrific area of behavioral economics. We know that in circumstances in which, let’s say, demand goes way up – because everyone needs a shovel and it’s a snowstorm – people are really mad if the prices go up, though it might be just a sensible market adjustment. So as a first approximation, if there’s a spectacular need for something, let’s say shovels or umbrellas, the market inflation of the cost, while morally abhorrent to many, is okay from the standpoint of standard economics.
Now, if it’s the case that people under short-term pressure from the fact that there’s a lot of rain are especially vulnerable – they’re in some kind of emotionally intense state, they’ll pay kind of anything for an umbrella – then there’s a behavioral bias motivating people’s willingness to pay a lot more than the product is worth.
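One way to see the line being drawn here: the same demand-driven formula reads as surge pricing when it tracks ordinary scarcity, and as gouging when it prices off desperation in an emergency. A minimal sketch, with an invented cap and invented function name:

```python
# Hypothetical illustration of surge pricing vs. gouging: scale price by
# the demand/supply ratio, but cap the markup during emergencies, when
# buyers in an emotionally intense state will "pay kind of anything."
# The cap value and all names are invented for illustration.

SURGE_CAP = 2.0  # policy-style ceiling on markups in an emergency

def surge_price(base: float, demand: float, supply: float,
                emergency: bool = False) -> float:
    """Scale price by demand/supply, capped when an emergency is declared."""
    multiplier = max(1.0, demand / supply)
    if emergency:
        # Without a cap, the same formula prices off buyers' desperation
        # rather than ordinary scarcity -- that is the gouging case.
        multiplier = min(multiplier, SURGE_CAP)
    return round(base * multiplier, 2)

print(surge_price(10.0, demand=300, supply=100))                  # 30.0: rainy rush hour
print(surge_price(10.0, demand=300, supply=100, emergency=True))  # 20.0: hurricane, capped
```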
Barry Ritholtz: Let’s talk a little bit about disclosures and the sort of mandates that are required. When we look across the pond, when we look at Europe, they’re much more aggressive about protecting privacy and making sure big tech companies are disclosing all the things they have to disclose. How far behind is the US in that generally? And are we behind when it comes to disclosures about algorithms or AI?
Cass Sunstein: I think we’re behind them in the sense that we’re less privacy focused, but it’s not clear that that’s bad. And even if it isn’t good, it’s not clear that it’s terrible. I think neither Europe nor the US has put their regulatory finger on the actual problem.
So let’s take the problem of algorithms not figuring out what people want, but algorithms exploiting a lack of information or a behavioral bias to get people to buy things at prices that aren’t good for them – that’s a problem. It’s in the same universe as fraud and deception. And the question is, what are we gonna do about it?
A first line of defense is to try to ensure consumer protection not through heavy-handed regulation – I’m a longtime University of Chicago person; I have in my DNA (not environment) a dislike of heavy-handed regulation – but through helping people to know what they’re buying.
Helping people not to suffer from a behavioral bias, such as, let’s say, incomplete attention or unrealistic optimism, when they’re buying things. These are standard consumer-protection measures, which many of our agencies in the US – homegrown, made in America – have carried out. That’s good, and we need more of it. So that’s the first line of defense.
The second line of defense isn’t to say, you know, privacy, privacy, privacy – though maybe that’s a good song to sing. It’s to say: a right to algorithmic transparency. This is something on which neither the US nor Europe, nor Asia, nor South America, nor Africa, has been very advanced.
This is a coming thing, where we need to know what the algorithms are doing – so it’s public. What’s Amazon’s algorithm doing? That would be good to know. And it shouldn’t be the case that some efforts to ensure transparency invade Amazon’s legitimate rights.
Barry Ritholtz: Really, really fascinating.
Anybody who is participating in the American economy and society – consumers, investors, even just regular readers of news – needs to be aware of how algorithms are affecting what they see, the prices they pay, and the sort of information they’re getting. With a little bit of forethought, and the book “Algorithmic Harm,” you can protect yourself from the worst aspects of algorithms and AI.
I’m Barry Ritholtz. You are listening to Bloomberg’s At the Money.