The Lawfare Podcast

Lawfare Archive: A World Without Caesars


From March 14, 2025: This episode of the Lawfare Podcast features Glen Weyl, economist and author at Microsoft Research; Jacob Mchangama, Executive Director of the Future of Free Speech Project a...

Transcript


[MUSIC PLAYING]

[NON-ENGLISH SPEECH] [MUSIC PLAYING]

I'm Marissa Wong, intern at Lawfare,

with an episode from the Lawfare archives for April 5,

2026. On March 25th, the jury in a landmark lawsuit found social media companies Meta and YouTube liable for harm caused to users' mental health because of their addictive algorithms and design features.

The bellwether case could open social media companies to more lawsuits over their algorithms and their effects on users. For today's archive, I chose an episode from March 14th, 2025, in which Renée DiResta sat down

with Glen Weyl, Jacob Mchangama, and Ravi Iyer to unpack how social media algorithms shape user interaction through processes designed to be invisible and opaque to users.

The group also discussed how new decentralized platforms

are attempting to design pro-social media platforms for a more prosocial future. It's the Lawfare Podcast. I'm Renée DiResta, contributing editor at Lawfare and associate research professor

at Georgetown's McCourt School of Public Policy. I'm with Glen Weyl, economist and author at Microsoft Research; Jacob Mchangama, executive director of the Future of Free Speech project

at Vanderbilt University; and Ravi Iyer, managing director of the USC Marshall School's Neely Center. I just think, no matter what our goals are, the design of sort of the overall information ecosystem

and what gets surfaced is critical.

Today, we're talking about design versus moderation. The way that social media platforms are built influences everything from what we see to what is amplified, what is even created in the first place, as users respond to incentives, nudges, and affordances.

These processes are often invisible or opaque, though new decentralized platforms are changing that. So we're going to talk about designing a pro-social media for the future and the potential for an online world without Caesars.

I want to just kind of bring you guys in right now to thinking about the difference between moderation as policing a failed end state versus design, right? Design as a proactive way to cultivate behaviors, to subtly shift norms, to guide users

in particular directions, not necessarily through top-down rule enforcement, but rather by determining the affordances of a system and what the system lets us do. So one of the reasons that I'm excited for this conversation

with you all specifically is that when I read your work, you all have such deep thinking about the specifics of ways that system design can produce better social media. I know that Glen and Jacob have just had a paper released; you titled it Pro-Social Media.

And I'd love to just start with that. I think the term pro-social media is wonderful. I'd like to maybe ask you to define what that means and tell us a little bit about your work.

- Yeah, so I think the key idea that motivated the term

pro-social media is that obviously social media are doing something social. They're using social information to serve content. But that doesn't necessarily mean that they're achieving the goals that many people had

in creating social media, which was to strengthen connections across people, help communities be stronger and reinforce the social fabric that they build on. So social media could, in theory, either be like sustainable agriculture

that reinforces and strengthens the soil at the same time as it harvests from it, or it could be like clear-cutting agriculture. I think many people believe that social media has actually been undermining the social fabric

as it's been harnessing it. And we want to try to make that more sustainable, more regenerative, as it were.

- Yeah, I think what excited me, I have a much more narrow focus

than these two brilliant gentlemen; I come at this topic from sort of a free speech perspective. And I think what excited me about the pro-social media approach is that I think people in the free speech space have very often been sort of on the defensive

and making these abstract, principled arguments

that were a bit difficult to apply consistently

when it comes to social media.

But also just not convincing a lot of people, because social media makes the harms, real or perceived, of speech so much more visible to a lot of people. So it makes them much more willing to engage in trade-offs and restrict speech than they were in the analog world.

And so I think that the pro social media approach

in many ways is a good way for free speech activists to frame a much more positive vision for social media, one that empowers users at the expense of centralized platforms and/or governments. Today we're seeing a huge development

towards government-mandated content moderation. And also one that says, well, yes, free speech has some harms, especially in an online connected world, where anyone can share anything with anyone across borders and where some of the harms that can be involved

in free speech can be very visible, can travel with lightning speed, and can lead to real-life harms. But here are some models that might actually use the power of speech and access to information to mitigate

and defuse some of those harms in ways that are constructive, but that rely basically on speech rather than giving outsized power to platforms and/or governments.

So I think that's an incredibly powerful

and empowering vision for social media that resonates really well with a basic commitment to what I would call egalitarian free speech. - I wanna hold the decentralization piece for a little bit, because I know we're gonna talk about

what makes new experimentation possible. I know I've written about that. I've talked about that in the context of middleware here a little bit. I wanna focus in on specific features and designs and you guys articulate ways of thinking about this.

We talk a lot about bridging and balancing, and this idea of bridging and balancing as a goal. What do we mean when we talk about bridging and balancing as a way to create a more pro-social web? - Well, I don't wanna go on too much of a historical digression here,

but I think it's useful to understand that both of the kind of competing visions, or, as you pointed out, Renée, maybe falsely competing visions, of what media should be like today really came out of World War II.

There were two different movements. One was that right after Pearl Harbor, Henry Luce, the publisher of Time, convened a commission called the Hutchins Commission that came up with principles that the media could abide by

in order to avoid being nationalized, because they were very worried that division in the media and misinformation had led to the U.S. not being prepared for Pearl Harbor, and that the government was just gonna nationalize the media as a result.

So they wanted to avoid that. On the other hand, there was a group of people led by Margaret Mead and other social theorists who thought that kind of the concentrated nature of the media, you know, the broadcast nature of radio

and journalism, had led to fascism, and that the way to address that was to have, like, a much more multi-sided media.

And I think we've arrived at both kind of this desire

for bringing people together, and this desire to have lots of voices heard from those two respective movements. And I think the real question is how we can bring those together.

And that's really our goal in this paper: to use this notion that really came out of the Hutchins Commission, of content that brings people together across divides and content that reflects the diversity of positions people have,

that's bridging and balancing,

as being sort of critical elements that need to show up,

while we also ensure that we have all the different diversity of angles that social media allows without too much gatekeeping. - And you're talking specifically about, I guess, the question of how.

So I've seen that Audrey Tang, who is the digital minister of Taiwan and was a co-author on your paper also, has spoken a bit about surfacing where content comes from, right, kind of labeling the communities that it originates from.

I was kind of intrigued by this idea, because when I was at the Stanford Internet Observatory, we would do these things that we kind of called narrative traces, right: where did something come from? Where did that meme originate?

And for us, that was a question of like, is it authentic, right? Did it come from some authentic community? Was it something that was kind of dropped in

through an influence operation from a state actor or whatever?

What is the provenance? How do you guys think about that? Why do you think surfacing where something originates is helpful for this bridging or pro-social behavior? How does it help us have a better web?

- I think there's actually two different aspects to this.

One is where does it originate from?

And Audrey's done some amazing work on that in Taiwan

on basically creating liability for things that don't have signed provenance. So I think that's a really fascinating approach and one that I'm a big fan of. There's another element that we emphasize more

in this paper, which I also think is important, which has more to do with, where does the popularity of this originate? - Okay. - You know, there's the people who created it

and their signatures, but then there's the people who liked it and re-posted it and so forth. And obviously, the first one you can do just by having some kind of disclosure about where it came from or cryptographic signature,

which is what Audrey worked on. But the second one is complicated because there's gonna be thousands or millions of people who liked or retweeted something. So you can't just list all their names.

You have to give some characterization of them.

And that's why we focus on this notion of using the internal learning that the machine learning tools are doing about the communities that something's appealing to, as that's how they're doing personalization

in the first place. And then trying to be transparent about that as a way of giving a sense of who this is popular with so that you know the audience that you're sharing something with, which I think is a really important element,

not just for sort of misinformation or news-related reasons, but also just for isolation reasons. I think it used to be, you would go to a concert or you would attend a lecture, and you would get a sense of the other people in the room.

And that's much harder online. And obviously, it does happen in environments like Reddit, but we'd like to bring aspects of that to this by using transparency about the internal understanding of the community

that the models already have. - Ravi, I'm curious, in your role at Facebook and other places, how did you all think about that question of what communities were engaging with content? Was that something that you were also attuned to?

This question of bridging, as far as what was curated out to more people? - Yeah, I mean, I actually started my time at Meta working on polarization. And so I think there are three findings

or things that we learned that are relevant here and that can sort of give some specificity to what pro-social media could be. One, there's something famous, there's a Wall Street Journal article about this, but there are also other books

that are not about Facebook about this.

And basically it's the finding that many publishers

and politicians say they produce worse content, divisive content they're not proud of, because of the algorithms on social media. So Jonah Peretti of BuzzFeed went to Facebook and said, look, we're producing divisive content

not because we want to, but because that is what does well in your algorithm. And many politicians in Europe say that. And so that's not a moderation thing; that's not about figuring out what you can say or not say.

That's a company sort of incentivizing, effectively paying people with attention, to be more divisive. The second thing is that a lot of people see content they don't like, and a lot of people don't like

divisive content. People don't want to argue with their relatives online all the time. There's one study: 70% of Facebook users see content they want to see less of. They often see it

multiple times in succession; they often see it within the first five minutes of scrolling. And so, you know, there's a business incentive to actually reduce these kinds of divisive experiences. It actually, like, turns people off of these products.

And that's why you see, I think a lot of people moving

from, you know, some of the more divisive platforms to, like, a Bluesky, or to someplace where it just feels like you can have a conversation again. And then the third thing is that, you know, one thing I worked on were these break-the-glass measures,

design measures that are kind of like temporary design changes that change the ecosystem and sort of change the incentives. And we did that in part because when you rely on moderation, you make a lot of mistakes.

And so if you're working on something like Myanmar or Ethiopia, something in some far-off place, it's really hard for anyone, let alone a company thousands of miles away,

to make decisions about what people should or should not say. But if you can say something like, you know, look, maybe we shouldn't be optimizing for the thing that gets the most comments. You know, obviously the thing that gets the most

comments is not always the best thing, right?

Like the picture of my night out last night that got the most comments might be a great picture. But like your health information is not meant to be like debated back and forth, right? Like it's meant to be boring.

And the fact that it's being talked about a lot actually maybe means that it's not great information. And so, you know, the results of those kinds of experiments show that reducing the incentive to comment back and forth or to reshare things actually improves the ecosystem.
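The ranking change Ravi describes, no longer optimizing purely for predicted comments and reshares, can be sketched as a reweighted scoring function. This is a minimal sketch; the weights and field names are invented for illustration and are not Meta's actual system.

```python
# Illustrative sketch: downweighting engagement signals that "pay"
# divisive content with attention. All weights and fields are hypothetical.

def engagement_score(post):
    # Classic engagement-optimized ranking: comment-bait wins.
    return 3.0 * post["p_comment"] + 2.0 * post["p_reshare"] + 1.0 * post["p_like"]

def adjusted_score(post):
    # Break-the-glass-style adjustment: stop rewarding back-and-forth
    # comments and reshares; keep weight on quieter signals.
    return 0.5 * post["p_comment"] + 0.2 * post["p_reshare"] + 1.0 * post["p_like"]

posts = [
    {"id": "argument", "p_comment": 0.9, "p_reshare": 0.7, "p_like": 0.2},
    {"id": "wedding_photo", "p_comment": 0.2, "p_reshare": 0.1, "p_like": 0.9},
]

by_engagement = max(posts, key=engagement_score)["id"]  # the argument wins
by_adjusted = max(posts, key=adjusted_score)["id"]      # the wedding photo wins
```

The point of the sketch is that nothing is removed or moderated; only the reward structure changes, which is the distinction Ravi draws between moderation and design.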

I think there's another thing that we could learn about there:

specifically, how do you create more pro-social media? - With the break-the-glass measures, that was also, as I recall, particularly around, like, post-January 6; that was also deprecating political content, right? And that was sort of trying to resurface more content

that bridged people in the sense of things that were more human, right, here's more from your friends, more baby pictures, more wedding pictures, is that the sort of things that were kind of upranked instead?

- I mean, there are lots of things that were done.

And I think there's an article in Tech Policy Press

about all the very specific things that were done around January 6.

I mean, the things that I think are most worth learning

from are removing some of these engagement incentives, so not just removing a whole class of content, but actually sort of improving the incentives within that class of content. You know, people should be allowed to talk about politics,

but it shouldn't be incentivized to talk about it as entertainment. And when you optimize for, like, the thing that gets the most comments, it gets to be more entertainment. The other thing that was done around January 6 that I think is worth learning from is rate limits.

Like there was a reduction in the amount of times you could invite people to a group. You know, if you were to ask yourself, how many times should a person be able to invite people to a group?

How many times should I be able to message strangers? How many times should I be able to do anything? You will come up with a far lower answer than the limits platforms have. - Yeah, now I remember that.
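A per-account rate limit of the kind Ravi describes, capping how often one account can blast invites or posts into groups, might look like this sliding-window sketch. The cap of 5 per hour is an invented number, not any platform's real limit.

```python
# Hypothetical sliding-window rate limiter for group invites/posts.
import time
from collections import deque

class InviteLimiter:
    def __init__(self, max_actions=5, window_seconds=3600):
        self.max_actions = max_actions
        self.window = window_seconds
        self.history = {}  # user_id -> deque of action timestamps

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        q = self.history.setdefault(user_id, deque())
        while q and now - q[0] > self.window:
            q.popleft()  # drop actions that fell outside the window
        if len(q) >= self.max_actions:
            return False  # over the cap: the account has to slow down
        q.append(now)
        return True

limiter = InviteLimiter(max_actions=5, window_seconds=3600)
# An account trying to blast 60 posts in 60 seconds gets 5 through.
results = [limiter.allow("spammer", now=t) for t in range(60)]
```

A limit like this is content-neutral: it constrains velocity, not what anyone is allowed to say.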

Yeah, we were always mystified by that actually.

Yes, you would see a lot of the time, when we would look at, like, markers of inauthenticity, you would see one person mass-blasting the same post in the same second into 60 different groups.

And it was always kind of a remarkable affordance that you had the power to do that. This was actually, funny enough, the old freedom of speech versus freedom of reach argument that I and others were making back in 2018,

which got reframed as content moderation, right? Like Elon put it on top of his content moderation page or whatever. But we were talking about it in the context of what you're describing, in the context of curation, actually.

Like what is it that should be curated and amplified? Like in the moment, what is the incentive that you create for particular types of content to be boosted? And I think that's a really interesting question.

And maybe this takes us kind of into the question of, like, who decides? Because one of the things is that break-the-glass measures actually became politically controversial, right?

And go ahead if you want to kind of pop in

on this with your opinion. But this question of why does the platform get to decide, this is a very opaque shift. It obviously has impact, particularly if political content is the thing that gets deprecated in these moments,

or people begin to feel that, that inability to invite people into groups is somehow limiting the potential growth of a political movement or something along those lines. Like this is where you start to see that tension come in.

The question around transparency, and to what extent design intersects with the regulatory conversation, is a very interesting one, right? Because there are areas where the moderation conversation

quite clearly can't go; it's not as clear-cut, I think, that the design conversation doesn't intersect with the regulatory conversation. And I'm curious what you all are seeing and thinking about on that front.

I think transparency is obviously important because it reduces the speculation and conspiracy theories around-- it probably doesn't eliminate it. But it ideally reduces it, especially if there are also

ways to track how platforms actually implement it. I mean, of course, there is a spectrum where you say, if you distinguish between freedom of speech and freedom of reach, and if you have clearly ideological ways to amplify reach and de-amplify it,

then I think we're getting into free speech territory.

But the more, I guess, you allow users to have input on this, the better, because that then limits the platform's ability to skew the conversation. But then having full transparency on what the platform decides and what its design is actually based on,

and the amplification of that, I think is the optimal solution. How you implement that in practice is something that I would leave for smarter people than myself, like Glen and Ravi.

And I think you would probably never be able to have a system

that would satisfy everyone just because we deeply disagree about these things. And everyone, when you look at a platform and what goes on, everyone will have this tendency to say, well, I think that a lot of speech--

I have a voice here, but why am I not heard? Why do I personally have such an extremely pathetic reach on Bluesky, for instance? That must be because Jay Graber

has decided things in such a way that people like me don't get seen. - No, I get it. I think you're getting at something important,

which is the reason why I spend a lot of time focusing on terms like balancing and bridging, or trying to come up with these big principles in terms of how we communicate them, rather than relatively technical tweaks,

even though, of course, they have to be implemented technically,

is that I think the legitimacy and the way that we talk about these things and the ability to relate them to sort of democratic principles is actually like central to what it is for them to be good design features.

You know what I mean? Yeah, say more about that. I mean, I guess if you think about-- if you think about our democracy, like we have a principle of free speech, but we don't have the principle that anyone can come

and speak in front of Congress at any time, right? The people who get to speak in front of Congress have some kind of democratic procedure that ensures that they're representative of the population in some sense and that there's some process of doing that representation

that is, like, written down somewhere in a document, and that people are concerned about the adherence to the rules of that document. And so there's just, like, a huge amount that's put into the allocation of reach, of, like, the effective voice that we have,

as well as having free speech. So I think this is something that's very well established, and I think the more that we can tie, however it is that we are organizing things, to, like, principles that are kind of meaningful and can be written down

and legitimated in this sort of way.

That was also very critical to what Audrey did in Taiwan.

I think the more that we're going to be able to get the legitimacy that's necessary for any of this to work 'cause the reality is if we moderate out X and Y but no one thinks that was legitimate they're gonna go find it somewhere else anyway

and they're not gonna buy into what they're getting on the platform.

So that legitimacy I think is just as important as the efficacy.

- But also, I think there has to be a very strong element of bottom-up legitimacy, because otherwise you're just getting back to sort of the digital version of the analog public sphere, where you have sort of traditional institutional gatekeepers,

and then there's not going to be buy-in from those who didn't have a voice before. So I think that's incredibly critical, and sort of going back also to this ideal of egalitarian free speech underlying this.

- No, I agree. I think, have you seen, there's a paper Susan Benesch wrote, I'm blanking on her co-author's name unfortunately, but it was on time, place, and manner restrictions, right?

I think some of us have talked about this in the past. I've written about it in the past also. I was writing for a while about circuit breakers. The dynamic around it was, like, information flows, right? How do we think about design and information flows?

When I was on Wall Street, circuit breakers were a thing that were put in place so that people can be put into a more reflective mindset, so that stocks are not constantly whipsawed around when new news comes out; a kind of temporary halt so that people can digest information.
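A "viral circuit breaker" analogous to the market halts described here, pausing algorithmic amplification when sharing suddenly spikes, could be sketched as follows. The thresholds are hypothetical, not drawn from any deployed system.

```python
# Hypothetical viral circuit breaker: trip when sharing is both
# high-volume and accelerating, then apply friction (e.g. demote in
# ranking, put the share button behind an interstitial) for a cooldown.

def should_trip(shares_last_hour, shares_prior_hour,
                min_volume=1000, spike_ratio=10.0):
    """Trip the breaker only for high-volume, fast-accelerating spread."""
    if shares_last_hour < min_volume:
        return False  # too small to matter; leave it alone
    prior = max(shares_prior_hour, 1)  # avoid division by zero
    return shares_last_hour / prior >= spike_ratio

# A slow-burn post keeps circulating; a sudden rage spike gets a pause.
```

Like a market halt, nothing is deleted; the system just buys time for people, and fact-checkers, to digest before amplification resumes.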

And these models that we have are thinking about design tools, and friction in particular, as temporary ways of shifting people's thinking, putting them into a more reflective mindset, so we're not kind of careening

from one information crisis or rage machine to the next. And the ways that design can actually do that quite effectively, I think. I don't know if you all have seen that paper or that researcher; I think, Ravi, perhaps you have.

I wonder what you think about that. - Yeah, yeah. The other author is Brett Frischmann, and that's a great paper. It's about time, place, and manner, friction, and design. I think it's a great paper.

And I'd say that the most important thing,

you know, we're talking about who should decide, we don't want these big gatekeepers.

I think the best way you do this is no one decides, right?

So there's a way that you can reduce the reach of content, which is, like, you identify kinds of content that you want to demote. And there you're kind of making a moderation decision. You're deciding, like, these are things I don't like,

I'm going to reduce those things. But if you instead you decide like, I'm not going to optimize for what people pay attention to, I'm going to do surveys and give them things they aspire to consume, which tends to be more, you know, healthier content,

more aspirational content. And I'm not deciding, users are deciding, right? That's just a much more legitimate way to do it. And it supports users agency. It's not taking away from what users want.

And a lot of users, they don't want, you know, they get more sexual content than they want. They get more sensational content than they want. And so if you ask them, you know, aspirationally, what is it you want, you actually get a different answer.

So I think there's a way that you can design systems where no one's deciding; there is no, like, gatekeeper. It's really, like, designing so that users decide, and all of the decisions are really content-neutral, not about what we do or do not want people to say.
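The "users decide" approach Ravi describes, blending revealed engagement with stated survey preferences, might be sketched like this. The field names and the 50/50 blend are illustrative assumptions, not any platform's actual model.

```python
# Sketch of survey-informed ranking: blend what users click on
# (revealed preference) with what they say they want more or less of
# (stated preference), so demotion reflects user judgments rather
# than an editor's. All names and weights are hypothetical.

def blended_score(post, survey_weight=0.5):
    engagement = post["p_click"]             # revealed preference, in [0, 1]
    stated = post["survey_more_minus_less"]  # net "see more" minus "see less", in [-1, 1]
    return (1 - survey_weight) * engagement + survey_weight * stated

posts = [
    {"id": "rage_bait", "p_click": 0.9, "survey_more_minus_less": -0.8},
    {"id": "local_news", "p_click": 0.4, "survey_more_minus_less": 0.6},
]
ranked = sorted(posts, key=blended_score, reverse=True)
# Rage bait clicks well but surveys badly, so it drops in the ranking.
```

The design choice here is that the demotion signal comes from aggregated user votes, so the platform never has to label a category of speech as undesirable.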

- I want to talk about that devolution of control to the users, maybe in the context of decentralization. But while we're talking about who decides, we sort of alluded to the regulatory conversation.

And the thing that I always thought was interesting

that ties into the legitimacy piece here

is that I think most people don't like the idea

that social media companies decide, right? It is a form of unaccountable private power. It's quite opaque; nobody really knows what they're doing. There have been efforts to create some transparency; the Platform Accountability and Transparency Act

was one such bill, but it never managed to pass. Ravi, I think you have looked at a number of other different types of regulatory interventions that touch more on design. What are you seeing? If we say that user control is one thing,

it's a long way off, right?

And it may be that centralized social media incentives

don't align, and we can talk about whether that's true in a couple of minutes. Decentralized is its own animal. We can talk about trade-offs there. We don't want the government making moderation decisions,

how should we think about the role of the state, whether that's America, or what's happening in Europe with the DSA, how should we think about the regulatory conversation around design? - Yeah, I mean, I think it's analogous.

I use the analogy of cars and food. Like, once upon a time, we didn't have regulations for how cars were designed, and so you could have a car without seat belts, or you could make food however you wanted to in your meat factory. And then people got together

and said, "Look, we need some minimum standards so that people don't get sick and people don't crash and go through their windshield."

And so I think the physics of social media

are increasingly becoming understood, and we need minimum standards for the design of social media products, and in some ways,

our First Amendment does some work for us,

because you actually can't regulate in the United States what people can and can't say online, but you can regulate whether a product is safe. And so courts have weighed in that there's a difference between the expressive components

of an algorithm and the functional components. There is no message trying to be conveyed by an algorithm that says, you know, I wanna keep you on here as long as possible. And, you know, we also know

that there are lots of externalities to that. There's certainly harm to kids, and so you see things like the Kids Online Safety Act, or the SAFE for Kids Act, or the Age-Appropriate Design Code. You're seeing a lot of laws really go

in this design direction, both 'cause it's more effective, and also because it's more legitimate and, you know, less prone to abuse, and it's required by our Constitution. - Maybe we should chat about the user option then. So right now, the decentralized option

where users have the most control and there are the most users is Bluesky. We've seen a pretty big adoption curve for them recently,

and I think everybody here's on Bluesky now, right?

All three of you? - Yeah. - Yeah, I've got some problems with it, but the rest of us are doing fine. (laughs) I guess, for those who are listening

who are not on Bluesky, there's a lot of different ways that users can control their experience. There are interesting ways to have control over what used to be curated through the "people you may know"

algorithms, which were Facebook and Twitter's ways of suggesting users for you to follow algorithmically. Now there are what Bluesky calls starter packs, where you can find one person that you trust, you can click on their starter pack,

and you can subscribe to and follow all of those people. So it solves the cold start problem, and you have some agency over immediately going and finding people that you like, that you trust, that you find interesting,

and then seeing who they like, and trust and find interesting, and so you can kind of build your initial social graph that way. So there's the sort of social graph building piece. There is the ability to create and subscribe to feeds.

So for a long time now, you can just pick different types of feeds that you want. There's a gardening feed that I subscribe to, as a really crappy gardener. You can find people who will help you figure out

why your plant's dying. There is Blacksky, for people in the Black community who want to find that sort of Black Twitter community on Bluesky. There are so many different types of identity

affinity group feeds that you can find and follow. There's different topics, news feeds. There's one that's really great that's all gift links, if you just want to subscribe to all the different gift links that people drop on the platform

and just read news for free, basically.

So it's really just kind of a cool way to immediately curate your feed. And the thing that's really nice is they make it very easy to toggle between feeds. So if one feed is very boring, if your Discover feed,

or your friends that you follow are not posting very interesting things, or they're kind of quiet that day, you can pop into one of your other 10 feeds, and immediately see what else is happening elsewhere on the platform. And then finally, the other thing that they have,

which is kind of at the intersection, we can say, of moderation and design, is the labelers. So you can actually choose to have certain content either obscured or kind of hidden in your feed. You can put up a little interstitial over it, label it,

and you can also have shared block list. So that's roughly speaking the different ways in which users have incredibly granular control over very different types of the BlueSky experience.

I think one of the reasons that we've seen

this adoption, I think, is really the mainstream platforms swinging the pendulum pretty hard on the moderation front. So I think a lot of the migration to Bluesky was in response to Elon buying X, and liberal audiences feeling that they didn't really

like what happened to curation on X. They didn't really like what happened to moderation on X, moving to a different platform. Then you saw Zuck do the same thing. He recently had a pretty big shift in what he said

Meta was going to moderate; around that, again, a little bit of a bump there. Curious how you all see this shift to decentralized platforms.

I have seen it as an opportunity to show users

what is possible, but I'm not sure how many users

are thinking about it in those terms. I kind of get the sense that more people are there because they think of it as a vibe shift, right? They're fleeing what they see as like bad moderation and curation vibes on other places,

and so they're coming over to this new place, but they're not necessarily thinking about it in terms of wow, it's really fantastic that I have more agency. - My hunch is that you're right.

So if you fled X, and now Facebook, there's a good chance you did so because you thought

that maybe content moderation was getting too lax, right?

- Or because you saw Elon in your feed constantly, right? - Yeah, there's like-- - Curation on X got really weird. - Yeah, and also, I mean, Elon, as I've written about many times,

is not exactly your principled civil libertarian free speech defender. He is very much someone who defines free speech as stuff he likes and has all kinds of arguments to limit and moderate things that he doesn't like.

But that's sort of the way he marketed it, and I think that that turned off some people. And also the announced changes by Zuckerberg, which, you know, I think you can look at cynically

and say that was clear pandering to the new administration coming in, in order to avoid sort of the worst retributive consequences of a new administration where Trump obviously was not a big Zuckerberg fan.

But I think that some of the announcements were, from a free speech perspective, actually pretty good in the way that they were announced. Implementation obviously is different. So I like some of the features

that you mentioned on Bluesky, that's fine. I guess from a free speech perspective, the real difference is how light-touch Bluesky is when it comes to the centralized moderation. So if, for instance, you look at the hate speech policies

of Bluesky, they're not very different from other platforms'. It's not sort of, you know, "we have all these features, so we're not going to touch a lot of hateful stuff

centrally." I don't have any stats on how they implement it, but just when you look at the policy, it's not very different from the other platforms.

And we have to remember what the other platforms

have been doing. We put out a report a year or two ago where we looked at what we call scope creep in the hate speech policies of platforms. So we looked at, I think, eight platforms and their hate speech policies,

since they were first sort of publicly articulated

and up until 2022 or 2023. And you see a huge increase in the number of protected categories and characteristics, and you see sort of lower thresholds. And even though most of the platforms

say that they are committed to human rights principles, their hate speech policies actually go way beyond the definition of hate speech in the International Covenant on Civil and Political Rights, the UN convention, which on the one hand

protects speech, but then says you have an obligation to prohibit narrow categories of hate speech. And even though, I mean, these conventions are obviously not legally binding on private platforms, they say they are committed to these principles.

But what we found was that very clearly the direction was towards more restrictive hate speech policies, and just by looking at Bluesky's hate speech policy,

it doesn't seem to be much of a game changer.

And I'd be interested to see data on how they enforce this, because we've also done a number of studies, first in Denmark, but then the latest one we did was Sweden, Germany, and France, where we looked at some of the most popular politicians

and media outlets, and we looked at the number of deleted comments there, and we found that the vast number,

I think, between 90 and 98% on YouTube and Facebook,

respectively, were perfectly legal comments, and most of those that were deleted were not only legal, they were not particularly controversial. So that suggests that this scope creep has had an impact, not only on lawful speech,

where you'd expect a fair amount of lawful speech to be moderated away, but even on speech that is not particularly controversial. - I mean, that squares well with my experience with hate speech.

I'll agree with both Jacob and some of what Elon says,

that with the concept of moderating hate speech, the goal is reasonable, but the way it actually gets implemented in practice has a lot of negative effects. So a lot of things you end up taking down are things like "men are scum," or, you know,

they're not things that we actually think are harmful, and then a lot of things you end up leaving up are what you'd call fear speech. So people talking about a crime committed by an immigrant, just reporting on it, and then you see all the vitriol

it generates, and so you're never going to get at that kind of thing

with a policy. And so I think you're right, Renée,

but I don't think people were responding to differences in moderation, 'cause I don't think those actually make a huge difference in divisiveness. I actually think the thing they're responding to is a vibe change between Twitter, or X, and Bluesky.

Like people don't want to post something, and get attacked by 300 people, they want to have a reasonable conversation with regular people, and so if you have a platform where it's normative to just attack each other,

then regular people are going to leave. - Well, I think design really does so much toward shaping norms, and this is where I think it ties back into what Glen is describing and the work around what do you curate,

what do you surface. You talked a little bit about bridging as a means for surfacing disagreement without being disagreeable, I think, is how I've seen it expressed in its simplest form. Glen, I don't know if you want to talk about that.

I want to also mention Masnick's work on overcoming digital helplessness and talking about the agency piece, but give me a little bit about that concept of how do we create that sense where users do feel comfortable,

where the norms are such that you feel like you can speak without being barraged by a mob of people, because what is curated and surfaced doesn't create main characters constantly.

- I mean, I think it's important to understand

that this emphasis on design over moderation is both a defender and an attacker thing. It's both good and bad. Like, so, for example, there's wonderful work by some colleagues of yours

from when you were at Stanford, Renée: Molly Roberts, Jen Pan, Gary King. And what they show is that the most effective stuff that the Chinese do is not actually the Great Firewall. - It's about the flooding.

(laughing) - They win the space with diversion attacks, with distraction, garbage, basically. You know, you can talk all you want about free speech, but if the room, if there's deafening noise playing everywhere,

it's not very feasible to speak over that, right? And so, I just think no matter what our goals are, the design of sort of the overall information ecosystem

and what gets surfaced is critical.

To achieve the goal of making people feel that they can be part of the conversation, to me, it means doing exactly what you were saying with the Bluesky feeds, while maintaining some of the ease that you get from a more algorithmic curation,

which is, people need to know the context of the conversation. If people don't understand where they're speaking, it's going to be very hard for them to do that. There are completely appropriate times to start

ululating or speaking in tongues. It's called church, you know, or mosque, right? But that's probably not something to do when you're in an academic conversation about chemistry. And if we let everything get mixed together,

and people don't have any sense of that context, then you're going to get a lot of inappropriate behavior, not for any particular malicious reason, but just because people don't know what conversation they're in. Like, you know, "men are scum," for example,

is a very contextual thing. Like, if you are in a conversation that is meant to be bridging a bunch of different views on controversial issues related to feminism or abortion, saying "men are scum" is probably, like,

it could be a pretty problematic remark. If you're having a conversation about, you know, sexual abuse, it might be a very appropriate thing to say. So not giving a sense of the context or the audience that you're speaking to can really undermine our ability

to have civil conversations. And I think that restoring that in ways that are consistent with the ease of algorithmic feeds is really important. And that's a lot of what we're trying to do.

- But I think here, again, it's important

that we still have those spaces for those who want the robust, uninhibited discussions, also because, I mean, human beings are, you know, we're driven by our emotions a lot of the time, right? We're sitting here, we're having a rational discussion.

We're saying, what would an optimal information space look like? And we can have great ideas about that,

but the human beings that navigate it are not always motivated

by those ideals, and so you have, you know,

the latest example, the Mahmoud Khalil case, you know,

that's something that has upset a lot of people and they're going to vent their frustrations about it and their fears about government overreach on free speech. And they're not necessarily going to express that

in a very polite way, because they think that the government

is curtailing our First Amendment rights in a way

that's really scary, and you have to have spaces for that,

even though it sometimes delves into hyperbole. Then you can have, you know, a feed where First Amendment lawyers have a much more substantive discussion about the niceties of the case, and I want those things to coexist.

- Do you want them to coexist on the same platform? I go back and forth on this. This is the challenge of decentralization, right? It gives people the opportunity to move in response to the vibes, meaning you don't have to be on Twitter.

You can go be on Bluesky, which is currently perceived as lib Twitter. My hope is that it won't be for very long. My hope is that people recognize the technological capacity, the ability to build, much like the Fediverse, right?

Run your own server, do your own thing, set your own rules. Reddit, again, the same thing. You've got infrastructure, make your subreddit. You can have r/Conservative and r/Liberal coexisting in the same place on the same infrastructure.

Are you looking for people to be in dialogue with each other? Because that, I think, is the piece that is struggling. There are a lot of different social experience sites that are coming about.

And if you want to find the saltiest possible world,

it's always been there, it's called HN.

You can go. The question is, nobody's ever been deprived of that experience. The question is, how do you create the spaces where the disagreement manages to come into contact and achieve consensus?

Because my big concern is that we've created places that people can go to for the vibes, but we haven't found ways to use design as a solution to do that bridging and create that consensus.

And even as we've created more small public squares, which I think are good, we have not yet found the design solution that bridges that consensus space. - So another way I think about it is that giving the space for the small conversations

might seem like a contrast to doing the bridging. But I would actually argue that it's like a necessary other side of the coin.

Because until we understand what those smaller spaces are,

we don't even know what to bridge across at some level. So I actually think my optimal ideal design for this type of a situation is one where there is a common platform that has affordances for both those things and actually uses the data from each to inform the other.

Because by having the awareness of the small conversations, we know what the larger conversation, if you choose to tune into it, is going to need to navigate and bridge. Because without the smaller ones,

there's just no way to be attuned to that. - Mike Masnick wrote a really interesting piece in January of this year. It's called "Empowering Users, Not Overlords: Overcoming Digital Helplessness." It is asking users to make a pretty big

mental shift, a conceptual shift, in how they engage with their role, their own role, their own agency, on social platforms. This question of, I remember some controversial people landed on Bluesky.

And even though they didn't post very much, and they did nothing that was directly, immediately obnoxious on Bluesky, some members of the community were extremely angry that they were there, because of their past behavior on other platforms

that they found upsetting, offensive, et cetera. And this question of, you can have a very strong block feature. You can empower users with specific tools. You can even create, again, with federation, the ability to defederate from other servers.

What do you think shifts the way that users respond,

rather than kind of calling for the mods to take an action?

Do you think that that's a reasonable expectation that people should be rethinking their relationship to their own agency here, or is that an unrealistic expectation? - I mean, I think there's some different categories of users, so I don't think everything needs to be devolved.

There's no one type of user, right? Like, there's some people who have massive followings, who have official positions within certain communities. And I think the notion of having those people take on additional responsibilities,

which they already do in the world, is very consistent with the role that they play. I mean, there are people who are, quote, "users" of Bluesky who are also literally the editor of the New York Times, or the pastor

of a megachurch, or whatever. And the notion that one would expect those people to take on roles in the digital space that are commensurate with their roles in the physical space, or that there would be digital-native equivalents of those

that also exist, makes a lot of sense to me. The notion that everyone needs to be acting

in such a sophisticated way seems unrealistic to me.

And I think the best designs would allow people

to sort of sort into those roles and take on those responsibilities as appropriate to the social role they're playing. - I mean, I'd say we do want people to interact in the same space.

And I think there should be a room for everyone, but I think I would prioritize regular people. And I think a lot of these platforms don't prioritize regular people. They prioritize the hyper-engaged online warrior.

And that's not most people in society. And so I think we in the world know how to make spaces that prioritize regular people. And if you are, like, an online warrior who just wants to argue about everything,

we know how to sort of exclude those people

from those spaces, or make them take their turns,

or limit how much they dominate this space. And I think we just need spaces like that online,

where, you know, and I think you should be able to argue

and say things in strong ways, or say things in academic ways, but you should be well-intentioned. You shouldn't be there to create an argument. You should be there to have a thoughtful discussion. And unfortunately, our spaces aren't designed for that.

And so it may take a shift. The reason we don't see it happen as much is because we often prioritize a space that's used a lot, and so we're used to, like, I have to refresh my feed constantly and see what's new there.

Like, maybe there isn't something new to be learned, right? And so maybe we need to check our feeds every two days instead of every 30 minutes, and then maybe the conversation would be more natural. - I liked Audrey Tang's talk at South by Southwest.

I don't know how many people noticed this, but Zuckerberg had been going around in his sort of Roman, "how often do you think about the Roman Empire?" meme, sort of, like, "aut Zuck aut nihil" shirts, comparing himself to Caesar,

the "aut Caesar aut nihil," either Caesar or nothing. And then she had a shirt on that said "a world without Caesars." I would butcher the Latin pronunciation, so I'm not even going to try. But it was just sort of a nice way of wearing a shirt

that sort of articulated the ideological distinction between a platform that is run in accordance with the vision of one person and a very top-down, controlled leadership, versus a world without Caesars,

which I think is really a very appealing way

to phrase this, the potential of decentralization. Since I know we only have a couple minutes, I'm curious, you know, platforms, lawmakers, users. Everybody has very different visions for this future of speech online.

I'm curious what you all see as the most realistic outcome. Where do we see things going over the next five years? - I think that we're at a moment where lawmakers in a lot of countries, including democracies, would be skeptical, especially when it comes

to sort of decentralization, if you were to say, let's allow users to have more control and then minimize our content policies and our centralized moderation. I think that, you know, if you look at what's going on

in Brazil, for instance, look at what's going on in India, and look at the European Union. I think in the European Union now, with the way that the Trump administration is acting, there's even more skepticism about American platforms in Europe

and even more of a wish to say we need to have control over what's going on on these platforms because they undermine our democracy. Unfortunately, I think some of those reactions are going to potentially frustrate some of the ideas

that we're discussing today. One of the things that I also like about the article is that some of these ideas have been implemented in Taiwan and so I spend a lot of time sort of saying,

we don't have to always think about, for instance,

in free speech debates, the dichotomy between Europe and the US. There are actually really interesting places around the world. Taiwan, a country which faces an existential threat, including state-sponsored disinformation on a scale

that no other democracy faces, actually tries to navigate this challenge without resorting to some of the solutions that well-established democracies are unfortunately flirting with. So I try to point to that as a way forward,

and I think that bodes well. Unfortunately, it seems to me that a lot of lawmakers don't know that, and they still think in these very binary terms. So in the short term, I'm probably pessimistic,

but I think what you need to do, and this goes back to my initial remarks,

is that especially when you're working in the free speech space, you can't just talk about John Stuart Mill and principles. You have to show something concrete, something that works, where people say, "Okay, I actually see this is something that works.

This takes care of some of my concerns."

Then they're no longer so inclined to say,

"Well, I need a platform to implement my free speech

policies and do away with the people that I don't like," or "I want the government to adopt these rules to protect me from whatever evil forces I see out there." - Hey, I don't know if you've been following the financial markets or the newspapers, but it seems like it's a general time

of uncertainty, and no one knows what's going on. And that can be a problem, but I actually think it's kind of great. I think predictions are disempowering. - Fair enough. - The uncertainty is important.

I think it's a moment for us to steer things

and to together make that change and to focus on it. So I don't know, there's a lot of bad outcomes. I'd be happy to talk about it and there's a lot of great ones and I think it's our chance to seize the reins. - I mean, I am actually more optimistic.

I think we all walk around with our phones that we have complex relationships with. If you ask people, most people, including kids, want to use their phones less and we have all these apps that are trying to get us to use them more.

And I think that there's just too much energy in the system, too many people who are unsatisfied with the status quo, for nothing to change. I do think that the moderation paradigm

has somewhat held us back here, where I think you get

into never-ending wars about what people should

and should not be allowed to say online. And I think the design paradigm is taking hold. There are more and more people thinking about how these platforms are designed, how do we give people choice. The Digital Choice Act

recently just passed in Utah, to actually force platforms to allow that choice across users. So I think there's just too much energy in the system, and I talk to policymakers every day who are trying to make that change.

And so maybe it's not going to happen immediately, but it is happening.

- And Ravi, you deserve congratulations

for the wonderful work you did on that, so thank you. - Well, I know we are at time, and I just want to thank all of you for joining me today to chat about this. I feel like we could do another entire hour

on what's happening in Europe and Brazil, so we will have to actually do that at some point. But thanks so much for talking about the papers and your work, both on the regulatory front and on the academic and design front of free speech,

really enjoyed the chat, looking forward to having you all back in the future. - Thanks, Renée. (upbeat music) - The Lawfare Podcast is produced in cooperation

with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare Material Supporter at our website, lawfaremedia.org/support. You'll also get access to special events

and other content available only to our supporters. Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation,

our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org. This podcast is edited by Jen Patja, and our audio engineer this episode was Cara Shillenn of Goat

Rodeo. Our theme song is from Alibi Music.

As always, thank you for listening.

