The Lawfare Podcast

Lawfare Daily: What’s Influencing Politics Online? X’s Algorithm, Creators, and the New Persuasion Machine

3h ago · 47:56 · 8,523 words

In this episode, Lawfare Contributing Editor Renée DiResta speaks with Nathaniel Lubin, co-author of “How Social Media Creators Shape Mass Politics,” and Philine Widmer, co-author of a recent Nature p...

Transcript


In fact, in our study, we show that if people start using the algorithmic feed...

And they shift their political opinions to the right.

It's the Lawfare Podcast. I'm Renée DiResta, contributing editor at Lawfare, with Nathaniel Lubin, founder of Insights Studio, Survey 160, and the Better Internet Initiative, and Professor Philine Widmer of the Paris School of Economics. There are lots of effects happening all the time, everywhere, and your exposure to these platforms is sort of a layering effect of lots and lots of different nudges. Any individual nudge might be very small, maybe even close to zero, but some of them are having effects, and in aggregate they are having big effects.

And so that point, I think, is what's most important.

Today, we are talking about social media and influence.

The algorithm or the influencer: what's shaping political persuasion? One of the things that I really appreciated about your two papers is that, together, they highlight two different layers of influence online. One is the platform itself: what happens when the feed changes what people see?

Philine, your paper in Nature really focuses on that aspect of it. And then Nate, yours asks what happens when people follow particular types of creators, creators that they trust.

What is the role that individuals play in influence and in shaping political opinion? And so I was hoping that we could talk about the intersection of these two things, because this question comes up a lot, actually: is it the algorithm, or is it the influencer? Or is it a combination of both? Nate, you were a co-author of "How Social Media Creators Shape Mass Politics," a field experiment during the 2024 election, and Philine, your paper was "The Political Effects of X's Feed Algorithm." Maybe we can start by going into your respective papers, summarizing them and telling the listeners a little bit about them.

Shall we start with Nate? Yeah, absolutely. So the paper, which is still a preprint, is "How Social Media Creators Shape Mass Politics: A Field Experiment," from the 2024 context. For my part, we were working on the program side of this originally, from a nonprofit called the Better Internet Initiative, which is a 501(c)(3) nonprofit fellowship for content creators to integrate pro-social material that is fact-checked and accurate into their feeds. And because that program is sort of long-term in nature, we were able to plan ahead to do this research. So in the second half of 2024, we worked with some other colleagues on the paper to do a longitudinal assessment of a subset of the participants in that program, as well as some other participants outside the program.

You know, what the effect was of exposure to following those combinations of creators during that period. There's a very dense amount of information in that and a lot of different ways to cut it, but the punch line is a couple of things. One is that, among the control groups, people who spent more time online during that period had some shifts in their views and perspectives relative to people in the basic control group.

The second finding was that the interventions pretty much universally had quite strong effects relative to what we would have expected.

So they did shift people's perspectives, their knowledge of issues, and some of their understanding of the world. And then the main comparison point of the paper was looking at what we called cultural, or apolitical, creators versus more politically oriented creators: people who ordinarily talk about politics, partisan politics or politics generally, versus people who mostly focus on other issues and who, through this program, were going to include more substantive material.

And what we found was that, on a per-video basis, the more cultural, less political creators were more influential. They had much larger treatment effects when you look at the data that way.

Can you talk a little bit about how this intersects with, you know, your understanding of political influence and influencers over time?

Yeah, I mean, this program, this research, is based on a larger project around thinking about how influence works, and algorithms are very much a part of that. These are, as you pointed out at the beginning, mirror images of the same sort of questions. As we think about trying to improve the information landscape, understanding the algorithms and the architecture of the platforms is one way to get at that; another way is to change the lived experience of what humans on the platform are actually doing.

In different contexts, those could have more or less effect. So this is not an either/or way to think about it; it's just another way to cut it. And so, from a program perspective, the Better Internet Initiative was saying, you know, it would be great if some of the incentives were a little bit different, but in the meantime we're going to try to help people who are interested in taking this challenge on.

And to do that more directly, for my part: repeated exposure of any kind has the potential to persuade. That doesn't mean that just because you're in someone's feed it is effective, but it means you have the potential for it. And so.

Sort of thinking about the kind of linear sum of all the different influences that are happening: that's the underlying theory of this.

That's a perfect transition, I think, Philine, into your work. Maybe you could summarize your paper. Yes, I'd love to give you an overview of our research. We started from the observation that algorithms decide what billions of people see every day. But at the same time, somewhat surprisingly, the question of whether feed algorithms shape political opinions hadn't really found a conclusive quantitative answer in the previous scientific literature. A prior study found that turning off the algorithm on Facebook and Instagram had no detectable effect on political attitudes, which is a bit puzzling, because anecdotally we have suspected for quite a while that feed algorithms matter for political opinions.

And in fact, in our study, we show that if people start using the algorithmic feed on X, as opposed to the chronological feed, they do change their political opinions.

And they shift their political opinions to the right. The outcomes that we measured are on policy priorities, so what people think the government should address: should the government prioritize, say, healthcare over immigration?

And we asked how people thought about the criminal investigations into Donald Trump that were ongoing at the time, in the summer of 2023. And we also asked people about their opinions on the war in Ukraine.

But what was most puzzling to us is that switching the algorithm off didn't reverse those effects. For people who, due to our study, stopped using the algorithmic feed, we didn't find that they changed their political opinions.

And this asymmetry between the two effects also explains the puzzle from the previous literature, because the study that I mentioned before only looked at one direction: what happens when people who had previously used the algorithmic feed turn it off. In our case, because we were studying X, which already at the time offered the choice between a chronological and an algorithmic feed, we could study the treatment in both directions. People who came to the study using the chronological feed could be randomized into using the algorithmic feed, and people who came to the study using the algorithmic feed could be randomized into using the chronological feed, so we can really see what happens both ways.

Now, a question you might have is: why is there this asymmetry? What our data suggest is that it's driven by the accounts that people follow.

So if you are using the algorithmic feed, you also see content from accounts that you don't follow yet, and then you start following these accounts. When the algorithm is turned off, you don't unfollow these accounts, meaning that this following list, this network on X that you have built under algorithmic influence, is going to stay there even if you turn the algorithm off. And maybe one thing that I would like to add: we don't find an effect on affective polarization, this kind of feeling of warmth toward the in-group versus the out-group, but we do find this effect, as I mentioned before, on political opinions about current events. And of course this is speculative: we were only following people for seven weeks, and we already found that they changed their political opinions on current issues if they started to use the algorithmic feed.

So of course this begs the question, because in reality people use these algorithms for months, or for years: what would happen in the long run to more deeply held opinions? One thing that's worth noting is that we combine different types of data. The first type of data is surveys at baseline and endline, where we ask people about their political opinions. But we also collect data on what people actually see under both settings: what does a person see when they go to the algorithmic tab, and what does a person see when they go to the chronological tab?

Plus, we also collect the accounts that people follow, and this comb...

Another finding I would like to emphasize from this analysis of what the algorithm promotes or demotes is that we also looked at news outlets, and we find that the algorithm demotes news outlets.

They are much less likely to appear in the algorithmic feed than in the chronological feed: on average, around one fourth of the posts in the chronological feed are from news outlets, compared to around 12% in the algorithmic feed.

So this is really a stark decrease in the presence of news outlets, which typically follow standards for their content, like fact-checking, that might not apply to the political activists who are promoted heavily by the algorithm. The asymmetry is really the striking part; I wonder if we could maybe unpack that a little. Switching the algorithm on, you see an attitude shift, but switching it off does not lead people to shift back. You talk a little bit about this in the paper; maybe you can explain why you believe that is.

Yeah, exactly, that's probably the most striking finding. Coming back to these different types of evidence that we gathered, what we can show is that when you turn on the algorithm, you start following these conservative political activists. But if we turn off the algorithm for you, you still keep the following list that you built while on the algorithm. Meaning that if we switch off the algorithm for these respondents, they still have the same types of accounts that they follow.

I think it's interesting from a policy perspective, because there's this discussion around the right to opt out of the algorithm, but if we take this asymmetry that we discovered seriously,

it casts some doubt on whether that is sufficient, given the stickiness of the effect. And of course we can't rule out that eventually people would also see effects in the other direction, but what we can clearly say is that if this happens, it's much slower than the change of opinions due to turning on the algorithm. So you're saying the feed shifts the network, essentially, that people are following, and then that network persists even after they toggle to something else subsequently. What percentage of them significantly changed who they were following during that time?

Because one of the things that's very interesting on platforms like Threads now, and on platforms like TikTok, is that who you follow doesn't actually matter very much, in the sense that they have this concept of unconnected content, right, where this is essentially the For You algorithm that's just going to push you stuff anyway. And so what I noticed on platforms like Threads is that people don't bother to follow you; they just assume that they're going to see you. You hear TikTok creators talk about this too. That question, and I see Nate kind of nodding along here, of what actually motivates the follow: what leads that sort of holy grail of the persistent connection to actually form?

I'm curious what you see in your research.

So I can't give you specific numbers on the share of people, but I can tell you that this switch leads to around a 0.1 to 0.2 standard deviation change in the probability of following a certain type of account. To come back to your, I guess, broader question: obviously, this asymmetry that we discussed only makes sense in a world where there is a chronological feed. In a world where we only have an algorithmic feed and everything is just learned from your patterns, from your attention,

this kind of deliberate choice of "do I want to see more or less of this person?" is no longer the right question to ask. And I think that's actually very important

when considering platform regulation, in some sense, because there's always this question about the gap between what I would like to see and what I actually watch.

Because, and that's not our study, but there's a growing literature on how these algorithms exploit biases that we all have as humans: sensationalist content, perhaps very emotional content. So there's this thing of being hooked by things that, if we were asked to deliberately choose what type of experience we would like to have, we would say we would not like to be nudged toward on this platform. So yeah, this asymmetry is a bit specific to a setting where there is some form of deliberate choice of "yes, I want to see more of this." And I think in the meantime, on X, they've already changed this: when you go to the following tab now,

There's also some algorithmic curation in that now, and so you have to like g...

The broader takeaway, and here I'm obviously moving beyond what we show in this study, but something that I would find consistent with what we find, is that in some sense this stickiness could easily apply to other things too, because, as you just said, people know: I'm going to see more of you if I watched your video.

So not following you is, in some sense, also a learned behavior. Whatever these algorithms do, I think this more general idea holds: these behaviors could be sticky, and then these kinds of things,

the type of content you see, your opinions, how you go about the platform: as humans, we learn how to interact with the platform.

So I think that just generally cautions us a bit against looking at things very mechanically, as in: we turn an algorithm on, we turn it off, everything is reversible. It's something we should be more careful about when thinking about policies that would improve the online experience on these platforms. Let me ask one more question that maybe ties into this question of feeds and removals. You're probably familiar with, and I believe you in fact reference, the big Meta studies, the ones done with Meta, the platform, that were published in Science. These studies were also looking at the effects of turning off or shifting feeds, and

what seemed to be null effects there. Do you want to talk a little bit about that intersection? They had also found that there was essentially no depolarizing effect to ceasing to consume algorithmically curated content, and the public takeaway from that, the media takeaway in the coverage at the time,

was that there just wasn't an effect of the feed. You showed something very different, and this led to some very interesting

discussion in the social science community about the difference in these findings, particularly given the very strong pull to the right that Twitter shows in your findings.

How did you think about your work relative to that prior work, which had suggested that, you know, it doesn't matter, turning it on, turning it off, it's all the same thing? Yeah, I think it really is exactly as you put it: we're not contradicting that earlier study, because what they looked at was turning off the algorithm, and they showed that this doesn't impact your political opinions, and we find exactly the same thing.

Even though it's a different platform and a different time period.

So it's really about this asymmetry, and it ties back to what I said before: I don't think we can mechanically assume that turning on an algorithm is the same as turning it off, and that's what our results suggest.

And so in that sense, we're very consistent with what they found. For us, though, there is a bit of a deeper question going forward, also for future studies: what kinds of research questions should we ask? Because as researchers, it's typically very hard to know how these recommender algorithms, these feed algorithms, work, and it's relatively hard for us to get access to data. So it requires a certain amount of imagination by the researcher to think about: okay, why could there not be an effect if you turn it off, even though we have all this anecdotal evidence?

So in that sense, I think it just highlights that the details of the research questions we ask are also very important. Another very obvious difference is that they look at the Meta platforms and we look at X, and from a scientific viewpoint the effects are always specific to the given platform and the given time. But of course, one does wonder whether there is something general in what we find: would we expect to see a similar rightward pull on other platforms too?

Maybe that's the right moment to pass the word to Nate, because I think you also grapple with this question in your paper. Yeah, I have a couple of reactions to that. One is that I think we totally agree with that. An example that people often forget in these contexts is that the algorithms are often recommending people or accounts to follow when you create an account, or are introducing recommendations that are not the feed algorithm's recommendations.

They're part of the larger architecture more broadly. So if a platform like Twitter, or X, just to take a random one, happens to have Elon Musk promoted all the time, that might have an outsized effect outside of the feed. The way that, back in the day, old-school Facebook would recommend Barack Obama almost every time you launched an account: that had an effect as well.

These things can work in different directions.

I think that larger architecture around what people want is the thing that is missed in a lot of this.

So Philine's paper is amazing for showing a kind of contextual moment about a political effect. There are some other papers I've seen arguing that there are engagement versus

chronological differences in people's satisfaction with the feeds. And so a few of us, in coordination with the Knight-Georgetown Institute, wrote a paper called "Better Feeds," arguing much the same things, but basically that people should have more control over choosing what they want, and that the incentives should be oriented around the effects of exposure to these architectures broadly.

And I think that's very consistent with what Philine was just saying: if you only focus on the input side of it and not the effect side, you're going to miss most of the effects you're concerned with.

So Nate, let's focus in on influencers for a couple of minutes. Your study suggests that predominantly apolitical creators may be especially potent messengers because they are seen as more informative and trustworthy. I'm curious, can you talk a little bit about that aspect of the cultural creator component? Absolutely. So the distinction here is, as I mentioned before, that the apolitical creators are predominantly talking about topics that are in no way considered political. In fact, we ran a classifier over the content feeds of the different groups, and there's a table in the paper that describes the ratios. For these creators, political content is

in the range of 10 to 20 percent. During the height of a presidential election, that's actually quite a low number relative to the creators classified as political:

more than three quarters of their content was considered political. So that's the setup of this. We talk a bit in the paper about potential mechanisms, and that's a little more speculative; we don't have as much direct evidence for the causal reasons behind the difference in effect sizes, or persuasion amounts, but we have some indirect indications that are quite convincing. One was that people were continuing to follow the cultural creators, the less political group, much more after the study period ended.

And so you sort of see this kind of connection that was made outside of the incentives we were introducing as part of the research. We also see a pattern where the more political groups seem to be having their effect based on frequency. The number of videos actually shown during the intervention that are political is much, much greater for the political group than for the apolitical group, which makes sense: those are channels that are talking about this all the time anyway.

And so for the more political group, you see this frequency working much more like advertising, where the repetition is very likely the cause. There are many other research directions we're interested in pursuing in future work based on this, like diving deeper into the potential connections that people are making, the kinds of parasocial notions that are in the literature. There's polling I've read, maybe that's a better way to put it than studies, suggesting that people are just not interested in news, not interested in politics, that they're kind of burned out when they're on social media.

Is that why the apolitical content supposedly performed so well? Do you hear anything like this, either from the creators that you work with directly in your prior work, or reflected in the study itself?

I mean, I think it's a little bit context dependent, right? During this period, which again was a very high

political-salience period, I don't think it's likely that politics was failing to drive attention at a high level; certainly it was quite pervasive. I think the concern that most of the creators have is more the opposite, for the apolitical, sort of cultural ones. Again, this was not part of the nonprofit's interventions, nothing like that. They were concerned about causing backlash with their audiences; they were concerned about getting things wrong, or being attacked for being wrong, that kind of thing.

And so a lot of the intervention value is getting them comfortable, giving them the capacity to do that well and accurately. So if you imagine recreating the study in a different context that wasn't this sort of heightened environment, you could imagine it might look different. But yeah, it's hard to know. So Nate, your paper also suggests that trust and parasocial connection are central to this. Can you talk a little bit about that? What exactly is it? This comes up quite a lot, even in academia, honestly, not from the standpoint of how we should be studying it, but just: what is that thing that creators have that institutions or campaigns or legacy media and news increasingly do not?

So I think there are two dynamics that we're getting at in the paper. One is that, to have the chance at a persuasion effect of any kind,

a condition is that you have to have seen the thing. That's a threshold that's obvious but perhaps not always considered, which is

why I think a lot of the most important work here for this more cultural dynamic happens: because these people who are doing that kind of content creation

Have large volumes and they are commanding attention and so if they choose to...

If the audiences of those profiles are very different from the more political groups, or other kinds of groups who are maybe, to a first approximation, preaching to the choir, speaking to people who are already with them, then their capacity for persuasion might be larger. The second dynamic, which I think your question is speaking to, is particularly about repeated exposures, where the channels someone is following, or their legacy viewing history, matter more.

That repeated exposure might make audiences more likely to believe them, more likely to take what they're saying seriously on a kind of emotional level, and depending on what they're saying, that might make it more credible, more believable that what they're saying is not wrong or inaccurate. In this research, it's a little bit hard to know exactly why things are happening; you just see the output effects. But again, the indirect causal indications seem to be consistent with what you would see if you were thinking about this through a parasocial-connection theory of why the more apolitical creators are effective.

There's an interesting set of thinking around why the news accounts didn't perform quite so well on Twitter in general, which was that the news accounts never respond back. It's not that they have different content so much as that they don't reply to you. I remember Elon actually gloating about this when he first bought the platform, because I study influencers too, and I would look at these differences in how they communicate, what the styles were, political influencers in particular.

And one of the things that you would notice is that Elon would mock the New York Times about this: oh, it's got no engagement, meaning it's got these massive follower counts but there's no back and forth. When it tweets, it doesn't get the same amount of engagement as when, say, Catturd tweets. And the reason for that, in part, I think, is that the algorithm is weighting responses, weighting prior engagement, weighting the likelihood that there is going to be some back and forth that other people will be able to

hop in on, right, to participate in, to make the platform feel like a participatory conversation. This is one of these areas where I feel like the news accounts in the feed are not as well suited to the medium, if the medium is trying to privilege conversation, trying to privilege social engagement. And that becomes a real challenge, because it means you're going to be getting your news through the most charismatic, evocative person, as opposed to an organization. I don't know if you have

thoughts on that, but this was something that came up as I was reading. I think this is an excellent point, and I don't think, or at least I wouldn't read our study as saying, that engagement

maximization is bad, even if certain kinds of changes in political opinions were driven by engagement.

I don't think engagement is bad per se, or a platform that drives engagement to a certain degree. There is also deeply funny, deeply informative content on social media, so there's clearly the potential for this to be something fun and informative.

I think the question is when it starts to replace other sources of information, for certain types of information which might by nature be a bit less interesting or less engaging.

Then I think it starts to have effects on the democratic debate. And it's a bit the same discussion as around fake news: sometimes reality just can't catch up with what you could potentially invent. So in that sense, right now I think we're pushing toward this extreme where engagement is really the bread and butter of these platforms, and there might be additional specific interests of owners or platforms or other big stakeholders, but engagement maximization is clearly a big part of it.

Perhaps we're in an equilibrium where there's just too much weight on that, because for the public debate, engagement is not everything. If we thought about it in terms of a wishlist of how we would like public debate to go, I don't think engagement would be the only thing we would put on that list of

the ideas we would have about how a good democratic debate would proceed. That said, a lot of it is not political, right? People share memes about cats, et cetera.

So our concern is specifically about how we are organizing this huge marketplace for ideas that has an impact on politics.

So we have these two areas of focus: we have what the machine ranks, and then we have who I let into my feed repeatedly because I like them.

How do you think about this? This is kind of a question for both of you. Do you think these...

How should we be thinking about the intersection between these two things? Yeah, so as I said at the very beginning, I think the two papers are very complementary, because what economists would call our measure is sometimes a reduced form: you measure the effect of the algorithm on these opinions, and then the next question is, but how? In our paper, we really do go this extra step, and we show that one very plausible hypothesis, which we can support with the data, is that it's really the accounts that you follow, the content that you're exposed to. And I think that's exactly what Nate is showing in his paper, right? If you

start following certain types of accounts, they will have an influence. There might be some limitations to the external validity of this: if the account is

super boring or something, you might be asked to follow it, but it wouldn't impact you. But let's assume it's an account that is reasonably engaging and that you're more likely to see: there will be an effect. So I do think that, taken together, they help us understand the mechanics a bit better: content creators do matter, even if you were kind of pushed to follow new accounts through the algorithm.

And I think it cuts a bit against this

idea that recommendation algorithms are just, like,

helping you find the content that you were interested in all along, and that in that sense they're politically neutral because they're just showing you what you wanted to see all along. I think both papers show that what you consume online,

even if it's changed through random forces, changes your political opinions. And I think this makes it just much harder to defend

the neutrality of algorithms. And then, when we take into account how big these platforms are, we know that they influence your opinions.

It just raises some questions about whether we want to stay in this kind of wait-and-see approach. I mean, there's a lot of political action going on, but I think largely it's still relatively passive. So these two things taken together, for me, highlight the need for a much bigger debate about how we want these platforms to function, especially when it comes to politically relevant information. Have you seen The Argument, the Substack newsletter?

A writer there wrote a very interesting essay called "Twitter Is Not Real Life." Did you happen to see it go by? Okay, for listeners who maybe didn't see this one:

The Argument is the outlet. They did a series of graphs, charts from polling data that they pulled, looking at X as a platform now. This is a little bit after your study, Philine, and they find that X news consumers are notably more conservative than other platform audiences at this time, right? They note, for example, that ICE's popularity on X is close to break-even, even while ICE is wildly unpopular on other platforms, and that overall Donald Trump still enjoys a high popularity rating, net popular, among Twitter news followers.

The claim is not only that audiences self-sort but also that X under Musk, you know, and this intersects with your point that as X pushes users rightward, there is a... He kind of pitches it as a vibes problem for people over-indexing on the platform; he's arguing that X at this point has become a conservative platform. This is reflected in work that, yeah, we saw at the Stanford Internet Observatory back in the day: we saw Gab's user base go back to X, Truth Social's user base go back to X, because it was a platform where they could do things and get reach that they couldn't on those smaller niche platforms they had decamped to for a while, and they now had a moderation environment favorable to them. And so it was a very interesting

argument that he makes. But I guess the takeaway is that it's a markedly different audience than the median voter. One thing that this raises is the question of how we should interpret studies when we look at platforms today. You know, I think there's still a bit of a legacy perception among many people that platforms are still where everyone is, right? That Facebook is where everyone is, that Twitter is where everyone is. And increasingly, when you actually look at the demographics on the platforms, that fracturing has occurred and that is not true.

How do you both think about this? Yeah, I absolutely agree with that. I think there's also a massive distinction between the content-creator profile, or the active-engagement profile, of users on a platform like X versus the passive-consumption profile.

So that is a recipe for real challenges to representativeness when you do a surv...

Obviously the platforms don't provide that kind of data at high fidelity by default, really ever, if they ever did, and so, you know, it's very hard to do an assessment that way with credibility. But I absolutely do think it's much the same way as, you know, a TV audience selecting which channel to watch; we're seeing a similar pattern in terms of what kind of channel, and then what kind of universe within that channel, you opt into. X is probably still the most pervasive across all groups, probably the most important channel despite what people would say, but your experience there is a recommendation experience

that will vary depending on what part of the network you're focusing on. And so I think, you know, the parallel dynamics of what the experience on the platform is, is the thing I would point to there. Right? So in whatever corner of the network you look at, there are going to be a handful of very large channels, very large influencers, whatever the account type is, that for that part of the platform at any given time are having lots of influence. And so that's sort of a dominant dynamic


at each of those scales, which is hard because we don't have an independent reference frame to look at it with. I think there's also the argument about the elite-effects phenomenon that happens on Twitter, right? Which is that there is still a disproportionate number of elites there, even as the user base has shifted right, even as the influencer base has shifted right, even as the creators have become a little bit more skewed,

and that it does still shape political opinion by virtue of its role as a breaking-news hub, a place that people still tend to gravitate to for breaking news, and the fact that the left hasn't necessarily

coalesced in one particular alternative place. What do you think about the elite-effects argument, with Twitter's continued salience as a place for political discourse, not only in the U.S.? I know that this is true elsewhere too. So, I don't know if our study speaks to that, but I could say, you know, as someone who spent a couple of years of my life with TweetDeck open 100 percent of the time, and felt the pain of that, I still think Twitter has an effect for the left in much the same way that, like, CNN does, right? I mean, I

turn CNN on when there's an election result or a war, and pretty much never otherwise. And I think that's kind of the way a lot of consumers now treat X when it's not their day-to-day: there are the daily users who are there, but for everyone else it's still a thing you have on your phone that you might go to when there's a moment you want to see something. So I think you're right that at high-salience moments it can have an effect; there might be cases where that feed exposure can have an outsized effect in sort of framing a debate at a high-salience moment. That said, I would be cautious about over-extrapolating, because I think we over-index on those moments

relative to all the rest of the time. I think a commonality of both these papers is that, you know, there are lots of effects happening all the time, everywhere, and your exposure to these platforms is sort of a layering of lots and lots of different nudges. Any individual nudge might be very small, might even be

close to zero, but, you know, some of them are having effects, and in aggregate they're having big effects. And that point, I think, is what's most important.

What do you think about the implications of your respective findings on the what-should-be-done front? Do you think there are policy ramifications that come out of your takeaways here? Do you think there are

media literacy arguments? What do you want either policymakers or regulators to understand about what you've found? And where do you think platforms should be going?

I'll jump in here with a somewhat self-interested comment, because doing a study like ours takes a lot of public research money and a lot of time, and then we have one study documenting the effect of a feed algorithm that billions use every day, in one context, with five thousand people. And I don't think this is sustainable, given the impact that these platforms have.

I think we need much better infrastructure to create transparency, and some steps are being taken in this direction, with the DSA for example.

There's still this general problem of, first of all, who decides which kinds of questions get audited. There should be some organized procedure for understanding what questions we want to audit as societies, how we can get the data to audit them, and whether we can potentially even have some access to the code of these platforms, and not just selected pieces of the code that the platforms themselves decide to share. Really, how can we get these insights? Because for us, any study is just one puzzle piece, right?

I think there needs to be much more systematic, ongoing monitoring there...

Am I supposed to kind of figure it out as I go, or should I be able to choose much more? Should I be able to, I don't know, eventually bring along my own recommender, given that I could plug it into the platform? So I think there are many, many shades of discretion around transparency, and for me, from a research perspective, and I'm not a policymaker but a researcher, that's one of the things where I see huge potential: to create better knowledge, more knowledge that will help us create much more informed policies.

And not just the policies per se but also the enforcement instruments: in practice, how do we make it work? Not just what we would like to regulate, but how can we make it work in practice?

And while broadly we've been talking in this conversation about political persuasion, that's not the area I would be focusing regulation on at all. I would be thinking about

the attention and addiction issues, where there's much more consensus and legacy, more of a product-liability standpoint. So the frame we talked about before, I think we should slide to that, which is to say: from a regulatory perspective, let's not worry about the inputs that people are creating; let's worry about the output effects of exposure. And there are ways to run

controlled trials and longitudinal assessments of population exposure, especially for protected classes like kids, and have do-no-harm principles associated with that. I think that's one of the ways to do this.

If there's a way to not cause a problem on that dimension while optimizing for engagement without any other consideration, then great, good for them, they should do it. But if they can't, the reason they should stop is not persuasion; I would say it's because they would have an effect on something like kids' health. And so we should be focusing on the actual thing. So, holding policy aside, and, you know, regulators, holding all that aside: if you had a magic wand, how would you redesign the feed tomorrow, or what would you change? Would it be the incentives for creators, the feed itself, the public's ability to choose and understand how both work? What do you think is the single most impactful thing we could be doing here?

As for the magic wand, I would hope that the, you know, product leadership of all these companies

shift their emphasis in their optimization timelines from short term to long term, and include metrics that are associated with societal health rather than individual time spent.

And societal health, to me, means things like social trust or human-interaction metrics that are well grounded in the social science literature, so that they'd have a do-no-harm principle over the long term associated with those kinds of metrics. I think

we'd be in much better shape. I would like to add a small thing to this, and I very much agree with that one. I would also add that many owners of the platforms are now AI companies, or under the same ownership they also have AI companies. So my magic wand would also focus the calculus of these companies, when it comes to social media, on the actual information sphere. Because one concern you might have now is that if you have a company that is also very, very active in other sectors of the economy, in artificial intelligence and, more generally, defense, et cetera,

I think it's just much harder to understand whether there's some sense in which you

can really think of them as having some sense of public service, the way you might if they were just active as an editorial force, like a newspaper. So I would also like to see more separation. And we're talking about the magic wand, right, so I don't have to be specific about how exactly this would go, but I would like to have these things a bit more separated, because I feel like increasingly there are these huge players that are active in many different sectors, and they impact millions of people in many sectors at the same time. I think this makes it very, very hard to keep track of what's actually happening, what the real incentives are.

And so I would find it helpful if this were more clearly separated, and in that sense, I think, less concentration of power also.

So then, I guess, to summarize our conversation: what these papers suggest, taken together, is that online political influence is not just about messages, right? It's about systems and relationships: what the platform decides to show, who people decide to trust, and how repeated exposure across both can shape

what feels salient, what feels believable, and what feels normal, right? With th...

And what it means for...

The Lawfare Podcast is produced by the Lawfare Institute. If you want to support the show and listen ad-free, become a Lawfare material supporter at lawfaremedia.org/support.

You'll also get access to special events and other bonus content we don't share anywhere else.

If you enjoy the podcast, please rate and review us wherever you listen. It really does help.

And be sure to check out our other shows, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine.

You can also find all of our written work at lawfaremedia.org.

The podcast is edited by Jen Patja, with audio engineering by Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thanks for listening.

