This is Planet Money, from NPR.
Alexi Horowitz-Ghazi. Mary Childs. Yes. You and I took a little trip up to scenic Montreal, one of the jewels of French Canada, for a little Planet Money mission.
Yes, we did. And even though it's a little bit sad that that mission did not entail joining the maple harvest or infiltrating a poutine cartel... Next time. Next time. ...it did have much bigger implications for anybody and everybody whose life is impacted by science, which I think is basically all of us.
I think that's right. Yeah. We were there to meet a guy named Abel Brodeur. Abel is this very energetic economics professor in his late 30s at the University of Ottawa. And we found him bounding around the halls of this modernist school building in downtown Montreal.
He was getting ready to host an event he has become sort of famous for, something called the Replication Games. It's getting exciting now. How are you feeling? I'm feeling good.
It's the beginning of the event, so this is a moment I'm full of energy and enthusiasm. Seven hours from now, it's going to be a different conversation. Abel is going to be tired in seven hours because at a Replication Game, he is running around between 16 teams of three to five people in a kind of hackathon.
People work all day to replicate recently published social science papers, to reproduce the results and see if the findings hold up. Because ever since technology made it easy to crunch data, we've been able to go back and check old research. And it turns out, it wasn't great. Re-running an old study today, a lot of the time, does not yield the same result. The research no longer proves its conclusion.
And the same thing often happens when we re-conduct whole experiments. Altogether, these problems have become known as the replication crisis. A lot of people across academia have been trying to fix this, so we can trust research so we can actually know what we know. And this event, the replication games, it's part of Abel's attempt to help solve this crisis.
The idea is to change norms through monitoring. Just giving a small percentage, a small chance that we will monitor, can massively change the behavior of everyone, change the way to behave, change the way to go, change the way to do research. So that's it. After a few minutes, we head into a big lecture hall where Abel takes center stage. All right, folks, we're going to get started.
Welcome to the Replication Games. Thanks for being here in Montreal with us. Let's get started. Today, we have 16 papers that are being reproduced. A couple of small things.
Around the room, dozens of social scientists are gazing up at Abel, looking a little bit nervous.
Most of them have come from across Canada and most of them are first timers, who now have
to undergo this kind of awkward initiation, right? I'm going to put on music because I know you guys need, like, you know, a bit of motivation. But you need to do the body movement. Everybody has to do it. All right, so it's a song. Get it? So we do it. I need you to do it. It's pretty easy. Abel starts energetically clapping like an elder-millennial camp counselor, and his audience joins in.
Guys, thank you so much for being here. I hope you enjoy. It's going to be fine. Thanks, everyone. Hello, and welcome to Planet Money. I'm Alexi Horowitz-Ghazi.
And I'm Mary Childs. Over the past couple decades, the world of science has been stuck in an existential crisis over whether we know the things we think we know. It started in psychology, then spread to medicine and economics. Now people across disciplines are trying to figure out how to solve it.
Today on the show, the story of one economist, how he set out to learn what exactly has broken in the way social scientists create new knowledge, and how he came up with his own daring and kind of wacky way to help fix it. By building an internationally crowdsourced surveillance system to keep social scientists honest.
Okay, so the replication crisis has been a pretty big deal for almost 20 years at this point. We've covered it on Planet Money before. The story of how economist Abel Brodeur first encountered the problem, and why he set out to help fix it, begins back in 2011. Abel was getting his master's in economics, and he was writing a paper on whether smoking bans in restaurants and workplaces actually made people smoke less. He collected this huge data set.
I had like amazing data from the CDC, which is public.
I had smoking prevalence at the county level. Abel says that all of the established research at the time indicated that smoking bans were hugely effective, that they got lots of people to stop smoking. But when Abel crunched his numbers? I was finding absolutely no effect. None. It was like nobody stopped smoking.
I've played with the data for six months, and I find nothing.
Abel was trying to make a name for himself in academia, which means getting his research
published in an academic journal. And it's harder to get published if you find no effect.
Especially given that the existing literature did show an effect. So what Abel needed was something statistically significant. For the statistically uninitiated, significant means the result would be produced by chance less than 5% of the time. So the probability that the result is just random is 5% or less. That is the cutoff for whether your findings count or not. There's this 95-5 cutoff that really matters. We're obsessed with these thresholds. So Abel kept tinkering with his data set, changing his computer code to contort the data one way and then another, until eventually, one day, he found a way to analyze one subset of his data that gave him what he'd been looking for.
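What Abel was doing is an instance of what statisticians call the multiple comparisons problem. A quick simulation, ours, not Abel's actual analysis, shows how testing enough arbitrary subgroups of pure noise can produce a "significant" result:

```python
import numpy as np
from scipy import stats

# A sketch of subgroup fishing (our toy example, not Abel's actual code):
# simulate a smoking ban with NO true effect, then test it separately in
# many arbitrary subgroups of the data.
def fish_for_significance(n_subgroups=20, n=100, seed=7):
    rng = np.random.default_rng(seed)
    hits = []
    for g in range(n_subgroups):
        banned = rng.normal(0.0, 1.0, n)      # smoking rates with a ban
        not_banned = rng.normal(0.0, 1.0, n)  # smoking rates without one
        p = stats.ttest_ind(banned, not_banned).pvalue
        if p < 0.05:                          # the coveted threshold
            hits.append((g, p))
    return hits

# With 20 looks at pure noise, roughly one subgroup (20 * 0.05) clears the
# 5% bar by chance, and only that "hit" tends to get written up.
print(fish_for_significance())
```

Each individual test is done honestly; the dishonesty sneaks in when you only report the subgroup that happened to clear the bar.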
A result demonstrating that smoking bans had decreased smoking, and a result that was significant. He was like, there you go. I was so happy. I was in the library, just, yeah, I was like, significant effect! And I was so happy.
Finding a significant result meant that if his paper was published, he would get to put a little asterisk or star next to his results. And the more statistically significant the result, the more stars you got to claim. But Abel's happiness did not last long, because the more he thought about how he'd gotten that significant result, the more it started to seem like it was working against the whole
goal of social science, to actually discover true new knowledge about human behavior. For example, policymakers need to know whether smoking bans work in order to make sound policy decisions. But here he was, torturing the data to match a preconceived hypothesis. He thought, this is stupid. What am I doing? I'm writing a piece saying that smoking bans are decreasing smoking for one subgroup, because I managed to find one that was significant. Like, this is dumb. I'm doing something wrong.
Abel ultimately decided not to use his tortured results.
He wrote up his paper showing that he'd found no effect, even if it meant his paper was less exciting. And at first, he thought what he'd done to his data might have just been a one-off mistake on his part.
But then you start talking to other students, and people were like, oh yeah, that's how you publish. Abel started to see that this was a problem of incentives. In order to advance their careers, academics have to publish papers in peer-reviewed journals. And the journals want to publish work that's statistically significant and novel. These papers can win big prizes and define new research agendas for decades.
But because of all that, people were doing what he had done, trimming and squeezing and coaxing the data towards significant results. And that can easily cross over into a kind of data manipulation called P hacking, P as in probability. And Abel says it can happen almost subconsciously.
Because the project took, like, three, four years. A back-and-forth between co-authors, discussion. And six months later, you go back, you exclude, again, these other people, you do something different. And over time, all these decisions, actually, when you look at it from the outside, it's like, this is crazy, what you've done.
To figure out how widespread this problem might be, Abel decided to research the research. He and a couple of his colleagues scraped the statistical significance data from a bunch of the top academic journals, the distribution of stars that published researchers had racked up.
And when they looked at the distribution, they found a noticeable hump just above that 5 percent
significance threshold. Now, some of this could be because some people whose research only hit 6 percent didn't bother submitting. But it could also be because some researchers were tweaking their data analysis to just barely get results that would be more likely to get published.
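The pattern Abel looked for is sometimes called a caliper test: count how many reported test statistics fall just below versus just above the significance cutoff. Here's a toy version with made-up z-statistics (Brodeur and his colleagues scraped the real ones from published tables):

```python
import numpy as np

# Toy caliper test. These z-statistics are invented for illustration;
# the real ones came from published results in top journals.
z_stats = np.array([1.50, 1.70, 1.88, 1.92, 1.96, 1.97, 1.99,
                    2.01, 2.02, 2.04, 2.05, 2.10, 2.50, 3.10])

threshold = 1.96   # |z| > 1.96 is equivalent to p < 0.05, two-sided
width = 0.10       # a narrow window (the "caliper") on each side

just_below = int(np.sum((z_stats >= threshold - width) & (z_stats < threshold)))
just_above = int(np.sum((z_stats >= threshold) & (z_stats < threshold + width)))

# Honest reporting should leave the two counts roughly equal; a pile-up
# just above the cutoff is the "hump" that suggests p-hacking.
print(just_below, just_above)   # → 2 7
```

In this made-up sample, seven results land in the narrow window just above the cutoff against two just below it, the kind of asymmetry that raised Abel's suspicions.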
But when Abel and his colleagues started submitting their research for publication, they got a resounding series of noes. Academic publishing seemed hesitant to open up an empirical reckoning. After a few years, they did manage to publish their paper, in 2016.
They called it "Star Wars: The Empirics Strike Back." Do you get it? Oh, definitely got it, thank you, Lexi. Abel puts aside this whole idea of an empirical reckoning, and he moves on to other economic projects. He gets tenure. And eventually, he learns that his little paper has become kind of a sleeper hit.
It took a long time before he realized that the paper had caught on, before people started talking to him at conferences. They were like, are you the Star Wars guy? That was a moment. I needed someone senior to tell me, like, no, this is really important, what you're doing.
There had been efforts to solve parts of the replication crisis. Some of the top journals had started asking their contributors to release replication packages with their papers.
That's basically the data and code they'd used to find their results.
And researchers were also starting to pre-register their hypotheses before actually doing the research. So that if the data didn't support it, they couldn't fuss around and pretend like they'd been looking for something else all along.
For his part, Abel wondered if there was anything he could do, like, not just study the problem, but actually help fix it. How do I change the incentives? How do I potentially have an impact on the norms, how people do research?
The second I think about the norms, I think it needs to be large-scale. Nobody's going to change their behavior if it's a small-scale thing. So, it needs to be big. Journals do have peer review systems where they try to poke holes in research, but reviewers didn't totally get under the hood to scrutinize all the code and data. So researchers weren't necessarily worried that their stuff would get checked.
A nice analogy, I think, is imagine you're going on a date. You might shave, take care of your body, take care of yourself, a bit of cologne, you know, a perfume maybe, if it's your thing. You're going to make an effort to look prettier than you usually are. The other person fully understands that this is a nice version of you, but they don't know by how much. And perhaps it's not far off, or maybe you made a massive effort. Usually your apartment is a disaster, you never clean anything. So when they go to the apartment, it's like, oh my goodness, this is your apartment? So research is a bit like this. The published research is the cleaned-up version. So when I see a published paper, I know it's been, you know, it's beautiful, it looks nice, but there's an information asymmetry. I don't know how dirty it is, actually. Abel thought one thing that might help this problem was to make researchers care as much about the cleanliness of their data analysis as the significance of their results. And to do that, you'd have to go full-on Room Raiders on people's published papers, to shine a fluorescent spotlight on the backgrounds of their research.
If you could take all of the data that somebody had gathered for a given paper and meticulously retraced their coding steps, you could see if it was possible to replicate their findings. You could make sure there weren't any errors, conscious or unconscious in what they'd done. But first, you'd have to get the code. People weren't in the habit then of publishing all their data and code.
And when he emailed researchers asking, nobody responded. So he decided to create an official-seeming institution. It needs to be a big institution with a website, with tons of famous people on it.
And when you send the email, people would be like, what the hell is this thing? I need to respond. It's legit. In 2022, he creates a website for a thing he starts calling the Institute for Replication. A friend of mine, his wife did the logo for free, like a design, like, you know, just bare bones. He recruits some seriously famous economists for the board to put on his legit-looking website. And pretty soon, he does start to get responses to some of his emails. He's able to get some data sets and coding packages, and he convinces some colleagues and junior researchers to start doing some replications, one by one. In exchange for a co-author credit on one big paper.
So Abel can get the data and the code. But there's still a second problem, which is the question of scale. Replicating one paper at a time was not going to do much to change the system. What he needed was to create the sense within the academic community that anybody's work could be checked at any time. It's like an IRS for the ivory tower.
So now I thought, okay, we need to mass-reproduce journals. Then I was like, okay, I need to get maybe a few hundred replications or reproductions per year. So now I'm thinking, how do you do that? The answer, Abel says, came to him kind of by accident, around the time he got his Potemkin website up and running. He got an unrelated invitation to Oslo to give a couple of seminars. He was planning the trip about a month ahead of time, and he noticed that he had seminars on a Wednesday and on a Friday.
And I was like, what the hell am I going to do on Thursday?
Like, I've never been to Oslo, and I'm sure it's pretty and nice. But a full day? Like, I'm going to walk around, and then I'm going to have, like, six, eight hours just to relax. So I just emailed the person who invited me, and I said, could we just, like, do a small workshop? It would just be, like, 10, maybe 15 people. Abel posted about it on social media. You can come to Oslo. It should be fun. If you come, you're going to get a co-authorship on a paper. We're going to reproduce papers. Let's have fun. And, like, 70, 80 people ended up registering really fast. I closed the registration because I have no money. We need to have food. I didn't tell the guy it would be 80. I said it would be 10. So Abel is sitting there a couple of months before the conference with this sudden, unexpected surge of interest and no plan. I have 80 people, some coming from one country, others coming from Sweden, and others coming from France. Like, what do I do with these people? He starts collecting papers that people could replicate, and he puts everyone into teams by their field: health economics, development economics. The first time, I had no idea what was going on. I was super stressed. He had no idea what was going to happen, what they would find.
Abel heads to Oslo and convenes the first-ever Replication Game in October of 2022.
When he checks in on one of the first teams of replicators, working on their first paper: I go talk to them, and they're like, Abel, there's a problem, like, there's tons of duplicates. I'm like, what? They're like, hey, in one of the data sets, there's tons of people with the same age. And then I come back later on, and it's like, okay, 75% of one data set. Everybody's 60 years old, all women, all living in the same village, all doing the same thing. It's the same duplicates. And it's a big paper about inequality. If everybody is the same, there's no inequality. And that was driving some of the mechanism. The underlying data upon which this entire paper rested had been merged improperly. Like, a big copy-and-paste error. To Abel, this was disconcerting.
And I was like, oh boy, that's the first paper. That's the first game. What did I create? Is it going to be like this all the time, people finding crazy mistakes? And did I just open a can of worms? Are most papers just, like, terrible, full of crazy errors? Abel was a little afraid he might be about to discover that all papers were full of worms, and that science wasn't real. Luckily, by the end of the day: Like, many teams had, like, a good day. Everything was clean and so on. And it was, like, not terrible. He could relax. It turns out, most of the papers were not terrible.
Even better, with that first event in Oslo, Abel had found a way to crowdsource this massive academic auditing project, essentially for free. If he could host enough Replication Games every year, he just might be able to scare the social sciences into acting right.
But what actually happens on the ground during these things? After the break, we enter the 51st Replication Game. So, we are at a Replication Game, in real life, in Montreal. Abel Brodeur says that the game part is a little bit of a branding exercise. There are no winners or prizes; it's more like an all-day hackathon. The teams are mostly economists, with a few groups of psychologists.
And they've already chosen the papers they'll focus on. Using just what they have in the replication package, they will have seven hours to check the code, examine the decisions the papers' authors made, and see if the results reproduce. And then they'll report on whatever they find, so it'll be out there on the record.
Whether that's a nothing burger or a bombshell.
Here, everyone claps along to a rendition of "We Will Replicate You." The researchers start streaming out of the lecture hall, and we run after them. Jolie, can I talk to you for a sec? Yes. I'm Alexi. Just set the scene for me, like... So we just finished clapping at a cheesy opening song, and we're about to split up into rooms.
The groups are scattering into classrooms across the building to start digging into their papers. Economics PhD student Jolie Hunt and her team are looking at a paper about education. They're all education economists, and so Jolie has sort of a pedagogical view of the day. In PhDs, you often don't get a chance to actually work together. You're usually just kind of on your own in your silo, and then, like, you talk to each other when you're having problems. But it'll be nice to actually work together and see if my friends are actually any good at their job. The teams are rolling up their sleeves, getting down to the actual coding. Because they're only going to have seven hours, each group has a little list of the things they've decided they're going to try to get through today. There's one group, led by a guy named Tibo Dupare, who is sitting alert and ready to unpack a paper about pensions in different countries. Actually, the paper focuses on a set of countries, but then the data sets seem to have a few more countries in there. So why were some countries included and others not? What if you drop a few countries out of the data sets? Maybe there's something to be explored there. And we wanted to understand the stakes for the day, you know, why people would attend this event to do a full day of, like, manual economic labor for no dollars. So we asked them.
What are you doing here today? Well, we're trying to see if we can replicate the results from a paper that took a look into the effects of negotiation. We'd started with a group in the lecture hall, huddled around their laptops. One researcher from the University of Saskatchewan is in a group of economists focused on agriculture, along with Chishiya Wu from the University of Ottawa. You want to find that the paper checks out? Mmm, yes, you can think like that. In terms of your personal incentives, would it be cooler to find, like, oh no, this paper is messed up? She starts laughing, seemingly at the premise of the question. You're laughing so hard, why? What do you think? Yeah, it's mean. I feel like I don't want to answer it.
Okay, it's mean to Diego and Juan here. Those are the authors of the paper. Yeah, there are real people behind it. Yeah, because we've all been in their shoes. Okay, fair. But we go up to another group, and they're kind of like, duh. Yeah, we are trying to find something.
That's Felix Fosu, a postdoc at Queen's University. His group is digging into a paper about cartels in Mexico. I tell him what the other researchers said, that maybe it isn't very nice to want to find something terribly wrong in someone else's research. But it seems like to Felix, I have now misunderstood things in the opposite direction.
No, we definitely want to find something. Yeah. Why?
I think that replication is something that we have to take very seriously in economics. We need to make sure that our results indeed show what they claim to show. We need to know whether what we're saying is worth something or not. Now, regardless of their specific goals, the actual work of replication is divided into two main phases. Phase one is the same for every team: pure and simple reproduction. They will all check the paper's code, the programmed instructions that take some raw data and put it into a bunch of tables that comprise the foundations for the paper's conclusions.
So now, each team takes the original code, copies and pastes it, and basically hits enter to see if it runs.
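That first "does it even run?" step can be sketched as a tiny script runner. This is a hypothetical layout; real replication packages are often Stata or R projects, but here we pretend the package has one main Python script that rebuilds the tables:

```python
import subprocess
import sys
import tempfile

# A sketch of phase one, step one: run the authors' main script and
# report whether it executes at all (script path is hypothetical).
def code_runs(script_path):
    result = subprocess.run([sys.executable, script_path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # The computer just says "error": phase one fails immediately.
        print("does not run:", result.stderr.strip().splitlines()[-1])
        return False
    return True

# Try it on a deliberately broken "replication package":
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("tables = raw_data.merge(other)\n")  # raw_data is never defined
print(code_runs(f.name))   # → False
```

In a real game the script would be the package's master do-file or build script, but the pass/fail logic is the same.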
And one type of mistake they might find is that the code is really broken. They might find that when they push the button, the code just doesn't run. The computer just says: error. Or another kind of mistake they might find: maybe the code runs great, but it spits out a different answer than what the authors wrote. Not so great.
Or maybe the raw data is messed up in some way, like cells merged, or transposed, or erased, or accidentally filled down the whole column. So we ask the agriculture team to show us exactly what they're doing. So I can't code, I don't know what I'm looking at, what am I looking at?
Uh, well, actually, it's kind of nothing. It's just, I just started it.
This is Chishiya again. The paper her team picked, by Diego and Juan Pablo, is about the price of eggs at big firms versus small firms, how much pricing control they have. I look at her laptop over her shoulder. So what you can see here are the variables. We have the firms, we have the price, we have the day, month, and year.
Now, Chishiya pulls out her iPad to scroll through the published paper. So we're going to first check whether we can perfectly reproduce their numbers using the original data and codes. If I run parts of this, maybe you can see it. Okay, she's pushing a little blue arrow, a little play button.
So basically, if I run this code, you would see the results.
Oh, a little box appeared in a different window. Yes, so if you check the numbers: minus 18.11432. And I'm looking at the published version; it says minus 18.114, star star star. So they're basically exactly the same.
It's the same. Yeah, it's the same, that's good, you know. So we have a win. Yeah, we've got one, and we have more to check, a lot more. But we got one, that's great.
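That comparison, rerun number versus published table, boils down to a one-line check. The numbers are from Chishiya's screen; the tolerance choice is ours, set to the table's three printed decimals:

```python
# Compare a rerun coefficient against the number printed in the published
# table, at the table's precision. The tolerance rule (half a unit in the
# last printed decimal) is our assumption, not a formal standard.
def matches_published(computed, published, decimals=3):
    return abs(computed - published) < 0.5 * 10 ** -decimals

# The rerun gives -18.11432; the published table prints -18.114.
print(matches_published(-18.11432, -18.114))   # → True: a reproduction "win"
```

The replicators repeat this check for every number in every table, which is why even a clean paper eats up a chunk of the seven hours.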
Chishiya will keep plugging in all the data and checking the results. So far, it looks like the paper is checking out. And if the paper passes the whole first phase, if the code does spit out all the answers that the authors said it would, then the replicators move on to phase two: robustness checks.
For a robustness check, we kind of, like, change some parts of the model to see whether the original conclusion still kind of makes sense. This phase is less objective and requires more context and thought. It requires the economists to consider the questions that the paper's authors didn't think of or didn't write about.
The decisions the authors made and the decisions they could have made, but didn't. It's like trying to see the negative space in and around the paper. The kind of things they might find in this phase, you know,
Did the authors say that this data set represents something it doesn't? Did they use an appropriate data set? And did they use that data in a way that made sense? Did they include or exclude certain specifications or factors in order to have a result that looked exciting?
There are infinite potential choices that researchers make or don't make, and the replicators have such limited time. So they're not going to be able to consider and analyze everything. They're just going to get through as much as they can. And as the hours start to tick by, it becomes clear that most teams are not turning up major issues. Until, midafternoon, we check in with this one group looking at a paper about government policies. The basic premise: when people trust the government, do they tend to comply with policy more? This is Simon Prevo. He's an econ master's student and a public sector researcher.
The paper found that when people trust in government, they comply with policies more readily. So those policies cost the government less money.
Simon and his teammates are now trying to unravel a mystery.
Because when they went to look at the raw data that underlies the paper's findings,
it looked a little funny.
This is Scott Morier, another econ master's student on the team. There was a folder called "raw" for the raw data, but the files were all labeled "clean." So we were confused; it was counterintuitive. So Florian downloaded the data straight from the source and followed the instructions to create the one data set. They recreated what should be the same data set, following the instructions that the authors left. They ran the code. And then that's when we started getting the errors, because variables were missing. And then, as we kept going through, we kept finding more variables that were being used in the regression but weren't included in what is supposedly meant to be the raw data set. Some variables are missing from the raw data set. The authors seem to have used data in their analysis that they did not account for. Not good.
And then we visited the group looking at that paper about cartel behavior in Mexico. That group has found something, too. So in this paper, they look at the presence of different cartels. They tell us the paper looks at 20 cartels, and data about what types of crimes were happening where, to see if cartels changed the types of crime they did after the government ramped up a big war on drugs.
What we've found so far is that if you exclude one of the cartels, then the results become insignificant. So it's just the one cartel making the results? The one cartel making the results. So if you remove only one, then the result collapses, right? Oh, so you found something. Yeah, they found something, in the first test they tried. Is that luck? Would you call that luck? No, I think it's something that we thought about. That's why we placed it number one on the list. We thought this was a good place to search. So partly luck, but partly because we thought about it carefully. That sounds like not luck. They're going to keep investigating, and depending on what they find, this paper is maybe not passing this phase, the robustness check phase. Can you draw a big, sweeping conclusion about the effectiveness of a war on drugs from a change in just one cartel? They suspect this paper will not hold up. Over lunch, the cartel team starts puzzling through, like, how does this sort of thing even happen?
To be honest, for sure, when you do this kind of paper, you do this kind of thing, right? You check, you know, you do this type of robustness check. David Benatia, a professor on the team, says this is a robustness check he would have tried if he had been the author. At the end of the day, our researchers limp back into the auditorium to present what they'd all found. So the way we'd like to finish is to give each team about one minute to tell us how your day went, the different challenges you faced. Maybe we can start from the beginning and move around. We didn't find anything too major. There were a lot of missing variables and attrition. The replication package was lovely, like, all the code ran. Everything ran fine. We tried to poke holes in it, but we couldn't really do it. For the 71 replicators in the Montreal game, 14 teams got to uphold science by double-checking some published work.
They spent a day coding with their friends and peers, learned some new coding hacks and new ways to make choices in research. And they'll get a little scholarly credit on a meta-paper in a real journal. The other two teams, the group who discovered the missing numbers and the cartels group, they've gotten, like, a toxic golden ticket. Now they'll get to write their reports, polite and formal, but nonetheless kind of a bombshell, saying just how flawed the research is. Maybe that makes a splash and everyone thinks they're brilliant, or maybe it makes a splash and everyone hates them. Next, Abel will write an email to the authors.
A somewhat standardized note saying, hey, here's who we are and what we do. We found some mistakes in your paper. Would you like to respond? He does not assume nefarious intentions, and the authors get an opportunity to try to fix the problem.
And prepare their formal response before anything goes public. And because Abel handles it from his position at the Institute for Replication, it doesn't feel so personal, and the replicators have a little bit of insulation. We asked Felix from the cartels group what this might mean for him,
as a more junior person, a person earlier in his career. It's kind of throwing rocks towards the top of the profession. He'd wanted to find something, and now he has.
I think it's good work that we are doing. But what the implications are, I don't know.
So after a few months, Abel sends his neutral-toned official email to the authors of the paper that Felix and his team had replicated in Montreal, saying that the code had worked, but that they found the results don't hold up. And for the authors of that paper, getting that email? When we opened that email, we were actually happy. Because it actually said, your paper replicates.
This is Giacomo Battiston, a researcher at the Rockwool Foundation Berlin, and one of the four co-authors of the paper.
He says they were thrilled to have their coding results publicly validated. And when it came to the bigger problem, the fact that their results had fallen apart when the replicators removed that one cartel? We were not particularly worried about the content, because it was kind of self-evident that this was not really challenging. Not really challenging their findings, that is, because they think the replicators misunderstood the basic hypothesis of their study. They say they started with this idea that there was this one big new cartel in Mexico, Los Zetas, and it had been doing a lot of crimes, generating a lot of data points. Here's another author, Marco Le Moglie, a researcher at Bocconi University in Milan. When we started to think about this project, we actually had in mind that specific cartel. They say they set out to investigate whether the cartel, Los Zetas, had changed the types of crimes it did after the war on drugs. And their paper succeeded at proving that. What the Montreal replicators did, in the opinion of the paper's authors, was to remove the main part of the data set and then say the conclusion was broken. You can do that, but why would you? To be blunt, it doesn't make any sense. That is Paolo Pinotti, a professor, also at Bocconi University. He said it was like doing a study on the effect of spreadsheets on productivity, and then saying, oh, but the results don't hold up if you exclude Microsoft Excel. We looked at their paper, and to be fair to the replicators, the original paper does not say explicitly, hey, it's just Los Zetas we're focusing on. The data from Los Zetas is lumped in with several other new cartels. So, if the paper's authors meant to study the behavior of just Los Zetas, that was never quite spelled out.
Mary, when we first rocked up to the replication games back in May,
I think we were both excited at the idea that we might watch some junior economists uncover some
major problem with a published paper in real time. But Abel had a different take when we asked him about the problems that the teams there had uncovered. Like the team, for example, that had found issues in the government trust paper. That seems like success. But success depends on how you define success.
Well, is the process working as it's supposed to? I mean, in a world in which science works, I think this should have been picked up before it's published, cited, and disseminated, so I don't think it's a success. That's fair. These papers they are replicating have been published, meaning they got past journal referees.
Professional economists who were supposed to be gatekeeping the quality of what they publish. Some of the top journals do check that the code runs; they press play. But in the government trust case, the journal referees apparently didn't catch that numbers were missing. And when the paper said, "Oh, the documentation is in the replication package,"
it was pointing to nothing. The journal declined to comment, though they said they have a robust process to investigate concerns.
To me, this is a failure of this system, which is fine. There's always going to be failures.
I just think that the rate of failure is higher than a lot of people think. It shouldn't happen that often. In every replication game so far, they have found something. Though not yet any career-ending fraud. It's more like major data or coding errors or robustness fails.
So, the broader system is still broken. Even after putting on more than 50 games and replicating about 300 papers. Still, there are signs that the games are having an effect. Several replication gamers told us their experience here will change how they do their research, because they know that their papers, too, might someday end up under Abel's spotlight.
Abel says the more games he can put on, the more the rest of the academic world will start to shift. Because the evidence shows that people don't actually change their behavior based on the severity of the potential punishment, like losing their job or public shaming or whatever. They change behavior based on the odds of enforcement. The odds of actually getting caught.
Just the idea that someone might walk through their apartment one day? That's enough of a threat to keep it clean. Hey listeners, what are you doing on the evening of Monday, April 6th?
Are you free? Because if you are, I think you should come to the 92nd Street Y to hang out with
me and some of my friends. It is the debut stop on our 12-city book tour to celebrate
the publication of our first ever book, Planet Money: A Guide to the Economic Forces That Shape Your Life. Every stop on this tour will be unique, with different hosts and guests. And if you get a ticket, you can get a tour-exclusive tote bag with your purchase, while supplies last. So at the 92nd Street Y on Monday, April 6th, it'll be me, Amanda Aronczyk,
Darian Woods, book author Alex Measley, and the economist Emily Oster, who, I
think, is famous for letting pregnant women know that they can actually drink coffee.
So please come and bring your very best economic questions for us. We can't wait to hang out.
Find the show nearest you at the link in the show notes, or go to PlanetMoneyBook.com.
And thank you. If you want to hear more about the replication crisis,
we've done a few episodes about it and the efforts to fix it,
we'll link to those in the show notes. If you want to support our work,
you can donate at npr.org/donate. And thank you.
This episode was produced by Emma Peasley and James Sneed, with help from Willa Rubin. It was edited by Jess Jiang, fact-checked by Sam Yellowhorse Kesler,
and engineered by Ko Takasugi-Czernowin. Alex Goldmark is our executive producer.
I'm Alexi Horowitz-Ghazi. And I'm Mary Childs. This is NPR. Thanks for listening.