Framework is a website builder that turns .coms from a formality into a tool for growth. Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framework has programs for startups, scaleups, and large enterprises to make going from idea to live site as easy and fast as possible. Learn how you can get more out of your .com from a Framework Specialist, or get started building for free today at framework.com/hardfork for 30% off a Framework Pro annual plan. Rules and restrictions may apply.
Well, I'm having sort of a weird day. How so? Well, I woke up this morning and I, you know, checked my social media feeds, and I saw messages like the following: You're garbage and I hope you lose your job and
become homeless. God, what a waste of sperm you are. And if you have never seen a message like that
before 8 a.m., you might not work for the New York Times. Well, I suspect that I know what this was about, but tell the listeners what made people so mad. So my colleague Stuart Thompson and I recently published this quiz, which is basically a set of AI-written passages next to unlabeled works from masterful human writers. Yeah. And it was sort of designed as kind of a blind taste test for you. You pick which one you liked better. And then it would tell you, you know,
which one is generated by AI and which one was written by a human. And Casey, people did not like this quiz. Well, what were the findings of the quiz? Well, so the big headline finding is that like
it's basically a coin flip. Slightly more people, at least so far, have preferred the AI
written passages. But when you tell them that they preferred the AI-written passages, they get very mad. Because they think that they are too smart to fall for AI writing. Yeah, or they just don't like the way that the test was constructed, or it makes them uncomfortable, or they think, you know, writers are cooked now that AI can write passable versions of this thing, or they just start saying, you know, oh, it's just because it was trained on all these books. So obviously,
it can sort of mimic them. So I think there's a lot of different emotional reactions, but mostly
the emotional reaction has been to get mad at the people who made the quiz. I have to say, you seem excited about this. Like, whenever a large group of people gets mad at you, you experience a glee that I rarely see in people. It's not a glee. It's just, like, yeah. I'm Kevin Roose, tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: how AI is reshaping the war in Iran. Then, researcher Julie
Bedard joins us to discuss the discovery of a strange new condition they're calling AI brain fry. And finally, I was turned into an AI editor against my will by Grammarly. Here's how I stopped it, with overwhelming physical force. All right, Kevin. Let's get into the biggest news of the week, which is the war in Iran. Specifically, we want to talk about what we know about how AI is being used in this fight.
Yeah, I think the reason to talk about this is not just because it's happening and it's the biggest
story in the world, but also because I think this is really a turning point in the use of AI in the military. We've been hearing for years and reading science fiction books and listening to people talk about the use of AI in military applications. But now I think we are starting to see exactly how these tools are being used on the battlefield and what kind of effects they might be having. We are. And I'll say up top that anytime you're talking about the use of technology in war,
there is always the risk that you are just passing along propaganda, right? Because both the
military and the contractors have a vested interest in telling you, hey, we have some real gee-whiz new stuff and it's totally changing the game, right? Everybody has an incentive to tell you that. And yet, as you and I have dug into it, we do believe that there are some notable ways that AI
is being used. And I think it is worth mentioning them, if for no other reason than that I think it's
been the experience in the United States over the past couple of decades that tools that are deployed abroad during times of war sometimes come back home after the war and wind up being used against American citizens. Yeah, so I think we should tease apart a few things here, one of which is like, let's talk about how the actual AI tools are being used by the military, what the tools are, what the kind of ramifications of using them this way are. We should talk about
how Claude in particular seems to be a key part of the war in Iran so far, and, at least from what we know, seems to be behind a lot of the strategic decisions and operations that the military is making. And finally, about how this conflict is or isn't going to reshape the future of AI,
with things like attacks on data centers, interruptions to the supply chain of
semiconductor materials, and all the larger questions about how this conflict is playing out.
And before we get into it, let's briefly do our disclosures. My fiancé works at Anthropic. And I work at The New York Times, which is suing OpenAI, Perplexity, and Microsoft over alleged copyright violations. Okay, Kevin, so where should we begin? Well, let's talk about how AI is actually being used in the war in Iran and what we know about the actual deployment of this stuff. Casey, what do we know? Yeah, so I read a great overview this week in the Wall Street Journal
by Daniel Michaels and Doug Lieber, who go into good detail about what we know about how the United States and the Israeli militaries are using AI. They're upfront about the fact that the military
is trying to keep a lot of this secret. They are not apparently going into a lot of detail,
but there are some things that we know. One is that Israeli intelligence, for example, had been
monitoring traffic cameras in Tehran that they had hacked into and also eavesdropped on senior officials' communications. And this is a big theme, Kevin, that runs through all of the coverage of AI in the war in Iran, which is that the military is saying that it is very effective, as you would probably imagine, at processing large quantities of information. Yeah, so you've got all this data coming at you. If you're running a military in the year 2026, you've got data from drones and
sensors and maybe security cameras that you've found a way into and you can kind of use AI to process all of that to put it onto some kind of like a real-time dashboard so that you can just like open a screen and kind of see where all your supplies and all your troops and where all the enemy combatants are and like use it to sort of make sense of this wave of information that is coming at you every day. Yeah, you know, recently on the show, as we've been talking about
the conflict between Anthropic and the Pentagon, we've been talking about the potential, eventually, to have autonomous weapons out on the battlefield, potentially killing people without human intervention. And the big message that I'm reading in the coverage so far is: we are not there yet, right? The AI tools that are being used, we're seeing them in fields like intelligence, mission planning, logistics, actually pretty far away from the battlefield, doing things like
helping to find a target to send a missile at and then after an attack trying to do some kind of
quick analysis to see, hey, what exactly did we hit and maybe what should our next target be?
It's also really clear that what's happening in the military is what I would call, like, shrinking the haystacks, where there's sort of these massive troves of data, where it's like, we have, you know, hundreds of thousands of phone calls or audio recordings or emails or intercepted traffic to Iranian websites, and we can, like, use AI to kind of narrow down the bits of that that might be useful to us, because in all intelligence-gathering situations, since the dawn of time,
like, 99-plus percent of what you're collecting is totally useless, and there have been, you know, entire divisions of humans who have been employed to, like, dig through all that stuff and find the stuff that's actually useful, and now AI can do that pretty well. Yeah, and military leaders are
saying that there are many, many missions that just never happened because they didn't have the
manpower to do exactly what you just said, and now they do. And I would point out, Kevin, that again, you know, in our whole discussion of Anthropic versus the Pentagon, we were talking about, you know, the risk of this technology being deployed against Americans and how effective that
could be in, you know, all sorts of surveillance operations. So I think it's important to highlight:
like that exact thing that we were talking about, like sort of like a bad scenario in the United States, if the government was doing it to its own people, it's just sort of absolutely happening right now in Iran. Yeah, and we probably won't know the extent to which it's happening because most of it is classified and, you know, nobody in the military wants to like give away their secrets to any potential adversaries. But my best guess and from the people that I've talked to who have
been working on this stuff is that this is happening pretty rapidly. That we are seeing many, many divisions of the military that are essentially using this stuff every day. Yes, now one question that is coming up a lot is to what extent, if any, is the military starting to offload decisions to AI, right? Is it the case that there is some military commander that is typing into a chatbot, hey, should I send the missile here or there? Yeah, the military's public statements are that
they are not doing this, right? They are sort of taking care to say, no, like, humans are in the loop here. We are relying on human judgment. But there are other experts that are saying, you know, at some point, if you're going to be consulting with a chatbot, and the chatbot is getting smarter and smarter as it supports you, it's probably not going to feel very different from the AI actually just making the decision for where to shoot a missile. Yeah, I think that's a really good point.
I think there is a difference between a fully autonomous weapon that can do everything
from selecting the target to, like, firing the weapon all on its own with no humans in the loop.
But I think what you're talking about is sort of a system that can do everything except fire the
weapon. It can sort of select the target, it can tell you the right timing, it can, like, identify all the objects in the surveillance footage, and it can kind of give the military officials the confidence they need to go ahead and push the button. And there's some worry that this is starting to happen with the help or the encouragement of AI. There was a missile strike in Iran that hit an elementary school the other day and, according to Iranian officials,
killed over 175 people, mostly children, a horrible thing. And people have been wondering if that was related to Claude or some other AI system telling the military, maybe erroneously, that this
was a legitimate target. Now, we should say that particular incident is still under investigation,
and initial reports from the military have said that it was unlikely that AI was responsible in that case. But I think this is the kind of thing you're going to start seeing more and more of is like when there is an attack that you know kills civilians or doesn't hit its intended target people are going to be asking, oh was that a human who made that mistake or was that an AI system? Yeah and I have to imagine Kevin that there is just going to be more and more pressure
within the military to more fully defer these decisions to AI systems, right? Because at some point there will at least be some contingent in the military saying these systems are more trustworthy
they can make decisions faster, and let's do it. So I think that's something that we need to be
very much on guard for. Yeah. So that is what we know about how AI systems have been deployed so far. But Kevin, as you mentioned, there's also been a lot of discussion about what some particular models may or may not be doing during the war. Yeah, and I think Claude and Anthropic have come up a lot in recent weeks for obvious reasons. They had this big fight with the Pentagon, but it's also the case that right now, in this war in Iran, Claude is the only AI model that has actually
been deployed inside classified military systems. So to the extent that AI is having an effect in Iran, it is probably Claude. Yes, and the Washington Post had a story about AI and the war in which
they said that Claude was so essential to operations that if for some reason Anthropic said,
hey, we want you to stop using Claude, the military would push back and say, we're actually going to force you to continue to use this product. So, just again, the continued strangeness of the
situation: the Pentagon has now formally declared Claude and Anthropic to be a supply chain risk
this week and Anthropic sued over that. Yeah and there's also been a lot of reporting coming out over the past week or two about the actual ways that Claude is being used and deployed in the military. There's been some reporting on this system built by Palantir called Maven Smart System which from what I can tell is kind of a real-time dashboard for intelligence that basically allows you to pull in a bunch of drone footage and sensor data and track a bunch of supplies and troop
movements and things like that. And by the way, this is the system that caused a huge controversy at Google in the late 2010s, and, you know, Googlers, like, quit over it because they did not want the company involved with Project Maven, and eventually Google dropped the contract. When they did, Palantir stepped in and eventually brought on Claude. Right, and so Claude has been integrated into Maven Smart System since 2024, and the reporting that I've seen over the past week, including
in this article in the Washington Post, said that this combination of the Maven Smart System built by Palantir and Claude has already suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance. And according to the same article, the use of Maven and Claude has turned weeks-long battle planning into real-time operations. So this is not just, like, a kind of tool that people in the military are using for handling
like routine office work. This is actually sort of a core part of their strategic decision-making process. Now, Kevin, do you know if this is, like, a specialized model of Claude? Again, I'm thinking back to our conversation with Amanda Askell, where she talked about all these efforts to make sure that Claude is really good. I'm sort of imagining that version of Claude being told, like, hey, analyze all this footage and decide, like, where to send a missile to kill a bunch of people. It's
hard for me to imagine that version of Claude being like, yeah, yes sir, right away. Right, so do we understand at all how that is working? So my understanding is that it is largely the same model that consumers and enterprises would use, but that there may be some additional fine-tuning to make it work inside these classified systems on these sort of military applications, that it may sort of refuse different prompts or fewer prompts than a model aimed at consumers,
that there may be some additional kind of changes around the edges, but that it is essentially
the same Claude that you and I have. I see. Well, so this appears to be a very temporary phenomenon. We know that OpenAI has signed a deal with the Pentagon, and presumably its systems will be onboarded onto classified defense systems soon. Gemini was approved for non-classified uses at
the Pentagon, so I think pretty soon the Pentagon is going to have more options to choose from as it
deploys these systems. Yeah, so that is how AI is being used offensively by the United States and Israel, Kevin, but we should also talk about what Iran is doing offensively against some of these AI systems. Yeah, this is a part that I have not spent as much time looking into, so tell me what you're seeing. Well, so as you know, there's been this huge build-out of AI infrastructure throughout the
Middle East over the past several years. We've seen these multi-billion-dollar projects being
signed and built in Saudi Arabia and the United Arab Emirates and Qatar, and these deals involve basically all of the big American tech giants: Amazon, Microsoft, and Google. And I would say there are sort of, like, two major pieces of infrastructure that are relevant here. One is data centers, right, which are, you know, being used to run AI systems and also just provide basic cloud hosting and storage services to all sorts of companies. And then you have fiber-optic cables, which connect
those data centers to the rest of the world. So let's maybe talk about the data centers first. Sure. So the Guardian reported that on the morning of March 1, which was the day after the initial US attacks in Iran, Iran responded by striking a couple of Amazon data centers in the UAE,
and they also damaged a third one in Bahrain and in the immediate aftermath of that people
in those countries were opening up their phones, and they couldn't check their bank balances, they couldn't order a taxi. It seems like a lot of services in those countries were being hosted on AWS, and they just didn't have access to those services anymore. Afterwards, Iran put out a statement that said that they had gone after the data centers because of the role that they played in supporting the enemy's military and intelligence activities. That's so interesting. So they were
basically targeting data centers rather than say troops because they thought it could actually be more disruptive if it turned out that the US or Israel or any of the other allied nations were running their services on data centers located in the Middle East yeah well I mean and also like data centers are a great target like they're just sitting there they don't have any defenses right so you can just send a few missiles over there and do an asymmetric amount of damage
and so now Kevin people are starting to question the logic of doing all these multi-billion dollar
deals in the Middle East. They're saying, hey, should this really be a linchpin of global AI infrastructure if it's just kind of a rough neighborhood and all of the investments that you're going to build there are just going to be kind of perpetually at risk? Yeah, I think that's a really interesting sort of tactical shift that just speaks to how central all of this AI stuff has become in military conflict. And then you have all these other risks of disruptions to the supply chain, and
right now there are lots of ships stuck that can't get through the Strait of Hormuz because it's been blocked off, and we now have people and companies saying that some of the raw materials
that you need to make things like semiconductors might be delayed for weeks or months or however
long this conflict lasts, and that prices might go up, and it might get harder for companies to build new data centers here in the U.S. So all of these ripple effects we're starting to see are, like, downstream from the fact that we're at war with Iran. So that's what's going on with the data center infrastructure. Kevin, you're also probably wondering what is going on with these undersea cables. Right, so there are very important fiber-optic cables that run through
the Strait of Hormuz that are responsible for transporting internet traffic from that region to the rest of the world. As of press time, as we record this, these lines have not been attacked or disrupted, but everyone is keeping a really close eye on it, because were they to be disrupted, there is just simply no obvious way to fix them in the middle of a live war. Casey, how does this all make you feel, that AI is playing such an important and central role
in an ongoing war in Iran? I mean, this to me just feels like the frog is being boiled, right? Like,
when I think of all of the potential violent uses of AI, data analysis is not among those
that gets me most nervous, although of course I do have concerns about, you know, domestic surveillance. But I also know how rapidly these systems are advancing. I know the pressures that are quite
apparent in our military to use AI for ever more things, and I don't know that we have
appropriate safeguards on those things, and so, yeah, I just have a high degree of concern about
where all of this is going. I'm open to the idea that AI systems could be used to wage war
more safely and to maybe even prevent casualties, but I am not sure that we have built systems that will actually do that. Yeah, and I would just say, like, I keep thinking about how all of the companies that are building frontier AI systems today at one point in their existence had decided that they didn't want their stuff being used by the military. You know, back in 2014, when DeepMind was sort of a little-known AI startup in London, they sold themselves to Google, and one of the
major sticking points in those negotiations, one of the reasons they sold to Google and not to what became Meta and was at the time Facebook, was that Google had allowed them to have this prohibition on using their technology for military applications or surveillance. As recently as a
couple of years ago, Google's AI principles said that we are not going to allow our technology to be
used for the military, and in 2025 it quietly took that language out. OpenAI, same thing: they had language in their terms prohibiting their models from being used for military applications; they took that language out quietly in 2024. Meta, same thing. Anthropic, interestingly, is the one sort of frontier
AI lab that never had an explicit prohibition on military applications, but they did have a bunch
of language in their original terms that they have amended to make it more possible for the military to use this stuff. And so like I understand strategically why you would make the decision to sell your AI tools to the US military but I just don't want us to forget that like all of these companies were run by people who at one point thought this was all a bad idea to be selling these very
advanced AI tools to the military, and then they changed their minds, and they did that because of
some combination of pressure or just maybe market opportunity to get these big military contracts. But they did at one point have a principle that said, we don't want our stuff being used to kill people, and I would like them to at least reflect on the fact that that has changed. Yes, and for everyone else: the next time one of these companies tells you about some unshakable principle that is the foundation that the entire company is built on, it should make you wonder whether that can hold
up to pressure as well. Yeah. When we come back: are you experiencing AI brain fry? If so, you may be entitled to compensation. We'll talk to researcher Julie Bedard about this strange new AI psychological phenomenon. Framework is a website builder that turns .coms from a formality into a tool for growth. Whether you want to launch a new site, test a few landing pages, or migrate your full .com, Framework has programs for startups, scaleups, and large enterprises to make going from idea to
live site as easy and fast as possible. Learn how you can get more out of your .com from a Framework Specialist, or get started building for free today at framework.com/hardfork for 30% off a Framework Pro annual plan. Rules and restrictions may apply. This is A.G. Sulzberger. I'm the publisher of the New York Times. I oversee our news operations and our business, but I'm also a former reporter who has watched with a lot of alarm as our profession has shrunk and shrunk in recent years.
Normally in these ads we talk about the importance of subscribing to the Times. I'm here today with a different message: I'm encouraging you to support any news organization that's dedicated to original reporting. If that's your local newspaper, terrific; local newspapers in particular need your support. If that's another national newspaper, that's great too. And if it's the New York Times, we'll use
that money to send reporters out to find the facts and context that you'll never get from AI.
That's it. I'm not asking you to click on any link, just to subscribe to a real news organization with real journalists doing first-hand, fact-based reporting. And if you already do, thank you. So, Kevin, I feel like there is this new genre of blogs and social media posts all devoted to the idea that using AI is making people feel completely exhausted. Yes, and insane. There's a spectrum.
It starts at exhausted and it goes all the way to insane. A developer who
builds tools for AI agents wrote a blog post that I saw all over social media recently called
AI Fatigue Is Real and Nobody Talks About It, and he said that, on one hand, he felt like he'd had the most productive quarter of his entire life as he uses all these new agentic coding tools, but, on the other hand, he said he had felt more drained than ever before in his career.
Yeah, I think people are starting to sort of use these tools more and come to grips with not only
the effect it's having on their productivity, but also, like, on their brains and on their ability to kind of make sense of how quickly things are shifting. I really liked this essay that a venture capitalist wrote a few weeks ago about what he called token anxiety, which was this feeling
that, like, if you don't have a bunch of, you know, Claude Code agents, like, running parallel tasks for you
while you sleep, like, you're feeling like you're missing out. And people at dinner parties in San Francisco are now bragging about how many agents they have running at all times. So there's, like, something psychological happening to the people who are using this stuff a lot at work. Absolutely. And recently we have begun to see some actual empirical research on the subject. So last month, researchers at UC Berkeley published some findings in the Harvard
Business Review from an eight-month study observing workers at one large tech company, and they found
that AI was just making work a lot more intense. Workers were having to multitask a lot more;
they felt like if they were not using a lot of AI tools, they were not keeping up with expectations;
and that they used to have little breaks during the day where, you know, you go to the water cooler and talk about, you know, what's going to happen on Survivor this week. Well, that doesn't exist anymore, at least not at this company. And then last week, a group of researchers at BCG shared some similar findings in the Harvard Business Review, and this one really caught our eye, because they found that under certain conditions workers are experiencing what the researchers are calling AI brain fry.
And to be clear, that is different than AI brain rot, which is what you get on TikTok when you start looking at videos of Ballerina Cappuccina. That's right. You know, and actually they thought that Emmanuel Macron might have this, but that turned out to be AI French fry. So anyways, here's what AI brain fry is, Kevin. They're defining it as mental fatigue from excessive use or oversight
of AI tools beyond one's cognitive capacity, which I think is kind of a funny idea. It's almost like
you got a new coworker and they're really, really smart, and it's sucking your life force out of your body. Yeah, so we wanted to know more about this study, because I think it gives shape to a conversation that we're seeing rippling out across the economy as more and more managers are telling their workers to start using AI tools. It is clear that not all is well out there. People are starting to feel kind of bad, and they may be going to be less productive and more likely to
leave their jobs as a result. So to learn more about the findings in this study, we've invited the lead author, Julie Bedard. Julie is a managing director and partner at Boston Consulting Group, as well as a fellow at the Henderson Institute, which is an internal research group and think tank at BCG. So let's bring her in. Let's do it. Let's get fried. Julie Bedard, welcome to Hard Fork. Thank you. Thanks for having me. So let's talk about the study. You surveyed
1,488 workers in January of this year, from all different disciplines, lots of different companies. What kind of questions did you ask these workers? You know, we asked them all kinds of questions around how they use AI, how they feel at work, you know, traditional burnout metrics. We asked some, you know, sort of proxies for cognitive ability, and we did throw in a question about AI brain fry. We said specifically, like, what do you think about this thing that could be
AI brain fry? Like, are you feeling that? And tell us how you define AI brain fry and what the survey results told you about it. I mean, we defined it as really like a type of cognitive strain. So we said it was mental fatigue, it was related to excessive use of, interaction with, or oversight of AI, and it was about being beyond one's cognitive ability. So it's sort of like, I'm using the tool, but it feels beyond my ability to process it. So 14% of people who use AI said that they felt this,
and I was especially surprised by the extent to which they told us about it. We asked, you know, free-response questions, like, just tell us, what is this thing? How does it show up? How does it feel to you? And people wrote a lot, right? Like, they wrote all these things about, it feels like I have 12 browser tabs open in my head, or it feels like I'm working so hard to manage the tools, I'm
not really doing the work, like I'm not actually managing what I'm supposed to be doing. And I
thought this was so interesting, because on paper, if you told me, hey, we're going to give you a brilliant new assistant, they can answer all of your questions, they can do many of the tasks
that you prompt it to do, that would sound very exciting. You know, so sometimes I think, what would
it be like to have, like, a really great podcast co-host, you know, someone who kind of came in really
prepared, asks a lot of great questions, has great energy. You'll never know, and I'll never know. Okay.
But some of these people at work are now having that experience, and what you're saying is that that is not an energizing thing for them; it's draining them in some way. So what do you think is the mechanism by which people are coming to feel so exhausted by working with these systems? Yeah, well, I do think it's particularly tied to these two things that we found, which is the oversight of the tools and the intensification of work due to AI. And what people reported specifically is they put in more
mental effort they felt more fatigue and they felt information overload and you know we need more research right like this is new and we're learning but my hypothesis right from working with a lot of different companies on this kind of thing is it is fun and exciting combined with
we feel more pressure, everybody's talking about AI and AI productivity, right? And I think it's just
human nature: okay, one more thing, let me just sort of try this out, see what I can do. And we're not recentering on, like, what was I actually trying to achieve today, right? We're not getting focused on
some of the most important aspects of our work yeah I'm curious how much you think this really
boils down to fear because when I talk to people who are anxious about using AI at work the circle around this issue that like maybe it's materializing as burnout or feelings of overwhelmed but like at at its core what they're nervous about is that we now have these systems that can do parts of their job and they're worried about losing their jobs did anything in your studies sort of get to any of the the economic or sort of survival anxiety that these workers
might have been feeling that might have been registering to them as burnout but deeper were something else yeah so this is probably a good time to just separate the two because the brain
fry is the cognitive piece burnout is you know physical and mental exhaustion it's more emotional
it's more about how I feel about work and you know do I feel like I'm doing a good job at work with burnout we did not find a correlation with brain fry so I just want to be really like clear it was very interesting I thought we would we did not brain fry is distinct and then what we found is actually you could use AI to reduce burnout so there's a lot of nuance maybe the last thing I would say is we did look at you know how positive or negative you feel but typically the people who are
afraid are not the people who are doing heavy oversight work in my experience right so there's sort of the people who are you know leveraging it more like a search tool right they're not necessarily getting up that learning curve to more of the intensive interactions in your study you found that people in certain industries tended to experience AI brain fry more frequently I was struck that marketing seems to be the place where people are feeling it the most and people in
areas like management and law and compliance reported significantly less brain fry do you have a theory on why that is yeah so the short answer is unfortunately our survey at least scientifically was not designed to answer that question but I have my theories based on other work that I've done and you know three years ago I worked with some of the models to try to predict skill disruption I was trying to figure out like which jobs will change the most and one of the jobs that changed
the most from a skill perspective was marketing manager a marketing manager was 90 percent
disrupted from a skill perspective so that's sort of the first fundamental piece about marketing is like they've tended to adopt it and it's a really different way of working because of the power of the tools the next thing if I really just think about like what is brain fry like it's about the iteration it's about the oversight a lot of marketing lends itself to that like in the field we see stories of folks who are doing image creation they're doing synthetic consumer panels right
they're spinning up a bunch of campaigns at the same time and it really lends itself to that definition of like when do they know they're done when do they know the image is ready like have they defined those success thresholds for themselves I'm guessing they haven't yet right like they haven't figured out how do you do all the things to the right level of quality based on the outcome that you're trying to drive for it makes sense to me that like the more your job is changing
the more kind of vertigo you're going to be experiencing as these new tools are introduced into your workplace you know Kevin you just observed that managers seem to be experiencing this less one of my theories was that well the reason is because they're already used to overseeing a
bunch of digital abstractions of their human employees right they're most...
Slack messages and sending them emails you know hopefully meeting in person you know
fairly regularly but I think if you're a manager you've already been used to sort of overseeing
a bunch of stuff and those people just sort of may have skills that people who have not yet been in management roles don't have I think there's something to that and I also wonder Julia if you think there's anything that is sort of inherently isolating about these tools one thing that I've found with using AI for my own work is like it's a single player video game right you're going back and forth with a machine very rarely am I in a room with other people
using AI with them and I wonder if part of the brain fry is sort of the siloing effect that these tools tend to have in the workplace where it's like everyone is chatting with their chatbots and their agents and no one is talking to each other I'm glad you brought that up Kevin because back to this point around there's ways to use AI that actually reduce burnout the people who were using it for repetitive tasks they actually were doing those types of things like we found
that they felt more socially connected at work and so it's interesting like in all the companies that I go to I do various types of you know AI enablement and workshops and one of the questions
that I always get a lot of engagement on is what could you use AI for which is like the three
worst things on your to-do list like the procrastination things like the things you really wait to do I mean people love to talk about using AI for those and my hypothesis is sometimes that's probably the repetitive work and when you use it for that type of repetitive work you actually reinvest the time in things that give you energy so more work needs to be done but I think I've seen
that a bit in the field and that's what our data would suggest as well I want to ask about the
three tool cliff which was a funny part of your study basically you found that the sort of number of AI tools that people are using at work has some sort of bearing on their productivity or their feelings of productivity and then actually when you switch from using three to four AI tools at work there's something that happens where you start experiencing these tools not as a productivity enhancer but actually as more of a stressful thing do you have a
theory on why that is or why there seems to be this threshold? Well I mean classically multi-tasking is not very productive right like we all are you know seduced by the idea that we can do more and more and more and Casey's playing Balatro right now exactly no I think multi-tasking is part of that but it's back to this point of like I'm overseeing more things like I'm actually doing more things I'm starting more things I'm stopping more things I have more output to govern
my advice for leaders and managers is to help people understand this like one of the things I'd love to see is AI fluency right now it's mostly defined by technical skills maybe in the last six to nine months we've started to talk about the human skills that persist I actually think cognitive sort of health should be part of defining AI fluency as we go forward so both again like individuals like I can start to work differently with the tools but also again managers and leaders can
help protect against that let me ask one objection that some people might have to the research you work for a consultancy consultants have an interest in making AI seem difficult so that companies will hire them to help manage it is there any chance that we're over pathologizing what is going on here or sort of you know giving a scary sounding name to what might just sort of be a temporary adjustment process as people you know start to use AI tools in the workplace
yeah I'm glad you've asked that maybe what I would say just first about kind of how I look at this
and why I'm doing this research so I am a consultant yes I do advise companies it's sort of the bread and butter of what I do however I'm also a researcher and I care really deeply about the data and what's been very hard is our clients have wanted answers answers that we don't necessarily have all of the playbook for because it's so new and it's changing so rapidly so I'd say just you
know we really designed this to be a data driven intervention but beyond that I think I've been for
like I said for the last three years at the rock face like I've talked to more than a hundred companies I've actually trained teams myself I've been in the room with software developers marketers etc trying to use these tools and I see there like there's something there like there's a real strain where I'm trying to do the right thing but something's getting in the way of me being productive with the tools and we need to redesign work hopefully and particularly you know within
teams to do that better and like if you're a worker out there if people are listening to this and saying yes I am a worker I am using AI tools at work I am feeling the brain fry that you are describing what can they do to help themselves what has shown itself to be effective in your
experience yeah so if you're an individual worker I think first just acknowle...
is the first thing the second thing is really focusing on what you're trying to achieve it's like
back to that outcome piece I mean I know this is really basic but if we were very clear about we're measuring outcomes not output and we're trying to get to the right answer and what are those steps to help me get there and so I you know from our data we would say the things you could do is one engage your manager so for managers who engaged in questions we saw brain fry go down and I think it's about creating that sort of open dialogue about how should I use AI when is it valuable
the other thing is to engage your team on this so interestingly when teams were using AI together and had better integrated it into their workflow so like how I hand off work to Kevin and Kevin does to Casey we also saw brain fry go down and you know I don't have the data to say exactly why but my hypothesis would be that we're not bottlenecking work in one person and we're creating actually like a much more effective system where we're getting the work done with the right outcomes
together it seems tricky to me though because I think there is just so much thrashing around in
organizations right now I think that the amount of knowledge that any given manager or worker has about AI right now is highly variable whether their knowledge is like keeping pace with the capabilities of the latest models that seems like an open question to me so I have to say like in the near term I actually feel quite pessimistic about this I'm sure there are going to be individual managers and teams that are like doing a great job but at a like economy wide level I think people
are just absolutely all over the map. Yeah I think so too and I think it's also not clear to me that people are going to feel comfortable talking to their managers about how they're feeling about it. Yeah because I think a lot of people have these reasonably well-founded fears that like
if you tell your manager like I'm using AI to do this part of my job the manager's first thought is
going to be well maybe I can lay you off maybe I don't need all these humans anymore
and I think we're seeing enough of that happening at big companies now where they're laying off
big percentages of their workforce and attributing that to productivity gains from AI that I think people are sort of feeling like well if I discover how to use AI for my work I'm going to keep it to my damn self absolutely or Kevin I think we also see the reverse of that which is you go on social media and you see people bragging about the insane lengths that they are going to to be using AI at all times to have their you know Claude swarms up and running and coding you know
while they sleep and I feel this sort of deep insecurity embedded in that which is if I'm not out there constantly telling you how much AI I'm using you know I might sort of be next on the chopping block. My reaction to that is this is why leaders play a really important role because I think Kevin your point is well taken I think there are things individuals can do there are absolutely things managers can do but this is about systemic redesign of work so Casey to your point like
I don't think AI brain fry is going away unless we tackle it head on like I don't think this is
something that we can sort of just democratize and let everybody figure it out although I think
there are things they can do to mitigate but I'm really interested in actually like okay let's rethink how we get the job done like you know we are really bad at stopping work is all work valuable like if we had leaders engage more meaningfully in these questions that's the work we need to do if we really want to address some of this. Julia I'm wondering how much you went back and looked through sort of historical precedent here when I was researching my last book I was
doing a lot of reading about the 1970s when a bunch of manufacturing workplaces like auto plants were getting all these new automated robots to help them do things like assemble cars and there was this whole sort of nationwide panic about this they called it Lordstown syndrome because the first sort of GM plant to have this level of automation was in Lordstown Ohio and you know Congress held hearings about this like sort of new wave of worker alienation
that was happening in these blue collar manufacturing workplaces for a lot of the same reasons that to me seem like they rhyme with at least this AI brain fry idea workers were just saying basically like I don't feel like a human anymore I feel like I just push buttons and the robots do all the work I don't talk to people at the office anymore my managers have all these crazy productivity expectations of me and I think what was interesting in that beyond just the parallels
to what people are feeling in white collar workplaces today was that the way that they sort of got out of that was through striking and through organizing and unionizing and getting a bigger share of the profits that these companies were making from all this productivity so I guess I'm just wondering if you could riff on maybe some of the historical parallels here and
where this may all be heading well I always get the question around Excel and accountants
like did the rise of Excel lead to more or fewer accountants um or even if you thin...
actually do the Industrial Revolution one thing I actually think is a really interesting parallel there
is you know the rise of technology at that time in many cases it wasn't until there was actually
a re-architecture of the shop floor that we actually saw the productivity gains and to me that's an interesting parallel to what we need to do with redesigning work Julia one of the questions I wanted to ask you was like you know it is the role of the consultant to come in and say I have talked to people all across this land and I understand the best practices and I will bring them to you and you can redesign your shop floor so that you can get back to being maximally productive
but I feel like for Kevin and me the ground never stops shifting under our feet anymore
and that every few weeks some new model comes along where the level of capability goes up and maybe even something that I would not have been able to do in November I actually can now and before too long maybe that's going to be a core expectation for me that is part of my job so part of me wonders like is this actually a good time to be redesigning your work flows if you know three months from now six months from now the landscape might have
completely changed all over again yes and I have tackled this question many many times here's my take for companies who didn't do anything two years ago they would have said the exact same thing to me Casey they would have said the tech is going to change I'm going to wait
I want to be a fast follower and honestly there is some smart truth to that right like pick
your bets like I definitely wouldn't be doing this everywhere but I think this is about learning a new capability and muscle as an organization like this is about teaching us how to change so I would say like if you're on the sidelines yes it's just going to keep moving so you could have that excuse you know a year ago two years ago in two more years but you're also going to be missing out on that opportunity to build capability as leaders to build that
in your teams to start upskilling people I think there's actual things that you can do to support your talent to go on this journey with you yeah and I would say like also if I could add something to that from 1972 which is apparently where I love going on this subject there was this sort of team at GM when the Lordstown syndrome was taking over that had to figure out how to bring back the striking workers and one thing they did was that they set up these
new humanization councils where basically workers people from the assembly line were invited to give
their thoughts on how the robots were being used and how the machines were set up and how the assembly lines were laid out and feeling like they had some input and some control over their situation and were not just like passive bystanders actually seemed to help so I don't know whether that's directly applicable to white collar workplaces that are going through this today but I do think that having some of the energy and ideas come from the quote unquote bottom
from the actual workers doing the individual contribution seems to matter yeah I mean Kevin that's absolutely right like how do we have more agency in this and if you do that you're going to be really user centric you're going to think about like what work do people enjoy doing what work do they not enjoy doing what are some of the barriers cognitive or
otherwise to getting actually that work done I think that's exactly right well Julia thank you so
much for giving us a lesson now if you'll excuse us we have to go deal with our AI brain fry I actually have AI brain freeze it happens if you use ChatGPT while you're drinking a Slurpee well I think with AI brain rot you're fine yeah yeah we got there a long time ago thanks Julia thanks Julia when we come back the worst AI feature we've ever seen makes you more like Casey
Framer is a website builder that turns dot-coms from a formality into a tool for growth whether you want to launch a new site test a few landing pages or migrate your full dot-com Framer has programs for startups scaleups and large enterprises to make going from idea to live site as easy and fast as possible learn how you can get more out of your dot-com from a Framer specialist or get started building for free today at framer.com/hardfork for 30% off a Framer
Pro annual plan rules and restrictions may apply we gave Times employees a preview of Crossplay
from New York Times Games and here's what they had to say I can finally play with other people
I'm pretty competitive it's fun to beat friends and co-workers I have a J for 10 points I'm guessing tanga is not a word let's see tanga is a word oh as an English as a second
language speaker I like to learn new words Crossplay the first two-player wor...
Times Games download it for free today well Casey I heard you got an exciting new job last week
I did and it was the sort of job Kevin that I didn't even know that I had or was doing
so you had this crazy experience of being selected against your will and without your permission as one of Grammarly's experts Grammarly the AI kind of writing assistant has an expert network of people whose voices they have borrowed for the purposes of I guess making people's writing better so congratulations thank you I assume the royalty checks are just overflowing your mailbox but what actually happened here you had a fascinating newsletter about this this week well thank you so
this story I first learned about from The Verge their reporter Stevie Bonnafield wrote about this
and it turned out that last summer Grammarly had added this feature called expert review I had not actually used Grammarly until this have you ever used it no so I decided you know what why don't I sign up for the free trial and see what Grammarly can do for me and
if you go to the support page for this feature it says that expert review quote is designed to
take your writing to the next level with insights from leading professionals authors and subject matter experts that sounds pretty cool right well scroll a little further down Kevin and you see the following disclaimer references to experts in expert review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities and so I read that and I thought when you say that these insights come from leading professionals
what does the word from mean to you because it sounds like what you're telling me is they don't come from those experts at all yeah it's like when you see a tub of margarine and it's like you know a butter-style product in very small type yeah they had sort of an expert network with an asterisk none of the experts were actually consulted and we didn't actually hear from them in any way absolutely so Stevie over at The Verge put a bunch of writing through
expert review to see what sort of expert names would pop up I was one of them got you. Thank you um you know as you might imagine Grammarly also picked a bunch of like actual famous people so Stephen King Neil deGrasse Tyson Carl Sagan and I decided to put this thing through its paces and loaded up some recent columns that we published in Platformer and pasted pieces of them in
to see what sort of experts it would suggest and while I was never able to get my own name
Kevin I did see a succession of people that sort of felt like if you made a list of people who would hate this idea the most that is who Grammarly had picked so Timnit Gebru a very vocal critic of AI systems the way they are built and deployed she showed up as a quote unquote expert so did Julia Angwin who is an investigative reporter she writes for New York Times opinion and it used her writing even though she has written a lot about how tech systems are used for privacy
and surveillance in ways that are contrary to how we probably want them to be used Julia by the way filed a class action complaint against Grammarly's parent company on Wednesday seeking to stop them from quote trading on her name and those of hundreds of other journalists authors and editors
and to stop them from quote attributing words to them that they never uttered and advice that they never
gave can I ask a question about the mechanics of this okay so you're writing in Grammarly which I gather is sort of like a bolt-on to a word processor yes and it sort of detects the topic you're writing about and then pops up a little like Clippy thing that's like would you like Julia Angwin to edit this for you would you like Casey Newton to give this one a pass
exactly I'll actually show you an example here if you want to look at my laptop
you can see that here is the text that I wrote and then in this little left-hand column in this case it just says Kara Swisher Kara Swisher my good friend past Hard Fork guest legendary Silicon Valley journalist and podcaster and someone who has absolutely no involvement with Grammarly but her name just sort of pops up there with no disclaimer at all right and then when you sort of click in it will offer this sort of Kara-inspired advice and this is the point Kevin where I would
like to talk about the kind of advice that this thing actually gives please so you might expect given that they were you know allegedly trying to borrow the expertise of real humans that that expertise would seem like incredibly specific to that person right instead what you're getting is just a bunch
of very generic advice about something that you might do so I noted for examp...
we published my colleague Ella Marquiano's story in Platformer last week where she went to
a protest at OpenAI and there was a suggestion that Grammarly had said was inspired by John Carreyrou
the legendary investigative journalist who brought down Theranos and the advice basically boiled down to
try opening with a colorful scene and use a lot of rich details and characters right like sort of the most absolutely generic advice that you would ever imagine getting and nothing like I would imagine the actual experience of sitting down with John Carreyrou and saying like hey how did you write Bad Blood yeah how did it say that Kara Swisher would edit a story so I will just read you the piece of advice that it gave me this was also a piece of advice about this protest story the fake
AI Kara said could you briefly compare how daily AI users versus AI skeptics articulate risk creating a through line readers can follow a synthesizing sentence here may tighten the narrative arc I'm laughing because
that is the exact opposite of how I imagined Kara Swisher would edit someone yeah it would just be like a
string of like four letter words and like you know this sucks do it over again yeah it would say stop wasting my time you know like that would be the advice the thing that I just read I just want to acknowledge like it is word salad yeah do you know what I mean totally like you can tell I don't know what underlying model they're using here I'm guessing it is not a frontier model right it's reading very like GPT-2 to me you know so this advice is so bad but let's bring this into what I actually
find upsetting about this Kevin yeah let's make this about you no well here's the thing I'm actually
not going to make it about me because I have sort of just long since accepted that all of these companies have stolen all my intellectual property and are having their way with it where I really feel bad is for the subscribers to Grammarly these people are paying $144 a year to be able to use this glorified spell checker okay and they load this thing up and then Grammarly gives them this service and so if you were a paid subscriber to Grammarly you are paying a subscription to get Grammarly
to hallucinate on your behalf right to make up a bunch of stuff that is not true right this is not the actual sort of advice that any of these experts would provide and you are paying for that service when you just as easily could have taken whatever text you had written and pasted into a free chatbot and gotten generic advice that is just as not great as what you were getting here right and the truly crazy thing about this is that despite charging all of this money for
people to use this sub-standard AI product they are not to my knowledge passing any of this along to you or a Kara or a John Carreyrou or any of these authors whose identities they have
purloined for the purposes of selling this product no they're not and you know look I think
that all of the AI companies just have a huge entitlement problem in general you know I think that they think look if it's on the internet it is in the public domain and it belongs to us and they don't spend enough time thinking about how they are destroying the incentives for anyone to create a public open internet right if you feel like you're just gonna get screwed in this way so I do think that that is really unfortunate yeah so what did Grammarly say when you started
writing about this well when I reached out to them they thought about it for a while and then finally
came back to me on Monday and said you know what we've thought about it and if you're one of our experts who we didn't consult and we're not paying you can now opt out of this feature how nice of them so you can now send an email and say I don't want to be a part of this system anymore and so you know I wrote the story and got a lot of comments on social media like you know geez it really seems like the least they can do but Kevin as we record this I actually have some
breaking news what's that so I got an email from the spokeswoman over at Superhuman today Superhuman is what Grammarly now calls itself they did a rebrand last year and they're now sort of a bundle of mediocre products and they sent me a note and said that after careful consideration we have decided to disable expert review as we reimagine the feature to make it more useful for users while giving experts real control over how they want to be represented or not represented at all dot dot dot
thanks for holding us accountable we're committed to getting it right next time and we'll be transparent about how we improve wow Results Newton gets results Newton getting some results I mean look it's clear to me that they are embarrassed about this but this is one where the whole
time I was using this thing I was like who was the product manager what were ...
imagine meeting and was there a lawyer involved in it who was the lawyer that signed off
and said yes feel free to misrepresent that you are getting inspiration from all of these different editors the thing is such a like spectacular disaster and it really made me wonder like what is the future of a product like Grammarly that's kind of where I want to end this you just finished writing a book you presumably could have used some sort of AI writing assistance did it ever occur to you to use Grammarly no why not because I don't know anything about it and I don't need it
and I have other tools well talk to me about these other tools because this is what I think
the real story is which is like in 2009 when Grammarly launched you didn't have a lot of options for writing assistance right you had like whatever spell checker was in Google Docs and like that
was you know probably going to be the best tool available fast forward to today though you got
ChatGPT you got Gemini you got Claude there are free versions of these services if you want a quick grammar check you can get it my guess is that's the experience that you just had yeah if I want a grammar check I'm just copying and pasting into one of the AI models I'm not using like a purpose-built thing for that or it's now built into you know Google Docs yeah and to you know emphasize a point when you're using Claude as you did in your book you're using the latest
and greatest version of Claude yeah if you are using some sort of startup that is using the API of Anthropic they're not actually incentivized to give you the frontier model most of the time because that's gonna be very expensive so they're gonna give you a model that's a couple generations old because they can get a lower price and their margin is gonna be better on it so we've talked a lot in recent weeks about the potential for a SaaS apocalypse where these companies
that are selling these sort of you know businessy prosumer services are gonna get crushed by the fact that there is now just a cheaper way to do it I wonder if you think that Grammarly might be one of
those no I think it's gonna be part of the ass apocalypse which is for software that absolutely sucks
that there's no reason to be using in the first place and I think that that software has a
hard road ahead I just do not think there is a future for this product like when I saw this yes I did have the moment of like outrage is too strong a word I felt supremely annoyed okay I did feel like very annoyed that this was happening but again it's like I know all these companies have like all read my stuff you know you could go into Claude today and say draw inspiration from Casey Newton edit my piece Claude is not gonna refuse and say I don't have the rights to his intellectual
property it's just gonna do it and it's not gonna notify me and it's not gonna pay me right so I do think that there is a distinction between what these companies are doing but I just want to point out that in some way like the violation is the same the bigger thing to me was this really feels like desperation you know and I think that more and more of these consumer sort of internet services that have been able to get away with offering a pretty sub par product and selling it to you
for more than a hundred dollars a year I think the rude awakening is coming where you know all of a sudden if you have a subscription to your Claude or your Gemini or ChatGPT you're probably going to be able to get more from that and do more things and you're just not going to need the subscription it's exactly like what we were talking about with vibe coding and being like why are we paying Squarespace all this money right I think the why are we paying Grammarly all this
“money moment is coming yeah and I should say if you want to rip off Casey Newton's editing style”
without his permission or without compensating him, you should just do that in a free chatbot. His advice is not worth that. Trust me, I have seen his edits, and I would not pay a hundred and forty dollars a year for them. I'm a great editor, okay? Ask around. You should really ask around. I give very detailed, thoughtful feedback. But this, this is horrible. I'm very glad you exposed that. I'm very glad they went back and said, we're not going to do this anymore. But I think this kind of thing is going to keep happening, unfortunately, because there's money to be made, and if you can get away with it, you're going to do it. Yeah. You know, a question I need to ask might be, like, is there a good version of this feature, and what would that be, do you think? So if they had come to you and said, hey, Casey, we're starting this new Expert Review feature, and every time someone edits their emails to sound more like Casey Newton, we're going to give you 10 cents, would you have
done that? I mean, I don't know. In general, I am in favor of AI companies trying to strike deals with creative people that say, like, we are going to essentially share the revenue that is based on the creative work that you have done. So certainly I would like to see some kind of explorations like that. But, you know, I think about some of the editing I've done, you know,
I can remember, like, working with one writer once, and she was working, like, on ... feature story, and it just made me think of Katherine Boo, the great features writer for The New Yorker for a long time, who wrote this incredible book, Behind the Beautiful Forevers. And I was like, go read Katherine Boo. Like, go read Katherine Boo's pieces in The New Yorker and see how she evokes characters, and see how she structures her narratives. So can I imagine an AI tool that you were having a conversation with that also said, like, you need to read some Katherine Boo, and click here, and hey, if you already have a New Yorker subscription, maybe you can log in right here
and we'll sort of bring up some of the relevant passages. So yes, I do think that there is value in sort of guiding writers to actual experts. The key is you have to guide them to the actual expertise, right? Not just what your LLM is hallucinating. Right, that'd be my worry. If they had come to me, which they did not, which I'm a little bit offended by, frankly. They put you in the feature, and I was... I'm not worth ripping off? I'm right here, Grammarly! I'm pretty good. But no, had they come to me and said, hey, we want to make you part of this, I would have said, well, how good is your model? Because, you know, my worry about something like that would be that someone would,
you know, open up their word processor and start writing their business memo and say, make it sound more like Kevin Roose, and then it would make it sound terrible and generic, and people would blame me, and I would kind of get the bad rap for allowing my reputation to be laundered in this way. I was texting with my friend Mat Honan, who's the editor of MIT Technology Review, and he found that he was also being used as an expert. And when he clicked on the expertise that he was allegedly providing to users of Expert Review, to see what source they were citing, it was a speaker bio that he had submitted for an event. Like, based on Mat's speaker bio. He used to work at Wired! Like, I don't even know what it said, but again, it's just that they just did not think, you know. Yeah. Well, now, as a result of Grammarly pulling back this feature, I guess, if you want your emails to sound like Casey Newton, you're going to just have to put a bunch of typos and random punctuation in yourself manually. And
if you really want to know what I think of your writing, it's that you should start a podcast. That's where the future is going. Okay, Casey, I'm glad you escaped Grammarly servitude. Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Viren Povich. We're fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Our executive producer is Jen Poyant. Original music by Marion Lozano, Diane Wong, Rowan Niemisto and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nicol and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam and Dalia Haddad. You can email us at [email protected] with whatever fake advice I just gave you. And Grammarly, I'm sorry for it. Holy shit, they just disabled Expert Review in Grammarly. Whoa! You're free.



