Today on the AI Daily Brief, an 81,000-strong study on what people really want from AI.
Before that in the headlines, Val Kilmer comes back for one last role.
The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, Robots and Pencils, Blitzy, and AIUC. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. If you are interested in sponsoring the show, send us a note at [email protected]. Two more quick things before we get out of here. First of all, thank you to everyone who has submitted to Agent Madness.
We'll be putting together the bracket over the next couple of days, and that will go live next week. Secondly, as you can kind of tell from Agent Madness, along with KPMG and AIDB, the new year and all these things, one of the things that I'm thinking a lot about this year is how to both help people figure out how to use these tools, but also show off their work when they do. I'm exploring whether there's anything useful we have to contribute around actually connecting agent builders and orchestrators to the companies and partners that need them. And if you're interested in contributing some information to that exploration, go sign up for more info at aidbtalent.ai. Among other things, you might get early access to the Chucky agent portfolio that I talked about in my Agent Tournament episode. So again, that's aidbtalent.ai. Well, as you heard in the intro, AI has brought Val Kilmer back to star in one last movie.
The film is called As Deep as the Grave and features Val Kilmer in the role of Father Finton. Kilmer was cast way back in 2021, before production began. However, by the time the film was being shot several years later, Kilmer was in the final stages of his battle with throat cancer, which would ultimately take his life last year.
Now he's coming back thanks to AI to actually star in the movie. What's generating a lot of discussion around this one is that, as much reflexive antagonism as it's going to get, and there is plenty of that going around, it's harder to paint this with the brush of a cynical use of technology to replace human actors. Instead, in this case, at least according to the people making the film, it's AI being used
to deliver on their original vision. The character, Father Finton, is a Catholic priest and Native American spiritualist who played a key role in the true story being depicted.
The film's writer and director Querte Vorhaze said, "He was the actor I wanted to play this role. The role was very much designed around him. It drew on his Native American heritage and his ties to and love of the Southwest. I was looking at a call sheet the other day and we had him ready to shoot. He was just going through a really, really tough time medically and he couldn't do it." Now, Kilmer ended up not being able to shoot a single scene for the movie, so the entire performance was generated using AI tools. Vorhaze created the performance with full permission from Kilmer's estate and the cooperation and support of his children. Said Vorhaze, "His family kept saying how important they thought this movie was and that Val really wanted to be a part of this. He really thought it was an important story that he wanted his name on. It was that support that gave me the confidence to say, okay, let's do this. Despite the fact some people might call it controversial, this is what Val wanted." The film was shot on Navajo land in Arizona and New Mexico and tells the story of archaeologist Earl Morris working with the local people in the 1920s to uncover the ancient history of the Anasazi people. Kilmer had Cherokee ancestry and made his home in northern New Mexico, so the story of discovering one of the earliest civilizations in the Southwest had a personal importance for him. While AI was used to create the on-screen performance, the film uses Kilmer's actual voice, which was damaged by tracheal surgery in 2015. That worked for the real-world figure of Father Finton, who suffered from tuberculosis. At one stage during the movie's production, Vorhaze produced a cut that simply omitted Kilmer's character,
but later realized the character was critical to round out the narrative. Vorhaze said,
"We really figured out that this is a major missing element. Normally, we would just recast an actor. I'm all about working with our actors and we have brilliant performances all throughout the movie, but we can't roll camera again. We don't have the budget we're not a big studio film, so we had to think of innovative ways to do it and we realized the technology is there for us. Vorhaze followed all sag guidelines on the use of AI and compensated the
Kilmer estate for his appearance. He says he hopes the film can be a model of the ethical use of AI in filmmaking." Now this is not Kilmer's first rodeo with AI. He previously supported the use of AI to recreate his voice for his reprisal of the Iceman character and top gun Maverick, which was the last time he appeared on screen. He said at the time that he was grateful to the company who produced the effect commenting. As human beings, the ability to communicate
is the core of our existence and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story in a voice that feels authentic and familiar is an incredibly special gift. Said Kilmer's daughter Mercedes about the new film,
"He always looked at emerging technologies with optimism, as a tool to expand the possibilities of storytelling. This spirit is something which we are all honoring within this specific film, of which he was an integral part." Now, like I said, it's not worth going through the reflexively negative comments. Sitting at five and a half million views on Twitter, the Variety story is more viral than anything they've produced for some time. One of the more nuanced versions of the critique came from Raymond Arroyo, who wrote, "This digital necromancy is a very bad idea. First of all, what makes a great actor is their unexpected, inspired choices in a given role: a glance, a grimace, an extended phrase; human and very personal choices. This will be an extended facsimile of an actor without his fire or ingenuity, a hollow show. In addition to which, the deceased Kilmer will be saddled with a performance and a role he has no agency over. This is a violation of his dignity and his work as a living artist." Now, on the second part, we don't really have any other way as a society to respect the wishes of the dead, unless they made those wishes explicitly known. Or, in the
absence of that, we rely on their family and given that his family is very clearly on board,
I'm not sure what to say about that second one. On the first one, I think there is much more truth there. But if it is true, we'll see it in practice, and I think the market will vote with its feet. In any case, like I said, this one is sure to be very controversial, and it'll be interesting to see how the discussion shakes out. Moving back to the core of the AI industry, it's Microsoft's turn to shake up their AI organization with a restructure of their Copilot teams. Microsoft is making several big changes to make their AI efforts more coherent.
The teams working on the consumer and commercial versions of Copilot will be combined, allowing the products to be brought more in line with one another. Customer surveys from earlier in the year showed that the multiple different versions of Copilot were a major source of confusion. This combined Copilot team will be led by product experience executive Jacob Andru, who has been promoted to a new role as EVP of Copilot. Andru will now report directly to CEO Satya Nadella, rather than to Microsoft AI CEO Mustafa Suleyman, giving Nadella more direct oversight of Copilot. With responsibility for Copilot removed, Suleyman will now focus on leading Microsoft's proprietary model training and superintelligence efforts. There has been very little progress made on this front over the past year, with Microsoft last releasing a foundation model in August. Nadella's announcement makes it seem as though this move is about building out additional
leadership for each aspect of Microsoft's AI efforts. He wrote, "We are bringing the Copilot system across commercial and consumer together as one unified effort. This will span four connected pillars: Copilot experience, Copilot platform, Microsoft 365 apps, and AI models. This is how we move from a collection of great products to a truly integrated system, one that is simpler and more powerful for customers." Now, in his own note,
Suleyman said that removing Copilot from his portfolio was designed to, quote, "enable me to focus all my energy on our superintelligence efforts, and be able to deliver world-class models for Microsoft over the next five years." Suleyman seems to believe this is the big future bet for Microsoft, telling CNBC, "I'm genuinely thrilled about this change precisely because most of the future value is going to accrue to the model layer, and my job is to create highly COGS-optimized, highly efficient, enterprise-specific models for Microsoft over the next three to five years. That is singularly the objective, precisely because the model is the product, right? That is the future direction of all the IP." Now, there are a few big takeaways from the shake-up. Primarily, it resolves the issue that Copilot didn't have a single owner within Microsoft. The product was nominally under Suleyman's leadership, but in practice it seemed like a fragmented effort implemented across multiple product teams.
The move also reinforces that AI is a critical business unit at Microsoft that requires
more resources and a more structured approach. Veteran Microsoft reporter and senior editor at The Verge Tom Warren commented, "It's hard not to also read this as an admission that Microsoft's efforts to separate the Copilot experience for consumers and businesses have failed over the past couple of years." Now, to be fair to Microsoft, they are certainly not alone in taking a few iterations to get their AI organization right.
Google undertook a major restructuring in late 2024 to set themselves up for a massive comeback the following year. Meta has been constantly reshuffling their teams over the past year in order to get their efforts back on track, and more recently, Alibaba has also restructured their AI teams to focus on product and business. And that's not even counting OpenAI's new focus and removal of side quests. Given how many users are basically forced to use Copilot by virtue of their company's policies, I hope nothing more than that this leads to great things. Lastly today, just one day after launching Cowork Dispatch, Anthropic is updating the tool to add support for Claude Code sessions. Cowork Dispatch is Anthropic's tool for kicking off and monitoring Cowork tasks from a mobile device. Claude Code has its own separate equivalent known as Remote Control, but as of today, the line between Cowork and Claude Code is getting a lot more blurry. Announcing the new feature, Anthropic's Felix Rieseberg posted, "By popular demand, Dispatch can now launch Claude Code sessions. Ask it to build, make, or improve something." One user asked Rieseberg if this feature is a replacement for Remote Control for Claude Code, to which he responded, "We have some things in the works to make Remote Control an overall smoother experience, but this is using the same underlying primitives as Claude Code's Remote Control."
I think the question will be how much, over the next, call it, year, Anthropic synchronizes the experience of Cowork and Claude Code. Do they become one suite? Do they remain separate but with pretty clear feature parity, just different interfaces? There's an argument that Claude Code for all the other types of knowledge workers might be the most important product line of their immediate future. For now, that is going to do it for the headlines. Next up, the main episode. All right folks, quick pause. Here's the uncomfortable truth: if your enterprise AI strategy is "we bought some tools," you don't actually have a strategy. KPMG took the harder route and became their own client zero. They embedded AI and agents across the enterprise: how work gets done,
how teams collaborate, how decisions move. Not as a tech initiative, but as an operating model shift. And here's the real unlock: that shift raised the ceiling on what people could do. Humans stay firmly at the center, while AI reduced friction, surfaced insight, and accelerated momentum.
The outcome was a more capable, more empowered workforce. If you want to understand what that actually looks like in the real world, go to www.kpmg.us/AI. That's www.kpmg.us/AI. Today's episode is brought to you by Robots and Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity. Their teams are made up of AI-native engineers, strategists, and designers who love solving hard problems and pushing how AI shows up in real products. They move quickly using their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months. They don't build big teams, they build high-impact ones. The people there are wicked smart, with patents, published research, and work that's helped shape entire categories. They work in velocity pods and studios that stay focused and move with intent. If you're ready for career-defining work with
peers who challenge you and have your back, Robots and Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers.
Want to accelerate enterprise software development velocity by 5x? You need Blitzy, the only autonomous software development platform built for enterprise codebases. Your engineers define the project, a new feature, refactor, or greenfield build. Blitzy agents first ingest and map your entire codebase, then the platform generates a bespoke agent action plan for your team to review and approve. Once approved, Blitzy gets to work, autonomously generating hundreds of thousands of lines of validated and attested code, with more than 80% of the work completed in a single run. Blitzy isn't just generating code, it's developing software at the speed of compute. Your engineers review, refine, and ship. This is how Fortune 500 companies are compressing multi-month projects into
a single sprint, accelerating engineering velocity by 5x. Experience Blitzy firsthand at blitzy.com. That's B-L-I-T-Z-Y dot com. Regular listeners will know that I've recently been following the new AI agent standard, AIUC-1. What piqued my interest initially was a string of leading AI companies like ElevenLabs, Intercom, and UiPath announcing their certifications back to back. But what's even more interesting than who's participating is the way that AIUC-1 represents an answer to some of
the key enterprise AI adoption challenges that we talk about on the show all the time.
First of all, the standard actually keeps up with AI, being updated every single quarter.
It's comprehensive, designed with over 100 Fortune 500 security leaders to cover all the risks that enterprises care about. And finally, it rigorously tests how agents behave in tricky situations or under adversarial attacks, unlike other standards that are mostly just about policies. The combination gives enterprises the trust they need to deploy AI agents with confidence.
Head to aiuc-1.com if you want to learn more. That's aiuc-1.com.
Welcome back to the AI Daily Brief. As the conversation about artificial intelligence gets more fraught and heightened, there are a lot of people making a lot of assumptions, and a lot of people putting words into each other's mouths. And the parties guilty of doing this come from all sides of the AI debate. From those who are extremely against AI to those who are extremely pro-AI, there's a temptation to reduce people to simple, clear attitudes, when for the vast majority of people, many of the issues surrounding this technology, its impact on the world, on their jobs, on education, and on their kids' lives, are complicated and nuanced. And so I was interested when Anthropic released their latest research, a massive study of nearly 81,000 people who were asked, in Anthropic's words, to share how they use AI, what they dream it could make possible, and what they fear it might do. The interviews were conducted with Anthropic Interviewer, basically a version of Claude that is specifically designed
for this type of conversational interview and research. So I want to talk first about what this study actually found, and then we'll talk about the various reactions to it. The study was conducted last December. It is worth noting that although that is recent by social science standards, it was only at the very beginning of this transitional second moment, as I've called it, and I'd be really interested to see this study again once people have fully digested the transformation that's happened over the course of the last three months or so. The study was truly global: people responded from 159 countries using 70 languages. As an aside, that leads Anthropic to believe that this represents the largest and most multilingual qualitative study ever conducted. And at the very front and center of the findings, cutting to the quick of why nuance is required as we're having these conversations, is the fact that people didn't so much divide themselves into different groups as experience the full range of emotions in and of themselves. The way that they framed it, across interviews, hope and alarm didn't divide people into camps so much as coexist as tensions within each person. Disruption anthropologist Jasmine Sun pulled out this section as well. Anthropic writes, "What people want from AI and what they fear from it turned out to be tightly bound."
Anthropic used Claude to break responses into different overarching categories and found that number one was professional excellence: about 18.8% of the responses around what people hoped for had to do with professional excellence in some way. Other versions of professional impact showed up lower on the list as well. Entrepreneurship came in at number 7 with 8.7%, and financial independence, although I think you could argue that that sort of sits in between professional and personal, came in at number 5 at 9.7%. More personal goals, however, were even more prevalent. The number two most common category of response after professional excellence was personal transformation, representing 13.7% of responses. Life management was just after that with 13.5%.
Time freedom, basically winning back time from professional pursuits and from other adult constraints towards more personal pursuits, came in at number 4 at 11.1%. Learning and growth came in at number 8 at 8.4%, which admittedly could be both personal and professional, and the same is true of number 9, creative expression, which came in at 5.6%. Another 9.4% hoped for some sort of societal transformation. Now, what was interesting is that when you dig even into the professional goals,
they often get very blurry very quickly with the personal. Anthropic writes that many started the interview talking about productivity, but after Anthropic Interviewer asked about the underlying hope behind it, what realizing this vision would enable for them, other priorities surfaced. It wasn't about doing better work, but about increasing their quality of life outside of it. Using AI to automate emails became, in actuality, a desire to spend more time with family. A white-collar worker in Colombia wrote, "With AI, I can be more efficient at work. Last Tuesday, it allowed me to cook with my mother instead of finishing tasks." A freelancer in Japan said, "I want to use less brain power on client problems and have more time to read more books." Cutting across all 9 clusters, Anthropic argues that there are actually three meta-clusters that they all fit within. They write, roughly a third of visions are about making room for life, more time, money, and mental bandwidth, by using AI to alleviate current burdens. Another quarter revolves around using AI to help people do better, more fulfilling work: not escaping work, but getting more out of it. About a fifth are about becoming someone better, learning, healing, and growing, and a smaller share want to make something, or fix the world. Summing up, they say the 9 clusters may look disparate, but they are underpinned by recognizably human desires.
Now, digging into the societal transformation piece, as sort of the group that feels a little bit apart from the others, even those visions are more personal than they appear at first. Anthropic writes that those who wanted societal transformation from AI often cited a vision for healthcare: people wanted AI to detect cancer earlier, accelerate drug discovery, or enable broad access. Often these desires stemmed from personal experience of losing family members, living with chronic illness, or watching loved ones receive wrong or delayed diagnoses. Similarly, in the societal transformation category, the next most common vision was around transformation of education. Quote: "Respondents in low and middle income countries were quick to cite the possibility that AI might break the association between educational quality and wealth." The point is that across all of these desires, what you see are very common core human pursuits, newly expressed through the opportunities that AI
represents. When people were then asked if AI had taken a step towards their stated vision, 81% said yes. Anthropic grouped those experiences of AI delivering into six categories. By far the most dominant was productivity, with 32% overall saying that AI had delivered productivity gains. The next most common way in which it had delivered Anthropic called cognitive partnership. Said one academic, "It's like having a faculty colleague who knows a lot, is never bored or tired, and is available 24/7." Closely related was learning, represented by a student in India who said, "My professor teaches 60 people and won't entertain many questions. I can ask AI anything, even at 2am, including the dumb ones." Other ways in which AI has delivered on their vision include technical accessibility, research synthesis, and emotional support, although notably, emotional support was the lowest reported category, with just 6.1% of responses. That might be, though, because emotional support is sort of embedded in the way that AI delivers in the other areas. For example, one response that was counted in the learning category came from a white-collar worker in Brazil, who said, "It's much easier for me to learn without being judged, just friendly feedback. It's harder with friends or family to get that." Now, it makes sense why Anthropic would classify that as learning, but there is certainly an element of emotional support there as well. Anthropic wrote that while emotional support comprised only 6% of responses, those responses were among the most affecting they encountered. There were many stories, they write, of people using AI to process grief. Said one woman, "Claude is like a sponge, gently holding and catching my longing and guilt towards my mother. Unlike real people, Claude has unlimited patience to listen to me, and understands my pain and helplessness." And yet in this category, we also start to see the duality.
A respondent from South Korea said, "My relationship with a friend became strained, and I talked more with Claude instead, because Claude understood my thoughts and stories well. It was a stupid choice. That's how I lost that friend." It's clear why this is going to be one of the complicated
areas for AI. Anthropic writes that there is real ambiguity in how to interpret the diversity of stories they heard: as wins for human well-being, as double-edged swords, or as indictments of broader institutional failures. In truth, they write, it's probably some combination of all three. So what, then, are people concerned about? This area of questioning I find super interesting, especially in the context of the larger and ever more fraught conversation around AI disruption that we're having on a societal level. Are AI users worried about AI coming to life and x-risk? Are they worried about losing their jobs? Interestingly, Anthropic said that while the positive visions for AI seemed to come from a few core desires, like more time, more autonomy, and more personal connection, concerns were much more varied but also more concrete. At the top of the list was unreliability, representing 26.7% of responses when people were asked what they were worried about. Speaking of that duality that we've talked about running through this, this makes intuitive sense: as people become more reliant on AI, they understand that the risk of it being wrong or leading them astray becomes higher.
Now, when it came to jobs and the economy, they were an important concern as well, coming in at number two after unreliability and representing 22.3% of worries. A loss of autonomy and agency was number 3 at 21.9%, and closely related, cognitive atrophy represented 16.3%. Societal issues were a little bit farther down the list, but definitely there. Jobs and the economy is a hard category in which to tell how much is personal versus societal, but there were also concerns like misinformation, which came up 13.6% of the time, surveillance and privacy, which came up 13.1% of the time, and malicious use, which came up 13% of the time. Existential risk did make an appearance, but it was at the bottom of the list, representing around 6.7% of conversations. One really interesting area that I find wildly underrepresented in the larger conversation is a concern about overrestriction: in other words, excessive safety measures, paternalistic content filtering, and blocking legitimate use cases. A quote from one respondent in the
U.S. "The threat isn't that AI becomes too powerful, it's that AI becomes too timid, too smooth,
“too optimized for avoiding discomfort." Notably, there were also 11% of people who expressed”
no concern, and while you might assume those were the Super Bowl accelerationists, they actually just tended to see AI as a neutral tool, akin to electricity or the internet. They tended to be more confident than their peers, that when problems inevitably arose, they could be solved throughout adaptation. Now, notably on this list, you didn't hear a lot of the things that dominate the media conversation. Things like copyright concerns or risks to kids.
Those did show up in the long tail: 5% of concerns were around bias and discrimination, 4% were around IP and data rights, 4% were around environmental costs, 3% were around harm to children and other vulnerable groups, and 3% were around democracy and political integrity. This certainly strikes me as the biggest divergence, either, A, between media reporting about concerns and actual expressed concerns, or, B, and this is something we'll talk about a little bit more in just a minute, between the concerns of users and non-users of AI. Now, as Anthropic said at the beginning, hopes and fears were not evenly divided into different camps; they were present together in most people. In fact, they found five recurring tensions between directly competing benefits and harms, leading to this idea that what people want from AI and what they fear from it are tightly bound. One is a tension between using AI to learn and
grow, and so relying on it that you stop thinking for yourself. Another is people finding solace in AI, but worrying about AI standing in for human connection. On the productivity front, people save time on some tasks, only for, as Anthropic writes it, the treadmill to speed up on others. And of course, they dream of economic freedom while at the same time having fears around being displaced at work. They did note that across most of these tensions, the benefit side is more grounded in experience while the harm leans hypothetical. For example, they write, 33% of people mentioned AI's benefit for learning, while 17% expressed worry about cognitive atrophy from AI use. But 91% of those who mentioned learning benefits said that they had actually realized those gains in some way, as opposed to just 46% of those worried about atrophy who had seen it firsthand. In other words, roughly double the share of people who hoped for learning benefits had actually seen them, compared to those worried about learning atrophy who had seen those effects. The strongest co-occurrence of light and shade in the same person was around the positive and negative impacts of emotional support; Anthropic said that there was actually triple the baseline co-occurrence rate there. On the other end of the spectrum, they write that the economic mobility tension, between those yearning for economic empowerment from AI and those fearing displacement from it, is the most speculative, with the highest rate of hypothetical hopes and fears. It's also the area where the co-occurrence of upside and downside in experience is weakest. In other words, people who are actually experiencing a lot of the financial benefits of AI are not simultaneously experiencing job displacement. Now, cutting a little bit deeper here, economic benefits are definitely accruing to the more nimble. They write
that those benefits skew heavily towards independent workers, like entrepreneurs, business owners, and even people with side projects, who report real economic empowerment at more than triple the rate of institutional employees. Interestingly, employees with side projects benefited the most, with 58% stating some form of real economic gain. Freelancers, they found, were the most exposed middle. Freelance creatives were the group for whom the upside and downside most nearly canceled out: 23% had lived the benefits, but 17% had lived the downside. As Anthropic puts it, AI is both their tool and their competitor. Now, I want to talk about some people's responses to the survey, but the last thing that I'll note is that this once again found similar patterns
that we've seen elsewhere, with Western and developed countries having average or below-average sentiment towards AI, and Southern and developing economies having more above-average sentiment towards AI. Now, interestingly, the two sides of the takes were themselves, in some ways, the light-and-shade opposites of one another. For some, the methodology itself represented something of a triumph. Drag AI Labs writes, "81,000 responses is a data set that actually means something. What stands out is the methodology. Using Claude itself as the interviewer at that scale removes the interviewer bias problem that kills most qualitative research. The model can hold a consistent interview structure across 159 countries in 70 languages simultaneously. No human research team gets anywhere close to that coverage." On the other side is this take,
represented by Berkeley Haas professor Abhishek Nagaraj, who wrote, "I'm very bullish on the role of AI qualitative interviewers, but all the results from this exercise should have a big asterisk around what the specific sample is and what it says about AI in general. Who are these 81,000 users around the world that are responding to this call? Is this telling us something about how AI is generally perceived, or what these 81,000 Claude users think about AI? We already know that Claude users are likely to be quite different than the average user on the consumer side, let alone how this selection varies across countries, occupations, and continents. Survey research scholars have written entire textbooks about sample selection, but I saw very little discussion of this topic or any disclaimers in this report, except for a paragraph below the appendix." This, I think, is the fair-ish version of the critique.
Now, I don't think Anthropic is trying to hide the fact that this is a survey of 81,000 Claude users. They didn't go to great pains to say that everyone should treat this only as the opinion of Claude users, but they also didn't bury that lede or pretend that this was anything other than what it is. In fact, they highlight, at the very top of their tweet thread about it, among other places, the methodology that had these interviews conducted with Anthropic Interviewer. The version of this critique that I'm not only less convinced by, but that I also think is actually quite pernicious, is represented by anonymous Twitter user Librarian Shipwreck. They write, "I'm sorry, but inviting AI users to share their opinions on AI is going to provide you with significantly skewed results. There are some interesting things in here, but it needs to be emphasized that these are the views of AI users, and it is not a shock that AI users are largely
pro-AI. This survey may be useful for telling you what Claude users think of AI, and maybe you could jump from that to make broader assumptions about other AI users, but this doesn't really tell us much about broader attitudes towards AI." So here's my issue with this. On the one hand, yes, absolutely: in terms of extrapolating out what we should take from this, it is completely reasonable to say that to the extent we want to make broader assumptions about AI attitudes, we should perhaps limit the boundaries of those broader assumptions to other AI users, as opposed to everyone. And I agree with this in the sense that I would not try to extrapolate general attitudes towards AI from, for example, the percentage of users in this survey that had a positive attitude towards AI. I don't think those things would hold. But what's pretty pernicious about this discourse is that it reveals something that is much more prevalent, which is an implicit idea that somehow the opinions of AI users are less legitimate and less relevant when it comes to understanding the "overall perception of AI" than are the experiences and perceptions of non-AI users and people who are inherently negative towards AI.
In fact, you can almost see this among many of the AI critics who, basically, without being so clear about it, are effectively arguing that the only opinions that should matter when it comes to, for example, making AI policy are those of the people who are against AI and not using AI. There is a presumption in many cases of some sort of moral superiority to that position, as though not having an informed opinion in some way makes it a more pure opinion. This, of course, is intellectual NIMBYism masquerading as methodology critique, and it just doesn't hold water in a world where billions of people are using AI every week.
We are heading into a period where we are going to be having big, important societal discussions about the role of AI, and I want them to be as informed as possible. What's more, for any AI critics who are worried that AI users are going to represent some monolithic, unconflicted mass: just look at how many people in this study hold real concerns that coexist with a real feeling of opportunity. And now, obviously, I don't want to be guilty of what I'm critiquing and paint with too broad a brush. There are plenty of AI critics who are not trying to discount, dismiss, or disenfranchise the opinions of billions of AI users. But I do find this sentiment, this dismissal of, for example, this type of study as illegitimate because it is of AI users, to be more prevalent than I'd like. Just something to keep an eye on as we go deeper into these conversations.
For now, though, I think it's a really fascinating study. I think the implications of being able to interview 81,000 people in a week are super cool and go way beyond just figuring out what people think about AI, and I'm excited to see what Anthropic does next with their interviewer. For now, however, that is going to do it for today's AI Daily Brief. I appreciate you listening or watching. As always, until next time, peace.